# Recovering and Simulating Pedestrians in the Wild
Ze Yang\\({}^{1,2}\\), Siva Manivasagam\\({}^{1,2}\\), Ming Liang\\({}^{1}\\), Bin Yang\\({}^{1,2}\\), Wei-Chiu Ma\\({}^{1,3}\\), Raquel Urtasun\\({}^{1,2}\\)
Uber Advanced Technologies Group\\({}^{1}\\), University of Toronto\\({}^{2}\\), MIT\\({}^{3}\\)
{zey,manivasagam,ming.liang,byang10,weichiu,urtasun}@uber.com
**Abstract:** Sensor simulation is a key component for testing the performance of self-driving vehicles and for data augmentation to better train perception systems. Typical approaches rely on artists to create both 3D assets and their animations to generate a new scenario. This, however, does not scale. In contrast, we propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around. Towards this goal, we formulate the problem as energy minimization in a deep structured model that exploits human shape priors, reprojection consistency with 2D poses extracted from images, and a ray-caster that encourages the reconstructed mesh to agree with the LiDAR readings. Importantly, we do not require any ground-truth 3D scans or 3D pose annotations. We then incorporate the reconstructed pedestrian assets bank in a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.

**Keywords:** Pedestrian Reconstruction, Pedestrian LiDAR Simulation

## 1 Introduction
A key requirement for mobile robots is that they interact and maneuver safely around humans. This is especially the case in autonomous driving, where the self-driving car should perceive in 3D each pedestrian in the scene and forecast their future trajectories. To deploy in the real world, we must verify that our autonomy system is robust and handles safety-critical cases such as a child occluded by a bus running in front of the car. However, it is unethical to test such cases in the real-world. Moreover, it is expensive and not scalable to collect and manually label the full distribution of pedestrian scenarios to generate training and testing data for current ML-based perception systems.
An appealing alternative is leveraging realistic sensor simulation systems to train and test the perception system. Here we focus on simulating realistic traffic scenes with pedestrians for the LiDAR sensor, a common sensor in self-driving. However, pedestrians are especially difficult to simulate; unlike vehicles, they are non-rigid objects that have a wide variety of shape, poses, and behaviors.
There are two lines of work when it comes to sensor simulation of pedestrian assets. One approach is to use artist-designed human meshes (e.g., CARLA [1]). Another is to use high-end 3D scanning systems in a controlled lighting setting with multiple cameras and/or depth sensors to create high-resolution human meshes [2, 3, 4, 5]. Both approaches require an artist to \"rig\" and animate behaviors for each human, which requires significant effort: the artist must first add a skeleton to the mesh for skinning and posing the character and then design the sequence of joint angles for the pedestrian skeleton required to simulate a particular behavior. While these approaches have been widely used for creating realistic looking pedestrians in video games and movies, they are expensive and not scalable: it is difficult to manually create or 3D scan all the diverse variations in shape, pose, and trajectories a pedestrian may take in the real-world.
There has also been a large body of prior work on estimating 3D pose and shape from single images [6, 7, 8, 9, 10] or video [11, 12, 13, 14]. This is a more scalable solution, as images and videos of people are everywhere. However, image-only methods are prone to having incorrect location/movement estimates in 3D and can sometimes produce unrealistic looking meshes due to inaccurate depth estimates. As a consequence, while they have produced visually appealing results, which might be sufficient in some application domains (e.g., augmented reality, online games), their 3D fidelity is not sufficient when simulating pedestrian LiDAR readings.
Towards this goal, we leverage real-world sensor data captured by our autonomous driving fleet, which contain LiDAR point clouds and camera image sequences, to recover accurate 3D motion and shapes of pedestrians. Our approach, LiDAR for human **M**esh **E**stimation (LiME), only requires a single low-cost artist-created mesh that we exploit to create a prior over human shapes, which we then pose and deform to match the sensor data. We leverage the power of both deep learning and energy minimization methods to accurately recover shape and pose in the wild when no ground-truth is available. To simulate the virtual world, we use a realistic LiDAR simulation system, LiDARsim [15], which uses real world data to generate a large collection of realistic background meshes of different scenes as well as vehicle assets. We then enhance it with a diverse bank of pedestrian shapes and poses reconstructed in the wild using LiME. We can then generate novel scenarios by selecting pedestrians in our bank, applying motion retargeting, and placing them in the scene. LiDARsim then renders the scene to generate realistic LiDAR point clouds. We show that we can generate simulated LiDAR that has little to no sim2real domain gap, which allows us to evaluate a state-of-the-art perception system. Furthermore, we demonstrate that when generating low-cost simulation data at scale for training data augmentation, we reduce the need for labeled data.
## 2 Related Work
**3D Human Pose and Motion Estimation:** Human motion capture (MoCap) is usually conducted in highly calibrated, laboratory-controlled environments. With the help of multi-view sensing [16] and marker-based technology [2], many high-quality dynamics measurements [4; 5; 17] have been collected, including accurate 2D and 3D skeletal joint locations over time. Based on these datasets, several methods have been developed to predict 3D human pose and motion from monocular images [18; 19; 20] and monocular video [21; 22; 23], achieving state-of-the-art performance. Unfortunately, these data, while useful, are still over-simplified. Numerous real-world scenarios are not captured, _e.g._, environmental occlusions. To overcome such limitations, recent work has focused on capturing large-scale \"in-the-wild\" datasets with 3D pose using IMUs and cameras [24; 25; 26]. Most efforts still focus on pose estimation from images; however, they have difficulty obtaining precise shape and pose because accurate depth is missing, and we require higher accuracy for simulating pedestrian scenarios and testing autonomy. Recent work [27; 28] has proposed using RGB-D images to predict 3D pose in indoor environments, but to our knowledge, we are the first to tackle estimating 3D pose over time from images and sparse LiDAR points at a distance. This setting is important for recovering and simulating realistic humans in the wild for self-driving.
**Non-rigid Body Surface Reconstruction:** For realistic simulation, we need to reconstruct the 3D human mesh in the scene. While methods exist for real-time mesh reconstruction of non-rigid objects from depth cameras [29] or RGB cameras [30; 31], to re-articulate the humans for downstream tasks we also require human pose. We now discuss past work that recovers both 3D pose and articulated meshes. Most of these works [6; 7; 8; 9; 10] rely on strong shape priors such as SMPL [32]. They either directly regress human model parameters from observations or fit the parametric model to RGB images by minimizing carefully designed energy functions. To further ensure temporal consistency, [11; 12; 13; 14] leverage training signals from videos. [33] aligns articulated models with free-form deformation on densely sampled point clouds from multiple sensors. We focus on recovering 3D human pose and shape with small error from partial LiDAR data and images.
**Sensor Simulation of Pedestrians:** Prior work [1] simulates pedestrians by first creating artist-designed meshes that are manually rigged and animated, and then performing sensor simulation via graphics-engine rendering. While this allows for fine-grained control of pedestrian appearance and behavior, it is time-consuming and expensive, and does not scale to capture the real-world pedestrian distribution. Efforts have been made to incorporate human avatars into robot simulators such as MORSE [34] for prototyping, data collection and evaluation [35], but this work has focused mostly on indoor environments with unrealistic rendering. Our work focuses on leveraging data captured in the wild, where we can automatically capture diverse human appearance, motion, and behavior and directly adapt these assets for realistic sensor simulation.
Figure 1: We recover realistic 3D human meshes and poses from sequences of LiDAR and camera readings, which can then be used in sensor simulation for perception algorithm training and testing.
## 3 Human Model
We utilize a Linear Blend Skinning (LBS) model, which we enhance with both bone scaling and per-vertex deformations to represent how the human body deforms as a function of its pose. We use this enhanced LBS model due to its simplicity and efficient computation, as opposed to higher-order blend skinning methods such as spherical [36] or non-skeleton based deformation methods [37]. Our experiments show that this simple representation outperforms popular human models (e.g., SMPL [32]) in reconstructing 3D shape from sensor data. Furthermore, it proves sufficient for our downstream task of simulating LiDAR data for testing and improving perception algorithms. We now review the LBS model and describe our bone-scaling and per-vertex deformation modifications to handle shape variation and appearance.
LBS represents the human body in two parts: a mesh representation and a hierarchical set of interconnected bones, _i.e._, the skeleton. The key idea is that as the skeleton moves, the positions of the mesh's vertices will change, but not their connectivity. Each bone in the skeleton is associated with some portion of the character's visual representation (i.e., set of vertices) in a process called _skinning_. Each mesh vertex has a specific corresponding \"blend weight\" for each skeleton bone. To calculate the final position of a vertex, a transformation matrix is created for each bone which, when applied to the vertex, first puts the vertex in bone space, and then puts it back into mesh space. After applying this transformation to the vertex, it is scaled by its corresponding blend weight.
More formally, let the template mesh \\(\\mathbf{V}\\in\\mathbb{R}^{N\\times 3}\\) be the set of \\(N\\) vertices \\(\\mathbf{V}=\\{\\mathbf{v}_{i}\\}_{i=1}^{N}\\) (with oriented normals \\(\\mathcal{N}=\\{\\mathbf{n}_{i}\\}_{i=1}^{N}\\)), and let \\(\\mathbf{W}\\in\\mathbb{R}^{N\\times K}\\) be the set of blend weights\\({}^{1}\\). We represent a skeleton pose with the set of joint rotation matrices \\(\\mathbf{\\Theta}_{k}\\in\\mathbf{SO}(3)\\), one for each joint, representing the rotation with respect to its parent in the skeletal tree. While this original LBS formulation is a good approximation of the human skeleton, it cannot model well human body sizes that deviate from the template mesh. To address this, we introduce a learnable scale factor for each bone in the skeleton, where \\(s_{p}\\) denotes the bone length scale factor between the \\(p\\)-th joint and its parent, which we model to be symmetric with respect to the human spine, _e.g._, the left and right arms share the same bone scale factor. We thus traverse the tree and construct the transformation matrix for each joint \\(\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})\\in\\mathbf{SE}(3)\\):
Footnote 1: These blend weights can be created for example by diffusing artist-annotated part-segmentations [32].
\\[\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})=\\prod_{p\\in A(k)}\\begin{bmatrix}s_{p}\\mathbf{\\Theta}_{p}&(\\mathbf{I}-s_{p}\\mathbf{\\Theta}_{p})\\mathbf{j}_{p}\\\\ \\mathbf{0}&1\\end{bmatrix} \\tag{1}\\]
where \\(A(k)\\) is the set of joint ancestors of the \\(k\\)-th joint in order, \\(\\mathbf{\\Theta}_{p}\\) is the rotation matrix of the \\(p\\)-th joint wrt its parent, and \\(\\mathbf{j}_{p}\\) is the coordinate of the \\(p\\)-th joint in the template mesh. The coordinate for the \\(i\\)-th vertex can now be computed as a linear combination of the joint transformation matrices and its unique blend weights. However, the template mesh vertices alone cannot handle shape variations. Therefore, following [33], we also add a displacement vector for each vertex. The coordinate for the \\(i\\)-th vertex and the \\(k\\)-th joint in the posed mesh are computed as:
\\[\\mathbf{\\bar{v}}_{i}=\\sum_{k=1}^{K}\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta}) (\\mathbf{v}_{i}+\\mathbf{n}_{i}d_{i})\\;w_{i,k}+\\mathbf{c}\\;,\\qquad\\mathbf{ \\bar{j}}_{k}=\\mathbf{T}_{k}(\\mathbf{s},\\mathbf{\\Theta})\\mathbf{j}_{k}+\\mathbf{c} \\tag{2}\\]
where \\(w_{i,k}\\) is the skinning weight describing the influence of the \\(k\\)-th joint on the \\(i\\)-th vertex in the template shape, and \\(\\mathbf{c}\\in\\mathbb{R}^{3}\\) is the global translation of the root joint. The final posed mesh model is
\\[\\mathbf{M}=\\mathcal{M}(\\mathbf{W},\\mathbf{V},\\mathcal{N},\\mathbf{\\Theta}, \\mathbf{c},\\mathbf{s},\\mathbf{D}) \\tag{3}\\]
with posed mesh \\(\\mathbf{M}\\), blend weights \\(\\mathbf{W}\\), mesh vertices \\(\\mathbf{V}\\), normals \\(\\mathcal{N}\\), joint angles \\(\\mathbf{\\Theta}\\), root location \\(\\mathbf{c}\\), bone scale factors \\(\\mathbf{s}\\), and per-vertex deformation matrix \\(\\mathbf{D}\\).
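To make the computation in Eqs. (1) and (2) concrete, below is a minimal NumPy sketch of the scaled skinning model. It assumes the kinematic tree is given as a parent-index array (ordered so that parents precede children) and that joint rotations are already provided as rotation matrices; the function and variable names are hypothetical and chosen only for illustration, not taken from the paper's implementation. Tying the scale factors of symmetric left/right bones is left to the caller.

```python
import numpy as np

def pose_mesh(V, normals, W, joints, parents, R, s, d, c):
    """Sketch of Eqs. (1)-(2): linear blend skinning with bone scaling and
    per-vertex displacements along the template normals.
    V: (N,3) template vertices, normals: (N,3) vertex normals,
    W: (N,K) blend weights, joints: (K,3) template joint locations,
    parents: (K,) parent joint index (-1 for the root, parents before children),
    R: (K,3,3) joint rotations, s: (K,) bone scale factors,
    d: (N,) displacements along the normals, c: (3,) root translation."""
    K = joints.shape[0]
    T = np.zeros((K, 4, 4))
    for k in range(K):
        # Local transform of joint k: [s_k R_k, (I - s_k R_k) j_k; 0 1].
        local = np.eye(4)
        local[:3, :3] = s[k] * R[k]
        local[:3, 3] = (np.eye(3) - s[k] * R[k]) @ joints[k]
        # Eq. (1): compose with the ancestors along the chain from the root.
        T[k] = local if parents[k] < 0 else T[parents[k]] @ local
    # Eq. (2): displace the template along the normals, then blend the
    # per-joint transformed vertices with the skinning weights.
    V_h = np.concatenate([V + normals * d[:, None], np.ones((V.shape[0], 1))], axis=1)
    per_joint = np.einsum('kij,nj->nki', T, V_h)            # each joint's transform of each vertex
    posed_verts = np.einsum('nk,nki->ni', W, per_joint)[:, :3] + c
    posed_joints = np.einsum('kij,kj->ki', T[:, :3, :3], joints) + T[:, :3, 3] + c
    return posed_verts, posed_joints
```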
## 4 Reconstructing Pedestrians in the Wild
We now describe our method, LiDAR for human Mesh Estimation (LiME), for reconstructing pedestrians in the wild. Given a sequence of LiDAR measurements and camera images captured by a self-driving car, as well as 3D bounding boxes enclosing the pedestrians we want to reconstruct, we seek to estimate the pose trajectory (including global motion) and shape of each pedestrian in the scene. We use our modified LBS model \\(\\mathcal{M}\\) defined in Eq. 3 as our human body parameterization. For our reconstructions, the body model's skinning weights \\(\\mathbf{W}\\), template shape \\(\\mathbf{V}\\) and normals \\(\\mathcal{N}\\) are fixed and we infer from data the pose (joint angles \\(\\mathbf{\\Theta}\\), offset \\(\\mathbf{c}\\)) and shape modifications (joint scale factors \\(\\mathbf{s}\\) and deformations \\(\\mathbf{D}\\)). We first use a regression network to predict the initial estimates of \\((\\mathbf{\\Theta},\\mathbf{c},\\mathbf{s},\\mathbf{D})\\) from data. We then perform energy minimization to refine the prediction (see Figure 2). As we do not have ground-truth pose or shape, we use the objective function to self-supervise our network. We now describe the regression network and energy minimization in more detail.
### 4.1 Sensor Fusion Regression Network
Our regression network takes as input the LiDAR and camera image centered and cropped around each pedestrian, and outputs the initial estimate of the body model parameters \\((\\mathbf{\\Theta},\\mathbf{c},\\mathbf{s},\\mathbf{D})\\). Towards this goal, the camera image is fed into a 2D CNN to compute image features. We then apply bilinear interpolation to sample the corresponding image feature for each LiDAR point using geometry and the camera calibration. Finally, each LiDAR point and its concatenated image feature are consumed by a PointNet [38] network to predict the human parameters. Since the regression network has difficulty identifying which direction the human is facing, we follow [39] and run two branches of the network, where the root joint angle is initialized to either face forward (\\(0^{\\circ}\\)) or backward (\\(180^{\\circ}\\)).
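The following PyTorch sketch illustrates one plausible instantiation of this fusion architecture: a small CNN produces an image feature map, bilinear interpolation (`grid_sample`) gathers a feature for each projected LiDAR point, and a PointNet-style per-point MLP with max-pooling regresses the body parameters. Channel widths, layer counts, and the exact output parameterization (axis-angle joint angles, root offset, per-bone scales, per-vertex displacements) are assumptions for illustration; only the overall data flow follows the description above, and the two-branch forward/backward root initialization is left to the caller.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionRegressor(nn.Module):
    """Hypothetical sketch of the sensor-fusion regression network."""
    def __init__(self, n_joints=23, n_verts=6890, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        self.point_mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU())
        out_dim = n_joints * 3 + 3 + n_joints + n_verts   # Theta, c, s, D
        self.head = nn.Linear(256, out_dim)

    def forward(self, image, points, uv):
        # image: (B,3,H,W); points: (B,P,3) LiDAR points in the box frame;
        # uv: (B,P,2) their projections onto the image, normalized to [-1, 1].
        feat_map = self.cnn(image)
        grid = uv.unsqueeze(1)                                   # (B,1,P,2)
        point_feat = F.grid_sample(feat_map, grid, align_corners=False)
        point_feat = point_feat.squeeze(2).permute(0, 2, 1)      # (B,P,C)
        x = torch.cat([points, point_feat], dim=-1)              # per-point fused feature
        x = self.point_mlp(x).max(dim=1).values                  # global max-pool over points
        return self.head(x)                                      # initial (Theta, c, s, D) estimate
```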
### 4.2 Energy Formulation
We define the objective function to capture the fact that our shape should be consistent with the point clouds from the LiDAR measurements (\\(E_{\\text{sim}}\\)) and that the estimated 3D joints should be consistent with the 2D joints estimated from images (\\(E_{\\text{joint}}\\)). We add an additional term, \\(E_{\\text{prior}}\\), to regularize the poses to be natural, and the deformed shape to be smooth and not have large deviations from the mesh template. The full objective function is:
\\[E(\\mathbf{\\Theta}_{1:T},\\mathbf{c}_{1:T},\\mathbf{s},\\mathbf{D})=\\sum_{t}\\lambda_{ \\text{sim}}E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{ D})+\\lambda_{\\text{joint}}E_{\\text{joint}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t}, \\mathbf{s})+E_{\\text{prior}}(\\mathbf{\\Theta}_{t},\\mathbf{s},\\mathbf{D}) \\tag{4}\\]
where \\(t\\) is the time step in the pedestrian trajectory, and \\(\\mathbf{\\Theta}_{1:T}\\), \\(\\mathbf{c}_{1:T}\\) are the sequence of pose joint angles and root offsets. We next describe how we compute each term.
**LiDAR Consistency:** The LiDAR consistency term encourages the point cloud ray-casted from the estimated mesh \\(M(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})\\) to match the real partial point cloud \\(\\mathbf{X}\\) of the pedestrian through the Chamfer loss:
\\[E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})=\\frac{1 }{\\left|\\mathbf{X}\\right|}\\sum_{\\mathbf{x}\\in\\mathbf{X}}\\min_{\\mathbf{y}\\in \\mathbf{Y}}\\left\\|\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\frac{1}{\\left| \\mathbf{Y}\\right|}\\sum_{\\mathbf{y}\\in\\mathbf{Y}}\\min_{\\mathbf{x}\\in\\mathbf{X}} \\left\\|\\mathbf{y}-\\mathbf{x}\\right\\|_{2}^{2} \\tag{5}\\]
where \\(\\left|\\mathbf{X}\\right|\\) denotes the cardinality of point set \\(\\mathbf{X}\\), and \\(\\mathbf{Y}=\\{y_{1}\\dots y_{n}|y_{i}\\in\\mathbb{R}^{3}\\}\\) is the rendered points from the estimated mesh. Note that this is a differentiable point set distance and we exploit the Moller-Trumbore [40] ray casting algorithm which is differentiable (w.r.t. the mesh vertices) such that the full model can be trained end-to-end. We refer the reader to the supplementary material for details of the ray-caster and its differentiability. When computing \\(E_{\\text{sim}}\\), we take into account objects that occlude the sensor's field-of-view of the pedestrian, thereby ignoring simulated points from the ray-caster that would not appear due to occlusion.
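A minimal NumPy sketch of the symmetric Chamfer term in Eq. (5) is shown below. In the full pipeline the points Y come from the differentiable ray-caster, so gradients flow back to the mesh vertices; here the sketch only evaluates the distance between two given point sets, and the brute-force pairwise computation is for clarity rather than efficiency.

```python
import numpy as np

def chamfer_energy(X, Y):
    """Eq. (5): symmetric Chamfer distance between the real LiDAR points X (m,3)
    and the points Y (n,3) ray-casted from the estimated mesh."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)   # (m,n) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```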
**Human Joints Consistency:** We exploit camera images by first detecting 2D joints using a state-of-the-art 2D pose estimator [41]. We then encourage the projection of the predicted 3D pose to be consistent with the 2D pose estimates:
\\[E_{\\text{joint}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s})=\\sum_{k\\in B}m_{ k}\\rho(\\pi(\\mathbf{j}_{k},\\mathbf{\\Omega})-p_{k}) \\tag{6}\\]
where \\(\\mathbf{j}_{k}\\) is the \\(k\\)-th joint transformed according to Eq. 2, \\(B\\) is the subset of 3D joints that have 2D counterparts, and \\(p_{k}\\) and \\(m_{k}\\) are the corresponding estimated 2D joint and confidence score. \\(\\pi\\) is the projection function that takes the camera parameters \\(\\mathbf{\\Omega}\\), which are given as the cameras of self-driving cars are calibrated, and projects the 3D joint locations onto the image plane. \\(\\rho\\) is the \\(\\sigma^{2}\\)-scaled Geman-McClure robust penalty function defined as \\(\\rho(x)=(x^{2}\\cdot\\sigma^{2})/(x^{2}+\\sigma^{2})\\), with \\(\\sigma=100\\).

Figure 2: LiDAR for human Mesh Estimation (LiME): Given sensory observations, a sensor fusion regression network predicts the human parameters which minimize the objective function in Eq. 4. We then perform energy minimization over the sequence to obtain an optimized shape and 3D pose.
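Complementing the description above, here is a small NumPy sketch of the reprojection term in Eq. (6). It assumes a standard pinhole projection with given intrinsics and extrinsics (the paper only states that calibrated camera parameters are available, so the exact projection model is an assumption), and applies the robust penalty to the per-joint residual norm.

```python
import numpy as np

def joint_reprojection_energy(joints_3d, K_cam, T_cam, joints_2d, conf, sigma=100.0):
    """Sketch of Eq. (6): project the posed 3D joints with a calibrated pinhole
    camera and penalize deviation from detected 2D joints with Geman-McClure.
    joints_3d: (J,3), K_cam: (3,3) intrinsics, T_cam: (3,4) extrinsics,
    joints_2d: (J,2) detections, conf: (J,) detection confidences."""
    p_h = np.concatenate([joints_3d, np.ones((joints_3d.shape[0], 1))], axis=1)
    cam = (K_cam @ (T_cam @ p_h.T)).T                 # (J,3) homogeneous image coordinates
    proj = cam[:, :2] / cam[:, 2:3]                   # perspective divide
    r2 = np.sum((proj - joints_2d) ** 2, axis=1)      # squared reprojection residual per joint
    rho = (r2 * sigma ** 2) / (r2 + sigma ** 2)       # Geman-McClure robust penalty
    return np.sum(conf * rho)
```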
**Pose and Shape Priors:** We incorporate prior knowledge of what constitutes reasonable human poses and shapes to be robust to noisy sensor data. For joint angles, we follow [6; 11] and represent the joint angle prior as the negative log-likelihood of a Gaussian Mixture Model (GMM) learned from the CMU MoCap dataset [17]. We also add a bone scale prior that encourages the bone lengths to be close to a canonical size. The pose prior is:
\\[E_{\\text{pose}}(\\mathbf{\\Theta}_{t},\\mathbf{s})=-(\\log(\\sum_{r}^{R}g_{r} \\mathcal{N}(\\mathbf{\\Theta};\\mu_{r},\\mathbf{\\Sigma}_{r})))+\\lambda\\sum_{k}^{K }(\\prod_{p\\in A(k)}s_{p}-1)^{2} \\tag{7}\\]
with \\(R=8\\) Gaussians, \\((g_{r},\\mu_{r},\\mathbf{\\Sigma}_{r})\\) the weight, mean and covariance of the \\(r\\)-th Gaussian, and \\(\\prod_{p\\in A(k)}s_{p}\\) the cumulative scale factor for the bone length between the \\(k\\)-th joint and its ancestors. To ensure the deformed mesh still retains most of the mesh template shape and has smoothly-varying and small deformations, we add a Laplacian mesh regularizer [42] and an \\(\\ell_{2}\\) regularizer, respectively:
\\[E_{\\text{shape}}(\\mathbf{D})=\\sum_{i=1}^{N}\\lVert\\mathcal{L}(\\mathbf{v}_{i}+ \\mathbf{n}_{i}d_{i})-\\mathcal{L}(\\mathbf{v}_{i})\\rVert_{2}^{2}+\\lambda\\sum_{i =1}^{N}d_{i}^{2} \\tag{8}\\]
where \\(\\mathbf{v}_{i}\\) and \\(\\mathbf{n}_{i}\\) are the vertex location and normal in the mesh template, \\(d_{i}\\) is the corresponding displacement along the normal direction, and \\(\\mathcal{L}\\) is the Laplace operator. The total prior is:
\\[E_{\\text{prior}}(\\mathbf{\\Theta}_{t},\\mathbf{s},\\mathbf{D})=\\lambda_{\\text{ pose}}E_{\\text{pose}}(\\mathbf{\\Theta},\\mathbf{s})+\\lambda_{\\text{shape}}E_{\\text{shape}}( \\mathbf{D}) \\tag{9}\\]
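A sketch of the pose prior in Eq. (7) is given below, evaluating the GMM likelihood naively in closed form plus the bone-scale penalty. A real implementation would evaluate the mixture in log space (log-sum-exp) for numerical stability; the GMM parameters are assumed to have been fit offline to MoCap joint angles as described above, and the weight `lam` is an unspecified hyperparameter.

```python
import numpy as np

def pose_prior_energy(theta, weights, means, covs, cum_scale, lam=1.0):
    """Sketch of Eq. (7): negative log-likelihood of the joint angles under a
    GMM fit to MoCap data, plus a penalty keeping cumulative bone scales near 1.
    theta: (D,) flattened joint angles; weights/means/covs: GMM parameters of
    shapes (R,), (R,D), (R,D,D); cum_scale: (K,) cumulative scale per bone."""
    likelihood = 0.0
    for g, mu, cov in zip(weights, means, covs):
        diff = theta - mu
        norm_const = 1.0 / np.sqrt((2 * np.pi) ** len(theta) * np.linalg.det(cov))
        likelihood += g * norm_const * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))
    scale_penalty = lam * np.sum((cum_scale - 1.0) ** 2)
    return -np.log(likelihood) + scale_penalty
```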
### 4.3 Learning and Inference
**Inference:** We perform a forward pass for each pedestrian frame and output the initial model parameters. These predictions are then further refined by minimizing the differentiable energy defined in Eq. 4, which yields the final pose and shape of each pedestrian at each frame. In practice we found that a two-step energy minimization works well, where we first optimize \\(\\mathbf{\\Theta}_{1:T}\\), \\(\\mathbf{c}_{1:T}\\), \\(\\mathbf{s}\\) until convergence, and then optimize the deformation variable \\(\\mathbf{D}\\) until convergence. Each subject typically converges within 50 iterations. We adopt the Adam optimizer [43], which runs much faster than a second-order optimizer, to optimize our objective. Please see the supplementary material for more details.
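A compact PyTorch sketch of this two-step refinement is shown below, with `energy_fn` standing in for Eq. (4); the helper name, the learning rate (taken from the supplementary material), and the fixed iteration count are illustrative assumptions rather than the paper's exact code.

```python
import torch

def refine(params, energy_fn, n_iters=50):
    """Two-stage refinement: first optimize pose (Theta_{1:T}, c_{1:T}) and bone
    scales s, then the per-vertex deformations D, each with Adam."""
    theta, c, s, D = params              # leaf tensors created with requires_grad=True
    for group in ([theta, c, s], [D]):   # stage 1: pose/scale, stage 2: deformations
        opt = torch.optim.Adam(group, lr=1e-2)
        for _ in range(n_iters):
            opt.zero_grad()
            loss = energy_fn(theta, c, s, D)   # evaluates Eq. (4) over the sequence
            loss.backward()
            opt.step()
    return theta, c, s, D
```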
**Learning:** Since we do not have ground-truth shape or pose in our in-the-wild setting, we use Eq. 4 (for a single frame) as the loss function to train the network in a self-supervised fashion. As mentioned in Section 4.1, we use two branches with different root initializations and apply a hindsight loss during training [39]. We pass the result with the lower loss to the energy minimization step during inference. Please see the supplementary material for more details.
## 5 LiDAR Simulation of Pedestrians
In order to produce realistic sensor simulation, we first require a virtual world with realistic backgrounds (i.e., roads, buildings, signs) and dynamic objects (e.g., vehicles, pedestrians), as well as a sensor simulation system that has high fidelity with the real LiDAR. LiDARsim [15] is a LiDAR simulator that uses real data to generate background scenes and vehicle dynamic objects. LiDARsim then places the assets in a scenario (provided by either a labeled snippet, a tracking algorithm, artist-drawn trajectories, or algorithmically) and renders realistic LiDAR at each time step using both physics and machine learning. In particular, a neural network is used to enhance the realism of the ray-casted LiDAR by determining which points would not return in the real world (e.g., due to specular reflections or far distance). We note that this simulator is different from the ray-caster described in Section 4, as LiDARsim has a high-performing ray-tracer that scales to millions of scene elements. This ray-tracer is also non-differentiable, and thus not suited to our reconstruction framework. While LiDARsim provides high-fidelity backgrounds and vehicles, it lacks realistic pedestrians.
We now describe how we enhance LiDARsim with the pedestrians reconstructed using LiME. Towards this goal, we first build an asset bank of pedestrian sequences and their corresponding meshes directly from data captured by our self-driving fleet. Since the trajectories and mesh sequences in the asset bank can be quite diverse (walking, running, standing, sitting, etc.), we ease the reuse of the action-specific pose dynamics by clipping each cyclic pedestrian trajectory to consist of a single cycle, where the human poses in the start and end frames are similar. The average action cycle length is 1.5 seconds. Then, for each new query pedestrian trajectory to be simulated, we select a pedestrian moving at a similar speed from the asset bank, adapt it to the new scene, and simulate the LiDAR data with LiDARsim. We now discuss each step in more detail.
Our simulation approach works as follows: The user provides a bird's eye view (BEV) 2D trajectory in the scene map as a high level description of the motion to simulate. Note that this trajectory can come from an existing trajectory (recovered from recorded snippets via tracking or labeling), can be drawn by a test engineer, or can be produced algorithmically. In our experiments, we use labeled snippet trajectories as our query trajectories. We then retrieve the asset in the bank which is most similar to this trajectory query. We use velocity as our similarity function (specifically, the asset trajectory whose velocity is consistently within 0.5 m/s of the query trajectory's), as action-specific pose dynamics are specific to particular velocities. We then modify the retrieved asset and retarget it to perform the desired motion. Specifically, we project the query trajectory to the retrieved asset trajectory in BEV, and use Slerp [44] to interpolate the human poses for each time-step in the query trajectory. Note that this modification affects both the joint angles and associated mesh via our skinning model (see Fig. 4). Finally, we use LiDARsim to simulate the scene as seen by the sensor.
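The sketch below illustrates two of the main ingredients in NumPy: retrieving a bank asset whose speed profile stays within 0.5 m/s of the query, and Slerp interpolation of per-joint quaternions when resampling the retrieved pose sequence onto the query's time steps. The data layout (speeds stored per asset, poses as unit quaternions) and the simple first-match retrieval rule are assumptions made for illustration.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                     # flip to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                  # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

def retrieve_asset(query_speeds, bank, tol=0.5):
    """Return the first bank asset whose (resampled) speed profile stays within
    `tol` m/s of the query trajectory's speed at every time step."""
    q = np.asarray(query_speeds, dtype=float)
    t_q = np.linspace(0.0, 1.0, len(q))
    for asset in bank:
        a = np.asarray(asset["speeds"], dtype=float)
        a_resampled = np.interp(t_q, np.linspace(0.0, 1.0, len(a)), a)
        if np.all(np.abs(a_resampled - q) < tol):
            return asset
    return None
```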
## 6 Experimental Evaluation
We first evaluate our proposed method for estimating human surface geometry from LiDAR and image sequences. We then show how capturing realistic pedestrian trajectories in the wild enhances simulation environments and improves performance on autonomy tasks such as pedestrian detection.
### Pedestrian Reconstruction from Sparse LiDAR Points
We evaluate our model on the 3DPW [26] dataset, which contains 60 sequences (12 in the validation split) of real-world scenarios and 18 different humans, with images, ground-truth pose, and complete clothed 3D shape. We place a virtual LiDAR sensor at the camera center and ray-cast the clothed human mesh in the dataset to generate simulated LiDAR points. Given 3DPW real images and synthetic LiDAR, we evaluate our algorithm on estimating pose and shape. We measure the mesh error in cm with the mean Per-Vertex-Error (PVE) and the square root of the Chamfer distance (CD) between the vertices of our prediction and those of the ground truth. We measure the joint estimation error with the mean Per-Joint-Position-Error (MPJPE) in cm.
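For reference, the sketch below shows how the two correspondence-based metrics can be computed with NumPy, assuming the predicted and ground-truth vertices/joints are already in correspondence and expressed in centimetres; the CD is obtained by evaluating Eq. (5) on the two vertex sets and taking its square root.

```python
import numpy as np

def per_vertex_error(v_pred, v_gt):
    """Mean Per-Vertex-Error: average Euclidean distance between corresponding vertices."""
    return np.linalg.norm(v_pred - v_gt, axis=1).mean()

def mpjpe(j_pred, j_gt):
    """Mean Per-Joint-Position-Error: average Euclidean distance between corresponding joints."""
    return np.linalg.norm(j_pred - j_gt, axis=1).mean()
```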
**Ablation on input features and energy minimization:** The effect of using different input features is reported in Table 1. When we fuse image and LiDAR features, the reconstruction error is lower than when using either input alone. Energy minimization further reduces the error.
**Alternate human models:** We study the effect of using different human models in Table 2. All results are reported after running energy minimization. Using the LBS model alone, we achieve \\(6.49\\) cm mean PVE. With the additional bone scale factors and per-vertex displacement vectors, the PVE is \\(5.78\\) cm, outperforming the SMPL model.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline & CD & PVE & MPJPE \\\\ \\hline Image & 6.77 & 14.26 & 12.16 \\\\ LiDAR & 4.94 & 11.17 & 9.51 \\\\ Fused & 4.37 & 9.30 & 7.98 \\\\ \\hline Fused + EM & **2.17** & **5.78** & **5.01** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Effect of input/energy minimization (EM)
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline Human Model & CD & PVE & MPJPE \\\\ \\hline LBS & 2.62 & 6.49 & 5.69 \\\\ SMPL & 2.44 & 6.04 & 5.17 \\\\ LBS + bone scale & 2.38 & 5.97 & 5.19 \\\\ Ours & **2.17** & **5.78** & **5.01** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Effect of different human models.
**Ablation on energy terms:** Results in Table 3 are reported after running energy minimization. Leveraging LiDAR point cloud observations is important to achieving lower Chamfer error. Leveraging 2D joints is important to achieving lower mean PVE and MPJPE, which measure dense and sparse correspondence between our prediction and the ground-truth shape. Each energy term contributes to the final model.
**State-of-the-art (SoTA) comparison:** We compare our model with SoTA image-only approaches on the 3DPW [26] test set in Table 4. \"PVE*\" denotes the typically reported mean Per-Vertex-Error between prediction and ground-truth naked shape, while \"PVE\" denotes the mean Per-Vertex-Error between prediction and ground-truth clothed shape, which is more relevant to our task. We note that our approach uses sparse LiDAR, while the other SoTA approaches use ground-truth meshes and 3D poses and mix multiple datasets during training. Figure 5 shows qualitative results. Using LiDAR's sparse depth greatly improves the accuracy of the shape.
### Simulation for Downstream Visual Application
We have demonstrated that our approach recovers human pose and shape from 3DPW pedestrian sequences in the wild, and that leveraging LiDAR point clouds in our energy formulation improves reconstruction performance over prior methods. We now leverage diverse and realistic pedestrians for downstream perception tasks. We conduct our experiments on the ATG4D [45] self-driving dataset, which contains diverse scenes across multiple metropolitan cities in North America with bounding box labels annotated for each object in the scene. Each log snippet has 64-beam LiDAR sweeps at \\(10\\) Hz for \\(\\approx 25\\) seconds with corresponding camera images. We use a detector similar to PnPNet [46], which takes as input five consecutive LiDAR sweeps in bird's-eye view (BEV) and outputs 3D bounding boxes for detected vehicles and pedestrians in the scene. More details about the detector can be found in the supplementary material.
We use LiME to reconstruct the pedestrian shapes and pose trajectories from the ATG4D dataset. LiME accurately captures the geometry compared to the original LiDAR sequence, as seen in Figure 3. To generate the assets in Section 5, we select 211 unique pedestrian trajectory annotations from the ATG4D [45] training split, with over 3300 individually posed meshes. Each selected pedestrian trajectory annotation: (1) is visible in the camera images, with \\(70\\%\\) of the 2D joint detection scores \\(>0.1\\); (2) has \\(\\geq 10\\) consecutive frames; (3) has \\(\\geq 100\\) LiDAR points per frame; (4) has \\(E_{sim}<20\\), \\(E_{joint}<6\\), and \\(E_{prior}<22\\); (5) forms a complete action cycle (Sec. 5).
We then use the method described in Section 5 to generate simulated LiDAR sweeps, as seen in Figure 6. We can place pedestrians in new configurations (bottom panel one), generate occlusion (panel two), sample safety-critical behaviors (looking at phone, panel three), or create group behavior (panel four). We show through data augmentation experiments that training on our simulated LiDAR data improves pedestrian detection performance with limited amounts of real data.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline Methods & PVE* & PVE & MPJPE \\\\ \\hline HMMR [12] & 13.93 & -- & 11.65 \\\\ SPIN [9] & 11.64 & -- & 9.69 \\\\ VIBE [14] & 9.91 & -- & 8.29 \\\\ Ours & **7.36** & **8.17** & **6.57** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Evaluation of 3D pose estimation and shape reconstruction on the 3DPW test set. \"PVE*\" means Per-Vertex-Error when the ground-truth human is naked.
Figure 5: Qualitative results of our method on the 3DPW [26] dataset. The sensory input consists of the camera image and the synthetic LiDAR points. We show our method using both the SMPL model and our human model from Section 3, and compare with SPIN [9].
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multicolumn{3}{c}{Objectives} & \\multicolumn{3}{c}{Error} \\\\ \\hline \\(E_{\\text{sim}}\\) & \\(E_{\\text{prior}}\\) & \\(E_{\\text{joint}}\\) & CD & PVE & MPJPE \\\\ \\hline ✓ & & & 3.40 & 30.47 & 28.36 \\\\ ✓ & ✓ & & 3.41 & 23.15 & 21.36 \\\\ & ✓ & ✓ & 5.84 & 11.60 & 9.96 \\\\ ✓ & & ✓ & 2.49 & 7.24 & 5.40 \\\\ ✓ & ✓ & ✓ & **2.17** & **5.78** & **5.01** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Ablation of the different objective terms for shape reconstruction and 3D joint estimation.

**Evaluating the Pedestrian Detector on Simulated Data:** We first evaluate the pedestrian detector on our simulated LiDAR data, and compare the result with the one obtained on real LiDAR data. This indicates how well we can use our simulation for testing the perception system on safety-critical scenarios. To properly evaluate the realism of our simulated LiDAR points, we generate LiDARsim point clouds from the ground-truth scene layouts. The pedestrian detector was trained on real LiDAR data only. We evaluate the average precision (AP) of the detector for the pedestrian class at two IoU thresholds: \\(0.3\\) and \\(0.5\\). As seen in Table 5, our method has a small gap of 0.7 AP points at IoU 0.5. This means we can directly use it with little to no domain adaptation to evaluate autonomy systems.
**Training Data Augmentation with Simulated Data:** We train the detector on varying amounts of real LiDAR data and show how performance changes when we augment the dataset with 100k examples of simulated LiDAR data containing vehicles and pedestrians. We report the results in Table 7. Note that, to strictly evaluate the realism of the sensor data, the pedestrian layouts and trajectories in the 100k simulated examples are different from those in the 100k real examples. When we combine simulated LiDAR data with real data, we consistently get performance gains, especially when we only have limited real data. Moreover, when we combine large amounts of real and simulated data (100k examples each), we get about \\(3\\) AP points of improvement over real data alone. As shown in Table 6, if the 100k real LiDAR examples and the 100k simulated LiDAR examples use the same scene layout and pedestrian trajectories, we get a \\(1.7\\) AP point improvement over real data alone, highlighting the value of simulating diverse pedestrian LiDAR sequences even with the same layout.
## 7 Conclusion
In this paper, we propose to leverage LiDAR and camera images collected by self-driving cars driving around a city to generate diverse pedestrian shapes and motions at scale, which we then use for accurate simulation to test and train a state-of-the-art perception system. Towards this goal, we have designed a deep-structured model, LiME, to reconstruct pedestrians in the wild using image and LiDAR sequences. We then perform motion retargeting and pedestrian scenario simulation in urban scenes to generate realistic LiDAR data. Our results show that the generated LiDAR point clouds have little domain gap and enhance the performance of downstream detectors via data augmentation. In the future we plan to reconstruct and simulate other categories such as animals.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline Train data (100k) & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline Real & 72.0 & 66.8 \\\\ Real + Sim & 73.6 & 68.5 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Trained on 100k real and 100k sim (same layout) and evaluated on real data.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline Eval data & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline Real & 72.0 & 66.8 \\\\ Sim & 67.8 & 66.1 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Pedestrian detector trained on real LiDAR data only, evaluated on real vs. simulated LiDAR data.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline Real Amount & \\multicolumn{2}{c}{Real Only} & \\multicolumn{2}{c}{Real+100k Sim} \\\\ \\cline{2-5} & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) & AP\\({}_{0.3}\\) & AP\\({}_{0.5}\\) \\\\ \\hline
0k & β & β & 66.9 & 61.6 \\\\
5k & 30.9 & 27.5 & 68.7 & 63.2 \\\\
10k & 40.2 & 36.6 & 69.4 & 64.3 \\\\
20k & 53.2 & 48.6 & 70.4 & 65.4 \\\\
50k & 67.4 & 62.7 & 73.4 & 68.5 \\\\
100k & 72.0 & 66.8 & 74.9 & 69.9 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 7: Training with simulated data boosts pedestrian detection performance.
Figure 6: **Top Left:** reconstructed scene. **Top Right:** simulated LiDAR and pedestrian detections (green boxes). Detector trained on real data only. **Bottom:** Reconstructions and simulated LiDAR.
## References
* [1] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In _CoRL_, 2017.
* [2] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. _PAMI_, 2014.
* [3] F. Bogo, J. Romero, M. Loper, and M. J. Black. Faust: Dataset and evaluation for 3d mesh registration. In _CVPR_, 2014.
* [4] L. Sigal, A. O. Balan, and M. J. Black. Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. _IJCV_, 2010.
* [5] M. Trumble, A. Gilbert, C. Malleson, A. Hilton, and J. Collomosse. Total capture: 3d human pose estimation fusing video and inertial sensors. In _BMVC_, 2017.
* [6] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In _ECCV_, 2016.
* [7] A. O. Balan, L. Sigal, M. J. Black, J. E. Davis, and H. W. Haussecker. Detailed human shape and pose from images. In _CVPR_, 2007.
* [8] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In _CVPR_, 2018.
* [9] N. Kolotouros, G. Pavlakos, M. J. Black, and K. Daniilidis. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In _ICCV_, 2019.
* [10] T. Alldieck, G. Pons-Moll, C. Theobalt, and M. Magnor. Tex2shape: Detailed full human body geometry from a single image. In _ICCV_, 2019.
* [11] A. Arnab, C. Doersch, and A. Zisserman. Exploiting temporal context for 3d human pose estimation in the wild. In _CVPR_, 2019.
* [12] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik. Learning 3d human dynamics from video. In _CVPR_, 2019.
* [13] T. Alldieck, M. Magnor, W. Xu, C. Theobalt, and G. Pons-Moll. Video based reconstruction of 3d people models. In _CVPR_, 2018.
* [14] M. Kocabas, N. Athanasiou, and M. J. Black. Vibe: Video inference for human body pose and shape estimation. In _CVPR_, 2020.
* [15] S. Manivasagam, S. Wang, K. Wong, W. Zeng, M. Sazanovich, S. Tan, B. Yang, W.-C. Ma, and R. Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In _CVPR_, 2020.
* [16] H. Joo, T. Simon, and Y. Sheikh. Total capture: A 3d deformation model for tracking faces, hands, and bodies. In _CVPR_, 2018.
* [17] CMU. Carnegie-mellon mocap database. URL [http://mocap.cs.cmu.edu/](http://mocap.cs.cmu.edu/).
* [18] A. Agarwal and B. Triggs. Recovering 3d human pose from monocular images. _PAMI_, 2005.
* [19] J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3d human pose estimation. In _ICCV_, 2017.
* [20] G. Pavlakos, X. Zhou, and K. Daniilidis. Ordinal depth supervision for 3d human pose estimation. In _CVPR_, 2018.
* [21] L. Sigal, M. Isard, H. Haussecker, and M. J. Black. Loose-limbed people: Estimating 3d human pose and motion using non-parametric belief propagation. _IJCV_, 2012.
* [22] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In _CVPR_, 2019.
* [23] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct prediction of 3d body poses from motion compensated sequences. In _CVPR_, 2016.
* [24] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In _ECCV_, 2018.
* [25] N. Saini, E. Price, R. Tallamraju, R. Enficiaud, R. Ludwig, I. Martinovic, A. Ahmad, and M. J. Black. Markerless outdoor human motion capture using multiple autonomous micro aerial vehicles. In _ICCV_, 2019.
* [26] T. von Marcard, R. Henschel, M. Black, B. Rosenhahn, and G. Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In _ECCV_, 2018.
* [27] F. Bogo, M. J. Black, M. Loper, and J. Romero. Detailed full-body reconstructions of moving people from monocular rgb-d sequences. In _ICCV_, 2015.
* [28] C. Zimmermann, T. Welschehold, C. Dornhege, W. Burgard, and T. Brox. 3d human pose estimation in rgbd images for robotic task learning. In _ICRA_, 2018.
* [29] R. A. Newcombe, D. Fox, and S. M. Seitz. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time. In _CVPR_, 2015.
* [30] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. _arXiv_, 2019.
* [31] Z. Zheng, T. Yu, Y. Wei, Q. Dai, and Y. Liu. Deephuman: 3d human reconstruction from a single image. _arXiv_, 2019.
* [32] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. _TOG_, 2015.
* [33] C.-L. Li, T. Simon, J. Saragih, B. Poczos, and Y. Sheikh. Lbs autoencoder: Self-supervised fitting of articulated meshes to point clouds. In _CVPR_, 2019.
* [34] G. Echeverria, N. Lassabe, A. Degroote, and S. Lemaignan. Modular open robots simulation engine: Morse. In _ICRA_, 2011.
* recent perspectives with the morse simulator. 2014.
* [36] L. Kavan and J. Zara. Spherical blend skinning: a real-time deformation of articulated models. In _I3D_, 2005.
* [37] P. Joshi, M. Meyer, T. DeRose, B. Green, and T. Sanocki. Harmonic coordinates for character articulation. _TOG_, 2007.
* [38] W. Yuan, T. Khot, D. Held, C. Mertz, and M. Hebert. Pcn: Point completion network. In _3DV_, 2018.
* [39] E. Insafutdinov and A. Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. In _NIPS_, 2018.
* [40] T. Moller and B. Trumbore. Fast, minimum storage ray/triangle intersection. In _ACM SIGGRAPH 2005 Courses_, 2005.
* [41] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick. Detectron2, 2019.
* [42] O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rossl, and H.-P. Seidel. Laplacian surface editing. In _Eurographics/ACM SIGGRAPH symposium on Geometry processing_, 2004.
* [43] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _arXiv_, 2014.
* [44] K. Shoemake. Animating rotation with quaternion curves. In _SIGGRAPH_, 1985.
* [45] B. Yang, W. Luo, and R. Urtasun. Pixor: Real-time 3d object detection from point clouds. In _CVPR_, 2018.
* [46] M. Liang, B. Yang, W. Zeng, Y. Chen, R. Hu, S. Casas, and R. Urtasun. Pnpnet: End-to-end perception and prediction with tracking in the loop. In _CVPR_, 2020.
## Appendix
In this supplementary, we cover additional details and analysis of our method for recovering and simulating pedestrians in the wild. In Section A1 we provide details about our obstacle-aware ray-tracer that allows us to incorporate LiDAR observations to improve 3D pose and shape reconstruction, and we discuss the differentiability of our ray-tracer. Then in Section A2 we provide more details about our learning and inference pipelines. Finally in Section A3 we provide the details of our pedestrian detector used in the experiments.
Additionally, please see our supplementary video, which showcases (1) Motivation and our methodology overview of LiME (LiDAR for human Mesh Estimation); (2) Human shape and pose reconstruction results using LiME on our real-world data, demonstrating the diversity of pedestrians we recover; (3) Application of our pedestrian asset bank for downstream evaluation of perception algorithms trained only on real data; and (4) Demonstration of our method for training perception algorithms by showing a side-by-side comparison of a detector trained on either simulated or real data and evaluated on real data.
### A1 Details of our Obstacle-aware Differentiable Ray-tracer
As described in the main paper, real LiDAR point cloud observations of pedestrians in the wild are sparse (due to distance and LiDAR resolution) and partial (due to occlusions of other objects). The LiDAR sensor can be approximated via ray casting, where each laser ray shot by the sensor is parameterized in spherical coordinates \\((r,\\phi,\\theta)\\), representing the radius (distance travelled), azimuth, and elevation of the ray. We therefore design our ray-tracer to generate synthetic point clouds that better match the real ones by using the same LiDAR resolution when generating the ray-casted rays and removing ray-casted rays that hit occluded objects, which we can infer based on the real LiDAR point cloud.
**Ray-casting algorithm:** Given the pedestrian LiDAR point cloud and the LiDAR sensor location, we first compute the radius, azimuth, and elevation ranges of the rays that might hit the pedestrian as \\(\\{r_{\\text{min}},r_{\\text{max}}\\}\\), \\(\\{\\phi_{\\text{min}},\\phi_{\\text{max}}\\}\\) and \\(\\{\\theta_{\\text{min}},\\theta_{\\text{max}}\\}\\). We determine these values based on the 3D bounding box enclosing the pedestrian LiDAR point cloud. We then compute the set of rays within the azimuth and elevation ranges according to the resolution of the LiDAR sensor \\((d_{\\phi},d_{\\theta})\\):
\\[\\mathcal{R}=\\left\\{(i*d_{\\phi},j*d_{\\theta})\\;\\Big{|}\\;\\lfloor\\frac{\\phi_{\\text{min}}}{d_{\\phi}}\\rfloor<i<\\lfloor\\frac{\\phi_{\\text{max}}}{d_{\\phi}}\\rfloor,\\;\\lfloor\\frac{\\theta_{\\text{min}}}{d_{\\theta}}\\rfloor<j<\\lfloor\\frac{\\theta_{\\text{max}}}{d_{\\theta}}\\rfloor\\right\\} \\tag{10}\\]
where \\(\\lfloor\\cdot\\rfloor\\) is the floor function, and \\(i,j\\) are integers. For each ray \\(\\mathbf{r}=(i*d_{\\phi},j*d_{\\theta})\\in\\mathcal{R}\\), \\(i*d_{\\phi}\\) is the azimuth of the ray and \\(j*d_{\\theta}\\) is the elevation of the ray. For simplicity, we assume the centre of the ray-caster is at origin \\(\\mathbf{o}\\).
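A NumPy sketch of Eq. (10) is given below; it simply enumerates the integer azimuth/elevation indices that fall strictly inside the box's angular extent and returns the corresponding ray angles. Angles are assumed to be in radians with the sensor at the origin, as above.

```python
import numpy as np

def candidate_rays(phi_min, phi_max, theta_min, theta_max, d_phi, d_theta):
    """Sketch of Eq. (10): the (azimuth, elevation) angles of the LiDAR rays on
    the sensor's angular grid that can hit the pedestrian's bounding box."""
    i = np.arange(int(np.floor(phi_min / d_phi)) + 1, int(np.floor(phi_max / d_phi)))
    j = np.arange(int(np.floor(theta_min / d_theta)) + 1, int(np.floor(theta_max / d_theta)))
    ii, jj = np.meshgrid(i, j, indexing="ij")
    return np.stack([ii.ravel() * d_phi, jj.ravel() * d_theta], axis=1)   # (num_rays, 2)
```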
We then cast the set of rays \\(\\mathcal{R}\\) into the reconstructed mesh using the Moller-Trumbore [40] ray casting algorithm. Moller-Trumbore ray casting efficiently computes the ray-triangle intersection for each triangle in the mesh by converting the representation of the intersection point \\(\\mathbf{p}\\) from cartesian coordinates to the Barycentric coordinates of the triangle of interest. We define the cartesian coordinate of the intersection point as \\(\\mathbf{p_{cart}}=\\mathbf{o}+c\\ \\mathbf{d}\\), where \\(\\mathbf{o}\\) and \\(\\mathbf{d}\\) are the origin and direction of the raycasted ray in \\((x,y,z)\\) cartesian coordinate space, and \\(c\\) is the distance travelled. For a triangle face \\(\\mathbf{f}\\) with vertices \\((\\mathbf{v}_{1},\\mathbf{v}_{2},\\mathbf{v}_{3})\\), we define \\(\\mathbf{e}_{1}=\\mathbf{v}_{2}-\\mathbf{v}_{1},\\mathbf{e}_{2}=\\mathbf{v}_{3}- \\mathbf{v}_{2}\\) and \\(\\mathbf{t}=\\mathbf{o}-\\mathbf{v}_{1}\\). The Barycentric coordinates of the intersection point \\((u,v)\\) are obtained by solving:
\\[\\begin{bmatrix}c\\\\ u\\\\ v\\end{bmatrix}=\\frac{1}{(\\mathbf{d}\\times\\mathbf{e}_{2})\\cdot\\mathbf{e}_{1}} \\begin{bmatrix}(\\mathbf{t}\\times\\mathbf{e}_{1})\\cdot\\mathbf{e}_{2}\\\\ (\\mathbf{d}\\times\\mathbf{e}_{2})\\cdot\\mathbf{t}\\\\ (\\mathbf{t}\\times\\mathbf{e}_{1})\\cdot\\mathbf{d}\\end{bmatrix} \\tag{11}\\]
where \\(\\times\\) is the cross product operator and \\(\\cdot\\) is the inner product operator between two vectors. If the intersection point lies inside the triangle, we can convert it back to Cartesian coordinates as \\(\\mathbf{y}=\\mathbf{v}_{1}+u\\mathbf{e}_{1}+v\\mathbf{e}_{2}\\). Note that if the ray intersects multiple triangle faces, we choose the ray-casted point with **minimum** distance to the ray-caster origin. The ray-casted points on the mesh form the set \\(\\mathbf{Y}\\) (Eq. 5 in the main paper).
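The sketch below implements this single ray-triangle test in NumPy, following the edge convention used in the text (e1 = v2 - v1, e2 = v3 - v2), so the inside-triangle test checks that the weights (1 - u), (u - v) and v are all non-negative. Looping over all mesh triangles and keeping the closest hit per ray is left to the caller.

```python
import numpy as np

def ray_triangle_intersect(o, d, v1, v2, v3, eps=1e-9):
    """Sketch of Eq. (11): Moller-Trumbore intersection of the ray o + c*d with
    the triangle (v1, v2, v3). Returns the hit point or None."""
    e1, e2, t = v2 - v1, v3 - v2, o - v1
    p = np.cross(d, e2)
    denom = np.dot(p, e1)
    if abs(denom) < eps:                      # ray parallel to the triangle plane
        return None
    q = np.cross(t, e1)
    c = np.dot(q, e2) / denom                 # distance along the ray
    u = np.dot(p, t) / denom
    v = np.dot(q, d) / denom
    # Inside test for the parameterization y = (1-u) v1 + (u-v) v2 + v v3.
    if c <= 0 or v < 0 or (u - v) < 0 or (1 - u) < 0:
        return None
    return v1 + u * e1 + v * e2
```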
**Occlusion-aware ray-caster:** Directly using a ray-tracer to generate a synthetic point cloud will not match the observed LiDAR points well if the set of rays \\(\\mathcal{R}\\) hits occluded objects in the real LiDAR scene. Not accounting for these occlusions would incorrectly penalize the posed mesh for not generating points in regions that are not visible to the real LiDAR sensor. To account for occlusions, we first define an occluder as an object lying between the sensor and the bounding box enclosing the pedestrian LiDAR points. Note that we do not account for occlusion inside the bounding box. The set of points in the real LiDAR scan forming the occlusion is then:
\\[\\mathbf{O}=\\{\\mathbf{p}\\mid r_{\\mathbf{p}}<r_{\\text{min}},\\phi_{\\text{min}}< \\phi_{\\mathbf{p}}<\\phi_{\\text{max}},\\theta_{\\text{min}}<\\theta_{\\mathbf{p}}< \\theta_{\\text{max}}\\} \\tag{12}\\]
where \\(r_{\\mathbf{p}},\\phi_{\\mathbf{p}},\\theta_{\\mathbf{p}}\\) are the radius, azimuth and elevation of point \\(\\mathbf{p}\\), respectively. We determine the rays in \\(\\mathcal{R}\\) that hit the occlusion \\(\\mathbf{O}\\) as:
\\[\\mathcal{O}=\\left\\{\\left(\\lfloor\\frac{\\phi_{\\mathbf{p}}}{d_{\\phi}}\\rfloor, \\lfloor\\frac{\\theta_{\\mathbf{p}}}{d_{\\theta}}\\rfloor\\right)\\right|\\mathbf{p} \\in\\mathbf{O}\\right\\} \\tag{13}\\]
We then mask out occluded rays \\(\\mathcal{O}\\) from \\(\\mathcal{R}\\) and use the set of rays \\(\\mathcal{R}\\setminus\\mathcal{O}\\) to generate the ray-casted points \\(\\mathbf{Y}\\), and compute \\(E_{\\text{sim}}\\) (Eq. 5 in the main paper):
\\[E_{\\text{sim}}(\\mathbf{\\Theta}_{t},\\mathbf{c}_{t},\\mathbf{s},\\mathbf{D})= \\frac{1}{\\left|\\mathbf{X}\\right|}\\sum_{\\mathbf{x}\\in\\mathbf{X}}\\min_{\\mathbf{y }\\in\\mathbf{Y}}\\left\\|\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\frac{1}{\\left| \\mathbf{Y}\\right|}\\sum_{\\mathbf{y}\\in\\mathbf{Y}}\\min_{\\mathbf{x}\\in\\mathbf{X} }\\left\\|\\mathbf{y}-\\mathbf{x}\\right\\|_{2}^{2} \\tag{14}\\]
See Figure 7 for a visual explanation.
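Below is a NumPy sketch of Eqs. (12)-(13): real LiDAR returns closer than the pedestrian box within its angular extent are treated as occluders, and any candidate ray falling in the same angular cell is dropped before ray-casting. The spherical-coordinate convention (azimuth from atan2(y, x), elevation from arcsin(z/r), sensor at the origin) is an assumption for illustration.

```python
import numpy as np

def mask_occluded_rays(rays, scene_points, r_min, phi_rng, theta_rng, d_phi, d_theta):
    """Drop candidate rays whose angular cell contains a real LiDAR return closer
    than the pedestrian box (an occluder).
    rays: (R,2) (azimuth, elevation); scene_points: (P,3) full LiDAR sweep."""
    x, y, z = scene_points[:, 0], scene_points[:, 1], scene_points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(y, x)
    theta = np.arcsin(z / np.maximum(r, 1e-9))
    occ = (r < r_min) & (phi > phi_rng[0]) & (phi < phi_rng[1]) \
          & (theta > theta_rng[0]) & (theta < theta_rng[1])            # Eq. (12)
    occ_cells = {(int(np.floor(p / d_phi)), int(np.floor(t / d_theta)))
                 for p, t in zip(phi[occ], theta[occ])}                # Eq. (13)
    keep = [(int(np.floor(a / d_phi)), int(np.floor(e / d_theta))) not in occ_cells
            for a, e in rays]
    return rays[np.asarray(keep)]
```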
**Differentiability of the ray-tracer:** Since the coordinate of the ray-casted point is:
\\[\\mathbf{y} =\\mathbf{v}_{1}+u\\mathbf{e}_{1}+v\\mathbf{e}_{2}\\] \\[=\\mathbf{v}_{1}+u(\\mathbf{v}_{2}-\\mathbf{v}_{1})+v(\\mathbf{v}_{3 }-\\mathbf{v}_{2})\\] \\[=(1-u)\\mathbf{v}_{1}+(u-v)\\mathbf{v}_{2}+v\\mathbf{v}_{3} \\tag{15}\\]
which is a linear combination of the vertices \\(\\mathbf{v}_{1},\\mathbf{v}_{2},\\mathbf{v}_{3}\\) of the mesh, the LiDAR ray-tracer is differentiable with respect to the mesh vertices. Although this ray-caster is not differentiable with respect to which triangle it intersects, empirically we find this works well in practice, as we have additional energy terms such as the 2D-joint consistency from the image to provide key-point correspondence supervision, and shape and pose priors to help with the shape and pose estimates.
### A2 Learning and Inference Details
In our sensor fusion network, we use ResNet-50 as the 2D CNN backbone, and we use the Point Completion Network [38] as the point cloud feature extractor. To learn the neural network, we use a batch size of 16 and the Adam optimizer with a learning rate of \\(10^{-4}\\), and we train the network for \\(50\\,000\\) iterations.
When we perform energy minimization, we use the Adam optimizer with a learning rate of \\(10^{-2}\\). The weights for the simulation, joint, pose prior, bone scale prior, \\(\\ell_{2}\\) smoothness, and Laplacian terms are \\(144^{2}\\), \\(0.2^{2}\\), \\(0.478^{2}\\), \\(2^{2}\\), \\(100^{2}\\) and \\(1000^{2}\\), respectively.
Figure 7: We determine the rays to be casted using the bounding box enclosing the object LiDAR points (blue), and we mask out the rays that hit obstacles (orange). We use the remaining rays to compute the ray-casted points.
### A3 Pedestrian Detector Details
We use an object detector similar to [46], which takes as input five consecutive LiDAR sweeps (0.5 s) in bird's-eye view (BEV). The LiDAR data uses a voxel-based representation in BEV, and the five consecutive sweeps are combined by concatenating along the height dimension (with the ego-motion compensated for the previous sweeps). Each instance label box includes at least one pedestrian LiDAR point.
Given the aforementioned BEV representation of the LiDAR as input, the network first down-samples the input BEV image by a factor of 4 using three Conv2D layers. Then a cross-scale module [46] is applied three times sequentially. Next, an FPN is applied to fuse the multi-scale feature maps, resulting in a 4x down-sampled BEV feature map. Finally, we use 4 Conv2D layers to generate the 3D bounding box predictions.
# Range-Aided LiDAR-Inertial Multi-Vehicle Mapping in Degenerate Environment
Zhe Jin, Chaoyang Jiang
This work is supported by the National Natural Science Foundation of China (No. 52002026, U20A20333) and the National Key Research and Development Project (No. 2020YFC1512500) (_Corresponding author: Chaoyang Jiang_). The authors are with the School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China, 100081 (email: [email protected]).
## I Introduction
Multi-vehicle simultaneous localization and mapping (SLAM) has been widely used for search and rescue, maintenance investigations, underwater detection, and space exploration [1]. It is a great challenge for a single vehicle to handle such tasks in large-scale and degenerate environments, whereas multiple vehicles working together have great potential to improve mapping accuracy and efficiency. Therefore, multi-vehicle collaborative mapping systems have attracted increasing attention in recent years [2].
Features in large-scale and degenerate environments are usually sparse, which leads to large accumulated errors for SLAM systems. Fortunately, range sensors are not affected by environmental degeneration as long as the line of sight is not shadowed. On one hand, range constraints are simpler and more efficient than finding loop closures for collaborative mapping; on the other hand, range factors can be easily introduced into a pose graph optimization (PGO) procedure. Therefore, range-aided multi-vehicle SLAM has great potential to improve the robustness of localization and mapping in degenerate environments.
### _Related works_
Multi-vehicle mapping has two main branches: centralized mapping and decentralized mapping [3]. Centralized mapping systems collect and optimize messages from all connected vehicles. Riazuelo et al. [4] proposed a typical centralized mapping system in which the expensive map optimization and storage were allocated to a cloud server while a light camera-tracking client ran on a local computer. Deutsch et al. [5] further introduced a software framework for real-time multi-vehicle collaborative SLAM which can potentially work with various SLAM algorithms. Both require an external server for the aggregation of data and information feedback, and thus network delays become a hidden problem. In contrast, Dube et al. [6] shifted the master node onto one of the vehicles and proposed a fully-integrated online multi-vehicle SLAM system, which avoids long-distance communication but requires a high-performance onboard processor. Decentralized methods do not rely on a central server and split the computation across the vehicle nodes. Choudhary et al. [1] proposed a set of distributed algorithms for pose graph optimization in which vehicles communicate and exchange relative measurements only when a rendezvous is detected. Different from [1], inter-vehicle communication and pose-graph optimization are performed in real time in [7]. Lajoie et al. [8] then extended and improved the above two methods [1, 7] and proposed DOOR-SLAM, a fully distributed SLAM system with an outlier rejection mechanism that can work with less conservative parameters. The above-mentioned multi-vehicle mapping systems applied inter- or intra-vehicle loop closure detection to address data association and have achieved great progress. However, they still cannot work well in degenerate environments, especially when environmental characteristics are similar.
Degeneracy is caused by insufficient constraints in some directions, leading to less robust state estimation. The characteristics of degenerate environments include a lack of geometrical, textural, and/or thermal features. Zhang et al. [9] first proposed a degeneration detection method and separated the degenerate directions in the state space to reduce the influence of degeneracy in structured environments. Similarly, Hinduja et al. [10] only optimized the pose graph in the well-constrained directions of the state space. These directions were selected based on a dynamic threshold and updated in real time. Extending the above two methods, Ren et al. [11] proposed a reliable degeneracy indicator that can evaluate the scan-matching performance in off-road environments. The evaluated degeneracy indicator was then integrated into a factor graph optimization framework. However, these methods [9, 10, 11] only adopted a single sensor and were unable to optimize the degenerate dimension. Khattak et al. [12] utilized a visual-inertial odometry and a thermal-inertial odometry to find robust priors for LiDAR pose estimation. One of the two odometries was selected for propagation when the LiDAR odometry failed due to degeneration, which can improve the reliability of the pose estimation. Great progress has been achieved in past decades, but robust mapping is still a big challenge in degenerate environments.
Degenerate environments have no influence on the distance observations of range sensors such as Bluetooth, ultra-wideband (UWB), Zigbee, and WiFi. Song et al. [13] fused LiDAR and UWB measurements for single-vehicle localization and allowed the unknown anchors to change their positions, which to some extent made the system more robust against degeneration. Applying more sensors, such as an inertial measurement unit (IMU), a light detection and ranging (LiDAR) sensor, and a camera, Nguyen et al. [14] performed a comprehensive optimization-based estimation of the state of an unmanned aerial vehicle. Both methods [13, 14] depend on preset anchors, which greatly limits their application to multi-vehicle cases. Xu et al. [15] proposed a decentralized state estimation system fusing stereo wide-field-of-view cameras and UWB sensors for a multi-vehicle case. Similarly, Nguyen et al. [16] proposed a visual-inertial-UWB multi-vehicle localization system that loosely fuses the UWB and visual-inertial odometry data while tightly fusing all onboard sensors. Both methods achieved a great localization improvement, but only in small-scale, non-degenerate environments. Current range-aided methods focus on localization with or without anchor beacons, but few of them focus on mapping in degenerate environments.
Prior works on multi-vehicle mapping are rich, but there is still room for improvement: 1) range-aided multi-vehicle mapping systems with fixed anchors can hardly extend to large-scale environments due to the requirement for numerous anchors, while those without anchors still cannot work well in degenerate environments; 2) centralized systems rely on a central server, which is vulnerable, and decentralized systems cannot easily achieve a globally consistent map in real time; 3) most anti-degeneration methods ignore the information of degeneration directions or compensate with other sensors, such as thermal sensors, that depend on environmental features; 4) few works cope with degeneration correction.
### _Contribution_
Considering the above-mentioned problems, we propose the RaLI-Multi: a range-aided LiDAR-inertial multi-vehicle mapping system. Each vehicle performs a local mapping procedure with IMU integration, LiDAR feature extraction and registration, degeneration detection, and degeneration correction. Range measurements compensate for the error in the degenerate direction when both the degenerate level and the gap between the LiDAR-inertial odometry and the range measurements exceed their preset thresholds. The RaLI-Multi dynamically schedules one vehicle as the anchor vehicle, which stops and can be viewed as an anchor for range measurements. The anchor vehicle also acts as a temporary central server: it receives local maps, LiDAR-inertial odometry, and range constraints between vehicles to optimize and broadcast the global map, which in turn updates the local states of each vehicle. The main contributions of this paper are as follows:
1. We propose a multi-metric-weight LiDAR-inertial front-end, which assigns a weight to each feature point and achieves better odometry in degenerate environments.
2. A geometry-based degeneration detection method is proposed as the foundation of the following degeneration correction module; it can online monitor the degeneration level and estimate the corresponding degenerate direction.
3. The range-aided degeneration correction module compensates for the error of the LiDAR-inertial odometry along the degenerate direction, which is considered the main component of the pose estimation error. In this way, we improve the robustness of the mapping system in degenerate environments.
4. The proposed RaLI-Multi has the advantages of both centralized and decentralized methods. All vehicles communicate with the central node and share the same global map. The anchor vehicle plays the role of the central node and can dynamically shift to other vehicles. Hence, the proposed system is more robust and flexible and has the potential to be applied in large-scale degenerate environments.
### _Notations and Outline_
We denote a point cloud set captured by a 3D LiDAR sensor on a vehicle as \\(\\mathbf{\\mathcal{L}}\\), and denote a processed feature cloud and normal cloud as \\({}^{\\mathcal{F}}\\mathcal{L}\\) and \\({}^{\\mathcal{N}}\\mathcal{L}\\), respectively. Range measurements between vehicle \\(j\\) and vehicle \\(k\\) are denoted by \\(u_{i}^{jk}\\in\\mathbf{\\mathcal{U}}\\). The elements of these sets are indexed by a time-stamp subscript, e.g., \\(\\left(\\cdot\\right)_{t}\\) or \\(\\left(\\cdot\\right)_{i}\\).
\\(\\mathcal{X}\\) is the vehicle state including position, orientation, velocity, etc. For simplicity, we also represent the position of a vehicle as \\(\\mathbf{x}\\in\\mathbb{R}^{3}\\). The initial pose transformations between the tag vehicles and the anchor vehicle are denoted by \\(\\mathbf{\\mathcal{T}}=\\left\\{\\mathcal{T}^{1},\\mathcal{T}^{2},\\mathcal{T}^{3},\\cdots\\right\\}\\),
\\[\\mathcal{T}^{v}=\\left[\\begin{array}{cc}\\mathbf{R}^{v}&\\mathbf{t}^{v}\\\\ 0&1\\end{array}\\right]\\in SE\\left(3\\right),v=\\left\\{1,2,3,\\cdots\\right\\} \\tag{1}\\]
where \\(\\mathbf{R}^{v}\\in SO\\left(3\\right)\\) and \\(\\mathbf{t}^{v}\\in\\mathbb{R}^{3}\\) are the rotation matrix and the translation vector, respectively. The corresponding quaternion of the rotation is represented by Hamilton notation.
The rest of this paper is organized as follows. Section II provides the overview. Section III proposes details of the RaLI-Multi mapping system. Experiment results are shown in Section IV, and conclusions are given in Section V.
## II Overview
### _System Definition_
We propose a range-aided LiDAR-inertial multi-vehicle mapping system, in which all vehicles carry the same onboard hardware and software. Each vehicle has an IMU, a LiDAR, a range sensor, a router, and a computing unit. Every vehicle has two roles, the anchor role and the tag role, but they cannot be activated simultaneously. If the anchor role is activated, the vehicle acts as an anchor vehicle, and vice versa. During the exploration, one of the vehicles is automatically selected to be the anchor. The anchor vehicle also plays the role of the central node of the multi-vehicle network. All other vehicles are called tag vehicles.
The RaLI-Multi mapping procedure consists of continuous exploration rounds, as shown in Fig. 1. Each round begins with the tag-vehicle exploration and ends with the anchor-vehicle selection. In the first round, a dynamical initialization (see Section III-D) is required, which estimates the relative transformation between the global frame (the coordinate frame of the initial anchor vehicle) and local frames (coordinate frames of the initial tag vehicles). When all tag vehicles finish their exploration, the role of the anchor and the central node shifts from one vehicle to another. A tag vehicle finishes its exploration in the current round if one of the following three events is triggered: 1) the Received Signal Strength Indicator (RSSI) of the communication is less than a pre-defined threshold; 2) the distance with the anchor vehicle exceeds a pre-defined value; 3) the environment around the tag vehicle has been fully explored.
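These three triggers reduce to a simple per-vehicle check. The sketch below (in Python, for illustration only; the onboard implementation is C++/ROS) shows one way to evaluate them; the threshold names and default values are illustrative assumptions rather than parameters from this paper.

```python
import numpy as np

def exploration_finished(rssi_dbm, tag_pos, anchor_pos, frontiers,
                         rssi_min=-85.0, max_range=30.0):
    """Check the three finish conditions of a tag vehicle in one round.

    rssi_dbm  : measured RSSI of the link to the anchor vehicle
    tag_pos, anchor_pos : 3-D positions in the global frame
    frontiers : frontier cells still unexplored by this vehicle
    rssi_min, max_range : illustrative thresholds (not values from the paper)
    """
    weak_link = rssi_dbm < rssi_min                      # condition 1: RSSI too low
    too_far = np.linalg.norm(np.asarray(tag_pos) -
                             np.asarray(anchor_pos)) > max_range  # condition 2
    fully_explored = len(frontiers) == 0                 # condition 3: no frontiers left
    return weak_link or too_far or fully_explored
```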
### _Problem Formulation_
We aim to reconstruct 3-D maps of large-scale environments with degeneration via multiple vehicles. Our main ideas are to apply range observations between the anchor vehicle and all tag vehicles for degeneration correction, and to utilize communications and range observations to improve the global mapping and the pose estimation of all vehicles. Consequently, this work mainly focuses on the following three problems:
1. How to correct the localization and mapping for degenerate cases?
2. How to globally optimize the mapping and the pose estimation of all vehicles in such a RaLI-Multi mapping system?
3. How to dynamically select the role of the anchor vehicle?
### _System Overview_
The structure of the tag-vehicle exploration is shown in Fig. 2. The anchor vehicle stays stationary while all other vehicles, i.e., the tag vehicles, explore the environment. Each tag vehicle performs a LiDAR-inertial odometry, a degeneration detection module, a degeneration correction module with the range measurements from the anchor vehicle, and a local PGO. With the information received from tag vehicles, the anchor vehicle optimizes the poses of tag vehicles and the global map. When all tag vehicles finish their exploration, one of the vehicles is dynamically selected to be the anchor vehicle in the anchor
Fig. 1: Illustration of two exploration rounds. Blue dotted lines represent the trajectories of tag vehicles and green dashed lines represent range measurements. In the former round, vehicle 2 is selected to be the anchor vehicle and vehicles 1 and 3 are tag vehicles. During exploration, vehicle 3 detects degeneration at the time stamps marked in the figure, which is then corrected by the range measurements between vehicle 2 and vehicle 3. At the final time stamp, both tag vehicles finish their exploration. Meanwhile, the anchor role is transferred to vehicle 3, and the latter round starts. The final time stamp of the former round and the initial time stamp of the latter round are the same.
Fig. 2: System structure of the tag-vehicle exploration. At the end of each round, a new vehicle is selected to be the anchor vehicle via the anchor transfer module on the current anchor vehicle. Tag roles are then triggered for the rest tag vehicles.
transfer module, followed by the next exploration round.
Each tag vehicle first preprocesses the raw data received from its onboard IMU, LiDAR, and the range sensor. The observations of IMU are pre-integrated (see Section III-A1). Features are extracted from the point cloud of LiDAR (see Section III-A2), and the range measurements are pre-smoothed. Then, the LiDAR-inertial front-end takes pre-integrated IMU states as an initial guess to perform scan-to-map registration (see Section III-A3). Meanwhile, the features are used for degeneration detection (see Section III-B), and the range constraints are used for degeneration correction (see Section III-C2). Finally, the corrected LiDAR odometry, IMU pre-integration, and range constraints are jointly optimized via a local PGO in the back end. With the above procedure, the first question mentioned in Section II-B is answered.
During a round, local data from each tag vehicle are sent to the anchor vehicle after local PGO for global PGO. If tag vehicles have stable range signals between each other and the RSSI is stronger than the pre-set threshold, these range measurements are also added to global PGO (see Section III-C1). The anchor vehicle then performs an incremental global optimization and map merging (see Section III-E). After global optimization, the anchor vehicle broadcasts the global map and the optimized states of all tag vehicles to each tag vehicle. In this way, we solve the second problem mentioned in Section II-B.
Like the classical frontier-based exploration method [17], we define frontiers as the boundary between known free space and unknown space. If no frontiers exist, the environment is regarded as fully explored. When all tag vehicles finish their exploration, the current anchor vehicle starts to select the next anchor vehicle (see Section III-F). The frontiers of each tag vehicle are combined if they are close to each other. The vehicle that is closest to the center of the largest frontier is selected as the new anchor. In such a manner, the third problem mentioned in Section II-B is addressed.
To illustrate the workflow of the RaLI-Multi, a two-vehicle example is shown in Fig. 3, which shows how the two vehicles explore a corridor-like environment. The global coordinate frame is fixed to the local coordinate frame of the blue triangle, i.e., the initial anchor vehicle. Before mapping, an initial relative pose prior between the two vehicles, consisting of a range measurement and pre-set parameters, is added to the pose graph, as shown by the yellow rectangle. Next, the tag vehicle, i.e., the orange triangle, begins to explore, as shown in Fig. 3 (b). During this period, range measurements between the two vehicles constrain the poses of the tag vehicle and reduce the influence of degeneration. Meanwhile, the anchor vehicle receives the poses and corresponding LiDAR point clouds of the tag vehicle to perform initialization and incremental global pose graph optimization. After the tag vehicle finishes its exploration, the anchor vehicle transfers the optimized results back to the tag vehicle. Finally, the two vehicles exchange the roles of tag and anchor to start the next round of exploration, as shown in Fig. 3 (c).
## III RaLI-Multi Mapping System
### _LiDAR-inertial Odometry_
#### III-A1 IMU Pre-integration
IMU pre-integration was first introduced by Forster et al. [18] to reduce recomputation when changing linearization points. It can be seamlessly integrated into visual-inertial, LiDAR-inertial, or other inertial-related pipelines under the holistic framework of factor graphs. Here, we use the same procedure as [18] and omit the details of IMU pre-integration.
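For reference, the following minimal sketch shows the delta quantities that are accumulated between two keyframes in the standard formulation of [18]; noise covariance propagation and bias Jacobians are omitted, and the snippet is only an illustration of the idea rather than our onboard implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate_imu(accels, gyros, dts, acc_bias, gyr_bias):
    """Accumulate the pre-integrated relative rotation, velocity, and position.

    accels, gyros : sequences of accelerometer / gyroscope samples (3-vectors)
    dts           : corresponding sampling intervals
    acc_bias, gyr_bias : current bias estimates (3-vectors)
    """
    dR = np.eye(3)        # relative rotation
    dv = np.zeros(3)      # relative velocity
    dp = np.zeros(3)      # relative position
    for a, w, dt in zip(accels, gyros, dts):
        a = np.asarray(a) - acc_bias
        w = np.asarray(w) - gyr_bias
        dp += dv * dt + 0.5 * dR @ a * dt ** 2
        dv += dR @ a * dt
        dR = dR @ R.from_rotvec(w * dt).as_matrix()  # SO(3) exponential map
    return dR, dv, dp
```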
#### III-A2 Feature Extraction
As pointed out by Ye et al. [19], edge points can hardly improve the results of the LiDAR-inertial odometry in practice. Additionally, extracting edge points is time-consuming, and we find that edge points bring larger errors than planar points due to the lower horizontal resolution of LiDAR sensors. As a result, we only extract planar points.
We first downsample the raw point cloud and take the four nearest points of each candidate point as its neighbor points, which are found with a \\(k\\)-d tree, as shown in Fig. 4. The distance between each neighbor point and the candidate point should be less than twice the downsample resolution. Furthermore, the neighbor points should be distributed over three different rings. The candidate point and two of the neighbor points are on the same ring, as shown by the blue points in Fig. 4. The remaining two neighbor points are on the nearest rings, shown as the orange and green points in Fig. 4, respectively. Two unit normal vectors, \\(\\mathbf{n}_{G}\\) and \\(\\mathbf{n}_{O}\\), shown as the green and orange arrows, are the cross products of two vectors each, i.e., the dashed lines with corresponding colors in Fig. 4. Finally, the angle between the two normal vectors is calculated via their dot product: \\(\\theta=\\cos^{-1}\\left\\langle\\mathbf{n}_{G},\\mathbf{n}_{O}\\right\\rangle\\). The point is selected as a planar point if \\(\\theta\\) is less than a pre-set threshold; otherwise, the point is rejected. The normal vector of the planar point is defined as the unit vector of the sum of the two normal vectors, i.e., \\(\\mathbf{n}_{i}=\\frac{\\mathbf{n}_{G}+\\mathbf{n}_{O}}{|\\mathbf{n}_{G}+\\mathbf{n}_{O}|}\\).
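A minimal sketch of this planar-point test is given below. The exact pairing of the neighbor points used to form the two cross products, the sign alignment of the two normals before comparison, and the angle threshold are assumptions made for illustration.

```python
import numpy as np

def planar_check(candidate, same_ring, upper_ring, lower_ring, angle_th_deg=15.0):
    """Decide whether a candidate point is a planar feature.

    same_ring              : the two neighbor points on the candidate's ring
    upper_ring, lower_ring : one neighbor point from each adjacent ring
    Returns (is_planar, normal), where normal is the averaged unit normal.
    """
    p = np.asarray(candidate)
    v1 = np.asarray(same_ring[0]) - p
    v2 = np.asarray(same_ring[1]) - p
    vu = np.asarray(upper_ring) - p
    vl = np.asarray(lower_ring) - p

    def unit(v):
        return v / np.linalg.norm(v)

    # two local normals from cross products of in-plane vectors (pairing assumed)
    n_a = unit(np.cross(v1, vu))
    n_b = unit(np.cross(v2, vl))
    if np.dot(n_a, n_b) < 0:           # align orientations before comparing
        n_b = -n_b
    theta = np.degrees(np.arccos(np.clip(np.dot(n_a, n_b), -1.0, 1.0)))
    if theta > angle_th_deg:
        return False, None
    return True, unit(n_a + n_b)       # averaged unit normal of the planar point
```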
#### III-A3 Scan-to-Map Matching with Multi-Metric Weights
We propose a group of multi-metric weights for LiDAR points and apply the relative transformation obtained from IMU pre-integration as the initial guess to update the front end. The source cloud is the planar feature cloud \\({}^{\\mathcal{F}}\\mathcal{L}\\) extracted in the former part, and the target cloud is the submap consisting of the nearest \\(N_{kf}\\) keyframes in the local map of each vehicle. Our scan matching module then estimates the pose of the current point cloud in the submap coordinate system.
Fig. 3: Illustration of a two-vehicle mapping system exploring a corridor-like environment. The orange and blue triangles represent two vehicles. The yellow rectangle is the initial relative pose constraint. Green dashed lines are range measurements. Orange and blue dotted lines are the trajectories of two vehicles.
For each iteration, we first transform a point to the submap frame. The neighbor points in the submap are determined by a nearest neighbor search within a pre-set range threshold centered at the current point. Then, we estimate the normal vector \\(\\mathbf{n}_{j}\\) in the same way as when extracting planar feature points. The optimal pose \\(\\mathcal{X}_{i}\\) is given by minimizing the point-to-plane distance cost function,
\\[r_{\\mathcal{L}}\\left(\\mathcal{X}_{i},\\mathbf{\\mathcal{L}}_{i}\\right)=\\operatorname* {argmin}_{\\mathcal{X}_{i}}\\sum_{j}\\rho\\left(\\omega_{j}\\left\\langle\\mathbf{R}_{i}\\bm {p}_{j}+\\mathbf{t}_{i}-\\mathbf{p}_{j}^{center},\\mathbf{n}_{j}\\right\\rangle\\right) \\tag{2}\\]
where \\(\\rho\\left(\\cdot\\right)\\) is a Huber lost function and \\(\\mathbf{R}_{i}\\) and \\(\\mathbf{t}_{i}\\) are the rotation matrix and the translation vector of \\(\\mathcal{X}_{i}\\), respectively. \\(\\mathbf{p}_{j}\\) is the current point and \\(\\mathbf{p}_{j}^{center}\\) is the mass centroid of the neighbor points. The multi-metric weight \\(\\omega_{j}\\) is
\\[\\omega_{j}=\\eta_{r}\\omega_{j}^{range}+\\eta_{n}\\omega_{j}^{neighbor}+\\eta_{k} \\omega_{j}^{kinematic}, \\tag{3}\\]
where
\\[\\omega_{j}^{range}=\\frac{1}{1+e^{-\\frac{2.5}{l_{Q3}}\\left(r_{j}-l_{Q2}\\right)}}, \\tag{4}\\]
\\[\\omega_{j}^{neighbor}=\\left\\{\\begin{array}{cc}\\frac{n_{j}^{neighbor}}{N_{ neighbor}},&n_{j}^{neighbor}<N_{neighbor}\\\\ 1,&n_{j}^{neighbor}\\geq N_{neighbor}\\end{array}\\right., \\tag{5}\\]
\\[\\omega_{j}^{kinematic}=\\left\\{\\begin{array}{cc}\\cos^{-1}\\left\\langle p_{j}, n_{j}\\right\\rangle\\cdot r_{j},&\\delta\\theta_{j}>\\theta_{th}\\\\ 0,&else\\end{array}\\right., \\tag{6}\\]
\\(\\eta_{r}\\), \\(\\eta_{n}\\) and \\(\\eta_{k}\\) are normalized weights (set to 0.5, 0.2 and 0.3 in all experiments, respectively). \\(\\omega_{j}^{range}\\) enhances the influence of far points. In (4), \\(r_{j}\\) represents the range of the current point \\(\\mathbf{p}_{j}\\), \\(l_{Q2}\\) and \\(l_{Q3}\\) are the second and the third quartile of the ranges of all current feature points, and \\(e\\) is the base of the natural logarithm. \\(\\omega_{j}^{neighbor}\\) guarantees that the current point lies in a sphere with a high point density. In (5), \\(n_{j}^{neighbor}\\) is the number of the neighbor points found by the nearest neighbor search, whose search radius is usually set to twice the point cloud downsample resolution, and \\(N_{neighbor}\\) is a pre-set threshold related to the sample resolution. \\(\\omega_{j}^{kinematic}\\) is designed for large-rotation conditions. In (6), \\(\\delta\\theta_{j}\\) represents the rotation angle of the IMU pre-integration result and is obtained from the quaternion \\(\\delta\\mathbf{q}_{j}=(w,x,y,z)\\) as \\(\\delta\\theta_{j}=\\tan^{-1}\\left(\\sqrt{x^{2}+y^{2}+z^{2}},w\\right)\\). \\(\\theta_{th}\\) is a pre-defined rotation angle threshold.
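The following sketch evaluates the multi-metric weight of a single feature point according to Eqs. (3)-(6). The normalization of \\(\\mathbf{p}_{j}\\) inside the arccosine of Eq. (6) and the default threshold values are assumptions for illustration.

```python
import numpy as np

def point_weight(p, n, r_all, n_neighbor, delta_theta,
                 N_neighbor=5, theta_th=0.1,
                 eta_r=0.5, eta_n=0.2, eta_k=0.3):
    """Multi-metric weight of one feature point, following Eqs. (3)-(6).

    p, n        : point coordinates and its unit normal (LiDAR frame)
    r_all       : ranges of all feature points in the current scan
    n_neighbor  : number of neighbors found around the point in the submap
    delta_theta : rotation angle (rad) of the IMU pre-integration result
    """
    r = np.linalg.norm(p)
    q2, q3 = np.percentile(r_all, [50, 75])                   # second / third quartiles
    w_range = 1.0 / (1.0 + np.exp(-2.5 / q3 * (r - q2)))      # Eq. (4)
    w_neigh = min(n_neighbor / N_neighbor, 1.0)               # Eq. (5)
    if delta_theta > theta_th:                                # Eq. (6)
        cosang = np.clip(np.dot(p / r, n), -1.0, 1.0)
        w_kin = np.arccos(cosang) * r
    else:
        w_kin = 0.0
    return eta_r * w_range + eta_n * w_neigh + eta_k * w_kin  # Eq. (3)
```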
#### III-A4 Keyframe Selection
We find in experiments that common keyframe selection methods [20, 21], including both distance-based and rotation-based methods, are unstable in indoor or narrow environments, especially at the corner of a corridor. Distance-based keyframe selection methods can hardly obtain a keyframe at the corner of a corridor, leading to less robustness when passing through the corner. Rotation-based methods can easily induce distortion of point clouds due to vehicle vibration.
To efficiently select keyframes in indoor or narrow environments, we consider the overlap of two point clouds through an Octree [22], which is faster than a \\(k\\)-d tree for voxel searching. After transforming the current scan to the frame of the last keyframe, if the distance between a point in the current scan and its closest point in the last keyframe is less than twice the downsample resolution, the point is labeled as overlapping. If the ratio of overlapping points in the current scan is less than a pre-set threshold, we select the current scan as a keyframe.
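A compact sketch of this overlap test is shown below; a \\(k\\)-d tree is used in place of the Octree for brevity, and the overlap-ratio threshold is an illustrative value.

```python
import numpy as np
from scipy.spatial import cKDTree

def is_new_keyframe(scan_in_kf, last_keyframe, voxel_res=0.2, overlap_th=0.6):
    """Overlap-based keyframe test.

    scan_in_kf    : current scan transformed into the last keyframe frame, (N, 3)
    last_keyframe : point cloud of the last keyframe, (M, 3)
    voxel_res     : downsample resolution; points closer than 2*res count as overlap
    overlap_th    : illustrative overlap-ratio threshold
    """
    tree = cKDTree(np.asarray(last_keyframe))
    d, _ = tree.query(np.asarray(scan_in_kf), k=1)
    overlap_ratio = np.mean(d < 2.0 * voxel_res)
    return overlap_ratio < overlap_th    # low overlap -> select as a new keyframe
```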
### _Geometry-based Degeneration Detection_
We first take two examples to present the degeneration detection method: a non-degenerate environment and a degenerate environment, as shown in Fig. 5 (a) and (c). The colors of the points represent different clusters of normal vectors and are generated randomly. In Fig. 5 (a), red and green points represent mutually perpendicular walls, brown points are the ground plane, and other colors, such as pink and purple, can be treated as noise points. However, in Fig. 5 (c), green points represent the wall that occupies most of the view and red points are the ground plane. Purple points are the other wall, at an angle of approximately 45 degrees to the green-point wall. In order to better visualize the degeneracy of the environment, we project the normal vectors from the three-dimensional spherical coordinate system to a two-dimensional plane coordinate system by applying the
Fig. 4: Illustration of the feature extraction. Points on the same ring are represented by the same color. For simplicity, only three different rings are shown. The point with a red border represents the candidate point. (a) The candidate point is selected as a planar point because its neighbor region lies in a plane. (b) The candidate point is rejected.
Fig. 5: Point clouds with random colors in (a) and (c), represent different clusters of normal vectors. (b) and (d) are corresponding normal vectors projected onto a two-dimensional plane. Stick marks in red and green colors on the coordinate axes represent the distribution of raw data on different axes, respectively. The brighter the color, the more normal vectors there are in this area.
Mercator-like projection method. The results are shown in Fig. 5 (b) and (d), respectively. From the density maps, it is simple to identify the walls perpendicular to the floor from the yellow and green areas in both scenarios. However, the ground points in both figures and the purple-point wall in Fig. 5 (c), colored in light blue, are less obvious due to their smaller numbers; they are located around (20, -140) in Fig. 5 (b) and around (80, 0) and (-30, -130) in Fig. 5 (d).
According to the above examples, we find that normal vectors in a degenerate environment are highly structured: they can be classified into a finite number of clusters, and the number of vectors in different clusters varies widely. We therefore formulate the degeneracy by analyzing the distribution of normal vectors through Principal Component Analysis (PCA). We treat the set of normal vectors as the normal cloud \\({}^{\\mathcal{N}}\\mathcal{L}\\), and the covariance matrix \\(\\mathbf{\\Sigma}_{n}\\) of \\({}^{\\mathcal{N}}\\mathcal{L}\\) is calculated as follows,
\\[\\mathbf{\\Sigma}_{n}=\\frac{1}{N_{{}^{\\mathcal{N}}\\mathcal{L}}}\\sum_{i=1}^{N_{{}^{ \\mathcal{N}}\\mathcal{L}}}\\left(\\mathbf{n}_{i}-\\bar{\\mathbf{n}}\\right)\\left(\\mathbf{n}_{i}- \\bar{\\mathbf{n}}\\right)^{\\top} \\tag{7}\\]
where \\(\\bar{\\mathbf{n}}\\) is the mass center of \\({}^{\\mathcal{N}}\\mathcal{L}\\) and \\(N_{{}^{\\mathcal{N}}\\mathcal{L}}\\) is the number of points in \\({}^{\\mathcal{N}}\\mathcal{L}\\). Then, eigenvalues \\(\\lambda_{1}\\geq\\lambda_{2}\\geq\\lambda_{3}\\geq 0\\) are determined by eigenvalue decomposition of \\(\\mathbf{\\Sigma}_{n}\\).
Degeneration can occur in all directions, separately or simultaneously. To simplify the problem, we make the following two assumptions: 1) since our vehicles move on the ground, we assume that the LiDAR sensors always observe the ground plane and the odometry does not degenerate in the vertical direction; 2) unlike open terrain such as grassland, deserts, or lakes, where there are insufficient constraints in all horizontal directions for the LiDAR odometry, we assume that only one direction is mainly degenerated in the horizontal plane. Typical examples include corridors, tunnels, underground passages, and so on. Thus, we only consider the two smallest eigenvalues, i.e., \\(\\lambda_{2}\\) and \\(\\lambda_{3}\\). The distribution of the normal cloud \\({}^{\\mathcal{N}}\\mathcal{L}\\) is characterized by the _degenerate degree_ \\(\\sigma_{deg}\\), inspired by [23]: \\(\\sigma_{deg}=\\frac{\\lambda_{2}}{\\lambda_{3}}\\), and the _degenerate direction_ is the eigenvector of the smallest eigenvalue, i.e., \\(\\mathbf{e}_{3}\\). If the _degenerate degree_ \\(\\sigma_{deg}\\) is less than a threshold, the environment is considered degenerate.
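The detection step therefore amounts to an eigen-decomposition of the normal-cloud covariance, as sketched below; the threshold of 3.0 is the value used in our experiments (Section IV-B).

```python
import numpy as np

def detect_degeneration(normals, sigma_th=3.0):
    """Degeneration detection from the normal cloud, following Eq. (7).

    normals  : (N, 3) array of unit normal vectors of the planar features
    sigma_th : degenerate-degree threshold
    Returns (is_degenerate, sigma_deg, degenerate_direction).
    """
    n = np.asarray(normals)
    n_bar = n.mean(axis=0)
    cov = (n - n_bar).T @ (n - n_bar) / n.shape[0]   # Eq. (7)
    eigval, eigvec = np.linalg.eigh(cov)             # eigenvalues in ascending order
    lam3, lam2 = eigval[0], eigval[1]                # two smallest eigenvalues
    sigma_deg = lam2 / max(lam3, 1e-12)              # degenerate degree
    e3 = eigvec[:, 0]                                # degenerate direction
    return sigma_deg < sigma_th, sigma_deg, e3
```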
### _Range Constraints for Degenerate Correction_
#### III-C1 Range Residuals
Diverse sensors can be used for range measurement, such as UWB, Zigbee, WiFi, and light sensors. All measurements are noisy. Considering the measurement noise, we smooth the raw range observations online over a past time horizon with a least-squares smoother. Then, the residual of the range measurement between vehicle \\(j\\) and vehicle \\(k\\) at timestamp \\(i\\) can be formulated as,
\\[r_{u}\\left(\\mathcal{X}_{i}^{v_{j}},\\mathcal{X}_{i}^{v_{k}},u_{i}^{jk}\\right) =\\left|\\mathbf{x}_{i}^{v_{j}}-\\mathbf{x}_{i}^{v_{k}}\\right|-u_{i}^{jk}+\\eta_{u_{i}^{jk}} \\tag{8}\\]
where \\(\\mathcal{X}_{i}^{v_{j}}\\) and \\(\\mathcal{X}_{i}^{v_{k}}\\) represent states of two vehicles at the timestamp \\(i\\) obtained from the LiDAR-inertial odometry and \\(u_{i}^{jk}\\) represents the corresponding smoothed range measurement. \\(\\eta_{u_{i}^{jk}}\\sim\\mathcal{N}\\left(0,\\sigma_{u_{i}^{jk}}^{2}\\right)\\) represents the noise following a zero-mean Gaussian noise.
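A simple sketch of the pre-smoothing and the residual evaluation is given below; a first-order polynomial fit over the sliding window is an assumption, since the order of the least-squares smoother is not specified here.

```python
import numpy as np

def smooth_range(t_window, u_window, t_query):
    """Least-squares smoothing of raw range readings over a past time horizon.

    A linear polynomial fit is used as a simple stand-in for the smoother.
    """
    coeffs = np.polyfit(t_window, u_window, deg=1)
    return np.polyval(coeffs, t_query)

def range_residual(x_j, x_k, u_jk):
    """Range residual of Eq. (8), evaluated at the noise-free point."""
    return np.linalg.norm(np.asarray(x_j) - np.asarray(x_k)) - u_jk
```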
#### III-C2 Degenerate Component Correction
According to the distribution of features proposed in Section III-B, the environmental degeneration can be monitored in real time. If degeneration is detected and the gap between the LiDAR-inertial odometry and the range measurement exceeds a threshold, we apply the range observation to reduce the position drift along the _degenerate direction_ calculated in Section III-B. As shown in Fig. 6, the corrected state \\(\\mathbf{x}_{k}^{correct}\\) should be located on the circle centered on the anchor vehicle with radius \\(u_{k}\\), which represents the range measurement between the anchor vehicle and the tag vehicle at timestamp \\(k\\). We omit the superscript of \\(u_{k}\\) for simplicity. We view the state estimate \\(\\mathbf{x}_{k}^{deg}\\) as a vector with \\(\\mathbf{x}^{anchor}\\) as the origin. Since the gap is mainly due to the degeneration, the error vector of the estimated state \\(\\mathbf{x}_{k}^{deg}\\) is considered to lie along the _degenerate direction_, which is represented by the unit eigenvector \\(\\mathbf{e}_{3}\\). We denote the magnitude of the error by \\(s\\). Then, constraining the problem to the XY plane, we can obtain the error \\(s\\mathbf{e}_{3}\\), which we call the compensation vector, from the equation
\\[\\left|\\mathbf{x}_{k}^{deg}-\\mathbf{x}_{k}^{anchor}+s\\mathbf{e}_{3}\\right|=u_{k}+\\eta_{u_{ k}}. \\tag{9}\\]
With \\(s\\mathbf{e}_{3}\\), we can correct the influence of the degeneration and obtain
\\[\\mathbf{x}_{k}^{correct}=\\mathbf{x}_{k}^{deg}+s\\mathbf{e}_{3}. \\tag{10}\\]
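Solving (9) for \\(s\\) reduces to a scalar quadratic once the problem is restricted to the XY plane. The sketch below picks the root with the smaller magnitude so that the correction stays minimal; this root-selection rule is an assumption for illustration.

```python
import numpy as np

def correct_degenerate_pose(x_deg, x_anchor, e3, u_k):
    """Degeneration correction of Eqs. (9)-(10), restricted to the XY plane.

    x_deg    : degenerate position estimate of the tag vehicle, (3,)
    x_anchor : position of the static anchor vehicle, (3,)
    e3       : unit degenerate direction from the PCA module, (3,)
    u_k      : smoothed range measurement at this time stamp
    """
    d = (np.asarray(x_deg, dtype=float) - np.asarray(x_anchor, dtype=float))[:2]
    e = np.asarray(e3, dtype=float)[:2]
    e = e / np.linalg.norm(e)
    b = np.dot(d, e)
    disc = b ** 2 - (np.dot(d, d) - u_k ** 2)   # discriminant of s^2 + 2bs + (|d|^2 - u^2)
    if disc < 0:                                # range circle not reachable along e3
        return np.asarray(x_deg, dtype=float)
    roots = np.array([-b + np.sqrt(disc), -b - np.sqrt(disc)])
    s = roots[np.argmin(np.abs(roots))]         # smallest-magnitude compensation
    corrected = np.asarray(x_deg, dtype=float)
    corrected[:2] += s * e                      # Eq. (10)
    return corrected
```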
### _Dynamical Initialization_
Before globally optimizing the pose graph, an anchor vehicle needs to unify the coordinate systems of all vehicles. To reduce the computational burden in the following exploration, we estimate the transformations between the global frame and each local frame in the first round of exploration and fix these transformations in the following rounds.
At the beginning of the first exploration round, an initial anchor vehicle is randomly selected from all vehicles. The global frame is defined as the local frame of this initial anchor vehicle. As the RaLI-Multi is dynamically centralized,
Fig. 6: Degeneration correction through the range information. The orange triangle is the static anchor vehicle, and the other two triangles represent the same tag vehicle, where the light blue triangle is the degenerate state and the deep blue triangle is the corrected state. The green dotted line is the range circle and the green dashed lines are the corresponding radii. The red curly bracket shows the difference between the LiDAR-inertial odometry and the range measurement. The yellow dashed line and the purple arrow represent the degenerate direction and the compensation vector at timestamp \\(k\\), respectively.
the anchor vehicle may differ between exploration missions, and therefore so does the global frame.
During initialization, each tag vehicle performs the odometry as described in Section III-A. Meanwhile, the anchor vehicle receives the odometry and local maps published by each tag vehicle, range measurements between two vehicles, and pre-set initial pose priors. When the size of local maps exceeds a pre-set threshold, the anchor vehicle starts to perform the initialization as follows,
\\[\\operatorname*{argmin}_{{}^{L}\\mathbf{\\mathcal{X}},\\,\\mathbf{\\mathcal{T}}}\\left\\{\\sum_{v\\in\\mathbf{\\mathcal{V}}}\\left\\|r_{\\mathcal{L}}^{v}\\left({}^{L}\\mathbf{\\mathcal{X}}^{v},\\mathbf{\\mathcal{L}}^{v}\\right)\\right\\|_{\\mathbf{P}_{\\mathcal{L}}^{-1}}^{2}+\\sum_{v\\in\\mathbf{\\mathcal{V}}}r_{scan2map}^{v}\\left(\\mathbf{\\mathcal{T}}\\right)+\\sum_{v_{j},v_{k}\\in\\mathbf{\\mathcal{V}}_{a}}\\sum_{i}\\left\\|r_{u}\\left({}^{L}\\mathcal{X}_{i}^{v_{j}},{}^{L}\\mathcal{X}_{i}^{v_{k}},\\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}\\right)\\right\\|_{\\sigma_{u_{i}^{jk}}^{-2}}^{2}\\right\\} \\tag{11}\\]
where \\(\\mathbf{\\mathcal{V}}\\) represents the set of tag vehicles and \\(\\mathbf{\\mathcal{V}}_{a}\\) represents the set of vehicles in the RaLI-Multi, including an anchor vehicle and all tag vehicles. \\(r_{\\mathcal{L}}^{v}\\left({{}^{L}\\mathbf{\\mathcal{X}}^{v},\\mathbf{\\mathcal{L}}^{v}}\\right)\\) is the LiDAR-inertial odometry residual and \\({{}^{L}\\mathbf{\\mathcal{X}}^{v}}\\) is a set of local poses of vehicle \\(v\\). \\(r_{scan2map}^{v}\\left(\\mathbf{\\mathcal{T}}\\right)\\) is the scan-to-map registration residual of all tag vehicles where the scan represents the point cloud captured by an anchor vehicle and the map is a local map from the corresponding tag vehicle. \\({{}^{L}\\mathcal{X}_{i}^{v_{j}},\\,{}^{L}\\mathcal{X}_{i}^{v_{k}}}\\) are local poses of two vehicles with range constraints \\(u_{i}^{jk}\\). \\(r_{u}\\left({{}^{L}\\mathcal{X}_{i}^{v_{j}},\\,{}^{L}\\mathcal{X}_{i}^{v_{k}}, \\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}}\\right)\\) is the range constraint between two vehicles and is defined as,
\\[\\begin{split}& r_{u}\\left({{}^{L}\\mathcal{X}_{i}^{v_{j}},\\,{}^{L} \\mathcal{X}_{i}^{v_{k}},\\mathcal{T}^{v_{j}},\\mathcal{T}^{v_{k}},u_{i}^{jk}} \\right)=\\\\ &\\quad\\left|{\\mathbf{R}^{v_{j}}{{}^{L}\\mathbf{x}_{i}^{v_{j}}}+\\mathbf{t}^{v_ {j}}-\\mathbf{R}^{v_{k}}{{}^{L}\\mathbf{x}_{i}^{v_{k}}}-\\mathbf{t}^{v_{k}}}\\right|-u_{i}^{jk} +\\eta_{u_{i}^{jk}}.\\end{split} \\tag{12}\\]
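A reduced sketch of this initialization is given below. It estimates only a planar transform (yaw and XY translation) for each tag vehicle from the range constraints of Eq. (12), and omits the LiDAR-odometry and scan-to-map terms of Eq. (11); all function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def init_transforms(local_traj, ranges, T0):
    """Estimate (yaw, tx, ty) from each tag's local frame to the global frame.

    local_traj : dict {vehicle_id: (N, 3) local positions}; vehicle 0 is the
                 anchor whose local frame defines the global frame
    ranges     : list of (i, j, k, u): timestamp i, vehicles j and k, range u
    T0         : dict {vehicle_id: [yaw, tx, ty]} initial guesses (priors)
    """
    tags = sorted(v for v in local_traj if v != 0)

    def unpack(params):
        T = {0: np.zeros(3)}                        # anchor frame == global frame
        for idx, v in enumerate(tags):
            T[v] = params[3 * idx: 3 * idx + 3]
        return T

    def to_global(T_v, p):
        yaw, tx, ty = T_v
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        return Rz @ np.asarray(p) + np.array([tx, ty, 0.0])

    def residuals(params):
        T = unpack(params)
        res = []
        for i, j, k, u in ranges:                   # range residuals, Eq. (12)
            pj = to_global(T[j], local_traj[j][i])
            pk = to_global(T[k], local_traj[k][i])
            res.append(np.linalg.norm(pj - pk) - u)
        return res

    x0 = np.concatenate([np.asarray(T0[v], dtype=float) for v in tags])
    sol = least_squares(residuals, x0)
    return unpack(sol.x)
```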
### _Incremental Global PGO and Map Merging_
During the exploration of the tag vehicles, the anchor vehicle serves as a temporary base station, performing incremental global PGO and map merging. Messages transferred from the tag vehicles to the anchor vehicle include the vehicle poses optimized by local PGO, the corresponding LiDAR point clouds, and the range measurements between two vehicles, tag-to-tag or tag-to-anchor. The optimization process is similar to the scan-to-map matching described in Section III-A together with the range constraints in Section III-C1. To reduce the computational burden of the anchor vehicle, we reduce the number of iterations of the global optimization if no degeneration occurs, and only optimize the poses in the current exploration round when there are no loop closures at the system level. At the end of each exploration round, the anchor vehicle publishes the global map and the optimized poses to the corresponding tag vehicles. Hence, all vehicles share the same global map.
### _Dynamically Anchor Role Selection_
After all tag vehicles finish their exploration, the next anchor vehicle is selected. The finish conditions are the three cases described in Section II-A. In the third case, if all frontiers in the exploration area of a tag vehicle have been examined, the exploration of this tag vehicle in the current round is finished. The selection of the next anchor vehicle is then determined by the frontiers. The current anchor vehicle combines the frontiers received from each tag vehicle and finds the largest frontier area. Finally, the vehicle closest to the center of the largest frontier is selected as the new anchor.
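The selection rule can be sketched as follows; the greedy proximity-based merging of frontiers and the merge distance are assumptions, since only the criterion of being close to each other is specified here.

```python
import numpy as np

def select_next_anchor(frontiers_per_vehicle, vehicle_positions, merge_dist=2.0):
    """Select the next anchor vehicle from the merged frontiers.

    frontiers_per_vehicle : dict {vehicle_id: (N, 3) frontier cell centers}
    vehicle_positions     : dict {vehicle_id: (3,) current position}
    merge_dist            : illustrative distance for merging nearby frontiers
    """
    # merge all frontier cells and cluster them greedily by proximity
    cells = np.vstack([np.asarray(f) for f in frontiers_per_vehicle.values() if len(f)])
    clusters = []
    for c in cells:
        for cl in clusters:
            if np.linalg.norm(c - np.mean(cl, axis=0)) < merge_dist:
                cl.append(c)
                break
        else:
            clusters.append([c])
    # the largest merged frontier and its center
    largest = max(clusters, key=len)
    center = np.mean(largest, axis=0)
    # the vehicle closest to that center becomes the new anchor
    return min(vehicle_positions,
               key=lambda v: np.linalg.norm(np.asarray(vehicle_positions[v]) - center))
```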
## IV Experiments
### _Implementation_
We perform three experiments to evaluate the proposed methods: a LiDAR-inertial odometry analysis (exp1), the RaLI-Multi with two vehicles in a long corridor-like environment (exp2), and the RaLI-Multi with three vehicles in a complex environment (exp3). The first experiment is mainly designed to evaluate the accuracy of the LiDAR-inertial odometry. The UWB anchors applied in exp1 are shown in Fig. 7 (a) and provide a reference trajectory. Fig. 7 (b-c) show the scenarios of exp2 and exp3. Fig. 7 (d) shows the unmanned ground vehicles with LiDAR (RoboSense RS-LiDAR-16), UWB (Nooploop LinkTrack P-B), and IMU (Xsens). In exp2 and exp3, the UWB node on each vehicle is applied for inter-vehicle distance measurement, and we use the UWB model proposed by Nguyen et al. [24]. Our experimental vehicles are equipped with differential steering and spring-damped suspension, and there is high friction between the rubber tires and the tiled floors. As a result, the vehicles are prone to sharp changes in height when steering, which probably results in large errors on the Z-axis.
We implemented the proposed RaLI-Multi in C++ and the Robot Operating System (ROS). We use the GTSAM [25] framework for the local and global PGO, and the Levenberg-Marquardt algorithm is used to solve the pose graph optimization. Trajectory errors in exp1 are calculated by EVO [26] and
Fig. 7: (a) The first experimental scenario with UWB anchors. (b) and (c) are experimental scenarios in exp2 and exp3, respectively. (d) Hardware setup of the RaLI-Multi.
point cloud map errors are estimated by point-to-mesh distance in CloudCompare1 after a coarse-to-fine alignment.
Footnote 1: [https://github.com/CloudCompare/CloudCompare](https://github.com/CloudCompare/CloudCompare)
### _Degeneration Analysis_
We first analyze the degenerate level in exp1 and exp2, as shown in Fig. 8 (a) and Fig. 8 (b), respectively. The lower the degenerate degree, the higher the degenerate level. Red crosses in Fig. 8 (a) mark the locations of the UWB anchors, which ensure that each vehicle receives at least four range measurements anywhere along the route. In order to clearly illustrate the degenerate level, we present both the spatial and temporal dimensions. In the spatial dimension, degenerate values are higher in a corner than in a straight corridor, and the longer the corridor, the lower the degenerate value. As shown in the x-y coordinate system, degenerate values in corners are colored in red or green while straight corridors are mostly in blue. In the temporal dimension, we find that the degenerate values of exp1 are higher than those of exp2, which agrees with the fact that exp1 is less degenerate than exp2. In our experiments, we regard a place as degenerate when its degenerate value is smaller than 3.0.
### _LiDAR-inertial Odometry Evaluation_
Firstly, we discuss the performance of the LiDAR-inertial odometry in exp1. Due to the lack of ground truth, we place UWB anchors around the environment to measure the distances between a tag and the anchors via Time of Flight (TOF), and then calculate the tag coordinates from these distances. Before the experiment, we pre-deploy the UWB beacons as shown in Fig. 7 (a). These UWB beacons are placed at different heights to monitor the height variation of the vehicles. Theoretically, three UWB beacons are enough to estimate the position of a target. However, considering the robustness and precision of the proposed distributed localization system, we arrange the beacons redundantly and make sure that each vehicle can receive more than three UWB beacon signals anywhere in exp1.
Fig. 8: Degeneration degree in exp1 and 2.
Fig. 9: Trajectory and orientation results of different methods in exp1.
We compare the proposed LiDAR-inertial odometry with DLO [20], A-LOAM2, LeGO-LOAM [27], FAST-LIO2 [28] and LIO-SAM [21], as shown in Fig. 9 (a) and (b). LeGO-LOAM and LIO-SAM failed in exp1: the former degenerates at the beginning of the experiment and the latter degenerates when entering the corridor located near (10, 8.5) in Fig. 9 (a). As a result, we exclude both of them from Fig. 9 (b). Among the remaining four methods, FAST-LIO2 and A-LOAM also drift to various degrees. Similar to LIO-SAM, A-LOAM starts to drift near (10, 8.5) in Fig. 9 (a), especially on the y-axis. In contrast, FAST-LIO2 drifts after the last corner, in the vicinity of (50, 14) in Fig. 9 (a). The odometry degeneration mostly occurs on the x-axis, and the drifted point cloud map can be seen in Fig. 10 (a). Moreover, both A-LOAM and FAST-LIO2 have an obvious deviation in the z-axis and pitch. Comparing DLO and the proposed method, both of them resist the degeneration and show little difference from the reference trajectory, labeled ground truth (GT) in Fig. 9 (a). However, DLO drifts more on the z-axis than the proposed method. Trajectory errors compared with the reference are listed in Table I in terms of the EVO APE (Absolute Pose Error), \\(APE_{i}=\\|E_{i}\\|\\), where \\(E_{i}=x_{est,i}-x_{ref,i}\\). MEAN and RMSE in Table I are calculated from the APEs of all timestamps as follows,
Footnote 2: [https://github.com/HKUST-Aerial-Robotics/A-LOAM](https://github.com/HKUST-Aerial-Robotics/A-LOAM)
\\[\\mathrm{MEAN}=\\frac{1}{N}\\sum_{i=1}^{N}APE_{i} \\tag{13}\\]
\\[\\mathrm{RMSE}=\\sqrt{\\frac{1}{N}\\sum_{i=1}^{N}APE_{i}^{2}} \\tag{14}\\]
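For clarity, the two statistics can be computed directly from the per-timestamp position errors, e.g.:

```python
import numpy as np

def ape_stats(est, ref):
    """MEAN and RMSE of the translational absolute pose error (Eqs. (13)-(14)).

    est, ref : (N, 3) arrays of estimated and reference positions per timestamp
    """
    err = np.linalg.norm(np.asarray(est) - np.asarray(ref), axis=1)  # APE_i
    return err.mean(), np.sqrt(np.mean(err ** 2))                    # MEAN, RMSE
```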
We evaluate the mapping results of DLO, A-LOAM, FAST-LIO2, and the proposed method. The ground truth of the point cloud map is established through CATIA 3D modeling, and the geometric dimensions are measured by laser measuring instruments. A point cloud map of the proposed method, shown in Fig. 10 (b), is labeled in different colors, where magenta points are outlier points excluded from the Cloud-to-Mesh (C2M) distance estimation and the remaining colors correspond to C2M distances, using the same color scale as in Fig. 11 (c). Considering the large drifts in the mapping results of FAST-LIO2, we do not evaluate it further. Results of the remaining three methods are shown in Fig. 11 and Table II. On one hand, because A-LOAM records every LiDAR scan into the map, there are more pedestrian points in its final map than in those of DLO and the proposed method. These outlier points are difficult to remove. Thus, the C2M distance statistics of A-LOAM are higher than those of the other two methods even though we downsample its point cloud. On the other hand, A-LOAM suffers from degeneration as analyzed in the former paragraph. Both reasons lead to the C2M distances of A-LOAM being much higher than those of DLO and the proposed method. Comparing Fig. 11 (a) and (c), the distribution of the C2M distances of the proposed method is closer to zero than that of DLO. Moreover, for the proposed method the bar closest to zero contains over half of all points, while for DLO it contains less than one quarter.
### _Evaluation in a Long Corridor like Environment_
Next, we evaluate different methods in a long corridor-like environment. Because the narrow environment limits the number of vehicles, we evaluate the RaLI-Multi system with two vehicles, which play the anchor role in turn during the exploration. Since the environment in exp2 is longer and narrower than that in exp1, which makes it difficult to arrange our distributed localization system, we only evaluate the accuracy of the point cloud map.
The trajectories are shown in Fig. 12. LIO-SAM degenerated and failed at the end of the first long corridor, around (65, 2) in Fig. 12 (a). Although A-LOAM and LeGO-LOAM resisted degeneration to some extent, they drifted on different axes, A-LOAM mostly on the x-axis and LeGO-LOAM on the XY axes, and both of them drift at the same place as LIO-SAM. Moreover, A-LOAM drifts on the z-axis shortly after the beginning of this experiment. The remaining three methods, DLO, FAST-LIO2, and the RaLI-Multi, show similar results.
Then, we evaluate the mapping results. The results of our RaLI-Multi and A-LOAM are shown in Fig. 13 (a) and (b), respectively. In the zoomed area of Fig. 13 (b), gray points are point clouds sampled from the 3D reference model and green points show significant offsets from the reference. Due to the large drifts of A-LOAM, its C2M distances are also widely distributed: as we can see from Fig. 13 (d), A-LOAM shows two local peaks near 0.65 m and 1 m. Although FAST-LIO2 and DLO show similar performance in Table II, the peak of the C2M distance histogram of FAST-LIO2 in Fig. 13 (c) is located away from zero. Among these methods, the first bin of the C2M distance histogram of the RaLI-Multi accounts for the largest percentage, over 20%, and most C2M distances are within 0.2 m. Details of the C2M distances are also shown in Table II.
### _Evaluation in a Complex Environment_

In exp3, the vehicles of the RaLI-Multi and the single vehicle used for the benchmarks, DLO, A-LOAM, LeGO-LOAM, FAST-LIO2, and LIO-SAM, follow different trajectories, as shown in Fig. 14.
Fig. 14 (b)-(d) show three exploration rounds of the RaLI-Multi in this experiment. In the first round, vehicles 1 and 2 are two tag vehicles and vehicle 3 is the initial anchor vehicle. Vehicle 1 explores the initial place while vehicle 2 explores in the right direction. The anchor, vehicle 3, receives information from vehicles 1 and 2 to perform the initialization and global optimization. After two tag vehicles finish their exploration, vehicle 2 is selected as the next anchor because only vehicle 2 has frontiers among tag vehicles. In the second round, both tag vehicles are heading in the bottom right direction. At the corner of the corridor, vehicle 1 stops and is selected as the next anchor. In the third round, vehicle 3 explores the bottom right area while vehicle 2 heads toward the end of the corridor.
The mapping results are shown in Table II and Fig. 15, where LeGO-LOAM, FAST-LIO2, and LIO-SAM failed in different places. We then evaluate the mapping results of the remaining three methods. As shown in Fig. 15 (a)-(c), DLO and A-LOAM have higher C2M distances in the labeled areas than the RaLI-Multi. From the zoomed areas of the point cloud maps, A-LOAM drifts more in the horizontal directions (XY axes) than DLO, demonstrated as
Fig. 11: C2M distance results in exp1.
Fig. 12: Trajectory and orientation results from different methods in exp2.
green points, while DLO shows more errors in the vertical direction (Z axis), represented by green and yellow points. Combining Fig. 15 (d)-(f), most C2M distances of the RaLI-Multi are within 0.2 m, while DLO and A-LOAM still have local peaks around 0.4 m and 0.5 m, respectively. The statistics of the mapping results in Table II also demonstrate that the proposed method achieves better results than the state-of-the-art methods.
## V Conclusion
In this paper, we propose a range-aided LiDAR-inertial multi-vehicle mapping system for large-scale environments with degeneration. The multi-metric-weight LiDAR-inertial front-end assigns a weight to each feature point based on its range, its neighbors, and the kinematics of the vehicle, which improves performance in narrow and degenerate environments. The degeneration detection module monitors degeneration online via the distribution of the normal vectors of the feature points. The degeneration correction module compensates the LiDAR-inertial odometry along the degenerate direction. The dynamically centralized multi-vehicle system can robustly and flexibly operate in various complex and degenerate environments.
Three experiments demonstrate that: 1) the proposed LiDAR-inertial front-end can resist degeneration and achieve
Fig. 14: Trajectory schematic of vehicles in the benchmarks and the RaLI-Multi. (a) The trajectory schematic of the vehicle in the benchmarks. (b)-(d) Trajectory schematics of vehicles in the RaLI-Multi in three exploration rounds and each color represents a vehicle.
Fig. 13: Mapping results in exp2. (a) is the mapping result of the RaLI-Multi. (b) is the mapping result of A-LOAM. The drifted area is detailed in partial enlargement and the gray points are sampled from the 3D model established in CATIA. (c)-(f) are C2M distances from DLO, A-LOAM, FAST-LIO2 and ours.
Fig. 15: Mapping results in exp3. (a)-(c) are mapping results from DLO, A-LOAM and ours. (d)-(f) are corresponding C2M distances.
better mapping results; 2) with the help of degeneration detection and correction, the proposed multi-vehicle system can obtain a low-drift global map in degenerate environments; 3) compared with the state-of-the-art, the RaLI-Multi is more robust in the three experiments.
## References
* [1] S. Choudhary, L. Carlone, C. Nieto, J. Rogers, H. I. Christensen, and F. Dellaert (2017) Distributed mapping with privacy and communication constraints: lightweight algorithms and object-based models. The International Journal of Robotics Research, 36 (12), pp. 1286-1311.
* [2] T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) VIRAL-Fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics.
* [3] T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2312-2319.
* [4] T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters, 7 (2), pp. 928-935.
* [5]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [6]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [7]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [8]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [9]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for autonomous exploration. In 2022 IEEE International Conference on Robotics and automation (ICRA), pp. 146-151. Cited by: SSI.
* [10]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [11]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI.
* [12]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [13]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [14]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [15]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [16]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [17]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [18]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for autonomous exploration. In 2022 IEEE International Conference on Robotics and automation (ICRA), pp. 146-151. Cited by: SSI.
* [19]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI.
* [20]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI.
* [21]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-125. Cited by: SSI.
* [22]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [23]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [24]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [25]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [26]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI.
* [27]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-125. Cited by: SSI.
* [28]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-128. Cited by: SSI.
* [29]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [30]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-113. Cited by: SSI.
* [31]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 104-118. Cited by: SSI.
* [32]T. M. Nguyen, T. Nguyen, and L. Xie (2022) A unified framework for multi-robot cooperative visual-inertial-robot SLAM. IEEE Robotics and Automation Letters66 (5), pp. 104-128. Cited by: SSI.
* [33]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [34]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [35]T. M. Nguyen, T. Nguyen, and L. Xie (2022) Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization. IEEE Robotics and Automation Letters7 (2), pp. 928-935. Cited by: SSI.
* [36]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [37]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [38]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [39]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [40]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [41]T. M. Nguyen, M. Cao, S. Yuan, Y. Lyu, T. H. Nguyen, and L. Xie (2021) Viral-fusion: a visual-inertial-ranging-lidar sensor fusion approach. IEEE Transactions on Robotics. Cited by: SSI.
* [42]T. M. Nguyen, A. H. Zaini, and C. Wang (2022) A unified framework for multi-robot cooperative visual-inertial-inertial-robot SLAM. IEEE Robotics and Automation Letters7 (2), pp. 104-105. Cited by: SSI.
* [43]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [44]T. M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie (2018) Robust target-relative localization with ultra-wideband ranging and communication. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 2312-2319. Cited by: SSI.
* [45]T. Shan and B. Englot (2018) Lego-loom: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758-4765. Cited by: SSI.
* [46]T. Shan and B. Englot (20 | This paper presents a range-aided LiDAR-inertial multi-vehicle mapping system (RaIJ-Multi). Firstly, we design a multi-metric weights LiDAR-inertial odometry by fusing observations from an inertial measurement unit (IMU) and a light detection and ranging sensor (LiDAR). The degenerate level and direction are evaluated by analyzing the distribution of normal vectors of feature point clouds and are used to activate the degeneration correction module in which range measurements correct the pose estimation from the degeneration direction. We then design a multi-vehicle mapping system in which a centralized vehicle receives local maps of each vehicle and range measurements between vehicles to optimize a global pose graph. The global map is broadcast to other vehicles for localization and mapping updates, and the centralized vehicle is dynamically fungible. Finally, we provide three experiments to verify the effectiveness of the proposed RaIJ-Multi. The results show its superiority in degeneration environments.
Multi-vehicle system, simultaneous localization and mapping, range measurement, degeneration detection and correction. | Summarize the following text. | 214 |
arxiv-format/1901_03193v3.md | # Thermal convection, ensemble weather forecasting and distributed chaos
A. Bershadskii
ICAR, P.O. Box 31155, Jerusalem 91000, Israel
## I Distributed chaos
Systems with chaotic dynamics often have frequency power spectra with exponential decay [1]-[7]. For the systems described by dynamical equations with partial derivatives (in particular for the systems based on the Navier-Stokes equations) observations are less conclusive, especially for the wavenumber (spatial) power spectra. Figure 1 shows the kinetic energy spectrum for a perturbation in statistically stationary isotropic homogeneous turbulence at Reynolds number \\(Re\simeq 2500\\)[8] (the spectral data can be found at the site Ref. [9]). In that paper a direct numerical simulation (DNS) of the Navier-Stokes equations
\\[\\frac{\\partial\\mathbf{u}(\\mathbf{x},t)}{\\partial t}+(\\mathbf{u}\\cdot\\nabla)\\mathbf{u}=-\\nabla p+\\nu\\Delta\\mathbf{u}+\\mathbf{f} \\tag{1}\\]
\\[\\nabla\\cdot\\mathbf{u}(\\mathbf{x},t)=0 \\tag{2}\\]
was performed and a velocity field realization \\(\\mathbf{u}_{1}\\) was transformed into a new realization \\(\\mathbf{u}_{2}\\) by a slight instant perturbation of the forcing \\(\\mathbf{f}(\\mathbf{x},t)\\). Power spectrum of the field \\(\\delta\\mathbf{u}=\\mathbf{u}_{1}-\\mathbf{u}_{2}\\) was then computed as
\\[E_{d}(k,t)=\\frac{1}{2}\\int_{|\\mathbf{k}|=k}d\\mathbf{k}|\\hat{\\mathbf{u}}_{1}(\\mathbf{k },t)-\\hat{\\mathbf{u}}_{2}(\\mathbf{k},t)|^{2} \\tag{3}\\]
for a steady state.
The dashed straight line in the Fig. 1 indicates the exponential decay
\\[E(k)=a\\exp-(k/k_{0}) \\tag{4}\\]
The inset in Fig. 1 has been added in order to show that the \\(k_{0}\\) from Eq. (4) corresponds to the peak of the \\(E_{d}(k)\\) spectrum. This indicates a tuning of the high-wavenumber chaotic dynamics to the coherent structures with the scale \\(k_{0}\\).
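For illustration, a minimal sketch of how a shell-averaged perturbation spectrum of the type (3) can be computed from two velocity-field realizations is given below; the grid size, the random toy fields and the normalization are assumptions of the example and are not taken from Refs. [8],[9].

```python
import numpy as np

def shell_spectrum(du, box=2*np.pi):
    """Shell-averaged power spectrum (Eq. (3)) of a periodic vector field du with shape (3, n, n, n)."""
    n = du.shape[-1]
    duk = np.fft.fftn(du, axes=(1, 2, 3)) / n**3
    e3d = 0.5*np.sum(np.abs(duk)**2, axis=0)                  # energy density in k-space
    freqs = np.fft.fftfreq(n, d=box/(2.0*np.pi*n))            # integer wavenumbers for box = 2*pi
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kbins = np.arange(1, n//2)
    Ed = np.array([e3d[(kmag >= kb - 0.5) & (kmag < kb + 0.5)].sum() for kb in kbins])
    return kbins, Ed

rng = np.random.default_rng(0)
u1 = rng.standard_normal((3, 64, 64, 64))                     # toy 'unperturbed' realization
u2 = u1 + 1e-3*rng.standard_normal((3, 64, 64, 64))           # slightly perturbed realization
k, Ed = shell_spectrum(u1 - u2)
print(k[:5], Ed[:5])
```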
Ensemble weather forecasting allows one to take into account the intrinsic uncertainty in numerical forecasts of chaotic systems. In the recent paper Ref. [10], results of an idealized ensemble simulation of mesoscale deep-convective systems were reported. A nonhydrostatic cloud-resolving model was used to generate ensembles of 20 perturbed members and 1 control member. The ensembles were initialized by large-scale (91-km-wavelength) moisture perturbations with random phases. A strong line of thunderstorms developed in all cases (see Ref. [10] for more details of the model configuration and simulation strategy).
Figure 2 shows the background (total) kinetic energy spectrum, vertically averaged over the layer \\(0\leq z\leq 16\\) km, at 6 hours of the system development with 1-km resolution (the simulations were performed in a doubly periodic horizontal square domain of 512 km \\(\times\\) 512 km; \\(k\\) is the horizontal wavenumber). The dashed curve indicates the exponential spectral decay Eq. (4) in the log-log scales (here and in all other figures \\(\log k=\log_{10}k\\)). The faint straight line, indicating the '-5/3' slope in the log-log scales, is drawn in the figure for reference. Figure 3 shows the corresponding vertically and ensemble averaged spectrum of perturbations in kinetic energy about the ensemble mean at 6 hours of the system development. The dashed curve indicates the exponential spectral
Figure 1: Perturbation kinetic energy spectrum for the steady isotropic turbulence. The dashed straight line indicates the exponential decay Eq. (4).
decay Eq. (4) in the log-log scales. The spectral data for the Figs. 2 and 3 were taken from the Fig. 7 of the Ref. [10].
In the general case of a statistical ensemble defined by parameters \\(a\\) and \\(k_{0}\\) the ensemble averaged spectrum can be represented by
\\[E(k)=\\int P(a,k_{0})\\ \\exp-(k/k_{0})\\ dadk_{0} \\tag{5}\\]
with a joint probability distribution \\(P(a,k_{0})\\). If the variables \\(a\\) and \\(k_{0}\\) are statistically independent, then
\\[E(k)\\propto\\int P(k_{0})\\ \\exp-(k/k_{0})\\ dk_{0} \\tag{6}\\]
with distribution \\(P(k_{0})\\) of the parameter \\(k_{0}\\).
Let the characteristic velocity \\(u_{0}\\) vary with the scale \\(k_{0}\\) in a scale invariant form (scaling)
\\[u_{0}\\propto k_{0}^{\\alpha} \\tag{7}\\]
If the vorticity \\(\\mathbf{\\omega}({\\bf x},t)\\) correlation integral
\\[I_{\\omega}=\\int\\langle\\mathbf{\\omega}({\\bf x},t)\\cdot\\mathbf{ \\omega}({\\bf x}+{\\bf r},t)\\rangle_{V}\\ d{\\bf r} \\tag{8}\\]
(\\(< >_{V}\\) denotes the ensemble-volume average, cf. Ref. [11]) dominates the scaling Eq. (7), then from the dimensional considerations one obtains
\\[u_{0}\\propto I_{\\omega}^{1/2}k_{0}^{1/2} \\tag{9}\\]
For Gaussian distribution of the characteristic velocity \\(u_{0}\\) the variable \\(k_{0}\\) has the chi-squared (\\(\\chi^{2}\\)) distribution:
\\[P(k_{0})\\propto k_{0}^{-1/2}\\exp-(k_{0}/4k_{\\beta}) \\tag{10}\\]
here \\(k_{\\beta}\\) is a constant.
Substituting the Eq. (10) into the Eq. (6) one obtains
\\[E(k)\\propto\\exp-(k/k_{\\beta})^{1/2} \\tag{11}\\]
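The step from (6) with the distribution (10) to the stretched exponential (11) can also be checked by direct numerical integration; in the following sketch the normalization of \\(P(k_{0})\\) is left arbitrary and \\(k_{\beta}=1\\) is an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

k_beta = 1.0

def P(k0):                                   # Eq. (10), up to normalization
    return k0**(-0.5)*np.exp(-k0/(4.0*k_beta))

def E_ensemble(k):                           # Eq. (6): exp(-k/k0) averaged over P(k0)
    return quad(lambda k0: P(k0)*np.exp(-k/k0), 0.0, np.inf, limit=200)[0]

ks = np.linspace(1.0, 40.0, 8)
ratio = np.array([E_ensemble(k) for k in ks])/np.exp(-np.sqrt(ks/k_beta))
print(ratio/ratio[0])                        # constant ratios: the integral follows Eq. (11)
```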
## II Thermal Convection
In thermal (Rayleigh-Benard) convection a horizontal layer of fluid is cooled from the top and heated from below. The Boussinesq approximation of the nondimensional equations describing the thermal convection is
\\[\\frac{1}{\\rm Pr}\\left[\\frac{\\partial{\\bf u}}{\\partial t}+({\\bf u}\\cdot\\nabla){\\bf u}\\right]=-\\nabla\\sigma+\\theta\\hat{z}+\\frac{1}{\\sqrt{\\rm Ra}}\\nabla^{2}{\\bf u}, \\tag{12}\\]
\\[\\frac{\\partial\\theta}{\\partial t}+({\\bf u}\\cdot\\nabla)\\theta={\\bf u_{z}}+\\frac{1}{\\sqrt{\\rm Ra}}\\nabla^{2}\\theta, \\tag{13}\\]
\\[\\nabla\\cdot{\\bf u}={\\bf 0}, \\tag{14}\\]
where \\(Pr\\) is the Prandtl number, \\(Ra\\) is the Rayleigh number, \\(\\hat{z}\\) is the buoyancy direction, and \\(\\theta\\) is deviation of temperature from the heat conduction state [12].
Figure 4 shows kinetic energy spectrum computed for a direct numerical simulation of the thermal (Rayleigh-Benard) convection at \\(Pr=10^{2}\\) and \\(Ra=10^{7}\\) (the spectral data for this figure were taken from Fig. 10 of the Ref. [13]). The direct numerical simulation (DNS) was performed in a three-dimensional box with standard periodic boundary conditions on the lateral boundaries. On the bottom and top boundaries isothermal conditions for the temperature and free-slip conditions for velocity were used. The dashed curve in the Fig. 4 indicates the stretched exponential spectrum Eq. (11).
Figure 5 shows kinetic energy spectrum computed for the Weather Research and Forecast Model [14] numerical simulation of the atmospheric moist convection without the Coriolis effect (the spectral data were taken from Fig. 10 of the Ref. [15]). Seven warm bubbles were used in the initial condition in order to initiate convection. The bubbles interact with each other under a wind shear (for more details see the Ref. [15]). The spectrum was averaged between 0 and 15 km of the height and over 4-6
Figure 3: As in Fig. 2 but for perturbation.
Figure 2: Vertically and ensemble averaged background (total) kinetic energy spectrum at 6 hours of the system development (here and in all other figures \\(\\log k\\equiv\\log_{10}k\\)).
hours of the evolution. The dashed curve indicates the stretched exponential spectrum Eq. (11).
## III Helicity dominated distributed chaos
The vorticity dominated thermal convection (distributed chaos) has the stretched exponential kinetic energy spectrum Eq. (11) (see also Ref. [16]). Therefore, let us look at a generalization:
\\[E(k)\\propto\\int P(k_{0})\\;\\exp-(k/k_{0})\\;dk_{0}\\propto\\exp-(k/k_{\\beta})^{\\beta} \\tag{15}\\]
If distribution of the characteristic velocity \\(u_{0}\\) is \\({\\cal P}(u_{0})\\), then
\\[{\\cal P}(u_{0})du_{0}\\propto P(k_{0})dk_{0} \\tag{16}\\]
From the Eqs. (7) and (16) one obtains
\\[P(k_{0})\\propto k_{0}^{\\alpha-1}\\;{\\cal P}(u_{0}(k_{0})) \\tag{17}\\]
From the Eq. (15) asymptote of \\(P(k_{0})\\) at \\(k_{0}\\rightarrow\\infty\\) can be estimated as [17]
\\[P(k_{0})\\propto k_{0}^{-1+\\beta/[2(1-\\beta)]}\\;\\exp(-bk_{0}^{\\beta/(1-\\beta)}) \\tag{18}\\]
with a constant \\(b\\).
Then it follows from the Eqs. (7),(17) and (18) that for the Gaussian distribution \\({\\cal P}(u_{0})\\) the parameters \\(\\alpha\\) and \\(\\beta\\) are related by the equation
\\[\\beta=\\frac{2\\alpha}{1+2\\alpha} \\tag{19}\\]
For the helicity \\(h=({\\bf u}\\!\\cdot\\!{\\mathbf{\\omega}})\\) dominated distributed chaos the helicity correlation integral
\\[I_{h}=\\int\\langle h({\\bf x},t)\\cdot h({\\bf x}+{\\bf r},t)\\rangle_{V}d{\\bf r} \\tag{20}\\]
should be used instead of the vorticity correlation integral. The helicity correlation integral \\(I_{h}\\) was for the first time considered in the paper Ref. [18] and is known as the Levich-Tsinober invariant. It is usually associated with the helical waves [19].
Then it follows from the dimensional considerations:
\\[u_{0}\\propto I_{h}^{1/4}k_{0}^{1/4} \\tag{21}\\]
and using the Eq. (19) one obtains \\(\\beta=1/3\\), i.e.
\\[E(k)\\propto\\exp-(k/k_{\\beta})^{1/3} \\tag{22}\\]
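In practice, the parameters \\(\beta\\) and \\(k_{\beta}\\) of the spectra (11) and (22) can be estimated by checking the linearity of \\(\ln E(k)\\) against \\(k^{\beta}\\). A small least-squares sketch of this procedure is shown below; the synthetic spectrum stands in for the digitized data and is an assumption of the example.

```python
import numpy as np

def fit_k_beta(k, E, beta):
    """Least-squares fit of ln E = c - (k/k_beta)**beta; returns k_beta and the residual."""
    A = np.vstack([np.ones_like(k), -k**beta]).T
    (c, slope), res, _, _ = np.linalg.lstsq(A, np.log(E), rcond=None)
    return slope**(-1.0/beta), (res[0] if res.size else 0.0)

rng = np.random.default_rng(1)
k = np.linspace(1.0, 200.0, 60)
E = np.exp(-(k/3.0)**(1.0/3.0))*(1.0 + 0.02*rng.standard_normal(k.size))   # synthetic beta = 1/3 spectrum

for beta in (1.0, 0.5, 1.0/3.0):
    kb, r = fit_k_beta(k, E, beta)
    print(f"beta = {beta:.3f}:  k_beta = {kb:.2f},  residual = {r:.3e}")
```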
Figure 6 shows kinetic energy spectrum computed for the Weather Research and Forecast Model [14] numerical simulation of the atmospheric moist convection with the Coriolis effect (the spectral data were taken from Fig. 11a of the Ref. [15]). The dashed curve in the Fig. 6 indicates the stretched exponential spectrum Eq. (22) in the log-log scales (cf. previous Section, Fig. 5).
Figure 5: Kinetic energy spectrum for the Weather Research and Forecast Model numerical simulation of the atmospheric moist convection.
Figure 6: As in Fig. 5 but with addition of the Coriolis effect.
Figure 7 shows kinetic energy spectrum computed for a DNS of a Rayleigh-Benard-like (thermal) convection on a hemisphere (the spectral data were taken from Fig. 18 of the Ref. [20] for the stationary state spectrum). The fluid was heated at the equator and the temperature gradient between the equator and the pole produces thermal plumes near the equator which move up toward the pole and initiate a thermal convection.
The dashed curve in the Fig.7 indicates the stretched exponential spectrum Eq. (22) in the log-log scales.
Figure 8 shows mean spectrum of kinetic energy in 48h weather forecasts experiment at 500 hPa. The spectral data were taken from the Fig. 7b of the Ref. [21] (the forecasts were made with the Environment Canada Deterministic Weather Forecasting Systems based on ensemble-variational data assimilation). The dashed curve indicates the stretched exponential spectrum Eq. (22) and covers Meso- and Synoptic scales (the dotted vertical line indicates the Planetary scales).
## IV Ensemble Weather Forecasting
An ensemble forecast for an East Coast snowstorm was reported in Ref. [22]. The 100-member ensembles were generated by an ensemble Kalman filter [23]. The Coupled Ocean-Atmosphere Mesoscale Prediction System - COAMPS [24] was then used to integrate the ensembles for a 36-hour forecast. The initial conditions were slightly altered for this purpose. The forecasting simulation started at 1200 UTC 25 Dec. 2010 with real atmospheric data.
Figure 9 shows the ensemble and meridional averaged kinetic energy spectrum at the height 500 hPa. Figure 10 shows ensemble and meridional averaged kinetic energy spectrum of the initially generated perturbation at the 36 hours of the lead time (the
Figure 8: Mean spectrum of kinetic energy in short-range weather forecasts (48-hours) experiment at 500 hPa.
Figure 10: Kinetic energy spectrum of the perturbation for the 25 Dec. 2010 snowstorm at 36 hours of the lead time.
Figure 7: Kinetic energy spectrum for the stationary state of the thermal convection on a hemisphere.
Figure 9: Background kinetic energy spectrum for the 25 Dec. 2010 snowstorm.
data were taken from Fig. 6b of Ref. [22]). The perturbation is the difference between one ensemble member and the ensemble mean. The dashed curves in the figures indicate the stretched exponential decay Eq. (22). The authors of Ref. [22] believe that the perturbation growth in their simulation is a result of quasi-uniform amplification of the perturbation at all wavenumbers (see also Refs. [22],[25]-[27]).
Another snowstorm, in the Pacific Northwest, was studied by the same method in Ref. [25]. Figure 11 shows the mean horizontal kinetic energy spectrum at the height of 700 hPa at 1200 UTC 17 Dec. 2008 (the data were taken from Fig. 13 of Ref. [25]). Figure 12 shows the kinetic energy spectrum of the initially generated perturbation at the same height at 36 hours of the lead time (the data were taken from Fig. 14d of Ref. [25]). The forecasting simulation started at 0000 UTC 17 Dec. 2008 with real atmospheric data. The dashed curves in the figures 11 and 12 indicate the stretched exponential decay Eq. (11).
Finally, let us consider the results of a simulation experiment with eleven cases of mid-latitude convection in the central US [28]. In this experiment the influence of the multiscale perturbations generated by the initial conditions on the storm-scale ensemble forecasts was studied using the Weather Research and Forecasting Advanced Research Model and the Global Forecast System Model at NCEP (see Refs. [28],[29] for more details about the cases, configuration and simulation strategy).
Figure 13 shows power spectrum of ensemble perturbations: ensemble member minus ensemble mean (averaged over all ensemble and case members), for the \\(u\\) component of wind at 900 hPa for 3h forecast time. The spectral data were taken from Fig. 2 of the Ref. [28]. The dashed curve indicates the stretched exponential decay Eq. (11).
## V Discussion
In the paper Ref. [30] a two-dimensional barotropic vorticity model with the scaling kinetic energy spectra \\(E(k)\propto k^{-5/3}\\) and \\(E(k)\propto k^{-7/3}\\) was used in order to estimate the predictability properties of atmospheric phenomena. A vast number of studies was then devoted to the predictability of multiscale systems for the cases with power-law (scaling) kinetic energy spectra (see, for instance, recent Refs. [10],[15] and references therein). The power-law spectra are related to the scale-local interactions (such as cascades, for instance) [31], whereas the exponential spectra are a result of the non-local interactions directly relating very different scales [32]. This difference has serious consequences for predictability [16]. The non-local interactions, directly relating large scales with small ones, provide a basis for more efficient predictability.
Figure 11: Mean horizontal kinetic energy spectrum at the height 700-hPa at 1200UTC 17 Dec. 2008.
Figure 12: Kinetic energy spectrum of the perturbations for the 17 Dec. 2008 snowstorm at the 36 hours of the lead time.
Figure 13: Power spectrum of ensemble perturbations: ensemble member minus ensemble mean (averaged over all ensemble and case members), for the \\(u\\) component of wind at 900 hPa for 3h forecast time.
The above considered examples show that the distributed chaos approach with the stretched exponential spectra Eq. (15) seems to be more relevant for the description of buoyancy-driven fluid dynamics and, especially, for ensemble weather forecasting [33].
## Acknowledgement
I thank A. Berera and R.D.J.G. Ho for sharing their data and discussions, and S. Vannitsem for comments.
## References
* (1) U. Frisch and R. Morf, Phys. Rev., **23**, 2673 (1981).
* (2) J. D. Farmer, Physica D, **4**, 366 (1982).
* (3) N. Ohtomo, K. Tokiwano, Y. Tanaka et. al., J. Phys. Soc. Jpn. **64** 1104 (1995).
* (4) D.E. Sigeti, Phys. Rev. E, **52**, 2443 (1995).
* (5) A. Bershadskii, EPL, **88**, 60004 (2009).
* (6) S.M. Osprey and M.H.P Ambaum, Geophys. Res. Lett. **38**, L15702 (2011).
* (7) J.E. Maggs and G.J. Morales, Phys. Rev. Lett., **107**, 185003 (2011)
* (8) A. Berera and R.D.J.G. Ho, Phys. Rev. Lett., **120**, 024101 (2018).
* (9)[https://datashare.is.ed.ac.uk/handle/10283/2650](https://datashare.is.ed.ac.uk/handle/10283/2650)
* (10) J.A. Weyn and D.R. Durran, J. Atmos. Sci., **75**, 3331 (2018).
* (11) A. Bershadskii, arXiv:1601.07364 (2016).
* (12) G. Silano, K. R. Sreenivasan and R. Verzicco, J. Fluid Mech. **662**, 409 (2010).
* (13) A. Pandey, M.K. Verma, and P.K. Mishra, Phys. Rev. E, **89**, 023006 (2014).
* (14) W.C. Skamarock et al., NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
* (15) Y.Q. Sun, R. Rotunno, and F. Zhang, J. Atmos. Sci., **74**, 185 (2017).
* (16) A. Bershadskii, arXiv:1811.02449 (2018).
* (17) D.C. Johnston, Phys. Rev. B, **74**, 184430 (2006).
* (18) E. Levich and A. Tsinober, Phys. Lett. A **93**, 293 (1983).
* (19) E. Levich, Concepts of Physics **VI**, 239 (2009).
* (20) C.-H Bruneau, et al., Phys. Rev. Fluids, **3**, 043502 (2018).
* (21) M. Buehner et al., Mon. Wea. Rev., **143**, 2532 (2015).
* (22) D.R Durran, and M. Gingrich, J. Atmos. Sci., **71**, 2476 (2014).
* (23) J.S. Whitaker and T.M. Hamill, Mon. Wea. Rev., **130**, 1913 (2002).
* (24) R.M. Hodur, Mon. Wea. Rev., **125**, 1414 (1997).
* (25) D.R. Durran, P.A. Reinecke and J.D. Doyle, J. Atmos. Sci., **70**, 1470 (2013).
* (26) N. Bei and F. Zhang, Quart. J. Roy. Meteor. Soc., **133**, 83 (2007).
* (27) B.E. Mapes, et al., J. Meteor. Soc. Japan, **86A**, 175 (2008).
* (28) A. Johnson and X. Wang, Mon. Wea. Rev., **144**, 2579 (2016).
* (29) A. Johnson et al., Mon. Wea. Rev., **143**, 3087 (2015).
* (30) E.N. Lorenz, Tellus, XXI (3), 289 (1969).
* (31) A. S. Monin, A. M. Yaglom, Statistical Fluid Mechanics, Vol. II: Mechanics of Turbulence (Dover Pub. NY, 2007).
* (32) A. Bershadskii, Phys. Fluids **20**, 085103 (2008).
* (33) Statistical Postprocessing of Ensemble Forecasts (Editors: S. Vannitsem, D.S. Wilks and J.W. Messner, Elsevier, 2019).
# Inhomogeneous Equation of State of the Universe: Phantom Era, Future Singularity and Crossing the Phantom Barrier
Shin'ichi Nojiri
Department of Applied Physics, National Defence Academy, Hashirimizu Yokosuka 239-8686, Japan
Sergei D. Odintsov
Institucio Catalana de Recerca i Estudis Avancats (ICREA) and Institut d'Estudis Espacials de Catalunya (IEEC), Edifici Nexus, Gran Capita 2-4, 08034 Barcelona, Spain
November 3, 2021
## I Introduction
The increasing amount of evidence from the observational data indicates that the current universe lives in a narrow strip near \\(w=-1\\) (where \\(w\\) is the equation of state (EOS) parameter), quite probably being below \\(-1\\) in the so-called phantom region. It is also assumed that the modern universe is filled with some mysterious, negative pressure fluid (dark energy) which represents about 70 percent of the total energy in the universe. (The simplest, phenomenological approach is to consider that this fluid satisfies an EOS with constant \\(w\\).) The origin of this dark energy is really dark: the proposed explanations vary from modifications of gravity to the introduction of new fields (scalars, spinors, etc) with really strange properties. Moreover, forgetting for the moment about the origin of dark energy, even a more or less satisfactory mechanism for an evolving dark energy is missing so far. At best, each of the existing theoretical models for dark energy explains some specific element(s) of the late-time evolution, lacking the complete understanding. Definitely, the situation may be improved with the new generation of observational data, when they present the realistic evolving EOS of dark energy over a sufficiently long period.
The strangest (if realistic) era in the universe evolution is the phantom era. There are many attempts to describe the phantom cosmology (see, for instance, [1; 2] and references therein), especially near the future, finite-time singularity (Big Rip) which is the essential element of classical phantom cosmology. (Note that quantum effects may basically provide an escape from the future, finite-time singularity; for a recent discussion, see [3; 4].) Unfortunately, the easiest way to describe the phantom cosmology in the Lagrangian formulation leads to the necessity of introducing a not very appreciated scalar with negative kinetic energy [5]. Another, easy way is to use some phenomenological EOS which may produce the dark epoch of the universe (whatever it is). It is remarkable that such a description shows the possibility of other types of future, finite-time singularity. For instance, even when the EOS is suddenly phantomic (near the rip time where the negative pressure diverges), the sudden singularity occurs [6]. There may exist future singularities where energy/pressure is finite at the rip time; for a classification of future singularities, see [4]. They may occur even in modified gravity at late times, see [7] for explicit examples. Nevertheless, it is remarkable that an effective phantom phase may be produced also in string-inspired gravities [8].
The present paper is devoted to the study of the phantom cosmology and related regimes (for instance, crossing of the phantom divide) when the phenomenological equation of state of the universe is inhomogeneous. In other words, it contains terms dependent explicitly on the Hubble parameter (or even on its derivatives). Definitely, one needs quite strong motivation for such a modification of the dark energy EOS. The first one comes from the consideration of time-dependent bulk viscosity [9; 10]. (For earlier discussion of cosmology with time-dependent bulk viscosity, see also [11].) Actually, a specific model of dark energy with the possibility of crossing the phantom divide due to time-dependent bulk viscosity was constructed in [9]. The construction of the EOS from symmetry considerations [12] indicates the necessity of some inhomogeneous correction. Finally, a large number of gravities, from low-energy string effective actions to gravity with higher derivative terms or with inverse curvature terms, modifies the FRW equations in the requested form.
The paper is organized as follows. In the next section we consider the spatially-flat FRW universe filled by the ideal fluid with a specific dark energy EOS [3]. A short review of the four types of future singularity for different choices of the EOS parameters is given, following ref. [4]. The inhomogeneous term of specific form is then introduced to the EOS. The role of such a term in the transition between different types of singularity is investigated.
The cosmological regimes crossing the phantom barrier due to such terms are explicitly constructed. Finally, the dependence of the inhomogeneous term on the Hubble parameter derivatives is briefly discussed, as well as the emerging oscillating universe. Section three is devoted to the study of similar questions when the FRW universe is filled by an interacting mixture of two fluids. The modification of the two-fluid EOS by an inhomogeneous term is again considered. An explicit example of late-time cosmology (which may be an oscillating one) quite naturally crossing the phantom divide in such a universe is presented. It is interesting that the inhomogeneous term may effectively compensate the interaction between the two fluids. In section four we discuss the FRW cosmology admitting the crossing of the barrier \\(w=-1\\) due to the specific form of the implicit dark energy EOS proposed in ref. [13]. Again, the generalized, Hubble parameter dependent EOS is considered. Some thermodynamical dark energy model passing the barrier \\(w=-1\\) is constructed, based on the above EOS. It is demonstrated that in such a model the universe entropy may be positive even during the phantom era. Some summary and outlook are given in the discussion section. The Appendix deals with a couple of simple versions of modified gravity which may predict the requested generalization of the EOS.
## II FRW cosmology with inhomogeneous dark energy equation of state
In the present section we give a brief review of FRW cosmology with an explicit dark energy equation of state (power law). The modification of the EOS by a Hubble-parameter-dependent term (constrained by the energy conservation law) is made and its role in the FRW cosmology evolution is investigated. The starting FRW universe metric is:
\\[ds^{2}=-dt^{2}+a(t)^{2}\\sum_{i=1}^{3}\\left(dx^{i}\\right)^{2}. \\tag{1}\\]
In the FRW universe, the energy conservation law can be expressed as
\\[0=\\dot{\\rho}+3H\\left(p+\\rho\\right). \\tag{2}\\]
Here \\(\\rho\\) is energy density, \\(p\\) is pressure. The Hubble rate \\(H\\) is defined by \\(H\\equiv\\dot{a}/a\\). When \\(\\rho\\) and \\(p\\) satisfy the following simple EOS:
\\[p=w\\rho\\, \\tag{3}\\]
and if \\(w\\) is a constant, Eq.(2) can be easily integrated:
\\[\\rho=\\rho_{0}a^{-3(1+w)}. \\tag{4}\\]
Using the first FRW equation
\\[\\frac{3}{\\kappa^{2}}H^{2}=\\rho\\, \\tag{5}\\]
the well-known solution follows
\\[a=a_{0}\\left(t-t_{1}\\right)^{\\frac{2}{3(w+1)}}\\ \\ \\ \\mbox{or}\\ \\ \\ a_{0}\\left(t_{2}-t\\right)^{\\frac{2}{3(w+1)}}\\, \\tag{6}\\]
when \\(w\\neq-1\\), and
\\[a=a_{0}\\mbox{e}^{\\kappa t\\sqrt{\\frac{\\rho_{0}}{3}}} \\tag{7}\\]
when \\(w=-1\\). In (6), \\(t_{1}\\) and \\(t_{2}\\) are constants of the integration. Eq.(7) expresses the deSitter universe. In (6), since the exponent \\(2/3(w+1)\\) is not integer in general, we find \\(t>t_{1}\\) or \\(t<t_{2}\\) so that \\(a\\) should be real number. If the exponent \\(2/3(w+1)\\) is positive, the first solution in (6) expresses the expanding universe but the second one expresses the shrinking universe. If the exponent \\(2/3(w+1)\\) is negative, the first solution in (6) expresses the shrinking universe but the second one expresses the expanding universe. In the following, we only consider the case that the universe is expanding. Then for the second solution, however, there appears a singularity in a finite time at \\(t=t_{2}\\), which is called the Big Rip singularity ( for discussion of phantom cosmology near Big Rip and related questions, see[1; 2] and references therein) when
\\[w<-1. \\tag{8}\\]
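A minimal numerical check of the Big Rip behaviour (6)-(8) is easy to set up; in the sketch below the units with \\(\kappa=\rho_{0}=a(0)=1\\) and the value \\(w=-1.2\\) are illustrative assumptions of the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, rho0, w = 1.0, 1.0, -1.2                      # phantom EOS parameter, w < -1

def dadt(t, a):
    rho = rho0*a[0]**(-3.0*(1.0 + w))                # Eq. (4)
    return [a[0]*kappa*np.sqrt(rho/3.0)]             # first FRW equation (5)

blowup = lambda t, a: a[0] - 1e8                     # stop once the scale factor is huge
blowup.terminal = True

sol = solve_ivp(dadt, (0.0, 100.0), [1.0], events=blowup, rtol=1e-10)
t2 = 2.0*np.sqrt(3.0)/(3.0*abs(1.0 + w)*kappa*np.sqrt(rho0))   # rip time t2 from Eq. (6)
print("a reached 1e8 at t =", sol.t_events[0][0], ";  analytic rip time t2 =", t2)
```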
In general, the singularities may behave in different ways. One may classify the future singularities as follows [4]:
* Type I (\"Big Rip\") : For \\(t\\to t_{s}\\), \\(a\\rightarrow\\infty\\), \\(\\rho\\rightarrow\\infty\\) and \\(|p|\\rightarrow\\infty\\)
* Type II (\"sudden\") : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\rightarrow\\rho_{s}\\) or \\(0\\) and \\(|p|\\rightarrow\\infty\\)
* Type III : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\rightarrow\\infty\\) and \\(|p|\\rightarrow\\infty\\)
* Type IV : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\to 0\\), \\(|p|\\to 0\\) and higher derivatives of \\(H\\) diverge. This also includes the case when \\(\\rho\\) (\\(p\\)) or both of them tend to some finite values while higher derivatives of \\(H\\) diverge.
Here \\(t_{s}\\), \\(a_{s}\\) and \\(\\rho_{s}\\) are constants with \\(a_{s}\\neq 0\\). The type I may correspond to the Big Rip singularity [1], which emerges when \\(w<-1\\) in (3). The type II corresponds to the sudden future singularity [6] at which \\(a\\) and \\(\\rho\\) are finite but \\(p\\) diverges. The type III appears for the model with \\(p=-\\rho-A\\rho^{\\alpha}\\)[14], which is different from the sudden future singularity in the sense that \\(\\rho\\) diverges. This type of singularity has been discovered in the model of Ref. [3] where the corresponding Lagrangian model of a scalar field with potential has been constructed.
One may start from the dark energy EOS as
\\[p=-\\rho-f(\\rho)\\, \\tag{9}\\]
where \\(f(\\rho)\\) can be an arbitrary function in general. The function \\(f(\\rho)\\propto\\rho^{\\alpha}\\) with a constant \\(\\alpha\\) was proposed in Ref. [3] and was investigated in detail in Ref.[14]. Using (2) for such choice, the scale factor is given by
\\[a=a_{0}\\exp\\left(\\frac{1}{3}\\int\\frac{d\\rho}{f(\\rho)}\\right). \\tag{10}\\]
Using (5) the cosmological time may be found
\\[t=\\int\\frac{d\\rho}{\\kappa\\sqrt{3\\rho}f(\\rho)}\\, \\tag{11}\\]
In case
\\[f(\\rho)=A\\rho^{\\alpha}\\, \\tag{12}\\]
by using Eq.(10), it follows
\\[a=a_{0}\\exp\\left[\\frac{\\rho^{1-\\alpha}}{3(1-\\alpha)A}\\right]. \\tag{13}\\]
When \\(\\alpha>1\\), the scale factor remains finite even if \\(\\rho\\) goes to infinity. When \\(\\alpha<1\\), \\(a\\rightarrow\\infty\\) (\\(a\\to 0\\)) as \\(\\rho\\rightarrow\\infty\\) for \\(A>0\\) (\\(A<0\\)). Since the pressure is now given by
\\[p=-\\rho-A\\rho^{\\alpha}\\, \\tag{14}\\]
\\(p\\) always diverges when \\(\\rho\\) becomes infinite. If \\(\\alpha>1\\), the EOS parameter \\(w=p/\\rho\\) also goes to infinity, that is, \\(w\\rightarrow+\\infty\\) (\\(-\\infty\\)) for \\(A<0\\) (\\(A>0\\)). When \\(\\alpha<1\\), we have \\(w\\rightarrow-1+0\\) (\\(-1-0\\)) for \\(A<0\\) (\\(A>0\\)) as \\(\\rho\\rightarrow\\infty\\).
By using Eq.(11) for (12), one finds[4]
\\[t=t_{0}+\\frac{2}{\\sqrt{3}\\kappa A}\\frac{\\rho^{-\\alpha+1/2}}{1-2\\alpha}\\,\\ \\ \\ {\\rm for}\\ \\ \\ \\alpha\\neq\\frac{1}{2}\\, \\tag{15}\\]
and
\\[t=t_{0}+\\frac{\\ln\\left(\\frac{\\rho}{\\rho_{0}}\\right)}{\\sqrt{3}\\kappa A}\\,\\ \\ \\ {\\rm for}\\ \\ \\ \\alpha=\\frac{1}{2}\\,. \\tag{16}\\]
Therefore if \\(\\alpha\\leq 1/2\\), \\(\\rho\\) diverges in an infinite future or past. On the other hand, if \\(\\alpha>1/2\\), the divergence of \\(\\rho\\) corresponds to a finite future or past. In case of finite future, the singularity could be regarded as a Big Rip or type I singularity.
For the choice (12), the following cases were discussed [4]:
* In case \\(\\alpha=1/2\\) or \\(\\alpha=0\\), there does not appear any singularity.
* In case \\(\\alpha>1\\), Eq.(15) tells that when \\(t\\to t_{0}\\), the energy density behaves as \\(\\rho\\rightarrow\\infty\\) and therefore \\(|p|\\rightarrow\\infty\\) due to (14). Eq.(13) shows that the scale factor \\(a\\) is finite even if \\(\\rho\\rightarrow\\infty\\). Therefore \\(\\alpha>1\\) case corresponds to type III singularity.
* \\(\\alpha=1\\) case corresponds to the case (3) if we replace \\(-1-A\\) with \\(w\\). Therefore if \\(A>0\\), there occurs the Big Rip or type I singularity but if \\(A\\leq 0\\), there does not appear future singularity.
* In case \\(1/2<\\alpha<1\\), when \\(t\\to t_{0}\\), all of \\(\\rho\\), \\(|p|\\), and \\(a\\) diverge if \\(A>0\\) then this corresponds to type I singularity.
* In case \\(0<\\alpha<1/2\\), when \\(t\\to t_{0}\\), we find \\(\\rho\\), \\(|p|\\to 0\\) and \\(a\\to a_{0}\\) but by combining (13) and (15), we find \\[\\ln a\\sim|t-t_{0}|^{\\frac{\\alpha-1}{\\alpha-1/2}}\\.\\] (17) Since the exponent \\((\\alpha-1)/(\\alpha-1/2)\\) is not always an integer, even if \\(a\\) is finite, the higher derivatives of \\(H\\) diverge in general. Therefore this case corresponds to type IV singularity.
* In case \\(\\alpha<0\\), when \\(t\\to t_{0}\\), we find \\(\\rho\\to 0\\), \\(a\\to a_{0}\\) but \\(|p|\\rightarrow\\infty\\). Therefore this case corresponds to type II singularity.
Hence, the brief review of FRW cosmology with specific homogeneous EOS as well as its late-time behaviour (singularities) is given (see [4] for more detail).
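As a simple numerical illustration of this classification, the time needed for \\(\rho\\) to reach infinity can be read off from Eq. (15); the following sketch, with illustrative values \\(\kappa=A=1\\) and initial density \\(\rho_{\rm init}=1\\), reproduces the finite-time divergence for \\(\alpha>1/2\\) and the infinite-time divergence otherwise.

```python
import numpy as np

kappa, A, rho_init = 1.0, 1.0, 1.0       # illustrative values

def time_to_infinite_density(alpha):
    """Elapsed time for rho -> infinity starting from rho_init, from Eq. (15)."""
    if alpha <= 0.5:
        return np.inf                    # divergence only in the infinite future
    return 2.0*rho_init**(0.5 - alpha)/(np.sqrt(3.0)*kappa*A*(2.0*alpha - 1.0))

for alpha in (1.5, 0.75, 0.51, 0.5, 0.25):
    print(f"alpha = {alpha:4.2f}:  time to rho -> infinity = {time_to_infinite_density(alpha)}")
```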
At the next step, we will consider the inhomogeneous EOS for dark energy, so that the dependence on the Hubble parameter is included in the EOS. The motivation for such an EOS comes from the inclusion of time-dependent bulk viscosity in the ideal fluid EOS [9] or from the modification of gravity (see Appendix). Hence, we suggest the following EOS
\\[p=-\\rho+f(\\rho)+G(H). \\tag{18}\\]
where \\(G(H)\\) is some function. Then the energy conservation law (2) has the following form:
\\[0=\\dot{\\rho}+3H\\left(f(\\rho)+G(H)\\right). \\tag{19}\\]
By using the first FRW equation (5) and assuming the expanding universe (\\(H\\geq 0\\)), one finds
\\[\\dot{\\rho}=F(\\rho)\\equiv-3\\kappa\\sqrt{\\frac{\\rho}{3}}\\left(f(\\rho)+G\\left( \\kappa\\sqrt{\\rho/3}\\right)\\right). \\tag{20}\\]
or
\\[G(H)=-f\\left(3H^{2}/\\kappa^{2}\\right)-\\frac{2}{\\kappa^{2}}\\dot{H}. \\tag{21}\\]
Hence, one can express \\(G(H)\\) in terms of \\(f\\) as above.
As a first example, let us assume that EOS (3) is modified as
\\[p=w_{0}\\rho+w_{1}H^{2}. \\tag{22}\\]
Using (5), it follows
\\[p=\\left(w_{0}+\\frac{\\kappa^{2}w_{1}}{3}\\right)\\rho. \\tag{23}\\]
Therefore \\(w\\) is effectively shifted as
\\[w\\to w_{\\rm eff}\\equiv w_{0}+\\frac{\\kappa^{2}w_{1}}{3}. \\tag{24}\\]
Then even if \\(w_{0}<-1\\), as long as \\(w_{\\rm eff}>-1\\), there does not occur the Big Rip singularity. From the other side, one can start with a quintessence value of \\(w_{0}\\): the inhomogeneous EOS (23) with sufficiently negative \\(w_{1}\\) brings the cosmology to the phantom era.
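For instance (with the purely illustrative choice \\(\kappa^{2}=1\\) and arbitrary numerical values), the shift (24) acts as follows:

```python
kappa2 = 1.0                                   # kappa^2 in the chosen units (illustrative)

def w_eff(w0, w1):
    return w0 + kappa2*w1/3.0                  # Eq. (24)

print(w_eff(-1.1, 0.6))    # -0.9: a phantom w0 is pushed back above -1
print(w_eff(-0.9, -0.6))   # -1.1: a quintessence w0 is pushed into the phantom region
```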
As a second example, we assume \\(f(\\rho)\\) (12) is modified as
\\[f(\\rho)=A\\rho^{\\alpha}\\to f(\\rho)+G(H)=-A\\rho^{\\alpha}-BH^{2\\beta}. \\tag{25}\\]
By using the first FRW equation (5), we find \\(f(\\rho)\\) is modified as
\\[f_{\\rm eff}(\\rho)=f(\\rho)+G(H)=-A\\rho^{\\alpha}-B^{\\prime}\\rho^{ \\beta}\\,\\] \\[B^{\\prime}\\equiv B\\left(\\frac{\\kappa^{2}}{3}\\right)^{\\beta}. \\tag{26}\\]
If \\(\\beta>\\alpha\\), when \\(\\rho\\) is large, the second term in (26) becomes dominant:
\\[f_{\\rm eff}(\\rho)\\to B^{\\prime}\\rho^{\\beta}. \\tag{27}\\]
On the other hand, if \\(\\beta<\\alpha\\), the second term becomes dominant and we obtain (27) again when \\(\\rho\\to 0\\). In the case of (12) without \\(G(H)\\), when \\(1/2<\\alpha<1\\), there is the type I singularity where \\(\\rho\\) goes to infinity in a finite time. When \\(G(H)\\) is given by (25), if \\(\\beta>\\alpha\\), the second term in (26) becomes dominant and therefore, if \\(\\beta>1\\), instead of the type I singularity there occurs the type III singularity. In the case of (12) with \\(\\alpha>1\\), the type III singularity appears before \\(G(H)\\) is included. Even if we include \\(G(H)\\) with \\(\\beta>\\alpha>1\\), we obtain the type III singularity again and the structure of the singularity is not changed qualitatively. For (12) without \\(G(H)\\), when \\(0<\\alpha<1/2\\) or \\(\\alpha<0\\), there appears the type IV or type II singularity where \\(\\rho\\) tends to zero. Since the second term becomes dominant if \\(\\beta<\\alpha\\), if \\(\\beta<0\\), the type IV singularity for the \\(0<\\alpha<1/2\\) case becomes the type II singularity, but the type II singularity for \\(\\alpha<0\\) is not qualitatively changed.
In accordance with the previous cases, one finds
* In case \\(\\alpha>1\\), for most values of \\(\\beta\\), there occurs type III singularity. In addition to the type III singularity, when \\(0<\\beta<1/2\\), there occurs type IV singularity and when \\(\\beta<0\\), there occurs type II singularity.
* In the \\(\\alpha=1\\) case, if \\(\\beta>1\\), the singularity becomes type III. The \\(\\beta=1\\) case corresponds to (22). If \\(\\beta<1\\) and \\(A>0\\), there occurs the Big Rip or type I singularity. In addition to the type I singularity, we have a type IV singularity when \\(0<\\beta<1/2\\) and type II when \\(\\beta<0\\).
* In case \\(1/2<\\alpha<1\\), one sees singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) (even for \\(\\beta=1/2\\)) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case. In addition to type I, type IV case occurs for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\).
* In case \\(\\alpha=1/2\\), we have singularity of type III for \\(\\beta>1\\), type I for \\(1/2<\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)), type IV for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\). When \\(\\beta=1/2\\) or \\(\\beta=0\\), there does not appear any singularity.
* In case \\(0<\\alpha<1/2\\), we find type IV for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\). In addition to type IV singularity, there occurs singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case.
* In case \\(\\alpha<0\\), there will always occur type II singularity. In addition to type II singularity, we have a singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case.
Thus, we have demonstrated how the modification of the EOS by a Hubble-parameter-dependent, inhomogeneous term changes the structure of the singularity in the late-time dark energy universe.
We now consider the general case and assume that \\(F(\\rho)\\) in (20) behaves as
\\[F(\\rho)\\sim F_{0}\\rho^{\\alpha}\\, \\tag{28}\\]
with constant \\(F_{0}\\) and \\(\\alpha\\) in a proper limit (e.g. for large \\(\\rho\\) or small \\(\\rho\\)). Then when \\(\\alpha\\neq 1\\), Eq.(20) can be integrated as
\\[F_{0}\\left(t-t_{c}\\right)\\sim\\frac{\\rho^{1-\\alpha}}{1-\\alpha}\\, \\tag{29}\\]
that is,
\\[\\rho\\sim\\left(\\left(1-\\alpha\\right)F_{0}\\left(t-t_{c}\\right)\\right)^{\\frac{1}{1- \\alpha}}. \\tag{30}\\]
Here \\(t_{c}\\) is a constant of the integration. When \\(\\alpha=1\\), the energy becomes
\\[\\rho=\\rho_{0}{\\rm e}^{F_{0}t}\\, \\tag{31}\\]
with a constant of integration \\(\\rho_{0}\\). By using the first FRW equation (5), the scale factor may be found
\\[a=a_{0}{\\rm e}^{\\pm\\frac{2\\alpha}{\\left(3-2\\alpha\\right)\\sqrt{3F_{0}}}\\left( \\left(1-\\alpha\\right)F_{0}\\left(t-t_{c}\\right)\\right)^{\\frac{3-2\\alpha}{2(1- \\alpha)}}}\\, \\tag{32}\\]
when \\(\\alpha\\neq 1\\) and
\\[a=a_{0}{\\rm e}^{\\frac{2\\alpha}{\\rho_{0}}\\sqrt{\\frac{\\rho_{0}}{3}}{\\rm e}^{ \\frac{F_{0}t}{2}}}\\, \\tag{33}\\]
when \\(\\alpha=1\\).
In [4], there has been given an explicit example of the EOS where crossing of \\(w=-1\\) phantom divide occurs:
\\[a(t)=a_{0}\\left(\\frac{t}{t_{s}-t}\\right)^{n}. \\tag{34}\\]
Here \\(n\\) is a positive constant and \\(0<t<t_{s}\\). The scale factor diverges in a finite time (\\(t\\to t_{s}\\)) as in the Big Rip. Therefore \\(t_{s}\\) corresponds to the life time of the universe. When \\(t\\ll t_{s}\\), \\(a(t)\\) evolves as \\(t^{n}\\), which means that the effective EOS is given by \\(w=-1+2/(3n)>-1\\). On the other hand, when \\(t\\sim t_{s}\\), it appears \\(w=-1-2/(3n)<-1\\). The solution (34) has been obtained with
\\[f(\\rho)=\\pm\\frac{2\\rho}{3n}\\left\\{1-\\frac{4n}{t_{s}}\\left(\\frac{3}{\\kappa^{2} \\rho}\\right)^{\\frac{1}{2}}\\right\\}^{\\frac{1}{2}}. \\tag{35}\\]
Therefore the EOS needs to be double-valued in order for the transition to occur between the region \\(w<-1\\) and the region \\(w>-1\\). Then in general, there could not be one-to-one correspondence between \\(p\\) and \\(\\rho\\) in the above EOS. In such a case, instead of (18), we may suggest the implicit, inhomogeneous equation of the state
\\[F(p,\\rho,H)=0. \\tag{36}\\]
The following example may be of interest:
\\[\\left(p+\\rho\\right)^{2}-C_{0}^{2}\\rho^{2}\\left(1-\\frac{H_{0}}{H}\\right)=0. \\tag{37}\\]
Here \\(C_{0}\\) and \\(H_{0}\\) are positive constants. Combining (37) with the energy conservation law (19) and the first FRW equation (5), one can delete \\(p\\) and \\(\\rho\\) as
\\[\\dot{H}^{2}=\\frac{9}{4}C_{0}^{2}H^{4}\\left(1-\\frac{H_{0}}{H}\\right)\\, \\tag{38}\\]
which can be integrated as
\\[H=\\frac{16}{9C_{0}^{2}H_{0}\\left(t-t_{-}\\right)\\left(t_{+}-t\\right)}. \\tag{39}\\]
Here
\\[t_{\\pm}=t_{0}\\pm\\frac{4}{3C_{0}H_{0}}\\, \\tag{40}\\]
and \\(t_{0}\\) is a constant of the integration. Hence
\\[p = -\\rho\\left\\{1+\\frac{3C_{0}^{2}H_{0}}{4}\\left(t-t_{0}\\right)\\right\\}\\,\\] \\[\\rho = \\frac{2^{8}}{3^{3}C_{0}^{4}H_{0}^{2}\\kappa^{2}\\left(t-t_{-}\\right)^{2}\\left(t_{+}-t\\right)^{2}}. \\tag{41}\\]
In (39), since \\(t_{-}<t_{0}<t_{+}\\), as long as \\(t_{-}<t<t_{+}\\), the Hubble rate \\(H\\) is positive. The Hubble rate \\(H\\) has a minimum \\(H=H_{0}\\) when \\(t=t_{0}=\\left(t_{-}+t_{+}\\right)/2\\) and diverges when \\(t\\to t_{\\pm}\\). Then we may regard \\(t\\to t_{-}\\) as a Big Bang singularity and \\(t\\to t_{+}\\) as a Big Rip one. As clear from (41), the parameter \\(w=p/\\rho\\) is larger than \\(-1\\) when \\(t_{-}<t<t_{0}\\) and smaller than \\(-1\\) when \\(t_{0}<t<t_{+}\\). Therefore there occurs crossing of phantom divide \\(w=-1\\) when \\(t=t_{0}\\) thanks to the effect of inhomogeneous term in EOS.
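The crossing can be seen directly from the solution (39)-(40): since \\(w=-1-2\dot{H}/(3H^{2})\\) on the FRW background, \\(w\\) passes through \\(-1\\) exactly at the minimum of \\(H\\). A minimal numerical sketch, where the values of \\(C_{0}\\), \\(H_{0}\\) and \\(t_{0}\\) are illustrative assumptions:

```python
import numpy as np

C0, H0, t0 = 0.5, 1.0, 0.0                              # illustrative values
tm = t0 - 4.0/(3.0*C0*H0)                               # t_- from Eq. (40)
tp = t0 + 4.0/(3.0*C0*H0)                               # t_+

H  = lambda t: 16.0/(9.0*C0**2*H0*(t - tm)*(tp - t))    # Eq. (39)
dH = lambda t, eps=1e-6: (H(t + eps) - H(t - eps))/(2.0*eps)
w  = lambda t: -1.0 - 2.0*dH(t)/(3.0*H(t)**2)           # effective EOS parameter

for t in (tm + 0.3, t0, tp - 0.3):
    print(f"t = {t:+.3f}:  H = {H(t):.3f},  w = {w(t):+.4f}")
# w > -1 on the Big Bang side (t < t0), w = -1 at t = t0, w < -1 on the Big Rip side
```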
One more example may be of interest:
\\[\\left(\\rho+p\\right)^{2}+\\frac{16}{\\kappa^{4}t_{0}^{2}}\\left(h_{0}-H\\right)^{2}\\ln\\left(\\frac{h_{0}-H}{h_{1}}\\right)=0. \\tag{42}\\]
Here \\(t_{0}\\), \\(h_{0}\\), \\(h_{1}\\) are constants and \\(h_{0}>h_{1}>0\\). A solution is given by
\\[H=h_{0}-h_{1}\\mathrm{e}^{-t^{2}/t_{0}^{2}}\\,\\quad\\rho=\\frac{3}{\\kappa^{2}}\\left(h_{0}-h_{1}\\mathrm{e}^{-t^{2}/t_{0}^{2}}\\right)^{2}\\,\\] \\[p=-\\frac{3}{\\kappa^{2}}\\left(h_{0}-h_{1}\\mathrm{e}^{-t^{2}/t_{0}^{2}}\\right)^{2}-\\frac{4h_{1}t}{\\kappa^{2}t_{0}^{2}}\\mathrm{e}^{-t^{2}/t_{0}^{2}}. \\tag{43}\\]
Hence,
\\[\\dot{H}=\\frac{2h_{1}t}{t_{0}^{2}}\\mathrm{e}^{-t^{2}/t_{0}^{2}}. \\tag{44}\\]
Using the energy conservation law (19) and the first FRW equation (5), the second FRW equation may be found:
\\[-\\frac{2}{\\kappa^{2}}\\dot{H}=\\rho+p. \\tag{45}\\]
As in (44), \\(\\dot{H}\\) is negative when \\(t<0\\) and positive when \\(t>0\\). Eq.(45) tells that the effective parameter \\(w=p/\\rho\\) of the equation of the state is \\(w>-1\\) when \\(t<0\\) and \\(w<-1\\) when \\(t>0\\). As we find the Hubble rate \\(H\\) goes to a constant \\(h_{0}\\), \\(H\\to h_{0}\\), in the limit of \\(t\\rightarrow\\pm\\infty\\), the universe asymptotically approaches to deSitter phase. Therefore there does not appear Big Rip nor Big Bang singularity.
Hence, we have presented several examples of inhomogeneous EOS for the ideal fluid and demonstrated how the final state of the universe filled with such a fluid changes in comparison with the homogeneous case. The ideal fluid with an implicit EOS may be used to construct cosmologies which cross the phantom divide.
The interesting remark is in order (see also Appendix). In principle, a more general EOS may contain the derivatives of \\(H\\), like \\(\\dot{H}\\), \\(\\ddot{H}\\), etc. Then a more general EOS than (36) has the following form:
\\[F\\left(p,\\rho,H,\\dot{H},\\ddot{H},\\cdots\\right)=0. \\tag{46}\\]
Trivial example is that
\\[p=w\\rho-\\frac{2}{\\kappa^{2}}\\dot{H}-\\frac{3(1+w)}{\\kappa^{2}}H^{2}. \\tag{47}\\]
By using the first (5) or second (45) FRW equations, we find
\\[\\rho=\\frac{3}{\\kappa^{2}}H^{2}\\,\\quad p=-\\frac{2}{\\kappa^{2}}\\dot{H}-\\frac{3}{ \\kappa^{2}}H^{2}. \\tag{48}\\]
Therefore Eq.(47) becomes an identity, which means that any cosmology can be a solution if EOS (47) is assumed.
Another, non-trivial example is
\\[p=w\\rho-G_{0}-\\frac{2}{\\kappa^{2}}\\dot{H}+G_{1}\\dot{H}^{2}. \\tag{49}\\]
Here it is supposed \\(G_{0}(1+w)>0\\). If \\(G_{1}(1+w)>0\\), there appears a solution which describes an oscillating universe,
\\[H=h_{0}\\cos\\omega t\\,\\quad a=a_{0}\\mathrm{e}^{\\frac{h_{0}}{\\omega}\\sin\\omega t}. \\tag{50}\\]Here
\\[h_{0}\\equiv\\kappa\\sqrt{\\frac{G_{0}}{3(1+w)}}\\,\\quad\\omega=\\sqrt{\\frac{3(1+w)}{G_{1} \\kappa^{2}}}. \\tag{51}\\]
In case \\(G_{1}(1+w)<0\\), another cosmological solution appears
\\[H=h_{0}\\cosh\\tilde{\\omega}t\\,\\quad a=a_{0}\\mathrm{e}^{\\frac{h_{0}}{\\omega} \\sinh\\tilde{\\omega}t}. \\tag{52}\\]
Here \\(h_{0}\\) is defined by (51) again and \\(\\tilde{\\omega}\\) is defined by
\\[\\tilde{\\omega}=\\sqrt{-\\frac{3(1+w)}{G_{1}\\kappa^{2}}}. \\tag{53}\\]
One can go further and present many more examples of inhomogeneous EOS cosmology.
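The oscillating solution (50)-(51) can also be verified symbolically; in the sketch below the branch \\(1+w>0\\) is assumed and sympy is used to check that (50) satisfies the EOS (49) together with the FRW relations (48).

```python
import sympy as sp

t = sp.symbols('t', real=True)
kappa, G0, G1, wp1 = sp.symbols('kappa G0 G1 wp1', positive=True)   # wp1 = 1 + w > 0
w = wp1 - 1

h0 = kappa*sp.sqrt(G0/(3*wp1))                # Eq. (51)
omega = sp.sqrt(3*wp1/(G1*kappa**2))
H = h0*sp.cos(omega*t)                        # Eq. (50)

rho = 3*H**2/kappa**2                         # Eq. (48)
p = -2*sp.diff(H, t)/kappa**2 - 3*H**2/kappa**2
residual = p - (w*rho - G0 - 2*sp.diff(H, t)/kappa**2 + G1*sp.diff(H, t)**2)   # Eq. (49)
print(sp.simplify(residual))                  # 0: the oscillating solution satisfies the EOS
```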
## III FRW cosmology with inhomogeneous interacting fluids
In the present section, we study FRW universe filled with two interacting fluids. Note that there is some interest to study the cosmology with homogeneous interacting fluids [4; 15]. The inhomogeneous terms for such cosmology may be again motivated by (bulk) viscosity account [16].
Let us consider a system with two fluids, which satisfy the following EOS:
\\[p_{1,2}=-\\rho_{1,2}-f_{1,2}\\left(\\rho_{1,2}\\right)-G_{1,2}\\left(H\\right). \\tag{54}\\]
For simplicity, only the following case is considered:
\\[p_{\\pm}=w_{\\pm}\\rho_{\\pm}-G_{\\pm}\\left(H\\right). \\tag{55}\\]
In the above equation and in the following, the indexes \\(\\pm\\) are used instead of \\(1,2\\), i.e. \\(p_{1,2}=p_{\\pm}\\). In a spatially flat FRW universe with a scale factor \\(a\\), the cosmological equations are given by
\\[\\dot{\\rho}_{\\pm}+3H(\\rho_{\\pm}+p_{\\pm})=\\mp Q\\, \\tag{56}\\] \\[\\dot{H}=-\\frac{\\kappa^{2}}{2}(\\rho_{+}+p_{+}+\\rho_{-}+p_{-})\\,\\] (57) \\[H^{2}=\\frac{\\kappa^{2}}{3}(\\rho_{+}+\\rho_{-}). \\tag{58}\\]
Not all of the above equations are independent, for example, Eqs.(56) and (58) lead to (57). From Eqs.(58), (57), and the equation for \\(\\rho_{+}\\) and \\(p_{+}\\) of (56), one obtains the equation for \\(\\rho_{-}\\) and \\(p_{-}\\) of (56). In [4], the following case has been considered:
\\[G_{\\pm}(H)=0\\,\\quad Q=\\delta H^{2}\\,\\quad w_{+}=0\\,\\quad w_{-}=-2 \\tag{59}\\]
where \\(\\delta\\) is a constant. Then combining Eq. (58) with Eqs. (56), the explicit solution follows
\\[H = \\frac{2}{3}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right)\\, \\tag{60}\\] \\[\\rho_{+} = \\frac{4}{3\\kappa^{2}}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right) \\frac{1}{t}\\,\\] (61) \\[\\rho_{-} = \\frac{4}{3\\kappa^{2}}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right) \\frac{1}{t_{s}-t}\\,, \\tag{62}\\]
where
\\[t_{s}\\equiv\\frac{9}{\\delta\\kappa^{2}}\\,. \\tag{63}\\]
In (60), it is assumed \\(0<t<t_{s}\\). The Hubble rate \\(H\\) diverges in a finite time (\\(t\\to t_{s}\\)) as in the Big Rip singularity. Therefore \\(t_{s}\\) corresponds to the life time of the universe. When \\(t\\ll t_{s}\\), \\(H\\) behaves as \\(2/3t\\), which means that the effective EOS is given by \\(w_{\\rm eff}\\sim 0>-1\\). On the other hand, when \\(t\\sim t_{s}\\), it appears \\(w_{\\rm eff}=-2<-1\\). Therefore the crossing of phantom divide \\(w_{\\rm eff}=-1\\) occurs.
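The crossing in this two-fluid example can be made explicit by evaluating the effective EOS parameter \\(w_{\rm eff}=-1-2\dot{H}/(3H^{2})\\) along the solution (60); the numerical values below (\\(\kappa=\delta=1\\)) are illustrative assumptions.

```python
import numpy as np

kappa, delta = 1.0, 1.0
ts = 9.0/(delta*kappa**2)                                   # Eq. (63)

H  = lambda t: (2.0/3.0)*(1.0/t + 1.0/(ts - t))              # Eq. (60)
dH = lambda t: (2.0/3.0)*(-1.0/t**2 + 1.0/(ts - t)**2)
w_eff = lambda t: -1.0 - 2.0*dH(t)/(3.0*H(t)**2)

for t in (0.05*ts, 0.5*ts, 0.95*ts):
    print(f"t/t_s = {t/ts:.2f}:  w_eff = {w_eff(t):+.3f}")
# w_eff is close to 0 at early times, crosses -1 at t = t_s/2, and tends to -2 near the Big Rip
```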
From (55 - 58), we obtain
\\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}H^{2} \\tag{64}\\] \\[\\pm\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\ G_{+}(H)+G_{-}(H)\\] \\[-\\frac{3}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H^{2}- \\frac{2}{\\kappa^{2}}\\ddot{H}\\bigg{\\}}\\,\\] \\[Q = -\\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\left(G^{\\prime}_{+}(H)+G^{\\prime} _{-}(H)\\right)\\dot{H}\\] (65) \\[-\\frac{6}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H\\dot{H }-\\frac{2}{\\kappa^{2}}\\ddot{H}\\bigg{\\}}\\] \\[+3H\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)\\] \\[\\times\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\ G_{+}(H)+G_{-}(H)\\] \\[-\\frac{3}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H^{2}- \\frac{2}{\\kappa^{2}}\\dot{H}\\bigg{\\}}\\] \\[-\\frac{9\\left(w_{+}-w_{-}\\right)}{4\\kappa^{2}}H^{3}\\] \\[-\\frac{3}{2}H\\left(G_{+}(H)-G_{-}(H)\\right)\\.\\]
First, the case is considered that the Hubble rate \\(H\\) satisfies the following equation:
\\[\\dot{H}=S(H)\\, \\tag{66}\\]
where \\(S(H)\\) is a proper function of \\(H\\). Hence, \\(Q\\) can be presented as a function of \\(H\\) as
\\[Q = Q(H) \\tag{67}\\] \\[= -\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\left(G^{\\prime}_{+}(H)+G^{\\prime}_{ -}(H)\\right)S(H)\\] \\[+3\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H\\left(G_{+}(H)+G_{-}(H) \\right)\\bigg{\\}}\\] \\[+\\frac{12}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}\\left(1+\\frac{w_{+ }+w_{-}}{2}\\right)HS(H)\\] \\[-\\frac{9}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}\\] \\[\\times\\left\\{\\left(1+\\frac{w_{1}+w_{2}}{2}\\right)^{2}+\\frac{ \\left(w_{+}-w_{-}\\right)^{2}}{4}\\right\\}H^{3}\\] \\[+\\frac{2}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}S^{\\prime}(H)S(H)\\] \\[-\\frac{3}{2}H\\left(G_{+}(H)-G_{-}(H)\\right)\\.\\]
If \\(Q\\) is given by (67) for proper \\(G_{\\pm}(H)\\) and \\(S(H)\\), the solution of Eqs. (55 - 58) can be obtained by solving Eq.(66) with respect to \\(H\\). Then from (64), one finds the behavior of \\(\\rho_{\\pm}\\). As an example, if we consider \\(S(H)\\) given by
\\[S(H)=-\\frac{1}{h_{1}}\\left(H-h_{0}\\right)^{2}\\, \\tag{68}\\]
the solution of (66) is given by
\\[H=h_{0}+\\frac{h_{1}}{t-t_{0}}\\, \\tag{69}\\]
Here \\(t_{0}\\) is a constant of the integration. In the solution (69), as \\(H\\) behaves as \\(H\\sim\\frac{h_{1}}{t-t_{0}}\\) when \\(t-t_{0}\\sim 0\\), the effective \\(w_{\\rm eff}\\) is given by \\(w_{\\rm eff}=-1+\\frac{2}{3h_{1}}\\). On the other hand, as \\(H\\) becomes a constant \\(h_{0}\\) when \\(t\\) is large, we obtain the effective \\(w_{\\rm eff}=-1\\), i.e., an asymptotically de Sitter phase.
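As a check added here, the solution (69) can be verified by direct substitution into (66) with (68):
\\[\\dot{H}=-\\frac{h_{1}}{(t-t_{0})^{2}}=-\\frac{1}{h_{1}}\\left(H-h_{0}\\right)^{2}=S(H)\\,.\\]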
Next the simpler case is considered:
\\[w_{\\pm}=-1\\pm w\\,\\quad G_{\\pm}(H)=\\pm G(H). \\tag{70}\\]
Then (64) and (65) have the following forms:
\\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}H^{2}\\mp\\frac{1}{\\kappa^{2}w}\\dot{H}\\, \\tag{71}\\] \\[Q = \\frac{1}{\\kappa^{2}w}\\ddot{H}-\\frac{9w}{2\\kappa^{2}}H^{3}-3HG(H). \\tag{72}\\]
Thus, for example, for an arbitrary \\(G(H)\\), if \\(Q\\) is given by a function of \\(H\\) as
\\[Q=\\frac{\\omega^{2}}{\\kappa^{2}w}\\left(h_{0}-H\\right)-\\frac{9w}{2\\kappa^{2}}H^{3}-3HG(H)\\, \\tag{73}\\]
that is,
\\[\\ddot{H}=\\omega^{2}\\left(h_{0}-H\\right)\\, \\tag{74}\\]
the solution of Eqs.(55 - 58) is given by
\\[H = h_{0}+h_{1}\\sin\\left(\\omega t+\\alpha\\right)\\,\\] \\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}\\left(h_{0}+h_{1}\\sin\\left(\\omega t+\\alpha \\right)\\right)^{2} \\tag{75}\\] \\[\\mp\\frac{h_{1}\\omega}{\\kappa^{2}w}\\cos\\left(\\omega t+\\alpha \\right)\\.\\]
Here \\(h_{1}\\) and \\(\\alpha\\) are constants of the integration. This demonstrates how the inhomogeneous term modifies late-time cosmology.
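One may verify (a check added here) that the first of (75) indeed satisfies (74):
\\[\\ddot{H}=-h_{1}\\omega^{2}\\sin\\left(\\omega t+\\alpha\\right)=\\omega^{2}\\left(h_{0}-H\\right)\\,\\]
and that the expressions for \\(\\rho_{\\pm}\\) in (75) then follow directly from (71).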
Choosing \\(G_{\\pm}(H)\\) and \\(Q\\), one may realize a rather general cosmology. As was shown, if we introduce two fluids, even without assuming the non-linear EOS as in (36), the model crossing \\(w=-1\\) effectively can be realized. In fact, from (75) one has
\\[\\dot{H}=h_{1}\\omega\\cos\\left(\\omega t+\\alpha\\right)\\, \\tag{76}\\]
which changes its sign depending on time. When \\(\\dot{H}>0\\), effectively \\(w<-1\\), and when \\(\\dot{H}<0\\), \\(w>-1\\). Note that, as a special case in (73), we may choose,
\\[G(H)=\\frac{\\omega^{2}}{3\\kappa^{2}w}\\left(\\frac{h_{0}}{H}-1\\right)-\\frac{3w}{2\\kappa^{2}}H^{2}\\, \\tag{77}\\]
which gives \\(Q=0\\). As \\(Q=0\\), from (56), there is no direct interaction between two fluids. As is clear from (75), however, there is an oscillation in the energy densities, which may indicate that there is a transfer of the energy between the fluids. Hence, the \\(G(H)\\) term might generate indirect transfer between two fluids.
## IV Crossing the phantom barrier with inhomogeneous EOS and thermodynamical considerations
Let us start from the EOS (9). Assuming that \\(w\\) crosses \\(-1\\), which corresponds to \\(f(\\rho)=0\\), in order that the integrations in (10) and (11) are finite, \\(f(\\rho)\\) should behave as
\\[f(\\rho)\\sim f_{0}\\left(\\rho-\\rho_{0}\\right)^{s}\\,\\quad 0<s<1. \\tag{78}\\]
Here \\(f(\\rho_{0})=0\\). Since \\(0<s<1\\), \\(f(\\rho)\\) could be multi-valued at \\(\\rho=\\rho_{0}\\), in general. Near \\(\\rho=\\rho_{0}\\), Eq.(11) gives,
\\[t-t_{0}\\sim\\frac{\\left(\\rho-\\rho_{0}\\right)^{1-s}}{\\kappa\\sqrt{3\\rho_{0}}f_{0}(1-s)}. \\tag{79}\\]
Here \\(t=t_{0}\\) when \\(\\rho=\\rho_{0}\\). Since
\\[\\dot{H}=\\frac{\\kappa^{2}}{2}f(\\rho)\\, \\tag{80}\\]
from the second FRW Eq.(45), one finds
\\[\\dot{H} \\sim \\frac{\\kappa^{2}}{2}f_{0}\\left(\\frac{t-t_{0}}{t_{1}}\\right)^{s/(1-s)}\\,\\] \\[t_{1} \\equiv \\frac{1}{\\kappa\\sqrt{3\\rho_{0}}f_{0}(1-s)}. \\tag{81}\\]
Hence, when \\(s/(1-s)\\) is a positive odd integer, the sign of \\(\\dot{H}\\) changes at \\(t=t_{0}\\), which shows the crossing of \\(w=-1\\).
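As an illustration added here, for \\(s=1/2\\) (the value which also arises below from the EOS (82)) the exponent is \\(s/(1-s)=1\\), so near \\(t=t_{0}\\)
\\[\\dot{H}\\sim\\frac{\\kappa^{2}f_{0}}{2}\\,\\frac{t-t_{0}}{t_{1}}\\,\\]
which, for \\(f_{0}>0\\), is negative (\\(w_{\\rm eff}>-1\\)) for \\(t<t_{0}\\) and positive (\\(w_{\\rm eff}<-1\\)) for \\(t>t_{0}\\), i.e., the phantom divide is crossed at \\(t=t_{0}\\).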
In the recent paper [13], based on the consideration of a mixture of two fluids, effective quintessence and effective phantom, the following quite interesting EOS has been suggested:
\\[A\\rho^{m}+Bp^{m}=\\left(C\\rho^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{82}\\]
Here \\(A\\), \\(B\\), \\(C\\), \\(D\\), and \\(\\alpha\\) are constants and \\(m\\) is an integer. This EOS can be regarded as a special case of (36). By writing \\(p\\) as
\\[p=Q(\\rho)\\rho\\, \\tag{83}\\]
one obtains
\\[\\rho^{m(\\alpha-1)}=F\\left(Q^{m}\\right)\\equiv\\left(A+BQ^{m}\\right)\\left(C+DQ^{ m}\\right)^{-\\alpha}. \\tag{84}\\]
Since
\\[F^{\\prime}\\left(Q^{m}\\right) = \\left(C+DQ^{m}\\right)^{-\\alpha-1} \\tag{85}\\] \\[\\times\\left(BC-\\alpha AD+(1-\\alpha)BDQ^{m}\\right)\\,\\]
it follows \\(F^{\\prime}\\left(Q^{m}\\right)=0\\) when
\\[Q^{m}=-\\frac{\\frac{C}{D}-\\alpha\\frac{A}{B}}{1-\\alpha}. \\tag{86}\\]
By properly choosing the parameters, we assume
\\[\\frac{\\frac{C}{D}-\\alpha\\frac{A}{B}}{1-\\alpha}=1. \\tag{87}\\]
When \\(Q\\sim-1\\),
\\[F(Q^{m})\\sim q_{0}+q_{2}\\left(Q+1\\right)^{2}. \\tag{88}\\]
Here
\\[q_{0} = F(-1)=(A-B)(C-D)^{-\\alpha}\\,\\] \\[q_{2} = \\frac{1}{2}\\left.\\frac{d^{2}F}{dQ^{2}}\\right|_{Q=-1} \\tag{89}\\] \\[= -\\alpha(\\alpha-1)(C-D)^{-\\alpha-2}D^{2}(A-B)m^{2}\\.\\]
In (89), it is supposed \\(m\\) is an odd integer. Solving (84) with (88) with respect to \\(Q\\), one arrives at
\\[Q=-1\\pm\\left\\{\\frac{m(\\alpha-1)\\rho_{0}^{m\\alpha-m-1}\\left(\\rho-\\rho_{0} \\right)}{q_{2}}\\right\\}^{1/2}. \\tag{90}\\]
Here \\(\\rho_{0}\\) is defined by
\\[q_{0}=\\rho_{0}^{m(\\alpha-1)}. \\tag{91}\\]
Using (83) together with (90), the pressure behaves as
\\[p\\sim-\\rho\\pm\\rho_{0}\\left\\{\\frac{m(\\alpha-1)\\rho_{0}^{m\\alpha-m-1}\\left(\\rho -\\rho_{0}\\right)}{q_{2}}\\right\\}^{1/2}. \\tag{92}\\]
Comparing (92) with (78), we find that the EOS (82) surely corresponds to \\(s=1/2\\) case in (78).
For the EOS (82), there are interesting, exactly solvable cases. We now consider such a case and see that there really are cases of EOS crossing the barrier \\(w=-1\\). The energy conservation law (2) may be rewritten as follows:
\\[p=-\\rho-V\\frac{d\\rho}{dV}\\,\\ \\ \\ V\\equiv V_{0}a^{3}. \\tag{93}\\]
Here \\(V_{0}\\) is a constant with the dimension of the volume. Use of Eq.(83) gives
\\[0=V\\frac{d\\rho}{dV}+\\left(1+Q(\\rho)\\right)\\rho. \\tag{94}\\]
Using (84), we further rewrite (94) as an equation with respect to \\(Q\\):
\\[0 = -\\frac{\\left(BC-\\alpha AD+BD\\left(1-\\alpha\\right)Q^{m}\\right)Q^{ m-1}}{\\left(1-\\alpha\\right)\\left(C+DQ^{m}\\right)\\left(A+BQ^{m}\\right)}V\\frac{dQ}{dV} \\tag{95}\\] \\[+1+Q\\.\\]
Assuming Eq.(87), the above Eq.(95) takes a simple form:
\\[0=-\\frac{BD\\left(1+Q^{m}\\right)Q^{m-1}}{\\left(C+DQ^{m}\\right)\\left(A+BQ^{m} \\right)}V\\frac{dQ}{dV}+1+Q. \\tag{96}\\]
Especially in the simplest case \\(m=1\\), one can easily solve (96)
\\[Q=-\\frac{C-A\\left(\\frac{V}{V_{1}}\\right)^{\\beta}}{D-B\\left(\\frac{V}{V_{1}} \\right)^{\\beta}}. \\tag{97}\\]
Here \\(V_{1}\\) is a constant of the integration and
\\[\\beta\\equiv\\frac{BD}{AD-BC}=\\frac{1}{\\left(1-\\alpha\\right)\\left(\\frac{A}{B}-1 \\right)}. \\tag{98}\\]
In the above equation, Eq.(87) is used. Hence, when \\(\\left(V/V_{1}\\right)^{\\beta}\\to 0\\), it follows \\(w=p/\\rho=Q\\rightarrow-C/D\\). On the other hand, when \\(\\left(V/V_{1}\\right)^{\\beta}\\rightarrow\\infty\\), one arrives at \\(w=Q\\rightarrow-A/B\\). Hence, the value of \\(w\\) changes depending on the size of the universe. Especially when
\\[\\frac{V}{V_{1}}=\\left(\\frac{C-D}{A-B}\\right)^{1/\\beta}\\, \\tag{99}\\]
there occurs the crossing of phantom divide \\(w=Q=-1\\) (compare with [13]).
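A minimal numerical sketch (added here; the parameter values are illustrative and are not taken from the text) of the volume-dependent EOS parameter (97): with \\(A=3\\), \\(B=2\\), \\(C=-1\\), \\(D=-2\\) and \\(\\alpha=-1\\), the constraint (87) holds, \\(\\beta=1\\), and \\(Q(V)\\) interpolates smoothly from \\(-C/D=-0.5\\) at small volume to \\(-A/B=-1.5\\) at large volume, crossing \\(-1\\) at \\(V/V_{1}=1\\) in agreement with (99).

```python
import numpy as np

# Illustrative constants (not from the text) chosen to satisfy Eq. (87):
# (C/D - alpha*A/B)/(1 - alpha) = 1.
A, B, C, D, alpha = 3.0, 2.0, -1.0, -2.0, -1.0
assert np.isclose((C / D - alpha * A / B) / (1.0 - alpha), 1.0)

beta = B * D / (A * D - B * C)        # Eq. (98); here beta = 1
V_over_V1 = np.logspace(-2, 2, 9)      # range of universe volumes

# Eq. (97): w = Q(V) = -(C - A (V/V1)^beta) / (D - B (V/V1)^beta)
x = V_over_V1 ** beta
Q = -(C - A * x) / (D - B * x)
for v, q in zip(V_over_V1, Q):
    print(f"V/V1 = {v:8.3f}   w = {q:+.3f}")

# Crossing of the phantom divide, Eq. (99): V/V1 = ((C-D)/(A-B))^(1/beta)
print("w = -1 at V/V1 =", ((C - D) / (A - B)) ** (1.0 / beta))
```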
As the inhomogeneous generalization of the EOS (82), we may consider
\\[A\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{m}+Bp^{m}=\\left(C\\rho^{m}+Dp^{m} \\right)^{\\alpha}\\, \\tag{100}\\]
or
\\[A\\rho^{m}+Bp^{m}=\\left(C\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{m}+Dp^{m} \\right)^{\\alpha}\\, \\tag{101}\\]or, more general EOS
\\[(A-A^{\\prime})\\rho^{m}+A^{\\prime}\\left(\\frac{3}{\\kappa^{2}}H^{2} \\right)^{m}+Bp^{m}\\] \\[=\\left((C-C^{\\prime})\\rho^{m}+C^{\\prime}\\left(\\frac{3}{\\kappa^{2}} H^{2}\\right)^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{102}\\]
By using the first FRW equation (5), it follows that the EOS (100), (101), and (102) are equivalent to (82). In particular, if \\(m=1\\) and (87) is satisfied, one obtains the solution (97).
Hence, using the first and second FRW Eqs.(5) and (45), the EOS (82) with \\(m=1\\) can be rewritten as
\\[\\frac{d^{2}}{dt^{2}}\\left(a^{\\frac{3}{2}\\left(1-\\frac{A}{B}\\right)}\\right)\\] \\[=\\frac{3\\kappa^{2}(A-B)}{4B^{2}}\\left(\\frac{3\\kappa^{2}(C-D)}{4D^{2}}\\right)^{-\\alpha}\\] \\[\\quad\\times a^{\\frac{3}{2}\\left\\{1-\\frac{A}{B}-\\alpha\\left(1-\\frac{C}{D}\\right)\\right\\}}\\left\\{\\frac{d^{2}}{dt^{2}}\\left(a^{\\frac{3}{2}\\left(1-\\frac{C}{D}\\right)}\\right)\\right\\}^{\\alpha}. \\tag{103}\\]
When (87) is satisfied, this second order differential Eq. looks as
\\[\\frac{d^{2}X}{dt^{2}} = \\left(\\frac{4B}{3\\kappa^{2}(A-B)}\\right)^{\\alpha-1}\\alpha^{ \\alpha}\\left(\\frac{d^{2}X^{\\frac{1}{\\alpha}}}{dt^{2}}\\right)^{\\alpha}\\,\\] \\[X \\equiv a^{\\frac{3}{2}\\left(1-\\frac{A}{B}\\right)}\\,, \\tag{104}\\]
which also admits, besides the solution crossing \\(w=-1\\) (97), a flat universe solution
\\[a=a_{0}\\,\\quad(a_{0}:\\text{constant})\\, \\tag{105}\\]
and deSitter universe solution
\\[a=a_{0}\\mathrm{e}^{\\frac{2}{\\sqrt{\\frac{B}{\\kappa^{2}}-B}\\alpha^{\\frac{B}{2( 1-\\alpha)}}t}}. \\tag{106}\\]
As the next generalization of (82), one may consider the following EOS:
\\[A\\rho+Bp-\\frac{3(A-B)}{\\kappa^{2}}H^{2}\\] \\[=\\left(C\\rho+Dp-\\frac{3(C-D)}{\\kappa^{2}}H^{2}\\right)^{\\alpha(H)}. \\tag{107}\\]
Here \\(\\alpha\\) is assumed to be a function of \\(H\\). Then by using the first and second FRW Eqs.(5) and (45), the EOS (107) can be rewritten as
\\[-\\frac{2B}{\\kappa^{2}}\\dot{H}=\\left(-\\frac{2D}{\\kappa^{2}}\\dot{H}\\right)^{ \\alpha(H)}\\, \\tag{108}\\]
which gives
\\[-\\frac{\\kappa^{2}}{2D}t=\\int^{H}dH\\,\\mathrm{e}^{-\\frac{\\ln\\frac{B}{D}}{\\alpha(H)-1}}. \\tag{109}\\]
As an example, for the solution (75)
\\[\\omega t=\\frac{1}{h_{1}}\\int^{H}\\frac{dH}{\\sqrt{1-\\left(\\frac{H-h_{0}}{h_{1}} \\right)^{2}}}. \\tag{110}\\]
Comparing (109) with (110), in case that
\\[h_{1}\\omega=-\\frac{\\kappa^{2}}{2D}\\,\\quad\\alpha(H)=1+\\frac{2\\ln\\frac{B}{D}}{ \\ln\\left(1-\\left(\\frac{H-h_{0}}{h_{1}}\\right)^{2}\\right)}\\, \\tag{111}\\]
the solution (75) follows from the EOS (107).
As another generalization of (82), we may consider the following EOS:
\\[A\\rho^{m}+Bp^{m}=G(H)\\left(C\\rho^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{112}\\]
Here \\(G(H)\\) is a function of the Hubble rate. For simplicity, the following case is considered
\\[m=1\\,\\quad G(H)=\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{\\gamma}. \\tag{113}\\]
Then, writing \\(p\\) as (83) and using \\(Q\\), the energy looks like
\\[\\rho=(A+BQ)^{\\frac{1}{\\gamma+\\alpha-1}}(C+DQ)^{-\\frac{\\alpha}{\\gamma+\\alpha-1} }\\, \\tag{114}\\]
which corresponds to (84). Assuming Eq.(87), by using (83), instead of (96), one gets
\\[0=-\\frac{(1-\\alpha)BD}{\\left(1-\\alpha-\\gamma\\right)\\left(C+DQ\\right)\\left(A+BQ \\right)}V\\frac{dQ}{dV}+1\\, \\tag{115}\\]
which can be solved as
\\[Q=-\\frac{C-A\\left(\\frac{V}{V_{1}}\\right)^{\\tilde{\\beta}}}{D-B\\left(\\frac{V}{V_{1}}\\right)^{\\tilde{\\beta}}}. \\tag{116}\\]
Here \\(V_{1}\\) is again a constant of the integration and
\\[\\tilde{\\beta}\\equiv\\frac{(1-\\alpha)BD}{\\left(1-\\alpha-\\gamma\\right)\\left(AD-BC\\right)}. \\tag{117}\\]
Then, as in (97), when \\(\\left(V/V_{1}\\right)^{\\tilde{\\beta}}\\to 0\\), we have \\(w=p/\\rho=Q\\rightarrow-C/D\\), and when \\(\\left(V/V_{1}\\right)^{\\tilde{\\beta}}\\rightarrow\\infty\\), we have \\(w=Q\\rightarrow-A/B\\). The power of \\(V\\), however, is changed in Eq.(116) compared with Eq.(97). Thus, we have presented a number of FRW cosmologies (including oscillating universes) filled by a cosmic fluid with an inhomogeneous EOS where the phantom divide is crossed. Definitely, one can suggest more examples or try to fit the astrophysical data with a more precise model of the above sort.
In [17], the thermodynamical models of the dark energy have been constructed. Especially it has been shown that, for the fluid with constant \\(w\\), the free energy \\(F(T,V)\\) is generally given by
\\[F(T,V)=T\\hat{F}\\left((T/T_{0})^{1/w}(V/V_{0})\\right). \\tag{118}\\]Here \\(T\\) is the temperature and \\(V\\) is the volume of the universe. For the dimensional reasons, the positive parameters \\(T_{0}\\) and \\(V_{0}\\) are introduced.
The interesting question is what happens with the entropy when the value of \\(w\\) crosses \\(-1\\). As a model, the case that \\(w=Q\\) depends on \\(V\\) as in (97) may be considered:
\\[w=w(V)=\\frac{w_{0}+w_{1}\\left(\\frac{V}{V_{0}}\\right)^{\\beta}}{1+\\left(\\frac{V} {V_{0}}\\right)^{\\beta}}. \\tag{119}\\]
When \\(\\beta>0\\), \\(w\\to w_{0}\\) for small universe and \\(w\\to w_{1}\\) for large universe.
The specific dependence of free energy may be taken as below
\\[F=\\frac{f_{0}T}{T_{0}}\\left\\{\\left(\\frac{T}{T_{0}}\\right)^{\\frac{1}{w(V)}}\\frac {V}{V_{0}}\\right\\}^{\\gamma}. \\tag{120}\\]
Here \\(\\gamma\\) is a constant. When \\(\\gamma=1\\) and \\(w\\) is a constant, the free energy is proportional to the volume. For usual matter, due to self-interaction and related effects, \\(\\gamma\\) is not always unity. Then, the pressure \\(p\\), the energy density \\(\\rho\\), and the entropy \\({\\cal S}\\) are given by
\\[p = -\\frac{\\partial F}{\\partial V}\\] \\[= -\\frac{f_{0}\\gamma}{V_{0}}\\left(\\frac{T}{T_{0}}\\right)^{1+\\frac{ \\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma-1}\\] \\[\\times\\left\\{1+\\gamma\\ln\\left(\\frac{T}{T_{0}}\\right)\\frac{\\left( w_{1}-w_{0}\\right)\\beta\\left(\\frac{V}{V_{0}}\\right)^{\\beta}}{\\left(w_{0}+w_{1} \\left(\\frac{V}{V_{0}}\\right)^{\\beta}\\right)^{2}}\\right\\}\\,\\] \\[\\rho = \\frac{1}{V}\\left(F-T\\frac{\\partial F}{\\partial T}\\right)\\] \\[= -\\frac{f_{0}\\gamma}{wV_{0}}\\left(\\frac{T}{T_{0}}\\right)^{1+\\frac{ \\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma-1}\\,\\] \\[{\\cal S} = -\\frac{\\partial F}{\\partial T} \\tag{121}\\] \\[= -\\frac{f_{0}}{T_{0}}\\left(1+\\frac{\\gamma}{w}\\right)\\left(\\frac{T }{T_{0}}\\right)^{\\frac{\\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma}\\.\\]
In the pressure \\(p\\), the second term in large \\(\\{\\}\\) comes from \\(V\\) dependence of \\(w\\) in (119), which vanishes for large or small universe (\\(V\\rightarrow\\infty\\) or \\(V\\to 0\\)). Hence, for small or large universe \\(p/\\rho\\to w(V)\\to w_{0,1}\\). As seen from the expression for \\({\\cal S}\\), the sign of the entropy changes at
\\[w=-\\gamma. \\tag{122}\\]
If \\(\\gamma=1\\), the sign of the entropy \\({\\cal S}\\) changes when crossing \\(w=-1\\) (the entropy becomes negative when \\(w\\) is less than \\(-1\\) as it was observed in [17]), but in the case that
\\[\\gamma<|w_{0}|,\\,|w_{1}|\\, \\tag{123}\\]
the entropy does not change its sign.
We should note that the expressions (121) are not well-defined at \\(w=0\\) (which corresponds to dust) unless \\(\\gamma=0\\). One may assume \\(0<\\gamma<w_{0}\\ll 1\\) and \\(w_{1}\\lesssim-1\\). Then, as is clear from (119), \\(w\\) changes from \\(w_{0}\\sim 0\\) for a small universe to \\(w_{1}\\lesssim-1\\) for a large universe and crosses \\(-1\\). Since we always have \\(|\\gamma/w_{0}|<1\\) and therefore \\(1+\\gamma/w>0\\), the entropy \\({\\cal S}\\) (121) is always positive and does not change its sign as long as \\(f_{0}<0\\). This explicitly demonstrates a very beautiful phenomenon: there exist thermodynamical models for dark energy with crossing of the phantom divide. Despite the preliminary expectations, the entropy of such a dark energy universe may be positive even in its phantom phase!
## V Discussion
In summary, the effect of modifying the general EOS of the dark energy ideal fluid by the insertion of an inhomogeneous, Hubble parameter dependent term in the late-time universe is considered. Several explicit examples of such a term, motivated by a time-dependent bulk viscosity or by deviations from general relativity, are considered. The corresponding late-time FRW cosmology (mainly in its phantom epoch) is described. It is demonstrated how the structure of the future singularity is changed thanks to the generalization of the dark energy EOS. A number of FRW cosmologies admitting the crossing of the phantom barrier are presented. The inhomogeneous term in the EOS helps to realize such a transition in a more natural way.
It is interesting that in the case when the universe is filled with two interacting fluids (for instance, dark energy and dark matter) the Hubble parameter dependent term may effectively absorb the coupling between the fluids. Again, in the case of two dark fluids, a phantom epoch with the possibility of crossing the \\(w=-1\\) barrier is constructed. It is also very interesting that there exists a thermodynamical dark energy model where, despite the preliminary expectations [17], the entropy in the phantom epoch may be positive. This is caused by the crossing of the phantom barrier.
As was demonstrated, making the dark energy EOS more general through the extra freedom in the inhomogeneous term brings a number of new possibilities to construct the late-time universe. One can go even further, assuming that the inhomogeneous terms in the EOS are not restricted by the energy conservation law (as is often the case in the braneworld approach). Nevertheless, only more precise astrophysical data will help to understand which of the EOS of the universe under consideration (in other words, which dark energy model) is realistic.
## Acknowledgements
We thank S. Tsujikawa for participation at the early stage of this work. The research by SDO has been partially supported by RFBR grant 03-01-00105 and LRSS grant 1252.2003.2.
## Appendix A Inhomogeneous terms from modified gravity
Let us consider the possibility to obtain the inhomogeneous EOS from the modified gravity. As an illustrative example, the following action is considered:
\\[S=\\int d^{4}x\\sqrt{-g}\\left(\\frac{R}{2\\kappa^{2}}+{\\cal L}_{\\rm matter}+f(R)\\right). \\tag{100}\\]
Here \\(f(R)\\) can be an arbitrary function of the scalar curvature \\(R\\) and \\({\\cal L}_{\\rm matter}\\) is the Lagrangian for the matter. In the FRW universe, the gravitational equations are:
\\[0 = -\\frac{3}{\\kappa^{2}}H^{2}+\\rho-f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{101}\\] \\[+6\\left(\\dot{H}+H^{2}-H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\,\\] \\[0 = \\frac{1}{\\kappa^{2}}\\left(2\\dot{H}+3H^{2}\\right)+p+f\\left(R=6 \\dot{H}+12H^{2}\\right)\\] (102) \\[+2\\left(-\\dot{H}-3H^{2}+\\frac{d^{2}}{dt^{2}}+2H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\.\\]
Here \\(\\rho\\) and \\(p\\) are the energy density and the pressure coming from \\({\\cal L}_{\\rm matter}\\). They may satisfy the equation of state like \\(p=w\\rho\\). One may now define the effective energy density \\(\\tilde{\\rho}\\) and \\(\\tilde{p}\\) by
\\[\\tilde{\\rho} \\equiv \\rho-f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{103}\\] \\[+6\\left(\\dot{H}+H^{2}-H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\,\\] \\[\\tilde{p} = p+f\\left(R=6\\dot{H}+12H^{2}\\right)\\] (104) \\[+2\\left(-\\dot{H}-3H^{2}+\\frac{d^{2}}{dt^{2}}+2H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\.\\]
Thus, it follows
\\[\\tilde{p} = w\\tilde{\\rho}+(1+w)f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{105}\\] \\[+2\\left(\\left(-1-3w\\right)\\dot{H}-3\\left(1+w\\right)H^{2}+\\frac{d ^{2}}{dt^{2}}\\right.\\] \\[+\\left(2+3w\\right)H\\frac{d}{dt}\\right)f^{\\prime}\\left(R=6\\dot{H }+12H^{2}\\right)\\.\\]
In the situation where the derivatives of \\(H\\) can be neglected, as \\(\\dot{H}\\ll H^{2}\\) or \\(\\ddot{H}\\ll H^{3}\\), we find
\\[\\tilde{p} \\sim w\\tilde{\\rho}+G(H)\\,\\] \\[G(H) \\equiv \\left(1+w\\right)f\\left(R=12H^{2}\\right) \\tag{106}\\] \\[-6\\left(1+w\\right)H^{2}f^{\\prime}\\left(R=12H^{2}\\right)\\.\\]
Typically \\(H\\) has a form like \\(H\\sim h_{0}/\\left(t-t_{1}\\right)\\) or \\(H\\sim h_{0}/\\left(t_{2}-t\\right)\\), with \\(h_{0}=2/\\left(3(w+1)\\right)\\), corresponding to (6). Hence, the condition \\(\\dot{H}\\ll H^{2}\\) or \\(\\ddot{H}\\ll H^{3}\\) requires \\(h_{0}\\gg 1\\), which shows \\(w\\sim-1\\) as in the modern universe. This supports our observation that the inhomogeneous terms may be effective ones predicted by the modification of gravity.
The modification of the EOS by \\(G(H)\\) terms might come also from the braneworld scenario. Indeed, the single brane model is described by the following simple action
\\[S = \\frac{M_{\\rm Pl}^{2}}{r_{c}}\\int d^{4}xdy\\sqrt{-g^{(5)}}R^{(5)} \\tag{107}\\] \\[+\\int d^{4}x\\sqrt{-g}\\left(M_{\\rm Pl}^{2}R+{\\cal L}_{\\rm matter} \\right)\\.\\]
Here \\(M_{\\rm Pl}^{2}=1/8\\pi G\\), \\(y\\) is the coordinate of the extra dimension, and \\({\\cal L}_{\\rm matter}\\) is the Lagrangian density of the matters on the brane. The five-dimensional quantities are denoted by suffix \"(5)\". In ref.[18] it has been shown that the FRW equation for 4d brane universe could be given by
\\[\\frac{3}{\\kappa^{2}}\\left(H^{2}\\pm\\frac{H}{r_{c}}\\right)=\\rho. \\tag{108}\\]
Here \\(\\rho\\) is the matter energy density coming from \\({\\cal L}_{\\rm matter}\\). More general case is considered in ref.[19] where the FRW equation is modified as
\\[\\frac{3}{\\kappa^{2}}\\left(H^{2}-\\frac{H^{\\alpha}}{r_{c}^{2-\\alpha}}\\right)= \\rho. \\tag{109}\\]
Here \\(\\alpha\\) is a constant. One may assume that the matter energy density \\(\\rho\\) satisfies the energy conservation law as in (2). Then from (109), we find
\\[-\\frac{2}{\\kappa^{2}}\\left(1-\\frac{\\alpha H^{\\alpha-2}}{2r_{c}^{2-\\alpha}} \\right)\\dot{H}=\\rho+p. \\tag{110}\\]
By comparing (109) with the first FRW equation (5) and (110) with the second FRW equation (45), one may define the effective energy density \\(\\tilde{\\rho}\\) and pressure \\(\\tilde{p}\\) as
\\[\\tilde{\\rho}\\equiv\\rho+\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}\\,\\quad\\tilde{p}\\equiv p-\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}-\\frac{\\alpha H^{\\alpha-2}\\dot{H}}{\\kappa^{2}r_{c}^{2-\\alpha}}\\, \\tag{111}\\]
They satisfy the first (5) and second (45) FRW equations:
\\[\\frac{3}{\\kappa^{2}}H^{2}=\\tilde{\\rho}\\,\\quad-\\frac{2}{\\kappa^{2}}\\dot{H}=\\tilde{\\rho}+\\tilde{p}. \\tag{112}\\]
If it is also assumed that the matter energy density \\(\\rho\\) and the matter pressure \\(p\\) satisfy an EOS of the form \\(p=w\\rho\\), the effective EOS for \\(\\tilde{\\rho}\\) and \\(\\tilde{p}\\) is given by
\\[\\tilde{p}=w\\tilde{\\rho}-(1+w)\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}-\\frac{\\alpha H^{\\alpha-2}\\dot{H}}{\\kappa^{2}r_{c}^{2-\\alpha}}. \\tag{113}\\]
Especially if one can neglect \\(\\dot{H}\\), it follows
\\[\\tilde{p}\\sim w\\tilde{\\rho}-(1+w)\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}. \\tag{114}\\]
This shows that the brane-world scenario may also suggest various forms of inhomogeneous modification for the effective EOS of matter on the brane.
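For instance (a special case written out here for illustration), setting \\(\\alpha=1\\), which corresponds to the brane equation (108), the inhomogeneous correction becomes simply linear in the Hubble rate,
\\[\\tilde{p}\\sim w\\tilde{\\rho}-\\left(1+w\\right)\\frac{3H}{\\kappa^{2}r_{c}}\\,\\]
i.e., an effective term \\(G(H)\\propto H\\) of the type considered in the main text.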
## References
* (1) R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003) [arXiv:astro-ph/0302506]; B. McInnes, JHEP **0208**, 029 (2002) [arXiv:hep-th/0112066]; hep-th/0502209; V. Faraoni, Int. J. Mod. Phys. D **11**, 471 (2002); A. E. Schulz, Martin J. White, Phys. Rev. D **64**, 043514 (2001); S. Nojiri and S. D. Odintsov, Phys. Lett. B **562**, 147 (2003) [arXiv:hep-th/0303117]; Phys. Lett. B **571**, 1 (2003) [arXiv:hep-th/0306212]; P. Singh, M. Sami and N. Dadhich, arXiv:hep-th/0305110; P. Gonzalez-Diaz, Phys. Lett. **B586**, 1 (2004) [arXiv:astro-ph/0312579]; hep-th/0408225; H. Stefancic, Eur. Phys. J. C **36**, 523 (2004) [arXiv:astro-ph/0312484]; M. Sami and A. Toporensky, Mod. Phys. Lett. **A19**, 1509 (2004) [arXiv:gr-qc/0312009]; X. Meng and P. Wang, arXiv:hep-ph/0311070; Z. Guo, Y. Piao and Y. Zhang, arXiv:astro-ph/0404225; S. M. Carroll, A. De Felice and M. Trodden, arXiv:astro-ph/0408081; C. Csaki, N. Kaloper and J. Terning, arXiv:astro-ph/0409596; S. Tsujikawa and M. Sami, arXiv:hep-th/0409212; P. Gonzales-Diaz and C. Siguenza, Nucl. Phys. **B697**, 363 (2004) [arXiv:astro-ph/0407421]; L. P. Chimento and R. Lazkoz, Mod. Phys. Lett. **A19**, 2479 (2004) [arXiv:gr-qc/0405020]; gr-qc/0307111; J. Hao and X. Li, arXiv:astro-ph/0404154; G. Calcagni, Phys. Rev. D **71**, 023511 (2005) [arXiv:gr-qc/0410027]; P. Wu and H. Yu, arXiv:astro-ph/0407424; J. Lima and J. S. Alcaniz, arXiv:astro-ph/0402265; S. Nesseris and L. Perivolaropoulos, Phys. Rev. D **70**, 123529 (2004) [arXiv:astro-ph/0410309]; M. Bento, O. Bertolami, N. Santos and A. Sen, arXiv:astro-ph/0412638; P. Scherrer, arXiv:astro-ph/0410508; Z. Guo,Y. Piao, X. Zhang and Y. Zhang, Phys. Lett. B **608** 177 (2005) [arXiv:astro-ph/0410654]; E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 043539 (2004) [arXiv:hep-th/0405034]; E. Babichev, V. Dokuchaev and Yu. Eroshenko, arXiv:astro-ph/0407190; S. Sushkov, arXiv:gr-qc/0502084; K. Bronnikov, arXiv:gr-qc/0410119; L. Perivolaropoulos, arXiv:astro-ph/0412308; A. Vikman, Phys. Rev. D **71**, 023515 (2005) [arXiv:astro-ph/0407107]; X. Zhang, H. Li, Y. Piao and X. Zhang, arXiv:astro-ph/ 0501652; M. Bouhmadi-Lopez and J. Jimenez-Madrid, arXiv:astro-ph/0404540; Y. Wei, arXiv:gr-qc/0410050; gr-qc/0502077; S. K. Srivastava, arXiv:hep-th/0411630; V. K. Onemli and R. Woodard, arXiv:gr-qc/0406098; M. Dabrowski and T. Stachowiak, arXiv:hep-th/0411199. I. Ya. Arefeva, A. S. Koshelev and S. Yu. Vernov, arXiv:astro-ph/0412638; E. Elizalde, S. Nojiri, S. D. Odintsov and P. Wang, Phys. Rev. D **71**, 103504 (2005) [arXiv:hep-th/0502082]; V. Sahni, arXiv:astro-ph/0502032; H. Wei, R.-G. Cai and D. Zeng, arXiv:hep-th/0501160; R. Curbelo, T. Gonzalez and I. Quiros, arXiv:astro-ph/0502141; B. Gumjudgia, T. Naskar, M. Sami and S. Tsujikawa, arXiv:hep-th/0502191; F. Lobo, arXiv:gr-qc/0502099; R. Lazkoz, S. Nesseris and L. Perivolaropoulos, arXiv:astro-ph/0503230; H. Lu, Z. Huang and W. Fang, arXiv:hep-th/0504038. X. Zhang, arXiv:astro-ph/0501160; F. Bauer, arXiv:gr-qc/0501078; A. Anisimov, E. Babichev, A. Vikman, arXiv:astro-ph/0504560; J. Sola and H. Stefancic, arXiv:astro-ph/0505133; A. Andrianov, F. Cannata and A. Kamenshchik, arXiv:gr-qc/0505087.
* (3) S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 103522 (2004) [arXiv:hep-th/0408170].
* (4) S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D **71**, 063004 (2005) [arXiv:hep-th/0501025].
* (5) R. Caldwell, Phys. Lett. B **545**, 23 (2002).
* (6) J. D. Barrow, Class. Quant. Grav. **21**, L79 (2004) [arXiv:gr-qc/0403084]; S. Nojiri, S. D. Odintsov, Phys. Lett. B **595**, 1 (2004), [arXiv:hep-th/0405078]; J. D. Barrow, Class. Quant. Grav. **21**, 5619 (2004) [arXiv:gr-qc/0409062]; M. C. B. Abdalla, S. Nojiri and S. D. Odintsov, Class. Quant. Grav. **22**, L35 (2005), [arXiv:hep-th/0409177]; S. Cotsakis and I. Klaoudatou, arXiv:gr-qc/0409022; V. Sahni and Yu. Shtanov, JCAP **0311**, 014 (2003) [arXiv:astro-ph/0202346];K. Lake, Class. Quant. Grav. **21**, L129 (2004) [arXiv:gr-qc/0407107]; M. Dabrowski, arXiv:gr-qc/0410033; L. Fernandez-Jambrina and R. Lazkoz, Phys. Rev. D **70**, 121503 (2004) [arXiv:gr-qc/0410124]; J. D. Barrow and C. Tsagas, arXiv:gr-qc/0411045.
* (7) S. Nojiri and S. D. Odintsov, Phys. Lett. B **599**, 137 (2004) [arXiv:astro-ph/0403622]; G. Allemandi, A. Borowiec, M. Francaviglia and S. D. Odintsov, arXiv:gr-qc/0504057.
* (8) S. Nojiri, S. D. Odintsov and M. Sasaki, arXiv:hep-th/0504052; M. Sami, A. Toporensky, P. Trejakov and S. Tsujikawa, arXiv:hep-th/0504154.
* (9) I. Brevik and O. Gorbunova, arXiv:gr-qc/0504001.
* (10) I. Brevik, arXiv:gr-qc/0404095; O. Gron, Astrophys. Space Sci. **173**, 191 (1990); S. Weinberg, _Gravitation and Cosmology_, John Wiley& Sons, 1972.
* (11) J. D. Barrow, Phys. Lett. B **180**, 335 (1986); Nucl. Phys. B **310**, 743 (1988).
* (12) M. Szydlowski, W. Godlowski and R. Wojtak, arXiv:astro-ph/0505202.
* (13) H. Stefancic, arXiv:astro-ph/0504518.
* (14) H. Stefancic, arXiv:astro-ph/0411630.
* (15) L. Amendola, Phys. Rev. D**62**, 043511 (2000); W. Zimdahl, D. Pavon and L. P. Chimento, Phys. Lett. B **521**, 133 (2001); G. Mangano, G. Miele and V. Pettorino, Mod. Phys. Lett. A **18**, 831(2003); G. Farrar and P. J. E. Peebles, Astrophys. J. **604**, 1 (2004); S. del Campo, R. Herrera and D. Pavon, Phys. Rev. D **70**, 043540 (2004); R.-G. Cai and A. Wang, JCAP **0503**, 002 (2005); D. Pavon and W. Zimdahl, arXiv:gr-qc/0505020; L. Chimento and D. Pavon, gr-qc/0505096.
* (16) M. Giovannini, arXiv:gr-qc/0504132; arXiv:astro-ph/0504655.
* (17) I. Brevik, S. Nojiri, S. D. Odintsov, L. Vanzo, Phys. Rev. D **70**, 043520 (2004) [arXiv:hep-th/0401073].
* (18) C. Deffayet, G. Dvali and G. Gabadadze, Phys. Rev. D **65**, 044023 [arXiv:astro-ph/0105068].
* (19) G. Dvali and M. S. Turner, arXiv:astro-ph/0301510.
pacs: 98.70.Vc
Kaushlendra Pandey, Abhishek Gupta
The authors are with the Modern Wireless Networks Group, Electrical Engineering at Indian Institute of Technology Kanpur, Kanpur (India) 208016 (Email:{kpandey.gkrabhi}@iitk.ac.in).
## I Introduction
Several fire propagation models were studied in [12] to understand the dynamics of wildfire propagation. The two primary approaches to model forest fire propagation are the raster-based approach and the vector-based approach [13]. The raster-based approach assumes that the fire propagates from cell to cell under certain propagating conditions. The vector-based approach assumes that fire grows according to a certain geometrical shape which can expand and shift with time. A few of the vector-based fire propagation models were introduced in [14]. In [15], it was shown that fire propagates according to an expanding circular shape in a homogeneous forest fire with no wind. The elliptical and circular fire front propagation models were discussed in [16]. The work [17] studied the impact of the wind on the growth of the fire. The coverage performance of a random WSN to sense a time-evolving event has not been studied in the previous work, which is the main focus of our paper.
In this paper, we consider a random wireless fire sensor network (WFSN) deployed in a forest to sense a fire event. We develop an analytical framework to model the propagation of wildfire with time in the presence of wind and derive the performance of the WFSN in terms of the fire-sensing probability as a function of the time passed since the start of the fire. We also characterize the fire detection probability and the critical sensor density required to detect a fire before it becomes critical/uncontrollable. We further investigate the impact of wind velocity and show that a larger wind velocity may not necessarily imply the requirement of a denser deployment.
Fig. 1: Illustration showing a network of wireless fire sensors deployed over the forest. Each sensor has a random sensing range \\(r\\). A fire started at a point grows into the fire-region modeled by the set \\(\\mathcal{K}(t)\\) at time \\(t\\).
## II System model
This paper analyzes the early detection of a forest fire, before it becomes uncontrollable, with the help of a randomly deployed network of wireless fire sensors. A list of symbols used in this paper is shown in Table I.
We consider that the nodes of the WFSN are deployed in the \\(2d\\) space \\(\\mathbb{R}^{2}\\). Each sensor has a sensing region around it, which denotes the region this sensor can sense for fire. We model the complete network of wireless sensors (locations and sensing regions of sensors) by a Boolean model \\(\\Psi\\). In this model, the locations of wireless sensors are modeled as a PPP, and each sensor is assumed to have an independent and identically distributed (iid) sensing zone around it.
We model the locations of sensors by the PPP \\(\\Phi\\) with density \\(\\lambda\\), which represents the number of sensors deployed per unit area of the forest. Let \\(x_{i}\\) denote the location of the \\(i\\)th wireless sensor. We represent the sensing zone of the \\(i\\)th sensor as a ball (\\(\\mathcal{B}(x_{i},r_{i})\\)) of radius \\(r_{i}\\) centered at \\(x_{i}\\). Here, \\(r_{i}\\) is the sensing radius of the \\(i\\)th sensor and is assumed to be an iid random variable. Let \\(\\mathsf{S}_{i}\\) denote \\(\\mathcal{B}(0,r_{i})\\). We denote the \\(i\\)th sensor by the tuple (\\(x_{i},r_{i}\\)), i.e., the \\(i\\)th sensor located at \\(x_{i}\\) with sensing radius \\(r_{i}\\).
To model \\(r_{i}\\), we consider the hybrid sensing model, which is a combination of the disk sensing model and the exponential model [18].
In this hybrid model, the total sensing range of a sensor \\(x\\) is modeled as the sum of a fixed sensing range \\(r_{\\rm in}\\) and a random variable \\(y\\):
\\[r=r_{\\rm in}+y. \\tag{1}\\]
where \\(y\\) is a truncated exponential random variable between \\(0\\) to \\(R^{\\prime}\\) with probability of density of function (pdf):
\\[f(y)=\\begin{cases}\\frac{e^{-y}}{1-e^{-R^{\\prime}}}&\\text{if }0<y\\leq R^{\\prime} \\\\ 0&\\text{otherwise.}\\end{cases} \\tag{2}\\]
Here \\(R^{\\prime}=r_{\\rm out}-r_{\\rm in}\\), with \\(r_{\\rm out}\\) being the maximum sensing range of a sensor. The expected values of \\(r\\) and \\(r^{2}\\) are given as
\\[\\mathbb{E}[y] =\\left(1-\\frac{R^{\\prime}e^{-R^{\\prime}}}{1-e^{-R^{\\prime}}}\\right)\\] \\[\\mathbb{E}[r] =\\frac{1+r_{\\rm in}-(1+r_{\\rm out})e^{-(r_{\\rm out}-r_{\\rm in})}}{1-e^{-(r_{\\rm out}-r_{\\rm in})}} \\tag{3}\\] \\[\\mathbb{E}[r^{2}] =r_{\\rm in}^{2}+2\\mathbb{E}[y](1+r_{\\rm in})-\\frac{R^{\\prime 2}e^{-R^{\\prime}}}{1-e^{-R^{\\prime}}}. \\tag{4}\\]
Expressions (3) and (4) will later be used in calculating the Minkowski addition.
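A short numerical sketch (added for illustration; the values \\(r_{\\rm in}=2\\) m and \\(r_{\\rm out}=4\\) m follow Table II) that evaluates (3) and (4) and cross-checks them by Monte Carlo sampling of the truncated-exponential sensing range:

```python
import numpy as np

r_in, r_out = 2.0, 4.0            # fixed and maximum sensing range (Table II)
Rp = r_out - r_in                  # R' in Eq. (2)

# Closed forms, Eqs. (3)-(4)
E_y  = 1.0 - Rp * np.exp(-Rp) / (1.0 - np.exp(-Rp))
E_r  = r_in + E_y
E_r2 = r_in**2 + 2.0 * E_y * (1.0 + r_in) - Rp**2 * np.exp(-Rp) / (1.0 - np.exp(-Rp))

# Monte Carlo cross-check: sample y from the truncated exponential on (0, R']
rng = np.random.default_rng(0)
u = rng.random(1_000_000)
y = -np.log(1.0 - u * (1.0 - np.exp(-Rp)))   # inverse-CDF sampling
r = r_in + y

print("E[r]   closed form %.4f, Monte Carlo %.4f" % (E_r,  r.mean()))
print("E[r^2] closed form %.4f, Monte Carlo %.4f" % (E_r2, (r ** 2).mean()))
```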
Now, the total occupied space \\(\\xi\\) by the sensors \\(\\Psi\\) is a subset of \\(\\mathbb{R}^{2}\\) and is represented as
\\[\\xi=\\bigcup_{i\\in\\mathbb{N}}x_{i}+\\mathsf{S}_{i}. \\tag{5}\\]
## III Modeling of Time-Evolution of Wildfire
We model the fire-front including the areas affected by it at time \\(t\\) as a set \\(\\mathcal{K}(t)\\) where \\(t=0\\) denotes the start of the fire. The dependence on time \\(t\\) represents the dynamic nature of the fire-size. \\(\\mathcal{K}(t)\\) can be assumed to be convex [19]. Let us define the critical fire-area \\(A_{\\rm cr}\\) as the area of \\(\\mathcal{K}(t)\\) before the fire turns critical (uncontrollable or difficult to manage). The time at which the fire becomes critical is termed as critical time \\(t_{\\rm cr}\\)
\\[t_{\\rm cr}:\\mathcal{A}(\\mathcal{K}(t_{\\rm cr}))=A_{\\rm cr}. \\tag{6}\\]
In the past literature, various models for fire propagation have been used, considering the impact of the local environmental conditions and the velocity of the wind. In this section, we consider three specific models motivated by the past literature [19].
\\begin{table}
\\begin{tabular}{|l|l|} \\hline
**Symbol** & **Definition** \\\\ \\hline \\(\\Phi\\) & Homogeneous PPP which models locations of sensor nodes in the network. \\\\ \\hline \\(\\lambda\\) & Density of wireless sensor network per unit area. \\\\ \\hline \\(r_{i}\\) & The (random) sensing radius of \\(i\\)th sensor. \\\\ \\hline \\(x_{i}\\) & The location of \\(i\\)th sensor. \\\\ \\hline \\(\\mathsf{S}_{i}\\) & \\(\\mathcal{B}(0,r_{i})\\) \\\\ \\hline \\(\\xi\\) & Combined covered area of all sensors. \\\\ \\hline \\(p(t)\\) & Sensing probability of set \\(\\mathcal{K}\\) at time \\(t\\). \\\\ \\hline \\(\\mathcal{B}(x,r)\\) & Ball of radius \\(r\\) centred at \\(x\\). \\\\ \\hline \\(\\oplus\\) & Minkowski addition. \\\\ \\hline \\(r_{\\rm in}\\) & Fixed sensing range of a sensor node. \\\\ \\hline \\(r_{\\rm out}\\) & Maximum sensing range of a sensor node. \\\\ \\hline \\(\\mathcal{A}(.)\\) & Area of a set (.). \\\\ \\hline \\(\\ell(.)\\) & Perimeter of a set (.). \\\\ \\hline \\(A_{\\rm cr},t_{\\rm cr}\\) & The critical area of fire and critical time to reach it. \\\\ \\hline \\(p_{\\rm f}\\) & Fire detection probability of fire before it goes critical. \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: Notation Table
### _Elliptical Model_
In [20, 21, 22], the authors developed a generalized anisotropic propagation model with a non-local radiation term, and proposed the elliptical model as a candidate model. This model has also been validated with experimental data and simulations, and it was found that the fire may not have a steady-state rate initially, but after some time it grows with an elliptical geometric shape.
Motivated by these results, we model the dynamic fire ignited on a point as an elliptical shape at any time \\(t\\) with the major axis aligned along the direction of air. At time \\(t\\), the fire region \\(\\mathcal{K}(t)\\) (see Fig. 2a) is given as:
\\[\\mathcal{K}(t)=\\left\\{x,y:\\begin{array}{c}x=t(g+f\\cos\\phi)\\\\ y=t(h\\sin\\phi)\\end{array},0\\leq\\phi\\leq 2\\pi\\right\\} \\tag{7}\\]
where \\(f\\), \\(g\\) and \\(h\\) are homogeneous to velocity and are determined by experimental data. It can be seen that the major axis of \\(\\mathcal{K}(t)\\) is \\(a(t)=ft\\), minor axis is \\(b(t)=ht\\) and the center is \\((gt,0)\\).
The different values of \\(a(t)\\) and \\(b(t)\\) under the different wind velocity are discussed in the result section of [20] and it can be concluded that variations in the major and minor axis of the fire region \\(\\mathcal{K}(t)\\) can be modeled as:
\\[a(t)=\\alpha t(1+\\frac{v_{x}}{V}) \\tag{8}\\] \\[b(t)=\\alpha t(1+\\frac{v_{y}}{V}). \\tag{9}\\]
where \\(v_{x}\\) and \\(v_{y}\\) are the wind velocities in the \\(x\\) and \\(y\\) directions, \\(V\\) is a scaling factor, and \\(\\alpha\\) is the fire-front velocity in the absence of wind, which depends on the other environmental conditions and the forest density. Without loss of generality, we assume that there is no wind in the \\(y\\)-direction (\\(v_{y}=0\\)), which gives the following expressions for the major and minor axes:
\\[a(t)\\approx\\alpha t(1+\\frac{v_{x}}{V}) \\tag{10}\\] \\[b(t)\\approx\\alpha t.\\]
The area \\(\\mathcal{A}(.)\\) and the perimeter \\(\\ell(.)\\) of the set \\(\\mathcal{K}(t)\\) under elliptical model is given as
\\[\\mathcal{A}(\\mathcal{K}(t))=\\pi.a(t)b(t) \\tag{11}\\] \\[\\ell(\\mathcal{K}(t))\\approx\\pi[3(a(t)+b(t))-\\sqrt{(3a(t)+b(t))(a( t)+3b(t))}]. \\tag{12}\\]
### _Circular Model_
In the absence of wind (\\(v_{x}=v_{y}=0\\)) the elliptical model converges to a circular model. Intuitively, we can also see that under the uniform condition such as vegetation and humidity and absence of wind, the fire front will propagate circularly.
Fig. 2b shows the propagation of a fire ignited at point \\(O\\). As time grows, the radius \\(r_{\\mathcal{K}}(t)\\) of the fire-affected area is given by
\\[r_{\\mathcal{K}}(t)=\\alpha t. \\tag{13}\\]
As discussed earlier, \\(\\alpha\\) is the fire velocity in absence of wind which is consistent with the definition.
### _Piriform Model_
The other simple fire propagation model is the pear-shaped or piriform propagation model (see Fig. 2c). It has been observed that if the wind is dominant in a particular direction, the fire envelope attains a piriform shape [23]. Under the piriform model, the fire envelope \\(\\mathcal{K}(t)\\) is given as:
\\[\\mathcal{K}(t)=\\left\\{x,y:\\begin{array}{l}x=a(t)(1+\\sin\\phi)\\\\ y=b(t)\\cos\\phi(1+\\sin\\phi)\\end{array},0\\leq\\phi\\leq 2\\pi\\right\\}. \\tag{14}\\]
Here, \\(a(t)\\) and \\(b(t)\\) are the two axes as given in (10):
Area and perimeter of \\(\\mathcal{K}(t)\\) associated with the Periform model is given as:
\\[\\mathcal{A}(\\mathcal{K}(t)) =\\pi a(t)b(t). \\tag{15}\\] \\[\\ell(\\mathcal{K}(t)) =\\int\\limits_{0}^{2\\pi}\\sqrt{a^{2}(t)\\cos^{2}\\theta+b^{2}(t)(\\cos 2\\theta-\\sin\\theta)^{2}}\\,\\mathrm{d}\\theta \\tag{16}\\]
The perimeter of the curve \\(\\ell(\\mathcal{K}(t))\\) can be calculated by performing numerical integration.
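A small sketch (added here for illustration) that evaluates the piriform area (15) and perimeter (16) by numerical quadrature; the derivatives in the integrand follow from the parametrization (14), and the values of \\(\\alpha\\), \\(t\\) and the wind ratio \\(v_{x}/V\\) below are illustrative assumptions:

```python
import numpy as np

def piriform_area_perimeter(a, b, n=200_000):
    """Area and perimeter of the piriform fire front of Eq. (14):
    x = a (1 + sin p), y = b cos p (1 + sin p), 0 <= p <= 2*pi."""
    p = np.linspace(0.0, 2.0 * np.pi, n)
    dx = a * np.cos(p)                       # dx/dp
    dy = b * (np.cos(2.0 * p) - np.sin(p))   # dy/dp
    perimeter = np.trapz(np.sqrt(dx**2 + dy**2), p)   # Eq. (16)
    area = np.pi * a * b                              # Eq. (15)
    return area, perimeter

# Illustrative values: t = 5 s, alpha = 0.33 m/s (Table II), v_x/V = 0.3 (assumed)
alpha, t, wind_ratio = 0.33, 5.0, 0.3
a, b = alpha * t * (1.0 + wind_ratio), alpha * t
print(piriform_area_perimeter(a, b))
```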
## IV Coverage Analysis
In this section, we will compute the sensing performance of the considered WFSN in terms of fire detection probability. The fire detection probability \\(p_{\\mathrm{f}}\\) of the system is defined as the probability that fire is detected by at least one sensor of the sensor network before the fire turns critical. Recall that an event of fire occurrence is said to be not sensed at time \\(t\\) if:
\\[\\xi\\cap\\mathcal{K}(t)=\\phi. \\tag{17}\\]Therefore, the fire detection probability is equal to the probability that any part of the fire region falls in the sensing region of at least one sensor at critical time \\(t_{\\rm cr}\\). Hence, the fire detection probability is given as
\\[p_{\\rm f}=\\mathbb{P}(\\xi\\cap\\mathcal{K}(t_{\\rm cr})\\neq\\phi). \\tag{18}\\]
Fig. 2: Various propagation models of wildfire in a forest. (a) In the presence of wind: elliptical model. (b) In the absence of wind: circular model (c) In the presence of dominant wind: piriform model
### _Fire Sensing Probability at Time \\(t\\)_
Let us first compute the probability that a fire event is not sensed at time \\(t\\) which is given by:
\\[\\mathcal{G}(t)= \\mathbb{P}(\\xi\\cap\\mathcal{K}(t)=\\phi)\\] \\[= \\exp(-\\lambda\\mathbb{E}(\\mathcal{A}(\\hat{\\mathsf{S}}\\oplus \\mathcal{K}(t))) \\tag{19}\\]
where \\(\\hat{\\mathsf{S}}\\) is the mirror image of \\(\\mathsf{S}\\), \\(\\oplus\\) is the Minkowski addition [10]. Therefore, the probability of the set \\(\\mathcal{K}(t)\\) being covered at time instant \\(t\\) is:
\\[p(t)= 1-\\mathcal{G}(t).\\] \\[= 1-\\exp(-\\underbrace{\\lambda\\mathbb{E}[\\mathcal{A}(\\mathcal{K}(t )\\oplus\\hat{\\mathsf{S}}])}_{N(\\mathcal{K}(t))}. \\tag{20}\\]
Note that \\(N(\\mathcal{K}(t))\\) represents the mean number of sensors that have detected the fire in their sensing range. In 2-dimensional case, the area of the Minkowski addition of the set \\(\\mathcal{K}(t)\\) with \\(\\hat{S}=\\mathcal{B}(0,r)\\) can be evaluated by Steiner formula [24]:
\\[\\mathcal{A}(\\mathcal{K}(t)\\oplus\\mathcal{B}(0,r))=\\mathcal{A}( \\mathcal{K}(t))+\\ell(\\mathcal{K}(t))r+\\pi r^{2}. \\tag{21}\\]
Recall that \\(\\ell(\\mathcal{K}(t))\\) is the boundary length of set \\(\\mathcal{K}(t)\\) and \\(\\mathcal{A}(\\mathcal{K}(t))\\) is the area of \\(\\mathcal{K}(t)\\).
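As a sketch (added here), the sensing probability (20) for any of the propagation models follows directly from the Steiner formula (21). The helper below takes the fire area, fire perimeter, sensor density and the moments \\(\\mathbb{E}[r]\\), \\(\\mathbb{E}[r^{2}]\\) as inputs; the circular-model example uses the Table II values, with the density \\(\\lambda\\) chosen for illustration:

```python
import numpy as np

def sensing_probability(area_K, perim_K, lam, E_r, E_r2):
    """Eq. (20) with the Steiner formula (21):
    p(t) = 1 - exp(-lam * (A(K) + l(K) E[r] + pi E[r^2]))."""
    mean_sensors = lam * (area_K + perim_K * E_r + np.pi * E_r2)
    return 1.0 - np.exp(-mean_sensors)

# Example: circular fire front at t = 3 s with alpha = 0.33 m/s (Table II)
alpha, t, lam = 0.33, 3.0, 0.05          # lam in sensors per m^2 (illustrative)
A = np.pi * (alpha * t) ** 2
L = 2.0 * np.pi * alpha * t
print(sensing_probability(A, L, lam, E_r=2.68, E_r2=5.49))
```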
### _Fire Detection Probability_
Now, the fire detection probability can be computed as
\\[p_{\\rm f}=1-\\exp(-\\lambda\\mathbb{E}[\\mathcal{A}(\\mathcal{K}(t_{ \\rm cr})\\oplus\\hat{\\mathsf{S}}]). \\tag{22}\\]
### _Critical Sensor Density_
The critical sensor density (\\(\\lambda_{\\rm cr}\\)) is defined as the density of sensors which can detect fire with probability \\(\\tau\\) before fire turns critical. The generalized expression for critical sensor density is given as follows:
\\[\\lambda_{\\rm cr}:\\tau=p_{\\rm f}=1-\\exp(-\\lambda_{\\rm cr}\\mathbb{ E}[\\mathcal{A}(\\mathcal{K}(t_{\\rm cr})\\oplus\\hat{\\mathsf{S}}]).\\]
This gives
\\[\\lambda_{\\rm cr}(\\tau)=\\frac{1}{\\mathbb{E}[\\mathcal{A}(\\mathcal{K} (t_{\\rm cr})\\oplus\\hat{\\mathsf{S}}]}\\log\\Big{(}\\frac{1}{1-\\tau}\\Big{)}. \\tag{23}\\]
We will now analyze the specific fire propagation models proposed in Section III.
### _Fire Detection Probability in the Absence of Wind (Circular Model)_
Substituting area and perimeter for circular model in (21), the mean number of sensors detecting the fire can be computed as
\\[N(\\mathcal{K}(t))=\\lambda\\mathbb{E}[\\pi(\\alpha t)^{2}+2\\pi\\alpha tr +\\pi r^{2}] \\tag{24}\\] \\[=\\lambda\\pi\\left[(\\alpha t)^{2}+2\\alpha t\\mathbb{E}[r]+\\mathbb{E }[r^{2}]\\right]. \\tag{25}\\]
Using (22), the fire detection probability is given as
\\[p_{\\text{f}}=1-\\exp\\left(-\\lambda\\pi\\left[(\\alpha t_{\\text{cr}})^{2}+2\\alpha t_{\\text{cr}}\\mathbb{E}[r]+\\mathbb{E}[r^{2}]\\right]\\right). \\tag{26}\\]
The critical time is given as:
\\[t_{\\text{cr}}\\leq\\frac{1}{\\alpha}\\sqrt{\\frac{A_{\\text{cr}}}{\\pi}}. \\tag{27}\\]
Using the value of \\(t_{\\text{cr}}\\) in (23), critical sensor density \\(\\lambda_{\\text{cr}}\\) is given as
\\[\\lambda_{\\text{cr}}(\\tau)= \\frac{1}{\\pi(\\alpha t_{\\text{cr}})^{2}+2\\pi\\alpha t_{\\text{cr}} \\mathbb{E}[r]+\\pi\\mathbb{E}[r^{2}]}\\ln\\Big{(}\\frac{1}{1-\\tau}\\Big{)} \\tag{28}\\] \\[= \\frac{1}{A_{\\text{cr}}+2\\sqrt{\\pi A_{\\text{cr}}}\\mathbb{E}[r]+ \\pi\\mathbb{E}[r^{2}]}\\ln\\Big{(}\\frac{1}{1-\\tau}\\Big{)}. \\tag{29}\\]
Therefore, any \\(\\lambda\\geq\\lambda_{\\text{cr}}\\) will provide the fire detection probability more than \\(\\tau\\).
Fig. 3: Fire sensing probability \\(p(t)\\) with respect to time for various sensor densities and fire propagation models. (a) Circular fire propagation model. A WSN with \\(\\lambda=10\\times 10^{-2}\\) Sensors/\\(m^{2}\\) provides a fire detection probability \\(p_{\\text{f}}\\) close to 100 \\(\\%\\). (b) Elliptical fire propagation model. For a WSN with a sensor density of 4 \\(\\times 10^{-2}\\) Sensors/\\(m^{2}\\), the initial sensing probability is less than 60\\(\\%\\), but after 4 seconds the sensing probability is above 80\\(\\%\\). (c) Piriform fire propagation model. For \\(\\lambda\\)= 3 \\(\\times 10^{-2}\\) Sensors/\\(m^{2}\\), which is less than in the elliptical case, the fire sensing probability is initially around 40\\(\\%\\) but reaches more than 80\\(\\%\\) within 4 seconds.
### _Fire Detection Probability in the Presence of Wind_
The mean number of sensor that can detect fire can be obtained using (11) and (12) and is given by as follows:
\\[N(\\mathcal{K}(t)) =\\lambda\\mathbb{E}[\\pi a(t)b(t)+\\pi r[3(a(t)+b(t))-\\sqrt{(3a(t)+b(t ))(a(t)+3b(t)))}]+\\pi r^{2}]\\] \\[=\\lambda\\pi\\mathbb{E}\\left[(\\alpha t)^{2}\\left(1+\\frac{v_{x}}{V} \\right)+r\\alpha t\\left[3\\left(2+\\frac{v_{x}}{V}\\right)-\\sqrt{(4+\\frac{v_{x}}{ V})(4+\\frac{3v_{x}}{V})}\\right]+r^{2}\\right]\\] \\[=\\lambda\\pi\\left[(\\alpha t)^{2}\\left(1+\\frac{v_{x}}{V}\\right)+ \\mathbb{E}[r]\\alpha t\\left[3\\left(2+\\frac{v_{x}}{V}\\right)-\\sqrt{(4+\\frac{v_{ x}}{V})(4+\\frac{3v_{x}}{V})}\\right]+\\mathbb{E}[r^{2}]\\right]. \\tag{30}\\]
The critical time \\(t_{\\rm cr}\\) is given as:
\\[t_{\\rm cr}\\leq \\frac{1}{\\alpha}\\sqrt{\\frac{A_{\\rm cr}}{\\pi(1+\\frac{v_{x}}{V})}}. \\tag{31}\\]
It is clear that critical time reduces in the presence of wind. Using (30) and (31) in (23), the critical sensor density can be computed as :
\\[\\lambda_{\\rm cr} =\\frac{1}{\\pi a(t_{\\rm cr})b(t_{\\rm cr})+\\ell(\\mathcal{K}(t_{\\rm cr}))\\mathbb{E}[r]+\\pi\\mathbb{E}[r^{2}]}\\log\\left(\\frac{1}{1-\\tau}\\right)\\] \\[=\\frac{\\log(\\frac{1}{1-\\tau})}{A_{\\rm cr}+\\sqrt{\\frac{\\pi A_{\\rm cr}}{1+\\frac{v_{x}}{V}}}\\left[3(2+\\frac{v_{x}}{V})-\\sqrt{(4+3\\frac{v_{x}}{V})(4+\\frac{v_{x}}{V})}\\right]\\mathbb{E}[r]+\\pi\\mathbb{E}[r^{2}]} \\tag{32}\\]
### _Fire Detection Probability for Piriform Model_
The critical time \\(t_{\\rm cr}\\) for piriform model is the same as the elliptical model. Now, the critical density is given as
\\[\\lambda_{\\rm cr}=\\frac{1}{A_{\\rm cr}+\\ell\\left(\\mathcal{K}\\left(\\frac{1}{ \\alpha}\\sqrt{\\frac{A_{\\rm cr}}{\\pi(1+\\frac{v_{x}}{V})}}\\right)\\right)\\mathbb{ E}[r]+\\pi\\mathbb{E}[r^{2}]}\\log\\Big{(}\\frac{1}{1-\\tau}\\Big{)}. \\tag{33}\\]
## V Numerical Results
In this section, we evaluate the fire detection probability and present some numerical results and insights for the models considered. The simulation parameters are listed in Table II.
**Impact of sensor density:** Fig. 3 shows the variation of the fire sensing probability \\(p(t)\\) with time \\(t\\) for three different scenarios: (a) in the absence of wind (circular fire propagation), (b) in the presence of wind (elliptical propagation), and (c) in the presence of dominant wind (piriform propagation). It can be seen that increasing the sensor density can significantly improve the static detection probability, which denotes the probability that a fire is detected right at its start. For example, a WSN with sensor density \\(\\lambda\\)=.05 Sensors/\\(m^{2}\\) can provide a static detection probability of \\(60\\%\\) in the absence of wind, i.e., there is a 60% chance that the start of the fire is detected immediately. After 3 seconds of the fire event, the detection probability is greater than 80\\(\\%\\). In the presence of wind, the impact of increasing the sensor density is less prominent. It can also be observed that a very large sensor density does not have much additional influence on the sensing probability; on the other hand, a moderate sensor density already provides a fairly good initial sensing probability which rapidly increases with time.
**Comparison of three scenarios:** Fig. 4 shows the comparison of the three scenarios. In the absence of any wind, the critical time (\\(t_{\\rm cr}\\)) is 7.6 s. The critical time in the presence of wind is 6.7 s, which is smaller than in the no-wind case. This is due to the faster spread of fire caused by the wind, which leaves an even smaller window to detect the fire. The critical time (\\(t_{\\rm cr}\\)) for piriform-type propagation is the same as for elliptical propagation. However, piriform-type propagation gives better coverage due to its larger perimeter-to-area ratio, making it easier for sensors to detect the fire in the same time.
**Impact of wind velocity on critical sensor density:** Fig. 5 shows the impact of wind velocity on the critical sensor density for the various propagation models. Recall that the critical sensor density corresponding to zero wind velocity refers to circular propagation. The critical sensor density is smaller for piriform-type propagation than for elliptical-type propagation of the fire.
\\begin{table}
\\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \\hline
**Parameter** & **Value** \\\\ \\hline Inner sensing range (\\(r_{\\rm in}\\)) & 2 meter \\\\ \\hline Outer sensing range (\\(r_{\\rm out}\\)) & 4 meter \\\\ \\hline \\(\\mathbb{E}[r]\\) and \\(\\mathbb{E}[r^{2}]\\) & 2.68 meter, 5.49 meter \\\\ \\hline Fire flame velocity (\\(\\alpha\\)) &.33 meter/sec. \\\\ \\hline Critical area (\\(A_{\\rm cr}\\)) & 20 m\\({}^{2}\\) \\\\ \\hline Wind velocity (\\(v_{x}\\)) in elliptical/piriform model & 3 m/s \\\\ \\hline Scaling factor (V) & 10 m/s \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: Numerical Parameters
This is consistent with the previous result. Another important observation concerns the impact of wind on the propagation of fire. In high winds, the critical time reduces, which is a critical concern. However, due to the high rate of fire spread, the fire detection probability also increases. It can be seen that the wind velocity can effectively help in the detection of wildfire. In the case of piriform-type propagation, a non-monotonic behavior of the critical density with respect to wind velocity can be observed.
Fig. 4: Comparative analysis between circular, elliptical and piriform models. In the presence of wind, the sensing probability is higher than in the no-wind case.
Fig. 5: Comparative analysis between the elliptical and piriform models: critical sensor density with respect to wind velocity for different values of the fire detection probability threshold (\(\tau\)). In high-wind areas, a lower sensor density is required to achieve the same fire detection probability threshold.
## VI Conclusions
In this paper, we have considered a WSN with fire sensors for early detection of forest fires. We presented an analytical framework based on the Boolean-Poisson model with circular, elliptical and piriform fire flame propagation. Using the framework, we computed the critical sensor density that needs to be deployed in the forest to ensure a certain minimum fire detection probability. We identified that, in the presence of wind, the critical time \(t_{\rm cr}\) to detect a fire decreases, but the fire sensing probability also increases in comparison to the case without wind.
## References
* [1] Satendra and A. D. Kaushik, _Forest Fire Disaster Management_. National Institute of Disaster Management, Ministry of Home Affairs, Government of India New Delhi, 2014.
* [2] R. Sabha, \"One hundred forty-ninth report on action taken by the department of science & technology on the recommendations contained in the one hundred fortieth report of the department-related parliamentary standing committee on science and technology, environment & forests on the demands for grants (2005-2006) of the department of science & technology,\" _Rajya Sabha Secretariat, New Delhi_, 2005.
* [3] G. S. Kasbekar, Y. Bejerano, and S. Sarkar, \"Lifetime and coverage guarantees through distributed coordinate-free sensor activation,\" _IEEE/ACM Trans. Netw._, vol. 19, no. 2, pp. 470-483, April 2011.
* [4] D. Simplot-Ryl, I. Stojmenovic, and J. Wu, \"Energy efficient backbone construction, broadcasting, and area coverage in sensor networks,\" _Handbook of Sensor Networks_, pp. 343-380, 2005.
* [5] H. Zhang and J. C. Hou, \"Maintaining sensing coverage and connectivity in large sensor networks,\" _Ad Hoc & Sensor Wireless Networks_, vol. 1, no. 1-2, pp. 89-124, 2005.
* [6] Z. Yun, X. Bai, D. Xuan, T. H. Lai, and W. Jia, \"Optimal deployment patterns for full coverage and \\(k\\)-connectivity (\\(k\\leq 6\\)) wireless sensor networks,\" _IEEE/ACM Trans. Netw._, vol. 18, no. 3, pp. 934-947, June 2010.
* [7] X. Bai, Z. Yun, D. Xuan, T. H. Lai, and W. Jia, \"Optimal patterns for four-connectivity and full coverage in wireless sensor networks,\" _IEEE Trans. Mob. Comput._, vol. 9, no. 3, pp. 435-448, March 2010.
* [8] S. He, X. Li, J. Chen, P. Cheng, Y. Sun, and D. Simplot-Ryl, \"EMD: Energy-efficient P2P message dissemination in delay-tolerant wireless sensor and actor networks,\" _IEEE J. Sel. Areas Commun._, vol. 31, no. 9, pp. 75-84, September 2013.
* [9] J. G. Andrews, A. K. Gupta, and H. S. Dhillon, \"A primer on cellular network analysis using stochastic geometry,\" _arXiv preprint arXiv:1604.03183_, 2016.
* [10] M. Haenggi, _Stochastic Geometry for Wireless Networks_. Cambridge University Press, 2012.
* [11] F. Baccelli and B. Blaszczyszyn, _Stochastic Geometry and Wireless Networks_, 2nd ed. NOW Publishers, 2009, vol. 1.
* [12] M. El Houssami, A. Lamorlette, D. Morvan, R. M. Hadden, and A. Simeoni, \"Framework for submodel improvement in wildfire modeling,\" _Combustion and Flame_, vol. 190, pp. 12-24, 2018.
* [13] A. L. Sullivan, \"Wildland surface fire spread modelling, 1990-2007. 3: Simulation and mathematical analogue models,\" _International Journal of Wildland Fire_, vol. 18, no. 4, pp. 387-403, 2009.
* [14] J. Coleman and A. Sullivan, "A real-time computer application for the prediction of fire spread across the Australian landscape," _SIMULATION_, vol. 67, no. 4, pp. 230-240, 1996. [Online]. Available: https://doi.org/10.1177/003754979606700402
* [15] R. Weber, "Analytical models for fire spread due to radiation," _Combustion and Flame_, vol. 78, no. 3-4, pp. 398-408, 1989.
* [16] M. A. Finney _et al._, _Fire Area Simulator-model development and evaluation_. US Department of Agriculture, Forest Service, Rocky Mountain Research Station Ogden, UT, 1998, vol. 3.
* [17] F. E. Fendell, \"UAVs for tracking the growth of large-area wildland fires,\" May 22 2018, US Patent 9,977,963.
* [18] S. Pudasaini, S. Moh, and S. Shinz, \"Stochastic coverage analysis of wireless sensor network with hybrid sensing model,\" in _Proc. International Conference on Advanced Communication Technology_, vol. 01, Feb 2009, pp. 549-553.
* 153, 2003.
* 694, 2008.
* [21] G. B. Peet, \"The shape of mild fires in jarrah forest,\" _Australian Forestry_, vol. 31, no. 2, pp. 121-127, 1967.
* [22] G. D. Richards, \"An elliptical growth model of forest fire fronts and its numerical solution,\" _International Journal for Numerical Methods in Engineering_, vol. 30, no. 6, pp. 1163-1179, 1990.
* [23] G. Ferguson, _Land on Fire: The New Reality of Wildfire in the West_. Timber Press, 2017.
* [24] S. N. Chiu, D. Stoyan, W. S. Kendall, and J. Mecke, _Stochastic geometry and its applications_. John Wiley & Sons, 2013. | We consider a new class of wireless sensor networks, called _Wireless sensor networks_, which are equipped with a single-antenna channel, and the channel is equipped with a single-antenna channel. We show that the WSN can be characterized in terms of its coverage _i.e._ the probability that the event is sensed by at least one node of the WSN. The coverage performance of a WSN with sensors having fixed disk sensing range is analyzed in [3]. Readers are advised to refer to [4] for an extensive literature survey discussing the coverage and connectivity analysis of WSNs. Maintenance of coverage and connectivity in a network by activating a minimum number of sensor nodes have been addressed in [5]. A study of deployment patterns of sensor node was performed in [6] to get full coverage and \\(k\\)-connectivity under different sensor placement schemes. Energy efficient optimal coverage and full connectivity was studied in [7, 8]. The deterministic deployment of sensor nodes may not be possible for forest applications where the terrains are not uniform. In such applications, random deployment of sensors can be assumed. Tools from Stochastic geometry provide a tractable framework to study the coverage of random networks including WSN [9]. Coverage performance of random WSNs was studied in [10, 11]. The main limitations of the above-mentioned work is assumption of the static nature of the event to be sensed.
arxiv-format/2403_11735v5.md | # LSKNet: A Foundation Lightweight Backbone for Remote Sensing
Yuxuan Li\\({}^{1}\\)
Xiang Li\\({}^{1,3}\\)
Corresponding Authors
Yimian Dai\\({}^{1}\\)
Qibin Hou\\({}^{1,3}\\)
Li Liu\\({}^{2}\\)
Yongxiang Liu\\({}^{2}\\)
Ming-Ming Cheng\\({}^{1,3}\\)
Jian Yang\\({}^{1}\\)
Corresponding Authors
## 1 Introduction
Remote sensing images present unique challenges for downstream tasks due to their complex nature, including high resolution, random orientation, large intraclass variation, multiscale scenes, and dense small objects. To tackle these challenges, extensive research has been conducted, focusing on various approaches such as feature ensemble techniques [1, 2, 3, 4] and large-scale pretraining [5, 6, 7] for classification. Additionally, methods addressing rotation variance [8, 9, 10] or employing new oriented box encodings [11, 12] have been proposed for object detection tasks. Furthermore, the integration of multi-scale feature fusion techniques [13, 14, 15, 16, 17, 18, 19] has been explored to enhance the performance of detection and segmentation tasks. With the rapid development of large models like SAM [20] and LLaVA [21], numerous works utilize the powerful general knowledge of these models for robust downstream task fine-tuning [22, 23], achieving remarkable performance.
Despite these advances, relatively few works have considered the strong prior knowledge of remote sensing images to build an efficient foundation model. Aerial images are typically captured at high resolutions from a bird's eye view. In particular, most objects in aerial images may be small and difficult to identify based on their appearance alone. Instead, recognizing these objects relies on their context, as the surrounding environment can provide valuable clues about their shape, orientation, and other characteristics. According to an analysis of the remote sensing data, we identify two important priors:
1. **Accurate recognition often requires a wide range of contextual information.** As illustrated in Fig. 1, the limited context used by object detectors in remote sensing images can often lead to incorrect classifications. Rather than their appearance, the context distinguishes the ship from the vehicle.
2. **The contextual information required for different objects is very different.** As shown in Fig. 2, the soccer field requires relatively less contextual information because of the unique distinguishable court borderlines. In contrast, the roundabout may require more context information to distinguish between gardens and ring-like buildings. Intersections, especially those partially covered by trees, require an extremely large receptive field due to the long-range dependencies between the intersecting roads.
To address the challenge of accurately recognizing objects in remote sensing images, which often require a wide and dynamic range of contextual information, we propose a novel lightweight backbone network called Large Selective Kernel Network (LSKNet). Our approach employs a dynamic modulation of the receptive field within the feature extraction backbone, which allows for a more efficient accommodation and processing of the diverse, wide-ranging context that is necessitated. This is achieved through a spatial selective mechanism, which efficiently weights the features processed by a sequence of large depth-wise kernels and then spatially merge them. The weights of these kernels are determined dynamically based on the input, allowing the model to use different large kernels adaptively and adjust the receptive field for each object in space as needed.
This paper presents an extended version of our previous work, **LSKNet**[24]. Specifically, we have conducted further experiments to evaluate the generalization ability of our proposed LSKNet backbone across a wide range of remote sensing applications, including remote sensing scene classification on the UCM [25], AID [26], and NWPU [27] datasets, object detection on synthetic aperture radar modality dataset SAR-Aircraft [28], semantic segmentation tasks on the Potsdam [29], Vaihingen [30], LoveDA [31],
Fig. 1: Successfully detecting remote sensing objects requires using a wide range of contextual information. Detectors with a limited receptive field may easily lead to incorrect results.
Fig. 2: The wide range of contextual information required for different object types is very different by human criteria. The objects with red boxes are the exact ground-truth annotations.
UAVid [32] and GID [33] datasets, as well as change detection tasks on the LEVIR-CD [34] and S2Looking [35] datasets. Furthermore, we conducted a thorough and comprehensive comparison between LSKNet and SKNet to highlight the differences and advantages of LSKNet.
In summary, our contributions can be categorized into **FOUR** main aspects:
* We have identified two significant priors present in remote sensing data.
* To our knowledge, the proposed LSKNet backbone is the first to explore the utilization of large and selective kernels to exactly leverage the aforementioned priors in downstream tasks of remote sensing.
* Despite its simplicity and lightweight nature, our model achieves state-of-the-art performance on three prominent remote sensing tasks across 14 widely used public datasets, including remote sensing _scene classification_ (UCM [25], AID [26], NWPU [27]), _object detection_ (DOTA [36], HRSC2016 [37], FAIR1M [38], SAR-Aircraft [28]), _semantic segmentation_ (Potsdam [29], Vaihingen [30], LoveDA [31], UAVid [32], GID [33]) and _change detection_ (LEVIR-CD [34], S2Looking [35]).
* We provide a comprehensive analysis of our approach, further validating the importance of the identified priors and the effectiveness of the LSKNet model in addressing remote sensing challenges.
## 2 Related Work
### Remote Sensing
**Remote Sensing Scene Classification.** Scene classification [2, 4, 5, 6, 39, 40] in remote sensing images is a challenging task due to the presence of complex backgrounds and significant intra-class variation. To address this challenge, several models, such as MGML [2], ESD [3], and KFBNet [4], have been proposed. These models aim to leverage ensemble techniques that incorporate multi-level features to improve classification performance. With the emergence of the Vision Transformer (ViT) [41], there has been a rise in large ViT-based models [42, 43]. Moreover, recent high-performance ViT-based models such as RSP-ViTAE [44, 5] and RVSA [6] have been pretrained on the large-scale remote sensing dataset millionAID [45], further advancing the capabilities in this field.
However, feature ensembles often introduce multiple branches in the backbones, which is complex and computationally inefficient. Similarly, using ViT-based backbones can lead to heavy and resource-intensive, which may not be suitable for certain practical applications.
**Remote Sensing Object Detection.** Remote sensing object detection [46, 47, 48, 49, 50, 51] focuses on identifying and locating objects of interest in aerial images. One recent mainstream trend is to generate bounding boxes that accurately fit the orientation of the objects being detected. Consequently, a significant amount of research has focused on improving the representation of oriented bounding boxes for remote sensing object detection. Several notable detection frameworks have been introduced to mitigate the rotation variance inherent in CNN network, including the RoI transformer [52], Oriented RCNN [11], S\\({}^{2}\\)A network [53], DRN [54] and R3Det [9]. Oriented RCNN [11] and Gliding Vertex [12] have made significant contributions by introducing new box encoding systems to address the instability of training losses caused by rotation angle periodicity. Furthermore, techniques such as GWD [10],KLD [55] and LD [56] have been developed to tackle the discontinuity of regression loss or enhance the localization quality of bounding boxes.
While these approaches have achieved promising results in addressing the issue of rotation variance, they do not consider the strong and valuable prior information presented in aerial images. Instead, our approach uses the large kernel and spatial selective mechanism to better model these priors without modifying the current detection framework.
**Remote Sensing Semantic Segmentation.** The most recent advancements in remote sensing semantic segmentation models have primarily focused on employing attention mechanisms and multi-scale feature fusion techniques [13, 14, 15, 16, 60, 61, 62], which show that large receptive field semantics for multi-scale feature fusion plays a crucial role in segmentation tasks.
Despite the success achieved by existing approaches, it is often observed that they overlook the valuable _prior 2)_ mentioned earlier. In contrast, our proposed backbone model considers the valuable priors in remote sensing images, which offers more flexible multi-range receptive field features to address this limitation.
**Remote Sensing Change Detection.** Remote sensing change detection aims to segment regions with semantic changes of interest from a pair of co-located images acquired at different times. Mainstream methods treat this task as a specialized form of segmentation with two input images. These methods involve fusing [63, 64, 65, 66, 67] or interacting [68, 69, 70, 71] the features of the bi-temporal images within the model's feature flow, and then using a segmentation head to generate the final change maps.
Numerous recent change detection frameworks [68, 72] demonstrate that more powerful backbones significantly improve performance, suggesting that the effectiveness and efficiency of backbone feature extraction remains a key factor in enhancing change detection models.
### Large Kernel Networks
Transformer-based [73] models, such as the Vision Transformer (ViT) [6, 41], Swin transformer [74, 75, 76, 77], and pyramid transformer [78, 79], have gained popularity in computer vision. Research [80, 81, 82, 83, 84] has demonstrated that the large receptive field is a key factor in their success. Recent work has shown that well-designed convolutional networks with large receptive fields can also be highly competitive with transformer-based models. For example, ConvNeXt [85] uses 7\\(\\times\\)7 depth-wise convolutions in its backbone, resulting in significant performance improvements in downstream tasks. In addition, RepLKNet [86] even uses a 31\\(\\times\\)31 convolutional kernel via re-parameterization, achieving compelling performance. A subsequent work SLAK [87], further expands the kernel size to 51\\(\\times\\)51 through kernel decomposition and sparse group techniques. RF-Next [88] automatically searches for a fixed large kernel for various tasks. VAN [89] introduces an efficient decomposition of large kernels as convolutional attention. Similarly, SegNeXt [90] and Conv2Former [91] demonstrate that large kernel convolution plays an important role in modulating the convolutional features with a richer context.
Although large kernel convolutions have received attention in general object recognition, there has been a lack of research examining their significance in remote sensing detection. As previously noted in Section 1, aerial images possess unique characteristics that make large kernels particularly well-suited for remote sensing. As far as we know, our work represents the first attempt to introduce large kernel convolutions for remote sensing and to examine their importance in this field.
### Attention/Selective Mechanism
The attention mechanism [92] is a simple but effective way to enhance neural representations for various tasks. The channel attention SE block [93] uses global average information to reweight feature channels, while spatial attention modules
Figure 3: Architectural comparison between our proposed LSK module and other selective mechanism modules. K: Kernel.
like GENet [94], GCNet [95], CTNet [96] and SGE [97] enhance a network's ability to model context information via spatial masks. CBAM [98] and BAM [99] combine both channel and spatial attention.
Self-attention mechanisms, originally popularized in natural language processing [73], have recently gained traction in computer vision as well. Vision Transformers (ViT) [41] leverage self-attention to capture global dependencies and contextual information across an image. In recent years, models using self-attention mechanisms achieve highly competitive performance in natural image classification [100], detection [101], and segmentation [20]. However, in many remote sensing imagery tasks, such as object detection and segmentation, global contextual information is not always necessary. For instance, to detect a car, the information about a river hundreds of meters away is not useful. Therefore, recent work has focused on incorporating locality priors into Transformer models, such as Swin [102], PVT [78, 103], HiViT [104], and ViTAE [105]. These models offer advantages in computational efficiency and optimization compared to vanilla ViT in remote sensing scenarios [6, 106].
In addition to attention mechanisms, kernel selection is a self-adaptive and effective technique for dynamic context modelling. CondConv [107] and Dynamic convolution [108] use parallel kernels to adaptively aggregate features from multiple convolution kernels. SKNet [59] introduces multiple branches with different convolutional kernels and selectively combines them along the channel dimension. ResNeSt [57] extends the idea of SKNet by partitioning the input feature map into several groups. Similarly to the SKNet, SCNet [58] uses branch attention to capturing richer information and spatial attention to improve localization ability. Deformable Convnets [109, 110] introduce a flexible kernel shape for convolution units.
Our approach bears the most similarity to SKNet [59]. However, there are **two key distinctions** between the two methods. Firstly, our proposed selective mechanism relies explicitly on a sequence of large kernels via decomposition, a departure from most existing attention-based approaches. Secondly, our method adaptively aggregates information across large kernels in the spatial dimension rather than the channel dimension utilized by SKNet. This design is more intuitive and effective for remote sensing tasks because channel-wise selection fails to model the spatial variance for different targets across the image space. The detailed structural comparisons are listed in Fig. 3.
## 3 Methods
### LSKNet Architecture
The overall architecture of the LSKNet backbone is simply built upon repeated LSK Blocks (refer to the details in Supplementary Materials). The LSK Block is inspired by ConvNeXt [111], MetaFormer [112], PVT-v2 [103], Conv2Former [91], and VAN [89]. Each LSK block consists of two residual sub-blocks: the Large Kernel Selection (LK Selection) sub-block and the Feed-forward Network (FFN) sub-block.
The LK Selection sub-block dynamically adjusts the network's receptive field as needed. The core LSK module (Fig. 4) is embedded in the LK Selection sub-block. It consists of a sequence of large kernel convolutions and a spatial kernel selection mechanism, which will be elaborated on later. The FFN sub-block is used for channel mixing and feature refinement, which consists of a sequence of a fully connected layer, a depthwise convolution, a GELU [113] activation, and a second fully connected layer.
The detailed configuration of different variants of LSKNet used in this paper is listed in Tab. 1. Additionally, Tab. 2 presents a comprehensive list of important symbols, their corresponding dimensions, and their respective meanings. These symbols are extensively referenced in Fig. 4 and equations presented in the subsequent sections.
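To make the block layout concrete, a minimal PyTorch sketch of one LSK block is given below. It is not the official implementation: the normalization layers, the realization of the fully connected layers as 1\(\times\)1 convolutions, and all module names are our assumptions; the LSK module itself is described in Sec. 3.2 and Sec. 3.3 and is passed in as a sub-module.

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """FFN sub-block: fully connected -> depth-wise conv -> GELU -> fully connected."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Conv2d(dim, hidden_dim, 1)   # "fully connected" as a 1x1 conv (assumption)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden_dim, dim, 1)

    def forward(self, x):
        return self.fc2(self.act(self.dwconv(self.fc1(x))))

class LSKBlock(nn.Module):
    """One LSK block: residual LK Selection sub-block + residual FFN sub-block."""
    def __init__(self, dim, lsk_module=None, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.BatchNorm2d(dim)            # normalization type is an assumption
        # LSK module of Fig. 4 (Secs. 3.2-3.3); Identity() is only a placeholder here.
        self.lk_selection = lsk_module if lsk_module is not None else nn.Identity()
        self.norm2 = nn.BatchNorm2d(dim)
        self.ffn = FFN(dim, dim * mlp_ratio)

    def forward(self, x):
        x = x + self.lk_selection(self.norm1(x))    # LK Selection sub-block
        x = x + self.ffn(self.norm2(x))             # FFN sub-block
        return x

print(LSKBlock(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Stacking \(D_{i}\) such blocks at each of the four stages with the channel settings of Tab. 1 yields the LSKNet-T/S variants.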
### Large Kernel Convolutions
According to the _prior 2)_ in Section 1, it is suggested to model a series of multiple long-range contexts for adaptive selection. Therefore, we propose constructing a larger kernel convolution by
\\begin{table}
\\begin{tabular}{l l l l}
**Model** & \\{\\(C_{1}\\), \\(C_{2}\\), \\(C_{3}\\), \\(C_{4}\\)\\} & \\{\\(D_{1}\\), \\(D_{2}\\), \\(D_{3}\\), \\(D_{4}\\)\\} & \\#P \\\\ \\hline LSKNet-T & \\{32, 64, 160, 256\\} & \\{3, 3, 5, 2\\} & 4.3M \\\\ LSKNet-S & \\{64, 128, 320, 512\\} & \\{2, 2, 4, 2\\} & 14.4M \\\\ \\end{tabular}
\\end{table}
Table 1: **Variants of LSKNet used in this paper. \\(C_{i}\\): feature channel number; \\(D_{i}\\): number of LSK blocks of each stage \\(i\\).**_explicitly decomposing_ it into a sequence of depth-wise convolutions with a large growing kernel and increasing dilation. Specifically, for the \\(i\\)-th depth-wise convolution, the expansion of the kernel size \\(k\\), dilation rate \\(d\\), and the receptive field \\(RF\\) are defined as follows:
\\[k_{i-1}\\leq k_{i};\\ d_{1}=1,\\ d_{i-1}<d_{i}\\leq RF_{i-1}, \\tag{1}\\]
\\[RF_{1}=k_{1},\\ RF_{i}=d_{i}(k_{i}-1)+RF_{i-1}. \\tag{2}\\]
The increasing kernel size and dilation rate ensure that the receptive field expands quickly enough. We set an upper bound on the dilation rate to guarantee that the dilation convolution does not introduce gaps between feature maps. For instance, we can decompose a large kernel into 2 or 3 depth-wise convolutions as in Tab. 3, which have a theoretical receptive field of 23 and 29, respectively.
There are two advantages of the proposed designs. First, it explicitly yields multiple features with various large receptive fields, which makes it easier for the later kernel selection. Second, sequential decomposition is more efficient than simply applying a single larger kernel. As shown in Tab. 3, under the same resulted theoretical receptive field, our decomposition greatly reduces the number of parameters compared to the standard large convolution kernels. To obtain features with rich contextual information from different ranges for input \\(\\mathbf{X}\\), a series of decomposed depth-wise convolutions with different receptive fields are applied:
\\[\\mathbf{U}_{0}=\\mathbf{X},\\ \\ \\ \\ \\mathbf{U}_{i+1}=\\mathcal{F}_{i}^{dw}( \\mathbf{U}_{i}), \\tag{3}\\]
where \\(\\mathcal{F}_{i}^{dw}(\\cdot)\\) are depth-wise convolutions with kernel \\(k_{i}\\) and dilation \\(d_{i}\\). Assuming there are \\(N\\) decomposed kernels, each of which is further processed by a 1\\(\\times\\)1 convolution layer \\(\\mathcal{F}^{1\\times 1}(\\cdot)\\):
\\[\\widetilde{\\mathbf{U}}_{i}=\\mathcal{F}_{i}^{1\\times 1}(\\mathbf{U}_{i}),\\ \\text{for}\\ i\\ \\text{in}\\ [1,N], \\tag{4}\\]
allowing channel mixing for each spatial feature vector. Then, a selection mechanism is proposed to dynamically select kernels for various objects based on the multi-scale features obtained, which would be introduced next.
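A minimal PyTorch sketch of this decomposition (our illustration, not the released code) using the (5, 1) \(\rightarrow\) (7, 3) sequence of Tab. 3 is shown below; padding is chosen as \(d(k-1)/2\) so that the spatial resolution is preserved.

```python
import torch
import torch.nn as nn

# Sketch of Eqs. (1)-(4): a large kernel decomposed into a sequence of
# depth-wise convolutions with growing kernel size and dilation, each followed
# by a 1x1 convolution for channel mixing. With the (k, d) sequence
# (5, 1) -> (7, 3) of Tab. 3, RF_1 = 5 and RF_2 = 3 * (7 - 1) + 5 = 23.
# Module and variable names are our assumptions.

class DecomposedLargeKernel(nn.Module):
    def __init__(self, dim, kd_sequence=((5, 1), (7, 3))):
        super().__init__()
        self.dw_convs = nn.ModuleList()
        self.pw_convs = nn.ModuleList()
        for k, d in kd_sequence:
            pad = d * (k - 1) // 2                          # keep spatial size
            self.dw_convs.append(
                nn.Conv2d(dim, dim, k, padding=pad, dilation=d, groups=dim))
            self.pw_convs.append(nn.Conv2d(dim, dim, 1))    # F_i^{1x1}

    def forward(self, x):
        u, outs = x, []
        for dw, pw in zip(self.dw_convs, self.pw_convs):
            u = dw(u)                 # U_{i+1} = F_i^{dw}(U_i), Eq. (3)
            outs.append(pw(u))        # \tilde{U}_i, Eq. (4)
        return outs                   # context features with RF 5 and 23

feats = DecomposedLargeKernel(64)(torch.randn(1, 64, 32, 32))
print([f.shape for f in feats])       # two feature maps of size 1x64x32x32
```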
\\begin{table}
\\begin{tabular}{l|c|l} Symbol & Dim. & Meaning \\\\ \\hline \\(X\\) & \\(C\\times H\\times W\\) & input feature \\\\ \\(N\\) & 1 & number of selection kernels \\\\ \\(i\\) & 1 & the decomposed kernel index \\\\ \\(\\widetilde{\\mathbf{U}}_{i}\\) & \\(C\\times H\\times W\\) & context-rich feature \\\\ \\(\\mathbf{SA}_{max}\\) & \\(1\\times H\\times W\\) & spatial attention via max pool \\\\ \\(\\mathbf{SA}_{avg}\\) & \\(1\\times H\\times W\\) & spatial attention via avg pool \\\\ \\(\\widetilde{\\mathbf{SA}}_{i}\\) & \\(N\\times H\\times W\\) & spatial selection attentions \\\\ \\(S\\) & \\(C\\times H\\times W\\) & fused attention features \\\\ \\(Y\\) & \\(C\\times H\\times W\\) & output feature \\\\ \\end{tabular}
\\end{table}
Table 2: Symbols, dimensions, and meaning interpretations.
Figure 4: A conceptual illustration of LSK module.
\\begin{table}
\\begin{tabular}{c|l|c c} RF & (\\(k\\), \\(d\\)) sequence & \\#P & FLOPs \\\\ \\hline
23 & (23, 1) & 40.4K & 42.4G \\\\ & (5,1) \\(\\longrightarrow\\) (7, 3) & 11.3K & 11.9G \\\\ \\hline
29 & (29, 1) & 60.4K & 63.3G \\\\ & (3, 1) \\(\\longrightarrow\\) (5, 2) \\(\\longrightarrow\\) (7, 3) & 11.3K & 13.6G \\\\ \\end{tabular}
\\end{table}
Table 3: **Theoretical efficiency comparisons of two representative examples** by expanding a single large depth-wise kernel into a sequence, given channels being 64. \\(k\\): kernel size; \\(d\\): dilation.
### Spatial Kernel Selection
To enhance the network's ability to focus on the most relevant spatial context regions for detecting targets, we use a spatial selection mechanism to spatially select the feature maps from large convolution kernels at different scales. Firstly, we concatenate the features obtained from different kernels with different ranges of receptive field:
\\[\\widetilde{\\mathbf{U}}=[\\widetilde{\\mathbf{U}}_{1}; ;\\widetilde{\\mathbf{U}}_{i}], \\tag{5}\\]
and then efficiently extract the spatial relationship by applying channel-based average and maximum pooling (denoted as \\(\\mathcal{P}_{avg}(\\cdot)\\) and \\(\\mathcal{P}_{max}(\\cdot)\\)) to \\(\\widetilde{\\mathbf{U}}\\):
\\[\\mathbf{SA}_{avg}=\\mathcal{P}_{avg}(\\widetilde{\\mathbf{U}}),\\ \\ \\mathbf{SA}_{max}= \\mathcal{P}_{max}(\\widetilde{\\mathbf{U}}), \\tag{6}\\]
where \\(\\mathbf{SA}_{avg}\\) and \\(\\mathbf{SA}_{max}\\) are the average and maximum pooled spatial feature descriptors. To allow information interaction among different spatial descriptors, we concatenate the spatially pooled features and use a convolution layer \\(\\mathcal{F}^{2\\to N}(\\cdot)\\) to transform the pooled features (with 2 channels) into \\(N\\) spatial attention maps:
\\[\\widehat{\\mathbf{SA}}=\\mathcal{F}^{2\\to N}([\\mathbf{SA}_{avg};\\mathbf{SA}_{ max}]). \\tag{7}\\]
For each of the spatial attention maps, \\(\\widehat{\\mathbf{SA}}_{i}\\), a sigmoid activation function is applied to obtain the individual spatial selection mask for each of the decomposed large kernels:
\\[\\widehat{\\mathbf{SA}}_{i}=\\sigma(\\widehat{\\mathbf{SA}}_{i}), \\tag{8}\\]
where \\(\\sigma(\\cdot)\\) denotes the sigmoid function. The feature maps from the sequence of decomposed large kernels are weighted by their corresponding spatial selection masks and then fused by a convolution layer \\(\\mathcal{F}(\\cdot)\\) to obtain the attention feature \\(\\mathbf{S}\\):
\\[\\mathbf{S}=\\mathcal{F}(\\sum_{i=1}^{N}\\big{(}\\widehat{\\mathbf{SA}}_{i}\\cdot \\widetilde{\\mathbf{U}}_{i}\\big{)}). \\tag{9}\\]
The final output of the LSK module is the element-wise product between the input feature \\(\\mathbf{X}\\) and \\(\\mathbf{S}\\), similarly in [89, 90, 91]:
\\[\\mathbf{Y}=\\mathbf{X}\\cdot\\mathbf{S}. \\tag{10}\\]
Fig. 4 shows a detailed conceptual illustration of an LSK module where we intuitively demonstrate how the large selective kernel works by adaptively collecting the corresponding large receptive field for different objects.
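Continuing the sketch of Sec. 3.2, Eqs. (5)-(10) can be written in a few lines of PyTorch. Again, this is an illustrative assumption rather than the released implementation; in particular, the 7\(\times\)7 kernel used for the \(\mathcal{F}^{2\to N}\) convolution, the 1\(\times\)1 fusion layer \(\mathcal{F}(\cdot)\), and the module names are our own choices.

```python
import torch
import torch.nn as nn

class SpatialKernelSelection(nn.Module):
    def __init__(self, dim, num_kernels=2):
        super().__init__()
        self.attn_conv = nn.Conv2d(2, num_kernels, 7, padding=3)  # F^{2->N} (kernel size assumed)
        self.fuse = nn.Conv2d(dim, dim, 1)                        # F(.) in Eq. (9), assumed 1x1

    def forward(self, x, u_tildes):
        u = torch.cat(u_tildes, dim=1)                            # Eq. (5)
        sa_avg = u.mean(dim=1, keepdim=True)                      # channel-wise avg pool, Eq. (6)
        sa_max = u.max(dim=1, keepdim=True).values                # channel-wise max pool, Eq. (6)
        sa = self.attn_conv(torch.cat([sa_avg, sa_max], dim=1))   # Eq. (7)
        masks = torch.sigmoid(sa)                                 # Eq. (8)
        s = sum(masks[:, i:i + 1] * u_i for i, u_i in enumerate(u_tildes))
        s = self.fuse(s)                                          # Eq. (9)
        return x * s                                              # Eq. (10)

# usage; the u_tildes would come from the decomposition sketch of Sec. 3.2
# (stubbed here with random tensors so the snippet is self-contained)
x = torch.randn(1, 64, 32, 32)
u_tildes = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)]
y = SpatialKernelSelection(64, num_kernels=2)(x, u_tildes)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```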
## 4 Experiments
In this section, we report the experimental performance of our proposed model on remote sensing scene classification, object detection, semantic segmentation and change detection on a total of 14 datasets. In the main results, we adopt a 300-epoch backbone pretraining strategy on ImageNet-1K [125] to pursue higher accuracy, similarly to [9, 11, 53]. However, for scene classification, we follow the pretraining settings outlined in [5], conducting 300 epochs of pretraining on the millionAID dataset [45]. We directly use the official/default training, validation, and testing set splits and adhere to the mainstream settings for each benchmark to ensure fairness. In the ablation study, we instead adopt a 100-epoch backbone pretraining strategy on ImageNet-1K for experimental efficiency. The best score is indicated in **bold**, while the second-best score is underlined. "FLOPs" in this section are calculated by passing an image of 1024\(\times\)1024 pixels to the network. More details on experimental implementations (e.g. training schedule and data preprocessing) and result visualizations are available in the Supplementary Materials.
### Scene Classification
#### 4.1.1 Classification Datasets
Mainstream of remote sensing classification research [1, 120, 122, 5] conducts experiments on the three standard scene recognition datasets including the UC Merced Land Use (UCM) [25] dataset, the Aerial Image Dataset (AID) [26], and the Image Scene Classification collected by Northwestern Polytechnical University (NWPU) [27].
UCM is a relatively small dataset which contains only 2,100 images and 21 categories, each category has 100 images. All images are in size of 256 \\(\\times\\) 256.
AID contains 10,000 images of 30 categories, all images are in size of 600 \\(\\times\\) 600.
NWPU is a relatively large dataset which contains 31,500 images and 45 categories; each category has 700 images. All images are of size 256 \(\times\) 256.
Following the mainstream of remote sensing classification works [1, 5, 120, 122], we conduct experiments on five standard benchmarks, i.e. UCM-82, AID-28, AID-55, NWPU-19, and NWPU-28.
#### 4.1.2 Classification Results
The classification results of the compared methods are presented in Tab. 4. We compare our proposed LSKNets with 22 other state-of-the-art methods for remote sensing scene classification. Without any tricks such as feature ensembles in MBENet [3] and FENet [2], our vanilla lightweight models, LSKNet-T and LSKNet-S, deliver competitive performance across multiple datasets. These results exhibit promising performance, showcasing their effectiveness for accurate scene classification across diverse scenarios and the potential for feature extraction as a backbone.
### Oriented Object Detection and SAR Object Detection
#### 4.2.1 Object Detection Datasets
To evaluate the applicability of our proposed model for remote sensing detection tasks, we conducted experiments on 4 demanding datasets. These included 3 well-established oriented object detection datasets: HRSC2016 [37], DOTA-v1.0 [36], and FAIR1M-v1.0 [38], and a highly intricate and challenging synthetic aperture radar (SAR) dataset, SAR-Aircraft [28].
DOTA-v1.0 [36] consists of 2,806 remote sensing images. It contains 188,282 instances of 15 categories: Plane (PL), Baseball diamond (BD), Bridge (BR), Ground track field (GTF), Small vehicle (SV), Large vehicle (LV), Ship (SH), Tennis court (TC), Basketball court (BC), Storage tank (ST), Soccer-ball field (SBF), Roundabout (RA), Harbor (HA), Swimming pool (SP), and Helicopter (HC).
HRSC2016 [37] is a high-resolution remote sensing dataset which is collected for ship detection. It consists of 1,061 images which contain 2,976 instances of ships.
FAIR1M-v1.0 [38] is a recently published remote sensing dataset that consists of 15,266
\\begin{table}
\\begin{tabular}{l|c c|c c c c c} Model & **\\#P \\(\\downarrow\\)** & **FLOPs \\(\\downarrow\\)** & **UCM-82** & **AID-28** & **AID-55** & **NWPU-19** & **NWPU-28** \\\\ \\hline MSANet [114] & \\(>\\)42.3M & \\(>\\)164.3 & 98.96 & 93.53 & 96.01 & 90.38 & 93.52 \\\\ ViT-B [41] & 86.0M & 118.9G & 99.28 & 93.81 & 96.08 & 90.96 & 93.96 \\\\ SCCov [115] & 13.0M & - & 99.05 & 93.12 & 96.10 & 89.30 & 92.10 \\\\ MA-FE [116] & \\(>\\)25.6M & \\(>\\)86.3G & 99.66 & - & 95.98 & - & 93.21 \\\\ MG-CAP [117] & \\(>\\)42.3M & \\(>\\)164.3G & 99.00 & 93.34 & 96.12 & 90.83 & 92.95 \\\\ LSENet [118] & 25.9M & \\(>\\)86.3G & 99.78 & 94.41 & 96.36 & 92.23 & 93.34 \\\\ IDCCP [119] & 25.6M & 86.3G & 99.05 & 94.80 & 96.95 & 91.55 & 93.76 \\\\ F\\({}^{2}\\)BRM [120] & 25.6M & 86.3G & 99.58 & 96.05 & 96.97 & 92.74 & 94.87 \\\\ EAM [121] & \\(>\\)42.3M & \\(>\\)164.3 & 98.98 & 94.26 & 97.06 & 91.91 & 94.29 \\\\ MBLANet [1] & - & - & 99.64 & 95.60 & 97.14 & 92.32 & 94.66 \\\\ GRMANet [122] & 54.1M & 171.4G & 99.19 & 95.43 & 97.39 & 93.19 & 94.72 \\\\ KFBNet [4] & - & - & 99.88 & 95.50 & 97.40 & 93.08 & 95.11 \\\\ CTNet [42] & - & - & - & 96.25 & 97.70 & 93.90 & 95.40 \\\\ RSP-R50 [5] & 25.6M & 86.3G & 99.48 & 96.81 & 97.89 & 93.93 & 95.02 \\\\ RSP-Swin [5] & 27.5M & 37.7G & 99.52 & 96.83 & 98.30 & 94.02 & 94.51 \\\\ RSP-ViTAE [5] & 19.3M & 119.1G & 99.90 & 96.91 & 98.22 & **94.41** & 95.60 \\\\ RVSA [6] & 114.4M & 301.3G & - & 97.01 & 98.50 & 93.92 & 95.66 \\\\ ConvNext [85] & 28.0M & 93.7G & 99.81 & 95.43 & 97.40 & 94.07 & 94.76 \\\\ FSCNet [123] & 28.8M & 166.1G & **100** & 95.56 & 97.51 & 93.03 & 94.76 \\\\ UPetiu [124] & 87.7M & \\(>\\)322.2G & 99.05 & 96.29 & 97.06 & 92.13 & 93.79 \\\\ MBENet [3] & 23.9M & 108.5G & 99.81 & 96.00 & 98.54 & 92.50 & 95.58 \\\\ FENet [2] & 23.9M & 92.0G & 99.86 & 96.45 & **98.60** & 92.91 & 95.39 \\\\ \\hline \\(\\star\\) LSKNet-T & **4.3M** & **19.2G** & 99.81 & 96.80 & 98.14 & 94.07 & 95.75 \\\\ \\(\\star\\) LSKNet-S & 14.4M & 54.4G & 99.81 & **97.05** & 98.22 & 94.27 & **95.83** \\\\ \\end{tabular}
\\end{table}
Table 4: Results of different methods for scene classification.
high-resolution images and more than 1 million instances. It contains 5 categories and 37 subcategories objects.
The SAR-Aircraft dataset [28] is a recently published remote sensing dataset specifically collected for the SAR modality object detection. Different from the above 3 datasets which are in RGB modality, SAR datasets are in grayscale. It encompasses 7 distinct categories, namely A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other. The dataset comprises a training set of 3,489 images and a test set of 879 images, totalling 16,463 instances of aircraft.
#### 4.2.2 Detection Results
In our oriented object detection experiments, LSKNets are by default built within the Oriented RCNN [11] framework due to its compelling performance and efficiency.
**Results on DOTA-v1.0.** We compare our LSKNet with 20 state-of-the-art methods on the DOTA-v1.0 dataset, as reported in Tab. 5. Our LSKNet-T, LSKNet-S and LSKNet-S* achieve state-of-the-art results with mAP of **81.37%**, **81.64%**
and **81.85%** respectively. Notably, our high-performing LSKNet-S reaches an inference speed of **18.1** FPS on 1024x1024 images with a single RTX3090 GPU.
**Results on HRSC2016.** We evaluated the performance of our LSKNet against 12 state-of-the-art methods on the HRSC2016 dataset. The results presented in Tab. 7 demonstrate that our LSKNet-S outperforms all other methods with an mAP of **90.65%** and **98.46%** under the PASCAL VOC 2007 [142] and VOC 2012 [143] metrics, respectively.
**Results on FAIR1M-v1.0.** We compare our LSKNet against 6 other models on the FAIR1M-v1.0 dataset, as shown in Tab. 6. The results reveal that our LSKNet-T and LSKNet-S perform exceptionally well, achieving state-of-the-art mAP scores of **46.93%** and **47.87%** respectively, surpassing all other models by a significant margin. Fine-grained category results can be found in Supplementary Materials.
**Results on SAR-Aircraft.** We evaluate the performance of our proposed LSKNets against 5 state-of-the-art backbone networks under the Cascade Mask RCNN [147] and RetinaNet [138] detection frameworks. The results in Tab. 8 clearly show that our proposed LSKNets provide a substantial improvement in performance for SAR object detection.
**Quantitative Analysis.** The ViTDet, which uses the vanilla ViT backbone, has the largest computational complexity (4.0x FLOPs compared to LSKNet-T) and the second largest model size (4.9x parameters compared to LSKNet-T) among the compared models, and it performs poorly on object detection on DOTA-v1.0 dataset. Another variant of the ViT-based model, RVSA, which is based on ViTAE, incorporates multi-scale and 2D locality inductive bias and is more effective at modelling image features than the vanilla ViT backbone. Despite its effectiveness, RVSA still suffers from a heavy model size (5.4x parameters compared to LSKNet-T) and high computational complexity (3.3x FLOPs compared to LSKNet-T). Neither of these ViT-based models can outperform the lightweight LSKNet-T.
LSKNet's advantages are also observed in easily mixed-up categories such as small vehicles (+2.49%) and ships (+3.59%) in the DOTA-v1.0 dataset (Tab. 5), as well as in categories that require large context information such as intersections (+2.08%), roundabouts (+6.53%), and bridges (+6.11%) in the FAIR1M dataset (Tab. S4 in the Supplementary Materials). These results further verify our identified _Prior 1_ and _Prior 2_, and justify the effectiveness of the proposed foundation backbone model.
### Semantic Segmentation
#### 4.3.1 Segmentation Datasets
Following the mainstream segmentation research [13, 60], we assess the effectiveness of our proposed model in remote sensing segmentation by conducting evaluations on five standard datasets: Potsdam [29], Vaihingen [30], LoveDA [31], UAVid [32] and GID [33] dataset.
Potsdam [29] is a high-resolution semantic segmentation dataset that consists of 38 high-resolution images. It is composed of 6 categories of semantics: impervious surface, building, low vegetation, tree, car, and one background category, clutter.
Vaihingen [30] is also a fine spatial resolution semantic segmentation dataset which consists of 33 high-resolution images. It has the same semantic categories as Potsdam.
LoveDA [31] is a multi-scale and complex remote sensing semantic segmentation dataset
\\begin{table}
\\begin{tabular}{l|c c c}
**RetinaNet [138] 2x** & **\\#P** & **mAP\\({}_{50}\\)** & **mAP\\({}_{75}\\)** \\\\ \\hline ResNet-50 [144] & 25.6M & 0.469 & 0.324 \\\\ PVT-Tiny [78] & 13.2M & 0.498 & 0.335 \\\\ Res2Net-50 [145] & 25.7M & 0.528 & 0.339 \\\\ Swin-T [102] & 28.3M & 0.586 & 0.346 \\\\ ConvNeXt V2-N [146] & 15.0M & 0.589 & 0.350 \\\\ VAN-B1 [89] & 13.4M & 0.603 & 0.375 \\\\ \\hline \\(\\star\\) LSKNet-T & **4.3M** & **0.582** & 0.354 \\\\ \\(\\star\\) LSKNet-S & 14.4M & **0.624** & **0.387** \\\\ \\end{tabular}
\\begin{tabular}{l|c c c}
**Cascade Mask RCNN [147] 2x** & **\\#P** & **mAP\\({}_{50}\\)** & **mAP\\({}_{75}\\)** \\\\ \\hline ResNet-50 [144] & 25.6M & 0.483 & 0.339 \\\\ PVT-Tiny [78] & 13.2M & 0.502 & 0.344 \\\\ Res2Net-50 [145] & 25.7M & 0.544 & 0.372 \\\\ ConvNeXt V2-N [146] & 15.0M & 0.581 & 0.428 \\\\ Swin-T [102] & 28.3M & 0.596 & 0.416 \\\\ VAN-B1 [89] & 13.4M & 0.604 & 0.457 \\\\ \\hline \\(\\star\\) LSKNet-T & **4.3M** & **0.586** & 0.435 \\\\ \\(\\star\\) LSKNet-S & 14.4M & **0.614** & **0.458** \\\\ \\end{tabular}
\\end{table}
Table 8: The mAP results on the test set for the SAR-Aircraft dataset.
that contains 5,987 images of 1024\(\times\)1024 pixels. Among these images, 2,522 are allocated for training, 1,669 for validation, and 1,796 for online testing. The dataset consists of 7 categories of semantics: building, road, water, barren, forest, agriculture and background (Back.G.).
UAVid [32] is a high-resolution and complex Unmanned Aerial Vehicle (UAV) semantic segmentation dataset. It contains 200 training images, 70 validation images and 150 online testing images. The dataset is composed of 8 distinct categories: Clutter, Building, Road, Tree, Vegetation, Moving Car(Mo.Car), Static Car(St.Car), and Human.
The GID [33] dataset is a medium-resolution land cover segmentation dataset with a ground sampling distance (GSD) of 4m, containing 150 images of 7,200\\(\\times\\)6,800 pixels. Following [148], we selected the 15 pre-defined images from the original GID dataset and cropped all images into 256\\(\\times\\)256 pixels, resulting in 7,830 training images and 3,915 testing images. The dataset consists of six semantic categories: build-up, farmland, forest, meadow, water, and others.
#### 4.3.2 Segmentation Results
We conducted a comprehensive comparison of our proposed models, LSKNet-T and LSKNet-S, against a multitude of recently proposed high-performance models on the 5 aforementioned datasets. For the Potsdam, Vaihingen, LoveDA and UAVid datasets, LSKNets are integrated within the UNetFormer [13] framework due to its compelling performance and open-source availability. For the GID dataset, we compare various backbone models using the SegFormer framework. Specifically, we compared our models to 14 models on the Potsdam dataset (Tab. 9), 16 models on the Vaihingen dataset (Tab. 10), 16 models on the LoveDA dataset (Tab. 11), 13 models on the UAVid dataset (Tab. 12) and 6 backbone models on the GID dataset (Tab. 13). Notably, our LSKNet-T and LSKNet-S models display exceptional performance, surpassing all other state-of-the-art methods across all datasets in most major metrics.
### Change Detection
#### 4.4.1 Change Detection Dataset
Following the mainstream change detection research [68, 71, 176], we assess the effectiveness of our proposed model in remote sensing change detection tasks by conducting evaluations on two standard datasets: LEVIR-CD [34] and S2Looking [35].
LEVIR-CD [34] includes 637 pairs of bi-temporal images sourced from Google Earth, with each image having a size of 1024 \\(\\times\\) 1024 pixels and a ground sampling distance (GSD) of 0.5 meters. The dataset features 31,333 annotated instances of binary changes.
\\begin{table}
\\begin{tabular}{l|c c c} Model & **mF1 \\(\\uparrow\\)** & **OA \\(\\uparrow\\)** & **mIOU \\(\\uparrow\\)** \\\\ \\hline PSPNet [155] & 79.0 & 87.7 & 68.6 \\\\ ERFNet [149] & 78.9 & 85.8 & 69.1 \\\\ DANet [14] & 79.6 & 88.2 & 69.4 \\\\ DABNet [150] & 79.2 & 84.3 & 70.2 \\\\ Segmenter [154] & 84.1 & 88.1 & 73.6 \\\\ BOTNet [156] & 84.8 & 88.0 & 74.3 \\\\ FANet [16] & 85.4 & 88.9 & 75.6 \\\\ BiSeNet [151] & 84.3 & 87.1 & 75.8 \\\\ DeepLabV3+ [157] & 87.4 & 89.0 & - \\\\ ShellNet [153] & 87.5 & 89.8 & 78.3 \\\\ MARESU-Net [61] & 87.7 & 90.1 & 78.6 \\\\ Eawlet [15] & 87.7 & 89.7 & 78.7 \\\\ SwiftNet [152] & 88.3 & 90.2 & 79.6 \\\\ ABCNet [17] & 89.5 & 90.7 & 81.3 \\\\ BANet [60] & 89.6 & 90.5 & 81.4 \\\\ UNetFormer [13] & 90.4 & 91.0 & 82.7 \\\\ \\hline \\(\\star\\) LSKNet-T & 91.7 & **93.6** & 84.9 \\\\ \\(\\star\\) LSKNet-S & **91.8** & **93.6** & **85.1** \\\\ \\end{tabular}
\\end{table}
Table 10: Quantitative comparison results on the Vaihingen test set. OA: Overall Accuracy
\\begin{table}
\\begin{tabular}{l|c c c} Model & **mF1 \\(\\uparrow\\)** & **OA \\(\\uparrow\\)** & **mIOU \\(\\uparrow\\)** \\\\ \\hline ERFNet [149] & 85.8 & 84.5 & 76.2 \\\\ DABNet [150] & 88.3 & 86.7 & 79.6 \\\\ BiSeNet [151] & 89.8 & 88.2 & 81.7 \\\\ EaNet [15] & 90.6 & 88.7 & 83.4 \\\\ MARESU-Net [61] & 90.5 & 89.0 & 83.9 \\\\ DANet [14] & 88.9 & 89.1 & 80.3 \\\\ SwiftNet [152] & 91.0 & 89.3 & 83.8 \\\\ FANet [16] & 91.3 & 89.8 & 84.2 \\\\ ShelfNet [153] & 91.3 & 89.9 & 84.4 \\\\ ABCNet [17] & 92.7 & 91.3 & 86.5 \\\\ Segmenter [154] & 89.2 & 88.7 & 80.7 \\\\ BANet [60] & 92.5 & 91.0 & 86.3 \\\\ SwinUperNet [102] & 92.2 & 90.9 & 85.8 \\\\ UNetFormer [13] & 92.8 & 91.3 & 86.8 \\\\ \\hline \\(\\star\\) LSKNet-T & 92.9 & 91.7 & 86.7 \\\\ \\(\\star\\) LSKNet-S & **93.1** & **92.0** & **87.2** \\\\ \\end{tabular}
\\end{table}
Table 9: Quantitative comparison results on the Potsdam test set. OA: Overall AccuracyS2Looking [35] comprises 5,000 pairs of bi-temporal images captured by optical satellites worldwide. Each image is 1024 \\(\\times\\) 1024 pixels, with a GSD ranging from 0.5 to 0.8 meters. The dataset contains annotations for over 65,920 instances of binary changes.
#### 4.4.2 Change Detection Results
In change detection experiments, LSKNets are by default built within the Changer [68] framework due to its compelling performance and open-source availability. We conduct a comprehensive comparison of our proposed models, LSKNet-T and LSKNet-S, against 17 recent high-performance models on the LEVIR-CD and S2Looking datasets. The results given in Tab. 14
\\begin{table}
\begin{tabular}{l|c c c c c c c} \multicolumn{1}{c|}{Backbones} & \multicolumn{1}{c}{**mF1 \(\uparrow\)**} & \multicolumn{1}{c}{**OA \(\uparrow\)**} & \multicolumn{1}{c}{**mIoU \(\uparrow\)**} \\ \hline ConvNext-v2-N [146] & 75.1 & 78.9 & 62.5 \\ ResNet-50 [144] & 75.3 & 80.0 & 64.1 \\ Swin-T [102] & 77.8 & 80.8 & 65.6 \\ ResNeSt-50 [57] & 79.7 & 80.3 & 67.2 \\ VAN-S [89] & 80.2 & 82.1 & 68.2 \\ MSCAN-S [90] & 80.4 & 81.4 & 68.4 \\ \hline \(\star\) LSKNet-T & 79.4 & 81.5 & 67.2 \\ \(\star\) LSKNet-S & **83.2** & **82.3** & **69.6** \\ \hline \end{tabular}
\\end{table}
Table 13: Quantitative comparison results on the GID test set.
\\begin{table}
\\begin{tabular}{l|c|c c c c c c c} Method & **mIoU \\(\\uparrow\\)** & Back.G. & Building & Road & Water & Barren & Forest & Agriculture \\\\ \\hline Segmenter [154] & 47.1 & 38.0 & 50.7 & 48.7 & 77.4 & 13.3 & 43.5 & 58.2 \\\\ SegFormer [158] & 47.4 & 43.1 & 52.3 & 55.0 & 70.7 & 10.7 & 43.2 & 56.8 \\\\ DeepLabV3+ [157] & 47.6 & 43.0 & 50.9 & 52.0 & 74.4 & 10.4 & 44.2 & 58.5 \\\\ UNet [159] & 47.6 & 43.1 & 52.7 & 52.8 & 73.0 & 10.3 & 43.1 & 59.9 \\\\ UNet++ [160] & 48.2 & 42.9 & 52.6 & 52.8 & 74.5 & 11.4 & 44.4 & 58.8 \\\\ SemanticFPN [161] & 48.2 & 42.9 & 51.5 & 53.4 & 74.7 & 11.2 & 44.6 & 58.7 \\\\ FarSeg [162] & 48.2 & 43.1 & 51.5 & 53.9 & 76.6 & 9.8 & 43.3 & 58.9 \\\\ PSPNet [155] & 48.3 & 44.4 & 52.1 & 53.5 & 76.5 & 9.7 & 44.1 & 57.9 \\\\ FactSeg [163] & 48.9 & 42.6 & 53.6 & 52.8 & 76.9 & 16.2 & 42.9 & 57.5 \\\\ TransUNet [164] & 48.9 & 43.0 & 56.1 & 53.7 & 78.0 & 9.3 & 44.9 & 56.9 \\\\ BANet [60] & 49.6 & 43.7 & 51.5 & 51.1 & 76.9 & 16.6 & 44.9 & 62.5 \\\\ HRNet [165] & 49.8 & 44.6 & 55.3 & 57.4 & 78.0 & 11.0 & 45.3 & 60.9 \\\\ SwinUperNet [102] & 50.0 & 43.3 & 54.3 & 54.3 & 78.7 & 14.9 & 45.3 & 59.6 \\\\ DC-Swin [166] & 50.6 & 41.3 & 54.5 & 56.2 & 78.1 & 14.5 & **47.2** & 62.4 \\\\ UNetFormer [13] & 52.4 & 44.7 & 58.8 & 54.9 & 79.6 & 20.1 & 46.0 & 62.5 \\\\ Hi-ResNet [167] & 52.5 & **46.7** & 58.3 & 55.9 & 80.1 & 17.0 & 46.7 & **62.7** \\\\ \\hline \\(\\star\\) LSKNet-T & 53.2 & 46.4 & 59.5 & 57.1 & 79.9 & 21.8 & 46.6 & 61.4 \\\\ \\(\\star\\) LSKNet-S & **54.0** & **46.7** & **59.9** & **58.3** & **80.2** & **24.6** & 46.4 & 61.8 \\\\ \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\\\ \\end{tabular}
\\end{table}
Table 11: Quantitative comparison results on the LoveDA test set.
\\begin{table}
\\begin{tabular}{l|c|c c c c c c c} Method & **mIoU \\(\\uparrow\\)** & Clutter & Building & Road & Tree & Vegetation & Mo.Car & St.Car & Human \\\\ \\hline MSD [32] & 57.0 & 57.0 & 79.8 & 74.0 & 74.5 & 55.9 & 62.9 & 32.1 & 19.7 \\\\ CANet [168] & 63.5 & 66.0 & 86.6 & 62.1 & 79.3 & **78.1** & 47.8 & **68.3** & 19.9 \\\\ DANet [14] & 60.6 & 64.9 & 85.9 & 77.9 & 78.3 & 61.5 & 59.6 & 47.4 & 9.1 \\\\ SwiftNet [152] & 61.1 & 64.1 & 85.3 & 61.5 & 78.3 & 76.4 & 51.1 & 62.1 & 15.7 \\\\ BiSeNet [151] & 61.5 & 64.7 & 85.7 & 61.1 & 78.3 & 77.3 & 48.6 & 63.4 & 17.5 \\\\ MANet [61] & 62.6 & 64.5 & 85.4 & 77.8 & 77.0 & 60.3 & 67.2 & 53.6 & 14.9 \\\\ ABCNet [17] & 63.8 & 67.4 & 86.4 & 81.2 & 79.9 & 63.1 & 69.8 & 48.4 & 13.9 \\\\ Segmenter [154] & 58.7 & 64.2 & 84.4 & 79.8 & 76.1 & 57.6 & 59.2 & 34.5 & 14.2 \\\\ SegFormer [158] & 66.0 & 66.6 & 86.3 & 80.1 & 79.6 & 62.3 & 72.5 & 52.5 & 28.5 \\\\ BANet [60] & 64.6 & 66.7 & 85.4 & 80.7 & 78.9 & 62.1 & 69.3 & 52.8 & 21.0 \\\\ BOTNet [156] & 63.2 & 64.5 & 84.9 & 78.6 & 77.4 & 60.5 & 65.8 & 51.9 & 22.4 \\\\ CoaT [169] & 65.8 & 69.0 & **88.5** & 80.0 & 79.3 & 62.0 & 70.0 & 59.1 & 18.9 \\\\ UNetFormer [13] & 67.8 & 68.4 & 87.4 & 81.5 & 80.2 & 63.5 & 73.6 & 56.4 & 31.0 \\\\ \\hline \\(\\star\\) LSKNet-T & 59.3 & **69.6** & 87.9 & 82.8 & 80.6 & 64.8 & **77.3** & 60.2 & 31.3 \\\\ \\(\\star\\) LSKNet-S & **70.0** & **69.6** & 84.8 & **82.9** & **80.9** & 65.5 & 76.8 & **64.9** & **31.8** \\\\ \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\\\ \\end{tabular}
\\end{table}
Table 12: Quantitative comparison results on the UAVid test set.
justify that the proposed LSKNet-T and LSKNet-S models have very compelling performance, surpassing all other state-of-the-art methods across all datasets in all main metrics (F1 and IoU).
### Ablation Study
In this section, we report ablation study results on the DOTA-v1.0 test set. The choice of the DOTA-v1.0 dataset for the ablation study is motivated by two factors. Firstly, object detection is known to be a practical and challenging task, and the DOTA-v1.0 dataset provides a diverse and complex set of objects and scenes for evaluation. Secondly, the availability of numerous models allows for comprehensive comparisons, enabling a thorough assessment of the effectiveness of our proposed method. In ablation studies, we adopt the 100-epoch backbone pretraining schedule for experimental efficiency (Tab. 15, 16, 17, 18, 19).
**Large Kernel Decomposition.** Deciding on the number of kernels to decompose is a critical choice for the LSK module. We follow Eq. (1) to configure the decomposed kernels. The results of the ablation study on the number of large kernel decompositions, when the theoretical receptive field is fixed at 29, are shown in Tab. 15. It suggests that decomposing the large kernel into two depth-wise large kernels results in a good trade-off between speed and accuracy, achieving the best performance in terms of both FPS (frames per second) and mAP (mean average precision).
**Kernel Receptive Field Size.** Based on our evaluations presented in Tab. 15, we find that decomposing the large kernel into two depth-wise kernels in **series** is optimal. Furthermore, Tab. 16 shows that excessively small or large receptive fields can hinder the performance of the LSKNet, and a receptive field size of approximately 23 is determined to be the most effective.
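The receptive fields reported in Tab. 15 and Tab. 16 follow directly from Eq. (2); the small helper below (our illustration) reproduces them for the ablated (\(k\), \(d\)) sequences.

```python
def receptive_field(kd_sequence):
    """RF_1 = k_1, RF_i = d_i * (k_i - 1) + RF_{i-1}  (Eq. 2)."""
    rf = 0
    for i, (k, d) in enumerate(kd_sequence):
        rf = k if i == 0 else d * (k - 1) + rf
    return rf

print(receptive_field([(5, 1), (7, 3)]))          # 23
print(receptive_field([(5, 1), (7, 4)]))          # 29
print(receptive_field([(3, 1), (5, 2), (7, 3)]))  # 29
```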
**Comparison with SKNet and Different Attention Selection Types.** There are two key distinctions between SKNet and LSKNet. Firstly, the proposed selective mechanism relies on explicit feature flow through a **series** of large kernels via decomposition, which deviates from the approach taken by most existing attention-based methods.
\\begin{table}
\\begin{tabular}{l c c c c} \\((k,\\,d)\\) sequence & RF & Num. & FPS & mAP (\\%) \\\\ \\hline (29, 1) & 29 & 1 & 18.6 & 80.66 \\\\ (5, 1) \\(\\longrightarrow\\) (7, 4) & 29 & 2 & **20.5** & **80.91** \\\\ (3, 1) \\(\\longrightarrow\\) (5, 2) \\(\\longrightarrow\\) (7, 3) & 29 & 3 & 19.2 & 80.77 \\\\ \\end{tabular}
\\end{table}
Table 15: **The effects of the number of decomposed large kernels on the inference FPS and mAP, given the theoretical receptive field is 29. Decomposing the large kernel into two depth-wise kernels achieves the best performance of speed and accuracy.**
\\begin{table}
\\begin{tabular}{l|c c c c|c c c c} Method & \\multicolumn{5}{c|}{**LEVIR-CD [34]**} & \\multicolumn{5}{c}{**S2Looking [35]**} \\\\ \\cline{2-10} & **Precision \\(\\uparrow\\)** & **Recall \\(\\uparrow\\)** & **F1 \\(\\uparrow\\)** & **IoU \\(\\uparrow\\)** & **Precision \\(\\uparrow\\)** & **Recall \\(\\uparrow\\)** & **F1 \\(\\uparrow\\)** & **IoU \\(\\uparrow\\)** \\\\ \\hline FC-EF [63] & 86.91 & 80.17 & 83.40 & 71.53 & 81.36 & 8.95 & 7.65 & 8.77 \\\\ FC-Siam-Conc [63] & 91.99 & 76.77 & 83.69 & 71.96 & **83.29** & 15.76 & 13.19 & 15.28 \\\\ FC-Siam-Di [63] & 89.53 & 83.31 & 86.31 & 75.92 & 68.27 & 18.52 & 13.54 & 17.05 \\\\ STANet [34] & 83.81 & 91.00 & 87.26 & 77.40 & 38.75 & 56.49 & 45.97 & 29.84 \\\\ DTCDSCN [170] & 88.53 & 86.83 & 87.67 & 78.05 & 68.58 & 49.16 & 57.27 & 40.12 \\\\ HANet [171] & 91.21 & 89.36 & 90.28 & 82.27 & 61.38 & 55.94 & 58.54 & 41.38 \\\\ CDNet [172] & 91.60 & 86.50 & 89.00 & 80.14 & 67.48 & 54.93 & 60.56 & 43.43 \\\\ CDMC [173] & 93.09 & 88.07 & 90.51 & 82.67 & 64.88 & 58.15 & 61.34 & 44.23 \\\\ IFNet [67] & 91.17 & 90.51 & 90.83 & 83.22 & 66.46 & 61.95 & 64.13 & 47.19 \\\\ SNUNet [64] & 92.45 & 90.17 & 91.30 & 83.99 & 71.94 & 56.34 & 63.19 & 46.19 \\\\ BiT [70] & 91.97 & 88.62 & 90.26 & 82.26 & 74.80 & 55.56 & 63.76 & 46.80 \\\\ HCGMNet [174] & 92.96 & 90.61 & 91.77 & 84.79 & 72.51 & 57.06 & 63.87 & 46.91 \\\\ ChangeFormer [65] & 92.59 & 89.68 & 91.11 & 83.67 & 72.82 & 56.13 & 63.39 & 46.41 \\\\ C2FNet [175] & 93.69 & 89.47 & 91.83 & 84.89 & 74.84 & 54.14 & 62.83 & 45.80 \\\\ CGNet [176] & 93.15 & 90.90 & 92.01 & 85.21 & 70.18 & 59.38 & 64.33 & 47.41 \\\\ DiFormer [71] & **93.75** & 90.59 & 92.15 & 85.44 & 72.39 & 61.19 & 66.31 & 49.60 \\\\ Changer-MiT\\_b0 [68] & 93.61 & 90.56 & 92.06 & 85.29 & 73.01 & 62.04 & 67.08 & 50.47 \\\\ \\hline \\(\\star\\) LSKNet-T & 92.56 & **91.83** & 92.19 & 85.51 & 70.44 & **64.46** & 67.32 & 50.74 \\\\ \\(\\star\\) LSKNet-S & 93.34 & 91.23 & **92.27** & **85.65** & 71.90 & 63.64 & **67.52** & **50.96** \\\\ \\end{tabular}
\\end{table}
Table 14: Quantitative comparison results for change detection on LEVIR-CD and S2Looking datasets.
In contrast, SKNet employs **parallel** decomposition. Secondly, LSKNet adaptively aggregates information across large kernels in the spatial dimension, instead of the channel dimension utilized in SKNet or LSKNet-CS. This design is more intuitive and effective for remote sensing tasks, as channel-wise selection fails to capture the spatial variance of different targets across the image space. Additionally, we evaluate a variant of LSKNet that leverages both spatial and channel selection. Our experiments in Tab. 16 suggest that, in detection tasks, spatial information plays a more critical role. However, the inclusion of both spatial and channel selection may introduce extra difficulty in model optimization, leading to a slight performance drop. A comprehensive conceptual comparison of the module architectures of SKNet, LSKNet, LSKNet-CS (channel selection version) and LSKNet-SCS (spatial and channel selection version) is presented in Supplementary Materials.
**Pooling Layers in Spatial Selection.** We conduct experiments to determine the optimal pooling layers for spatial selection, as reported in Tab. 17. The results suggest that using both max and average pooling in the spatial selection component of our LSK module provides the best performance without sacrificing inference speed.
**Performance of LSKNet backbone under different detection frameworks.** To validate the generality and effectiveness of our proposed LSKNet backbone, we evaluate its performance under various remote sensing detection frameworks, including the two-stage frameworks O-RCNN [11] and RoI Transformer [52] as well as the one-stage frameworks S\\({}^{2}\\)A-Net [53] and R3Det [9]. The results in Tab. 19 show that our proposed LSKNet-T backbone significantly improves detection performance compared to ResNet-18, while using only 38% of its parameters and 50% fewer FLOPs. These findings underscore the lightweight yet powerful and general nature of the proposed LSKNet backbone.
**Comparison with Other Large Kernel/Selective Attention Backbones.** We also compare our LSKNet with 9 popular or high-performance backbone models with large kernels or dynamic/selective attention. As shown in Tab. 18, the ViTDet [132], which uses the vanilla ViT [41] backbone, has the largest model size and computational complexity among the compared models and performs poorly across all tasks. Observations in Tab. 5 show that it performs particularly poorly on objects with distinct fine-grained features (such as ball courts and Helicopter), suggesting that purely global contextual information is not efficient or informative enough for remote sensing scenarios. Under similar or smaller model size and complexity budgets, our LSKNet outperforms all other models in remote sensing object detection (on DOTA-v1.0), segmentation (on Vaihingen) and change detection (on LEVIR-CD), highlighting its effectiveness in capturing and processing semantic features in remote sensing images.
## 5 Analysis
We perform analysis specifically focused on the object detection task due to the significance of instance-level information in understanding the overall behaviour of the model.
**Detection Results Visualization.** Visualization examples of detection results and Eigen-CAM [178] are shown in Fig. 5.
\\begin{table}
\\begin{tabular}{c c c c c|c c c} \\multicolumn{2}{c|}{\\((k_{1},\\,d_{1})\\)\\((k_{2},\\,d_{2})\\)} & Flow & CS & SS & RF & FPS & mAP \\\\ \\hline \\hline (3, 1) & (5, 2) & Series & - & - & 11 & 22.1 & 80.80 \\\\ (5, 1) & (7, 3) & Series & - & - & 23 & 21.7 & 80.94 \\\\ (5, 1) & (7, 4) & Series & - & - & 29 & 20.5 & 80.91 \\\\ (7, 1) & (9, 4) & Series & - & - & 39 & 21.3 & 80.84 \\\\ \\hline (3, 1) & (5, 1) & Parallel & ✓ & - & 5 & 23.3 & 80.19 (SKNet[59]) \\\\ (5, 1) & (7, 3) & Series & ✓ & - & 23 & 19.6 & 80.57 (LSKNet-CS) \\\\ (5, 1) & (7, 3) & Series & ✓ & ✓ & 23 & 18.6 & 80.82 (LSKNet-SCS) \\\\ (5, 1) & (7, 3) & Series & - & ✓ & 23 & 20.7 & **81.31** (LSKNet) \\\\ \\end{tabular}
\\end{table}
Table 16: **The effectiveness of the key design components** of the LSKNet when the large kernel is decomposed into a sequence of two depth-wise kernels. CS: channel selection; SS: spatial selection **(ours)**. The LSKNet achieves the best performance when using a reasonably large receptive field with spatial selection.
\\begin{table}
\\begin{tabular}{c c|c|c} \\multicolumn{2}{c|}{Pooling} & \\multirow{2}{*}{FPS} & \\multirow{2}{*}{mAP (\\%)} \\\\ Max. & Avg. & & \\\\ \\hline ✓ & & 20.7 & 81.23 \\\\ & ✓ & 20.7 & 81.12 \\\\ ✓ & ✓ & 20.7 & **81.31** \\\\ \\end{tabular}
\\end{table}
Table 17: Ablation study on the effectiveness of the **maximum and average pooling in spatial selection** of our proposed LSK module. The best result is obtained when using both.
LSKNet can capture a reasonable range of context information relevant to the detected targets, leading to better performance in various hard cases, which justifies our _prior 1)_. In contrast, ResNet typically captures only a limited range of context information, while ViTDet captures a large range but coarse spatial information, making it challenging to model fine-grained details when objects are small and crowded. Both models exhibit limited performance in challenging scenarios.
**Relative Context Range for Different Objects.** To investigate the relative range of
\\begin{table}
\\begin{tabular}{l|l c c|c c c|c c c|c c c} \\multirow{2}{*}{Group} & \\multirow{2}{*}{Model} & \\multirow{2}{*}{**\\#P**} & \\multirow{2}{*}{**Flops**} & \\multicolumn{2}{c|}{**DOTA-v1.0**} & \\multicolumn{2}{c|}{**Vahingen**} & \\multicolumn{3}{c}{**LEVIR-CD**} \\\\ \\cline{5-13} & & & & **mAP** & **@50** & **@75** & **F1** & **OA** & **mIoU** & **P** & **R.** & **F1** & **IoU** \\\\ \\hline Baseline & ResNet-18 & 11.2M & 38.1G & 50.54 & 79.27 & 55.33 & 90.15 & 92.62 & 82.47 & 92.97 & 90.61 & 91.77 & 84.80 \\\\ \\hline \\multirow{9}{*}{\\begin{tabular}{} \\end{tabular} } & ViTDet [132] & 86.6M & 394.9G & 45.60 & 74.41 & 49.39 & 81.01 & 83.74 & 54.91 & 80.72 & 90.59 & 85.37 & 74.48 \\\\ & ConvNeXt v2-N [146] & 15.0M & 51.2G & 52.91 & 80.81 & 58.58 & 89.13 & 92.15 & 81.17 & 93.12 & 89.73 & 91.39 & 84.15 \\\\ \\cline{1-1} & Swin-T [102] & 28.3M & 91.1G & 51.54 & 80.81 & 56.71 & 90.74 & 93.01 & 83.40 & 93.04 & 90.25 & 91.63 & 84.55 \\\\ \\cline{1-1} & MSCAN-S [90] & 13.1M & 45.0G & 52.52 & 81.12 & 57.92 & 91.16 & 93.04 & 84.10 & 93.39 & 91.14 & 92.25 & 85.62 \\\\ \\cline{1-1} & VAN-B1 [89] & 13.4M & 52.7G & 52.69 & 81.15 & 58.11 & 91.30 & 93.12 & 84.41 & 93.31 & 91.20 & 92.24 & 85.60 \\\\ \\hline \\multirow{2}{*}{
\\begin{tabular}{} \\end{tabular} } & ResNet-14 [57] & 8.6M & 57.9G & 49.79 & 79.51 & 53.41 & 90.31 & 92.84 & 82.72 & 92.47 & 90.38 & 91.41 & 84.18 \\\\ & SCNet-18 [58] & 14.0M & 50.7G & 49.91 & 79.69 & 53.55 & 90.50 & 92.97 & 83.04 & 92.03 & 91.27 & 91.65 & 84.58 \\\\ \\cline{1-1} & DCN-Res50 [177] & 26.2M & 121.2G & 49.26 & 79.74 & 52.97 & 90.93 & 93.07 & 83.72 & 92.84 & 90.67 & 91.74 & 84.74 \\\\ \\cline{1-1} & SKNet-26 [59] & 14.5M & 58.5G & 51.53 & 80.67 & 56.51 & 90.83 & 93.01 & 83.56 & 93.09 & 91.09 & 92.08 & 85.32 \\\\ \\hline
**Ours** & \\(\\star\\) LSKNet-S & 14.4M & 54.4G & **53.32** & **81.48** & **58.83** & **91.81** & **93.61** & **85.12** & **93.44** & **91.13** & **92.27** & **85.65** \\\\ \\hline \\end{tabular}
\\end{table}
Table 18: **Comparison on LSKNet-S and other (large kernel or dynamic/selective attention) backbones** in remote sensing object detection (on DOTA-v1.0), segmentation (on Vaihingen) and change detection (on LEVIR-CD). Our LSKNet achieves the best mAP under similar or less complexity budgets.
Figure 5: **Eigen-CAM visualization** of Oriented RCNN detection framework with ResNet-50, ViTDet and LSKNet-S. Our proposed LSKNet can model a reasonably long range of context information, leading to better performance in various hard cases.
receptive field for each object category, we define \\(R_{c}\\) as the _Ratio of Expected Selective RF Area and GT Bounding Box Area_ for category \\(c\\):
\\[R_{c}=\\frac{\\sum_{i=1}^{I_{c}}A_{i}/B_{i}}{I_{c}}, \\tag{11}\\] \\[A_{i}=\\sum_{d=1}^{D}\\sum_{n=1}^{N}|\\widetilde{\\mathbf{SA}}_{n}^{d}\\cdot RF_{n}|,\\quad B_{i}=\\sum_{j=1}^{J_{i}}Area(\\text{GT}_{j}), \\tag{12}\\]
where \\(I_{c}\\) is the number of images that contain the object category \\(c\\) only. \\(A_{i}\\) is the sum of the spatial selection activation over all LSK blocks for input image \\(i\\), where \\(D\\) is the number of blocks in an LSKNet and \\(N\\) is the number of decomposed large kernels in an LSK module. \\(B_{i}\\) is the total pixel area of all \\(J_{i}\\) annotated oriented object bounding boxes (GT). For easier comparison, Fig. 6 reports the normalized \\(R_{c}\\), which represents the relative range of context required for each object category.
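For concreteness, \\(R_{c}\\) can be computed offline from the saved per-block activation terms and the annotated box areas. The sketch below assumes a simple per-image record layout (the field names are ours) and is only meant to mirror Eqs. (11)-(12).

```python
from collections import defaultdict

def ratio_of_selective_rf(image_records):
    """Mirror of Eqs. (11)-(12). Each record describes one single-category image:
       {"category": str,
        "activation_rf": per-block values of |SA_n^d * RF_n| already summed over n,
        "gt_areas": pixel areas of the J_i annotated oriented boxes}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in image_records:
        a_i = sum(rec["activation_rf"])        # Eq. (12), summed over blocks d (and kernels n)
        b_i = max(sum(rec["gt_areas"]), 1.0)   # total GT box area, guarded against empty images
        sums[rec["category"]] += a_i / b_i
        counts[rec["category"]] += 1
    return {c: sums[c] / counts[c] for c in sums}  # Eq. (11): average over the I_c images
```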
The results suggest that the Bridge category stands out as requiring a greater amount of additional contextual information than other categories, primarily because it shares features with roads and requires contextual clues to ascertain whether it is enveloped by water. Similarly, the Roundabout category also has a relatively high \\(R_{c}\\) of 0.57. Conversely, the Court categories have relatively low \\(R_{c}\\) values, all below 0.1, as they necessitate minimal contextual information due to their distinctive textural attributes, specifically the court boundary lines. This aligns with our domain knowledge and further supports _prior 2)_ that the relative range of contextual information required for different object categories varies greatly.
**Kernel Selection Behaviour.** We further investigate the kernel selection behaviour in our LSKNet. For object category \\(c\\), the _Kernel Selection Difference_\\(\\Delta A_{c}\\) (i.e., larger kernel selection - smaller kernel selection) of an LSKNet-T block is defined as:
\\[\\Delta A_{c}=|\\widetilde{\\mathbf{SA}}_{larger}-\\widetilde{\\mathbf{SA}}_{smaller }|. \\tag{13}\\]
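A hypothetical helper mirroring Eq. (13) is sketched below; reducing the per-pixel difference map to a single scalar with a mean, and the subsequent normalization over images, are assumptions on our part.

```python
import torch

def kernel_selection_difference(sa_larger: torch.Tensor, sa_smaller: torch.Tensor) -> torch.Tensor:
    """Eq. (13) for one LSK block: difference between the larger- and smaller-kernel
    spatial selection maps (each of shape (H, W)), reduced to a scalar for plotting."""
    return (sa_larger - sa_smaller).abs().mean()
```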
We demonstrate the normalized \\(\\Delta A_{c}\\) over all images for three typical categories: Bridge, Roundabout and Soccer-ball-field and for each LSKNet-T block in Fig. 7. As expected, the \\(\\Delta A_{c}\\) of all blocks for Bridge is higher than that of Roundabout by about 30% on average, and Roundabout is higher than Soccer-ball-field by about 70%.
\\begin{table}
\\begin{tabular}{l|c c} Frameworks & ResNet-18 & \\(\\star\\) LSKNet-T \\\\ \\hline ORCNN [11] & 79.27 & 81.31 (**+2.04**) \\\\ RoI Trans. [52] & 78.32 & 80.89 (**+2.57**) \\\\ S\\({}^{2}\\)A-Net [53] & 76.82 & 80.15 (**+3.33**) \\\\ R3Det [9] & 74.16 & 78.39 (**+4.23**) \\\\ \\hline \\#P (backbone only) & 11.2M & 4.3M (**-62\\%**) \\\\ FLOPs (backbone only) & 38.1G & 19.1G (**-50\\%**) \\\\ \\end{tabular}
\\end{table}
Table 19: **Comparison of LSKNet-T and ResNet-18** as backbones with different detection frameworks on DOTA-v1.0. The lightweight LSKNet-T achieves significantly higher mAP in various frameworks than ResNet-18.
Figure 6: Normalised **Ratio \\(R_{c}\\) of Expected Selective RF Area and GT Bounding Box Area** for object categories in DOTA-v1.0. The relative range of context required for different object categories varies a lot. The visualized receptive field is obtained from Eq. (8) (i.e., the spatial activation) of our well-trained LSKNet model.
Figure 7: Normalised **Kernel Selection Difference** in the LSKNet-T blocks for Bridge, Roundabout and Soccer-ball-field. B.i.j represents the j-th LSK block in stage i. A greater value is indicative of a dependence on a broader context.
This aligns with the common-sense expectation that Soccer-ball-field does not require a large amount of context, since its own texture characteristics are already sufficiently distinct and discriminative.
We also discover another, somewhat surprising selection pattern of LSKNet across network depth: LSKNet usually utilizes larger kernels in its shallow stages and smaller kernels at higher levels. The average \\(\\Delta A_{c}\\) for the first-stage blocks is 0.78, while for the second- and third-stage blocks it is 0.40, and for the last-stage blocks it is only 0.33. This indicates that the network tends to quickly capture information from large receptive fields in low-level layers so that the higher-level semantics already contain sufficient receptive fields for better discrimination.
**Spatial Activation Visualisations.** Spatial activation map examples for more object categories in DOTA-v1.0 are shown in Fig. 8, where the activation map is obtained from Eq. (8) (i.e., the spatial activation) of our well-trained LSKNet model. The object categories are arranged in decreasing order from top left to bottom right based on the _Ratio of Expected Selective RF Area and GT Bounding Box Area_ as illustrated in Fig. 6. The spatial activation visualization results also demonstrate that the model's behaviour aligns with our proposed two priors and the above analysis, which in turn verifies the effectiveness of the proposed mechanism.
## 6 Conclusion
In this paper, we propose the lightweight Large Selective Kernel Network (LSKNet) as a novel approach for tackling downstream tasks in remote sensing images, such as scene classification, object detection, and semantic segmentation. LSKNet is specifically designed to leverage the inherent characteristics of remote sensing images: the need for a wider and adaptable contextual understanding. By adapting a large spatial receptive field, LSKNet can effectively capture and model diverse contextual nuances exhibited by different object types in remote sensing images. Extensive experiments demonstrate that our proposed lightweight model achieves state-of-the-art performance on competitive remote sensing benchmarks. The comprehensive analysis conducted throughout the paper validates the effectiveness and significance of our proposed lightweight model.
## Acknowledgement
This research was supported by the National Key Research and Development Program of China (No. 2021YFB3100800), Young Scientists Fund of the National Natural Science Foundation of China (Grant NO. 62206134, 62361166670, 62276145,
Figure 8: Receptive field activation for more object categories in DOTA-v1.0, where the activation map is obtained from the Eq. (8) (i.e., the spatial activation) of our well-trained LSKNet model.
62176130, 62225604, 62301261), the Fundamental Research Funds for the Central Universities (Nankai University, 070-63233084, 070-63233089), the Tianjin Key Lab of VCIP. Computation is supported by the Supercomputing Center of Nankai University, and the China Postdoctoral Science Foundation (NO. 2021M701727).
## Data Availability Statement
**Data publicly available in a repository:**
The Imagenet dataset is available at [https://www.image-net.org/](https://www.image-net.org/)
The UCM dataset is available at [http://weegee.vision.ucmerced.edu/datasets/landuse.html](http://weegee.vision.ucmerced.edu/datasets/landuse.html)
The AID dataset is available at [https://captain-whu.github.io/AID/](https://captain-whu.github.io/AID/)
The NWPU dataset is available at [https://www.tensorflow.org/datasets/catalog/resisc45](https://www.tensorflow.org/datasets/catalog/resisc45)
The MillionAID dataset is available at [https://captain-whu.github.io/DiRS/](https://captain-whu.github.io/DiRS/)
The DOTA dataset is available at [https://captain-whu.github.io/DOTA/dataset.html](https://captain-whu.github.io/DOTA/dataset.html)
The FAIR1M-v1.0 dataset is available at [https://www.gaofen-challenge.com/benchmark](https://www.gaofen-challenge.com/benchmark)
SAR-Aircraft dataset is available at: [https://radars.ac.cn/web/data/getData?](https://radars.ac.cn/web/data/getData?) dataType=SARDataset_en
The Potsdam and Vaihingen datasets are available at [https://www.isprs.org/education/benchmarks/UrbanSemLab/default.aspx](https://www.isprs.org/education/benchmarks/UrbanSemLab/default.aspx)
The LoveDA dataset is available at [https://codalab.lisn.upsaclay.fr/competitions/421](https://codalab.lisn.upsaclay.fr/competitions/421)
The UAVid dataset is available at [https://uavid.nl/](https://uavid.nl/)
The GID dataset is available at [https://x-ytong.github.io/project/GID.html](https://x-ytong.github.io/project/GID.html)
The LEVIR-CD dataset is available at [https://justchenhao.github.io/LEVIR/](https://justchenhao.github.io/LEVIR/)
The S2Looking dataset is available at [https://github.com/S2Looking/Dataset](https://github.com/S2Looking/Dataset)
## References
* [1] Chen, S.-B., Wei, Q.-S., Wang, W.-Z., Tang, J., Luo, B., Wang, Z.-Y.: Remote sensing scene classification via multi-branch local attention network. TIP (2022)
* [2] Zhao, Q., Lyu, S., Li, Y., Ma, Y., Chen, L.: Mgml: Multigranularity multilevel feature ensemble network for remote sensing scene classification. IEEE Transactions on Neural Networks and Learning Systems (2022)
* [3] Zhao, Q., Ma, Y., Lyu, S., Chen, L.: Embedded self-distillation in compact multibranch ensemble network for remote sensing scene classification. TGRS (2022)
* [4] Li, F., Feng, R., Han, W., Wang, L.: High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network. TGRS (2020)
* [5] Wang, D., Zhang, J., Du, B., Xia, G.-S., Tao, D.: An empirical study of remote sensing pretraining. TGRS (2022)
* [6] Wang, D., Zhang, Q., Xu, Y., Zhang, J., Du, B., Tao, D., Zhang, L.: Advancing plain vision transformer towards remote sensing foundation model. TGRS (2022)
* [7] Sun, X., Wang, P., Lu, W., Zhu, Z., Lu, X., He, Q., Li, J., Rong, X., Yang, Z., Chang, H., He, Q., Yang, G., Wang, R., Lu, J., Fu, K.: Ringmo: A remote sensing foundation model with masked image modeling. TGRS (2023)
* [8] Han, J., Ding, J., Xue, N., Xia, G.-S.: ReDet: A rotation-equivariant detector for aerial object detection. In: CVPR (2021)
* [9] Yang, X., Liu, Q., Yan, J., Li, A.: R3Det: Refined single-stage detector with feature refinement for rotating object. CoRR (2019)
* [10] Yang, X., Yan, J., Ming, Q., Wang, W., Zhang, X., Tian, Q.: Rethinking rotated object detection with Gaussian Wasserstein distance loss. In: ICML (2021)
* [11] Xie, X., Cheng, G., Wang, J., Yao, X., Han, J.: Oriented R-CNN for object detection. In: ICCV (2021)
* [12] Xu, Y., Fu, M., Wang, Q., Wang, Y., Chen, K., Xia, G.-S., Bai, X.: Gliding vertex on the horizontal bounding box for multi-orientedobject detection. TPAMI (2021)
* [13] Wang, L., Li, R., Zhang, C., Fang, S., Duan, C., Meng, X., Atkinson, P.M.: UNet-Former: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS Journal of Photogrammetry and Remote Sensing (2022)
* [14] Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., Lu, H.: Dual attention network for scene segmentation. In: CVPR (2019)
* [15] Zheng, X., Huan, L., Xia, G.-S., Gong, J.: Parsing very high resolution urban scene images by learning deep convnets with edge-aware loss. ISPRS Journal of Photogrammetry and Remote Sensing (2020)
* [16] Hu, P., Perazzi, F., Heilbron, F.C., Wang, O., Lin, Z., Saenko, K., Sclaroff, S.: Real-time semantic segmentation with fast attention. IEEE Robotics and Automation Letters (2020)
* [17] Li, R., Zheng, S., Zhang, C., Duan, C., Wang, L., Atkinson, P.M.: ABCNet: Attentive bilateral contextual network for efficient semantic segmentation of fine-resolution remotely sensed imagery. ISPRS Journal of Photogrammetry and Remote Sensing (2021)
* [18] Chen, Y., Yuan, X., Wu, R., Wang, J., Hou, Q., Cheng, M.-M.: YOLO-MS: Rethinking multi-scale representation learning for real-time object detection. arXiv (2023)
* [19] Zhang, W., Jiao, L., Li, Y., Huang, Z., Wang, H.: Laplacian feature pyramid network for object detection in vhr optical remote sensing images. TGRS (2022)
* [20] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., _et al._: Segment anything. In: ICCV (2023)
* [21] Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. NeurIPS (2024)
* [22] Chen, K., Liu, C., Chen, H., Zhang, H., Li, W., Zou, Z., Shi, Z.: Rsprompter: Learning to prompt for remote sensing instance segmentation based on visual foundation model. TGRS (2024)
* [23] Kuckreja, K., Danish, M.S., Naseer, M., Das, A., Khan, S., Khan, F.S.: Geochat: Grounded large vision-language model for remote sensing. arXiv (2023)
* [24] Li, Y., Hou, Q., Zheng, Z., Cheng, M.-M., Yang, J., Li, X.: Large selective kernel network for remote sensing object detection. In: ICCV (2023)
* [25] Yang, Y., Newsam, S.: Bag-of-visual-words and spatial extensions for land-use classification. In: Proceedings of the International Conference on Advances in Geographic Information Systems (2010)
* [26] Xia, G.-S., Hu, J., Hu, F., Shi, B., Bai, X., Zhong, Y., Zhang, L., Lu, X.: AID: A benchmark data set for performance evaluation of aerial scene classification. TGRS (2017)
* [27] Cheng, G., Han, J., Lu, X.: Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE (2017)
* [28] Zhirui, W., Sun, X.: SAR-AIRcraft-1.0: High-resolution SAR Aircraft Detection and Recognition Dataset. [https://radars.ac.cn/web/data/getData?](https://radars.ac.cn/web/data/getData?) dataType=SARDataset_en (2023)
* [29] ISPRS 2D Semantic Labeling Contest - Potsdam. [https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-potsdam.aspx](https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-potsdam.aspx) (2022)
* [30] ISPRS 2D Semantic Labeling Contest - Vaihingen. [https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-vaihingen.aspx](https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-vaihingen.aspx) (2022)
* [31] Wang, J., Zheng, Z., Ma, A., Lu, X., Zhong, Y.: LoveDA: A remote sensing land-coverdataset for domain adaptive semantic segmentation. arXiv (2021)
* [32] Lyu, Y., Vosselman, G., Xia, G.-S., Yilmaz, A., Yang, M.Y.: UAVid: A semantic segmentation dataset for uav imagery. ISPRS Journal of Photogrammetry and Remote Sensing (2020)
* [33] Tong, X.-Y., Xia, G.-S., Lu, Q., Shen, H., Li, S., You, S., Zhang, L.: Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sensing of Environment (2020)
* [34] Chen, H., Shi, Z.: A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sensing (2020)
* [35] Shen, L., Lu, Y., Chen, H., Wei, H., Xie, D., Yue, J., Chen, R., Lv, S., Jiang, B.: S2looking: A satellite side-looking dataset for building change detection. Remote Sensing (2021)
* [36] Xia, G.-S., Bai, X., Ding, J., Zhu, Z., Belongie, S., Luo, J., Datcu, M., Pelillo, M., Zhang, L.: DOTA: A large-scale dataset for object detection in aerial images. In: CVPR (2018)
* [37] Liu, Z., Wang, H., Weng, L., Yang, Y.: Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds. TGRS Letters (2016)
* [38] Sun, X., Wang, P., Yan, Z., Xu, F., Wang, R., Diao, W., Chen, J., Li, J., Feng, Y., Xu, T., Weinmann, M., Hinz, S., Wang, C., Fu, K.: FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing (2022)
* [39] Su, Z., Zhang, J., Wang, L., Zhang, H., Liu, Z., Pietikainen, M., Liu, L.: Lightweight pixel difference networks for efficient visual representation learning. TPAMI (2023)
* [40] Sun, S., Zhi, S., Liao, Q., Heikkila, J., Liu, L.: Unbiased scene graph generation via two-stage causal modeling. TPAMI (2023)
* [41] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021)
* [42] Deng, P., Xu, K., Huang, H.: When CNNs meet vision transformer: A joint framework for remote sensing scene classification. TGRS Letters (2022)
* [43] Bazi, Y., Bashmal, L., Rahhal, M.M.A., Dayil, R.A., Ajlan, N.A.: Vision transformers for remote sensing image classification. Remote Sensing (2021)
* [44] Zhang, Q., Xu, Y., Zhang, J., Tao, D.: Vitaev2: Vision transformer advanced by exploring inductive bias for image recognition and beyond. IJCV (2023)
* [45] Long, Y., Xia, G.-S., Li, S., Yang, W., Yang, M.Y., Zhu, X.X., Zhang, L., Li, D.: On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2021)
* [46] Zaidi, S.S.A., Ansari, M.S., Aslam, A., Kanwal, N., Asghar, M., Lee, B.: A survey of modern deep learning based object detection models. Digital Signal Processing (2022)
* [47] Mei, J., Zheng, Y.-B., Cheng, M.-M.: D2ANet: Difference-aware attention network for multi-level change detection from satellite imagery. Computational Visual Media (2023)
* [48] Sun, X., Tian, Y., Lu, W., Wang, P., Niu, R., Yu, H., Fu, K.: From single- to multimodal remote sensing imagery interpretation: a survey and taxonomy. Science China Information Sciences (2023)
* [49] Zhang, W., Deng, W., Cui, Z., Liu, J., Jiao, L.: Object knowledge distillation for joint detection and tracking in satellite videos. TGRS (2024)
* [50] Zhang, W., Jiao, L., Liu, F., Yang, S., Liu, J.: Dfat: Dynamic feature-adaptive tracking. IEEE Transactions on Circuits and Systems for Video Technology (2023)
* [51] Li, Y., Li, X., Li, W., Hou, Q., Liu, L., Cheng, M.-M., Yang, J.: Sardet-100k: Towards open-source benchmark and toolkit for large-scale sar object detection. arXiv (2024)
* [52] Ding, J., Xue, N., Long, Y., Xia, G.-S., Lu, Q.: Learning RoI transformer for oriented object detection in aerial images. In: CVPR (2019)
* [53] Han, J., Ding, J., Li, J., Xia, G.-S.: Align deep features for oriented object detection. TGRS (2020)
* [54] Pan, X., Ren, Y., Sheng, K., Dong, W., Yuan, H., Guo, X., Ma, C., Xu, C.: Dynamic refinement network for oriented and densely packed object detection. In: CVPR (2020)
* [55] Yang, X., Yang, X., Yang, J., Ming, Q., Wang, W., Tian, Q., Yan, J.: Learning high-precision bounding box for rotated object detection via Kullback-Leibler divergence. In: NeurIPS (2021)
* [56] Zheng, Z., Ye, R., Hou, Q., Ren, D., Wang, P., Zuo, W., Cheng, M.-M.: Localization distillation for object detection. TPAMI (2023)
* [57] Zhang, H., Wu, C., Zhang, Z., Zhu, Y., Lin, H., Zhang, Z., Sun, Y., He, T., Mueller, J., Manmatha, R., Li, M., Smola, A.: ResNeSt: Split-attention networks. In: CVPRW (2022)
* [58] Liu, J.-J., Hou, Q., Cheng, M.-M., Wang, C., Feng, J.: Improving convolutional networks with self-calibrated convolutions. In: CVPR (2020)
* [59] Li, X., Wang, W., Hu, X., Yang, J.: Selective kernel networks. In: CVPR (2019)
* [60] Wang, L., Li, R., Wang, D., Duan, C., Wang, T., Meng, X.: Transformer meets convolution: A bilateral awareness network for semantic segmentation of very fine resolution urban scene images. Remote Sensing (2021)
* [61] Li, R., Zheng, S., Zhang, C., Duan, C., Su, J., Wang, L., Atkinson, P.M.: Multiattention network for semantic segmentation of fine-resolution remote sensing images. TGRS (2021)
* [62] Zhang, D., Zhang, H., Tang, J., Hua, X.-S., Sun, Q.: Causal intervention for weakly-supervised semantic segmentation. NeurIPS (2020)
* [63] Daudt, R.C., Le Saux, B., Boulch, A.: Fully convolutional siamese networks for change detection. In: 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 4063-4067 (2018). IEEE
* [64] Fang, S., Li, K., Shao, J., Li, Z.: Snunet-cd: A densely connected siamese network for change detection of vhr images. IEEE Geoscience and Remote Sensing Letters (2021)
* [65] Bandara, W.G.C., Patel, V.M.: A transformer-based siamese network for change detection. In: IEEE International Geoscience and Remote Sensing Symposium (2022)
* [66] Codegoni, A., Lombardi, G., Ferrari, A.: Tinycd: A (not so) deep learning model for change detection. Neural Computing and Applications (2023)
* [67] Zhang, C., Yue, P., Tapete, D., Jiang, L., Shangguan, B., Huang, L., Liu, G.: A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing (2020)
* [68] Fang, S., Li, K., Li, Z.: Changer: Featureinteraction is what you need for change detection. TGRS (2023)
* [69] Zhao, S., Zhang, X., Xiao, P., He, G.: Exchanging dual-encoder-decoder: A new strategy for change detection with semantic guidance and spatial localization. TGRS (2023)
* [70] Chen, H., Qi, Z., Shi, Z.: Remote sensing image change detection with transformers. TGRS (2021)
* [71] Lin, H., Hang, R., Wang, S., Liu, Q.: Diformer: A difference transformer network for remote sensing change detection. IEEE Geoscience and Remote Sensing Letters (2024)
* [72] Wang, D., Zhang, J., Xu, M., Liu, L., Wang, D., Gao, E., Han, C., Guo, H., Du, B., Tao, D., et al.: Mtp: Advancing remote sensing foundation model via multi-task pretraining. arXiv (2024)
* [73] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. NeurIPS (2017)
* [74] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021)
* [75] Zhang, C., Wang, L., Cheng, S., Li, Y.: SwinSUNet: Pure transformer network for remote sensing image change detection. TGRS (2022)
* [76] Panboonyuen, T., Jitkajornwanich, K., Lawawirojwong, S., Srestasathiern, P., Vateekul, P.: Transformer-based decoder designs for semantic segmentation on remotely sensed images. Remote Sensing (2021)
* [77] Wang, X., Chen, G., Qian, G., Gao, P., Wei, X.-Y., Wang, Y., Tian, Y., Gao, W.: Large-scale multi-modal pre-trained models: A comprehensive survey. Machine Intelligence Research (2023)
* [78] Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In: ICCV (2021)
* [79] Wu, Y.-H., Liu, Y., Zhan, X., Cheng, M.-M.: P2T: Pyramid pooling transformer for scene understanding. TPAMI (2022)
* [80] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: ICCV (2021)
* [81] Yan, H., Li, Z., Li, W., Wang, C., Wu, M., Zhang, C.: ConTNet: Why not use convolution and transformer at the same time? CoRR (2021)
* [82] Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, P.H.S., Zhang, L.: Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In: CVPR (2021)
* [83] Luo, W., Li, Y., Urtasun, R., Zemel, R.: Understanding the effective receptive field in deep convolutional neural networks. In: NeurIPS (2016)
* [84] Fan, D.-P., Ji, G.-P., Xu, P., Cheng, M.-M., Sakaridis, C., Gool, L.V.: Advances in deep concealed scene understanding. Visual Intelligence (2023)
* [85] Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., Xie, S.: A convnet for the 2020s. In: CVPR (2022)
* [86] Ding, X., Zhang, X., Han, J., Ding, G.: Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In: CVPR (2022)
* [87] Liu, S., Chen, T., Chen, X., Chen, X., Xiao, Q., Wu, B., Pechenizkiy, M., Mocanu, D., Wang, Z.: More convnets in the 2020s: Scaling up kernels beyond 51x51 using sparsity. ArXiv (2022)
* [88] Gao, S., Li, Z.-Y., Han, Q., Cheng, M.-M.,Wang, L.: RF-Next: Efficient receptive field search for convolutional neural networks. TPAMI (2023)
* [89] Guo, M.-H., Lu, C., Liu, Z.-N., Cheng, M.-M., Hu, S.: Visual attention network. Computational Visual Media (2022)
* [90] Guo, M.-H., Lu, C.-Z., Hou, Q., Liu, Z.-N., Cheng, M.-M., Hu, S.-M.: SegNeXt: Rethinking convolutional attention design for semantic segmentation. In: NeurIPS (2022)
* [91] Hou, Q., Lu, C.-Z., Cheng, M.-M., Feng, J.: Conv2Former: A simple transformer-style ConvNet for visual recognition. ArXiv (2022)
* [92] Guo, M.-H., Xu, T., Liu, J.-J., Liu, Z.-N., Jiang, P.-T., Mu, T.-J., Zhang, S.-H., Martin, R., Cheng, M.-M., Hu, S.-M.: Attention mechanisms in computer vision: A survey. Computational Visual Media (2021)
* [93] Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: CVPR (2018)
* [94] Hu, J., Shen, L., Albanie, S., Sun, G., Vedaldi, A.: Gather-Excite: Exploiting feature context in convolutional neural networks. In: NeurIPS (2018)
* [95] Cao, Y., Xu, J., Lin, S., Wei, F., Hu, H.: GCNet: Non-local networks meet squeeze-excitation networks and beyond. In: ICCVW (2019)
* [96] Li, Z., Sun, Y., Zhang, L., Tang, J.: Ctnet: Context-based tandem network for semantic segmentation. TPAMI (2022)
* [97] Li, Y., Li, X., Yang, J.: Spatial group-wise enhance: Enhancing semantic feature learning in cnn. In: ACCV (2022)
* [98] Woo, S., Park, J., Lee, J.-Y., Kweon, I.S.: CBAM: Convolutional block attention module. In: ECCV (2018)
* [99] Park, J., Woo, S., Lee, J.-Y., Kweon, I.-S.: BAM: Bottleneck attention module. In: British Machine Vision Conference (2018)
* [100] Srivastava, S., Sharma, G.: Omnivec: Learning robust representations with cross modal sharing. In: Winter Conference on Applications of Computer Vision (2024)
* [101] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: ECCV (2020)
* [102] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: CVPR (2021)
* [103] Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: PVT v2: Improved baselines with pyramid vision transformer. Computational Visual Media (2022)
* [104] Zhang, X., Tian, Y., Xie, L., Huang, W., Dai, Q., Ye, Q., Tian, Q.: Hivit: A simpler and more efficient design of hierarchical vision transformer. In: ICLR (2022)
* [105] Xu, Y., Zhang, Q., Zhang, J., Tao, D.: Vitae: Vision transformer advanced by exploring intrinsic inductive bias. NeurIPS (2021)
* [106] Yu, H., Tian, Y., Ye, Q., Liu, Y.: Spatial transform decoupling for oriented object detection. In: AAAI (2024)
* [107] Yang, B., Bender, G., Le, Q.V., Ngiam, J.: CondConv: Conditionally parameterized convolutions for efficient inference. NeurIPS (2019)
* [108] Chen, Y., Dai, X., Liu, M., Chen, D., Yuan, L., Liu, Z.: Dynamic convolution: Attention over convolution kernels. In: CVPR (2020)
* [109] Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: More deformable, better results. In: CVPR (2019)
* [110] Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: ICCV (2017)
* [111] Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., Xie, S.: A convnet for the 2020s. In: CVPR (2022)
* [112] Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y., Wang, X., Feng, J., Yan, S.: MetaFormer is actually what you need for vision. In: CVPR (2022)
* [113] Hendrycks, D., Gimpel, K.: Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR (2016)
* [114] Zhang, G., Xu, W., Zhao, W., Huang, C., Yk, E.N., Chen, Y., Su, J.: A multi-scale attention network for remote sensing scene images classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2021)
* [115] He, N., Fang, L., Li, S., Plaza, J., Plaza, A.: Skip-connected covariance network for remote sensing scene classification. IEEE Transactions on Neural Networks and Learning Systems (2020)
* [116] Liu, C., Dai, H., Wang, S., Chen, J.: Remote sensing image scene classification based on multidimensional attention and feature enhancement. IAENG International Journal of Computer Science (2023)
* [117] Wang, S., Guan, Y., Shao, L.: Multi-granularity canonical appearance pooling for remote sensing scene classification. TIP (2020)
* [118] Bi, Q., Qin, K., Zhang, H., Xia, G.-S.: Local semantic enhanced convnet for aerial scene recognition. TIP (2021)
* [119] Wang, S., Ren, Y., Parr, G.P., Guan, Y., Shao, L.: Invariant deep compressible covariance pooling for aerial scene categorization. TGRS (2020)
* [120] Zhang, X., An, W., Sun, J., Wu, H., Zhang, W., Du, Y.: Best representation branch model for remote sensing image scene classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2021)
* [121] Zhao, Z., Li, J., Luo, Z., Li, J., Chen, C.: Remote sensing image scene classification based on an enhanced attention module. TGRS Letters (2020)
* [122] Li, B., Guo, Y., Yang, J., Wang, L., Wang, Y., An, W.: Gated recurrent multiattention network for VHR remote sensing image classification. TGRS (2021)
* [123] Wang, W., Sun, Y., Li, J., Wang, X.: Frequency and spatial based multi-layer context network (fscnet) for remote sensing scene classification. International Journal of Applied Earth Observation and Geoinformation (2024)
* [124] Dong, Z., Gu, Y., Liu, T.: Upetu: A unified parameter-efficient fine-tuning framework for remote sensing foundation model. TGRS (2024)
* [125] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR (2009)
* [126] Lyu, C., Zhang, W., Huang, H., Zhou, Y., Wang, Y., Liu, Y., Zhang, S., Chen, K.: RTMDet: An empirical study of designing real-time object detectors. CoRR (2022)
* [127] Guo, Z., Liu, C., Zhang, X., Jiao, J., Ji, X., Ye, Q.: Beyond bounding-box: Convex-hull feature adaptation for oriented and densely packed object detection. In: CVPR (2021)
* [128] Lang, S., Ventola, F., Kersting, K.: DAFNe: A one-stage anchor-free deep model for oriented object detection. CoRR (2021)
* [129] Hou, L., Lu, K., Xue, J., Li, Y.: Shape-adaptive selection and measurement for oriented object detection. In: AAAI (2022)
* [130] Dai, L., Liu, H., Tang, H., Wu, Z., Song, P.: AO2-DETR: Arbitrary-oriented object detection transformer. IEEE Transactions on Circuits and Systems for Video Technology (2022)
* [131] Yang, X., Yang, J., Yan, J., Zhang, Y., Zhang, T., Guo, Z., Sun, X., Fu, K.:SCRDet: Towards more robust detection for small, cluttered and rotated objects. In: ICCV (2019)
* [132] Li, Y., Mao, H., Girshick, R., He, K.: Exploring plain vision transformer backbones for object detection. In: ECCV (2022)
* [133] Wang, J., Yang, W., Li, H.-C., Zhang, H., Xia, G.-S.: Learning center probability map for detecting objects in aerial images. TGRS (2021)
* [134] Yang, X., Yan, J.: Arbitrary-oriented object detection with circular smooth label. In: ECCV (2020)
* [135] Cheng, G., Yao, Y., Li, S., Li, K., Xie, X., Wang, J., Yao, X., Han, J.: Dual-aligned oriented detector. TGRS (2022)
* [136] Cheng, G., Wang, J., Li, K., Xie, X., Lang, C., Yao, Y., Han, J.: Anchor-free oriented proposal generator for object detection. TGRS (2022)
* [137] Yang, X., Zhou, Y., Zhang, G., Yang, J., Wang, W., Yan, J., Zhang, X., Tian, Q.: The KFIoU loss for rotated object detection. In: ICLR (2022)
* [138] Lin, T.-Y., Goyal, P., Girshick, R., He, K., Dollar, P.: Focal loss for dense object detection. In: ICCV (2017)
* [139] Cai, Z., Vasconcelos, N.: Cascade R-CNN: Delving into high quality object detection. In: CVPR (2018)
* [140] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: NeurIPS (2015)
* [141] Ming, Q., Zhou, Z., Miao, L., Zhang, H., Li, L.: Dynamic anchor learning for arbitrary-oriented object detection. CoRR (2020)
* [142] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results
* [143] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, Zisserman, A.: The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results
* [144] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
* [145] Gao, S.-H., Cheng, M.-M., Zhao, K., Zhang, X.-Y., Yang, M.-H., Torr, P.: Res2Net: A new multi-scale backbone architecture. TPAMI (2021)
* [146] Woo, S., Debnath, S., Hu, R., Chen, X., Liu, Z., Kweon, I.-S., Xie, S.: ConvNeXt V2: Co-designing and scaling convnets with masked autoencoders. Arxiv (2023)
* [147] Cai, Z., Vasconcelos, N.: Cascade R-CNN: High quality object detection and instance segmentation. TPAMI (2019)
* [148] Li, R., Duan, C., Zheng, S., Zhang, C., Atkinson, P.M.: Macu-net for semantic segmentation of fine-resolution remotely sensed images. IEEE Geoscience and Remote Sensing Letters **19** (2022)
* [149] Romera, E., Alvarez, J.M., Bergasa, L.M., Arroyo, R.: ERFNet: Efficient residual factorized convnet for real-time semantic segmentation. IEEE Transactions on Intelligent Transportation Systems (2017)
* [150] Li, G., Yun, I., Kim, J., Kim, J.: DABNet: Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation (2019)
* [151] Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., Sang, N.: BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In: ECCV (2018)
* [152] Orsic, M., Segvic, S.: Efficient semantic segmentation with pyramidal fusion. Pattern Recognition (2021)
* [153] Zhuang, J., Yang, J., Gu, L., Dvornek, N.: ShelfNet for fast semantic segmentation. In: ICCVW (2019)
* [154] Strudel, R., Garcia, R., Laptev, I., Schmid, C.: Segmenter: Transformer for semantic segmentation. In: ICCV (2021)
* [155] Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: CVPR (2017)
* [156] Srinivas, A., Lin, T.-Y., Parmar, N., Shlens, J., Abbeel, P., Vaswani, A.: Bottleneck transformers for visual recognition. In: CVPR (2021)
* [157] Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: ECCV (2018)
* [158] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: NeurIPS (2021)
* [159] Xiao, T., Liu, Y., Zhou, B., Jiang, Y., Sun, J.: Unified perceptual parsing for scene understanding. In: ECCV (2018)
* [160] Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J.: UNet++: A nested U-Net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (2018)
* [161] Kirillov, A., Girshick, R., He, K., Dollar, P.: Panoptic feature pyramid networks. In: CVPR (2019)
* [162] Zheng, Z., Zhong, Y., Wang, J., Ma, A.: Foreground-aware relation network for geospatial object segmentation in high spatial resolution remote sensing imagery. In: CVPR (2020)
* [163] Ma, A., Wang, J., Zhong, Y., Zheng, Z.: FactSeg: Foreground activation-driven small object semantic segmentation in large-scale remote sensing imagery. TGRS (2021)
* [164] Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., Zhou, Y.: Transunet: Transformers make strong encoders for medical image segmentation. arXiv (2021)
* [165] Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., Liu, W., Xiao, B.: Deep high-resolution representation learning for visual recognition. TPAMI (2019)
* [166] Wang, L.-L., Lui, S.S., Chan, R.C.: The past and future of mapping the biomarkers of psychosis. Current Opinion in Behavioral Sciences (2022)
* [167] Sun, L., Zou, H., Wei, J., Cao, X., He, S., Li, M., Liu, S.: Semantic segmentation of high-resolution remote sensing images based on sparse self-attention and feature alignment. Remote Sensing (2023)
* [168] Yang, M.Y., Kumaar, S., Lyu, Y., Nex, F.: Real-time semantic segmentation with context aggregation network. ISPRS Journal of Photogrammetry and Remote Sensing (2021)
* [169] Xu, W., Xu, Y., Chang, T., Tu, Z.: Coscale conv-attentional image transformers. In: ICCV (2021)
* [170] Liu, Y., Pang, C., Zhan, Z., Zhang, X., Yang, X.: Building change detection for remote sensing images using a dual-task constrained deep siamese convolutional network model. IEEE Geoscience and Remote Sensing Letters (2020)
* [171] Han, C., Wu, C., Guo, H., Hu, M., Chen, H.: Hanet: A hierarchical attention network for change detection with bi-temporal very-high-resolution remote sensing images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2023)
* [172] Chen, H., Li, W., Shi, Z.: Adversarial instance augmentation for building change detection in remote sensing images. TGRS (2021)
* [173] Zhang, C.-j., Liu, J.-w.: Change detection with incorporating multi-constraints and loss weights. Engineering Applications of Artificial Intelligence (2024)
* [174] Han, C., Wu, C., Du, B.: Hcgmnet: A hierarchical change guiding map network for change detection. In: IEEE International Geoscience and Remote Sensing Symposium (2023)
* [175] Han, C., Wu, C., Hu, M., Li, J., Chen, H.: C2f-semicd: A coarse-to-fine semi-supervised change detection method based on consistency regularization in high-resolution remote-sensing images. TGRS (2024)
* [176] Han, C., Wu, C., Guo, H., Hu, M., Li, J., Chen, H.: Change guiding network: Incorporating change prior to guide change detection in remote sensing imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2023)
* [177] Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: ICCV (2017)
* [178] Muhammad, M.B., Yeasin, M.: Eigen-CAM: Class activation map using principal components. CoRR (2020)
Sophia J. Abraham, Jin Huang, Jonathan D. Hauenstein, Walter Scheirer
University of Notre Dame
{sabraha2, jhuang24, [email protected], walter.scheirer}@nd.edu
Brandon RichardWebster
Kitware Inc.
[email protected]
Michael Milford
Queensland University of Technology
[email protected]
## 1 Introduction
The global challenge of ensuring food security for a burgeoning population is exacerbated by ecological threats like invasive grass species. These species compromise natural vegetation, pose risks to livestock, and increase wildfire occurrences. The economic burden of managing these invasions, once established, is substantial, costing up to 17 times more than preventative measures [19]. A significant contributor to this dilemma is _African lovegrass_ (_ALG_), an invasive species that has spread rapidly in both Australia and the U.S. Its robust nature and rapid proliferation demand swift action for control and eradication.
Introduced in the 1930s to the Bega Valley, New South Wales, Australia, _ALG_ has become a formidable adversary to agricultural and natural landscapes [19]. Its hardiness and adaptability to adverse conditions, such as drought and low soil fertility, facilitate its spread across diverse environments. The ecological impact is profound, with _ALG_ dominance leading to reduced soil fertility, displacement of native grasses, and a significant decrease in pasture productivity. This increases the risk of wildfires and jeopardizes community safety [10]. An overview of ALG infestation across Australia can be seen in Fig. 1.
Figure 1: Estimated distribution of African lovergrass (ALG) infestation in Australia, based on data from Queensland Primary Industries and Fisheries (2009).
Current management strategies emphasize early detection at the onset of invasion as a critical measure to combat its spread [11].
In response to the urgent need for effective _ALG_ management tools, this paper introduces a novel semantic segmentation dataset consisting of 6,096 high-resolution aerial images. This dataset, focused on capturing indigenous and invasive grass species in the Bega Valley, aims to empower researchers and practitioners with the resources necessary to develop and refine algorithms capable of identifying _ALG_ invasions promptly.
This study focuses on the binary task of distinguishing grass from non-grass areas, which is a foundational step towards more fine-grained identification of different grass species, including invasive ones like _ALG_. The binary segmentation task, while already challenging due to the complex and heterogeneous nature of the ecological data, lays the groundwork for future advancements in detailed classification tasks.
This dataset distinguishes itself by offering several unique properties that address common challenges in computer vision and ecological studies:
**Underrepresented Domain**: Unlike typical datasets focusing on urban environments, vehicles, or human faces, this dataset centers on grass species, a significantly underrepresented area in computer vision.
**Class Overlap and Distribution**: It captures the complex overlap and distribution of indigenous and invasive grass species, providing a challenging environment for segmentation models.
**Ecological Relevance**: The dataset includes images taken at various altitudes and conditions, reflecting real-world ecological monitoring scenarios.
This paper also presents an innovative homotopy-based multi-objective fine-tuning approach, demonstrated through a case study on the Segment Anything Model (SAM). Traditional single-objective optimization methods often fall short in addressing the dual demands of precise segmentation and contextual coherence, especially in noisy and heterogeneous ecological data. Our approach dynamically balances segmentation accuracy and contextual consistency by integrating DiceCELoss for precise pixel-wise classification and a smoothness loss to ensure spatial coherence. The homotopy parameter evolves during training, enabling a smooth transition from prioritizing segmentation accuracy to emphasizing contextual consistency. This dual-objective strategy enhances the robustness and reliability of segmentation results. While this study focuses on SAM, our approach is general enough to be applicable to other segmentation models as well.
Through rigorous evaluation, we establish performance baselines for our fine-tuned SAM and compare it with other leading semantic segmentation models. These baselines highlight the dataset's potential to advance machine learning techniques in ecological monitoring. Our annotation methodology further enriches the dataset, providing valuable insights for researchers involved in dataset creation.
By providing a robust dataset and an advanced fine-tuning approach, this work aims to drive forward the fields of environmental monitoring and sustainable development, fostering innovations that can address complex ecological challenges and safeguard agricultural productivity and ecosystem health. This dataset not only serves as a practical tool for addressing the specific challenge of _ALG_ but also represents a broader contribution to the computer vision community by presenting a unique and challenging dataset for model development and evaluation.
## 2 Related Works
Despite existing advancements in identifying invasive plant species, the application of these techniques to aerial imagery, particularly for species such as African lovegrass (ALG), remains largely unexplored [5, 12, 13, 14, 16]. The remote sensing community's recent ventures into leveraging remote sensing methodologies for invasive plant detection mark a significant step forward, yet these efforts rarely extend to grass-specific identification [4, 15, 28]. Notably, Albani et al.'s work on estimating weed coverage in open fields through UAVs underscores the potential of aerial technologies, though it stops short of achieving species-level weed identification [1].
While considerable research has been directed towards weed identification within agricultural contexts [2, 20, 27, 6], these studies predominantly focus on irrigated soils, overlooking the unique challenges presented by open prairie environments. Compounding the issue is the inherent difficulty in distinguishing among a vast array of plant species, exacerbated by similarities in color and shape, as highlighted by Waldchen et al. [29]. Their findings advocate for an interdisciplinary approach, merging the expertise of biologists and computer scientists to propel forward the field of plant identification.
This interdisciplinary proposition raises the intriguing possibility of developing a robust algorithm for ALG detection that integrates insights from ecology, computer vision, and visual psychophysics. Given the nascent state of research into the semantic segmentation of prairie grass, a critical preliminary step involves evaluating the efficacy of existing semantic segmentation methodologies [3, 7, 8, 9, 18, 21, 23, 25, 30] in this novel context.
## 3 Multi-Objective Methodology
In this section, we detail the homotopy-based multi-objective fine-tuning approach applied to the Segment Anything Model (SAM). This method addresses the dual objectives of segmentation accuracy and contextual consistency, which are crucial for handling noisy and heterogeneous ecological data.
Our approach aims to optimize two primary objectives:
1. **Segmentation Accuracy**: Ensured by DiceCELoss [26], which provides precise pixel-wise classification.
2. **Contextual Consistency**: Achieved through a smoothness loss that promotes spatial coherence across segmentations.
DiceCELoss combines Dice Loss and Cross-Entropy Loss to capture both the overlap between the predicted and ground truth masks and the pixel-wise classification accuracy.
The Dice Loss is defined as:
\\[L_{\\text{Dice}}=1-\\frac{2\\sum_{i=1}^{N}p_{i}g_{i}+\\epsilon}{\\sum_{i=1}^{N}p_{i }+\\sum_{i=1}^{N}g_{i}+\\epsilon} \\tag{1}\\]
where \\(p_{i}\\) and \\(g_{i}\\) are the predicted and ground truth labels for pixel \\(i\\), respectively, and \\(\\epsilon\\) is a small positive constant to avoid division by zero.
The Cross-Entropy Loss is defined as:
\\[L_{\\text{CE}}=-\\frac{1}{N}\\sum_{i=1}^{N}\\left[g_{i}\\log(p_{i})+(1-g_{i})\\log( 1-p_{i})\\right] \\tag{2}\\]
The combined DiceCELoss is given by:
\\[L_{\\text{DiceCE}}=\\beta L_{\\text{Dice}}+(1-\\beta)L_{\\text{CE}} \\tag{3}\\]
where \\(\\beta\\) is a weighting factor between 0 and 1.
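A minimal PyTorch sketch of this combined loss for binary masks is given below. It is not the library implementation cited above; the class name, the default \\(\\beta=0.5\\), and the use of logits as input are assumptions made for the example.

```python
import torch
import torch.nn as nn

class DiceCELoss(nn.Module):
    """Weighted combination of Dice loss and binary cross-entropy (Eqs. 1-3)."""

    def __init__(self, beta: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.beta, self.eps = beta, eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        prob = torch.sigmoid(logits)
        p, g = prob.flatten(1), target.flatten(1)
        dice = 1 - (2 * (p * g).sum(1) + self.eps) / (p.sum(1) + g.sum(1) + self.eps)
        ce = nn.functional.binary_cross_entropy_with_logits(
            logits, target, reduction="none").flatten(1).mean(1)
        return (self.beta * dice + (1 - self.beta) * ce).mean()
```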
To ensure contextual consistency, we incorporate a smoothness loss [24] that penalizes abrupt changes in the segmentation map. The smoothness loss is defined as:
\\[L_{\\text{smooth}}=\\lambda_{\\text{smooth}}\\left(\\sum_{i=1}^{N-1}\\sum_{j=1}^{M}\\left|p_{i,j}-p_{i+1,j}\\right|+\\sum_{i=1}^{N}\\sum_{j=1}^{M-1}\\left|p_{i,j}-p_{i,j+1}\\right|\\right) \\tag{4}\\]
where \\(p_{i,j}\\) represents the predicted label at pixel \\((i,j)\\), and \\(\\lambda_{\\text{smooth}}\\) is a regularization parameter.
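A corresponding sketch of the smoothness term, written for a batch of predicted probability maps, is shown below; summing over the batch and exposing \\(\\lambda_{\\text{smooth}}\\) as a plain argument are simplifications for illustration.

```python
import torch

def smoothness_loss(prob: torch.Tensor, lam_smooth: float = 1.0) -> torch.Tensor:
    """Total-variation style penalty of Eq. (4) on (B, 1, H, W) probability maps."""
    dh = (prob[..., 1:, :] - prob[..., :-1, :]).abs().sum()  # vertical neighbours
    dw = (prob[..., :, 1:] - prob[..., :, :-1]).abs().sum()  # horizontal neighbours
    return lam_smooth * (dh + dw)
```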
### Homotopy-Based Multi-Objective Optimization
Homotopy methods provide a systematic way to transition between different objective functions during optimization. In our approach (Algorithm 1), we dynamically balance the two objectives by introducing a homotopy parameter \\(t\\) that evolves from 0 to 1 over the course of training. This allows the optimization process to start by focusing on segmentation accuracy and gradually shift towards emphasizing contextual consistency as training progresses.
We define the combined loss function as:
\\[L_{\\text{combined}}=(1-t)L_{\\text{DiceCE}}+tL_{\\text{smooth}} \\tag{5}\\]
where \\(t\\) smoothly transitions the objective from prioritizing segmentation accuracy (when \\(t=0\\)) to prioritizing smoothness (when \\(t=1\\)). This gradual shift helps in preventing the model from overfitting to one objective too early and ensures a balanced optimization process [27].
The overall training objective is to minimize the combined loss function:
\\[\\mathcal{L}=\\min\\left((1-t)L_{\\text{DiceCE}}+tL_{\\text{smooth}}\\right) \\tag{6}\\]
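As an illustration, the homotopy weighting can be expressed as follows. The linear schedule for \\(t\\) is an assumption (the text only states that \\(t\\) evolves from 0 to 1 over training), and the function names are ours.

```python
import torch

def homotopy_t(epoch: int, num_epochs: int) -> float:
    # linear schedule: t = 0 at the first epoch, t = 1 at the last epoch
    return epoch / max(num_epochs - 1, 1)

def combined_loss(loss_dice_ce: torch.Tensor, loss_smooth: torch.Tensor, t: float) -> torch.Tensor:
    # Eqs. (5)-(6): interpolate between segmentation accuracy and contextual smoothness
    return (1.0 - t) * loss_dice_ce + t * loss_smooth
```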
### Training Procedure
The training procedure is as follows; a minimal sketch of the corresponding training loop is given after the list:
1. **Initialization**: Initialize SAM with pre-trained weights.
2. **Data Augmentation**: Apply data augmentation techniques to enhance the diversity of the training dataset.
3. **Optimization**: For each epoch, compute the combined loss and update the model parameters using gradient descent. Adjust the homotopy parameter \\(t\\) to gradually shift the focus from segmentation accuracy to contextual consistency.
4. **Evaluation**: Evaluate the model on a validation set to monitor performance and adjust hyperparameters as needed.
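The sketch below illustrates steps 1-4 above, reusing the loss sketches from earlier in this section. The optimizer choice, learning rate, number of epochs, and the assumption that the model maps an image batch directly to mask logits (SAM-specific prompt handling and data augmentation are omitted) are all illustrative, not the exact training configuration.

```python
import torch

def fine_tune(model, train_loader, val_loader, num_epochs=50, lr=1e-4,
              lam_smooth=0.1, device="cuda"):
    """Steps 1-4 with a linear homotopy schedule; loaders yield (image, mask) pairs."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    dice_ce = DiceCELoss()                                 # loss sketches defined above
    for epoch in range(num_epochs):
        t = epoch / max(num_epochs - 1, 1)                 # homotopy parameter, 0 -> 1
        model.train()
        for image, mask in train_loader:                   # augmentation assumed inside the loader
            image, mask = image.to(device), mask.to(device).float()
            logits = model(image)                          # (B, 1, H, W) mask logits assumed
            loss = (1 - t) * dice_ce(logits, mask) + \
                   t * smoothness_loss(torch.sigmoid(logits), lam_smooth)
            opt.zero_grad()
            loss.backward()
            opt.step()
        model.eval()                                       # step 4: monitor validation loss
        with torch.no_grad():
            val = sum(dice_ce(model(x.to(device)), y.to(device).float()).item()
                      for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch}: t={t:.2f}, val DiceCE={val:.4f}")
```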
## 4 ALGSeg
In an interdisciplinary collaboration, our team comprising drone operators, ecologists, and roboticists undertook a field study in the Bega Valley, New South Wales, Australia, during the late Australian spring in November. Our objective was to establish a comprehensive dataset for monitoring ecological changes, with a particular focus on the dynamics between native grasses and the invasive African lovegrass (ALG). We selected and set up more than twenty 25x25m plots, aimed at long-term ecological monitoring over the next five years. These plots are designated for tracking changes in insect populations, soil nutrient content, and the composition of plant and grass species, including the interaction between native grasses and ALG. An example of one of these plots is illustrated in Fig. 2.
Utilizing aerial photography, we began the process of automating the classification of grass species from drone imagery. This initiative is aimed at assisting farmers and landcare workers in monitoring the spread of ALG through a shared platform that integrates the classification data with GPS coordinates. Our efforts concentrated on Merimbula, a pivotal location within the Bega Valley region.
### Data Collection
Aerial data collection was conducted over private lands, with the gracious permission and enthusiasm of the landowners situated in the vicinity of Merimbula. For the purpose of maintaining anonymity while allowing for clear reference, the properties were designated as lots L1, L2, and L3, detailed in the supplemental. This coding system ensures the privacy of landowner identities while facilitating structured data analysis.
Over the initial two days of the project, aerial imaging captured distinct areas, denoted as 'a' and 'b' for each lot, to provide comprehensive coverage. Weather conditions varied significantly during the data collection period, ranging from a combination of cloudy and sunny skies on the first day to overcast conditions on the second, and predominantly sunny weather on the third day. These variations in lighting conditions are meticulously documented in the supplemental, highlighting the adaptability of our data collection process to environmental changes. For detailed data collection specifics, please refer to the supplemental materials.
_Note_: The careful consideration of weather conditions and the collaboration with local landowners not only exemplifies the logistical planning required for remote sensing projects but also underscores the importance of community engagement in scientific research.
#### 4.1.1 Equipment Used for Recording
Aerial imagery was captured using a DJI Inspire 2 equipped with a Zenmuse X5S camera, which boasts a resolution of 20 megapixels (5280x3956 pixels). The camera was set to capture still shots at two-second intervals, employing JPEG lossy compression at the standard compression ratio. This setup ensured an approximate 75% overlap both forward and to the sides of each image, facilitating the subsequent processing of these images into orthomosaics.
#### 4.1.2 Collection of Auxiliary Data
Beyond the primary dataset, we compiled a comprehensive set of auxiliary data to support a broad spectrum of computer vision and robotics research. This dataset encompasses extensive real-time flight metrics, including but not limited to, operational parameters of the drone (e.g., flight times, durations, distances, and varied flight attitudes). It is important to note that to uphold privacy standards, these flight details are currently undergoing an anonymization process to prevent any potential compromise of data integrity before their public release.
Additionally, while the drone's onboard systems do not capture meteorological data, we have supplemented our dataset with detailed weather information corresponding to each flight, courtesy of the Australian Government's Bureau of Meteorology. This supplementary weather dataset offers a rich array of atmospheric conditions surrounding each flight, providing variables such as temperature, dew
Figure 2: A 25x25m\\({}^{2}\\) plot established for ecological monitoring, including the study of native grasses and African lovegrass interactions.
point, humidity, wind patterns, and precipitation. The integration of these weather parameters, which are distinct from the settings used by drone operators for white balance adjustments, enriches the dataset with environmental context crucial for certain computer vision and robotics applications.
_Note:_ The weather information serves to complement the visual data, offering insights into environmental conditions that may influence the analysis and application of the collected imagery in various computational models.
### Data Annotation
In creating a dataset with high-resolution images that capture a wide variety of content, the annotation process emerges as a formidable challenge, particularly when it comes to the nuanced details of various grass types and their phenotypic stages--observable characteristics at different growth phases. Given the practical constraints at the time of data collection, enlisting experts for detailed annotation was not feasible. This led us to devise a simplified annotation strategy aimed at distinguishing between grass and non-grass elements, a method that directly tackles the inherent complexities of natural prairie imagery where a single pixel might fall into multiple categories, such as grass, other vegetation, objects, or soil. This streamlined approach, while reducing the scope of the task, nonetheless required considerable effort, with annotation times ranging from 45 minutes to 2 hours per image. For this study, we curated a set of 50 images for annotation to ensure a diverse representation across all sampled locations.
Addressing the broader challenge, the reliance on non-expert annotators due to the unavailability of experts opens a dialogue on the feasibility and limitations of employing laypersons for certain annotation tasks. This scenario, reflective of real-world constraints, offers valuable insights into the adaptability of machine learning applications in the face of resource limitations. By navigating these constraints, our approach contributes to the discourse on developing efficient, scalable solutions for dataset creation in the field of Earth Observation. Such explorations into the capabilities of non-expert annotators underscore the importance of innovative methodologies in enhancing data annotation practices, paving the way for broader applications and understandings within the domain.
_Note:_ For further details on the specific flight parameters, image capture elevations, and examples of artifacts resulting from the annotation process, please refer to the supplemental materials.
## 5 Baselines and Segmentation Models
In this study, we explore pixel-wise semantic segmentation for prairie grass by establishing benchmarks using various deep learning-based segmentation techniques. Given the absence of models specifically designed for grass segmentation in prairie settings, we selected models based on their architectural suitability and performance potential for our dataset. This section outlines the chosen models and describes the training regime implemented for their evaluation.
### Model Selection and Architecture
We selected a diverse set of models, each with unique architectural features promising for semantic segmentation tasks:
**DeepLabV3:** The DeepLabV3 models, including versions with 50 and 101 layers, leverage atrous convolution and atrous spatial pyramid pooling to capture multi-scale features. These models balance the trade-off between the number of parameters and feature resolution control. DeepLabV3-101, with a deeper ResNet architecture compared to its 50-layer counterpart, aims to extract more complex features, albeit with increased computational requirements [8].
**FCN ResNet:** Fully Convolutional Networks (FCNs) extended onto ResNet architectures (50 and 101 layers) utilize ImageNet-pretrained weights for semantic segmentation. These models resize feature maps to match input dimensions, with deeper versions capturing more complex feature hierarchies for potentially improved performance [18].
**SegNet:** SegNet employs an autoencoder architecture with efficient non-linear sparse upsampling in its decoder. Originally optimized for indoor and traffic segmentation tasks, SegNet's architecture emphasizes computational efficiency [3].
**U-Net:** Known for its effectiveness across various semantic segmentation challenges, U-Net employs a downsample-then-upsample approach with skip connections. This architecture facilitates the fusion of multi-level feature information, making it robust for diverse segmentation tasks [23].
**Segment Anything Model (SAM):** In addition to these established models, we include the Segment Anything Model (SAM) in two configurations: single-objective and our proposed multi-objective fine-tuning approach. The single-objective SAM focuses on segmentation accuracy, while the multi-objective SAM balances segmentation accuracy with contextual consistency using the homotopy-based approach [17].
### Training Regime
The training and evaluation of these models were conducted on the African Lovegrass (ALG) dataset, segmented into 224x224px patches to ensure uniform input sizes. The dataset was split into a 90/10 ratio, resulting in 19,440 training patches and 2,160 evaluation patches. This split aimed to avoid overfitting and provide a robust assessment of each model's capabilities.
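A simple tiling and splitting routine along these lines is sketched below; the non-overlapping tiles, the dropping of edge remainders, and the fixed random seed are assumptions of the sketch, while the 224 px patch size and the 90/10 ratio follow the text.

```python
import numpy as np

def extract_patches(image, mask, patch=224):
    """Cut an aligned image/mask pair into non-overlapping patch-sized tiles."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append((image[y:y + patch, x:x + patch],
                          mask[y:y + patch, x:x + patch]))
    return tiles

def train_val_split(tiles, val_fraction=0.1, seed=0):
    """Random 90/10 split of the patch list: returns (train, val)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(tiles))
    n_val = int(len(tiles) * val_fraction)
    return [tiles[i] for i in idx[n_val:]], [tiles[i] for i in idx[:n_val]]
```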
**Initialization and Training:** All models were initialized with pretrained weights where available, except for U-Net and SegNet. Each model underwent a standardized training process over 250 epochs using the default configurations for loss functions, optimizers, and batch sizes. The training was performed within the flexible PyTorch framework, accommodating the specific input dimensions.
For SAM, the fine-tuning process was tailored to balance segmentation accuracy and contextual consistency through a homotopy-based approach. The SAM model was initialized with pre-trained weights from sam-vit-base. The training process involved 100 epochs, where the combined loss function integrated DiceCELoss for pixel-wise classification and a smoothness loss to ensure spatial coherence. The homotopy parameter \\(t\\) evolved linearly from 0 to 1 over the training epochs, facilitating a smooth transition from prioritizing segmentation accuracy to emphasizing contextual consistency. The optimizer used was Adam with a learning rate of \\(1\\times 10^{-5}\\).
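A hedged sketch of this fine-tuning loop is shown below. The model construction and forward call are placeholders (the exact API depends on the SAM implementation used, e.g. a model initialized from the sam-vit-base checkpoint whose mask logits are resized to the label resolution); the 100 epochs, the Adam optimizer with a 1e-5 learning rate, and the linear evolution of t follow the description above.

```python
import torch

def finetune_sam(model, train_loader, num_epochs=100, lr=1e-5, device="cuda"):
    """Fine-tune a SAM-style model with the homotopy objective.

    `model` is assumed to map an image batch to pixel-wise mask logits of the
    same spatial size as the labels; `combined_loss` is the sketch given earlier.
    """
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(num_epochs):
        for images, masks in train_loader:
            images = images.to(device)
            masks = masks.to(device).float()
            logits = model(images)  # placeholder forward call
            loss = combined_loss(logits, masks, epoch, num_epochs)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```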
**Data Augmentation:** To enhance model robustness, we applied data augmentation techniques such as rotations, flips, and color jittering. This helped in creating a more diverse training set and mitigating overfitting.
**Evaluation Metrics:** Model performance was evaluated using standard metrics such as Intersection over Union (IoU) and F1-score. These metrics provided a comprehensive understanding of each model's segmentation accuracy and generalization capability.
By establishing these baselines and introducing SAM with both single and multi-objective fine-tuning, we aim to provide a comprehensive evaluation framework for future research in ecological and agronomical segmentation tasks.
## 6 Experiments
Following the completion of the training phase for each model, we embarked on evaluating their performance using a set of predetermined metrics. The evaluation process began with the generation of Receiver Operating Characteristic (ROC) curves for each model, utilizing the predicted masks from both training and validation datasets to identify the optimal threshold. This threshold was determined by approximating the Equal Error Rate (EER), providing a balanced starting point for further adjustments based on specific user requirements, such as a preference for false positives by landowners (Fig. 3).
The models' output, initially on different scales due to varying final scalar functions (e.g., Sigmoidal), was normalized to the \\([0,1]\\) range for accurate threshold application. The scaling parameters used for normalizing the training/validation data were also applied to the test data's predicted masks to maintain consistency.
To comprehensively assess the models' baseline performance, we employed the following key metrics:
**Accuracy** measures the proportion of correctly predicted pixels over the total pixel count, considering true positives, true negatives, false positives, and false negatives.
The **Jaccard Index (IoU)** calculates the overlap area between the predicted and target segmentations relative to their union area, providing a measure of the segmentation's accuracy.
The **DICE score (F1 score)** refines the Jaccard Index by considering twice the overlap area divided by the sum of pixels in both predicted and target images, offering a balanced measure of precision and recall.
**ROC-AUC** (Receiver Operating Characteristic - Area Under Curve) quantifies the overall ability of the model to discriminate between classes, providing a single measure of model performance across all classification thresholds.
**EER** (Equal Error Rate) is the point at which the false positive rate equals the false negative rate, providing a balanced measure of the model's accuracy at a specific threshold.
These metrics provide a comprehensive evaluation of model performance, highlighting strengths and weaknesses in segmentation accuracy, contextual coherence, and overall discriminative ability.
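As an illustration, the reported metrics and the EER threshold can be computed from flattened, [0,1]-scaled prediction scores and binary labels as follows; approximating the EER by the ROC point where the false positive rate is closest to one minus the true positive rate is an assumption of this sketch.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def segmentation_metrics(scores, labels):
    """Accuracy, Jaccard (IoU), Dice (F1), ROC-AUC, EER and EER threshold."""
    s, y = np.ravel(scores), np.ravel(labels).astype(int)
    fpr, tpr, thr = roc_curve(y, s)
    eer_idx = int(np.argmin(np.abs(fpr - (1.0 - tpr))))
    eer, eer_thr = float(fpr[eer_idx]), float(thr[eer_idx])
    pred = (s >= eer_thr).astype(int)
    tp = int(np.sum((pred == 1) & (y == 1)))
    fp = int(np.sum((pred == 1) & (y == 0)))
    fn = int(np.sum((pred == 0) & (y == 1)))
    tn = int(np.sum((pred == 0) & (y == 0)))
    return {
        "accuracy": (tp + tn) / max(len(y), 1),
        "jaccard": tp / max(tp + fp + fn, 1),
        "dice": 2 * tp / max(2 * tp + fp + fn, 1),
        "roc_auc": roc_auc_score(y, s),
        "eer": eer,
        "eer_threshold": eer_thr,
    }
```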
## 7 Results
In this section, we analyze the performance of various segmentation models on our novel ecological dataset, focusing on the task of binary grass segmentation. The dataset consists of high-resolution aerial images captured at different altitudes (10m, 35m, and 120m) over the Bega Valley in Australia.
Multi SAM 50 (Ours) model achieved the highest accuracy (0.90) and ROC AUC (0.91), along with the lowest EER (0.17). The Dice Score (0.80) and Jaccard Index (0.67) were also among the best. The strong performance of Multi SAM 50 underscores the effectiveness of our homotopy-based multi-objective fine-tuning approach. By optimizing for both segmentation accuracy and contextual consistency, this model balances precision and robustness. Notably, it achieved these results with only 50 epochs of training, compared to the 100 epochs required for the single-objective Finetuned SAM, demonstrating the efficiency of our method.
Finetuned SAM also performed well, with an accuracy of 0.89, ROC AUC of 0.90, and EER of 0.18. The Dice Score (0.80) and Jaccard Index (0.67) were comparable to those of Multi SAM 50. The high performance of Finetuned SAM indicates the robustness of the SAM architecture for segmentation tasks. However, the additional smoothness objective in Multi SAM 50 provides a clear advantage in balancing segmentation accuracy with contextual consistency.
SegNet achieved an accuracy of 0.77, Jaccard Index of 0.61, and Dice Score of 0.76. However, its ROC AUC (0.72) and EER (0.46) were less favorable. The relatively high EER indicates that SegNet may produce more false positives, which is less desirable for applications requiring precise segmentation. Its performance highlights the importance of choosing models that balance false positives and negatives effectively.
DeepLabV3 ResNet 101 had an accuracy of 0.75, Jaccard Index of 0.54, Dice Score of 0.70, ROC AUC of 0.82, and EER of 0.24. The ResNet 50 variant showed slightly lower performance metrics. These models demonstrated consistent performance but were outperformed by both SAM-based models and Multi SAM 50. The results suggest that while DeepLabV3 models are robust, they may require additional tuning or different pretraining strategies to match the performance of SAM-based approaches.
FCN50 and FCN101 achieved similar metrics, with accuracies of 0.67, Jaccard Indices of 0.63, Dice Scores of 0.77, ROC AUCs of 0.83, and EERs of 0.22. These models showed consistent performance but were outperformed by newer architectures like SAM and Multi SAM 50.
The ROC curve and performance metrics table provide a multi-faceted view of each model's performance. Some key observations include:
**Invariance to Tuning**: Different models exhibit varying levels of sensitivity to tuning parameters. For instance, the Multi SAM 50 model, fine-tuned with only 50 epochs, achieved superior performance compared to the Finetuned SAM, which required 100 epochs. This indicates that our multi-objective approach may provide a more efficient optimization path.
**Altitude Variations**: The dataset's multi-altitude nature (10m, 35m, 120m) adds complexity to the segmentation task. Models must generalize across different resolutions and perspectives, which can affect performance. Further analysis could explore how models trained at specific altitudes perform compared to those trained on mixed-altitude data.
**Trade-offs in Performance**: The high precision end of the ROC curves reveals interesting trade-offs. For instance, models with higher AUC and lower EER values offer better recall at low error rates, making them suitable for applications where minimizing false positives is crucial. Conversely, models like SegNet, which perform well in terms of Dice and Jaccard scores, may be better for tasks requiring high segmentation precision within correctly classified areas.
**Generalization to New Environments**: The varying performance across models suggests differences in their ability to generalize to unseen environments. Models with lower EER and higher AUC are likely more robust to out-of-distribution data, an essential factor for practical deployment in diverse ecological settings.
\\begin{table}
\\begin{tabular}{l r r r r r r} \\hline \\hline
**Model Name** & **Accuracy \\(\\uparrow\\)** & **Jaccard \\(\\uparrow\\)** & **Dice \\(\\uparrow\\)** & **ROC AUC \\(\\uparrow\\)** & **EER \\(\\downarrow\\)** & **EER Threshold** \\\\ \\hline FCN50 & 0.67 & 0.63 & 0.77 & 0.83 & 0.22 & 0.54 \\\\ Unet & 0.75 & 0.52 & 0.68 & 0.85 & 0.24 & 0.44 \\\\ Finetuned SAM & 0.89 & 0.67 & 0.80 & 0.90 & 0.18 & 0.70 \\\\
**Multi SAM 50 (Ours)** & **0.90** & 0.67 & 0.80 & **0.91** & **0.17** & 0.74 \\\\ SegNet & 0.77 & 0.61 & 0.76 & 0.72 & 0.46 & 1.00 \\\\ DeepLabV3 ResNet 101 & 0.75 & 0.54 & 0.70 & 0.82 & 0.24 & 0.50 \\\\ DeepLabV3 ResNet 50 & 0.65 & 0.65 & 0.79 & 0.45 & 0.54 & 0.51 \\\\ FCN101 & 0.67 & 0.63 & 0.77 & 0.83 & 0.22 & 0.55 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Performance Metrics for Different Segmentation Models
Figure 3: Each ROC curve represents a modelβs ability to distinguish between grass and non-grass areas, with the Area Under the Curve (AUC) and Equal Error Rate (EER) values indicated in the legend. The EER points are marked with red circles on each curve, showing the threshold where the false positive rate equals the false negative rate. The dashed diagonal line represents the performance of a random classifier (AUC = 0.50). The higher AUC and lower EER values indicate better performance.
## 8 Limitations
In our error analysis (samples visualized in Figure 4), we identified several common types of errors observed in the segmentation results. Models often struggled with:
**Boundary Errors**: Misclassifications at the edges of grass patches, where the transition between grass and non-grass is abrupt.
**Small Patch Detection**: Difficulty in accurately segmenting small grass patches due to limited resolution and feature representation.
**False Positives and Negatives**: Instances where non-grass areas were misclassified as grass and vice versa, often due to similarities in texture or color.
These errors can be attributed to several factors, including the complexity of the ecological data, variations in lighting conditions, and the inherent difficulty in distinguishing between visually similar textures. Future work will focus on addressing these challenges by incorporating more sophisticated models and additional training data to improve segmentation accuracy and robustness.
## 9 Conclusion
This study has demonstrated the feasibility and challenges of the binary task of distinguishing grass from non-grass in aerial images, which serves as a foundational step towards more fine-grained identification of different grass species, including invasive ones like African lovegrass (ALG). Our research highlights the complexity of accurately segmenting ecological data due to variations in texture, lighting, and the presence of other vegetation.
The identification of common segmentation errors, such as boundary inaccuracies and false positives/negatives, provides critical insights for improving future models. Enhancing these models through better algorithms and more comprehensive training data will increase their robustness and reliability. The multi-objective approach, which balances segmentation accuracy with contextual consistency, shows promise in mitigating these errors. The strong performance of the Multi-Objective SAM model validates the effectiveness of our homotopy-based multi-objective fine-tuning approach. By optimizing both segmentation accuracy and contextual consistency, the Multi-Objective SAM model not only achieves high precision but also demonstrates efficiency, requiring only 50 epochs compared to the 100 epochs for the single-objective Finetuned SAM.
The implications of this work extend to practical applications in ecological monitoring and management. Accurate segmentation of ALG can significantly enhance the efficiency of monitoring and managing invasive species, potentially reducing their economic and ecological impact. The high accuracy and low Equal Error Rate (EER) of the Multi-Objective SAM model indicate that advanced segmentation techniques can provide reliable data for early detection and proactive management of invasive species.
While this study is focused on binary segmentation, it
Figure 4: Comparison of segmentation results from all models on the same aerial image. The comparison highlights the varying levels of detail, precision, and generalization ability of each model in distinguishing grass from non-grass areas. Note that some models like Multi-Objective SAM tend to misclassify large portions of the image as grass, while others like DeepLabV3 ResNet 101 perform better at capturing fine details.
sets the stage for future work aimed at fine-grained classification tasks. The insights and methodologies developed here will be instrumental in advancing the capabilities of segmentation models, ultimately contributing to more precise and effective ecological management strategies.
## References
* [1] Dario Albani, Daniele Nardi, and Vito Trianni. Field coverage and weed mapping by uav swarms. pages 4319-4325, 09 2017.
* [2] B. H. Y. Alsalam, K. Morton, D. Campbell, and F. Gonzalez. Autonomous uav with vision based on-board decision making for remote sensing and precision agriculture. In _2017 IEEE Aerospace Conference_, pages 1-12, 2017.
* [3] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. _IEEE transactions on pattern analysis and machine intelligence_, 39(12):2481-2495, 2017.
* [4] Erik A Bolch, Maria J Santos, Christiana Ade, Shruti Khanna, Nicholas T Basinger, Martin O Reader, and Erin L Hestir. Remote detection of invasive alien species. In _Remote Sensing of Plant Biodiversity_, pages 267-307. Springer, Cham, 2020.
* [5] Bethany Bradley. Remote detection of invasive plants: A review of spectral, textural and phenological approaches. _Biological Invasions_, 16, 07 2014.
* [6] Akshay L Chandra, Sai Vikas Desai, Wei Guo, and Vineeth N Balasubramanian. Computer vision with deep learning for plant phenotyping in agriculture: A survey. _arXiv preprint arXiv:2006.11391_, 2020.
* [7] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. _IEEE transactions on pattern analysis and machine intelligence_, 40(4):834-848, 2017.
* [8] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. _arXiv preprint arXiv:1706.05587_, 2017.
* [9] Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. _IEEE transactions on pattern analysis and machine intelligence_, 35(8):1915-1929, 2012.
* [10] Jennifer Firn. African lovegrass in Australia: a valuable pasture species or embarrassing invader? 2009.
* [11] Jennifer Firn, Emma Ladouceur, and Josh Dorrough. Integrating local knowledge and research to refine the management of an invasive non-native grass in critically endangered grassy woodlands. _Journal of Applied Ecology_, 55(1):321-330, 2018.
* [12] ZongYuan Ge, Chris McCool, Conrad Sanderson, and Peter Corke. Content specific feature learning for fine-grained plant classification. In Linda Cappellato, Nicola Ferro, Gareth J. F. Jones, and Eric San Juan, editors, _CLEF 2015_, volume 1391 of _CEUR Workshop Proceedings_. RWTH Aachen University, Jan. 2015. Conference and Labs of the Evaluation Forum 2015 : Experimental IR meets Multilinguality, Multimodality, and Interaction, CLEF 2015 ; Conference date: 08-09-2011 Through 11-09-2015.
* [13] Herve Goeau, Pierre Bonnet, and Alexis Joly. Plant identification in an open-world (LifeCLEF 2016). 09 2016.
* [14] Dingcheng Huang, Runzhi Zhang, Ke Chung Kim, and Andrew V. Suarez. Spatial pattern and determinants of the first detection locations of invasive alien species in mainland china. _PLOS ONE_, 7(2):1-7, 02 2012.
* [15] Riyad Ismail, Onisimo Mutanga, and Kabir Peerbhay. The identification and remote detection of alien invasive plants in commercial forests: An overview. _South African Journal of Geomatics_, 5(1):49-67, 2016.
* [16] Tobias Jensen, Frederik Seerup Hass, Mohammad Seam Akbar, Philip Holm Petersen, and Jamal Jokar Arsanjani. Employing machine learning for detection of invasive species using sentinel-2 and aviris data: The case of kudzu in the united states. _Sustainability_, 12(9), 2020.
* [17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4015-4026, 2023.
* [18] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3431-3440, 2015.
* [19] Richard N. Mack, Daniel Simberloff, W. Mark Lonsdale, Harry Evans, Michael Clout, and Fakhri A. Bazzaz. Biotic invasions: Causes, epidemiology, global consequences, and control. _Ecological Applications_, 10(3):689-710, 2000.
* [20] Keiichi Mochida, Satoru Koda, Komaki Inoue, Takashi Hirayama, Shojiro Tanaka, Ryuei Nishii, and Farid Melgani. Computer vision-based phenotyping for improvement of plant productivity: a machine learning perspective. _GigaScience_, 8(1), 12 2018. giy153.
* [21] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In _Proceedings of the IEEE international conference on computer vision_, pages 1520-1528, 2015.
* 81, 2018.
* [23] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015_, pages 234-241, Cham, 2015. Springer International Publishing.
* [24] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. \" grabcut\" interactive foreground extraction using iterated graph cuts. _ACM transactions on graphics (TOG)_, 23(3):309-314, 2004.
* [25] Richard Socher, Cliff Chiung-Yu Lin, Andrew Y Ng, and Christopher D Manning. Parsing natural scenes and natural language with recursive neural networks. In _ICML_, 2011.
* [26] Carole H Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M Jorge Cardoso. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, Proceedings 3_, pages 240-248. Springer, 2017.
* 19, 2020.
* [28] Ana Sofia Vaz, Domingo Alcaraz-Segura, Joao C Campos, Joana R Vicente, and Joao P Honrado. Managing plant invasions through the lens of remote sensing: A review of progress and the way forward. _The Science of the total environment_, 642:1328--1339, November 2018.
* [29] Jana Waldchen and Patrick Mader. Machine learning for image based species identification. _Methods in Ecology and Evolution_, 9(11):2216-2225, 2018.
* [30] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In _Proceedings of the IEEE international conference on computer vision_, pages 1529-1537, 2015.
Footnote 1: Currently excluded to honor double blind process | Give a concise overview of the text below. | 231 |
arxiv-format/2101_12633v1.md | _Reference: van Haren, H., H. Uchida, D. Yanagimoto, 2021. Further correcting pressure effects on_
_SBE911 CTD-conductivity data from hadal depths. J. Oceanogr., 77, 137-144._
## Further correcting pressure effects on SBE911 CTD-conductivity data from hadal depths
**by Hans van Haren\\({}^{1*}\\), Hiroshi Uchida\\({}^{2}\\), Daigo Yanagimoto\\({}^{3}\\)**
\\({}^{1}\\)Royal Netherlands Institute for Sea Research (NIOZ) and Utrecht University, P.O. Box 59, 1790 AB Den Burg, the Netherlands.
*e-mail: [email protected]
\\({}^{2}\\) Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), Yokosuka, Kanagawa, Japan
\\({}^{3}\\)Atmosphere and Ocean Research Institute, The University of Tokyo
Kashiwanoha 5-1-5, Kashiwa, Chiba 277-8564, Japan
## 1 Introduction
Due to the large hydrostatic ambient pressure, few observations on deep-sea life conditions have been made in hadal zones deeper than 6000 m so far. Interesting marine biological questions are to be addressed on life under such harsh conditions, questions varying from molecular cellular level to sparsely distributed ecological communities both in the water column (e.g., Jamieson, 2015; Gallo et al. 2015; Nunoura et al. 2015) and in sediments of deep-ocean floors (e.g., Glud et al. 2013). Dynamical variations in the physical environment are also not well known, except that hadal waters in ocean trenches cannot be stagnant pools of cold water as species like crustaceans require sufficient replenishment of oxygen and nutrients (Johnson 1998). Further quantitative knowledge is thus required on transport processes, which are likely driven by density differences on the large scales and turbulent overturning motions on the small scales. Although progress is slow because of the required specialized instrumentation and long cables for lowering shipborne equipment, several attempts have improved the electronic sampling in the last decades (e.g., Kawagucci et al. 2018).
Water mass characteristics like temperature and salinity determining density variations were the first static physical properties to be studied. Before the use of electronic devices, discrete reversing thermometer readings were made from R/V Vityaz in the late 1950's and a small free-falling water-sampling device equipped with reversing thermometers and water samplers was dropped to the floor of the Mariana Trench's Challenger Deep in 1976 (Mantyla and Reid 1978). The water sample data were analyzed in the laboratory for the salinity content that appeared constant to within 0.001 ppt. These data were used by Taira et al. (2005) as a reference for their deep shipborne Sea-Bird Electronics SBE-911 Conductivity Temperature Depth CTD-cast attached to a custom-made titanium wire, which was lowered down to 10877 m using a custom-made swell-compensator winch. Taira et al.'s (2005) findings led to a linear pressure-correction of the conductivity data due to length and diameter changes in the borosilicate glass cell as a function of pressure that became apparent at great depths (Sea-Bird Electronics 2013). However, even after applying this correction, an additional artificial pressure effect remained in the data from recent casts into the Challenger Deep reaching a depth of 10851 m in 10907 m water depth (van Haren et al. 2017). The resulting artificial increase in salinity and thus density not only hampers water mass analyses in hadal zones, but potentially also computations on turbulence.
As turbulence microstructure profilers and (lowered) acoustic Doppler current profilers for measuring shear variance (e.g., Kunze et al. 2006) do not go deeper than 6000 m to date, we rely on shipborne CTD-instrumentation to calculate turbulent overturning displacements and values of turbulence dissipation rate and vertical eddy diffusivity following the method proposed by Thorpe (1977). However, the standard high-resolution SBE-instrumentation has to be modified in several ways to be able to calculate turbulence values within acceptable error bounds of about a factor of three. An important modification is to minimize the effects of surface-wave-induced ship motions (Taira et al. 2005) and rotation and related motions of the instrument-package (Uchida et al. 2015) on the electronic CTD-data. A direct way of minimizing these effects is the use of an effective swell-compensator (Taira et al. 2005) together with an underwater slip ring swivel and a stabilizing fin (Uchida et al. 2018). Alternatively, during post-processing the effects of swell may be removed by low-pass filtering the recorded raw data (van Haren 2015).
In this note, we report on a mathematical method to correct for an additional artificial pressure effect in deep high-resolution SBE-911 CTD casts into the Challenger Deep. One data set was obtained together with water sample data. This served as a lead for the proposed nonlinear mathematical correction. The correction is subsequently applied to the Challenger Deep data of van Haren et al. (2017) that serve as an example for other hadal-deep SBE-911 CTD-data. Our primary interest is in finding the approximately correct vertical slopes of CTD-profiles, culminating in the most realistic vertical slope of density anomaly values. Such slopes express the amount of vertical density variation ('stratification') and the work of turbulent kinetic energy against it. Here we are not concerned with the exact absolute values as obtained from calibrations and used in water mass definitions and studies on variations therein.
## 2 Observations and data
Shipborne CTD observations have been made from the Japanese R/V Hakuho Maru in August 2002 and the German R/V Sonne in November 2016 above two deep spots of the Challenger Deep, the southernmost part of the Mariana Trench including the world's deepest point (Fig. 1). In both cases, a Sea-Bird SBE911plus CTD was used and the instrumentation package was mounted in a horizontal position at the bottom of a water sampler frame.
The Hakuho Maru lowered the CTD using a custom-made titanium cable with deepest measurement 10366 m at 11\({}^{\circ}\) 21.0\({}^{\prime}\) N, 142\({}^{\circ}\) 24.5\({}^{\prime}\) E in 10346\(\pm\)25 m water depth. To minimize ship-motion effects on the CTD-measurements, an active swell compensator, manufactured by Mitsubishi Heavy Industries, was controlled by an integrated signal of the accelerometer mounted on a vertical gyroscope under the gallows (Taira et al. 2005). In spectra of the principal parameter data, swell and wind-wave peaks were indeed absent. It is noted that in the surface wave range, for frequencies exceeding 0.02 cps, cycles per second, the spectra of deep CTD-data were flat white noise rather than continuing the slow decrease with frequency as at lower frequencies. During the CTD-upcast, water samples were taken for laboratory salinity determination by salinometer (Portasal 8410A, Guildline Instruments Ltd., Ontario, Canada). In hindsight, in the 2002-CTD the temperature sensor (SBE3 S/N893) had an abnormal pressure hysteresis of up to about \(\pm\)0.001\({}^{\circ}\)C. To correct it, the pressure dependency was estimated from comparisons with an SBE35 and another SBE3 (S/N5982) during a following cruise (Kawagucci et al. 2018).
The Sonne lowered the CTD with freshly calibrated T-C sensors using its 18 mm steel cable at 11\({}^{\circ}\) 19.752\({}^{\prime}\) N, 142\({}^{\circ}\) 11.277\({}^{\prime}\) E in 10907\(\pm\)12 m water depth with deepest measurement 10851 m (van Haren et al. 2017). The ship motion effects were removed during post-processing by applying a sharp double elliptic, phase-preserving low-pass filter (Parks and Burrus 1987) with 0.05 cps cut-off frequency. For consistency between the data sets, the Hakuho Maru data are also post-processed with this low-pass filter to remove some of the white noise. No water samples were taken during the Sonne's CTD-upcast as the Rosette-electronic bottle firing system was not rated at pressures exceeding 6000 dbar.
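For reference, a zero-phase (forward-backward) elliptic low-pass filter of this kind can be applied as sketched below; the 24 Hz SBE911 scan rate and the filter order and ripple settings are assumptions for illustration, only the 0.05 cps cut-off is taken from the text.

```python
import numpy as np
from scipy.signal import ellip, sosfiltfilt

def lowpass_ctd(series, cutoff_hz=0.05, fs_hz=24.0, order=8, rp=0.1, rs=60.0):
    """Sharp, phase-preserving elliptic low-pass filter applied forward-backward."""
    sos = ellip(order, rp, rs, cutoff_hz, btype="low", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, np.asarray(series, dtype=float))
```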
Following the Taira et al. (2005) publication, Sea-Bird Electronics introduced an extra pressure correction on conductivity data that accommodated for the small linear changes in ceramics cell-length under static pressure (Sea-Bird Electronics 2013),
\\[\\mathrm{C(0)=C(p)/(1+a\\cdotp)\\ \\ [S\\ m^{-1}]}, \\tag{1}\\]
in which C denotes conductivity (at pressure p) and a = -9.57\\(\\times\\)10\\({}^{-8}\\) [dbar-1] the nominal constant correction factor. With this correction, the full calibration equation from raw frequency (f) data reads (e.g., Sea-Bird Electronics 2013),
\[\mathrm{C(p,\,T)=(g+h\cdot f^{2}+i\cdot f^{3}+j\cdot f^{4})/[10\,(1+b\cdot T+a\cdot p)]\ \ [S\ m^{-1}]}, \tag{2}\]
where g,h,i and j are calibration constants and b is a constant for linear temperature T correction. Equation (2) becomes nonlinear in p when temperature is a nonlinear function of pressure.
We processed the CTD-data using the standard procedures incorporated in the SBE-software, including corrections for cell thermal mass using the parameter setting of Mensah et al. (2009) and sensor time-alignment. Conservative (\\(\\sim\\)potential) Temperature (\\(\\Theta\\)), absolute salinity SA and density anomalies \\(\\sigma_{11}\\) referenced to 11000 dbar are calculated using the Gibbs-SeaWater GSW-software that define thermodynamic properties of seawater based on a Gibbs function formulation (IOC, SCOR, IAPSO 2010).
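A minimal sketch of this step with the GSW-Python implementation of TEOS-10 is given below; the variable names and the simple call sequence are illustrative, and the upstream SBE corrections (cell thermal mass, alignment) are assumed to have been applied already.

```python
import gsw  # GSW-Python, TEOS-10 Gibbs-SeaWater toolbox

def derived_variables(SP, t, p, lon, lat, p_ref=11000.0):
    """Absolute Salinity, Conservative Temperature and density anomaly sigma_11.

    SP is practical salinity, t in-situ temperature (ITS-90, deg C),
    p sea pressure (dbar), lon/lat the station position.
    """
    SA = gsw.SA_from_SP(SP, p, lon, lat)        # Absolute Salinity [g/kg]
    CT = gsw.CT_from_t(SA, t, p)                # Conservative Temperature [deg C]
    sigma11 = gsw.rho(SA, CT, p_ref) - 1000.0   # density anomaly ref. 11000 dbar [kg/m3]
    return SA, CT, sigma11
```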
Estimates of turbulence dissipation rate \(\varepsilon_{\mathrm{T}}=\mathrm{c}_{1}{}^{2}\mathrm{d}^{2}\mathrm{N}^{3}\) and vertical eddy diffusivity \(\mathrm{K}_{\mathrm{zT}}=\mathrm{m}_{1}\mathrm{c}_{1}{}^{2}\mathrm{d}^{2}\mathrm{N}\) are made from the CTD-downcast data using the method of reordering potentially unstable vertical density profiles into statically stable ones, as proposed by Thorpe (1977). Here, d denotes the displacements between unordered (measured) and reordered profiles. N denotes the buoyancy frequency computed from the reordered profiles. Rms-values of displacements are not determined over individual overturns, as in Dillon (1982), but over 200 m vertical intervals that just exceed the largest overturn intervals. This avoids the complex distinction of smaller overturns in larger ones and allows the use of a single averaging length scale. We use standard constant values of \(\mathrm{c}_{1}=0.8\) for the Ozmidov/overturn scale factor (Dillon 1982) and m\({}_{1}\) = 0.2 for the mixing efficiency (Osborn 1980; Oakey 1982). This is the most commonly used parameterization for oceanographic data, see further discussions in van Haren et al. (2017) and Gregg et al. (2018). For eddy diffusivity, other parametrizations are also used for comparison. As a criterion for determining overturns from the surface wave corrected density data, we only used those data of which the absolute value of difference with the local reordered value exceeds a threshold of 7\(\times\)10\({}^{-5}\) kg m\({}^{-3}\), which corresponds to applying a threshold of 1.4\(\times\)10\({}^{-3}\) kg m\({}^{-3}\) to raw data variations (e.g., Galbraith and Kelley 1996; Stansfield et al. 2001; Gargett and Garner 2008).
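A simplified sketch of these overturn-based estimates is given below; the constant reference density, the window-wise linear fit for the buoyancy frequency and the handling of the profile tail are choices of the sketch rather than of the processing described above.

```python
import numpy as np

def thorpe_turbulence(z, sigma, g=9.81, rho0=1050.0, c1=0.8, gamma=0.2,
                      threshold=7e-5, window=200.0):
    """Thorpe-scale dissipation rate and eddy diffusivity from one downcast.

    z is depth in m (positive down, regularly spaced, shallow to deep) and
    sigma the measured density anomaly (kg/m3) after low-pass filtering.
    """
    z = np.asarray(z, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    order = np.argsort(sigma, kind="stable")      # reorder to a stable profile
    stable = sigma[order]
    d = z[order] - z                              # Thorpe displacements
    d[np.abs(sigma - stable) < threshold] = 0.0   # suppress noise-level inversions
    dz = float(np.mean(np.diff(z)))
    n_win = max(int(round(window / dz)), 2)       # ~200 m averaging intervals
    eps = np.full(z.shape, np.nan)
    Kz = np.full(z.shape, np.nan)
    for i0 in range(0, len(z) - n_win + 1, n_win):
        sl = slice(i0, i0 + n_win)
        d_rms = np.sqrt(np.mean(d[sl] ** 2))
        N2 = (g / rho0) * np.polyfit(z[sl], stable[sl], 1)[0]
        N = np.sqrt(max(N2, 0.0))
        eps[sl] = c1 ** 2 * d_rms ** 2 * N ** 3    # dissipation rate [m2 s-3]
        Kz[sl] = gamma * c1 ** 2 * d_rms ** 2 * N  # eddy diffusivity [m2 s-1]
    return d, eps, Kz
```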
## 3 Results and further correction
### General CTD-profiles obtained from Challenger Deep 14 years apart
The SBE-software processed and ship-motion corrected deeper half of the 2002 and 2016 CTD-profiles in the Challenger Deep are generally consistent (Fig. 2). Modifications on the large scale between the profiles, like a temperature difference of about 0.005\\({}^{\\circ}\\)C and a salinity difference of about 0.0015 g kg\\({}^{-1}\\), are considered due to water mass variations between the times of data taking, 14 years apart, and, on a shorter time scale, due to internal wave action, mesoscale eddies and sub-mesoscale processes like occurring near the ocean-surface (e.g., Nakano et al. 2015; Qiu et al. 2020). Pressure dependency (Uchida et al. 2015) of the temperature sensors and the batch-to-batch differences of the standard seawater (Uchida et al. 2020) used in the manufacturer's calibration for the conductivity sensors may also contribute to the differences.
Around z = -6000 m at the top of the trench or the level of the surrounding ocean floor, the vertical variation in temperature, salinity and density becomes very low. In Conservative Temperature (Fig. 2a), a near-homogeneous layer is observed between -8300 \(<\) z \(<\) -7000 m in the 2002-profile. The 2002-salinity profile demonstrates a large-scale weak instability between -7500 \(<\) z \(<\) -5500 m that is not observed in the 2016-Sonne CTD-downcast data (Fig. 2b). It is a weaker version of the S-shape instability in the data presented by Taira et al. (2005), which were obtained with the same serial-numbered CTD-sensors as during the 2002-Hakuho-Maru cruise, and is attributable to the linear pressure effects on the ceramics of the conductivity cell that have been corrected following (1) since Sea-Bird Electronics (2013).
Of concern here is the exponential increase with depth in salinity for z \\(<\\) -7500 m. Despite this exponential increase being found consistently in all profiles we consider it artificial.
The above observations in salinity are reflected in the density anomaly data (Fig. 2c). It is noted that for z \\(<\\) -8500 m the 2002- and 2016-profiles almost overlap, which indicates an almost complete matching of different salinity and temperature contributions to density variations.
In both data sets, small oscillations of about 0.0001\\({}^{\\circ}\\)C are visible especially in the weakly stratified layer between about -9000 \\(<\\) z \\(<\\) -7000 m. These small oscillations are considered realistic and typical for deep trench instabilities or overturning. The oscillations are part of 100 to 200 m tall overturns, with a general shape of large-scale weak stability above a small-scale relatively strong instability (Fig. 3).
### Finding a correction for artificial conductivity data
The 2002-salinity-data are corrected using water sample data analysed in the laboratory. Although the individual water sample data showed considerable spike behaviour of 0.001 g kg\({}^{-1}\) and larger, the general trend shows a weakly, not-exponentially increasing profile with depth (Fig. 4). The correction applied is thus one that yields a more or less steady weakly stable vertical gradient in salinity for z \(<\) -6700 m, leaving the small, 100-m-scale instabilities unaffected.
In order to apply the above salinity correction to other deep hadal CTD-profiles we recomputed a corrected conductivity profile Cc(p) from the corrected salinity, corrected temperature and pressure 2002-CTD-data and compared it with the original conductivity profile Co(p) from (2) (Fig. 5a). The small jump in conductivity difference \\(\\Delta\\)C = Cc - Co of about 0.00001 S m\\({}^{\\text{-}1}\\) around z = -7540 m was caused by the temperature correction mentioned in Section 2.
We fitted the conductivity difference data with a 9\\({}^{\\text{th}}\\) order polynomial on pressure. The coefficients are given in the Appendix. This high order was required to more or less correctly grasp the entire conductivity difference profile from surface to trench floor, due to some wild variations in conductivity related with water property variations near the surface. Alternatively, we fitted a lower 3\\({}^{\\text{rd}}\\) order polynomial that provided the same standard deviation between fit and \\(\\Delta\\)C, but only covering z \\(<\\) -5000 m deep waters, which we find less preferential. We also attempted to fit a polynomial to corrected salinity data difference with original data, as a function of z, but the results were less good (in salinity and density profiles) than fitting the \\(\\Delta\\)C(p).
The difference in salinity between the 9\\({}^{\\text{th}}\\)-order polynomial fit and the water sample corrections is smaller than \\(\\pm\\)2\\(\\times\\)10\\({}^{-4}\\) g kg\\({}^{-1}\\) (Fig. 5b). It is one order of magnitude smaller than the differences between electronic and laboratory water sample data and the salinity difference between original 2016-Sonne data and the 9\\({}^{\\text{th}}\\)-order polynomial correction to their conductivity data using (A1). In vertical salinity profiles for z \\(<\\) -6000 m, the corrections show a slight weakening of the general vertical salinity gradient for the 2002-CTD-data and a larger vertical salinity gradient in the 2016-CTD-data that is steady on the 1000-m scale (Fig. 5c).
### Turbulence calculations
In density anomaly, the two deep profiles have different slopes with depth that become near-equal for z \(<\) -9500 m (Fig. 6a). In this deep range, the effects of the corrected conductivity data on computed turbulence values are evident but smaller than expected.
Between -6000 \(<\) z \(<\) -5000 m, stratification is so considerable that turbulence values are found mainly below the threshold value. For z \(<\) -6000 m, the stratification becomes weak and the buoyancy frequency N computed over 100 dbar pressure difference reaches a minimum of N = 1\(\pm\)0.6 cpd (short for cycles per day) = 2.5\(\pm\)1.5f, where f denotes the local inertial frequency. Within the error bounds, but computed over several thousands of meters vertically, -10000 \(<\) z \(<\) -7000 m, the 2002-data yield N = 0.75\(\pm\)0.1 cpd and the 2016-data yield N = 1.0\(\pm\)0.1 cpd. Despite the different stratification, the vertical turbulence fluxes, which are proportional to the turbulence dissipation rates, give comparable results for the two CTD-casts (Fig. 6b). Averaged over -10000 \(<\) z \(<\) -7000 m, we find \(\varepsilon_{2002}\) = 1.9\(\pm\)1\(\times\)10\({}^{-10}\) m\({}^{2}\) s\({}^{-3}\) for the 2002-data and \(\varepsilon_{2016}\) = 1.8\(\pm\)1\(\times\)10\({}^{-10}\) m\({}^{2}\) s\({}^{-3}\) for the 2016-data. Over this vertical range, these values are not significantly different from each other and from the values calculated from uncorrected 2016-data that, however, gave negligible values for z \(<\) -10400 m. Conductivity correction of the 2016-data yielded the near-bottom range [10200 10800] m average of \(\varepsilon_{2016}\) = 5\(\pm\)3\(\times\)10\({}^{-11}\) m\({}^{2}\) s\({}^{-3}\).
The equivalent turbulent flux values between the profiles and the different stratification result in approximately one-and-a-half-times larger turbulent diffusivity values for the 2002-data compared with the 2016-data (Fig. 6c). Like for the turbulence dissipation rate (\(\sim\) turbulent flux), largest values are found in the range -9000 \(<\) z \(<\) -7000 m, but non-negligible values are found close to the trench-floor. Mean values for the -10000 \(<\) z \(<\) -7000 m range of 2016-data are K\({}_{\rm z}\) = 6\(\pm\)4\(\times\)10\({}^{-3}\) m\({}^{2}\) s\({}^{-1}\) when computed using the Osborn (1980) parameterization. Mean values are higher using the parametrization suggested for estuaries under 'high' buoyancy Reynolds number values (Holleman et al. 2016), and tenfold less, K\({}_{\rm z}\) = 6\(\pm\)4\(\times\)10\({}^{-4}\) m\({}^{2}\) s\({}^{-1}\), using the buoyancy Reynolds number Re\({}_{\rm b}\) parametrization suggested for lakes with Re\({}_{\rm b}\)\(<\) 400 by Bouffard and Boegman (2013). In the present data 10\({}^{4}\) \(<\) Re\({}_{\rm b}\) = \(\varepsilon/(\nu\mathrm{N}^{2})\)\(<\) 10\({}^{5}\), which is generally considered high.
## 4 Discussion and conclusions
The present parameterization on mixing efficiency and turbulent diffusivity values leads to considerable ambiguity as demonstrated from our data above. As noted previously (van Haren et al. 2017), one cannot establish from CTD-profiles the particular type of turbulence and hence the mixing efficiency without knowledge on, e.g., the extent of the inertial subrange of shear-induced turbulence. One needs more extensive data sets, for example from moored instrumentation, to establish such knowledge. However, the present data can be used to establish some knowledge on the vertical variation of turbulent fluxes via turbulence dissipation rate calculations. The results between the two differently obtained data sets 14 years apart are consistent and show a weak turbulent vertical fluxing throughout the trench depth range down to the trench-floor. Values are comparable with or just below open ocean values (e.g., Gregg 1989). These turbulence values are about one order of magnitude larger than observed in the deep North-Pacific where microstructure profiler data gave [\\(\\varepsilon\\)] \\(\\sim\\) 10\\({}^{\\text{-11}}\\) m\\({}^{2}\\) s\\({}^{\\text{-3}}\\) around z = -5000 m (Yasuda et al. submitted). This is consistent with the below-threshold values found here between -6000 \\(<\\) z \\(<\\) -5000 m. It remains to be established whether the observed larger turbulence activity in deep trench waters is associated with enhanced carbon fluxes reported by Glud et al. (2013).
The water sample based polynomial correction to the conductivity data provides a consistent weakly stable deep trench stratification. This improves Mantyla and Reid's (1978) presentation of a nearly 4000 m tall unstable water column and which was attributable to their zero change in salinity values between their 7000 and 11000 dbar sampling levels. While temperature and salinity both positively contribute to vertical density variations, the contributions of salinity dominate in the present data. For z \\(<\\) -7000 m a tight temperature - density relationship is found, with \\(<\\)10% difference between coefficients for the 2002- and 2016-data. This results in \\(<\\)15% difference in mean turbulence dissipation rate values, and \\(<\\)7% in eddy diffusivity, which is well within error bounds.
Our proposed correction for artificial pressure effects on conductivity-data by applying a mathematical polynomial fit across the entire water column has no physical foundation. It is obvious that a linear trend is not observed, due to temperature contributions to conductivity. We hypothesize that the ceramics of the TC-duct have a small nonlinear response in addition to the linear pressure functionality. With the successful application of the additional conductivity correction to the 2016-Sonne data we speculate that the (A1) polynomial may work for other hadal depth SBE911 data sets, also from other trenches.
The corrected salinity data still weakly increase with depth, together with a decrease in temperature, which may point at an influx of dense modified Antarctic bottom water. For improved establishing such water mass transformations we require future extensive deep CTD-sampling.
## Acknowledgements
We thank the masters and crews of the R/V Hakuho Maru and R/V Sonne for the pleasant cooperation during the operations at sea. We acknowledge Dr. K. Taira who planned the CTD-observations in the Mariana Trench during the Hakuho Maru Cruise in 2002.
## Appendix Correction coefficient
Additional pressure correction to Sea-Bird SBE-911 Conductivity data at hadal depths.
\[\text{Cc(p)}=\text{Co(p)}-\text{a}_{0}-\text{a}_{1}\text{p}-\text{a}_{2}\text{p}^{2}-\text{a}_{3}\text{p}^{3}-\text{a}_{4}\text{p}^{4}-\text{a}_{5}\text{p}^{5}-\text{a}_{6}\text{p}^{6}-\text{a}_{7}\text{p}^{7}-\text{a}_{8}\text{p}^{8}-\text{a}_{9}\text{p}^{9}\ \ [\text{S m}^{-1}], \tag{A1}\]
with best-fit coefficients,
\[\text{a}_{0}=-1.0940527\times 10^{-03}\] \[\text{a}_{1}=+2.0797759\times 10^{-06}\] \[\text{a}_{2}=-1.9887985\times 10^{-09}\] \[\text{a}_{3}=+1.0474409\times 10^{-12}\] \[\text{a}_{4}=-3.2735743\times 10^{-16}\] \[\text{a}_{5}=+6.3408610\times 10^{-20}\] \[\text{a}_{6}=-7.7038164\times 10^{-24}\] \[\text{a}_{7}=+5.7145710\times 10^{-28}\] \[\text{a}_{8}=-2.3640827\times 10^{-32}\] \[\text{a}_{9}=+4.1779238\times 10^{-37}.\]
Here, Cc is the corrected conductivity and Co the original data from (2). Formula (A1) is applicable for all pressures, but yields noticeable corrections for great depths only.
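In code, the correction amounts to evaluating this polynomial (coefficients in increasing powers of pressure, as listed above) and subtracting it from the measured conductivity, for example:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficients a0..a9 of (A1), in increasing order of the power of pressure.
A_COEFFS = [-1.0940527e-03, +2.0797759e-06, -1.9887985e-09, +1.0474409e-12,
            -3.2735743e-16, +6.3408610e-20, -7.7038164e-24, +5.7145710e-28,
            -2.3640827e-32, +4.1779238e-37]

def correct_conductivity(C_original, p_dbar):
    """Additional hadal-depth correction (A1): Cc(p) = Co(p) - sum_k a_k p^k [S/m]."""
    return np.asarray(C_original) - P.polyval(np.asarray(p_dbar), A_COEFFS)
```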
## References
* Bouffard and Boegman (2013) Bouffard D, Boegman L (2013) A diapycnal diffusivity model for stratified environmental flows. Dyn Atmos Oc 61-62:14-34.
* Dillon (1982) Dillon TM (1982) Vertical overturns: a comparison of Thorpe and Ozmidov length scales. J Geophys Res 87:9601-9613.
* Galbraith and Kelley (1996) Galbraith PS, Kelley DE (1996) Identifying overturns in CTD profiles. J Atmos Oceanic Technol 13:688-702.
* Gargett and Garner (2008) Gargett A, Garner T (2008) Determining Thorpe scales from ship-lowered CTD density profiles. J Atmos Oceanic Technol 25:1657-1670.
* Gallo et al (2015) Gallo ND, Cameron J, Hardy K, Fryer P, Bartlett DH, Levin LA (2015) Submersible- and lander-observed community patterns in the Mariana and New Britain trenches: Influence of productivity and depth on epibenthic and scavenging communities. Deep-Sea Res 1 99:119-133.
* Glud et al (2013) Glud RN, Wenzhofer F, Middelboe M, Oguri K, Turnewitsch R, Canfield DE, Kitazato H e (2013) High rates of microbial carbon turnover in sediments in the deepest oceanic trench on Earth. Nat Geosci 6:284-288.
* Gregg (1989) Gregg MC (1989) Scaling turbulent dissipation in the thermocline. J Geophys Res 94:9686-9698.
* Gregg et al (2018) Gregg MC, D'Asaro EA, Riley JJ, Kunze E (2018) Mixing efficiency in the ocean. Ann. Rev. Mar. Sci. 10:443-473.
* Holleman et al (2016) Holleman RC, Geyer WR, Ralston DK (2016) Stratified turbulence and mixing efficiency in a salt wedge estuary. J Phys Oceanogr 46:1769-1783.
* IOC, SCOR, IAPSO (2010) The international thermodynamic equation of seawater - 2010: Calculation and use of thermodynamic properties. Intergovernmental Oceanographic Commission, Manuals and Guides No. 56, UNESCO, Paris, France, p 196
* Jamieson (2015) Jamieson A (2015) The hadal zone, life in the deepest oceans. Cambridge University Press, Cambridge, UK, p 382
* Johnson (1998) Johnson GC (1998) Deep water properties, velocities, and dynamics over ocean trenches. J Mar Res 56:329-347.
* Kawagucci et al (2018) Kawagucci S, Makabe A, Kodama T, Matsui Y, Yoshikawa C, Ono E, Wakita M, Nunoura T, Uchida H, Yokokawa T (2018) Hadal water biogeochemistry over the Izu-Ogasawara Trench observed with a full-depth CTD-CMS. Ocean Sci 14:575-588, [https://doi.org/10.5194/os-14-575-2018](https://doi.org/10.5194/os-14-575-2018).
* Kunze et al (2006) Kunze E, Firing E, Hummon JM, Chereskin TK, Thurnherr AM (2006) Global abyssal mixing inferred from lowered ADCP shear and CTD strain profiles. J Phys Oceanogr 36:1553-1576.
* Mantyla and Reid (1978) Mantyla AW, Reid JL (1978) Measurements of water characteristics at depth greater than 10 km in the Mariana Trench. Deep-Sea Res 25:169-173.
* Mensah et al (2009) Mensah V, Le Menn M, Morel Y (2009) Thermal mass correction for the evaluation of salinity. J Atmos Oceanic Tech 26:665-672.
* Nakano et al (2015) Nakano T, Kitamura T, Sugimoto S, Suga T, Kumachi M (2015) Long-term variations of North Pacific Tropical Water along the 137\\({}^{\\rm o}\\)E repeat hydrographic section. J Oceanogr 71:229-238.
* Nunoura et al (2015) Nunoura T, Takaki Y, Hirai M, Shimamura S, Makabe A, Koide O, Kikuchi T, Miyazaki J, Koba K, Yoshida N, Sunamura M, Takai K (2015) Hadal biosphere: Insight into the microbial ecosystem in the deepest ocean on Earth. P Natl Acad Sci USA 112:E1230-E1236.
* Oakey (1982) Oakey NS (1982) Determination of the rate of dissipation of turbulent energy from simultaneous temperature and velocity shear microstructure measurements. J Phys Oceanogr 12:256-271.
* Osborn (1980) Osborn TR (1980) Estimates of the local rate of vertical diffusion from dissipation measurements. J Phys Oceanogr 10:83-89.
* Parks and Burrus (1987) Parks TW, Burrus CS (1987) Digital Filter Design. Wiley, New York, USA, p 342
* Qiu et al (2020) Qiu C, Liang H, Huang Y, Mao H, Yu J, Wang D, Su D (2020) Development of double cyclonic mesoscale eddies at around Xisha islands observed by a 'Sea-Whale 2000' autonomous underwater vehicle. Appl Ocean Res 101:102270.
* Sea-Bird Electronics (2013) Sea-Bird Electronics (2013) Compressibility compensation of Sea-Bird conductivity sensors. Application Note No. 10, Sea-Bird Electronics Inc., Bellevue WA, USA, p 3
* Stansfield et al (2001) Stansfield K, Garrett C, Dewey R (2001) The probability distribution of the Thorpe displacement within overturns in Juan de Fuca Strait. J Phys Oceanogr 31:3421-3434.
* Smith and Sandwell (1997) Smith WHF, Sandwell DT (1997) Global seafloor topography from satellite altimetry and ship depth soundings. Science 277:1957-1962.
* Taira et al (2005) Taira K, Yanagimoto D, Kitagawa S (2005) Deep CTD casts in the Challenger Deep. Mariana Trench. J Oceanogr 61:447-454.
* Thorpe (1977) Thorpe SA (1977) Turbulence and mixing in a Scottish loch. Phil Trans Roy Soc Lond A 286:125-181.
* Uchida et al (2020) Uchida H, Kawano T, Nakano T, Wakita M, Tanaka T, Tanihara S (2020) An expanded batch-to-batch correction for IAPSO standard seawater. J Atmos Oceanic Technol 37:1507-1520.
* Uchida et al (2018) Uchida H, Maeda Y, Kawamata S (2018) Compact underwater slip ring swivel, minimizing effect of CTD package rotation on data quality. Sea Technol 59(11):30-32.
* Uchida et al (2015) Uchida H, Nakano T, Tamba J, Widiatmo JV, Yamazawa K, Ozawa S, Kawano T (2015) Deep ocean temperature measurement with an uncertainty of 0.7 mK. J Atmos Oceanic Technol 32:2199-2210.
* van Haren (2015) van Haren H (2015) Ship motion effects in CTD data from weakly stratified waters of the Puerto Rico Trench. Deep-Sea Res I 105:19-25.
* van Haren et al (2017) van Haren H, Berndt C, Klaucke I (2017) Ocean mixing in deep-sea trenches: new insights from the Challenger Deep, Mariana Trench. Deep-Sea Res I, 129:1-9.
* Yasuda et al (2020) Yasuda I, Fujio S, Yanagimoto D, Lee K-J, Sasaki Y, Zhai S, Tanaka M, Itoh S, Tanaka T, Hasegawa D, Goto Y, Sasano D (2020) Improved measurements of ocean turbulent energy dissipation using fast-response thermistors. Submitted to J Oceanogr.
* Yahata et al (2018)
Figure 1: Sites of the 2002-Hakuho Maru (white) and 2016-Sonne (black) CTD stations in the Challenger Deep, Mariana Trench, North-Pacific. One minute grid version of the ocean topography database presented by Smith and Sandwell (1997). Local calibrated multibeam echosounder depth data are given in [m], which may be compared with the database values of -10945 and -10377 m for the 2016- and 2002-sites, respectively.
Figure 2: Lower 5500 m of the 2002-Hakuho Maru and 2016-Sonne CTD profiles from the Challenger Deep, Mariana Trench. Data without corrections except for ship-motions including 0.05 cps, cycle per second, low-pass filtering, see text. (a) Conservative Temperature. (b) Absolute Salinity. (c) Potential density anomaly, referenced to 11000 dbar.
Figure 3: Magnification of 2002-data in Fig. 2 demonstrating a characteristic deep trench instability around z = -7650 m. The black bars indicate the approximate error bars for the 0.05 cps low-pass filtered data.
Figure 4: As Fig. 2b, but for corrections to 2002-CTD data using water sample and laboratory information. Uncorrected data in green, corrected data in black, water sample laboratory data in purple (x). The spike of about 0.01 g kg-1 at 6800 m is an apparently extreme noise contamination in the water sampling.
Figure 5: Comparison between original and corrected data and the polynomial fit to the corrections. (a) Entire profile of conductivity difference between original and water-sample-corrected data (black) with a 9\\({}^{\\text{th}}\\) order polynomial fit (purple). (b) As a., but for the lower 5500 m of Absolute Salinity difference between water-sample-corrected data and data computed from the polynomial fit-profile of a. (black). In purple the difference between water-sample and electronic CTD-data during water sample taking. In blue the difference between original (0.05 cps low-pass filtered) data and polynomial fit-data from the 2016-Sonne cruise. (c) Lower 5500 m profile of Absolute Salinity of: 2002-data corrected using water samples (black), 2002-data corrected using 9\\({}^{\\text{th}}\\) order polynomial on conductivity difference in a. (red), 2016-data using 9\\({}^{\\text{th}}\\) order polynomial of 2002-Hakuho-Maru -conductivity correction (blue).
Figure 6: Lower 5500 m of turbulence characteristics computed from 9th order polynomial 2002-Hakuho Maru-conductivity corrected downcast 2002- and 2016-data applying a threshold of 7x10\({}^{-5}\) kg m\({}^{-3}\). (a) Density anomaly referenced to 11000 dbar. (b) Logarithm of dissipation rate computed from the profiles in a., averaged over 200 m vertical intervals. Values are zero when threshold is not passed. (c) As b., but for eddy diffusivity. The dashed profiles indicate values using the parameterization proposed for lake data by Bouffard and Boegman (2013).
# Will Artificial Intelligence supersede Earth System and Climate Models?
Christopher Irrgang
Helmholtz Centre Potsdam, German Research Centre for Geosciences GFZ, Potsdam, Germany
Niklas Boers
Maike Sonnewald
Elizabeth A. Barnes
Potsdam Institute for Climate Impact Research, Potsdam, Germany
Christopher Kadow
Potsdam Institute for Climate Impact Research, Potsdam, Germany
Joanna Staneva
Helmholtz Centre Potsdam, German Research Centre for Geosciences GFZ, Potsdam, Germany
Jan Saynisch-Wagner
Helmholtz Centre Potsdam, German Research Centre for Geosciences GFZ, Potsdam, Germany
For decades, scientists have utilized mathematical equations to describe geophysical and climate processes and to construct deterministic computer simulations that allow for the analysis of such processes. Until recently, process-based models were considered irreplaceable tools for understanding the complex interactions in the coupled Earth system and provided the only means of predicting the Earth system's response to anthropogenic climate change.
The provocative thought that Earth system models (ESMs) might lose their fundamental importance in the advent of novel artificial intelligence (AI) tools has sparked both a gold-rush feeling and contempt in the scientific communities. On the one hand, deep neural networks have been developed that complement and have started to outperform the skill of process-based models in various applications, ranging from numerical weather prediction to climate research. On the other hand, most neural networks are trained for isolated applications and lack true process knowledge. Regardless, the daily increasing data streams from Earth system observation (ESO), increasing computational resources, and the availability and accessibility of powerful AI tools, particularly in machine learning (ML), have led to numerous innovative frontier applications in Earth and climate sciences. Based on the current state, recent achievements, and recognised limitations of both process-based modelling and AI in Earth and climate research, we draw a perspective on an imminent and profound methodological transformation, hereafter named Neural Earth System Modelling (NESYM). To solve the emerging challenges, we highlight the necessity of new transdisciplinary collaborations between the involved communities.
## Overview of Earth System Modelling and Earth System Observations
Earth system models (ESMs) [1] combine process-based models of the different sub-systems of the Earth system into an integrated numerical model that yields for a given state of the coupled system at time \\(t\\) the tendencies associated with that state, i.e., a prediction of the system state for time \\(t+1\\). The individual model components, or modules, describe sub-systems including the atmosphere, the oceans, the carbon and other biogeochemical cycles, radiation processes, as well as land surface and vegetation processes and marine ecosystems. These modules are then combined by a dynamic coupler to obtain a consistent state of the full system for each time step.
For some parts of the Earth system, the primitive physical equations of motion are known explicitly, such as the Navier-Stokes equations that describe the fluid dynamics of the atmosphere and oceans (Fig. 1). In practice, it is impossible to numerically resolve all relevant scales of the dynamics and approximations have to be made. For example, the fluid dynamical equations for the atmosphere and oceans are integrated on discrete spatial grids, and all processes that operate below the grid resolution have to be parameterized to ensure a closed description of the system. Since the multi-scale nature of the dynamics of geophysical fluids implies that the subgrid-scale processes interact with the larger scales that are resolved by the model, (stochastic) parameterization of subgrid-scale processes is a highly non-trivial, yet unavoidable, part of climate modelling [2, 3, 4].
For other parts of the Earth system, primitive equations of motion, such as the Navier-Stokes equations for atmospheric motion, do not exist. Essentially, this is due to the complexity of the Earth system, where many phenomena that emerge at a macroscopic level are not easily deducible from microscopic scales that may or may not be well-understood. A typical example is given by ecosystems and the physiological processes governing the vegetation that covers vast parts of the land surface, as well as their interactions with the atmosphere, the carbon and other geochemical cycles. For these cases, too, approximations in terms of parameterizations of potentially crucial processes have to be made.
Regardless of the specific process, such parameterizations induce free parameters in ESMs, for which suitable values have to be found empirically. The size of state-of-the-art ESMs mostly prohibits systematic calibration methods, such as those based on Bayesian inference, and the models are therefore often tuned manually. The quality of the calibration as well as the overall accuracy of the model can only be assessed with respect to relatively sparse observations of the last 170 years, at most, and there is no way to assess the models' skill in predicting future climate conditions [5]. The inclusion of free parameters can cause biases or structural model errors, and the example of the discretized spatial grid suggests that the higher the spatial resolution of an ESM, the smaller the potential errors. Likewise, it is expected that the models' representation of the Earth system will become more accurate the more processes are resolved explicitly.
The inclusion of a vastly increasing number of processes, together with continuously rising spatial resolution, has indeed led to the development of comprehensive ESMs that have become irreplaceable tools to analyse and predict the state of the Earth system. From the first assessment report of the Intergovernmental Panel on Climate Change (IPCC) in 1990 to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) [6] and the associated fifth IPCC assessment report in 2014, the spatial resolution has increased from around 500 km to up to 70 km. In accordance, the CMIP results show that the models have, over the course of two decades, greatly improved in their accuracy in reproducing crucial characteristics of the Earth system, such as the evolution of the global mean temperature (GMT) since the beginning of instrumental data in the second half of the 19th century, or the average present-day spatial distribution of temperature and precipitation [7, 8].

Figure 1: Symbolic representation of Earth system components and exemplary deterministic or stochastic coupling mechanisms on long and short spatio-temporal scales.
Despite the tremendous success of ESMs, persistent problems and uncertainties remain:
(1) A crucial quantity for the evaluation of ESMs is the equilibrium climate sensitivity (ECS), defined as the amount of equilibrium GMT increase that results from an instantaneous doubling of atmospheric carbon dioxide [9]. There remains a large ECS range in current ESM projections and reducing these uncertainties, and hence the uncertainties of future climate projections, is one of the key challenges in the development of ESMs. Nevertheless, from CMIP5 to CMIP6, the likely range of ECS has widened from 2.1-4.7\\({}^{\\circ}\\)C to 1.8-5.6\\({}^{\\circ}\\)C [10, 11]. A highly promising line of research in this regard focuses on the identification of emergent constraints, which in principle allow to narrow down the projected range for a model variable of interest, given that the variable has a concise relationship with another model variable that can be validated against past observations [12, 13]. The development of suitable data-driven techniques for this purpose is still in its infancy.
(2) Both theoretical considerations and paleoclimate data suggest that several sub-systems of the Earth system can abruptly change their state in response to gradual changes in forcing [14, 15]. There is concern that current ESMs will not be capable of predicting future abrupt climate changes, because the instrumental era of less than two centuries has not experienced comparable transitions, and model validation against paleoclimate data evidencing such events remains impossible due to the length of the relevant time scales [16]. In an extensive search, many relatively abrupt transitions have been identified in future projections of CMIP5 models [17], but due to the nature of these rare, high-risk events, the accuracy of ESM in predicting them remains untested.
(3) Current ESMs are not yet suitable for assessing the efficacy or the environmental impact of carbon dioxide removal techniques, which are considered key mitigation options in pathways realizing the Paris Agreement [18]. Further, ESMs still insufficiently represent key environmental processes such as the carbon cycle, water and nutrient availability, or interactions between land use and climate. This can impact the usefulness of land-based mitigation options that rely on actions such as biomass energy with carbon capture and storage or nature-based climate solutions [19, 20].
(4) The distributions of time series encoding Earth system dynamics typically exhibit heavy tails. Extreme events such as heat waves and droughts, but also extreme precipitation events and associated floods, have always caused tremendous socio-economic damages. With ongoing anthropogenic climate change, such events are projected to become even more severe, and the attribution of extremes poses another outstanding challenge of Earth system science [21]. While current ESMs are very skilful in predicting average values of climatic quantities, there remains room for improvement in representing extremes.
In addition to the possible solutions to these fundamental challenges, improvements of the overall accuracy of ESMs can be expected from a more extensive and more systematic integration of the process-based numerical models with observational data. Earth system observations (ESOs) are central to ESMs, serving a multitude of purposes. ESOs are used to evaluate and compare process-based model performance, to generate model parameters and initial model states, or as boundary forcing of ESMs [22, 23]. ESOs are also used to directly influence the model output, by either tuning or nudging parameters that describe unmodeled processes, or by the more sophisticated methods of data assimilation that alter the model's state variables to bring the model output into better agreement with the observations [24]. To incorporate uncertainty into model predictions, variational inference has been used [25]. Existing techniques for assimilating data into ESMs fall into two main categories, each with their own limitations. Gradient-based optimization, as in four-dimensional variational (4DVar) schemes, is the current state of the art for efficiency and accuracy, but currently requires time-consuming design and implementation of adjoint calculation routines tailored to each model. Ensemble-based Kalman filter (EnKF) schemes are gradient-free but produce unphysical outputs and rely on strong statistical assumptions that are often unsatisfied, leading to biases and overconfident predictions [26]. The main problems of contemporary ESM data assimilation are 1) nonlinear dynamics and non-Gaussian error budgets in combination with the high dimensionality of many ESM components [27, 28, 29], and 2) constraining the governing processes over the different spatio-temporal scales found in coupled systems [30, 31]. ML approaches can be used to combine the accuracy of 4DVar with the flexibility of EnKF, essentially allowing optimization-based assimilation in cases where gradients are currently unavailable. Furthermore, these traditional approaches of model and observation fusion have slowly been expanded or replaced by ML methods in recent years [32, 33, 34].
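To make the contrast between the two categories concrete, a minimal sketch of a stochastic EnKF analysis step is given below. It follows the generic textbook formulation rather than any operational ESM implementation; the state dimension, the observation operator, and all variable names are illustrative placeholders.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_cov, rng=None):
    """One stochastic EnKF analysis step (textbook form, illustration only).

    ensemble     : (n_members, n_state) prior model states
    obs          : (n_obs,) observation vector
    obs_operator : function mapping one state vector to observation space
    obs_cov      : (n_obs, n_obs) observation-error covariance R
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(ensemble, dtype=float)
    n_members = X.shape[0]

    # Ensemble anomalies in state and observation space.
    HX = np.array([obs_operator(x) for x in X])          # (n_members, n_obs)
    Xp, HXp = X - X.mean(axis=0), HX - HX.mean(axis=0)

    # Sample covariances and Kalman gain K = P H^T (H P H^T + R)^-1.
    PHt = Xp.T @ HXp / (n_members - 1)
    HPHt = HXp.T @ HXp / (n_members - 1)
    K = PHt @ np.linalg.inv(HPHt + obs_cov)

    # Perturbed observations keep the analysis ensemble spread statistically consistent.
    obs_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_members)
    return X + (obs_pert - HX) @ K.T
```

Note that this gradient-free update only needs forward evaluations of the observation operator, which is exactly the flexibility that ML-based surrogates can exploit.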
ESOs cover a wide range of spatio-temporal scales and types, ranging from a couple of centimeters to tens of thousands of kilometers, and from seconds to decades and millennia. The types of observations range from in-situ measurements at irregular times and locations (ship cruises, buoy arrays, etc.), through single time series (ice and sediment cores, tide gauges, etc.), to satellite-based global 2D or 3D data fields (altimetry, gravimetry, radio occultation, etc.). The amount of available observations is rapidly increasing and has reached a threshold where automated analysis becomes crucial. Yet, the available observational data pool still contains large gaps in time and space that prevent building a holistic observation-driven picture of the coupled Earth system; these gaps result from insufficient spatio-temporal data resolution, too short observation time periods, and largely unobserved compartments of the Earth system such as, for instance, the abyssal oceans. The combination of these complex characteristics renders Earth system observations both challenging and particularly interesting for AI applications.
## From Machine Learning-based Data Exploration Towards Learning Physics
ML and other AI techniques have achieved stunning results in computer vision [35], speech and language models [36], medical science [37, 38], economic and societal analytics [39], and other disciplines [40, 41]. Due to this widespread integration into both fundamental research and end-user products, and despite shortcomings and inherent limitations [42, 43, 44, 45], ML is already praised as a key disruptive technology of the 21st century [46]. In contrast, the usage of ML in Earth and climate sciences is still in its infancy. A key observation is that ML concepts from computer vision and automated image analysis can be isomorphically transferred to ESO imagery. Pioneering studies demonstrated the feasibility of ML for remote sensing data analysis, classification tasks, and parameter inversion already in the 1990s [47, 48, 49, 50], and climate-model emulation in the early 2000s [51]. The figurative Cambrian explosion of AI techniques in Earth and climate sciences, however, only began over the last five years and will rapidly continue throughout the coming decades.
Under the overarching topic of ESO data exploration, ML has been applied for a huge variety of statistical and visual use cases. Classical prominent examples are pattern recognition in geo-spatial observations, climate data clustering, automated remote-sensing data analysis, and time series prediction [52, 32]. In this context, ML has been applied across various spatial and temporal scales, ranging from short-term regional weather prediction to Earth-spanning climate phenomena. Significant progress has been made in developing purely data-driven weather prediction networks, which start to compete with process-based model forecasts [53, 54, 55]. ML has also contributed to addressing the pressing need to improve the predictability of natural hazards, for instance, by uncovering global extreme-rainfall teleconnections [56], or by improving long-term forecasts of the El Nino Southern Oscillation (ENSO) [57, 58]. ML-based image filling techniques were utilized to reconstruct missing climate information, allowing previous global temperature records to be corrected [59]. Furthermore, ML was applied to analyze climate data sets, e.g., to extract specific forced signals from natural climate variability [60, 61] or to predict clustered weather patterns [62]. In these applications, the ML tools function as highly specialized agents that help to uncover and categorize patterns in an automated way. A key methodological advantage of ML in comparison to covariance-based spatial analysis lies in the possibility to map nonlinear processes [63, 64]. At the same time, such trained neural networks lack actual physical process knowledge, as they solely function through identifying and generalizing statistical relations by minimizing pre-defined loss measures for a specific task [65]. Consequently, research on ML in Earth and climate science differs fundamentally from the previously described efforts of advancing ESMs in terms of methodological development and applicability.
Concepts of utilizing ML not only for physics-blind data analyses, but also as surrogates and methodological extensions for ESMs have only recently started to take shape [66]. Scientists have started pursuing the aim of having ML methods learn aspects of Earth and climate physics, or at least plausibly relate cause and effect. The combination of ML with process-based modelling is the essential distinction from the previously described ESO data exploration. Lifting ML from purely diagnosis-driven usage towards the prediction of geophysical processes will also be crucial for aiding climate change research and the development of mitigation strategies [67].
Following this reasoning, ML methods can be trained with process-based model data to inherit a specific geophysical causation or even to emulate and accelerate entire forward simulations. For instance, ML has been used in combination with ESMs and ESOs to invert space-borne oceanic magnetic field observations to determine the global ocean heat content [33]. Similarly, a neural network has been trained with a continental hydrology model to recover high-resolution terrestrial water storage from satellite gravimetry [34]. ML also plays an important role in upscaling unevenly distributed carbon flux measurements to improve global carbon monitoring systems [68]. For example, the eddy covariance technique was combined with ML to measure the net ecosystem exchange of CO\({}_{2}\) between ecosystems and the atmosphere, offering a unique opportunity to study ecosystem responses to climate change [69]. ML has shown remarkable success in representing subgrid-scale processes and other parameterizations of ESMs, given that sufficient training data were available. As such, neural networks were applied to approximate turbulent processes in ocean models [70] and atmospheric subgrid processes in climate models [71]. Several studies highlight the potential of ML-based parameterization schemes [72, 73, 74, 75, 76], helping to gradually remove numerically induced and human-induced simplifications and other biases of ESMs [77].
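As a sketch of how such a learned parameterization can be set up, the example below fits a small neural network that maps coarse-grained model columns to subgrid tendencies. The training data are synthetic stand-ins for high-resolution model output, and the network size and library choice are assumptions made only for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for coarse-grained columns (stacked temperature, humidity, winds, ...)
# and for the subgrid tendencies diagnosed from a high-resolution reference simulation.
rng = np.random.default_rng(0)
n_samples, n_features, n_targets = 5000, 60, 30
coarse_state = rng.normal(size=(n_samples, n_features))
subgrid_tendency = np.tanh(coarse_state[:, :n_targets]) + 0.1 * rng.normal(size=(n_samples, n_targets))

X_train, X_test, y_train, y_test = train_test_split(
    coarse_state, subgrid_tendency, test_size=0.2, random_state=0)

# A small multilayer perceptron acting as the learned parameterization (emulator).
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), activation="relu",
                        max_iter=300, random_state=0)
emulator.fit(X_train, y_train)
print("R^2 on held-out columns:", round(emulator.score(X_test, y_test), 3))

# Inside a host ESM, the emulator would replace the conventional scheme at run time:
# tendency = emulator.predict(current_coarse_state[None, :])[0]
```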
While some well-trained ML tools and simple hybrids have shown higher predictive power than traditional state-of-the-art process-based models, only the surface of the new possibilities, and also of the new scientific challenges, has been scratched. So far, ML, ESMs, and ESOs have largely been independent tools. Yet, we have reached the understanding that physics-aware ML and model-network hybrids offer huge benefits by filling niches where purely process-based models persistently lack reliability.
Figure 2: Successive stages of the fusion process of Earth system models and artificial intelligence.
## The Fusion of Process-based Models and Artificial Intelligence
The idea of hybrids of process-based and ML models is not new [78]. So far, hybrids have almost exclusively been thought of as numerical models that are enhanced by ML either to improve the models' performance in the sense of a useful metric, or to accelerate the forward simulation time in exchange for a decrease in simulation accuracy. Along with the general advances in the individual capabilities and limitations of ESMs and ML methods, the understanding of how ML can enhance process-based modelling has also evolved. This progress allows ML to take over more and more components of ESMs, gradually blurring the so far strict distinction between process-based modelling and data-driven ML approaches. Even more so, entirely new methodological concepts are dawning that justify acknowledging Neural Earth System Modelling as a distinct research branch (Fig. 2).
The long-term goal will be to consistently integrate the recently discovered advantages of ML into the decades-long body of process knowledge in Earth system science. However, this evolution does not come without methodological caveats, which need to be investigated carefully. For the sake of comparability, we distinguish between weakly coupled NESYM hybrids, i.e., hybrids in which either the ESM or the AI technique benefits from information provided by the other, and strongly coupled NESYM hybrids, i.e., fully coupled model-network combinations that dynamically exchange information with each other.
The emergent development of weak hybrids is predominantly driven by the aim of resolving the previously described ESM limitations, particularly unresolved sub-grid-scale processes. Neural networks can emulate such processes after careful training with simulation data from a high-resolution model that resolves the processes of interest, or with relevant ESO data. The next methodological milestone will be the integration of such trained neural networks into ESMs for operational usage. First tests have indicated that the choice of the AI technique, e.g., neural networks versus random forests, seems to be crucial for the implementation of learning parameterization schemes, as such schemes can significantly degrade the ESM's numerical stability [79]. Thus, it is not only important to identify how neural networks can be trained to resolve ESM limitations, but also how such ML-based schemes can be stabilized in the model physics context and how their effect on the process-based simulation can be evaluated and interpreted [80]. The limitations of ML-based parameterization approaches can vary widely for different problems or utilized models and, consequently, should be considered for each learning task individually [81]. Nevertheless, several ideas have been proposed to stabilize ML parameterizations, e.g., by enforcing physical consistency through customized loss functions in neural networks and specific network architectures [75, 82], or by optimizing the considered high-resolution model training data [76]. In addition, an ESM blueprint has been proposed in which learning parameterizations can be targeted through searching for an optimal fit of statistical measures between ESMs, observations, and high-resolution simulations [83]. In this context, further efforts have been made to enhance an ESM not with ML directly, but in combination with a data assimilation system [24]. For instance, emulating a Kalman filter scheme with ML has been investigated [84, 85], an ML-based estimation of atmospheric forcing uncertainties to be used as error covariance information in data assimilation has been proposed [86], and further types of Kalman-network hybrids have been explored [87, 88].
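A minimal sketch of such a customized, physically constrained loss is given below. The conservation constraint, here a mass-weighted column integral of the predicted tendencies, is only one possible choice and is assumed purely for illustration.

```python
import numpy as np

def physics_constrained_loss(pred_tendency, true_tendency, layer_mass, alpha=1.0):
    """Composite loss: data misfit plus a penalty on violating an (assumed)
    column-integrated conservation constraint.

    pred_tendency, true_tendency : (batch, n_levels) heating or moistening tendencies
    layer_mass                   : (n_levels,) mass weights of the vertical layers
    alpha                        : balance between accuracy and physical consistency
    """
    data_term = np.mean((pred_tendency - true_tendency) ** 2)
    col_pred = pred_tendency @ layer_mass          # mass-weighted column integrals
    col_true = true_tendency @ layer_mass
    conservation_term = np.mean((col_pred - col_true) ** 2)
    return data_term + alpha * conservation_term
```

In a differentiable framework the same expression would be written with the framework's tensor operations so that the conservation penalty enters the gradient used for training.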
In the second class of weak hybrids, the model and AI tasks are transposed, such that the information flow is directed from the model towards the AI tool. Here, neural networks are trained directly with model state variables, their trajectories, or with more abstract information like seasonal signals, interannual cycles, or coupling mechanisms. The goal of the ML application might not only be model emulation, but also inverting non-linear geophysical processes [33], learning geophysical causation [89], or predicting extreme events [90, 91]. In addition to these inference and generalization tasks, a key question in this sub-discipline is whether a neural network can learn to outperform the utilized process-based trainer model in terms of physical consistency or predictive power. ESOs play a vital role in this context, as they can serve as additional constraints during neural network training, allowing the network to build independent self-evaluation measures [34].
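The sketch below illustrates this model-to-network direction on a toy inverse problem: a hypothetical forward operator generates training pairs, and a network learns the inverse mapping from observations back to the underlying parameters. The forward model, the noise level, and the network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_samples, n_params, n_obs = 10000, 3, 40

def forward_model(m):
    """Hypothetical nonlinear forward operator G(m) producing an observable profile."""
    x = np.linspace(0.0, 1.0, n_obs)
    return m[:, [0]] * np.sin(6.0 * x) + m[:, [1]] * x**2 + m[:, [2]]

# Simulated training pairs (parameters -> noisy observables) from the forward model.
m_true = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
d_obs = forward_model(m_true) + 0.01 * rng.normal(size=(n_samples, n_obs))

# Train the network to approximate the inverse mapping d -> m.
inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=1)
inverse_net.fit(d_obs, m_true)

# Apply the learned inversion to a new, noisy observation vector.
m_test = np.array([[0.3, -0.5, 0.1]])
d_test = forward_model(m_test) + 0.01 * rng.normal(size=(1, n_obs))
print("recovered parameters:", np.round(inverse_net.predict(d_test), 2))
```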
The given examples generally work well for validation and prediction scenarios within the given training distribution. Out-of-distribution samples, in contrast, pose a huge challenge for supervised learning, which renders the \"learning from the past\" principle particularly ill-posed for prediction tasks in NESYM. As a consequence, purely data-driven AI methods will not be able to perform accurate climate projections on their own, because of the both naturally and anthropogenically induced non-stationarity of the climate and Earth system. Overcoming these limitations requires a deeper holistic integration in terms of strongly coupled hybrids and the consideration of further, less constrained training techniques like unsupervised training [92] and generative AI methods [73, 93, 94]. For example, problems of pure AI methods with non-stationary training data can be attenuated by combining them with physical equations describing the changing energy-balance of the Earth system due to anthropogenic greenhouse-gas emissions [95]. In addition, first steps towards physics-informed AI have been made by ML-based and data-driven discovery of physical equations [96] and by the implementation of neural partial differential equations [97, 98] into the context of climate modelling [99].
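A toy realization of such a hybrid is sketched below: a zero-dimensional energy-balance equation supplies the non-stationary physical backbone, while a data-driven residual term stands in for the correction a trained network could provide. The parameter values and the forcing ramp are illustrative assumptions only.

```python
def hybrid_gmt_step(T_anom, forcing, ml_residual, heat_capacity=8.0, feedback=-1.3, dt=1.0):
    """Toy hybrid step for the global-mean temperature anomaly (illustration only).

    The energy-balance backbone  C dT/dt = F + lambda * T  encodes the anthropogenic
    forcing explicitly; ml_residual represents a learned correction, e.g. for
    unresolved internal variability. Units: C in W yr m^-2 K^-1, F in W m^-2,
    lambda in W m^-2 K^-1, dt in years.
    """
    dTdt = (forcing + feedback * T_anom) / heat_capacity
    return T_anom + dt * dTdt + ml_residual

# Integrate 100 years under a hypothetical linear forcing ramp with a zero residual
# (a trained network would supply ml_residual from the current system state).
T = 0.0
for year in range(100):
    forcing = 0.04 * year           # W m^-2
    T = hybrid_gmt_step(T, forcing, ml_residual=0.0)
print(f"temperature anomaly after 100 years: {T:.2f} K")
```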
Continuous maturing of the methodological fusion process will allow building hybrids of neural networks, ESMs and ESOs that dynamically exchange information. ESMs will soon utilize output from supervised and unsupervised neural networks to optimize their physical consistency and, in turn, feed back improved information content to the ML component. ESOs form another core element and function as constraining ground truth for the AI-infused process prediction. Similar to the adversarial game of generative networks [100] or the coupling mechanisms in an ESM [101], strongly coupled NESYM hybrids will require innovative interfaces that control this exchange of information; such interfaces are not yet available. In addition, we formulate key characteristics and goals of this next stage:
(1) Hybrids can better reproduce and predict out-of-distribution samples and extreme events,
(2) hybrids perform constrained and consistent simulations that obey physical conservation laws despite potential shortcomings of the hybrids' individual components,
(3) hybrids include integrated adaptive measures for self-validation and self-correction, and
(4) NESYM allows replicability and interpretability.
We believe that cross-discipline collaborations between Earth system and AI scientists will become more important than ever to achieve these milestones. Frontier applications of neural Earth system models are manifold. Yet, ultimately, NESYM hybrids need to drastically improve the current forecast limits of geophysical processes and contribute towards understanding the Earth's susceptible state in a changing climate. Consequently, research will focus not only on the fusion of ESMs and AI, but also on AI interpretability and on resolving the common notion of a black box (Fig. 3).
## Peering into the Black Box
ML has emerged as a set of methods based on the combination of statistics, applied mathematics and computer science, but it comes with a unique set of hurdles. Peering into the black box and understanding the decision-making process of the ML method, termed explainable AI (XAI), is critical to the use of these tools. Especially in the physical sciences, adoption of ML, particularly of supervised ML, suffers from a lack of interpretability. Complementing XAI is the call for interpretable AI (IAI), i.e., building ML models that are interpretable from the outset, instead of explaining ML predictions through post-hoc diagnostics [102].
Ensuring that what is 'learned' by the machine is physically tractable or causal, and not due to trivial coincidences [103], is important before ML tools are used, e.g., in an ESM setting targeted at decision making. Thus, explainability provides the user with trust in the ML output, improving its transparency.
Figure 3: Qualitative comparison of isolated (AI - Artificial Intelligence, ESM - Earth System Model, ESO - Earth System Observation) and hybrid methodological approaches. The respective approaches are represented as trajectories in a meta space of hybrid coupling degree, interpretability, and prediction skill. The goal of Neural Earth System Modelling (NESYM) is to integrate the interpretability and to exceed the prediction skill of the respective isolated approaches. In this meta space, the necessary research increment to achieve this goal can be described through an increase in the degree of hybrid coupling (Fusion of AI and ESM) and an increase in interpretability (XAI - explainable AI, IAI - interpretable AI).
This is critical for ML use in the policy-relevant area of climate science as society is making it increasingly clear that understanding the source of AI predictive skill is of crucial importance [104, 105]. Ensuring the ML method is getting the right answers for the right reasons is essential given the transient nature of the climate system. As the climate continues to respond to anthropogenic climate change, NESYM will be required to make predictions of continually evolving underlying distributions and XAI/IAI will be critical to ensuring that the skill of the ML method can be explained, and inspire trust in its extrapolation to future climate regimes. There are many ML tools at our disposal, and XAI can assist researchers in choosing the optimal ML architecture, inputs, outputs, etc. By analyzing the decision making process, climate scientists will be able to better incorporate their own physical knowledge into the ML method, ultimately leading to improved results. Perhaps least appreciated in geoscientific applications thus far is the use of XAI to discover new science [106]. When the ML method is capable of making a prediction, XAI allows us to ask \"what did it learn?\". In this way, ML becomes more than just a prediction and allows scientists to ask \"why?\" as they normally would, but now with the power of ML.
Explaining the source of an ML application's skill can be done retrospectively [102]. The power of XAI for climate and weather applications has very recently been demonstrated [106, 107, 108]. For example, neural networks coupled with the XAI attribution method known as layerwise relevance propagation (LRP) [109, 110] have revealed modes of variability within the climate system, sources of predictability across a range of timescales, and indicator patterns of climate change [106, 61]. There is also evidence that XAI methods can be used to evaluate climate models against observations, identifying the most important climate model biases for the specific prediction task [111]. However, these methods are in their infancy, and there is vast room for advancing their application within the physical sciences.
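As a simple stand-in for attribution methods such as LRP, the sketch below computes a gradient-times-input relevance map for a toy classifier. The network, the input field, and the class labels are hypothetical and serve only to illustrate the mechanics of post-hoc attribution.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a trained climate-prediction network: it maps a
# flattened 10x20 anomaly map to one of two classes (e.g. opposite phases of a mode).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

def relevance_map(model, input_field):
    """Gradient-times-input attribution for the winning class (illustration only)."""
    x = input_field.clone().detach().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax(dim=1).item()].backward()
    return (x.grad * x).detach().reshape(10, 20)

field = torch.randn(1, 200)                 # hypothetical anomaly map, flattened
relevance = relevance_map(model, field)
print("most relevant grid point (row, col):", divmod(int(relevance.abs().argmax()), 20))
```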
Unsupervised ML can be made intuitively interpretable (IAI) through the design of the experiments. For example, applying clustering to closed model budgets of momentum ensures that all relevant physics are represented, and the resulting clusters can be interpreted in terms of the statistically dominant balances. In this manner, different regimes can be discovered [112, 92]. Adversarial learning has been an effective tool for generating super-resolution fields of atmospheric variables in climate models [94]. Furthermore, unsupervised ML approaches have been proposed for discovering and quantifying causal interdependencies and dynamical links inside a system, such as the Earth's climate [89, 113]. The development of ESMs is increasingly turning to process-oriented diagnostics (PODs) [114], where a certain process is targeted and used as a benchmark for model improvement. A revolution of analysis tools has been called for, and ML is poised to be part of this change [115, 116, 66]. For instance, the POD approach has been applied to evaluate the ability of ESM projections to simulate atmospheric interactions and to constrain climate projection uncertainties [117].
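A minimal sketch of this experimental design is given below: grid points are clustered according to the terms of a synthetic closed budget, so that each cluster can be read as a dynamical regime. The budget terms, the number of clusters, and the choice of k-means are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic closed budget: at every grid point the terms sum to (approximately) zero,
# mimicking e.g. a depth-integrated momentum or vorticity budget.
rng = np.random.default_rng(0)
n_points, n_terms = 50000, 5
budget = rng.normal(size=(n_points, n_terms))
budget[:, -1] = -budget[:, :-1].sum(axis=1)        # enforce closure for the toy data

# Cluster grid points by which terms dominate their balance; each cluster is then
# interpreted as a dynamical regime, which makes the analysis interpretable by design.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(budget)

for k in range(4):
    mean_magnitude = np.abs(budget[labels == k]).mean(axis=0)
    print(f"regime {k}: mean |term| = {np.round(mean_magnitude, 2)}")
```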
Given the importance of both explainability and interpretability for improving ML generalization and scientific discovery, we need climate scientists working together with AI scientists to develop methods that are tailored to the field's needs. This is not just an interesting exercise; it is essential for the proper use of AI in NESYM development and application (Fig. 3). Earth and climate scientists can aid the development of consistent benchmarks that allow evaluating both stand-alone ML and NESYM hybrids in terms of geophysical consistency [118]. However, help from the AI community is needed to resolve other recently highlighted ML pitfalls, for instance, by translating the concepts of adversarial examples and deep learning artifacts [119] into the ESM context or by finding new measures to identify and avoid shortcut learning [45] in NESYM hybrids. In summary, only combined efforts and the continuous development of both ESMs and AI will advance Neural Earth System Modelling.
Our perspective should not only be seen as the outline of a promising scientific pathway to achieve a better understanding of the Earth's present and future state, but also as an answer to the recent call for support from the AI community [120]. Based on the recent advances in applying AI to Earth system and climate sciences, it seems to be a logical progression that AI will take over more and more tasks of traditional statistical and numerical ESM methods. Yet, in its current stage, it also seems unthinkable that AI alone can solve the climate prediction problem. In the forthcoming years, AI will necessarily need to rely on the geophysical determinism of process-based modelling and on careful human evaluation. However, once we find solutions to the foreseeable limitations described above and can build interpretable and geophysically consistent AI tools, this next evolutionary step will seem much more likely.
## Acknowledgements
This study was funded by the Helmholtz Association and by the Initiative and Networking Fund of the Helmholtz Association through the project Advanced Earth System Modelling Capacity (ESM). NB acknowledges funding by the Volkswagen Foundation and the European Union's Horizon 2020 research and innovation program under grant agreement No 820970 (TiPES).
## Authors' contributions
CI conceived the paper and organized the collaboration. All authors contributed to writing and revising all chapters of this manuscript. In particular, NB and CI drafted the ESM overview, JSW and JS drafted the ESO and DA overview, CI and CK drafted the chapter 'From Machine Learning-based Data Exploration Towards Learning Physics', CI and JSW and NB drafted the chapter 'The Fusion of Process-based Models and Artificial Intelligence', MS and EB and CI drafted the chapter 'Peering into the Black Box'.
## Competing Interest
The authors declare no competing interest.
## References
* [1] Prinn, R. G. Development and application of earth system models. _Proceedings of the National Academy of Sciences_**110**, 3673-3680 (2013). URL [https://www.pnas.org/content/110/Supplement_1/3673](https://www.pnas.org/content/110/Supplement_1/3673). [https://www.pnas.org/content/110/Supplement_1/3673.full.pdf](https://www.pnas.org/content/110/Supplement_1/3673.full.pdf).
- 975 (2002). URL [https://journals.ametsoc.org/view/journals/atsc/59/5/1520-0469_2002_059_0959_cfscp_2.0.co_2.xml](https://journals.ametsoc.org/view/journals/atsc/59/5/1520-0469_2002_059_0959_cfscp_2.0.co_2.xml).
* [3] Klein, R. Scale-dependent models for atmospheric flows. _Annual Review of Fluid Mechanics_**42**, 249-274 (2010).
* 588 (2017). URL [https://journals.ametsoc.org/view/journals/bams/98/3/bams-d-15-00268.1.xml](https://journals.ametsoc.org/view/journals/bams/98/3/bams-d-15-00268.1.xml).
* [5] Knutti, R. Should we believe model predictions of future climate change? _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_**366**, 4647-4664 (2008). URL [https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2008.0169](https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2008.0169). [https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2008.0169](https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2008.0169).
* 498 (2012). URL [https://journals.ametsoc.org/view/journals/bams/93/4/bams-d-11-00094.1.xml](https://journals.ametsoc.org/view/journals/bams/93/4/bams-d-11-00094.1.xml).
* [7] Stocker, T. _et al._ (eds.) _Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change_ (Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2013). URL www.climatechange2013.org.
* [8] Eyring, V. _et al._ Overview of the coupled model intercomparison project phase 6 (cmip6) experimental design and organization. _Geoscientific Model Development_**9**, 1937-1958 (2016). URL [https://gmd.copernicus.org/articles/9/1937/2016/](https://gmd.copernicus.org/articles/9/1937/2016/).
* [9] Knutti, R., Rugenstein, M. A. & Hegerl, G. C. Beyond equilibrium climate sensitivity. _Nature Geoscience_**10**, 727-736 (2017).
* [10] Meehl, G. A. _et al._ Context for interpreting equilibrium climate sensitivity and transient climate response from the cmip6 earth system models. _Science Advances_**6** (2020). URL [https://advances.sciencemag.org/content/6/26/eaba1981](https://advances.sciencemag.org/content/6/26/eaba1981). [https://advances.sciencemag.org/content/6/26/eaba1981.full.pdf](https://advances.sciencemag.org/content/6/26/eaba1981.full.pdf).
* [11] Zelinka, M. D. _et al._ Causes of higher climate sensitivity in cmip6 models. _Geophysical Research Letters_**47**, e2019GL085782 (2020). URL [https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019GL085782](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019GL085782). E2019GL085782 10.1029/2019GL085782, [https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2019GL085782](https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2019GL085782).
* [12] Cox, P. M., Huntingford, C. & Williamson, M. S. Emergent constraint on equilibrium climate sensitivity from global temperature variability. _Nature_**553**, 319-322 (2018). URL [http://www.nature.com/doifinder/10.1038/nature25450](http://www.nature.com/doifinder/10.1038/nature25450).
* [13] Hall, A., Cox, P., Huntingford, C. & Klein, S. Progressing emergent constraints on future climate change. _Nature Climate Change_**9**, 269-278 (2019). URL [http://dx.doi.org/10.1038/s41558-019-0436-6](http://dx.doi.org/10.1038/s41558-019-0436-6).
* [14] Lenton, T. M. _et al._ Tipping elements in the earth's climate system. _Proceedings of the National Academy of Sciences_**105**, 1786-1793 (2008). URL [https://www.pnas.org/content/105/6/1786](https://www.pnas.org/content/105/6/1786). [https://www.pnas.org/content/105/6/1786.full.pdf](https://www.pnas.org/content/105/6/1786.full.pdf).
* [15] Boers, N., Ghil, M. & Rousseau, D.-D. Ocean circulation, ice shelf, and sea ice interactions explain dansgaard-oeschger cycles. _Proceedings of the National Academy of Sciences_**115**, E11005-E11014 (2018). URL [https://www.pnas.org/content/115/47/E11005](https://www.pnas.org/content/115/47/E11005). [https://www.pnas.org/content/115/47/E11005.full.pdf](https://www.pnas.org/content/115/47/E11005.full.pdf).
* [16] Valdes, P. Built for stability. _Nature Geoscience_**4**, 414-416 (2011).
* [17] Drijfhout, S. _et al._ Catalogue of abrupt shifts in intergovernmental panel on climate change climate models. _Proceedings of the National Academy of Sciences_**112**, E5777-E5786 (2015). URL [https://www.pnas.org/content/112/43/E5777](https://www.pnas.org/content/112/43/E5777). [https://www.pnas.org/content/112/43/E5777.full.pdf](https://www.pnas.org/content/112/43/E5777.full.pdf).
* [18] Masson-Delmotte, V. _et al._ (eds.) _IPCC: Global Warming of 1.5 Degree Celsius: IPCC Special Report on the Impacts of Global Warming of 1.5 Degree Celsius_ (Intergovernmental Panel on Climate Change (IPCC), 2018). URL [https://www.ipcc.ch/sr15](https://www.ipcc.ch/sr15).
* [19] Shukla, P. _et al._ (eds.) _Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable landmanagement, food security, and greenhouse gas fluxes in terrestrial ecosystems_ (Intergovernmental Panel on Climate Change (IPCC), 2019). URL [https://www.ipcc.ch/srcc1/](https://www.ipcc.ch/srcc1/).
* [20] Portner, H. _et al._ (eds.) _IPCC Special Report on the Ocean and Cryosphere in a Changing Climate_ (Intergovernmental Panel on Climate Change (IPCC), 2019). URL [https://www.ipcc.ch/srocc/](https://www.ipcc.ch/srocc/).
* [21] Otto, F. E. _et al._ Attribution of extreme weather events in Africa: a preliminary exploration of the science and policy implications. _Climatic Change_**132**, 531-543 (2015).
* [22] Balsamo, G. _et al._ Satellite and in situ observations for advancing global earth surface modelling: A review. _Remote Sensing_**10** (2018). URL [https://www.mdpi.com/2072-4292/10/12/2038](https://www.mdpi.com/2072-4292/10/12/2038).
* [23] Hersbach, H. _et al._ The era5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_**146**, 1999-2049 (2020). URL [https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.3803](https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.3803). [https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.3803](https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.3803).
* [24] Evensen, G. _Data assimilation: the ensemble Kalman filter_ (Springer Science & Business Media, 2009).
* [25] Blei, D. M., Kucukelbir, A. & McAuliffe, J. D. Variational inference: A review for statisticians. _Journal of the American Statistical Association_**112**, 859-877 (2017). URL [https://doi.org/10.1080/01621459.2017.1285773](https://doi.org/10.1080/01621459.2017.1285773). [https://doi.org/10.1080/01621459.2017.1285773](https://doi.org/10.1080/01621459.2017.1285773).
* [26] Houtekamer, P. L. & Zhang, F. Review of the Ensemble Kalman Filter for Atmospheric Data Assimilation. _Monthly Weather Review_**144**, 4489-4532 (2016). URL [http://journals.ametsoc.org/doi/10.1175/MWR-D-15-0440.1](http://journals.ametsoc.org/doi/10.1175/MWR-D-15-0440.1).
* [27] van Leeuwen, P. J. Nonlinear data assimilation in geosciences: an extremely efficient particle filter. _Quarterly Journal of the Royal Meteorological Society_**136**, 1991-1999 (2010). URL [https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.699](https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.699). [https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.699](https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.699).
* [28] van Leeuwen, P. J., Kunsch, H. R., Nerger, L., Potthast, R. & Reich, S. Particle filters for high-dimensional geoscience applications: A review. _Quarterly Journal of the Royal Meteorological Society_**145**, 2335-2365 (2019). URL [https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.3551](https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.3551). [https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.3551](https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/qj.3551).
* [29] Vetra-Carvalho, S. _et al._ State-of-the-art stochastic data assimilation methods for high-dimensional non-gaussian problems. _Tellus A: Dynamic Meteorology and Oceanography_**70**, 1-43 (2018). URL [https://doi.org/10.1080/16000870.2018.1445364](https://doi.org/10.1080/16000870.2018.1445364). [https://doi.org/10.1080/16000870.2018.1445364](https://doi.org/10.1080/16000870.2018.1445364).
* [30] Penny, S. G. _et al._ Strongly coupled data assimilation in multiscale media: Experiments using a quasi-geostrophic coupled model. _Journal of Advances in Modeling Earth Systems_**11**, 1803-1829 (2019). URL [https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019MS001652](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019MS001652). [https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2019MS001652](https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2019MS001652).
* [31] Browne, P. A., de Rosnay, P., Zuo, H., Bennett, A. & Dawson, A. Weakly coupled ocean-atmosphere data assimilation in the ecmwf nwp system. _Remote Sensing_**11** (2019). URL [https://www.mdpi.com/2072-4292/11/3/234](https://www.mdpi.com/2072-4292/11/3/234).
* [32] Salcedo-Sanz, S. _et al._ Machine learning information fusion in Earth observation: A comprehensive review of methods, applications and data sources. _Information Fusion_**63**, 256-272 (2020).
* [33] Irrgang, C., Saynisch, J. & Thomas, M. Estimating global ocean heat content from tidal magnetic satellite observations. _Scientific Reports_**9**, 7893 (2019). URL [http://dx.doi.org/10.1038/s41598-019-44397-8](http://dx.doi.org/10.1038/s41598-019-44397-8).
* [34] Irrgang, C., Saynisch-Wagner, J., Dill, R., Boergens, E. & Thomas, M. Self-Validating Deep Learning for Recovering Terrestrial Water Storage From Gravity and Altimetry Measurements. _Geophysical Research Letters_ **47** (2020). URL [https://onlinelibrary.wiley.com/doi/10.1029/2020GL089258](https://onlinelibrary.wiley.com/doi/10.1029/2020GL089258).
* [35] Voulodimos, A., Doulamis, N., Doulamis, A. & Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. _Computational Intelligence and Neuroscience_**2018** (2018).
* [36] Brown, T. B. _et al._ Language models are few-shot learners. _arXiv_ (2020). 2005.14165.
* [37] Loh, E. Medicine and the rise of the robots: A qualitative review of recent advances of artificial intelligence in health. _BMJ Leader_**2**, 59-63 (2018).
* [38] Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. _Nature Medicine_**25**, 44-56 (2019). URL [http://dx.doi.org/10.1038/s41591-018-0300-7](http://dx.doi.org/10.1038/s41591-018-0300-7).
* [39] Nosratabadi, S. _et al._ Data science in economics: Comprehensive review of advanced machine learning and deep learning methods. _Mathematics_**8**, 1-25 (2020).
* [40] Hagerty, A. & Rubinov, I. Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence. _arXiv_ (2019). 1907.07892.
* [41] Perc, M., Ozer, M. & Hojnik, J. Social and juristic challenges of artificial intelligence. _Palgrave Communications_**5**, 1-7 (2019).
* 2016 IEEE European Symposium on Security and Privacy, EURO S and P 2016_ 372-387 (2016). 1511.07528.
* [43] Adadi, A. & Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). _IEEE Access_**6**, 52138-52160 (2018).
* [44] Walton, P. Artificial intelligence and the limitations of information. _Information (Switzerland)_**9** (2018).
* [45] Geirhos, R. _et al._ Shortcut learning in deep neural networks. _Nature Machine Intelligence_ **2**, 665-673 (2020). URL [http://dx.doi.org/10.1038/s42256-020-00257-z](http://dx.doi.org/10.1038/s42256-020-00257-z). 2004.07780.
* [46] Girasa, R. Ai as a disruptive technology. In _Artificial Intelligence as a Disruptive Technology_, 3-21 (Springer, 2020).
* [47] Dawson, M., Olvera, J., Fung, A. & Manry, M. Inversion of Surface Parameters Using Fast Learning Neural Networks. In _[Proceedings] IGARSS '92 International Geoscience and Remote Sensing Symposium_, vol. 2, 910-912 (IEEE, 1992). URL [http://ieeexplore.ieee.org/document/578294/](http://ieeexplore.ieee.org/document/578294/). arXiv:1011.1669v3.
* [48] Miller, D. M., Kaminsky, E. J. & Rana, S. Neural network classification of remote-sensing data. _Computers and Geosciences_**21**, 377-386 (1995).
* [49] Serpico, S. B., Bruzzone, L. & Roli, F. An experimental comparison of neural and statistical nonparametric algorithms for supervised classification of remote-sensing images. _Pattern Recognition Letters_**17**, 1331-1341 (1996). URL [https://doi.org/10.1016/S0167-8655](https://doi.org/10.1016/S0167-8655)(96)00090-6.
* Hsieh & Tang [1998] Hsieh, W. W. & Tang, B. Applying Neural Network Models to Prediction and Data Analysis in Meteorology and Oceanography. _Bulletin of the American Meteorological Society_**79**, 1855-1870 (1998). URL [http://journals.ametsoc.org/doi/abs/10.1175/1520-0477](http://journals.ametsoc.org/doi/abs/10.1175/1520-0477){%}281998{%}29079{%}3C1855{%}3AANNMT{%}3E2.0.CO{%}3B2. z0022.
* Knutti et al. [2003] Knutti, R., Stocker, T. F., Joos, F. & Plattner, G. K. Probabilistic climate change projections using neural networks. _Climate Dynamics_**21**, 257-272 (2003).
* Lary et al. [2016] Lary, D. J., Alavi, A. H., Gandomi, A. H. & Walker, A. L. Machine learning in geosciences and remote sensing. _Geoscience Frontiers_**7**, 3-10 (2016). URL [http://dx.doi.org/10.1016/j.gsf.2015.07.003http://linkinghub.elsevier.com/retrieve/pii/S1674987115000821](http://dx.doi.org/10.1016/j.gsf.2015.07.003http://linkinghub.elsevier.com/retrieve/pii/S1674987115000821).
* Arcomano et al. [2020] Arcomano, T. _et al._ A Machine Learning-Based Global Atmospheric Forecast Model. _Geophysical Research Letters_**47**, 1-9 (2020).
* Weyn et al. [2019] Weyn, J. A., Durran, D. R. & Caruana, R. Can machines learn to predict weather? Using deep learning to predict gridded 500-hPa geopotential height from historical weather data. _Journal of Advances in Modeling Earth Systems_ (2019).
* Weyn et al. [2020] Weyn, J. A., Durran, D. R. & Caruana, R. Improving Data-Driven Global Weather Prediction Using Deep Convolutional Neural Networks on a Cubed Sphere. _Journal of Advances in Modeling Earth Systems_**12** (2020).
* Boers et al. [2019] Boers, N. _et al._ Complex networks reveal global pattern of extreme-rainfall teleconnections. _Nature_**566**, 373-377 (2019). URL [http://dx.doi.org/10.1038/s41586-018-0872-x](http://dx.doi.org/10.1038/s41586-018-0872-x).
* Ham et al. [2019] Ham, Y.-g., Kim, J.-h. & Luo, J.-j. Deep learning for multi-year ENSO forecasts. _Nature_**573**, 568-572 (2019). URL [http://dx.doi.org/10.1038/s41586-019-1559-7](http://dx.doi.org/10.1038/s41586-019-1559-7).
* Yan et al. [2020] Yan, J., Mu, L., Wang, L., Ranjan, R. & Zomaya, A. Y. Temporal Convolutional Networks for the Advance Prediction of ENSO. _Scientific Reports_**10**, 1-15 (2020).
* Kadow et al. [2020] Kadow, C., Hall, D. M. & Ulbrich, U. Artificial intelligence reconstructs missing climate information. _Nature Geoscience_**13**, 408-413 (2020). URL [http://dx.doi.org/10.1038/s41561-020-0582-5](http://dx.doi.org/10.1038/s41561-020-0582-5).
* Barnes et al. [2019] Barnes, E. A., Hurrell, J. W., Ebert-Uphoff, I., Anderson, C. & Anderson, D. Viewing Forced Climate Patterns Through an AI Lens. _Geophysical Research Letters_**46**, 13389-13398 (2019). URL [https://doi.org/10.1029/2019GL084944https://onlinelibrary.wiley.com/doi/10.1029/2019GL084944](https://doi.org/10.1029/2019GL084944https://onlinelibrary.wiley.com/doi/10.1029/2019GL084944).
* Barnes et al. [2020] Barnes, E. A. _et al._ Indicator Patterns of Forced Change Learned by an Artificial Neural Network. _Journal of Advances in Modeling Earth Systems_**12** (2020). 2005.12322.
* Chattopadhyay et al. [2020] Chattopadhyay, A., Hassanzadeh, P. & Pasha, S. Predicting clustered weather patterns: A test case for applications of convolutional neural networks to spatio-temporal climate data. _Scientific Reports_**10**, 1317 (2020). URL [http://dx.doi.org/10.1038/s41598-020-57897-9http://arxiv.org/abs/1811.04817http://www.nature.com/articles/s41598-020-57897-9](http://dx.doi.org/10.1038/s41598-020-57897-9http://arxiv.org/abs/1811.04817http://www.nature.com/articles/s41598-020-57897-9). 1811.04817.
* Ramachandran et al. [2017] Ramachandran, P., Zoph, B. & Le, Q. V. Searching for activation functions. _arXiv_ 1-13 (2017). eprint 1710.05941.
* Lu et al. [2018] Lu, Z., Hunt, B. R. & Ott, E. Attractor reconstruction by machine learning. _Chaos_**28** (2018). URL [http://dx.doi.org/10.1063/1.5039508](http://dx.doi.org/10.1063/1.5039508). 1805.03362.
* Rumelhart et al. [1986] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. _Nature_**323**, 533-536 (1986). eprint arXiv:1011.1669v3.
* Reichstein et al. [2019] Reichstein, M. _et al._ Deep learning and process understanding for data-driven Earth system science. _Nature_**566**, 195-204 (2019). URL [http://dx.doi.org/10.1038/s41586-019-0912-1](http://dx.doi.org/10.1038/s41586-019-0912-1).
* Huntingford et al. [2019] Huntingford, C. _et al._ Machine learning and artificial intelligence to aid climate change research and preparedness. _Environmental Research Letters_**14** (2019).
* Jung et al. [2020] Jung, M. _et al._ Scaling carbon fluxes from eddy covariance sites to globe: synthesis and evaluation of the fluxcom approach. _Biogeosciences_**17**, 1343-1365 (2020). URL [https://bg.copernicus.org/articles/17/1343/2020/](https://bg.copernicus.org/articles/17/1343/2020/).
* Tramontana et al. [2020] Tramontana, G. _et al._ Partitioning net carbon dioxide fluxes into photosynthesis and respiration using neural networks. _Global Change Biology_**26**, 5235-5253 (2020). URL [https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.15203](https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.15203). [https://onlinelibrary.wiley.com/doi/pdf/10.1111/gcb.15203](https://onlinelibrary.wiley.com/doi/pdf/10.1111/gcb.15203).
* Bolton & Zanna [2019] Bolton, T. & Zanna, L. Applications of Deep Learning to Ocean Data Inference and Subgrid Parameterization. _Journal of Advances in Modeling Earth Systems_**11**, 376-399 (2019). URL [http://doi.wiley.com/10.1029/2018MS001472](http://doi.wiley.com/10.1029/2018MS001472).
* Rasp et al. [2018] Rasp, S., Pritchard, M. S. & Gentine, P. Deep learning to represent subgrid processes in climate models. _Proceedings of the National Academy of Sciences_**115**, 9684-9689 (2018). URL [http://arxiv.org/abs/1806.04731http://www.pnas.org/lookup/doi/10.1073/pnas.1810286115](http://arxiv.org/abs/1806.04731http://www.pnas.org/lookup/doi/10.1073/pnas.1810286115). 1806.04731.
* O'Gorman & Dwyer [2018] O'Gorman, P. A. & Dwyer, J. G. Using Machine Learning to Parameterize Moist Convection: Potential for Modeling of Climate, Climate Change, and Extreme Events. _Journal of Advances in Modeling Earth Systems_ (2018). eprint 1806.11037.
* Gagne et al. [2020] Gagne, D. J., Christensen, H. M., Subramanian, A. C. & Monahan, A. H. Machine Learning for Stochastic Parameterization: Generative Adversarial Networks in the Lorenz '96 Model. _Journal of Advances in Modeling Earth Systems_**12** (2020). eprint 1909.04711.
* Han _et al._ [2020] Han, Y., Zhang, G. J., Huang, X. & Wang, Y. A Moist Physics Parameterization Based on Deep Learning. _Journal of Advances in Modeling Earth Systems_ 0-2 (2020).
* Beucler _et al._ [2020] Beucler, T., Pritchard, M., Gentine, P. & Rasp, S. Towards Physically-consistent, Data-driven Models of Convection. _arXiv_ 2-6 (2020). URL [http://arxiv.org/abs/2002.08525](http://arxiv.org/abs/2002.08525). 2002.08525.
* Yuval & O'Gorman [2020] Yuval, J. & O'Gorman, P. A. Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. _Nature Communications_**11**, 1-10 (2020). URL [http://dx.doi.org/10.1038/s41467-020-17142-3](http://dx.doi.org/10.1038/s41467-020-17142-3). 2001.03151.
* Brenowitz & Bretherton [2018] Brenowitz, N. D. & Bretherton, C. S. Prognostic Validation of a Neural Network Unified Physics Parameterization. _Geophysical Research Letters_ 1-10 (2018). URL [https://agupubs-onlinelibrary-wiley-com.emedien.ub.uni-muenchen.de/doi/pdf/10.1029/2018GL078510http://doi.wiley.com/10.1029/2018GL078510](https://agupubs-onlinelibrary-wiley-com.emedien.ub.uni-muenchen.de/doi/pdf/10.1029/2018GL078510http://doi.wiley.com/10.1029/2018GL078510).
* Krasnopolsky & Fox-Rabinovitz [2006] Krasnopolsky, V. M. & Fox-Rabinovitz, M. S. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction. _Neural Networks_**19**, 122-134 (2006).
* Brenowitz _et al._ [2020] Brenowitz, N. D. _et al._ Machine Learning Climate Model Dynamics: Offline versus Online Performance. _ArXiv_ 1-6 (2020). URL [http://arxiv.org/abs/2011.03081](http://arxiv.org/abs/2011.03081). 2011.03081.
* Brenowitz _et al._ [2020] Brenowitz, N. D., Beucler, T., Pritchard, M. & Bretherton, C. S. Interpreting and Stabilizing Machine-Learning Parametrizations of Convection. _Journal of the Atmospheric Sciences_**77**, 4357-4375 (2020). URL [https://journals.ametsoc.org/view/journals/atsc/77/12/jas-d-20-0082.1.xml](https://journals.ametsoc.org/view/journals/atsc/77/12/jas-d-20-0082.1.xml). 2003.06549.
* Seifert & Rasp [2020] Seifert, A. & Rasp, S. Potential and limitations of machine learning for modeling warm-rain cloud microphysical processes. _Journal of Advances in Modeling Earth Systems_ (2020). URL [https://onlinelibrary.wiley.com/doi/10.1029/2020MS002301](https://onlinelibrary.wiley.com/doi/10.1029/2020MS002301).
* Beucler _et al._ [2019] Beucler, T., Rasp, S., Pritchard, M. & Gentine, P. Achieving conservation of energy in neural network emulators for climate modeling. _arXiv_ 2-6 (2019). 1906.06622.
* Schneider _et al._ [2017] Schneider, T., Lan, S., Stuart, A. & Teixeira, J. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations. _Geophysical Research Letters_**44**, 12,396-12,417 (2017). 1709.00037.
* Cintra & Velho [2014] Cintra, R. S. & Velho, H. F. d. C. Data Assimilation by Artificial Neural Networks for an Atmospheric General Circulation Model: Conventional Observation. _Bulletin of the American meteorological Society_**77**, 437-471 (2014). URL [http://arxiv.org/abs/1407.4360http://www.intechopen.com/books/advanced-applications-for-artificial-neural-networks/data-assimilation-by-artificial-neural-networks-for-an-atmospheric-general-circulation-model](http://arxiv.org/abs/1407.4360http://www.intechopen.com/books/advanced-applications-for-artificial-neural-networks/data-assimilation-by-artificial-neural-networks-for-an-atmospheric-general-circulation-model). 1407.4360.
* [85] Wahle, K., Staneva, J. & Guenther, H. Data assimilation of ocean wind waves using Neural Networks. A case study for the German Bight. _Ocean Modelling_**96**, 117-125 (2015). URL [http://linkinghub.elsevier.com/retrieve/pii/S146350031500116Xhttp://dx.doi.org/10.1016/j.ocemod.2015.07.007](http://linkinghub.elsevier.com/retrieve/pii/S146350031500116Xhttp://dx.doi.org/10.1016/j.ocemod.2015.07.007).
* [86] Irrgang, C., Saynisch-Wagner, J. & Thomas, M. Machine Learning-Based Prediction of Spatiotemporal Uncertainties in Global Wind Velocity Reanalyses. _Journal of Advances in Modeling Earth Systems_**12** (2020).
* [87] Brajard, J., Carrassi, A., Bocquet, M. & Bertino, L. Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the lorenz 96 model. _Journal of Computational Science_**44**, 101171 (2020). URL [http://www.sciencedirect.com/science/article/pii/S1877750320304725](http://www.sciencedirect.com/science/article/pii/S1877750320304725).
* [88] Ruckstuhl, Y., Janjic, T. & Rasp, S. Training a convolutional neural network to conserve mass in data assimilation. _Nonlinear Processes in Geophysics Discussions_**2020**, 1-15 (2020). URL [https://npg.copernicus.org/preprints/npg-2020-38/](https://npg.copernicus.org/preprints/npg-2020-38/).
* [89] Runge, J. _et al._ Inferring causation from time series in Earth system sciences. _Nature Communications_**10**, 1-13 (2019). URL [http://dx.doi.org/10.1038/s41467-019-10105-3](http://dx.doi.org/10.1038/s41467-019-10105-3).
* [90] Boers, N. _et al._ Prediction of extreme floods in the eastern Central Andes based on a complex networks approach. _Nature Communications_**5**, 1-7 (2014).
* [91] Qi, D. & Majda, A. J. Using machine learning to predict extreme events in complex systems. _Proceedings of the National Academy of Sciences of the United States of America_**117**, 52-59 (2020).
* [92] Sonnewald, M., Dutkiewicz, S., Hill, C. & Forget, G. Elucidating ecological complexity: Unsupervised learning determines global marine eco-provinces. _Science Advances_**6**, 1-12 (2020).
* [93] Leinonen, J., Guillaume, A. & Yuan, T. Reconstruction of Cloud Vertical Structure With a Generative Adversarial Network. _Geophysical Research Letters_**46**, 7035-7044 (2019). URL [https://onlinelibrary.wiley.com/doi/abs/10.1029/2019GL082532](https://onlinelibrary.wiley.com/doi/abs/10.1029/2019GL082532).
* [94] Stengel, K., Glaws, A., Hettinger, D. & King, R. N. Adversarial super-resolution of climatological wind and solar data. _Proceedings of the National Academy of Sciences of the United States of America_**117**, 16805-16815 (2020).
* [95] Huber, M. & Knutti, R. Anthropogenic and natural warming inferred from changes in Earth's energy balance. _Nature Geoscience_**5**, 31-36 (2012).
* [96] Zanna, L. & Bolton, T. Data-driven Equation Discovery of Ocean Mesoscale Closures. _Geophysical Research Letters_ 1-13 (2020).
* Lagaris et al. [1998] Lagaris, I. E., Likas, A. & Fotiadis, D. I. Artificial neural networks for solving ordinary and partial differential equations. _IEEE Transactions on Neural Networks_**9**, 987-1000 (1998). 9705023.
* Raissi et al. [2019] Raissi, M., Perdikaris, P. & Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational Physics_**378**, 686-707 (2019). URL [https://linkinghub.elsevier.com/retrieve/pii/S0021999118307125](https://linkinghub.elsevier.com/retrieve/pii/S0021999118307125).
* Ramadhan et al. [2020] Ramadhan, A. _et al._ Capturing missing physics in climate model parameterizations using neural differential equations. _arXiv_ (2020). URL [http://arxiv.org/abs/2010.12559](http://arxiv.org/abs/2010.12559). 2010.12559.
* Goodfellow et al. [2014] Goodfellow, I. J. _et al._ Generative adversarial networks. _arXiv_ (2014). 1406.2661.
* 1360 (01 Sep. 2013). URL [https://journals.ametsoc.org/view/journals/bams/94/9/bams-d-12-00121.1.xml](https://journals.ametsoc.org/view/journals/bams/94/9/bams-d-12-00121.1.xml).
* Rudin [2019] Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. _Nature Machine Intelligence_**1**, 206-215 (2019). URL [http://dx.doi.org/10.1038/s42256-019-0048-x](http://dx.doi.org/10.1038/s42256-019-0048-x). 1811.10154.
* Balaji [2020] Balaji, V. Climbing down Charney's ladder: Machine Learning and the post-Dennard era of computational climate science. _arXiv_ (2020). URL [http://arxiv.org/abs/2005.11862](http://arxiv.org/abs/2005.11862). 2005.11862.
* Ethics guidelines for trustworthy ai. [https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai](https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai) (2019). Accessed: 2021-01-06.
* Executive order on promoting the use of trustworthy artificial intelligence in the federal government. [https://www.whitehouse.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/](https://www.whitehouse.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/) (2020). Accessed: 2021-01-06.
* Toms et al. [2020] Toms, B. A., Barnes, E. A. & Ebert-Uphoff, I. Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability. _Journal of Advances in Modeling Earth Systems_**12**, 1-20 (2020). URL [http://arxiv.org/abs/1912.01752https://onlinelibrary.wiley.com/doi/abs/10.1029/2019MS002002](http://arxiv.org/abs/1912.01752https://onlinelibrary.wiley.com/doi/abs/10.1029/2019MS002002). 1912.01752.
* 2199 (01 Nov. 2019). URL [https://journals.ametsoc.org/view/journals/bams/100/11/bams-d-18-0195.1.xml](https://journals.ametsoc.org/view/journals/bams/100/11/bams-d-18-0195.1.xml).
- 47 (31 Aug. 2020). URL [https://journals.ametsoc.org/view/journals/bams/aop/BAMS-D-20-0097.1/BAMS-D-20-0097.1.xml](https://journals.ametsoc.org/view/journals/bams/aop/BAMS-D-20-0097.1/BAMS-D-20-0097.1.xml).
* 397 (2004). URL [http://www.sciencedirect.com/science/article/pii/S0304380004001565](http://www.sciencedirect.com/science/article/pii/S0304380004001565).
* [110] Bach, S. _et al._ On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. _PLOS ONE_**10**, 1-46 (2015). URL [https://doi.org/10.1371/journal.pone.0130140](https://doi.org/10.1371/journal.pone.0130140).
* [111] Barnes, E. A., Mayer, K., Toms, B., Martin, Z. & Gordon, E. Identifying opportunities for skillful weather prediction with interpretable neural networks. _arXiv_ (2020). 2012.07830.
* [112] Sonnewald, M., Wunsch, C. & Heimbach, P. Unsupervised Learning Reveals Geography of Global Ocean Dynamical Regions. _Earth and Space Science_**6**, 784-794 (2019).
* [113] Runge, J., Nowack, P., Kretschmer, M., Flaxman, S. & Sejdinovic, D. Detecting and quantifying causal associations in large nonlinear time series datasets. _Science Advances_**5** (2019). URL [https://advances.sciencemag.org/content/5/11/eaau4996](https://advances.sciencemag.org/content/5/11/eaau4996). [https://advances.sciencemag.org/content/5/11/eaau4996.full.pdf](https://advances.sciencemag.org/content/5/11/eaau4996.full.pdf).
* 1686 (01 Sep. 2019). URL [https://journals.ametsoc.org/view/journals/bams/100/9/bams-d-18-0042.1.xml](https://journals.ametsoc.org/view/journals/bams/100/9/bams-d-18-0042.1.xml).
* [115] Eyring, V. _et al._ Taking climate model evaluation to the next level. _Nature Climate Change_**9**, 102-110 (2019). URL [http://dx.doi.org/10.1038/s41558-018-0355-y](http://dx.doi.org/10.1038/s41558-018-0355-y).
* [116] Schlund, M. _et al._ Constraining Uncertainty in Projected Gross Primary Production With Machine Learning. _Journal of Geophysical Research: Biogeosciences_**125** (2020).
* [117] Nowack, P., Runge, J., Eyring, V. & Haigh, J. D. Causal networks for climate model evaluation and constrained projections. _Nature Communications_**11**, 1-11 (2020). URL [http://dx.doi.org/10.1038/s41467-020-15195-y](http://dx.doi.org/10.1038/s41467-020-15195-y).
* [118] Rasp, S. _et al._ WeatherBench: A benchmark dataset for data-driven weather forecasting. _Journal of Advances in Modeling Earth Systems_ (2020). URL [http://arxiv.org/abs/2002.00469](http://arxiv.org/abs/2002.00469). 2002.00469. 00469.
* [119] Buckner, C. Understanding adversarial examples requires a theory of artefacts for deep learning. _Nature Machine Intelligence_ (2020). URL [http://dx.doi.org/10.1038/s42256-020-00266-y](http://dx.doi.org/10.1038/s42256-020-00266-y).
* [120] Rolnick, D. _et al._ Tackling Climate Change with Machine Learning. _arXiv_ (2019). URL [http://arxiv.org/abs/1906.05433](http://arxiv.org/abs/1906.05433). 1906.05433. | We outline a perspective of an entirely new research branch in Earth and climate sciences, where deep neural networks and Earth system models are dismantled as individual methodological approaches and reassembled as learning, self-validating, and interpretable Earth system model-network hybrids. Following this path, we coin the term \"Neural Earth System Modelling\" (NESYM) and highlight the necessity of a transdisciplinary discussion platform, bringing together Earth and climate scientists, big data analysts, and AI experts. We examine the concurrent potential and pitfalls of Neural Earth System Modelling and discuss the open question whether artificial intelligence will not only infuse Earth system modelling, but ultimately render them obsolete. | Condense the content of the following passage. | 135 |
# Discovering dependencies in complex physical systems using Neural Networks
Sachin Kasture
[email protected]
OptoAI, 2805DL, Gouda, The Netherlands
November 6, 2021
## I Introduction
Finding relationships between different variables in large datasets [1, 2, 3] is an important problem that has ramifications in fields ranging from environmental science to economics and genetic networks. Understanding which variables affect a certain quantity becomes increasingly challenging when these relationships are highly non-linear, like those occurring in dynamical systems with several variables. Quite often in a large dataset with several variables, only a few variables may significantly affect the target variable, and identifying these variables is the first vital step in exploring these dependencies in more detail.
Several methods exist which can help find dependencies and correlations between variables. However, most of these methods are good at detecting a certain class of functions while they fail for others. Some methods are quite good at detecting functional dependencies between 2 variables [1, 4]; they have, however, not been demonstrated in a multi-variable scenario where a target variable depends on several input variables. Finding functional dependencies has been a topic explored extensively in the context of relational databases [5, 6]. However, these methods rely on finding exact functional relationships by finding all attributes which have a one-to-one or one-to-many relationship with a certain column Y. This approach does not work well for small databases which are just a sample of the true distribution, as in these cases one-to-one relations are more likely to occur. Also, in such cases it is difficult to reliably find the smallest subset of variables which are sufficient to describe Y. These methods do not offer any control over what kind of functional relationships may be considered intuitively as good or interesting candidates. Also, these methods do not provide any kind of score to evaluate functional dependencies.
In this paper, we use neural networks as devices to model nonlinear behavior and find complex non-linear relationships. Especially deep neural networks (DNN), which consist of more than 1 hidden layer, are excellent candidates for efficiently modelling multi-variable nonlinear polynomial functions with a small number of neurons [7, 8]. Additionally, a regularization mechanism allows us to control the complexity of the model we wish to consider [9]. Neural networks have been used recently to discover physical concepts, identify phase transitions and design quantum experiments [10, 11, 12]. To help find dependencies, we use a DNN-based autoencoder architecture which consists of an encoder-decoder pair. The encoder maps the input space to a latent space, while the decoder maps the latent space to the output space. This architecture has been used, amongst other applications, for non-linear Principal Component Analysis (PCA), where the goal is to find a compressed representation of data [13]. As such, the input and the output of the autoencoder are conventionally the same. In our method the input will be \\(X\\), which is the set of input features, and \\(Y\\) is the target feature or set of features. We then use compression of mutual information in the latent space to derive a loss function which can be minimized to find the smallest set of features in \\(X\\) which can be used to reliably reconstruct \\(Y\\). The loss function can be used to assign a score to compare the functional dependencies on different sets of input parameters. We then demonstrate this method to find dependencies in chaotic dynamical systems. Also, we show that this method can be used to find non-linear causal connections in the Granger sense for chaotic systems [14, 15, 16], even for a small dataset of 100 samples.
## II Theory
We now derive a loss function using the information bottleneck method [17], based on the fact that the latent intermediate layer can be used to extract only the relevant information from \\(X\\) that is needed to reconstruct \\(Y\\). We denote this latent representation by \\(L\\). We also assume a Markov chain \\(Y\\to X\\to L\\). This means \\(P(Y|X,L)=P(Y|X)\\). This is because \\(X,Y\\) correspond to observed ground truth data. We now use the fact that we want to extract only relevant information from \\(X\\) which can reconstruct \\(Y\\). We use Shannon mutual information to quantify this information [17, 18]. Therefore we want to maximize the quantity \\(I(L,Y)-\\lambda_{enc}I(L,X)\\). The first term and the second term describe the capacity of the decoder and the encoder respectively, with \\(\\lambda_{enc}\\) determining the relative weight between the two terms. We can write \\(I(L,Y)\\) as:
\\[\\begin{split}& I(L,Y)=\\int dydlp(y,l)log\\frac{p(y|l)}{p(y)}\\\\ &=\\int dlp(l)\\int dyp(y|l)log\\,p(y|l)+H(Y)\\end{split} \\tag{1}\\]
where \\(H(Y)\\) is the Shannon entropy. We neglect \\(H(Y)\\) since it is fixed by the data. Since it is very difficult to calculate \\(p(y|l)\\), we can approximate it by another analytic function \\(\\phi(y|l)\\). Using the fact that the KL divergence which measures the 'distance' between 2 probability distributions is always non-negative:
\\[\\begin{split}& KL(p(y|l),\\phi(y|l))\\geq 0\\\\ &\\implies\\int dyp(y|l)log\\,p(y|l)\\geq\\int dyp(y|l)log\\phi(y|l)\\end{split} \\tag{2}\\]
we can write
\\[I(L,Y)\\geq\\int dydlp(y,l)log\\phi(y|l) \\tag{3}\\]
We can now choose an appropriate function for \\(\\phi(y|l)\\) which allows us to derive a suitable loss function and also lets us tune the complexity of the decoder. The output of the decoder is given by \\(\\theta_{dec}(l)\\), which describes the composite function of the decoder neural network acting on the latent variable \\(l\\). To also include an additional L1 regularization parameter [9], which helps restrict the magnitude of the weights in the decoder neural network, we use the following function for \\(\\phi(y|l)\\):
\\[\\phi(y|l)=e^{-(\\theta_{dec}(l)-y)^{2}/\\sigma_{dec}^{2}-\\lambda_{dec}(|\\theta_ {d1}|+|\\theta_{d2}|+..)} \\tag{4}\\]
where \\(\\theta_{d1},\\theta_{d2}..\\) etc. are weights of different neurons in the decoder network. Therefore we can write
\\[\\begin{split} I(L,Y)&\\geq-\\int dydlp(y,l)[\\frac{( \\theta_{dec}(l)-y)^{2}}{\\sigma_{dec}^{2}}\\\\ &+\\lambda_{dec}(|\\theta_{d1}|+|\\theta_{d2}|+..)]\\end{split} \\tag{5}\\]
Now we use the fact that \\(p(y,l)=\\int dxp(x,y,l)=\\int dxp(l|x,y)p(x,y)\\). Using the Markov chain condition, this can be written as \\(p(y,l)=\\int dxp(l|x)p(x,y)\\). Approximating \\(\\int dxdyp(x,y)A(x,y)=(1/M)\\sum_{k=1}^{M}A(x^{k},y^{k})\\) where \\(M\\) is the number of distinct data points, we can write
\\[\\begin{split} I(L,Y)&\\geq-(1/M)\\sum_{k=1}^{M}\\int dlp(l|x^{k})[\\frac{(\\theta_{dec}(l)-y^{k})^{2}}{\\sigma_{dec}^{2}}\\\\ &+\\lambda_{dec}(|\\theta_{d1}|+|\\theta_{d2}|+..)]\\end{split} \\tag{6}\\]
Figure 1: Plot shows comparison between \\(x_{i}\\) and the corresponding scaled version of \\(l_{i}\\) for (a)-(d) different values of \\(y_{i}=dx_{i}/dt\\) for equation 17. In the plots where \\(l_{i}\\) is essentially noise, information from the corresponding \\(x_{i}\\) is not used to reconstruct \\(y_{i}\\) using the decoder. \\(fac\\) is a scaling factor chosen so that \\(x_{i}\\) and \\(l_{i}/fac\\) are comparable
Similarly we can define \\(I(L,X)\\) as:
\\[\\begin{split} I(L,X)&=\\int dldxp(x,l)log\\frac{p(l|x)}{p (l)}\\\\ &=\\int dxdlp(x,l)logp(l|x)-\\int dlp(l)logp(l)\\end{split} \\tag{7}\\]
We now again use another analytical function \\(g(l)\\) in place of \\(p(l)\\) and use the result on the non-negativity of the KL divergence to get:
\\[\\begin{split} I(L,X)&=\\int dldxp(x,l)logp(l|x)-\\int dlp(l)logp(l)\\\\ &\\leq\\int dxdlp(x,l)log\\frac{p(l|x)}{g(l)}\\end{split} \\tag{8}\\]
For convenience we use a Gaussian function centred at 0.
\\[g(l)=e^{-\\sum_{i}l_{i}^{2}/\\sigma_{enc}^{2}} \\tag{9}\\]
where \\(l=(l_{1},l_{2}..)\\) are different components of \\(l\\) and \\(\\sigma_{enc}\\) is an adjustable parameter. For \\(p(l|x)\\) we can use:
\\[p(l|x)=\\prod_{i}e^{-(l_{i}-W_{i}x_{i})^{2}/\\sigma_{enc}^{2}} \\tag{10}\\]
where \\(x=(x_{1},x_{2},..)\\). This means we use a linear transformation of \\(X\\) and add independent Gaussian noise with variance \\(\\sigma_{enc}^{2}\\) and mean 0 to each component. We now plug definitions 9 and 10 into equation 8 and obtain:
\\[\\begin{split} I(L,X)&\\leq\\int dxdlp(x,l)loge^{-\\sum _{i}W_{i}x_{i}(W_{i}x_{i}-2l_{i})/\\sigma_{enc}^{2}}\\end{split} \\tag{11}\\]
Writing \\(p(x,l)=p(x)p(l|x)\\) we can write the above equation as
\\[\\begin{split} I(L,X)&\\leq-\\int dxdlp(x)\\prod_{i}e^{ -(l_{i}-W_{i}x_{i})^{2}/\\sigma_{enc}^{2}}\\\\ &[\\frac{\\sum_{i}W_{i}x_{i}(W_{i}x_{i}-2l_{i})}{\\sigma_{enc}^{2}} ]\\end{split} \\tag{12}\\]
Using the approximation \\(\\int dxp(x)A(x)=(1/M)\\sum_{k=1}^{M}A(x^{k})\\), we can write
\\[\\begin{split} I(L,X)&\\leq-(1/M)\\sum_{k=1}^{M}\\int dl \\prod_{i}e^{-(l_{i}-W_{i}x_{i}^{k})^{2}/\\sigma_{enc}^{2}}\\\\ &[\\frac{\\sum_{i}W_{i}x_{i}^{k}(W_{i}x_{i}^{k}-2l_{i})}{\\sigma_{ enc}^{2}}]\\end{split} \\tag{13}\\]
Similarly, substituting equation 10 into equation 6 and assuming \\(\\sigma_{enc}^{2}\\) to be small enough so that \\(e^{-(l_{i}-W_{i}x_{i})^{2}/\\sigma_{enc}^{2}}\\approx\\delta(l_{i}-W_{i}x_{i})\\), we obtain:
\\[\\begin{split} I(L,Y)&-\\lambda_{enc}I(L,X)\\geq-(1/M )\\sum_{k=1}^{M}[\\frac{(\\theta_{dec}(l)-y^{k})^{2}}{\\sigma_{dec}^{2}}+\\\\ &\\lambda_{dec}(|\\theta_{d1}|+|\\theta_{d2}|+..)+\\lambda_{enc}\\sum_ {i}\\frac{(W_{i}x_{i}^{k})^{2}}{\\sigma_{enc}^{2}}]\\end{split} \\tag{14}\\]
Therefore we can define a loss function to be minimized as
\\[\\begin{split}&\\mathcal{L}=(1/M)\\sum_{k=1}^{M}[\\frac{(\\theta_{ dec}(l)-y^{k})^{2}}{\\sigma_{dec}^{2}}+\\\\ &\\lambda_{dec}(|\\theta_{d1}|+|\\theta_{d2}|+..)+\\lambda_{enc}\\sum_ {i}\\frac{(W_{i}x_{i}^{k})^{2}}{\\sigma_{enc}^{2}}]\\end{split} \\tag{15}\\]
We observe that the first term tries to minimize the least squares difference between \\(\\theta_{dec}(l)\\) and \\(y\\), and the second term controls the size of the weights of the decoder, which in turn controls the maximum degree of the polynomials the decoder NN can approximate. For the third term we see that as we increase \\(\\lambda_{enc}\\), the NN will try to keep \\((W_{i}x_{i}^{k})^{2}\\) small to keep the total loss function small. Assuming now that we standardize our data so that the \\(x_{i}\\) on average have similar magnitudes, we can absorb their scale into \\(\\lambda_{enc}\\). The third term will now be smallest when only those \\(W_{i}\\) are non-zero which correspond to the \\(x_{i}\\) required to reproduce \\(Y\\). Using this intuition and the fact that the term inside the summation over \\(i\\) in equation 15 is always \\(\\geq 0\\), we can further simplify the loss function as
\\[\\begin{split}&\\mathcal{L}=(1/M)\\sum_{k=1}^{M}[\\frac{(\\theta_{dec}(l )-y^{k})^{2}}{\\sigma_{dec}^{2}}]+\\\\ &\\lambda_{dec}(|\\theta_{d1}|+|\\theta_{d2}|+..)+\\lambda_{enc}\\sum_ {i}(|W_{i}|)\\end{split} \\tag{16}\\]
where we have merged \\(\\sigma_{enc}^{2}\\) with \\(\\lambda_{enc}\\). This way we treat both the encoder and decoder weights on equal terms using L1 regularization. From a practical standpoint L1 is advantageous since it can shrink weights faster.
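To make equation 16 concrete, one possible implementation is sketched below. We assume PyTorch as the framework (the paper does not name one), write the encoder as the elementwise map \\(l_{i}=W_{i}x_{i}\\) of equation 10 with additive Gaussian noise, and use a small non-linear decoder for \\(\\theta_{dec}\\); all class and variable names are illustrative.

```python
# Minimal sketch (not the author's code) of the network and the loss in
# equation (16). Assumes PyTorch; all names are illustrative.
import torch
import torch.nn as nn

class DependencyAE(nn.Module):
    def __init__(self, n_inputs, sigma_enc=0.1, hidden=64):
        super().__init__()
        # One encoder weight W_i per input feature, cf. equation (10).
        self.W = nn.Parameter(torch.randn(n_inputs))
        self.sigma_enc = sigma_enc
        # Non-linear decoder theta_dec acting on the latent code l.
        self.decoder = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        l = self.W * x                                    # l_i = W_i * x_i
        if self.training:
            l = l + self.sigma_enc * torch.randn_like(l)  # additive Gaussian noise
        return self.decoder(l)

def eq16_loss(model, y_hat, y, lam_enc, lam_dec):
    # mean squared reconstruction error + L1 on decoder weights + L1 on W
    mse = ((y_hat.squeeze(-1) - y) ** 2).mean()
    l1_dec = sum(m.weight.abs().sum() for m in model.decoder
                 if isinstance(m, nn.Linear))
    l1_enc = model.W.abs().sum()
    return mse + lam_dec * l1_dec + lam_enc * l1_enc
```

Inputs whose \\(W_{i}\\) are driven towards zero by the \\(\\lambda_{enc}\\) penalty are the ones the decoder does not need in order to reconstruct \\(Y\\).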
Figure 2: Plot shows the case of a fan-in causality pattern for the set of delay equations in equation 18, for the set of \\(\\xi_{ij}\\) values used to obtain the results in Figure 3
## III Application
For further study we use a NN in which the encoder has 2 linear layers. This gives us a mapping \\(X\\to L\\). We then add Gaussian noise to the latent variables, \\(l_{i}=l_{i}+N(0,\\sigma_{enc}^{2})\\). The latent code is then sent through a multilayer decoder network with non-linear activation functions to give the output \\(\\theta_{dec}(l)\\). We perform batch-normalization between intermediate neural network layers [19]. This layer prevents changes in data distributions between adjacent layers and allows neural network learning at a higher learning rate. We then minimize the loss function in equation 16 using stochastic gradient descent with different batch sizes. We can tune the values of \\(\\lambda_{enc},\\lambda_{dec}\\) (regularization parameters) to obtain as low a value of the loss function as possible. This choice of regularization parameters may also depend on our prior knowledge about the complexity of the system. The data is split into a training and a validation set. The training data is used to build the model and the validation set checks how well the model generalizes. The basic heuristic for tuning these parameters is as follows: after fixing the learning rate for the gradient descent, we first increase the value of \\(\\lambda_{dec}\\), which basically fixes the complexity of functions the decoder can simulate. We then increase the value of \\(\\lambda_{enc}\\) and look at the value of the mean square error, and stop when the mean square error is as small as possible for both the training and the validation set. We now use this method to infer relationships in well-known non-linear systems. We first consider the Lorenz96 non-linear system, which is defined as:
\\[\\frac{dx_{i}}{dt}=(x_{i+1}-x_{i-2})x_{i-1}-x_{i}+F \\tag{17}\\]
where \\(i\\) goes from 1 to \\(N\\), \\(N\\) is the number of oscillators, and \\(x_{N+1}=x_{1}\\), \\(x_{-1}=x_{N-1}\\), \\(x_{0}=x_{N}\\). \\(F\\) is the driving term and we choose \\(F=8\\), where the system behaves in the chaotic regime. Figure 1 shows the results for N=5. We run the analysis N=5 times, each time with \\(y=\\frac{dx_{i}}{dt}\\) for i from 1 to 5. We see that the latent representation \\(l_{i}\\) is basically just the added Gaussian noise when the corresponding \\(y\\) has no dependency on \\(x_{i}\\). The number of data points was 3000, the learning rate was 0.0001, and the values of \\(\\lambda_{dec},\\lambda_{enc}\\) were 0 and 0.1 respectively. The training was run for 1000 epochs with a batch size of 300.
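As a rough illustration of this experiment, the snippet below generates Lorenz96 trajectories and trains the network with the hyperparameters quoted above, reusing the DependencyAE module and eq16_loss from the previous sketch; the integration scheme, step size and initial condition are assumptions and not taken from the paper.

```python
# Sketch of the Lorenz96 experiment; reuses DependencyAE and eq16_loss from
# the previous snippet. Step size and initial condition are assumptions.
import numpy as np
import torch

def lorenz96_rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def simulate(N=5, F=8.0, steps=3000, dt=0.01):
    x = F + 0.01 * np.random.randn(N)
    states = []
    for _ in range(steps):
        k1 = lorenz96_rhs(x, F)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
        k4 = lorenz96_rhs(x + dt * k3, F)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)   # one RK4 step
        states.append(x.copy())
    states = np.array(states)
    derivs = np.array([lorenz96_rhs(s, F) for s in states])
    return states, derivs

states, derivs = simulate()
X = torch.tensor((states - states.mean(0)) / states.std(0), dtype=torch.float32)
y = torch.tensor(derivs[:, 0], dtype=torch.float32)       # target y = dx_0/dt

model = DependencyAE(n_inputs=5, sigma_enc=0.1)
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
for epoch in range(1000):                                  # 1000 epochs
    for idx in torch.split(torch.randperm(len(X)), 300):   # batch size 300
        opt.zero_grad()
        loss = eq16_loss(model, model(X[idx]), y[idx], lam_enc=0.1, lam_dec=0.0)
        loss.backward()
        opt.step()
print(model.W.detach().abs())   # near-zero |W_i| -> x_i judged irrelevant
```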
Next we apply the NN to infer causal relationships in a set of non-linear delay equations. For this we look at the following set of equations:
\\[Y_{i}(t+1)=(\\xi_{ii}-\\sum_{j=1,2,3}(\\xi_{jj}-\\xi_{ij}Y_{j}(t)))Y_{i}(t) \\tag{18}\\]
for i=1,2,3. We choose parameters \\(\\xi_{ij}\\) which correspond to the fan-in pattern shown in Figure 2. The values of \\(\\xi\\) are as follows: \\(\\xi_{11}=4,\\xi_{22}=3,\\xi_{33}=2,\\xi_{31}=3,\\xi_{32}=4,\\xi_{33}=5,\\xi_{34}=6,\\xi_{35}=7,\\xi_{36}=8\\), \\(0.6,\\xi_{32}=-0.6\\). These parameters correspond to a chaotic regime. In this case both \\(Y_{2}\\) and \\(Y_{3}\\) are causally driven by \\(Y_{1}\\). A fan-in pattern is a good test because correlation-based tests would falsely infer a causal relationship between \\(Y_{2}\\) and \\(Y_{3}\\) [2]. To infer the causal relationships, we run the NN with \\(y=Y_{i}(t+1)\\) and input \\(X=[Y_{1}(t),Y_{2}(t),Y_{3}(t)]\\). From Figure 3 we can see that we are able to correctly infer the dependencies, even for a very small dataset of 50 points. The plots were obtained for a learning rate of 0.001 and \\(\\lambda_{enc},\\lambda_{dec}\\) values of 0.1 and 0.005 respectively. The number of epochs was 1500 with a batch size of 32.
Figure 3: Plot shows comparison between \\(Y_{i}\\) and the corresponding scaled version of \\(l_{i}\\) for (a)-(c) different values of \\(y_{i}=Y_{i}\\) for the set of delay equations 18. In the plots where \\(l_{i}\\) is noise, information from the corresponding \\(x_{i}\\) is not used to reconstruct \\(y_{i}\\) using the decoder.
We also summarize the performance of this method using two metrics, False discovery (FD) and Miss rate (MR), which are defined as:
\\[FD=\\frac{FP}{FP+TP} \\tag{19}\\] \\[MR=\\frac{FN}{FN+TP}\\]
where FN, FP, TP are false negatives, false positives and true positives respectively. Here a positive means a certain variable has been discovered to be independent of the output, and a negative means a variable has been discovered to be related to the output. This data is obtained by aggregating results over 20 independent runs of the model. For the Lorenz96 model, the best result is obtained with \\(\\lambda_{enc}=0.2\\), while for the set of equations 18 the best results are obtained for \\(\\lambda_{enc}=0.1\\).
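For completeness, a small helper computing these two scores might look as follows; here "positive" follows the convention above, i.e. a variable declared independent of the output.

```python
# Illustrative computation of FD and MR, equation (19).
def fd_mr(declared_independent, truly_independent):
    pairs = list(zip(declared_independent, truly_independent))
    tp = sum(d and t for d, t in pairs)        # correctly declared independent
    fp = sum(d and not t for d, t in pairs)    # declared independent but related
    fn = sum(t and not d for d, t in pairs)    # independent but declared related
    fd = fp / (fp + tp) if (fp + tp) else 0.0
    mr = fn / (fn + tp) if (fn + tp) else 0.0
    return fd, mr
```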
## IV Conclusion
The proposed approach using NNs is a versatile platform for inferring relationships, especially in complex non-linear systems. This is because NNs are a powerful tool for modelling such non-linear functions. Even though it is difficult to infer the exact functional form using a NN, this method can help locate functional dependencies between variables in a multivariable system. These variables can then be probed more extensively to find the functional (or approximate functional) form of the relationships. Methods based on sparse regression have been used in the past to find functional relationships. However, they rely on prior knowledge of the set of basis functions to use for the regression. The proposed method has no such requirement and, with a large enough NN, can simulate any complex non-linear function. Besides locating functional relationships, it can also help infer causal relationships in non-linear data, as seen in the discussed example, where it correctly inferred causal relationships even for a small dataset of 50 samples.
## V Acknowledgements
The author would like to thank Akshatha Mohan for helpful comments and critical assessment of the manuscript.
## References
* Reshef _et al._ [2011]D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S. Lander, M. Mitzenmacher, and P. C. Sabeti, Science **334**, 1518 (2011).
* Marbach _et al._ [2010]D. Marbach, R. J. Prill, T. Schaffter, C. Mattiussi, D. Floreano, and G. Stolovitzky, Proceedings of the National Academy of Sciences **107**, 6286 (2010).
* Brunton _et al._ [2016]S. L. Brunton, J. L. Proctor, and J. N. Kutz, Proceedings of the National Academy of Sciences **113**, 3932 (2016).
* Dembo _et al._ [2001]A. Dembo, A. Kagan, and L. A. Shepp, Bernoulli **7**, 343 (2001).
* Liu _et al._ [2012]J. Liu, J. Li, C. Liu, and Y. Chen, IEEE Transactions on Knowledge and Data Engineering **24**, 251 (2012).
* Huhtala [1999]Y. Huhtala, The Computer Journal **42**, 100 (1999).
* Lin _et al._ [2017]H. W. Lin, M. Tegmark, and D. Rolnick, Journal of Statistical Physics **168**, 1223 (2017).
* Rolnick and Tegmark [2018]D. Rolnick and M. Tegmark, arXiv:1705.05502 [cs, stat] (2018), arXiv: 1705.05502.
* Tibshirani [1996]R. Tibshirani, Journal of the Royal Statistical Society: Series B (Methodological) **58**, 267 (1996).
* Iten _et al._ [2020]R. Iten, T. Metger, H. Wilming, L. del Rio, and R. Renner, Physical Review Letters **124**, 010508 (2020).
* Rem _et al._ [2019]B. S. Rem, N. Kaming, M. Tarnowski, L. Asteria, N. Flaschner, C. Becker, K. Sengstock, and C. Weitenberg, Nature Physics **15**, 917 (2019).
* Melnikov _et al._ [2018]A. A. Melnikov, H. Poulsen Nautrup, M. Krenn, V. Dunjko, M. Tiersch, A. Zeilinger, and H. J. Briegel, Proceedings of the National Academy of Sciences **115**, 1221 (2018).
* Hinton [2006]G. E. Hinton, Science **313**, 504 (2006).
* Detto _et al._ [2012]M. Detto, A. Molini, G. Katul, P. Stoy, S. Palmroth, and D. Baldocchi, The American Naturalist **179**, 524 (2012).
Figure 4: Plot of FD vs MR for different values of \\(\\lambda_{enc}\\). The legend also mentions the non-linear system for the plotted data. \"dde\" stands for the delay difference equations in equation 18
* Runge et al. (2012) J. Runge, J. Heitzig, V. Petoukhov, and J. Kurths, Physical Review Letters **108**, 258701 (2012).
* Ma et al. (2015) H. Ma, K. Aihara, and L. Chen, Scientific Reports **4**, 7464 (2015).
* Tishby et al. (2000) N. Tishby, F. C. Pereira, and W. Bialek, arXiv:physics/0004057 (2000), arXiv: physics/0004057.
* Giannella and Robertson (2004) C. Giannella and E. Robertson, Information Systems **29**, 483 (2004).
* Ioffe and Szegedy (2015) S. Ioffe and C. Szegedy, arXiv:1502.03167 [cs] (2015), arXiv: 1502.03167.
# Evaluating Model Fidelity in an Aerial Image Analysis System
F. Quint, M. Sties
Institute for Photogrammetry and Remote Sensing
University of Karlsruhe
76128 Karlsruhe, Germany
[email protected]
## 1 Introduction
Understanding of aerial images is one of the most challenging tasks in computer vision. Due to its complexity, a model based analysis has been found to be mandatory for several years, see e.g. [14], [15], [16], [21], [17]. In our system MOSES (Map Oriented SEmantic image underStanding) [18] we too perform a structural, model based analysis. We are interested in the recognition of objects in urban environments using large scale aerial images.
## 2 MOSES
One of the main characteristics of the system MOSES is that large scale topographical maps are used to automatically refine the models used for image analysis. The architecture of our system is shown in Fig. 1. The generative model contains domain independent, common sense knowledge the system designer has about the environment. The generic models in the map domain and in the image domain are specializations of the generative model and they reflect the particularities of the representations of our environment in the map and image respectively. The models contain both declarative knowledge, which describes the structure of the objects, and procedural knowledge, which contains the methods used during the map and image analysis process. As a repository for the models, semantic networks [10] are used, as implemented by the system ERNEST [16].
The generative model and the generic models are the part of the system which is built by the system developer. The models and scene descriptions described in the sequel are built automatically in analysis processes. Analysis takes place in three phases.
### Map analysis
In the first phase, the generic model in the map domain is used to analyse the map, which is available as a list of digitized contours. The procedure by which map analysis is performed is similar to the one used in the image analysis process and will be described in a following section. The result of the map analysis is a description of the scene, as far as it can be constructed out of the map data. This scene description is also stored in a semantic network.
The nodes of the semantic network represent objects, parts and subparts of the scene. They are described with attributes, which in this case mainly contain the geometric properties of the scene objects. Links between the nodes represent relations between the corresponding objects or parts. Two typical relations are the _part-of_ relation, which describes the structure of the scene objects and the _specialization_ relation, along which properties of objects are inherited.
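A minimal sketch of such a node, with attributes and the two relation types, could look as follows (the data layout is illustrative and not the actual ERNEST representation):

```python
# Illustrative scene-description node: attributes plus part-of and
# specialization links. Not the actual ERNEST data model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneNode:
    name: str                                        # e.g. "building_0235", "roof"
    attributes: dict = field(default_factory=dict)   # mainly geometric properties
    parts: list = field(default_factory=list)        # part-of relation
    specialization_of: Optional["SceneNode"] = None  # inheritance of properties

    def inherited(self, key):
        """Look up an attribute, falling back along the specialization link."""
        if key in self.attributes:
            return self.attributes[key]
        if self.specialization_of is not None:
            return self.specialization_of.inherited(key)
        return None
```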
Figure 1: Architecture of the system MOSES
### Model building
In the second phase, the scene description obtained after the map analysis is combined with the generic model in the image domain and the specific model in the image domain is built. A detail of this specific model representing building 0235 and its parts as far as they are given in the map, is shown in Fig. 2.
For each node (instance) in the scene description we create a new node (concept) in the specific model. This new concept is a specialization of the corresponding concept in the generic model in the image domain and thus inherits its declarative and procedural knowledge. The values of the attributes in the scene description after map analysis are stored after a transformation as restrictions for the corresponding attributes of the newly created concepts. They serve as initial estimates for the calculation of the attribute values out of the image data.
The relations between the instances in the scene description are transferred accordingly into relations between the new concepts. Whilst the generic model in the image domain describes in a general form the representation of an arbitrary scene in an aerial image, the specific model in the image domain describes in a detailed manner that part of the world which is subject to the current analysis. The grade of detail depends, of course, on the contents of the map.
### Image primitives
Prior to the model based image analysis, primitives are extracted from the image data. We work with large scale color aerial images, which after digitization have a pixel size of 30 cm x 30 cm on the ground. Line segments and regions serve as primitives. The line segments are extracted with a gradient based procedure (Quint and Bahr, 1994). The regions are obtained by segmenting the aerial image using a Bayesian homogeneity predicate (Quint and Landes, 1996).
The regions and the line segments are combined in an attributed undirected graph. The nodes of the graph are attributed with the regions. Nodes corresponding to neighbouring regions are connected with links. A link between two nodes is attributed with the line segment(s) which compose the border between the corresponding regions. This feature graph is the database on which the model based image analysis operates.
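One possible organisation of this feature graph is sketched below; the concrete attributes are placeholders and the layout is only illustrative.

```python
# Illustrative feature graph: regions as nodes, links between neighbouring
# regions attributed with the bordering line segment(s).
from dataclasses import dataclass, field

@dataclass
class LineSegment:
    x: float            # starting point, x coordinate
    y: float            # starting point, y coordinate
    length: float
    theta: float        # angle to the positive x-axis

@dataclass
class FeatureGraph:
    regions: dict = field(default_factory=dict)  # region_id -> attribute dict
    borders: dict = field(default_factory=dict)  # frozenset({a, b}) -> [LineSegment]

    def add_region(self, region_id, attributes):
        self.regions[region_id] = attributes

    def add_adjacency(self, a, b, segments):
        self.borders[frozenset((a, b))] = list(segments)

    def neighbours(self, region_id):
        return [other for key in self.borders if region_id in key
                for other in key if other != region_id]
```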
### Image analysis
In the third phase, the specific model in the image domain is used to perform the actual image analysis. The aim of this phase is to verify in the image the objects found after the map analysis and to detect and describe other objects of the scene which are not represented in the map. For the latter, the context gained through the verification of the map objects will be helpful.
The strategy followed in the analysis process is a general, problem independent strategy provided by the shell ERNEST. The analysis starts by creating a modified concept for the goal concept (expansion step). A modified concept is a preliminary result and it reflects constraints for the concept that have been determined from the context of the current analysis state.
Following the hierarchy in the semantic network top-down, the concepts on lower hierarchical levels are expanded stepwise until a concept on the lowest level is reached. Since this concept does not depend on other concepts, its correspondence with a primitive in the database can be established and its attributes can be calculated. This is called instantiation.
Analysis now moves bottom-up to the concept at the next higher hierarchical level. If instances have been found for all parts of this concept, the concept itself can be instantiated. Otherwise the analysis continues with the next not yet instantiated concept on a lower level. After an instantiation, the acquired knowledge is propagated bottom-up and top-down to impose constraints and restrict the search space. Thus, in the analysis process top-down and bottom-up processing alternate. As well, expansion and instantiation alternate during the analysis.
Generally, while performing an instantiation it is possible to establish several correspondences between a concept and primitives in the data base. However, only one of these correspondences leads to the correct interpretation. Since it usually is not possible to ultimately decide at the lower levels which correspondence is correct, all possible correspondences have to be accounted for.
Thus, the image analysis is a search process, which can be graphically represented by a tree. Each node of the tree represents a state of the analysis process. If in a given state several correspondences are possible, the search tree is split: for each hypothesis a new node is created as a successor of the current node.
The analysis process continues with that leaf node of the search tree which is considered to be the best according to a problem dependent evaluation. It is known that the problem of finding an optimal path in a search tree can be solved by the \\(A^{*}\\)-algorithm (Nilsson, 1982). Its application is possible if one can evaluate the path from the root node to the current node and if one can give an estimate for the valuation of the path from the current node to the (not yet known) terminal node containing the solution.
Figure 2: Detail of the part-of hierarchy of the specific model
## 3 Valuations
The functions which evaluate the states of the analysis are very important since they are not only responsible for the efficiency of the search, but they are also decisive for the success or failure of the analysis. We relate the valuation of the search path to the valuation of the analysis goal in the given state of the analysis. The valuation of the goal is calculated considering the valuations of the instances and modified concepts already created and the estimates for the valuations of the instances and modified concepts which will be created in the path from the current node to the solution node.
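Schematically, the resulting best-first expansion of the search tree can be written as follows; score() stands for the combined valuation just described (already created instances plus an estimate for the remaining path), and since it is a merit to be maximized, negated scores are pushed onto a min-heap. The names are illustrative.

```python
# Schematic best-first expansion of the analysis tree in the spirit of A*.
import heapq
import itertools

def best_first_search(root, expand, score, is_goal, max_nodes=100000):
    tie = itertools.count()                       # tie-breaker for equal scores
    frontier = [(-score(root), next(tie), root)]
    for _ in range(max_nodes):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)     # best-valued analysis state
        if is_goal(state):                        # goal concept fully instantiated
            return state
        for child in expand(state):               # one child per match hypothesis
            heapq.heappush(frontier, (-score(child), next(tie), child))
    return None
```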
When an instantiation is performed, implicitly a hypothesis of match is established between the concept under instantiation and the chosen primitives from the database. Since we cannot ultimately decide, at the moment the instantiation is performed, whether it is the correct one, we are working under uncertainty and we have to quantify our uncertainty. At the level of each concept in the semantic network we have a dichotomous frame of discernment with the events: the chosen primitives
* match
* do not match
to the concept (i.e. model).
The valuations computed for the instances and modified concepts in each state of the analysis are measures of our subjective belief in these hypotheses. They take values between 0 and 1 and we interpret them as basic belief masses in the framework of the Dempster-Shafer theory of evidence (Shafer, 1976). The higher a valuation is, the stronger is our subjective belief in the corresponding hypothesis. Using the methods described in (Quint, 1995), the different valuations are combined and propagated in the hierarchy of the semantic network to result in the valuation of the analysis goal.
We evaluate two aspects for our hypotheses of match: the compatibility and the model fidelity. The compatibility evaluates an analysis state considering the principles of perceptual grouping. It is calculated based on geometric, topologic and radiometric properties of the image primitives only. In this category belong, for example, the goodness of fit of several line segments extracted from the image data to form an edge of an object, the goodness of fit of several edges to form a polygon, the compatibility of the polarity of edges to form a polygon, etc.
The model fidelity measures the goodness of fit between the image primitives and the specific model gained through the analysis of the map. Portraying it in simplified terms, one can say that the compatibility is a measure for the ability of the chosen image primitives to form an object of the generic model, whereas the model fidelity is a measure for the ability to form exactly that object, which is predicted by the map. We present in this article measures used for the evaluation of the model fidelity.
## 4 Model fidelity
### Model fidelity for line segments
At the level of line segments we define the model fidelity with the help of a distance function between the image primitives and the contours stored in the specific model. The distance function is part of a metric defined with the help of a set of square integrable functions on a parametric space for line segments.
We describe a line segment with the help of the coordinates of its starting point, its length and the angle between the line and the positive \\(x\\)-axis (see Fig. 3). Thus, a line segment \\(s_{i}\\) is represented in the space \\(S=(x,y,l,\\theta)\\) by the point \\(s_{i}=(x_{i},y_{i},l_{i},\\theta_{i})\\). The coordinates of a line segment are in the domain \\((x,y)\\in\\mathbb{R}^{2}\\), the length of a line is in \\(l\\in\\mathbb{R}_{+}\\) and its angle is in \\(\\theta\\in(-\\frac{\\pi}{2},\\frac{\\pi}{2}]\\). The space \\((x,y,l,\\theta)\\) is the Cartesian product of the enumerated domains and is different from \\(\\mathbb{R}^{n}\\). For this reason we do not use the Euclidean distance between two points in this space to calculate the distance between two line segments, but use instead a metric defined on an isomorphic space of functions.
We define an isomorphism by attaching to each point \\(s_{i}\\) in the space \\(S\\) a function \\(n_{i}(x,y,l,\\theta)\\) from the space of square integrable functions \\(\\mathcal{L}^{2}(S)\\). We call this function _neighbourhood function_. As a distance between two line segments \\(s_{i}\\) and \\(s_{j}\\) we now use the distance defined on the family of functions \\(n_{i}\\). It is well known that a distance function defined with the expression:
\\[d_{ij}=\\left[\\int_{S}\\left(n_{i}(x,y,l,\\theta)-n_{j}(x,y,l,\\theta)\\right)^{2} dx\\,dy\\,dl\\,d\\theta\\right]^{\\frac{1}{2}} \\tag{1}\\]
satisfies the necessary properties for a metric on \\(\\mathcal{L}^{2}(S)\\). If we choose the functions \\(n_{i}(x,y,l,\\theta)\\) such that their norm in the induced metric is equal to 1, i.e.
\\[\\int_{S}\\left(n_{i}(x,y,l,\\theta)\\right)^{2}dx\\,dy\\,dl\\,d\\theta\\overset{!}{=}1, \\tag{2}\\]
Figure 4: Neighbourhood function for the position of line segments
Figure 3: Parameters used to describe a line segment
the expression (1) simplifies to:
\\[d_{ij}=\\left[2-2\\int_{S}n_{i}(x,y,l,\\theta)n_{j}(x,y,l,\\theta)dx\\,dy\\,dl\\,d \\theta\\right]^{\\frac{1}{2}}. \\tag{3}\\]
The distance \\(d_{ij}\\) decreases when the integral in expression (3) increases. If the neighbourhood functions are positive functions, the integral in expression (3) takes values between 0 and 1.
We have formulated our search problem using as valuations of the nodes in the search tree merit functions and not cost functions. The reason for this is pragmatic: it is more natural to evaluate the goodness than the badness of a match. Thus, we will not use the distance as given by expression (3) but only the integral in expression (3) to define the model fidelity \\(m_{ij}\\) at the level of line segments:
\\[m_{ij}=\\int_{S}n_{i}(x,y,l,\\theta)n_{j}(x,y,l,\\theta)dx\\,dy\\,dl\\,d\\theta. \\tag{4}\\]
This integral equals the cosine of the angle between the two versors \\(n_{i}\\) and \\(n_{j}\\) in the vector space \\(\\mathcal{L}^{2}(S)\\) and can be thought of as a correlation measure between these two versors.
The neighbourhood functions are chosen regarding the physics of the image formation process and some heuristics motivated by experience. We construct the function \\(n_{i}(x,y,l,\\theta)\\) as a product of three functions defined on \\(\\mathbb{R}^{2},\\mathbb{R}_{+}\\) and \\((-\\frac{\\pi}{2},\\frac{\\pi}{2}]\\) respectively:
\\[n_{i}(x,y,l,\\theta)=f_{i}(x,y)\\,g_{i}(l)\\,h_{i}(\\theta).\\]
To define the function \\(f_{i}(x,y)\\) we take advantage of the fact that the parameters of the camera and the position of the airplane at the moment the aerial image was taken are known. We can determine the transformation between the image coordinates and the coordinates in the specific model (map coordinates) and transform the image primitives into the map coordinate system. Assuming that the corresponding contours are depicted in the map, there are several error sources which are responsible for the fact that the line segments extracted from the image will not overlap with the map contours. These are for example inaccuracies in:
* the extraction of line segments from the image,
* the determination of the transformation parameters,
* the acquisition and digitization of the map data.
Subsuming all these effects, we can safely assume that the position of the image primitives is normally distributed around their \"true\" position as given by the specific model.
For this reason we use as a neighbourhood function \\(f_{i}(x,y)\\) for the position of the line segments a Gaussian shaped function. However, since we do not want to evaluate differently the situations when a short line segment lies in the middle of its model line or closer to the endpoints, our function is constant along the length of the line. We choose for the neighbourhood function \\(f_{i}(x,y)\\):
\\[f_{i}(x,y)=K_{xy}\\exp\\left(-\\frac{\\left((x-x_{i})\\sin\\theta_{i}-(y-y_{i})\\cos \\theta_{i}\\right)^{2}}{2\\sigma^{2}}\\right)\\]
for positions \\((x,y)\\) between the endpoints of a line, i.e. \\(\\{(x,y)\\mid(x-x_{i})\\cos\\theta_{i}+(y-y_{i})\\sin\\theta_{i}\\geq 0\\,\\wedge\\,(x-x_{i}) \\cos\\theta_{i}+(y-y_{i})\\sin\\theta_{i}\\leq l_{i}\\}\\), and \\(f_{i}(x,y)=0\\) otherwise. The neighbourhood functions \\(f_{i}(x,y)\\) and \\(f_{j}(x,y)\\) for the constellation of line segments shown in Fig. 3 are displayed in Fig. 4. The variance of the Gaussian is chosen equal to the residual mean square error of the transformation.
For the part of the neighbourhood function which depends on the length of the line, we choose a function which "inside" the line is proportional to the square root of the length and which is 0 "outside":
\\[g_{i}(l)=\\begin{cases}K_{i}\\sqrt{l}&\\text{if }l\\in[0,l_{i}]\\\\ 0&otherwise.\\end{cases}\\]
As we will see later, this choice penalizes image primitives in an amount proportional to the ratio of their length and the length of the model contour.
The considerations regarding the uncertainty in the position of line segments applies also for small deviations of the angle. Thus, the neighbourhood function for the angle is chosen following similar reflections. But because the domain of definition of the angle is an interval and because we want a stronger penalization of large deviations of the angle, we use a trigonometric function instead of the Gaussian shaped function:
\\[h_{i}(\\theta)=K_{\\theta}\\cos(\\theta-\\theta_{i}).\\]
The constants \(K_{xy}\), \(K_{i}\) and \(K_{\theta}\) are calculated by imposing normalization on each of the partial neighbourhood functions, and we can thus assure the fulfillment of condition (2).
With this choice of neighbourhood functions, the integral for the model fidelity is separable into three terms: the position fidelity, the length fidelity and the angle fidelity. The integral over the product of the neighbourhood functions for the position, i.e. the position fidelity can generally not be expressed in a closed form. However, if the angle between the two lines is small or the parameter \\(\\sigma\\) is in the same order of magnitude as the mean geometric distances between the two line segments then a good approximation is given by:
\\[\\int_{\\mathbb{R}^{2}}f_{i}(x,y)f_{j}(x,y)dx\\,dy=\\frac{\\sqrt{\\pi} \\sigma}{l_{i}\\sin\\Delta\\theta}\\times\\\\ \\left(\\text{erf}\\left(\\frac{u_{1}\\sin\\Delta\\theta-A}{\\sigma\\sqrt{2+2 \\cos\\Delta\\theta^{2}}}\\right)-\\text{erf}\\left(\\frac{u_{2}\\sin\\Delta\\theta-A}{ \\sigma\\sqrt{2+2\\cos\\Delta\\theta^{2}}}\\right)\\right). \\tag{5}\\]
with \\(\\Delta\\theta=\\theta_{j}-\\theta_{i}\\) and \\(A=-(x_{i}-x_{j})\\sin\\theta_{j}+(y_{i}-y_{j})\\cos\\theta_{j}\\). The coordinates \\(u_{1}\\) and \\(u_{2}\\) are the coordinates of the start- and of the endpoint of line \\(l_{i}\\) in a coordinate system \\(uOv\\) with its origin in the starting point of line \\(l_{j}\\) and with the \\(u\\)-axis parallel to the line \\(l_{j}\\). For a situation as shown in Fig. 3, when after a parallel displacement the perpendicular distance \\(d\\) between the two lines varies, the position fidelity varies in function of \\(d\\) as shown in Fig. 5.
The integrals over the neighbourhood functions for the length and the angle of the line segments can be expressed in closed form and result to:
\\[\\int_{\\mathbb{R}_{+}}g_{i}(l)g_{i}(l)dl=\\frac{\\min(l_{i},l_{j})^{2}}{l_{i}l_{j}}\\]
and
\\[\\int_{-\\pi/2}^{\\pi/2}h_{i}(\\theta)h_{j}(\\theta)d\\theta=cos(\\theta_{i}-\\theta_{ j}).\\]The length fidelity amounts thus to the ratio of the length of the shorter line to the length of the longer line. The angle fidelity is the cosinus of the angle difference of the two lines. The total model fidelity for line segments is given by the product of the three components.
Usually, due to noise influence, the visible contour of an object in the image is broken and thus several line segments will form the contour. In this case, the contour is constructed step by step by adding line segments until the contour is completed. The \(A^{*}\)-algorithm requires an optimistic estimate of the merit for future instantiations. To give an optimistic estimate for the future instantiations in the case of a partially instantiated contour, we elongate the already instantiated line segments in order to simulate a virtual best fit with the model. The model fidelity for this "ideal" best fit is evaluated and serves as an optimistic estimate for the model fidelity of future instantiations.
### Model fidelity for polygons
A different approach for the model fidelity is used at the hierarchical level of polygons. Whilst at the level of line segments the similarity in position and orientation between the selected image primitives and the model contour has been evaluated, we evaluate at the level of polygons the similarity between the shape of the polygon created by the image primitives and the shape of the model polygon.
The corner points of the polygon in the image domain are obtained as intersections of the chosen image primitives. In the case where several image primitives form an edge of an object, these primitives are replaced, for the purpose of the corner point calculation, with a regression line. The error produced by the approximation with the regression line is taken into account in the valuations of the compatibility. In the case where no correspondence could be established between an edge of an object and an image primitive, we make a wildcard assignment to the current edge. In this case the corresponding corner points are chosen to be the end point of the image primitive assigned to the edge preceding, and the starting point of the image primitive assigned to the edge following, the wildcard-assigned edge. The wildcard assignments, however, lead to a penalization in the model fidelity of the line segments.
To avoid including position and orientation errors in our measure, we first transform the polygon in the image domain onto the model polygon. We use a similarity transformation between the corresponding corner points of the two polygons and calculate the transformation parameters such that the residual mean square error is minimal. Since the scales of the image and the map are known, we fix the scale parameter in the similarity transformation to the known value.
The resulting minimal mean square error is a measure for the similarity of the shapes of the two polygons. We gain our subjective belief in the hypotheses of match between the image polygon and the model polygon with help of a fuzzy function:
\\[p_{ij}(r)=\\exp\\left(-\\frac{r^{2}}{\\sigma_{r}^{2}}\\right) \\tag{6}\\]
where \\(r\\) is the residual mean square error after the transformation.
### Model fidelity for objects
The resulting model fidelity for an object of the scene is calculated by combining the model fidelities at the level of line segments and polygons. The model fidelities are interpreted as subjective beliefs in the corresponding hypotheses of match and treated in the framework of the Dempster-Shafer theory of evidence. With an extension (Quint, 1995) to approaches found in the literature we propagate the model fidelities calculated at a lower hierarchical level of the semantic network upwards. Model fidelities at the same hierarchical level are combined with Dempster's rule (Shafer, 1976).
The model fidelity computed in this way at the level of an object of the scene is used to decide whether an object represented in the map could be verified in the image analysis process. Besides this, the model fidelity for an object is further propagated up to the goal concept of the analysis, which in our case represents the scene. At this level it is combined with the compatibility measures computed for the instances and contributes to the valuation of an analysis state. However, since we are not only interested in the verification of objects represented in the map, the model fidelity contributes a smaller fraction to the valuation of the analysis state than the compatibility.
## 5 Results
We present the results for the verification of buildings in the scene of Fig. 6. The line segments used in the image analysis process are overlaid in black in the figure. There were roughly 5000 line segments presented to the system. The line segments which are found after the analysis to compose the buildings of the scene are drawn in white. For each building, its identifier is also displayed in the figure. The model fidelity for the recognized buildings is given in Table 1.
Excepting the house in the lower left corner of the image (i.house0106), all the other buildings in the image have been verified successfully. The main reason for the failure of the verification was that the position error for this building with respect to the specific model was twice as big as the position errors of the other buildings in the scene. In this experiment the parameters \(\sigma\) in expression (5) and \(\sigma_{r}\) in expression (6) were chosen such that an absolute position error of 2 m in the scene leads to a model fidelity of 0.5 (i.e., half of the maximal value).
Those objects rejected by the verification process are marked and passed to the following phase of the analysis, the classification phase, where these image structures are interpreted regardless of a specific model gained from map analysis, but
Figure 5: Position fidelity as a function of \\(d\\) (see also Fig. 3)
using the generic model and the context of the verified objects.
Visual inspection of the results of experiments has shown that the measures defined for the model fidelity reflect the valuations a human interpreter would qualitatively assign to the given analysis state and that the presented measures can be used successfully to guide the search in our image analysis task. Several other factors also contribute to the success of the analysis process, i.e. the compatibility measures computed for the instances and modified concepts at the different levels of the hierarchical model, although they are not in the scope of this paper. We are currently extending our system towards the recognition of composite objects like parking areas and allotments.
## Acknowledgment
This work is supported by the Deutsche Forschungsgemeinschaft (DFG).
## References
* [1] Findler, N. (1979) Associative Networks. Academic Press, Orlando.
* [2]Kummert, F., Niemann, H., Prechtel, R., and Sagerer, G. (1993) Control and explanation in a signal understanding environment. Signal Processing32, pp.111-145.
* [3]Matsuyama, T. and Hwang, V. (1990) SIGMA: A Knowledge-Based Aerial Image Understanding System. Advances in Computer Vision and Machine Intelligence. Plenum Press, New York, London.
* [4]McKeown, D., Harvey, W., and McDermott, J. (1985) Rule based interpretation of aerial imagery. IEEE Transactions on Pattern Analysis and Machine Intelligence7 (5):570-585.
* [5]Nicolin, B. and Gabler, R. (1987) A knowledge-based system for the analysis of aerial images. IEEE Transactions on Geoscience and Remote Sensing25 (3):317-329.
* [6]Nilsson, N. (1982) Principles of artificial intelligence. Springer-Verlag, Berlin.
* [7]Quint, F. (1995) An evidential merit function to guide search in a semantic network based image analysis system. Technical Report IPF-FG-11/95, University of Karlsruhe.
* [8]Quint, F. and Bahr, H.-P. (1994) Feature extraction for map based image interpretation. In Shi, X., Du, D., and Gao, W., editors, _Third International Colloquium of LIESMARS: Integration, Automation and Intelligence in Photogrammetry, Remote Sensing and GIS_, pages 1-8, Wuhan, China.
* [9]Quint, F. and Landes, S. (1996) Colour aerial image segmentation using a bayesian homogeneity predicate and map knowledge. In Proceedings of the 18th ISPRS-Congress, Vienna.
* [10]Quint, F. and Sties, M. (1995) Map-based semantic modeling for the extraction of objects from aerial images. In Grun, A., Kubler, O., and Agouris, P., editors, _Automatic Extraction of Man-Made Objects from Aerial and Space Images_, pages 307-316. Birkhauser, Basel.
* [11]Sandakly, F. and Giraudon, G. (1994). Multispecialist system for 3D scene analysis. In Cohn, A., editor, _11th European Conference on Artificial Intelligence, ECAI 94_, pages 771-775. John Wiley & Sons, Ltd.
* [12]Shafer, G. (1976) A mathematical theory of evidence. Princeton University Press.
* [13]Stilla, U. (1995). Map-aided structural analysis of aerial images. ISPRS Journal of Photogrammetry and Remote Sensing50 (4):3-10.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline i.house0067 & 0.98 & i.house0106 & 0.36 \\ \hline i.house0145 & 0.97 & i.house0184 & 0.93 \\ \hline i.house0223 & 0.84 & i.house0290 & 0.88 \\ \hline i.house0343 & 0.98 & i.house0410 & 0.86 \\ \hline i.house0449 & 0.95 & i.house0530 & 0.87 \\ \hline i.house0569 & 0.90 & i.house0636 & 0.64 \\ \hline i.house0703 & 0.95 & i.house0742 & 0.94 \\ \hline i.house0809 & 0.88 & i.house0848 & 0.85 \\ \hline i.house0887 & 0.98 & i.block0940 & 0.79 \\ \hline i.block0979 & 0.90 & & \\ \hline \end{tabular}
\end{table}
Table 1: Model fidelity for the objects of the scene in Fig. 6
Figure 6: Result of the image analysis process

The purpose of the system MOSES is the automatic recognition of objects in aerial images. To direct the model based structural image analysis, one has to evaluate each state of the analysis process. We present in this article the procedures used in MOSES to calculate a part of these valuations, the model fidelity, which is a measure for the goodness of match between the chosen image primitives and the specific model. Metrics defined on a parametric representation of the primitives are used to evaluate the model fidelity. The results of the image analysis process directed by these valuations are presented.
Key words: Aerial Image Understanding, Model, Knowledge Base, Semantic Networks
# Evaluation of a forest radiative transfer model using an extensive boreal forest inventory database
Ranjith Gopalakrishnan
[email protected]
Lauri Korhonen
Matti Mõttus
Miina Rautiainen
Aarne Hovi
Lauri Mehtätalo
Matti Maltamo
Heli Peltola
School of Forest Sciences, Faculty of Science and Forestry, University of Eastern Finland, P.O. Box 111, 80101, Joensuu, Finland
Petteri Packalen
School of Forest Sciences, Faculty of Science and Forestry, University of Eastern Finland, P.O. Box 111, 80101, Joensuu, Finland
effectively quantify and incorporate these effects into long-term forest management planning. Forest management planning systems aim at quantifying the climate effects of different management scenarios, but currently ignore the dependencies between forest structure and albedo. Integrating reflectance models into simulations of forest management and growth allows the inclusion of albedo-related climatic effects into practical forest planning.
Although a sizable number of forest radiative transfer models have been developed and described in the past literature, the Forest Reflectance and Transmittance model (Kuusk and Nilson, 2000; Nilson and Peterson, 1991, henceforth FRT) has unique advantages in the boreal zone. It is primarily designed for managed forested stands (Kuusk et al., 2014) and is computationally efficient, especially compared to complex Monte-Carlo ray tracing models such as librat (Disney et al., 2009). So far, it has been validated in the Radiative transfer Model Intercomparison exercises (RAMI, Widlowski et al., 2015, 2007) and in the boreal and hemiboreal zones, using a small set of homogeneous forested stands (Kuusk et al., 2008, 2014; Rautiainen et al., 2008; Hovi et al., 2017). However, it can be parametrized using standard forest inventory data, with the help of relevant allometric equations. The model can simulate the bidirectional reflectance factor (BRF) and the hemispherical directional reflectance factor (HDRF) of a forest stand for any given illumination and viewing geometry, for the wavelength range of 400-2400 nm.
A substantial portion of boreal forests deviate from the homogeneous single-species conditions where a majority of previous validation studies have been performed. First, a considerable part of the forest area is classified as mixed-species: \(\sim\)17% in Canada (Natural Resources Canada, 2021) and 46% in Finland (Natural Resources Institute Finland, 2018). There is also considerable variation in understory vegetation in different boreal forest types. Boreal forests commonly have open canopies, where the contribution of the forest floor to the satellite-observed BRF is substantial (Rautiainen and Lukes, 2015). This is especially the case for young forests (e.g., seedling stands before first commercial thinning) with low volume that are common in areas where commercial forestry is practiced in Southern Finland. On mineral soils, the understory vegetation ranges from barren, lichen dominated sites to herb-rich groves. Peatland forests generally have quite open canopies, varied water regimes and soil types, and hence have their own understory vegetation composition that may differ greatly from mineral soils. Further, in boreal forests seasonality affects the reflectance of both overstory and understory vegetation (Rautiainen et al., 2009, 2011; Hovi et al., 2017). Practical applications of forest radiative transfer models require that they can reliably model these variations. Another justification for our study is the fact that previous validation efforts were limited to mature stands. All these reasons motivate the need for validating FRT using in situ observational data over a wide variety of heterogeneous forest conditions, over all age classes (e.g., both young and mature).
A reliable reference dataset is essential for any effort to assess the quality of simulated remote sensing data. The Landsat 8 surface reflectance product is ideal for this purpose as it is based on the Operational Land Imager (OLI) sensor, a high-quality instrument incorporating several technical advancements (Roy et al., 2014). The product itself has been thoroughly validated against other existing products (e.g., MODIS-based) and by similar means over a large number of locations (Vermote et al., 2016). The associated radiometric accuracies of that study showed that it is a high-quality and globally consistent product. Our current work represents one of the first efforts to use this product to assess the accuracy of a physically based radiative transfer model over a large, heterogeneous set of boreal forested areas.
The main objective of this study is to comprehensively and rigorously evaluate the accuracy of the FRT model over a large forest area in the boreal zone, and for different (possibly structurally complex) forest types and seasonal conditions. This is done by using an extensive set of forest field plots and corresponding satellite images from various times of the year (i.e., spring, summer, fall). We take advantage of six years of quality checked Landsat 8 surface reflectance data. The large geographical coverage of the plot data helps us to quantify uncertainties over a wide range of European boreal forest characteristics and seasonal variations. We applied the linear mixed effects model statistical framework (Mehtätalo and Lappi, 2020) to help understand the linkage between observed FRT simulation accuracy and probable causes. Such an approach has several advantages, especially given that our goal was to formulate interpretable statistical models, making further inference relatively straightforward. Our specific research questions are: 1) How accurate is FRT in reproducing the observed BRF over a wide range of forested areas? 2) How accurately can FRT reproduce seasonal trends in forest BRF? 3) How much can various forest characteristics (e.g., tree species composition, vegetation heterogeneity and understory type) explain the observed discrepancy between FRT simulations and observed BRFs? By addressing these questions, we aim to identify aspects of the FRT modelling framework where improvements are most pertinent. This will hence pave the way towards better modelling of forest reflectance using stand-level forest inventory data in the boreal zone.
## 2 Materials and methods
### 2.1 Study area
The region of our study is Southern Finland, south of the 64°N latitude, approximately bounded by the latitude/longitude box (59.7°-64°N) and (21.1°-31.6°E). We concentrated on the southern part of the country for the following two reasons: 1) For more northern latitudes, the uncertainty in satellite-retrieved surface reflectance values increases, mainly because of longer atmospheric paths; 2) The understory vegetation differs at higher latitudes, and there is a lack of understory spectra suitable for FRT. The main tree species found in the study region are Scots pine (_Pinus sylvestris_ L.), Norway spruce (_Picea abies_ (L.) Karst.) and birches (_Betula pendula_ Roth and _Betula pubescens_ Ehrh.). In addition, a few other deciduous tree species such as aspen (_Populus tremula_ L.) and grey alder (_Alnus incana_ (L.) Moench) may also be present. The understory composition can be variable, depending on the fertility of the site. The most common understory type is mesic, dominated by mosses and dwarf shrubs, such as bilberry (various species in genus _Vaccinium_) and lingonberry (_Vaccinium vitis-idaea_ L.). The more fertile sites have an abundance of species including shrubs (e.g., honeysuckle; genus _Lonicera_), ferns, grasses and herbs. Low fertility sites are lichen-dominated, with patches of dwarf shrubs and herbs. We have also restricted our study to months when there is no or negligible snow cover on the ground or trees. This aspect will be elucidated in more detail later (Section 2.5).
### 2.2 Field plots
We used the publicly available and downloadable forest plot dataset from the Finnish Forest Centre (FFC) (Metsakeskus, 2022). They are henceforth also referred to as forest plots in this article (for more information, see the supplementary materials). The diameter at breast height (DBH), tree height and dominant tree species are available from the plot data. We use the following species categories for the plots based on the dominant species: "pine group", "spruce group" and "birch group" (Maltamo and Packalen, 2014), the latter including also other broadleaf species. Henceforth, the tree species of an FFC plot refers to this dominant tree species group.
### 2.3 Surface reflectance simulations
#### 2.3.1 FRT model
We simulated the plot-level bi-directional reflectance factor (BRF) using the Forest Reflectance and Transmittance (FRT) radiative transfer model. We chose BRF because it corresponds to the only well-validated surface reflectance related product that is available at a forest stand level scale (e.g., 30 m), via the Landsat satellites (more details to follow). The FRT model was first described in Nilson and Peterson (1991) and later significantly modified (Kuusk and Nilson, 2000; Mõttus et al., 2007). The model is classified as a hybrid-type, as it includes characteristics of both geometric-optical and radiative transfer equation-based models. FRT can simulate the BRF and the albedo (bi-hemispherical reflectance) over a given forested scene at a given point in time. The model for the forest canopy contains distinct tree crowns that are approximated by shapes such as ellipsoids. FRT works at the tree class level; there can be up to ten "tree classes", each representing a stratum of similar-sized trees of the same species in a stand. Additional parameters such as the leaf area per tree, needle or leaf clumping index and branch to leaf area ratio further define the structure of the canopy. The scattering elements are assumed to be homogeneously dispersed inside the tree crown envelopes, and the leaf angle distribution is assumed to be spherical. The ground surface is assumed to be covered by a homogeneous layer of understory vegetation. More factors relevant to the simulation (e.g., viewing and illumination geometries, wavelengths simulated) are explained in subsequent sections.
An important element in this context is terminology. In remote sensing based studies, the importance of specifying correctly and unambiguously the directional reflectance characteristics of the primary physical quantity of interest has been stressed (Schaepman-Strub et al., 2006). Our quantity of interest is the Bidirectional Reflectance Factor (BRF), as defined in Schaepman-Strub et al. (2006). It is given by the ratio of the radiance reflected from the surface of interest to that from an ideal and diffuse surface of the same area under identical view geometry and single direction illumination.
#### 2.3.2 Model parameters
Several model input parameters required by FRT were derived from the plot field measurements. In this database of FFC plots, the trees in each plot are divided into a number of strata that contain trees of the same tree species and size class. The \"strata\" in the plot data correspond to \"tree classes\" in FRT. Hence, the class-level tree densities (stems ha\\({}^{-1}\\)) and median tree statistics (diameter, height) required by the FRT were obtained directly from the plot data and used to simulate the tree stock at each plot. We did not simulate tree size variation within the plot strata. This is a simplification that is close to correct in managed forests that comprised the majority of our area of interest. This is because managed forests are thinned at a height of 10-15 m. Sub-dominant trees are removed in these thinnings, which leaves only the dominant layer where all trees are similar-sized and represent the generation of trees planted after the previous clear-cut. Some structural parameters needed by FRT but not available in the plot data were derived using allometric models or from earlier studies (Table 1). The leaf area index (LAI) was assumed to be constant for all months simulated.
An important input of the FRT model is the reflectance and transmittance spectra of the foliage and bark of the tree species, and of the forest floor vegetation. These were derived from existing spectral databases. For more information, please refer to the supplementary materials section.
### 2.4 Reference satellite data
The reference surface reflectance values for each plot were derived from Landsat 8 images. These surface reflectance products are generated using the Land Surface Reflectance Code (LaSRC) for atmospheric correction (USGS, 2020). Details about these algorithms, along with estimates of their accuracy, can be found in Vermote et al. (2016). Landsat surface reflectance products approximate the hemispherical-conical reflectance factor (case 8 in Table 2 of Schaepman-Strub et al., 2006), and are not normalized to any standard geometric configuration. The approximation of the hemispherical-conical reflectance factor to the BRF is valid under the following conditions: 1) The ratio of diffuse radiation to that of direct radiation is low ("black sky" condition); 2) the hemispherical directional reflectance factor remains constant over the full cone angle of the instrument instantaneous field of view (IFOV). The first assumption is justified, considering that Landsat images are acquired over Finland close to local noon, when the sun is nearest to the zenith position. In clear sky conditions, diffuse radiation is typically less than 10% of the total incoming radiation (Jones and Vaughan, 2010). The second assumption is also justified, considering that the instantaneous field of view is small. Surface reflectance products have been assumed to be approximations of BRF in previous literature; for an example involving the Sentinel-2 satellite data, see Hadi and Rautiainen (2018).
\begin{table}
\begin{tabular}{l p{100pt} p{100pt} p{150pt}} \hline \hline
Sl. & Variable name & Values & Reference \\ \hline
1. & & & \\
2. & & & \\
3. & Dry mass of foliage/needles (kg) & From allometric model & Pine: eq. A4 in Repola (2009); spruce: eq. A10 in Repola (2009); birch: eq. 12 in ... \\
4. & Leaf mass per unit area (g/m\({}^{2}\)) & Pine: 158, spruce: 200, birch: 57 & Same as Hovi et al. (2016), see Table 3 therein. \\
5. & Ratio of the branch area to leaf area & Pine: 0.18, spruce: 0.18, birch: 0.15 & Same as Hovi et al. (2016), see Table 3 therein. \\
6. & Tree distribution parameter & 1.2 (e.g., slightly regular and clustered) & Same as Hovi et al. (2016), see Table 3 therein. \\
7. & & & \\
8. & & & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Sources used for model parameters specific to the main tree species.
\begin{table}
\begin{tabular}{l l p{200pt} l} \hline \hline
Fixed effect & Variable name & Description, possible values & Categorical or continuous \\ \hline
TS & tree species & The dominant tree species (group) for the plot: pine (1), spruce (2) or birch (3) & Categorical \\
FC & fertility class & Understory type (see Table S2). Can be OMaT (1), OMT (2), MT (3), VT (4), CT (5), ClT (6) & Categorical \\
ST & soil type & Whether the plot is situated on mineral soil (1) or peatland (2: spruce bog, 3: pine bog) & Categorical \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Fixed effects considered for the models. All variables except d\({}_{\text{ms}}\) and SG\({}_{\text{thresh}}\) were derived from the FFC plot database; d\({}_{\text{ms}}\) and SG\({}_{\text{thresh}}\) were calculated from the satellite image metadata (date of acquisition).
### 2.5 Plot-level linking of Landsat and FRT reflectances
This study was restricted to the snow-free months from May to October (both inclusive). This restriction was made partly because of the difficulty of acquiring representative snow spectra over our large region and partly because FRT currently does not have an option to account for snow on trees.
We first selected all plots that had been inventoried between the calendar years of 2014 and 2019 (both inclusive) and were south of the 64°N latitude; this yielded a total of 58,344 plots. These calendar years were chosen keeping in mind the availability of Landsat 8 data. We then used the Google Earth Engine (Gorelick et al., 2017) to associate pixels from several Landsat 8 images with each selected plot, if possible. A particular pixel was associated with a plot if:
1. The 30 \\(\\times\\) 30 m square pixel contained the centre of the plot.
2. There was a maximum of \\(\\pm\\)6 months temporal difference between the date of measurement of the plot and that of Landsat image acquisition.
3. The Landsat image was acquired between the months of May and October (both inclusive).
4. The pixel was without clouds or cloud shadow effects.
5. Snow was not present in the pixel.
6. The Landsat image was classified as \"high quality\".
The last three conditions were based on binary flags and metadata that were part of the surface reflectance product; these had been estimated using the Landsat 8 OLI bands (USGS, 2020). We also dropped some plots that were less than 100 m from each other to minimize autocorrelation effects. At this point, we had a set of plots, where each such plot was associated with one or more suitable Landsat 8 pixels. Next, we define an _observation_ as an event when a Landsat image acquisition has happened over a given plot. Such an observation is associated with a unique combination of the following.
* A unique FFC forest plot
* A Landsat image, with associated acquisition date and footprint
In all, we had a total of 17,573 such observations after the above screening conditions had been applied. These involved 12,369 unique plots in 5139 L-shaped clusters. These observations had 858 unique Landsat images associated with them.
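The screening logic listed above can be summarized with a small R sketch; the actual selection was performed in Google Earth Engine, so the candidate table and column names below (cand, image_date, cloud, etc.) are purely illustrative assumptions.

```r
library(dplyr)

# Toy candidate table: one row per (plot, Landsat pixel) pair; all values invented
cand <- data.frame(
  plot_id       = c(1, 1, 2, 3),
  image_id      = c("A", "B", "A", "C"),
  plot_date     = as.Date(c("2016-06-01", "2016-06-01", "2017-09-10", "2015-07-20")),
  image_date    = as.Date(c("2016-07-15", "2016-12-01", "2017-06-30", "2015-08-02")),
  cloud         = c(FALSE, FALSE, TRUE, FALSE),
  cloud_shadow  = FALSE,
  snow          = FALSE,
  image_quality = c("high", "high", "high", "low")
)

obs <- cand %>%
  filter(abs(as.numeric(image_date - plot_date)) <= 183,     # within +/- 6 months
         as.integer(format(image_date, "%m")) %in% 5:10,     # May to October
         !cloud, !cloud_shadow, !snow,                       # QA flags clear
         image_quality == "high") %>%
  distinct(plot_id, image_id, .keep_all = TRUE)              # one row per observation
obs
```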
The viewing and illumination geometries associated with the FRT based simulation for each observation are important considerations. The solar azimuth and elevation angles at the plot location for each observation were computed using an open-source solar position code (Reda and Andreas, 2004). Meanwhile, it was assumed that the satellite was directly above the plot at that time; hence the view zenith angle was taken as zero. We also assumed a direct illumination ("black sky") condition, with zero diffuse sky lighting. We simulated the BRF (nadir view) of each such observation using FRT, for the wavelengths between 400 and 1700 nm, using 5 nm width bands. The plot measurement data and the sun position associated with the Landsat acquisition were the primary inputs for the FRT model. We also dropped forest strata comprising very small trees (i.e., those with mean height less than 2.0 m or mean diameter less than 0.5 cm) from these simulations. This was done keeping in mind the ranges associated with the allometric models used.
The wavelength-specific FRT output BRFs were processed into Landsat-8 band specific ones using the relative spectral response curves for the OLI instrument. This was done by taking a weighted average of all FRT (narrowband; 5 nm) bands that mapped onto a given Landsat band. The weighting was based on the spectral response curve of Landsat-8 OLI (Barsi et al., 2014). In this work, we analyzed four of those spectral bands: green (532-590 nm), red (635-673 nm), near infrared (NIR, 850-878 nm) and short-wave infrared 1 (SWIR1, 1566-1651 nm). At this point, we had both the band specific Landsat BRFs and the FRT simulated ones for each of the 17,573 observations.
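A minimal R sketch of this band-averaging step is shown below; the relative spectral response values used in the example are illustrative only and do not reproduce the actual Landsat 8 OLI response curves of Barsi et al. (2014).

```r
# Minimal sketch: collapse narrowband (5 nm) FRT reflectances into a Landsat 8
# OLI band as a weighted average, with the relative spectral response (RSR)
# of the band as weights. 'rsr' is assumed to be tabulated on the same 5 nm
# wavelength grid as the FRT output.
to_landsat_band <- function(wavelength_nm, brf_frt, rsr) {
  stopifnot(length(wavelength_nm) == length(brf_frt),
            length(brf_frt) == length(rsr))
  sum(brf_frt * rsr) / sum(rsr)
}

# Example: a toy red band (635-673 nm) on a 5 nm grid
wl  <- seq(635, 670, by = 5)
brf <- c(0.031, 0.032, 0.033, 0.032, 0.031, 0.030, 0.030, 0.031)
rsr <- c(0.40, 0.85, 0.97, 1.00, 0.99, 0.95, 0.80, 0.35)   # illustrative only
to_landsat_band(wl, brf, rsr)
```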
### 2.6 General trends in accuracy
We first compared FRT simulated BRFs with Landsat-measured ones, for the summer months (June, July and August) using a set of scatter-plots. The statistical significance of the observed discrepancies (over-estimation or underestimation) was tested by using linear mixed effects models, keeping in mind the grouped structure of the underlying data (explained in detail later).
We then examined the temporal trends in both simulated and observed BRF values using a selected set of FFC field plots. These plots were selected such that each had a temporal series of Landsat images associated with them. In other words, they were such that for each plot, Landsat images were available over it for all snow-free months (May to October) for any particular year (year could be any between 2014 and 2019). That is, only plots that had at least one observation for each of the six months in any particular year were selected for this analysis. When multiple observations were available for a month, one of them was arbitrarily chosen. Then, for each such observation, an FRT simulation was done using the same illumination geometry as the associated Landsat image. The forest stand characteristics for the simulation were obtained from the associated plot characteristics. The selected set of plots were categorized based on dominant tree species and volume of timber. Then, the average BRF (both FRT simulated and Landsat based) was calculated for each category and for each month. These averages were then analyzed as band-specific seasonal trajectories of different forest types in the study area.
### 2.7 Statistical analysis
For each observation, we computed the difference between Landsat-measured and model-simulated BRF which we henceforth call the error in BRF simulations in the Landsat 8 red and NIR bands as:
\[\mathrm{e_{Red}}=\mathrm{BRF_{Red,\,FRT}}-\mathrm{BRF_{Red,\,Landsat}} \tag{1}\]

\[\mathrm{e_{NIR}}=\mathrm{BRF_{NIR,\,FRT}}-\mathrm{BRF_{NIR,\,Landsat}} \tag{2}\]

Where \(\mathrm{e_{Red}}\) and \(\mathrm{e_{NIR}}\) denote the error in the red and NIR bands, respectively; \(\mathrm{BRF_{Red,\,FRT}}\) and \(\mathrm{BRF_{NIR,\,FRT}}\) represent the BRFs simulated by FRT in the respective bands, and \(\mathrm{BRF_{Red,\,Landsat}}\) and \(\mathrm{BRF_{NIR,\,Landsat}}\) are the BRFs observed by the Landsat 8 satellite.
We developed a set of linear regression models linking the error in BRF simulations with several potential explanatory factors to attribute the observed error to probable causes, and to estimate the relative importance of these causes. Our dataset of observations had a distinct grouped structure, with several grouping factors; the grouping is in parameter-space. We had several observations associated with each plot, which is similar to a repeated-measures experiment design. Moreover, it is important to factor in the grouping structure of Landsat images: each image is associated with a unique atmospheric condition and associated atmospheric correction artefacts. Several observations (from several forest plots) may be associated with each such image. We used mixed models, which provide a statistically sound framework to analyze such grouped data, especially when the sample sizes in some groups may be small (Mehtatalo and Lappi, 2020). Mixed models are easier to work with, compared to alternatives such as nonlinear mixed-effects models and hierarchical Bayesian models. We analyzed only the red and NIR Landsat bands in detail, using mixed models because these two bands are a parsimonious set that adequately characterize the crucial aspects of vegetation reflectance. For vegetation, red correlates with other visible bands, and NIR correlates with other infrared bands. Meanwhile, these bands are not strongly correlated with each other (Jones and Vaughan, 2010).
A general matrix-based form of a mixed model where \(\mathrm{e_{Red}}\) and \(\mathrm{e_{NIR}}\) are the dependent variables, while forest plot characteristics are used as independent variables, is as follows:
\\[\\mathbf{y}=\\mathbf{X}\\mathbf{\\hat{0}}+\\mathbf{Zb}+\\mathbf{\\varepsilon} \\tag{3}\\]
Where \\(\\mathbf{b}\\sim\\) N (**0**, **G**), \\(\\mathbf{\\varepsilon}\\sim\\) N (**0**, **R**), and cov (**b**, \\(\\mathbf{\\varepsilon}\\)) = **0**.
Where, **y** is a vector of error values in a given band (\(\mathrm{e_{Red}}\) or \(\mathrm{e_{NIR}}\)) associated with \(n\) observations, **X** is the \(n\times p\) design matrix for the \(p\) fixed effects (independent variables), and \(\boldsymbol{\beta}\) is a \(p\times 1\) vector of coefficients associated with the \(p\) fixed effects, **Z** is an \(n\times q\) design matrix for the \(q\) random group effects, **b** is a \(q\times 1\) vector of random group effects and \(\boldsymbol{\varepsilon}\) is the \(n\times 1\) vector of residual errors (Mehtätalo and Lappi, 2020). Further, **G** and **R** are variance-covariance matrices: \(\mathbf{G}=\text{var}(\mathbf{b})\); \(\mathbf{R}=\text{var}(\boldsymbol{\varepsilon})\).
We used the _lme4_ package (Bates et al., 2015) in the R environment to formulate the mixed models and estimate the coefficients \(\boldsymbol{\beta}\) and \(\mathbf{b}\). Separate models were formulated for the mean error in the red band (mod.meanerr.red) and the NIR band (mod.meanerr.NIR). Models were formulated as described in Mehtätalo and Lappi (2020). All fixed effect predictor variables were scaled and normalized before being tried out in the models: we scaled them so that their mean was 0.0 and standard deviation was 1.0. This was done to make the inter-comparison between their associated coefficients possible.
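A minimal R sketch of such a model formulation is given below; it is illustrative only and does not reproduce the exact set of fixed effects retained in the final models (the simulated data frame and its column names are assumptions).

```r
library(lme4)

# Toy data standing in for the real observation table (all values simulated)
set.seed(1)
n   <- 400
dat <- data.frame(
  e_red   = rnorm(n, 0.01, 0.02),                  # red-band error, eq. (1)
  TS      = factor(sample(c("pine", "spruce", "birch"), n, TRUE)),
  FC      = factor(sample(1:5, n, TRUE)),
  V       = runif(n, 0, 300),                      # timber volume, m3/ha
  GCd     = runif(n, 0, 0.5),                      # Gini coefficient of diameter
  dms     = sample(-30:120, n, TRUE),              # days to midsummer
  plotID  = factor(sample(1:150, n, TRUE)),
  imageID = factor(sample(1:40, n, TRUE))
)

# Continuous predictors scaled to mean 0, sd 1 so coefficients are comparable
dat[c("V", "GCd", "dms")] <- lapply(dat[c("V", "GCd", "dms")],
                                    function(x) as.numeric(scale(x)))

# Fixed effects plus crossed random intercepts for plot and Landsat image
mod_meanerr_red <- lmer(e_red ~ TS * FC + V + GCd + dms +
                          (1 | plotID) + (1 | imageID), data = dat)
summary(mod_meanerr_red)

# Marginal / conditional R2 in the sense of Nakagawa & Schielzeth (2013)
# can then be obtained, e.g. via MuMIn::r.squaredGLMM(mod_meanerr_red)
```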
Categorical variables were added as dummy variables, representing each class. For soil type (ST), categories 2 and 3 denote that the plot is on peatland soil. Further, ST\({}_{\text{Type}=1}\) denotes the categorical dummy variable indicating whether the plot is situated on mineral soil (i.e., values 0, 1), etc. For tree species (TS), the birch group is dominated by the birches, but a few other broad-leaved trees may also be present. Further, TS\({}_{\text{sp}=1}\) denotes the categorical dummy variable indicating dominance by the pine group (i.e., values 0, 1), etc. Fertility classes (FC) 5 and 6 are clubbed into a single level, "5". Moreover, FC\({}_{\text{Class}=1}\) denotes the categorical dummy variable indicating fertility class 1 (i.e., values 0, 1), etc. The categorical variable birch state (SG\({}_{\text{thresh}}\)) was introduced to factor in a significant discontinuity in the birch leaf spectra, i.e., between _Spec.lateAugust_ and _Spec.earlyOctober_ (Table S1). The latitude (\(\varphi\)) of the plot was included as a fixed effect, to factor in north-south effects. We included the volume of timber (V), as it is a proxy for the stem density and maturity level of the trees. The Gini coefficient of diameter (GC\({}_{\text{d}}\)) was obtained by applying the _ineq_ function of the R package _ineq_ (Zeileis & Kleiber, 2014) to the diameters of the trees. Values range from 0 (all trees are of the same diameter) to 0.5 (there is considerable variation in the diameter of trees). The Shannon index (H\({}_{\text{sp}}\)) quantifies the species diversity of the plot; we consider only the three species groups in this case. The index was computed using the R function _diversity_ in the _vegan_ package (Oksanen et al., 2022). Values ranged from 0 (when there is only one species present) to 1.1 (= ln(3); all three species groups present in equal tree counts). The value of d\({}_{\text{ms}}\) (days to midsummer) was obtained by the formula (DOY - 178), where DOY is the day of year. This was used mainly to account for the fact that all our understory spectra are from summer.
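The following R sketch illustrates how these plot-level predictors can be computed; the tree list, species counts and acquisition date are invented for the example, and the exact function calls used in the original workflow may differ.

```r
library(ineq)    # Gini coefficient
library(vegan)   # Shannon diversity index

# Toy tree list for one plot (diameters in cm) and tree counts per species group
dbh            <- c(12.5, 14.0, 13.1, 25.3, 8.9)
species_counts <- c(pine = 3, spruce = 1, birch = 1)

GC_d <- Gini(dbh)                                      # 0 = all trees equal-sized
H_sp <- diversity(species_counts, index = "shannon")   # 0 ... ln(3)

# Days to midsummer from the Landsat acquisition date (DOY - 178)
acq_date <- as.Date("2016-08-20")
d_ms     <- as.integer(format(acq_date, "%j")) - 178

c(GC_d = GC_d, H_sp = H_sp, d_ms = d_ms)
```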
The mixed models were formulated in the following way. First, we formulated a version of the model that included all fixed effects we considered possible (Table 2). We also considered two interactions, one between the tree species and fertility class (TS:FC) and another between the Gini coefficient of diameter and the dominance of spruce trees (GC\({}_{\text{d}}\):TS\({}_{\text{sp}=2}\)). Subsequently, we identified and discarded those fixed effects and interactions that were statistically insignificant; i.e., the p-value associated with their likelihood ratio test (Pinheiro and Bates, 2006) was more than 0.05.
Similarly, several random effects were initially included (Table 3). The plotID accounts for the fact that some plots are observed by Landsat several times. The clusterID is incorporated because each plot belongs to a particular L-shaped cluster. The imageID is unique for each Landsat image (typically \(185\times 185\) km), which might cover a large number of forest plots. The grouping by imageID is done so that Landsat image specific atmospheric effects, and other such artefacts, are taken into account. Even though atmospheric correction is carried out on all images, related artefacts can still be present. Lastly, provinceName takes into account the fact that our study area in southern Finland consists of 17 administrative provinces. This thus accounts for some local geographic effects. A random effect was subsequently discarded if it explained less than 5% of the residual variance. This threshold was arbitrary; the discarding was done so that the final mixed models would be as parsimonious as possible. Random effects were only included as random intercepts. Further, they were added as crossed effects, with respect to each other. The marginal and conditional \(R^{2}\) values associated with the final models were computed using the method of Nakagawa and Schielzeth (2013). The significance of the random variables was estimated by computing the percent of residual variance explained by them.
### 2.8 Relative contributions of spectral and geometrical components
In general, the magnitude of error associated with an FRT reflectance simulation can be broadly attributed to three causes: 1) lack of representative foliage or understory spectra; 2) inaccuracies due to simplification or misrepresentation of physical reality while creating the inputs for the FRT model or via the associated allometric models (e.g., estimation of crown dimensions from tree diameter and height); and 3) the simplifications of the radiative transfer computations in FRT. We combined the last two causes into a generic modelling error component. Thus, we conceptualized two broad FRT error causes: 1) insufficient spectral data, 2) modelling errors. We then designed an analysis to partition the error magnitude between these two causes. For this, we defined the following sets of observations.
* _All_: This consists of all observations available to us, irrespective of forest type, forest plot location or month of Landsat image acquisition. This consists of 17,573 observations derived from 12,369 unique forest plots. The months associated with these observations ranged from May to October.
* _SpectrallyMatched_: Here, we identified a subset of set _All_ for which the spectral data used as input to the FRT model is well matched with the actual spectra of the various elements associated with the plot (i.e., foliage, understory). Specifically, we only included observations for which: 1) the plots were from the Pirkanmaa region in Southern Finland (where our input needle and understory spectra were collected); 2) the fertility classes were OMT and MT types (which are well represented in our measured spectra); 3) the Landsat image was collected during summer, thus matching the season of the understory spectra. In all, 634 observations qualified for this set, representing 548 plots, including ones from seedling stands and mixed stands.
* _SpectrallyMatched.StructurallySimple_: This is a subset of _SpectrallyMatched_, where we apply two more conditions: 1) the plot consisted of even-sized trees of a single species (number of strata is 1), and 2) they were mature stands (_volume_ \(\geq\) 100 m\({}^{3}\) ha\({}^{-1}\)). In this set, there were 84 observations based on 77 unique forest plots.
\\begin{table}
\\begin{tabular}{l c c c} \\hline Trajectory (volume category) & Pine & Spruce & Birch \\\\ \\hline Less than 20 m\\({}^{3}\\) ha\\({}^{-1}\\) & 21 & 17 & 10 \\\\ Between 20 and 100 m\\({}^{3}\\) ha\\({}^{-1}\\) & 50 & 28 & 29 \\\\ Greater than 100 m\\({}^{3}\\) ha\\({}^{-1}\\) & 149 & 82 & 40 \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Number of forest plots associated with each trajectory shown in Fig. 4.
\\begin{table}
\begin{tabular}{l l} \hline Random effect & Description \\ \hline plotID & FFC plot ID. Unique to each temporary plot created. \\ clusterID & Unique ID of the cluster that the plot is part of. \\ imageID & Unique ID of the Landsat image. \\ provinceName & The name of the administrative province containing the plot. \\ \hline \end{tabular}
\\end{table}
Table 3: Random effects considered for the models.
These represent plots where the forest canopy is more amenable to be well represented in FRT.
We then determined the RMSEs associated with each of the three sets given above. First, consider the sets _All_ and _SpectrallyMatched_ and their associated RMSEs. The decrease in RMSE between set _All_ and set _SpectrallyMatched_ roughly quantifies the benefit of well-representative spectra. That is, it quantifies the benefit the FRT framework would have if representative field spectra were available for the entire study area, and over all seasons of the year. Similarly, when one compares _SpectrallyMatched_ and _SpectrallyMatched.StructurallySimple_, the difference in RMSEs roughly quantifies the inaccuracy due to FRT's simplification of vegetation structure, for young and mixed stands (i.e., in _SpectrallyMatched_). Hence, the relative decrease in bias and RMSE between _SpectrallyMatched_ and _SpectrallyMatched.StructurallySimple_ quantifies the effect of such simplification on the RMSE statistics.
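A minimal R sketch of this comparison is given below, assuming a data frame with one row per observation and a column identifying which of the three sets it belongs to; all names and values are invented for the example.

```r
# Toy stand-in for the observation table (values invented for illustration)
obs <- data.frame(
  set         = rep(c("All", "SpectrallyMatched",
                      "SpectrallyMatched.StructurallySimple"), c(30, 12, 6)),
  brf_frt     = runif(48, 0.02, 0.06),
  brf_landsat = runif(48, 0.02, 0.06)
)

err_stats <- function(frt, landsat) {
  e <- frt - landsat
  c(bias = mean(e), rmse = sqrt(mean(e^2)), n = length(e))
}

# Bias and RMSE per observation set; the drop in RMSE from "All" to
# "SpectrallyMatched" approximates the share of error attributable to spectra
do.call(rbind, lapply(split(obs, obs$set),
                      function(d) err_stats(d$brf_frt, d$brf_landsat)))
```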
## 3 Results
### 3.1 BRF estimation accuracy
FRT has a general tendency towards overestimation of BRF values compared with Landsat (Fig. 1), especially for birch-dominated plots. All these overestimations were found to be statistically significant. The magnitude of these overestimations is smaller in the visible bands (green and red) and larger in the NIR and SWIR1 bands. The model performs best in the case of pine and spruce dominated plots, and for the green and red bands. This can be inferred by examining the bias and RMSE statistics associated with each subfigure of Fig. 1 (Fig. 2). Fig. 2 shows that the bias of FRT simulated BRFs ranges from a low of \(\sim\)0.008 to a high of \(\sim\)0.08. RMSE values range from a minimum of \(\sim\)0.01 to a maximum of \(\sim\)0.09. The magnitude of the estimated error is the smallest in the visible bands.
We generated two scatterplots of two representative subfigures from Fig. 1, to understand and illustrate the effect of stand maturity on BRF
Figure 1: Scatterplots of FRT simulated BRFs versus Landsat-measured ones, for the summer months. Each colored point in the scatterplot represents an observation, count indicates the number of observations represented by that colour. The dominant tree species of the forest plot and the Landsat 8 OLI band (Green, Red, NIR, SWIR1) is indicated at the top of each scatterplot. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
### 3.2 Mean error
Our mixed model based analysis indicates that 27% of the variance of the error in the red band was explained by the fixed effects (the marginal \(R^{2}\) of the model mod.meanerr.red is 0.27) and as much as 79% of the variance was explained by the combination of fixed and random effects (the conditional \(R^{2}\) of the model mod.meanerr.red is 0.79). For the mixed model for mean error in the NIR band, mod.meanerr.NIR, the marginal and conditional \(R^{2}\) values are 0.19 and 0.65, respectively. The fixed effects used in the final models (mod.meanerr.red, mod.meanerr.NIR) along with their coefficient values are given in Table S3.
The dominant tree species of the plot and the season of the year were the most important factors influencing the magnitude of the error in the red band (Fig. 5). The set of fixed effects and their interactions in both formulated models (mod.meanerr.red, mod. meanerr.NIR; Table S3) can be split up into two distinct factor sets: 1) tree species, fertility class, and
Figure 4: The monthly trends of BRF simulated by FRT together with the measured curves from Landsat observations. The month ranges from May (5) to October (10). The dominant species of the forest plot and the Landsat band are indicated at the top of each graph. The number of plots associated with each trajectory line of the figure can be seen in Table 4.
Figure 5: Comparison of the various components of the mixed model for mean error, red band (mod.meanerr.red). (a) Interaction plot of tree species (TS) and fertility class (FC) on the error seen. (b) Estimates of the other fixed effect coefficients. The categorical variable SG\({}_{\text{thresh}}\) (spectral group for birch), with coefficient value 0.0368 in the mixed model, is left out from the figure, because it is only applicable to a small subset of observations. The y-axes of (a) and (b) are of the same scale, and hence the effect on the error of each variable is intercomparable. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

their interactions; 2) the rest of the variables, like soil type, timber volume and the Gini coefficient. The relative contribution to the mean error of the red band as per these two sets in the mod.meanerr.red model is shown in Fig. 5. Fig. 5(a) shows the effect of the interaction of tree species and fertility class, when all other factors are held at such levels that they do not contribute to the error. Significant stand-alone effects are seen for tree species (TS) and fertility class (FC), along with only slight interaction effects between the two. Hence, the tree species is the most important factor; the simulated BRF values are most overestimated when the plot is dominated by birch or other broad-leaved species. This is seen across all fertility classes too. Smaller, but still consequential overestimations can be seen for pine and spruce plots too. BRF overestimation magnitude increases with an increase in several other variables (Fig. 5(b)). The most important of them is the time of year (d\({}_{\text{ms}}\)); the large and positive value of the coefficient implies that observations from latter parts of the year are associated with higher levels of overestimation. Soil types 2 and 3 (spruce bog, pine bog) are associated with increased overestimation, when compared to mineral soils. The importance of tree-level heterogeneity can also be seen: plots with more tree size heterogeneity (GC\({}_{\text{d}}\)) and tree species diversity (H\({}_{\text{sp}}\)) tend to have higher overestimations by FRT. We see that plotID and imageID are significant random effects (Table 5); together, they explain \(\sim\)70% of the variance left over after accounting for the fixed effects.
Tree species, soil type and timber volume are the most important factors influencing the magnitude of error observed in the NIR band (Fig. 6). The components of the figure are similar to those of Fig. 5. That is, it illustrates the magnitude of the coefficients of the mean error in the NIR band model (mod.meanerr.NIR) as per the two sets of factors (see above). Tree species and fertility class showed small interaction effects between them, along with significant stand-alone effects (Fig. 6(a)). Birch and pine dominated plots are associated with relatively large BRF overestimations. Meanwhile, FRT underestimates the BRF for spruce dominated plots with fertility class 5 (poorest fertility). Most other fixed effect variables considered are associated with overestimations, except for tree size heterogeneity (GC\({}_{\text{d}}\)) (Fig. 6(b)). Unlike the model for the error in the red band, the timber volume (V) is also shown to be important in this model. Higher timber volumes are associated with overestimations of NIR band reflectance.
Certain individual forest plots and Landsat images are associated with more BRF simulation error magnitudes than others (Table 5). Both plotID and imageID are important random effects; as much as 42% residual variance is explained by imageID for the red band. Meanwhile, the plot cluster does not have much explanatory power (associated value is \\(\\sim\\)5%).
### 3.3 Relative contribution of spectral and geometrical components
We had defined three distinct sets of observations in an effort to separate out the contribution of spectral and geometrical components of the FRT framework to the observed error (see section 2.8). Considerable differences in RMSE statistics are seen between the sets _All_, _SpectrallyMatched_ and _SpectrallyMatched.StructurallySimple_ (Fig. 7). The percent decrease in RMSE when switching from set _All_ to set _SpectrallyMatched_ is seen in part (a) of the figure. The RMSE associated with the red band BRFs of spruce dominated forest plots drops by as much as 32%, when one compares such plots between sets _SpectrallyMatched_ and _All_ (there were 315 spruce-dominated plot observations in the set _SpectrallyMatched_). The median drop in RMSE seen in Fig. 7(a) is 17.8%, and most percent decrease values are in the range of 20-30%. This implies that as much as 20-30% of the RMSE in a typical FRT simulation (i.e., set _All_) is due to the use of non-representative spectra, for our study area. The associated median statistics of Fig. 7(b) imply that an additional \(\sim\)5% of the RMSE of a typical FRT simulation can be reduced, given better geometric representations of reality in the FRT model.
## 4 Discussion
### 4.1 General considerations
In this article, we quantified the accuracy of the FRT reflectance simulation model using data (12,369 forest plots) from an operational forest inventory database and corresponding satellite imagery spread over six months. The FFC forest plot data is publicly available and freely downloadable, which supports the repeatability of our set of experiments. Hence, the current effort represents a significant improvement over other FRT-related accuracy quantification efforts, due to its relatively larger geographical and seasonal coverage. Our results indicate that FRT is relatively capable of simulating the Landsat BRF values for a sizable fraction (65%) of the cases (Figs. 1-4). That is, bias values as low as 0.01-0.03 and RMSE values as low as 0.02-0.05 were observed over a large number (\(\sim\)11,500) of observations over pine and spruce dominated plots, in the red and NIR bands (Fig. 3). This represents \(\sim\)65% of the 17,573 observations we considered. As a reference for comparison, Rautiainen and Stenberg (2005) had compared BRFs simulated by the PARAS forest radiative transfer model with Landsat observations, using 800 forest stands. They reported RMSEs in the order of 0.1 units for the red band and 0.05 units for the NIR band. Discrepancies of a similar order of magnitude were reported between the compared models for these two bands in the latest round of RAMI model validation exercises (Widlowski et al., 2015). These statistics show that FRT compares well with other similar models for some cases, thus highlighting its overall potential. Meanwhile, RMSEs on the higher side (as high as 0.03 to 0.09) can also be seen in Fig. 2, which shows the need for further work to improve the framework.
### Factors explaining FRT simulation inaccuracy
We used a mixed modelling framework to attribute and understand the relative importance of the causes for the observed discrepancies (i.e., error) between FRT simulated and satellite BRFs. We envision that such analyses would help the FRT developers to better focus their efforts for improving the FRT simulation framework. The model for mean error in red band (mod.meanerr.red) helps us understand the relative importance of several variables related to the forest plot and date of image acquisition (Fig. 5). Tree species is identified as the most important variable. The time of the year (d\\({}_{\\rm m}\\)) is also identified as an important factor that determines the magnitude and direction of the error; this implies the importance of having representative spectra for all months of the year. This can also be a consequence of our assumption that the canopy LAI remains constant for all months considered, especially for birch dominated plots. Important implications of the model for mean error in NIR band (mod.meanerr.NIR) can be deduced from Fig. 6. The tree species is decisive here too: the BRF is considerably overestimated in pine and birch dominated plots. Tree size heterogeneity and timber volume are also shown to be important variables.
To roughly partition the error magnitude between that caused by non-representative spectra and that arising from geometric representation issues, we had defined three sets of observations: _All_, _SpectrallyMatched_, and _SpectrallyMatched_,_StructurallySimple_. The reduction in RMSEs associated with the two latter sets (Fig. 7) implies that \(\sim\)20-30% of the RMSE of set _All_ is attributable to non-representative spectra. Further, \(\sim\)5% of the RMSE seems to be related to geometric representation issues in the FRT model. The pattern of colours of the squares of the figure further
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline Random effect & mod.meanerr.red & mod.meanerr.NIR \\\\ \\hline plotID & 29.9 & 28.7 \\\\ clusterID & NA & 5.4 \\\\ imageID & 41.9 & 22.7 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Percent residual variance explained by random effects in the two mixed effects models formulated.
indicates that spruce stands are most affected by these two issues, especially for the green and red bands.
### Importance of tree species, understory type and tree size heterogeneity
The tree species, the fertility class and their interactive effects were the most important factors explaining the mean error and its variance (Figs. 5 and 6). Fig. 5 indicates that the most important factor is tree species as per the mixed model mod.meanerr.red: it can increase the mean error by as much as 0.01 units. Tree species is an important driving factor of forest reflectance or albedo (Kruszikhin et al., 2013; Kutsinen et al., 2014). Kutsinen et al. (2014) report that in middle aged or mature forests, forest albedo is influenced more by tree species composition than even Leaf Area Index (LAI) or canopy cover. Again, in birch and other broad-leaved (deciduous) stands, the forest floor dominates the total reflectance during the early and latter parts of the year. Thus, for these plots, uncertainties in the understory spectrum are more manifest in the plot-level reflectance values. We also found that several other variables such as soil type, tree size heterogeneity, species diversity and volume affect the error magnitude in the red and NIR bands.
Some fertility classes were associated with higher levels of BRF overestimation in some mixed models; e.g., classes 1 and 5 in the red band and class 4 in the NIR band (Figs. 5 and 6). In general, the spectral-directional scattering behavior exhibited at the understory level of Fennoscandian forests can be very different, depending on the species present (Forsstrom et al., 2021). Again, the composition of the forest floor can depend on the overstory tree density; a previous work found a 33% correlation between them (Majasalmi and Rautiainen, 2020). Temporally resolved seasonal understory spectra
Figure 6: The components of the mixed model for mean error in the NIR band (mod.meanerr.NIR): a) interaction plot of the tree species (TS) and fertility class (FC) on the error in the red and NIR bands; b) estimates of the other fixed effect coefficients. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Figure 7: Per-species and per-band decrease in RMSE (%) associated with the set, when compared to set _All_. The numbers inside the brackets are the number of observations associated with that statistic in the set. The colour of the squares helps identify low and high values: green colour indicates the highest (%) value among the 12 associated squares, while red indicates the lowest value. (a) Decrease in RMSE (%) associated with set _SpectrallyMatched_, when compared to set _All_. (b) Decrease in RMSE (%) associated with set _SpectrallyMatched_,_StructurallySimple_, when compared to set _All_. For an expanded version of this figure, see Fig. S2. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
are also lacking. For a good discussion about the challenges of modelling understory elements in BRF simulations (birch stands), see Rautiainen et al. (2009).
Our models implied that stand-level tree class heterogeneity, both in terms of size classes and species, was an important source of error (Figs. 5 and 6). Spruce plots with unequal-sized trees are especially prone to FRT overestimations in the NIR band. This may be partly due to the fact that we assumed all trees in a stratum to have the same size as the median tree. Meanwhile, plots that are more diverse species-wise (i.e., mixed forests) tend to have more error (Figs. 5 and 6); this is mostly because the internal representation of these stands in FRT diverges considerably from reality.
### Satellite derived reference BRFs
We have used Landsat derived BRF estimates as our reference values. But for many cases, the satellite estimated BRFs may deviate from the true BRFs at the land surface. We found that imageID was an important random effect in both our mixed models: it explained 41.9% and 22.7% of the residual variance for the red and NIR band models, respectively (Table 5). The implication is that some Landsat images were associated with relatively higher error magnitudes than others. This further suggests the need for better atmospheric correction in the Landsat surface reflectance product. Our analysis of some associated satellite images suggests that cloud-wings could be a source of error; they are sometimes flagged as "clear" pixels in the Landsat surface reflectance product. Generating good cloud masks is problematic and is an active area of research. For a recent intercomparison study of several such algorithms and the challenges that still remain, see Skakun et al. (2022). Uncorrected satellite measurements correspond to hemispherical directional reflectance factor (HDRF) values (Schaepman-Strub et al., 2006). Even though HDRF and BRF values are near-identical in some forested land covers (Schaepman-Strub et al., 2006, Fig. 4), scattering and shading effects of nearby terrain, vegetation and water bodies can be hard to account for and correct.
### Future avenues of related work
The above analysis and our results from mixed models suggest that a promising future avenue of improvement of the FRT framework is increasing the representativeness of the field spectra.
Specifically, we recommend that the following spectra be collected.
1. On mineral soils: Collecting understory spectra for very fertile (OMaT) sites (class 1), CT sites (class 5) and VT sites (class 4) should be a priority, as they are associated with higher levels of BRF overestimation. In Fig. 6(a), the lack of representative VT spectra, coupled with the relative transparency of the canopy for this band, is most probably the reason for the overestimation associated with this fertility class. This fertility class represents over 22% of plots in our study area.
2. On peatlands: Understory spectra should also be collected for the peatlands, i.e., soil type 2 and 3; the associated coefficients are relatively large in Figs. 5(b) and 6(b).
3. Better seasonal spectra for the months of May, September and October (both foliage and understory) would also be useful. This statement is supported by the fact that the coefficient associated with the number of days to midsummer (d\({}_{\text{mm}}\)) was positive and relatively large in the two mixed models formulated. When examining the models further, we can gather that the error increases significantly as the date of the satellite image advances beyond midsummer, keeping all other factors constant. The trajectories seen in Fig. 4 also suggest the inadequate nature of the spectra for months outside the summer period, i.e., May, September and October.
We observed that plotID was an important random effect for both mixed models formulated for mean error (Table 5). This implies that certain forest plots had specific characteristics that could not be captured by the current FRT framework. This could be related to the size class, structure, distribution of trees or vegetation present, or the terrain topography. Young stands were clearly associated with more error (Fig. 3) and improvements regarding their representation in FRT should be considered. High levels of error associated with some plots could also be related to the fact that the FFC plot and the Landsat pixel are of different shape and size, which might affect some plots more than others. Regular geometrical objects like ellipsoids, as used by FRT, may not capture the geometry of many tree crowns, which tend to be irregular. Previous work with FRT has shown the dependence of stand reflectance on tree crown shape (Rautiainen et al., 2004). These geometrical objects may also not capture the branching structure of trees, which may be pronounced and irregular in natural and old-growth forests. The contribution of woody elements such as tree trunks and first order branches to tree-level reflectance was quantified in a recent publication by Kuusinen et al. (2021), and it was estimated to vary between 0.09 and 0.2. Also, crown length and crown radius may not be well estimated in some cases by allometric equations. The representation of the spatial pattern of tree locations in the stand may not always be realistic either. Further analysis of selected plots along these lines would be helpful for improving the FRT framework further and is a promising future avenue of work.
It is extremely challenging to develop a robust reflectance model for real-world forested conditions. This is because of the highly complex set of interactions that electromagnetic radiation can undergo between the sun and the sensor. Nevertheless, our results indicate that FRT is capable of reproducing BRF values over a proportion of forest plot observations, given snow-free conditions. They also suggest that an augmented spectral library would result in considerable improvement of the simulation framework; such a library is relatively straightforward to incorporate into FRT. This includes the spectra of all elements of vegetation: leaf, needle, stem bark, branch bark and ground vegetation. All of this further suggests that FRT might ultimately be integrated into a forest management planning system, so that albedo could be used as a criterion in forest management planning, and albedo-related radiative forcing could be quantified and factored in. In this case, the simulated BRFs studied in this article could be replaced by simulated albedo. There are significant climatic benefits in managing boreal forests with albedo in mind (Bright et al., 2014). The reflectance model, in this case, should be able to realistically replicate the changes in albedo introduced by different forest management operations. But there are several significant challenges to overcome before such an integration into a forest management system happens, and we briefly touch upon some of them here. First, the managed forest stands of southern Finland are not necessarily representative of such managed or natural forests in other regions of the boreal zone. Thus, an exercise like this should be repeated with a much wider sampling of forest plots, to identify further avenues for improvement of the FRT framework. Secondly, the model framework should be verified and extended for snow-laden months. There have been previous efforts that have attempted to factor albedo into forest management decisions (Sjelie et al., 2013; Lutz and Howarth, 2014), but they were mostly coarse-scale or confined to the temperate region. Third, there is the challenge of verifying the model for fully diffuse lighting conditions, such as cloudy and hazy days. Again, the impact of terrain slope and topography has to be studied before application to more mountainous areas. Additional work with respect to computational efficiency is also needed before incorporating the FRT framework into a forest planning system. A library of precomputed albedo values as a function of forest attributes and a look-up-table type search could be a reasonable and fast solution in the simulation-optimization systems used in forest planning.
## 5 Conclusions
This study provides a broad picture of the performance of the Forest Reflectance and Transmittance model in reproducing observed reflectances over a wide variety of forest types. It is shown that FRT can reproduce observed reflectances over a proportion (65%) of the observations considered. These are predominantly mature forests, i.e., where the forest structure is relatively simple and representative input spectra are also available. However, it fails to adequately reproduce the observed BRFs for a sizable fraction of the simulated cases, especially for young stands and for non-summer months. We also studied broad seasonal trends in BRF and ascertained that FRT can generally reproduce such trends for mature forest stands, and to a lesser extent for younger stands. We used a set of mixed models to attribute the cause of the observed discrepancies to various factors. The results of these analyses provide guidance for future model improvement efforts. Previous work has shown that FRT is applicable to boreal regions outside Finland. Hence, improvements to FRT and the input data used by the model, coupled with wider-region verification efforts, would lead to more accurate reflectance modelling for a geographically wide area. The necessity to collate more geographically and temporally comprehensive spectral libraries is important to the larger community of radiative transfer modellers, as it holds for any physically based reflectance model. We also recommend improving the representation of reality in forest reflectance models, such as developing better associated forest allometric models. Both of these efforts will be advantageous to the general reflectance modelling community.
## 6 Code and data availability
The FFC forest plot data is publicly available and can be downloaded from the website (Metsakeskus, 2022). The Google Earth Engine JavaScript code for extracting plot-level surface reflectance values and the C++ code for generating FRT input files are available in the RJee007 GitHub repository: [https://github.com/RJee007](https://github.com/RJee007). The Fortran code for the specific version of FRT used in this study is available from the authors via e-mail request. Later FRT versions are available under the LGPL license.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Data availability
I have a \"code and data availability\" section where I provide details where one can get a copy of the relevant computer code & data.
## Acknowledgements
We acknowledge funding for this work from the Academy of Finland (grant number 317741 for the OPTIMAM project, grant number 337127 for the UNITE flagship, grant number 317387 for the AIROBEST project, grant number 348152 for the ARTISDIO project). A. Hovi and M. Rautiainen received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771049). The text reflects only the authors' view, and the Agency is not responsible for any use that may be made of the information it contains. We would also like to thank Dr. Roope Ruotsalainen for sharing his code related to some R/ggplot figures.
## Appendix A Supplementary data
Supplementary data to this article can be found online at [https://doi.org/10.1016/j.srs.2023.100098](https://doi.org/10.1016/j.srs.2023.100098).
## References
* Barst et al. (2014) Barst, J.A., Lee, K., Kavran, G., Markham, L.L., Pedelty, J.A., 2014. The spectral response of the Landsat-8 operational land imager. Rem. Sens., 6, 10232-10251,
* Bright et al. (2014) Bright, R.M., Anton-Fernandez, C., Astrop, R., Cherubini, F., Kavalegro, M., Stroumann, A. H., 2014. Climate change implications of shifting forest management strategy in a boreal forest ecosystem of Norway. Global Change Biol., 20, 607-621,
* Chen (2005) Chen, F., 2005. Variability in global land surface energy budgets during high-speeds: 1988-1988 simulated by an off-line land surface model. Climate Comm., 24, 667-684.
* Dvirnshadelk et al. (2019) Dvirnshadelk, R., Skidmore, A., Abdullah, H., Chernet, E., Ali, A., Wang, T., Niewenhuis, W., Heurich, M., Vrticins, A., O'Connor, B., 2019. Mapping leaf chlorophyll content for Sentinel-2 and Rapidly data in geotech. Smart data in geotech. Smart data. In: J. Appl. Earth Geosci. 98, 70-70.
* Disney et al. (2009) Disney, M.L., Lewis, P.E., Bower, M., Pittel-Rinnoz, A., Hancock, S., 2009. Quantifying surface reflectivity for spaceborne lidar via two independent methods. IEEE Trans. Geosci. Res., Ser. A7, 326-3271.
* Douglas Bates et al. (2015) Douglas Bates, M.M., Bolker, B., Walker, S., 2015. Fitting linear mixed-effects models using lmed. J. Stat. Software 67, 1-48.
* Forsstrom et al. (2021) Forsstrom, P.A., Juda, J., Bautistam, M., 2021. Relationships between undersory spectra and fractional over northern European boreal forests. Agric. For. Meteor., 308, 108604.
* Forster et al. (2007) Forster, P., Ramaswamy, V., Artrano, P., Bernstein, T., Betts, R., Fabey, D.W., Haywood, J., Lean, J., Lowe, D.C., Myhre, G., 2007. Changes in atmospheric constituents and in radiative forcing. In: Climate Change 2007. The Physical Science Basis (Chapter 2).
* Gorlick et al. (2017) Gorlick, N., Hamcher, M., Dixon, M., Tyushchenko, S., Thuan, D., Moore, R., 2017. Google earth engine: planetary-scale geospatial analysis for everyone. Rem. Sens. Environ., 202, 18-27.
* Hadi (2018) Hadi, Raulainen, M., 2018. A study on the drivers of canopy reflectance variability in a boreal forest. Remote Sensing Letters 9, 666-675.
* Hori et al. (2016) Hori, A., Limg, J., Korhenov, L., Kobayashi, H., Raulainen, M., 2016. Quantifying the missing link between forest albedo and productivity in the boreal zone. In: Biogeogeomics 13, 6015-6030.
* Hori and Raulainen (2017) Hori, A., Raulainen, P., 2017. A spectral analysis of 25 boreal tree species. Silva Fenn., 51, 7753.
* Jones and Vaughan (2010) Jones, H.G., Vaughan, R.A., 2010. Remote Sensing of Vegetation: Principles, Techniques, and Applications. Oxford university press.
* Knyashikhin et al. (1998) Knyashikhin, T., Martonchuk, J.V., Myhrein, R.H., Diner, D.J., Running, S.W., 1998. Synergistic algorithm for estimating vegetation canopy leaf area index and fraction of absorbed photosynthetically active radiation from MODIS and MISR data. J. Geophys. Res. Atmos. 103, 3257-3275.
* Knyashkin et al. (2013) Knyashkin, T., Scholl, M.A., Stenberg, P., Motrus, M., Rautiainen, M., Yang, Y., Marshak, A., Laitre Carmo, P., Kaufman, R.K., Lewis, P., 2013. Hyperspectral remote sensing of foliar nitrogen content. Proc. Natl. Acad. Sci. USA 110, E185-E192.
* Kuusinen and Hovi (2021) Kuusinen, N., Hovi, A., Rautiainen, M., 2021. Contribution of woody elements to tree level reflectance in boreal forests. Silva Fenn. 55.
* Kuinsen et al. (2014) Kuinsen, N., Lule, P., Steinberg, P., Levnik, J., Nikiman, E., Berninger, F., 2014. Measured and modelled alcohols in Finnish boreal forest forests of different species, structure and undersery. Ecol. Model., 284, 10-18.
* Kuats and Liang (2014) Kuats, A., Kunst, J., Liang, M., 2014. Modeling forest forest reflectance with the hybrid type forest reflectance model FRT. Rem. Sens. Environ. 149, 196-204.
* Kuats and Nelson (2000) Kuats, A., Nelson, T., 2000. A directional multispectral forest reflectance model. Rem. Sens. Environ. 27, 244-262.
* Kunst et al. (2008) Kunst, A., Nilson, T., Paas, M., Lang, M., Kunst, J., 2008. Validation of the forest radiative transfer model FRT. Rem. Sens. Environ. 12, 51-58.
* Lutz and Howarth (2014) Lutz, D.A., Howarth, B.B., 2014. Valuing albeda as an ecosystem: implications for forest management. Climate Change 124, 53-63.
* Majanslini and Raulainen (2020) Majanslini, T., Raulainen, M., 2020. The impact of tree canopy structure on undersery variation in a boreal forest. For.col. Manag. 466, 118100.
* Maltano and Packalen (2014) Maltano, M., Packalen, P., 2014. Species-specific management inventory in Finland. In: Forestry Applications of Airborne Laser Scanning. Springer, pp. 201-225.
* Mehtitali and Lappel (2002) Mehtitali, L., Lappel, J., 2002. Biometry for Forestry and Environmental Data with Examples in R. Chapman and Hall/CRC.
* Mehtsalesskus (2002) Mehtsalesskus (2002) Mehtsalesskus (2002) Mehtsalesskus (2002) Mehtsalesskus (2002). In: Finnish (WWW Document) 2002. Mehtsalesskus, URL [https://www.mehtsalesskus.fi/avia/media-to/aeliator-toolkit-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-](https://www.mehtsalesskus.fi/avia/media-to/aeliator-toolkit-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-alpha/eti-to-)* Nilson and Peterson (1991) Nilson, T., Peterson, U., 1991. A forest canopy reflectance model and a test case. Rem. Sens. Environ. 37, 131-142.
* Olsanen et al. (2022) Olsanen, J., Blanchet, F.G., Friendly, M., Kindt, R., Legendre, P., McGlin, D., Minchin, P., O'Itara, R., Simpson, G., Solymos, P., et al., 2022. vegan: Community Ecology Package. R package version 2.5-7. 2020.
* Pinkerty et al. (2008) Pinkerty, J., Bates, D., 2008. Mixed-effect Models in S and S-PLUS: Springer science & business media.
* Nattsianian et al. (2008) Nattsian, M., Lang, M., Mateus, M., Kunki, A., Nilson, T., Ruusk, J., Liak, T., 2008. Multi-angular reflectance properties of a hembo1 forest: an analysis using GHUS PROMA data. Rem. Sens. Environ. 112, 2267-2462.
* Nattsianian and Lukes (2015) Nattsian, M., Lukes, P., 2015. Spectral contribution of undersory to forest reflectance in a boreal site analysis of E0-1 Hyperion data. Rem. Sens. Environ. 171, 98-104.
* Nattsianian et al. (2011) Nattsian, M., Mateus, M., Heikkanen, J., Majajarvi, A., Majajarvi, T., Stephens, P., 2011. Seasonal reflectance dynamics of common absorbers types in a northern European boreal forest. Rem. Sens. Environ. 115, 3020-3028.
* Nattsianian and Nilsson (2005) Nattsian, M., Nilsson, T., 2005. Seasonal reflectance trends of hembo1 birch forests. Rem. Sens. Environ. 113, 805-815.
* Rautiainen and Stenberg (2005) Rautiainen, M., Stenberg, P., 2005. Application of photon recollision probability in coniferous canopy reflectance simulations. Rem. Sens. Environ. 96, 98-107.
* Nattsianian et al. (2004) Nattsian, M., Sternberg, P., Nilson, T., Kunki, A., 2004. The effect of crown shape on the reflectance of confieress stands. Rem. Sens. Environ. 89, 41-52.
* Reda and Andreas (2004) Reda, I., and Andreas, A., 2004. Solar position algorithm for solar radiation applications. Sol. Energy 67, 579-589.
* Repla (2008) Repla, J., 2008. Isomsas equations for birch in Finland. Silva Fem. 42, 605-624.
* Repla (2009) Repla, J., 2009. Biomass equations for Scots pine and Norway spruce in Finland. Silva Fem. 6, 625-647.
* Repla et al. (2014) Repla, J., Puller, M.A., Loveland, T.R., Woodcock, C.E., Allen, R.G., Anderson, M.C., Helder, D., Lems, J.R., Johnson, D.M., Kennedy, R., 2014. Landa: a science and product vision for terrestrial global change research. Rem. Sens. Environ. 145, 154-172.
* Schaepman-Strub et al. (2006) Schaepman-Strub, G., Schaepman, M.E., Painter, T.H., Dangel, S., Martonchik, J.V., 2006. Reflectance quantities in optical remote sensing--definitions and case studies. Rem. Sens. Environ. 103, 27-42.
* Sjolle and Latta (2013) Sjolle, H.K., Latta, G.S., Solberg, B., 2013. Potential impact of albedo incorporation in boreal forest sector climate change policy effectiveness. Clim. 13, 165-679.
* Skakun et al. (2022) Skakun, S., Wevers, J., Jochmann, C., Doxari, G., Aleksandrov, M., Naft, M., Franst, D., Gascon, F., Gomez-Chorus, L., Hagolle, O., 2022. Cloud Mask Intercomparison eXercise (CMIX): an evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2. Rem. Sens. Environ. 274, 112990.
* Townsend et al. (2013) Townsend, P., S., Serbin, S.P., Kruger, E.L., Gamon, A.J., 2013. Distinguishing the contribution of biological and physical properties of leaves and conjoes in imaging spectroscopy data. Proc. Natl. Acad. Sci. USA 110, E1074-E1074.
* USS (2020) USS, 2020. Landsat & Collection 1 Land Surface Reference Code Product Guide. U.S. Geological Survey With Document. URL [https://www.ngs.gov/media/files/1/and/sout-collection-1-land-surface-reference-code-product-guide](https://www.ngs.gov/media/files/1/and/sout-collection-1-land-surface-reference-code-product-guide). (Accessed 18 August 2023).
* Vermee et al. (2016) Verme, E., Justice, C., Claverie, M., Franst, B., 2016. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Rem. Sens. Inviton. 185, 46-56.
* Wildowski et al. (2015) Wildowski, J.-L., Mio, C., Disney, M., Adams, J., Andredakis, I., Arhberger, J., Brentnau, D., Butetto, L., Chelle, M., Cecchetti, E., 2015. The fourth phase of the radiative transfer model intercomparison (RAM) exercise: cloud scanning and conformity testing. Rem. Sens. Environ. 169, 418-437.
* Wildowski et al. (2007) Wildowski, J.-L., Taberner, M., Pinty, B., Brunibetti-Puneh, V., Disney, M., Pemandes, R., Gastella-Etcherey, J.-P., Gokborn, N., Kunki, A., Lavergne, T., 2007. Third radiation transfer model intercomparison (RAM) exercise: documenting progress in canopy reflectance models. J. Geophys. Res. Games. 112, 112.
* Yang et al. (2010) Yang, G., Zhao, C., Liu, Q., Huang, W., Wang, J., 2010. Inversion of a radiative transfer model for estimating forest LLM non-insurance and multiangular optical remote sensing data. IEEE Trans. Geosci. Rem. Sens. 49, 988-1000.
* Zeileis and Kleiber (2014) Zeileis, A., Kleiber, C., 2014. ineq: measuring inequality, concentration, and poverty. R package version 0.2-13. URL [http://CRAN.R-project.org/package=ineq](http://CRAN.R-project.org/package=ineq). Accessed on 25 August 2023.

**Abstract.** The forest reflectance and transmittance model (FRT) is applicable over a wide swath of boreal forest landscapes mainly because its stand-specific inputs can be generated from standard forest inventory variables. We quantified the accuracy of this model over an extensive region for the first time. This was done by carrying out a simulation study over a large number (12,369) of georeferenced forest plots from operational forest management inventories conducted in Southern Finland. We compared the FRT simulated bidirectional reflectance factors (BRF) with those measured by the Landsat 8 satellite Operational Land Imager (OLI). We also quantified the relative importance of several explanatory factors that affected the magnitude of the discrepancy between the measured and simulated BRFs using a linear mixed effects modelling framework. A general trend of FRT overestimating BRFs is seen across all tree species and spectral bands examined: up to \(\sim\)0.05 for the red band, and \(\sim\)0.10 for the near infrared band. The important explanatory factors associated with the overestimations included the dominant tree species, understory type of the forest plot, timber volume (acts as a proxy for stand maturity), vegetation heterogeneity and time of the year. Our analysis suggests that approximately 20% of the error is caused by the non-representative spectra of canopy foliage and understory. Our results demonstrate the importance of collecting representative spectra from a diverse set of forest stands, and over the full range of seasons.
arxiv-format/2405_16038v2.md | # Rethinking Early-Fusion Strategies for Improved Multispectral Object Detection
Xue Zhang, Si-Yuan Cao, Fang Wang, Runmin Zhang, Zhe Wu, Xiaohan Zhang, Xiaokai Bai, and Hui-Liang Shen,
This work was supported in part by the National Key Research and Development Program of China under grant 2023YFB3209800, in part by the Natural Science Foundation of Zhejiang Province under grant D24F020006, in part by the National Natural Science Foundation of China under grant 62301484, and in part by the Jinhua Science and Technology Bureau Project. _(Corresponding authors: Si-Yuan Cao and Hui-Liang Shen)_
X. Zhang, R. Zhang, Z. Wu, X. Zhang, and X. Bai are with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China (e-mail: [email protected], [email protected], [email protected], [email protected], [email protected]).S.-Y. Cao is with the Ningbo Research Institute, College of Information Science and Electronic Engineering, Zhejiang University, China (e-mail: [email protected]).F. Wang is with the School of Information and Electrical Engineering, Hangzhou City University and also with the Hangzhou City University Binjiang Innovation Center, China (e-mail: [email protected])H.-L. Shen is with the College of Information Science and Electronic Engineering, Zhejiang University, the Jinhua Institute of Zhejiang University, and the Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, China (e-mail: [email protected]).
## I Introduction
Multispectral object detection has been widely studied, since multispectral images can provide complementary information to achieve consistent detection in various lighting conditions [3, 4, 5, 6, 7, 8, 9, 10]. This complementarity is illustrated in Fig. 1 (a) and (b). Given the multispectral inputs, modern multispectral detectors develop three fusion strategies: early-fusion, medium-fusion, and late-fusion, shown in Fig. 1 (c) - (e). The medium and late-fusion strategies often achieve superior performance compared to early-fusion [4, 5, 11, 12, 13, 14]. However, they use a two-branch structure, making model deployment on edge devices expensive. In contrast, the early-fusion strategy adopts a simple single-branch structure, facilitating deployment on edge devices. Nevertheless, its performance is low, and there are few works to address this problem, resulting in an increasing gap between high performance and high efficiency.
The main motivation for this work is to resolve the conflict between detection performance and inference efficiency. To this end, we focus on improving the performance of the early-fusion strategy while maintaining its high computational efficiency. We first conduct pilot studies and observe that a plain early-fusion strategy cannot consistently obtain improved performances compared to single-modality inputs. Based on this observation, we rethink the early-fusion strategy and summarize three key obstacles: 1) the information interference problem when simply concatenating the multispectral images, 2) the domain gap existing in thermal and RGB images, and 3) the weak feature representation of the single-branch structure. Focusing on these obstacles, we propose corresponding solutions.
**- Information interference problem** refers to the potential suppression of important information in one modality by another. In the plain early-fusion strategy, previous works [15] typically feed concatenated multispectral images into a convolution layer and generate a fused feature. The convolution layer generally has a small receptive field. Therefore, based on such limited context, it is hard for this approach to determine which modality's information is important. We address this issue by first recognizing that object shapes are agnostic to visible and infrared wavelengths, and devise a module to fuse multispectral images based on object shape saliency, named the shape-priority early-fusion (ShaPE) module.
**- Domain gap between RGB and thermal images** is usually neglected in previous works. They generally adopt an RGB pre-trained backbone network to extract features from both RGB and thermal images [5, 13]. However, the domain gap may cause the representation distribution shift. This issue is also recognized in the work [16] on an RGB-D task. Different from previous works, we introduce a weakly supervised learning method to address this issue. Within this method, the backbone network jointly uses RGB and thermal images to learn the representation of CLIP [17], since CLIP has demonstrated promising zero-shot generalizability in bridging the domaingap [18]. Additionally, we introduce a segmentation auxiliary branch. Our method allows the backbone network to reduce representation shifts and improve semantic localization ability.
**- Weak feature representation problem** results from the early-fusion strategy employing a single-branch structure. This structure has fewer parameters and simpler fusion modules compared to medium and late-fusion strategies. We address this issue by introducing the knowledge distillation (KD) technique [19]. In KD, a key problem is how to align the feature dimensions between teacher and student models. Previous works generally introduce a convolution layer for the student model to learn all knowledge from the teacher model [20, 21]. However, we show that not all information in teacher model is helpful for downstream tasks. Therefore, we introduce core knowledge distillation (CoreKD) to transfer the most crucial knowledge for specific downstream tasks, resembling the human learning process where the teacher highlights key knowledge for quick understanding and absorption by the students.
Experimental results validate that our efficient multispectral early-fusion (EME) detector achieves a significant performance improvement without considerably increasing the number of parameters, as shown in Fig. 1 (f). Besides, our EME outperforms the previous state-of-the-art approaches. In summary, our contributions are threefold:
* We systematically analyze the causes of the performance gap between single-branch and two-branch structures. Unlike previous works, we identify and summarize three key obstacles limiting the single-branch early-fusion strategy: information interference, domain gap, and weak feature representation. Notably, information interference between multispectral images is revealed for the first time in this work.
* For each obstacle, we propose the corresponding solution: we develop 1) a ShaPE module to address the information interference issue, 2) a weakly supervised learning method to reduce domain gap and improve semantic localization abilities, and 3) a CoreKD to enhance the feature representation of single-branch networks.
* Extensive experiments validate that the early-fusion strategy, equipped with our ShaPE module, weakly supervised learning, and CoreKD technique, demonstrates significant improvement. These three modules benefit various common detectors, such as YOLOv5 [2], RetinaNet [22], and GFL [23]. Importantly, only the ShaPE module is retained during the inference phase. Consequently, our method achieves both high performance and efficiency.
## II Related Work
In this section, we offer a brief overview of multispectral object detection and introduce related works in weakly supervised learning and knowledge distillation.
### _Multispectral Object Detection_
Multi-source information fusion [24, 25, 26, 27] has exhibited promising application potential in computer vision tasks. In this work, we focus on the multispectral object detection task that uses RGB and thermal image pairs to detect objects. According to fusion strategies, multispectral object detection can be classified into three categories: early-fusion, medium-fusion and late-fusion strategies. Previous works [11, 12] and [28] confirm that both medium-fusion and late-fusion strategies outperform the early-fusion strategy.
However, both the medium and late fusion strategies adopt a two-branch structure that limits their use on resource-limited edge devices. Previous works notice this weakness and provide some solutions. For example, in [14], a model using the medium-fusion strategy is first trained as a teacher, and its knowledge is transferred to a student model. The student model only receives RGB images as inputs. Although it saves resources, it discards important complementary information from thermal images. The work [13] introduces a domain adaptation technique. It uses a medium-fusion model to guide single-branch model learning, which only receives thermal images as inputs and also discards complementary information from RGB images. To employ complementary information
Fig. 1: Multispectral object detection and fusion strategies. (a) In Scene-1, objects are easier to detect in the thermal image. (b) In Scene-2, objects are easier to detect in the RGB image. (c) Early-fusion strategy. (d) Medium-fusion strategy. (e) Late-fusion strategy. (f) Detection results of different strategies on the M3FD dataset [1]. YOLOv5 [2] is adopted as the baseline in this experiment. The area of each circle denotes the number of parameters.
while saving computational resources, [29] transfers knowledge from a medium-fusion model to an early-fusion model. Nevertheless, it neglects the information interference problem. Some works in the image fusion field [1, 30, 31] demonstrate that fused images can improve detectors, but the fusion process still introduces an additional computational burden.
Different from previous works, we identify the information interference problem in early-fusion strategies. By addressing this problem, we fully employ the complementary information in multispectral images, without significantly increasing computational burden.
### _Weakly Supervised Learning in Object Detection_
Weakly supervised learning has received much attention in object localization and detection, as comprehensively surveyed in [32]. Recent works in the multispectral object detection adopt this technique. Based on the weak annotations they utilize, we can coarsely divide them into image- and box-level weakly supervised learning approaches.
In image-level weakly supervised learning approaches, previous works mainly employ the illumination condition of RGB images as weighting factors to determine the modality importance [5, 13, 14, 33]. In box-level approaches, previous works [15, 34] mainly employ the bounding-box annotations to generate masks. They use these masks to construct spatial attention mechanisms, highlighting representations within target regions.
Different from previous works, we use weakly supervised learning to address the domain gap problem in RGB and thermal images. We employ image-level labels to construct a multi-label classification auxiliary task. This task can fully exploit the complementary information in multispectral images, instead of solely using information from one modality. Along with the powerful CLIP model [17] and box-level weak labels, our method can reduce the domain gap and obtain precise semantic localization abilities.
### _Knowledge Distillation_
Knowledge distillation was first introduced in [19]. It aims to improve a lightweight student model by learning knowledge from a high-capacity teacher model. According to the distillation approach, this technique can be roughly divided into two groups: logit distillation [19] and feature distillation [20]. The former lets a student model learn the logits of a teacher model, while the latter lets it learn the features of a teacher model. These distillation approaches are also applied to object detection [21, 35]. Recently, some works in multispectral object detection also employ the knowledge distillation technique [14, 29]. In the distillation process, they generally introduce a projection layer to align the teacher and student feature channel numbers. The purpose of this approach is to learn all representations in the teacher model.
Different from previous works, we first confirm that not all information in teacher features is beneficial to downstream tasks, including classification and regression. Based on this, we propose a core knowledge distillation technique to transfer the features most important for the downstream tasks to the student model.
## III Method
Fig. 2 illustrates the overview of our method, where the training process and the inference process are marked in green and blue, respectively. We adopt a single-branch structure as the baseline model considering its low memory cost. To boost its performance, we develop three key modules: shape-priority early-fusion (ShaPE), weakly supervised auxiliary learning, and core knowledge distillation (CoreKD). Note that only the training process requires weakly supervised auxiliary learning and CoreKD, and both are removed during the inference phase. Consequently, our method adds only the ShaPE module to the early-fusion single-branch structure during the inference phase. In the following sections, we describe the ShaPE module in Section III-A, the weakly supervised auxiliary learning method in Section III-B, and CoreKD in Section III-C.
### _Shape-Priority Early-Fusion Module_
**Observation.** Given a pair of RGB-T images, the plain early-fusion strategy concatenates them in the channel dimension and then feeds them into a detector. With the plain strategy, we conduct pilot studies on the M3FD [1] dataset. We first train three commonly used one-stage detectors: RetinaNet [22], GFL [23] and YOLOv5 [2]. Then, we compute the mean values and standard deviations of their detection results and illustrate the computed results in Fig. 3. Besides, we also train these detectors using single-modality images as input for comparisons. We have the following two observations. First, the plain early-fusion strategy cannot achieve consistent improvement compared with single-modality input. Second, for objects that require color to identify, such as 'Traffic Light', the plain early-fusion strategy yields worse results than the RGB input.
**Motivation.** We attribute the above phenomena to the convolutional inductive bias, namely, local connectivity and weight sharing. The process of 2D convolution involves two steps: (1) sampling across the concatenated RGB-T images using a regular grid \\(\\mathcal{R}\\); (2) summing the sampled values with weighting factor \\(\\mathbf{W}\\). The grid \\(\\mathcal{R}\\) determines both the receptive field size and dilation. For example,
\\[\\mathcal{R}=\\{(-3,-3),(-3,-2),\\ldots,(2,3),(3,3)\\}\\]
Fig. 2: Overview of our method. We adopt the single-branch structure as the baseline model and develop three key modules: shape-priority early-fusion (ShaPE), weakly supervised auxiliary learning, and core knowledge distillation. The ShaPE module remains in both the inference and training phases, while the other two modules are removed in the inference phase.
defines a 7\(\times\)7 kernel with dilation 1. For each position \(\mathbf{p}_{0}\) on an output feature map \(\mathrm{O}\), we have
\\[\\mathrm{O}(\\mathbf{p}_{0})=\\sum_{\\mathbf{p}_{n}\\in\\mathcal{R}}\\sum_{j\\in\\{\\mathrm{ rgb},\\mathrm{t}\\}}\\mathbf{W}_{j}(\\mathbf{p}_{n})\\mathbf{I}_{j}(\\mathbf{p}_{0}+ \\mathbf{p}_{n}), \\tag{1}\\]
where \\(\\mathbf{p}_{n}\\) enumerates the positions in \\(\\mathcal{R}\\).
This process indicates that the plain early-fusion strategy is a pixel-level weighting method, with weights learned from data. However, the limited receptive field of pixel-level weighting methods makes it difficult for the weights to determine which modality is important. This weakness may result in valuable information from one modality being suppressed by another. As an example, Fig. 4 (c) depicts the feature map generated from the RGB-T images of Fig. 4 (a) and (b) using the plain early-fusion strategy. It is observed from the close-up that the "Traffic Light" in the fused feature map does not preserve the significant information of the RGB image.
The straightforward solutions to this weakness are: (1) enlarging the receptive field by using a larger kernel or more convolutional layers so that the model can judge the modality importance based on a broader range of contexts, or (2) increasing the number of convolutional kernels so that the model can learn more representations. However, these solutions increase memory costs and computational burden, making them unfriendly to edge devices.
**ShaPE Module.** We realize that shape is an inherent attribute of an object. Any visible objects in RGB and thermal images have consistent shapes. Thus, we consider the salience of shape as a modifying factor to adaptively determine the modality importance, and design the shape-priority early-fusion (ShaPE) module. In the ShaPE module, the RGB and thermal images are modified by self-gating masks. In this context, Eq. (1) becomes:
\\[\\mathrm{O}(\\mathbf{p}_{0})=\\sum_{\\mathbf{p}_{n}\\in\\mathcal{R}}\\sum_{j\\in\\{ \\mathrm{rgb},\\mathrm{t}\\}}\\mathbf{W}_{j}(\\mathbf{p}_{n})\\mathbf{M}_{j}( \\mathbf{p}_{0}+\\mathbf{p}_{n})\\mathbf{I}_{j}(\\mathbf{p}_{0}+\\mathbf{p}_{n}), \\tag{2}\\]
where \\(\\mathbf{M}_{\\mathrm{rgb}}\\) and \\(\\mathbf{M}_{\\mathrm{t}}\\) denote the self-gating masks of RGB and thermal images, respectively.
In the following, we describe the generation process of self-gating masks \\(\\mathbf{M}_{\\mathrm{rgb}}\\) and \\(\\mathbf{M}_{\\mathrm{t}}\\). Since our ShaPE module focuses on the shapes of objects and structural contributions of different modalities to the fused features, we employ the gradients and structural similarities in our method. For easy understanding, we visualize some important intermediate results in Fig. 4. Given the RGB-T images as shown in Fig. 4 (a) and (b), we compute their gradients
\\[\
abla\\mathbf{I}_{\\mathrm{rgb}}(\\mathbf{p}_{0}) =\\sqrt{(\
abla_{x}\\mathbf{I}_{\\mathrm{rgb}}(\\mathbf{p}_{0}))^{2} +(\
abla_{y}\\mathbf{I}_{\\mathrm{rgb}}(\\mathbf{p}_{0}))^{2}},\\] \\[\
abla\\mathbf{I}_{\\mathrm{t}}(\\mathbf{p}_{0}) =\\sqrt{(\
abla_{x}\\mathbf{I}_{\\mathrm{t}}(\\mathbf{p}_{0}))^{2}+( \
abla_{y}\\mathbf{I}_{\\mathrm{t}}(\\mathbf{p}_{0}))^{2}},\\]
as shown in Fig. 4 (d) and (e). We then generate the union gradient as the reference using
\\[\
abla\\mathbf{I}^{\\prime}_{\\mathrm{ref}}(\\mathbf{p}_{0})=\\max(\
abla\\mathbf{I }_{\\mathrm{rgb}}(\\mathbf{p}_{0}),\
abla\\mathbf{I}_{\\mathrm{t}}(\\mathbf{p}_{0} )).\\]
We further use max-pooling within a 3\\(\\times\\)3 neighborhood \\(\\mathcal{R}^{\\prime}\\) to boost the reference gradient, which is written as
\\[\
abla\\mathbf{I}_{\\mathrm{ref}}(\\mathbf{p}_{0})=\\max_{\\mathbf{p}_{n}\\in \\mathcal{R}^{\\prime}}\
abla\\mathbf{I}^{\\prime}_{\\mathrm{ref}}(\\mathbf{p}_{0}+ \\mathbf{p}_{n}),\\]
as shown in Fig. 4 (f).
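A minimal PyTorch sketch of the gradient and boosted-reference computation is given below. It assumes single-channel intensity maps in \([0,1]\) and uses simple forward differences as the gradient operator; both choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 1, H, W) intensity map -> per-pixel gradient magnitude, same shape."""
    gx = F.pad(img[..., :, 1:] - img[..., :, :-1], (0, 1, 0, 0))  # forward difference in x
    gy = F.pad(img[..., 1:, :] - img[..., :-1, :], (0, 0, 0, 1))  # forward difference in y
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def boosted_reference(grad_rgb: torch.Tensor, grad_t: torch.Tensor) -> torch.Tensor:
    """Pixel-wise union of the two gradient maps followed by 3x3 max-pooling."""
    union = torch.maximum(grad_rgb, grad_t)
    return F.max_pool2d(union, kernel_size=3, stride=1, padding=1)

g_rgb = gradient_magnitude(torch.rand(1, 1, 512, 640))
g_t = gradient_magnitude(torch.rand(1, 1, 512, 640))
g_ref = boosted_reference(g_rgb, g_t)   # boosted reference gradient
```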
To determine the structural contributions of each modality to the fused features, we compute the structural similarities between the single-modality gradient images \(\{\nabla\mathbf{I}_{\mathrm{rgb}},\nabla\mathbf{I}_{\mathrm{t}}\}\) and the reference gradient image \(\nabla\mathbf{I}_{\mathrm{ref}}\). Inspired by [36], for each patch \(\mathcal{R}\), we compute three fundamental properties: the means \(\{\mu_{\mathrm{rgb}},\mu_{\mathrm{t}},\mu_{\mathrm{ref}}\}\), the standard deviations \(\{\sigma_{\mathrm{rgb}},\sigma_{\mathrm{t}},\sigma_{\mathrm{ref}}\}\), and the covariances \(\{\sigma_{(\mathrm{rgb},\mathrm{ref})},\sigma_{(\mathrm{t},\mathrm{ref})}\}\) between the single-modality gradient images and the reference gradient image.
Fig. 4: Illustration of fused feature map generation process for the plain early-fusion strategy and our ShaPE module. (a) RGB image. (b) Thermal image. (c) Fused feature map generated using the plain early-fusion strategy, with a close-up indicated by a white circle line. (d) and (e) are gradient images of the RGB and thermal images, respectively. (f) Boosted reference gradient image. (g) and (h) are self-gating masks of the RGB and thermal images, respectively. (i) Fused feature map generated by our ShaPE module.
Fig. 3: Pilot studies conducted on the M3FD [1] dataset. We use three detectors as baselines: RetinaNet [22], GFL [23] and YOLOv5 [2]. Each bar and error bar represents the mean value and standard deviation of the results obtained by these three detectors. "RGB" represents detectors that only take RGB images as inputs, while "T" represents detectors that only take thermal images as inputs. "Plain RGB-T" denotes detectors that use the plain early-fusion strategy. The "All" column illustrates the mAP50 for all classes, and the other columns illustrate the AP50 for specific classes. Red lines denote cases where the plain RGB-T early-fusion strategy obtains worse results than detectors that use single-modality inputs.
In this context, we generate the self-gating masks:
\\[\\mathbf{M}^{\\prime}_{\\text{rgb}}=\\frac{(2\\mu_{\\text{rgb}}\\mu_{\\text{ ref}}+\\xi_{1})(2\\sigma_{(\\text{rgb,ref})}+\\xi_{2})}{(\\mu_{\\text{rgb}}^{2}+\\mu_{ \\text{ref}}^{2}+\\xi_{1})(\\sigma_{\\text{rgb}}^{2}+\\sigma_{\\text{ref}}^{2}+ \\xi_{2})},\\] \\[\\mathbf{M}^{\\prime}_{\\text{t}}=\\frac{(2\\mu_{\\text{t}}\\mu_{\\text{ ref}}+\\xi_{1})(2\\sigma_{(\\text{t,ref})}+\\xi_{2})}{(\\mu_{\\text{t}}^{2}+\\mu_{ \\text{ref}}^{2}+\\xi_{1})(\\sigma_{\\text{t}}^{2}+\\sigma_{\\text{ref}}^{2}+\\xi_{2} )},\\]
where \\(\\xi_{1}=(k_{1}L)^{2}\\) and \\(\\xi_{2}=(k_{2}L)^{2}\\) are used to prevent instability. \\(L\\) is the dynamic range of the gradient images, \\(k_{1}=0.01\\), and \\(k_{2}=0.03\\).
Since the ranges of both \\(\\mathbf{M}^{\\prime}_{\\text{rgb}}\\) and \\(\\mathbf{M}^{\\prime}_{\\text{t}}\\) are \\([-1,1]\\), we then normalize the self-gating masks and obtain
\\[\\mathbf{M}_{\\text{rgb}}=\\frac{\\exp(\\mathbf{M}^{\\prime}_{\\text{rgb}})}{\\sum \\limits_{j\\in\\{\\text{rgb,t}\\}}\\exp(\\mathbf{M}^{\\prime}_{j})},\\;\\mathbf{M}_{ \\text{t}}=\\frac{\\exp(\\mathbf{M}^{\\prime}_{\\text{t}})}{\\sum\\limits_{j\\in\\{ \\text{rgb,t}\\}}\\exp(\\mathbf{M}^{\\prime}_{j})}, \\tag{3}\\]
as shown in Fig. 4 (g) and (h). According to Eq. (2), we can finally generate the fused feature map as shown in Fig. 4 (i).
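Putting the pieces together, the sketch below normalizes the two raw masks with a softmax over modalities (Eq. (3)) and modulates the inputs before the shared early-fusion convolution (Eq. (2)). It takes precomputed raw masks as inputs, so it can be chained with the gradient and mask sketches above; the function name and tensor shapes are illustrative.

```python
import torch
import torch.nn as nn

def shape_priority_fusion(rgb, thermal, mask_rgb_raw, mask_t_raw, conv):
    """Normalize the raw self-gating masks with a softmax over modalities (Eq. (3))
    and modulate each input before the shared early-fusion convolution (Eq. (2))."""
    masks = torch.softmax(torch.stack([mask_rgb_raw, mask_t_raw], dim=0), dim=0)
    gated = torch.cat([rgb * masks[0], thermal * masks[1]], dim=1)
    return conv(gated)

conv = nn.Conv2d(4, 32, kernel_size=7, padding=3)
rgb, thermal = torch.rand(1, 3, 512, 640), torch.rand(1, 1, 512, 640)
m_rgb_raw, m_t_raw = torch.rand(1, 1, 512, 640), torch.rand(1, 1, 512, 640)
fused = shape_priority_fusion(rgb, thermal, m_rgb_raw, m_t_raw, conv)  # (1, 32, 512, 640)
```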
### _Weakly Supervised Learning Method_
In RGB-T object detection, a non-negligible issue is the lack of backbone networks pre-trained on large-scale RGB-T datasets. This is because there are few large-scale datasets like ImageNet [37] and COCO [38] in the RGB-T image recognition field. Previous works generally use backbone networks pre-trained on ImageNet. However, the domain gap between thermal and RGB images would cause representation distribution shifts, as illustrated in Fig. 5 (a) and (b). This is because the backbone network is trained solely on RGB images, but is applied to thermal images.
To handle this issue, we turn to the powerful Contrastive Language-Image Pre-training (CLIP) [17] model. It has been confirmed that CLIP can bridge domain gaps [40, 41, 42, 43, 18], since it is trained using a huge number of (image, text) pairs. In this context, we feed both RGB and thermal images into the backbone network, and let it learn the representation generated by the CLIP model. Specifically, we first present a CLIP-driven image-level weakly supervised learning method. This method enables the network to recognize the classes of objects in a pair of RGB-T images while locating their coarse regions. For fine-grained localization, we then introduce a box-level weakly supervised learning method. Fig. 6 illustrates the architecture of the weakly supervised learning method.
**CLIP-Driven Image-Level Weak Supervision.** To learn the CLIP model's knowledge, we construct the image-level weak supervision method. Based on three considerations, we adopt the multi-label classification task as the image-level weak supervision: (1) the CLIP model can be viewed as a classifier, (2) this auxiliary task can fully use the complementary information in the RGB-T images, and (3) by summarizing all classes and removing duplicates in an image, we can easily construct the ground-truth multi-label targets based on detection annotations.
Nevertheless, the original CLIP model is trained only to recognize a single object per image [17] and is not suitable for multi-label classification [44, 45]. To address this issue, we introduce a Divide-and-Aggregation CLIP (DA-CLIP) model. DA-CLIP first divides input images into multiple crops. Each crop is then fed into CLIP. All predictions of these crops are finally aggregated by a max-pooling operation on each class. Considering that DA-CLIP may generate inaccurate predictions, we construct a learnable adapter, which consists of three fully-connected (FC) layers, to fine-tune the result of DA-CLIP. To prevent overfitting, we add a dropout layer in the adapter. We denote the predicted probability from the adapter as \(\mathbf{\hat{q}}_{\text{ad}}\in\mathbb{R}^{c}\), where \(c\) denotes the number of classes.
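A minimal sketch of DA-CLIP and the adapter is given below. The CLIP scoring step is abstracted into a stand-in callable clip_scores (e.g., CLIP image-text similarity over the class names), and the adapter's hidden width, dropout rate, and class count are illustrative assumptions; only the divide-score-aggregate structure and the three-FC-layer adapter follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def da_clip(image: torch.Tensor, clip_scores, num_classes: int) -> torch.Tensor:
    """Divide the image into 224x224 crops (stride 112), score every crop,
    and aggregate by a per-class max over the crops."""
    b, c = image.shape[:2]
    patches = F.unfold(image, kernel_size=224, stride=112)        # (B, C*224*224, L)
    patches = patches.transpose(1, 2).reshape(-1, c, 224, 224)    # (B*L, C, 224, 224)
    probs = clip_scores(patches).reshape(b, -1, num_classes)      # (B, L, num_classes)
    return probs.max(dim=1).values                                # per-class max-pooling

class Adapter(nn.Module):
    """Three FC layers with dropout that fine-tune the DA-CLIP prediction."""
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(inplace=True), nn.Dropout(p=0.5),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes))
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))                         # q_ad, multi-label probabilities

# Stand-in for CLIP image/text scoring over the class names (assumption for illustration).
clip_scores = lambda crops: torch.rand(crops.shape[0], 6)
q_dac = da_clip(torch.rand(1, 3, 448, 672), clip_scores, num_classes=6)
q_ad = Adapter(num_classes=6)(q_dac)
```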
For the backbone network, we add an auxiliary classification head on its top. The head consists of a global average pooling (GAP) operation and one FC layer. We denote the predicted probability from the classification head as \\(\\mathbf{\\hat{q}}_{\\text{bb}}\\in\\mathbb{R}^{c}\\).
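The auxiliary classification head can be sketched as follows; the backbone channel count and the number of classes are illustrative.

```python
import torch
import torch.nn as nn

class AuxClsHead(nn.Module):
    """Global average pooling followed by one FC layer on top of the backbone feature."""
    def __init__(self, in_channels: int = 2048, num_classes: int = 6):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_classes)
    def forward(self, feat: torch.Tensor) -> torch.Tensor:        # feat: (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))                            # GAP
        return torch.sigmoid(self.fc(pooled))                     # q_bb

q_bb = AuxClsHead()(torch.rand(2, 2048, 16, 20))
```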
We adopt the mutual learning approach [46] to train the backbone network and the adapter simultaneously. In this approach, an important step is that one model generates soft targets for the other model using the softmax function. However, this approach cannot be directly applied to the multi-label classification problem, since it requires the sum of predicted probabilities to be one, which is rarely satisfied in multi-label classification. To address this issue, we draw inspiration from self-training KD [47] and construct the soft targets for the adapter and backbone network as
\\[\\mathbf{\\tilde{q}}_{\\text{bd}}=(1-\\lambda)\\mathbf{q}+\\lambda\\mathbf{\\hat{q}}_ {\\text{ad}},\\quad\\mathbf{\\tilde{q}}_{\\text{bb}}=(1-\\lambda)\\mathbf{q}+\\lambda \\mathbf{\\hat{q}}_{\\text{bb}},\\]
where \\(\\mathbf{q}\\in\\mathbb{R}^{c}\\) denotes a ground-truth multi-label target, and \\(\\lambda\\) denotes a balancing factor set to 0.1. In this context, we
Fig. 5: T-SNE visualization of RGB and thermal image features. (a) and (b) visualize the image features of the M3FD [1] and FLIR [39] datasets using the ImageNet pre-trained ResNet-50 backbone network. (c) and (d) visualize the image features of the same datasets using the ResNet-50 trained with our weakly supervised learning method. Additionally, we present corresponding images of six pairs of feature points.
compute the binary cross-entropy (BCE) losses
\\[\\mathcal{H}(\\mathbf{\\tilde{q}_{\\text{ad}}},\\mathbf{\\hat{q}_{\\text{bb} }})\\] \\[=-\\sum_{i=1}^{c}\\tilde{q}_{\\text{ad},i}\\log(\\hat{q}_{\\text{bb},i})+( 1-\\tilde{q}_{\\text{ad},i})\\log(1-\\hat{q}_{\\text{bb},i}), \\tag{4a}\\] \\[\\mathcal{H}(\\mathbf{\\tilde{q}_{\\text{bb}}},\\mathbf{\\hat{q}_{\\text {ad}}})\\] \\[=-\\sum_{i=1}^{c}\\tilde{q}_{\\text{bb},i}\\log(\\hat{q}_{\\text{ad},i}) +(1-\\tilde{q}_{\\text{bb},i})\\log(1-\\hat{q}_{\\text{ad},i}). \\tag{4b}\\]
To showcase the semantic localization effect of our CLIP-driven image-level weak supervision, we visualize the class activation map (CAM) of the backbone network in Fig. 7 (a). CAM is a useful tool for understanding which regions the network focuses on to predict a class. We can observe that the backbone network can coarsely localize regions of 'Person', 'Car', and 'Traffic Light' in the image.
**Box-Level Weak Supervision.** To precisely localize the semantic regions, we introduce box-level weak supervision. The ground-truth box-level target is generated by directly filling the area within an annotation box with its corresponding class index. In this context, we add an auxiliary segmentation head on top of the backbone network to predict the target. Denoting the ground-truth box-level target mask as \\(\\mathbf{G}\\), and the predicted mask as \\(\\mathbf{\\hat{G}}\\), we compute the BCE loss between them as
\\[\\mathcal{H}(\\mathbf{G},\\mathbf{\\hat{G}})=-\\sum_{n=1}^{N}G_{n}\\log(\\hat{G}_{n}) +(1-G_{n})\\log(1-\\hat{G}_{n}), \\tag{5}\\]
where \\(N\\) denotes the number of elements in the mask.
We visualize attention maps of the backbone network for different classes, as shown in Fig. 7 (b). Using the box-level weak supervision, the backbone network can precisely localize the interest of objects, such as 'Car'. Nevertheless, it may miss some useful information in the image. Therefore, we combine the CLIP-driven image-level weak supervision and the box-level weak supervision. The results presented in Fig. 7 (c) show that our weakly supervised learning method can effectively allow the backbone network to localize the important semantic regions.
**Effect Validation.** When our weakly supervised learning method is employed, Fig. 5 (c) and (d) demonstrate that the domain gap between RGB and thermal features is reduced. This implies that the backbone network can extract information from RGB and thermal images without bias. To further illustrate this effect, we visualize the feature map generated by the ResNet-50 [48] in Fig. 8. The generation process of these feature maps is as follows: First, we resize all features of the ResNet-50 across four stages to the same resolution as the input images. Then, we aggregate these features along the channel dimension using \\(\\texttt{sum}(\\texttt{softmax}(\\mathbf{F},\\texttt{dim=0})\\otimes\\mathbf{F}, \\texttt{dim=0})\\), where \\(\\mathbf{F}\\in\\mathbb{R}^{D\\times H\\times W}\\) represents the concatenated feature. \\(D\\), \\(H\\), and \\(W\\) denote its depth, height, and width, respectively. \\(\\otimes\\) denotes the element-wise production operation.
Fig. 8 (a) and (b) present the RGB and thermal images in one example scene. Fig. 8 (c) and (d) illustrate their corresponding feature maps. Fig. 8 (e) shows the RGB-T feature map without using our weakly supervised learning method. Fig. 8 (f) shows the feature map using our weakly
Fig. 8: Illustration of feature maps generated by the backbone network. (a) and (b) present the RGB and thermal images. (c) and (d) present their corresponding features map. (e) and (f) present the feature maps generated by the ResNet-50 trained without and with our weakly supervised learning, respectively. The close-up is highlighted with a red box.
Fig. 6: Illustration of the weakly supervised learning method. It consists of a divide-and-aggregation CLIP model (DA-CLIP), an adapter, a backbone, two auxiliary heads used for classification and segmentation, and weakly supervised losses. The crops are obtained using PyTorchβs function torch.nn.functional.unfold(image, kernel_size=224, stride=112). The image-level label is determined through a two-step process. 1 gather all classes present in the image according to bounding-box annotations, and 2) remove duplicated classes. Note that all modules except the DA-CLIP are updated, and only the backbone network remains in the inference phase.
Fig. 7: Illustration of the class activation map (CAM) of the backbone network. Each rowβs triplet of images represents the CAM for a specific class, using (a) image-level auxiliary learning only, (b) box-level auxiliary learning only, and (c) both image-level and box-level auxiliary learning.
supervised learning method. Observing Fig. 8 (e), we note that the ResNet-50 tends to acquire information primarily from the RGB image. In contrast, the feature map in Fig. 8 (f) demonstrates that our method enables the ResNet-50 to gather important information from both RGB and thermal images.
### _Core Knowledge Distillation_
**Problem Description.** To further improve the detection accuracy of the early-fusion strategy without increasing its computational cost, we introduce the knowledge distillation technique [19]. To achieve knowledge transfer, we instruct the student model to mimic intermediate features of teacher model. In this process, a primary obstacle the student model faces is the unequal number of feature channels as the teacher model. Previous works introduce convolution layers to align their feature channel numbers [20, 21], while neglecting whether the teacher's knowledge is helpful to the student. To address this issue, we propose core knowledge distillation (CoreKD).
**CoreKD Architecture.** We use YOLOv5 [2] as an example and illustrate the knowledge distillation architecture in Fig. 9. In its architecture, we use the early-fusion single-branch structure as the student model and the medium-fusion two-branch structure as the teacher model. In the student model, a pair of RGB-T images is first concatenated, then fed into different network modules, and finally converted into predicted results. In the teacher model, the RGB and thermal images are respectively fed into different backbone networks. The generated multispectral features are fused in the feature space through concatenation and convolution operations. The fused features are then fed into the subsequent network modules and converted into predicted results. The predicted results of both the student and teacher models consist of bounding boxes and class-specific confidence scores.
**CoreKD Formulation.** Since we apply the same distillation techniques to different feature pyramid levels, we only describe the technique at one level and omit the subscript for simplicity. In the head modules of Fig. 9, we denote the input features of the student and teacher models as \\(\\mathbf{X}^{\\mathrm{S}}\\) and \\(\\mathbf{X}^{\\mathrm{T}}\\), respectively. Feature distillation typically transfers the teacher's knowledge to the student by minimizing the loss [20]
\\[\\mathcal{L}^{\\prime\\prime}_{\\mathrm{feat}}=||\\mathcal{A}(\\mathbf{X}^{\\mathrm{ S}})-\\mathbf{X}^{\\mathrm{T}}||_{2}^{2}, \\tag{6}\\]
where \\(\\mathcal{A}\\) denotes an adaptation layer used to match the channel dimensions between the student and teacher features. Previous works usually use a convolution layer as the adaptation layer [20, 21]. This approach aims to make \\(\\mathcal{A}(\\mathbf{X}^{\\mathrm{S}})\\) learn all information in the teacher feature \\(\\mathbf{X}^{\\mathrm{T}}\\). However, they neglect whether all the information in \\(\\mathbf{X}^{\\mathrm{T}}\\) is beneficial for downstream tasks, including classification and regression.
To address this problem, we revisit the structure of head module in the teacher model. As shown in Fig. 9, the official implementation of YOLOv5 uses a '\\(1\\times 1\\) Conv' layer to output the predicted results
\\[\\mathbf{\\hat{Y}}^{\\mathrm{T}}=\\texttt{Conv}(\\mathbf{X}^{\\mathrm{T}};\\mathbf{ W}^{\\mathrm{T}}),\\]
where \\(\\mathbf{W}^{\\mathrm{T}}\\) denotes the weighting factor in the teacher's head module. According to the 2D convolution formulation in Eq. (1), we can infer that the weighting factor \\(\\mathbf{W}^{\\mathrm{T}}\\) reflects the importance of a channel map in \\(\\mathbf{X}^{\\mathrm{T}}\\) for the downstream feature. We visualize the histogram of \\(\\mathbf{W}^{\\mathrm{T}}\\) in Fig. 10. It is evident that most of the values in \\(\\mathbf{W}^{\\mathrm{T}}\\) approximate 0. This implies that only a few feature representations in \\(\\mathbf{X}^{\\mathrm{T}}\\) are
Fig. 9: Illustration of the knowledge distillation technique. The student model adopts an early-fusion single-branch structure, while the teacher model adopts a medium-fusion two-branch structure. In the training phase, both the pre-trained teacher model and the core knowledge convolution module are fixed, while only the student model is updated. After training, only the student model is used for deployment. In this diagram, we use YOLOv5 [2] as an example, and it can be easily extended to other detectors.
important for the downstream tasks. We call these important feature representations the core knowledge in teacher model.
To learn this core knowledge, we modify the feature loss Eq. (6) into
\\[\\mathcal{L}^{\\prime}_{\\mathrm{feat}}=||\\mathtt{Conv}(\\mathcal{A}(\\mathbf{X}^{ \\mathrm{S}});\\mathbf{W}^{\\mathrm{T}})-\\mathtt{Conv}(\\mathbf{X}^{\\mathrm{T}}; \\mathbf{W}^{\\mathrm{T}}))||_{2}^{2}. \\tag{7}\\]
This modification ensures that \\(\\mathcal{A}(\\mathbf{X}^{\\mathrm{S}})\\) and \\(\\mathbf{X}^{\\mathrm{T}}\\) are projected into an identical space constructed by \\(\\mathbf{W}^{\\mathrm{T}}\\), and that the projected features are close to each other. Furthermore, to avoid introducing the adaption layer \\(\\mathcal{A}\\), we construct a core knowledge convolution (Core Knowledge Conv) operator by sampling the weighting factor \\(\\mathbf{W}^{\\mathrm{T}}\\). We denote the sampling process as \\(\\mathcal{S}(\\cdot)\\). In the process, we first obtain the channel dimension \\(d\\) of the student feature \\(\\mathbf{X}^{\\mathrm{S}}\\), then sample the top-\\(d\\) values along the 'in_channel' axis from \\(\\mathbf{W}^{\\mathrm{T}}\\) based on their absolute values. Finally, we obtain the sampled weighting factor \\(\\mathcal{S}(\\mathbf{W}^{\\mathrm{T}})\\). In this context, we rewrite the feature loss given in Eq. (7) as
\\[\\mathcal{L}_{\\mathrm{feat}} =||\\mathbf{\\hat{Y}}^{\\mathrm{CT}}-\\mathbf{\\hat{Y}}^{\\mathrm{T}} ||_{2}^{2} \\tag{8}\\] \\[=||\\mathtt{Conv}(\\mathbf{X}^{\\mathrm{S}};\\mathcal{S}(\\mathbf{W}^ {\\mathrm{T}}))-\\mathtt{Conv}(\\mathbf{X}^{\\mathrm{T}};\\mathbf{W}^{\\mathrm{T}})) ||_{2}^{2},\\]
where \\(\\mathbf{\\hat{Y}}^{\\mathrm{CT}}\\) denotes the output of core knowledge convolution. When using this feature loss, we keep the weighting factor \\(\\mathbf{W}^{\\mathrm{T}}\\) fixed and only compute the gradient with respect to the student feature \\(\\mathbf{X}^{\\mathrm{S}}\\).
**Mathematical Foundation of CoreKD.** We first explain the mathematical foundation of traditional feature distillation and analyze its weaknesses. Then, we introduce the mathematical foundation of our CoreKD. Finally, we compare the results of our CoreKD with those of the traditional one.
Traditional feature distillation uses a convolution layer to align feature channel numbers between student and teacher models. We denote the convolution layer as a function \\(\\mathcal{A}(\\cdot)\\) in the Eq. (6). Next, we denote the weighting factor of \\(\\mathcal{A}(\\cdot)\\) as \\(\\mathbf{A}\\in\\mathbb{R}^{d^{\\prime}\\times d}\\). This function is used to convert an input feature \\(\\mathbf{X}^{\\mathrm{S}}\\in\\mathbb{R}^{h\\times w\\times d}\\) into an output feature \\(\\mathbf{Z}\\in\\mathbb{R}^{h\\times w\\times d}\\), i.e., \\(\\mathbf{Z}=\\mathcal{A}(\\mathbf{X}^{\\mathrm{S}})\\). We denote the vector at an arbitrary spatial location of \\(\\mathbf{X}^{\\mathrm{S}}\\) as \\(\\mathbf{x}^{\\mathrm{S}}\\in\\mathbb{R}^{d\\times 1}\\), the \\(i\\)th row vector in the weighting factor \\(\\mathbf{A}\\) as \\(\\mathbf{a}_{i}\\in\\mathbb{R}^{1\\times d}\\), and the corresponding value in \\(\\mathbf{Z}\\) as \\(z_{i}\\in\\mathbb{R}^{1\\times 1}\\). The mathematical relation between \\(\\mathbf{x}^{\\mathrm{S}}\\), \\(\\mathbf{a}_{i}\\), and \\(z_{i}\\) can be represented as
\\[z_{i}=\\mathtt{Conv}(\\mathbf{x}^{\\mathrm{S}};\\mathbf{a}_{i})=\\mathbf{a}_{i} \\cdot\\mathbf{x}^{\\mathrm{S}}, \\tag{9}\\]
where the operator '\\(\\cdot\\)' indicates a dot product. The dot product computation can be viewed as the projection of the vector \\(\\mathbf{x}^{\\mathrm{S}}\\) onto the vector \\(\\mathbf{a}_{i}\\), as shown in Fig. 11 (a). We can infer that the generation of \\(z_{i}\\) is related to the \\(\\mathbf{a}_{i}\\) and \\(\\mathbf{x}^{\\mathrm{S}}\\) but has no relation to the teacher's features. This implies that the traditional feature distillation merely focuses on enforcing the student to learn all information from the teacher without considering whether the teacher's features are beneficial to downstream tasks.
On the contrary, our CoreKD uses the weighting factor \\(\\mathbf{W}^{\\mathrm{T}}\\) of the teacher model to align feature channel numbers between the student and teacher models, as shown in Eq. (8). We denote the \\(i\\)th row vector of \\(\\mathbf{W}^{\\mathrm{T}}\\) as \\(\\mathbf{w}^{\\mathrm{T}}\\) and the vector at an arbitrary spatial location of \\(\\mathbf{X}^{\\mathrm{T}}\\) as \\(\\mathbf{x}^{\\mathrm{T}}\\). Then we can write the \\(i\\)th loss value of \\(\\mathcal{L}_{\\mathrm{feat}}\\) as
\\[\\ell_{\\mathrm{feat}}^{i} =\\mathtt{Conv}(\\mathbf{x}^{\\mathrm{S}};\\mathcal{S}(\\mathbf{w}_{i} ^{\\mathrm{T}}))-\\mathtt{Conv}(\\mathbf{x}^{\\mathrm{T}};\\mathbf{w}_{i}^{ \\mathrm{T}}) \\tag{10}\\] \\[=\\mathcal{S}(\\mathbf{w}_{i}^{\\mathrm{T}})\\cdot\\mathbf{x}^{ \\mathrm{S}}-\\mathbf{w}_{i}^{\\mathrm{T}}\\cdot\\mathbf{x}^{\\mathrm{T}}.\\]
This loss value calculation process is illustrated in Fig. 11 (b). From the above analyses, we have two key observations: 1) our CoreKD projects the student and teacher features into an identical space constructed by \\(\\mathbf{W}^{\\mathrm{T}}\\), and 2) our CoreKD does not enforce the student feature to be the same as the teacher feature but rather focuses on minimizing the projected distances. Since the values within the weighting factor \\(\\mathbf{W}^{\\mathrm{T}}\\) reflect the importance of teacher features, our CoreKD enables the student model to learn beneficial information for downstream tasks from the teacher model.
Since the experimental results involve the introduction of both datasets and implementation details, we arrange the comparison results of our CoreKD with the traditional feature distillation in the experimental section. For details, please refer to Section IV-C.
### _Loss Function_
Our efficient multispectral early-fusion (EME) single-branch model is trained using all the losses described above. The total loss is
\\[\\mathcal{L}_{\\mathrm{total}}=\\mathcal{L}_{\\mathrm{cls}}+\\mathcal{L}_{\\mathrm{ reg}}+\\mathcal{L}_{\\mathrm{weak}}+\\mathcal{L}_{\\mathrm{feat}}, \\tag{11}\\]
where \\(\\mathcal{L}_{\\mathrm{cls}}\\) and \\(\\mathcal{L}_{\\mathrm{reg}}\\) represent the classification and regression losses defined by a detector [22, 23, 2], respectively. \\(\\mathcal{L}_{\\mathrm{weak}}\\) is the summation of weakly supervised losses defined in Eq. (4) and Eq. (5):
\\[\\mathcal{L}_{\\mathrm{weak}}=\\mathcal{H}(\\mathbf{\\tilde{q}}_{\\mathrm{ad}}, \\mathbf{\\hat{q}}_{\\mathrm{bb}})+\\mathcal{H}(\\mathbf{\\tilde{q}}_{\\mathrm{bb}}, \\mathbf{\\hat{q}}_{\\mathrm{ad}})+\\mathcal{H}(\\mathbf{G},\\mathbf{\\hat{G}}).\\]
Fig. 11: Schematic diagram of the mathematical foundation of feature distillation: (a) Convolution operation in the traditional feature distillation; (b) The loss calculation process in our CoreKD.
Fig. 10: Weighting factor histograms of the teacherβs head module in Fig. 9. (a), (b), and (c) correspond to the level-0, level-1, and level-2 convolution weighting factor histograms, respectively.
## IV Experiments
### _Experimental Setup_
**Datasets.** Our experiments are conducted on the M3FD dataset [1] and FLIR dataset [39]. **M3FD dataset** contains 4200 pairs of RGB and thermal images. These image pairs are well aligned. The dataset contains 6 classes of objects: 'Person', 'Car', 'Bus', 'Motorcycle', 'Traffic Light', and 'Truck'. Since this dataset doesn't provide unified data splits, previous works have used a random splitting approach to determine the train and validation sets [1]. However, images in this dataset are sampled from video sequences, meaning that two adjacent frames may contain identical content. In this context, the random splitting approach results in information leakage between the train and validation sets. To address this problem, we first manually divide the dataset into 73 video segments based on different scenes. Then, we collect the first 70% of images in each video segment as the train set and the remaining images as the validation set. Finally, we obtain 2905 and 1295 pairs of RGB-T images in the train and validation sets, respectively. We name this data split 'M3FD-zxSplit' and release it to the public1. For the performance evaluation in Section IV-B, we use this data split. When comparing with state-of-the-art approaches in Section IV-D, we employ both 'M3FD-zxSplit' and random splitting. Our random splitting refers to randomly selecting 80% images as the train set and the remaining images as the validation set. **FLIR dataset** originally contains unaligned RGB-T image pairs. The work [49] develops a data-processing approach to align these images and obtain 7381, 1056, and 2111 image pairs in the train, validation, and test sets. This dataset contains 3 classes: 'Person', 'Bicycle' and 'Car'.
Footnote 1: [https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection](https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection)
**Evaluation Metrics.** We use the standard mean Average Precision (mAP) with IoU thresholds ranging from 0.5 to 0.95 across various object scales as metrics.
**Inference Efficiency Evaluations.** We assess the inference efficiency of our method (Python implementation) on the edge device NVIDIA AGX Orin with 64GB memory. We also evaluate the complexity of our method using FLOPs and the number of parameters. All results are presented in Tables I and II.
**Implementation Details.** We incorporate our three key modules into commonly-used one-stage detectors, including RetinaNet [22], GFL [23], and YOLOv5 [2]. For RetinaNet and GFL, we adopt the implementations in MMDetection toolbox [50]. For YOLOv5, we use its official impletration [2].
In the early-fusion strategy based on RetinaNet [22] and GFL [23] detectors, we use ResNet-50 [48] as the backbone network. For a fair comparison, we use the same backbone network in the medium-fusion strategy. Notably, in the CoreKD technique, the teacher model utilizes ResNet-101 [48] as the backbone network. For both strategies, we train for 12 epochs using the SGD optimizer with a batch size of 4. The initial learning rate is set to 0.01 and is decayed by 0.1 at epochs 8 and 11. Random horizontal flipping is employed as a data augmentation technique.
For the early-fusion strategy based on YOLOv5, we use YOLOv5-small as the baseline detector, and we use YOLOv5-large to construct a teacher model in the CoreKD technique. For both strategies, we train for 36 epochs with a batch size of 16. We keep all other hyperparameters consistent with the official settings of the YOLOv5 repository [2].
To standardize data for RetinaNet and GFL detectors, we calculate the mean value and standard deviation of RGB and thermal images for the M3FD and FLIR datasets. All experiments use the 640 \\(\\times\\) 512 image resolution. For the M3FD dataset, we obtain \\(\\text{mean}_{\\text{rgb}}\\) = [128.2, 129.3, 125.3], \\(\\text{std}_{\\text{rgb}}\\) = [49.1, 50.2, 53.5], \\(\\text{mean}_{\\text{t}}\\) = [84.1, 84.1, 84.1], and \\(\\text{std}_{\\text{t}}\\) = [50.6, 50.6, 50.6]. For the FLIR dataset, we obtain \\(\\text{mean}_{\\text{rgb}}\\) = [149.4, 148.7, 141.7], \\(\\text{std}_{\\text{rgb}}\\) = [49.3, 52.8, 59.0], \\(\\text{mean}_{\\text{t}}\\) = [135.7, 135.7], and \\(\\text{std}_{\\text{t}}\\) = [63.6, 63.6, 63.6]. For the YOLOv5 detector, we normalize both RGB and thermal images to the range of [0, 1] following its official implementations.
### _Performance Evaluation of Proposed Modules_
Table I and Table II present the performance of our method on the M3FD [1] and FLIR [39] datasets. Key observations include: (1) the medium-fusion strategy adds more parameters and FLOPs compared to the early-fusion strategy; (2) the medium-fusion strategy achieves better performance compared to single-modality inputs, whereas the plain early-fusion strategy does not consistently improve performance; (3) our EME method, incorporating the ShaPE module, weakly supervised learning, and CoreKD techniques into the plain early-fusion strategy, achieves significant performance improvement without significantly increasing parameters and FLOPs; (4) the inference time of our EME method is longer than that of the baseline method, since the structural similarity computation process has not been optimized when calculating the self-gating mask; (5) our EME method can outperform the medium-fusion strategy in both performance and efficiency to some extent; (6) Both architectures: \"Baseline + ShaPE + WeakSup.\" and \"Baseline + ShaPE + WeakSup. + CoreKD\" have the same FLOPs, parameters, and inference time as \"Baseline + ShaPE\". This is because both the weakly supervised learning method and CoreKD are removed in the inference phase, while only the ShaPE module is retained.
Fig. 12 and Fig. 13 present visualization results for two example scenes from M3FD [1] and FLIR [39] datasets, respectively. We observe that false positives or false negatives in the single-modality results may affect the plain early-fusion strategy. For instance, the person missed in Fig. 12 (e) is also absent in Fig. 12 (g), despite being detected in Fig. 12 (f). Moreover, false positives in Fig. 13 (f) affect the detection results of plain early-fusion, as shown in Fig. 13 (g). These phenomena confirm that the problem of information interference is a key obstacle to performance in the plain early-fusion strategy. Clearly, our EME effectively alleviates this problem.
\\begin{table}
\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} & Detector & FLOPs (\\(\\downarrow\\)) & Parameter (\\(\\downarrow\\)) & Time (\\(\\downarrow\\)) & mAP (\\(\\uparrow\\)) & mAP50 (\\(\\uparrow\\)) & Person (\\(\\uparrow\\)) & Biscple (\\(\\uparrow\\)) & Car (\\(\\uparrow\\)) \\\\ \\hline RGB & RetainNet-Res50 & 61.893G & 36.434M & 0.106s & 28.1040.00 & 59.4730.12 & 44.9340.39 & 55.7040.08 & 77.8040.08 \\\\ Thermal & RetainNet-Res50 & 61.893G & 36.434M & 0.106s & 35.3540.05 & 70.9340.05 & 62.174.01 & 66.3740.09 & 84.2740.05 \\\\ \\hline RGB-T Medium Fusion & RetainNet-Res50 & 94.611G & 47.852M & 0.170s & 35.3620.08 & 71.5730.05 & 61.9342.29 & 67.6041.14 & 85.1720.09 \\\\ \\hline Baseline: RGB-T Early Fusion & RetainNet-Res50 & 62.164G & 36.434M & 0.110s & 37.4740.05 & 67.9340.05 & 60.7040.22 & 63.7347.09 & 84.3740.05 \\\\ \\hline \\# ShapF & RetainNet-Res50 & 62.218G & 36.434M & 0.149s & 38.7040.14 & 71.6040.22 & 61.4004.54 & 66.0636.36 & 84.7040.22 \\\\ \\hline \\(\\text{+ ShapF}\\) + WeakSup. & RetainNet-Res50 & 62.218G & 36.434M & 0.149s & 38.5040.22 & 72.9340.27 & 62.9040.66 & 67.8840.09 & 85.0402.21 \\\\ \\hline \\(\\text{+ ShapF}\\) + WeakSup. & 62.218G & 36.434M & 0.149s & 38.5040.47 & 72.9340.39 & 62.7240.26 & 69.2340.06 & 69.2340.02 \\\\ \\hline RGB & \\multirow{2}{*}{GFL-Res50} & 61.932G & 32.270M & 0.110s & 31.734\\(\\pm\\)0.12 & 63.7740.05 & 51.70+0.08 & 57.8740.25 & 81.7740.05 \\\\ \\cline{2-11} Thermal & \\multirow{2}{*}{GRL-Res50} & 61.932G & 32.270M & 0.110s & 42.4040.14 & 75.0740.05 & 69.0840.16 & 68.4740.24 & 86.940.05 \\\\ \\hline RGB-T Medium Fusion & Get-Res50 & 94.110G & 43.419M & 0.717s & 42.6040.08 & 76.0730.21 & 70.040.19 & 70.5740.41 & 87.6340.05 \\\\ \\hline Baseline: RGB-T Early Fusion & Get-Res50 & 61.663G & 32.271M & 0.114s & 41.902.22 & 74.740.17 & 69.7040.33 & 67.7340.26 & 87.040.00 \\\\ \\hline \\(\\text{+ ShapF}\\) & \\multirow{2}{*}{GFL-Res50} & 61.718G & 32.2271M & 0.151s & 42.0420.16 & 25.7740.17 & 69.9740.21 & 70.1340.37 & 87.2340.09 \\\\ \\hline \\(\\text{+ ShapF}\\) + WeakSup. & 66.6850 & 61.718G & 32.271M & 0.151s & 42.9310.24 & 26.0902.22 & 71.3040.14 & 71.0480.50 & 87.9740.09 \\\\ \\hline \\(\\text{+ ShapF}\\) + WeakSup. & 66.6850 & 61.718G & 32.271M & 0.151s & 44.0440.00 & **78.7140.05** & 73.0430.05 & 72.6430.17 & 88.8840.00 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: Inference efficiency and detection performance on the M3FD dataset [1]. The inference time is evaluated on an edge device: NVIDIA AGX Orin. The best results in the mAP and mAP50 columns are highlighted in bold and marked in **red**, while the second best ones are underlined and marked in **green**. All detection results are obtained by running three independent experiments. The mean value and standard deviation of these results are reported.
Fig. 12: Detection results of the GFL [23] detector on two example scenes from the M3FD [1] dataset. (a) and (c) display results using only RGB images. (b) and (f) show results using only thermal images. (c) and (g) demonstrate results using the plain RGB-T early-fusion strategy. (d) and (h) depict results using our EME method. Solid boxes represent detection results. Green dashed boxes mark missed objects (false negatives) while yellow dashed boxes mark false positives.
\\begin{table}
\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c} & Detector & FLOPs (\\(\\downarrow\\)) & Parameter (\\(\\downarrow\\)) & Time (\\(\\downarrow\\)) & mAP50 (\\(\\uparrow\\)) & Person (\\(\\uparrow\\)) & Biscple (\\(\\uparrow\\)) & Car (\\(\\uparrow\\)) \\\\ \\hline RGB & RetainNet-Res50 &
datasets, respectively. For comprehensive comparisons, we adopt RetinaNet-Res50 and GFL-Res50 as baseline detectors in these two tables. From the results, we can observe that our CoreKD consistently achieves superior performance compared to traditional feature distillation. For example, our CoreKD (72.23%) obtains a 1.26% mAP50 absolute gain over the traditional one (70.97%) when using RetinaNet-Res50 on the FLIR dataset.
**Backbone Network.** We evaluate our EME method using ResNet-101 [48] as the backbone network on the M3FD and FLIR datasets, and present the results in Table V. We observe that detectors using ResNet-101 consistently achieve better performance than those using ResNet-50. For example, RetinaNet with ResNet-101 (74.87%) obtains a 2.64% mAP50 absolute gain over ResNet-50 (72.23%).
### _Comparison with the State-of-The-Art Approaches_
We use the one-stage YOLOv5 [2] detector as the baseline, and incorporate our proposed modules to construct the effective multispectral early-fusion (EME) model. Table VI and Table VII compares our EME and previous state-of-the-art approaches on M3FD [1] and FLIR [39] datasets.
In Table VI, we compare our EME with 10 state-of-the-art image-fusion-based object detection approaches [1, 30, 51, 52, 53, 54, 55, 56, 57, 51, 52, 53, 57]. We first generate fused images based on their official implementations, and then train YOLOv5 [2] using these fused images with the same training settings. The results show that our EME achieves state-of-the-art performance. We observe that the results in Table VI (a) are obviously better than those in Table VI (b). This demonstrates that random splitting causes information leakage and makes it difficult to improve performance. Fig. 14 presents an example scene for visualization. Compared to other approaches, a weakness of our EME detector is that it doesn't generate a fused image for direct visualization. This is because our method focuses on detection rather than image fusion. We will address this issue in future work.
In Table VII, we compare our EME with 13 multispectral object detection approaches. These approaches include (1) medium-fusion strategies, such as CBF [58], MCG [59], MUN [59], CFR [61], GAFF [15], SMPD [63], MSAT [65], CSAA [66], and MFPT [67]; (2) domain adaptation and single-modality detection approaches, such as ODS [60], BU [62], and ThDe [64]; and (3) late-fusion strategy [28]. The results show that our EME also achieves state-of-the-art performance on the FLIR dataset [39].
### _Comparison of Inference Efficiency_
We compare the inference efficiency of our EME method with previous state-of-the-art approaches on an edge device: the NVIDIA AGX Orin with 64GB of memory. We select open-source approaches for comparison and adopt YOLOv5-small as the baseline detector for all methods.
Table VIII presents the FLOPs, number of parameters, and inference time for various methods. Experimental results show that our EME method is the fastest. Interestingly, we notice that a reduction in FLOPs does not directly lead to a similar reduction in the inference time of an approach. This phenomenon may be attributed to the frequent memory access by operators, as confirmed in PConv [68]. This observation inspires us to further speed up our EME method by reducing memory access in the future.
## V Conclusions
In this paper, we propose the effective multispectral early-fusion (EME) detector, which achieves both high performance and efficiency. We identify and address performance obstacles in a plain early-fusion strategy, such as information interference, domain gaps, and weak feature representation, by proposing solutions including shape-priority early-fusion modules, weakly supervised learning, and core knowledge
\\begin{table}
\\begin{tabular}{l|l|c|c|c} \\hline \\hline Datasets & Detector & Backbone & FLOPs & mAP (\\(\\uparrow\\)) & mAP50 (\\(\\uparrow\\)) \\\\ \\hline \\multirow{4}{*}{M3FD} & RetinaNet & ResNet-50 & 62.21SG & 33.53Β±0.17 & 53.23Β±0.09 \\\\ & RetinaNet & ResNet-101 & 65.392G & 34.23Β±0.12 & 54.63Β±0.45 \\\\ \\cline{2-5} & GFL & ResNet-50 & 61.718G & 37.03Β±0.09 & 57.70Β±0.08 \\\\ \\cline{2-5} & GFL & ResNet-101 & 64.892G & 37.37Β±0.12 & 58.60Β±0.08 \\\\ \\hline \\multirow{4}{*}{FLIR} & RetinaNet & ResNet-50 & 62.21SG & 38.83Β±0.17 & 72.23Β±0.31 \\\\ & RetinaNet & ResNet-101 & 65.392G & 40.67Β±0.05 & 74.87Β±0.34 \\\\ \\cline{1-1} \\cline{2-5} & GFL & ResNet-50 & 61.718G & 44.00Β±0.00 & 78.17Β±0.05 \\\\ \\cline{1-1} \\cline{2-5} & GFL & ResNet-101 & 64.892G & 44.47Β±0.05 & 79.57Β±0.05 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE V: Results of our EME method on the M3FD and FLIR datasets using different baseline detectors and backbone networks.
\\begin{table}
\\begin{tabular}{l|l|c|c} \\hline \\hline Method & Detector & mAP (\\(\\uparrow\\)) & mAP50 (\\(\\uparrow\\)) \\\\ \\hline Baseline: & \\multirow{2}{*}{RetinaNet-Res50} & \\multirow{2}{*}{32.03Β±0.05} & \\multirow{2}{*}{50.70Β±0.16} \\\\ Plain RGB-T Early Fusion & & & \\\\ \\hline Traditional Feature Distill+ & \\multirow{2}{*}{RetinaNet-Res50} & \\multirow{2}{*}{32.17Β±0.12} & \\multirow{2}{*}{52.43Β±0.05} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline CoreKD+ & \\multirow{2}{*}{RetinaNet-Res50} & \\multirow{2}{*}{33.53Β±0.17} & \\multirow{2}{*}{53.23Β±0.09} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline Baseline: & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{33.50Β±0.28} & \\multirow{2}{*}{52.77Β±0.25} \\\\ Plain RGB-T Early Fusion & & & \\\\ \\hline Traditional Feature Distill+ & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{35.83Β±0.17} & \\multirow{2}{*}{57.07Β±0.09} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline CoreKD+ & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{37.03Β±0.09} & \\multirow{2}{*}{57.70Β±0.08} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: Comparison of traditional feature distillation with our CoreKD on the FLIR dataset [39].
\\begin{table}
\\begin{tabular}{l|l|c|c|c} \\hline \\hline Method & Detector & mAP (\\(\\uparrow\\)) & mAP50 (\\(\\uparrow\\)) \\\\ \\hline Baseline: & \\multirow{2}{*}{RetinaNet-Res50} & \\multirow{2}{*}{32.03Β±0.05} & \\multirow{2}{*}{50.70Β±0.16} \\\\ Plain RGB-T Early Fusion & & & \\\\ \\hline Traditional Feature Distill+ & \\multirow{2}{*}{RetinaNet-Res50} & \\multirow{2}{*}{33.53Β±0.17} & \\multirow{2}{*}{53.23Β±0.09} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline CoreKD+ & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{33.53Β±0.17} & \\multirow{2}{*}{53.23Β±0.09} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline Baseline: & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{33.50Β±0.28} & \\multirow{2}{*}{52.77Β±0.25} \\\\ Plain RGB-T Early Fusion & & & \\\\ \\hline Traditional Feature Distill+ & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{35.83Β±0.17} & \\multirow{2}{*}{57.07Β±0.09} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline CoreKD+ & \\multirow{2}{*}{GFL-Res50} & \\multirow{2}{*}{37.03Β±0.09} & \\multirow{2}{*}{57.70Β±0.08} \\\\ Baseline+ShaPE+WeakSup. & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE III: Comparison of traditional feature distillation with our CoreKD on the M3FD dataset [1].
distillation. Extensive experiments on representative datasets demonstrate the effectiveness and efficiency of our EME detector.
The main advantage of our EME detector is that it improves the performance of an efficient single-branch early-fusion strategy without significantly increasing its computational burden. We demonstrate that our EME detector has similar FLOPs and parameters to a plain early-fusion strategy, while achieving better performance than a cumbersome two-branch structure. We also show that our EME detector has higher inference efficiency than the two-branch structure on an edge device: the NVIDIA AGX Orin.
A limitation of our current EME detector is its inefficient two-stage training paradigm in the knowledge distillation technique. In the future, we will work towards an optimized one-stage paradigm to accelerate the training process and further improve detection accuracy.
\\begin{table}
\\begin{tabular}{l|c c c c c c c c c c c c c} \\hline \\hline & Themal [2] & RGB [2] & AUUF [51] & CDDF [30] & DDoGAN [52] & DIVE [31] & DenseF [53] & PSF [54] & RFN [55] & SeAF [56] & TarDAL [1] & UDF [57] & EME (Ours) \\\\ \\hline mAP & 49.10 & 52.40 & 53.30 & 53.00 & 52.20 & 52.70 & 53.40 & 53.10 & 53.50 & 53.10 & 52.50 & 53.40 & **54.004.28** \\\\ mAP50 & 77.30 & 81.90 & 81.90 & 80.90 & 81.60 & 81.50 & 81.70 & 82.00 & 81.70 & 82.20 & 81.00 & 81.90 & **82.906.37** \\\\ \\hline Person & 79.30 & 68.40 & 76.70 & 76.30 & 73.60 & 74.50 & 76.50 & 76.70 & 75.30 & 77.00 & 79.10 & 77.00 & 79.530.26 \\\\ Car & 87.90 & 90.80 & 91.00 & 90.70 & 91.10 & 91.40 & 90.80 & 91.00 & 91.00 & 90.50 & 91.20 & 91.904.02 \\\\ Bus & 87.20 & 92.20 & 90.00 & 90.10 & 90.70 & 91.60 & 89.40 & 90.10 & 89.40 & 91.20 & 89.40 & 90.70 & 89.8040.45 \\\\ Motor & 70.00 & 74.00 & 72.60 & 69.20 & 74.80 & 73.50 & 72.80 & 73.30 & 73.30 & 72.70 & 70.30 & 71.30 & 74.870.95 \\\\ TrafficLight & 55.90 & 80.30 & 77.40 & 75.40 & 76.90 & 74.80 & 77.20 & 78.20 & 77.40 & 77.60 & 72.70 & 77.70 & **77.401.13** \\\\ Track & 83.40 & 85.70 & 83.70 & 83.10 & 82.90 & 83.40 & 82.90 & 82.90 & 83.90 & 84.10 & 84.00 & 83.60 & **88.004.09** \\\\ \\hline \\hline \\multicolumn{8}{c}{(b) Dataset Splitting Method: M2-mSSplit} \\\\ \\hline & Thermal [2] & RGB [2] & AUUF [51] & CDDF [30] & DDoGAN [52] & DIVE [31] & DenseF [53] & PSF [54] & RFN [55] & SeAF [56] & TerDAL [1] & UDF [57] & EME (Ours) \\\\ \\hline mAP & 34.90 & 36.10 & 38.30 & 38.60 & 37.10 & 37.10 & 38.90 & 38.00 & 38.20 & 38.90 & 39.10 & 38.70 & **41.109.29** \\\\ mAP50 & 57.20 & 60.20 & 62.00 & 61.90 & 61.00 & 60.80 & 62.40 & 61.10 & 61.30 & 62.20 & 61.90 & 61.90 & **66.234.40** \\\\ \\hline Person & 74.60 & 55.90 & 72.20 & 71.90 & 67.30 & 67.60 & 72.30 & 71.70 & 70.50 & 72.50 & 75.50 & 72.40 & 37.2340.26 \\\\ Car & 80.20 & 84.80 & 85.50 & 85.60 & 84.90 & 85.20 & 85.90 & 85.50 & 85.80 & 85.00 & 85.50 & 85.50 & 85.50 & 85.50 & 85.50 & 85.1734.019 \\\\ Bus & 58.30 & 65.70 & 58.60 & 61.80 & 61.60 & 59.80 & 61.40 & 58.30 & 61.30 & 61.50 & 60.90 & 60.10 & 62.3341.96 \\\\ Motor & 48.00 & 45.10 & 49.10 & 47.60 & 49.00 & 48.70 & 49.60 & 45.80 & 44.60 & 47.50 & 46.80 & 50.80 & 55.3340.21 \\\\ TrafficLight & 27.30 & 56.80 & 49.80 & 48.70 & 49.10 & 51.20 & 48.60 & 50.90 & 49.70 & 50.80 & 46.90 & 48.00 & 55.3340.37 \\\\ Track & 54.80 & 52.70 & 56.70 & 55.50 & 53.80 & 52.60 & 56.80 & 54.70 & 55.80 & 55.70 & 56.70 & 54.70 & **60.106.16** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE VI: Comparisons with state-of-the-art approaches on the M3FD dataset [1]. The best results are highlighted in bold and marked in **red**, while the second-best ones are underlined and marked in **green**. The detection results of our EME are obtained by running three independent experiments. The mean values and standard deviations of these results are reported.
\\begin{table}
\\begin{tabular}{l|c|c|c} \\hline \\hline Method & FLOPs & Parameters & Time (seconds) \\\\ \\hline AUIF [51] & 12.185G & 7.037M & 5.12s \\\\ \\hline CDDF [30] & 2816.279G & 8.214M & 9.507s \\\\ \\hline DensF [53] & 151.596G & 7.100M & 1.769s \\\\ \\hline PSF [54] & 939.024G & 52.925M & 2.512s \\\\ \\hline RFN [55] & 1859.908G & 9.759M & 3.473s \\\\ \\hline SeAF [56] & 272.945G & 7.192M & 1.344s \\\\ \\hline TarDAL [1] & 478.474G & 7.323M & 1.446s \\\\ \\hline
**EME (Ours)** & 15.780G & 7.063M & 0.077s \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE VII: Comparisons with state-of-the-art approaches on the FLIR [39] dataset. The best results are highlighted in bold and marked in **red**, while the second-best ones are underlined and marked in **green**. The detection results of our EME are obtained by running three independent experiments. The mean values and standard deviations of these results are reported.
Fig. 14: Detection results of the YOLOv5 [2] detector on one example scene from the M3FD [1] dataset. (a) and (b) respectively show the results using only a thermal image and only an RGB image. (c)-(l) display the detection results using fused images obtained from 10 different image fusion approaches. (m) demonstrates the results using our EME method.
* [39] \"FREE FLIR Thermal Dataset for Algorithm Training,\" [https://www.flir.com/oem/adas/adas/dataset-form/](https://www.flir.com/oem/adas/adas/dataset-form/), 5, IV-A, IV-B, IV-B, II, 13, IV, IV-D, IV-D, VII
* [40] Z. Chen, Z. Zhang, X. Tan, Y. Qu, and Y. Xie, \"Unveiling the Power of CLIP in Unsupervised Visible-Infrared Person Re-Identification,\" in _Proceedings of the ACM International Conference on Multimedia_, 2023, pp. 3667-3675, 11-B
* [41] X. Yi, H. Xu, H. Zhang, L. Tang, and J. Ma, \"Text-IF: Leveraging Semantic Text Guidance for Degradation-Aware and Interactive Image Fusion,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, 2024, pp. 27 026-27 035. III-B
* [42] X. Yu, N. Dong, L. Zhu, H. Peng, and D. Tao, \"CLIP-Driven Semantic Discovery Network for Visible-Infrared Person Re-Identification,\" 2024. [Online]. Available: [https://arxiv.org/abs/2401.05806](https://arxiv.org/abs/2401.05806) III-B
* [43] Z. Wang, Y. Li, X. Chen, S.-N. Lim, A. Torralba, H. Zhao, and S. Wang, \"Detecting Everything in the Open World: Towards Universal Object Detection,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, June 2023, pp. 11 433-11 443. III-B
* [44] R. Abdelfattah, Q. Guo, X. Li, X. Wang, and S. Wang, \"CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification,\" in _Proceedings of the International Conference on Computer Vision_, 2023, pp. 1348-1357. III-B
* [45] Y. Zhong, J. Yang, P. Zhang, C. Li, N. Codella, L. H. Li, L. Zhou, X. Dai, L. Yuan, Y. Li, and J. Gao, \"RegionCLIP: Region-Based Language-Image Pretraining,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, June 2022, pp. 16 793-16 803. III-B
* [46] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu, \"Deep Mutual Learning,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, 2018, pp. 4320-4328. III-B
* [47] L. Yuan, F. Z. Yao, G. Li, T. Wang, and J. Feng, \"Revisiting Knowledge Distillation via Label Smoothing Regularization,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, 2020, pp. 3903-3911. III-B
* [48] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep Residual Learning for Image Recognition,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, 2016, pp. 770-778. III-B, IV-A, IV-A, IV-C
* [49] V. Sam, K. Ali, M. Christian, K. Laurent, and E. Lutz, \"Robust Environment Perception for Automated Driving: A Unified Learning Pipeline for Visual-Infrared Object Detection,\" in _IEEE Intelligent Vehicles Symposium_, 2022, pp. 367-374. IV-A
* [50] MMDDetection Contributors, \"OpenMMLA Detection Toolbox and Benchmark,\" 2018. [Online]. Available: [https://github.com/open-mmlab/mmediction](https://github.com/open-mmlab/mmediction) IV-A
* [51] Z. Zhao, S. Xu, J. Zhang, C. Liang, C. Zhang, and J. Liu, \"Efficient and Model-Based Infrared and Visible Image Fusion via Algorithm Unrolling,\" _IEEE Transactions on Circuits and Systems for Video Technology_, vol. 32, no. 3, pp. 1186-1196, 2022. IV-D, VI, VIII
* [52] J. Ma, H. Xu, J. Jiang, X. Mei, and X.-P. Zhang, \"DDGGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion,\" _IEEE Transactions on Image Processing_, vol. 29, pp. 4980-4995, 2020. IV-D, VI
* [53] H. Li and X.-J. Wu, \"DenseFuse: A Fusion Approach to Infrared and Visible Images,\" _IEEE Transactions on Image Processing_, vol. 28, no. 5, pp. 2614-2623, 19:V-D, VI, VIII
* [54] L. Tang, H. Zhang, H. Xu, and J. Ma, \"Rethinking the Necessity of Image Fusion in High-Level Vision Tasks: A Practical Infrared and Visible Image Fusion Network Based on Progressive Semantic Injection and Scene Fidelity,\" _Information Fusion_, vol. 99, p. 101870, 2023. IV-D, VI, VIII
* [55] H. Li, X.-J. Wu, and J. Kittler, \"RFN-Nest: An End-to-End Residual Fusion Network for Infrared and Visible Images,\" _Information Fusion_, vol. 73, pp. 72-86, 2021. IV-D, VI, VIII
* [56] L. Tang, J. Yuan, and J. Ma, \"Image Fusion in the Loop of High-Level Vision Tasks: A Semantic-Aware Real-Time Infrared and Visible Image Fusion Network,\" _Information Fusion_, vol. 82, pp. 28-42, 2022. IV-D, VI, VIII
* [57] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, \"U2Fusion: A Unified Unsupervised Image Fusion Network,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2020. IV-D, VI
* [58] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, \"CBAM: Convolutional Block Attention Module,\" in _Proceedings of the European Conference on Computer Vision_, 2018. IV-D, VII
* [59] C. Devaguptapu, N. Akolekar, M. M Sharma, and V. N Balasubramanian, \"Borrowing from Anywhere: Pseudo Multi-Modal Object Detection in Thermal Imagery,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops_, 2019, pp. 0-0. IV-D, IV-D, VII
* [60] F. Munir, S. Azam, M. A. Rafique, A. M. Sheri, and M. Jeon, \"Thermal Object Detection using Domain Adaptation through Style Consistency,\" _ArXiv_, vol. abs/2006.00821, 2020. [Online]. Available: [https://api.semanticscholar.org/CorpusD.219176719](https://api.semanticscholar.org/CorpusD.219176719) V-D, VII
* [61] H. Zhang, E. Fromont, S. Lefevre, and B. Avignon, \"Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks,\" in _Proceedings of the International Conference on Image Processing_, 2020, pp. 276-280. IV-D, VII
* [62] M. Kieu, A. D. Bagdanov, and M. Bertini, \"Bottom-Up and Layerwise Domain Adaptation for Pedestrian Detection in Thermal Images,\" _ACM Transactions on Multimedia Computing, Communications, and Applications_, vol. 17, no. 1, 2021, IV-D, VII
* [63] Q. Li, C. Zhang, Q. Hu, P. Zhu, H. Fu, and L. Chen, \"Stabilizing Multispectral Pedestrian Detection with Evidential Hybrid Fusion,\" _IEEE Transactions on Circuits and Systems for Video Technology_, vol. 34, no. 4, pp. 3017-3029, 2024. IV-D, VII
* [64] Y. Cao, T. Zhou, X. Zhu, and Y. Su, \"Every Feature Counts: An Improved One-Stage Detector in Thermal Imagery,\" in _Proceedings of the International Conference on Computer and Communications_, 2019, pp. 1965-1969. IV-D, VII
* [65] S. You, X. Xie, Y. Feng, C. Mei, and Y. Ji, \"Multi-Scale Aggregation Transformers for Multispectral Object Detection,\" _IEEE Signal Processing Letters_, vol. 30, pp. 1172-1176, 2023. IV-D, VII
* [66] Y. Cao, J. Bin, J. Hamari, E. Blasch, and Z. Liu, \"Multimodal Object Detection by Channel Switching and Spatial Attention,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops_, 2023, pp. 403-411. IV-D, VII
* [67] Y. Zhu, X. Sun, M. Wang, and H. Huang, \"Multi-Modal Feature Pyramid Transformer for RGB-Infrared Object Detection,\" _IEEE Transactions on Intelligent Transportation Systems_, vol. 24, no. 9, pp. 9984-9995, 2023. IV-D, VII
* [68] J. Chen, S.-h. Kao, H. He, W. Zhuo, S. Wen, C.-H. Lee, and S.-H. G. Chan, \"Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks,\" in _Proceedings of the Conference on Computer Vision and Pattern Recognition_, June 2023, pp. 12 021-12 031. IV-E | Most recent multispectral object detectors employ a two-branch structure to extract features from RGB and thermal images. While the two-branch structure achieves better performance than a single-branch structure, it overlooks inference efficiency. This conflict is increasingly aggressive, as recent works solely pursue higher performance rather than both performance and efficiency. In this paper, we address this issue by improving the performance of efficient single-branch structures. We revisit the reasons causing the performance gap between these structures. For the first time, we reveal the information interference problem in the naive early-fusion strategy adopted by previous single-branch structures. Besides, we find that the domain gap between multispectral images, and weak feature representation of the single-branch structure are also key obstacles for performance. Focusing on these three problems, we propose corresponding solutions, including a novel shape-priority early-fusion strategy, a weakly supervised learning method, and a core knowledge distillation technique. Experiments demonstrate that single-branch networks equipped with these three contributions achieve significant performance enhancements while retaining high efficiency. Our code will be available at [https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection](https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection).
Multispectral object detection; feature fusion; weakly supervised learning; knowledge distillation | Summarize the following text. | 281 |
Isaac Neal\\({}^{1}\\), Sohan Seth\\({}^{1}\\)\\(*\\), Gary Watmough\\({}^{1}\\), Mamadou S. Diallo\\({}^{2}\\)
University of Edinburgh\\({}^{1}\\) and United Nations Children's Fund (UNICEF)\\({}^{2}\\)
{ineal,sseth,gary.watmough}@ed.ac.uk, [email protected]
## 1 Introduction
Accurate population maps are critical for planning infrastructure and distributing resources, including public health interventions and disaster relief, as well as monitoring well-being e.g. through the fulfillment of sustainable development goals [20]. Censuses provide a 'gold standard' population map by surveying every individual household nationally. However, this is an extremely expensive endeavor, and therefore they are typically conducted every ten years [21]. During the intercensal period projections of the population are available at a coarser _enumeration_ level that is estimated either using birth, death, and migration rates [20], or through combining sample household surveys using small area estimation [21]. Estimating finer-resolution intercensal population, e.g., over a 100 m grid, has received significant interest in the last decade, and several population maps, e.g. WorldPop [17], High-Resolution Settlement Layer (HRSL) [15, 23] and GRID3 [3], have been made available publicly to aid humanitarian initiatives. These approaches primarily use satellite imagery as a predictor, and can be broadly categorized as _census-dependent_ and _census-independent_ based on their use of census as response variable [25].
Census-independent population estimation approaches (SS2.1) using microcensus (survey data) are gaining prominence since they can improve the spatial and temporal resolution of census-dependent approaches (SS2.2). However, finding census-independent methods that are sustainable and transferable, and informative data sources that are reliable and easy to procure remain active areas of research. Existing approaches rely heavily on hand-crafted features that often require manual curation, and the features used in modelling vary significantly between publications and countries where population is being estimated, making themless sustainable for repeated use, and less transferable to other regions and countries. For example, these approaches often use objects in satellite imagery, e.g., buildings and cars, [7] and distribution of services, e.g., school density [16] and road networks [25], as indicators of population. Detecting building footprints usually requires manual annotation and curation while information on road networks and school density can be incomplete, unreliable, and difficult to procure.
Recent advances in _representation learning_ have demonstrated that features, or _representations_, automatically extracted from images through end-to-end training of a deep neural network can significantly improve the performance in many computer vision tasks in a sustainable manner by removing the need for handcrafted features [19]. Additionally, _transfer learning_ can leverage features learned from a vast amount of annotated data from a separate task to improve performance on tasks lacking sufficient annotated data [2]. Furthermore, _explainable AI_ has provided meaningful insight from these, so called 'black box', models to explain the decisions made by them, enhancing their transparency [18]. Representation learning can vastly simplify the problem of estimating population from satellite imagery by removing the need for handcrafted features, manual data procurement, and human supervision, thus improving the sustainability and transferability of the process. Additionally, transfer learning removes the need for large scale training data, allowing fine-tuning on limited microcensus with minimal computational resources. Finally, these methods provide interpretation of model outcome, promoting trust around the estimated population among the end-users.
We assess the utility of representation learning in census-independent population estimation from very-high-resolution (\\(\\leq\\)5 m) satellite imagery using a retrospective microcensus in two districts of Mozambique. To the best of our knowledge, we are the first to explore the potential of such approach, and in using both very-high (50 cm spatial) resolution satellite imagery and microcensus in this manner. We observe that the proposed approach is able to produce a reliable medium-resolution (100 m) population density map with minimal human supervision and computational resources. We find that this approach performs similar to the more elaborate approach of using building footprints to estimate population, and outperforms techniques using only public datasets to estimate population in our ROIs (Table 3). It also completely avoids manual annotation of satellite images and manual access to public datasets making it potentially more transferable to other countries using _only very-high-resolution satellite imagery and gridded microcensus_. Additionally, we observe that this approach learns to predict population in a reasonable manner by using built-up area as an indicator of population. We refer to this approach as _Sustainable Census-Independent Population Estimation_ or SCIPE (see Figure 1 for an illustration), with our core motivation being developing population estimation methods that are easy to use, computationally efficient, and that can produce frequent and explainable population maps with associated uncertainty values. This will help humanitarian organizations extrapolate local microcensus information to regional level with ease, and provide more confidence in using the estimated population map in conjunction with existing ones.
Figure 1: The figure illustrates our approach of sustainable census-independent population estimation, or SCIPE, using representation learning. Satellite images of surveyed grid tiles are mapped to vector representations through a pre-trained deep neural network, and a regression model is trained on the representation space to estimate population using microcensus. The pre-trained network can also be fine-tuned with microcensus to learn better representation indicative of population.
## 2 Background
In this section, we provide a detailed overview of the existing literature on census-independent population estimation (SS2.1 and Table 1) and the application of deep neural networks in intercensal census-dependent population estimation (SS2.2 and Table 2).
### Census-independent population estimation
_Census-dependent population estimation_, also known as _population disaggregation_ or _top-down estimation_, either uses census data to train a predictive model that can estimate population of a grid tile directly [20], or to train a model to estimate a weighted surface that can be used to distribute coarse resolution projected data across a finer resolution grid [6]. _Census-independent population estimation_, also known as _bottom-up estimation_, instead relies on locally conducted microcensuses to learn a predictive model that can estimate population at non-surveyed grid tiles.
**Weber et al.[26]** used very-high-resolution satellite imagery to estimate the number of under-5s in two states in northern Nigeria in three stages: first, by building a binary settlement layer at 8 m resolution using support vector machine (SVM) with \"various low-level contextual image features\" [26, SS2.3], second, by classifying \"blocks\" constructed from OpenStreetMap data using \"a combination of supervised image segmentation and manual correction of errors\" in 8 residential types (6 urban, 1 rural and 1 non-residential) [26, SS2.4], and finally, by modelling population count of each residential type with separate log-normal distributions using microcensus. The predictions were validated against a separate survey from the same region, and were found to be highly correlated with this data [26, SS3.3].
**Engstrom et al.**[7] used LASSO regularized Poisson regression and Random Forest models to predict village level population in Sri Lanka. The authors used a variety of remote sensing indicators at various resolutions as predictors, both coarser-resolution publicly available ones such as night time lights, elevation, slope, and tree cover, and finer-resolution proprietary ones such as built up area metrics, car and shadow shapefiles, and land type classifications [7, Table 1]. The authors observed that publicly available data can explain a large amount of variation in population density for these regions, particularly in rural areas, and the addition of proprietary object footprints further improved performance. Their population estimates were highly correlated with census counts at the village level [7, Table 4].
**Hillson et al.**[10] explored the use of 30 m resolution Landsat 5 thematic mapper (TM) imagery to estimate population densities and counts for 20 neighborhoods in the city of Bo, Sierra Leone. The authors used 379 candidate Landsat features generated manually, which was reduced to 159 covariates through \"trial-and-error\" [10, p. 10] and removal of highly correlated (Pearson's \\(\\rho>0.99\\)) pairs, and finally, an optimal regression model was learned using only 6 of these covariates [10, Table 7]. These estimates were then validated through leave-one-out cross-validation on the districts surveyed. The approach estimated
\\begin{table}
\\begin{tabular}{l l l l l l} \\hline \\hline ROI & Input Resolution & Output Resolution & Input Data Cost & Validation & MeAPE & \\(R^{2}\\) \\\\ \\hline Kano and Kaduna, & & & & & \\\\ Nigeria [26] & 0.5 m (Maxar) & 90 m & High (Maxar) & Survey (same region) & - & 0.98 \\\\ \\hline \\multirow{3}{*}{Sri Lanka [7]} & 10 m (Object shape data) & \\multirow{3}{*}{Village level} & Free (Landsat) & \\multirow{3}{*}{Train / test split} & \\multirow{3}{*}{28} & \\multirow{3}{*}{0.58} \\\\ & 12-30 m (Settlement Layer) & & & & \\\\ & 750 m (Night time lights) & & & & \\\\ Bo, Sierra Leone [10] & 30 m (Landsat) & City district level & Free (public data) & LOOCV & 11 & - \\\\ & 0.5 m (Maxar), & & Free (OSM, & & \\\\ Nigeria [16] & 100 m (WorldPop), & & WorldPop) & Train / test split & - & 0.26 \\\\ & \\multicolumn{2}{l}{density, household size)} & & High (Maxar) & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Summary of recent literature on census-independent population estimation. NOTE: ROI is region of interest, LOOCV is leave one out cross validation, MEAPEis median absolute percent error.
population density at the coarse neighborhood level with low relative error for most neighborhoods [10, Table 10].
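A hedged sketch of the covariate-pruning step described above, dropping one covariate from every pair with Pearson's \(\rho>0.99\) and then scoring a linear model with leave-one-out cross-validation, is shown below; the data are synthetic and the model is a stand-in for the authors' regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))                     # candidate covariates (synthetic)
X[:, 1] = X[:, 0] + 1e-4 * rng.normal(size=20)    # a near-duplicate column
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=20)

# Drop one covariate from every pair with |Pearson's rho| > 0.99.
corr = np.corrcoef(X, rowvar=False)
keep = []
for j in range(X.shape[1]):
    if all(abs(corr[j, k]) <= 0.99 for k in keep):
        keep.append(j)
X_pruned = X[:, keep]

# Leave-one-out cross-validation of a linear model on the pruned covariates.
scores = cross_val_score(LinearRegression(), X_pruned, y,
                         cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(len(keep), -scores.mean())
```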
**Leasure et al.**[16] used a hierarchical Bayesian model to estimate population for 100 m resolution grid cells nationally in Nigeria, and focused on "provid[ing] robust probabilistic estimates of uncertainty at any spatial scale" [16, p. 4]. The authors used the same settlement map as Weber et al. [26] to remove unsettled grid cells prior to population density estimation, and used additional geospatial covariates, including school density, average household size, and WorldPop gridded population estimates. WorldPop population estimates were generated using a census-dependent approach, so the proposed method in some sense integrates information from the census into otherwise census-independent population predictions. The predicted population estimates, however, were not highly correlated with the true population counts [16, Table 3].
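A highly simplified sketch of a hierarchical model for log population density in the spirit described above is given below, assuming PyMC (v4 or later) is available; the grouping variable, priors, and data are synthetic, and the actual model, covariates, and priors in [16] differ:

```python
import numpy as np
import pymc as pm

# Synthetic microcensus: log population density per tile, grouped by
# settlement type (0 or 1).
group = np.array([0, 0, 0, 1, 1, 1])
log_density = np.array([3.1, 2.9, 3.3, 1.2, 1.0, 1.4])

with pm.Model() as model:
    # National-level hyperpriors.
    mu_national = pm.Normal("mu_national", mu=0.0, sigma=5.0)
    sigma_type = pm.HalfNormal("sigma_type", sigma=2.0)

    # Settlement-type means are partially pooled towards the national mean.
    mu_type = pm.Normal("mu_type", mu=mu_national, sigma=sigma_type, shape=2)

    sigma_obs = pm.HalfNormal("sigma_obs", sigma=1.0)
    pm.Normal("obs", mu=mu_type[group], sigma=sigma_obs, observed=log_density)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```

Posterior samples from such a model provide the kind of uncertainty estimates over population density that can be aggregated to any spatial scale.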
### Deep Learning for Intercensal Population Estimation
There are several recent approaches that apply deep learning methods to intercensal population estimation using free and readily available high-resolution satellite imagery as opposed to relatively expensive very-high resolution imagery, and census as opposed to microcensus, potentially due to the prohibitive cost of collecting sufficient microcensus for training a custom deep neural network from scratch. HRSL uses very-high resolution imagery to focus on building footprint identification using a feedforward neural network and weakly supervised learning, and redistributes the census proportionally to the fraction of built-up area [23], but does not use census as the response variable.
**Doupe et al.**[6] used an adapted VGG [22] convolutional neural network (CNN) trained on a concatenation of low resolution Landsat-7 satellite images (7 channels) and night-time light images (1 channel). The VGG network was trained on observations generated from 18,421 population labeled enumeration areas from the 2002 Tanzanian census, and validated on observations generated from 7,150 labeled areas from the 2009 Kenyan census. The authors proposed using the output of the VGG network as a weighted surface for population disaggregation from regional population totals. This approach significantly outperformed AsiaPop (a precursor to WorldPop) in RMSE, %RMSE, and MAE evaluation metrics [6, Table 1].
**Robinson et al.**[20] trained an adapted VGG [22] CNN on Landsat-7 imagery from the year 2000 and US data from the year 2004, and validated it on Landsat and data from the year 2010. The authors split the US into 15 regions, and trained a model for each with around \\(\\sim\\)800,000 training samples in total. Instead of predicting population count directly, the authors classified image patches into log scale population bands, and determined final population count by the network output weighted average of band centroids. Existing approaches for projecting data outperformed the final network when validated against the 2010 US census [20, Table 1], however the fitted model displayed an understanding of population, evidenced through visualizing the images that produced the highest probabilities for each population band [20, Figure 5].
**Hu et al.**[12] generated population density maps at the village level in rural India using a custom VGG CNN based end-to-end learning. The authors used freely available high-resolution Landsat-8 imagery (30 m resolution, RGB channels only) and Sentinel-1 radar imagery (10 m resolution, converted to RGB)
\\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline ROI & Input Shape & Input Resolution & Output Resolution & Input Data Cost & Validation & \%RMSE & MAPE & \(R^{2}\) \\ \hline Tanzania [6] & \(32\times 32\times 8\) & 250 m (Landsat) & 8 km & Free & Train / test split & 51.5 & - & - \\ USA [20] & \(74\times 74\times 7\) & 15 m (Landsat) & 1 km & Free & Future census & - & 49.8 & 0.94 \\ India [12] & \(224\times 224\times 3\) & 20 m (Landsat) & 4.5 km & Free & Train / test split & 24.3 & 21.5 & 0.93 \\ \hline \hline \end{tabular}
\\end{table}
Table 2: Summary of recent literature on deep learning driven census-dependent intercensal population estimation. NOTE: Input Resolution indicates the spatial resolution of imagery input to model (i.e. after reprojection and resizing). %RMSE is percent root mean squared error, MAPE is mean absolute percent error.
images of villages as predictor, and respective population from the 2011 Indian census as response. The training set included 350,000 villages and validation set included 150,000 villages across 32 states, and the resulting model outperformed two previous deep learning based approaches[20, 6]. The authors observed that the approach performed better at a coarser district level resolution than a finer village level resolution [12, Table 2].
Both census-dependent and census-independent approaches have their advantages and drawbacks. While census-dependent estimation is cheaper to perform using existing data, the results can be misleading if the projected intercensal population count is inaccurate, and due to the limited resolution of both data and publicly available satellite imagery, these approaches exclusively predict population at a coarser spatial resolution. Census-independent estimation uses microcensus, which can be collected more frequently and is available at a finer scale, and although this data can be relatively expensive to collect in large enough quantities, it provides 'ground truth' information at a finer scale which is not available for census-dependent approaches.
## 3 Methods and Data
In this section we discuss some recent advancements in the principles and tools for self-supervised learning, partly in the context of remote sensing (§3.1), and provide details of SCIPE and the datasets used, i.e., satellite imagery and microcensus.
### Representation and Transfer Learning
Representation Learning learns a vector _representation_ of an image by transforming the image, for example using a deep neural network, such that the corresponding representation can be used for other tasks such as regression or classification using existing tools [2]. The learned representation can be used for transfer learning, i.e., using the transformation learned from a separate task, e.g., ImageNet classification, for a different one, e.g., population estimation [19]. Intuitively, this works because a pre-trained network, although designed for a separate task, can extract meaningful features that are informative for population estimation (see for example Figure 1(a)).
**Supervised pre-training** is a common approach for representation learning where a network is trained in a supervised learning context with a vast amount of annotated training data such as ImageNet. Once the network has been trained on this task, the output of the penultimate layer (or a different layer) of this pre-trained network can be used as a vector representation of the respective input image, and can be used as a predictor for further downstream analysis [19]. This approach works well in practice but its performance is inherently limited by the size of the dataset used for supervised learning, which can be 'small' [4]. To mitigate this issue, representation learning using unsupervised methods such as Tile2Vec [13], and in particular self-supervised approaches, have become popular. Compared to supervised learning, which maximizes the objective function of a pre-defined task such as classification, self-supervised learning generates pseudolabels from a pretext task in the absence of ground truth data, and different algorithms differ in the way they define the pretext tasks [14].
In the context of population estimation, we focus on methods that either assume that the latent representations form clusters [4] or make them invariant to certain class of distortions [27]. Our intuition is that grid tiles can be grouped together based on population. This is a common practice in census-independent population estimation, i.e., to split regions in categories and model these categories separately, e.g., see [16] and [6]. We also observe this pattern in the representation space where built-up area separates well from uninhabited regions (see Figure 4(b)). Additionally, we expect the population count of a grid tile to remain unchanged even if, for example, it is rotated, converted to grayscale, or resized.
**DeepCluster [4]** (and DeepClusterV2) jointly learns parameters of a deep neural network and the cluster assignments of its representations. It works by iteratively generating representations using an _encoder_ model that transforms the image to a vector, clustering these representations using a \\(k\\)-means clustering algorithm, and using these cluster assignments as pseudo-labels to retrain the encoder. The intuition behind this being that CNNs, even without training, can find partially useful representations, and therefore clusters, due to their convolutional structure. This weak signal can be exploited to bootstrap a process of iterative improvements to the representation and cluster quality.
**SwAV [5]** clusters the representations while simultaneously enforcing consistency between cluster assignments produced for different distortions. This involves applying augmentations to each image, yielding two different _views_ of the image, which are then fed through the model, clustered, and compared to train the model. In particular, to enforce consistent cluster assignments between the views of the image, the cluster assignment of a view is predicted from the representation of another view of the same image. SwAV applies horizontal flips, color distortion and Gaussian blur after randomly cropping and resizing the image [5, p. 15]. Cropping affects the population of a grid tile; however, since satellite imagery alone can estimate population only with some uncertainty, we assume that cropping changes the population within this level of uncertainty. Although cropping is used as a data augmentation step in the existing pre-trained network, we avoid cropping as data augmentation when fine-tuning the network to predict population in §3.3.
**Barlow Twins [27]** also works by applying augmentations to each image, yielding two _views_ of the image, which are then fed through the model to produce two representations of the original image. To avoid trivial constant solutions of existing self-supervised learning approaches aiming to achieve invariance to distortions, Barlow Twins considers a _redundancy-reduction_ approach, i.e., the network is optimized by maximizing the correlation along the main diagonal of the cross correlation matrix in the representation space to achieve invariance, while minimizing the correlation in all other positions to reduce redundancy of representations. Barlow Twins applies cropping, resizing, horizontal flipping, color jittering, converting to grayscale, Gaussian blurring, and solarization as distortions [27, p. 3].
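To make the redundancy-reduction objective concrete, the sketch below implements the Barlow Twins loss for a batch of paired view representations. It is a minimal PyTorch illustration of the mechanism described above; the trade-off weight `lambd` and the small normalisation constant are illustrative assumptions rather than values taken from the original method or from our training setup.

```python
# Minimal sketch of the Barlow Twins objective (PyTorch).
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """z_a, z_b: (batch, dim) representations of two augmented views of the same images."""
    n, d = z_a.shape
    # Normalize each dimension across the batch (zero mean, unit variance).
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    # Cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n                                          # (dim, dim)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                 # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()    # redundancy-reduction term
    return on_diag + lambd * off_diag
```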
### Satellite Imagery and microcensus
**Satellite Imagery** We used proprietary 50 cm resolution satellite imagery (Vivid 2.0 from Maxar, WorldView-2 satellite) covering 7773 km\\({}^{2}\\) across two districts in Mozambique: Boane (BOA) and Magude (MGD). The Vivid 2.0 data product is intended as a base map for analysis. It has worldwide coverage and is updated annually with priority given to the low cloud coverage images, and hence images can be from different time periods and different sensors in the Maxar constellation. The product is provided already mosaicked
Figure 2: ResNet-50 ImageNet predictions and architecture. **a)** ImageNet class predictions for very-high-resolution satellite image tiles. Although the classes are irrelevant, the network shows an intrinsic understanding of the difference between built-up area, vegetation, and road. **b)** Diagram of ResNet-50 encoder. Residual blocks between Input and Global Average Pooling are comprised of convolutional layers with interleaved batch normalization layers.
and colour-balanced, increasing the transferability of any methods/algorithms developed using this data. The data are provided in a three-band combination of red, green and blue. The NIR band is not provided as part of the VIVID 2.0 data product. The procured data was a mosaic of images, mostly from 2018 and 2019 (83% and 17% for BOA and 43% and 33% for MGD, remainder from 2011 to 2020).
**Microcensus** We used microcensus data from 2019 conducted by SpaceSUR and GroundWork in these two districts, funded by UNICEF. The survey was conducted at the household level (with respective GPS locations available), and households were exhaustively sampled over several primary sampling units (PSUs), where PSUs were defined using natural boundaries such as roads. We aggregated the household survey data to a 100 m grid to generate population counts, producing 474 labelled grid tiles.
**Non-representative tiles** Since the imagery and microcensus were not perfectly aligned temporally, and the PSUs had natural boundaries, many tiles contained either unsurveyed buildings or surveyed buildings absent in the imagery. Thus, the dataset contained both developed tiles (i.e. with many buildings) labeled as low population, and undeveloped tiles labeled as high population. Although such 'outlier' tiles can be addressed with robust training, they cannot be used for validation. We, therefore, manually examined each grid tile by comparing the GPS locations of surveyed buildings with those appearing in the imagery, and excluded those with a mismatch, leaving 199 curated tiles (CT).
**Zero-population tiles** Since the microcensus was conducted in settled areas, we had no labels for uninhabited tiles. Although this does not pose a problem when comparing the performance of different models on the available microcensus (Table 2), the models do not learn to predict zero population when applied to an entire district, which will include many uninhabited areas. To resolve this, we identified 75 random tiles (50 from BOA, 25 from MGD) with zero population (ZT) guided by HRSL, i.e., from regions where HRSL showed zero population. We selected more ZTs from BOA to improve regional population estimates (see Figure 2(b)). Thus, we had 274 grid tiles in total.
### Models and Training
We use a ResNet-50 [9] CNN architecture to estimate population from grid tiles. The model architecture is shown in Figure 1(b), with a \(224\times 224\times 3\) dimensional input and 49 convolutional layers followed by a Global Average Pooling layer, which results in a 2048 dimensional latent representation. We used the pre-trained ResNet-50 models trained on ImageNet using the methods described in §3.1 after resizing the grid tiles of size \(200\times 200\times 3\) (100 m RGB) to \(224\times 224\times 3\), and used these (representation, population) pairs to train a prediction model using Random Forest. The hyperparameters of the model were chosen using a grid search over num_estimators \(\in\{100,200,\ldots,500\}\), min_samples_split \(\in\{2,5\}\) and min_samples_leaf \(\in\{1,2\}\). A linear regression head can also be trained to predict population in an end-to-end manner. This yields several advantages: rapid inference on GPUs, a simple pipeline, and a simple method for determining uncertainty. However, we observed that the Random Forest model outperformed the linear regression head.
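The pipeline of frozen representation extraction followed by a Random Forest regressor can be sketched as below. This is a hedged illustration assuming PyTorch/torchvision for the encoder and scikit-learn for the grid search; tile loading, normalisation and the exact train/validation split are omitted, and the variable names are ours.

```python
# Sketch: pre-trained ResNet-50 representations + Random Forest grid search.
import numpy as np
import torch
import torchvision
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

encoder = torchvision.models.resnet50(pretrained=True)
encoder.fc = torch.nn.Identity()          # keep the 2048-d global-average-pooled representation
encoder.eval()

def represent(tiles: torch.Tensor) -> np.ndarray:
    """tiles: (N, 3, 200, 200) float tensor; returns (N, 2048) representations."""
    tiles = torch.nn.functional.interpolate(tiles, size=224, mode="bilinear")
    with torch.no_grad():
        return encoder(tiles).numpy()

param_grid = {
    "n_estimators": [100, 200, 300, 400, 500],
    "min_samples_split": [2, 5],
    "min_samples_leaf": [1, 2],
}
rf = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=4)
# X = represent(training_tiles); y = population count per tile
# rf.fit(X, y); predictions = rf.predict(represent(new_tiles))
```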
**Pre-trained model** We used the pre-trained ResNet50 models described in §3.1 to extract representations, and also fine-tuned these models with microcensus.
**Fine-tuning** We fine-tuned the pretrained models using a combination of curated and zero grid tiles after attaching a _linear regression head_ following the global average pooling layer and minimizing the \\(\\ell_{2}\\) loss between observed and predicted population. Given the labelled grid tiles (number of grid tiles vary depending on experimental set-up), we randomly split them into training and validation sets (80-20%). Due to the limited number of tiles in the dataset, we apply random dihedral transformations (i.e. reflections and rotations) to tiles to augment the training set, avoiding transformations that could affect the validity of the population count e.g. crops that could remove buildings. We use Adam optimizer to minimize the loss function which takes about 1 minute with a batch size of 32 on a single Nvidia GTX 1070 with 8 GB of VRAM. During training, first, the network was frozen (i.e., the weights were kept fixed) and only the regression head was trained for 5 epochs with a learning rate of \\(2\\times 10^{-3}\\), and second, the entire network was trained using a _discriminative learning rate_[11], where the learning rate is large at the top of the network, and is reduced in the earlier layers, avoiding large changes to the earlier layers of the model which typically extract more general features, and focusing training on the domain-specific later layers. The base learning rate at the top of the network was \\(1\\times 10^{-3}\\), and it was decreased in the preceding stages to a minimum of \\(1\\times 10^{-5}\\). We used early stopping to halt training when validation loss plateaued (i.e., no improvement for 2 or more epochs) to avoid overfitting.
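A minimal sketch of the two-stage fine-tuning with discriminative learning rates is given below, assuming a PyTorch ResNet-50 with a single-output regression head. The grouping of layers into three learning-rate bands and the intermediate rate of 1e-4 are illustrative assumptions; the text only fixes the base (1e-3) and minimum (1e-5) rates.

```python
# Sketch of two-stage fine-tuning with discriminative learning rates (PyTorch).
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(2048, 1)        # linear regression head predicting population

# Stage 1: freeze the backbone and train only the regression head (5 epochs, lr 2e-3).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
head_optimizer = torch.optim.Adam(model.fc.parameters(), lr=2e-3)

# Stage 2: unfreeze everything; smaller learning rates for earlier, more generic layers.
for p in model.parameters():
    p.requires_grad = True
groups = [
    {"params": list(model.conv1.parameters()) + list(model.layer1.parameters()), "lr": 1e-5},
    {"params": list(model.layer2.parameters()) + list(model.layer3.parameters()), "lr": 1e-4},
    {"params": list(model.layer4.parameters()) + list(model.fc.parameters()),     "lr": 1e-3},
]
optimizer = torch.optim.Adam(groups)
loss_fn = torch.nn.MSELoss()               # l2 loss between observed and predicted population
```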
### Evaluation Metrics and Cross-validation
**Evaluation Metrics** We compare the different methods against several evaluation metrics, i.e., R-squared, median absolute error (MeAE), median absolute percentage error (MeAPE), and aggregated percentage error (AggPE) (to capture average error at regional levels characterised by \\(A\\)), as follows,
\[R^{2}=1-\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}}\qquad\text{MeAE}=\operatorname*{median}_{i}\left|y_{i}-\hat{y}_{i}\right|\]
\[\text{MeAPE}=\operatorname*{median}_{i}\frac{\left|y_{i}-\hat{y}_{i}\right|}{y_{i}}\qquad\text{AggPE}=\operatorname*{median}_{A}\frac{\left|\sum_{i\in A}y_{i}-\sum_{i\in A}\hat{y}_{i}\right|}{\sum_{i\in A}y_{i}}\]
These evaluation metrics capture different aspects of the prediction, and each has different significance. For example, \\(R^{2}\\) may be dominated by large population counts while MeAPE may be dominated by small population counts.
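For reference, the four metrics can be computed directly as below (numpy); `regions` is an assumed array assigning each tile to an aggregation unit \(A\).

```python
# Direct implementation of the four evaluation metrics above.
import numpy as np

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def meae(y, yhat):
    return np.median(np.abs(y - yhat))

def meape(y, yhat):
    return np.median(np.abs(y - yhat) / y)

def aggpe(y, yhat, regions):
    """regions: array of labels assigning each tile to an aggregation unit A."""
    errs = []
    for a in np.unique(regions):
        m = regions == a
        errs.append(abs(y[m].sum() - yhat[m].sum()) / y[m].sum())
    return np.median(errs)
```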
**Null Model** As 'null model' we predict population as the mean of the training set irrespective of feature values. We used this as an initial baseline to ensure any perceived performance when transferring features from ImageNet is not trivial.
**Baseline** To properly assess the performance of automatic feature extraction, we compared to results when using hand-crafted features and public datasets to predict population that is more common for census-independent population estimation. We took a variety of public features (Landsat imagery, land
Figure 3: Regional population map comparison and zero population tiles. **a)** Gridded population count estimates over Boane (top), and comparison against microcensus (bottom). Year indicates target year for estimation where known, otherwise when estimates were published. We do not compare our results on the microcensus as this was our training and validation data. **b)** Examples of zero population tiles from §3.2.
cover classification, OSM road data, night-time lights), along with building footprints automatically extracted from each image tile using a U-Net model pre-trained on SpaceNet and fine-tuned with 'dot-annotation' from non-surveyed buildings, and using these features to train a Random Forest model.
**Cross-validation** We compare the different approaches to population estimation using cross-validation. For each region, we partitioned the data into four subsets spatially, and formed validation folds by taking the union of these subsets across the two regions. We reported the evaluation metrics over _pooled_ predictions from the four validation folds covering the entire microcensus. When fine-tuning, we trained one network for each fold separately (to avoid data leakage) resulting in four networks.
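A sketch of the pooled spatial cross-validation is shown below, assuming a `fold_id` array that already encodes the four spatial subsets united across the two regions; the Random Forest settings are placeholders rather than the tuned values.

```python
# Sketch of pooled spatial cross-validation over four spatial folds.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def pooled_cv_predictions(X, y, fold_id):
    """X: (N, d) representations, y: (N,) population, fold_id: (N,) spatial fold labels."""
    yhat = np.empty_like(y, dtype=float)
    for k in np.unique(fold_id):
        train, test = fold_id != k, fold_id == k
        model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[train], y[train])
        yhat[test] = model.predict(X[test])
    return yhat   # evaluation metrics are then computed once over the pooled predictions
```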
## 4 Results
In this section, we compare the performance of several self-supervised learning frameworks using cross-validation, apply the best performing model to predict a regional map of Boane, and compare it against existing maps from GRID3, HRSL and WorldPop where all of these methods take a census-dependent approach to population estimation within our ROIs, and cannot be regarded as 'ground-truth'. We show the interpretability of the framework using uncertainty quantification and activation maps.
**Model selection** Table 3 shows the cross-validation results for population estimation using Random Forest regression on representations extracted with the ResNet-50 model, using curated tiles only. We observe that: 1) representations extracted using any pre-trained network outperformed estimation using publicly available features in all but the MeAE metric; 2) fine-tuning any of the representation learning frameworks with microcensus, except DeepCluster, improved the performance of the framework; 3) although all of the representation learning frameworks (best MeAE \(=3.91\)) outperformed the null model (MeAE \(=7.57\)), the baseline models trained with building footprint area (best MeAE \(=3.75\)) as a feature still outperformed them in \(R^{2}\) and MeAE; and 4) Barlow Twins overall had lower error metrics and the second largest \(R^{2}\) among the fine-tuned models, so we consider this model for further analysis.
Figure 4: Clockwise from left: Difference between SCIPE and GRID3 population maps for Boane, the respective scatter plot, and examples of survey tiles from regions where they differ.
\\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Features used & \(R^{2}\) & MeAPE & MeAE & IQR & AggPE \\ \hline \multicolumn{6}{l}{_Hand-crafted features:_} \\ Public Only & -0.22 & 57.8\% & 5.05 & 8.60 & 21.5\% \\ Footprint Only & **0.47** & **44.5\%** & 3.75 & 5.86 & **02.2\%** \\ Public+Footprint & 0.46 & 48.8\% & 4.63 & 5.11 & 07.6\% \\ \multicolumn{6}{l}{_Representation Learning:_} \\ Supervised & 0.20 & 54.7\% & 5.36 & 7.97 & 02.0\% \\ Supervised (FT) & 0.33 & 52.9\% & 4.72 & 6.23 & 05.5\% \\ SwAV & 0.34 & 51.6\% & 6.60 & 4.33 & 00.8\% \\ SwAV (FT) & **0.41** & 46.9\% & 5.83 & 4.35 & 03.5\% \\ DeepCluster & 0.26 & 50.3\% & 4.60 & 6.03 & 06.7\% \\ DeepCluster (FT) & 0.13 & 62.5\% & 5.98 & 8.32 & 06.8\% \\ Barlow Twins & 0.27 & 51.9\% & 5.40 & 6.65 & 02.8\% \\ Barlow Twins (FT) & 0.39 & **44.0\%** & **3.91** & 6.32 & **01.1\%** \\ \multicolumn{6}{l}{_Null Model:_} \\ None & -0.12 & 76.45\% & 7.57 & 10.0 & 01.7\% \\ \multicolumn{6}{l}{_Existing Maps:_} \\ GRID3 & **0.22** & **51.7\%** & **4.25** & 7.11 & **26.7\%** \\ HRSL & -0.12 & 70.7\% & 5.04 & 7.94 & 46.8\% \\ WorldPop & -0.41 & 86.8\% & 5.85 & 8.18 & 77.9\% \\ \hline \hline \end{tabular}
\\end{table}
Table 3: Population model validation performance for **(top)** manually and automatically extracted features, along with **(bottom)** test performance of existing maps. See §4 for metric definitions. IQR is the interquartile range of absolute errors. **Bold** indicates best in category. FT is fine-tuning.
We did not evaluate the performance of Tile2Vec since the available pre-trained model required an NIR band for input data, which Vivid 2.0 lacks.
**Regional Population Estimation** We use a ResNet50 architecture pre-trained on ImageNet using Barlow Twins and fine-tuned using curated and zero grid tiles (with an 80-20 random train/validation split of all tiles, no cross-validation) to extract representations for our survey tiles. These representations are used to train a Random Forest model (§3.3), which is used to produce a population map for the Boane district. The map is shown in Figure 2(a) along with three existing population maps from WorldPop, HRSL and GRID3. We observe that, with respect to our 'ground truth' microcensus, 1) GRID3 provides a more accurate population map of Boane than HRSL and WorldPop, but usually underestimates population, 2) WorldPop lacks the finer details of the other population maps, and underestimates population, 3) although the settlement map provided by HRSL matches that of SCIPE and GRID3 well, its similarity with SCIPE is less than that of GRID3, and 4) SCIPE and GRID3 provide visually similar settlement maps, and their population estimates are also more similar in scale compared to HRSL and WorldPop.
**Census** We additionally compare the aggregated population estimate in Boane with 2019 census projection from 2017 census, and we observe that SCIPE overestimates population by 29%. Although projected census is not the ground truth, the discrepancy in estimated population is potentially due to SCIPE not modelling zero population explicitly. SCIPE can be extended by using, for example, a zero-inflated population model to model the zero-population better. We leave this as future work.
Since GRID3 provides a more accurate population map than HRSL and WorldPop, we compare it against SCIPE in more detail. Figure 4 shows the difference in population maps produced by these two approaches. We observe that, 1) the estimated population of these approaches matched well quantitatively (Spearman's \\(\\rho\\) 0.70, Pearson's \\(\\rho\\) 0.79), 2) there are regions where SCIPE underestimated population, and these are areas where microcensus was not available, and 3) there are regions where GRID3 underestimated population, and they usually coincided with regions where microcensus was available and SCIPE could potentially provide better estimates. Therefore, there is a high level of agreement between the two products and they provide similar estimates, and discrepancies appear in regions that lack microcensus for training. A more detailed comparison of these two population maps will be valuable, and may lead to both improved population estimation through ensemble learning and better microcensus data collection through resolving model disagreements. However, this is beyond the scope of this work.
**Uncertainty** To further assess the quality of the estimated population map, we quantify the uncertainty of the predictions and qualitatively assess their 'explanations'. We can assess the uncertainty in several ways, either at the level of the representation learning or at the level of the Random Forest population model. For the former, we can apply Monte Carlo dropout [8] by placing dropout layers (\(p=0.1\)) after each stage of the ResNet models, and predicting multiple population values for each grid tile. For the latter, the uncertainty can be quantified from the output of the individual decision trees in the Random Forest model without perturbing the representation. Figure 4(c) shows Random Forest model predictions on fine-tuned Barlow Twins features and their associated uncertainties. We observe that the estimated uncertainty matches the intuition of higher estimated population having higher uncertainty.
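A minimal sketch of the Monte Carlo dropout procedure is given below. For brevity it places a single dropout layer before the regression head rather than after every ResNet stage as described above, and the number of stochastic forward passes is an arbitrary choice.

```python
# Sketch of Monte Carlo dropout uncertainty estimation (PyTorch).
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.fc = torch.nn.Sequential(torch.nn.Dropout(p=0.1), torch.nn.Linear(2048, 1))

def mc_dropout_predict(model, tiles, n_samples=50):
    """tiles: (N, 3, 224, 224); returns per-tile predictive mean and standard deviation."""
    model.eval()
    for m in model.modules():               # keep dropout stochastic at inference time
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(tiles).squeeze(-1) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)
```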
**Explanation** To assess the outcome of the model, we use regression activation maps (RAMs) [24], which show the discriminative regions of the input image that are informative of the model's outcome. It is widely reported that building footprint area is an important indicator of population, and we observe that a fine-tuned Barlow Twins model produces RAM plots with a clear focus on built-up area, which agrees with this intuition (see Figure 4(a)). To further explore whether SCIPE focuses on built-up area to estimate population, we observe that population estimates using SCIPE and those using building footprints (as presented in Table 3) show high correlation (Spearman's \(\rho\) 0.68, Pearson's \(\rho\) 0.74 over the Boane region) (see Figure 4(d)), corroborating this observation.
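For a network with a linear head on top of global average pooling, the RAM can be computed as the head-weighted sum of the final convolutional feature maps. The sketch below illustrates this for a ResNet-50; it ignores the bias term and any upsampling of the coarse 7x7 map back to tile resolution.

```python
# Sketch of a regression activation map (RAM) for a ResNet-50 with a linear regression head.
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(2048, 1)

def regression_activation_map(model, tile):
    """tile: (1, 3, 224, 224). Returns a (7, 7) map of per-location contributions."""
    features = torch.nn.Sequential(*list(model.children())[:-2])   # everything before pooling
    with torch.no_grad():
        fmap = features(tile)                       # (1, 2048, 7, 7)
        w = model.fc.weight.view(2048, 1, 1)        # regression head weights
        ram = (fmap[0] * w).sum(0)                  # channel-weighted sum of feature maps
    return ram
```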
**Embedding** Finally, we visualize the representations available from the fine-tuned Barlow Twins model to assess if they meaningfully separate in terms of population, and we observe that this is indeed the situation (see Figure 4(b)).
Figure 5: Activation maps, embedding visualization, and comparison of SCIPE with microcensus data and footprint based estimates. **a)** Tiles and their associated regression activation maps (RAMs) from fine-tuned Barlow Twins model **b)** Plot of t-SNE embeddings of representations from Barlow Twins for 75 random tiles and 75 microcensus tiles from Boane and Magude (main), with embeddings coloured by population (bottom left) and region (bottom right) **c)** Predicted over observed plot over microcensus with prediction uncertainty **d)** Comparison between SCIPE estimates and building footprint based estimates on the microcensus grid tiles.
## 5 Discussion
We find representation learning to be an effective tool for estimating population at a medium resolution from limited local microcensus. Although this approach did not outperform building footprint area based estimations, it is fast, does not require human supervision, and relies only on very-high-resolution satellite images, making it sustainable and transferable in the sense that users can extrapolate their own local microcensus with relative ease, and also quantify uncertainty and capture explanations.
There is likely a hard limit to the predictive power of satellite imagery alone, owing to the difficulty of distinguishing inhabited and uninhabited areas in some contexts. For example, Robinson et al.[20] gave the example of Walt Disney World, which is built to look like a settled area but has zero population. To address this issue, an interesting extension of SCIPE would be to use multiple data sources, such as night-time light, land-cover data, altitude and slope information, and location of services, of possibly different resolutions, in the model alongside very-high-resolution imagery without changing its core focus, i.e., using pre-trained networks and fine-tuning them with a limited amount of microcensus. This can potentially improve the prediction of population in areas that are uninhabited. We have also used a Random Forest model which effectively treats grid tiles as independent and identically distributed samples, which they are not. Considering the spatial arrangement of grid tiles can improve estimation further by using broader contextual information around each tile, for example to establish the socioeconomic status or land use of the surrounding areas.
Ideally, we would like to use self-supervised learning frameworks directly on satellite images to learn appropriate representations, rather than relying on pre-trained networks and fine-tuning. This would, however, require a vast quantity of training data, and has not been the focus of this work. In this work we have focused on _sustainability_, both in terms of human annotation and computational resources, which prohibits training from scratch, and have shown that existing tools can be used to produce reliable population estimates. The proposed framework should also be validated externally on a larger scale. We explored the population of a single district in Mozambique while existing population maps are available over the whole of Africa. Assessing the utility of SCIPE better would require further large-scale validation both in different regions of Mozambique (which is our immediate focus), and in other countries (which is our long-term goal), to make population maps more frequent, accessible, reliable and reproducible.
SCIPE avoids several typical bottlenecks associated with census-independent population estimation. While some methods require tedious manual annotation of built-up area or potentially incomplete public features, SCIPE extracts features automatically using only satellite images. SCIPE is extremely fast, requires negligible GPU time, and provides meaningful population estimates. Microcensus data may not be available in all countries or regions, and can be expensive to gather, but this cost is far lower than that of conducting a census on a regular basis. Very-high-resolution satellite imagery can also be expensive, but has become more accessible in recent years when used for humanitarian purposes [7, p. 14]. Given that many development agencies benefit from subsidised access to Maxar very-high-resolution imagery, these population maps could be produced relatively quickly for specific regions of focus, for example when vaccination programmes are being planned. This approach, therefore, would contribute towards the UN's stated need for a data revolution [1] by allowing regularly updated estimates of population between census enumeration periods, supporting a range of humanitarian activities as well as general governmental and NGO planning and allocation of resources.
## References
* [1] Independent expert advisory group on a data revolution for sustainable development. A world that counts: Mobilising the data revolution for sustainable development. November 2014.
* [2] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. 2013.
* [3] Maksym Bondarenko, Patricia Jones, Douglas Leasure, Attila Lazar, and Andrew Tatem. Gridded population estimates disaggregated from Mozambique's fourth general population and housing census (2017 census), version 1.1, November 2020.
* [4] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In _Proceedings of the ECCV_, 2018.
* [5] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In _Proceedings of NeurIPS_, 2020.
* [6] Patrick Doupe, Emilie Bruzelius, James Faghmous, and Samuel G Ruchman. Equitable development through deep learning: The case of sub-national population density estimation. In _Proceedings of ACM DEV_, pages 1-10, 2016.
* [7] Ryan Engstrom, David Newhouse, and Vidhya Soundararajan. Estimating small-area population density in Sri Lanka using surveys and Geo-spatial data. _PLoS ONE_, 15(8):e0237063, August 2020.
* [8] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In _Proceedings of ICML_, pages 1050-1059. PMLR, 2016.
* [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of CVPR_, 2016.
* [10] Roger Hillson, Austin Coates, Joel D. Alejandre, Kathryn H. Jacobsen, Rashid Ansumana, Alfred S. Bockarie, Umaru Bangura, Joseph M. Lamin, and David A. Stenger. Estimating the size of urban populations using Landsat images: a case study of Bo, Sierra Leone, West Africa. _Int. J. Health Geogr._, 2019.
* [11] Jeremy Howard and Sylvain Gugger. _Deep Learning for Coders with fastai and PyTorch_. O'Reilly Media, 2020.
* [12] Wenjie Hu, Jay Harshadbhai Patel, Zoe-Alanah Robert, Paul Novosad, Samuel Asher, Zhongyi Tang, Marshall Burke, David Lobell, and Stefano Ermon. Mapping missing population in rural india: A deep learning approach with satellite imagery. 2019.
* [13] Neal Jean, Sherrie Wang, Anshul Samar, George Azzari, David Lobell, and Stefano Ermon. Tile2vec: Unsupervised representation learning for spatially distributed data. In _Proceedings of the AAAI_, 2019.
* [14] Longlong Jing and Yingli Tian. Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey. _IEEE Trans. Pattern Anal. Mach. Intell._, 2020.
* [15] Facebook Connectivity Lab and Center for International Earth Science Information Network CIESIN Columbia University. High resolution settlement layer (HRSL), 2016.
* [16] Douglas R. Leasure, Warren C. Jochem, Eric M. Weber, Vincent Seaman, and Andrew J. Tatem. National population mapping from sparse survey data: a hierarchical bayesian modeling framework to account for uncertainty. _PNAS_, 2020.
* [17] Catherine Linard, Marius Gilbert, Robert W Snow, Abdisalan M Noor, and Andrew J Tatem. Population distribution, settlement patterns and accessibility across africa in 2010. _PLoS ONE_, 2012.
* [18] Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable AI: A Review of Machine Learning Interpretability Methods. _Entropy_, 2020.
* [19] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In _Proceedings of CVPR Workshops_, pages 512-519, 2014.
* [20] Caleb Robinson, Fred Hohman, and Bistra Dilkina. A deep learning approach for population estimation from satellite imagery. In _Proceedings of ACM SIGSPATIAL Workshop on Geospatial Humanities_, 2017.
* [21] A world without data? The unintended consequences of fashion in geography. _Urban Geography_, 2010.
* [22] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In _Proceedings of ICLR_, 2015.
* [23] Tobias G. Tiecke, Xianming Liu, Amy Zhang, Andreas Gros, Nan Li, Gregory Yetman, Talip Kilic, Siobhan Murray, Brian Blankspoor, Espen B. Prydz, and Hai-Anh H. Dang. Mapping the world population one building at a time. 2017. arXiv: 1712.05839.
* [24] Zhiguang Wang and Jianbo Yang. Diabetic retinopathy detection via deep convolutional networks for discriminative localization and visual explanation. In _Proceedings of AAAI Workshops_, 2018.
* [25] NA Wardrop, WC Jochem, TJ Bird, HR Chamberlain, D Clarke, D Kerr, L Bengtsson, S Juran, V Seaman, and AJ Tatem. Spatially disaggregated population estimates in the absence of national population and housing census data. _PNAS_, 2018.
* [26] Eric M Weber, Vincent Y Seaman, Robert N Stewart, Tomas J Bird, Andrew J Tatem, Jacob J McKee, Budhendra L Bhaduri, Jessica J Moehl, and Andrew E Reith. Census-independent population mapping in northern nigeria. _Remote Sens. Environ_, 2018.
* [27] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stephane Deny. Barlow twins: Self-supervised learning via redundancy reduction, 2021.
## Acknowledgments
The project was funded by the Data for Children Collaborative with UNICEF. | Knowledge of population distribution is critical for building infrastructure, distributing resources, and monitoring the progress of sustainable development goals. Although censuses can provide this information, they are typically conducted every ten years with some countries having forgone the process for several decades. Population can change in the intercensal period due to rapid migration, development, urbanisation, natural disasters, and conflicts. Census-independent population estimation approaches using alternative data sources, such as satellite imagery, have shown promise in providing frequent and reliable population estimates locally. Existing approaches, however, require significant human supervision, for example annotating buildings and accessing various public datasets, and therefore, are not easily reproducible. We explore recent representation learning approaches, and assess the transferability of representations to population estimation in Mozambique. Using representation learning reduces required human supervision, since features are extracted automatically, making the process of population estimation more sustainable and likely to be transferable to other regions or countries. We compare the resulting population estimates to existing population products from GRID3, Facebook (HRSL) and WorldPop. We observe that our approach matches the most accurate of these maps, and is interpretable in the sense that it recognises built-up areas to be an informative indicator of population. | Summarize the following text. | 247 |
Historical Sentient - Building Information Model: A Digital Twin for the Management of Museum Collections in Historical Architectures
F. M. La Russa ¹, C. Santagati ¹

¹ Department of Civil Engineering and Architecture (DICAR), Università degli Studi di Catania, Via Santa Sofia n. 64, 95125 Catania, Italy - [email protected] / [email protected]
## 1 Introduction
The Anthropocene age is strongly characterized by disruptive changes that are impacting the relationship between humans and technology. Among these, the tremendous potentialities linked to AI (Artificial Intelligence) applications in a variety of fields are increasingly visible. With specific reference to the AEC (Architecture Engineering Construction) field and to the historical architecture domain, the adoption of advanced and innovative digital workflows can support and optimize preservation activities. Among all the buildings that fall under heritage preservation, there is a specific typology to which this research is mainly addressed: museums hosted in historical architectures. In this case, the issue of heritage preservation has to consider both the historical values of the building - the container - and the peculiarities of the collections - the content. For instance, maintaining optimal environmental conditions for the museum collections is as critical as the proper conservation of the historical building. The study deals with the experimentation of innovative methodologies that allow advanced management of museum collections hosted in a historical building, through the development of a DSS (Decision Support System) that uses ML (Machine Learning) to implement effective conservation strategies. The aim is to investigate the application of the DT (Digital Twin) approach to obtain a Sentient building, able to perceive external inputs and develop strategies to support its management and/or conservation. The formulation of a novel methodology - namely HS-BIM (Historical Sentient - Building Information Modeling) - for historical building documentation, management, and conservation is proposed.

As a case study, the university museum MuRa (Museo della Rappresentazione) in Catania has been chosen. The museum is hosted in _villa Zingali Tetto_, a historical building realized in 1930, which houses a collection of engravings and design drawings by architects and etchers. In particular, the experimentation focuses on the management of the thermo-hygrometric conditions for the preventive conservation of the collection and the museum rooms. Our contribution is structured as follows: after the review of the state of the art and the focus on the problem definition, we describe the HS-BIM definition; then the related methodology is illustrated; the HS-BIM approach is shown concretely through the chosen case study on MuRa. Finally, in the conclusions, some reflections on the adoption of the methodology and further developments are presented.
## 2 Related Works
The impact of Industry 4.0 on the AEC sector is steadily growing: we are witnessing continuous experimentation and innovations that first appeared in the industrial and manufacturing sector and are gradually spreading to new building construction and the built environment. In the last decades, the main focus in AEC has been on Energy Management, driven by the integration of intelligent technologies in building systems (Wong et al., 2005). There has been an evolution in the concept of Building Automation System: from Automated Buildings, able to show key performance indicators; to Smart Buildings, able to analyze energy consumers (Clements-Croome, 2004); up to Cognitive Buildings, able to learn behavior (Ploennigs and Schumann, 2017). Indeed, the concept of CB (Cognitive Building) integrates sensing technologies, distributed intelligence and IoT (Internet of Things) and is strictly linked with the concept of DT (Digital Twin). Since Michael Grieves coined the term DT in 2002 in the context of manufacturing (Grieves, 2019), this concept has been leveraged in different fields, achieving different levels of maturity. An effective definition of DT is given by Bolton et al. as "a dynamic virtual representation of a physical object or system across its lifecycle, using real-time data to enable understanding, learning and reasoning" (Bolton et al., 2018; p. 783). The integration of AI with IoT and BIM (Building Information Modeling) technologies (Pasini et al., 2016) will give DT the ability to assimilate, analyze, simulate, predict, prescribe and act with minimal human involvement, as envisioned by Bien et al. (Bien et al., 2002; Zucker et al., 2018).
In the literature we find several attempts at defining the DT level of maturity (sophistication). A first definition is given by Madni et al. (Madni et al., 2018) and regards the industry sector in general: Level 1 or Pre-Digital Twin (virtual system model with an emphasis on technology/technical-risk mitigation); Level 2 or Digital Twin (a virtual system model of the physical twin exists); Level 3 or Adaptive Digital Twin (virtual system model of the physical twin with adaptive UI); Level 4 or Intelligent Digital Twin (virtual system model of the physical twin with adaptive UI and reinforcement learning). As for the built environment, Simon Evans (Evans, 2019) proposes six levels: Level 0 or Reality capture (e.g. point cloud, drones, photogrammetry, or drawings/sketches); Level 1 or 2D map/system or 3D model (e.g. object-based, with no metadata or BIM); Level 2 or Connect model to persistent (static) data, metadata and BIM Stage 2 (e.g. documents, drawings, asset management systems); Level 3 or Enrich with real-time data (e.g. from IoT, sensors); Level 4 or Two-way data integration and interaction; Level 5 or Autonomous operations and maintenance.

To date, these concepts have found limited application in the historical heritage and museum domain, even if the Natural History Museum in London has recently embraced this technology (Richardson, 2020). Many institutions have applied H-BIM (Historical Building Information Modeling) and, in parallel, there has been an increase of computational techniques in architecture thanks to the development of user-friendly VPL (Visual Programming Language) tools such as Grasshopper, Dynamo, Node Red, Ardublock, NETLab Toolkit, ReactiveBlocks, GraspIO and Wyliodrin. The flexibility of these computational tools allows us to enrich H-BIM models with new concepts, definitions, layers of actions and knowledge, as well as to manage, catalog and reorder the data and the different relationships contained in the models (Argiolas et al., 2015; Giovannini, 2017; Tono et al., 2019). In the management of historical buildings with museum functions, applications using DSS have become increasingly frequent in recent years. These technologies are often used together with WSN (Wireless Sensor Networks), which allow constant and real-time monitoring of all the parameters significant for the museum layouts and the architectures that contain them. The project on the Sala dei Cinquecento at Palazzo Vecchio in Florence (Viani, 2014), and the ones on the Palazzo della Civiltà Italiana in Rome (Trento et al., 2019) and the Museo Egizio in Turin (Calvano et al., 2020), can be considered remarkable examples.

ML applications in Architecture and Construction were already being developed at the time of the first CAD applications in the 1960s (Wright Steenson, 2017). They were intended to experiment with direct human-computer interaction to facilitate, optimize and develop the design process. There are several cases in which neural networks have been trained for design purposes in the construction sector, such as the work carried out at University College London, where the possibility of training a robotic arm to make woodwork components has been examined (Brugnano and Hanna, 2017). Other studies aim to develop systems for evaluating the rooms of a building using datasets made from the assessments of users in many other buildings, to understand which recurring patterns ensure conditions of comfort (Davis, 2016; Kim, 2018).
There are experiments that also concern the classification of different types of environments using neural networks (Tono et al., 2019; Peng et al., 2017).
## 3 HS-BIM
The HS-BIM (Historical Sentient - Building Information Modeling) definition proposed in this research work draws on the considerations made on the above-mentioned DT maturity levels, focuses on historical building preservation actions and is projected towards level 5 'Autonomous operations and maintenance' (Evans, 2019) / level 4 'Intelligent Digital Twin' (Madni et al., 2019). Indeed, it envisions the enhancement of H-BIM model performance by applying AI techniques, specifically ML. Therefore, we can define HS-BIM (La Russa, 2019) as a BIM model that perceives the external and internal inputs of the historical building it represents, recognizes the manifestations of degradation and reports them, elaborates the inputs thanks to learning mechanisms and autonomously makes choices related to its preservation strategies (Fig. 1).
Figure 1: HS-BIM Workflow
The innovative aspect of this methodology resides in the change of paradigm regarding the relations between the historical building and the professional figures who deal with its management, conservation and restoration. The analogy between a human being and a building, well established in the disciplines of conservation and architectural restoration, is based on a reinterpretation of architecture as a living organism that goes through a life cycle, has a certain state of health and requires rehabilitation therapies in the presence of diseases (Marconi, 1993).
It is possible to identify similarities between the physical and behavioral aspects highlighted in living beings and the techniques used in BIM-based methodologies (Tono, 2018). For example, the ability to receive real-time data from different kinds of diagnostic sensors located in strategic points of the building can be envisioned as a peripheral nervous system that receives external inputs. In the presence of this condition, it is possible to compare the building to the _corpus_ (as it is often called in the medical approach), while its _animus_ (its nervous system) is its virtual prosthesis, i.e. the H-BIM model. In this configuration, despite the complexity of the processes put in place, the model does not yet have an active behavior. With the creation of a DSS based on Machine Learning mechanisms, the DT of the building - namely the HS-BIM model - assumes a synthetic behavior in the processing of inputs, thus becoming Sentient. The training process is supervised (there are scenarios controlled by the designer) and each solution will be "an experience" for the building from which to learn. In this sense, we can also find some relations between the learning theory of the psychologist Thorndike (Thorndike, 1905) and ML techniques. Learning in sentient beings generally occurs through the transfer of knowledge from adult individuals or is embedded in the nature of the being. Similarly, the starting training dataset takes the role of the 'teaching adult' and contributes to the constitution of the _anima_ of the HS-BIM model, where 'S' stands for 'Sentient' and encompasses the evolution from a passive to an active decision-making model.
## 4 Methodology
The developed methodology addresses the management of the thermo-hygrometric conditions for the preventive conservation of museum collections in buildings with high historical value. Therefore, after a first step aimed at the 3D digitization of the building and its semantic modeling in a BIM environment, the following steps are focused on the realization of a DSS to assist the building manager in the decision-making phases related to the conservation of the museum collections and architectural interior spaces. Specifically, the design of the DSS is carried out through a VPL work environment in order to guarantee characteristics such as responsiveness, flexibility, and user-friendliness. Furthermore, the DSS is based on AI mechanisms (in particular ML) which, on the basis of a training dataset, recognize the relationship between a combination of actions and the relative satisfaction of thermo-hygrometric conditions and make predictions. The methodology is structured as follows:
* 3D digitalization of the historical building by means of instrumental surveys (laser scanners and photogrammetry) and critical analysis of the construction apparatus of the historical building;
* HBIM modeling of the whole building taking into account high LOD for museum rooms under analysis;
* gathering of meteorological data stored by the nearest climate station;
* microclimate analysis of the urban context;
* simplification of the HBIM model into a NURBS model and thermo-technical information enrichment in VPL environment;
* setting out the standard action strategies (already implemented) in relation to the maintenance of museum collections;
* specification of the thermo-hygrometric parameters (prescribed by national regulations) for the conservation of the collections in each museum room;
* gathering of the thermo-technical data collected inside the museum or, in the absence of the latter, creation of a synthetic dataset carrying out specific energy simulations considering the conservation strategies already in use;
* data labeling and filtering to ensure supervised learning of the training dataset;
* training and validation of the ML algorithm.
## 5 Case Study
The adequacy of the approach so far explained has been validated on a cultural heritage building that presents a complex set of preservation needs: _villa Zingali Tetto_ in Catania (Fig. 2). Built in 1930, this architecture houses the Museo della Rappresentazione (Museum of Representation) and is managed by the Department of Civil Engineering and Architecture of the University of Catania (La Russa, 2019b).
The villa has been subject to geometric and spatial surveys (mainly using laser scanners and photogrammetry) and archival research to reconstruct its historical and constructive events. The complexity linked to the many interior environments (small rooms, corridors, staircases) required a large number of TLS (Terrestrial Laser Scanning) scans (233), with the aim of covering as many surfaces as possible on each floor (Fig. 3). In this way, it was possible to fully understand the space syntax and exhibition layout of each area of the villa. Several decorated surfaces (such as woodwork and frescoed ceilings) and pieces of furniture have been acquired using SfM (Structure from Motion) techniques. Therefore, an HBIM model of the villa has been created according to a point cloud-to-BIM approach (Fig. 4).
This analysis was useful to structure the subsequent energy simulations. As already mentioned, the study has been focused on the conservation issues linked to the collections, with reference to the thermo-hygrometric conditions. Due to the high cultural importance of the villa, the planned actions are not invasive and are designed to manage the building through passive strategies (natural ventilation, occupation of the rooms) and the optimized use of the existing HVAC (Heating, Ventilation and Air Conditioning) system. For the design of the DSS, considering all the requirements involved, Machine Learning (ML) algorithms based on multivariate linear regression are applied. Indeed, these algorithms allow us to investigate, in a digital model, the relationship between input variables and output results. They have been widely used in the past for forecasting trends in finance, meteorology and other fields (Bakar et al., 2009; Eck, 2017; Abbas, 2017). In order to create the initial dataset for training the ML algorithms, direct measurements of the museum environments would be needed for at least one calendar year, together with 'candidate use strategies' to be tested so that the ML system could learn from the results. Nevertheless, such data collection requires an extensive amount of resources, not always affordable for small museums or local bodies.
However, by performing energy simulations it is possible to create a synthetic dataset that can compensate for the absence of a sensor network and enable the initial learning phase. The simulations were carried out with VPL programming tools. The need to operate on a simplified version of the H-BIM model required a tool that operates in both CAD and VPL environments. The choice fell on Rhinoceros 6, GH (Grasshopper) and the Dragonfly and Honeybee GH plug-ins for energy simulations. In order to obtain energy simulations as close as possible to reality, an analysis of the urban heat island around the villa was carried out using Dragonfly (Fig. 5).
The urban meteorological data thus generated were functional to the simulations carried out on a simplified three-dimensional model of the villa using the Honeybee plug-in (Fig. 6).
The choice of the rooms on which to carry out the analysis has considered both the conservation requirements and the possibility of having different scenarios available. Therefore, the analysis has been performed in the dining room (wooden furniture), the winter garden (stained-glass sunroom) and the two adjacent rooms, which contain prints and drawings (Fig. 7-8). The thermo-hygrometric parameters required, according to the normative reference (Manoli, 2015), are reported in Table 1.
The passive strategies of use considered as parameters are: utilization of natural ventilation, allowed number of visitors, and HVAC system use. For each of them, a set of variants has been defined; their combinations constitute the simulated configurations of use. Once the simulations of the thermo-hygrometric conditions are completed, the data are exported and labeled. All data belonging to days when the thermo-hygrometric parameters are satisfied, as a daily average, in only a single room have been excluded. Once the training dataset is labeled, it is called up in the DSS developed in Grasshopper and two main lists of data are extracted from it.
One deals with the external environmental parameters (dry-bulb temperature and relative humidity), while the other considers the assumed usage configurations. For the supposed conditions of use, a sort of "dictionary" has been created where the keys correspond to integers (0, 1, 2), while the variants foreseen by the passive strategies are the values. Summarizing with a notation borrowed from programming languages, we get:
_Adopted solutions = [0: 1st variant; 1: 2nd variant; 2: 3rd variant]_
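As an illustration, this encoding can be expressed directly as integer-keyed Python dictionaries. The sketch below is minimal and its concrete variant descriptions are hypothetical placeholders patterned on the example of Fig. 9; they are not the values produced by the actual simulations.

```python
# Hypothetical encoding of the passive strategies as integer-keyed dictionaries.
# Only the entries implied by the example of Fig. 9 are filled with the values
# reported there; the remaining variants are illustrative placeholders.
occupation  = {0: "free access", 1: "limit access to 22 visitors", 2: "close the room"}
ventilation = {0: "2 h natural ventilation", 1: "4 h natural ventilation", 2: "windows closed"}
hvac        = {0: "2 h scheduled HVAC", 1: "4 h scheduled HVAC", 2: "HVAC off"}

def decode(configuration):
    """Translate a configuration such as (1, 0, 0) into readable actions.

    The order follows the paper: occupation, ventilation, HVAC system.
    """
    occ, vent, ac = configuration
    return occupation[occ], ventilation[vent], hvac[ac]

print(decode((1, 0, 0)))
# ('limit access to 22 visitors', '2 h natural ventilation', '2 h scheduled HVAC')
```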
\\begin{table}
\begin{tabular}{|l|c|c|} \hline Room & T (\({}^{\circ}\)C) & U\({}_{\text{r}}\) (\%) \\ \hline Winter garden & \(\Delta\text{T}<5\) & \(<45\) \\ Dining room & 19 -- 24 & 40 -- 65 \\ Exhibition rooms & 19 -- 21 & 60 -- 60 \\ \hline \end{tabular}
\\end{table}
Table 1: Thermo-hygrometric parameters required in relation to the specific room.
Figure 5: Urban heat island analysis based on the BIM model of the villa and 3D shapes of its urban context.
Figure 6: The simplified three-dimensional model of the villa for energy simulation.
Figure 7: Location of environments analysed by DSS.
Figure 8: Rooms selected for thermo-hygrometric analysis: a) winter garden; b) Dining room; c) Exhibition hall.
Figure 9: DSS code in Grasshopper and results of the simulation. In this example, with an outdoor temperature of 40 \\({}^{\\circ}\\)C and with a relative humidity of 0.54, the DSS suggests the combination of actions { 1; 0; 0 } which (translated) means to allow access to only 22 users, to allow natural ventilation for 2 hours and 2 hours of predefined air conditioning program.
The order in which they appear corresponds to the different fields of application (first line: occupation, second line: ventilation, third line: air conditioning system). In this way, the dataset so assembled has only successful cases (supervised learning).
The Grasshopper plugin LunchBoxML (which provides a multivariate linear regression component) has been used to identify the correlations between the assumed usage configurations and the satisfaction of the environmental parameters for each day. The incoming data flow of the Grasshopper component consists of two main entries (in addition to the training dataset): the inputs (temperature and relative humidity) and the outputs (configuration used). The ML algorithm performs a linear association between inputs and outputs. If the user wants to find a possible usage strategy, the temperature and humidity values of the day for which the search is carried out must be entered as input test data. In this way it is possible to obtain the most probable correct use configuration in accordance with the values inserted as a test (Fig. 9).
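Outside the Grasshopper environment, the same regression step can be sketched with scikit-learn. The snippet below is a standalone sketch, not the LunchBoxML component: the file name and column names are hypothetical, and the rounding to the nearest admissible key mirrors the {1; 0; 0} style suggestion of Fig. 9.

```python
# Standalone sketch of the DSS regression, assuming a CSV with one row per
# simulated day: outdoor dry-bulb temperature, relative humidity, and the
# configuration (occupation, ventilation, hvac) that satisfied the targets.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

days = pd.read_csv("synthetic_days.csv")          # hypothetical training dataset
X = days[["dry_bulb_temp", "relative_humidity"]]  # inputs
Y = days[["occupation", "ventilation", "hvac"]]   # outputs (integer-coded variants)

model = LinearRegression().fit(X, Y)              # multivariate linear regression

# Query the DSS for a new day, e.g. 40 degC outdoors and a relative humidity of 0.54.
test_day = pd.DataFrame([[40.0, 0.54]],
                        columns=["dry_bulb_temp", "relative_humidity"])
prediction = model.predict(test_day)

# Round to the nearest admissible integer key (0, 1 or 2) to obtain a usable configuration.
configuration = np.clip(np.rint(prediction), 0, 2).astype(int)[0]
print(configuration)
```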
## 6 Conclusions
This study has investigated the concept of DT in relation to the management of museum collections in historical architectures and proposes the definition of an HS-BIM model in which AI, BIM and IoT concepts are integrated. A workflow has also been tested which, thanks to the creation of a synthetic dataset, reduces the time required to gather the data needed for training the developed decision-making model and overcomes the possible initial costs due to sensor installation and setup.
The expected advantages of this workflow lie in ensuring continuity in preventive conservation actions in the absence of qualified professionals and/or a monitoring system. At this first stage of testing, we applied multivariate linear regression, a basic ML algorithm. This choice was also influenced by the presence of two combinations of parameters (external climatic conditions / internal prevention actions) as input and output of the code. In the near future, the algorithm will be refined, and many other parameters will be taken into account.
Furthermore, our purpose is to reach DT maturity level 5 according to Evans' definition (Evans, 2019), and we can assume that at this stage of the research we have achieved level 3. In the future we will work on real data and on the implementation of the system in terms of AI.
Indeed, the reliability of the synthetic dataset created here can only be tested in later phases, because it will be necessary to check, with a sensor system (for at least one year), all thermo-hygrometric parameters for each room and compare them to the expected values. In addition, we will test a possible implementation of Artificial Intelligence mechanisms (such as Deep Learning) that would allow the transition to a learning system based on trial and error.
Over time, thanks to experience and background data, the decision-making system will improve the quality of its work.
## Acknowledgements
Cettina Santagati wrote paragraphs: Introduction, Related Works. Federico Mario La Russa wrote paragraphs: HS-BIM, Methodology, Case Study, Conclusions.
## References
* Abbas (2017) Abbas, O. M., 2017. Forecasting with Machine Learning. _International Journal of Computer._, 26(1), 184-194
* Argiolas, Prenza, and Quaquero (2015) Argiolas, C., Prenza, R. and Quaquero, E., 2015. _BIM 3.0 Dal disegno alla simulazione_, Gangemi, Roma.
* Bakar and Tahir (2009) Bakar, N. A. and Tahir, I. M., 2009. Applying Multiple Linear Regression and Neural Network to Predict Bank Performance. _International Business Research_, 2, 176-183, DOI: 10.5539/ibr.x2nda176
* Bien et al. (2002) Bien, Z., Bang, W. C., Kim, D. Y. and Han, J. S., 2002. Machine intelligence quotient: Its measurements and applications. _Fuzzy Sets and Systems_, 127(1), 3-16, DOI: [https://doi.org/10.1016/S0165-0114](https://doi.org/10.1016/S0165-0114)(01)00149-X
* Bolton et al. (2018) Bolton, R., McColl-Kennedy, J. R., Cheung, L., Gallan, A., Orsingher, C., Witell, L. and Zaki, M., 2018. Customer experience challenges: Bringing together digital, physical and social reams. _Journal of Service Management_, 29(5), 776-808, DOI: [https://doi.org/10.1108/JOSM-04-2018-0113](https://doi.org/10.1108/JOSM-04-2018-0113)
* Brugnaro and Hanna (2017) Brugnaro, G. and Hanna, S., 2017. Adaptive Robotic Training Methods for Subtractive Manufacturing. _Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)_, 164-169
* Calvano, Cirelli, and Lo Turco (2020) Calvano, M., Cirelli, M. and Lo Turco, M., 2020. Display the Invisible. Automated Algorithms to Visualize Complex Phenomena. _Proceedings of the 2nd International and Interdisciplinary Conference on Image and Imagination_. IMG 2019, Alghero, 936-949
* Clements-Croome (2004) Clements-Croome, D., 2004. _Intelligent Buildings: Design, Management and Operation_. Thomas Telford, London.
* Davis (2016) Davis, D., 2016. Evaluating Buildings with Computation and Machine Learning. _Proceedings of the 36th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)_, 116-123
* Eck (2017) Eck, D. J., 2017. Bootstrapping for multivariate linear regression models. _Statistics & Probability Letters._, 134, 141-149, DOI: [https://doi.org/10.1016/j.spl.2017.11.001](https://doi.org/10.1016/j.spl.2017.11.001)
* Evans (2019) Evans, S., 2019. _Beyond buzzwords: the true meaning and value of \"digital twins\"_. Link: [https://www.snclavalin.com/en/beyond-engineering/beyond-buzzwords-the-true-meaning-and-value-ofdigital-twins](https://www.snclavalin.com/en/beyond-engineering/beyond-buzzwords-the-true-meaning-and-value-ofdigital-twins)
* Giovannini (2017) Giovannini, E. C., 2017. VRIM workflow: semantic H-BIM objects using parametric geometries. _3DModeling& BIM. Progetazione, design, proposte per la ricostruzione._, Roma, 212-229
* Grieves (2019) Grieves, M., 2019. _Digital Twin: Manufacturing Excellence through Virtual Factory Replication_. A White Paper, LLC, Melbourne.
* Kim et al. (2018) Kim, J., Zhou, Y., Schiavon, S., Raftery, P. and Brager, G., 2018. Personal comfort models: Predicting individuals' thermal preference using occupant heating and cooling behavior and machine learning. _Building and Environment_, 129, 96-106, DOI: [https://doi.org/10.1016/j.buildenv.2017.12.011](https://doi.org/10.1016/j.buildenv.2017.12.011)
* _Tetro and the collections of the Museum of Representation_. Master's Thesis, University of Catania
* HS-BIM: Historical Sentient - Building Information Model. _Dn. Building Information Modeling. Data & Semantics_, 5, 17-27
* Mahni et al. (2019) Mahni, A. M., Madni, C. C. and Lucero, S. D., 2019. Leveraging Digital Twin Technology in Model-Based Systems Engineering. _Systems_, 7, 7, DOI: [https://doi.org/10.3390/systems7010007](https://doi.org/10.3390/systems7010007)
* Manoli (2015) Manoli, F., 2015. _Manuale di gestione e cura delle collezioni museali_. LE MONNIER Universita, Milano
* _Teoria e pratica in due secoli di dibattito_. Marsilio, Venezia
* Pasini et al. (2016) Pasini, D., Ventura, S. M., Rinaldi, S., Bellagente, P., Flammini, A. and Ciribini, A. L. C., 2016. Exploiting Internet of things and building information modelling framework for management of cognitive building. _2016 IEEE International Smart Cities Conference (ISC2)_, Trento, 1-6, DOI: 10.1109/ISC2.2016.7580817
* Peng et al. (2017) Peng, W., Zhang, F. and Nagakura, T., 2017. Machines' Perception of Space: Employing 3D Isovist Methods and a Convolutional Neural Network in Architectural Space Classification. _Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)_, 474-481
* Ploennigs and Schumann (2017) Ploennigs, J., and Schumann, A., 2017. From semantic models to cognitive buildings. _Proceedings of 31\\({}^{\\text{th}}\\) AAAI17 Conference on Artificial Intelligence_. San Francisco, 5105-5106
* Wright Steenson (2017) Wright Steenson, M., 2017. _Architectural Intelligence: How Designers and Architects Created the Digital Landscape_. MIT Press
* Richardson (2020) Richardson, J., 2020. What Digital Twin Technology Means for Museums. Link: [https://www.museumext.com/article/what-digital-twin-technology-means-for-museums/](https://www.museumext.com/article/what-digital-twin-technology-means-for-museums/)
* Thorndike (1905) Thorndike, E. L., 1905. _The elements of psychology_. A. G. Seiler, New York.
* Tono et al. (2019) Tono, A., Tono, H. and Zani, A., 2019. Encoded Memory: Artificial Intelligence and Deep Learning in Architecture in Bolognesi, C and Santagati, C (eds). _Impact of Industry 4.0 on Architecture and Cultural Heritage_, IGI Global, 283-305, DOI: 10.4018/978-1-7998-1234-0.ch012
* Tono (2018) Tono, A., 2018. _BIMiOX: The Evolutionary Information Genes_. Link: [https://www.autodesk.com/autodesk-university/class/BIMiOX-Evolutionary-Information-Genes-2018?pass=forgeDevCon](https://www.autodesk.com/autodesk-university/class/BIMiOX-Evolutionary-Information-Genes-2018?pass=forgeDevCon)
* Trento et al. (2019) Trento, A., Wurzer, G. and Coraglia, U. M., 2019. A Digital Twin for Directing People Flow in Preserved Heritage Buildings. _37th eCAADe and 23rd SIGraDi Conference_, Porto, 561-568
* Viani et al. (2014) Viani, F., Giarola, E., Polo, A., Vannuccini, G. and Longo, L., 2014. Decision Support System for Museum Management through Distributed Wireless Sensing. _MWF2014: Museums and the Web_, Firenze
* Wong et al. (2005) Wong, J. K., Li, H., and Wang, S. W., 2005. Intelligent building research: A review. _Automation in Construction_, 14(1), pp. 143-159, DOI: [https://doi.org/10.1016/j.autcon.2004.06.001](https://doi.org/10.1016/j.autcon.2004.06.001)
* Zucker et al. (2018) Zucker, G., Sporr, A., Kollmann, S., Wendt, A., Siafara Chiodo L. and Fernbach, A., 2018. A Cognitive System Architecture for Building Energy Management. _IEEE Transactions on Industrial Informatics_, 14(6), 2521-2529, DOI: 10.1109/TII.2018.2815739

This paper investigates the application of the Digital Twin approach to obtain a Sentient building, able to perceive external inputs and to develop strategies to support its management and/or conservation. The experimentation foresees the integration of an H-BIM model with a Decision Support System based on Artificial Intelligence (in this case Machine Learning techniques) for the management of museum collections in historical architectures. The innovative aspect of this methodology resides in the change of paradigm regarding the relations between the historical building under consideration and the professional figures who deal with its management, conservation and architectural restoration. This work tries to contextualize the novel HS-BIM methodology within the theoretical discussion of the disciplines mentioned above and to take part in the Digital Twin debate. HS-BIM can be seen as a possible path towards creating digital twins for cultural heritage. The reflection inspired by this experience aims to revise the concept of Digital Twin as a parallel/external digital model in favour of an artificial evolution of the real system augmented by a "cognitive" apparatus. In this vision, thanks to AI, future buildings will be able to sense "comfort and pain" and to learn from their own life-cycle experience, but also from that of elder sentient buildings, thanks to transfer learning as already applied in AI fields.
Footnote †: Corresponding author
Miaoyu Li \\({}^{1\\dagger}\\), Ji Liu \\({}^{2\\dagger}\\), Ying Fu \\({}^{1\\ast}\\), Yulun Zhang\\({}^{3}\\), Dejing Dou\\({}^{4}\\)
\\({}^{1}\\)Beijing Institute of Technology, \\({}^{2}\\)Baidu Inc., \\({}^{3}\\)ETH Zurich, \\({}^{4}\\)BCG X
[email protected], [email protected], [email protected],
[email protected], [email protected]
## 1 Introduction
With sufficient spectral information, hyperspectral images (HSIs) can provide more detailed characteristics to distinguish different materials than RGB images. Thus, HSIs have been widely applied to face recognition [37, 38], vegetation detection [4], medical diagnosis [43], _etc._ Owing to scanning designs [2] and the large number of wavebands, the photon count in each individual band is limited, so HSIs are easily degraded by various types of noise. Apart from poor visual effects, such undesired degradation also negatively affects downstream applications. To obtain better visual quality and performance in HSI vision tasks, denoising is a fundamental step for HSI analysis and processing.
Similar to RGB images, HSIs have self-similarity in the spatial domain, suggesting that similar pixels can be grouped and denoised together. Moreover, since hyperspectral imaging systems are able to acquire images at a nominal spectral resolution, HSIs have inner correlations in the spectral domain. Thus, it is important to consider both spatial and spectral domains when designing denoising methods for HSI. Traditional model-based HSI denoising methods [10, 17, 21] employ handcrafted priors to explore the spatial and spectral correlations by iteratively solving the optimization problem. Among these works, total variation [20, 21, 52] prior, non-local similarity [19], low-rank [8, 9] property, and sparsity [42] regularization are frequently utilized. The performance of these methods relies on the accuracy of handcrafted priors. In practical HSI denoising, model-based methods are generally time-consuming and have limited generalization ability in diverse scenarios.
To obtain robust learning for noise removal, deep learning methods [7, 41, 35, 49] are applied to HSI denoising and achieve impressive restoration performance. However, most of these works utilize convolutional neural networks for feature extraction and depend on local filter response to separate noise and signal in a limited receptive field.
Recently, vision Transformers have emerged with competitive results in both high-level tasks [16, 39] and low-level tasks [1, 13, 50], showing a strong capability of modeling long-range dependencies among image regions. To diminish the computation cost, which grows quadratically with image size, many works have investigated efficient designs of spatial attention [47, 11, 46]. Swin Transformer [28] split feature maps into shifted square windows. CSWin Transformer [15] developed stripe windows across the feature maps to enlarge the attention area. As HSI usually has large feature maps, exploring similarity far beyond the noisy pixel introduces an unnecessary computational burden. Thus, how to efficiently model the non-local spatial similarity is still challenging for an HSI denoising Transformer.
HSIs usually lie in a spectral low-rank subspace [9], which can maintain the distinguishing information and suppress noise. This indicates that the non-local spatial similarity and the low-rank spectral statistics should be jointly utilized for HSI denoising. However, existing HSI denoising methods [24, 45] mainly utilize the low-rank characteristics through matrix factorization, which is performed on a single HSI and requires a long time to solve. The global low-rank property across large datasets is hardly considered.
In this paper, we propose a **S**pectral **E**nhanced **R**ectangle **T**ransformer (SERT) for HSI denoising. To reinforce model capacity at a reasonable cost, we develop a multi-shape rectangle self-attention module to comprehensively explore the non-local spatial similarity. Besides, we aggregate the most informative spectral statistics to suppress noise in our spectral enhancement module, which projects the spatial-spectral cubes into low-rank vectors with the assistance of a global spectral memory unit. The spectral enhancement module also provides interactions between the non-overlapping spatial rectangles. With our proposed Transformer, the spatial non-local similarity and the global spectral low-rank property are jointly considered to benefit the denoising process. Experimental results show that our method significantly outperforms the state-of-the-art methods on both simulated data and real noisy HSIs.
Overall, our contributions can be summarized as follows:
* We propose a spectral enhanced rectangle Transformer for HSI denoising, which can well exploit both the non-local spatial similarity and global spectral low-rank property of noisy images.
* We present a multi-shape rectangle spatial self-attention module to effectively explore the comprehensive spatial self-similarity in HSI.
* A spectral enhancement module with memory blocks is employed to extract the informative low-rank vectors from HSI cube patches and suppress the noise.
## 2 Related Works
### Hyperspectral Image Denoising
HSI denoising is a well-developed research area in computer vision [44, 9, 19] and remote sensing [49, 34]. Mainstream HSI denoising methods can be classified into model-based methods and deep learning methods.
Traditional model-based methods [54, 29, 48, 10, 29] illustrate noise removal as an iterative optimization problem with handcrafted priors. Adaptive spatial-spectral dictionary methods are proposed in [17]. Chang [9] employed the hyper-Laplacian regularized unidirectional low-rank tensor recovery method to utilize the structure correlation in HSI. The spatial non-local similarity and global spectral low-rank property are integrated in [19] for denoising. Besides, other conventional spatial regularizers [52, 29] and low-rank regularization [8] are also introduced to model the spatial and spectral properties of noisy HSI.
With great potential to automatically learn and represent features, deep learning methods [45, 32, 7, 32] have been actively investigated for HSI denoising. Spectral-spatial features are exploited via residual convolutional network in HSID-CNN [49]. A deep spatial-spectral global reasoning network is proposed in [7] to consider both the local and global information for HSI denoising. Besides, a quasi-recurrent neural network was extended to HSI denoising task [32, 41], showing the benefits of both convolutional and recurrent neural networks. Model-guided interpretable networks have also been actively explored in [3, 44]. Different from those convolution-based networks that have limited receptive field and fixed feature extraction paradigms, our proposed method utilizes a transformer to better model the inner similarity in spatial and spectral domains.
### Vision Transformer
**Transformer for RGB images.** Transformers have been actively applied to vision tasks [47, 16, 39, 18] due to their powerful ability in modeling long-range dependencies. The self-attention mechanism has been proven to be efficacious in previous works [40, 23]. When applied to the spatial domain, it is crucial for Transformers to consider the trade-off between computation cost and model capacity. To cut down the quadratic growth of computation with image size, Dosovitskiy [16] first employed a Transformer for image recognition with images split into small patches. Swin Transformer [28] was proposed with shifted windows for self-attention in the spatial domain. To further enlarge the receptive field of self-attention, down-sampled attention was introduced in [47, 39, 13]. Without spatial information loss, Dong [15] employed horizontal and vertical stripes to compute self-attention. However, for HSI denoising, the non-local spatial similarity is not efficiently explored, as these Transformers conduct spatial self-attention in limited windows or introduce unnecessary computation cost. Besides, the joint consideration of the spatial and spectral domains is rarely investigated.
**Transformer for HSI.** Recently, there is an emerging trend of applying Transformers to HSI restoration [1, 51, 36] and HSI classification [22, 27]. An architecture search framework was proposed in [55] to find a suitable network consisting of spectral and spatial Transformers for HSI classification. A 3D quasi-recurrent and Transformer network was presented in [1] for hyperspectral image denoising, which combines a 3D quasi-recurrent layer with Swin blocks. Different from these works, which tend to directly adopt existing Transformer blocks from other tasks, the methods in [5, 6] solve the HSI reconstruction problem with task-oriented Transformer blocks under the guidance of a degradation mask. However, these works do not consider the similarity in both the spatial and spectral domains. Here, we introduce our spectral enhanced rectangle Transformer for HSI denoising, exploring two of the most important characteristics of HSI: spatial non-local similarity and the global spectral low-rank property.
## 3 Spectral Enhanced Rectangle Transformer
Assuming the degraded noisy HSI as \\(\\mathbf{Y}\\in\\mathbb{R}^{H\\times W\\times B}\\), where \\(H\\), \\(W\\), and \\(B\\) represent the height, width, and band of the HSI, the noise degradation can be formulated as
\\[\\mathbf{Y}=\\mathbf{X}+\\mathbf{n}, \\tag{1}\\]
where \\(\\mathbf{X}\\!\\in\\!\\!\\mathbb{R}^{H\\times W\\times B}\\) is the desired clean HSI, and \\(\\mathbf{n}\\!\\!\\in\\!\\!\\mathbb{R}^{H\\times W\\times B}\\) denotes the addictive random noise. In realistic HSI degradation situations, HSIs are corrupted by various types of noise, _e.g._, Gaussian noise, stripe noise, deadline noise, impulse noise, or a mixture of them.
In this section, we elaborately introduce our proposed spectral enhanced rectangle Transformer for HSI denoising. The overall architecture is shown in Figure 1. In our implementation, each Residual Transformer Layer (RTL) consists of 6 Transformer blocks. And the proposed Transformer Block mainly contains two essential components, _i.e._, rectangle self-attention (RA) module and spectral enhancement (SE) module. Figure 1(b) and Figure 1(c) illustrate the detailed framework of RA module and SE module, respectively. The outputs of RA and SE are added together to achieve comprehensive feature embeddings for noise removal. Next, we discuss each module in detail.
### Spatial Rectangle Self-Attention
To remove noise from HSI, it is important to explore the similarity information in spatial domain [19], which implies that similar pixels can be aggregated together for denoising. Existing deep learning-based HSI denoising methods mainly utilize the convolutional layer to extract the local information with spatially invariant kernels, limiting the flexibility to model the non-local similarity.
For better model capacity, there are various attempts [28, 39, 50] that employ Transformers as an alternative to convolutional neural networks. The power of the self-attention mechanism in modeling spatial information has also been proven in [13, 26]. Since global self-attention in the spatial domain introduces high computational complexity, Swin Transformer [28] and CSWin Transformer [15] split the input feature into windows or stripes for the attention operation. From the heatmap shown in Figure 2, we can observe that neighboring pixels are more similar to the center pixel than distant pixels. When conducting spatial self-attention, Swin (see Figure 2(b)) focuses on local information, while CSWin (Figure 2(c)) tends to utilize pixels that are less informative. Thus, how to effectively conduct self-attention over the informative spatial regions to model non-local similarity is still challenging for HSI denoising.
Here, we propose a rectangle self-attention in the spatial domain, in which the feature maps are split into several non-overlapping rectangles. As shown in Figure 2, our rectangle Transformer focuses on the informative neighboring pixels and obtains more exhaustive information in non-local area. At different stages of the network, rectangles of different shapes are employed to explore better expression ability.
The details of our proposed RA module are shown in Figure 1(b). To obtain comprehensive features, the rectangle self-attention is conducted vertically and horizontally
Figure 1: Overall framework of SERT. (a) SERT mainly includes two essential components, _i.e._, RA for non-local spatial similarity and SE for the global low-rank property. (b) spatial rectangle self-attention (RA) and (c) spectral enhancement (SE) module.
after the spectral split operation. Different from [15], we add a spectral shuffle [30] operation to exchange the information between the two branches. Since the vertical and horizontal rectangle self-attention focus on different regions and have different receptive fields, the shuffle operation also enlarges the receptive field of the whole module.
Let \\(\\mathds{Z}{\\in}\\mathbb{R}^{H\\times W\\times C}\\) denote the input features of RA module. The outputs of RA module is calculated via
\\[\\mathbf{Z}_{1},\\mathbf{Z}_{2}=\\mathrm{Split}(\\mathbf{Z}), \\tag{2}\\] \\[\\hat{\\mathbf{Z}}^{1}=\\mathrm{W}\\mathrm{-RMSA}(\\mathbf{Z}^{1}),\\hat{\\mathbf{Z} }^{2}=\\mathrm{H}\\mathrm{-RMSA}(\\mathbf{Z}^{2})\\] (3) \\[\\hat{\\mathbf{Z}}=\\mathrm{Shuffle}([\\hat{\\mathbf{Z}}^{1},\\hat{\\mathbf{Z}}^{2}]), \\tag{4}\\]
where \\(\\mathrm{W}\\mathrm{-RMSA}\\) denotes the horizontal rectangle multi-head self-attention, and \\(\\mathrm{H}\\mathrm{-RMSA}\\) denotes the vertical rectangle multi-head self-attention. \\(\\mathbf{Z}\\) is firstly divided into two parts in spectral domain, where \\(\\mathbf{Z}^{1}{\\in}\\mathbb{R}^{H\\times W\\times\\frac{C}{2}}\\) and \\(\\mathbf{Z}^{2}{\\in}\\mathbb{R}^{H\\times W\\times\\frac{C}{2}}\\). Then, \\(\\mathbf{Z}^{1}\\) and \\(\\mathbf{Z}^{2}\\) conduct the \\(\\mathrm{W}\\mathrm{-RMSA}\\) and \\(\\mathrm{H}\\mathrm{-RMSA}\\) separately.
Supposing the size of the horizontal rectangle is \([h,w]\) with \(h{>}w\), for \(\mathrm{W}\mathrm{-RMSA}\) the input feature \(\mathbf{Z}^{1}\) is partitioned into non-overlapping rectangles \(\{\mathbf{Z}_{1}^{1},\mathbf{Z}_{2}^{1},\ldots,\mathbf{Z}_{N}^{1}\}\), in which \(\mathbf{Z}_{i}^{1}\in\mathbb{R}^{h\times w\times\frac{C}{2}}\) and \(N{=}\frac{W\times H}{h\times w}\). The output of each rectangle from \(\mathrm{W}\mathrm{-RMSA}\) is calculated as
\[\mathbf{Q}_{i}^{1}=\mathbf{Z}_{i}^{1}\mathbf{W}_{q}^{1},\quad\mathbf{K}_{i}^{1}=\mathbf{Z}_{i}^{1}\mathbf{W}_{k}^{1},\quad\mathbf{V}_{i}^{1}=\mathbf{Z}_{i}^{1}\mathbf{W}_{v}^{1}, \tag{5}\] \[\hat{\mathbf{Z}}_{i}^{1}=\mathrm{SoftMax}\big(\mathbf{Q}_{i}^{1}{\mathbf{K}_{i}^{1}}^{T}/\sqrt{d}+\mathbf{P}\big)\mathbf{V}_{i}^{1}, \tag{6}\]
where \\(\\mathbf{W}_{q}^{1}\\), \\(\\mathbf{W}_{k}^{1}\\), \\(\\mathbf{W}_{v}^{1}\\)\\(\\in\\)\\(\\mathbb{R}^{\\frac{C}{2}\\times\\frac{C}{2}}\\) are the projection mappings of _query_\\(\\mathbf{Q}_{i}^{1}{\\in}\\mathbb{R}^{h\\times w\\times\\frac{C}{2}}\\), _keys_\\(\\mathbf{K}_{i}^{1}{\\in}\\mathbb{R}^{h\\times w\\times\\frac{C}{2}}\\), and _value_\\(\\mathbf{V}_{i}^{1}{\\in}\\mathbb{R}^{h\\times w\\times\\frac{C}{2}}\\). \\(\\mathbf{P}\\) is the learnable parameter embedding the position and \\(d\\) is the feature dimension. Then the outputs of horizontal rectangle self-attention is aggregated by
\\[\\mathrm{W}\\mathrm{-RMSA}(\\mathbf{Z}^{1})=\\mathrm{Merge}(\\hat{\\mathbf{Z}}_{1}^{1},\\hat {\\mathbf{Z}}_{2}^{1}, ,\\hat{\\mathbf{Z}}_{N}^{1}). \\tag{7}\\]
For the vertical rectangle self-attention \(\mathrm{H}\mathrm{-RMSA}\), the size of the rectangle is \([w,h]\), while the other operations are the same as in \(\mathrm{W}\mathrm{-RMSA}\). Moreover, at different layers of the network, rectangles of various shapes are employed to explore non-local similarity at different scales.
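The following sketch instantiates W-RMSA for one layer under the assumptions that the spatial size is divisible by the rectangle size, that the head count is four, and that the relative position bias \(\mathbf{P}\) of Eq. (6) is omitted for brevity; H-RMSA is obtained by transposing the rectangle shape.

```python
import torch
import torch.nn as nn

class RectangleMSA(nn.Module):
    """Sketch of W-RMSA (Eqs. 5-7): multi-head self-attention inside
    non-overlapping rh x rw rectangles of a (B, H, W, C) feature map."""

    def __init__(self, dim, rect=(16, 1), num_heads=4):
        super().__init__()
        self.rect, self.heads = rect, num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, z):                               # z: (B, H, W, C)
        b, H, W, c = z.shape
        rh, rw = self.rect
        # partition into N = (H/rh) * (W/rw) rectangles of rh*rw tokens each
        z = z.reshape(b, H // rh, rh, W // rw, rw, c).permute(0, 1, 3, 2, 4, 5)
        z = z.reshape(-1, rh * rw, c)
        q, k, v = self.qkv(z).chunk(3, dim=-1)          # Eq. (5)
        split = lambda t: t.reshape(t.shape[0], -1, self.heads, c // self.heads).transpose(1, 2)
        q, k, v = map(split, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * (c // self.heads) ** -0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(-1, rh * rw, c)  # Eq. (6)
        out = self.proj(out)
        # merge rectangles back to (B, H, W, C), i.e. Eq. (7)
        out = out.reshape(b, H // rh, W // rw, rh, rw, c).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(b, H, W, c)
```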
### Spectral Enhancement
In traditional model-based HSI denoising methods, HSI is always represented by its extracted patches, and the low-rank property is widely explored in HSI denoising [9], compressive sensing [14], unmixing [24], implying that the low-dimensional spectral subspace is beneficial to HSI tasks. We also adopt the low-rank property to guide the HSI denoising process. However, without strong regularization like SVD decomposition [8], projecting the noisy HSI into a proper subspace is difficult. Thus, instead of introducing orthogonal linear projection as in [12] to HSI, we use the memory unit (MU) to store the low-rank statistics of HSI cubes. The network itself automatically learns how to represent the HSI cubes in subspace. The MU module can be denoted as a dictionary of global low-rank spectral vectors.
As shown in Figure 1(c), the features are first partitioned into several cube patches of size \(P{\times}P{\times}C\) to explore the spectral-spatial correlation. In our implementation, \(P\) is set to the long side of the rectangle in the RA module. Accordingly, the spectral enhancement block also provides information interaction among the rectangles inside each cube. Moreover, a shift operation [28] is employed in the spatial domain to establish connections between adjacent cube patches.
The input of SE module is denoted as \\(\\mathbf{Z}_{p}\\in\\mathbb{R}^{P\\times P\\times C}\\). To obtain distinguished spectral information in a subspace, following [23] and [11], a squeeze operation is employed and aggregates the features across the cube patch \\(\\mathbf{Z}_{p}\\) to produce a projected spectral vector of size \\(1{\\times}1{\\times}K\\). Specifically, a downsample operation is firstly conducted in the spatial domain to obtain aggregated spectral vector \\(\\mathbf{Z}_{c}{\\in}\\mathbb{R}^{1{\\times}1{\\times}C}\\). Then, it is projected to obtain \\(\\mathbf{Z}_{k}\\in\\mathbb{R}^{1{\\times}1{\\times}K}\\), which is in a subspace of rank \\(K\\). The extraction is described as
\\[\\mathbf{Z}_{c} =\\mathrm{AveragePool}(\\mathbf{Z}_{p}), \\tag{8}\\] \\[\\mathbf{Z}_{k} =\\mathbf{Z}_{c}\\mathbf{W}_{k}, \\tag{9}\\]
where \\(\\mathbf{W}_{k}\\in\\mathbb{R}^{C\\times K}\\) is the projection mapping. Notably, instead of conducting a global aggregation on the whole image, we focus on the information inside the cube since neighboring pixels tend to share similar spectral statistics.
To explore the spatial-spectral correlation beyond the current HSI cube and enhance the expression ability of the low-rank spectral vector, we introduce a memory unit (MU) to store the spectral information. The MU maintains a global memory bank \(\mathbf{M}\in\mathbb{R}^{K\times B}\), which is learned as a parameter of the network. For the spectral vector \(\mathbf{Z}_{k}\), we seek the most relevant low-rank spectral vectors in the MU and use these vectors to adjust the projected vector \(\mathbf{Z}_{k}\). The corresponding coefficients \(\mathbf{I}\in\mathbb{R}^{1{\times}B}\) between \(\mathbf{Z}_{k}\) and the stored low-rank vectors \(\mathbf{M}\) are extracted by
\\[\\mathbf{I}=\\mathrm{Softmax}(\\mathbf{Z}_{k}\\mathbf{M}). \\tag{10}\\]
With coefficients matrix \\(\\mathbf{I}\\), the desired low-rank vector \\(\\mathbf{Z}_{l}{\\in}\\mathbb{R}^{1{\\times}1{\\times}K}\\) can be obtained from MU via
\\[\\mathbf{Z}_{l}=\\mathbf{I}\\mathbf{M}. \\tag{11}\\]
Figure 2: This similarity statistic is obtained on the Realistic dataset [53]. As the distance becomes larger, the similarity decreases.
Since \\(\\mathbf{Z}_{l}\\) represents the most informative spectral statistics of the noisy cube, to enhance the spatial-spectral correlation and suppress noise, we use the obtained low-rank vector as guidance to benefit the denoising process. The output of our spectral enhancement module is obtained by rescaling the input SHI cube \\(\\mathbf{Z}_{p}\\) with \\(\\mathbf{Z}_{l}\\) as
\\[\\hat{\\mathbf{Z}}_{p}=\\mathbf{Z}_{p}\\cdot\\mathbf{W}_{c}\\mathbf{Z}_{l}, \\tag{12}\\]
where \\(\\mathbf{W}_{c}\\in\\mathbb{R}^{C\\times K}\\) is the project mapping and \\(\\cdot\\) is the element-wise dot product.
## 4 Experiments
In this section, we first evaluate our method with synthetic experiments, including Gaussian noise cases and complex noise cases. Then we report results on real noisy datasets. Finally, we perform model analysis experiments to verify the effectiveness of the proposed model.
We compare with several traditional model-based HSI denoising methods, including a filter-based method (BM4D [31]), a tensor-based method (LLRT [9]), and an orthogonal basis-based method (NGMeet [19]). Five state-of-the-art deep learning-based methods, _i.e._, HSID-CNN [49], GRNet [7], QRNN3D [41], T3SC [3], and MAC-Net [45], are also compared. Traditional methods are run in Matlab on an Intel Core i9-10850K CPU. Our method, as well as the other deep networks, is evaluated on an NVIDIA RTX 3090 GPU. Peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and spectral angle mapper (SAM) are used as the quantitative criteria.
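For reference, PSNR and SAM can be computed as sketched below (PSNR averaged over bands, SAM in degrees averaged over pixels); SSIM is typically computed per band with a standard implementation such as `skimage.metrics.structural_similarity`. Evaluation conventions differ slightly across papers, so this is an illustration rather than the exact evaluation code used here.

```python
import numpy as np

def mpsnr(x, y, data_range=1.0):
    """Mean PSNR over bands of two HSIs with shape (H, W, B)."""
    mse = ((x - y) ** 2).mean(axis=(0, 1))
    return float(np.mean(10.0 * np.log10(data_range ** 2 / np.maximum(mse, 1e-12))))

def sam(x, y, eps=1e-8):
    """Mean spectral angle mapper (in degrees) between two HSIs of shape (H, W, B)."""
    dot = (x * y).sum(axis=-1)
    denom = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    angle = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(angle).mean())
```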
### Experiments on Synthetic Data
**Datasets.** Synthetic experiments are conducted on the ICVL dataset, which has been widely used for simulated studies [3, 41]. ICVL contains 201 HSIs of size 1392\(\times\)1300 with 31 bands from 400 \(nm\) to 700 \(nm\). We use 100 HSIs for training, 5 HSIs for validation, and 50 HSIs for testing. Following the settings in [3] and [41], training images are cropped to size 64\(\times\)64 at different scales. During the testing phase, HSIs are cropped to 512\(\times\)512\(\times\)31 to obtain an affordable computation cost for the traditional methods.
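The multi-scale cropping can be sketched as follows; the scale set is an assumption, since the text only states that 64\(\times\)64 training patches are cropped at different scales.

```python
import numpy as np

def random_multiscale_crop(hsi, size=64, scales=(1, 2, 4), rng=None):
    """Crop a (H, W, B) HSI to a size x size training patch at a random scale:
    a (size*s) x (size*s) region is cropped and then subsampled by s."""
    rng = rng or np.random.default_rng()
    s = int(rng.choice(scales))
    H, W, _ = hsi.shape
    top = rng.integers(0, H - size * s + 1)
    left = rng.integers(0, W - size * s + 1)
    crop = hsi[top:top + size * s, left:left + size * s]
    return crop[::s, ::s]
```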
**Implementation Details.** We use noise patterns in [41] to simulate the noisy HSIs. Specifically, the noise patterns are
* i.i.d Gaussian noise from level 10 to level 70.
* Complex noise cases. Five types of complex noise are included, _i.e._, Non-i.i.d Gaussian noise, Gaussian + Stripe noise, Gaussian + Deadline noise, Gaussian + Impulse noise, and Mixture noise.
\\begin{table}
\\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \\hline \\hline \\multirow{2}{*}{Method} & \\multicolumn{3}{c|}{10} & \\multicolumn{3}{c|}{30} & \\multicolumn{3}{c|}{50} & \\multicolumn{3}{c|}{70} & \\multicolumn{3}{c}{10-70} \\\\ & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM \\\\ \\hline Noisy & 28.13 & 0.8792 & 18.72 & 18.59 & 0.5523 & 37.9 & 14.15 & 0.3476 & 49.01 & 11.23 & 0.2301 & 56.45 & 17.24 & 0.4782 & 41.94 \\\\ BM4D [31] & 40.78 & 0.9930 & 2.99 & 37.69 & 0.9872 & 5.02 & 34.96 & 0.9850 & 6.81 & 33.15 & 0.9554 & 8.40 & 36.62 & 0.9770 & 5.51 \\\\ LLRT [9] & 46.72 & 0.9983 & 1.60 & 41.12 & 0.9920 & 2.52 & 38.24 & 0.9830 & 3.47 & 36.23 & 0.9732 & 4.46 & 40.06 & 0.9860 & 3.24 \\\\ NGMeet [19] & **47.90** & 0.9988 & 1.39 & 42.44 & 0.9816 & 2.06 & 39.69 & 0.9658 & 2.49 & 38.05 & 0.9531 & 2.83 & 41.67 & 0.9937 & 2.19 \\\\ HSID-CNN [49] & 43.14 & 0.9918 & 2.12 & 40.30 & 0.9854 & 3.14 & 37.72 & 0.9746 & 4.27 & 34.95 & 0.9521 & 5.84 & 39.04 & 0.9776 & 3.71 \\\\ GRNet [7] & 45.25 & 0.9976 & 1.83 & 42.09 & 0.9957 & 2.18 & 40.25 & 0.9936 & 2.42 & 38.95 & 0.9914 & 2.63 & 41.44 & 0.9944 & 2.27 \\\\ QRNN3D [41] & 45.61 & 0.9977 & 1.80 & 42.18 & 0.9955 & 2.21 & 40.05 & 0.9929 & 2.63 & 38.09 & 0.9883 & 3.42 & 41.34 & 0.9938 & 2.42 \\\\ T3SC [3] & 45.81 & 0.9979 & 2.02 & 42.44 & 0.9957 & 2.44 & 40.39 & 0.9933 & 2.85 & 38.80 & 0.9904 & 3.26 & 41.64 & 0.9942 & 2.61 \\\\ MAC-Net [45] & 45.20 & 0.9974 & 1.87 & 42.10 & 0.9955 & 2.35 & 40.09 & 0.9931 & 2.79 & 38.64 & 0.9905 & 3.16 & 41.31 & 0.9941 & 2.52 \\\\
**SERT (Ours)** & 47.72 & **0.9988** & **1.36** & **43.56** & **0.9969** & **1.77** & **41.33** & **0.9949** & **2.05** & **39.82** & **0.9929** & **2.30** & **42.82** & **0.9957** & **1.88** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Averaged results of different methods under Gaussian noise levels on ICVL dataset. PSNR is in dB.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Non-i.i.d Gaussian} & \multicolumn{3}{c|}{Gaussian+Deadline} & \multicolumn{3}{c|}{Gaussian+Impulse} & \multicolumn{3}{c|}{Gaussian+Stripe} & \multicolumn{3}{c}{Gaussian+Mixture} \\ & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM & PSNR & SSIM & SAM \\ \hline Noisy & 18.29 & 0.5116 & 46.20 & 17.50 & 0.4770 & 47.55 & 14.93 & 0.3758 & 46.98 & 17.51 & 0.4867 & 46.98 & 13.91 & 0.3396 & 51.53 \\ BM4D [31] & 36.18 & 0.9767 & 5.78 & 33.77 & 0.9615 & 6.85 & 29.79 & 0.8613 & 21.59 & 35.63 & 0.9730 & 6.26 & 28.01 & 0.8419 & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Averaged results of different methods under complex noise cases on ICVL dataset. PSNR is in dB.
For i.i.d Gaussian noise case, we train networks with random noise levels from 10 to 70 and test them under different levels of noise. For complex noise, networks are trained with a mixture of noise and tested under each case.
For our proposed model, the learning rate is set to \(1e{-4}\) with the Adam optimizer. After 50 epochs, the learning rate is divided by 10, and the total number of epochs is 80. We set the basic channel number \(C=96\) and the rank size \(K=12\). The rectangle sizes of the three Transformer layers are set to \([16,1]\), \([32,2]\), and \([32,4]\), respectively. For the competing methods, we use the parameter settings in the referenced works and make a great effort to reproduce the best results.
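The optimization schedule described above corresponds to the following PyTorch sketch, where `model` is a placeholder for the SERT network.

```python
import torch

def make_optimizer(model, lr=1e-4, milestone=50, gamma=0.1):
    """Adam with the learning rate divided by 10 after epoch 50, as described above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[milestone], gamma=gamma)
    return optimizer, scheduler
```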
**Quantitative Comparison.** We show the quantitative results of Gaussian noise experiments and complex noise experiments in Tables 1 and 2. Among these traditional methods, NGMeet performs well on Gaussian noise cases in Table 1 and surpasses the deep learning method HSID-CNN. However, results of NGMeet and other model-based methods under complex noise cases in Table 2 are much worse, showing the poor generalization ability of handcrafted priors. Our proposed method outperforms other deep learning methods by at least 0.9 dB for all noise cases. Notably, our method effectively recovers a more accurate image from the challenging complex noisy HSIs, demonstrating its impressive ability to handle various noise.
**Visual Comparison.** To further demonstrate the denoising performance of our method, we show the denoised results of different methods under random Gaussian noise and deadline noise in Figure 3. In the top row, QRNN3D exhibits excessive smoothness for some of the more complex textures, while our method produces far fewer artifacts than NGMeet and the other methods. In the bottom row, our method restores more texture details with less noise.
### Experiments on Real Noisy Data
**Datasets.** Urban dataset and Realistic dataset from [53] are both adopted for our real data experiments.
The Urban dataset contains an image of size 307\(\times\)307 with 210 bands covering 400 to 2500 \(nm\). Since there is no clean HSI, we use the APEX dataset [25] for pre-training, in which band-dependent noise with levels from 0 to 55 is added to the clean HSIs. The settings are the same as in [3].
For the Realistic dataset [53], there are 59 noisy HSIs provided with paired clean HSIs. Each HSI contains 696\(\times\)520 pixels in spatial resolution with 34 bands from 400 \(nm\) to 700 \(nm\). We randomly select 44 HSIs from both indoor and outdoor scenes for training, and the rest are used for testing.
**Implementation Details.** For the Urban dataset experiment, the competing networks are trained with their default parameter settings. Our method is trained for 100 epochs with
Figure 4: Visual quality comparison of real noisy HSI experiments on Urban dataset with bands 1, 108, 208.
Figure 3: Visual comparison on ICVL. Images are from band 28. The top row exhibits the results under Gaussian noise with noise level 50, and the bottom row exhibits the results under deadline noise.
a learning rate \\(1e-4\\). For the Realistic dataset [52], we crop overlapped 128\\(\\times\\)128 spatial regions with data augmentation to train deep networks. The data augmentation settings in [52] are also adopted. The training epoch is set to 1000.
**Quantitative Comparison.** Table 3 shows the averaged results of different methods on the Realistic dataset. Our proposed SERT significantly outperforms other HSI denoising methods by almost 0.5 dB, showing the effectiveness of our method in handling real noise.
**Visual Comparison.** We provide the denoising results on real noisy HSIs in Figures 4 and 5. Our method is superior to the traditional and deep learning methods in terms of both noise removal and detail retention. From Figure 4, we can observe that the Urban image is corrupted by complex noise, and the stripe noise severely affects the visual quality of the image. The denoised images obtained by other methods are either over-smoothed or still contain obvious stripe noise. Our method provides a clean output image while preserving textures and sharpness. For the visual comparison on the Realistic dataset in Figure 5, the competing methods generate incorrect textures and are less effective in noise removal, while our method achieves the most promising visual result.
### Comparison with other Transformers
To show the effectiveness of our method in exploring spatial and spectral characteristics of HSIs, we evaluate our model with four Transformer methods in Table 4. Our model achieves the best results, implying that the proposed Transformer block is more suitable for HSI denoising.
**Differences with existing RGB Transformers.** Existing RGB Transformer methods consider the inner long-range dependency from the spatial dimension [13, 28] or spectral dimension [50]. Our Transformer explores the joint correlation. Besides, our Transformer block utilizes the non-local similarity and low-rank property, providing a better modeling capability to explore the rich information of HSI.
**Differences with existing HSI Transformers.** TRQ3D [33] proposed a hybrid framework that employs both a Swin Transformer and a 3D quasi-recurrent network for HSI denoising. Since its Transformer block is adopted directly from RGB image tasks, the inner characteristics of HSI are hardly utilized in that Transformer-based network.
### Model Analysis
**Model Complexity.** In Table 5, we compare the average inference time, GFLOPs, and denoising performance of different denoising methods on the ICVL dataset and the real noisy dataset [53]. Our method achieves competitive computation cost and inference time with better performance.
**Component Analysis.** The results of different component designs are given in Table 6(a). The first row presents the Transformer with only rectangle self-attention (RA) in the spatial domain. Applying spectral enhancement (SE) to capture spatial-spectral information remarkably boosts the de
\\begin{table}
\begin{tabular}{|c|c c c c c|} \hline Metric & SwinIR [26] & Restormer [50] & CSwin [15] & TRQ3D [33] & **SERT (Ours)** \\ \hline GFLOPS & 1473.0 & 3652.8 & 1129.5 & 2135.7 & **1018.9** \\ \hline Params (M) & 2.98 & 909.4 & 58.53 & **0.68** & 1.91 \\ \hline PSNR (dB) & 40.44 & 41.07 & 42.04 & 41.66 & **42.82** \\ \hline SSIM & 0.9938 & 0.9945 & 0.9951 & 0.9947 & **0.9957** \\ \hline SAM & 2.32 & 2.05 & 2.18 & 2.21 & **1.88** \\ \hline \end{tabular}
\\end{table}
Table 4: Comparison with other Transformers under random Gaussian noise on the ICVL dataset. SwinIR, Restormer, and CSwin are proposed for RGB image tasks. TRQ3D is for HSI denoising.
Figure 5: Visual comparison on the Realistic dataset [53] for scene 5 with corresponding PSNR. The images are from band 12 at 550 nm.
\\begin{table}
\begin{tabular}{|c|c c c c c c c c c c|} \hline Metric & Noisy & BM4D [31] & LLRT [9] & NGMeet [19] & HSID-CNN [49] & GRNet [7] & QRNN3D [41] & T3SC [3] & MAC-Net [45] & **SERT (Ours)** \\ \hline PSNR & 23.26 & 29.04 & 28.26 & 28.72 & 26.44 & 25.33 & 28.12 & 28.51 & 29.20 & **29.68** \\ \hline SSIM & 0.7609 & 0.9471 & 0.9417 & 0.9511 & 0.8992 & 0.8381 & 0.9066 & 0.9323 & 0.9489 & **0.9533** \\ \hline SAM & 17.329 & 3.087 & 3.960 & 2.735 & 5.242 & 9.737 & 5.590 & 4.408 & 4.099 & **2.536** \\ \hline \end{tabular}
\\end{table}
Table 3: Average results of different methods on 15 real noisy HSIs. The PSNR is in dB, and best results are in bold.
noising performance, with a 0.42 dB improvement. The introduction of the spectral shuffle (SS) also slightly improves the results, which validates the necessity of feature fusion. With the memory unit (MU), the model gains 0.18 dB in PSNR, demonstrating the effectiveness of learning from a large-scale dataset to obtain representative low-rank vectors.
**Position of SE Module.** We further place our SE module at different positions to obtain the spatial-spectral correlation; the results are shown in Table 6(b). For global SE, the whole feature map of the HSI is projected to one low-rank vector. Local SE stands for an SE module that projects the features inside one rectangle into one vector. Non-local SE, which is the employed design, projects several neighboring rectangles into one vector. Interestingly, global SE brings a slight decrease in performance, indicating that extracting a low-rank vector from the entire HSI is inappropriate. As can be seen, non-local SE yields the best performance. We owe this to its ability to create interactions between spatial rectangles and aggregate information from neighboring similar pixels.
**Visualization of Low-rank Vectors.** To demonstrate the role of the spectral enhancement module, we visualize several low-rank vectors obtained by the SE module in Figure 6. The input cubes are severely influenced by noise, and it is difficult to judge the similarities between cubes visually. However, the low-rank vectors extracted from these noisy cube patches by the SE module show clear similarities. Since patches 7, 8 and 9 are all from the road area, their projected low-rank vectors are more similar to each other than to the other vectors. This proves the ability of the SE module to extract essential information from patches and suppress noise.
**Parameter Analysis.** We evaluate our proposed rectangle Transformer under different settings of the rectangle size in Figure 7. We fix the width of the rectangles and change their lengths for comparison. Since our method includes three Transformer layers, we change the length in different layers. It can be observed that a longer rectangle does not necessarily bring better performance for HSI denoising, validating the design of our rectangle self-attention in modeling non-local similarity in the spatial domain.
## 5 Conclusion
In this paper, we present a spectral enhanced rectangle Transformer for HSI denoising, considering the spatial non-local similarity and the spectral low-rank property of HSI. We exploit the non-local similarity via multi-shape rectangle self-attention in the spatial domain with computational efficiency. Moreover, we integrate a spectral enhancement module with a learnable memory unit to explore the global spectral low-rank property of HSI. The proposed spectral enhancement introduces interactions across spatial rectangles while maintaining informative spectral characteristics and suppressing noise. In summary, our proposed Transformer utilizes the spatial-spectral correlation to eliminate noise. Extensive quantitative and qualitative experiments demonstrate that our method significantly outperforms other competing methods on synthetic and real noisy HSIs. In the future, we plan to extend our method to cope with various HSI restoration tasks.
Table 6: Component analysis of various designs on ICVL dataset under random Gaussian noise.
Figure 6: Visualization of low-rank vectors in SE module.
Table 5: Comparisons of PSNR, Params (M), GFLOPs and inference time of different deep learning methods.
Figure 7: Different settings of the rectangle's length at different layers. The widths are set to [1, 2, 4] by default.
## References
* [1] Wele Gedara Chaminda Bandara and Vishal M Patel. Hypertransformer: A textural and spectral feature fusion transformer for pansharpening. In _CVPR_, pages 1767-1777, 2022.
* [2] Robert W Basedow, Dwayne C Carmer, and Mark E Anderson. Hydice system: Implementation and performance. In _Imaging Spectrometry_, volume 2480, pages 258-267. SPIE, 1995.
* [3] Theo Bodrito, Alexandre Zouaoui, Jocelyn Chanussot, and Julien Mairal. A trainable spectral-spatial sparse coding model for hyperspectral image restoration. In _NeurIPS_, volume 34, pages 5430-5442, 2021.
* [4] Peter Burai, Balazs Deak, Orsolya Valko, and Tamas Tomor. Classification of herbaceous vegetation using airborne hyperspectral imagery. _Remote Sensing_, 7(2):2046-2066, 2015.
* [5] Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Coarse-to-fine sparse transformer for hyperspectral image reconstruction. In _ECCV_, pages 686-704. Springer, 2022.
* [6] Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In _CVPR_, pages 17502-17511, 2022.
* [7] Xiangyong Cao, Xueyang Fu, Chen Xu, and Deyu Meng. Deep spatial-spectral global reasoning network for hyperspectral image denoising. _IEEE TGRS_, 2021.
* [8] Yi Chang, Luxin Yan, Xi-Le Zhao, Houzhang Fang, Zhijun Zhang, and Sheng Zhong. Weighted low-rank tensor recovery for hyperspectral image restoration. _IEEE TCYB_, 50(11):4558-4572, 2020.
* [9] Yi Chang, Luxin Yan, and Sheng Zhong. Hyper-laplacian regularized unidirectional low-rank tensor recovery for multispectral image denoising. In _CVPR_, pages 4260-4268, 2017.
* [10] Guangyi Chen and Shen-En Qian. Denoising of hyperspectral imagery using principal component analysis and wavelet shrinkage. _IEEE TGRS_, 49(3):973-980, 2010.
* [11] Xiangyu Chen, Xintao Wang, Jiantao Zhou, and Chao Dong. Activating more pixels in image super-resolution transformer. _arXiv preprint arXiv:2205.04437_, 2022.
* [12] Shen Cheng, Yuzhi Wang, Haibin Huang, Donghao Liu, Haoqiang Fan, and Shuaicheng Liu. Nbnet: Noise basis learning for image denoising with subspace projection. In _CVPR_, pages 4896-4906, 2021.
* [13] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In _NeurIPS_, volume 34, pages 9355-9366, 2021.
* [14] Weisheng Dong, Guangming Shi, Xin Li, Yi Ma, and Feng Huang. Compressive sensing via nonlocal low-rank regularization. _IEEE TIP_, 23(8):3618-3632, 2014.
* [15] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In _CVPR_, pages 12124-12134, 2022.
* [16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
* [17] Ying Fu, Antony Lam, Imari Sato, and Yoichi Sato. Adaptive spatial-spectral dictionary learning for hyperspectral image denoising. In _ICCV_, pages 343-351, 2015.
* [18] Ying Fu, Zichun Wang, Tao Zhang, and Jun Zhang. Low-light raw video denoising with a high-quality realistic motion dataset. _TMM_, 2022.
* [19] Wei He, Quanming Yao, Chao Li, Naoto Yokoya, and Qibin Zhao. Non-local meets global: An integrated paradigm for hyperspectral denoising. In _CVPR_, pages 6868-6877, 2019.
* [20] Wei He, Hongyan Zhang, Huanfeng Shen, and Liangpei Zhang. Hyperspectral image denoising using local low-rank matrix recovery and global spatial-spectral total variation. _IEEE J-STARS_, 11(3):713-729, 2018.
* [21] Wei He, Hongyan Zhang, Liangpei Zhang, and Huanfeng Shen. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. _IEEE TGRS_, 54(1):178-188, 2015.
* [22] Danfeng Hong, Zhu Han, Jing Yao, Lianru Gao, Bing Zhang, Antonio Plaza, and Jocelyn Chanussot. Spectralformer: Rethinking hyperspectral image classification with transformers. _IEEE TGRS_, 60:1-15, 2021.
* [23] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _CVPR_, pages 7132-7141, 2018.
* [24] Jie Huang, Ting-Zhu Huang, Liang-Jian Deng, and Xi-Le Zhao. Joint-sparse-blocks and low-rank representation for hyperspectral unmixing. _IEEE TGRS_, 57(4):2419-2438, 2018.
* [25] Klaus I Itten, Francesco Dell'Endice, Andreas Hueni, Mathias Kneubuhler, Daniel Schlapfer, Daniel Odermatt, Felix Seidel, Silvia Huber, Jurg Schopfer, Tobias Kellenberger, et al. Apex-the hyperspectral esa airborne prism experiment. _Sensors_, 8(10):6235-6259, 2008.
* [26] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In _ICCV_, pages 1833-1844, 2021.
* [27] Bing Liu, Anzhu Yu, Kuiliang Gao, Xiong Tan, Yifan Sun, and Xuchu Yu. Dss-trm: deep spatial-spectral transformer for hyperspectral image classification. _Eur. J. Remote Sens_, 55(1):103-114, 2022.
* [28] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _ICCV_, pages 10012-10022, 2021.
* [29] Ting Lu, Shutao Li, Leyuan Fang, Yi Ma, and Jon Atli Benediktsson. Spectral-spatial adaptive sparse representation for hyperspectral image denoising. _IEEE TGRS_, 54(1):373-385, 2015.
* [30] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In _ECCV_, pages 116-131, 2018.
* [31] Matteo Maggioni, Vladimir Katkovnik, Karen Egiazarian, and Alessandro Foi. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. _IEEE TIP_, 22(1):119-133, 2012.
* [32] Erting Pan, Yong Ma, Xiaoguang Mei, Fan Fan, Jun Huang, and Jiayi Ma. Sqad: Spatial-spectral quasi-attention recurrent network for hyperspectral image denoising. _IEEE TGRS_.
* [33] Li Pang, Weizhen Gu, and Xiangyong Cao. Trq3dnet: A 3d quasi-recurrent and transformer based network for hyperspectral image denoising. _Remote Sensing_, 14(18):4598, 2022.
* [34] Qian Shi, Xiaopei Tang, Taoru Yang, Rong Liu, and Liangpei Zhang. Hyperspectral image denoising using a 3-d attention denoising network. _IEEE TGRS_, 2021.
* [35] Oleksii Sidorov and Jon Yngve Hardeberg. Deep hyperspectral prior: Single-image denoising, inpainting, super-resolution. In _CVPR_, pages 0-0, 2019.
* [36] Xunyang Su, Jinjiang Li, and Zhen Hua. Transformer-based regression network for pansharpening remote sensing images. _IEEE TGRS_, 60:1-23, 2022.
* [37] Muhammad Uzair, Arif Mahmood, and Ajmal Mian. Hyperspectral face recognition with spatiospectral information fusion and pls regression. _IEEE TIP_, 24(3):1127-1137, 2015.
* [38] Muhammad Uzair, Arif Mahmood, and Ajmal S Mian. Hyperspectral face recognition using 3d-dct and partial least squares. In _BMVC_, volume 1, page 10, 2013.
* [39] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In _ICCV_, pages 568-578, 2021.
* [40] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In _CVPR_, pages 7794-7803, 2018.
* [41] Kaixuan Wei, Ying Fu, and Hua Huang. 3-d quasi-recurrent neural network for hyperspectral image denoising. _IEEE TNNLS_, 32(1):363-375, 2020.
* [42] Wei Wei, Lei Zhang, Chunna Tian, Antonio Plaza, and Yanning Zhang. Structured sparse coding-based hyperspectral imagery denoising with intracluster filtering. _IEEE TGRS_, 55(12):6860-6876, 2017.
* [43] Xueling Wei, Wei Li, Mengmeng Zhang, and Qingli Li. Medical hyperspectral image classification based on end-to-end fusion deep neural network. _IEEE T-IM_, 68(11):4481-4492, 2019.
* [44] Fengchao Xiong, Jun Zhou, Shuyin Tao, Jianfeng Lu, Jiantao Zhou, and Yuntao Qian. Smds-net: Model guided spectral-spatial network for hyperspectral image denoising. _IEEE TIP_, 31:5469-5483, 2022.
* [45] Fengchao Xiong, Jun Zhou, Qinling Zhao, Jianfeng Lu, and Yuntao Qian. Mac-net: Model-aided nonlocal neural network for hyperspectral image denoising. _IEEE TGRS_, 60:1-14, 2021.
* [46] Rui Yang, Hailong Ma, Jie Wu, Yansong Tang, Xuefeng Xiao, Min Zheng, and Xiu Li. Scalablevit: Rethinking the context-oriented generalization of vision transformer. _arXiv preprint arXiv:2203.10790_, 2022.
* [47] Tian Ye, Mingchao Jiang, Yunchen Zhang, Liang Chen, Erkang Chen, Pen Chen, and Zhiyong Lu. Perceiving and modeling density is all you need for image dehazing. _arXiv preprint arXiv:2111.09733_, 2021.
* [48] Qiangqiang Yuan, Liangpei Zhang, and Huanfeng Shen. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. _IEEE TGRS_, 50(10):3660-3677, 2012.
* [49] Qiangqiang Yuan, Qiang Zhang, Jie Li, Huanfeng Shen, and Liangpei Zhang. Hyperspectral image denoising employing a spatial-spectral deep residual convolutional neural network. _IEEE TGRS_, 57(2):1205-1218, 2018.
* [50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In _CVPR_, pages 5728-5739, 2022.
* [51] Feng Zhang, Kai Zhang, and Jiande Sun. Multiscale spatial-spectral interaction transformer for pan-sharpening. _Remote Sensing_, 14(7):1736, 2022.
* [52] Hongyan Zhang, Lu Liu, Wei He, and Liangpei Zhang. Hyperspectral image denoising with total variation regularization and nonlocal low-rank tensor decomposition. _IEEE TGRS_, 58(5):3071-3084, 2019.
* [53] Tao Zhang, Ying Fu, and Cheng Li. Hyperspectral image denoising with realistic data. In _ICCV_, pages 2248-2257, 2021.
* [54] Xiangtao Zheng, Yuan Yuan, and Xiaoqiang Lu. Hyperspectral image denoising by fusing the selected related bands. _IEEE TGRS_, 57(5):2596-2609, 2018.
* [55] Zilong Zhong, Ying Li, Lingfei Ma, Jonathan Li, and Wei-Shi Zheng. Spectral-spatial transformer network for hyperspectral image classification: A factorized architecture search framework. _IEEE TGRS_, 60:1-15, 2021. | Denoising is a crucial step for hyperspectral image (HSI) applications. Though witnessing the great power of deep learning, existing HSI denoising methods suffer from limitations in capturing the non-local self-similarity. Transformers have shown potential in capturing long-range dependencies, but few attempts have been made with specifically designed Transformer to model the spatial and spectral correlation in HSIs. In this paper, we address these issues by proposing a spectral enhanced rectangle Transformer, driving it to explore the non-local spatial similarity and global spectral low-rank property of HSIs. For the former, we exploit the rectangle self-attention horizontally and vertically to capture the non-local similarity in the spatial domain. For the latter, we design a spectral enhancement module that is capable of extracting global underlying low-rank property of spatial-spectral cubes to suppress noise, while enabling the interactions among non-overlapping spatial rectangles. Extensive experiments have been conducted on both synthetic noisy HSIs and real noisy HSIs, showing the effectiveness of our proposed method in terms of both objective metric and subjective visual quality. The code is available at [https://github.com/MyuL/SERT](https://github.com/MyuL/SERT).
arxiv-format/2402_11735v1.md | # LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection
Jingyu Song, Lingjun Zhao, and Katherine A. Skinner
This work is supported by the Ford Motor Company via the Ford-UM Alliance under award N028603. J. Song, L. Zhao, and K. Skinner are with the Department of Robotics, University of Michigan, Ann Arbor, MI, USA {jingyuso, lingjunz, kskin}@umich.edu.
## I Introduction
Autonomous vehicles (AVs) are expected to accurately perceive the surrounding environment to enable effective and safe planning and control across a variety of scenarios and environmental conditions [1, 2, 3, 4, 5]. An important part of the perception task is to precisely localize the objects in the surrounding environment. A common representation for these objects is a set of 3D bounding boxes that have locations, sizes and classes [6, 7]. Despite various combinations of sensor configurations on AVs, many object detection algorithms rely on LiDAR and cameras due to their dense returns [1, 2, 8, 9, 10, 11].
Still, LiDAR systems and cameras are sensitive to varying weather and lighting conditions, so AVs can suffer significantly from downgraded perceptual capability in these scenarios. To tackle this problem, recent research has focused on leveraging radar systems, which have automotive-grade design that ensures robust performance under various conditions [12, 13, 14, 15]. Additional benefits of radars include their low cost, long detection range and Doppler effect information (i.e., velocity of captured targets). Therefore, it is of great significance to design a model that could effectively leverage radar for 3D object detection [2, 13, 14].
Existing detectors with radars can be categorized into single-modality methods [16] and fusion-based methods [13, 15]. Recent works [17, 18] have achieved impressive detection accuracy when fusing LiDAR and radars on the Oxford Radar RobotCar dataset [19], which has high-resolution radar data. However, this dataset uses a spinning radar, which lacks Doppler information and has increased cost [12]. Among the popular datasets for 3D object detection [6, 8, 12, 20], nuScenes [7] stands out because it is large-scale and has a complete sensor suite including radars. nuScenes represents radar data as object lists, which is a common representation that could also be interpreted as a very sparse point cloud with additional feature attributes from radar onboard signal processing [12, 13]. The common challenges with this dataset are the sparsity and noise of the radar data. Consequently, single-modality radar detectors fail to achieve reliable performance. Some fusion-based detectors [21, 22] suffer from downgraded performance when adding radar to LiDAR-only or LiDAR-camera fusion detectors, while some detectors [13, 15, 23] have to enforce hard constraints such as limiting detection to specific classes or limiting detection range to achieve improvement in performance. Our work seeks to fill the gap in the current literature by improving the design of fusion architecture for LiDAR and radar by leveraging their shared point cloud representation.
In this work, we propose a novel LiDAR-radar fusion detector, LiRaFusion (Fig. 1). Our main contributions are: (i) a novel joint feature extractor for effective LiDAR-radar fusion, (ii) the first introduction of the adaptive gated network into LiDAR-radar fusion for object detection with novel improvement considering the bird's-eye-view (BEV) feature space, and (iii) extensive evaluation on open source datasets and detectors that demonstrate improvement over existing LiDAR-radar fusion methods. As most existing detectors follow the backbone-neck-head design [10, 21, 24], LiRaFusion can be directly integrated into existing methods by serving as the backbone to enable more modality configurations, which is validated by extending it to LiDAR-camera-radar fusion. Code will be made available on the project website.1
Footnote 1: [https://github.com/Song-Jingyu/LiRaFusion](https://github.com/Song-Jingyu/LiRaFusion)
## II Related Work
### _Radar Datasets for Autonomous Driving_
Data is the key component for enabling development of learning-based object detectors. However, radar data was rarely available in early public autonomous driving datasets [6, 20, 25]. Recently, more datasets with radar data have become accessible to researchers. In these datasets, radar data usually has two representations. The most common representation is the point cloud, in which each point represents an object in the object list output by many off-the-shelf radars with on-board processing algorithms such as Constant False Alarm Rate (CFAR) [12, 13]. This representation is available on datasets such as nuScenes [7], aiMotive [26], and Zendar [27]. There are also datasets that directly use the raw data from radars [19, 28]. The raw data has denser information but the lack of CFAR processing leads to increased noise. The spinning radar configuration in [19] loses Doppler information (e.g., velocities of the captured targets), which is important to understand the scene. Another challenge brought by this representation is the difficulty of annotating the data [12]. In this work we leverage the nuScenes dataset [7] because of its full coverage on sensors and driving scenarios, accurate annotations, and popularity, which allows us to compare this work to many existing works. The proposed fusion architecture can be transferred to other datasets that have similar point cloud representation.

Fig. 1: We propose LiRaFusion to efficiently leverage the complementary information of LiDAR and radar for 3D object detection.
### _Multi-modality 3D Object Detection_
3D object detection is an important part of the perception system for AVs [1]. The main objective is to assign a class label and a 3D bounding box for each detected object in the scene [8]. Common sensors used for 3D object detection include LiDAR and cameras. Though several single-modality object detectors with LiDAR [24] or cameras [29] achieve impressive results on KITTI [6] or nuScenes [7] benchmarks, multi-modality object detectors have recently shown promise in leveraging complementary information to improve robustness and accuracy [2, 22, 30, 31, 32, 11, 33]. LiDAR-camera fusion is the most common configuration. However, these two sensors have shared drawbacks (e.g., sparse information at long range, lacking velocity estimation) that could be compensated by radars that are commonly deployed on AVs [12, 13]. Radar-only detectors usually fail to overcome the data sparsity to perform comparably to camera- or LiDAR-based detectors [16, 33].
One recent trend is to fuse radar with one or several other sensors. Existing fusion configurations include camera-radar (CR), LiDAR-radar (LR) and LiDAR-camera-radar (LCR) [13, 14, 15, 34, 35, 21, 36]. As the main focus of this work is fusing LiDAR and radar in the shared point cloud representation, we mainly compare our methods with FUTR3D [21] and EZFusion [15] because they are the most recent state-of-the-art detectors supporting LR fusion on the nuScenes [7] dataset. In FUTR3D [21], though LR and LCR configurations are supported, they suffer from downgraded performance when compared with LiDAR-only (LO) or LC fusion. We argue the failure could come from the simple MLP-style feature extractor for radars and lack of joint feature fusion before sampling features for query points. Our proposed method aims to address these limitations with the proposed adaptive fusion framework. EZFusion [15] is built on CenterPoint [24] by adding radar feature projection for the LiDAR points. In EZFusion [15], though its LR fusion shows improvement over its LO configuration (equivalent to CenterPoint [24]), its partial moving class setting, which uses only the \\(7\\) moving classes out of 10 classes in the nuScenes benchmark, has more limited application for practical deployment since the missed static classes are also vital for keeping AVs safe. Our proposed method achieves further improvement over EZFusion under the same partial classes setting and is demonstrated on the complete class setting, which has stronger potential in application since it can account for both static and moving object classes.
### _Gated Network for Sensor Fusion_
Fusing information from different modalities requires sophisticated architecture design and there are multiple prior works that addressed this challenge [37]. Among them, projection, addition and concatenation are common practices [11, 14, 15, 22]. Though these methods demonstrate improvement on multi-modality fusion, they are not learning-based and lack adaptivity. To account for this issue, researchers have turned to gated networks when different feature maps are fused. This process is also named as the mixture of experts because the backbones used for different modalities are considered different expert networks. This method is first introduced in [38], in which the expert network is defined as a domain-specific neural network to process a single sensing modality, and the gating network is a weighting neural network that selects useful features among the outputs of the expert networks. This idea has been leveraged by the perception community as more works have focused on using different modalities [2, 39, 40, 41]. For instance, in 3D object detection, 3D-CVF [39] proposes an LC fusion network using a gated network. Extensive evaluation is conducted on the KITTI [6] and nuScenes [7] datasets, which demonstrates the effectiveness of the gated network.
The success of fusing LiDAR and camera modalities through the gated network motivates us to adapt this method to the LR sensor configuration. The gated network is able to learn adaptive weights for different expert networks, so the model can learn to be robust to noise from individual experts. This property is of great significance in LR fusion, since radar data is noisier and can degrade the performance of multi-modal detectors if it is not handled properly. In our proposed work, we extend the original gated network design [39, 42] by making it channel-wise so that each channel of the BEV map has an adaptive weight. To the best of our knowledge, LiRaFusion is the first to introduce the gated network into LR perception.
## III Method
The goal of our method, LiRaFusion, is to achieve more effective feature extraction and fusion for LiDAR and radar data for 3D object detection (Fig. 2). The inputs to LiRaFusion are a LiDAR point cloud and a radar point cloud. One stream stacks these two point clouds as the input to the proposed early fusion module. The early fusion module processes the denser point cloud with the proposed joint feature encoder and a common sparse 3D convolution encoder. Its output is then fed into a common LiDAR backbone to obtain the feature map. In this work, we use the VoxelNet following [21, 24]. The other stream uses the PointPillars [43] backbone to process the radar points, taking advantage of the pillars since the height measurement for radar points are noisy [12, 13]. The output is a radar feature map. The output feature maps from these two streams can be considered as two experts, which are further fused with the proposed gated network in the middle fusion module. The middle fusion module learns the adaptive weights for these two feature maps and then concatenates the weighted feature maps together. The concatenated feature map is passed into the Feature Pyramid Network (FPN) [44] neck and the detector head to generate predictions. Our main contributions are the novel architectures for the early fusion and middle fusion modules. LiRaFusion is an enhanced backbone for LR feature extraction so it can be extended to LCR configuration as well. Technical details of each module are discussed in the following subsections.
### _Early Fusion_
We design an early fusion block to fuse the LiDAR and radar points at an early stage to extract features for each voxel cell, as shown in Fig. 3. Unlike [13, 23, 26], which ignore features such as LiDAR intensity, Radar Cross-section (RCS), and velocity, we keep these features since LiDAR intensity and RCS are helpful to classify objects, and velocity information is important to distinguish static or dynamic objects and predict the velocity and rotation. For the LiDAR input, we keep the point intensity and the captured time offset (\(\Delta t_{l}\)) from the current frame as we accumulate multiple sweeps. For the radar points, we keep the RCS, compensated velocities (\(V_{x_{\text{comp}}},V_{y_{\text{comp}}}\)) and the time offset (\(\Delta t_{r}\)). We use zero-padding to match the dimension of these points to merge them. After voxelization, we use the proposed joint voxel feature encoder to extract features for each voxel cell. We follow the simplified voxel feature encoder of VoxelNet [10] in MMDetection3D [20] to set the number of the voxel feature dimensions to be the same as the input points. The first \(3\) feature dimensions represent the location of the centroid of this cell, which is computed by taking the mean locations of all the points in this voxel cell. The following \(2\) feature dimensions correspond to features from LiDAR so we average over all LiDAR points. The last \(4\) feature dimensions correspond to the radar features. We pass the mean of the radar features to a \(4\times 4\) linear layer to enable the network to learn an appropriate way to handle the radar features. We only process non-empty voxel cells. For cells that do not have radar points, which is common due to sparsity, we leave the last \(4\) dimensions of these cells as zero. After obtaining the voxel features, we apply standard sparse convolution and further process its output with a standard LiDAR backbone, VoxelNet [10]. To simplify the terminology, we refer to the output of the early fusion stream as the LiDAR feature map in the following sections. Though radar data has already been fused when encoding the voxel feature, due to the sparsity of radar data, most information in the encoded voxel features is from LiDAR. We further fuse it with the radar feature map at a higher level with the proposed middle fusion module.
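To make the per-voxel computation above concrete, a minimal PyTorch-style sketch of such a joint voxel feature encoder is given below. It is an illustration under stated assumptions, not the authors' released implementation: the padded 10-dimensional point layout, the `is_radar` flag, and all module and variable names are ours.

```python
# Minimal sketch (not the authors' code) of a joint voxel feature encoder.
# Assumed padded point layout (10 dims per point):
# [x, y, z, intensity, dt_l, rcs, vx_comp, vy_comp, dt_r, is_radar]
import torch
import torch.nn as nn

class JointVoxelEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # small linear layer applied to the mean of the 4 radar feature dims
        self.radar_mlp = nn.Linear(4, 4)

    def forward(self, voxel_points, num_points):
        # voxel_points: (V, T, 10) zero-padded points of the non-empty voxels
        # num_points:   (V,) number of valid points per voxel
        V, T, _ = voxel_points.shape
        valid = (torch.arange(T, device=voxel_points.device)[None, :]
                 < num_points[:, None]).float().unsqueeze(-1)        # (V, T, 1)
        denom = num_points.clamp(min=1).float()[:, None]

        xyz_mean = (voxel_points[..., 0:3] * valid).sum(1) / denom    # centroid
        lidar_mean = (voxel_points[..., 3:5] * valid).sum(1) / denom  # intensity, dt_l

        radar_mask = valid * voxel_points[..., 9:10]                  # radar points only
        radar_denom = radar_mask.sum(1).clamp(min=1)
        radar_mean = (voxel_points[..., 5:9] * radar_mask).sum(1) / radar_denom
        radar_feat = self.radar_mlp(radar_mean)
        # voxels without radar points keep exact zeros in the last 4 dims
        has_radar = (radar_mask.sum(1) > 0).float()
        radar_feat = radar_feat * has_radar

        return torch.cat([xyz_mean, lidar_mean, radar_feat], dim=-1)  # (V, 9)
```

Masking the radar branch after the linear layer keeps voxels without radar hits at exactly zero in the last four dimensions, matching the behavior described above.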
Fig. 2: Overview of the architecture of LiRaFusion. Our main contributions, shown as bold text, mainly include a joint voxel feature encoder to extract per-voxel features from the stacked point cloud, and a gated network to learn weights for each input feature map to fuse them adaptively.

Fig. 3: The network architecture of the early fusion module. We stack the loaded LiDAR and radar points by zero-padding them to the same number of dimensions before feeding into the proposed joint voxel feature encoder.

### _Middle Fusion_

In order to perform adaptive sensor fusion on the feature maps from different modalities, we refer to [39, 42] for designing the gated network. To the best of our knowledge, we are the first to bring the adaptive gated network to the field of LR fusion for 3D object detection. We improve the existing gated network by enabling it to adaptively learn the weights over the channel dimension. Specifically, the generated adaptive weights for the input feature maps have the shape \(B\times C\times H\times W\) instead of the previous channel-constant style of shape \(B\times 1\times H\times W\). This change enables the gated network to weight and extract the LiDAR and radar features in a more flexible way. Intuitively, the feature maps are all in bird's-eye-view, resulting from being flattened over the \(z\) axis. By proposing a channel-specific weight, we improve the capability of the network to exploit the spatial knowledge from the input experts.
The design of the adaptive gated network is shown in Fig. 4. The input LiDAR feature map \\((B\\times C_{1}\\times H\\times W)\\) and input radar feature map \\((B\\times C_{2}\\times H\\times W)\\) are first concatenated in the channel dimension. Then the concatenated feature map is passed to a convolution block that is followed by a sigmoid function. Notably, the output feature dimension of the convolution blocks for LiDAR and radar modalities is set to match the dimension of the input feature maps (\\(C_{1}\\) for LiDAR and \\(C_{2}\\) for radar). The learned adaptive weights are applied to the original input feature maps through an element-wise product operation. The obtained gated LiDAR and radar features are further concatenated together along the feature dimension as a fused feature map. The resulting fused feature map has the shape \\(B\\times(C_{1}+C_{2})\\times H\\times W\\) and serves as the output of the middle fusion block.
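A minimal sketch of this channel-specific gated fusion is given below. It assumes standard 3x3 convolutions for the gates, and the module and variable names are ours, so it should be read as an illustration rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of channel-specific gated BEV fusion.
import torch
import torch.nn as nn

class GatedBEVFusion(nn.Module):
    def __init__(self, c_lidar, c_radar, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # each gate sees the concatenated experts and outputs a weight map whose
        # channel count matches its own expert (B x C x H x W, not B x 1 x H x W)
        self.lidar_gate = nn.Sequential(
            nn.Conv2d(c_lidar + c_radar, c_lidar, kernel_size, padding=pad),
            nn.Sigmoid())
        self.radar_gate = nn.Sequential(
            nn.Conv2d(c_lidar + c_radar, c_radar, kernel_size, padding=pad),
            nn.Sigmoid())

    def forward(self, lidar_bev, radar_bev):
        # lidar_bev: (B, C1, H, W); radar_bev: (B, C2, H, W)
        joint = torch.cat([lidar_bev, radar_bev], dim=1)
        gated_lidar = lidar_bev * self.lidar_gate(joint)   # element-wise product
        gated_radar = radar_bev * self.radar_gate(joint)
        return torch.cat([gated_lidar, gated_radar], dim=1)  # (B, C1+C2, H, W)

# example: fuse a 256-channel LiDAR BEV map with a 64-channel radar BEV map
fused = GatedBEVFusion(256, 64)(torch.randn(2, 256, 64, 64),
                                torch.randn(2, 64, 64, 64))
```

Because each gate outputs as many channels as its own expert, every BEV channel receives its own spatial weight map, which is the channel-specific behavior emphasized above.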
## IV Experiments
### _Experiment Design_
As mentioned in Section II-A, we evaluate our method on the nuScenes dataset [7]. We follow the official split that has 700 scenes for training and 150 scenes for validation. It only has annotations for the key-frames (samples), but also provides non-key-frames (sweeps) without annotations. We follow common practices in [15, 20, 21] to load multiple sweeps into a current sample frame to increase the data density, while appending a time difference channel for data from each sweep as additional temporal information.
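As an illustration of this common practice, a simplified sketch of sweep accumulation with a time-lag channel is shown below; the function and variable names are ours, and real pipelines also apply ego-motion compensation when transforming each sweep into the key-frame coordinates, which is omitted here.

```python
# Simplified sketch of multi-sweep accumulation with a time-difference channel.
import numpy as np

def accumulate_sweeps(sweeps, key_time):
    """sweeps: list of (points_Nx3_already_in_keyframe_coords, timestamp_sec)."""
    stacked = []
    for pts, t in sweeps:
        dt = np.full((pts.shape[0], 1), key_time - t)  # time lag of this sweep
        stacked.append(np.hstack([pts, dt]))
    return np.vstack(stacked)  # (sum N_i, 4): x, y, z, dt

# toy example with two sweeps captured 0.05 s and 0.10 s before the key frame
dense = accumulate_sweeps(
    [(np.random.rand(5, 3), 0.45), (np.random.rand(5, 3), 0.40)], key_time=0.50)
```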
We use the mean-Average-Precision (mAP) and nuScenes Detection Score (NDS) [7] as the main evaluation metrics. We compare LiRaFusion with the existing LR and LCR detectors. In addition, we group predictions to break down the improvement of fusing radars. We also report runtime and additional TP (True Positive) metrics from nuScenes [7]. Ablation studies are conducted to validate our model design.
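For reference, NDS combines mAP with the five true-positive error metrics of the benchmark; following its definition in [7],

\[\mathrm{NDS}=\frac{1}{10}\Big[5\,\mathrm{mAP}+\sum_{\mathrm{mTP}\in\mathbb{TP}}\big(1-\min(1,\mathrm{mTP})\big)\Big],\qquad\mathbb{TP}=\{\mathrm{mATE},\mathrm{mASE},\mathrm{mAOE},\mathrm{mAVE},\mathrm{mAAE}\},\]

so lower translation, scale, orientation, velocity, and attribute errors directly raise NDS.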
As mentioned previously, many existing detectors [20, 21, 24] follow the backbone-neck-head design. Therefore, LiRaFusion can be integrated by replacing their backbones with LiRaFusion and keeping the same neck and head. As the baselines we choose - FUTR3D [21] and EZFusion [15] - are initially proposed to work with different heads, we group our experiments based on the detector heads. Inspired by [45, 46], FUTR3D [21] uses a transformer-based head, which is referred to as TransHead, EZFusion [15] uses the same head as CenterPoint [24], which is referred to as CenterHead. We choose the LO and LR configurations for FUTR3D as the baselines, denoted as FUTR3D-LO and FUTR3D-LR. We implement the LR fusion strategy in EZFusion with both heads, which is named as EZFusion-LR\\({}^{*}\\), where \\({}^{*}\\) represents our re-implementation. The other group of experiments with CenterHead [24] focuses on \\(7\\) moving classes out of \\(10\\) complete classes in the nuScenes official benchmark for consistency with results reported in EZFusion [15]. We re-train the original CenterPoint [24] with \\(7\\) classes and name it as CenterPoint-7. Since FUTR3D-LO and CenterPoint are the state-of-the-art LO detectors, we include them in our comparison to demonstrate the improvement obtained with fusing radar data. We implemented LiRaFusion based on the MMDetection3D framework [20]. More implementation and training details can be found on the project website.
### _Results and Comparison_
We perform a comparison of our method with several state-of-the-art LR fusion networks with CenterHead [24] and TransHead [21] separately on the nuScenes dataset [7]. Table I shows the results of all models trained with TransHead [21]. We can see that FUTR3D-LR performs worse than FUTR3D-LO, which proves that ineffective design of LR fusion strategy could actually harm performance. The re-implemented EZFusion with TransHead (EZFusion-LR\\({}^{*}\\)) fails to achieve further improvement over FUTR3D-LO. Our model, LiRaFusion, achieves the best results on the nuScenes validation set in terms of NDS and mAP. It also shows impressive improvement on certain classes such as car and pedestrian, the top two most frequent classes, which are critical for AVs to detect in the scene in order to operate effectively and safely.
Table II shows the results for models trained with CenterHead using the \\(7\\) moving classes for consistency with EZFusion [15]. Similarly to EZFusion, we re-train CenterPoint with the 7-class setting as a baseline LO detector and name it as CenterPoint-7. Though EZFusion-LR\\({}^{*}\\) achieves considerable improvement over CenterPoint-7, there is a small gap between EZFusion-LR\\({}^{*}\\) and the reported results
Fig. 4: The network architecture for the middle fusion module. In this module, by applying a channel-wise convolution and a sigmoid function to the concatenated LiDAR-radar feature map, the network generates adaptive weights for LiDAR and radar separately. Then the input LiDAR and radar feature maps are element-wise multiplied with the weights before being concatenated as a fused LiDAR-radar feature map.
in EZFusion [15] so we include both of them in the table. LiRaFusion is the top performer in terms of NDS and mAP. Similar to its performance with TransHead [21], LiRaFusion achieves impressive improvement in terms of AP over almost all classes.
In addition to the comparison with the most recent state-of-the-art LR detectors [15, 21] shown above, we compare LiRaFusion (with TransHead) with other LR or LCR detectors for a complete overview. Table III demonstrates that LiRaFusion has consistent improvement over existing LR and LCR detectors. It is worth mentioning that many detectors in Table III have to enforce a partial class setting, or use the extra camera modality, while LiRaFusion trains on the complete class setting with the LR sensor configuration and outperforms all the LR and LCR detectors.
We additionally report the model runtime and the True Positive (TP) metrics defined in [7] evaluated with the complete class setting in Table IV. The table shows LiRaFusion is comparable with other baselines in runtime, which is measured on the same desktop with an RTX A6000 GPU. We can also see LiRaFusion is comparable with the best method in terms of translation and scale error, while achieving the lowest error in estimating orientation, velocity and attribute. We argue the significant reduction in orientation and velocity error comes from the effective fusion of the Doppler information from radars.
We also group predictions based on object distances from the ego-vehicle and weather conditions to further break down the performance boost, as this directly represents the improvement from radars. Table V shows the performance breakdown based on the distance. We find the performance gain increases with increasing distance. This finding matches the expectation that radars complement LiDAR by providing additional information at distant locations where the LiDAR returns become sparser. Table VI shows that LiRaFusion achieves notable improvement over the LO baselines in rainy scenes, where LiDAR is generally believed to have reduced detection capability [13, 36]. This finding validates the importance of fusing radars and leveraging their robustness across different weather conditions.
\begin{table}
\begin{tabular}{c|c|c c|c c c c c c c c c c} \hline \hline Method & Sensor & NDS \(\uparrow\) & mAP \(\uparrow\) & \multicolumn{10}{c}{AP (Average Precision) \(\uparrow\)} \\ & & & & Car & Ped & Bicycle & Bus & Barrier & TC & Truck & Trailer & Moto & CV \\ \hline FUTR3D-LO & LO & 65.74 & 59.39 & 84.3 & 81.4 & 49.0 & 65.4 & **62.4** & 64.2 & 53.5 & **41.8** & 66.4 & 25.5 \\ FUTR3D-LR & LR & 65.37 & 58.08 & 83.8 & 81.2 & **49.8** & 65.4 & 60.4 & 60.6 & 51.0 & 41.0 & 65.1 & 22.6 \\ EZFusion-LR\({}^{*}\) & LR & 65.77 & 59.24 & 84.6 & 81.7 & 47.3 & 69.1 & 62.0 & 65.7 & 52.2 & 39.7 & 66.9 & 23.3 \\ LiRaFusion (ours) & LR & **66.69** & **60.11** & **85.6** & **82.2** & 46.9 & **69.6** & 61.2 & **66.0** & **54.0** & 40.7 & **68.1** & **26.7** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Results with TransHead [21] evaluated on nuScenes _val_ set. EZFusion-LR\({}^{*}\) represents our re-implementation of [15]. All values are percentages. No model ensemble or test-time augmentation is used.

\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & mAP (\(0m-20m\)) & mAP (\(20m-30m\)) & mAP (\(30m-50m\)) \\ \hline FUTR3D-LO & 73.86 & 55.2 & 29.93 \\ LiRaFusion (ours) & 74.14 \(\uparrow\) 0.28 & 55.88 \(\uparrow\) 0.68 & 31.73 \(\uparrow\) 1.8 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Performance by object distance of experiments with TransHead [21]. We report the mAP separately in \(3\) ranges: \(0m-20m\), \(20m-30m\) and \(30m-50m\). With increasing object distance, LiRaFusion demonstrates higher gain over FUTR3D-LO as radars complement LiDAR at far locations where LiDAR suffers from data sparsity. All values are percentages.

### _Qualitative Results_

Figure 5 shows a qualitative comparison of LiRaFusion and FUTR3D-LO by presenting the predictions along with the ground truth bounding boxes. These results show that the radar sensor contributes several measurement points (shown in magenta) for a car that was previously missed by FUTR3D-LO, which only uses LiDAR data. The radar and LiDAR points are used effectively by LiRaFusion, and the prediction by LiRaFusion aligns well with the ground truth.
We also show the corresponding adaptive weights on LiDAR and radar feature maps in Fig. 5. We see that the learned weight on the LiDAR feature map is generally higher than that in the radar feature map, which meets the expectation that LiDAR is the preferred sensor for object detection in terms of the density and geometric accuracy. We also notice that in the radar weight map, locations with larger distance to the ego-vehicle have relatively larger weight. This finding matches our intuition that incorporating radar data provides more information for long-range detection capabilities [12, 13, 48]. Basically, the smaller radar weight at the near locations means that the detector learns to be more dependent on LiDAR at places where it has dense returns, while trusting radar returns at farther locations where LiDAR data has reduced density. Note that the radar weight within the black bounding box in Fig. 5 is relatively large, which means that the proposed gated network is capable of learning to use the expert feature maps adaptively to let the LiDAR and the radar complement each other in a more effective way.
### _Ablation Studies_
We study the contribution of each fusion module to the overall performance. We denote LiRaFusion-early as the model with the early fusion module only and LiRaFusion-middle as the model with the middle fusion module only. We report the results of these models in addition to the baseline FUTR3D-LO. As shown in Table VII, both LiRaFusion-early and LiRaFusion-middle achieve improvement over the baseline in most metrics. When the early fusion and middle fusion modules are combined together, the performance of the combined model is further enhanced.
When designing the adaptive gated network used in the middle fusion module, we improve over the existing network design in [39, 42] that has a constant weight (referred as channel-constant) for all features at one location in the bird's-eye-view (BEV) feature map. Since the \\(z\\) dimension and original feature dimension are merged together to form the BEV feature map, a specific weight for each feature dimension could help to utilize the spatial knowledge. Inspired by this, we propose a channel-specific weight map in the gated network. As shown in Table VIII, the proposed channel-specific gated network outperforms the original network design, which validates the effectiveness of our improvement over the original gated network in [39, 42].
### _LiDAR-Camera-Radar Fusion_
We explore the potential of LiRaFusion to fuse LiDAR, camera, and radar for 3D object detection as they are the common sensing modalities on modern AVs. Since most object detectors follow the backbone-neck-head paradigm, LiRaFusion can be applied to many LiDAR-camera (LC) detectors by replacing the LiDAR backbone to enable LCR fusion. FUTR3D [21] supports LC configuration (FUTR3D-LC) and is one of the state-of-the-art LC detectors. By replacing its LiDAR backbone to LiRaFusion, we implemented the LCR model referred as LiRaFusion-LCR. We directly compare LiRaFusion with FUTR3D-LC to evaluate the scalability of LiRaFusion. Table IX shows that LiRaFusion-LCR achieves further improvement over FUTR3D-LC. The results also demonstrate that radars can complement the LC configuration, which reinforces the importance of fusing radar data in modern object detectors. As the main focus of this project is on LR fusion, we leave more experiments on LCR fusion for future work, and hope our work can inspire more research on fusing radars with other sensors to improve perceptual capabilities of AVs.
## V Conclusion
We have proposed a novel LiDAR-radar fusion network, LiRaFusion, to facilitate cross-modality feature extraction for 3D object detection. We design a joint voxel feature encoder to extract voxel feature encoding in an early stage. We propose an adaptive gated network to further fuse the feature maps from LiDAR and radar by learning modality-adaptive weight maps. Experimental results show that LiRaFusion achieves consistent improvement over existing LiDAR-radar detectors on the nuScenes benchmark. Future work includes applying LiRaFusion to existing LiDAR-camera detectors to further improve over existing LCR detectors, and also extending LiRaFusion to other scene understanding tasks.
Fig. 5: Example bounding box predictions and corresponding weight maps. We present two frames in which LiRaFusion correctly detects a car (highlighted with a red circle) that is missed by the baseline LO detector. We also show a zoomed-in view in which we label radar points in magenta, and LiDAR points in gray or red (if they reside in a bounding box). We show ground truth bounding boxes in blue and predictions in green. In the visualization of weight maps, the black bounding box with arrow denotes the ego-vehicle. Boxes without an arrow denote the highlighted missed car object. Best viewed in color and zoomed-in.
TABLE IX: Results with LCR fusion and TransHead [21].
TABLE VIII: Ablation study of gated network design.
modal network for depth completion,\" _IEEE Transactions on Image Processing_, vol. 30, pp. 5264-5276, 2021.
* [41] D. Qiao and F. Zulkermine, \"Adaptive feature fusion for cooperative perception using lidar point clouds,\" in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2023, pp. 1186-1195.
* [42] J. Kim, J. Koh, Y. Kim, J. Choi, Y. Hwang, and J. W. Choi, \"Robust deep multi-modal learning based on gated information fusion network,\" in _Computer Vision-ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2-6, 2018, Revised Selected Papers, Part IV_. Springer, 2019, pp. 90-106.
* [43] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, \"Pointpillars: Fast encoders for object detection from point clouds,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [44] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, \"Feature pyramid networks for object detection,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, July 2017.
* [45] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-end object detection with transformers," in _Computer Vision - ECCV 2020_. Springer, 2020, pp. 213-229.
* [46] Y. Wang, V. Guizilini, T. Zhang, Y. Wang, H. Zhao, and J. M. Solomon, "Detr3d: 3d object detection from multi-view images via 3d-to-2d queries," in _The Conference on Robot Learning (CoRL)_, 2021.
* [47] L. Wang, T. Chen, C. Anklam, and B. Goldluecke, \"High dimensional frustum pointnet for 3d object detection from camera, lidar, and radar,\" in _2020 IEEE Intelligent Vehicles Symposium (IV)_, 2020, pp. 1621-1628.
* [48] M. I. Skolnik, _Radar handbook_. McGraw-Hill Education, 2008. | We propose LiRaFusion to tackle LiDAR-radar fusion for 3D object detection to fill the performance gap of existing LiDAR-radar detectors. To improve the feature extraction capabilities from these two modalities, we design an early fusion module for joint voxel feature encoding, and a middle fusion module to adaptively fuse feature maps via a gated network. We perform extensive evaluation on nuScenes to demonstrate that LiRaFusion leverages the complementary information of LiDAR and radar effectively and achieves notable improvement over existing methods. | Summarize the following text. | 110 |
# The parity of specular Andreev reflection under mirror operation in zigzag graphene ribbon
Yanxia Xing\\({}^{1,3}\\), Jian Wang\\({}^{1,*}\\), and Qing-feng Sun\\({}^{2,\\dagger}\\)
\\({}^{1}\\)Department of Physics and the Center of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China. \\({}^{2}\\)Beijing National Lab for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100080, China. \\({}^{3}\\)Department of Physics, Beijing Institute of Technology, Beijing 100081, China.
Since the experimental realization of graphene[3], it has become an exciting arena for theoretical and technological investigations.[4] A number of new phenomena have been predicted and verified experimentally. For instance, in the presence of magnetic field, it exhibits a distinctive half-integer quantum Hall effect.[3] Its quasi-particles obey the Dirac-like equation and have relativistic-like behaviors.[4] Due to the relativistic effect, the Klein tunneling occurs where an incident electron in graphene can pass through a potential barrier with probability one.[5] Then a graphene p-n junction can be used to focus Dirac electron current with a negative refractive index.[6; 7]
Since good contacts between superconducting leads and graphene have been realized experimentally,[8] the transport study through graphene based normal-metal-superconductor (GNS) heterojunction becomes feasible. In the presence of a normal metal (graphene)-superconducting interface, an incoming electron converts into a hole and a cooper pair is formed that enters the superconductor. Due to the relativistic nature of the electron in graphene, the electron-hole conversion can either be intraband (within conduction or valence band) or interband (between conduction and valence bands). When the electron-hole conversion is intraband, it corresponds to the usual Andreev reflection (AR)[9] or Andreev retroreflection (ARR) because the reflected hole is along the incident direction. This ARR occurs for both relativistic and non-relativistic electrons. When the electron-hole conversion is interband, the reflected hole is along specular direction and a specular Andreev reflection (SAR) takes place,[10] which can lead to novel phenomena as we will discuss below.
It is known that parity is a fundamental quantity in physics and reflection is a general physical phenomenon in nature. In this paper, we discuss the parity of the reflection amplitude for graphene in contact with superconducting leads. In general, the parity of a reflection amplitude can be either even or odd when the system is under mirror operation. However, for all previously known reflection events, the reflection amplitudes in the one-mode energy region have even parity under the mirror operation; an odd-parity reflection event has yet to be found. In this paper, we found, for the first time, that the SAR amplitude has an odd parity under the mirror operation for zigzag graphene ribbons with an even number of chains. This means that the phases of the SAR amplitude for a graphene-superconductor hybrid system and its mirror system differ by \(\pi\). We attribute this phenomenon to the unique band structure of graphene. Obviously this phase difference does not affect any observable quantities for each system. When two systems couple together, however, this \(\pi\) phase manifests through quantum interference between two SARs. So this \(\pi\) phase shift has important consequences for a four-terminal device with two superconducting leads (see Fig.2(a)). When two superconducting leads are symmetrically attached to the device, the quantum interference of the left and right SAR leads to a destructive or constructive interference depending on whether the phase difference of the superconducting leads is zero or \(\pi\). Importantly, when two superconducting leads are _asymmetrically_ attached to the device, the same interference pattern occurs provided that the Dirac point \(E_{0}\) is in line with the condensate of the superconducting lead. The quantum interference between pairs of the AR can be tuned by shifting the Dirac point, the asymmetry of the two superconducting leads, as well as the phase between the two superconducting leads. Due to the odd parity of SAR, the interference pattern for SAR is phase contrasted to that of ARR where the parity is even.
Before doing numerical calculation, we first prove that the phases of SAR amplitude of two systems (i) and (ii) in Fig.1(a) differ by \\(\\pi\\), i.e., the parity of SAR is odd under mirror operation. Note that for graphene systems electrons in valence and conduction band are usually referred as electrons and holes, respectively. In the presence of superconducting lead the reference point of electrons and holes is the Fermi level in the superconducting lead. In the following, we will refer electrons (holes) as electrons above (below) Fermi level in superconducting lead. Denote \\(\\psi_{c}^{+}\\) (\\(\\psi_{v}^{+}\\)) the wavefunction of electrons in conduction (valence) band moving in +y direction and \\(\\psi_{c}^{-}\\) (\\(\\psi_{v}^{-}\\)) in -y direction in the zigzag graphene nanoribbon _lead_. It was known that under reflection \\(\\hat{P}:x\\rightarrow-x\\), \\(\\psi_{c}^{\\pm}\\) is symmetric while \\(\\psi_{v}^{\\pm}\\) is anti-symmetric if the energy of electron is in the first transmission channel[11][see Fig.1(a)], i.e.,
\\[\\hat{P}\\psi_{c}^{\\pm}(x,y) = \\psi_{c}^{\\pm}(-x,y)\\] \\[\\hat{P}\\psi_{v}^{\\pm}(x,y) = -\\psi_{v}^{\\pm}(-x,y). \\tag{1}\\]
which is one of the unique features of zigzag edge nanoribbons with even number of chains. Assuming the incident electron from the terminal-1, the wavefunctions for SAR \\(\\psi_{1,3}\\) in zigzag nanoribbon lead 1 or 3 of the system (i) can be written as
\\[\\psi_{1}^{(i)} = \\psi_{e}^{+}+r_{11}\\psi_{e}^{-}+r_{11A}\\psi_{h}^{-}\\] \\[\\psi_{3}^{(i)} = t_{13}\\psi_{e}^{+}+r_{13A}\\psi_{h}^{+} \\tag{2}\\]
where \\(r_{11}\\) is the normal reflection amplitude, \\(t_{13}\\) is the transmission amplitude, \\(r_{11A}\\) and \\(r_{13A}\\) are the Andreev reflection amplitudes with the reflected hole to the terminal-1 and 3, respectively. Similarly the wavefunctions for the system (ii) are given by
\\[\\psi_{1}^{(ii)} = \\psi_{e}^{+}+\\bar{r}_{11}\\psi_{e}^{-}+\\bar{r}_{11A}\\psi_{h}^{-}\\] \\[\\psi_{3}^{(ii)} = \\bar{t}_{13}\\psi_{e}^{+}+\\bar{r}_{13A}\\psi_{h}^{+} \\tag{3}\\]
Since the system (i) is related to (ii) by the reflection operator \\(\\hat{P}\\), we have \\(\\psi_{\\alpha}^{(i)}=\\hat{P}\\psi_{\\alpha}^{(ii)}\\) with \\(\\alpha=1,3\\). Note that for SAR, the electron is in the conduction band while the hole is in the valence band, i.e., \\(\\psi_{e}=\\psi_{c}\\) and \\(\\psi_{h}=\\psi_{v}\\). From this relation together with Eqs.(1), (2), and (3), we obtain
\\[r_{11A}=-\\bar{r}_{11A},\\quad r_{13A}=-\\bar{r}_{13A}\\] \\[r_{11}=\\bar{r}_{11},\\quad t_{13}=\\bar{t}_{13} \\tag{4}\\]
Note that the origin of this \\(\\pi\\) phase shift (odd parity) is the interband conversion from the electron to the hole. Therefore the \\(\\pi\\) phase shift does not occur for ARR since it involves only intraband conversion. Now we verify this statement numerically using a tight-binding model (see below for detailed description of the model and numerical procedure). The numerical results of AR probability \\(R_{11A(13A)}=|r_{11A(13A)}|^{2}\\) for two systems are shown in Fig.1(b). As expected the AR probability are exactly the same for two systems. However, the phase of AR amplitudes \\(r_{11A(13A)}\\) denoted as \\(\\Phi_{11(13)}^{i,ii}\\) are different. It is shown in Fig.1(c) and Fig.1(d) that ARR amplitudes (\\(|E_{0}|>|E_{F}|\\), with \\(|E_{F}|=0.5\\)) are the same for two systems in Fig.1(a) while the SAR amplitudes (\\(|E_{0}|<|E_{F}|\\)) have a \\(\\pi\\) phase shift. It confirms the odd parity for interband electron-hole conversion, which comes from the distinct topology of the band structure of graphene.
To see the consequence of the odd parity of SAR, we examine a symmetric four-terminal device with two superconducting leads depicted in Fig.2(a) (by setting asymmetry \\(\\delta N=0\\) and phase difference \\(\\delta\\phi=0\\)). For this system, two beams from terminal-1 has a \\(\\pi\\) phase shift due to odd parity of SAR and interferes destructively at terminal-3 giving rise to a vanishing SAR coefficient. However, we can arrive the same conclusion using symmetry argument as follows. Since the system is symmetric with respect to \\(x=0\\), we must have \\(r_{13A}=\\bar{r}_{13A}\\) when the reflection operation along x-direction is applied. While from Eq.(4), \\(r_{13A}=-\\bar{r}_{13A}\\). So the AR probability \\(R_{13A}=|r_{13A}|^{2}\\) for SAR can also be zero from symmetry point of view.[12] Therefore we conclude that the symmetric device can not be used to test the odd parity of SAR. In the following, we demonstrate that due to the \\(\\pi\\) phase shift the destructive interference still occurs in a four-probe devices with two superconducting leads attached asymmetrically and hence can be used to test the odd parity of SAR.
For this purpose, we consider an asymmetric four-terminal device consisting of a zigzag graphene ribbon with two superconducting leads as shown in Fig.2(a). The Hamiltonian of the graphene is[13]\\(H_{0}=\\sum_{\\bf i}\\epsilon_{\\bf i}a_{\\bf i}^{\\dagger}a_{\\bf i}-\\sum_{<{\\bf i}{ \\bf j}>}ta_{\\bf i}^{\\dagger}a_{\\bf j}\\). Here \\(a_{\\bf i}\\) and \\(a_{\\bf i}^{\\dagger}\\) are the annihilation and creation operators at site \\({\\bf i}\\), \\(\\epsilon_{\\bf i}\\) is the on-site energy which can be controlled experimentally by the gate voltage[3], and the hopping constant \\(t=2.75eV\\) represents the nearest carbon bond energy. The pair potential (energy gap) of superconducting terminal-\\(\\beta\\) with \\(\\beta=2,4\\) is \\(\\tilde{\\Delta}_{\\beta}=\\Delta_{\\beta}e^{i\\varphi_{\\beta}}\\) with \\(\\Delta_{2}=\\Delta_{4}=\\Delta\\simeq 1meV\\). In numerical calculations,[12] we fix Fermi energy \\(E_{F}\\) and tune the Dirac point \\(E_{0}\\). We have used \\(\\Delta\\) as the energy unit.
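For completeness, the superconducting terminals are commonly described (see, e.g., Ref.[12]) by adding an on-site s-wave pairing term of the standard BCS form, consistent with the pair potential quoted above; a typical choice (the spin labels here follow the standard convention and are our addition) is

\[H_{S}=\sum_{{\bf i}\in\beta}\Big(\tilde{\Delta}_{\beta}\,a_{{\bf i}\uparrow}^{\dagger}a_{{\bf i}\downarrow}^{\dagger}+\tilde{\Delta}_{\beta}^{*}\,a_{{\bf i}\downarrow}a_{{\bf i}\uparrow}\Big),\qquad\beta=2,4,\]

with the transport problem then treated in the Bogoliubov-de Gennes framework.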
Now we study the interference between two ARs from GNS junctions as shown in Fig.2(a) in which two superconducting leads 2 and 4 are asymmetrically attached to the zigzag nanoribbon. The horizontal distance \\(\\delta N\\) between two GNS junctions measures the asymmetry of two GNS junctions. The scattering process can be qualitatively understood as follows. For simplicity, we assume \\(\\phi_{2}=\\phi_{4}\\) for the moment. As shown schematically in Fig.2(a), for SAR the particle-like electrons in terminal-1 split into two beams and are scattered separately by two GNS junctions (green horizontal lines) as holes that finally recombine at terminal-3. We examine the total phase accumulated for each beam that involves the following three processes. Before reaching the first GNS junction (denoted by the left vertical green line) two beams of electrons propagate with the same momentum \\(k_{x}\\). After reaching the second GNS junction (denoted by the right vertical green line) two beams of holes also propagate with the same momentum \\(k_{x}^{\\prime}\\). Obviously phases accumulated in the above two processes for both beams are the same. Between them two beams propagate with different momenta \\(k_{x}\\) and \\(k_{x}^{\\prime}\\). Hence the phase difference between two beams is \\(\\phi=(k_{x}-k_{x}^{\\prime})\\delta x\\) with \\(\\delta x=b\\delta N\\), where \\(b=\\sqrt{3}a\\) and \\(a\\) the lattice constant. This phase difference can be tuned by varying the Dirac point \\(E_{0}\\) or the asymmetry \\(\\delta N\\) giving rise to a complicated interference pattern (see Fig.2). In particular, this phase difference can be zero if \\((k_{x}-k_{x}^{\\prime})=0\\) (i.e.,\\(E_{0}=0\\)) or \\(\\delta N=0\\). In general, the total phase difference is \\(\\phi=(k_{x}-k_{x}^{\\prime})\\delta x+\\phi_{2}-\\phi_{4}\\).
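As an illustrative two-path estimate (our simplification: equal single-junction amplitudes and no multiple reflections), the two reflected beams add with the extra \(\pi\) from the odd parity of SAR,

\[R^{\rm SAR}\propto\big|1+e^{i(\phi+\pi)}\big|^{2}=4\sin^{2}(\phi/2),\qquad R^{\rm ARR}\propto\big|1+e^{i\phi}\big|^{2}=4\cos^{2}(\phi/2),\]

with \(\phi=(k_{x}-k_{x}^{\prime})\delta x+\phi_{2}-\phi_{4}\). This is why the SAR and ARR interference patterns are always phase contrasted, and why \(\phi=0\) (e.g., \(E_{0}=0\), or \(\delta N=0\) with \(\delta\varphi=0\)) gives a completely destructive SAR signal.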
Interference pattern of AR probability \\(R_{13A}\\) for system depicted in Fig.2(a) with pair potential phase difference of two superconductors \\(\\delta\\varphi=0\\) and \\(\\pi\\) (\\(\\delta\\varphi\\equiv\\varphi_{2}-\\varphi_{4}\\)) are then plotted in Fig.2(b) and (c), respectively. For Fig.2(b) following observations are in order:(1) For the geometrically symmetric system (\\(\\delta N=0\\)), the interference is always destructive with zero \\(R_{13A}\\) as long as \\(|E_{0}|<|E_{F}|\\).[12] Clearly this is due to the \\(\\pi\\) phase shift depicted in Fig.1(d) and is consistent with the band selection rule.[11] (2) When Dirac point \\(E_{0}\\) is in line with the condensate energy of the superconductor, i.e., when \\(E_{0}=0\\), \\(R_{13A}\\) is again zero no matter what value \\(\\delta N\\) assumes. This means that there is a completely destructive interference between two beams scattered by two GNS junctions attached asymmetrically to the graphene nano-ribbon. This behavior can be understood as follows. When \\(E_{0}=0\\) the incoming electron and reflected hole have the same propagating momentum \\(k_{x}\\) and thus path 1 and 2 in Fig.2(a) experience the same quantum phase \\(k_{x}\\delta x\\) except at the superconducting leads. Hence the total phase difference is only due to the \\(\\pi\\) phase shift between two SARs. (3) \\(R_{13A}\\) is an even function of Dirac point \\(E_{0}\\) because of the electron-hole symmetry in graphene. Due to the geometric symmetry, \\(R_{13A}\\) is also an even function of asymmetry \\(\\delta N\\). (4) For nonzero \\(E_{F}\\), the closer the Dirac point \\(E_{0}\\) to \\(E_{F}\\), the more rapidly \\(R_{13A}\\) oscillates as we vary \\(\\delta N\\). This is because the difference of propagating momentum \\(k_{x}-k^{\\prime}_{x}\\) increases monotonically as \\(E_{0}\\) approaches to \\(E_{F}\\). (5) When \\(E_{0}\\) is in the vicinity of \\(E_{F}\\), \\(R_{13A}\\) can reach 0.9 which is much larger than that when \\(|E_{0}|>|E_{F}|\\). This is because when \\(E_{F}\\) is very close to \\(E_{0}\\), the edge states of zigzag ribbon begin to contribute, then electron is easier to be scattered by two GNS junctions located also at edges of zigzag ribbon. Considering the pseudo-spin conservation, large \\(R_{13A}\\) is always found in the region of \\(|E_{0}|<|E_{F}|\\), i.e., the SAR region. (6) There is an overall fine oscillation with a period of \\(\\delta N=3b\\). Similar behavior was also found in zigzag ribbons with a p-n junction where the conductance is determined by the relative displacement \\(\\delta\\) along the p-n junction.[14] In Fig.2(c) with the superconducting phase difference \\(\\delta\\varphi=\\pi\\), we see that the interference pattern is contrary to \\(\\delta\\varphi=0\\) [Fig.2(b)] where the constructive interference becomes destructive and vice versa.
To further analyze the interference pattern, we plot in Fig.3(a) the total \(R_{13A}\) vs Dirac point \(E_{0}\) for different asymmetry \(\delta N\) with the phase difference between two superconducting leads \(\delta\varphi=0\) [main panel of Fig.3(a)] or \(\delta\varphi=\pi\) [inset of Fig.3(a)]. Clearly the interference (oscillatory) pattern occurs only for asymmetric systems (\(\delta N\neq 0\)) with oscillation frequency proportional to \(\delta N\). When the pair potential phase difference \(\delta\varphi=\pi\) is introduced, the interference pattern reverses, and \(R_{13A}\) with \(\delta N=0\) becomes the envelope function of \(R_{13A}\) for all nonzero \(\delta N\). In Fig.3(b) we plot \(R_{13A}\) vs \(\delta N\) for different widths \(W\) of nanoribbon. It is shown clearly that \(R_{13A}\) is a periodic function of \(\delta N\) with larger periodicity for larger \(W\).
In the inset of Fig.3(b) we plot this period versus the width for different \\(E_{0}\\). The period \\(P\\) is obtained in two ways: (1). from the expression \\(P=2\\pi/(k_{x}-k_{x}^{\\prime})\\) where the momenta \\(k_{x}\\) and \\(k_{x}^{\\prime}\\) can be obtained from the band structure for a given \\(E_{0}\\) (black symbols). (2). directly from main panel of Fig.3(b) (red solid circle). From the inset, it clearly shows that two periods are exactly the same giving strong evidence that the interference pattern of AR probability are indeed from two reflected hole beams.
Finally, the interference pattern of AR probability \\(R_{11A}\\) is also studied (not shown). We found that only ARR probability \\(R_{11A}\\) (\\(|E_{0}|>E_{F}=0.2\\Delta\\)) exhibits interference pattern. We note that since there is no \\(\\pi\\) phase shift involved in ARR, when \\(\\delta N=0\\) reflected electrons through two GNS junctions interfere constructively when \\(\\delta\\varphi=0\\) and destructively when \\(\\delta\\varphi=\\pi\\) which is in contrast to SAR in Fig.2. In fact, interference patterns of SAR and ARR are always phase contrast not only for \\(\\delta N=0\\) but also for all other \\(\\delta N\\).
Testing the odd parity of SAR experimentally relies on the fabrication of high-quality zigzag graphene nanoribbons. This has recently been achieved by several laboratories using different methods, including unzipping multi-walled carbon nanotubes (CNTs),[15] anisotropic etching by thermally activated nickel nanoparticles,[16] and edge reconstruction to make zigzag graphene nanoribbons.[17] In view of these experimental breakthroughs, we expect that the setup to test our predicted phenomenon can be realized experimentally.
To reduce the experimental challenge, we have considered an unzipped CNT device, i.e., (n,n) CNT-zigzag graphene-(n,n) CNT, obtained by unzipping a few unit cells in the central part of an armchair CNT which has been achieved experimentally.[15] For this system, the wavefunction in the armchair CNT has the same symmetry as that of the zigzag graphene ribbon. Following the same procedure leading to Eq.(4), we have shown that the unzipped CNT in contact with a superconducting lead has the odd parity under mirror operation. Similar conclusions drawn from GNS can be obtained for unzipped CNT with two superconducting leads.
In conclusion, up to now, the parity of reflection amplitude was found to be even under the mirror operation. Here we have provided an example of odd parity for the reflection amplitude, the SAR amplitude in the zigzag graphene-superconductor hybrid system. This odd parity is due to the combination of unique band structure of the graphene and the electron-hole conversion involving two energy bands with different parity symmetry. The signature of odd parity of SAR can be found from the quantum constructive interference in a four terminal system with two superconducting leads attached asymmetrically. Furthermore, the interference pattern due to odd parity of SAR is phase contrasted to that of ARR where the parity is even.
**Acknowledgement** We gratefully acknowledge the financial support from a RGC grant (HKU 705409P) from the Government of HKSAR and from NSF-China under Grant Nos.10974236 and 10821403.
## References
* (1) \\({}^{\\ast}\\)e-mail: [email protected].
* (2) \\({}^{\\dagger}\\)e-mail: [email protected].
* (3) K. S. Novoselov et al, Science **306**, 666 (2004); K. S. Novoselov et al, Nature (London) **438**, 197 (2005); Y. Zhang et al, Nature (London) **438**, 201 (2005).
* (4) C. W. J. Beenakker, Rev. Mod. Phys. **80**, 1337 (2008); A.H. Castro Neto et al, Rev. Mod. Phys. **81**, 109 (2009); A. Rycerz et al, Nature Phys. **3**, 172 (2007).
* (5) M. I. Katsnelson et al, Nature Phys. **2**, 620 (2006); A.-F. Young et al, Nature Phys. **5**, 222 (2009); A. V. Shytov et al, Phys. Rev. Lett. **101**, 156804 (2008).
* (6) V. V. Cheianov et al, Science **315**, 1252 (2007).
* (7) Y. Xing, J. Wang and Q.-F. Sun, Phys. Rev. B **81**, 165425 (2010).
* (8) H. B. Heersche et al, Nature **446**, 56 (2007); F. Miao et al, Science **317**, 1530 (2007).
* (9) A. F. Andreev, Sov. Phys. JETP **19**, 1228 (1964).
* (10) C. W. J. Beenakker, Phys. Rev. Lett. **97**, 067007 (2006).
* (11) J. Nakabayashi et al, Phys. Rev. Lett **102**, 066803 (2009).
* (12) S. Cheng et al, Phys. Rev. Lett. **103**, 167003 (2009); Q.-F. Sun and X.C. Xie, J. Phys.: Condens. Matter **21**, 344204 (2009).
* (13) D. N. Sheng et al, Phys. Rev. B **73**, 233406 (2006); Z. Qiao and J. Wang, Nanotechnology **18**, 435402 (2007). W. Long et al, Phys. Rev. Lett. **101**, 166806 (2008); J. Li and S.-Q. Shen, Phys. Rev. B **78**, 205308 (2008).
* (14) A. R. Akhmerov et al, Phys. Rev. B **77**, 205416 (2008).
* (15) L. Jiao et al, Nature (London), **458**, 877 (2009).
* (16) L. C. Campos et al, Nano. Lett. **9**, 2600 (2009).
* (17) C. O. Girit et al., Science **323**, 1705 (2009).
Figure 2: (Color online) Panel (a): sketch of the AR interferometer in which the zigzag ribbon is asymmetrically attached to two superconducting leads 2 and 4. Electrons in terminal-1 can be Andreev reflected into terminal-3 by either the top or the bottom GNS junction (horizontal green lines). Panels (b) and (c): contour plots of \\(R_{13A}\\) vs the Dirac point \\(E_{0}\\) and the asymmetry \\(\\delta N\\). The phase difference of the two superconducting leads \\(\\delta\\varphi\\) is zero in panel (b) and \\(\\pi\\) in panel (c). The other parameters: Fermi energy \\(E_{f}=0.8\\), number of chains in the zigzag ribbon \\(N=40\\) corresponding to a width of \\(60a\\), and the width of the superconducting leads \\(W_{S}=10b\\), where \\(b=\\sqrt{3}a\\).
Figure 3: (Color online) Panel (a): With the Fermi level fixed at \\(E_{F}=0.8\\), total AR probability \\(R_{13A}\\) vs Dirac point \\(E_{0}\\) for different asymmetries \\(\\delta N\\). In the main panel \\(\\delta\\varphi=0\\), while \\(\\delta\\varphi=\\pi\\) in the inset. Panel (b): \\(R_{13A}\\) vs asymmetry \\(\\delta N\\) with \\(E_{0}=0.3t\\) for different widths \\(W\\) from \\(10\\times 3a\\) to \\(38\\times 3a\\) in intervals of \\(2\\times 3a\\) along the black arrow. Inset panel: the hollow signs are the period \\(P\\) obtained from the main panel and the solid red circles are the period \\(P\\) from the energy band via the expression \\(P=2\\pi/(k_{x}-k_{x}^{\\prime})\\). The other parameters: \\(\\delta\\varphi=0\\), \\(E_{F}=0.8\\). | It is known that the parity of the reflection amplitude can be either even or odd under the mirror operation. Up to now, all the parities of reflection amplitude in the one-mode energy region are even under the mirror operation. In this paper, we give an example of odd parity for Andreev reflection (AR) in a three-terminal graphene-superconductor hybrid system. We found that the parity is even for the Andreev retroreflection (ARR) and odd for specular Andreev reflection (SAR). We attribute this remarkable phenomenon to the distinct topology of the band structure of graphene and the specular Andreev reflection involving two energy bands with different parity symmetry. As a result of odd parity of SAR, the SAR probability of a four-terminal system with two superconducting leads (two reflection interfaces) can be zero even when the system is asymmetric due to the quantum interference of two ARs. | Condense the content of the following passage. | 183
ieee/43fa4f04_d618_420f_b391_fc782b7ff4aa.md | # Human-Aware Trajectory Optimization for Enhancing D* Algorithm for Autonomous Robot Navigation
Min Je Choi (Member, IEEE), Seong Jin Park, Sion Kim, and Seung Jae Lee
\\({}^{1}\\)Department of Transportation Engineering, University of Seoul, Seoul 02504, South Korea
\\({}^{2}\\)Department of Smart Cities, University of Seoul, Seoul 02504, South Korea
Corresponding author: Seung Jae Lee ([email protected])
This work was supported by Korean Ministry of Land, Infrastructure and Transport (MOLIT) as \"Innovative Talent Education Program for Smart City.\" The work of Min Je Choi was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea under Grant NRF-RS-2023-00276836. The work of Seung Jae Lee was supported by Korea Agency for Infrastructure Technology Advancement (KAIA) Grant funded by the Ministry of Land, Infrastructure and Transport under Grant RS-2024-00409428.
The associate editor coordinating the review of this manuscript and approving it for publication was Razi Iqbal.
As shown in Fig. 2, traffic on pedestrian paths differs from traffic on the road: there is no regulation or fixed directionality, so people pass in both directions. In addition, pedestrian paths must be assumed to contain occasional obstacles such as trees, chairs, and personal mobility (PM) devices.
This research focuses on the problem of path optimization for autonomous robots that use pedestrian paths. Fig. 3 compares the path-determination method of an autonomous robot before (the traditional driving method) and after applying the modified D* algorithm developed in this study. This research proposes a novel approach to improve the robots' path-determination algorithm by considering the interaction between robots and pedestrians. Since pedestrians generally tend to recognize and avoid autonomous robots, this research modified the existing D* algorithm to account for human behavior patterns so that autonomous robots can react efficiently. The modified algorithm allows the robots to recognize pedestrians' avoidance intentions through sensors and to change course only when necessary. This reduces the energy consumption of autonomous robots and improves their operating efficiency. In general, a robot is powered by its battery, so its operating time is limited and it must be charged regularly [9]. In addition, energy efficiency is an important consideration because the size of a robot is often limited by the width of pedestrian paths, and the battery capacity is determined by the size of the robot. Efficient battery management is a critical factor in determining the operating time and range of autonomous robots, while optimal routing algorithms can ensure safe, efficient movement and minimize energy consumption.
This study reviewed previous studies on path planning, path cost, and trajectory utilization of autonomous robots, and collected and analyzed pedestrian trajectory data using sensor technology. The robot recognized the human movement path through the sensor and converted it into a trajectory. Based on the converted trajectory data, the robot modifies the route in real-time, detects pedestrians during autonomous movement on the walkway, and develops an efficient route planning methodology by determining whether to avoid the trajectory. This took a more generalized approach by calculating the average distance while considering that the distance perceived by pedestrians varies from individual to individual. Research has been conducted on minimizing path costs and improving the energy efficiency of autonomous robots using the modified D* algorithm. Fig. 4 presents the research flow, and the contribution of this research is to improve the energy efficiency of the D* algorithm by analyzing human trajectories.
The paper is structured as follows. Section II provides a detailed review of related literature in the field. Section III describes the whole process of extracting human trajectory data, the autonomous robot hardware for the experiment, and the modified D* algorithm that incorporates the avoidance points. Section IV describes the experimental part and the experimental results reflecting the original D* algorithm and
Figure 1: Traffic method on the road.
Figure 3: Autonomous robotβs path method (Before, After).
Figure 2: Traffic method on the pedestrian path.
the modified D* algorithm. Finally, Section V summarizes the research results, discusses the contributions of this work and future research directions, and concludes the paper.
## II Related work
### Pedestrian Behavior in Shared Spaces with Autonomous Vehicles
Various studies are being conducted worldwide to promote pedestrian efficiency with the integration of autonomous robots, including research on mobility hubs [10, 11]. With the introduction of Autonomous Vehicles (AVs) into pedestrian shared spaces, the interaction with humans is becoming increasingly important. Although AV technology is ultimately designed to serve pedestrians, the interaction between AVs and pedestrians in shared spaces has only recently been studied. More research into how AVs perceive and follow pedestrians will help advance AV navigation technology [12]. Shared spaces can be sidewalks, parking lots, roads, intersections, and more. As the shared space concept becomes increasingly popular in urban planning, AVs will need to deal with potentially large numbers of pedestrians and negotiate their passage [13]. A VR facility called \"LargeSpace\" has been developed to investigate pedestrian behavior when interacting with AVs in shared spaces [14]. Natasha Merat et al. have provided an overview of mathematical and computational modeling techniques used to understand how AV and pedestrian behavior can be cooperative and effective [15]. Most people have already interacted with cars in shared spaces and on roads. Based on this experience, pedestrians bring their existing knowledge, expectations, and habits about cars to interacting with AVs in shared spaces. Furthermore, AVs need to predict pedestrians' short-term behavior and respect the social norms of crowd navigation. A social norm is a definition of appropriate behavior that expresses a notion of what people tend to do [16]. In the context of crowd navigation, a social norm might be about respecting a minimum personal distance or not moving erratically. AVs share many attributes with robots and achieve autonomy using sensors, pedestrian trajectory prediction, and related techniques. Many experiments use unmanned vehicles [17] or mobile robot prototypes to study pedestrian responses to AVs [18].
### Planning a Robot's Path on a Pedestrian Path
Unlike roads, pedestrian paths have no rules and traffic flows in both directions. Many experiments and solutions are needed for robots to drive automatically on pedestrian paths. According to Masahiro Shiomi et al, the traditional approach in robotics is to treat pedestrians as moving obstacles and ensure collision-free movement in the presence of moving obstacles [19]. Recent research provides collision-free behavior and movement for robots by planning the path of the robot so as not to violate the walkability of the pedestrian's space. Kitazawa et al. investigated pedestrians' gaze patterns to determine the size and shape of their information processing space (IPS) [20]. Pedestrians pay much more attention to the ground surface to detect immediate environmental hazards than to fixate on obstacles. David et al. applied a real-time deep learning-based method to the problem of human-aware robot navigation. The methodology was applied through training on images captured by cameras, and it presents a deep learning-based approach for integrating pedestrian detection into robot navigation problems [21]. Similarly, there are studies on how future commercialized robots will interact with humans on pedestrian paths. In this study, experiments were conducted using an algorithm that allows a robot to continue its course without changing its path when detecting pedestrians with proactive avoidance behaviors.
### Minimum Route Cost for Automatic Operation on a Walking Path
Recently, research has been carried out on electric vehicle batteries and different road gradients to calculate the minimum path cost, considering different weight factors [22, 23]. Since the advent of mobile robots, various studies have been conducted in the field of path planning. The traditional way for robots to navigate through mobile and fixed obstacles on pedestrian paths is through avoidance. Liang et al. provided a fundamental review of the algorithms used in 3D path planning and their applications. All approaches have been
Figure 4: Research flow.
classified into five categories: sampling-based algorithms, node-based optimal algorithms, mathematical model-based algorithms, biomimetic algorithms, and multi-fusion-based algorithms [24]. Among them, multi-fusion-based algorithms combine the strengths of multiple algorithms to achieve global optima at minimum cost. Such algorithms can pursue multiple objectives simultaneously and have demonstrated good performance when combined with different methods. Ayawli et al. [25] used the Voronoi Diagram and Computational Geometry Technique (VD-CGT) to present a new path algorithm for robots in complex and dynamic environments. They devised an intelligent replanning algorithm that classifies moving obstacles based on their location, speed, distance, and direction of movement to determine the level of threat they pose and to adapt the robot's response to different obstacles. The VD-CGT method avoids unnecessary calculations, which is advantageous for identifying moving obstacles with collision risk, and its short re-planning time enables safe and fast route navigation. Wang [26] elaborated the basic principle of the A* algorithm, divided the robot path-planning area using a grid method, and employed the MATLAB simulation platform to generate a two-dimensional path for the robot. Most mobile robot research is carried out with grid methods and grid maps. Jung et al. proposed a collision-avoidance driving control algorithm for mobile robots that combines the D* algorithm with fuzzy rules for global and local path planning. By modeling human behavior, their algorithm issues action commands and modifies the route when there is a risk of collision with moving obstacles or a high-cost area in the direction of travel [27]. The above papers suggest that the shortest and least-cost paths for robot driving can be designed in different ways. Previous studies have investigated various path algorithms to minimize the path cost for robots [28, 29, 30]. Building on these studies, this research adds conditions to the path algorithm to minimize the path cost and presents a method to improve the battery efficiency of robots.
### Human Track Utilization Method
Researchers are actively working on improving the safety and convenience of pedestrians. According to Mehdi et al. the analysis of trajectory data was used to calculate the average change in direction and speed from the perspective of the pedestrian's distance and angle [31]. Also, The features of pedestrians' pre-avoidance decision-making behaviors were analyzed and used to understand the underlying dynamics of crowd behavior [32]. Bennewitz et al. applied the EM algorithm to the trajectories recorded by laser distance sensors to cluster a set of movement patterns, and introduced a method for automatically inducing HMMs and updating them using JPDAF based on distance data and vision information [33]. Glas et al. developed a system to track the location and body orientation of many people simultaneously using a network of laser rangefinders [34]. Berclaz et al. achieved reliable multi-person tracking by using heuristic methods to rank individuals and process their trajectories over a long sequence if they are not confused with each other. It provided accurate position estimates by applying metrics to find the optimal trajectory across multiple frames [35]. Heath and Guibas presented a distributed vision-based technique for tracking people with a network of multiple stereo camera sensors in complex and dynamic environments [36]. Sighencea et al. reviewed the latest deep learning-based solutions to predict pedestrian trajectories, along with the sensors and processing methodologies used. Through this, they addressed the available datasets, performance metrics used in the evaluation process, and the practical application areas [37]. Sun et al. proposed a novel approach to predict pedestrian trajectories for autonomous mobile service robots using rangefinder sensors to learn and predict 3DOF pose trajectories [38]. A multi-object localization method was presented by fusing Lidar and camera data. The point cloud data was clustered to obtain a compact representation in 3D space and asynchronously fused to present cutting-edge and distinct technology through detection, localization, and tracking [39]. Recently, more experiments have been conducted in subway stations to predict pedestrian attributes and individual trajectories using CCTV. Lidar sensors and Deep Neural Network (DNN) algorithms were used to predict pedestrian attributes and individual trajectories [40]. This study aims to improve the driving efficiency of robots by collecting and analyzing human trajectory data using sensors.
## III Methodology
This study adopts the conceptual framework for 3D object detection proposed by Zhou and Tuzel [41]. It exploits a methodology for collecting and analyzing trajectory data of pedestrians using LiDAR and camera sensors. The overall methodology of this study is depicted in Fig. 5. A camera was used to detect people, and based on the detected people, sensor fusion was used to determine the location of the people and their distance from the robot. The data was collected in the form of RGB images and PCD (point cloud data). To deal with the complexity and sparseness of the PCD, the VoxelNet model, a 3D deep learning technique, was used to extract human trajectory data using ROS (Robot Operating System) and CloudCompare. In addition, the existing optimal path algorithm, the D* algorithm, was modified to use the average avoidance point to reduce unnecessary movement of the robot, and a study was conducted to determine the efficient path when the pedestrian avoids first. Table 1 lists the notation used in this paper.
### Data Collection and Analysis
This study utilized autonomous robots for collecting and analyzing trajectory data. Fig. 6 shows the Scout Mini, which is available with different drive types. The Scout Mini can be equipped with additional components such as a camera, LiDAR, GPS, IMU, etc. In this experiment, the Velodyne VLP-16 LiDAR (Fig. 7) and the Realsense D435i depth camera (Fig. 8) were used. Table 2 lists the components of the Scout Mini, Table 3 lists the technical specifications of the VLP-16 LiDAR, and Table 4 lists the technical specifications of the D435i depth camera.
The VLP-16 LiDAR, pictured in Fig. 7, collected the pedestrian trajectory data in 3D point cloud data (PCD) format. The collection of trajectory data was conducted in pedestrian environments, with the robot's mounted LiDAR and camera sensors collecting data within their detection range. Fig. 9 shows the customized Scout Mini used for collecting trajectory data. The mounted sensors included a LiDAR, a depth camera, an RTK-GPS, and an IMU (Table 5).
The trajectory data was collected using both LiDAR and camera sensors simultaneously. Fig. 10 shows data fusion to extract human trajectory data, matching spatial and time series data on the same axis. A represents PCD collected
\\begin{table}
\\begin{tabular}{|c|l l l|} \\hline \\(\\left\\{\\mathbf{a}_{t}^{pos}\\right\\}\\) & Positive anchor & \\(t_{l}\\) & Predicted trajectory \\\\ \\(\\left\\{\\mathbf{a}_{t}^{Neg}\\right\\}\\) & Negative anchor & \\(t_{l}^{*}\\) & Actual trajectory \\\\ \\(L\\) & Loss Function & \\(d\\) & Distance between robot and pedestrian \\\\ \\(\\alpha\\), \\(\\beta\\) & The weight of each term & \\(|\\theta|\\) & Relative direction of movement of the robot and pedestrian \\\\ \\(N_{pos}\\) & The number of times the real object was detected & \\(D_{threshold}\\) & Distance threshold to determine avoidance intent \\\\ \\(L_{cls}\\) & Classification loss & \\(\\theta_{threshold}\\) & Angle threshold to determine avoidance intent \\\\ \\(p_{l}^{pos}\\) & Model prediction probability for positive samples & \\(P_{avoid}\\) & Behavior that a person avoids when certain conditions are satisfied \\\\ \\(N_{neg}\\) & Number of times the true object was not detected & \\(C_{exist}(n)\\) & Cost function of the original D* algorithm \\\\ \\(p_{j}^{neg}\\) & Model prediction probability for negative samples & \\(C_{avoid}\\) & Additional cost for pedestrian avoidance \\\\ \\(\\mathrm{L_{traj}}\\) & Trajectory Regression Loss & & \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Notations used in this paper.
Figure 5: Method process.
with LiDAR, while B represents RGB data collected with a depth camera. The data collected by each sensor was fused to extract human trajectories; the LiDAR detected the position of individuals, and the camera verified the accuracy of the targets detected by the LiDAR.
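A minimal sketch of this kind of LiDAR-camera alignment is given below; the extrinsic rotation and translation, the intrinsic matrix, and the detection box are placeholder values chosen for illustration, not the calibration of the robot used in this study.

```python
import numpy as np

# Assumed calibration (placeholders): LiDAR->camera extrinsics and camera intrinsics.
R_cam_lidar = np.array([[0., -1., 0.],
                        [0., 0., -1.],
                        [1., 0., 0.]])             # LiDAR x-forward mapped to camera z-forward
t_cam_lidar = np.array([0.0, -0.08, -0.05])        # small lever arm, metres
K = np.array([[910.0, 0.0, 960.0],
              [0.0, 910.0, 540.0],
              [0.0, 0.0, 1.0]])                    # rough intrinsics for a 1920 x 1080 image

def project_points(points_lidar):
    """Project Nx3 LiDAR points (metres) into pixel coordinates."""
    pts_cam = points_lidar @ R_cam_lidar.T + t_cam_lidar
    in_front = pts_cam[:, 2] > 0.1                 # keep points in front of the camera
    uv = pts_cam[in_front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[in_front]

def pedestrian_range(points_lidar, box):
    """Median range of LiDAR points falling inside a camera detection box (u1, v1, u2, v2)."""
    uv, pts_cam = project_points(points_lidar)
    u1, v1, u2, v2 = box
    inside = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & (uv[:, 1] >= v1) & (uv[:, 1] <= v2)
    if not inside.any():
        return None
    return float(np.median(np.linalg.norm(pts_cam[inside], axis=1)))

# Toy usage: a synthetic point cluster about 3 m ahead and a hypothetical detection box.
cloud = np.random.normal([3.0, 0.0, 0.0], 0.05, size=(200, 3))
print(pedestrian_range(cloud, box=(800, 300, 1120, 900)))   # roughly 3 m
```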
The point cloud raw data is characterized by numerous points stored without any specific order, making it difficult to identify the geometric characteristics of trajectories and the interactions between points. Therefore, to analyze the raw PCD, the CloudCompare software was used to quantify the PCD collected every second. The objects' shapes and the interactions between points were quantified and visualized in Table 6, where each PCD's (x, y, z) coordinates, names, and point counts were recorded for subsequent analysis.
A 3D object recognition model was used to analyze the 3D PCD. As seen in Fig. 10(A), 3D PCD is unstructured data, consisting of many randomly distributed data points in space, which requires significant computing time to process. To effectively solve this issue, the data was preprocessed into a voxelization format of normalized data, which was then
\\begin{table}
\\begin{tabular}{|c|l|} \\hline Hardware & Specification \\\\ \\hline Size & 627 x 550 x 252 mm \\\\ \\hline Mode of Operation & 4-wheel drive, Differential Drive Model \\\\ \\hline Wheelbase & 452 mm \\\\ \\hline Battery Operating Temperature & -20 \\(\\sim\\) 60\\({}^{\\circ}\\) \\\\ \\hline Charging Time & 2 hours \\\\ \\hline Minimum Ground Clearance & 107 mm \\\\ \\hline Minimum Turning Radius & 0 m \\\\ \\hline Battery Voltage & 24 V / 15 Ah \\\\ \\hline Max Velocity & 10 km/h \\\\ \\hline Communication & Standard CAN / RS232 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Components of the Scout Mini (robot).
Figure 8: Intelβs realsense depth camera(D435).
Figure 6: AgileX Scout Mini.
Figure 9: Customizing scout mini robot.
converted into a Sparse 4D tensor for data analysis via GPU computations.
Fig. 11 represents the process of extracting human trajectory data. After detecting a person using a camera, the data
Figure 11: Process for extracting people trajectory data.
Figure 10: Merging PCD and image data to exact human trajectory.
was matched with the same spatial orientation and time series to identify the location of the person and the distance to the robot using sensor fusion based on the detected human. The trajectory data was then extracted using the VoxelNet model. The trajectory data was collected on a voxel-by-voxel basis and grouped points identified as human objects. The human trajectory data was extracted by sampling and refining the human objects for each voxel to predict the human shape and movement pattern.
Fig. 12 demonstrates the process of breaking down the 3D data into uniformly sized cubic voxels to organize the data structure. This approach significantly reduces the computational time required for analyzing trajectory data and enables its interpretation through a deep learning network.
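A generic NumPy sketch of this voxel grouping step is shown below; the 0.2 m voxel size and the 35-point cap are arbitrary illustrative choices, and the sketch is not the exact VoxelNet implementation used in this study.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=0.2, max_points_per_voxel=35):
    """Group an Nx3 point cloud into cubic voxels of edge length voxel_size (metres)."""
    # Integer voxel index of every point along x, y, z.
    indices = np.floor(points / voxel_size).astype(np.int32)

    voxels = defaultdict(list)
    for idx, pt in zip(map(tuple, indices), points):
        if len(voxels[idx]) < max_points_per_voxel:   # cap the number of points kept per voxel
            voxels[idx].append(pt)

    # Stack each voxel into a fixed-size, zero-padded array as expected by a voxel feature encoder.
    coords = np.array(list(voxels.keys()), dtype=np.int32)
    features = np.zeros((len(voxels), max_points_per_voxel, 3), dtype=np.float32)
    for i, pts in enumerate(voxels.values()):
        features[i, :len(pts)] = pts
    return coords, features

# Toy usage on a random cloud.
cloud = np.random.uniform(-5, 5, size=(10000, 3)).astype(np.float32)
coords, feats = voxelize(cloud)
print(coords.shape, feats.shape)
```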
\\[L=\\alpha\\frac{1}{N_{pos}}\\sum_{i}L_{cls}\\left(p_{i}^{pos},1\\right)+\\beta\\frac{1}{N_{neg}}\\sum_{j}L_{cls}\\left(p_{j}^{neg},0\\right)+\\frac{1}{N_{pos}}\\sum_{i}L_{traj}\\left(t_{i},t_{i}^{*}\\right) \\tag{1}\\]
This study enhanced the accuracy of trajectory prediction by modifying the loss function to reflect the detailed features
\\begin{table}
\\begin{tabular}{|c|l|} \\hline Hardware & Specification \\\\ \\hline Frame Resolution & 1920 x 1080 \\\\ \\hline Sensor FOV (H x V) & 69\\({}^{\\circ}\\) x 42\\({}^{\\circ}\\) \\\\ \\hline Frame Rate & 30 fps \\\\ \\hline Sensor Resolution & 2 MP \\\\ \\hline Sensor Technology & Rolling Shutter \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Technical specifications of the D435i (camera).
Figure 12: VoxelNet process for extracting peopleβs trajectory data based on point cloud data.
Figure 13: A conceptual diagram of the robot recognizing human avoidance points and driving.
of PCD. Positive anchors, \\(\\{a_{i}^{pos}\\}_{i=1,\\ldots,N_{pos}}\\), and negative anchors, \\(\\{a_{j}^{neg}\\}_{j=1,\\ldots,N_{neg}}\\), were utilized to define the center, location, length, width, height, and rotation of the 3D box, and the loss function was modified accordingly. The loss function is the sum of the classification losses for positive and negative anchors and the trajectory regression loss. \\(L_{cls}\\) represents the classification loss, and \\(L_{traj}\\) represents the trajectory regression loss. \\(t_{i}\\) denotes the predicted trajectory, and \\(t_{i}^{*}\\) denotes the actual trajectory. \\(\\alpha\\) and \\(\\beta\\) are weights adjusting each term in the loss function. The voxel-based trajectory regression loss is designed to precisely estimate pedestrian trajectories, contributing to robotic path planning. The loss function, which directly estimates the 3D direction of the box and uniformly normalizes x and y, is defined as in (1).
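A minimal NumPy sketch of evaluating (1) is given below. The use of binary cross-entropy for \\(L_{cls}\\), a smooth-L1 penalty for \\(L_{traj}\\), and the values of \\(\\alpha\\) and \\(\\beta\\) are illustrative assumptions; the paper does not prescribe these exact forms.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy classification loss L_cls."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def smooth_l1(x):
    """Smooth-L1 penalty used here as the trajectory regression loss L_traj."""
    return np.where(np.abs(x) < 1.0, 0.5 * x ** 2, np.abs(x) - 0.5)

def total_loss(p_pos, p_neg, t_pred, t_true, alpha=1.5, beta=1.0):
    """Evaluate the loss in (1) for one batch of anchors."""
    n_pos, n_neg = len(p_pos), len(p_neg)
    cls_pos = alpha * bce(p_pos, 1.0).sum() / n_pos
    cls_neg = beta * bce(p_neg, 0.0).sum() / n_neg
    reg = smooth_l1(t_pred - t_true).sum() / n_pos
    return cls_pos + cls_neg + reg

# Toy usage: 8 positive anchors, 50 negative anchors, 7-dimensional box/trajectory targets.
rng = np.random.default_rng(0)
print(total_loss(rng.uniform(0.6, 1.0, 8), rng.uniform(0.0, 0.4, 50),
                 rng.normal(size=(8, 7)), rng.normal(size=(8, 7))))
```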
Fig. 13 presents an image of a robot detecting and navigating a pedestrian's avoidance point. An avoidance point is where pedestrians recognize robots and alter their path to avoid potential intersections or collisions. This study measured the precise location and distance at which pedestrians change their trajectory upon detecting robots. Specifically, the points where pedestrians follow straight paths, detect robots, and alter their direction were analyzed, and the changes in angle at these points were measured to determine an average.
To evaluate pedestrian perception and avoidance behavior in a real-world setting, trajectory data was collected from approximately 1,000 pedestrians on a school walkway. For each pedestrian, the distance to the point at which they recognized the robot and began to avoid it was measured. Fig. 14 analyzes the average distance of the avoidance points based on the collected trajectory data; the average avoidance distance is approximately 1.77 m.
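One way to obtain avoidance points from individual tracks is sketched below: the pedestrian's heading is estimated from successive positions, the first sample at which the heading deviates from the initial straight course by more than a threshold is taken as the avoidance point, and the distance from that point to the robot is recorded. The 15-degree threshold and the toy track are illustrative assumptions, not the exact processing used to obtain the 1.77 m figure.

```python
import numpy as np

def avoidance_distance(track_xy, robot_xy, heading_change_deg=15.0):
    """Distance from the robot at the first clear heading change along a 2-D pedestrian track."""
    steps = np.diff(track_xy, axis=0)
    headings = np.degrees(np.arctan2(steps[:, 1], steps[:, 0]))
    baseline = headings[0]                                    # initial straight-walking heading
    dev = np.abs((headings - baseline + 180.0) % 360.0 - 180.0)
    turn = int(np.argmax(dev > heading_change_deg))
    if dev[turn] <= heading_change_deg:
        return None                                           # pedestrian never deviated
    avoid_point = track_xy[turn]                              # position where the pedestrian starts to deviate
    return float(np.linalg.norm(avoid_point - robot_xy))

# Toy usage: a pedestrian walking along y = 0 towards a robot at (6, 0) and veering off at x = 4.
xs = np.arange(0.0, 8.0, 0.5)
track = np.stack([xs, np.where(xs < 4.0, 0.0, 0.6 * (xs - 4.0))], axis=1)
print(avoidance_distance(track, robot_xy=np.array([6.0, 0.0])))   # about 2 m for this toy track
```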
### _Modification of D\\({}^{*}\\) Algorithm Incorporating Avoidance Points_
In this study, the D\\({}^{*}\\) algorithm was used to perform path planning based on global and local plans. The D\\({}^{*}\\) algorithm establishes a global path plan first and then detects obstacles
\\begin{table}
\\begin{tabular}{|c|l|} \\hline Hardware & Name \\\\ \\hline Lidar & VLP-16 \\\\ \\hline Camera & Depth D435i \\\\ \\hline RTK-GPS & MRP-2000 \\\\ \\hline IMU & VN100 \\\\ \\hline PC & NVIDIA Jetson AGX OrinTM \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: Customized sensor components.
Figure 14: Finding the average avoidance point.
Figure 15: Modified D\\({}^{*}\\) algorithm driving behavior (D\\(\\geq\\)1.77m).
in real-time while performing local avoidance actions as designed. While the D* algorithm typically performs local avoidance for all obstacles, this study proposed using the derived average pedestrian avoidance point to inform local planning.
Fig. 15 and Fig. 16 present the navigation methods of a robot using the modified D* algorithm, categorized by the detection distance of the avoidance points. Based on the average avoidance point determined in Fig. 14, if the pedestrian's avoidance point is detected beyond 1.77 m, the robot does not avoid the person but instead keeps its efficient path, as seen in Fig. 15. Fig. 16 shows that if a pedestrian's avoidance point is detected within 1.77 m, the robot navigates around the person to ensure the pedestrian's safety. To analyze pedestrian avoidance intentions, the distance between the pedestrian and the robot and the pedestrian's movement direction were used.
The formula expressing this is presented in (2). \\(D\\) represents the distance between the robot and the pedestrian, while \\(\\theta\\) represents the avoidance angle between the robot's and the pedestrian's movement directions. \\(D_{threshold}\\) is the distance threshold for judging avoidance intentions, and \\(\\theta_{threshold}\\) is the angle threshold. According to these conditions, if the robot detects a human avoidance point, condition 1 holds and the robot keeps to its existing path; if it does not detect a human avoidance point, condition 0 holds and the robot avoids the human first.
\\[P_{avoid}=\\left\\{\\begin{array}{ll}1,&\\mbox{if }D>D_{threshold}\\mbox{ and }|\\theta|>\\theta_{threshold}\\\\ 0,&\\mbox{otherwise}\\end{array}\\right. \\tag{2}\\]
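The gating of the avoidance cost by (2) can be sketched as follows, using the cost terms \\(C_{exist}(n)\\) and \\(C_{avoid}\\) from Table 1. The numeric thresholds, the magnitude of \\(C_{avoid}\\), and the way the extra cost is injected into the D* expansion are illustrative assumptions rather than the exact implementation.

```python
D_THRESHOLD = 1.77          # metres, average avoidance distance from Fig. 14
THETA_THRESHOLD = 15.0      # degrees, assumed heading-deviation threshold (not specified in the paper)

def pedestrian_yields(distance_m, heading_deviation_deg):
    """Condition (2): returns 1 if the pedestrian shows avoidance intent, else 0."""
    return int(distance_m > D_THRESHOLD and abs(heading_deviation_deg) > THETA_THRESHOLD)

def node_cost(c_exist, near_pedestrian, distance_m, heading_deviation_deg, c_avoid=5.0):
    """Modified D* node cost: C_avoid is added only when the pedestrian is not yielding."""
    if near_pedestrian and not pedestrian_yields(distance_m, heading_deviation_deg):
        return c_exist + c_avoid        # robot must plan around the pedestrian
    return c_exist                      # robot keeps its existing path

# Toy usage: a yielding pedestrian (2.1 m away, deviating by 25 degrees) leaves the cost unchanged,
# while a non-yielding pedestrian at 1.2 m triggers the avoidance cost.
print(node_cost(c_exist=1.0, near_pedestrian=True, distance_m=2.1, heading_deviation_deg=25.0))  # 1.0
print(node_cost(c_exist=1.0, near_pedestrian=True, distance_m=1.2, heading_deviation_deg=5.0))   # 6.0
```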
Figure 17: Test section for autonomous walkway robot path algorithm (University of Seoul).
Figure 18: Surrounding pedestrian environment for trajectory data collection.
10 times per day, and Table 7 presents the average values of each test per weekday.
The analysis revealed that the modified D* algorithm was more efficient than the traditional D* algorithm. Despite the autonomous robot covering shorter distances with the modified algorithm, it encountered more people, and there was a decrease in travel time and an increase in average speed compared to the traditional algorithm. Table 8 presents the analytical results of the experiment, calculated as averages. Table 9 analyzes how much efficiency increased in the modified D* algorithm compared to the traditional algorithm. When the modified algorithm was applied, about 76% of the encounters resulted in people avoiding the robot first, indicating that many people would move out of the robot's way upon seeing it. Also, the total travel distance of the robot decreased by an average of 3.66%, and the average speed increased by 4.16%. This suggests that the modified D* algorithm selects more efficient paths and enables faster travel. In addition, the travel time efficiency improved by 7.52%, and although the robot encountered on average 4.83% more pedestrians, it performed better with the modified D* algorithm. These results indicate that autonomous robots on pedestrian paths can minimize path costs and enhance battery and energy efficiency.
## V Conclusion
This study focuses on improving the D* algorithm for the path planning of autonomous robots. Based on experimental results, incorporating dynamic pedestrian avoidance behavior patterns improved the safety and efficiency of the robots' path planning.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & People avoid & Distance travelled & Driving Speed & Driving time efficiency \\\\ \\hline Analysis results & 76.89\\% & -3.66\\% & 4.16\\% & -7.52\\% \\\\ \\hline \\end{tabular}
\\end{table}
Table 9: **Efficiency of the modified D* algorithm compared to the traditional D* algorithm.**
The modified D* algorithm extended robots' operating range in complex pedestrian environments and calculated the optimal average distance of avoidance points within a predefined detection range by analyzing human trajectory data. By applying avoidance points, the robot can travel to its destination with the minimum path cost according to the given environmental conditions, thereby reducing travel time and enhancing energy efficiency. This can also contribute to reduced power consumption and longer battery life for the robots. Future developments could refine the algorithm by adding conditions such as areas with high pedestrian density and damaged walkways. In addition, advancements in point cloud data processing technology are needed for commercialization, focusing on improving data processing speed and efficiency. These improvements are expected to make autonomous robot path planning safer and more efficient.
## References
* [1]J. Adrian (2019-04) A glossary for research on human crowd dynamics. Collective Dyn.4 (A), pp. 1-13. External Links: Document Cited by: SSII.
* [2]M. Bertozzi, A. Broggi, and A. Fascioli (2000-04) Vision-based intelligent vehicles: state of the art and perspectives. Robot. Auto. Syst.32 (1), pp. 1-16. External Links: Document Cited by: SSI.
* [3]M. Chen, D. Ku, S. Kim, J. Kwak, Y. Jang, D. Lee, and S. Lee (2023-04) Action plans on the reduction of mobility energy consumption based on personal mobility activation. Energy263 (12), pp. 26019. External Links: Document Cited by: SSI.
* [4]M. Chen, D. Ku, and S. Lee (2022-04) Integrated YOLO and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [5]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [6]M. Chen, D. Ku, and S. Lee (2022-04) Integrated YOLO and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [7]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [8]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [9]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [10]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [11]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [12]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [13]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [14]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [15]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [16]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [17]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [18]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [19]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [20]M. Chen, Y. Chen, and S. Lee (2022-04) The effect of a smart mobility hub based on concepts of metabolism and retrofitting. J. Cleaner Prod.379 (2), pp. 134709. External Links: Document Cited by: SSI.
* [21]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [22]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [23]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [24]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [25]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [26]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [27]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [28]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [29]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [30]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [31]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [32]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [33]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [34]M. Chen, Y. Chen, and S. Lee (2022-04) Integrated YOLO: and CNN algorithms for evaluating degree of walkway breakage. KSCE J. Civil Eng.26 (8), pp. 3570-3577. External Links: Document Cited by: SSI.
* [35]M. Chen, Y. Chen, and S. Lee (2022-04) A comprehensive walk-billy evaluation system for promoting environmental benefits. Sci. Rep.13 (1), pp. 16183. External Links: Document Cited by: SSI.
* [36]M. Chen, Y. Chen, and S. Lee (2022-04) The effect of a smart mobility hub based on concepts of metabolism and retrofitting. J. Cleaner Prod.379 (2), pp. 134709. External Links: Document Cited by: SSI.
* [37]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [38]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [39]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [40]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [41]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [42]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [43]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [44]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI.
* [45]M. D. Feldman, A. Spalanzani, and J. Dugdale (2022-04) Pedestrian behavior in shared spaces with autonomous vehicles: an integrated framework and review. IEEE Trans. Intell. Vehicles8 (1), pp. 438-457. External Links: Document Cited by: SSI* [36] K. Heath and L. Guibas, \"Multi-person tracking from sparse 3D trajectories in a camera sensor network,\" in _Proc. 2nd ACM/IEEE Int. Conf. Distrib. Smart Caneras_, Sep. 2008, pp. 1-9.
* [37] R. I. Sighencea, R. I. Stanciu, and C. D. Caleanu, \"A review of deep learning-based methods for pedestrian trajectory prediction,\" _Sensors_, vol. 21, no. 22, p. 7543, Nov. 2021.
* [38] L. Sun, Z. Yan, S. M. Mellado, M. Hanheide, and T. Duckett, \"3DOF pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data,\" in _Proc. IEEE Int. Conf. Robot. Autom. (ICRA)_, May 2018, pp. 5942-5948.
* [39] J. Amendolia, A. Dayal, L. R. Cenkeramaddi, and A. Jha, \"Edge-distributed fusion of camera-LiDAR for robust moving object localization,\" _IEEE Access_, vol. 11, pp. 73583-73598, 2023, doi: 10.1109/ACCESS.2023.239512.
* [40] T. Kim, E. Jeong, and S. I. You, \"Development of pedestrian property estimation method based on deep neural networks using LiDAR sensor,\" _J. Korean Soc. Transp._, vol. 36, no. 5, pp. 319-330, Oct. 2018.
* [41] Y. Zhou and O. Tuzel, \"VoxelNet: End-to-end learning for point cloud based 3D object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, Jun. 2018, pp. 4490-4499.
\\begin{tabular}{c c} & MIN JE CHOI (Member, IEEE) received the Ph.D. degree in transport engineering from the University of Seoul, in 2023. He is currently a Research Professor with the Department of Transportation Engineering, University of Seoul, working on transportation planning and autonomous robotics. His research interests include efficient transportation planning and demand estimation, 3D modeling mapping service of pedestrian paths for PBV system construction, and autonomous pedestrian path robots. \\\\ \\end{tabular} \\begin{tabular}{c c} & SEONG JIN PARK received the bachelor's degree in mechanical and automotive engineering from Dong Seoul University, in 2023, majoring in autonomous vehicles and robotics. He is currently pursuing the master's degree in transportation planning and autonomous robotics with the Transportation Planning Laboratory, University of Seoul. \\\\ \\end{tabular} \\begin{tabular}{c c} & SION KIM is currently pursuing the Ph.D. degree with the Department of Transportation Engineering, University of Seoul. His main research interests include transportation planning and policy. His research interests include transportation demand forecasting and modeling, public transportation, and sustainability. \\\\ \\end{tabular}
\\begin{tabular}{c c} & SEUNG JAE LEE received the Ph.D. degree in civil and environmental engineering from University College London, in 1995. He is currently a Full Professor with the Department of Transportation Engineering, University of Seoul. His research interests include efficient transport planning systems, land use transport, walkability, and sustainability. He is a member of the editorial board of the international journal _Transportmetrica A: Transport Science_ and the editorial board of the _Journal of Advanced Transportation_. \\\\ \\end{tabular} | This research focuses on modifying the D* algorithm for path optimization of autonomous robots moving on sidewalks. The existing D* algorithm is designed to make the autonomous robots recognize and avoid obstacles. However, in real-world pedestrian settings, observations indicate that passersby on sidewalks tend to notice robots and avoid them themselves. By analyzing people's trajectory data collected through lidar sensors, this study identified the average distance and angle of avoidance at which people start to avoid autonomous robots. Based on this, we proposed a modified D* algorithm that allows the robot to maintain the existing optimal path when people are willing to maneuver around while adopting an avoidance path only when they are not. Experimental results showed that the autonomous robot using the modified D* algorithm outperformed the conventional method regarding driving efficiency and time. This research is expected to contribute to optimizing autonomous robots' walking paths by enabling efficient driving even under limited battery capacity. | Write a summary of the passage below. | 180
ieee/f64e11ea_a639_4f36_bef8_f8d027dabc89.md | # The SPECCHIO Spectral Information System
Andreas Hueni, Laurie A. Chisholm, Cindy Ong, Tim J. Malthus, Mathew Wyatt, Simon A. Trim, Michael E. Schaepman, and Medhavy Thankappan
Manuscript received April 29, 2020; revised July 28, 2020 and August 23, 2020; accepted September 14, 2020. Date of publication September 21, 2020; date of current version October 2, 2020. This work was supported in part by the COST Action Eurospec, in part by the Australian National Data Service project DC-10, in part by the APEX Airborne Prism Experiment, in part by the Swiss Commission on Remote Sensing, and in part by MetEOC projects through the EMRP and EMPIR Programmes co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. The work of Andreas Hueni and Michael E. Schaepman was supported by the University of Zurich Priority Programme on Global Change and Biodiversity. _(Corresponding author: Andreas Hueni.)_ Andreas Hueni, Simon A. Trim, and Michael E. Schaepman are with Remote Sensing Laboratories, University of Zurich, 8057 Zurich, Switzerland (e-mail: [email protected]; [email protected]; [email protected]).
Laurie A. Chisholm is with the Faculty of Science, University of Wollongong, Wollongong, NSW 2522, Australia (e-mail: [email protected]).
Cindy Ong is with Mineral Resources, CSIRO, Clayton, VIC 3169, Australia (e-mail: [email protected]).
Tim J. Malthus is with Coasts Program, CSIRO Oceans and Atmosphere, Brisbane, QLD 4102, Australia (e-mail: [email protected]).
Mathew Wyatt is with the Indian Ocean Marine Research Centre, Australian Institute of Marine Science, Crawley WA, Australia (e-mail: [email protected]).
Medhavy Thankappan is with the Environmental Geoscience Division, Geoscience Australia, Canberra, ACT 2609, Australia (e-mail: [email protected]).
Digital Object Identifier 10.1109/JSTARS.2020.3025117
## I Introduction
Spectral signatures, acquired by spectroradiometers measuring emitted or reflected electromagnetic radiation, are used for a wide range of Earth System science purposes [1]. The quality and interpretation of air- or satellite-borne, remotely sensed spectral signatures relies essentially on calibration [2], validation, comparisons, and models [3, 4], all of which, in turn, often rely on _in situ_ spectral data. Consequently, field and laboratory spectroscopy are indispensable tools to provide the required reference and training data, but they also represent a research method in their own right [5].
The value of spectral data is strongly linked to information about the measurement context [6], i.e., the description of the target and its sampling environment at the time of measurement. Proximal sensing methods offer generally a higher degree of control over explanatory variables and the statistical sampling used in the experiment than airborne or space-based acquisitions. The target and its extent, the time of day and the illumination conditions may be chosen more freely (and repeatedly), while the measurement context can be defined by auxiliary _in situ_ measurements and protocols. In many cases, datasets obtained in such a manner are viewed to be of veridical, i.e., truthful, nature, colloquially referred to as \"ground truth.\" This may be linked to the belief that proximity and perceived control of the sampling process result in correct data, with many newer users of field spectroscopy underestimating the involved complexities [7]. It is however a fact that all measured data are uncertain and thus there may be no such thing as \"ground truth\" [8]. Furthermore, comparisons with datasets acquired by other sensors at different spatial resolutions, instantaneous fields-of-view, and viewing/illumination angles are hampered by scaling and BRDF issues [3, 8, 9, 10]. This once more corroborates the need for precise documentation of measurement conditions [7], in particular if datasets are to be made fit for long-term use and applicable for a variety of purposes by a wider community. We argue here that the term \"ground truth\" refers to a more advanced set of metadata available of the target measured _in situ_, as well as more intrinsic knowledge of the target, rather than to a superior physical measurement on the ground.
The technical solution to enable such long-term usability and data sharing is the spectral database [11, 12, 13], acting as a repository for spectral data and their metadata, where the metadata provide the alluded measurement context, essentially giving meaning to the data [14].
A number of spectral databases have appeared over the past decade since the second version of the SPECCHIO spectral database system [11] was designed and implemented. Examples of such systems are the Ahvaz Spectral Geodatabase Platform [15], a workflow for spectroradiometric field surveys including a spectral database [16], a landcover database in Egypt [17], a multispectral material signature database [18], a spectral library for outcrop characterization [19], and the generic EcoSIS solution [20], amongst many others.
All of these works are based to a large extent on the metadata schemas introduced by SPECCHIO versions 1 and 2 [11, 13], but add their individual flavours to accomplish application-specific services, such as geographic information system, spectral processing, or analysis functionality. This indicates a paradigm shift towards more informed systems, which we term Spectral Information Systems (SIS) and define as follows:
_SIS are systems for building and providing spectral information, utilizing spectral databases as repositories for spectral data and associated metadata._
SIS support the spectroscopy data life cycle [21] by giving metadata-specific guidance during data acquisition, providing automated data ingestion, functions for metadata augmentation (i.e., annotating spectral data with metadata), and spectral data and metadata processing, thus enabling the information retrieval to build knowledge and new conclusions leading to improved experimental planning (see Fig. 1). Information is inferred from data [22] by both metadata augmentation and data processing.
Our experiences with designing and using SPECCHIO V2 as well as the review of the implementations of other spectral libraries alluded to above have helped to shape the requirements for the next generation of spectral information systems. This requirement analysis was significantly supported by the Australian National Data Service (ANDS) data capture project DC-10, aimed at establishing an Australian spectral database system. The most essential findings are summarized as follows.
1. Metadata requirements are a function of the different user groups and their application domain, with each group tending to use a set of general meta attributes plus domain specific ones [23, 24].
2. Native sensor file format support by the data ingestion process is an ongoing task as industry continues to develop spectral sensors to meet scientific requirements, e.g., the measurement of fluorescence [25]. The SIS must allow generic spectral data storage, i.e., provide multi-instrument support. Essentially, while the storage is generic, the file reading is sensor or company specific.
3. Sharing data within research groups requires a more detailed management of user rights to allow collaborative research.
4. The demand for increased visibility of data requires the feeding of data discovery portals, where a portal is a website that gives users unified access to content [26, 27].
5. Monolithic systems with built-in scientific processing can never provide the analytical flexibility required by the broad range of disciplines and in particular by the per se individualistic nature of scientists.
6. The scalability of the SIS with number of spectra and related metadata quickly becomes a relevant issue with the deployment of automated sensors [4] and the aggregation of data on a continental scale, such as in the framework of Digital Earth Australia [28].
7. Access to the system should include a web browser-based option to enable easy, interactive data exploration without the need of installing specialized software.
Version 3 of SPECCHIO was designed to further meet these requirements by offering a flexible metadata system, enhancing the support of new sensors by automating the sensor definition in the database, supporting higher-level languages to allow scientists writing their own algorithms, and redesigning the storage system to enable scalability. Furthermore, the system was updated to a modern client-server architecture with increased system security to accommodate the hosting constraints of many institutions.
This article introduces the concepts chosen for the implementation of SPECCHIO V3, documents the achieved results in terms of system capability and availability, demonstrates the system use in a case study, presents lessons learned, and discusses future system capabilities. It furthermore provides the required knowledge background for SPECCHIO end-users to customize their individual SPECCHIO instances by leveraging in particular the new, flexible, and powerful Entity-Attribute-Value based metadata storage, and optimize their system usage.
## II Concepts
The concepts described in this section address the latest requirements for spectral information systems and reflect the solutions chosen for the SPECCHIO V3 system.
### _SPECCHIO V3 System Architecture_
SPECCHIO V3 is based on a client-server-based architecture (see Fig. 2) using the open-source Glassfish application server1 and the open-source Jersey RESTful web services framework.2 All communication of the SPECCHIO Java client with the spectral database on the server side including user authentication is handled via the Glassfish server in the SPECCHIO application service, effectively shielding the database from direct user access via Structured Query Language (SQL) calls. Java objects are passed between client and server encoded as XML via Hypertext Transfer Protocol Secure (HTTPS), but communication may also use the unencrypted Hypertext Transfer Protocol (HTTP).
Fig. 1: Spectroscopy data life cycle, supported by a spectral information system.
Higher-level languages also rely on the SPECCHIO Java client for communication with the SPECCHIO application service.
The web browser interface is supported through the Glassfish server by the SPECCHIO web service. This web service itself uses the SPECCHIO application programming interface (API) to communicate with the SPECCHIO application service.
### _Support of Higher-Level Processing Languages_
The number of applications of spectroscopy is enormous [29], [30] and consequently an ever-growing plethora of analysis techniques exist. Algorithms to process spectral data are developed by scientists using various programming languages and must invariably deal with spectral data selection, input and output. These basic functions are made available via the SPECCHIO API and thus allow the development of code that can operate on a common data pool, namely the SPECCHIO database run by the MySQL Relational Database Management System (see Fig. 3).
The SPECCHIO API provides a large number of functions to interact with the SPECCHIO database server, which are as follows:
1. spectral data selection via metadata space queries;
2. grouping of selected spectral data by metadata attributes;
3. extraction of metadata vectors for a given spectral dataset;
4. insert and update of spectral data and metadata; and
5. linking of new spectral information with existing metadata.
The API allows the writing of code that supports the data life cycle stages of data ingestion, augmentation, information building, and retrieval [31]. A simple example of information building for a given set of spectra would be the determination of solar angles based on the UTC and latitude/longitude metadata parameters, which in turn would contribute to metadata augmentation. Thus, the generic SPECCHIO API supports the implementation of application or domain specific workflows by end users.
### _Flexible Metadata Storage and Redundancy Reduction_
Metadata are of prime importance within spectral information systems as they define the context of the spectral data and enable their retrieval. There are no metadata standards of spectral data collections yet, although work toward such a goal is underway [23], [32]. It is expected that a standard would define a minimal set of mandatory attributes and allow for optional attributes. The applicability of spectroscopy to many fields, and in particular its ability to estimate bio-geophysical parameters has led to an ever-increasing demand to store application specific metadata.
A static, traditional relational database model, such as adopted for SPECCHIO version 2, offers no solution to such dynamic requirements. Hence, the Entity-Attribute-Value (EAV) paradigm [33] was chosen as the new data storage concept. By doing so, we took advantage of our previous experience of the EAV approach in the APEX Calibration and Information System [34], which is used to handle and process laboratory calibration data of the APEX airborne imaging spectroradiometer [35].
Within the EAV approach, attributes are defined in a meta-layer. Entities, i.e., the spectral data, refer to these attributes and actual attribute values are stored in a generic storage container [36].
The SPECCHIO system uses a generic value table that can store attribute values as integer, double, date/time, string, categorical, spatial field, or binary. The default storage field as well as the cardinality per spectrum are part of the attribute definition. For any given attribute, the cardinality defines the number of permitted metadata values per spectrum, e.g., a capture time can occur only once per spectrum, while the latter may be associated with several keywords.
Categorical values are linked to defined vocabularies that are implemented as taxonomies. The taxonomy approach was based on the one used in the Australian Ecological Knowledge and Observation System [37].
Fig. 3: Tiers of the SPECCHIO system for higher-level language support, utilizing Java Bridge to interface scientific higher-level code with the SPECCHIO API.
Fig. 2: SPECCHIO V3 system architecture showing the encapsulation of the MySQL-based spectral database by using a Glassfish application server for all communication.
Binary values can hold items such as pictures or PDF files encoded as binary streams. The interpretation of the content itself is a task of the system software and as such is irrelevant to the metadata storage concept.
Attributes are grouped into metadata categories to allow configurable, application/domain specific graphical user interfaces.
Metadata of spectral data collections are highly redundant. Typically, a statistically relevant number of measurements will be acquired for the same target, also known as the measurand. The resulting spectra will usually have common attributes such as integration time, spatial location, and target description. The metadata storage model is normalized such that several spectra can refer to the same attribute value. Data normalization is carried out during data ingestion by using an attribute-value lookup table (LUT) containing already inserted values per database user to maintain system integrity. The data insert process checks the LUT for an identical attribute value, and, if existing, inserts a cross reference to the spectrum entity. For new values both the value and a cross-reference are inserted and the new value added to the LUT.
### _Metadata Storage Levels_
Metadata are generally associated with a spectrum. This may seem obvious in first instance, but a more thorough analysis shows that many metaparameters are often shared by several spectra, as pointed out above. The SPECCHIO system has always supported the structuring of spectral data by hierarchies. This is in effect a grouping function and is exploited in the SPECCHIO system to carry out easy selections and updates via the hierarchical tree structure.
Linking metadata at the spectrum level, however, imposes some limitations once the sizes of spectral collections grow. The APEX spectral ground control point campaign, as more comprehensively introduced later in the case study, comprises some 84'000 spectra and serves to illustrate the issue at this point. Two problems present themselves when annotating such a large dataset: 1) metaparameters that apply to all spectra, like a document describing a sampling approach common to all data acquisitions, will be stored once as a value, but will be linked to all spectra, creating \\(\\sim\\)84 000 entries in the spectrum to value cross-relational table, and 2) new datasets added to this campaign need to have these common metaparameters redefined explicitly.
Adding the hierarchy as a further storage level solves both issues: a single link is created between the value and the hierarchy, and new data inserted below this hierarchy will automatically inherit metadata defined at the hierarchy level.
Fig. 4 illustrates the storage levels within the database. It must be noted that the table _hierarchy_x_spectrum_ is filled in all cases to speed up data selections via hierarchies, and hence no storage penalty is paid when linking metaparameter values at the hierarchy level.
### _Campaign Handling_
Data storage in SPECCHIO is organized by campaigns, where a campaign is a high-level container for data collected, for example, a particular purpose or within a certain project. Actual sampling campaigns can be constrained both spatially and temporally, but SPECCHIO applies no such restrictions, i.e., the campaign is a conceptual container grouping data that are in some manner related to each other.
A fundamental concept of the campaign is its relation to file system hierarchies holding spectral input files.3 A campaign can be related to several directory structures, acting as data sources during data ingestion.
Footnote 3: A list of supported input files can be found online. Available: [https://specchio.ch/faq/#what-file-formats-are-supported](https://specchio.ch/faq/#what-file-formats-are-supported)
Campaigns can be built in the system over time by adding new data sources, all contributing to the same campaign. These sources can even be spread over different computers that may be situated in separate networks. Each data source is essentially an entry point into a file system hierarchy. The data ingestion process parses the underlying folders and files by using these entry points. Data loading replicates the hierarchy structure of each source within the database. Re-invocations of the data loader lead to the identification of additional files and folders and a consecutive loading. We term this feature the \"delta-loading\" capability. It supports the gradual building of campaigns, e.g., from data generated by a regular source of spectra, such as spectrometers mounted on flux towers [38] or flown on unmanned aerial vehicles [39].
### _Research Groups_
The concept of the research group allows the collaboration of researchers within the SPECCHIO system, working on a particular campaign. Quite often, remote sensing campaigns involve participants from different institutions, each team handling a different aspect of the measurement process. In such cases, the resulting data can also be spread across the participating institutions. A research group is automatically created for each campaign. Initially, the user creating the campaign will be the only group member. Additional members can be added at any time to an existing campaign, which in turn lets them add their own data sources as well as add other team members.
Fig. 4: Illustration of the storage levels by linking metaparameter values to spectra using the EAV paradigm. (a) Linking at spectrum level. (b) Linking at hierarchy level.
### _Sensors, Instruments, and Calibrations_
Sensors, instruments, and calibrations are part of the SPECCHIO relational database model. A sensor refers to the blueprint specification of a spectrometer, i.e., it is a theoretical concept. An instance of a sensor is called an instrument, i.e., it relates to an actual device that is usually identified by a serial number. Instruments tend to be wavelength calibrated, specifying an average wavelength per spectral band. The associated calibration file cannot be made to substitute the serial number as a means for identifying a specific device, as an instrument can be recalibrated over time, resulting in a different calibration file, while the serial number naturally remains constant. Depending on the manufacturer, instruments resample their calibrated wavelengths to the sensor blueprint specification, while many others deliver the instrument and calibration specific center wavelength per band with each measured spectrum. Furthermore, instruments can also relate to radiometric calibration coefficients.
Instrument calibrations are handled via the calibration entity in the database. Each calibration holds the parameters that define the radiometric and spectral performance of a calibrated instrument and every spectrum captured by an instrument refers to the appropriate calibration in the database. Consequently, instrument coefficients such as wavelengths for a particular calibration are only stored once within the database.
The generation of sensors, instruments, and calibrations yet unknown to the system is automated upon data loading and calibration specific metadata are parsed from the input files where provided. The update of these database system tables requires administrator rights [40] to maintain the integrity of the system. The file loading process however allows such inserts by encapsulating them in a process on the server side, hence shielded from direct user interaction.
### _Generic Spectral Data Storage and Handling_
Generic spectral vector storage in the SPECCHIO spectral database is based on binary large objects. Spectral vectors are stored as floating-point vectors represented as binary strings. This approach allows the storage of spectra irrespective of the number of spectral bands and also increases the retrieval flexibility and speed as spectra can be subset within SQL queries, e.g., allowing the selection of single spectral bands without the need to load the full spectrum into memory.
The system must also generically handle spectral data as the database can hold spectra acquired by different instruments. The concept is based on the spectral spaces paradigm [41], where a spectral space holds spectral vectors that share common characteristics: same number of spectral bands, identical center wavelengths and physical unit of measurement. Spaces are used throughout the system for processing, visualization, and file output. A space is a Java class comprising a Java array to hold the spectral vectors and information about the center wavelengths and physical unit. To deal with the handling of spaces, we introduce the Space Factory.
The Space Factory is a conceptual, central component of the SPECCHIO system. It creates new spaces based on given inputs and contains the logic to form \"non-mixed\" spaces. As an example, assume the use case of creating spectral plots of a number of spectra that were acquired by different instruments. To do so requires that spectral vectors are plotted versus their related wavelengths. Thus, spectra must be compiled into their spectral spaces first before any processing or plotting can be done.
In a first step, the user will select the spectra to be plotted by defining query conditions that are passed to the SPECCHIO EAV query engine. The query engine affects a subspace projection [42]. This yields a number of spectrum IDs that are matching the user's selection. These IDs are then handed to the Space Factory. The Space Factory creates spaces for all existing combinations of the sensors, instruments, calibrations, and measurement units associated with the selected spectra (see Fig. 5).
Utilizing the Space Factory ensures that all spectra contained by a space have a common wavelength per band and the same measurement unit. Spectral spaces and the Space Factory are being used extensively when implementing any spectral processing based on the SPECCHIO API.
## III Results
### _Comparison of SPECCHO Versions 2 and 3_
This section highlights the changes that were made in the upgrade from SPECCHIO V2 to V3. Each of the following table blocks Table I lists the capability or quantity for V2 and V3 and the specific update (\\(>\\)) that was applied.
### _Open Source_
The new SPECCHIO version has been moved to open source as per ANDS regulations. The source code of version 3 was initially deposited on an ANDS project related github4 account, but merged consecutively with the version curated by the Remote Sensing Laboratories (RSL) at the University Zurich. This federated SPECCHIO code is available via github [43].
Footnote 4: [Online]. Available: [https://github.com/IntersectAustralia/dc10](https://github.com/IntersectAustralia/dc10)
### _System Availability_
Most end users prefer to either connect to an existing SPECCHIO instance, where data can be shared with other existing
Fig. 5: Building of spaces by the Space Factory based on user defined query conditions.
users, or to setup their own local instance while avoiding the complexities of an installation at the server end from scratch.
### _Clients_
The SPECCHO client software is able to connect to any SPECCHO server instance. It is compiled in two versions supporting generic platforms and MacOS X specifically. The installation package is available for download from the SPECCHO webpage.5 At the time of writing, SPECCHO runs seamlessly on Java version 8, build 212 or lower. Users with higher Java build numbers should install the latest version of the SPECCHO client or refer to further information given in the SPECCHO FAQ6 to avoid certification problems caused by more recent versions of Java.
Footnote 5: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/)
Footnote 6: Online. Available: [https://specchio.ch/faq/](https://specchio.ch/faq/)
Footnote 7: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/)
### _SPECCHO Virtual Machine_
The complete SPECCHO system including database, Glassfish application server and client has been setup in a CentOS 7 system within an Oracle Virtual Machine. Users can download7 this readymade solution and run it on their own machines.
Footnote 7: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/)
### _Australian SPECCHO Instance_
The new SPECCHO version was made available to the Australian community in mid-2013 and operated by the University of Wollongong. This instance is planned to transition to Geoscience Australia (GA) to provide operational hosting and long-term custodianship of SPECCHO. GA expects to operate this Australian instance as a continental-wide data source within the framework of Digital Earth Australia Program [28], where it is expected to be used routinely for calibration and validation of multisource satellite data [44].
A metadata feed has been implemented for the Research Data Australia service of the ANDS portal. Any SPECCHO server can be configured to support publishing of information to ANDS. A similar data feed has been conceptualized for the Terrestrial Ecosystem Research Network (TERN)8 as well, but has not been implemented at the time of writing. A spectral dataset may be published on the ANDS portal by carrying out a data selection in the SPECCHO user interface, choose a principal investigator and hitting the \"Publish Collection\" button, which in turn will autogenerate an RIF-CS XML file that is sent to the ANDS server and ingested on a periodic basis. An ANDS Collection Key will be generated upon publishing and added as new metadata value to all exported spectra, allowing their identification within the SPECCHO system.
Footnote 8: Online. Available: www.tern.org.au
### _Worldwide SPECCHO Online Instance_
The University of Zurich maintains an online instance of the SPECCHO system, available to users worldwide for testing and productive purposes. The productive database contains some 154 700 spectra (Date: 27.04.2020).
### _Metadata Attributes_
The metadata supported by SPECCHIO has been considerably updated, utilizing the EAV paradigm. The attribute table is prefilled with 380 entries of eight different data types (see Table II). A detailed list of all available attributes can be displayed via a function within the SPECCHIO client application. The large number of floating-point data type attributes is mainly related to the support of bio- and geophysical variables from the domains vegetation, soil, and geochemistry.
New attributes can be added to the system by administrators using MySQL insert statements. Once added, they become immediately available to all clients after the SPECCHIO application service has been restarted.
### _Metadata Entry Methods and Redundancy Reduction_
Entering metadata has been made easier and faster by supporting metadata augmentation from tabular data held in Microsoft Excel files. Existing spectral data can be updated with new metadata by using matching between metaparameters existing in both the database and the input file, e.g., sample plot numbers encoded within the spectral file names may be matched with corresponding numbers in the Excel file using wildcard9 definitions.
Footnote 9: Wildcard: a symbol such as an asterisk which can be used to represent any character or range of characters in certain commands.
The efficiency of the automated metadata redundancy reduction is essentially a function of the redundancy of the input data as only existing redundancies can be minimized. Reductions for, e.g., Analytical Spectral Devices spectrometer binary files amount to an average of 70% with a standard deviation of 10%.
### _Supported Input File Formats_
The number of supported input files has been enhanced to 19 different formats. Native file loading is the preferred option as metadata can be automatically extracted and ingested into the SPECCHIO metaparameter table. The SPECCHIO webpage features a collection of spectral file formats with example files provided to help the user community checking on file format compliance.10
Footnote 10: [Online]. Available: [https://specechio.ch/faq/](https://specechio.ch/faq/)
### _Speecchio API_
The SPECCHIO API is implemented in a Java class and documented online [47]. Any programming language supporting Java either natively such as MATLAB [48] or via bridging technologies, e.g., R via the Java package [49] or Python via JPype [50], can therefore be used to interface SPECCHIO (see Fig. 3). All other SPECCHIO classes available in the client may also be used to interact with the system to maximum effect. Use cases of the SPECCHIO API can be found online.11
Footnote 11: [Online]. Available: [https://specechio.ch/guides/](https://specechio.ch/guides/)
### _SPECCHIO Web Interface_
The building of dynamic interactive web pages for spectral data exploration was first prototyped using the VAADIN framework.12 The concept was greatly refined in collaboration with the University of Applied Sciences of Northwestern Switzerland (FHNW), leading to an appealing solution,13 where data can be queried by dynamic metadata restrictions [51]. This implementation uses Java and Java Script and relies on the SPECCHIO Java API, thus greatly reducing the required implementation and updating efforts.
Footnote 12: [Online]. Available: [http://vx22.geo.uzh.ch:8080/SPECCHIO_Web_Interface/](http://vx22.geo.uzh.ch:8080/SPECCHIO_Web_Interface/)
### _SPECCHIO Graphical User Interface_
Most of SPECCHIO's graphical user interfaces (GUI) were redesigned due to the change to the EAV based metaparameter storage. As a consequence, no software updates are required when new metadata attributes are added to the system. The building of GUIs like the Metadata Editor (see Fig. 6) is purely generic and dependent on the metadata configuration of the SPECCHIO server the client is connected to.
The introduction of an attribute called the Application Domain allows the control of the metadata categories shown by default. The Application Domain is a taxonomy that can be extended or modified by the system administrator via MySQL statements. It thus enables end users to be presented with categories tuned according to their research domain. Fig. 6 shows the default categories for the Spectral Ground Control Point (SGCP) domain [8].
## IV Case Study
This section exemplifies the practical application of SPECCHIO. We selected the spectral ground control point (SGCP)campaign carried out in the framework of calibration and validation for the APEX airborne imaging spectrometer [8], [35], [52] to serve as an example. This campaign comprises some 101'300 spectra (Date: 28.04.2020) at various processing levels (digital numbers, radiances, and reflectance factors), collected over ten years of APEX operation. A fair amount of labor has been invested in annotating these data with spatial location and elevation, target classification, UTC time stamp, solar angles, cloud cover, photographs, field protocol scans, processing algorithm notes serving as provenance information, spatial sampling scheme, beam geometry [46], sensor to target distance, measurement support definition [10], and corresponding airborne mission identifier (see also Fig. 6 for an example of an SGCP reflectance set displayed in the Metadata Editor). The life cycle steps applying to this SGCP campaign are shown in Fig. 7. Data are imported from ASD binary files and augmented with most of their metadata using the SPECCHIO Metadata Editor (see Fig. 6). Additional metadata are inserted by algorithms written in MATLAB as described below.
These import and processing steps can be carried out by all researchers added as collaborators to the SGCP campaign. This allows that each field team can individually upload their SGCP data into the database. Each field mission gets an airborne mission designator in its top folder to allow easy identification. This can be observed in Fig. 6, where the hierarchy names under the campaign \"APEX Spectral Ground Control\" all start with APEX mission designators, like M0150. This arrangement, combined with a guideline on how to load and augment SGCP data, enables the loading of data into SPECCHIO from various machines and operating systems and by different people at their own time.
Radiance data are processed in a purpose-built, interactive MATLAB [48] software tool, utilizing the SPECCHIO API, to produce reflectance factors, involving the following steps:
1. automated flagging of white reference and target spectra in the metadata;
2. correction of radiometric steps between detectors [53] and storage of corrected radiances as intermediate products in the database;
3. interpolation of reference panel radiances over time, re-sampled to the time stamps of the target spectra; and
4. storage of the computed reflectance factors in the database.
Reflectance data are used to validate and quality control APEX surface reflectance data and APEX at-sensor radiances, the latter by employing radiative transfer modeling [8]. These validation processes can be largely automated by combining the metadata of both _in situ_ and airborne datasets, as originally conceptualized for the APEX processing and archiving facility [52] and recently implemented operationally [54]. In essence, the SPECCHIO system is queried for each flight line to identify spectra matching the airborne acquisition in both space and time. The spectrum metadata is sufficiently detailed to produce validation products with automated, target-specific annotations. An example of such an automated validation is shown in Fig. 8, indicating some remaining calibration problems, such as a loss of energy in the blue wavelengths below 450 nm or interpolation artifacts in water vapor absorption regions.
An analysis of the UZH RSL in-house database, hosting the APEX SGCP campaign among others, shows that the average number of metaparameters per spectrum is 15, while a carefully curated dataset like the APEX SGCP campaign reaches a mean of 36 (see Fig. 10).
Specific information about instruments, including their spectral and radiometric calibration, is not part of the metaparameter count mentioned above, but is regarded a system information which can only be changed by administrators or server processes having administrator rights. Any user can however inspect these data using the Instrumentation Metadata Editor (see Fig. 9), such as the individual components that make up a radiometric calibration of an ASD instrument.
## V Discussion
The development of SPECCHIO version 3 has been a major effort as the whole architecture has largely been redesigned. The use of the EAV paradigm for the storage of metadata is one of the most eminent changes as it allows for the quick adaptation
Fig. 6: SPECCHIO Metadata Editor graphical user interface illustrating the hierarchical data browser for data selection (left side), metadata fields grouped by categories (middle) and category selection panel showing the default configuration for SGCPs (right side).
Fig. 7: Spectroscopy data life cycle as applied for SGCP campaign data.
of new metadata attributes within the system. This is in sharp contrast to previous versions where a database model update and software upgrade had been required. New metaparameters are instantly available to the users after being added to the system, the only exception are new binary contents where both the server and client software would need upgrading as the interpretation is done in software.
The paradigm change from spectral database to spectral information system is reflected in the new software by the EAV based metadata storage but also in the new API, offering many functions to select, group, and reinsert data, essentially allowing the building of information from algorithms implemented in higher level programming languages. The support of such languages is one key step toward the use of the SPECCHO data pool for dynamic applications, e.g., continuous data insert from tower mounted instruments, and to involve more researchers by allowing them the use of their development environment of choice. In combination with the new research group functionality, a team of researchers may work on the same data source while writing their algorithms in different programming languages.
One focus of current research is the definition of mandatory and optional metaparameters [23, 32]. The previous version of SPECCHO supported a preliminary data quality scheme prescribing optional and mandatory metaparameters. This has been dropped in the new version, as it had never been used by any SPECCHO end user and research by Rasaiah _et al._[23] indicates that requirements differ between applications and user groups. Future versions may again include such a feature, which at that point will allow more flexibility due to the underlying EAV based storage supporting the definition of application-specific metadata requirements.
While data quality is obviously very important, there are currently no data quality indicators implemented in the system. Again, there is no technical limitation in doing so, but a missing scientific approach on how to best estimate the quality of a data set, where quality ideally is defined as \"fit for purpose.\" Thus, in the current version, data are imported \"as is\" and not assigned any automatic quality flag. A future extension of SPECCHO in the framework of MetEOC-314 will introduce the storage and propagation of spectroradiometric uncertainties, at which point the notion of data quality will no longer only be qualitative but quantitative.
Footnote 14: [Online]. Available: [http://empir.npl.co.uk/meteoc](http://empir.npl.co.uk/meteoc)
One measurement of data quality is the metadata space density [40], based on the assumption that more metadata relates to a higher descriptive power of the metadata space, enabling the interpretation of the scientific data [55]. The metadata analysis of the RSL in-house database, as presented in the case study, demonstrates that carefully curated datasets reach a mean of 36 metaparameters per spectrum of a maximum 380 possible entries (see Fig. 10). This statistical analysis also demonstrates that spectral metadata spaces are essentia
Fig. 8: Example of an automated validation result of an APEX HDRF cube, showing a comparison of spectra with SGCP data showing the target variation as grey envelope (top left), a true-color image of the scene zoomed into SGCP neighborhood with the SGCP indicated in the middle by a red circle (top right), the ratio between APEX and ASD (bottom left), and absolute differences in reflectance (bottom right).
Fig. 10: Histograms of number of metaparameters per spectrum for all campaigns and for the APEX SGCP campaign, showing a bimodality with the distribution around a mean of 36 associated with the well-curated APEX SGCP campaign.
Fig. 9: Instrumentation Metadata Editor showing the digital number spectrum being part of the radiometric calibration coefficients for the 3βFOV fore optic of ASD instrument 18140.
thus confirming our flexible EAV storage choice where only the available metaparameters take up storage space.
It must be noted that augmenting and processing a spectral dataset still requires manual labor, dedication, and attention to the detail, despite streamlined interfaces, group update functions, and automated calculation algorithms.
A certain amount of development time has been spent on implementing new file format readers. It is an irksome duty of the maintainers of the code, as almost every new sensor becoming available appears to adopt another flavor of file format. We advocate that these proprietary formats should be dropped in favor of a standardized file format, such as the combination of ISO 19156 standard and Sensor Model Language proposed by Jimenez _et al._[32] or the SpectroML standard extended for field spectroscopy data and metadata [56].
## VI Conclusion
SPECCCHIO version 3 represents a major release of the SPECCHIO system, upgrading it to a spectral information system. The key improvements are a flexible metadata storage system that is easily extended to cater for the needs of different science domains, and a rich API that allows the automation of all SPECCHIO system functions. Scientific end-users can thus integrate direct SPECCHIO database access in their processing algorithms written in a programming language of their choice by using common Java bridging technologies.
Moving to open source opens the opportunity to involve more developers worldwide and further improve the system.
## Acknowledgment
This project has greatly benefitted through interactions with EcoSIS, the information technology services at the University of Wollongong, and SpecNet, by work carried out in the framework of the COST Actions OPTIMISE and SENSECO, and by programming support through student semester projects at FHNW, Switzerland.
## References
* An assessment,\" _Remote Sens. Environ._, vol. 113, pp. 123-137, 2009.
* [2] K. J. Thome, \"In-flight intersensor radiometric calibration using vicious approaches,\" in _Post-Launch Calibration of Satellite Sensors_, S. A. Morain, Ed. London, U.K.: Taylor Francis, 2004, pp. 95-102.
* What is it and why do we need it?\" _Remote Sens. Environ._, vol. 103, pp. 227-235, 2006.
* [4] A. Porcar-Castell _et al._, \"EUROSPEC: At the interface between remote-sensing and ecosystem CO2 flux measurements in Europe,\" _B biogeoscciences_, vol. 12, pp. 6103-6124, 2015.
* [5] E. J. Milton, \"Field spectroscopy,\" in _Geoinformatics_, vol. 1, P. Atkinson, Ed. Oxford, U.K.: EOLSS Publishers/UNESCO, 2009, pp. 209-239.
* [6] E. J. Milton, N. P. Fox, and M. Schapemman, \"Progress in field spectroscopy,\" in _Proc. Geosci. Remote Sens. Symp._, 2006, pp. 1966-1968.
* [7] E. J. Milton, M. E. Schaepman, K. Anderson, M. Kneubuhler, and N. Fox, \"Progress in field spectroscopy,\" _Remote Sens. Environ._, vol. 113, pp. 92-109, 2009.
* some considerations,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 10, no. 3, pp. 1117-1135, Mar. 2017.
* [9] K. Pfutzner, R. E. Bartolo, B. Ryan, and A. Bollhofer, \"Issues to consider when designing a spectral library database,\" in _Proc. Spatial Intell., Innovation Praxis, Nat. Biennial Conf. Spatial Sci. Inst._, 2005, pp. 416-425.
* [10] K. Anderson _et al._, \"Inter-comparison of hemispherical conical reflectance factors (HCRF) measured with four fibre-based spectrometers,\" _Opt. Express_, vol. 21, pp. 605-617, 2013.
* [11] A. Hueni, J. Nieke, J. Schopfer, M. Kneubuhler, and K. Itten, \"The spectral database SPECCHIO for improved long term usability and data sharing,\" _Comput. Geosci._, vol. 35, pp. 557-565, 2009.
* [12] S. Bojinski, M. Schaepman, D. Schlaepfer, and K. Itten, \"SPECCCHIO: A Web-accessible database for the administration and storage of heterogeneous spectral data,\" _Photogrammetry Remote Sens._, vol. 57, pp. 204-211, 2002.
* [13] S. Bojinski, M. Schaepman, D. Schlaepfer, and K. Itten, \"SPECCCHIO: a spectrum database for remote sensing applications,\" _Comput. Geosci._, vol. 29, pp. 27-38, 2003.
* [14] L. Floridi, \"Is information meaningful data?\" _Philosophy Phenomenological Res._, vol. 70, pp. 351-370, 2005.
* [15] M. Karami, K. Rangaran, and A. Saberi, \"Using GIS servers and interactive maps in spectral data sharing and administration: Case study of Ahvaz Spectral Geodatabase Platform (ASGP),\" _Comput. Geosci._, vol. 60, pp. 23-33, 2013.
* [16] L. Pompilio, P. Villa, M. Boschetti, and M. Pepe, \"Spectroradiometric field surveys in remote sensing practice: A workflow proposal, from planning to analysis,\" _IEEE Geosci. Remote Sens. Mag._, vol. 1, no. 2, pp. 37-51, Jun. 2013.
* [17] S. Arafat, E. Farg, M. Shokr, and G. Al-Kazaz, \"Internet-based spectral database for different land covers in Egypt,\" _Adv. Remote Sens._, vol. 2, pp. 85-92, 2013.
* [18] S. Iregnfried and J. Hock, _Acquisition and Storage of Multispectral Material Signatures-Workflow Design and Implementation_. Karlsruhe, Germany: KIT Scientific Publishing, 2015, pp. 123-135.
* [19] L. Colini _et al._, \"Mit Etna (Italy) and Sahara desert (Algeria) sites: CAL/VAL activities for hyperspectral data and development of spectral libraries for outcropting surfaces characterization,\" in _Proc. EARSEL SIG Imag. Spectroscopy Workshop Zurich_, 2017, pp. 129-130.
* [20] EcoSIS Executive Team, \"Ecological spectral information system (EcosISIS).\" [Online]. Available: [https://ecosis.org](https://ecosis.org), Accessed: 2020.
* [21] L. Chisholm and A. Hueni, \"The spectroscopy dataset lifecycle: Best practice for exchange and dissemination,\" in _AuaCover Good Practice Guidelines: A Technical Handbook Supporting Calibration and Validation Activities of Remotely Sensed Data Product_, A. Held, S. Phinn, M. Soto-Berelov, and S. D. Jones, Eds. Canberra, ACT, Australia: TERN AusCover, 2015, pp. 234-248.
* [22] J. Rowley, \"The wisdom hierarchy: Representations of the DIKW hierarchy,\" _J. Inf. Sci._, vol. 33, pp. 163-180, 2007.
* [23] B. Rassiah, S. Jones, C. Bellman, and T. Malthus, \"Critical metadata for spectroscopy field campaigns,\" _Remote Sens._, vol. 6, pp. 3662-3680, 2014.
* [24] B. Rassiah, S. Jones, C. Bellman, T. Malthus, and A. Hueni, \"Assessing field spectroscopy metadata quality,\" _Remote Sens._, vol. 7, 2015, Art. no. 4499.
* [25] A. Burkart _et al._, \"A method for uncertainty assessment of passive sum-induced chlorophyll fluorescence retrieval using an infrared reference light,\" _IEEE Sens. J._, vol. 15, no. 8, pp. 4603-4611, Aug. 2015.
* [26] W. Tang and J. Selwood, _Spatial Portals. Gateways to Geographic Information_. Redlands, CA, USA: ESI Press, 2005.
* [27] B. Vokcher, A. Richter, and M. Mittubick, \"From geoportals to geographic knowledge portals,\" _ISPRR Int. J. Geo-Inf._, vol. 2, pp. 256-275, 2013.
* foundations and lessons learned,\" _Remote Sens. Environ._, vol. 202, pp. 276-292, 2017.
* [29] G. Shaw and D. Manolakis, \"Signal processing for hyperspectral image exploitation,\" _IEEE Signal Process. Mag._, vol. 19, no. 1, pp. 12-16, Jan. 2002.
* prospective technologies and applications,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2006, pp. 2005-2008.
* [31] A. Hueni, L. Suarez, L. Chisholm, and A. Held, \"The use of spectral databases for remote sensing of agricultural crops,\" in _Hyperspectral Remote Sensing of Vegetation: Fundamentals, Sensor Systems, Spectral Libraries, and Data Mining for Vegetation_, 2nd ed. vol. 1, P. S. Thenkhauli, G. J. Lyon, and A. Hueuele, Eds. Boca Raton, FL, USA: CRC Press, 2018, p. 449.
* [32] M. Jimenez, M. Gonzalez, A. Amaro, and A. Fernandez-Renau, \"Field spectroscopy metadata system based on ISO and OGC standards,\" _ISPRS Int. J. Geo-Inf._, vol. 3, pp. 1003-1022, 2014.
* [33] P. Nadkarni, L. Marenco, R. Chen, E. Skoufos, G. Shepherd, and P. Miller, \"Organization of heterogeneous scientific data using the EAV/CR representation,\" _J. Amer. Med. Informat. Assoc._, vol. 6, pp. 478-493, 1999.
* imaging spectrometer) calibration information system,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 11, pp. 5169-5180, Nov. 2013.
* [35] M. Schaepman _et al._, \"Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX),\" _Remote Sens. Environ._, vol. 158, pp. 207-219, 2015.
* [36] V. Dinu and P. Nadkarni, \"Guidelines for the effective use of entity-attribute-value modeling for biomedical databases,\" _Int. J. Med. Informat._, vol. 76, pp. 769-779, 2007.
* [37] D. J. Turner, A. K. Smyth, C. M. Walker, and A. J. Lowe, \"AEKOS: Next-generation online data and information infrastructure for the ecological science community,\" in _Terrestrial Ecosystem Research Infrastructures: Challenges and Opportunities_, A. Chabbi and H. W. Loescher, Eds. Boca Raton, FL, USA: CRC Press, 2017.
* [38] M. Balzarodo _et al._, \"Ground-based optical measurements at European flux sites: A review of methods, instruments and current controversies,\" _Sensors_, vol. 11, pp. 7954-7981, 2011.
* [39] A. Burkart, S. Cogliati, A. Schickling, and U. Rascher, \"A novel UAV-based ultra-light weight spectrometer for field spectroscopy,\" _IEEE Sens. J._, vol. 99, 2013.
* [40] A. Hueni, T. Malthus, M. Kneubuehler, and M. Schaepman, \"Data exchange between distributed spectral databases,\" _Comput. Geosci._, vol. 37, pp. 861-873, 2011.
* [41] D. Landgrebe, _On Information Extraction Principles for Hyperspectral Data_. West Lafayette, IN, USA: Purdue Univ., 1997.
* [42] A. Hueni, J. Nieke, J. Schopfer, M. Kneubuhler, and K. Itten, \"Metadata of spectral data collections,\" in _Proc. 5th EARSeL Workshop Imag. Spectroscopy_, 2007, p. 14.
* [43] A. Hueni, \"SPECCHO source code.\" [Online]. Available: [https://github.com/SPECCCHIODB/SPECCCHIO](https://github.com/SPECCCHIODB/SPECCCHIO), Accessed: 2020.
* [44] C. Ong, T. Malthus, I. C. Lau, M. Thankappan, and G. Byrne, \"The development of a standardised validation approach for surface reflectance data,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2018, pp. 6456-6459.
* [45] European Commission DG XI, CORINE land cover, European Commission Directorate-General Environment, Nuclear Safety and Civil Protection, Office for Official Publications Eur. Communities, Luxembourg city, Luxembourg, 1993.
* [46] G. Schaepman-Strub, M. Schaepman, T. H. Painter, S. Dangel, and J. V. Martonchuk, \"Reflectance quantities in optical remote sensing: definitions and case studies,\" _Remote Sens. Environ._, vol. 103, pp. 27-42, 2006.
* [47] SPECCHO, \"SPECCHO API.\" [Online]. Available: [https://specchio.ch/javadoc/](https://specchio.ch/javadoc/), Accessed: 2020.
* [48] The MathWorks Inc., Natick, MA, USA, _Matlab_, 2017.
* [49] S. Urbanek, \"Lfava: Low-level R to Java interface,\" Jun. 7, 2020. [Online]. Available: [http://CRAN.R-project.org/package](http://CRAN.R-project.org/package) = Java
* [50] JPype, \"Jype documentation.\" [Online]. Available: [https://jyppe.readthedocs.io/en/latest/index.html](https://jyppe.readthedocs.io/en/latest/index.html), Accessed: 2020.
* [51] A. Hueni, C. Schibli, R. Rossi, and M. Gwerder, \"SPECCHO spectral information system web interface,\" in _Proc. EARSeL SIG IS Zurich_, 2017, pp. 123-124.
* [52] A. Hueni _et al._, \"Structure, components and interfaces of the airborne prism experiment (APEX) processing and archiving facility,\" _IEEE Trans. Geosci. Remote Sens._, vol. 47, no. 1, pp. 29-43, Jan. 2009.
* [53] A. Hueni and A. Bialek, \"Cause, effect and correction of field spectroradiometer inter-channel radiometric steps,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 10, no. 4, pp. 1542-1551, Apr. 2017.
* [54] C. Meiller, H. Kuchelle, M. Werfeli, and A. Hueni, \"A calibration and validation tool for data quality analysis of airborne imaging spectroscopy data,\" in _Proc. Int. Geosci. Remote Sens. Symp._, 2020, pp. 6234-6237.
* [55] W. K. Michener, \"Metadata,\" in _Ecological Data: Design, Management and Processing_, W. K. Michener and J. W. Brunt, Eds. Oxford, U.K.: Blackwell Sci., 2000, pp. 92-116.
* [56] T. Malthus and A. Hueni, \"An XML-based format for the exchange of spectroradiometric data,\" in _Proc. EARSeL SIG IS_, 2009. | Spectral Information Systems provide a framework to assemble, curate, and serve spectral data and their associated metadata. This article documents the evolution of the SPECCHIO system, devised to enable long-term usability and data-sharing of field spectroradiometer data. The new capabilities include a modern, web-based client-server architecture, a flexible metadata storage scheme for generic metadata handling, and a rich application programming interface, enabling scientists to directly access spectral data and metadata from their programming environment of choice. The SPECCHIO system source code has been moved into the open source domain to stimulate contributions from the spectroscopy community while binary distributions, including the SPECCHIO virtual machine, simplify the installation and use of the system for the end-users.
Information systems, metadata, relational databases, spectroradiometers, spectroscopy. | Provide a brief summary of the text. | 162 |
Philip T. Metzger
University of Central Florida, Florida Space Institute, 12354 Research Parkway, Suite 214, Orlando, FL 32826-0650; PH 407-823-5540; email: [email protected]
Daniel T. Britt
University of Central Florida, Department of Physics, 4111 Libra Drive, Physical Sciences Bldg. 430, Orlando, FL 32816-2385; PH 407-823-2600; email: [email protected]
Stephen D. Covey
Deep Space Industries, Inc., 13300 Tanja King Blvd, #408, Orlando, FL 32828; PH 904-662-0550; email: [email protected]
John S. Lewis
Deep Space Industries, Inc., P.O. Box 67, Moffett Field, CA 94035; PH 855-855-7755; email: [email protected]
## Introduction
The first asteroid simulants workshop was held October 6-7, 2015 at the offices of the Florida Space Institute (FSI), part of the University of Central Florida. It was co-sponsored by Deep Space Industries, FSI, and the Center for Lunar and Asteroid Surface Science (CLASS), a node of NASA's Solar System Exploration Research Virtual Institute (SSERVI). The attendees reviewed the history of lunar soil simulants in order to avoid the problems that were encountered in the lunar program and to adopt its best practices. Then they identified needs for asteroid simulants and determined their requirements. Finally, they developed a strategy including which types of simulants to make first, how to validate them, and how they should be stored and distributed. This paper reports on the proceedings.
Background: Lunar Simulations
The Apollo program demonstrated the challenges in designing hardware to work with extraterrestrial regolith, and the lessons are equally important for asteroid simulants. For example, the drive tubes to obtain core samples of lunar regolith needed to be redesigned after Apollo 11 and again after Apollo 14 because of the difficulty driving the tube into highly frictional and compacted lunar regolith, and because the sample that was driven into the tube became unacceptably disturbed by the geometry of the tube. The development cycle for space technology will become more successful and less expensive if high fidelity simulants are available earlier.
The low fidelity simulant used for drive tube testing was a mixture of kaolinite clay and League City (Texas) sand. After the Apollo program, several improved lunar soil simulants were developed, including Minnesota Lunar Simulant-1 (MLS-1) [Weiblen and Gordon, 1988; Weiblen et al, 1990] and Johnson Space Center-1 (JSC-1) [Willman et al, 1995]. JSC-1 was designed for geotechnical purposes and to a lesser extent chemistry of lunar mare soil, although its chemistry was really not typical of mare soil [Taylor and Liu, 2010]. The simulant was often used, perhaps inappropriately, in tests that needed a better chemical simulant. Over time additional simulants were developed by various users, everyone according to his or her unadvised opinion. This is because the vast majority of users have neither the time nor background to develop regolith consistent with the details of lunar geology. Many tests were performed with incorrect simulants resulting in wasted time, or worse, deceptive results that in the extreme of a spaceflight program could have tragic consequences [LEAG, 2010].
To rectify this, the NASA Marshall Spaceflight Center (MSFC) was designated to manage lunar simulants and they developed the 5-step approach given in Table 1. (This is the process we will follow in this asteroid simulants project in close collaboration with the NASA team.) Having MSFC act as the central clearinghouse following this rigorous process enabled NASA to \"obtain better simulants, with rigorous specifications and performance, and at lower expense\" [McLemore, 2014]. Unfortunately, many researchers still set about on their own, disregarding the structure NASA had set in place, and so many lunar simulants were developed without working with the NASA MSFC team. As a result, by 2010 more than 30 simulants had been developed by various groups in the U.S. and overseas [LEAG, 2010]. The result was (1) many poorly designed simulants that had incorrect properties and (2) the rampant misuse of well-designed simulants by expecting them to have properties that they weren't designed for [Taylor and Liu, 2010]. However, in the midst of that chaos the NASA team with its contractors successfully re-created and characterized the JSC-1 simulant, now as JSC-1A, and created the high-fidelity chemical/mechanical lunar highlands simulant series NU-LHT. These pedigreed simulants along with the careful means to measure them gave NASA the tools it needed for writing contracts to develop hardware such that it could work with lunar soil. It also gave the more careful technologists who did not misuse the simulants the ability to compare results with one another and with lunar soil. With more progress developing pedigreed simulants and demonstrating the benefits of their use in publications, the tide of poor lunar simulant engineering outside of the NASA-led team may eventually be curbed.
If the lunar community could have the ideal soil simulant, it would many properties in common with actual lunar soil. The lunar community created a list of 32 desired properties and organized them into ten categories [McKay and Blacic, 2002]. Some of these such as agglutinates are not relevant or significant in the asteroid case, while additional properties will need to be added. For example, with asteroids we may desire to replicate the volatile liberation temperature for water and organic matter as well as the chemistry of the organic matter.
It turns out that it is far too difficult - perhaps impossible - and far too expensive to create a simulant with all of the desired properties. Extraterrestrial regolith is too exotic compared to terrestrial materials and too complex to engineer with perfect fidelity. Thus, choices had to be made about which properties would be simulated, and the users did not all have the same requirements. Some wanted simulants for different parts of the Moon (just as it will be with asteroid simulants and different types of asteroids); some wanted broad-use simulants whereas others had very specific needs; some needed higher fidelity than others; some needed fine dust while others needed the coarser fraction or even cobbles; and they needed simulants that had undergone different degrees of weathering on the lunar surface [Stoeser 2009]. Thus, it became clear that one simulant would not work for all needs, and this is one of the main factors leading to the misunderstanding and misuse of lunar simulants. Many users thought that a simulant was a simulant was a simulant. It turns out that with the simulants developed so-far, the community focused upon just three properties or groups of properties: the particle size distribution, the mechanical properties, and the chemical properties [McKay, 2009]. Some users amended simulants in various ways or created ad hoc simulants for specialized tests.
A rigorous and cost-effective way to deal with this situation is to develop _families_ of simulants with roots and branches, representing the basic simulant and its variants for specific needs. The NASA simulants program developed generally two such families, the JSC and NU-LHT series. The JSC-1 simulant was re-constituted as JSC-1A [Gustafson, 2009] and a series of its adaptations were developed, including JSC-1Af for lunar dust, JSC-1Ac for coarse particles, and a version of JSC-1A that included agglutinated particles made via plasma torch processing [Gustafson et al, 2008].
\\begin{table}
\\begin{tabular}{p{34.1pt}} \\hline
1. Development of the necessary simulant requirements based on the appropriate standards \\\\
2. Development of a method by which to compare simulants to a reference \\\\
3. Selection and measurement of the appropriate reference materials. \\\\
4. Development and demonstration of simulant development and process control techniques \\\\
5. Selection of suitable simulant feed stocks \\\\ \\end{tabular}
\\end{table}
Table 1: **Five Step Approach to Developing Simulants**[McLemore, 2014]These simulants somewhat represent the chemistry of lunar maria-type regolith. For the lunar highlands, the very high fidelity NU-LHT series of simulants was developed to meet most chemical and geotechnical needs. They have a mixture of minerals to represent the lunar highlands, plus simulated breccias as well as glass including pseudo-agglutinates that give it high-fidelity mechanical behavior [Stoeser et al, 2008; Stoeser et al, 2010]. In addition to the US-developed simulants, a number of international simulants were assessed by the NASA simulants team. A Canadian team developed the simulant OB-1 for lunar drilling tests [Battler et al, 2006; Richard et al, 2007; Battler and Spray, 2009] and a follow-on version, Chenobi [Electric Vehicle Controllers, 2009]. FJS-1 and MKS-1 were developed by the Japanese space agency [Jiang et al, 2011] and NAO-1, CAS-1, and TJ-1, were developed by different teams in China [Li et al, 2009; Zheng et al, 2009; Jiang et al, 2011].
The NASA MSFC team developed a Figure of Merit (FoM) to compare how well the various lunar simulants match a particular lunar soil [Rickman et al, 2010]. Based on their analysis of simulants available at that time, the _Lunar Regolith Simulant User's Guide_ including a \"Fit-to-Use\" table was published to help technologists and scientists select which simulant to use for particular activities [Rickman et al, 2010]. This guide comprised analyses of MLS, the JSC-1 series, the NU-LHT series, OB-1, Chenobi, and FJS.
## Prior use of asteroid simulants
A literature search (as exhaustive as possible) found that, to-date, various ad hoc simulants are being used for a wide variety of asteroid studies for technology and for pure science. Housen [1992] used a mixture of 50% basalt fragments, 24% fly ash, 20% iron grit, and 6% water to simulate asteroid regolith for cratering ejecta experiments. Fujiwara et al [2000] used \"various kind of rocks, sand, and artificial materials like bricks\" while Yano et al [2002] used glass beads and _lunar_ soil simulant to study asteroid sampling via projectile impact. Sears et al [2002] studied the formation of smooth regolith ponds on asteroids by using _Martian_ regolith simulant JSC-Mars-1 and with mixtures of sand plus iron grains. Sandel et al [2006] used \"meteorite simulants\" for impact experiments to study collisional disruption and resulting fragment distribution from asteroids. Izenberg, and Barnouin-Jha [2006] used playground sand with embedded cobbles to simulate asteroid regolith to study how impacts affect the morphology and vertical layering of asteroids. Makabe [2008] used a simulant of a C-type asteroid to study projectile impact in the Hayabusa-2 mission for capturing samples. Guttler et al [2012] studied crater formation on asteroid surfaces using spherical glass beads. Barucci et al [2012] used a _lunar_ regolith simulant and \"many simulants\" to test an asteroid sampling mechanism. Durda et al [2012, 2013, 2014] studied the morphology of asteroid surfaces using _lunar_ soil simulant JSC-1A, glass microspheres, and bread flour. Bernold [2013] performed asteroid mining and conveying experiments using _lunar_ regolith simulants. Crane et al [2013] used shaving from a steel bar to simulate a Tholen Type M asteroid regolith for thermal inertia tests. Murdoch et al [2013] studied the strength properties of asteroid regolith in microgravity using spherical soda-line glass beads. Backes et al [2014] used floral foam and \"a variety of simulants\" both hard and soft to represent the surface of a comet. This survey shows that, at the present, the materials chosen for asteroid simulants are generally low-fidelity or inappropriate and lack the commonality and standards that would promote comparing results. (Not included in this list are the many spectroscopic studies of terrestrial analog materials that inform remote observation and calibrate spacecraft instruments, nor space weathering experiments or the like.) This review also demonstrates the wide range of uses for simulants, and it indicates the need for leadership in producing easily accessible, pedigreed simulants (along with sample preparation procedures) in appropriate levels of fidelity and cost.
## Needs for Asteroid Simulant
Simulants will be needed to support the development of space missions. These include:
* Japan's Hayabusa-2 mission, which has NASA-funded collaborators and which will perform explosive cratering and sample return of the asteroid's surface
* NASA's Origins-Spectral Interpretation-Resource Identification-Security-Regolith Explorer (OSIRIS-REx) mission, which will use a burst of gas to collect asteroid regolith for sample return
* NASA's Asteroid Redirect Mission (ARM), both the robotic portion and the crew exploration portion
* Space mining by commercial companies
* Future missions for planetary defense
Science missions need asteroid simulants to develop sampling devices, to test digging methods, to test sensors and dust mitigation, etc. Simulants are used during the mission to compare to results that are observed at the destination, to interpret the results and decide upon a course of action. For example, when a Mars Exploration Rover had trouble driving, simulants were used to determine the best wheel motions to get un-stuck or to avoid getting stuck. The robotic portion of the ARM will require simulants to test asteroid grappling devices and/or boulder extraction from an asteroid. The human mission to visit the returned asteroid will need simulants to develop crew tools for studying and sampling the asteroid.
NASA's technology development program will require asteroid simulants. These are predominantly in the following Technology Areas: TA04 Robotics, Tele-Robotics and Autonomous Systems; TA06 Human Health, Life Support, and Habitation Systems; TA07 Human Exploration and Development of Space; TA08 Science Instruments, Observations and Sensor Systems; and TA09 Entry, Descent and Landing; although some applications exist in the other Technology Areas, as well. Some examples include: mobility testing of robotics on asteroid regolith; human health when exposed to asteroid dust and organic molecules; mining and processing of resources from asteroids; instruments to study asteroids; and propulsion systems that will not overly disturb asteroid regolith during proximity operations.
In addition to the NASA need for asteroid simulants, there is a newfound commercial need for them. Multiple companies, Deep Space Industries included, have announced plans to mine asteroids for volatiles and other components. In the early stages of asteroid mining, the low-hanging fruit is water for life support and propulsion. Extracting CO\\({}_{2}\\) in addition to H\\({}_{2}\\)O enables the production of soft cryogens including methane and liquid oxygen, and even the production of asteroid-derived storable propellants such as hydrogen peroxide and dimethyl ether. The development of these commercial applications requires simulants for anchoring, excavation, and volatiles extraction. Future commercial applications include similar developments leading toward the extraction of structural metals and silicon for solar cells.
Planetary defense is another potential user of asteroid simulants. Techniques must be developed and tested that enable deflection of asteroids with impending impacts on the Earth. In some of these scenarios, asteroid simulants will be vital in developing the kinetic impact vehicles, landers and thrusters to redirect asteroids.
Lunar simulants have been used in studying the health effects of lunar dust. Similar research must be performed for asteroids. The concerns include inhalation, dermal toxicity, ocular toxicity, and dissolution into the body. In the lungs, dust can cause edema, fibrosis, inflammation, and possibly cancer [Khan-Mayberry, 2009]. Particle sizes less than 10 microns are considered respirable, so simulants comprising that size range are necessary. The Lunar Airborne Dust Toxicity Assessment Group decided that it was necessary to test a variety of lunar dusts [Khan-Mayberry, 2009]. Varieties include mature vs. immature and highlands vs. mare. The need for variety is more acute with the greater variety of asteroids.
## Families of Asteroid Simulant
Because there are many spectral classes of asteroids (and many types of meteorites that are samples of those asteroids), there need to be many types of simulant. The workshop decided it is best to develop one type of simulant first and, after validating it, to develop four to six more. The workshop decided that perhaps a Carbonaceous CI simulant would be a good choice for the first, followed by CM, C2, CV, L Ordinary, LL Ordinary, H Chondrite, Iron, Enstatite Chondrite, and Basaltic Achondrite types (not listed in order of preference, which is yet to be determined).
Each of these simulants will be a \"root\" simulant. More specialized versions, or \"branches\", can be developed to meet specific user needs. For example, a version with higher fidelity organic content may be needed for medical tests whereas most users do not want that because it is carcinogenic.
Simulants can be provided in two basic forms. The powderized mineral constituents can be bonded together to form competent cobbles or even boulders, or they can be provided as loose regolith. For the regolith form, there may be different versions based on the particle size distribution, including standard, fine, and coarse variations.
The cobble form is manufactured by grinding the mineral constituents to the desired textures, mixing them, wetting, and drying. Preliminary work shows the clay component binds these powders remarkably well. The details of wetting and drying determine the mechanical strength of the resulting cobbles. For regolith simulant of the highest fidelity, it would be necessary to form cobbles in this manner and then re-grind them so the individual regolith grains are themselves lithic fragments of multiple minerals. However, it is not envisioned that we shall provide such high fidelity regolith at this time. Instead, it will be provided as mixtures of monomineralic grains (the mixed powders prior to wetting and drying). Users may procure and crush the cobbles if they need higher fidelity.
## Properties of Asteroid Simulant
The workshop compiled an extensive list of asteroid properties that may (or may not) be desired for asteroid simulants. These are listed below. The attendees decided which parameters were so important that the simulant design must be controlled by them. These are listed in bold with an asterisk. The attendees also decided which parameters are important to measure and report to the user community although they are not control parameters. These are listed in italic with an asterisk.
Table 2: **Properties of Asteroid Simulants**

* Grain properties
  * **Size distribution*** (data from power law observations)
  * **Mean particle size***
* Geomechanical properties
  * Fatigue
  * **Tensile Strength***
  * **Compressive Strength***
  * Shear Strength*
  * Grain Hardness (hardness indexes)
  * Surface friction
  * Abrasivity (for tool development)
  * Flexural Strength (bending resistance)
  * Fracture properties, friability
  * Impact resistance
  * _Angle of Repose*_
  * _Regolith Internal Friction*_
  * _Regolith Cohesion*_
  * Adhesion (depends on tool material, too)
  * Compressibility of regolith
  * Compactibility of regolith (index test, like Proctor Compaction)
* Optical properties
  * _Albedo*_
  * _Reflectance spectrum*_
  * Absorption
  * Thermal emissivity
* Aerodynamic properties
  * Gas erodibility (rocket exhaust)
  * Particles' coefficient of drag
* Physical properties
  * Thermal properties (derived properties from mineralogy, texture, and volatile content): heat capacity, conductance, thermal cracking behavior, emissivity
  * **Bulk density of rocks***
  * Particle density
  * **Porosity of rocks***
  * Surface area
  * Permeability of rocks
  * Permeability of regolith as a function of porosity/compaction*
  * Bulk density of regolith as a function of porosity/compaction
* Geochemical properties
  * Bulk chemistry (derived property of the composition)
  * Mineralogy*
  * Siderophile elements in Iron simulants
  * Modal Composition
  * Isotopic ratios
  * Organic content: C-to-H ratio (aliphatic vs aromatic), toxicity, Sulphur and Nitrogen content of the organic matter
* Chemical reactivity
  * From surface damage
  * As volatile/soluble minerals
  * Absorptive capacity for volatiles*
* Texture
  * Homogeneity and isotropy of texture
  * Chondrules
* Volatiles
  * Volatiles content
  * Water*
  * Organics*
  * Sulphur compounds*
  * Release pattern*: **thermal and/or vacuum release***, **chemisorbed, physisorbed patterns***
  * Implanted solar wind particles
## Validation of Asteroid Simulants
Next, the attendees decided how asteroid simulant should be tested. There are two purposes in this. First, it benchmarks the simulant's properties as-designed to real asteroids so that users will understand both the value and limitations of simulant in lieu of real space materials. Users can then design meaningful applications and interpret the results in terms of the space materials. Second, it validates the accuracy and repeatability of the simulant's properties as-manufactured so the user can be sure the simulant is the same as when the original batch was benchmarked. Benchmarking will make use of the following data sources:
* Laboratory measurements
* Bolide observations to determine compressive strengths
* Ground-based observations by radar and thermal infrared
* Spacecraft imagery and sample return
* Modeling
  * Interparticle cohesion
  * Depletion of fine particles and buildup of surface lags
  * Particle size distribution to match remote sensing
  * Theories of formation
The data from these sources will be integrated to develop simulant testing requirements such as particle size distribution and compressive strength. Several types of meteorites will be needed for laboratory measurements because several types of asteroid simulants will be developed representing the different classes of asteroids (CI, CM, H Ordinary, etc). For documented specificity, the laboratory measurements will be performed on selected meteorites in each class. While the list is expected to evolve, the workshop decided that appropriate reference meteorites could include the following: CI - Orgueil; CM - Murchison; C2 - Tagish Lake; CV - Allende; Iron - Gibeon. The attendees decided that reference meteorites should also be specified for the following types: L Ordinary Chondrite; LL Ordinary Chondrite; H Ordinary Chondrite; Enstatite Chondrite; and Basaltic Achondrite. After benchmarking the prototype of each simulant class with these data sources, batches of manufactured simulant will be tested using statistically relevant samples to verify the manufacturing processes are adequately controlled. The tests will check each of the control parameters, including bulk density measurements via immersion in a wetting fluid, particle size distribution via sieving, mechanical hardness using standard engineering test equipment, thermogravimetric tests of water release as a function of temperature, mass spectrometry of the released volatiles as a function of temperature, and weighing on a Faraday balance for magnetic susceptibility.
## Conclusion
The first asteroid simulant workshop made important progress in defining a proactive, well-documented program. By basing this program on methods that NASA developed for lunar simulants, the asteroid simulants program will attempt from the beginning to avoid misunderstandings and misuse of simulants that occurred in the lunar community. This program will provide the asteroid researchers and technologists with several families of consistent, well-documented asteroid simulants that are pedigreed through benchmarking against space materials. This program will help ensure comparability of tests between the users, higher quality tests, and cost savings since every project will not need to develop simulants on its own.
## Acknowledgement
The authors gratefully acknowledge the assistance of Dr. Jim Mantovani of NASA's Kennedy Space Center and support for this work from NASA's Small Business Innovative Research (SBIR) 2015 program, Phase 1, subtopic H1.01, \"Regolith ISRU for Mission Consumable Production,\" contract NNX15CK10P.
## References
* Backes et al. (2014) Backes, Paul, Christopher McQuin, Mircea Badescu, Anthony Ganino, Nicholas Wiltsie, Scott Moreland, Phillip Walkemeyer et al. \"Sampling System Concepts for a Touch-and-Go Architecture Comet Surface Sample Return Mission.\" (2014).
* Barucci et al. (2012) Barucci, Maria Antonietta, A. F. Cheng, P. Michel, L. A. M. Benner, R. P. Binzel, P. A. Bland, H. Bohnhardt et al. \"MarcoPolo-R near earth asteroid sample return mission.\" _Experimental Astronomy_ 33, no. 2-3 (2012): 645-684.
* Battler et al. (2006) Battler, Melissa, Jim Richard, Dale Boucher, and John Spray. \"Developing an anorthositic lunar regolith simulant.\" In _37th Annual Lunar and Planetary Science Conference, League City, TX_. 2006.
* Battler et al. (2009) Battler, Melissa M., and John G. Spray. \"The Shawmere anorthosite and OB-1 as lunar highland regolith simulants.\" _Planetary and Space Science_ 57, no. 14 (2009): 2128-2131.
* Bernold (2013) Bernold, Leonhard E. \"Closed-cycle pneumatics for asteroid regolith mining.\" In _Asteroids_, ed. by Viorel Badescu, pp. 345-364. Springer Berlin Heidelberg, 2013.
* Crane et al. (2013) Crane, K. T., D. A. Minton, and J. P. Emery. \"Thermal Inertia of a Metallic Regolith: A Simulant Sample Experiment.\" In _Lunar and Planetary Institute Science Conference Abstracts_, vol. 44, p. 1018. 2013.
* Durda et al. (2013) Durda, D., G. Devaud, D. Scheeres, P. Sanchez, S. Roark, P. Kaptchen, R. Dissly, and A. Campo Bagatin. "Laboratory Investigation of Asteroid Regolith Properties." In _European Planetary Science Congress 2013, held 8-13 September in London, UK. Online at: http://meetings.copernicus.org/epsc2013, id. EPSC2013-1050_, vol. 8, p. 1050. 2013.
* Durda et al. (2015) Durda, D.D., P. Sanchez, A. Fischer, G. Devaud, D. J. Scheeres, S. E. Roark, P. F. Kaptchen, and R. Dissly. \"The Size Distribution of'Boulders' Formed During Slope Failure in Piles of Self-Cohesive Powders: Application to the Morphology of Regoliths on Small Asteroids.\" In _Lunar and Planetary Institute Science Conference Abstracts_, vol. 45, p. 2015. 2014.
* Durda et al. (2012) Durda, Daniel D., D. J. Scheeres, S. E. Roark, R. Dissly, and P. Sanchez. \"Asteroid Regolith Mechanical Properties: Laboratory Experiments With Cohesive Powders.\" In _AAS/Division for Planetary Sciences Meeting Abstracts_, vol. 44. 2012.
* Electric Vehicle Controllers. "Chenobi - Lunar Highlands Simulant." http://www.evcltd.com/index_005.htm. 2009.
* Fujiwara et al. (2000) Fujiwara, A., T. Mukai, J. Kawaguchi, and K. T. Uesugi. \"Sample return mission to NEA: MUSES-C.\" _Advances in Space Research_ 25, no. 2 (2000): 231-238.
* Gustafson (2009) Gustafson, G. \"JSC-1A lunar regolith simulant: availability and characterization.\" In _2009 Lunar Regolith Simulant Workshop_. 2009.
* Gustafson et al. (2008) Gustafson, Robert J., Brant C. White, and Marty A. Gustafson. \"Development of a high fidelity lunar soil simulant.\" In _Space Technology and Applications International Forum (STAIF) 2008_ vol. 969, no. 1, pp. 213-220. AIP Publishing, 2008.
* Guttler, C., N. Hirata, and A. M. Nakamura. "Cratering experiments on the self armoring of coarse-grained granular targets." _Icarus_ 220, no. 2 (2012): 1040-1049.
* Housen, Kevin R. "Crater ejecta velocities for impacts on rocky bodies." In _Lunar and Planetary Science Conference_, vol. 23, p. 555. 1992.
* Izenberg, N. R., and O. S. Barnouin-Jha. "Seismic Modification of Asteroid Surfaces: Laboratory Simulations of Normal Impulses." In _AGU Fall Meeting Abstracts_, vol. 1, p. 1174. 2006.
* Jiang, Mingjing, Liqing Li, and Yugang Sun. "Properties of TJ-1 lunar soil simulant." _Journal of Aerospace Engineering_ 25, no. 3 (2011): 463-469.
* Khan-Mayberry, Noreen. "Lunar Airborne Dust Toxicity Assessment Group (LADTAG)." In _2009 Lunar Regolith Simulant Workshop_. 2009.
* Li, Yongquan, Jianzhong Liu, and Zongyu Yue. "NAO-1: Lunar highland soil simulant developed in China." _Journal of Aerospace Engineering_ 22, no. 1 (2009): 53-57.
* Lunar Exploration Analysis Group (LEAG). "Status of Lunar Regolith Simulants and Demand for Apollo Lunar Samples." http://www.lpi.usra.edu/leag/reports/SIM_SATReport2010.pdf. 2010.
* Makabe, Teruo, and Hajime Yano. "The effective projectile shape for asteroid impact sampling." In _Proceedings of the 26th International Conference on Space Technology and Science_, paper no. 2008-k-08. 2008.
* McKay, David S. "Simulants--are we on the right path?" In _2009 Lunar Regolith Simulant Workshop_. 2009.
* McKay, David S., and James D. Blacic (1991). _Workshop on Production and Uses of Simulated Lunar Materials_. LPI Tech. Rpt. 91-01. Lunar and Planetary Institute, Houston, TX. 83 pp.
* McLemore, Carole A. "Logic of the NASA/MSFC Simulant Development Technical Approach." http://isru.msfc.nasa.gov/simulantdev_logic.html (Updated 20 June 2014).
* Murdoch, N., B. Rozitis, S. F. Green, P. Michel, T-L. de Lophem, and W. Losert. "Simulating regoliths in microgravity." _Monthly Notices of the Royal Astronomical Society_ 433, no. 1 (2013): 506-514.
* Richard, J., L. Sigurdson, and M. M. Battler. "OB-1 lunar highlands physical simulant evolution and production." In _2007 Lunar and Dust Regolith Simulant Workshop_, http://isru.msfc.nasa.gov/2007wksp_docs.html. 2007.
* Rickman, D. L., C. A. McLemore, and J. C. Fikes. _Lunar Regolith Simulant User's Guide_. National Aeronautics and Space Administration, Marshall Space Flight Center, 2010.
* Sandel, L. E., M. M. Strait, D. D. Durda, and G. J. Flynn. "Methods for Quantifying Results of Impact Disruption Experiments of Chondritic Meteorites." In _37th Annual Lunar and Planetary Science Conference_, vol. 37, p. 1359. 2006.
* Sears, D. W. G., P. Jansma, G. Mattioli, M. S. Kareev, and P. H. Benoit. "Simulation of The Formation of Regolith Ponds On Asteroids." In _EGS General Assembly Conference Abstracts_, vol. 27, Abstract #1422. 2002.
* Stoeser, Douglas. "Introduction to Lunar Regolith Simulants." In _2009 Lunar Regolith Simulant Workshop_, http://isru.msfc.nasa.gov/2009workshop.html. 2009.
* Stoeser et al. (2010) Stoeser, D. B., D. L. Rickman, and S. Wilson. _Design and specifications for the highland regolith prototype simulants NU-LHT-1M and-2M_. National Aeronautics and Space Administration, Marshall Space Flight Center, 2010.
* Stoeser et al. (2008) Stoeser, Douglas, Steve Wilson, Michael Weinstein, Douglas Rickman, H. Lower, Gregory Meeker, Christian Schrader, Carole McLemore, and John Fikes. \"The LHT (Lunar Highlands Type) regolith simulant series.\" In _National Geological Society of America Conference_. 2008.
* Taylor et al. (2010) Taylor, Lawrence A., and Yang Liu. \"Important considerations for lunar soil simulants.\" _Earth and Space 2010: Engineering, Science, Construction, and Operations in Challenging Environments_ (2010).
* Weiblen and Gordon (1988) Weiblen, P. W., and K. Gordon. \"Characteristics of a simulant for lunar surface materials.\" _LPI Contributions_ 652 (1988): 254.
* Weiblen et al. (1990) Weiblen, Paul W., Marian J. Murawa, and Kenneth J. Reid. \"Preparation of simulants for lunar surface materials.\" In _Engineering, Construction, and Operations in Space II_, pp. 98-106. ASCE, 1990.
* Willman et al. (1995) Willman, Brian M., Walter W. Boles, David S. McKay, and Carlton C. Allen. \"Properties of lunar soil simulant JSC-1.\" _Journal of Aerospace Engineering_ 8, no. 2 (1995): 77-87.
* Yano et al. (2002) Yano, Hajime, Sunao Hasegawa, Masano Abe, and Akira Fujiwara. \"Asteroidal surface sampling by the MUSES-C spacecraft.\" In _Asteroids, Comets, and Meteors: ACM 2002_, vol. 500, pp. 103-106. 2002.
* Zheng et al. (2009) Zheng, Yongchun, Shijie Wang, Ziyuan Ouyang, Yongliao Zou, Jianzhong Liu, Chunlai Li, Xiongyao Li, and Junming Feng. \"CAS-1 lunar soil simulant.\" _Advances in Space Research_ 43, no. 3 (2009): 448-454. | The first asteroid simulants workshop was held in late 2015. These materials are needed for tests of technologies and mission operational concepts, for training astronauts, for medical studies, and a variety of other purposes. The new program is based on lessons learned from the earlier lunar simulants program. It aims to deliver families of simulants for major spectral classes of asteroids both in cobble and regolith form, beginning with one type of carbonaceous chondrite and rapidly expanding to provide four to six more asteroid classes. These simulants will replicate a selected list of asteroid properties, but not all known properties, in order to serve the greatest number of users at an affordable price. They will be benchmarked by a variety of data sets including laboratory analysis of meteorites, observation of bolides, remote sensing of asteroids, data from asteroid missions, and scientific modeling. A variety of laboratory tests will verify the as-manufactured simulants are accurately and repeatedly providing the specified characteristics. | Summarize the following text. | 196 |
Desert Locust Breeding Grounds in Africa
Ibrahim Salihu Yusuf
Mukhtar Opeyemi Yusuf
Kobby Panford-Quainoo
Arnu Pretorius
## 1 Introduction
Desert Locusts (DL) are voracious migratory pests that can form large swarms, causing significant damage to crops and leading to food crises that affect humans and livestock (Peng et al., 2020). Over the years, DL have posed a significant threat to food security in Africa. Their ability to swiftly migrate over long distances and across geographical boundaries, makes it extremely difficult to coordinate preventative measures between control teams. Although various attempts have been made to mitigate the DL threat (Enns et al., 2022), much remains to be done.
DL exhibit phase polymorphism (also called polyphenism) which causes them to change from a solitarious to a gregarious phase and vice versa. Solitarious DL are shy, sedentary and do not move much, and with their green or brownish cryptic colour hide during the day (Pfluger and Braunig, 2021). They also avoid other locusts, except for mating, and are reported to migrate at night (Uvarov, 1977; Pfluger and Braunig, 2021). In contrast, gregarious DL are conspicuous and reveal anti-predator warning colours in bright yellow and black (Pfluger and Braunig, 2021). Gregarious locusts are very active, and they aggregate both as nymphs (marching hopper bands) and adults (swarms), and they fly by day and roost overnight in trees (Pfluger and Braunig, 2021). It has been observed that a major stimulus for polyphenism in solitarious DL is crowding. When solitarious DL are crowded, regularly touching the hind femur of one another, or perceiving the smell and sight of others, this causes a solitarious individual to become gregarious (Rogers et al., 2003; Anstey et al., 2009). When adult locusts become gregarious they form large swarms; a swarm measuring a single square kilometre can contain up to 80 million individuals (or more) that can fly up to 90 miles a day and consume an equivalent of their body mass in green vegetation every day (Uvarov, 1957). This is equivalent to the food that would be consumed by 35,000 people. As they migrate in search of food, each female lays up to 80 eggs in a pod. If the environmental conditions are suitable for breeding, the population of DL can grow exponentially within a short period of time, thereby resulting in a plague.
Historically, DL plagues have caused significant harm to both human and animal lives (Gross, 2021; Mullie et al., 2023). Various initiatives led by the UN-FAO and other stakeholders have aimed to mitigate this threat. Despite these efforts, there remain opportunities to enhance the effectiveness of measures against the adverse impact of DL on both animal and crop production. Given the nature of their lifecycle, whereby adults lay eggs that hatch in about 2 weeks and grow to become adults that form swarms in about 4 months, there is a need to develop a comprehensive solution to mitigating locust plagues. The first stage in locusts' lifecycle is oviposition by adult females which later results in hoppers that have limited locomotion abilities (Symmons and Cressman, 2001).
During this phase, known as the breeding stage, directing control efforts towards eradicating locust breeding appears to be a more effective approach. If successful at this stage, it means that some future generations of locusts have been eliminated and, if consistently carried out, after some years it is possible to reduce the negative impact of locust plagues to a bare minimum. This approach would first involve the identification of actual locust breeding grounds and subsequently the application of an effective control operation such as pesticide spraying.
In this paper, we present a set of new custom deep learning architectures as well as a fine-tuned foundational model specifically for DL breeding ground prediction. These models were created by first curating a breeding dataset from locust observation data collected by the UN-FAO and then exploring new modelling strategies using various remotely-sensed features and multi-spectral earth observation images. Whilst past research aimed at identifying regions _favourable_ to breeding, our focus is more specific. We care about identifying regions where locusts would be found _present_, copulating and/or laying eggs, i.e. _actual_ breeding grounds across Africa. However, even with this more specific task, we show that our models significantly outperform existing baselines, achieving accuracy, F1 and ROC-AUC scores of 83.03%, 81.53% and 87.69%, respectively. By openly releasing our models, we hope this work might assist locust control agencies and potentially improve early warning systems and the effectiveness of targeted control measures.
### Related Work
Several researchers have approached locust breeding ground prediction using machine learning (Gomez et al., 2018; Kimathi et al., 2020; Gomez et al., 2021; Klein et al., 2022). Here we give a brief overview of this work.
Machine learning has been instrumental in assessing the role of soil moisture (SM) in predicting DL breeding grounds. (Gomez et al., 2018) used a machine learning approach to predict DL breeding grounds based on SM data from European Space Agency Climate Change Initiative (ESA-CCI) 1. In their study area of Mauritania, and period of 1985-2015, random forest achieves the best performance in evaluating the link between SM and DL hoppers' presence with Kappa and ROC-AUC scores of 0.95 and 0.74 respectively. Their study shows that ESA-CCI SM was a significant predictor of DL breeding grounds in areas around Mauritania. They also demonstrate that a location is deemed favourable for breeding when the SM minimum value is over 0.07 \\(m^{3}/m^{3}\\) within 6 days or more. To improve their work, (Gomez et al., 2021) addressed the lack of automated and operational procedures for predicting DL breeding grounds in near real-time (NRT). They indicated that the ESA-CCI SM data is released with several months of lag. Therefore, they used SM data from the Soil Moisture and Ocean Salinity (SMOS) satellite data product2, which updates three times every month, to predict DL breeding grounds. Their study area covers the entire DL recession area between 2016-2018, which includes more than 30 countries from West Africa to West India, spanning about 16 million square kilometers. They evaluated six machine learning models and found a Weighted k-Nearest Neighbors approach to have the best performance with a Kappa and ROC-AUC score of 0.50, and 0.80 respectively.
Footnote 1: [https://esa-soilmoisture-cci.org/](https://esa-soilmoisture-cci.org/)
Some studies have combined SM data along with other bio-climatic data to improve model generalization. (Kimathi et al., 2020) is one of these studies that utilizes machine learning algorithms to predict potential DL breeding grounds in East Africa. They considered key bio-climatic factors such as temperature and rainfall, as well as edaphic factors such as sand and moisture contents. Using the MaxEnt algorithm, they trained models on Morocco, Mauritania and Saudi Arabia and found Morocco model parameters to have the best generalization with an AUC score of 0.82. The findings of this study revealed that vast areas of Kenya and Sudan, along with the northeastern regions of Uganda, and the southeastern and northern regions of South Sudan, were at a high risk of providing a suitable breeding environment for DL in the period between February and April 2020.
So far, the data considered in all of these studies are temporal and vary rapidly over time. Some studies have incorporated certain static ecological data that can be useful for DL breeding survival. (Klein et al., 2022) introduce a fused multi-scale approach for predicting suitable breeding habitat for different locust species in different areas, including _Calliptamus italicus, CIT_ (Italian locust) in Pavlodar oblast (Kazakhstan), _Dociostaurus maroccanus, DMA_ (Moroccan locust) in Turkistan oblast (Kazakhstan), and _Schistocerca gregaria_ (Desert Locust) in the Awash river basin (Ethiopia, Djibouti, Somalia). They incorporated up-to-date land surface parameters, vegetation development, and other relevant environmental factors into their model, along with climatic and soil preferences derived from ecological niche modelling (ENM). They emphasize the importance of considering actual changes in the landscape and human interactions for understanding locust outbreaks and defining suitable breeding grounds. To address this, the authors propose incorporating variables obtained from Sentinel-23 (high-resolution remote sensing data) time-series analysis to describe the current state of the land, thereby refining the suitable breeding grounds within the model. For the year 2019, the model was validated using field observations and achieved an AUC performance of 0.747 for CIT, 0.850 for DMA and 0.801 for DL.
Footnote 3: [https://sentinel.esa.int/web/sentinel/missions/sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
Although classical machine learning models have shown promise for locust control, their ability to handle complex temporal data is limited. In contrast, certain deep learning models are well suited for such data. As an example, (Tabar et al., 2021) introduced the Predictor of Locust Activity and movemeNt (PLAN) model, a deep learning approach to forecasting DL migration patterns by integrating crowdsourced observations (data from PlantVillage4) and remote-sensed data. Inputs to their model are categorized into three categories: (1) static variables (actual evapotranspiration, sand content, total biomass productivity and elevation), (2) temporal variables (soil moisture, precipitation and wind speed) and (3) historical locust observations. Each category is processed by a different sub-component of the model and later fused together for the final prediction. Their model achieved an AUC score of 0.89 in forecasting the presence of DL.
Footnote 4: [https://plantvillage.psu.edu/](https://plantvillage.psu.edu/)
Previous studies on desert locust breeding ground prediction have considered different datasets, input variables and study areas, making it difficult to benchmark the performance of the model from each study. The remotely-sensed variables used are also from different satellite data products with varying spatial and temporal resolutions as well as update frequency. These compounding issues make it difficult to build a robust and operationally-ready model that can be used by regional governments and control agencies such as the UN-FAO for administering control activities towards eliminating the threat posed by DL. In this study, our goal is to build such a robust and operationally-ready model for predicting DL breeding grounds.
## 2 Methods
Locust breeding ground prediction is framed as a binary classification task. We define a set of geographical locations as \\(\\mathcal{L}\\), where each location \\(l\\) has a specific coordinate. Each location \\(l\\) has an associated binary label \\(y_{l}\\), indicating if it's an actual breeding ground (\\(y_{l}=1\\)) or not (\\(y_{l}=0\\)). For every location, a feature vector \\(x_{l}\\) is introduced to represent spatio-temporal data. This vector incorporates components including temporal variables and non-temporal variables.
In essence, the goal is to learn a function \\(f:\\mathcal{X}\\rightarrow\\{0,1\\}\\) based on labeled data \\(\\{(x_{l},y_{l})\\}\\), predicting the breeding ground label for any feature vector \\(x_{l}\\). The subsequent sections detail the data preparation for this problem.
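To make this formulation concrete, the snippet below sketches the supervised setup with a simple off-the-shelf classifier standing in for \(f\); the array sizes, the random stand-in features and the choice of logistic regression are purely illustrative and do not reproduce the experiments reported later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative shapes only: N locations, each with a flattened feature vector x_l.
N = 200
D = 30 * 7 * 7 * 3 + 7 * 7 * 17     # temporal + non-temporal features, flattened
X = np.random.rand(N, D)            # stand-in for real features
y = np.random.randint(0, 2, size=N) # y_l = 1 for a breeding ground, 0 otherwise

# Learn f: X -> {0, 1} from labelled pairs (x_l, y_l).
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1] # estimated P(breeding | x_l)
print("ROC-AUC on the training data:", roc_auc_score(y, scores))
```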
### Data
We present our data sources, collection methods, and the final data processing used in our models.
#### 2.1.1 Locust Observation Records.
To learn the behaviour of locusts that would help in identifying breeding grounds, there is a need to have enough locust observation data. While there are locust observation data collected by regional authorities in some countries, they are limited in their spatial and temporal coverage and accessibility. The primary source of locust observation data that overcomes the aforementioned limitations is the UN-FAO Locust Hub5. The UN-FAO has been collecting locust observation data over the past 48 years (1975-Present) spanning different locust stages (Hoppers, Bands, Adults, and Swarms) across Africa and Asia. The data contains geolocation records and other environmental conditions of the observed site and is made publicly available via the Locust Hub. The data is collected based on guidelines recommended by the UN-FAO (Keith, 2001) to be used during field surveys.
Footnote 5: [https://locust-hub-hofao.hub.arcgis.com](https://locust-hub-hofao.hub.arcgis.com)
#### 2.1.2 Breeding Data.
The locust observation data from the FAO describes DL observed at different stages (Hoppers, Bands, Adults, and Swarms). For our use case, we are interested in the records that describe the breeding stage. We curate the appropriate data for this stage in a manner similar to (Klein et al., 2022) by including records of adults found laying eggs and those of early-stage instars (insect development stage), which also depict successful incubation and nymph hatching (Klein et al., 2022).
In the domain of species distribution modeling, data collected from the field are mostly presence records. However, the UN-FAO data includes observations of sites where locusts were expected to be found (favorable locations) but were missing. We wanted to consider these records as a subset of the absence records, but from our preliminary experiments, we found them to be biased. As a result, similar to (Gomez et al., 2021), we discarded them and opted to generate pseudo-absence records, as is prevalent in previous studies (Gomez et al., 2018, 2021; Klein et al., 2022). We perform pseudo-absence generation using a random sampling technique (Iturbide et al., 2015; Yusuf et al., 2022) while maintaining a buffer zone around each presence observation. A combination of presence and pseudo-absence records, described in Table 1, makes up a robust dataset for learning to classify actual locust breeding grounds. Figure 1 also shows a visual representation of our curated breeding data.
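The following sketch illustrates one way the buffered random sampling of pseudo-absences could be implemented; the bounding box, buffer radius and the degree-space distance approximation are assumptions made for illustration, as the paper does not specify them.

```python
import numpy as np

def sample_pseudo_absences(presence, n_samples, bbox, buffer_deg=0.1, seed=0):
    """Randomly sample pseudo-absence coordinates inside a bounding box,
    rejecting any point that falls within `buffer_deg` degrees of a presence
    record. The buffer size here is an assumed value, not taken from the paper."""
    rng = np.random.default_rng(seed)
    lon_min, lat_min, lon_max, lat_max = bbox
    absences = []
    while len(absences) < n_samples:
        lon = rng.uniform(lon_min, lon_max)
        lat = rng.uniform(lat_min, lat_max)
        # Euclidean distance in degrees is a crude but common approximation here.
        d = np.hypot(presence[:, 0] - lon, presence[:, 1] - lat)
        if d.min() > buffer_deg:
            absences.append((lon, lat))
    return np.array(absences)

presence = np.array([[3.4, 15.1], [44.2, 9.7], [47.5, 8.3]])   # (lon, lat) examples
pseudo_abs = sample_pseudo_absences(presence, n_samples=3, bbox=(-18, -35, 52, 38))
```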
### Input Features.
Using the breeding data created in 2.1.2, which contains only geolocation and time information, we derive input features from remotely-sensed environmental and climatic variables as well as earth observation data. In each case, we prioritize satellite data products with higher spatial-temporal resolution and update frequency.
#### 2.2.1 Remotely-Sensed Variables
Similar to previous studies, we sourced environmental and climatic features that are suitable for locust breeding. These features are either static or temporal. We drew inspiration from (Tabar et al., 2021) and crafted them into a spatio-temporal data representation. By dividing the earth's surface into a grid of 0.1-degree resolution, we extracted an \\(n\\times n\\) grid centered on the observed location, where \\(n\\) is odd with a default value of 7. Two data feature groups were prepared:
1. **Temporal**: Incorporating soil moisture, precipitation, and fraction of vegetation cover, we constructed a spatio-temporal representation. For each record in our data, we retrieved a 96-day historical record of each temporal variable and resampled the data by computing a mean for each 3-day period. Following the resampling process, we obtained 30 time-steps for each variable, resulting in an input of shape \\(30\\times 7\\times 7\\times 3\\).
2. **Non-temporal**: Combining all 14 variables from the TerraClimate 6 with SoilGrid's _sand_5-15cm_mean_7, Copernicus _land_cover_2019_8 and ALOS WORLD 3D - 30m _wadis_9 data products, we constructed a matrix of \\(7\\times 7\\times 17\\). Footnote 6: [https://www.climatologylab.org/terraclimate.html](https://www.climatologylab.org/terraclimate.html)
Footnote 7: [https://www.isric.org/explore/soilgrids](https://www.isric.org/explore/soilgrids)
Footnote 8: [https://land.copernicus.eu/global/products/lc](https://land.copernicus.eu/global/products/lc)
Figure 2 visually outlines these feature groups. Here \(x\) and \(y\) denote spatial dimensions, and \(T\) is the temporal dimension (the number of time-steps after resampling). \(v\) and \(s\) are the number of temporal and non-temporal features, respectively. The individual representation in (A) is fed to our PLAN-based model, while the spatio-temporal representation in (B) is fed to the other deep learning models that utilize remotely-sensed input features. For our classical models, we flatten each data group and concatenate them before feeding the input to the model, as illustrated in (C).
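As a concrete illustration of the feature preparation described above, the sketch below builds the temporal block by averaging a daily history into 3-day means and combines it with the non-temporal grid, including the flattened variant used by the classical models; the variable names and synthetic arrays are illustrative assumptions only.

```python
import numpy as np

def build_temporal_block(daily, steps=30, window=3):
    """daily: array of shape (days, n, n, v) holding daily rasters of the
    temporal variables (soil moisture, precipitation, vegetation fraction)
    on the n x n grid around a record. Returns (steps, n, n, v) of
    `window`-day means, keeping the most recent `steps` periods."""
    days = daily.shape[0] - daily.shape[0] % window
    blocks = daily[:days].reshape(-1, window, *daily.shape[1:]).mean(axis=1)
    return blocks[-steps:]

def build_inputs(daily, static):
    """static: (n, n, s) stack of the non-temporal variables."""
    temporal = build_temporal_block(daily)                      # (30, 7, 7, 3)
    flat = np.concatenate([temporal.ravel(), static.ravel()])   # classical models
    return temporal, static, flat

daily = np.random.rand(96, 7, 7, 3)    # 96-day history of 3 temporal variables
static = np.random.rand(7, 7, 17)      # 17 non-temporal variables
temporal, static_block, flat = build_inputs(daily, static)
```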
#### 2.2.2 Earth Observation Data
Unlike remotely-sensed features such as soil moisture, elevation, etc., earth observation data are multi-spectral images of the earth's surface. These images have spectral bands ranging from visible light to the near-infrared (NIR) and shortwave infrared (SWIR) part of the electromagnetic spectrum. This enables detailed observations of vegetation, soil and water cover, inland waterways, and coastal areas. We hypothesize that the information contained in these spectral images can serve the same purpose as remotely-sensed features in detecting desert locust breeding grounds.

| **Split** | **Non-Breeding (0)** | **Breeding (1)** | **Date Range** |
| --- | --- | --- | --- |
| Train | 2238 | 2238 | 2020-01-01 to 2021-04-21 |
| Validation | 154 | 154 | 2021-04-22 to 2021-07-09 |
| Test | 820 | 820 | 2021-07-10 to 2023-07-30 |

Table 1: Description of Curated Breeding Records

Figure 1: Visualization of Breeding Data. Green and Red points represent non-breeding and breeding locations respectively. There is an equal number of green and red points. While breeding locations are concentrated, non-breeding locations are spread out due to random sampling during pseudo-absence generation.

Figure 2: Visual Representation of Spatiotemporal Input Features. The spatial dimensions are denoted by \(x\) and \(y\), while \(T\) represents the temporal dimension. \(v\) and \(s\) indicate the number of temporal and non-temporal variables, respectively. (A) corresponds to the input fed into our PLAN-LB model. (B) shows the spatiotemporal representation utilized by our Conv3D and ConvLSTM models. Lastly, (C) presents the input employed for SVM and logistic regression models.
NASA's Harmonized Landsat and Sentinel-2 (HLS) (Claverie et al., 2018) 10 dataset, which provides a high temporal (2-3 days) and spatial (30m) multi-spectral image of the earth's surface makes a suitable source of input features for an operational locust breeding ground prediction model. Using the HLS satellite data and our curated breeding records, we create spatio-temporal \"chips\" (subsection of a larger geospatial image) of size \\(224\\times 224\\) for locust breeding ground prediction which comprise 3 temporal steps with a size of 30 days. Each step also includes 6 multi-spectral bands (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2). This implies that the resulting input features of shape \\(3\\times 6\\times 224\\times 224\\) represent an observation of a specific area over the past 90 days.
Footnote 10: [https://hls.gsfc.nasa.gov/](https://hls.gsfc.nasa.gov/)
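A minimal sketch of how such a chip could be assembled from three 30-day HLS composites is shown below; the band naming and the assumption that each composite has already been cropped to 224 x 224 pixels around the record of interest are ours, not details of the HLS product itself.

```python
import numpy as np

BANDS = ["Blue", "Green", "Red", "NIR_Narrow", "SWIR1", "SWIR2"]

def build_hls_chip(monthly_scenes):
    """monthly_scenes: list of three dicts (oldest to newest), one per 30-day
    step, each mapping a band name to a 224 x 224 reflectance array already
    cropped around the record of interest. Returns a (3, 6, 224, 224) chip."""
    steps = []
    for scene in monthly_scenes:
        steps.append(np.stack([scene[b] for b in BANDS], axis=0))  # (6, 224, 224)
    return np.stack(steps, axis=0).astype(np.float32)              # (3, 6, 224, 224)

# Synthetic example standing in for three monthly HLS composites.
scenes = [{b: np.random.rand(224, 224) for b in BANDS} for _ in range(3)]
chip = build_hls_chip(scenes)
assert chip.shape == (3, 6, 224, 224)
```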
### Models
In this section, we present deep learning models designed to model the complex relationships between the various spatio-temporal features from the two categories of data discussed in the previous section. By leveraging deep learning, our models aim to uncover intricate patterns and dependencies within the input features that will aid in locust breeding ground prediction. In the following subsections, we describe the architecture and functioning of these models.
#### 2.3.1 PLAN for Locust Breeding (PLAN-LB)
This model is a modified version of the PLAN model (Tabar et al., 2021). Since our input features do not include historical locust observations, as seen in PLAN, we employ only two modules for processing the temporal and non-temporal input. As explained in Section 2.2.1, our inputs have spatial dimensions of \\(7\\times 7\\) to incorporate neighboring information, in contrast to the point values utilized in PLAN. Consequently, the two modules we utilized are convolutional modules, as depicted in Figure 3. The temporal module independently encodes each entry in the temporal series into a feature vector. Subsequently, the series of resulting feature vectors undergo processing via an LSTM block in a many-to-one configuration. The non-temporal module similarly encodes the non-temporal input into a feature vector. The outputs from both modules are then concatenated and forwarded to a final linear layer for classification.
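A PyTorch-style sketch of this two-module design is given below; the channel widths, the LSTM hidden size and the pooling choices are assumptions, since the text only fixes the overall structure.

```python
import torch
import torch.nn as nn

class PLANLB(nn.Module):
    """Sketch of the two-module PLAN-LB design; layer widths are assumptions."""
    def __init__(self, v=3, s=17, hidden=64):
        super().__init__()
        # Shared CNN applied to every 7x7xv time step of the temporal input.
        self.temporal_enc = nn.Sequential(
            nn.Conv2d(v, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> (B, 32)
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # CNN for the 7x7xs non-temporal input.
        self.static_enc = nn.Sequential(
            nn.Conv2d(s, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> (B, 32)
        self.head = nn.Linear(hidden + 32, 2)

    def forward(self, temporal, static):
        # temporal: (B, T, v, 7, 7); static: (B, s, 7, 7)
        b, t = temporal.shape[:2]
        feats = self.temporal_enc(temporal.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)            # many-to-one: keep last hidden state
        fused = torch.cat([h[-1], self.static_enc(static)], dim=1)
        return self.head(fused)                 # logits for {non-breeding, breeding}

logits = PLANLB()(torch.randn(4, 30, 3, 7, 7), torch.randn(4, 17, 7, 7))
```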
#### 2.3.2 3D convolutional network (Conv3D)
This model learns from the spatio-temporal input features using a three-dimensional convolutional network architecture. Conv3D has been effective in domains like action recognition, medical analysis, and geospatial analysis (Wang et al., 2018; Wu et al., 2018; Lee et al., 2018; Liu and Hu, 2019; Lee et al., 2021; Duan et al., 2022). Our Conv3D model features two residual layers with Conv3D blocks, layer normalization, and ReLU activation. It employs a kernel size of \((3,7,7)\) and retains spatial dimensions using a "same" padding approach. The model concludes with average pooling and a softmax activation output layer, producing a probability score for locust breeding likelihood. The full architecture is displayed in Figure 4.

Figure 3: PLAN-LB Model Architecture. This model was derived from PLAN's model and it independently processes the temporal and non-temporal inputs. The temporal module encodes each entry in the temporal series into a feature vector. Subsequently, the series of resulting feature vectors undergo processing via an LSTM block in a many-to-one configuration. The non-temporal module similarly encodes the non-temporal input into a feature vector. The outputs from both modules are then concatenated and forwarded to a final linear layer for classification.
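The sketch below captures the main ingredients of the Conv3D model just described; GroupNorm is used as a stand-in for the layer normalization and the channel width is an assumed value.

```python
import torch
import torch.nn as nn

class ResidualConv3D(nn.Module):
    """One residual layer; GroupNorm(1, c) acts as a layer-norm stand-in."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(
            # padding (1, 3, 3) keeps the (T, 7, 7) shape for a (3, 7, 7) kernel.
            nn.Conv3d(c, c, kernel_size=(3, 7, 7), padding=(1, 3, 3)),
            nn.GroupNorm(1, c), nn.ReLU(),
            nn.Conv3d(c, c, kernel_size=(3, 7, 7), padding=(1, 3, 3)),
            nn.GroupNorm(1, c))

    def forward(self, x):
        return torch.relu(x + self.block(x))

class Conv3DNet(nn.Module):
    def __init__(self, in_channels=3, width=32):
        super().__init__()
        self.stem = nn.Conv3d(in_channels, width, kernel_size=1)
        self.layers = nn.Sequential(ResidualConv3D(width), ResidualConv3D(width))
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(width, 2)

    def forward(self, x):                       # x: (B, C, T, 7, 7)
        z = self.pool(self.layers(self.stem(x))).flatten(1)
        return self.head(z)                     # softmax applied in the loss

probs = torch.softmax(Conv3DNet()(torch.randn(2, 3, 30, 7, 7)), dim=1)
```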
#### 2.3.3 Convolutional LSTM (ConvLSTM)
A convolutional variant of the standard LSTM (Long Short-Term Memory) (Hochreiter and Schmidhuber, 1997) recurrent network, ConvLSTM incorporates convolutional operations within the recurrent structure, both in the input-to-state and state-to-state transitions (Shi et al., 2015). It is specifically designed to capture spatio-temporal dependencies in data and has been effective in applications such as video analysis and weather forecasting (Shi et al., 2015; Sanchez-Caballero et al., 2020; Moishin et al., 2021). Our ConvLSTM uses a kernel size of \\((3,3)\\) for convolutional operations, followed by a ReLU activation. A linear layer then transforms its outputs, and a softmax activation provides a probability score for locust breeding likelihood. The architecture is visualized in Figure 5.
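A compact sketch of a ConvLSTM cell and the resulting classifier is shown below; the hidden channel width and the spatial pooling before the linear layer are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the gates are computed with a 3x3 convolution
    over the concatenated input and hidden state (Shi et al., 2015)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

class ConvLSTMClassifier(nn.Module):
    def __init__(self, in_ch=3, hid_ch=32):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Linear(hid_ch, 2)

    def forward(self, x):                               # x: (B, T, C, H, W)
        b, t, _, hgt, wid = x.shape
        h = x.new_zeros(b, self.cell.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        for step in range(t):
            h, c = self.cell(x[:, step], (h, c))
        # ReLU after the recurrent convolutions, then pool and classify.
        return self.head(torch.relu(h).mean(dim=(2, 3)))

logits = ConvLSTMClassifier()(torch.randn(2, 30, 3, 7, 7))
```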
#### 2.3.4 Prithvi for Locust Breeding (Prithvi-LB)
Prithvi (Jakubik et al., 2023) is a ViT-based geospatial foundational model pre-trained on HLS data. It features a self-supervised encoder with a ViT architecture (Dosovitskiy et al., 2021), incorporating a Masked AutoEncoder (MAE) learning strategy and an MSE loss function. The model exhibits spatial attention across patches and introduces temporal attention. Prithvi demonstrates superior performance in diverse remote sensing temporal tasks, such as multi-temporal cloud gap imputation, floods and wildfire scars segmentation, and multi-temporal crop segmentation (Jakubik et al., 2023). Leveraging Prithvi's capabilities, we derive a model that incorporates Prithvi's pre-trained ViT encoder and transpose convolution decoder blocks as illustrated in Figure 6. Our Prithvi-LB model was trained to learn the temporal and spatial intricacies of predicting locust breeding grounds using HLS data.
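Since Prithvi's exact interfaces are not reproduced here, the sketch below only illustrates the kind of transpose-convolution decoder that can sit on top of a ViT encoder's patch tokens; the embedding dimension, token grid size and channel widths are assumptions rather than Prithvi's actual values.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(c_in, c_out, kernel_size=2, stride=2),
            nn.BatchNorm2d(c_out), nn.ReLU())

    def forward(self, x):
        return self.up(x)

class SegmentationHead(nn.Module):
    """Decoder stub meant to sit on top of a ViT encoder. It assumes the
    encoder yields patch tokens that can be reshaped into a
    (B, embed_dim, 14, 14) map for a 224x224 input with 16x16 patches;
    embed_dim and the channel widths are illustrative assumptions."""
    def __init__(self, embed_dim=768, n_classes=2):
        super().__init__()
        self.decode = nn.Sequential(
            UpBlock(embed_dim, 256), UpBlock(256, 128),
            UpBlock(128, 64), UpBlock(64, 32),
            nn.Conv2d(32, n_classes, kernel_size=1))

    def forward(self, tokens, grid=14):
        # tokens: (B, grid*grid, embed_dim) patch embeddings from the encoder.
        b, n, d = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(b, d, grid, grid)
        return self.decode(fmap)                # (B, n_classes, 224, 224) logits

masks = SegmentationHead()(torch.randn(2, 196, 768))
```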
## 3 Experiments
This section details the training of our models on both categories of input features described in Section 2.2. We first outline our experimental framework and then discuss our experiments.
### Experimental setup
We utilized TPU v3 accelerators from the Google Cloud platform for training our ConvLSTM. Complementing this, our computational setup included 4 vCPUs and 32GB of RAM. For hyperparameter settings, we used a batch size of 32 and trained the models for 200 epochs. Early stopping with a patience of 10 epochs was employed to prevent overfitting. The learning rate was set to \(1e-4\), and we utilized the Adam optimizer with \(\beta_{1}\) and \(\beta_{2}\) values of \(0.9\) and \(0.999\), respectively.
For the Prithvi-LB model, we used an Nvidia V100 GPU with 8 vCPUs and 30GB of RAM. We fine-tuned the model for 10 epochs with early stopping. The learning rate was set to \\(1e-4\\), and we utilized the AdamW optimizer with \\(\\beta_{1}\\), \\(\\beta_{2}\\) and weight_decay values of \\(0.9\\), \\(0.999\\) and 0.1 respectively.
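The reported optimizer settings and early-stopping rule translate roughly into the following sketch; the placeholder model and the synthetic validation loss are stand-ins for the real models and validation loop.

```python
import torch

model = torch.nn.Linear(10, 2)          # placeholder for any of the models above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

# Prithvi-LB fine-tuning instead uses AdamW with weight decay 0.1.
optimizer_ft = torch.optim.AdamW(model.parameters(), lr=1e-4,
                                 betas=(0.9, 0.999), weight_decay=0.1)

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(200):
    val_loss = 1.0 / (epoch + 1)        # stand-in for a real validation pass
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:      # early stopping, patience of 10 epochs
            break
```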
Figure 4: _Conv3D Model Architecture._ Our Conv3D model features two residual layers with Conv3D blocks, layer normalization, and ReLU activation. It employs a kernel size of \\((3,7,7)\\) and retains spatial dimensions using a βsameβ padding approach. The model concludes with average pooling and a softmax activation output layer, producing a probability score for locust breeding likelihood. Given that the input features pertain to a specific point location, the output provides a classification indicating whether the point is a breeding or non-breeding ground.
### Using Remotely-Sensed Input Features
In this experiment, we trained three of our proposed models (PLAN-LB, Conv3D and ConvLSTM) and two classical machine learning models, Logistic Regression and Support Vector Machine (SVM), on the remotely-sensed input features described in Section 2.2.1. Given the spatio-temporal nature of our data, preprocessing for the classical models involved flattening and concatenating the input features. Each of the models was trained to optimize the objective described in Section 2, and the best model was selected using the validation split. The performance of the selected model on the test set was evaluated using accuracy, precision, recall, F1 and ROC-AUC scores. The results obtained from each model are presented in Table 2.
### Using Multi-Spectral Earth Observation Images
We trained Prithvi-LB on spatio-temporal chips (\(3\times 6\times 224\times 224\)) derived from HLS data as described in Section 2.2.2. In this experiment, the model was trained using a segmentation objective, where each pixel has a probability score for the breeding and non-breeding class. The model was trained for 10 epochs and the best checkpoint was selected using the validation split. The results obtained after evaluating the selected checkpoint on the test split are shown in Table 2.
## 4 Results and Discussion
The results from the experiments conducted in Sections 3.2 and 3.3 are summarized in Table 2. In the initial set of experiments utilizing remotely-sensed input features, the deep learning models outperformed the classical models, with the ConvLSTM model emerging as the top performer. It achieved an accuracy, F1-score, and ROC-AUC score of 75.76%, 67.34%, and 63.61%, respectively.
On the other hand, the experiments involving HLS multi-spectral earth observation data exhibited the best overall performance, surpassing ConvLSTM with improvements of +5.11, +13.26, and +25.28 in accuracy, F1-score, and ROC-AUC score, respectively. This outstanding performance can be attributed to various factors, including the utilization of a pre-trained Prithvi ViT encoder and the high spatial resolution (30m) of the data. Furthermore, HLS boasts a high update frequency of 2-3 days, enhancing the model's suitability for operational deployment.

Figure 5: ConvLSTM Model Architecture. Our ConvLSTM uses a kernel size of \((3,3)\) for convolutional operations, followed by a ReLU activation. A linear layer then transforms its outputs, and a softmax activation provides a probability score for locust breeding likelihood. Given that the input features pertain to a specific point location, the output provides a classification indicating whether the point is a breeding or non-breeding ground.

Figure 6: Prithvi-LB Model Architecture. This custom model was derived by adding a custom decoder layer atop the pre-trained Prithvi vision transformer encoder. The custom decoder consists of a stack of upsampling blocks followed by a final two-dimensional convolutional layer that produces the output segmentation map. The segmentation map classifies each \(30m\times 30m\) patch of land as either a breeding or non-breeding ground.
It is however noteworthy that a significant number of samples in the breeding records dataset were lost due to missing values on one or more variables during the preprocessing phase for remotely-sensed input features. This issue not only affects the performance and reliability of remotely-sensed input features but is also exacerbated by the low update frequency of the various variable sources, rendering them less suitable for models intended for operational deployment.
A visual analysis of the predictions made by Prithvi-LB is depicted in Figure 7. It predicts potential DL breeding sites across every \\(30m\\times 30m\\) section of our study area, as illustrated in Figure 1. The results demonstrate the model's capability to identify the sparse nature of DL breeding grounds effectively. By overlaying these predictions on high-resolution satellite imagery, it was observed that areas identified as probable DL breeding sites predominantly consist of desert terrains with sparse tree cover. These areas are presumed to be where DL edible vegetation might emerge following periods of rainfall. To further substantiate the accuracy of our model's predictions, we plan to conduct ground-based verification in collaboration with the UN-FAO and other partner organizations.
## 5 Conclusion
In this research, we aimed to develop a robust and operationally-ready model for predicting locust breeding grounds, addressing a critical need in managing the threat posed by DL to animal and food security. Utilizing locust observation records from the UN-FAO, along with two categories of input features - remotely-sensed data and multi-spectral earth observation images - we trained and evaluated various models. Our findings indicate that our Prithvi-based model, which utilizes multi-spectral earth observation images, demonstrates superior performance, attaining accuracy, F1 and ROC-AUC scores of 83.03%, 81.53% and 87.69% respectively. This model's effectiveness is largely attributed to leveraging the high temporal (2-3 days) and spatial (30m) resolution Harmonized Landsat and Sentinel-2 (HLS) satellite product. Consequently, our research offers significant advancements in predicting desert locust breeding grounds, with potential for enhancing the administration and effectiveness of control activities undertaken by regional governments and relevant agencies.
## Impact Statements
Our proposed methodology for detecting DL breeding grounds solely utilizing multi-spectral earth observation images has not only surpassed existing approaches but has also demonstrated the potential for immediate operational deployment. This advancement addresses a critical need for organizations involved in locust control operations, potentially enhancing their ability to effectively mitigate the threat posed by DL. However, the DL threat is a multi-stakeholder problem that needs complex coordination between many partners and will not be solved by modeling alone. There are also implications of relying solely on model predictions for administering locust control activities, and such a strategy might fail to provide enough information to effectively dispatch (often very expensive) control measures.

| **Methods** | **Accuracy** | **F1-score** | **Precision** | **Recall** | **ROC-AUC** | **Input** |
| --- | --- | --- | --- | --- | --- | --- |
| SVM | 62.36 | 63.19 | 72.47 | 71.04 | 71.04 | RS |
| Logistic Regression | 60.23 | 60.13 | 69.45 | 67.96 | 67.96 | RS |
| PLAN-LB | 71.21 | 56.94 | 79.84 | 59.20 | 75.09 | RS |
| Conv3D | 75.38 | 64.74 | **86.29** | 64.67 | 69.91 | RS |
| ConvLSTM | 75.76 | 67.34 | 80.37 | 66.48 | 63.61 | RS |
| Prithvi-LB | **83.03** | **81.53** | 82.12 | **82.90** | **87.69** | MS |

Table 2: Experiment Results. RS and MS refer to remotely-sensed and multi-spectral inputs, respectively. These results were obtained from training different models described in Section 2.3 using various input types as discussed in Section 2.2. The table presents performance metrics, including accuracy, F1-score, precision, recall, and ROC-AUC score, for different models in predicting DL breeding grounds. Notably, Prithvi-LB trained using multi-spectral earth observation images yields the highest predictive performance.

Figure 7: Visualization of Prithvi-LB Predictions for January 2023. This figure presents the spatial predictions generated by the Prithvi-LB model for January 2023, encompassing every \(30m\times 30m\) parcel within the specified study area, as depicted in Figure 1. Areas covered in red are predicted as breeding sites. The model adeptly identifies the sparse nature of DL breeding areas. Overlaying these predictions onto satellite imagery reveals that the majority of areas predicted as potential breeding sites are characterized by desert landscapes with sparse tree presence. This pattern suggests these regions might witness vegetation growth following rainfall, indicating possible DL breeding grounds.
## References
* Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale.
* Duan, H., et al. (2022). BEVS: a large-scale hierarchical network for autonomous driving. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 2969-2978.
* Enns, C., Bersaglio, B., and Karmushu, R. (2022). Disaster management takes to the skies: how new technologies are reconfiguring spatialities of power in desert locust management. _Political Geography_, 98, 102732.
* Gomez, D., et al. (2018). Machine learning approach to locate desert locust breeding areas based on ESA CCI soil moisture. _Journal of Applied Remote Sensing_, 12(3), 036011.
* Gomez, D., et al. (2021). Prediction of desert locust breeding areas using machine learning methods and SMOS (mir_smnt2) near real time product. _Journal of Arid Environments_, 194, 104599.
* Gross, M. (2021). How locusts become a plague. _Current Biology_, 31(10), R459-R461.
* Anstey, M. L., Rogers, S. M., Ott, S. R., Burrows, M., and Simpson, S. J. (2009). Serotonin mediates behavioral gregarization underlying swarm formation in desert locusts. _Science_, 323(5914), 627-630.
* Moishin et al. (2021) Moishin, M., Deo, R. C., Prasad, R., Raj, N., and Abdulla, S. Designing deep-based learning flood forecast model with convlstm hybrid algorithm. _IEEE Access_, 9:50982-50993, 2021.
* Mullie et al. (2023) Mullie, W. C., Prakash, A., Muller, A., and Lazutkaite, E. Insecticide use against desert locust in the horn of africa 2019-2021 reveals a pressing need for change. _Agronomy_, 13(3), 2023. ISSN 2073-4395. doi: 10.3390/agronomy13030819. URL [https://www.mdpi.com/2073-4395/13/3/819](https://www.mdpi.com/2073-4395/13/3/819).
* Peng et al. (2020) Peng, W., Ma, N. L., Zhang, D., Zhou, Q., Yue, X., Khoo, S. C., Yang, H., Guan, R., Chen, H., Zhang, X., et al. A review of historical and recent locust outbreaks: Links to global warming, food security and mitigation strategies. _Environmental research_, 191:110046, 2020.
* 326, 2021.
* Rogers et al. (2003) Rogers, S. M., Matheson, T., Despland, E., Dodgson, T., Burrows, M., and Simpson, S. J. Mechanosensory-induced behavioural gregarization in the desert locust schistocerca gregaria. _Journal of Experimental Biology_, 206(22):3991-4002, 2003.
* Sanchez-Caballero et al. (2020) Sanchez-Caballero, A., Fuentes-Jimenez, D., and Losada-Gutierrez, C. Exploiting the convlstm: Human action recognition using raw depth video-based recurrent neural networks. _arXiv preprint arXiv:2006.07744_, 2020.
* Shi et al. (2015) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., and Woo, W.-c. Convolutional lstm network: A machine learning approach for precipitation nowcasting. _Advances in neural information processing systems_, 28, 2015.
* Symmons & Cressman (2001) Symmons, P. and Cressman, K. Desert locust guidelines: biology and behaviour. _FAO, Rome_, pp. 1-42, 2001.
* Tabar et al. (2021) Tabar, M., Gluck, J., Goyal, A., Jiang, F., Morr, D., Kehs, A., Lee, D., Hughes, D. P., and Yadav, A. A plan for tackling the locust crisis in east africa: harnessing spatiotemporal deep models for locust movement forecasting. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, pp. 3595-3604, 2021.
* Uvarov (1957) Uvarov, B. The aridity factor in the ecology of locusts and grasshoppers of the old world. In _Arid Zone Research VIII. Human and Animal Ecology. Reviews of Research_, pp. 164-98. UNESCO, Paris, 1957.
* Uvarov (1977) Uvarov, B. _Grasshoppers and Locusts: A Handbook of General Acridology_, volume 1-2. Centre for Overseas Pest Research, London, 1977.
* Wang et al. (2018) Wang, S., Zhang, R., Deng, Y., Chen, K., Xiao, D., Peng, P., and Jiang, T. Discrimination of smoking status by mri based on deep learning method. _Quantitative Imaging in Medicine and Surgery_, 8(11):1113, 2018.
* Wu et al. (2018) Wu, K., Shen, Y., and Wang, S. 3d convolutional neural network for regional precipitation nowcasting. _Journal of Image and Signal Processing_, 7(4):200-212, 2018.
* Yusuf et al. (2022) Yusuf, I. S., ab Tessera, K., Tumiel, T., Slim, Z., Kerkeni, A., Nevo, S., and Pretorius, A. On pseudo-absence generation and machine learning for locust breeding ground prediction in africa, 2022.

Desert locust swarms present a major threat to agriculture and food security. Addressing this challenge, our study develops an operationally-ready model for predicting locust breeding grounds, which has the potential to enhance early warning systems and targeted control measures. We curated a dataset from the United Nations Food and Agriculture Organization's (UN-FAO) locust observation records and analyzed it using two types of spatio-temporal input features: remotely-sensed environmental and climate data as well as multi-spectral earth observation images. Our approach employed custom deep learning models (three-dimensional and LSTM-based recurrent convolutional networks), along with the geospatial foundational model Prithvi recently released by Jakubik et al. (2023). These models notably outperformed existing baselines, with the Prithvi-based model, fine-tuned on multi-spectral images from NASA's Harmonized Landsat and Sentinel-2 (HLS) dataset, achieving the highest accuracy, F1 and ROC-AUC scores (83.03%, 81.53% and 87.69%, respectively). A significant finding from our research is that multi-spectral earth observation images alone are sufficient for effective locust breeding ground prediction without the need to explicitly incorporate climatic or environmental features.
# USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution
Vikrant Rangnekar
Uddeshya Upadhyay
Zeynep Akata
Biplab Banerjee
Centre for Machine Intelligence and Data Science (CMInDS), IIT Bombay; University of Tübingen; Max Planck Institute for Intelligent Systems, Tübingen
## 1 Introduction
The paradigm of dense prediction is central to computer vision: pixel-level regression tasks such as super-resolution, restoration, and depth estimation support holistic scene understanding. A common example of a pixel-level (i.e., dense) regression task is _image super-resolution_ (SR), the process of recovering high-resolution (HR) images from their low-resolution (LR) versions. It is an important class of image processing techniques in computer vision and deep learning, and it offers a wide range of real-world applications, such as medical imaging (Li et al., 2021), satellite imaging (Verpoorter et al., 2014), surveillance (Caner et al., 2003) and security (Gohshi, 2015), and remote sensing (Yang et al., 2015), to name a few. Well-performing techniques for super-resolution often rely on deep learning-based methods that are trained in a supervised fashion, requiring high-resolution data as groundtruth. However, acquiring high-resolution imaging data (to serve as labels) may be infeasible for many real-world applications. Consider the example of histopathology microscopy in medical imaging, where a typical digital microscope takes significantly longer to acquire a high-resolution (i.e., high-magnification) scan of a slide than a low-magnification one (Aeffner et al., 2018; Hamilton et al., 2014). Moreover, the acquired high-resolution scans also have a significantly larger memory footprint, increasing storage requirements (Bertram and Klopfleisch, 2017). Similarly, acquiring high spatial resolution images from satellites for remote sensing requires expensive sensors and hardware and has significantly higher operating costs (Cornebise et al., 2018, 2022). In such scenarios, generating a large volume of training samples is infeasible.
As a remedy, concepts like zero-shot SR and single-image SR have been proposed. Nevertheless, zero-shot SR still requires ample supervision from the test image patches (Shocher et al., 2018) to learn a transferrable model for novel scenarios with divergent distributions (Soh et al., 2020), and the performance of single-image SR models is still limited by the lack of sufficient labeled data (Lim et al., 2017). Notwithstanding these advances, there are situations where the number of labeled training samples is restricted by a pre-defined budget. For example, in histopathology microscopy, the constraint on available resources may allow high-resolution acquisition for only a limited number of patients/microscopy slides. One viable solution in this regard is to select a subset of highly representative training samples from the available training set while respecting the budget, and to use them to train the SR model. This corresponds to the notion of active learning for subset selection. However, selecting the subset is challenging because we need a quantitative measure of the eligibility of a given LR-HR training pair to be selected. Many works have explored different _query functions_ to select a subset to label from a larger dataset [1, 14, 15]. However, most of them have been applied to classification or low-dimensional regression problems [11], and there still exists a gap on how to address this for dense regression tasks (e.g., super-resolution). The active learning strategy of labeling those points for which the current model is least certain has been studied well in the context of classification [13]. While there are recent advances in uncertainty estimation using neural networks for dense regression [12, 13], it is yet to be studied whether they can be leveraged in active learning for dense regression.
In summary, our contributions are as follows: (i) We show how statistical image models can help alleviate the need for a large volume of high-resolution imaging data. (ii) We show that probabilistic deep networks, along with the statistical image models, can be used to learn an informative prior for niche-domain datasets that may allow only limited access to high-resolution data. (iii) Our probabilistic deep network trained with the statistical image models allows us to estimate the uncertainty for samples in a niche domain, which can be leveraged for active learning, as illustrated in Figure 1.
## 2 Related Work
Active Learning. These are techniques that involve selecting a minimal data subset to be annotated that represents the entire dataset and provides maximum performance gains. Querying strategies for active learning can be broadly grouped into three categories: heterogeneity-based, performance-based, and representativeness-based models. Uncertainty sampling [1, 14, 15, 16], a type of heterogeneity-based model, is a standard active learning strategy where the learner aims to label those samples whose labelings are most uncertain. Non-Bayesian approaches [14, 15] dealing with entropy, distance from the decision boundary, etc., also exist but are not scalable for deep learning [17]. Representation-based methods that aim at increasing the diversity in a batch [11] have also been studied. However, most of these works have been studied in the context of classification or low-dimensional regression problems, and the literature on dense regression is still sparse.
Statistical Image Models. The set of \(n\times n\) RGB images occupies the space \(\mathbb{R}^{3n^{2}}\); however, structured images occupy only a small region of that space. The statistical properties of the samples in this small structured space can be leveraged to generate synthetic data whose statistics resemble those of real-world structured images. For instance, the observation that natural images follow a power law with respect to the magnitude of their Fourier Transform (FT) formed the basis for Wiener image denoising [16], Dead Leaves models [12], and fractals as image models [13, 15]. Similarly, works like [12, 14, 15] showed that the responses of zero-mean wavelet filters to natural images are sparse and follow a generalized Laplacian distribution. Works like [12, 13] introduced statistical models capable of producing realistic-looking textures. The recent work [15] takes this research a step closer to realistic image generation by learning from procedural noise processes and using the generated samples for pre-training neural networks. However, it is only applied to classification.
Figure 1: The proposed framework _USIM-DAL._ (Left-to-right) We train a probabilistic deep network for a dense regression task (e.g., super-resolution) on synthetic samples obtained from statistical image models as described in Section 3. The pre-trained model is used to identify the high-uncertainty samples from the domain-specific unlabeled set. Top-K highly uncertain samples are chosen for labeling on which the pre-trained network is further fine-tuned.
Super-resolution. A large body of work uses CNN-based methods to enhance the resolution of an image [1, 22, 23, 24]. Attention mechanisms have proven to be ubiquitous, with [25] introducing channel and spatial attention modules for adaptive feature refinement. Transformer-based endeavors such as [10] achieve state-of-the-art results using multi-head self-attention for SR. [11] uses a probabilistic diffusion model and performs SR through an iterative denoising process. Works like [23, 24] use internal and external recurrence of information to obtain superior SR performance during inference. However, these works do not consider the problem of super-resolution in the active learning context, leaving a gap in the literature.
Uncertainty Estimation.Quantifying uncertainty in machine learning models is crucial for safety-critical applications [22, 25, 26, 1]. Uncertainty can be broadly categorized into two classes: (i) Epistemic uncertainty (i.e., uncertainty in model weights [1, 13, 14, 15]). (ii) Aleatoric uncertainty (i.e., noise inherent in the observations) [16, 23]. The dense predictive uncertainty may be considered as a proxy for error and can be used for active learning purposes [11].
## 3 Method
We first formulate the problem in Section 3.1, and present preliminaries on active learning, statistical image models, and uncertainty estimation in Section 3.2. In Section 3.3, we describe the construction of _USIM-DAL_ that learns a prior via statistical image modeling, which is later used to select the most informative samples from the unlabeled set for labeling and further improving the model.
### Problem Formulation
Let \\(\\mathcal{D}_{U}=\\{\\mathbf{x}_{i}\\}_{i=1}^{N}\\) be the unlabeled set of input images from domain \\(\\mathbf{X}\\) (i.e., \\(\\mathbf{x}_{i}\\in\\mathbf{X}\\forall i\\)). We consider the task where images (\\(\\mathbf{x}\\)) are to be mapped to another set of dense continuous labels (\\(\\mathbf{y}\\), e.g., other images, such that \\(\\mathbf{y}_{i}\\in\\mathbf{Y}\\forall i\\)). We want to learn a mapping \\(\\mathbf{\\Psi}\\) for the same, i.e., \\(\\mathbf{\\Psi}:\\mathbf{X}\\rightarrow\\mathbf{Y}\\). However, we want to learn it under the constraint that we do not have sufficient _budget_ to \"label\" all the \\(N\\) samples in \\(\\mathcal{D}_{U}\\) (i.e., acquire all the corresponding \\(\\mathbf{y}\\)), but we do have a budget to label a significantly smaller subset of \\(\\mathcal{D}_{U}\\) with \\(K<<N\\) samples, say \\(\\mathcal{D}_{U}^{K}\\). This is a real-world constraint, as discussed in Section 2. In this work, we focus on the problem of super-resolution where the domain \\(\\mathbf{Y}\\) consists of high-resolution images (corresponding to the low-resolution images in domain \\(\\mathbf{X}\\)).
We tackle the problem of choosing the set of \\(K<<N\\) samples (\\(\\mathcal{D}_{U}^{K}\\)) that are highly representative of the entire unlabeled training set \\(\\mathcal{D}_{U}\\), such that the learned mapping \\(\\mathbf{\\Psi}\\) on unseen data from a similar domain performs well.
### Preliminaries
Active Learning.As discussed above, given a set of \\(N\\) unlabeled images \\(\\mathcal{D}_{U}\\), we want to choose a set of \\(K<<N\\) samples (\\(\\mathcal{D}_{U}^{K}\\)) that are highly representative of the entire unlabeled training set \\(\\mathcal{D}_{U}\\). This is the problem of active learning, which consists of _query strategies_ that maps the entire unlabeled set \\(\\mathcal{D}_{U}\\) to its subset. That is, the query strategy (constrained to choose \\(K\\) samples and parameterized by \\(\\phi\\)) is given by, \\(\\mathcal{Q}_{K,\\phi}:\\mathcal{D}_{U}\\rightarrow\\mathcal{D}_{U}^{K}\\). Many works explore designing the query strategy \\(\\mathcal{Q}_{K,\\phi}\\)[1, 13, 14]. However, they seldom attempt to design such a strategy for dense regression.
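To make the notion of a query strategy concrete, the sketch below shows one generic way \(\mathcal{Q}_{K,\phi}\) could be expressed in code; the scoring function \(\phi\) is left abstract and all names are illustrative, not a specific implementation from the literature.

```python
# Minimal sketch of a budgeted query strategy Q_{K, phi}: given an unlabeled
# pool and a per-sample "informativeness" score, return the K samples to label.
from typing import Any, Callable, List, Sequence

def query_top_k(pool: Sequence[Any],
                score_fn: Callable[[Any], float],
                k: int) -> List[Any]:
    """Return the k pool elements with the highest score."""
    ranked = sorted(pool, key=score_fn, reverse=True)
    return list(ranked[:k])

# A constant or random score reduces this to random selection; an
# uncertainty-based score (Section 3.3) turns it into the USIM-DAL rule.
```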
Statistical Image Models (SIM). As discussed in [1], the statistical properties of RGB images can be exploited to generate synthetic images that can serve as an excellent pre-training learning signal. The generative model (based on statistical properties of RGB images) is described as \(\mathcal{G}(\cdot;\theta_{G}):\mathbf{z}\rightarrow\mathbf{x}\), where \(\mathbf{z}\) is a stochastic latent variable and \(\mathbf{x}\) is an image. Image generation is modelled as a hierarchical process in which, first, the parameters of a model are sampled; then the image is sampled given these parameters and stochastic noise. Previous works [1] highlight the following statistical models. (i) **Spectrum:** based on the magnitude of the Fourier transform (FT). The FT of many natural images follows a power law, i.e., \(\frac{1}{|f|^{\alpha}}\), where \(|f|\) is the magnitude of frequency \(f\), and \(\alpha\) is a constant close to 1. For generative models, the sampled images are constrained to be random noise images whose FT magnitude follows \(\frac{1}{|f_{x}|^{a}+|f_{y}|^{b}}\), with \(a\) and \(b\) being two random numbers uniformly sampled as detailed in [1]. (ii) **Wavelet-marginal model (WMM):** generates textures by modeling their histograms of wavelet coefficients, as discussed in [11, 13]. (iii) **Color histograms:** As discussed in [1], this generative
Figure 2: Samples generated from Statistical Image Models (combination of Spectrum + WMM + Color histogram).
model follows the color distribution of the dead-leaves model [1]. Combining all these different models allows for capturing colour distributions, spectral components, and wavelet distributions that mimic those typical for natural images. Figure 2 shows examples of generated samples from such models.
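As an illustration of the spectrum component described above, the sketch below samples a random-phase noise image whose Fourier magnitude follows \(1/(|f_{x}|^{a}+|f_{y}|^{b})\). The exponent range, the single-channel simplification, and the small epsilon are assumptions for this sketch, not the exact procedure of [1].

```python
# Minimal sketch of the "spectrum" statistical image model: a random-phase
# noise image whose FT magnitude follows 1 / (|f_x|^a + |f_y|^b).
# Exponent range and the single-channel simplification are assumptions.
import numpy as np

def sample_spectrum_image(size: int = 64, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    a, b = rng.uniform(0.5, 3.5, size=2)               # random exponents
    fx = np.fft.fftfreq(size)[None, :]                  # horizontal frequencies
    fy = np.fft.fftfreq(size)[:, None]                  # vertical frequencies
    mag = 1.0 / (np.abs(fx) ** a + np.abs(fy) ** b + 1e-8)
    mag[0, 0] = 0.0                                      # drop the DC component
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(size, size))
    img = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
    return (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]

noise_image = sample_spectrum_image(64)
```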
Uncertainty Estimation. Various works [12, 13] have proposed different methods to model the uncertainty estimates in the predictions made by DNNs for different tasks. Interestingly, recent works [13, 14] have shown that for many real-world vision applications, modeling the aleatoric uncertainty allows for capturing erroneous predictions that may happen with out-of-distribution samples. To estimate the uncertainty for regression tasks using a deep network (say \(\mathbf{\Psi}(\cdot;\zeta):\mathbf{X}\rightarrow\mathbf{Y}\)), the model must capture the output distribution \(\mathcal{P}_{Y|X}\). This is often done by approximating \(\mathcal{P}_{Y|X}\) with a parametric distribution and learning the parameters of that distribution using the deep network, which are then used to maximize the likelihood function. That is, for an input \(\mathbf{x}_{i}\), the model produces a set of parameters representing the output, given by \(\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\}:=\mathbf{\Psi}(\mathbf{x}_{i};\zeta)\), that characterizes the distribution \(\mathcal{P}_{Y|X}(\mathbf{y};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\), such that \(\mathbf{y}_{i}\sim\mathcal{P}_{Y|X}(\mathbf{y};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\). The likelihood \(\mathcal{L}(\zeta;\mathcal{D}):=\prod_{i=1}^{N}\mathcal{P}_{Y|X}(\mathbf{y}_{i};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\) is then maximized to estimate the optimal parameters of the network. Typically, the parameterized distribution is chosen to be a _heteroscedastic_ Gaussian distribution, in which case \(\mathbf{\Psi}(\cdot;\zeta)\) is designed to predict the _mean_ and _variance_ of the Gaussian distribution, i.e., \(\{\hat{\mathbf{y}}_{i},\hat{\sigma}_{i}^{2}\}:=\mathbf{\Psi}(\mathbf{x}_{i};\zeta)\). The optimization problem becomes,
\\[\\zeta^{*}=\\underset{\\zeta}{\\text{argmin}}\\sum_{i=1}^{N}\\frac{|\\hat{\\mathbf{y}}_ {i}-\\mathbf{y}_{i}|^{2}}{2\\hat{\\sigma}_{i}^{2}}+\\frac{\\log(\\hat{\\sigma}_{i}^{2} )}{2} \\tag{1}\\]
With Uncertainty\\((\\hat{\\mathbf{y}}_{i})=\\hat{\\sigma}_{i}^{2}\\). An important observation from Equation 1 is that, ignoring the dependence through \\(\\zeta\\), the solution to Equation 1 decouples estimation of \\(\\hat{\\mathbf{y}}_{i}\\) and \\(\\hat{\\sigma}_{i}\\). That is, for minimizing with respect to \\(\\hat{\\mathbf{y}}_{i}\\) we need,
\\[\\frac{\\partial\\left(\\sum_{i=1}^{N}\\frac{|\\hat{\\mathbf{y}}_{i}- \\mathbf{y}_{i}|^{2}}{2\\hat{\\sigma}_{i}^{2}}+\\frac{\\log(\\hat{\\sigma}_{i}^{2})} {2}\\right)}{\\partial\\hat{\\mathbf{y}}_{i}}=0 \\tag{2}\\] \\[\\frac{\\partial^{2}\\left(\\sum_{i=1}^{N}\\frac{|\\hat{\\mathbf{y}}_{i} -\\mathbf{y}_{i}|^{2}}{2\\hat{\\sigma}_{i}^{2}}+\\frac{\\log(\\hat{\\sigma}_{i}^{2}) }{2}\\right)}{\\partial\\hat{\\mathbf{y}}_{i}^{2}}>0 \\tag{3}\\]
Equation 2 & 3 lead to \\(\\hat{\\mathbf{y}}_{i}=\\mathbf{y}_{i}\\ \\forall i\\). Similarly for minimizing with respect to \\(\\hat{\\sigma}_{i}\\) we need,
\\[\\frac{\\partial\\left(\\sum_{i=1}^{N}\\frac{|\\hat{\\mathbf{y}}_{i}- \\mathbf{y}_{i}|^{2}}{2\\hat{\\sigma}_{i}^{2}}+\\frac{\\log(\\hat{\\sigma}_{i}^{2})} {2}\\right)}{\\partial\\hat{\\sigma}_{i}}=0 \\tag{4}\\] \\[\\frac{\\partial^{2}\\left(\\sum_{i=1}^{N}\\frac{|\\hat{\\mathbf{y}}_{i }-\\mathbf{y}_{i}|^{2}}{2\\hat{\\sigma}_{i}^{2}}+\\frac{\\log(\\hat{\\sigma}_{i}^{2}) }{2}\\right)}{\\partial\\hat{\\sigma}_{i}^{2}}>0 \\tag{5}\\]
Equation 4 & 5 lead to \\(\\hat{\\sigma}_{i}^{2}=|\\hat{\\mathbf{y}}_{i}-\\mathbf{y}_{i}|^{2}\\ \\forall i\\). That is, the estimation \\(\\hat{\\sigma}_{i}^{2}\\) should perfectly reflect the squared error. Therefore, a higher \\(\\hat{\\sigma}_{i}^{2}\\) indicates higher error. We leverage this observation to design our dense active learning framework as described in Section 3.3.
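As an illustration of how the objective in Equation 1 is typically implemented, the PyTorch-style sketch below computes the heteroscedastic Gaussian negative log-likelihood per pixel. Predicting \(\log\hat{\sigma}^{2}\) for numerical stability is an implementation choice assumed here, not something mandated by the text.

```python
# Minimal sketch of the heteroscedastic Gaussian NLL in Equation 1.
# The network is assumed to output a per-pixel mean and log-variance.
import torch

def heteroscedastic_nll(y_hat: torch.Tensor,
                        log_var: torch.Tensor,
                        y: torch.Tensor) -> torch.Tensor:
    """Mean of |y_hat - y|^2 / (2 sigma^2) + log(sigma^2) / 2 over all pixels."""
    inv_var = torch.exp(-log_var)                 # 1 / sigma^2
    return (0.5 * inv_var * (y_hat - y) ** 2 + 0.5 * log_var).mean()

# The predicted uncertainty used later for ranking is sigma^2 = exp(log_var).
```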
### Constructing _Usim-Dal_
To tackle the problem described in Section 3.1 (i.e., choosing a small subset), we leverage the fact that, even before any domain-specific labels are available, we can train a model on samples drawn from the statistical image models described above. This model can then be run in inference mode on the unlabeled domain-specific dataset to identify the high-uncertainty samples. The high-uncertainty samples can then be labelled and used to fine-tune the model.
We constrain the generative process for the statistical image models as follows. Similar to [1], we treat image generation as a hierarchical process in which first the parameters of a model, \(\theta_{G}\), are sampled, and then the image is sampled given these parameters and stochastic noise, i.e.,
\\[\\theta_{G}\\sim prior(\\theta_{G})\\text{ and }\\mathbf{z}\\sim prior( \\mathbf{z}) \\tag{6}\\] \\[\\mathbf{x}=\\mathcal{G}(\\mathbf{z};\\theta_{G}) \\tag{7}\\]
In particular, for super-resolution, we create a large (synthetic) labelled dataset using the samples from the statistical image models, say \\(\\mathcal{D}_{SL}=\\{(\\texttt{low}(\\mathbf{x}_{s,i}),\\mathbf{x}_{s,i})\\}_{i=1}^{M}\\). Where \\(\\mathbf{x}_{s,i}\\) are generated samples from statistical image model and \\(\\texttt{low}(\\cdot)\\), is the 4\\(\\times\\) down-sampling operation. We then train the network \\(\\mathbf{\\Psi}(\\cdot;\\zeta)\\) on \\(\\mathcal{D}_{SL}\\) using Equation 1, leading to the optimal parameter \\(\\zeta_{SL}^{*}\\), as shown in Figure 1. The trained model \\(\\mathbf{\\Psi}(\\cdot;\\zeta_{SL}^{*})\\) is then run in inference mode on all the samples of the unlabeled set \\(\\mathcal{D}_{U}\\) and gather the top uncertain samples for labeling, that is,
\\[\\{\\hat{\\mathbf{y}}_{i},\\hat{\\sigma}_{i}\\}:=\\mathbf{\\Psi}(\\mathbf{x}_{i}; \\zeta_{SL}^{*})\\ \\forall\\mathbf{x}_{i}\\in\\mathcal{D}_{U} \\tag{8}\\] \\[\\mathcal{D}_{U}^{K}:=\\{\\mathbf{x}_{j}\\}\\forall j\\in\\texttt{topK} \\left(\\{(\\hat{\\sigma}_{i})\\}_{i=1}^{N}\\right) \\tag{9}\\]
Where, \\(\\langle\\cdot\\rangle\\) represents the mean operation, and \\(\\texttt{topK}\\big{(}\\{\\hat{\\sigma}_{i}\\}_{i=1}^{N}\\big{)}\\) returns the indices of \"top-K\" most uncertain samples (i.e., mean uncertainty is high). We then acquire the labels for the samples in \\(\\mathcal{D}_{U}^{K}\\), giving us, \\(\\mathcal{D}_{UL}^{K}=\\{(\\mathbf{x}_{j},\\mathbf{y}_{j})\\}\\). As discussed in Section 3.2, the input samples in \\(\\mathbf{D}_{UL}^{K}\\) serve as a proxy to the set of \\(K\\) samples that would have the highest error between the prediction made by the model \\(\\mathbf{\\Psi}(\\cdot;\\zeta_{SL}^{*})\\) and the ground truth. That leads to better fine-tuning. The model \\(\\mathbf{\\Psi}(\\cdot;\\zeta_{SL}^{*})\\) is then fine-tuned on \\(\\mathcal{D}_{UL}^{K}\\) via Equation 1, leading to the final state of the model \\(\\mathbf{\\Psi}(\\cdot;\\zeta_{KL}^{*})\\) (shown in Figure 1) that can be used for inferring on the new sample.
_USIM-DAL_ models the aleatoric uncertainties in the prediction. Still, it is crucial to note that it leverages the Statistical Image Modeling (SIM)-based synthetic images for pertaining and learning important priors for color images that broadly capture different niche domains such as medical images, satellite images, etc. Therefore, the initial model, capable of estimating the aleatoric uncertainty (trained on SIM-based synthetic images), can reasonably capture the uncertainty as a proxy for reconstruction error for domain-specific images that are not necessarily out-of-distribution images. Moreover, picking samples with high reconstruction errors for subsequent fine-tuning of the model yields better performance on similar highly erroneous cases, iteratively improving the model. Furthermore, in high-dimensional regression cases, the aleatoric and epistemic uncertainty often influence each other and are not independent Kendall and Gal (2017), Upadhyay et al. (2022), Zhang et al. (2019).
## 4 Experiments and Results
We provide an overview of the experiments performed and the results obtained. In Section 4.1, we describe the task and various methods used for comparison. Section 4.3 analyzes the performance of various dense active learning algorithms for super-resolution and shows that our proposed method _USIM-DAL_ can help greatly improve the performance when constrained with a limited budget.
### Tasks, Datasets, and Methods
We present the results of all our experiments on the super-resolution task. We demonstrate our proposed framework using a probabilistic SRGAN model (an adaptation of SRGAN (Ledig et al., 2017) that estimates pixel-wise uncertainty as described in (Kendall and Gal, 2017)). We evaluate the performance of various models on a wide variety of domains: (i) natural images (with Set5, Set14, BSD100, and the Visual Genome dataset (Ledig et al., 2017; Martin et al., 2001; Krishna et al., 2017)); (ii) satellite images (with the PatternNet dataset (Zhou et al., 2018)); (iii) histopathology medical images (with the Camelyon dataset (Litjens et al., 2018)). The evaluation protocol constrains every training-domain dataset to a small fixed number of images (also called the _training budget_). We used training budgets of 500, 1000, 2000, 3000, and 5000 images for the natural and satellite domains. For both natural and satellite images, the input image resolution was set to \(64\times 64\). For natural images, the training dataset was obtained from Visual Genome (separate from the test set). Similarly, for the histopathology medical images, the input image resolution was set to \(32\times 32\) and we used training budgets of 4000, 8000, 12000, and 16000.
We compare the super-resolution performance in terms of metrics MSE, MAE, PSNR, and SSIM (Wang et al., 2004) for the following methods on respective test sets: (i) SRGAN model trained from scratch with a randomly chosen subset satisfying the training budget from the entire training data (called _Random_). (ii) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling (as described in Section 3.2). This model is called _SIM_. (iii) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling and then fine-tuned on a randomly chosen subset satisfying the training budget from the entire training data, called _SIM+Random_. (iv) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling and then fine-tuned on a subset chosen using uncertainty estimates, satisfying the training budget from the entire training data, called _USIM-DAL_.
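The comparison metrics above could be computed as in the following minimal sketch; using scikit-image's SSIM implementation and assuming H×W×C images in [0, 1] are choices made for this illustration rather than details specified by the paper.

```python
# Minimal sketch of the evaluation metrics (MSE, MAE, PSNR, SSIM) used to
# compare the methods; images are assumed to be float arrays in [0, 1], HxWxC.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(sr: np.ndarray, hr: np.ndarray) -> dict:
    mse = float(np.mean((sr - hr) ** 2))
    mae = float(np.mean(np.abs(sr - hr)))
    psnr = 10.0 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    return {"MSE": mse, "MAE": mae, "PSNR": psnr,
            "SSIM": float(ssim(sr, hr, channel_axis=-1, data_range=1.0))}
```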
### Dense Active Learning via Uncertainty Estimation
Our method proposes to utilize a probabilistic network that is learned from synthetic images sampled from statistical image models (i.e., \\(\\mathbf{\\Psi}(\\cdot;\\zeta_{SL}^{*})\\) mentioned in Section 3.3).
Figure 3: Output of the pre-trained probabilistic deep network (which is trained using synthetic images sampled from statistical image models) on samples from _unseen_ natural image datasets. (a) LR input, (b) HR groundtruth, (c) Predicted output, SR, from the network, (d) Predicted uncertainty from the network, (e) Error between SR and groundtruth.
Figure 3 shows the output of probabilistic SRGAN trained on synthetic images evaluated on samples from natural images. We observe that (i) The predicted super-resolved images (Figure 3-(c)) are still reasonable. (ii) The uncertainty estimates (Figure 3-(d)) still resemble the structures from the images and are a reasonable proxy to the error maps (Figure 3-(e)) between the predictions and the ground truth, even though the model has never seen the natural images.
We use the predicted uncertainty from this model to identify the samples from the real-world domain that would lead to high errors. Figure 4 shows the distribution of mean uncertainty values for samples in the (i) statistical noise, (ii) natural, (iii) satellite, and (iv) medical image datasets. We notice that the model trained on synthetic images leads to a Gaussian distribution for the mean uncertainty values on the synthetic image datasets. We obtain similar distributions for the datasets from other domains. This further emphasizes that uncertainty estimates obtained from \(\boldsymbol{\Psi}(\cdot;\zeta_{SL}^{*})\) can be used as a proxy to identify the highly uncertain (and therefore erroneous) samples from different domains (i.e., the samples close to the right tail of the distributions).
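The per-dataset distributions in Figure 4 can be obtained by aggregating the mean per-image uncertainty, as in the brief sketch below; the model interface and dataset iterables are assumed to exist.

```python
# Minimal sketch of how the Figure 4 distributions could be produced: the mean
# predicted uncertainty per image, collected separately for each domain.
import numpy as np

def mean_uncertainty_per_image(model, images):
    # model(x) is assumed to return (y_hat, sigma2) with per-pixel uncertainty.
    return np.array([float(np.mean(model(x)[1])) for x in images])

# Example: histogram the values for one domain (dataset iterable is assumed).
# values = mean_uncertainty_per_image(model, satellite_images)
# hist, edges = np.histogram(values, bins=50)
```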
### _Usim-Dal_ for Super-Resolution
Table 1 shows the performance of different methods on multiple natural image datasets, including Set5, Set14, BSD100, and Visual Genome (VG). We observe that with the smallest training budget of 500 images, _USIM-DAL_ performs the best with a PSNR/MAE of 25.174/0.035 (Table 1 shows the results with a scaling factor for better accommodation) compared to _SIM+Random_ with PSNR/MAE of 25/0.039 and _SIM_ with PSNR/MAE of 24.8/0.037. We also notice that at this budget, choosing a random subset of the training dataset to train the model from scratch performs the worst, with PSNR/MAE of 23.36/0.043. As the budget increases (left to right in Table 1), the performance of all the methods also improves. However, a similar trend is observed where _USIM-DAL_ performs better than _SIM+Random_, _SIM_, and _Random_. We observe a similar trend for other natural image datasets. This allows us to make the following observations: (i) Using a synthetic training image dataset (sampled from the statistical image model, discussed in Section 3.2) leads to better performance than using a small random subset of training images from the original domain (i.e., _SIM_ better than _Random_). (ii) Using the above synthetic training image dataset to train a model and later fine-tuning it with domain-specific samples leads to further improvements (i.e., both _USIM-DAL_ and _SIM+Random_ better than _SIM_). (iii) With a limited budget, fine-tuning a model (pre-trained on synthetic
Figure 4: Distribution of mean uncertainty for samples in Statistical Image Noise, PatternNet (satellite), Camelyon (medical), Visual Genome (natural) datasets.
Figure 5: Evaluation of various methods on histopathology medical domain (i.e., Camelyon dataset) and satellite imaging domain (i.e., PatternNet dataset) at various fine-tuning budgets. The yellow curve is the _SIM_ baseline. The red curve is the SIM model fine-tuned with random samples (i.e., _SIM+Random_). The blue curve is the SIM model fine-tuned with the highest uncertain samples (i.e., _USIM-DAL_).
training image dataset) using high-uncertainty samples from the training set (as decided by the _USIM-DAL_) is better than using the random samples from the training set (i.e., _USIM-DAL_ better than _SIM+Random_).
We perform a similar set of experiments with other imaging domains, namely, (i) Satellite imaging (using PatternNet dataset) and (ii) Medical imaging (using Camelyon histopathology dataset). We observe a similar (to natural images) trend in these domains. Figure 5 shows the performance (measured using PSNR) for different methods on these two domains, with varying training budgets. For satellite imaging, at the lowest training budget of 500 images, _USIM-DAL_ with PSNR of 23.5 performs better than _SIM+Random_ with PSNR of 23.4 and _SIM_ with a PSNR of 23.2. We observe that as the training budget increases to 2000 images, _USIM-DAL_ (with PSNR of 23.6) outperforms _SIM+Random_ (with PSNR of 23.35) with an even higher margin. As we increase the training budget further, the _SIM+Random_ model starts performing similarly to _USIM-DAL_. With a budget of 5000 samples, _USIM-DAL_ has a performance of 23.62, and _SIM+Random_ has a performance of 23.60. Given a domain with large (specific to datasets) training budgets, the performance achieved from random sampling and active learning strategies will converge.
Table 1: Quantitative comparison (MSE/MAE/PSNR/SSIM, reported with a scaling factor) of _Random_, _SIM_, _SIM+Random_, and _USIM-DAL_ on the natural-image test sets (Set5, Set14, BSD100, and Visual Genome) at training budgets of 500, 1000, 2000, 3000, and 5000 images.
For the Camelyon dataset, we use an input image resolution of 32\(\times\)32. We observe that _USIM-DAL_ performs the best across all budgets when compared to _SIM+Random_ and _SIM_. We also note that the high-frequency features typically present in high-resolution scans (i.e., obtained at 20\(\times\) or 40\(\times\) magnification on a histopathology microscope) make the super-resolution problem harder and require more data to achieve good performance.
Figure 6 summarizes the performance gain (in terms of PSNR) by using _USIM-DAL_ (i.e., uncertainty-based active learning strategy for dense regression) compared to _SIM+Random_ (i.e., no active learning, randomly choosing a subset from real training domain), relative to _SIM_ (i.e., no real samples used from the domain) at best performing limited budgets. That is, the relative percentage boost in performance is reported as:
\\[\\frac{(\\text{PSNR}_{\\text{USIM-DAL}}-\\text{PSNR}_{\\text{SIM+Random}})*100}{ \\text{PSNR}_{\\text{SIM+Random}}-\\text{PSNR}_{\\text{SIM}}} \\tag{10}\\]
We note that _USIM-DAL_ consistently performs better than _SIM+Random_, with the relative percentage boost in PSNR of 26.14% for Set5 to 142.69% for PatternNet. Figure 7 shows the qualitative outputs of different models on multiple datasets. On all the datasets, we notice that the output obtained by _USIM-DAL_ is better than the output of _SIM+Random_ that is better than _SIM_ and _Random_.
## 5 Discussion and Conclusion
In this work, we presented a novel framework called _USIM-DAL_ that is designed to perform active learning for dense-regression tasks, such as image super-resolution. Dense-regression tasks, such as super-resolution, are an important class of problem for which deep learning offers a wide range of solutions applicable to medical imaging, security, and remote sensing. However, most of these solutions often rely on supervision signals derived from high-resolution images. Due to the time-consuming acquisition of high-resolution images or expensive sensors, hardware, and operational costs involved, it is not always feasible to generate large volumes of high-resolution imaging data. But in real-world scenarios, a limited budget for acquiring high-resolution
Figure 7: Qualitative results from different methods (performing 4\\(\\times\\) super-resolution) including (b) _Random_, (c) _SIM_, (e) _SIM+Random_, (f) _USIM-DAL_ on (i) BSD100, (ii) Visual Genome, (iii) PatternNet, and (iv) Camelyon datasets. (a) LR input, and (d) HR groundtruth. Input resolution for BSD100, Visual Genome, and PatternNet is \\(64\\times 64\\), and for Camelyon is \\(32\\times 32\\). (f) _USIM-DAL_ produces the most visually appealing outputs.
data is often available. This calls for active learning, which chooses a subset from a large unlabeled set to be labeled and used to train the models. While multiple querying strategies (in the context of active learning) exist for classification tasks, those for dense regression tasks are seldom discussed. Our work paves the way for using modern uncertainty estimation techniques for active learning in dense regression tasks. We show that a large synthetic dataset acquired using statistical image models can be used to learn informative priors for various domains, including natural images, medical images, satellite images, and more. The learned prior can then be used to choose a subset consisting of high-uncertainty samples that can then be labeled and used to fine-tune the prior further. Through extensive experimentation, we show that our approach generalizes well to a wide variety of domains, including medical and satellite imaging. We show that active learning performed with the proposed querying strategy (i.e., _USIM-DAL_) leads to gains of up to 140% / 53% with respect to a random selection strategy (i.e., _SIM+Random_) relative to no dataset-specific fine-tuning (i.e., _SIM_) on satellite/medical imaging.
**Acknowledgements.** This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 - Project number 390727645). The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Uddeshya Upadhyay.
## References
* Aeffner et al. (2018) Famke Aeffner, Hibret A Adissu, Michael C Boyle, Robert D Cardiff, Erik Hagendorn, Mark J Hoenerhoff, Robert Klopfleisch, Susan Newbiggine, Dirk Schaudien, Oliver Turner, et al. Digital microscopy, image analysis, and virtual slide repository. _ILAR journal_, 2018.
* Bae et al. (2021) Gwangbin Bae, Ignas Budvytis, and Roberto Cipolla. Estimating and exploiting the aleatoric uncertainty in surface normal estimation. In _ICCV_, 2021.
* Jurjo et al. (2021) Manel Baradad Jurjo, Jonas Wulff, Tongzhou Wang, Phillip Isola, and Antonio Torralba. Learning to see by looking at noise. _NeurIPS_, 2021.
* Beluch et al. (2018) William H. Beluch, Tim Genewein, Andreas Nurnberger, and Jan M. Kohler. The power of ensembles for active learning in image classification. In _CVPR_, 2018.
* Bertram and Klopfleisch (2017) Christof A Bertram and Robert Klopfleisch. The pathologist 2.0: an update on digital pathology in veterinary medicine. _Veterinary pathology_, 2017.
* Blundell et al. (2015) Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In _ICML_, 2015.
* Bose et al. (2022) Rupak Bose, Vikrant Rangnekar, Biplab Banerjee, and Subhasis Chaudhuri. Zero-shot remote sensing image super-resolution based on image continuity and self tessellations. In _GCPR_, 2022.
* Brinker (2003) Klaus Brinker. Incorporating diversity in active learning with support vector machines. In _ICML_, 2003.
* Caner et al. (2003) G. Caner, A.M. Tekalp, and W. Heinzelman. Super resolution recovery for multi-camera surveillance imaging. In _International Conference on Multimedia and Expo_, 2003.
* Cornebise et al. (2018) Julien Cornebise, Daniel Worrall, Micah Farfour, and Milena Marin. Witnessing atrocities: quantifying villages destruction in darfur with crowdsourcing and transfer learning. In _Proc. AI for Social Good NeurIPS2018 Workshop, NeurIPS'18_, 2018.
* Cornebise et al. (2022) Julien Cornebise, Ivan Orsolic, and Freddie Kalaitzis. Open high-resolution satellite imagery: The worldstrat dataset-with application to super-resolution. _arXiv preprint arXiv:2207.06418_, 2022.
* Daxberger et al. (2021) Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. _NeurIPS_, 2021.
* Ebrahimi et al. (2019) Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, and Marcus Rohrbach. Uncertainty-guided continual learning with bayesian neural networks. _arXiv preprint arXiv:1906.02425_, 2019.
* Field (1987) David J Field. Relations between the statistics of natural images and the response properties of cortical cells. _Josa a_, 1987.
* Gohshi (2015) Seiichi Gohshi. Real-time super resolution algorithm for security cameras. In _International Joint Conference on e-Business and Telecommunications (ICETE)_, 2015.
* Gorriz et al. (2017) Marc Gorriz, Axel Carlier, Emmanuel Faure, and Xavier Giro-i Nieto. Cost-effective active learning for melanoma segmentation. _arXiv preprint arXiv:1711.09168_, 2017.
* Graves (2011) Alex Graves. Practical variational inference for neural networks. _NeurIPS_, 2011.
* Hamilton et al. (2014) Peter W Hamilton, Peter Bankhead, Yinhai Wang, Ryan Hutchinson, Declan Kieran, Darragh G McArt, Jacqueline James, and Manuel Salto-Tellez. Digital pathology and image analysis in tissue biomarker research. _Methods_, 2014.
* Heeger and Bergen (1995) David J Heeger and James R Bergen. Pyramid-based texture analysis/synthesis. In _Proceedings of the 22nd annual conference on Computer graphics and interactive techniques_, 1995.
* Jain and Grauman (2016) Suyog Dutt Jain and Kristen Grauman. Active image segmentation propagation. In _CVPR_, 2016.
* Kataoka et al. [2020] Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, and Yutaka Satoh. Pre-training without natural images. In _Proceedings of the Asian Conference on Computer Vision_, 2020.
* Kendall and Gal [2017] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In _NeurIPS_, 2017.
* Kretzmer [1952] Ernest R Kretzmer. Statistics of television signals. _The bell system technical journal_, 1952.
* Krishna et al. [2017] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. _IJCV_, 2017.
* Lakshminarayanan et al. [2016] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. _arXiv preprint arXiv:1612.01474_, 2016.
* Laves et al. [2020] Max-Heinrich Laves, Sontoj Ihler, Jacob F Fast, Luder A Kahrs, and Tobias Ortmaier. Well-calibrated regression uncertainty in medical imaging with deep learning. In _Medical Imaging with Deep Learning_, 2020.
* Ledig et al. [2017] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In _CVPR_, 2017.
* Lee et al. [2001] Ann B Lee, David Mumford, and Jinggang Huang. Occlusion models for natural images: A statistical study of a scale-invariant dead leaves model. _IJCV_, 2001.
* Li et al. [2021] Y Li, Bruno Sixou, and F Peyrin. A review of the deep learning methods for medical images super resolution problems. _Irbm_, 2021.
* Liang et al. [2021] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In _ICCVw_, 2021.
* Lim et al. [2017] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In _CVPRw_, 2017.
* Litjens et al. [2018] Geert Litjens, Peter Bandi, Babak Ehteshami Bejnordi, Oscar Geessink, Maschenka Balkenhol, Peter Bult, Altuna Halilovic, Meyke Hermsen, Rob van de Loo, Rob Vogels, et al. 1399 h&e-stained sentinel lymph node sections of breast cancer patients: the camelyon dataset. _GigaScience_, 2018.
* Martin et al. [2001] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In _ICCV_, 2001.
* Nair et al. [2020] Tanya Nair, Doina Precup, Douglas L Arnold, and Tal Arbel. Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. _Medical image analysis_, 2020.
* Portilla and Simoncelli [2000] Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. _IJCV_, 2000.
* Redies et al. [2008] Christoph Redies, Jens Hasenstein, and Joachim Denzler. Fractal-like image statistics in visual art: similarity to natural scenes. _Spatial vision_, 2008.
* Roy and McCallum [2001] Nicholas Roy and Andrew McCallum. Toward optimal active learning through monte carlo estimation of error reduction. _ICML, Williamstown_, 2001.
* Saharia et al. [2022] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. _IEEE TPAMI_, 2022.
* Sener and Savarese [2017] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. _arXiv preprint arXiv:1708.00489_, 2017.
* Shocher et al. [2018] Assaf Shocher, Nadav Cohen, and Michal Irani. \"zero-shot\" super-resolution using deep internal learning. In _CVPR_, 2018.
* Simoncelli [2005] Eero P Simoncelli. 4.7 statistical modeling of photographic images. _Handbook of Video and Image Processing_, 2005.
* Soh et al. [2020] Jae Woong Soh, Sunwoo Cho, and Nam Ik Cho. Meta-transfer learning for zero-shot super-resolution. In _CVPR_, 2020.
* Sudarshan et al. [2021] Viswanath P Sudarshan, Uddeshya Upadhyay, Gary F Egan, Zhaolin Chen, and Suyash P Awate. Towards lower-dose pet using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. _Medical Image Analysis_, 2021.
* Upadhyay and Awate [2019a] Uddeshya Upadhyay and Suyash P Awate. A mixed-supervision multilevel gan framework for image quality enhancement. In _Medical Image Computing and Computer Assisted Intervention-MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part V_, pages 556-564. Springer, 2019a.
* Upadhyay and Awate [2019b] Uddeshya Upadhyay and Suyash P Awate. Robust super-resolution gan, with manifold-based and perception loss. In _2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)_, pages 1372-1376. IEEE, 2019b.
* Upadhyay et al. [2021a] Uddeshya Upadhyay, Yanbei Chen, and Zeynep Akata. Robustness via uncertainty-aware cycle consistency. _NeurIPS_, 2021a.
* Upadhyay et al. [2021b] Uddeshya Upadhyay, Yanbei Chen, Tobias Hepp, Sergios Gatidis, and Zeynep Akata. Uncertainty-guided progressive gans for medical image translation. In _MICCAI_, 2021b.
* Upadhyay et al. [2021] Uddeshya Upadhyay, Viswanath P Sudarshan, and Suyash P Awate. Uncertainty-aware gan with adaptive loss for robust mri image enhancement. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3255-3264, 2021c.
* Upadhyay et al. [2022] Uddeshya Upadhyay, Shyamgopal Karthik, Yanbei Chen, Massimiliano Mancini, and Zeynep Akata. BayesCap: Bayesian identity cap for calibrated uncertainty in frozen neural networks. In _European Conference on Computer Vision_, pages 299-317. Springer, 2022.
* Verpoorter et al. [2014] Charles Verpoorter, Tiit Kutser, David A Seekell, and Lars J Tranvik. A global inventory of lakes based on high-resolution satellite imagery. _Geophysical Research Letters_, 2014.
* Wang et al. [2019] Guotai Wang, Wenqi Li, Michael Aertsen, Jan Deprest, Sebastien Ourselin, and Tom Vercauteren. Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. _Neurocomputing_, 2019.
* Wang et al. [2016] Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classification. _IEEE Transactions on Circuits and Systems for Video Technology_, 2016.
* Wang et al. [2018] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In _ECCVw_, 2018.
* Wang and Ye [2015] Zheng Wang and Jieping Ye. Querying discriminative and representative samples for batch mode active learning. _ACM TKDD_, 2015.
* Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_, 2004.
* Woo et al. [2018] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In _ECCV_, 2018.
* Yang et al. [2015a] Daiqin Yang, Zimeng Li, Yatong Xia, and Zhenzhong Chen. Remote sensing image super-resolution: Challenges and approaches. In _IEEE DSP_, 2015a.
* Yang et al. [2015b] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. _IJCV_, 2015b.
* Zhang et al. [2019] Zizhao Zhang, Adriana Romero, Matthew J Muckley, Pascal Vincent, Lin Yang, and Michal Drozdzal. Reducing uncertainty in undersampled mri reconstruction with active acquisition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2049-2058, 2019.
* Zhou et al. [2018] Weixun Zhou, Shawn Newsam, Congmin Li, and Zhenfeng Shao. Patternnet: A benchmark dataset for performance evaluation of remote sensing image retrieval. _ISPRS journal of photogrammetry and remote sensing_, 2018. | Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called _USIM-DAL_ that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed _USIM-DAL_ and superiority over several dense regression active learning methods. | Give a concise overview of the text below. | 235 |
arxiv-format/1401_5836v3.md | **The Strength of Friendship Ties in Proximity Sensor Data**
Vedran Sekara\\({}^{1,*}\\), Sune Lehmann\\({}^{1,2}\\)
**1 Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark**
**2 Niels Bohr Institute, University of Copenhagen, Osterbro, Denmark**
\\(*\\) **E-mail: [email protected] (corresponding author)**
## Introduction
Recognizing genuine social connections is a central issue within multiple disciplines. When do connections happen? Where do they take place? And with whom is an individual connected? These questions are important when working to understand and design urban areas [1, 2], studying close-contact spreading of infectious diseases [3, 4, 5], or organizing teams of knowledge workers [6, 7, 8]. In spite of their importance, measuring social ties in the real world can be difficult.
In classical social science the standard approach is to use self-reported data. This method, however, is only practical for relatively small groups and suffers from cognitive biases, errors of perception, and ambiguities [9]. Further, it has been shown that the ability to capture behavioral patterns via self-report data is limited in many contexts [10]. A different approach for uncovering social behavior is to use digital records from emails and cell phone communication [11, 12, 13, 14, 15, 16, 17, 18]. Although such analyses have improved our understanding of social ties, they have left many important questions unanswered--are electronic traces a valid proxy for real social connections? Eagle et al. [19] began to answer this question by including a spatial component as part of their data, using the short range (\\(\\sim 10\\,m\\)) Bluetooth sensor embedded in study participants' smartphones to measure physical proximity. Their results show that proximity data closely reflects social interactions in many cases. But since it is easy to think of examples where reciprocal Bluetooth detection does not correspond to social interaction (e.g. transient co-location in dining hall) the question remains, which observations correspond to actual social interactions and which are just noise?
Multiple alternatives have been proposed to Bluetooth for sensor-driven measurement of social interactions, each with particular strengths and weaknesses [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. For example, Radio Frequency Identification (RFID) badges have short interaction ranges (\\(1-4\\,m\\)) and measure only face-to-face interactions, thus solving many of the resolution problems posed by Bluetooth [29, 30]. This approach, however, confines interactions to occur within specific areas covered by special radio receivers and requires participants to wear custom radio tags on their chests at all times--unlike Bluetooth which is ubiquitous across many types of modern electronic devices.
Our investigation digs into the role of Bluetooth signal strength, using a dataset obtained from applications running on the cell phones of 134 students at a large academic institution. Each phone records and sends data to researchers about call and text logs, Bluetooth devices in nearby proximity, WiFi hotspots in proximity, cell towers, GPS location, and battery usage [31]. In addition, we combine the data collected via the phones with online data, such as social graphs from Facebook for a majority of the participants. The study continuously gathers data, but in this paper we focus on Bluetooth proximity data gathered for 119 days during the academic year of 2012-2013. Specifically, we focus on the received signal strength parameter and propose a methodology that applies signal strength to distinguish between social and non-social interactions. We concentrate on the signal parameter because it is present in a majority of digitally recorded proximity datasets [29, 31, 32] and, in addition, it provides a rough estimate of the distance between two devices. Applying the method to our data, we compare the findings to a null model and demonstrate how removing links with low signal strength influences network structure. Moreover, we use estimated link-weights and an online dataset to validate the friendship-quality of removed links.
## Materials and Methods
### Dataset
We distributed phones among students from four study lines (majors), where each major was chosen based on the fraction of students interested in participating in the project. This selection method yielded a coverage of \\(>93\\%\\) of students per study line, enabling us to capture a dense sample of the social interactions between subjects. Such high coverage of internal connections within a social group, with respect to the density of social interactions combined with the duration of observation, has not been achieved in earlier studies [19, 29].
The data collector application installed on each phone follows a predefined scanning time table, which specifies the activation and duration of each probe. Proximity data is obtained by using the Bluetooth probe. Every 300 seconds each phone performs a Bluetooth scan that lasts 30 seconds. During the scan it registers all discoverable devices within its vicinity (\\(5-10m\\)) along with the associated received signal strength indicator (RSSI) [33]. Recorded proximity data is of the form (\\(i\\), \\(j\\), \\(t\\), \\(s\\)), denoting that person \\(i\\) has observed \\(j\\) at time \\(t\\) with signal strength \\(s\\). Only links between experiment participants are considered, comprising a dataset of \\(2\\,183\\,434\\) time ordered edges between 134 nodes, see Table 1 for more information. Data collection, anonymization, and storage was approved by the Danish Data Protection Agency, and complies with both local and EU regulations. Written informed consent was obtained via electronic means, where all invited participants digitally signed the form with their university credentials. Along with the mobile phone study we also collected Facebook graphs of the participants. Not all users donated their data since this was voluntary, however we obtained a user participation of \\(\\sim 88\\%\\) (119 users and 1018 Facebook friendships). For the missing 12% of users, we assume they do not share any online friendships with the bulk of participants.
### Identifying links
Independent of starting conditions, the scanning framework on one phone will drift out of sync with the framework on other phones after a certain amount of time; thus, the phones will inevitably scan in a desynchronized manner. This desynchronization can mainly be attributed to: internal drift in the time-protocol of each phone, depletion of the battery, and users manually turning phones off. To account for irregular scans, we divide time into windows (bins) of fixed width and aggregate the Bluetooth observations within each time-window into a weighted adjacency matrix. The complete adjacency matrix is thus given by: \\(W=\\left(W^{(\\Delta t_{1})},W^{(\\Delta t_{2})},\\ldots,W^{(\\Delta t_{n})}\\right)\\), where each link is weighted by its signal strength and where \\(\\Delta t_{i}\\) indicates window number \\(i\\). These matrices generally assume a non-symmetric form, i.e. person \\(A\\) might observe \\(B\\) with signal strength \\(s\\) while person \\(B\\) observes \\(A\\) with strength \\(s^{\\prime}\\), or not at all. The scanning frequency of the application sets a natural lower limit of the network resolution to 5 minutes. If we are interested in the social dynamics at a different temporal resolution we can aggregate the adjacency matrices and retain entries according to some heuristic (e.g. with the strongest signal). Depending on the level of description (monthly, weekly, daily, hourly, or every 5 minutes) the researcher must think carefully about the definition of a network connection. Frameworks for finding the best temporal resolution, so-called _natural timescales_, have for specific problems been investigated by Clauset and Eagle [34], and Sulo et al. [35]. In this paper, however, we are interested in the identification and removal of non-important proximity links, so aggregating multiple time-windows is not a concern here. Henceforth we solely work with 5-minute time-bins.
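As an illustration of this binning step, the following minimal sketch (in Python; names such as `bin_observations` are ours and not part of the original processing pipeline) aggregates raw (\\(i\\), \\(j\\), \\(t\\), \\(s\\)) records into one weighted adjacency matrix per 5-minute window, keeping for each directed pair the strongest signal seen in that window:

```python
import numpy as np

def bin_observations(records, n_nodes, bin_width=300):
    """Aggregate (i, j, t, s) Bluetooth records into per-window adjacency matrices.

    records   : iterable of (i, j, t, s), with t in seconds and s in dBm
    n_nodes   : number of participants
    bin_width : window length in seconds (300 s = 5 minutes)
    Returns a dict mapping window index -> matrix W, where W[i, j] holds the
    strongest signal with which i observed j during that window (-inf if never).
    """
    windows = {}
    for i, j, t, s in records:
        w = int(t // bin_width)
        if w not in windows:
            windows[w] = np.full((n_nodes, n_nodes), -np.inf)
        windows[w][i, j] = max(windows[w][i, j], s)
    return windows

# toy usage: three observations of the dyad (0, 1) spread over two windows
recs = [(0, 1, 12.0, -78), (1, 0, 15.0, -82), (0, 1, 620.0, -70)]
mats = bin_observations(recs, n_nodes=2)
print(sorted(mats))        # [0, 2]
print(mats[0][0, 1])       # -78.0
```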
The Bluetooth probe logs all discoverable devices within a sphere with a radius of 5-10 meters--walls and floor divisions reduce the radius, but the reduction in signal depends on the construction materials [36]. Blindly taking proximity observations as a ground truth for social interactions will introduce both false negative and false positive links in the social network. False negative links are typically induced by hardware errors beyond our control; thus, we focus on identifying false positive links. We therefore propose to identify non-social or noisy proximity links via the signal strength parameter. The parameter can be thought of as a proxy for the relative distance between devices; since most people carry their phones on them, it will in principle also suggest the separation distance between individuals.
Previous work has applied Bluetooth signals to estimate the position of individuals [37, 38, 39, 40], but studies by Hay [41] and Hossain et al. [42] have revealed signal strength as an unsuitable candidate for accurately estimating location. However, the complexity of the problem can be greatly reduced by focusing on the relative distance between individuals rather than position. In theory, the transmitted power between two antennae is inversely proportional to the distance squared between them [43]. Reality is more complicated, due to noise and reflection caused by obstacles.
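For reference, the ideal free-space relation invoked above is the Friis transmission formula [43]; in its usual form (our notation, with \\(G_{t}\\) and \\(G_{r}\\) the antenna gains and \\(\\lambda\\) the wavelength),

\\[\\frac{P_{r}}{P_{t}}=G_{t}G_{r}\\left(\\frac{\\lambda}{4\\pi d}\\right)^{2},\\]

which in the commonly used log-distance form corresponds to \\(\\mathrm{RSSI}(d)\\approx\\mathrm{RSSI}(d_{0})-10\\,n\\log_{10}(d/d_{0})\\) with path-loss exponent \\(n=2\\) in free space; obstacles and reflections push both \\(n\\) and the measured values away from this ideal.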
We use the ideal result as a reference while we perform empirical measurements to determine how signal strength depends on distance. Two devices are placed on the ground in a simulated classroom setting, where we are able to control the relative distance between them. The resulting measurements are plotted in Fig. 1A. As is evident from the figure, there is a large variance in the measured signal strength values for each fixed distance. However, as both phones exhibit the same variance we can exclude faulty hardware; further, environmental noise, such as interference from other devices or solar radiation, can also be dismissed since no daily patterns appear in the data. But we observe multiple bands or so-called modes onto which measurements collapse; Ladd et al. [32] noted a similar behavior for the received signal strength of WiFi connections. Both are phenomena caused by non-Gaussian distributed noise. The empirical measurements form a foundation for understanding signal variance as a function of distance, but they were performed in a controlled environment. In reality, there are a multitude of ways to carry a smartphone: some carry it around in a pocket, others in a bag. Liu and Striegel [44] investigated how these various scenarios influence the received signal strength--their results indicate only
| | Total | Average per time-bin |
| --- | --- | --- |
| Nodes (Users) | 134 | 17.32 |
| Edges (Dyads) | 2 183 434 | 62.50 |
| Time-bins | 34 272 | - |
| Average clustering | 0.85 | 0.26 |
| Average degree | 103.51 | 2.41 |

Table 1: **Data overview** Statistics showing the number of total (aggregated) and average values of network properties. Time-bins span five minutes and cover the entire 119 day period, including weekends and holidays. For the average values we only take active nodes into account, i.e. people that have observed another person or been observed themselves in that specific time-bin. Network properties are calculated for the full aggregated network and as averages over each temporal network slice.
minor variations; hence, we conclude that the general behavior is similar to the measurements shown in the figure. Further, social interactions are not only limited to office environments, so we have reproduced the experiment outdoors and in basement-like settings; the results are similar.
Bi-directional observations yield at most two observations per dyad per 5-minute time-bin; we can average over the measurements (Fig. 1B) or take the maximal value (Fig. 1C). Fig. 2 shows the distributions of signal strength for each respective distance. For raw data, Fig. 2A, we observe a localized zero-distance distribution while the 1, 2, and 3-m distributions overlap considerably. Averaging over values per time-bin smoothes out and compresses the distributions, but the bulk of the distributions still overlap (Fig. 2B). Taking only the maximal signal value into account separates the distributions more effectively (Fig. 2C). The reasoning behind choosing the maximal signal value is that phones are physically at different locations and we expect the distance to be maximally reflected in the distributions.
Thus, by thresholding observations on signal strength, we can filter out proximity links that are likely to correspond to separations larger than a certain distance. By doing so we are able to emphasize links that are more likely to be genuine social interactions, while minimizing noise and filtering away non-social proximity links. From the behavioral data we count the number of appearances per dyad and assign the values as weights for each link. Link weights follow a heavy-tailed distribution, with a majority of pairs only observed a few times (low weights), a social behavior that has previously been observed by Onnela et al. [14]. Based on their weight we divide links into two categories: weak and strong. A link is defined as 'weak' if it has been observed (on average) less than once per day during the data collection period; remaining links are characterized as 'strong'. An effective threshold should maximize the number of removed weak links, while minimizing the loss of strong links. Fig. 3 depicts the number of weak and strong links as a function of threshold value. We observe that, as we increase the threshold, the number of weak links decreases linearly, while the number of strong links remains roughly constant and then drops off suddenly. Taking into account both the maximum-value distance distributions (Fig. 2C) and link weights (Fig. 3), we choose the value \\((-80\\,dBm)\\) that optimizes the ratio between strong and weak links. In a large majority of cases, this corresponds to interactions that occur within a radius of \\(0-2\\) meters--a distance which Hall [45] notes as a typical social distance for interactions among close acquaintances.
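A minimal sketch of this selection (Python; function names and the per-dyad simplification of one strongest-signal value are ours, and the weak/strong cut-off of 120 observations follows the definition used for Fig. 3) could look as follows:

```python
def classify_links(total_counts, weak_cutoff=120):
    """Split dyads into 'weak' and 'strong' from their total number of observations."""
    weak = {e for e, c in total_counts.items() if c < weak_cutoff}
    return weak, set(total_counts) - weak

def scan_threshold(max_rssi, weak, strong, thresholds):
    """Count, for each candidate RSSI threshold, how many weak/strong links are removed.

    max_rssi : dict mapping dyad -> strongest signal (dBm) observed for that dyad
    """
    removed = []
    for thr in thresholds:
        below = {e for e, s in max_rssi.items() if s < thr}
        removed.append((thr, len(below & weak), len(below & strong)))
    return removed

# toy usage with three dyads
counts   = {('a', 'b'): 500, ('a', 'c'): 30, ('b', 'c'): 200}
max_rssi = {('a', 'b'): -62, ('a', 'c'): -88, ('b', 'c'): -75}
weak, strong = classify_links(counts)
for thr, n_weak, n_strong in scan_threshold(max_rssi, weak, strong, [-90, -80, -70]):
    print(f"threshold {thr} dBm removes {n_weak} weak and {n_strong} strong links")
```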
### Removing links
This section outlines various strategies for removing non-social links from the network. Fig. 4A shows an illustration of the raw proximity data for a single time-bin, a link is drawn if either \\(i\\to j\\) or \\(j\\to i\\). Thickness of a link represents the strength of the received signal. For the thresholded network (Fig. 4B) we remove links according to the strength of the signal (where we assume the weaker the signal the greater the relative distance between two persons). To estimate the effect of the threshold we compare it to a null model, where we remove the same number of links, but where the links are chosen at random, illustrated in Fig. 4C. To minimize any noise the random removal might cause, we repeat the procedure \\(n=100\\) times, each time choosing a new set of random links, with statistics averaged over the 100 repetitions. As a reference, to check whether thresholding actually emphasizes social proximity links, we additionally compare it to a control network, where we remove the same amount of links, but where the links have signal strengths _above_ or _equal_ to the threshold, Fig. 4D. This procedure is also repeated \\(n\\) times. In a situation where there are more links below the threshold than above, we will remove fewer links for the latter compared to the other networks.
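A compact sketch of the three selection rules for a single time-bin (our naming; in practice the null and control selections are repeated \\(n=100\\) times with fresh random draws and the statistics averaged) is:

```python
import numpy as np

def removal_sets(links, threshold=-80, rng=None):
    """links: dict mapping dyad -> max received signal strength (dBm) in this time-bin.

    Returns the links removed by thresholding, by the null model (the same number
    of links chosen at random), and by the control (links at or above the threshold).
    """
    rng = rng or np.random.default_rng()
    dyads = list(links)
    thresholded = [e for e in dyads if links[e] < threshold]
    above = [e for e in dyads if links[e] >= threshold]
    n_remove = len(thresholded)
    null = [dyads[i] for i in rng.permutation(len(dyads))[:n_remove]]
    # if fewer links lie above the threshold than below, the control removes fewer links
    control = [above[i] for i in rng.permutation(len(above))[:n_remove]]
    return thresholded, null, control
```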
Figure 1: **Bluetooth signal strength (RSSI) as a function of distance.****A:** Scans between two phones. Measurements are per distance performed every five minutes over the course of 7 days. Mean value and standard deviation per distance are respectively \\(\\mu_{0}=-45.13\\pm 1.56\\) dBm, \\(\\mu_{1}=-77.48\\pm 4.15\\) dBm, \\(\\mu_{2}=-82.03\\pm 4.57\\) dBm, and \\(\\mu_{3}=-85.49\\pm 2.75\\) dBm. **B:** Average of the values in respective time-bins. Summary statistics are: \\(\\mu_{0}^{\\rm avg}=-45.13\\pm 1.20\\) dBm, \\(\\mu_{1}^{\\rm avg}=-77.46\\pm 2.90\\) dBm, \\(\\mu_{2}^{\\rm avg}=-81.99\\pm 3.17\\) dBm, and \\(\\mu_{3}^{\\rm avg}=-85.45\\pm 1.88\\) dBm. **C:** Maximal value per time-bin. The mean value and standard deviation per distance are: \\(\\mu_{0}^{\\rm max}=-44.41\\pm 1.11\\) dBm, \\(\\mu_{1}^{\\rm max}=-75.09\\pm 3.24\\) dBm, \\(\\mu_{2}^{\\rm max}=-79.25\\pm 3.47\\) dBm, and \\(\\mu_{3}^{\\rm max}=-83.88\\pm 2.00\\) dBm. The measurements cover hypothetical situations where individuals are far from each other and on either side of a wall.
Figure 2: **Distributions of signal strength for the respective distances.****A:** Raw data. Measurements from both phones are statistically indistinguishable and are collapsed into single distributions, i.e. there is no difference between whether \\(A\\) observes \\(B\\) or vise versa. **B:** Average of signal strength per time-bin. **C:** Maximal value of signal strength per. time-bin.
## Results
### Network properties
Now that we have determined a threshold for filtering out non-social proximity links, let us study the effects on the network properties. Thresholding weak links does not significantly influence the number of nodes present (\\(N\\)) in the network (Fig. 5A), while the number of links (\\(M\\)) is substantially reduced (Fig. 5B). On average we remove 2.38 nodes and 32.18 links per time-bin. Social networks differ topologically from other kinds of networks by having a larger than expected number of triangles [46]; thus, clustering is a key component in determining the effects of thresholding. Fig. 6 suggests that we are, in fact, keeping real social interactions: random removal disentangles the network and dramatically decreases the clustering coefficient, while thresholding conserves most of the average clustering. Calculating the average ratio (\\(\\langle\\langle c_{T}\\rangle/\\langle c_{N}\\rangle\\rangle\\)) between clustering in the thresholded (\\(\\langle c_{T}\\rangle\\)) and the null networks (\\(\\langle c_{N}\\rangle\\)) reveals that \\(c_{T}\\) is on average 2.38 times larger. These findings emphasize that a selection process based on signal strength greatly differs from a random one.
### Link evaluation
Sorting links by signal strength and disregarding weak ones greatly reduces the number of links, but do we remove the correct links, i.e. do we get rid of noisy, non-social links? The fact that clustering remains high in spite of removing a large fraction of links is a good sign, but we want to investigate this question more directly. To do so, we divide the problem into two timescales: a short one, where we consider the probability that a removed link might reappear a few time-steps later, and a long one, where we evaluate the quality of a removed link according to certain network properties. Let us first consider the short time-scale. We assume that human interactions take place on a time-scale that is mostly longer than the 5-minute time-bins we analyze here. Thus, if a noisy link is removed, the probability that it will re-appear in one of the immediately following time-steps should be low, since no interaction is assumed to take place. Nevertheless, we expect the probability to be significantly greater than zero, since even weak (non-social) links
Figure 3: **Number of links per type as a function of threshold value.** Links are classified as weak if they are observed less than 120 times in the data, i.e. links that on average are observed less than once per day--otherwise they are classified as strong. Grouping students into study lines reveals that links within each study line have an almost uniform distribution of weights while links across study lines are distributed according to a heavy-tailed distribution. A threshold of \\(-80\\ dBm\\) (gray area) removes 1159 weak and 387 strong links and classifies 97.6% of inter-study line links as weak and 86.7% of intra-study line links as strong.
Figure 4: **Networks.****A:** Raw network; shows all observed links for a specific time-bin. Thickness of a link symbolizes the maximum of the received signal strengths. **B:** Thresholded network, we remove links with received signal strengths below a certain threshold, where dotted lines indicate the removed links. **C:** Null model; with respect to the previous network we remove the same amount of links, but where the links are chosen at random. **D:** Control network, a similar amount of links with signal strength above or equal to the threshold are removed.
Figure 5: **Network statistics.** Properties are highly dynamic but on average we observe 17.32 nodes and 62.50 links per time-bin. **A:** Number of nodes \\(N\\) as a function of time. Only active nodes are counted, i.e. people that have observed another person or been observed themselves. Dynamics are shown for two weeks during the 2013 spring semester, clearly depicting both daily and weekly patterns. Data markers are omitted to avoid visual clutter. On average thresholding removes 3.06 nodes during weekends and holidays, and 2.38 during regular weekdays. **B:** Number of links \\(M\\) as a function of time. 10.60 links are on average removed during weekends/holidays, and 32.21 are removed during weekdays.
imply physical proximity. Similarly, if we (accidentally) remove a social link, the probability that it will appear again should be high, since the social activity is expected to continue to take place.
Let us formalize this notion. Consider a link \\(e\\) that is removed at time \\(t\\), the probability that the link will appear in the next time-step is \\(p(t+1|e,t)\\). Generalizing this we can write the probability that any removed link will appear in all the following \\(n\\) time-steps as:
\\[p(t+1,\\ldots,t+n|t)=\\frac{\\text{no. links removed at $t$ present at $t+1\\cap\\ldots\\cap t+n$}}{\\text{no. links removed at $t$}} \\tag{1}\\]
Fig. 7A illustrates that thresholded links in subsequent time-steps are observed less frequently than both null and control links. To compare with the worst possible condition, we compare data from each thresholded time-bin with the _raw data_ from the next bin (where the raw data contains many weak links). In spite of this, we observe a clear advantage of distinguishing between links with weak and strong signal strengths. If we look at values for \\(t+1\\), the first subsequent time-step, the probability of re-occurrence in the thresholded network is about 12% lower than for the null model, and as we look to later time-steps, the gap widens.
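Eq. 1 can be evaluated directly from the per-bin edge sets; a minimal sketch (our naming) assuming that `removed[t]` and `present[t]` hold the sets of dyads removed from, and observed in, the raw network at time-bin \\(t\\):

```python
def reappearance_probability(removed, present, n):
    """Average of Eq. 1 over time-bins: fraction of removed links that are observed
    in *all* of the next n raw-data time-bins."""
    ratios = []
    for t, rem in removed.items():
        future = [present.get(t + k) for k in range(1, n + 1)]
        if not rem or any(f is None for f in future):
            continue  # skip empty bins and the last n bins (no boundary conditions)
        persistent = set.intersection(*future)
        ratios.append(len(rem & persistent) / len(rem))
    return sum(ratios) / len(ratios) if ratios else float("nan")
```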
A different set of social dynamics unfolds on longer timescales, where the class schedule causes certain links to appear periodically, e.g. every week. Here we determine the impact of removing links in two ways: first, we use total link weights, and second, we use online friendship status. Friends meet frequently; we capture this behavior by using the total number of observations of a certain dyad to estimate the weight of a friendship (again, counted in the raw network). Thus, we evaluate the quality of a removed link by considering its total weight compared to the weight of other links present in the same time-bin. However, since multiple links are removed per time-bin we are more interested in the average,
\\[q_{t}=\\frac{\\text{Avg. weight of removed links at $t$}}{\\text{Avg. weight of all links present at $t$}} \\tag{2}\\]
This estimates, per time-bin, whether removed links on average have weights below, close to, or above the mean. Note that the measure is intended to estimate the quality of removed links and is therefore not defined for bins where zero links are removed. Fig. 7B indicates a difference between the link selection processes. Choosing links at random (null network) removes both strong and weak links with equal probability; thus, on average this corresponds to the mean weight of links present. Compared to null, the thresholded
Figure 6: **Average clustering.** Only active nodes, i.e. nodes that are part of at least one dyad contribute to the average, the rest are disregarded. Average clustering is calculated according to the definition in [47]. Since social activity in groups larger than two individuals results in network triangles, the fact that clustering is not significantly reduced by thresholding (compared to the null model) provides evidence that we are preserving social structure in spite of link removal.
network removes links with weights below average, indicating that removed links are less frequently observed and therefore also less likely to be real friendships. The control case displays the opposite behavior: on average, it removes links with higher weights.
The second method to evaluate the link-selection processes compares the set of removed links with the structure of an online social network, i.e. if a removed proximity link has an equivalent online counterpart. We estimate the quality by measuring the fraction of removed links with respect to those present at time \\(t\\).
\\[q_{t}^{\\text{FB}}=\\frac{\\text{no. of FB links removed at }t}{\\text{no. of FB links present at }t} \\tag{3}\\]
The quality measure is essentially a ratio, i.e. it can assume values \\(0\\leq q_{t}^{\\text{FB}}\\leq 1\\) depending on the fraction of links that are removed. Bins with zero Facebook friendships are disregarded since they contain no information regarding the online social network. Fig. 7C shows that random removal (null network), on average, removes \\(\\sim 43\\%\\) of online friendships, while the thresholded network removes \\(\\sim 33\\%\\), a 10 percentage point difference. For comparison, the control network removes \\(\\sim 44\\%\\) of the online links. Further, redoing the analysis for a dataset comprising only users for whom we have both proximity and online data does not significantly alter the results.
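Both quality measures reduce to a few lines; a sketch (our naming) with `weight[e]` the total number of observations of dyad \\(e\\) in the raw data and `fb` the set of dyads that also share a Facebook friendship:

```python
import numpy as np

def q_weight(removed_t, present_t, weight):
    """Eq. 2: mean total weight of removed links over the mean weight of all links present."""
    if not removed_t or not present_t:
        return np.nan
    return (np.mean([weight[e] for e in removed_t]) /
            np.mean([weight[e] for e in present_t]))

def q_facebook(removed_t, present_t, fb):
    """Eq. 3: fraction of the Facebook friendships present in this bin that were removed."""
    fb_present = present_t & fb
    if not fb_present:
        return np.nan   # bins without online friendships carry no information
    return len(removed_t & fb_present) / len(fb_present)
```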
Facebook links are not necessarily good indicators of strong friendships, but they are more likely to correspond to real social interactions. In spite of this, both Fig. 7B and C support the idea that distinguishing between strong and weak proximity links tends to emphasize real social interactions: on average, the links removed by thresholding have lower edge weights, and thresholding removes fewer Facebook friendships than both the null model and the control.
## Discussion
The availability of electronic datasets is increasing, so the question of how well we can use these electronic _clicks_ to infer actual social interactions is important for effectively understanding processes such as relational dynamics and contagion. Sorting links based on their signal strength allows us to distinguish between strong and weak ties, and we have argued that thresholding the network emphasizes social proximity links while eliminating some noise.
Simply thresholding links based on signal strength is not a perfect solution. In certain settings we remove real social connections while noisy links are retained. Our results indicate that the proposed framework is better at identifying strong links than removing them, a trend which the link-reappearance probability, link-weight, and online friendship analyses all support. Compared to the baseline of assuming that all proximity observations are real social interactions, we achieve better results. But determining whether a close proximity link corresponds to an actual friendship interaction is much more difficult. Multiple scenarios exist where people are in close contact but are not friends; one obvious example is queuing. Each human interaction has a specific social context, so an understanding of the underlying social fabric is required to fully discern when a close proximity link is an actual social meeting. This brings us back to the question of how to determine a real friendship from digital observations (cf. [9]). Close proximity may not be the best indicator of friendship; call logs, text logs, and geographical positions are all factors which, coupled with information from the Bluetooth probe, could give us a better insight into social dynamics and interactions.
## Acknowledgments
We thank L. K. Hansen, A. Stopczynski, and P. Sapiezynski for many useful discussions and A. Cuttone for proofreading the manuscript. The work in this paper was funded by a Young Investigator Grant from the Villum Foundation (High Resolution Networks, awarded to SL).
Figure 7: **Link evaluation.** **A:** Probability of link reappearance. For each selection process we remove a specific set of links. In the thresholded network, we remove links with weak signal strength. For the null network, we remove links at random. Lastly, in the control network case we remove strong links. The probability for links to reappear within all the next \\(n\\) time-steps is calculated using Eq. 1 and averaging over all time-bins. Boundary conditions are not applied and the reappearance probability for the last \\(n=5\\) bins is not taken into account. **B:** Quality measure for proximity data. **C:** Quality measure for the online data. For each time-bin we calculate \\(q_{t}\\) as defined in Eq. 2 and 3. Brackets indicate a temporal average across all time-bins and values are shown for all three network types.
## References
* 1. Sun J, Yuan J, Wang Y, Si H, Shan X (2011) Exploring space-time structure of human mobility in urban space. Physica A: Statistical Mechanics and its Applications 390: 929-942.
* 2. Sevtsuk A, Ratti C (2010) Does urban mobility have a daily routine? learning from the aggregate data of mobile networks. Journal of Urban Technology 17: 41-60.
* 3. Liljeros F, Edling CR, Amaral LAN, Stanley HE, Aberg Y (2001) The web of human sexual contacts. Nature 411: 907-908.
* 4. Mossong J, Hens N, Jit M, Beutels P, Auranen K, et al. (2008) Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Medicine 5: e74.
* 5. Cauchemez S, Donnelly CA, Reed C, Ghani AC, Fraser C, et al. (2009) Household transmission of 2009 pandemic influenza a (H1N1) virus in the united states. New England Journal of Medicine 361: 2619-2627.
* 6. Wu L, Waber B, Aral S, Brynjolfsson E, Pentland A (2008) Mining face-to-face interaction networks using sociometric badges: Predicting productivity in an IT configuration task. Available at SSRN 1130251.
* 7. Pentland A (2012) The new science of building great teams. Harvard Business Review 90: 60-69.
* 8. Blansky D, Kavanaugh C, Boothroyd C, Benson B, Gallagher J, et al. (2013) Spread of academic success in a high school social network. PLoS ONE 8: e55944.
* 9. Wuchty S (2009) What is a social tie? Proceedings of the National Academy of Sciences 106: 15099-15100.
* 10. Watts DJ (2007) A twenty-first century science. Nature 445: 489-489.
* 11. Eckmann JP, Moses E, Sergi D (2004) Entropy of dialogues creates coherent structures in e-mail traffic. Proceedings of the National Academy of Sciences of the United States of America 101: 14333-14337.
* 12. Barabasi AL (2005) The origin of bursts and heavy tails in human dynamics. Nature 435: 207-211.
* 13. Kossinets G, Watts DJ (2006) Empirical analysis of an evolving social network. Science 311: 88-90.
* 14. Onnela JP, Saramaki J, Hyvonen J, Szabo G, Lazer D, et al. (2007) Structure and tie strengths in mobile communication networks. Proceedings of the National Academy of Sciences 104: 7332-7336.
* 15. Gonzalez MC, Hidalgo CA, Barabasi AL (2008) Understanding individual human mobility patterns. Nature 453: 779-782.
* 16. Lazer D, Pentland A, Adamic L, Aral S, Barabasi AL, et al. (2009) Computational social science. Science 323: 721-723.
* 17. Song C, Qu Z, Blumm N, Barabasi AL (2010) Limits of predictability in human mobility. Science 327: 1018-1021.
* 18. Bagrow JP, Wang D, Barabasi AL (2011) Collective response of human populations to large-scale emergencies. PLoS ONE 6: e17680.
* 19. Eagle N, Pentland AS, Lazer D (2009) Inferring friendship network structure by using mobile phone data. Proceedings of the National Academy of Sciences 106: 15274-15278.
* 20. Haritaoglu I, Harwood D, Davis LS (2000) W4: Real-time surveillance of people and their activities. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22: 809-830.
* 21. Polastre J, Szewczyk R, Culler D (2005) Telos: enabling ultra-low power wireless research. In: Information Processing in Sensor Networks, 2005. IPSN 2005. Fourth International Symposium on. IEEE, pp. 364-369.
* 22. Salathe M, Kazandjieva M, Lee JW, Levis P, Feldman MW, et al. (2010) A high-resolution human contact network for infectious disease transmission. Proceedings of the National Academy of Sciences 107: 22020-22025.
* 23. Rosenstein B (2008) Video use in social science research and program evaluation. International Journal of Qualitative Methods 1: 22-43.
* 24. Olguin DO, Waber BN, Kim T, Mohan A, Ara K, et al. (2009) Sensible organizations: Technology and methodology for automatically measuring organizational behavior. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 39: 43-55.
* 25. Kjaergaard MB, Nurmi P (2012) Challenges for social sensing using wifi signals. In: Proceedings of the 1st ACM workshop on Mobile systems for computational social science. ACM, pp. 17-21.
* 26. Wyatt D, Choudhury T, Kautz H (2007) Capturing spontaneous conversation and social dynamics: A privacy-sensitive data collection effort. In: Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on. IEEE, volume 4, pp. IV-213.
* 27. Carreras I, Matic A, Saar P, Osmani V (2012) Comm2sense: Detecting proximity through smartphones. In: Pervasive Computing and Communications Workshops (PERCOM Workshops), 2012 IEEE International Conference on. IEEE, pp. 253-258.
* 28. Wang D, Pedreschi D, Song C, Giannotti F, Barabasi AL (2011) Human mobility, social ties, and link prediction. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pp. 1100-1108.
* 29. Cattuto C, Van den Broeck W, Barrat A, Colizza V, Pinton JF, et al. (2010) Dynamics of person-to-person interactions from distributed RFID sensor networks. PLoS ONE 5: e11596.
* 30. Barrat A, Cattuto C (2013) Temporal networks of face-to-face human interactions. In: Temporal Networks, Springer. pp. 191-216.
* 31. Stopczynski A, Sekara V, Sapiezynski P, Cuttone A, Madsen MM, et al. (2014) Measuring large-scale social networks with high resolution. PLoS ONE 9: e95978.
* 32. Ladd AM, Bekris KE, Rudys A, Kavraki LE, Wallach DS (2005) Robotics-based location sensing using wireless ethernet. Wireless Networks 11: 189-204.
* 33. Shorey R, Miller BA (2000) The bluetooth technology: merits and limitations. In: Personal Wireless Communications, 2000 IEEE International Conference on. IEEE, pp. 80-84.
* 34. Clauset A, Eagle N (2012) Persistence and periodicity in a dynamic proximity network. arXiv preprint arXiv:12117343.
* 35. Sulo R, Berger-Wolf T, Grossman R (2010) Meaningful selection of temporal resolution for dynamic networks. In: Proceedings of the Eighth Workshop on Mining and Learning with Graphs. ACM, pp. 127-136.
* 36. Cheung KC, Intille SS, Larson K (2006) An inexpensive bluetooth-based indoor positioning hack. Proc UbiComp06 Extended Abstracts.
* 37. Anastasi G, Bandelloni R, Conti M, Delmastro F, Gregori E, et al. (2003) Experimenting an indoor bluetooth-based positioning service. In: Distributed Computing Systems Workshops, 2003. Proceedings. 23rd International Conference on. IEEE, pp. 480-483.
* 38. Bruno R, Delmastro F (2003) Design and analysis of a bluetooth-based indoor localization system. In: Personal wireless communications. Springer, pp. 711-725.
* 39. Madhavapeddy A, Tse A (2005) A study of bluetooth propagation using accurate indoor location mapping. In: UbiComp 2005: Ubiquitous Computing, Springer. pp. 105-122.
* 40. Zhou S, Pollard JK (2006) Position measurement using bluetooth. Consumer Electronics, IEEE Transactions on 52: 555-558.
* 41. Hay S, Harle R (2009) Bluetooth tracking without discoverability. In: Location and context awareness, Springer. pp. 120-137.
* 42. Hossain A, Soh WS (2007) A comprehensive study of bluetooth signal parameters for localization. In: Personal, Indoor and Mobile Radio Communications, 2007. PIMRC 2007. IEEE 18th International Symposium on. IEEE, pp. 1-5.
* 43. Friis HT (1946) A note on a simple transmission formula. proc IRE 34: 254-256.
* 44. Liu S, Striegel A (2011) Accurate extraction of face-to-face proximity using smartphones and bluetooth. In: Computer Communications and Networks (ICCCN), 2011 Proceedings of 20th International Conference on. IEEE, pp. 1-5.
* 45. Hall ET (1990) The hidden dimension. Anchor Books New York.
* 46. Newman ME, Park J (2003) Why social networks are different from other types of networks. Physical Review E 68: 036122.
* 47. Watts DJ, Strogatz SH (1998) Collective dynamics of'small-world' networks. Nature 393: 440-442. | Understanding how people interact and socialize is important in many contexts from disease control to urban planning. Datasets that capture this specific aspect of human life have increased in size and availability over the last few years. We have yet to understand, however, to what extent such electronic datasets may serve as a valid proxy for real life social interactions. For an observational dataset, gathered using mobile phones, we analyze the problem of identifying transient and non-important links, as well as how to highlight important social interactions. Applying the Bluetooth signal strength parameter to distinguish between observations, we demonstrate that weak links, compared to strong links, have a lower probability of being observed at later times, while such links--on average--also have lower link-weights and probability of sharing an online friendship. Further, the role of link-strength is investigated in relation to social network properties. | Write a summary of the passage below. | 171 |
arxiv-format/2110_01706v2.md | # Event-based hyperspectral EELS: towards nanosecond temporal resolution
Yves Auad
[email protected]
Michael Walls
Jean-Denis Blazit
Odile Stephan
Luiz H. G. Tizei
Mathieu Kociak
Francisco De la Pena
Marcel Tence
[email protected]
Laboratoire de Physique des Solides, Université Paris Saclay, Orsay, France
Unité Matériaux et Transformations, Université de Lille, Lille, France
## Introduction
The scanning transmission electron microscope (STEM) works by rastering a focused electron beam on a sample. The image formation is usually performed by the single-channel annular dark field (ADF), bright field (BF), or annular bright-field (ABF) detectors. As the transmitted electrons carry spectral information from the sample, the focused electron probe makes STEM an interesting tool for performing electron energy loss spectroscopy (EELS) with high spatial resolution [1, 2, 3, 4]. Data is usually acquired in the form of a hyperspectral image, a data cube indexed by one energy and two spatial coordinates.
One of the main concerns when performing EELS is that the energy-momentum transferred during the inelastic scattering of the electron may cause undesired effects in the sample, such as knock-on displacement, induced heating, and radiolysis [5, 6]. Several approaches have been proposed to diminish them, such as custom scan paths and fast scans combined with data reconstruction algorithms [7, 8, 9, 10]. Although effective, these solutions are limited by the frame-based nature of the acquisition systems, which have a minimum acquisition time given by the readout time, typically of a few milliseconds for charge-coupled device (CCD) cameras, for example.
Up until now, frame-based detectors have been the usual solution for EELS acquisition. These count the number of electron hits in a given dwell time indiscriminately and thus the temporal information is limited by the spectrum acquisition time, as shown in Figure 1a. CCDs and complementary metal-oxide-semiconductor (CMOS) detectors are the most widespread frame-based detectors for EELS [11, 12]. For both detectors, a scintillator and an array of optical fibers are typically used to convert the incident electrons into photons. These detectors have a variety of noise sources, such as dark and readout noises, and can dramatically degrade the spectral resolution due to the increased point-spread-function (PSF) imposed by the scintillator layer. A second kind of electron detection uses hybrid pixel detectors (HPDs), in which the sensor layer and the readout chip (also called application-specific integrated circuit or ASIC) are manufactured independently from each other. Multiple successive generations of ASICs led to the spread of HPDs in many different research subjects, such as space dosimetry [13], synchrotron source imaging [14, 15], X-Ray spectroscopies [16, 17] and electron microscopy, including diffraction [18, 19], imaging [20, 21, 22, 23] and EELS [24, 25]. One of the most successful ASICs, the Medipix3, introduces several improvements with respect to CCDs and CMOS for EELS acquisition. These include the practically zero readout noise, the improved PSF due to the direct electron detection, and the readout time as low as \\(\\sim 500\\)\\(\\mu\\)s [26]. Despite their improved acquisition speed, the problems related to frame-based acquisition persist because scanning pixel time in a STEM can go as low as tens of nanoseconds. This is much faster than the readout time of any commercially available frame-based detector.
A different concept of hyperspectral data acquisition for EELS can be defined when electrons are individually counted and can be unequivocally placed in the corresponding spectral and positional coordinates of the data cube. For example, one can consider a fast rastering electron beam with 0.5 \\(\\mu\\)s pixel time. In such a time interval, for a probe with \\(\\sim\\) 50 pA only \\(\\sim\\) 150 electrons would hit the sample, most of them falling in the zero-loss peak (ZLP). For a single-pixel acquisition, there would not be enough electrons to produce a usable EELS spectrum. However, continuously scanning and adding the electrons in an event-based fashion can lead to a meaningful reconstruction of the data cube. We show such a scheme in Figure 1b. The \\(\\Delta t\\) shown is a typical frame-based acquisition time (\\(\\sim\\) 1 ms). For the event-based acquisition, the SU rasters a great number of pixels within the time interval \\(\\Delta t\\) that would be needed to collect a spectrum in the frame-based approach. Of course, contrary to the frame-based approach, the arrival time of each of these hits is known with a precision much better than \\(\\Delta t\\). Also, during \\(\\Delta t\\), one acquires electron hits from different points of the data cube in space. One must therefore relate a given electron hit with the corresponding probe position to construct a hyperspectral image. In this case, hyperspectral images can be acquired with very fast scanning pixel dwell time and thus synchronously with the normal ADF imaging without any performance penalty.
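The electron count quoted above follows directly from the probe current: for a 50 pA probe and a 0.5 \\(\\mu\\)s dwell time,

\\[N=\\frac{I\\,\\Delta t}{e}=\\frac{50\\times 10^{-12}\\,\\mathrm{A}\\times 0.5\\times 10^{-6}\\,\\mathrm{s}}{1.602\\times 10^{-19}\\,\\mathrm{C}}\\approx 156,\\]

consistent with the \\(\\sim\\) 150 electrons mentioned above.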
In this work, we demonstrate the implementation of this concept for EELS, similarly to what was recently demonstrated for event-driven 4D STEM acquisition [27]. Although there is a mention of an event-based hyperspectral image in the literature in the context of EELS-EDX coincidence experiments [28], the aforementioned study discusses neither the methodology nor the benefits of the time-resolved capability for time-dependent processes. We show that probe position and electron hit can be related to the temporal dimension by using an electron detector capable of outputting such information. We start by explaining details of the event-based hyperspectral EELS implementation, describing, in particular, the Timepix3, the direct electron detector used throughout this paper, and the relevant features of the readout board used, called SPIDR (Speedy Pixel Detector Readout), that allowed us to produce supplementary events from the Scan Unit (SU) superimposed on the data flow of the electron events. To illustrate the problem, we show an event-based hyperspectral acquisition using 120 ns pixel time sampled over 512 x 512 pixels. The last part of the paper is dedicated to the application of this system to follow the decomposition of calcite (CaCO\\({}_{3}\\)) into calcium oxide (CaO) and gaseous carbon dioxide (CO\\({}_{2}\\)) under electron beam irradiation.
### Event-based hyperspectral EELS implementation
To implement the event-based hyperspectral EELS, we have used a Timepix3 (TPX3) detector. In its first version, Timepix was a simple modification of Medipix2, allowing one to increment the pixel counter by clock ticks instead of the number of events since a reference clock was distributed on each one of its pixels. Timepix had thus the old functionality of counting hits but also the option of outputting either time of arrival (ToA) or time over threshold (ToT) values [29]. The former measures the time elapsed until a hit is detected, while the latter measures the time the hit stays over the pixel signal threshold. Its successor, the ASIC TPX3, was the first real data-driven detector in the entire Medipix/Timepix family, as a pixel hit is responsible for triggering data output from the chip. A voltage-controlled oscillator running at 640 Mhz allows TPX3 to achieve a nominal temporal resolution of 1.5625 ns (called fine ToA) and, in contrast with the first Timepix generation, can simultaneously provide ToA and ToT [30; 31]. When TPX3 is used in EELS, therefore, we have access to each electron's positional coordinates (the dispersive and non-dispersive directions) and the temporal coordinates, represented here by both ToA and ToT. To reconstruct the hyperspectral image, one must find a way to correlate the temporal information of the electron events with the electron probe position. One approach is to feed the SU reference clock signal into TPX3, which would require flexible and programmable SUs and TPX3 control boards. Our solution is to create supplementary events in the TPX3 data flows, effectively having two distinct kinds of events: one linked to individual electrons and another to reference timestamps of the microscope probe position.
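As a simplified illustration of how the two time fields combine (this is a model of the decoding, not the actual SPIDR firmware or driver code; variable names are ours), the coarse ToA counts 25 ns ticks of the 40 MHz reference clock while the fine ToA counts how many 1.5625 ns sub-ticks elapsed before the next coarse tick, and is therefore subtracted:

```python
NS_COARSE = 25.0        # one 40 MHz clock tick, in ns
NS_FINE = 25.0 / 16     # 1.5625 ns sub-tick of the 640 MHz oscillator

def hit_time_ns(coarse_toa, fine_toa):
    """Combine the coarse ToA counter with the 4-bit fine ToA (simplified model;
    time extensions and per-pixel corrections are ignored here)."""
    return coarse_toa * NS_COARSE - fine_toa * NS_FINE

print(hit_time_ns(1000, 3))   # 24995.3125 ns
```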
For the development of our application, we have used the TPX3 solution by Amsterdam Scientific Instruments (ASI), called Cheetah, which includes the SPIDR board [32] and the control software. Our detector consists of four 256x256 chips mounted linearly adjacent to each other to form a 256x1024-pixel array. In the following, the dispersive direction of the detector is denoted as \\(\\alpha\\) and the non-dispersive \\(\\beta\\). Also, the Cheetah provides us with two supplementary input time-to-digital converter (TDC) lines that run the same clock as the 40 Mhz reference clock and can reach a nominal temporal resolution of \\(\\sim\\) 260 ps. The SPIDR can detect TTL-based rising and falling edges in the TDC and includes them in the data flow in the same
Figure 1: Comparison between frame-based and event-based hyperspectral acquisitions. (a) In the frame-based hyperspectral image reconstruction, the entire spectral dimension is acquired for each electron probe position. The minimum exposure time is given by the camera readout time, typically in the millisecond range for CCDs. (b) The event-based reconstruction places each electron in its corresponding data cube position when an electron hit is detected. Because of this, the electron beam can be rastered as fast as the time resolution of the event-based camera, typically in the nanosecond range. In both cases, the cube color code represents a typical acquisition time of a frame-based measurement (\\(\\sim\\) 1 ms). In such a time window, the scan unit can raster a great number of pixels.
way it is done for electron events. We have used a custom-made SU solution that is based on a 25 Mhz clock and can scan as fast as 40 ns per pixel [7]. To synchronize the SU and the SPIDR clocks, the SU sends reference signals (what we call supplementary events) to the Cheetah, as demonstrated in Figure 2a. They contain only timestamps and can be represented by any input signal that can be used to unequivocally determine the electron probe position (\\(x,~{}y\\)). Although theoretically one could use a single signal indicating the start of the rastering, sending periodic reference signals makes it possible to correct clock drift, which is especially important for long (\\(>\\)10 s) acquisition times. In our case, we have used the beginning of a new scan row (\\(y\\) direction) as a trigger falling edge, while the end of a line is represented by a rising edge. The difference between a falling and a rising edge is the scanning flyback time setting.
The complete hyperspectral reconstruction principle is shown in Figure 2b, which depicts the timeline of the occurring events. For clarity, electron events \\(e_{n}\\) are further subdivided into \\(E_{n}=e_{n}(t)\\) and \\(e_{n}=e_{n}(\\alpha,\\beta)\\) to explicitly indicate what information we have used in each step of the reconstruction. As the received supplementary event S(t) relates to the beginning of a new scan row, the number of columns (\\(x\\) direction) must be known by the software. This value is used as the number of time bins between a supplementary falling event and a successive rising event, as shown at the top in Figure 2b. As an example, we can see that electrons \\(E_{1}\\) and \\(E_{2}\\) are in the same row because they are both after \\(S_{1,full}\\) but are in different columns because they are in different time bins within the scan row. It is important to note that electron placement in the hyperspectral spatial pixel (\\(x\\) and \\(y\\)) is only dependent on time. The pixel address of the electron event, \\(e_{n}\\), is only used to form the hyperspectral signal (\\(\\alpha\\) and \\(\\beta\\)), as shown in Figure 2b at the bottom left by \\(e_{6}\\). Additionally, the rising edge trigger input by the SU indicates the end of a scan row, meaning the high digital signal corresponds to the flyback of the electron probe; any electron event that arrives during this time interval is rejected, as illustrated by \\(E_{5}\\). As a final remark, it is important to clarify that the time \\(t\\) in \\(E_{n}=e_{n}(t)\\) is simply the electron ToA corrected by the fine ToA, having a nominal temporal resolution of 1.5625 ns. Note also that the multiple electron-hole pairs created by a single impinging electron create multiple detector hits, called clusters. To circumvent the problem of multiple event counting due to clusters, a cluster-correction algorithm was implemented in our application. It must use both the temporal and the spatial information of adjacent electron hits to be effective and is explained in detail later in this work.
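The placement logic of Figure 2b then reduces to a few lines. The sketch below is a simplified Python illustration with our naming, not the acquisition code described in the next paragraph; clock-drift correction and cluster handling are omitted. It assigns each electron to a scan row from the preceding falling-edge timestamp and to a column from its time offset within that row, rejecting hits that arrive during the flyback:

```python
import numpy as np

def reconstruct(electrons, line_starts, line_ends, n_x, n_y, n_energy=1024):
    """Event-based hyperspectral reconstruction (simplified sketch).

    electrons   : iterable of (t, alpha) -- hit time and dispersive pixel address
    line_starts : sorted falling-edge TDC timestamps (start of each scan row)
    line_ends   : rising-edge TDC timestamps (start of the flyback of each row)
    """
    line_starts = np.asarray(line_starts)
    line_ends = np.asarray(line_ends)
    cube = np.zeros((n_y, n_x, n_energy), dtype=np.uint32)
    for t, alpha in electrons:
        row = np.searchsorted(line_starts, t, side="right") - 1
        if row < 0 or row >= len(line_ends) or t >= line_ends[row]:
            continue                       # before the first line, or during flyback: reject
        y = row % n_y                      # the scan repeats, so successive frames accumulate
        frac = (t - line_starts[row]) / (line_ends[row] - line_starts[row])
        x = min(int(frac * n_x), n_x - 1)  # column = time bin within the scan row
        cube[y, x, int(alpha)] += 1
    return cube
```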
We have also developed a live acquisition program coded in Rust [33] capable of translating the events received from the TPX3 into a variety of outputs, including the hyperspectral image illustrated in Figure 2. The software can be controlled in a user interface plugin [34] developed for the Nionswift software [35], which is also used for data acquisition. Other software features include the acquisition of single spectra, which uses the period of a TTL signal in the TDC line to determine the spectrum dwell time. Both the live-processing program and the plugin are open-source and are available to the community under MIT licensing. The processed data is transferred by transmission control protocol (TCP) using a 10-Gbit optical fiber from the dedicated processing computer to the client computer. For a single spectrum, data is transferred in its entirety (1024-sized array for fully-binned measurements and 1024x256-sized for image measurements) with configurable bit depth to accommodate a high range of acquisition times. For the hyperspectral image, one must note that for a 512 x 512 image with 120 ns pixel time, an entire 512 x 512 x 1024 hyperspectral image is simultaneously reconstructed with the ADF, although it is very sparse. The transfer rate would need to
Figure 2: The hyperspectral data reconstruction process. (a) The scheme of the system used for data acquisition. The scan unit inputs temporal supplementary events, while individual electrons produce positional and temporal events. (b) We exemplify how the temporal information of both electrons and supplementary events can be used to arrange electrons in the reconstructed hyperspectral spatial data (\\(x\\) and \\(y\\)). Detector-pixel address information (\\(\\alpha\\) and \\(\\beta\\)) is used to determine the spectral information of each spatial pixel.
be \\(\\sim\\) 140 Gbit/s (using a 16-bit integer) which is much higher than the transfer limit of the 10-Gbit Ethernet. In such cases, data can be transferred more compactly by sending a list of indices to be incremented in the datacube. As an example, for a hyperspectrum containing 64 x 64 spatial pixels, and considering the 1024 pixels in the detector row, these indices must be between 0 and 4194304.
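A sketch of this compact index encoding and of the client-side increment (our naming; row-major ordering and a 1024-pixel dispersive axis assumed) is:

```python
import numpy as np

N_ENERGY = 1024  # pixels along the dispersive direction

def encode(y, x, alpha, n_x):
    """Flatten (y, x, alpha) into a single index to be sent over TCP."""
    return (y * n_x + x) * N_ENERGY + alpha

def apply_indices(cube_flat, indices):
    """Client side: increment the flattened data cube at the received positions."""
    np.add.at(cube_flat, indices, 1)   # handles repeated indices correctly

cube = np.zeros(64 * 64 * N_ENERGY, dtype=np.uint16)
apply_indices(cube, [encode(0, 0, 0, 64), encode(63, 63, 1023, 64)])
print(encode(63, 63, 1023, 64))   # 4194303, the largest index for a 64 x 64 scan
```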
Figure 3 shows an initial example of a live hyperspectral reconstruction. Pixel dwell time was kept at 120 ns, with a 512 x 512 spatial sampling of a region of approximately 1.0 \\(\\mu\\)m\\({}^{2}\\) and a current at the sample of approximately 8 pA. The flyback time is measured as \\(\\sim\\) 28.5 \\(\\mu\\)s by the TDC and thus a single frame takes approximately 46.1 ms to acquire. The sample contains some silver nano-cubes drop-cast onto a thin film of amorphous carbon. At the top, we show the ADF image obtained during data acquisition and three snapshots, for different accumulation times, of the energy-filtered hyperspectrum between 5 eV and 45 eV, which comprises the strong carbon plasmon resonance peaked at approximately 22 eV. At the bottom, we display the spectrum for the 8 x 8 pixel cell highlighted by the yellow square. In the first 2 s of acquisition, 43 complete ADF frames are accumulated and a minimal contrast shows up in the energy-filtered image. After 16 s and 76 s of acquisition (corresponding to, respectively, 347 and 1649 frames), the contrast is greater and the plasmon resonance is much more distinguishable.
### Study of calcite decomposition
In order to demonstrate our event-based hyperspectral image, we have used a calcite (CaCO\\({}_{3}\\)) sample and explored its well-known transformation to calcium oxide (CaO) and carbon dioxide (CO\\({}_{2}\\)) under the electron beam irradiation (CaCO\\({}_{3}\\xrightarrow{e^{-}}\\) CaO + CO\\({}_{2}\\)) [36; 37; 38]. The experiment was performed in a Vacuum Generators HB501 at 100 kV equipped with an LN\\({}_{2}\\) cold stage that stays at approximately 150 K. The acquired data had 4 \\(\\mu\\)s pixel time with the 32 nm x 32 nm region sampled by 32 x 32 pixels. The convergence angle was 15 mrad and a collection aperture of \\(\\sim\\) 2 mrad was used to have both an improved spectral resolution and to produce a non-saturated EELS dataset. The electron spectrometer was set to a low dispersion of \\(\\sim\\) 0.445 eV/pixel to monitor simultaneously the low loss region, the carbon K edge, and the calcium L\\({}_{2,3}\\) edges. In these conditions, one ADF image, and therefore one hyperspectral image, is completed every \\(\\sim\\) 5 ms. Such a rate is comparable with that of a single energy-filtered transmission electron microscope (EFTEM) image, although in the present case the whole spectral range is gathered. The collected signal is extremely low at these rates, as the dwell time for the acquisition of each pixel's spectrum is \\(\\sim\\) 4 \\(\\mu s\\), and some time-binning is needed for interpreting the data. Therefore, the total of 93 s of the acquisition was sliced into 232 hyperspectral images with intervals of 400 ms, which corresponds to roughly 80 complete ADF frames and an exposure time per pixel of 320 \\(\\mu\\)s. As we shall see, this temporal sampling is enough to unveil the calcite decomposition dynamics in the low-loss energy range. Data analysis in this work was done using the Hyperspy package [39].
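The 400 ms binning described above amounts to splitting the event list by arrival time before reconstructing each slice independently; a minimal sketch (our naming, times in ns) is:

```python
def slice_events(electrons, t0, slice_len_ns=400e6):
    """Group (t, alpha) events into consecutive 400 ms slices, indexed from t0.

    Each slice can then be fed to an event-based reconstruction (such as the
    `reconstruct` sketch above) to obtain one hyperspectral image per time slice.
    """
    slices = {}
    for t, alpha in electrons:
        slices.setdefault(int((t - t0) // slice_len_ns), []).append((t, alpha))
    return slices
```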
Before examining the data set, we used a custom-developed algorithm to identify and treat clusters from our hyperspectral time slices. To do so, both ToA and the pixel hit position are used: the set of pixels within a single cluster is counted as a single event carrying the average ToA and pixel impact posi
Figure 3: Energy-filtered hyperspectral image containing 512 x 512 pixels and using 120 ns pixel time between 5 eV and 45 eV for a sample of silver nano-cubes drop-casted over a thin amorphous carbon film. The ADF (top left) and three images (1, 2, and 3, at the top) and the corresponding spectra acquired inside the highlighted yellow square (8 x 8 pixels) taken after 64, 508, and 2415 complete ADF frames show the time evolution and the event-based nature of hyperspectral data formation.
A new cluster is created if the next electron event has a ToA exceeding that of the previous one by \(>200\) ns or if the pixel distance is \(>2\) pixels in either the \(\alpha\) or \(\beta\) direction independently (see the Supplementary Material (SM) for further details on the different parameters). Figure 4 shows the impact of the cluster treatment on our EELS hyperspectral data. In Figure 4a, we have plotted the histogram of the ToT for all pixel hits before the cluster correction (orange curve) and the histogram of the sum of the ToT of all the pixel hits that belong to a single cluster (blue curve). A Gaussian fit to the distinct peak shown in the cluster-corrected data gives an average value of \(\sim 139.13\) ns and an equivalent full-width-half-maximum (fwhm) \(\Delta\)T\({}_{ToT}\) = 22.65 ns, which is below the clock tick of 25 ns. In such a case, ToT-based spectroscopy has a resolution of approximately 16.18 keV and is hence difficult in the typical EELS range (\(<1\) keV). In Figure 4b, we have plotted the time difference between consecutive events, called inter-arrival times (ITs), for the same electrons as in Figure 4a. A consequence of the independence of the events in a Poisson process is that the number of events as a function of the observed IT follows an exponential decay \(e^{-\lambda t}\), where \(\lambda\) is the expected rate of occurrences in the Poisson process. The uncorrected curve (light red) follows an exponential decay well for ITs longer than 100 ns but shows a steep increase of approximately two orders of magnitude for ITs shorter than 50 ns. Additionally, the uncorrected curve presents oscillations in the observed ITs, which is also a consequence of the multiple detected hits per electron and the inability to determine the proper effective electron hit time. After cluster correction (light blue curve), the curve approaches an exponential behavior at shorter times, despite a still visible deviation for ITs \(<25\) ns. Additionally, we also show (light orange curve) the IT for the electrons whose cluster total ToT is between 60 and 220 ns (gray rectangle in Figure 4a), which follows an exponential behavior much more closely at short ITs. As discussed in the SM, identified clusters with small total ToT are primarily formed in between Timepix3 chips and thus might be subject to different cluster-formation dynamics. Finally, the current in the detector estimated from the number of hits after cluster + ToT correction is 0.322 pA. The fitting result (dashed line) gives \(\lambda\sim 2072\) electrons.ms\({}^{-1}\), which corresponds to a current of \(\sim 0.334\) pA and agrees within 96% with the electron hit estimate. Note that the Poisson statistics of the electrons are indicative of the non-saturated regime of electron detection.
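A minimal sketch of this cluster correction and of the rate estimate is given below; it is a simplified single-pass version of the algorithm (the actual implementation iterates until convergence, see the SM), with hypothetical array names and the thresholds quoted above (\(\Delta\)ToA = 200 ns, \(\Delta\alpha=\Delta\beta=2\) pixels).

```python
import numpy as np

def cluster_events(toa_ns, col, row, dtoa=200.0, dpix=2):
    """Group ToA-sorted pixel hits into clusters (single pass).

    A new cluster starts when the ToA gap to the previous hit exceeds dtoa (ns)
    or when the pixel distance exceeds dpix in either detector direction.
    Returns the sort order and a cluster label for every hit.
    """
    order = np.argsort(toa_ns)
    toa, c, r = toa_ns[order], col[order], row[order]
    new = (np.diff(toa) > dtoa) | (np.abs(np.diff(c)) > dpix) | (np.abs(np.diff(r)) > dpix)
    labels = np.concatenate(([0], np.cumsum(new)))
    # per-cluster mean ToA, e.g. for inter-arrival statistics:
    # mean_toa = np.bincount(labels, weights=toa) / np.bincount(labels)
    return order, labels

def poisson_rate(cluster_toa_ns):
    """Estimate the event rate (electrons/ms) from inter-arrival times."""
    its = np.diff(np.sort(cluster_toa_ns))   # inter-arrival times in ns
    rate_per_ms = 1e6 / its.mean()           # MLE of an exponential rate
    # current estimate: rate_per_ms * 1e3 * 1.602e-19 C, e.g. 2072 e/ms -> ~0.33 pA
    return rate_per_ms
```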
Figure 5a displays one snapshot of the ADF at \\(\\sim 25\\) s of acquisition time (left), which already shows a contrast due to
Figure 4: Impact of the cluster-correction algorithm in the EELS data. (a) The normalized frequency of the summed ToT before and after cluster correction. (b) The electrons inter-arrival times for the uncorrected, cluster-corrected and cluster+ToT-corrected data. Fitting was performed for the latter and provided an expected number of occurrences of \\(\\lambda\\sim 2072\\) electrons.ms\\({}^{-1}\\), corresponding to a current in the detector of \\(i_{det}\\sim 0.334\\) pA.
Figure 5: Typical acquisition conditions. (a) ADF images at approximately 25 s of acquisition time (left) and at the end of the acquisition after 93 s (right). (b) Typical EELS spectrum for a single pixel in a single time slice, showing a fwhm of 2 pixels.
the accumulated sample damage. The ADF at the right shows a larger field-of-view image after the entire 93 s of acquisition. Figure 5b shows a typical single-pixel spectrum in one time slice (320 \(\mu\)s pixel exposure time), displaying a ZLP with a maximum of \(\sim\) 60 electron hits and a fwhm of 2 pixels, a consequence of the improved point-spread-function of direct electron detectors [24].
Figure 6a shows a few results from the time-resolved hyperspectrum after running the cluster-correction algorithm on the data set. Two energy-filtered images centered at the plasmon resonance feature at \(\sim\) 13 eV, indicated as \(\Delta E\) and associated with the CaO formation [37; 38], are shown at the left for two distinct times (\(T_{1}\) and \(T_{2}\)), each integrated over the same time interval \(\Delta T\) = 400 ms, and thus depict the CaO formation dynamics within \(\pm\) 200 ms time resolution. The EELS spectra as a function of the total elapsed time for the pixels Pos\({}_{1}\) (yellow square) and Pos\({}_{2}\) (green square) are shown at the right. Note how Pos\({}_{2}\) is farther from where the CaO formation starts and hence the transformation is triggered at a later time than at Pos\({}_{1}\). There is a clear transformation in the low-loss spectra, most notably around the aforementioned resonance at \(\Delta E\), successfully captured by the chosen time-binning. In Figure 6b, we show a similar energy-filtered snapshot, but with time intervals of 100 ms and 1600 ms, demonstrating that the time-binning value can be picked arbitrarily as long as it is a multiple of the duration of a single scan frame.
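For reference, an energy-filtered snapshot such as those in Figure 6 simply sums the spectral channels inside \(\Delta E\) for the chosen time slice; a minimal sketch, reusing the hypothetical event stack from the earlier snippet and an assumed channel-to-energy calibration axis (the window width below is an arbitrary placeholder):

```python
import numpy as np

def energy_filtered_map(stack, t_slice, e_axis_ev, e_center=13.0, e_width=4.0):
    """Sum the counts of one time slice inside an energy window centered at e_center (eV)."""
    sel = (e_axis_ev >= e_center - e_width / 2) & (e_axis_ev <= e_center + e_width / 2)
    return stack[t_slice][:, :, sel].sum(axis=-1)
```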
In Figure 7a, we show similar spectra for the core-loss energy range around the carbon K edge and the calcium L\({}_{2,3}\) edges for the entire sample region. In Figure 7b, we display the sum of the normalized signal between 286 eV and 305 eV (thus comprising the C-K edge) divided by the signal between 343.5 eV and 351.0 eV (Ca L\({}_{2,3}\)). To have a better signal-to-noise ratio (SNR), time slices were binned by a factor of 8 (and thus have a time resolution of \(\sim\) 3.2 s). The decreasing proportion of carbon with respect to calcium over time indicates that the calcite decomposition is taking place and, consequently, that the carbon content of the system is being reduced.
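The corresponding edge-ratio curve of Figure 7b can be sketched in the same spirit, with the factor-of-8 time binning and the energy windows quoted above (the normalization step mentioned in the text is omitted; stack and e_axis_ev are the hypothetical arrays from the previous snippets):

```python
import numpy as np

def edge_ratio(stack, e_axis_ev, bin_t=8):
    """Ratio of C-K (286-305 eV) to Ca-L2,3 (343.5-351 eV) counts per binned time slice."""
    ck = (e_axis_ev >= 286.0) & (e_axis_ev <= 305.0)
    ca = (e_axis_ev >= 343.5) & (e_axis_ev <= 351.0)
    n_t = (stack.shape[0] // bin_t) * bin_t
    s = stack[:n_t].reshape(-1, bin_t, *stack.shape[1:]).sum(axis=(1, 2, 3))  # (n_bins, n_energy)
    return s[:, ck].sum(axis=-1) / s[:, ca].sum(axis=-1)
```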
To extract more spectral information from the calcite decomposition dynamics, one could further increase the time interval of the hyperspectral slices, sacrificing time resolution for more signal per spectrum. More interesting, however, is to perform a low-rank approximation, such as singular value thresholding (SVT), a.k.a. PCA (principal component analysis), which can
Figure 6: Hyperspectral EELS results for the calcite decomposition in the low-loss energy range. (a) Two energy-filtered snapshots centered at \\(E\\) = 13 eV accumulated in the energy internal \\(\\Delta E\\) for the time slices at \\(T_{1}\\) = 30 s and \\(T_{2}\\) = 45 s summed over the time interval \\(\\Delta T\\) = 400 ms. The time evolution for two pixels (Pos\\({}_{1}\\) and Pos\\({}_{2}\\)) is also shown. (b) Similar to the snapshots in (a), but for a time interval of 100 ms (top) and 1600 ms (bottom).
Figure 7: Hyperspectral EELS results for the calcite decomposition in the core-loss energy range for the entire sample region. (a) Time evolution around the C-K edge at the left while, at the right, we display around the calcium L\\({}_{2,3}\\) edges. (b) The ratio between C-K edge (286 eV - 305 eV) and the calcium-L\\({}_{2,3}\\) (343.5 eV - 351.0 eV). The diminishing proportion indicates that despite sample mass-loss, carbon content is reducing with respect to calcium.
increase the signal-to-noise ratio [40; 41] without sacrificing time and spatial resolution. Figure 8 shows the result of SVT of the dataset with 3 components for a single spatial position close to the yellow square (Pos\({}_{1}\)) highlighted in Figure 6. The time evolution shows the progressive reduction of the carbon content, followed by a more and more pronounced crystal field splitting of the \(t_{2g}\) and \(e_{g}\) peaks in the Ca L\({}_{2,3}\) edge due to the undistorted octahedral symmetry and the change in length of the Ca-O bonds in CaO compared to CaCO\({}_{3}\)[38]. The SVT was performed with the ZLP and the pixels close to the chip edges masked. The associated raw spectra are also shown in Figure 8 by the dashed superimposed curve for the same pixel as the SVT data, and by the dotted curve for the spatially binned 32x32 spectrum, which highlights the impressive potential of SVT denoising for such low-signal time-resolved datasets. Finally, note that although the time slice interval has the unbinned value of 400 ms, the single-pixel exposure time is only \(\sim 320\) \(\mu\)s.
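A minimal sketch of the rank-3 SVT step, with the time-sliced data unfolded into a (time \(\times\) space, energy) matrix; the masking of the ZLP and of the chip-edge pixels mentioned above is omitted for brevity:

```python
import numpy as np

def svt_denoise(stack, n_components=3):
    """Low-rank (SVT/PCA-like) approximation of a hyperspectral time series."""
    shape = stack.shape                          # (n_t, ny, nx, n_energy)
    X = stack.reshape(-1, shape[-1]).astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_lr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    return X_lr.reshape(shape)
```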
## Conclusions and perspectives
In conclusion, we have presented the acquisition of a hyperspectral image with the scanning speed limited by the SU rastering time instead of the detection system. We have used a commercial event-based TPX3 solution along with a likewise commercially available custom-made scan engine [42] in which external events from the SU add timestamps in the electron data flow that can later be used to retrieve the electron probe position. For this reason, we refer to our approach as event-based hyperspectral EELS. All the developed software is available to the community, including the live data processing [33] and the interface plugin [34], and thus any SU capable of outputting the scan clock signal along with the Cheetah solution could be used to reproduce this work. To demonstrate our system capabilities, we have given as an example the decomposition of calcite into CaO and CO\({}_{2}\) under electron beam irradiation. After cluster correction and ToT correction, electron arrival times follow a Poisson distribution, which shows both the well-known statistics of the electron emission in a cold field emission gun (cFEG) and the non-saturated regime of the data acquisition. In principle, hyperspectral images can be acquired with pixel times as low as 1.5625 ns (nominal temporal resolution of TPX3), although reaching this scan rate would need further TPX3 calibrations [43; 44] that are irrelevant for our minimum rastering time of 40 ns. In TPX3, data can be saturated by the pixel dead time, by the column readout scheme, and by the detector maximum throughput. As the ZLP is focused in a single detector column, applications in which a meaningful ZLP intensity is required might be restricted to detector currents up to 1-2 pA, although tilting the detector/electron beam or custom pixel-line masking might alleviate this problem. The maximum detector throughput corresponds to currents up to 10-15 pA, which can also limit the detector applications. Improvements in the near future for all these aspects are expected with the new Timepix4 detector [45]. Time-resolved data is shown for the CaO formation in the low-loss and the core-loss energy range. For the latter, we have achieved a single-pixel spectrum after performing signal decomposition in the hyperspectral slices. We believe that event-based EELS will become increasingly available in the microscopy community. It will effectively tackle several important problems that require both nanometric spatial resolution and nanosecond time resolution. These include optical microresonators [46; 47; 48] thanks to their long-lived excitations, and accessing the chemistry of electron-irradiation sensitive materials like graphene oxide [49]. Additionally, the setup described in this work can be used to easily reconstruct photon-electron coincidence hyperspectral images, recently demonstrated [50].
## Data availability
The raw data used in this paper has been made available on Zenodo [51]. The live-processing software and the Nion Swift plugin interface developed in this work are also available under the MIT license [33; 34].
## Acknowledgements
The present project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823717 (ESTEEM3) and 101017720 (EBEAM). We thank Marta de Frutos for discussions on EELS data analysis. Amsterdam Scientific Instruments (ASI) is also acknowledged for many fruitful technical discussions.
Figure 8: Set of spectra for a single pixel close to Pos\\({}_{1}\\) for four different times after SVT denoising (solid curve), for the pixel raw data associated (dashed curve), and for the 32x32 binned raw data (dotted curve). Carbon content is progressively reduced, while the crystal field splitting, associated with the Ca-O bonds, increases. Time slices are within the unbinned time interval of 400 ms and exposure time, per pixel, is 320 \\(\\mu\\)s.
## References
* (1) P. Batson, Simultaneous STEM imaging and electron energy-loss spectroscopy with atomic-column sensitivity, Nature 366 (6457) (1993) 728. doi:10.1038/366727a0.
* (2) N. Browning, M. Chisholm, S. Pennycook, Atomic-resolution chemical analysis using a scanning transmission electron microscope, Nature 366 (6451) (1993) 143-146. doi:10.1038/366143a0.
* (3) J. Nelayah, M. Kociak, O. Stephan, F. J. Garcia de Abajo, M. Tence, L. Henrard, D. Taverna, I. Pastoriza-Santos, L. M. Liz-Marzan, C. Colliex, Mapping surface plasmons on a single metallic nanoparticle, Nature Physics 3 (5) (2007) 348-353. doi:10.1038/nphys675.
* (4) O. L. Krivanek, M. F. Chisholm, V. Nicolosi, T. J. Pennycook, G. J. Corbin, N. Dellby, M. F. Murtull, C. S. Own, Z. S. Szilagyi, M. P. Oxley, et al., Atom-by-atom structural and chemical analysis by annular dark-field electron microscopy, Nature 464 (7288) (2010) 571-574. doi:10.1038/nature00879.
* (5) R. F. Egerton, Electron energy-loss spectroscopy in the electron microscope, Springer Science & Business Media, 2011. doi:10.1007/978-1-4419-9583-4.
* (6) S. J. Pennycook, The impact of STEM aberration correction on materials science, Ultramicroscopy 180 (2017) 22-33. doi:10.1016/j.ultramic.2017.03.020.
* (7) A. Zobelli, S. Y. Woo, A. Tararan, L. H. Tizei, N. Brun, X. Li, O. Stephan, M. Kociak, M. Tence, Spatial and spectral dynamics in STEM hyperspectral imaging using random scan patterns, Ultramicroscopy 212 (2020) 112912. doi:10.1016/j.ultramic.2019.112912.
* (8) A. Stevens, L. Luzi, H. Yang, L. Kovarik, B. Mehdi, A. Liyu, M. Gehm, N. Browning, A sub-sampled approach to extremely low-dose STEM, Applied Physics Letters 112 (4) (2018) 043104. doi:10.1063/1.5016192.
* (9) P. Trampert, F. Bourghorbel, P. Potocek, M. Peemen, C. Schlinkmann, T. Dahmen, P. Slusallek, How should a fixed budget of dwell time be spent in scanning electron microscopy to optimize image quality?, Ultramicroscopy 191 (2018) 11-17. doi:10.1016/j.ultramic.2018.03.007.
* (10) X. Li, O. Dyck, S. V. Kalinin, S. Jesse, Compressed sensing of scanning transmission electron microscopy (STEM) with nonrectangular scans, Microscopy and Microanalysis 24 (6) (2018) 623-633. doi:10.1017/S143192761801543X.
* (11) M. Strauss, I. Naday, I. Sherman, N. Zaluzec, Ccd-based parallel detection system for electron energy-loss spectroscopy and imaging, Ultramicroscopy 22 (1-4) (1987) 117-123. doi:10.1016/0304-3991(87)90055-6.
* (12) A. Faruqi, R. Henderson, M. Pryddetch, P. Allport, A. Evans, Direct single electron detection with a cmos detector for electron microscopy, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 546 (1-2) (2005) 170-175. doi:10.1016/j.nima.2005.03.023.
* (13) N. Stoffle, L. Pinsky, M. Kroupa, S. Hoang, J. Idarraga, C. Amberboy, R. Rios, J. Hauss, J. Keller, A. Bahadori, et al., Timepix-based radiation environment monitor measurements aboard the international space station, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 782 (2015) 143-148. doi:10.1016/j.nima.2015.02.016.
* (14) C. Ponchut, J. Clement, J.-M. Rigal, E. Papillon, D. LaMarra, B. Mikulc, Photon-counting x-ray imaging at kilohertz frame rates, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 576 (1) (2007) 109-112. doi:10.1016/j.nima.2007.01.131.
* (15) D. Pennicard, S. Smoljanin, B. Struth, H. Hirsemann, A. Fauler, M. Fiedler, G. Tolsonav, A. Zarubin, A. Tyazhev, G. Shelkov, et al., The lambda photon-counting pixel detector and high-\\(z\\) sensor development, Journal of Instrumentation 9 (12) (2014) C12026. doi:10.1088/1748-0221/9/12/C12026.
* (16) P. Russo, A. Lauria, G. Mettivier, M. Montesi, M. Marotta, L. Aloj, S. Lastoria, 18f-fdg positron autoradiography with a particle counting silicon pixel detector, Physics in Medicine & Biology 53 (21) (2008) 6227. doi:10.1088/0031-9155/53/21/022.
* (17) J. Jakubek, Energy-sensitive x-ray radiography and charge sharing effect in pixelated detector, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 607 (1) (2009) 192-195. doi:10.1016/j.nima.2009.03.148.
* (18) I. Nederlof, E. van Genderen, Y.-W. Li, J. P. Abrahams, A Melpix quantum area detector allows rotation electron diffraction data collection from submicrometree three-dimensional protein crystals, Acta Crystallographic Section: Biological Crystallography 69 (7) (2013) 1223-1230. doi:10.1107/S0907444913090700.
* (19) E. Van Genderen, M. Clabbers, P. P. Das, A. Stewart, I. Nederlof, K. Bartensen, Q. Portillo, N. Pannu, S. Nicolopoulos, T. Gruene, et al., Ab initio structure determination of nanocrystals of organic pharmaceutical compounds by electron diffraction at room temperature using a Timepix quantum area direct electron detector, Acta Crystallographica Section A: Foundations and Advances 72 (2) (2016) 236-242. doi:10.1107/S2053273315022500.
* (20) G. McMullan, D. Cattermole, S. Chen, R. Henderson, X. Llopart, C. Summerfield, L. Tlustos, A. Faruqi, Electron imaging with Medipix2 hybrid pixel detector, Ultramicroscopy 107 (4-5) (2007) 401-413. doi:10.1016/j.ultramic.2006.10.005.
* (21) R. van Gastel, I. Skihraulridge, S. Schramm, J. Abrahams, B. Poelsema, R. Tromp, S. Van Der Molen, Medipix 2 detector applied to low energy electron microscopy, Ultramicroscopy 110 (1) (2009) 33-35. doi:10.1016/j.ultramic.2009.09.002.
* (22) M. Krajnak, D. McGrouther, D. Mancuski, V. O'Shea, S. McVite, Pixelated detectors and improved efficiency for magnetic imaging in STEM differential phase contrast, Ultramicroscopy 165 (2016) 42-50. doi:10.1016/j.ultramic.2016.03.006.
* (23) J. P. van Schacyk, E. van Genderen, E. Maddox, L. Roussel, H. Boulanger, E. Friedjoh, J.-P. Abrahams, P. J. Peters, R. B. Ravelli, Sub-pixel electron detection using a convolutional neural network, Ultramicroscopy 218 (2020) 113091. doi:10.1016/j.ultramic.2020.113091.
* (24) J. L. Hart, A. C. Lang, A. C. Leff, P. Longo, C. Trevor, R. D. Twesten, M. L. Taheri, Direct detection electron energy-loss spectroscopy: a method to push the limits of resolution and sensitivity, Scientific reports 7 (1) (2017) 1-14. doi:10.1038/s41598-017-07709-4.
* (25) B. H. Goodge, D. J. Baek, L. F. Kourkoutis, Atomic-resolution elemental mapping at cryogenic temperatures enabled by direct electron, arXiv preprint arXiv:2007.09747doi:10.48550/arXiv.2007.09747.
* (26) R. Ballabriga, M. Campbell, E. Heijne, X. Llopart, L. Tlustos, W. Wong, Medipix3: A 64 k pixel detector readout chip working in single photon counting mode with improved spectrometric performance, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 633 (2011) S15-S18. doi:10.1016/j.nima.2010.06.108.
* (27) D. Jannis, C. Hofer, C. Gao, X. Xie, A. Beche, T. J. Pennycook, J. Verbeeck, Event driven 4D STEM acquisition with a Timepix3 detector:mic-crosecond dwell time and faster scans for high precision and low dose applications, Ultramicroscopy 233 (2022) 113423. doi:10.1016/j.nima.2021.113423.
* (28) D. Jannis, K. Muller-Caspary, A. Beche, J. Verbeeck, Coincidence detection of cells and eds crystal events in the electron microscope, Applied Sciences 11 (19) (2021) 9058. doi:10.3390/app11199058.
* (29) R. Ballabriga, M. Campbell, X. Llopart, Asic developments for radiation imaging applications: The Medipix and Timepix family, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 878 (2018) 10-23. doi:10.1016/j.nima.2017.07.029.
* (30) X. Llopart, R. Ballabriga, M. Campbell, L. Tlustos, W. Wong, Timepix, a 65k programmable pixel readout chip for arrival time, energy and/or photon counting measurements, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 581 (1-2) (2007) 485-494. doi:10.1016/j.nima.2007.08.079.
* (31) T. Poikela, J. Plosila, T. Westerlund, M. Campbell, M. De Gaspari, X. Llopart, V. Gromov, R. Kluit, M. Van Beuzekom, F. Zappon, et al., Timepix3: a 65k channel hybrid pixel readout chip with simultaneous toa/tot and sparse readout, Journal of instrumentation 9 (05) (2014) C05013. doi:10.1088/1748-0221/9/05/C05013.
* (32) B. van Der* (33) Y. Auad, M. Kociak, L. H. G. Tizei, J.-D. Blazit, M. Walls, O. Stephan, F. De la Pena, M. Tence, Timestrem/p3_tools: Release v1.0.0 (Mar. 2022). doi:10.5281/zenodo.54a6261. URL [https://doi.org/10.5281/zenodo.5346261](https://doi.org/10.5281/zenodo.5346261)
* (34) Y. Auad, Orsaydev/nslumiere: v5.24.5 release (Apr. 2022). doi:10.5281/zenodo.6407648. URL [https://doi.org/10.5281/zenodo.6407648](https://doi.org/10.5281/zenodo.6407648)
* (35) C. Meyer, N. Dellbly, J. A. Hachtel, T. Lovejoy, A. Mittelberger, O. Krivanek, Nion swift: Open source image processing software for instrument control, data acquisition, organization, visualization, and analysis using python, Microscopy and Microanalysis 25 (52) (2019) 122-123. doi:10.1017/S143192761900134X.
* (36) M. Tence, M. G. Walls, C. Jeanguillaume, C. Colliex, X. Thomas, O. Jbara, J. Cazaux, EELS study of beam-induced decomposition of calcite in the STEM (09 1989).
* (37) M. G. Walls, M. Tence, EELS study of beam-induced decomposition of calcite in the STEM (09 1989).
* (38) U. Golla-Schindler, G. Benner, A. Orchowski, U. Kaiser, In situ observation of electron beam-induced phase transformation of caco3 to caco via elnes at low electron beam energies, Microscopy and microanalysis 20 (3) (2014) 715-722. doi:10.1017/S1431927614000464.
* (39) F. de la Pena, E. Prestat, V. T. Fauske, P. Burdet, T. Furnival, P. Jokubauskas, M. Nord, T. Otsarcivities, J. Lahrenne, K. E. MacArthur, D. N. Johnstone, M. Sarahan, J. Taillon, T. Aarbolt, Quinn dls, V. Migunov, A. Ejjarrat, J. Caron, S. Mazzucco, B. Martineau, S. Somathath, T. Poon, T. Slater, C. Francis, M. Walls, N. Cautaerts, N. Tappy, F. Winkler, G. Domval, Typersyp/repj: Release v1.6.2 (Apr. 2021). doi:10.5281/zenodo.4683076. URL [https://doi.org/10.5281/zenodo.4683076](https://doi.org/10.5281/zenodo.4683076)
* (40) R. Arenal, F. De la Pena, O. Stephan, M. Walls, M. Tence, A. Loiseau, C. Colliex, Extending the analysis of EELS spectrum-imaging data, from elemental to bond mapping in complex nanostructures, Ultramicroscopy 109 (1) (2008) 32-38. doi:10.1016/j.ultramic.2008.07.005.
* (41) F. de la Pena, M.-H. Berger, J.-F. Hohepied, F. Dynys, O. Stephan, M. Walls, Mapping titanium and in oxide phases using EELS: An application of independent component analysis, Ultramicroscopy 111 (2) (2011) 169-176. doi:10.1016/j.ultramic.2010.10.001.
* (42) Attolight, Scanning Unit Liskamm (2022). URL [https://attolight.com/](https://attolight.com/)
* (43) B. Bergmann, M. Pichotka, S. Pospisil, J. Vycpalek, P. Burian, P. Broulim, J. Jakubek, 3D track reconstruction capability of a silicon hybrid active pixel detector, The European Physical Journal C 77 (6) (2017) 1-9. doi:10.1140/epjc/s10052-017-4993-4.
* (44) F. Fitters, N. A. Tehrani, D. Danheim, A. Fiergolski, D. Hynds, W. Klempt, V. Llopart, M. Munker, A. Nurnberg, S. Spannagel, et al., Time resolution studies of Timepix3 assemblies with thin silicon pixel sensors, Journal of Instrumentation 14 (05) (2019) P05022. doi:10.1088/1748-0221/14/05/P05022.
* (45) M. Campbell, J. Alozy, R. Ballabriga, E. Frojdh, E. Heijne, X. Llopart, T. Poikela, L. Titus, P. Valerio, W. Wong, Towards a new generation of pixel detector readout chips, Journal of Instrumentation 11 (01) (2016) C01007. doi:10.1088/1748-0221/11/01/C01007.
* (46) Y. Auad, C. Hamon, M. Tence, H. Lourence-Martins, V. Mkhitaryan, O. Stephan, F. J. Garcia de Abajo, L. H. Tizei, M. Kociak, Unveiling the coupling of single metallic nanoparticles to whispering-gallery microcavities, Nano letters 22 (1) (2022) 319-327. doi:10.1021/acs.nanolett.1c03826.
* (47) J.-W. Henke, A. S. Raja, A. Feist, G. Huang, G. Arend, Y. Yang, F. J. Kappert, R. N. Wang, M. Moller, J. Pan, et al., Integrated photonics enables continuous-beam electron phase modulation, Nature 600 (7890) (2021) 653-658. doi:10.1038/a41586-021-04197-5.
* (48) N. Muller, V. Hock, H. Koch, N. Bach, C. Rathje, S. Schafer, Broadband coupling of fast electrons to high-wavelength-gallery mode resonators, ACS Photonics 8 (6) (2021) 1569-1575. doi:10.1021/acsphotonics.1c00456.
* (49) A. Tararan, A. Zobelli, A. M. Benito, W. K. Maser, O. Stephan, Revisiting graphene oxide chemistry via spatially-resolved electron energy loss spectroscopy, Chemistry of Materials 28 (11) (2016) 3741-3748. doi:10.1021/acs.chematter.6b00590.
* (50) N. Varkentina, Y. Auad, S. Y. Woo, A. Zobelli, J.-D. Blazit, X. Li, M. Tence, K. Watanabe, T. Taniguchi, O. Stephan, et al., Cathodoluminescence excitation spectroscopy: nanoscale imaging of excitation pathways, arXiv preprint arXiv:2202.12520doi:10.4850/j.2020.12520.
* (51) Y. Auad, M. Walls, J.-D. Blazit, O. Stephan, L. H. G. Tizei, M. Kociak, F. De la Pena, M. Tence, Event-based hyperspectral EELS: towards nanosecond temporal resolution (Oct. 2021). doi:10.5281/zenodo.5552559. URL [https://doi.org/10.5281/zenodo.5552559](https://doi.org/10.5281/zenodo.5552559)
**Supplementary Material for Event-based hyperspectral EELS: towards nanosecond temporal resolution**
### Clock drift between the SU and the TPX3 readout board
To understand how often the TPX3 and the SU clocks must be corrected, we have analyzed how many clock ticks, in units of the fine TDC bin of \(\sim\) 260 ps, have been counted between successive scan rows for the main hyperspectral data of the manuscript (Figure 6). For the more than 590000 scan rows analyzed, 76.7% of them have the same number of clock ticks between successive lines, which is also the reference value for placing the electrons, as discussed in Figure 2. 18.2% of the scan rows are shifted by one TDC clock tick. The remaining 5.1% have a maximum shift of 2 clock ticks. The average clock drift is \(\sim\) 0.23 ticks per every 601500 ticks of the whole scan line, or 1 cycle every 2615217 ticks. In the present case, this corresponds to \(\sim\) 60 ps per line and hence an average drift, per frame, of \(\sim\) 1.9 ns, much smaller than the pixel dwell time of 4 \(\mu\)s.
For the carbon membrane, assuming the same logic applies, the drift per line would be \(\sim\) 34 ps, or \(\sim\) 17.4 ns per frame. This value is almost 8 times smaller than the pixel dwell time of 120 ns, meaning that, in the present experimental conditions, clock drift correction can be done either at each new scan row or at each new scan frame.
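In numbers, with the fine TDC bin of \(\sim\) 260 ps and assuming the per-frame drift is simply the per-line drift multiplied by the number of scan lines, the quoted values follow directly:
\[
0.23 \times 260\ \mathrm{ps} \approx 60\ \mathrm{ps\ per\ line}, \qquad 60\ \mathrm{ps} \times 32\ \mathrm{lines} \approx 1.9\ \mathrm{ns\ per\ frame},
\]
and, for the 512-line carbon-membrane scan, \(34\ \mathrm{ps} \times 512 \approx 17.4\ \mathrm{ns}\) per frame.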
### EELS spectra dependence on the total cluster ToT
For the same data studied in Figure 4, we have plotted in Figure S2 the dependency of the summed ToT on the detector hit position in the dispersive direction, intending to clarify the origin of the Poisson distribution after filtering data by the summed ToT. The histogram in Figure S2 clearly shows a central ToT value around \(\sim 139.12\) ns (intense horizontal line), which is associated with the 100 keV electrons. A Gaussian fit provides a fwhm of \(\sim 22.65\) ns, which is below the unit clock tick of 25 ns. The histogram of chip 1 alone provides a better contrast to see the high number of clusters with \(\Sigma\) ToT \(<60\) ns around the chip junctions (close to pixels 256 and 512). This region, additionally, is also known to have hot pixels with high counting rates even in the absence of the electron beam. In Figure S2b we have plotted the accumulated EELS spectrum for the lower (blue) and upper (orange) sides of the aforementioned histogram, which displays a disproportionate number of counts at the chip boundary in the blue curve. Finally, we show in Figure S2c the relation between the summed ToT and the cluster size, which has its maximum of around 139.12 ns for a cluster size of 4 electrons. Note also the high number of clusters with a unit cluster size (individual electrons) for lower ToTs, which can be caused by the physical boundary conditions at the chip junctions.
### Cluster-detection algorithm results for different parameters
As briefly discussed in the main text, cluster detection is performed by sorting the pixel hits by ToA and comparing them neighbor by neighbor: if the ToA increases by more than \(\Delta\)ToA or if the pixel spatial position increases by more than either \(\Delta\alpha\) or \(\Delta\beta\), a new cluster is formed. This procedure is performed for hits that are not yet within a cluster until convergence is achieved. Figure S3a shows the same plot as in the main text (Figure 4) for different \(\Delta\)ToA values and for the uncorrected data set, using \(\Delta\alpha=\Delta\beta=2\). As the pixel dead time is \(\sim 475\) ns, there is no risk of same-pixel cluster identification as long as \(\Delta\)ToA is smaller than this value. The typical cluster formation interval, however, is smaller than 475 ns and one must primarily be careful not to underestimate \(\Delta\)ToA. This is undoubtedly the case for 25 ns, 50 ns, and, to a lesser extent, 100 ns. \(\Delta\)ToA values of 200 ns and 500 ns provide almost identical data. In Figure S3b, we have varied \(\Delta\alpha\) and \(\Delta\beta\) equally within the range \([0,4]\) for \(\Delta\)ToA = 200 ns. For \(\Delta\alpha=\Delta\beta=0\), the result is identical to that for the uncorrected data, which is obvious considering that any electron detected within \(\Delta\)ToA = 200 ns will be at a different pixel due to the pixel dead time. The data converge for \(\Delta\alpha=\Delta\beta\geq 2\).
Note that there is no conflict between choosing \(\Delta\alpha=\Delta\beta=2\) and the cluster sizes shown in Figure S2c. Indeed, one can have clusters of 5-10 electrons as long as the pixel separation between consecutive electrons in the ToA-sorted list is \(\leq 2\). For example, consider the following list of electron-event pixel addresses, whose ToAs differ by less than 200 ns from each other: (128, 128), (129, 128), (130, 128), (132, 130), (132, 132). There is a maximum separation of 2 pixels between consecutive electron events in each address direction and thus they belong to the same cluster. In this case, the cluster size is 5.
isprs/9dac15f3_b39a_4728_bb06_a62042bddcbb.md | # Real-time Photogrammetric Systems
- WHO ARE THE DEVELOPERS?
Peter Axelsson
Department of Photogrammetry, Royal Institute of Technology
100 44 STOCKHOLM, Sweden
e-mail [email protected]
Invited Paper
ISPRS, Commission V
## 1 Background and Introduction
The intention of this paper is not to give an overall view of the development in real-time photogrammetry or to present the latest and fastest hardware, but to try to find the traces and fingerprints of the photogrammetrists who are involved in the development of real-time measuring systems. Four systems are chosen to illustrate this intention, two primarily developed in the photogrammetric society and two primarily developed by people from other scientific areas. The systems are all commercially available. In this study, only passive image acquisition systems are discussed. This means that 3D systems like laser range finders and laser interferometry systems are excluded, even though they could fit into the same context.
The paper is divided into three main parts. The first part discusses some criteria for a system being 'photogrammetrical' and how these criteria are dealt with and looked upon from different viewpoints. The second part looks at the basic components of a vision system for photogrammetrical close-range applications, with a discussion based on the solutions used in the four illustrating systems. The third part is a short description of the four systems. The paper closes with a final discussion.
The two systems which were developed mainly by non-photogrammetrists are illustrated in the diagrams by circles, O, and the two systems developed mainly by photogrammetrists by a second marker. All of the systems were developed in Scandinavia. The systems are not compared looking only at their performance for various operations and applications, but also at how and why certain solutions and methods were selected. Key words in the photogrammetric society, like precision and reliability, are looked at with special interest to see if there are differences in the way they are treated.
## 2 Definitions
The terminology in real-time photogrammetry is rather confused and even if attempts have been made toward a common grammar, some of the terms are defined in the context of this article to avoid misunderstandings.
Real-time, time-constrained and video-rate are terms which are related and partly overlap each other. By video-rate is here meant a standard video imaging system, generating images at 25/50 Hz. Many systems use their own image acquisition speed, which is here described as being either higher or lower than the standard video-rate. Real-time in machine and robot vision is often implicitly meant to be the standard video-rate. A more general definition of real-time is time-constrained, giving a limited time in which the task must be solved. In this paper real-time systems are equivalent to time-constrained systems.
By Calibration is, in the photogrammetric society, mostly meant the determination of the inner orientation of a camera. In machine vision the term often stands for the determination of the outer orientation parameters as well. In this paper System Calibration is referred to as the determination of the absolute orientation, and of the inner orientation if it is determined simultaneously. If the inner orientation is determined separately, this is referred to as Camera Calibration. The term system calibration is chosen in favour of outer orientation or absolute orientation since it is more relevant when talking about an industrial installation.
## 3 A Photogrammetric System
As the title of this article indicates, the primary interest is in the development process of the close-range systems, not so much the actual performance of the systems themselves. To be regarded as photogrammetric, a close-range system must however meet certain criteria. One suggestion for these criteria is given by (Grun, 1991):
* **Potential for high precision and reliability (redundant sensor data)**
* **Capability of self-diagnosis (quality report)**
* **Task flexibility with respect to 3-D object reconstruction functions**
This 'definition' of a photogrammetric system may be valid within our own society, while in computer vision the term 'photogrammetry' usually stands for the various orientation procedures of stereo images which here is related only to the third criterion. The definition implies of course that a system developed by a photogrammetrist may be said to be non-photogrammetric, while a system developed by a machine vision engineer may be seen as photogrammetric in our eyes.
When designing a real-time measuring system, all three criteria will by their nature be in conflict with the time constraints, since the time complexity of the computations is high for each of them.
### High Precision and Reliability
High precision is possible to achieve with the methods available in data extraction and data analysis (e.g. Haggren, 1990). Several systems aiming at high precision reach results which are as good as, or better than, those of a human operator in cases where the targets are well defined.
The reliability is a more delicate matter since it touches the part of a system which is harder to describe in statistical figures: its insensitivity, or robustness, against erroneous data or model outliers. In a manually operated system, gross errors are rare and fairly simple methods can be used to locate them. When a process is automated or semi-automated, as in the case of real-time measuring systems, the need for more robust methods becomes more obvious. The robustness should be incorporated in all the parts of the measuring process, to ensure that single or groups of erroneous data do not influence the final output.
Two aspects of robustness are of major concern (Forstner, 1987):
* **Robustness of design**
* **Robustness of estimation**
The robustness of design is concerned with the ability to test the models with respect to model errors and with the sensitivity of the result to errors. The Least Squares, LS, techniques together with statistical analysis are the main tools.
Robustness of estimation is concerned with optimization procedures which eliminate or reduce the effect of model errors. Other types of estimators which are more robust against model errors than the LS have been developed, e.g. Least Median Squares (Rousseeuw, 1987) and Minimum Description Length (Axelsson, 1992). These estimators, which can handle up to 50% of erroneous data, all lack an analytical solution. Instead, a systematic or random search must be used for finding the solution. This makes the methods computationally very complex. If the number of parameters is very high, as e.g. in a bundle adjustment, these methods are not suitable. For other applications, like e.g. relative orientation or orientation of a single camera, the methods should be considered.
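As an illustration of such a search-based robust estimator, a Least Median of Squares fit of a simple linear model can be sketched as below (random sampling of minimal subsets, keeping the candidate with the smallest median squared residual); this is a generic sketch only, not the procedure of any of the four systems discussed here.

```python
import numpy as np

def lms_line_fit(x, y, n_trials=500, seed=0):
    """Least Median of Squares fit of y = a*x + b, tolerating up to ~50% outliers."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)   # median of squared residuals
        if med < best_med:
            best, best_med = (a, b), med
    return best, best_med
```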
**Comments** None of the illustrating systems uses the second type, robustness of estimation. These methods are fairly new and the knowledge of them limited outside the statistical research environments. We believe that these methods will play an important role in future systems, both in the extraction of image features and in orientation procedures.
The general view on the precision concept, and whether there are differences depending on the background, was formulated by an electrical engineer as "photogrammetrists think of precision in the cameras, images and all the different steps. We only relate to the deviations from a known reference object". From the 'photogrammetric' side one person said that "redundant observations are not fully utilized in non-photogrammetrical systems".
When looking at the systems these comments seem to fit quite well. The systems developed by photogrammetrists are more general in dealing with the redundant information from over-determined systems, even though the other systems may use the redundant information in some steps. The parts which are of special interest for redundant information are the system calibration and the point determination (data analysis).
### Self-Diagnosis and Quality Report
A system operating over a time period must be able to control the quality of the output and to do the proper corrections during operation if necessary. A simple way of detecting errors in the output is e.g. to measure control points which are compared with their nominal values. If the detected error(s) is to be corrected, enough information must be provided by the system to locate, eliminate and update the error source.
To be able to do a statistical error propagation through the whole process, from image acquisition to data analysis, the different parts must be encompassed in a statistical framework, where results from one level can be used in the next. The error theory developed for photogrammetry is well suited for this task since it already covers the image acquisition part and the adjustment part of the data analysis. Two parts in the process are however less investigated:
- data extraction
- robust adjustment methods
The _data extraction_ methods used in image processing, e.g. edge and point detectors, very seldom produce statistical values of their performance. Methods used in a photogrammetric system should be able to produce this type of values to enable a correct statistical treatment of data, e.g. the Forstner interest operator (Forstner, 1987). A statistical propagation is also needed if a theoretical prediction of the results is to be made before implementation and installation.
To make the process less sensitive to gross errors, the adjustment of redundant data may be treated with more _robust methods_ than the normal equally weighted LS. The statistical properties of such methods are not always known or possible to put directly into the normal statistical procedures.
**Comments** The self-diagnostic capabilities of the systems are of very different nature depending on the degree of automation and application. Those of the systems which are manually supported rely mainly on the operator to detect errors. The more automated systems have the ability to detect errors and, in some cases, to correct them.
The two systems developed by photogrammetrists use control points to detect any changes in the system orientation and will automatically update the orientation parameters if needed. They also use several cameras to get an internal control of the point determination.
The two systems not developed by photogrammetrists have other ways of detecting errors in the system calibration, e.g. known distances, but are not able to correct it without operator assistance.
As mentioned in 3.1 the different view on how to describe the precision for the systems is valid also for the self-diagnostics and quality reports. The error theory which is used in the traditional photogrammetric systems require a statistical model for all the different steps to enable an error propagation. The other approach is to empirically estimate the accuracy of the system and use these values without the statistical background. This is a fast and computationally easy method and also easy to understand for the non-specialists who are to use the systems.
### Task Flexibility
The third criterion implies that photogrammetric systems are fairly general systems with respect to 3D object reconstruction. It may be argued that dedicated systems, and even single camera systems with 2D capabilities, should in some cases be regarded as photogrammetric as well, as long as the extracted information from the images is metric.
**Comments** The generality of the 3D calculations depends partly on the type of measurements the system is able to do. Grey-level based data extraction, used by the two systems developed by photogrammetrists, is in principle more general than target measurements, but many other factors, like sampling speed and data analysis, should also be considered.
## 4 Real-Time Systems - the System Parts
Even though photogrammetric systems may differ from each other in many respects, they usually have the same basic components (fig 1).
Different tasks will certainly put different restrictions on the time constraints for the systems. Some applications have the hardest constraints on the image acquisition part, e.g. high speed motion analysis systems, where the extractions and analysis of data may not be completed or even started between the acquisition of two image frames. Other, more quality control or robot oriented tasks, may need to perform all steps in sequence in order to be able to make a decision in the time constrained cycle.
In the following section the different parts of the photogrammetric system are discussed and the four systems are briefly described in this context.
### System/Camera Calibration
The calibration of the cameras and systems are vital parts for the system performance. They may be done simultaneously in a combined adjustment of the camera and system calibration parameters or as separated procedures.
#### 4.1.1 Camera calibration
The calibration of electronic cameras has been thoroughly investigated (e.g. Bossemann, 1990). The calibration must not only take into account the optical system of the camera but also the electronic parts. The traditional photogrammetric optical calibration adjusts data to a mathematical model based on physical assumptions. This leads to e.g. the familiar polynomial equations for the radial distortion.
Another approach is to calculate the deviations, or errors, for each pixel. From the deviations a look-up table is created. This method is not concerned with the physical background of the errors. It is fast and easy to implement and especially suited together with the Direct Linear Transformation, DLT. The two systems which were not developed by photogrammetrists use this approach (fig 2).
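The look-up table idea can be sketched as follows, assuming a dense table of per-pixel corrections (dx, dy) has already been measured against a reference grid; the names are hypothetical:

```python
def correct_image_coords(u, v, lut_dx, lut_dy):
    """Apply a per-pixel correction table to a measured image coordinate (u, v)."""
    iu, iv = int(round(u)), int(round(v))
    return u + lut_dx[iv, iu], v + lut_dy[iv, iu]
```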
#### 4.1.2 System calibration
The system calibration, or the outer orientation, is computed using either the DLT or the bundle adjustment (fig 3). The advantages of the DLT are, at least initially, its easy implementation and the simplicity of the outer orientation. The advantages of the bundle adjustment are its flexibility in the control, e.g. 1-3D points, distances, plumb lines, and its theoretical superiority and error propagation capabilities compared to the DLT. For a more comprehensive study of the differences between the DLT and the bundle approach see (Edgardh, 1992).
In the camera calibration, both of the systems which were not developed by photogrammetrists use a factory calibration. This is motivated by the stability of the CCD cameras. Both systems measure on laser spots or on reflective targets, which partly reduces the need of re-focusing the cameras or changing the aperture, an argument often brought against factory calibrations. It may however not only be a question of precision but also of reliability, as one of the photogrammetrists expressed it: "never rely on a previous calibration. The system should be calibrated after the installation and it must be fairly easy to re-calibrate both the interior and exterior orientation".
The choice between DLT and bundle adjustment can be difficult in some applications. An advantage of using the bundle adjustment together with CCD cameras compared to analogue cameras is the possibility of making several measurements after each other. Two systems, #2 and #3, utilize this technique to calibrate the systems. A known distance is moved around in the measuring volume until a satisfactory number of observations has been made. This is also used for the self-calibration of one of the systems, #3. This greatly reduces the problem of calibration of both the cameras and the system compared to the traditional test field calibration.
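For reference, the DLT determines eleven parameters per camera linearly from known 3D control points and their measured image coordinates; a minimal, generic sketch (not the specific implementation of any of the systems):

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Estimate the 11 DLT parameters from at least 6 control points.

    xyz : (n, 3) object coordinates of the control points
    uv  : (n, 2) measured image coordinates
    """
    A = []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    L = Vt[-1]                    # homogeneous least-squares solution
    return L / L[-1]              # normalize so that the 12th element is 1
```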
### Image acquisition
In close range applications, the typical image acquisition part consists of standard video-rate CCD-cameras. A good reason for keeping to this standard is the large number of fairly cheap electronic components, ranging from the CCD-cameras over frame grabbers to hardware implementations of basic image processing functions.
In the industrial environment, the production frequencies sometimes deviate from the standard video rate in such a way that solutions with other image acquisition rates must be employed. Certain high resolution, non-standard CCD-cameras also have a slower image generating cycle due to the read-out time of image data.
For the registration of very fast processes, with more than 500 frames/second, analogue high speed film cameras are still used to a large extent, even if CCD cameras with fairly good resolution are becoming available also for these purposes (fig 4).
### Detection and Extraction of Data
The extraction, or measurements, of data can be seen as an information compression and information extraction of the parts in the image which are of interest. This is primarily done by low-level image processing techniques. Two main types of information can be defined (fig 5):
- Area based information
- Point based information
  - Grey-level correlation techniques
  - Thresholding/slicing techniques
Examples of the first category are histograms or textures of defined regions. None of the illustrating systems use this type of information.
The point based information is mostly derived from an area in the image as well, but the purpose is to compute point coordinates. If the target points have different reflectance or emittance properties than the background, a thresholding may be done to extract the target areas. This is a simple and fast technique. The measuring point can be defined by
Reflective markers
Projected laser spot
Light Emitting Diodes, LED's
Both of the systems developed by non-photogrammetrists use this type of target as their only data source (#1 and #2). System #3 uses it primarily for the system calibration.
The detection and extraction of the target points are fast with the thresholding technique, but it does not enable the system to measure on natural object points, e.g. corners. To be able to do so, grey level correlation techniques must be used. This is a time consuming task, but can be sped up if the location of the searched pattern is approximately known, as is the case e.g. when tracking points in a motion sequence. This method is used by #3 and #4.
The precision of the extracted image coordinates is of course of vital interest for the final result. All of the illustrating systems claim a high precision in the measurements of the image coordinates, which means 1/20th - 1/50th of a pixel.
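Sub-pixel precision of this order is typically reached with a grey-value weighted centroid over the thresholded target blob; a minimal, generic sketch with a hypothetical threshold value:

```python
import numpy as np

def target_centroid(img, threshold=200):
    """Grey-value weighted centroid of a bright target, in sub-pixel image coordinates."""
    w = np.where(img >= threshold, img.astype(float), 0.0)
    ys, xs = np.indices(img.shape)
    s = w.sum()
    return (w * xs).sum() / s, (w * ys).sum() / s
```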
If more complex image operations are to be done in real or near real-time, the implementations must be done in hardware. If the full frame must be processed, even today's hardware implementations might not be enough. A stereo CCD system generates approx. 12.5 Mb/sec of data (Grun, 1991).
**Comments** From an error theoretical point of view, the data extraction methods used are mostly unsatisfactory. Very few, if any, of the systems can produce error estimates for the image coordinates which can be used in the further processing of error estimation.
The extreme difference in speed for system #2 is due to the fact that each camera has a dedicated hardware unit capable of measuring 20 pts/50 Hz image. There is no further analysis of data in real-time as for the other systems.
The ability to measure on natural targets requires grey-level based methods. This reduces the speed of the point measurements, but if more complex operations are to be developed in the future they must anyway be done in the grey-level image. This would indicate that grey-level based methods are in principle more general.
### Analysis of data
Depending on the task, the analysis of data may range from the computation of single 3D coordinates or histogram analysis to the advanced reconstruction of complex structures and objects. In cases where the analysis is not a guiding part of an on-line process, or is too complex to be performed in the same cycle as the image acquisition and data extraction, the analysis of data is done as a separate phase. In cases where the final analysis requires the whole set of measured data, e.g. generation of a DEM, this is done outside the time constrained image generating cycle.
### Decisions and actions based on analysis
If the task of the system is to guide e.g. a production line, the analysis of data should result in a decision based on a set of pre-defined rules. The decisions made by the system is mainly guiding the actions for the actual image at hand, but may also guide the future handling of images.
In the case of a separate analysis phase, the decisions made can only affect the processing of the images. The image acquisition and formation step can only be affected if the whole task is repeated.
**Decisions in one cycle**
Mapvision, e.g. moving an object in the assembly line
MNS, e.g. Quality Control
## 5 Four ideas - four solutions - four systems
The four systems which are used in the paper to illustrate the thoughts are here described in more detail. First the ideas behind and questions of special interest to this paper and secondly a small technical part. The following headlines are used for the description:
* Did you set up any clear goal before starting the development, or is the 3D system a continuation of earlier systems, e.g. 2D measurements?
* Do all developing engineers have a similar background or do they come from different fields?
* How much do you think the scientific background has influenced the system design, and if so, which part could have looked different?
* How would you describe the 'photogrammetric thinking', if there is any, in the data extraction part, calibration, orientation and 3D calculations?
* How would you describe the system regarding: precision; reliability, robustness; self-diagnosis?
### System 1 MNS, Metronor AS
* Continuation of earlier 2D system
* No photogrammetrists from the beginning. Background in electrical engineering with special competence in CCD arrays.
Fig 7: Analysis Cycles

Fig 8: Data Analysis, type and precision

**Comments** The precision figures of the systems are approximate and depend of course on the type of target point etc. The high speed motion system, #4, has a different magnitude of precision depending on the different conditions for this application.

* Influences the thinking of how to handle precision. The calibration procedures might have looked different.
* Nothing special in the data extraction part. The bundle adjustment using known distances in the system calibration and the 3D calculations.
* The high precision of the system is made possible because of the high resolution cameras and the distinct targets. The precision is verified against a known reference, while the error theory is of minor interest. The reliability is in one sense limited since only two cameras are used. The special light pen works as an indicator if something is wrong with the measurements or cameras. There are no ways of automatically correcting errors during the measuring phase.
### Technical Description
Two Videk Megaplus CCD-cameras (7 frames/sec)
System orientation with known distances
Measures on laser spots or LED's
33 points/sec
The measuring uncertainty is described as 0.05 + L/10000 (mm)
VME motorola 68030 for image processing tasks. HP workstation in Unix environment for operator
Special details:
Light pen for inaccessible points
Connections to various CAD-systems
### System 2 MacReflex

* Nothing special in the data extraction part. Uses a DLT solution for the orientations, which they became familiar with during earlier work with the SELSPOT system.
* The high internal precision of the system is made possible because of the distinct targets and a special hardware unit for each camera. This also enables the very high sampling frequency, 20 points/50 Hz frame with up to 7 cameras. The external precision and reliability are partly due to the simple orientation procedure with a DLT using five points and one point for control. There is no automatic self-diagnosis in the system.
**Technical Description**

* 2 - 7 CCD cameras with dedicated hardware
* System orientation with calibration frame, DLT
* Measures on reflective targets
* 20 points/frame at 50 Hz; the different cameras may be connected in multiplex mode to raise the image acquisition rate
* The absolute accuracy 0.04 %
* Developed for the Macintosh environment
* Special details:
  * Easy to use for the operator
  * Very fast measurements of image coordinates
Fig 10: MacReflex System Configuration
Fig 9: Light Pen

### System 3 Mapvision

* The precision is easily controlled with repeated measurements. To make this test relevant it should span over a longer time, e.g. two hours. The reliability is achieved by a high number of calibration points. The error tests only look at the residuals. The self-diagnosis is of two kinds, internal and external. The internal diagnosis uses the fact that more than two cameras are used for the intersection of points. The external diagnosis uses control points and looks at the residuals.
**Technical Description**

* 2 - 22 CCD cameras
* System orientation with known distances, self-calibration of the system after installation
* Grey level based point measuring
* 0.4 points/sec
* The absolute accuracy 0.01 - 0.02 %
* Special details:
  * Self-calibration using distances
  * Very high accuracy
  * Can handle many cameras
  * Automatic corrections and self-diagnosis
### System 4 Track-Eye Innovativ Vision AB
* Continuation of earlier 2D system
* No photogrammetrists in the development of the main 2D motion analysis system. For the 3D analysis module photogrammetric competence was used.
* The influence shows in the design of the 3D module: in the way errors are treated and their effect.
* Nothing special in the data extraction part. The bundle adjustment includes self-calibration of the unstable part of the interior camera parameters.
* The high resolution scanner, 6.2 \(\mu\)m, together with a grey-level based tracking algorithm, ensures high precision in the image coordinates. The reliability is mainly dependent on the number of cameras. The self-diagnosis is fairly well developed, with residual control of known points which automatically starts a new system calibration.
**Technical Description**

* 2 - 6 analogue high speed film cameras
* System orientation with 3D calibration frame, self-calibration of unstable parameters during the motion sequence
* Grey level based point measuring/tracking
* 7 points/sec
* The absolute accuracy 10 mm
* Special details:
## 6 Conclusions and reflections
The main intention of this paper was to recognize any differences between photogrammetric real-time systems depending on the background of the developers. The main characteristics of a real-time system can basically be described as:
Fast and Robust
The traditional photogrammetric approach, which crudely may be described as putting everything into a large linearized LS problem, may be fine for aerial mapping, but the geometrical conditions and time constraints in industrial and other close-range applications are not always suited to this. It may be described as robust, but it is not always as fast as wanted.
The machine vision approach, which, very generally, may be said to be more attracted to direct solutions, is on the other hand fast, but not as robust as a correctly treated over-determined system.
There seems to be a contradiction between these two approaches, but it is also possible that a merging of the two ideas can be fruitful. Direct solutions for fast estimation of e.g. initial values are engaging many researchers, which resulted, for example, in the workshop at this Congress, "Calibration and Orientation of Cameras in Computer Vision". Similar ideas were expressed at the "Second International Workshop on Robust Computer Vision" organized by Prof. W. Forstner in Bonn earlier this year.
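To make the contrast concrete, a direct solution such as a DLT resection reduces camera orientation to a single homogeneous linear system solved in one step, with no iteration and no rigorous error modelling. The following sketch is illustrative only and is not taken from any of the four systems; the function and variable names are ours:

```python
import numpy as np

def dlt_resection(X, x):
    """Direct linear transformation resection.

    Estimates the 11 DLT parameters of one camera from at least six known
    3D control points X (n x 3) and their measured image coordinates
    x (n x 2), by solving a homogeneous linear system with an SVD.
    """
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    L = Vt[-1]            # right singular vector of the smallest singular value
    return L / L[-1]      # normalised 12-vector; the first 11 entries are the DLT parameters
```

A bundle adjustment, by contrast, iterates over a linearized least-squares problem that includes all cameras, object points and possibly additional calibration parameters, which is what makes it both slower and statistically more rigorous.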
When talking about the terms precision and reliability, there seems to be a difference in the way these are handled. The photogrammetric approach is to try to model all errors according to a physical model, ending up with many correction terms. The other approach is to model the errors independently of their sources, e.g. by a matrix with a correction vector for each pixel. This latter method is fast and easy to implement, but it is less flexible and more difficult to treat in a statistical context.
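A minimal sketch of the two styles of image-error handling mentioned above (illustrative only; the coefficient names and the table layout are assumptions, not taken from any of the systems):

```python
import numpy as np

def correct_physical(xy, principal_point, k1, k2):
    """Physical-model correction: symmetric radial distortion with terms k1, k2."""
    d = np.asarray(xy, float) - principal_point
    r2 = float(d @ d)
    return principal_point + d * (1 + k1 * r2 + k2 * r2 ** 2)

def correct_tabulated(xy, table):
    """Table-based correction: add the pre-computed vector stored for this pixel.

    table has shape (height, width, 2), one correction vector per pixel.
    """
    col, row = int(round(xy[0])), int(round(xy[1]))
    return np.asarray(xy, float) + table[row, col]
```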
A different viewpoint was expressed by one of the system developers when asked about the differences in scientific background, saying something like: "the knowledge of industrial management, how to actually manufacture the system when it is developed and how to get it out on the market. These are things which sometimes are just as important as which algorithms are used for the outer orientation."
## References
* Axelsson, P., 1992, _Minimum Description Length as an Estimator With Robust Properties_, Proc. Second Int. Workshop on Robust Computer Vision, pp??
* Axelsson, P., 1992, _An Automated 3D Motion Analysis System for Digitized High Speed Film_, Int. Arch. of Photogrammetry and Remote Sensing, vol 9, Comm V.
* Bossemann, W., Godding, R., Riechmann, W., 1990, _Photogrammetric Investigation of CCD Cameras_, Int. Arch. vol 28, part 5, pp 119-127.
* Edgardh, L-A, 1992, _Comparison of Precision and Reliability of Point Coordinates Using DLT and Bundle Approach_, Int. Arch. of Photogrammetry and Remote Sensing, vol 9, Comm V.
* Forstner, W., Gulich, E., 1987, _A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centers of Circular Features_, Proc. of Intercommission Conference of ISPRS on Fast Processing of Photogrammetric Data, Interlaken, June.
* Forstner, W., 1987, _Reliability Analysis of Parameter Estimation in Linear Models with Applications to Mensuration Problems in Computer Vision_, Computer Vision, Graphics and Image Processing 40, pp 273-310.
* Grun, A., 1991, _Recent Advances of Photogrammetry in Robot Vision_, First Australian Photogrammetric Conference, 7-9 November 1991.
* Haggren, H., 1990, _Real-time Photogrammetry for Engineering Surveys_, FIG Congress, Helsinki, Finland, June 10-19, Comm 6.
* Haralick, R.M., Lee, C.N., Ottenberg, K., Nole, M., 1991, _Analysis and Solutions of the Three Point Perspective Pose Estimation Problem_, IEEE Conference on Computer Vision and Pattern Recognition, pp 592-598.
* Innovativ Vision, _TrackEye, Specification of Innovativ Vision's Motion Analysis System_, Product information.
* Josefsson, T., 1992, _The MacReflex System, a New Tool for Testing Industrial Robots_, Publication of Qualisys AB.
* Kallhammer, J-E., 1990, _Digitize Your Film Without Loosing Resolution_, SPIE vol 1358, 19th International Congress on High-Speed Photography and Photonics, pp 631-636.
* Metronor, _Metronor MNS Description_, Product Information.
* Rousseeuw, P., Leroy, A., 1987, _Robust regression and outlier detection_, John Wiley & Sons, Inc., ISSN 0271-6356.
arxiv-format/1802_03422v1.md | # State of the Practice for GIS Software
W. Spencer Smith, D. Adam Lazzarato and Jacques Carette
Department of Computing and Software
McMaster University
Hamilton, Ontario, Canada
## 1 Introduction
This paper analyzes the state of development practice in Geographic Information Systems (GIS). The scope and purpose of the software analyzed ranges from complete desktop GIS systems, to stand-alone products, to programming libraries. GIS software requires sophisticated data structures and image processing algorithms. The complexity of GIS software raises concerns for software qualities such as correctness, reliability and performance. To address these concerns, and produce high quality software, requires solid Software Engineering (SE) and Scientific Computing (SC) development practices.
The authors of this paper are not GIS experts; however, we are experts in SE applied to scientific computation software. As outsiders, we can claim objectivity, since we have no prior attachment to any of the software examined in this paper. We hope to provide valuable feedback to the GIS community to help improve the quality of their software.
We arrive at our feedback and conclusions through a reproducible process of systematically grading software products in the GIS domain based on 13 software qualities. We do not grade the products on functionality. Rather, we grade the development process of the projects, and determine how well the projects adhere to SE principles and practices. A main goal of the software grading process is objectivity, and quantification, wherever possible. An external list of software products written by a domain expert acts as an authoritative list of software to be graded. As a part of the grading, we perform a pairwise comparison between each of the software products using a multicriteria decision analysis process. Based on the rankings from the decision analysis, we then analyze trends between software products.
Our inspiration for this project comes from Gewaltig and Cannon (2012), and the later paper, Gewaltig and Cannon (2014). (Our work is based mainly on the earlier version, since the classification system in that paper is the simpler of the two, and it still fulfills our needs). In their papers, Gewaltig and Cannon perform a software review in the domain of computational neuroscience. We build and expand on their process, as we previously did for mesh generation software (Smith et al., 2016) and for seismology software (Smith et al., 2017). Gewaltig and Cannon's review gathers data from the public information on the software product's website and analyzes it for trends to build feedback. The authors conclude that there is often a misunderstanding between developers and users regarding the reasons for creating the software and expectations for its features. Much of the software examined was written by students during their Master's or PhD research; many of the developers did not have backgrounds in computer science or SE. Their priority was their scientific application, not best practices in SC. (Segal, 2007) refers to this category of developers as professional end user developers. This type of developer seems common for SC software.
One major problem with scientific software development is a communication barrier between scientists and software engineers when discussing requirements. The barrier exists because the scientists have experience in their field, but less with software development. The scientists know that the requirements will change, but they cannot precisely convey how they will evolve. Not correctly articulating requirements, or changing requirements midway through a project, greatly impacts the productivity of the development team (Segal, 2005). When engineers create the software, the resulting development artifacts, such as user manuals and introductory examples, are not sufficient for the scientists to understand the product (Segal, 2008). When end users (scientists) develop the software product, the situation is not improved, since their training in science has not prepared them to consider important software qualities, like maintainability and reusability. The differences between SE and SC have led to a chasm between these two disciplines (Kelly, 2007).
The remainder of this article is organized as follows: Section 2 provides background information and outlines previous work. Our methods are explained in Section 3. A summary of our results is presented in Section 4 and our recommendations are detailed in Section 5. Concluding thoughts are found in Section 6.
## 2 Background
The definitions found in this section include the software qualities and SC best practices that our software grading template is based on. Also included is a quick overview of the Analytic Hierarchy Process (AHP), a multicriteria decision making method that we use to analyze the results of the software grading.
### Software Qualities
Our analysis is built around a set of software qualities. Software qualities can be _internal_, in which case the qualities only concern developers, or _external_, in which case the qualities are visible to the end users (Ghezzi et al., 2002, p. 16). Strong internal software qualities help achieve strong external qualities. Qualities not only concern the software _product_ itself, but also the _process_ used, and the artifacts generated (Ghezzi et al., 2002, p. 16-17). Artifacts include documentation and test data, which are created to improve and measure the software's quality.
This paper measures 13 software qualities, as summarized in Smith et al. (2016): installability, correctness and verifiability (measured together), reliability, robustness, performance, usability, maintainability, reusability, portability, understandability, interoperability, visibility (transparency) and reproducibility. The majority of the above _qualities of software_ come from Ghezzi et al. (2002). We have excluded qualities that we would not be able to sufficiently measure, such as _productivity_ and _timeliness_. We have also added two qualities that we believe are important to the overall quality of SC software: _installability_ and _reproducibility_.
The above software qualities come from SE, and apply to any class of software. Specific SC software development principles are also important to consider when examining GIS software. The "best practices" (Wilson et al., 2013) for SC form a checklist of eight basic practices that promote reliable and maintainable code. We use the key ideas from this checklist for creating our grading template. For example, from this list we draw our standards for source code documentation, reuse of libraries, the use of an issue tracker and other key elements of correctness and maintainability.
### Analytic Hierarchy Process
The Analytic Hierarchy Process (AHP) is a multicriteria decision making process. The objective of AHP is to compare multiple results based on multiple criteria important to the decision (Saaty, 1990). In this paper, AHP is used in part to compare qualities of software between each other. Since there is no formal scale or units in which these qualities are measured, we use AHP to remove this problem and focus on relative and pairwise comparisons. AHP consists of a set of _n options_ and a set of _m criteria_ with which the options are graded. The criteria are prioritized. Then, for each criterion, a pairwise analysis is performed on each of the options, in the form of an \(n \times n\) matrix \(a\). The value of \(a_{jk}\) ranges from 1, when options \(j\) and \(k\) are equally graded, to 9, when option \(j\) grades extremely (maximally) higher than \(k\). The definitions of the values between 1 and 9 are found in Saaty (1990).
The value of \(a_{kj}\) is the inverse of \(a_{jk}\) (\(a_{kj}=1/a_{jk}\)). Once the matrix \(a\) has been filled, weights are generated by creating a new matrix \(b\). Entry \(b_{jk}=a_{jk}/\sum(a_{\cdot k})\), where the dot (\(\cdot\)) indicates the entire row index, so each entry is divided by the sum of its column. Next, the entries in each row of \(b\) are averaged to determine the overall score for that option and criterion. All of the scores are weighted according to the priorities of the criteria. Final scores are generated to create a ranking for each of the options. These final scores give a high-level view of how an option compares to the others based on all criteria (Mocenni, 2014).
In our project, the \\(n\\) graded software products are the options. The 13 software qualities are the \\(m\\) criteria. In Triantaphyllou and Mann (1995) the authors warn that options' final scores should not be considered as absolute ranks. In our experiment, we certainly do not wish to absolutely rank software products, but more to sort the software products into groups based on their software qualities.
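As an illustration of this calculation, the following minimal sketch (ours, not the program used for the grading; NumPy-based, with a toy matrix and illustrative names) computes the priority weights for one criterion and combines them into final scores:

```python
import numpy as np

def ahp_weights(a):
    """Priority weights for one criterion from an n x n pairwise matrix a,
    where a[k, j] == 1 / a[j, k]: divide each entry by its column sum,
    then average each row."""
    a = np.asarray(a, dtype=float)
    b = a / a.sum(axis=0)
    return b.mean(axis=1)

# Toy example: three options compared under a single criterion.
a = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])
print(ahp_weights(a))  # roughly [0.63, 0.26, 0.11]

# Final scores: one weight vector per criterion (columns), combined with
# the criteria priorities; equal priorities over 13 criteria are shown here.
W = np.column_stack([ahp_weights(a) for _ in range(13)])  # placeholder matrices
priorities = np.full(13, 1 / 13)
final_scores = W @ priorities
```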
## 3 Methods
In this paper, we create a systematic grading and analysis procedure for a list of SC software products, in particular GIS software. First, the software is graded, based on the software qualities and best practices of SC software from Section 2. Second, the results are discussed and analyzed for trends.
### Software Product Selection
To select the software for analysis, we followed John W. Wilson's list of \"Useful remote sensing software\" (Wilson, 2014). The list provides a comprehensive list of GIS software, and libraries. Most of the software is free and open source, with contributions from both researchers and independent developers. Not all of the links to software products in Wilson's list were used. For example, links to the Python programming language and the R project for statistical computing, were removed because these are general programming languages, and thus not specific to GIS. Additionally, the links to sample data sets and tutorial web pages were not considered, since they are not software products. The full list of software graded can be found in Section 4. In total, there are 30 software products on the list.
### Grading Template
The template we used for grading the software products is a collection of 56 questions. The full list is available in the Appendix and at [https://data.mendeley.com/datasets/6kprpvv7r7/1](https://data.mendeley.com/datasets/6kprpvv7r7/1). The questions are divided into the 13 software qualities listed in Section 2.1. Due to the qualitative or subjective nature of some of the software qualities (e.g. reliability, robustness), the template had to be carefully structured. When choosing questions (measures), we aimed for unambiguity, and quantification wherever possible (e.g. yes/no answers). As outsiders, we looked for measures that are visible, measurable and feasible in a short time with limited domain knowledge. Unlike a comprehensive software review, this template does not grade on functionality and features. Therefore, it is possible that a relatively featureless product can outscore a feature-rich product.
In the first section of the template, general information is gathered about the software. This information contains the software name, URL, license information, possible educational backing, funding methods, and the dates for when the project was released and when it was last updated. A project is defined as _alive_ if it has been updated within the last 18 months, and _dead_ otherwise. This time frame is arbitrary, but it seems appropriate since this includes the usual time frame for new operating system updates and more than a full calendar year for educational institutions. As per Gewaltig and Cannon (2012), we define the category of _public_ software as software intended for use by the public. _Private_ (or _group_) software is only aimed at a specific group of people. Lastly, _concept_ software is available simply to demonstrate algorithms or concepts, and not for use in a production setting. The main categories of development models are: _open source_, where source code is freely available under an open source license; _freeware_, where a binary or executable is provided for free; and, _commercial_, where the user must pay for the software product. If the product is open source, we note the programming language used.
We use a virtual machine to provide an optimal testing environments for each software product. During the process of grading the 30 software products, it is much easier to create a new virtual machine to test the software on, rather than using the host operating system and file system. Adding and removing software from one's computer can often be difficult; we use virtual machines to avoid this headache. Once grading of a software is complete, the virtual machine with the software on it is destroyed, and the host operating system is oblivious. Virtual machines also provide fresh installs of operating systems, which minimizes or completely removes \"works-on-my-computer\" errors. Unless the software has dependencies that must be installed, any installation instructions that are provided by the software developers should be compatible with a fresh install of an operating system. In our grading data, we note the details of the virtual machine, including hypervisor and operating system versions.
### Measuring Qualities
For each of the following qualities, the software receives a grade from one to ten. These grades are the grader's subjective feeling about the software based on the measurements, past experiences and the other GIS software products. The grader must aim for consistency during the grading process. At the end of the ranking process, the potential subjectivity is mitigated by the use of AHP, since in AHP it is the relative difference that matters. As long as two graders are internally consistent, with their grades mostly trending in the same direction, their relative comparisons matrix in AHP should be similar. The objectivity of the grading process is discussed further in Section 3.4.
_Installability_ is an aspect of the software that we can thoroughly analyze. To grade qualities such as usability or robustness, we must first install the software. Installation is also the primary entry point for every user of the software: beginner or advanced. We check for the absence or presence of install instructions. These instructions are ideally linear and highly automated, including the installation of any external libraries that need to be installed. Before proceeding, if there is a way to validate the installation, we run the validation. At the end of the installation and testing, and if an uninstallation is available, we run it to see if any problems were caused. The complete grading template for installability is presented in Table 1. A similar set of measures is used for the other quality gradings.
_Correctness_ is difficult to grade because it is an absolute quality. What we are actually measuring is confidence in correctness and the related quality of _verifiability_. To accurately grade correctness/verifiability, there must be a requirements specification document and the behaviour of the software must strictly adhere to it. We do not have the time or the means to rigorously test every piece of software to this extent. We look for indirect means of judging correctness. For instance, we look for the use of standard libraries, in which the community has confidence, and confidence building techniques, such as assertions in the code, and documentation generated from the code.
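For example, a function written in this spirit might carry a docstring that a documentation generator can pick up (here with a doctest) and an assertion that states and checks an assumption at run time. This fragment is ours and purely illustrative:

```python
def slope_percent(rise_m, run_m):
    """Return terrain slope in percent from rise and run in metres.

    >>> slope_percent(5.0, 100.0)
    5.0
    """
    assert run_m > 0, "run must be positive"  # documented assumption, checked at run time
    return 100.0 * rise_m / run_m
```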
Since the duration of our usage of the software is quick and structured, we can easily analyze surface _reliability_. We cannot grade long term reliability of the product, but poor reliability during grading is certainly a cause for concern. We know how the software is expected to behave via the installation guide and tutorial (if present), and we also complete a getting started tutorial, if available. If the software does not behave as expected during this duration of usage, the software is graded poorly with respect to reliability.
| **Installability Measure** | **Metric** |
| --- | --- |
| Are there installation instructions? | yes, no |
| Are the installation instructions linear? | yes, no |
| Is there something in place to automate the installation? | yes\({}^{*}\), no |
| Is there a specified way to validate the installation, such as a test suite? | yes\({}^{*}\), no |
| How many steps were involved in the installation? | number |
| How many software packages need to be installed? | number |
| Run uninstall, if available. Were any obvious problems caused? | unavail, yes\({}^{*}\), no |

Table 1: Installability grading template (unavail means that uninstall is not available and a \({}^{*}\) indicates that the measurement should also be accompanied by explanatory text.)

When we grade surface _robustness_, we are trying to break the software. We cannot test all features of a product, and we cannot provide exhaustive cases of garbage input to the software. Purposely making errors during the getting started tutorial and other interactions with the software tests the robustness of the program, and its error handling.
_Performance_ is a very difficult quality of software to measure. For practical reasons, the size of the problems we are testing cannot strain the products. Instead of measuring performance directly we look on the surface for evidence that performance was considered, such as a makefile that shows the presence of a performance profiler.
Surface _usability_ is based on our impressions of the \"human- friendliness\" of the product during the grading time frame. During our time using the product, we checked for the existence of a getting started tutorial. This tutorial is an explicit guide for first time users that has linear steps for the absolute basic usage of the product. We also look for a more detailed user manual. If any features are hidden or difficult to find, then they do not satisfy Norman's design principle of visibility (Norman, 2002). We also measure whether the software has the expected \"look-and-feel\" for products for that platform. User support techniques, such as web forums, are also considered when assessing usability.
_Maintainability_ is one of the more concrete software qualities to grade. Whether or not the developers write a changelog, use an issue tracking tool, or have multiple versions of the software, are all easy things to examine. If the developer gives information on how their code is reviewed, or have specific instructions on how to contribute to the project, this information adds to the maintainability of a product.
_Reusability_ is a strong theme in both SC best practices and in SE in general. In our grading, we note products that are currently being reused or that make reusability simple. Adding plugin or add-on functionality greatly improves reusability, especially when well-documented. Also, in the case of an API (Application Programming Interface), having full, concise documentation available for programmers improves reusability.
_Portability_ is graded based on what platforms the software is advertised to work on, and how the developers handle portability. Using cross- platform code or a cross-platform build system is evidence that portability was considered in the design and development. Any related discussion of portability or build practices is also noted.
For grading _understandability_, we examined the source code that comes with open source software products. We checked the source code for objective properties, like modularity, consistent indentation and if concise commenting is used. If there exists a coding standard enforced by the project, it helps understandability. We also checked for documentation regarding software design, such as a module guide for the system architecture. If source code is unavailable, the software is not graded on this criterion.
_Interoperability_ grading consists of examining if the product can communicate or otherwise interact with any external systems. We checked for this kind of interaction and whether an external API document is provided.
_Transparency_ is a quality that is ever-present when grading software products. All information we need for grading, including the getting started tutorial and the source code itself all depend on how the information is presented, and how easy it is to find. We are also interested in whether a development process has been defined. For example, a waterfall or a spiral development model could be used, or perhaps a more ad-hoc process has been documented.
_Reproducibility_ measures any evidence or documentation of development or testing platforms. If there are any tools that alleviate inconsistencies between hardware or software platforms, the reproducibility of the software's results can be tested. As stated above, there are several reasons we use virtual machines for using software during grading. These reasons are applicable for development as well. Documented methods of development or testing on virtual machines greatly helps reproducibility.
### Approach to Grading
During the grading process, the grader is faced with the task of getting a concise snapshot of each software product, based on one to three hours of interaction. The grader needs a good strategy to approach this task. Each grader may have a strategy that is slightly different from the others. Even though we aim for quantification wherever possible, it is unrealistic to expect exactly the same results between all potential graders. The key is to aim for relative consistency between graders, which should be possible since AHP is performing pair-wise comparisons between the grades.
The process of pair-wise comparison is automated using a software program that converts the grades (from 1 to 10 on each quality for each product) to an AHP comparison matrix. Once the AHP calculations are complete, we can see how the software products grade relative to one another. Further details on the algorithm to transform our objective measures into an AHP sorted ranking can be found in Smith et al. (2015a), which presents an analysis similar to the current one, except rather than studying GIS software, the domain of interest is psychometrics software.
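The exact mapping used by that program is documented in Smith et al. (2015a). Purely as an illustration of the idea, a sketch like the following could turn the 1-10 grades for one quality into a reciprocal comparison matrix; the mapping of grade differences onto Saaty's 1-9 scale shown here is an assumption, not the rule used in our grading program:

```python
import numpy as np

def comparison_matrix(grades):
    """Reciprocal pairwise comparison matrix for one quality from 1-10 grades.

    Assumed mapping: a non-negative grade difference d is treated as a
    Saaty-scale preference of d + 1 (capped at 9); the mirrored entry is the
    reciprocal, and equal grades give 1.
    """
    g = np.asarray(grades, dtype=float)
    n = len(g)
    a = np.ones((n, n))
    for j in range(n):
        for k in range(n):
            d = g[j] - g[k]
            a[j, k] = min(d + 1.0, 9.0) if d >= 0 else 1.0 / min(1.0 - d, 9.0)
    return a

grades = [8, 5, 9, 3]             # four products graded on one quality
a = comparison_matrix(grades)     # fed into the AHP weight calculation
```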
For grading GIS software, we award a grade of 5 for "indifference," for example when the developer has not explicitly written about or documented extra measures to increase the performance of an otherwise sound product. We cannot dock marks for poor performance, and we cannot award marks for outstanding performance. This same situation appears often in portability and reusability grading.
We award marks of 1 for understandability (of the code) when no source code is available: since we cannot analyze the product's understandability, it is graded as very poor relative to any open source product.
To demonstrate that the grading process is objective, 5 software products were graded by two reviewers. The final results were very similar, and the final grades nearly exactly the same. The main source of difference between gradings is the interpretation of the definition of correctness, and specifically what a requirements specification document entails. As long as each grader is consistent, the relative comparisons in the AHP results will be consistent between graders. Changes in perceived visibility of the software product also play a major part in differences between grades. Information that is hard to find, or on a different site, can hurt a product's grades, since not all reviewers will have the same luck in finding the information.
## 4 Summary of Results
The most up-to-date and complete grading of the 30 domain software products is available in an external repository at [https://data.mendeley.com/datasets/6kprpvv7r7/1](https://data.mendeley.com/datasets/6kprpvv7r7/1) with a less verbose summary available in the Appendix.
Before grading the software qualities, we gathered general information about the products. Of the 30 GIS products, eight were associated with educational institutions. These institutions are the workplaces of the developers, or provide support for the projects financially. There are 19 open source products. The 30 software products are easily split into three main sets. These sets will be used to simplify the presentation of the software products throughout the remainder of this paper. First, there are six _Desktop Geographical Information System (GIS)_ products, as shown in Table 2. These products have enormous feature sets and exist to obtain, change, analyze and present a wide variety of geographical data. The next set consists of 12 _stand-alone tools_ (Table 3) that perform specific tasks. These tools are much less feature-rich than the desktop GIS systems. Finally, there are 12 _libraries_ and _packages_ (Table 4) that enable programmers to develop their own software products using the functionality of the libraries/packages. Of these libraries, seven are written in Python, three in R, one in C, and one in C++.
Summary general information about the graded GIS software follows:
* 17 products have 5 or fewer developers. Eight projects have two or fewer developers.
* 5 products (GRASS, gvSIG, QGIS, OSSIM, PostGIS) have funding by The Open Source Geospatial Foundation (Foundation, 2014), whose goal is to support the development of open source geospatial software products.
* There are 9 dead products based on our 18 month time-frame for liveness.
* Of the 19 open source products, the GNU GPL license is the most popular (11/19). MIT (4/19) and BSD (3/19) licenses are also widely used. Closed source software, or "freeware", provides either no license, explicit written terms of use, or an end user license agreement.
* Windows is well supported (29/30).
* C++ is the most popular language, in use by 13/30 products.
With respect to _installability_, 21/30 projects contained installation instructions, with 14 of the 21 having linear instructions. Therefore, more than half of the products analyzed did not contain linear installation instructions. Though, in 27/30 cases, installation was automated with the use of makefiles or scripts. So, the absence of linear installation instructions is partially justified in that the steps are taken automatically. Only two software products (NumPy and PostGIS) provided explicit post-installation tests to check the correctness of the installation. Eight products required software to be installed beforehand. Uninstallation automation is not provided in 13 of the 30 projects. However, deleting the software's root directory or deleting the executables was normally sufficient to uninstall the software.
| Name | Status | Open source | Language |
| --- | --- | --- | --- |
| DIVA-GIS (Hijmans, 2011) | Dead | No | Java |
| GRASS (GRASS Development Team, 2014) | Alive | Yes | C |
| gvSIG (gvSIG Association, 2013) | Alive | Yes | Java |
| QGIS (QGIS, 2014) | Alive | Yes | C++, Python |
| SAGA-GIS (Conrad and Wichmann, 2014) | Alive | Yes | C++ |
| uDig (Antonello et al., 2013) | Alive | Yes | Java |

Table 2: Desktop GIS set

| Name | Status | Open source | Language |
| --- | --- | --- | --- |
| Biomapper (Hirzel, 2009) | Dead | No | Borland Delphi |
| Conefor (Saura and Rubio, 2014) | Dead | Yes | C++ |
| CROP_VGT (Griguolo, 2005) | Dead | No | Unclear |
| CyberTracker (CyberTracker Conservation, Unclear) | Alive | No | C++, Java |
| DesktopGarp (Scacheti-Pereira, Unclear) | Unclear | No | C++ |
| FRAGSTATS (McGarigal, 2014) | Dead | No | C++ |
| Lifemapper (Beach et al., 2014) | Unclear | Yes | Python |
| MARXAN (Possingham, 2012) | Dead | No | C++ |
| Maxent (Schapire, 2011) | Alive | No | Java |
| openModeller (openModeller Developers, 2014) | Dead | No | C++ |
| OSSIM (OSGEO, 2014b) | Alive | Yes | C++ |
| Zonation (C-BIG, 2014) | Dead | No | C++ |

Table 3: Stand-alone tools set

| Name | Status | Open source | Language |
| --- | --- | --- | --- |
| GDAL/OGR (OSGEO, 2014a) | Alive | Yes | C++ |
| GDL (Schellens, 2013) | Alive | Yes | C++ |
| geopy (Tigas, 2014) | Alive | Yes | Python |
| landsat (Goslee, 2012) | Dead | Yes | R |
| NetworkX (NetworkX Dev. Team, 2014) | Alive | Yes | Python |
| NumPy (Numpy Developers, 2014) | Alive | Yes | C, Python |
| PostGIS (Ramsey et al., 2014) | Alive | Yes | C |
| pyproj (Whitaker, 2013) | Alive | Yes | Cython |
| pyshp (GeospatialPython.com, 2013) | Alive | Yes | Python |
| raster (Hijmans et al., 2014) | Alive | No | C |
| rgdal (Bivand et al., 2014) | Alive | Yes | C, C++, R |
| shapely (Toblerity, 2014) | Alive | Yes | Python |

Table 4: Programming libraries set

As Figure 1 shows, programming libraries generally do well on installability. This is because these products can often be installed using just one step (9/12) using a package manager, like pip for Python software and CRAN for R software. GRASS and gvSIG, from the Desktop GIS set, also score high on installability, since these products include easy to use installers and have easy-to-follow, linear installation instructions. Poor cases of installability occur when the user must "jump hurdles" to obtain or install the software. Problems occur when users must do additional research, or follow an installation practice that requires an extra layer of software or "work-arounds". For example the only supported way to install DIVA-GIS (native on Windows) on OS X is through Winebottler, an .exe packager for OS X. This is not a "normal" way to support OS X and relies on a third party for installation and portability. This is different from using a virtual machine for installing and using the software, as we have done for our measurement purposes. In the case of MARXAN and Conefor's command line tools, personal information (name, email) or email correspondence with the developers is required to obtain the software. Developers have every right to ask for this information before making the software available to the user, but this still adds complexity to the installation process.
As Figure 2 shows, _correctness and verifiability_ score high for programming libraries and some desktop GIS systems, but not particularly well for the stand-alone tools. 18 systems used external libraries, with the stand-alone tools using external libraries less frequently than the desktop GIS or programming library sets. Some of the most relied-upon software include sp (written in R), GDAL and PostGIS for abstracting the handling and storage of spatial data. Requirements specification documents were very rare. Only three products (GRASS, GDL and pyshp) explicitly stated adherence to a specification. In GRASS, this specification is presented in a wiki that outlines the purpose, scope, overall description and specific requirements, such as performance and design constraints. While specification documents often do not exist, some projects contain other evidence of explicitly considering correctness. Doxygen or similar tools are used in projects such as SAGA-GIS, PostGIS and OSSIM to automatically generate documentation from the source code. This adds to correctness because the specified behaviour of the product is derived from the source code, and by maintaining them together the documentation and the code should be in sync. Another form of confidence building is automated testing. Five desktop GIS systems, and 10 programming libraries used automated testing. Though stand-alone tools show a general lack in automated testing (2/12). Without testing, requirements specification or other evidence, the conclusion is that stand-alone tools have not adequately considered the quality of correctness.
Figure 1: AHP results for installability.
_Reliability_, overall, is very strong on the surface for all three sets of software products. As explained above, installation of the 30 products went smoothly for the most part. Terminal errors or other prohibitive problems during installation were rare and only occurred in two products: Lifemapper and OSSIM (both stand-alone tools). Initial testing of the products was less automated, and contained more room for error, especially if there are multiple steps in the getting started tutorial. There were no errors or other \"breakages\" for desktop GIS products or stand-alone tools during initial testing. However, a programming library, geopy, had a segfault error while running the getting started tutorial.
Surface _robustness_ is considered in all 30 of the software products graded. By making simple typos and using purposely broken/poor input, we were successful in triggering errors in the software products, without the product crashing. All of the software contained some form of error handling, with variations on the style of display and amount of information given in the error message. These variations impact the usability of the product. Good information and prominent placement of blocking errors helps the user understand the errors. While the software all contains error handling, some software products, like raster, give difficult to understand and vague error messages, which provided little information on what the error was or how to proceed. On average, programming libraries performed better than desktop GIS systems and stand-alone tools, giving more informative and noticeable errors on the command line, as compared to the various methods of displaying errors in a GUI environment.
Surface _performance_ is not explicitly discussed in 22/30 software products. GRASS and QGIS have sections of documentation related to performance. This documentation covers performance optimization measures taken by the developers, and/or benchmarks using test data. Performance is considered in one stand-alone tool (openModeller) in a document detailing the methods to profile the product's performance. In the case of programming libraries, the task of achieving maximum performance lies with the end user. Just one programming library (PostGIS) had any documentation about performance considerations. Sometimes, like in the case of FRAGSTATS and GDAL/OGR, the only time performance is mentioned is when there are possible known memory leaks or other performance- related bugs.
Figure 2: AHP results for correctness & verifiability.

Surface _usability_ is strong for both desktop GIS products and programming libraries, as shown in Figure 3. Desktop GIS products contained a getting started tutorial most frequently, with 4/6 products, compared to 5/12 for stand-alone tools and 7/12 for programming libraries. These getting started tutorials normally contain a standard example directed toward first-time users. 29/30 of the software products contained a complete user manual. These user manuals vary in scope and length, but serve to inform the user of the software's complete purpose, design and functionality. The best user manuals come with the desktop GIS products. Their user manuals are logically organized into sections that cover all of the user's interactions with the product, from pre-installation information (feature overview, marketing) to software design, to information and guides on using every facet of the software. The one product that does not contain a user manual is CROP_VGT. In this case, the getting started tutorial serves as the complete guide on how to use the software. Some of the user guides are more academic, such as the documentation for MARXAN, which consists of references to books, and external manuals written by others.
The layout and design of the software products is encompassed by usability. Design is more apparent in GUI applications, but design considerations are also apparent in command line software or programming libraries. For the most part, the expected "look and feel" of the software products is adhered to. Rarely (e.g. Biomapper, OSSIM), some unusual choices are made for such things as the font, or the GUI skin. There are a few rare cases where the design of the software makes important features more difficult to find than they could be. For instance, there is a lack of organization on the settings screen in Maxent. This is known as a problem with visibility, as described in Don Norman's design principles (Norman, 2002).
In most cases (27/30), the expected user characteristics are not documented. Conefor advises that you should be an advanced user to use the command line tools, but otherwise, developers rarely document any background knowledge that potential users should have before using the product.
Another aspect of usability to consider is the existence of a user support system. Other than direct email, and the issue tracker (if it exists, and if it is used for posting support requests), there are alternate methods of support, such as mailing lists, IRC (Internet Relay Chat), message boards, and FAQs (Frequently Asked Questions). Most frequently, 4/6 desktop GIS products had alternatives, like an IRC channel for uDig and a dedicated QGIS StackExchange tag for questions. Stand-alone tools (7/12) and programming libraries (6/12) also had alternatives for support. Deviating from the norm, some projects written in R, like raster, had no explicit support model.
Figure 3: AHP results for usability.

_Maintainability_ roughly varies with the size of the project. With the information available, size is difficult to quantify; however, the grader can form a feel for the size of the project from the number of developers and downloads, and the activity in the news sections and support channels. Based on the reviewer's feel for project size, larger projects generally perform better on maintainability. The developers of small or closed-source projects do not always consider maintainability, as shown in Figure 4. 29 products had multiple versions of the software, but often these past versions were not available for download. The user may not ever want to download these legacy versions, but having them available does not hurt, and improves visibility. 14 of the software products did not use an issue tracking tool, or asked for email correspondence to report bugs. Email correspondence is private, so the reported bugs are not known to all, which is bad for both visibility and maintainability. Of these 14 products, 10 are stand-alone tools. Out of the remaining 16 products that are using issue tracking, 14 of them were mostly dealing with corrective maintenance. Desktop GIS systems (5/6) and programming libraries (9/12) mostly used issue tracking tools. Stand-alone tools only used issue tracking in 2/12 cases (OSSIM and openModeller). When issue trackers were employed, the majority of the tickets have been closed for most products. Trac, GitHub, JIRA, and Sourceforge are the most popular issue tracking systems.
Version control systems are publicly used in desktop GIS (5/6) and programming libraries (9/12), but again, in just 2/12 cases for stand alone tools (OSSIM and openModeller). The developers of any of the graded products may be privately using version control systems, but there is no documentation suggesting so. Git (10) and SVN (7) are nearly equal in use among the graded software products.
The best cases of maintainability come from software products with _developer's guides_. Four desktop GIS systems and two programming libraries contained developer's guides. Any information associated with the process of adding new code to the project from internal or external contributors can be included in a developer's guide. The software products that contain developer's guides are: GRASS, gvSig, NetworkX, PostGIS, SAGA-GIS, uDig.
As an alternative to explicitly documenting the development process, the process can be implicit in the workflow of the tools employed. For example, products that use GitHub adhere to the processes of the Git version control system and the pull request system facilitated by GitHub.
_Reusability_ scored high for desktop GIS and programming library products, but for different reasons. Five out of six desktop GIS systems contain ways to make reusability easy using APIs, and add-ons. An outstanding example, GRASS GIS, contains an API and an add-ons system. These systems provide the software product's functionality to developers so that they can create their own functionality both inside GRASS and in their own programming projects. Programming libraries, on the other hand, provide reusability because the software product itself is the code and available for programmers to use for their own purposes.
Figure 4: AHP results for maintainability.
For both desktop GIS and programming library software, documentation is important. Well-written and designed add-on API documentation can make it easier for developers to learn how to interact with the products. For stand-alone tools, reusability does not seem to be a primary concern. For these products, the developers either do not have the resources or requirements to develop an API or plug-in system.
_Portability_ has been achieved for most of the software products, with 29 products supporting Windows and of these, 22 supported Linux, OS X or both. There are 7 Windows-only products. There exist many different ways to achieve portability including cross-platform build systems such as cmake, OS-specific branches in code, or by the use of a language easily compiled or interpreted on different systems. For example, languages like R and Python can be run on any modern OS. Therefore, the programming libraries set is graded very well on portability. In some cases, portability was explicitly identified as not being important, which means a lack of portability cannot be held against these products, since they have matched their own stated requirements. SAGA-GIS stated that support for OS X is possible, but the developers had not tested it. DesktopGarp explicitly stated that there are no plans for OS X/Linux support.
_Understandability_ of the code, overall, is strong on the surface for all sets of software products. We examined the 19 open source products' source code and found consistency in formatting, and in the cases of products with developer's manuals, sometimes even code style guidelines (uDig, OSSIM, NumPy) or formatting tools (PostGIS). Useful commenting is almost always used. In one case (pyproj), the source contains little formatting, and the grades were lowered accordingly. For larger projects such as the desktop GIS products, and particularly ones with a developer's guide, there are often design documents. Nine open source projects had a design document as a reference.
_Interoperability_ is similar to reusability in that projects that require these facilities often support them well. This occurs primarily in the desktop GIS and programming library sets. These sets use external libraries more frequently, and support re-use via add-ons or directly via an API. For example, geopy is an API itself, but geopy interacts with many external services such as Google and Mapquest to obtain geocoding data.
_Transparency_ seems to be roughly proportional to the size of the project, as illustrated in Figure 5. The more information the project has to display, the more often the developers have designed efficient ways to access this information. Ideally, projects have one web site with all information contained on it. In practice, projects often consist of multiple web sites that provide different services for the project. For example, a main site serves as a hub to external code hosting, download sites, issue trackers, and/or documentation. In this case, it is the grader's task to discover these web sites and gather information about the project.
Key to the transparency of a product is whether its development process is defined. Any protocols that the developers use to add new code, keep track of issues or release new versions are ideally recorded, so that new developers, or users, can be informed. Ten projects had defined development processes. The most thorough information regarding development process was found in the developer's guides. These guides cover development processes, software design, code style and more. Only 8 projects contained any developer-specific documentation section with 7 of them having explicit developer's manuals. Of these 7, 5 were desktop GIS products and the other 2 were programming libraries. Six of the 7 projects with developer's manuals have 5 or more developers. The desktop GIS set has excellent transparency, since these projects have large groups of developers to coordinate. Stand alone tools normally use self-made sites, so the relative transparency can vary, but, in general, this set of GIS software graded poorly in transparency, especially tools like Biomapper or Zonation.
Open source programming libraries can rely on code hosting services such as GitHub or SourceForge to consolidate information and tasks such as issue tracking and a wiki. Software packages available via repositories, such as R software in The Comprehensive R Archive Network (CRAN), can be given a web page to display information about the project.
_Reproducibility_ is only partially considered in the 30 graded software products. Only four products (uDig, NumPy, shapely, GDAL/OGR) provide development setup information. In particular, shapely recommends the usage of a virtual development environment. GDAL/OGR includes a Vagrantfile, which enables the user to have access to a functioning virtual machine, loaded with the project source and tools as configured by Vagrant (Hashimoto and Bender, 2014).
Access to sample data is provided by 24/30 projects. This sample data can be used in the getting started tutorial, or simply to illustrate the format of the data and to provide sample data for the user to play with. Sample data along with a getting started tutorial (see usability) adds to reproducibility (and correctness) since the output can be checked against what is stated in the tutorial. However, sample data is often not comprehensive with respect to the product's functionality, so one cannot fully grade correctness using sample data. To fully grade correctness, a product must use a comprehensive test suite, as discussed in the results summary for correctness.
Once the grading has been finished, the overall impression of the products' performance on all software qualities is evaluated using AHP with equal weights between all qualities, as shown in Figure 6. Stand-alone tools' AHP grades are lower relative to the other two sets. This ranking is due to the generally poor performance of stand-alone tools on installability, correctness, maintainability, reusability, portability, understandability, transparency and reproducibility. Part of the reason for the relatively poor performance may be that these products have fewer developers.
## 5 Recommendations
Our recommendations assume the ideal case where the developers have the desire, time and resources to aim for high quality. That is, in the terminology of Gewaltig and Cannon (2012), the software is intended to be user ready, as opposed to review ready, or research ready. Not all developers will require a high grade on the template in the Appendix. However, if the work will be used for decisions that impact health, safety or financial concerns, or if the project is to be maintained going forward, then high quality should be the goal. Moreover, if the results obtained with the software are to be reproducible, documentation has a critical role.
Figure 5: AHP results for transparency.
An example from GIS that stands out on all measures is GRASS. Developers on other projects should look to GRASS as an example to emulate. This advice applies to projects outside of the GIS domain. In a paper studying SC software in the domain of oceanography, using the same methods as used in this paper, one of the recommendations was for oceanography developers to follow the example set by GRASS (Smith et al., 2015b). The success of GRASS is, of course, based on the hard work of the dedicated individuals that have contributed to it. However, the success should also be attributed to the existence of a clear software development process and an infrastructure of development support tools.
The full grading template in the Appendix should be taken as a set of criteria for developers of SC software to consider to ensure best development practices and product quality. Considering all of the items on the list is recommended, but based on the above results for the GIS domain, the authors have three main recommendations for developers:
1. **Ensure the project has a requirements specification document.** Correctness is a quality on which many software products suffered in our study. By definition, correctness requires a specification. For developers to claim correctness, they must have complete, consistent, unambiguous, and verifiable specifications detailing the operation of the software product. In this instance, if the software was graded more leniently, perhaps some of the more extensive and complete user manuals could be seen as requirements specification documents. Sometimes, they even included mathematical background. However, the nature of user manuals is to teach the end user about how to use the software product, not to provide requirements. The move to incorporate requirements is facilitated by the progress on this topic in SE. A structured template for requirements specification for SC software is provided by Smith and Lai (2005). Formal specification can also be applied
Figure 6: Final AHP results.
with tools such as Frama-C (CEA-LIST, 2014) for C or JML (Leavens, 2013) for Java.
2. **Provide multiple support methods.** Support for the product should not simply consist of directly emailing the customers. Mailing lists are better since they can be public, have been in use for many years and are relatively simple. Static methods of support such as an FAQ page or.hlp file (obsolete, Windows help format) are also useful, but do not allow for ad hoc support requests by users. \"Alternative\" methods of support should make support requests easier, and allow any person with the knowledge to respond. Some ideas for addtional support methods include an IRC channel, a Stack Exchange ([http://stackexchange.com/](http://stackexchange.com/)) tag for new questions, or opening the issue tracker up for support requests. Normally, an issue tracker is only for bug reports, but allowing support requests to be added via the issue tracker gives users another way to contact developers and to get support for an issue with the product. This adds to the usability of the product because simple support makes the product simpler to use. Opening the issue tracker to users can assist the developers with maintainability (finding bugs), usability (design visibility issues), and other quality improvements. Not all of the above measures are necessary, especially if the software product has relatively few users or features. In the end, the developers for each software package needs to determine the appropriate level of support for their project.
3. **Design product websites for maximum transparency; for open source projects, provide a developer's guide.** Transparency of a product is important for developers because users with different backgrounds and intentions will be looking for information. Transparency played a large part in how quickly we could grade each product. Developers can make essential information about the project visible by creating well-designed and usable websites. Simple HTML websites are easy to maintain, and their design is straightforward. Web platforms such as Wordpress (Automatic, 2014) make creating and administering a blog and page style product website straight forward. There also exists full web solutions (like GitHub or SourceForge) to display product information and host source code, and other assets such as user manuals and issue trackers. For example, we used GitHub to host our project results summaries during the creation of this paper ([https://github.com/adamlazz/DomainX](https://github.com/adamlazz/DomainX)). When developers start to mix two or three of the above methods for their own project, transparency is greatly reduced. Developers are tasked with keeping multiple sites up to date while developing the product. If the multiple sites are not up to date, the user might be misguided, and the management of the product suffers. Transparency is especially important to consider for new team members or users that choose to look at the source and edit or contribute new code. The product's lead developers should create developer's guides as reference materials for these new developers. Ideally, all aspects of product development are represented. Information on the current state of development, product roadmap, design, and contribution guidelines for adding new code should all be included. These contribution guidelines can include any explicit coding standards or version control processes (e.g. creating a new branch for the patch changes). These processes increase maintainability because the developers have created a plan to execute when maintaining the source.
Once these steps have been taken, we would further recommend the use of a virtual development environment to ease reproducibility. These are quite simple to create nowadays, and ensures that developers' and testers' environments are fully controlled. This makes it simple for new developers of the product to set up their development environment. While this recommendation mainly concerns developers, it is also possible that this environment can be used by end users for a complete, isolated view of the product that requires no set up from the user.
## 6 Conclusions
To provide feedback to the GIS software community, we systematically graded 30 software products associated with the domain. Using a multicriteria decision making method, AHP, we performed a pairwise analysis between each software product. The results were summarized and interpreted for trends.
For the state of practice in GIS software we found the following positive trends among the graded software:
* Products rarely have installation problems or problems with initial testing.
* Projects handle garbage input without problems, such as crashing the program or errantly proceeding with bad input. All GIS software products surveyed had some error handling, which adds to their robustness.
Our survey found the following negative trends:
* Developers rarely explain the background knowledge or fully explain the intended behaviour of the product with a requirements specification document. Without a complete specification document, the product cannot adequately be judged on correctness.
* Ideal or expected user characteristics are rarely stated, which makes it difficult for the user to determine if the product is right for them.
* Instructions for validating or checking installation to ensure it works correctly are rarely included in the graded software. If the user is unfamiliar with the software product, this information would be helpful to them.
* For people that want to contribute to the source code, identification of a coding standard should be provided, and there should be comments in the code indicating \"what\" is being done, but not \"how\" it is being done. Proper code documentation should include pointers to more information on the algorithms used in the code to improve the understandability of the software.
* Evidence that performance or maintainability are considered is rare. Lack of this information hurts the user's impression of the product for these qualities.
* Though not a part of the software itself, the supporting web sites are still a part of the product. Having multiple web sites serving separate functions hurts transparency of the project. For example, having separate sites for the main product page, a repository site and a wiki site means users must hunt for information online that would be better gathered from a single well-designed web site.
## 7 Acknowledgments
The authors acknowledge the time and effort of fellow team members Vasudha Kapil, Sun Yue and Zheng Zeng for their assistance in the project. In particular, Sun Yue for development and initial documentation for a Java program automating AHP pairwise comparisons from software grading scores.
## References
* Antonello et al. (2013) Antonello, A., Eichar, J., Garnett, J., Pazos, M., Gasdorf, F., 2013. uDig. URL [http://udig.refractions.net/](http://udig.refractions.net/)
* Automatic (2014) Automatic, 2014. Wordpress. URL [https://wordpress.com](https://wordpress.com)
* Beach et al. (2014) Beach, J., Stewart, A., Grady, C., Cavner, J., 2014. Lifemapper. URL [http://www.lifemapper.org](http://www.lifemapper.org)
* Bivand et al. (2014) Bivand, R., Keitt, T., Rowlingson, B., Pebesma, E., Sumner, M., Hijmans, R., Rouault, E., 2014. rgdal. URL [http://cran.r-project.org/web/packages/rgdal/index.html](http://cran.r-project.org/web/packages/rgdal/index.html)
* C-BIG (2014) C-BIG, 2014. Zonation. URL [http://cbig.it.helsinki.fi/software/zonation/](http://cbig.it.helsinki.fi/software/zonation/)
* CEA-LIST (2014) CEA-LIST, 2014. Frama-C. URL [http://frama-c.com/](http://frama-c.com/)
* Conrad and Wichmann (2014) Conrad, O., Wichmann, V., 2014. SAGA-GIS. URL [http://www.saga-gis.org/en/index.html](http://www.saga-gis.org/en/index.html)
* CyberTracker Conservation (2014) CyberTracker Conservation, Unclear. CyberTracker. URL [http://www.cybertracker.org/](http://www.cybertracker.org/)
* Foundation (2014) Foundation, O., 2014. OSGeo. URL [http://www.osgeo.org/](http://www.osgeo.org/)
* GeospatialPython (2013) GeospatialPython.com, 2013. pyshp. URL [https://code.google.com/p/pyshp/](https://code.google.com/p/pyshp/)
* Gewaltig and Cannon (2012) Gewaltig, M.-O., Cannon, R., May 2012. Quality and sustainability of software tools in neuroscience. Cornell University Library, 1-20.
* Gewaltig and Cannon (2014) Gewaltig, M.-O., Cannon, R., January 2014. Current practice in software development for computational neuroscience and how to improve it. PLOS Computational Biology, 1-9.
* Ghezzi et al. (2002) Ghezzi, C., Jazayeri, M., Mandrioli, D., 2002. Fundamentals of Software Engineering, 2nd Edition. Prentice Hall.
* Goslee (2012) Goslee, S., 2012. landsat. URL [http://cran.r-project.org/web/packages/landsat/](http://cran.r-project.org/web/packages/landsat/)
* GRASS Development Team (2014) GRASS Development Team, 2014. GRASS. URL [http://grass.osgeo.org/](http://grass.osgeo.org/)
* Griguolo (2005) Griguolo, S., 2005. CROP_VGT. URL [http://cido.iua.it/~silvio/cropvgt.html](http://cido.iua.it/~silvio/cropvgt.html)
* gvSIG Association (2013) gvSIG. URL [http://www.gvsig.org/web](http://www.gvsig.org/web)
* Hashimoto and Bender (2014) Hashimoto, M., Bender, J., 2014. Vagrant. URL [https://github.com/mitchellh/vagrant](https://github.com/mitchellh/vagrant)
* Hijmans (2011) Hijmans, R., 2011. DIVA-GIS. URL [http://www.diva-gis.org/](http://www.diva-gis.org/)
* Hijmans et al. (2014) Hijmans, R. J., van Etten, J., Mattiuzzi, M., Sumner, M., Greenberg, J. A., Lamigueiro, O. P., Bevan, A., Racine, E. B., Shortridge, A., 2014. raster. URL [http://cran.r-project.org/web/packages/raster/](http://cran.r-project.org/web/packages/raster/)
* Hirzel (2009) Hirzel, A., 2009. Biomapper. URL [http://www2.unil.ch/biomapper/](http://www2.unil.ch/biomapper/)
* Kelly (2007) Kelly, D. F., 2007. A software chasm: Software engineering and scientific computing. IEEE Softw. 24 (6), 120-119.
* Leavens (2013) Leavens, G., 2013. The Java Modeling Language. URL [http://www.eecs.ucf.edu/~leavens/JML/index.shtml](http://www.eecs.ucf.edu/~leavens/JML/index.shtml)McGarigal, K., 2014. FRAGSTATS.
URL [http://www.umass.edu/landeco/research/fragstats/fragstats.html](http://www.umass.edu/landeco/research/fragstats/fragstats.html)
Mocenni, C., 2014. The analytic hierarchy process. Online.
URL [http://www.dii.unisi.it/~mocenni/Note_AHP.pdf](http://www.dii.unisi.it/~mocenni/Note_AHP.pdf)
NetworkX Dev. Team, 2014. NetworkX.
URL [http://networkx.github.io/](http://networkx.github.io/)
Norman, D. A., 2002. The Design of Everyday Things, reprint paperback Edition. Basic Books, New York.
Numpy Developers, 2014. NumPy.
URL [http://www.numpy.org/](http://www.numpy.org/)
openModeller Developers, 2014. openModeller.
URL [http://openmodeller.sourceforge.net/](http://openmodeller.sourceforge.net/)
OSGEO, 2014. GDAL/OGR Python.
URL [http://trac.osgeo.org/gdal/wiki/Gdal0grInPython](http://trac.osgeo.org/gdal/wiki/Gdal0grInPython)
OSGEO, 2014b. OSSIM.
URL [http://trac.osgeo.org/ossim/](http://trac.osgeo.org/ossim/)
Possingham, H., 2012. MARXAN.
URL [http://www.uq.edu.au/marxan/](http://www.uq.edu.au/marxan/)
QGIS, 2014. QGIS.
URL [http://www.qgis.org/](http://www.qgis.org/)
Ramsey, P., Santilli, S., Obe, R., Cave-Ayland, M., Park, B., 2014. PostGIS.
URL [http://postgis.refractions.net/](http://postgis.refractions.net/)
Saaty, T. L., 1990. How to make a decision: The analytic hierarchy process. European Journal of Operational Research 48 (1), 9-26.
Saura, S., Rubio, L., 2014. Conefor.
URL [http://www.conefor.org/coneforsensinode.html](http://www.conefor.org/coneforsensinode.html)
Scachetti-Pereira, R., Unclear. DesktopGarp.
URL [http://www.nhm.ku.edu/desktopgarp/FAQ.html](http://www.nhm.ku.edu/desktopgarp/FAQ.html)
Schapire, R., 2011. Maxent.
URL [http://www.cs.princeton.edu/~schapire/maxent/](http://www.cs.princeton.edu/~schapire/maxent/)
Schellens, M., 2013. GDL.
URL [http://gnudatalanguage.sourceforge.net/](http://gnudatalanguage.sourceforge.net/)
Segal, J., 2005. When software engineers met research scientists: A case study. Empirical Software Engineering 10 (4), 517-536.
Segal, J., 2007. Some problems of professional end user developers. In: VLHCC '07: Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing. IEEE Computer Society, Washington, DC, USA, pp. 111-118.
Segal, J., July-Aug 2008. Developing scientific software. IEEE Software 25 (4), 18-20.
Smith, S., Lai, L., August 2005. A new requirements template for scientific computing. Proceedings of SREP'05, 1-15.
Smith, S., Sun, Y., Carette, J., January 2015a. Comparing psychometrics software development between CRAN and other communities. Technical Report CAS-15-01-SS, McMaster University.
Smith, S., Sun, Y., Carette, J., January 2015b. State of the practice for developing oceanographic software. Technical Report CAS-15-02-SS, McMaster University, Department of Computing and Software.
Smith, W. S., Lazzarato, A., Carette, J., October 2016. State of practice for mesh generation software. Advances in Engineering Software 100, 53-71.
Smith, W. S., Zeng, Z., Carette, J., 2017. Seismology software: State of the practice. Journal of Seismology Submitted, 33 pp.
Tigas, M., 2014. geopy.
URL [https://github.com/geopy/geopy](https://github.com/geopy/geopy)
Toblerity, 2014. shapely.
URL [https://github.com/Toblerity/Shapely](https://github.com/Toblerity/Shapely)
Triantaphyllou, E., Mann, S. H., 1995. Using the analytic hierarchy process for decision making in engineering applications. International Journal of Industrial Engineering: Applications and Practice 2 (1), 35-44.
Whitaker, J., 2013. pyproj.
URL [https://code.google.com/p/pyproj/](https://code.google.com/p/pyproj/)
Wilson, G., Aruliah, D., Brown, C. T., Hong, N. P. C., Davis, M., Guy, R. T., Haddock, S. H., Huff, K. D., Mitchell, I. M., Plumblet, M. D., Waugh, B., White, E. P., Wilson, P., September 2013. Best practices for scientific computing. CoRR.
* Wilson (2014) Wilson, J., 2014. Useful remote sensing software. Online. URL [http://www.johnnybirder.com/outreach/remotesensing/software.html](http://www.johnnybirder.com/outreach/remotesensing/software.html)
## Appendix A Full Grading Template
The table below lists the full set of measures that are assessed for each software product. The measures are grouped under headings for each quality, and one for summary information. Following each measure, the type for a valid result is given in brackets. Many of the types are given as enumerated sets. For instance, the response on many of the questions is one of \"yes,\" \"no,\" or \"unclear.\" The type \"number\" means natural number, a positive integer. The types for date and url are not explicitly defined, but they are what one would expect from their names. In some cases the response for a given question is not necessarily limited to one answer, such as the question on what platforms are supported by the software product. Case like this are indicated by \"set of\" preceding the type of an individual answer. The type in these cases are then the power set of the individual response type. In some cases a superscript \\({}^{*}\\) is used to indicate that a response of this type should be accompanied by explanatory text. For instance, if problems were caused by uninstall, the reviewer should note what problems were caused. An (I) precedes the question description when its measurement requires a successful installation.
## Appendix B
\\begin{table}
\\begin{tabular}{l} \\hline \\hline
**Summary Information** \\\\ \\hline Software name? (string) \\\\ URL? (url) \\\\ Educational institution (string) \\\\ Software purpose (string) \\\\ Number of developers (number) \\\\ How is the project funded (string) \\\\ Number of downloads for current version (number) \\\\ Release date (date) \\\\ Last updated (date) \\\\ Status ([alive, dead, unclear]) \\\\ License ((GNU GPL, BSD, MIT, terms of use, trial, none, unclear]) \\\\ Platforms (set of [Windows, Linux, OS X, Android, Other OS]) \\\\ Category ([concept, public, private]) \\\\ Development model ((open source, freeware, commercial)) \\\\ Publications using the software (set of url) \\\\ Publications about the software (set of url) \\\\ Is source code available? ([yes, no]) \\\\ Programming language(s) (set of {FORTRAN, Matlab, C, C\\({}^{*}\\), Java, R, Ruby, Python, Cython, BASIC, Pascal, IDL, unclear]) \\\\ \\hline
**Installability** (Measured via installation on a virtual machine.) \\\\ \\hline Are there installation instructions? ([yes, no]) \\\\ Are the installation instructions linear? ([yes, no, n/a]) \\\\ Is there something in place to automate the installation? ([yes\\({}^{*}\\), no]) \\\\ Is there a specified way to validate the installation, such as a test suite? ([yes\\({}^{*}\\), no]) \\\\ How many steps were involved in the installation? (number) \\\\ How many software packages need to be installed before or during installation? (number) \\\\ (I) Run uninstall, if available. Were any obvious problems caused? ([unavail, yes\\({}^{*}\\), no]) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Grading TemplateOverall impression? ((1.. 10))
**Correctness and Verifiability**
Are external libraries used? ((yes\\({}^{*}\\), no, unclear))
Does the community have confidence in this library? ((yes, no, unclear))
Any reference to the requirements specifications of the program? ((yes\\({}^{*}\\), no, unclear))
What tools or techniques are used to build confidence of correctness? (string)
(I) If there is a getting started tutorial, is the output as expected? ((yes, no\\({}^{*}\\), n/a))
Overall impression? ((1.. 10))
**Surface Reliability**
Did the software \"break\" during installation? ((yes\\({}^{*}\\), no))
(I) Did the software \"break\" during the initial tutorial testing? ((yes\\({}^{*}\\), no, n/a))
Overall impression? ((1.. 10))
**Surface Robustness**
(I) Does the software handle garbage input reasonably? ((yes, no\\({}^{*}\\)))
(I) For any plain text input files, if all new lines are replaced with new lines and carriage returns, will the software handle this gracefully? ((yes, no\\({}^{*}\\), n/a))
Overall impression? ((1.. 10))
**Surface Performance**
Is there evidence that performance was considered? ((yes\\({}^{*}\\), no))
Overall impression? ((1.. 10))
**Surface Usability**
Is there a getting started tutorial? ((yes, no))
Is there a standard example that is explained? ((yes, no))
Is there a user manual? ((yes, no))
(I) Does the application have the usual \"look and feel\" for the platform it is on? ((yes, no\\({}^{*}\\)))
(I) Are there any features that show a lack of visibility? ((yes, no\\({}^{*}\\)))
Are expected user characteristics documented? ((yes, no))
What is the user support model? (string)
Overall impression? ((1.. 10))
**Maintainability**
Is there a history of multiple versions of the software? ((yes, no, unclear))
Is there any information on how code is reviewed, or how to contribute? ((yes\\({}^{*}\\), no))
Is there a changelog? ((yes, no))
What is the maintenance type? (set of {corrective, adaptive, perfective, unclear})
What issue tracking tool is employed? (set of {Trac, JIRA, Redmine, e-mail, discussion board, sourceforge, google code, git, none, unclear})
Are the majority of identified bugs fixed? ((yes, no\\({}^{*}\\), unclear))
Which version control system is in use? ((svn, cvs, git, github, unclear))
Is there evidence that maintainability was considered in the design? ((yes\\({}^{*}\\), no))
Are there code clones? ((yes\\({}^{*}\\), no, unclear))
Overall impression? ((1.. 10))
## Appendix B Summary of Grading Results
The full gradings of the 30 GIS software products are below. The most recent gradings are available at: [https://data.mendeley.com/datasets/6kprpvv7r7/1](https://data.mendeley.com/datasets/6kprpvv7r7/1). The column headings correspond with the above questions from the grading template.
\\begin{table}
\\begin{tabular}{l l l l l l l l} \\hline \\hline Name & Ins & Lin & Auto & Val & Steps & Pkgs & Uninstall \\\\ \\hline DIVA-GIS & Yes & Yes & Yes & No & 1 & 0 & No uninstall available \\\\ GRASS & Yes & Yes & Yes & Yes & 1 & 2 & No problems \\\\ gvSIG & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ QGIS & Yes & No & Yes & No & 2 & 1 & No uninstall available \\\\ SAGA-GIS & Yes & Yes & Yes & No & 1 & 1 & No uninstall available \\\\ uDig & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ Biomapper & No & N/A & Yes & No & 2 & 0 & No uninstall available \\\\ Conefor & No & N/A & N/A & No & 2 & 0 & No uninstall available \\\\ CROP\\_VGT & Yes & Yes & N/A & No & 2 & 0 & No problems \\\\ CyberTracker & Yes & Yes & Yes & No & 1 & 0 & No uninstall available \\\\ DesktopGarp & Yes & Yes & Yes & No & 2 & 0 & No uninstall available \\\\ FRAGSTATS & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ Lifemapper & No & N/A & Yes & No & 2 & 0 & No uninstall available \\\\ MARXAN & No & N/A & Yes & No & 2 & 0 & No uninstall available \\\\ Maxent & Yes & No & Yes & No & 1 & 0 & No problems \\\\ openModeller & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ OSSIM & No & Yes & Yes & No & 1 & 12 & No problems \\\\ Zonation & No & N/A & N/A & No & 1 & 0 & No problems \\\\ GDAL/OGR & Yes & No & Yes & No & 1 & 2 & No uninstall available \\\\ GDL & Yes & Yes & Yes & No & 4 & 0 & No uninstall available \\\\ geopy & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ landsat & No & N/A & Yes & No & 1 & 2 & No problems \\\\ NetworkX & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ NumPy & Yes & Yes & Yes & Yes & 3 & 0 & No uninstall available \\\\ PostGIS & Yes & No & Yes & Yes & 1 & 1 & No problems \\\\ pyproj & Yes & Yes & Yes & No & 2 & 0 & No uninstall available \\\\ pyshp & Yes & Yes & Yes & No & 1 & 0 & No problems \\\\ raster & No & N/A & Yes & No & 1 & 1 & No problems \\\\ rgdal & No & N/A & Yes & No & 1 & 1 & No problems \\\\ shapely & Yes & No & Yes & No & 1 & 1 & No problems \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.6: Installability grading results
\\begin{table}
\\begin{tabular}{l l l l l} \\hline \\hline Name & Std Lib & Req & Evidence & Std Ex \\\\ & & Spec & & \\\\ & & Doc & & \\\\ \\hline DIVA-GIS & No & No & None & Yes \\\\ GRASS & Yes & Yes & Programmers guide & Yes \\\\ gvSIG & No & No & No & Yes \\\\ QGIS & Yes & No & Developers section & N/A \\\\ SAGA-GIS & Yes & No & Doxygen & N/A \\\\ uDig & Yes & No & Developers guide & Yes \\\\ Biomapper & No & No & None & N/A \\\\ Conefor & No & No & No & N/A \\\\ CROP\\_VGT & No & No & None & N/A \\\\ CyberTracker & Yes & No & No & Yes \\\\ DesktopGarp & Yes & No & No & Yes \\\\ FRAGSTATS & Yes & No & None & Yes \\\\ Lifemapper & Yes & No & pydoc & Yes \\\\ MARXAN & No & No & None & N/A \\\\ Maxent & No & No & None & Yes \\\\ openModeller & Yes & No & None & Yes \\\\ OSSIM & No & No & Doxygen & Yes \\\\ Zonation & No & No & None & Yes \\\\ GDAL/OGR & Yes & No & Doxygen & N/A \\\\ GDL & Yes & Yes & Doxygen & N/A \\\\ geopy & No & No & None & No \\\\ landsat & Yes & No & Extensive documentation & No \\\\ NetworkX & Yes & No & None & Yes \\\\ NumPy & Yes & No & None & Yes \\\\ PostGIS & Yes & No & Doxygen & Yes \\\\ pyproj & No & No & Wrapper to PROJ.4 library & N/A \\\\ pyshp & No & Yes & No & Yes \\\\ raster & Yes & No & None & Yes \\\\ rgdal & Yes & No & None & N/A \\\\ shapely & Yes & No & None & Yes \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 7: Correctness grading results
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline Name & Break during install & Break during \\\\ & & initial test \\\\ \\hline DIVA-GIS & No & No \\\\ GRASS & No & No \\\\ gvSIG & No & No \\\\ QGIS & No & No \\\\ SAGA-GIS & No & No \\\\ uDig & No & No \\\\ Biomapper & No & N/A \\\\ Conefor & No & No \\\\ CROP\\_VGT & No & No \\\\ CyberTracker & No & No \\\\ DesktopGarp & No & No \\\\ FRAGSTATS & No & No \\\\ Lifemapper & Yes, install command not given & No \\\\ MARXAN & No & No \\\\ Maxent & No & No \\\\ openModeller & No & No \\\\ OSSIM & Yes, installed wrong package & No \\\\ Zonation & No & No \\\\ GDAL/OGR & No & N/A \\\\ GDL & No & N/A \\\\ geopy & No & Yes, segfault \\\\ landsat & No & No \\\\ NetworkX & No & No \\\\ NumPy & No & No \\\\ PostGIS & No & No \\\\ pyproj & No & No \\\\ pyshp & No & No \\\\ raster & No & No \\\\ rgdal & No & N/A \\\\ shapely & No & No \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.8: Reliability grading results
\\begin{table}
\\begin{tabular}{l l l l l} \\hline \\hline Name & Handle garbage input & Handle & line & ending \\\\ & & & change & \\\\ \\hline DIVA-GIS & Yes & N/A & & \\\\ GRASS & Yes & N/A & & \\\\ gvSIG & Yes & N/A & & \\\\ QGIS & Yes & N/A & & \\\\ SAGA-GIS & Yes & N/A & & \\\\ uDig & Yes & N/A & & \\\\ Biomapper & Yes & N/A & & \\\\ Conefor & Yes & N/A & & \\\\ CROP\\_VGT & Yes & N/A & & \\\\ CyberTracker & Yes & N/A & & \\\\ DesktopGarp & Yes & N/A & & \\\\ FRAGSTATS & Yes & N/A & & \\\\ Lifemapper & Yes & N/A & & \\\\ MARXAN & Yes & N/A & & \\\\ Maxent & Yes & N/A & & \\\\ openModeller & Yes & N/A & & \\\\ OSSIM & Yes & N/A & & \\\\ Zonation & Yes & N/A & & \\\\ GDAL/OGR & Yes & Yes (in scripts) & & \\\\ GDL & Yes & Yes (in scripts) & & \\\\ geopy & Yes & Yes (in scripts) & & \\\\ landsat & Yes & Yes (in scripts) & & \\\\ NetworkX & Yes & Yes (in scripts) & & \\\\ NumPy & Yes & Yes (in scripts) & & \\\\ PostGIS & Yes & N/A & & \\\\ pyproj & Yes & Yes (in scripts) & & \\\\ pyshp & Yes & Yes (in scripts) & & \\\\ raster & Yes & Yes (in scripts) & & \\\\ rgdal & Yes & Yes (in scripts) & & \\\\ shapely & Yes & Yes (in scripts) & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.9: Robustness grading results
\\begin{table}
\\begin{tabular}{l l} \\hline \\hline Name & Evidence of performance considerations \\\\ \\hline DIVA-GIS & No \\\\ GRASS & Yes, performance-specific documentation \\\\ gvSIG & No \\\\ QGIS & Yes, notes in wiki on performance \\\\ SAGA-GIS & No \\\\ uDig & No \\\\ Biomapper & No \\\\ Conefor & No \\\\ CROP\\_VGT & No \\\\ CyberTracker & No \\\\ DesktopGarp & No \\\\ FRAGSTATS & No \\\\ Lifemapper & No \\\\ MARXAN & No \\\\ Maxent & No \\\\ openModeller & Yes, notes in wiki on performance \\\\ OSSIM & No \\\\ Zonation & No \\\\ GDAL/OGR & No \\\\ GDL & No \\\\ geopy & No \\\\ landsat & No \\\\ NetworkX & No \\\\ NumPy & No \\\\ PostGIS & Yes, performance tips in documentation \\\\ pyproj & No \\\\ pyshp & No \\\\ raster & No \\\\ rgdal & No \\\\ shapely & No \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.10: Performance grading results
\\begin{table}
\\begin{tabular}{l c c c c c c c} \\hline \\hline Name & GS & Std & User & Look & Visib & User & Support \\\\ & tuto- & Ex & Man & and & Prob? & char & \\\\ & trial & & & feel & & & \\\\ \\hline DIVA-GIS & Yes & Yes & Yes & Yes & No & No & Mailing list, email \\\\ GRASS & Yes & Yes & Yes & Yes & No & No & Mailing Lists, forum \\\\ gvSIG & Yes & Yes & Yes & Yes & No & No & Bug tracker, mailing list \\\\ QGIS & No & No & Yes & Yes & No & No & Mailing Lists, Forum, StackExchange, chat \\\\ SAGA-GIS & No & No & Yes & Yes & No & No & Mailing list, forum \\\\ uDig & Yes & Yes & Yes & Yes & No & No & Mailing list, Issue tracker, IRC \\\\ Biomapper & No & No & No & No & Yes & No & Discussion list, wiki \\\\ Conefor & No & No & Yes & Yes & No & Yes & Email list \\\\ CROP\\_VGT & No & No & No & Yes & No &.nlp file, email \\\\ CyberTracker & Yes & Yes & Yes & Yes & No & Facebook/Yahoo group, email \\\\ DesktopGarp & Yes & Yes & Yes & Yes & No & No & Discussion list \\\\ FRAGSTATS & Yes & Yes & Yes & Yes & No & No & FAQ, email \\\\ Lifemapper & No & No & Yes & Yes & No & None \\\\ MARXAN & No & No & Yes & Yes & No & No & Mailing List, email \\\\ Maxent & Yes & No & Yes & Yes & Yes & Yes & Discussion group \\\\ openModeller & No & No & Yes & Yes & No & No & IRC, email \\\\ OSSIM & Yes & Yes & Yes & No & Yes\\({}^{*}\\) & No & IRC, Mailing list, Issue tracker \\\\ Zonation & No & Yes & Yes & Yes & No & No & Issue tracker, forums, wiki, \\\\ GDAL/OGR & No & No & Yes & Yes & No & No & Mailing list \\\\ GDL & No & No & Yes & Yes & No & No & Docs, readme, forums \\\\ geopy & Yes & Yes & Yes & Yes & No & No & Github Issues \\\\ landsat & No & No & Yes & Yes & No & No & Email \\\\ NetworkX & Yes & Yes & Yes & Yes & No & No & Issue tracker, mailing list \\\\ NumPy & Yes & Yes & Yes & Yes & No & No & GitHub, Mailing List \\\\ PostGIS & Yes & Yes & Yes & Yes & No & No & IRC, Mailing list, ticket tracker, commercial support, Stack Exchange \\\\ pyproj & No & No & Yes & Yes & No & No & Issue tracker \\\\ pyshp & Yes & Yes & Yes & Yes & No & No & Issue tracker, email, commercial support \\\\ raster & Yes & Yes & Yes & Yes & Yes & No & None. But you can find the developers email \\\\ rgdal & No & No & Yes & Yes & No & No & None explicit, email? \\\\ shapely & No & Yes & Yes & Yes & No & No & Github \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.11: Usability grading results, \\({}^{*}\\) has a visibility problem with the settings screen layoutPM means Programmer's manual, DG means Developer's Guide, NC means not complete, C means Corrective, P means Perfective, A means Adaptive, \\({}^{*}\\) Need account to view issue tracker, no software showed code clones.
\\begin{table}
\\begin{tabular}{l l l l l l l l l} \\hline \\hline Name & Mul & Code & Chlog & Type & Issue & track & Bugs fixes & CVS & Evid \\\\ & Ver & rvw & & & & tool & & & \\\\ \\hline DIVA-GIS & Yes & N/A & Yes & N/A & Email & N/A & N/A & No \\\\ GRASS & Yes & PM & Yes & C & Trac & Yes & SVN & PM \\\\ gvSIG & Yes & DG & No & N/A\\({}^{*}\\) & N/A\\({}^{*}\\) & N/A\\({}^{*}\\) & SVN & No \\\\ QGIS & Yes & Yes & Yes & C & Redmine & Yes & Git & No \\\\ SAGA-GIS & Yes & Yes & Yes & C & Trac & No & SVN & Yes \\\\ uDig & Yes & Yes & Yes & C & JIRA & No & Git & Yes \\\\ Biomapper & Yes & N/A & Yes & N/A & Email & N/A & N/A & No \\\\ Conefor & Yes & No & No & N/A & N/A & N/A & N/A & N/A \\\\ CROP\\_VGT & Yes & N/A & Yes & N/A & N/A & N/A & N/A & N/A \\\\ CyberTracker & Yes & N/A & Yes & N/A & N/A & N/A & N/A & No \\\\ DesktopGarp & Yes & N/A & No & N/A & N/A & N/A & N/A & N/A \\\\ FRAGSTATS & Yes & N/A & Yes & N/A & Email & N/A & N/A & No \\\\ Lifemapper & Yes & No & No & N/A & N/A & N/A & N/A & Yes \\\\ MARXAN & No & N/A & No & N/A & Email & N/A & N/A & No \\\\ Maxent & Yes & N/A & NC & N/A & N/A & N/A & Git & No \\\\ openModeller & Yes & No & Yes & C & Sourceforge & Yes & SVN & No \\\\ OSSIM & Yes & No & Yes & C & Trac & Yes & Git & Doxygen \\\\ Zonation & Yes & N/A & Yes & C & Redmine & Yes & N/A & No \\\\ GDAL/OGR & Yes & No & Yes & C & Trac & Yes & SVN & No \\\\ GDL & Yes & Yes & Yes & C & Sourceforge & Yes & CVS & No \\\\ geopy & Yes & No & Yes & A & GitHub & Yes & Git & No \\\\ landsat & Yes & No & No & N/A & N/A & N/A & N/A & No \\\\ NetworkX & Yes & DG & Yes & C, P & GitHub & Yes & Git & No \\\\ NumPy & Yes & No & Yes & C & GitHub & Yes & Git & No \\\\ PostGIS & Yes & DG & Yes & C & Trac & Yes & SVN & No \\\\ pyproj & Yes & No & No & C & Google & No & Git & No \\\\ & & & & & Code & & & \\\\ pyshp & Yes & No & Yes & C & Google & Yes & Git & No \\\\ & & & & & Code & & & \\\\ raster & Yes & N/A & Yes & N/A & N/A & N/A & N/A & No \\\\ rgdal & Yes & No & Yes & N/A & N/A & N/A & SVN & No \\\\ shapely & Yes & No & Yes & C & GitHub issues & Yes & Git & No \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.12: Maintainability grading results
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline Name & Portions reused & Evid \\\\ \\hline DIVA-GIS & No & No \\\\ GRASS & Yes add-ons & API documentation \\\\ gvSIG & Yes extensions & No \\\\ QGIS & Yes plugins & Yes plugins \\\\ SAGA-GIS & Yes API & API documentation \\\\ uDig & Yes plugins & Plugin documentation \\\\ Biomapper & No & No \\\\ Conefor & No & No \\\\ CROP\\_VGT & No & No \\\\ CyberTracker & No & No \\\\ DesktopGarp & No & No \\\\ FRAGSTATS & No & No \\\\ Lifemapper & Not shown & Web service \\\\ MARXAN & No & No \\\\ Maxent & Yes API & No \\\\ openModeller & Yes this is a framework & Yes \\\\ OSSIM & Yes API & No \\\\ Zonation & No & No \\\\ GDAL/OGR & Yes API & API documentation \\\\ GDL & Yes & No \\\\ geopy & No & No \\\\ landsat & No & No \\\\ NetworkX & Yes & No \\\\ NumPy & Yes & No \\\\ PostGIS & Yes & No \\\\ pyproj & Yes & No \\\\ pyshp & Yes & No \\\\ raster & Yes & No \\\\ rgdal & Yes & API documentation \\\\ shapely & Yes & API documentation \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 13: Reusability grading results, Evid means Evidence
\\begin{table}
\\begin{tabular}{l l l l l} \\hline \\hline Name & Platform & Port in code & Not important? & Evid \\\\ \\hline DIVA-GIS & WIN OSX & N/A & No & N/A \\\\ GRASS & WIN LIN OSX & Tools to create installer/ packages & N/A & N/A \\\\ gySIG & WIN LIN & Makefile & No & No \\\\ QGIS & WIN LIN OSX & Cross platform code & N/A & No \\\\ & ANDROID & & & \\\\ SAGA-GIS & WIN LIN & Cross platform code & Yes, with OS X & No \\\\ uDig & WIN LIN OSX & Eclipse & N/A & N/A \\\\ Biomapper & WIN & N/A & No & N/A \\\\ Conefor & WIN LIN OSX R & Unclear & No & No \\\\ CROP\\_VGT & WIN & N/A & N/A & N/A \\\\ CyberTracker & WIN ANDROID & N/A & N/A & No \\\\ DesktopGarp & WIN & N/A & Yes. No plans for & N/A \\\\ & & & Mac/Unix & \\\\ FRAGSTATS & WIN & N/A & No & N/A \\\\ Lifemapper & WIN LIN OSX & Python & N/A & N/A \\\\ MARXAN & WIN LIN OSX & N/A & No & N/A \\\\ Maxent & JAVA & Java or.bat & N/A & N/A \\\\ openModeller & WIN LIN OSX & Platform specific installation & N/A & N/A \\\\ OSSIM & WIN LIN OSX & Compilation steps, platform specific code & No & N/A \\\\ Zonation & WIN & N/A & N/A & N/A \\\\ GDAL/OGR & WIN LIN OSX & Differences in makefile & N/A & N/A \\\\ GDL & LIN OSX & N/A & N/A & N/A \\\\ geopy & WIN LIN OSX & Python & N/A & N/A \\\\ landsat & WIN LIN OSX & R & N/A & N/A \\\\ NetworkX & WIN LIN OSX & Python & N/A & N/A \\\\ NumPy & WIN LIN OSX & Python & N/A & N/A \\\\ PostGIS & WIN LIN OSX & Differences in makefile & No & N/A \\\\ pyproj & WIN LIN OSX & Python & N/A & N/A \\\\ pyshp & WIN LIN OSX & Python & N/A & N/A \\\\ raster & WIN LIN OSX & R & N/A & N/A \\\\ rgdal & WIN LIN OSX & R & N/A & N/A \\\\ shapely & WIN LIN OSX & Python & N/A & N/A \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.14: Portability, Evid means Evidence
\\begin{table}
\\begin{tabular}{l l l l l l l l l l l} \\hline \\hline Name & Indent & Code & Cons & Cnstnts & Cmnts & URL & Params & Mdlr & File & Design \\\\ & & std & Id & & & & & & names & doc \\\\ \\hline DIVA-GIS & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ GRASS & Yes & No & Yes & No & Yes & Yes & Yes & Yes & Yes & Yes \\\\ gvSIG & Yes & No & Yes & No & Yes & No & Yes & Yes & Yes & Yes \\\\ QGIS & Yes & No & Yes & No & Yes & Yes & Yes & Yes & No \\\\ SAGA-GIS & Yes & No & Yes & No & Yes & Yes & No & Yes & Yes & Yes \\\\ uDig & Yes & Yes & Yes & No & Yes & No & Yes & Yes & Yes & Yes \\\\ Biomapper & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ Conefor & Yes & No & Yes & No & Yes & No & Yes & Yes & No & No \\\\ CROP\\_VGT & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ CyberTracker & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ DesktopGarp & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ FRAGSTATS & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ Lifemapper & No & No & Yes & No & Yes & No & Yes & Yes & No & No \\\\ MARXAN & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ Maxent & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ openModeller & Yes & No & Yes & No & Yes & Yes & Yes & Yes & No \\\\ OSSIM & Yes & Yes & Yes & No & Yes & No & Yes & Yes & Yes & No \\\\ Zonation & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A & N/A \\\\ GDAL/OGR & Yes & No & Yes & No & Yes & No & Yes & Yes & Yes & No \\\\ GDL & Yes & No & Yes & No & Yes & Yes & Yes & Yes & Yes & Yes \\\\ geopy & Yes & No & Yes & No & Yes & Yes & Yes & Yes & No \\\\ landsat & No & No & Yes & No & Yes & No & Yes & Yes & No & Yes \\\\ NetworkX & Yes & No & Yes & No & Yes & No & Yes & Yes & Yes & Yes \\\\ NumPy & Yes & Yes & No & Yes & No & Yes & Yes & Yes & Yes \\\\ PostGIS & Yes & No & Yes & No & Yes & No & Yes & Yes & Yes & Yes \\\\ pyproj & No & No & Yes & No & No & Yes & Yes & No & No \\\\ pyshp & Yes & No & Yes & No & Yes & Yes & Yes & Yes & No \\\\ raster & Yes & No & Yes & No & No & Yes & Yes & Yes & No & No \\\\ rgdal & Yes & No & Yes & No & No & No & Yes & Yes & No \\\\ shapely & Yes & No & Yes & No & Yes & No & Yes & Yes & Yes & No \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.15: Understandability grading results
\\begin{table}
\\begin{tabular}{l l l l} \\hline \\hline Name & Ext systems & Workflow & API \\\\ \\hline DIVA-GIS & ArcView & No & N/A \\\\ GRASS & Many & Not explicit & Yes \\\\ gvSIG & Other softwares in project & Not explicit & Yes \\\\ QGIS & GDAL framework on OS X & Not explicit & Yes \\\\ SAGA-GIS & wxWidgets & Not explicit & API \\\\ uDig & Eclipse Rich Client Platform & Not explicit & Yes \\\\ Biomapper & None & No & None \\\\ Conefor & None & Yes\\({}^{*}\\) & N/A \\\\ CROP\\_VGT & None & Not explicit & No \\\\ CyberTracker & Android/Windows phones & Not explicit & N/A \\\\ DesktopGarp & Microsoft XML Parser & Not explicit & No \\\\ FRAGSTATS & ERSI ArcGIS ArcInfo used in tutorial & ArcGIS & N/A \\\\ Lifemapper & PostgreSQL PostGIS GISs Web etc & Not explicit & Yes \\\\ MARXAN & None & No & N/A \\\\ Maxent & None & No & Yes \\\\ openModeller & GBIF specisLink WCS & Yes & Yes \\\\ OSSIM & Plugins & Not explicit & Yes \\\\ Zonation & None & Not explicit & N/A \\\\ GDAL/OGR & libgdal Numpy & Not explicit & Yes \\\\ GDL & In software requirements & Not explicit & N/A \\\\ geopy & Many third party services & Not explicit & Yes \\\\ landsat & sp rgdal & Not explicit & Yes \\\\ NetworkX & NumPy SciPy GraphViz and more & Not explicit & Yes \\\\ NumPy & SciPy stack & Not explicit & Yes \\\\ PostGIS & PostgreSQL & Not explicit & Yes \\\\ pyproj & Interface to Proj.4 library & No & Yes \\\\ pyshp & None & Not explicit & Yes \\\\ raster & sp & Not explicit & Yes \\\\ rgdal & sp & Not explicit & Yes \\\\ shapely & libgeos & Not explicit & Yes \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.16: Interoperability, \\({}^{*}\\)Conefor input generating GIS plugins
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline Name & Dev process & External exam \\\\ \\hline DIVA-GIS & No & 5 \\\\ GRASS & Yes, developerβs guide & 10 \\\\ gvSIG & Yes, developerβs guide & 4 \\\\ QGIS & Yes, developerβs guide & 8 \\\\ SAGA-GIS & Yes, developerβs guide & 7 \\\\ uDig & Yes, developerβs guide & 10 \\\\ Biomapper & No & 3 \\\\ Conefor & No & 6 \\\\ CROP\\_VGT & No & 4 \\\\ CyberTracker & No & 6 \\\\ DesktopGarp & No & 7 \\\\ FRAGSTATS & No & 6 \\\\ Lifemapper & No & 6 \\\\ MARXAN & No & 4 \\\\ Maxent & No & 9 \\\\ openModeller & No & 6 \\\\ OSSIM & No & 6 \\\\ Zonation & No & 4 \\\\ GDAL/OGR & No & 4 \\\\ GDL & Yes, HACKING file & 8 \\\\ geopy & No & 9 \\\\ landsat & No & 5 \\\\ NetworkX & Yes, developerβs guide & 9 \\\\ NumPy & No & 4 \\\\ PostGIS & Yes, developerβs guide & 8 \\\\ pyproj & No & 6 \\\\ pyshp & No & 9 \\\\ raster & No & 9 \\\\ rgdal & No & 4 \\\\ shapely & No & 9 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table B.17: Visibility grading results
\\begin{table}
\\begin{tabular}{l l l l} \\hline \\hline Name & Dev & Ver test data & Tools capture \\\\ & env & & exp context \\\\ \\hline DIVA-GIS & No & Sample data not for verification & None \\\\ GRASS & No & Sample data and test suite & None \\\\ gvSIG & No & Tests exist & None \\\\ QGIS & No & Sample data available and test suite available & No \\\\ SAGA-GIS & No & Tests available & None \\\\ uDig & Yes & Sample data and test suite & None \\\\ Biomapper & No & No & None \\\\ Conefor & No & Sample data not for verification & None \\\\ CROP\\_VGT & No & No & None \\\\ CyberTracker & No & Sample data not for verification & None \\\\ DesktopGarp & No & No & None \\\\ FRAGSTATS & No & Yes & None \\\\ Lifemapper & No & Sample data not for verification & None \\\\ MARXAN & No & No & No \\\\ Maxent & No & Sample data not for verification & No \\\\ openModeller & No & Sample data and test suite & None \\\\ OSSIM & No & Yes and test suite & None \\\\ Zonation & No & Sample data not for verification & None \\\\ GDAL/OGR & No & Tests & Vagrantfile \\\\ GDL & No & Test suite & None \\\\ geopy & No & Test suite & No \\\\ landsat & No & No & None \\\\ NetworkX & No & Test suite & No \\\\ NumPy & Yes & Test suite & None \\\\ PostGIS & No & Yes test suite, make check & None \\\\ pyproj & No & Test suite & None \\\\ pyshp & No & Test suite & None \\\\ raster & No & No & None \\\\ rgdal & No & Tests available & None \\\\ shapely & No\\({}^{*}\\) & Tests available & None \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 18: Reproducibility, \\({}^{*}\\)Virtual environment preferred | We present a reproducible method to analyze the state of software development practices in a given scientific domain and apply this method to Geographic Information Systems (GIS). The analysis is based on grading a set of 30 GIS products using a template of 56 questions based on 13 software qualities. The products range in scope and purpose from a complete desktop GIS systems, to stand-alone tools, to programming libraries/packages. The final ranking of the products is determined using the Analytic Hierarchy Process (AHP), a multicriteria decision making method that focuses on relative comparisons between products, rather than directly measuring qualities. The results reveal concerns regarding the correctness, maintainability, transparency and reproducibility of some GIS software. Three recommendations are presented as feedback to the GIS community: i) Ensure each project has a requirements specification document; ii) Provide a wealth of support methods, such as an IRC (Internet Relay Chat) channel, a Stack Exchange tag for new questions, or opening the issue tracker for support requests, as well as the more traditional email-based methods; and, iii) Design product websites for maximum transparency (of the development process); for open source projects, provide a developer's guide.
keywords: Geographic Information Systems, scientific computing, software engineering, software quality, review, Analytic Hierarchy Process +
Footnote β : journal: | Condense the content of the following passage. | 267 |
Nikolaos Dionelis, Nicolas Longepe
Manuscript created February, 2024. N. Dionelis and N. Longepe are with the European Space Agency (ESA), \(\Phi\)-lab, ESRIN, Italy. E-mail: [email protected]; [email protected].
## I Introduction
Confidence assessments of classification algorithms are important because it is a desirable property of models in real-world applications to know _a priori_ whether they produce an incorrect output. In this work, we focus on algorithms that take as inputs Earth Observation (EO) images and output both labels and confidence. Confidence is a metric between \(0\) and \(1\) that serves as a proxy for the probability of correct classification. Confidence assignment and assessment are performed in this paper at both the segment and pixel levels. Furthermore, confidence assignment and assessment have several important applications [1]. Here, the main application we examine is EO Foundation Models [2] and more specifically their evaluation on semantic segmentation downstream tasks [3, 4], i.e. land cover classification using satellite Sentinel-2 data [2, 3].
**Confidence metric.** The ability to assign an accurate calibrated confidence metric to every prediction output of a model is important for reliability and _trust_ [1, 5]. In real-world applications, for improved user convenience, models should output reliable predictions. Developing designated mechanisms that flag the specific outputs of the model for which the model should not be trusted, that is, the model knows _when_ it does not know [6], is crucial for models to be operational. In this way, models have the desired ability to _abstain_ in specific cases. As we will also examine in Sec. III, for instances where we simply do not know the correct classification from the available data, for example due to lack of resolution, models should be able to output "_None_ of the above" as the final semantic segmentation prediction.
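To make the abstention mechanism concrete, the following minimal sketch thresholds a per-pixel confidence map and replaces low-confidence predictions with a designated "None of the above" label; the threshold value, the abstain label, and the function name are illustrative assumptions rather than the exact rule used by CAS.

```python
import numpy as np

def abstain_on_low_confidence(pred_labels, confidence, threshold=0.5, abstain_label=-1):
    """Replace predictions whose confidence is below `threshold` with `abstain_label`.

    pred_labels: (H, W) integer class map predicted by the model.
    confidence:  (H, W) confidence map with values in [0, 1].
    """
    out = pred_labels.copy()
    out[confidence < threshold] = abstain_label  # "None of the above"
    return out

# Example usage with random placeholder data (11 WorldCover classes).
labels = np.random.randint(0, 11, size=(64, 64))
conf = np.random.rand(64, 64)
refined = abstain_on_low_confidence(labels, conf, threshold=0.5)
```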
In this paper, we propose a general methodology to detect misclassifications of models and then to refine the model, improving its performance and generalization using the detected weak points of both the _available_ data and the model. The proposed approach has wide applicability: by using our model, we are able to mitigate the negative effects of incorrect classifications, thus improving the decision making of models and _preventing_ high error rates during inference [8, 9]. Our main contributions are the proposed new confidence metric and the estimation of confidence per pixel and per segment (Sec. III). CAS detects the segments with _incorrect_ predicted labels and refines the model, improving its segmentation performance.
## II Related Work
Semantic segmentation classification tasks are important in remote sensing. Detecting true low confidence predictions in such tasks is challenging in the specific case of EO data because distribution changes often appear in practice and deep neural networks can be overconfident [5, 7]. By using a measure of confidence on the predictions, we are able to detect incorrect classifications [1, 9], as well as domain shifts. In this work, confidence assessments refer to estimating the confidence value at the _segment_ and pixel levels and identifying _true_ low confidence classified sub-segments and pixels. Given an EO data sample, the model outputs [1, 5]: (a) a prediction (i.e. the inferred class label), and (b) a measure of confidence that quantifies the performance accuracy of this prediction. When a specific prediction by the model has high confidence/ certainty, then this indicates high reliability and _trust_ for this prediction. Also, when a prediction has low confidence, the model might choose to _abstain_ from providing an answer [6, 10]. Several different examples of low confidence sample detection in EO exist [8, 10]: _(i) Geographical differences_ to achieve global mapping. Confidence assessments to identify substantial geographical differences, e.g. _forests_ in Europe and Africa and buildings in Europe and Asia, are the first step for domain adaptation. _(ii) Unseen_/ new classes, where we are particularly interested in a _specific set_ of classes, and we would like high accuracy/ confidence for these classes (e.g. for rural classes). Also, models should be able to operate in an _open_-set environment rather than a closed-set setting and predict classes together with a confidence metric [8, 11]. _(iii) Multi-sensor differences_, and _(iv) Different biomes and climate areas_.
To learn from _fewer_ labels, many Foundation Models and pre-trained models have been trained on unlabelled EO data and tested on diverse downstream tasks, but for these models, confidence assessments for semantic segmentation tasks have not been performed. Furthermore, such models do _not_ perform confidence assignment at the segment level for their classification and segmentation predictions [12, 13, 14, 16].
## III Proposed Methodology
**The proposed model CAS.**_Proposed method:_ Our model extracts features from satellite EO data, predicts the class label per pixel, and estimates a confidence metric per both pixel and segment. We find segments/ connected components, estimate pixel-wise confidence, and assign a confidence metric to each segment. CAS computes several _statistics_ for the segments and the pixels within the segments. These features are a _proxy_ for correct classification. The statistics we calculate are soft-value indicators. We use the softmax probability and compute the difference in pixel-wise probability between the _first_ and second predicted classes and the negative entropy over the predicted classes per pixel, where these three measures behave in a _similar_ manner. We combine the computed statistics, also taking into account cross-pixel correlations, into the proposed confidence metric. To effectively combine the different features which are for the segment (i.e. for _cross-pixel_ correlations rather than only pixel-wise [7]) and for the pixels within the segment (e.g., the logits), we transform the features into a _compatible_ form to have comparable values. We normalize the statistics, also using the segment boundaries, and add them.
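The pixel-wise soft-value indicators described above can be sketched as follows; this is an illustrative implementation computed from the softmax output only, and the exact normalization and combination used by CAS are not reproduced here.

```python
import numpy as np

def pixel_statistics(softmax_probs):
    """Compute per-pixel soft-value indicators from a softmax output of shape (C, H, W)."""
    top = np.max(softmax_probs, axis=0)                # probability of the predicted class
    sorted_probs = np.sort(softmax_probs, axis=0)
    margin = sorted_probs[-1] - sorted_probs[-2]       # first minus second predicted class
    # Negative entropy over the predicted classes per pixel.
    neg_entropy = np.sum(softmax_probs * np.log(softmax_probs + 1e-12), axis=0)
    return top, margin, neg_entropy
```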
Using CAS, we perform confidence assignment and assessments. Our model identifies the weak points of _both_: (i) the available data (i.e. epistemic uncertainty), as we will also examine in the next paragraphs, and (ii) of the model [1]. Using the proposed combined confidence metric, CAS detects the segments with incorrect predicted labels and _refines_ the model improving its segmentation performance and generalization. Our model CAS operates at the segment level, uses the pixels in each segment and their probabilities, and computes statistics including the _coverage_ of the pixels that have a confidence higher than \\(90\\%\\) within the segment. CAS performs segment refinement based on identified _low_ confidence sub-segments.
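A minimal sketch of the segment-level aggregation is given below, including the coverage of pixels with confidence higher than \(90\%\) within each connected component; the equal-weight combination of coverage and mean pixel confidence is a simplifying assumption for illustration.

```python
import numpy as np
from scipy import ndimage

def segment_confidence(pred_labels, pixel_conf, high_conf=0.9):
    """Assign a confidence value to every connected component of the predicted label map."""
    confidences = {}
    for c in np.unique(pred_labels):
        components, n = ndimage.label(pred_labels == c)
        for k in range(1, n + 1):
            mask = components == k
            coverage = np.mean(pixel_conf[mask] > high_conf)  # fraction of high-confidence pixels
            mean_conf = np.mean(pixel_conf[mask])
            # Equal-weight combination (an assumption of this sketch).
            confidences[(c, k)] = 0.5 * coverage + 0.5 * mean_conf
    return confidences
```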
**Flowchart diagram.** The flowchart of CAS is shown in Fig. 2. We train our model on the dataset ESA WorldCover that contains \\(11\\) classes, e.g. Tree cover. For model initialization, we use and start from the model PhilEO [2, 3]. Our model CAS is based on the _geo-aware_ pre-trained PhilEO Foundation Model which we have recently developed in-house [2]. This PhilEO Foundation Model Version 1.0 has been trained on the global _unlabelled_ dataset PhilEO Globe [3]. We start from the pre-trained _all_-spectral-bands U-Net-based _PhilEO_ model, and as a downstream task, we perform fine-tuning on the labelled dataset ESA WorldCover. CAS assigns a confidence metric to the predictions [1], _identifies_ the incorrect predicted labels, updates the model, and improves the model's performance.
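A structural sketch of the downstream fine-tuning step is given below; the dummy network stands in for the pre-trained all-spectral-bands U-Net-based PhilEO backbone, whose architecture and weights are not reproduced here, and the \(13\)-band input size is an assumption for Sentinel-2.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the pre-trained segmentation backbone (not the real PhilEO model).
class DummySegmentationModel(nn.Module):
    def __init__(self, in_channels=13, num_classes=11):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

model = DummySegmentationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a random mini-batch (stand-in for WorldCover patches).
images = torch.randn(2, 13, 64, 64)            # Sentinel-2 multi-spectral input
labels = torch.randint(0, 11, (2, 64, 64))     # 11 WorldCover classes
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```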
**Assigning a confidence metric to every prediction.**_Importance of the confidence metric:_ The available data that we have might _not_ contain features that are discernible either by visual inspection or by the model, that can be used to distinguish between classes, for example crops and grass in the semantic segmentation task of land cover classification. In such cases, the confidence metric _identifies_ the problem. Using the assigned confidence, we find instances where we simply do not know the correct classification from the available data, for example due to lack of resolution. In this work, we focus on Sentinel-2 which has \\(10\\) m resolution. For _near_ classes, e.g. Cropland and Grassland in the dataset WorldCover1 which has \\(11\\) classes in total, to effectively separate the classes, features in the data (like colour) should contain enough information to distinguish between the different classes. The _features_ in the data are, for example, the visual features (RGB) and the data/ model features (_all_ spectral bands). Because of the resolution of the data, i.e. using the available data, the model cannot find features to distinguish between the two classes Cropland and Grassland in Fig. 1, at the top middle of the input image. The assigned confidence by CAS helps to detect such cases.
Footnote 1: [http://worldcover20202.esa.int/data/docs/WorldCover_PUM_V1.1.pdf](http://worldcover20202.esa.int/data/docs/WorldCover_PUM_V1.1.pdf)
In Fig. 1, the classes Cropland and Grassland are in purple and yellow colours, respectively. The colour scheme is defined by WorldCover. In several applications, e.g. crop _yield_ estimation, confusing _crops_ with grass is an important problem. The crop yield might be overestimated if Cropland and Grassland are not accurately distinguished. From _visual_ inspection of the input image in Fig. 1, we observe that the features for the two classes are similar, i.e. green colour. There is a limitation in the information conveyed by the data: it is difficult to find features in the data to clearly distinguish between the two classes, for example at the top _middle_ of the input. In addition to the low resolution and to the fact that the measurement is from very far, i.e. image from a satellite (the _two_ Sentinel-2 satellites operate at an average altitude of \\(786\\) km), the season is also important. For crops, the acquisition time and whether this time of year was a harvest period is crucial. Misclassifications might occur due to the available data, and the confidence metric identifies and _quantifies_ these weaknesses of the data and the model.
Fig. 1: Semantic segmentation land cover classification, confidence metric estimation, and confidence assessments by the proposed model CAS on satellite Sentinel-2 data, using the dataset ESA WorldCover.
**Information conveyed by the data.** In the example in the previous paragraphs, not being able to distinguish between crops and grass using the available data is due to epistemic uncertainty. The _two_ different main sources of uncertainty are aleatoric and epistemic. Aleatoric uncertainty is statistical and is related to randomness, for example the specific sample not being a _typical_ example of the class. On the contrary, epistemic uncertainty is systematic, and it is caused by _lack_ of knowledge. Epistemic uncertainty can be reduced using additional information, while aleatoric is _irreducible_. Not being able to separate the classes Cropland and Grassland in Fig. 1(a) is an epistemic uncertainty problem because it is induced by insufficient detail in the measurement. The characteristics and unique features of each of the two land cover classes can be known using additional data and information (e.g., _in-situ_ measurements). In addition, another example of an epistemic uncertainty problem is clouds and being able to distinguish between crops, grass, and clouds, and combinations of these three classes. The _characteristics_ of each class are known and the uncertainty can be reduced by using additional data.
## IV Evaluation: Experiments and Results
We evaluate the proposed model CAS and we note that we perform confidence assignment and assessments aiming at improving the actual _segmentation_ and classification performance of the model. We perform evaluation at the segment level, taking into account semantic information, as well as evaluation at the pixel level. We perform _segment_-wise evaluation using: (a) the Intersection over Union (IoU), and (b) the correlation between the confidence for the segment and the IoU. The IoU uses the ground truth information as it is based on the _explicit_ evaluation of the result. On the contrary, the confidence for the segment does not utilize the ground truth information, that is, it is an _a priori_ estimate [5, 7]. It is not based on explicit numerical evaluation of the result. The _correlation_ between the confidence for the segment and the IoU shows the extent to which the confidence for the connected component is a proxy for the IoU [15, 13]. In the _ideal_ case, the correlation is equal to one as a high confidence value for the predicted segment means that the segmentation is correct and the IoU is high.
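For concreteness, a minimal sketch of the two segment-wise quantities, the IoU of a predicted segment and the correlation between segment confidence and IoU, is given below; the pairing of each predicted segment with its ground-truth mask is simplified here and the values in the example are placeholders.

```python
import numpy as np

def segment_iou(segment_mask, gt_mask):
    """IoU between a predicted segment and the ground-truth mask of the same class."""
    inter = np.logical_and(segment_mask, gt_mask).sum()
    union = np.logical_or(segment_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def confidence_iou_correlation(confidences, ious):
    """Pearson correlation between per-segment confidence and per-segment IoU."""
    return np.corrcoef(np.asarray(confidences), np.asarray(ious))[0, 1]

# Example with placeholder values.
print(confidence_iou_correlation([0.9, 0.7, 0.4], [0.85, 0.6, 0.3]))
```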
We also perform evaluation at the _pixel_ level and compute histograms of the confidence scores for both the correct classifications and the misclassifications. We calculate distribution distances to assess the _separability_ of the incorrect and the correct classifications using the assigned confidence metric.
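A minimal sketch of the pixel-level assessment is given below; the 1-D Wasserstein distance is used as one possible distribution distance, which is an assumption of this sketch rather than a choice stated above.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def confidence_separability(confidence, pred_labels, gt_labels, bins=20):
    """Histograms of confidence for correct vs. incorrect pixels and a distance between them."""
    correct = confidence[pred_labels == gt_labels]
    wrong = confidence[pred_labels != gt_labels]
    hist_correct, _ = np.histogram(correct, bins=bins, range=(0, 1), density=True)
    hist_wrong, _ = np.histogram(wrong, bins=bins, range=(0, 1), density=True)
    return hist_correct, hist_wrong, wasserstein_distance(correct, wrong)
```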
**Qualitative evaluation.** We test the proposed model CAS on several samples from the dataset ESA WorldCover. For this, we show the following six _images_: (a) Input, (b) Prediction, (c) Ground truth, (d) Correct classifications, (e) Misclassifications, (f) Assigned confidence by CAS, in Fig. 1 and Figs. 3-6.
**Evaluation of CAS at the segment level.**_Comparing CAS to the base model:_ The proposed model CAS achieves an IoU of \\(74.632\\%\\) in Table I, while the base model used yields an IoU of \\(64.282\\%\\). The base model does not use confidence assignment and thus does not perform segment refinement based on _low confidence_ sub-segments. The percentage improvement of our model CAS compared to the base model is \\(16.101\\%\\).
_Comparing our model to other baseline models:_ We compare CAS to the aggregated dispersion measures model from [13], which yields an IoU of \\(69.565\\%\\). The percentage improvement of CAS compared to this model is \\(7.284\\%\\) in Table I.

Fig. 2: Flowchart diagram of the proposed model CAS that performs confidence-aware segmentation, where we assign a confidence metric to predictions, identify wrongly predicted labels, and refine the model.

Fig. 3: Semantic segmentation, confidence estimation, and confidence assessments by our model CAS described in Sec. III in the subsection "The proposed model CAS" on Sentinel-2 data using WorldCover.

Fig. 4: Semantic segmentation and confidence assignment and assessments by CAS on Sentinel-2 multi-spectral data using WorldCover.
**Sensitivity analysis for CAS.** When the coverage of the pixels with \\(>80\\%\\) softmax probability in the segment is used instead of the coverage of the \\(>90\\%\\) probability pixels in Sec. III, the IoU is \\(73.316\\%\\) in Table II. When the coverage of the \\(>70\\%\\) probability pixels is used, the IoU is \\(73.039\\%\\).
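The coverage quantity varied in this sensitivity analysis can be computed as the fraction of pixels in a segment whose top softmax probability exceeds the chosen threshold. The sketch below shows only this ingredient; the full CAS confidence combines it with further terms defined in Sec. III (not reproduced here), so this is an illustrative assumption rather than the complete metric.

```python
import numpy as np

def coverage_confidence(softmax_probs, segment_mask, threshold=0.90):
    """Fraction of pixels in the segment whose top softmax probability exceeds
    the threshold; softmax_probs has shape (classes, H, W), segment_mask (H, W)."""
    top_prob = softmax_probs.max(axis=0)          # per-pixel maximum class probability
    return float((top_prob[segment_mask] > threshold).mean())

# sensitivity over thresholds, as in the analysis above
# for thr in (0.70, 0.80, 0.90):
#     print(thr, coverage_confidence(probs, mask, thr))
```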
**Correlation metric.** As further _segment_-wise evaluation of CAS, we compute the correlation between: (i) the confidence for the segment, and (ii) the IoU. CAS achieves the correlation coefficient of \\(60.529\\%\\) in Table I. The correlation when the softmax probability is used on its own, averaged over the segment, i.e. the mean over the _interior_ of the segment without its boundary, is \\(35.735\\%\\). CAS _improves_ the correlation between the confidence for the segment and the IoU, and the percentage improvement is \\(69.383\\%\\). In addition, as an _ablation_ study for CAS, when the median is used over the segment instead of the average, then the correlation is \\(57.963\\%\\) in Table II.
**Correlation sensitivity analysis for CAS.** When the coverage of the pixels with \\(>80\\%\\) (or \\(>70\\%\\) respectively) softmax probability in the segment is used instead of the coverage of the \\(>90\\%\\) probability pixels, as well as when the mean is used over the segment, then the _correlation_ is \\(63.124\\%\\) (or \\(62.645\\%\\) respectively) in Table II. Furthermore, as an ablation study for CAS, when the final refinement to improve the segmentation performance of the model is _not_ performed, when confidence estimation is performed, then the correlation is \\(46.778\\%\\). CAS improves the segmentation performance of the model in the correlation coefficient metric, when compared to the ablation study, and the percentage _improvement_ here is \\(29.396\\%\\).
**The results of our model for Fig. 1.** We _qualitatively_ and numerically evaluate our model, and for the performance of the proposed model CAS in Fig. 1, the IoU is \\(88.593\\%\\) in Table III. In addition, the correlation between the confidence for the segment and the IoU is \\(84.509\\%\\). Comparing our proposed combined confidence metric with using only the softmax output probability, the _IoU_ achieved by the latter is \\(68.895\\%\\). Here, the percentage improvement is \\(28.591\\%\\). When using only the softmax output probability, the _correlation_ between the confidence for the segment and the IoU is \\(69.050\\%\\). As an ablation study for our model, when the gradient is _not_ used, the IoU is \\(70.976\\%\\). When the median over the segment is used instead of the mean, the correlation coefficient is \\(85.192\\%\\).
**Comparing the proposed model CAS with other baseline models.** We now evaluate our model and compare the results we obtain with the results obtained by other models for Fig. 1. The aggregated dispersion measures model from [13] achieves an IoU of \\(78.726\\%\\) in Table III. The percentage improvement of CAS with respect to this model is \\(12.533\\%\\). Therefore, we have evaluated our model at the segment level and, in the next paragraphs, we will perform the evaluation at the pixel level.

Fig. 5: Land cover classification and confidence assessments by the proposed model CAS on Sentinel-2 L2A data using ESA WorldCover.

Fig. 6: Semantic segmentation classification and both confidence assignment and assessments by our model CAS on Sentinel-2 data.
**Evaluation of CAS at the pixel level.** We now evaluate our model at the _pixel_ level and assess the assigned confidence for all the examined images. We examine the histograms and the distribution of the scores in Fig. 7. Also, in Table IV, for the separability of the correct and _incorrect_ classifications, we calculate the Kullback-Leibler (KL) and Jensen-Shannon (JS) \\(f\\)-divergences, the Wasserstein distance distribution metric, and the threshold-independent evaluation metric Area Under the Receiver Operating Characteristics Curve (AUROC). In Fig. 7(a), the histogram of our model CAS has _two_ peaks at \\(0\\) and \\(1\\) for the misclassifications and the correct classifications, respectively, and this is desirable. The Wasserstein distance is \\(13.524\\), while the JS divergence is \\(2.805\\). The KL divergence is \\(2.580\\) (also \\(3.029\\) as the KL \\(f\\)-divergence is non-symmetric) and the AUROC is \\(0.901\\). Moreover, the overlap area percentage is \\(27.440\\%\\), while the Euclidean distance is \\(11.987\\).
CAS outperforms the other methods, such as the model in Fig. 7(b), where the softmax output probability is used on its own. For the latter, the Wasserstein distance is \\(5.071\\) in Table IV. The JS divergence is \\(0.566\\), the KL divergence \\(0.475\\) (also \\(0.658\\) since KL is not symmetric), and the AUROC \\(0.777\\). Also, the overlap area percentage is \\(33.584\\%\\), while the Euclidean distance is \\(9.425\\). The percentage improvement of CAS compared to when only the softmax probability is used is \\(15.959\\%\\) for the AUROC in Table IV, and \\(27.183\\%\\) for the Euclidean distance. We observe that all the evaluation metrics for CAS show improved performance compared to the other models. Furthermore, as an ablation study, in Table IV, the model CAS outperforms the model in Fig. 7(c), which does not use segment boundaries.
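A minimal sketch of how such separability metrics can be computed from the confidence scores of correct and misclassified pixels is given below. The histogram binning, the probability normalization and the small constant added to avoid division by zero are our own choices, so the resulting numbers will not exactly reproduce the values reported above.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from scipy.spatial.distance import jensenshannon
from sklearn.metrics import roc_auc_score

def separability_metrics(conf_correct, conf_wrong, bins=50):
    """Distribution distances between the confidence scores of correct and
    misclassified pixels (1-D arrays); larger values mean better separability."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(conf_correct, bins=edges)
    q, _ = np.histogram(conf_wrong, bins=edges)
    p = (p + 1e-12) / (p + 1e-12).sum()            # normalized histograms
    q = (q + 1e-12) / (q + 1e-12).sum()
    labels = np.r_[np.ones(len(conf_correct)), np.zeros(len(conf_wrong))]
    scores = np.r_[conf_correct, conf_wrong]
    return {
        "KL(p||q)": float(entropy(p, q)),
        "KL(q||p)": float(entropy(q, p)),          # KL is not symmetric
        "JS": float(jensenshannon(p, q) ** 2),     # squared JS distance = JS divergence
        "Wasserstein": float(wasserstein_distance(conf_correct, conf_wrong)),
        "AUROC": float(roc_auc_score(labels, scores)),
    }
```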
## V Conclusion
We have proposed the model CAS for confidence assignment and assessments for semantic segmentation classification tasks. CAS takes as input satellite Sentinel-2 multi-spectral data, computes confidence, and improves the segmentation performance of models. The evaluation for the task of land cover classification on WorldCover shows that CAS outperforms other baseline models in the IoU, correlation, JS divergence, AUROC, and Wasserstein distance metrics in Tables I and IV. CAS in Fig. 7(a) has two peaks at \\(0\\) and \\(1\\) and this is desirable. As future work, we will use the results for noisy labels mitigation to detect incorrect class labels in EO datasets.
## References
* [1] _Learning from EO data to understand our planet: Recommendations_, Slide 19, 2021. [https://azeg59835.wx.msech.net/event/vasirusetuprod/production-nikal-public/bb8484245642.aad4fe5c4893aa91](https://azeg59835.wx.msech.net/event/vasirusetuprod/production-nikal-public/bb8484245642.aad4fe5c4893aa91)
* [2] C. Fibaek, L. Camilleri, A. Luyts, N. Dionelis, and B. Le Saux, _PhilEO Bench: Evaluating Geo-Spatial Foundation Models_, IGARSS, 2024.
* [3] B. Le Saux, C. Fibaek, L. Camilleri, A. Luyts, N. Dionelis, et al., _The PhilEO Geospatial Foundation Model Suite_, EGU, 2024. [http://meetingorganizer.copernicus.org/EGU24/EGU24-17934.html](http://meetingorganizer.copernicus.org/EGU24/EGU24-17934.html)
* [4] J. Jakubik, S. Roy, et al., _Foundation Models for Generalist Geospatial Artificial Intelligence_, arXiv:2310.18660, 2023.
* [5] K. Kahl, C. Luth, et al., _VALUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation_, In ICLR, 2024.
* [6] P. de Jorge, et al., _Reliability in Semantic Segmentation_, In CVPR, 2023.
* [7] M. Rottmann, et al., _Prediction error meta classification in semantic segmentation: Detection via aggregated dispersion measures of softmax probabilities_, In Proc. IJCNN, 2020.
* [8] J. Gawlikowski, et al., _An advanced Dirichlet prior network for Out-of-Distribution detection in remote sensing_, IEEE TGRS, 2022.
* [9] Terrance DeVries and Graham W. Taylor, _Learning Confidence for Out-of-Distribution Detection in Neural Networks_, arXiv:1802.04865, 2018.
* [10] J. Kuchler, et al., _Uncertainty estimates for semantic segmentation: Providing enhanced reliability for automation_, arXiv:2401.09245, 2024.
* [11] G. Di Biase, H. Blum, et al., _Pixel-wise Anomaly Detection in Complex Driving Scenes_, In Proc. CVPR, 2021.
* [12] M. Rottmann and M. Schubert, _Uncertainty Measures and Prediction Quality Rating for the Semantic Segmentation_, CVPR Workshop, 2019.
* [13] R. Chan, et al., _Entropy Maximization and Meta Classification for Out-of-Distribution Detection in Semantic Segmentation_, In Proc. ICCV, 2021.
* [14] S. Rai, F. Cermelli, et al., _Unmasking Anomalies in Road-Scene Segmentation_, In Proc. ICCV, 2023.
* [15] M. Rottmann and M. Reese, _Automated detection of label errors in semantic segmentation datasets via deep learning and uncertainty quantification_, In Proc. WACV, 3214-3223, 2023.
* [16] D. Hendrycks and K. Gimpel, _A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks_, In ICLR, 2017.
* [17] A. Lacoste, N. Lehmann, et al., _GEO-Bench: Toward Foundation Models for Earth Monitoring_, arXiv:2306.03831, 2023.
Fig. 7: Histogram plots for semantic segmentation land cover classification on Sentinel-2 L2A multi-spectral data using ESA WorldCover. Here, the aim is to effectively separate misclassifications and correct classifications, and the horizontal axis in the plots is the confidence.
# Yield forecasting based on short time series with high spatial resolution data
**Sayli Pokal**
_University of Nebraska-Lincoln, Nebraska, USA_
Yuzhen Zhou
_University of Nebraska-Lincoln, Nebraska, USA_
Trenton Franz
_University of Nebraska-Lincoln, Nebraska, USA_
_Corresponding author email: [email protected]_
## 1 Introduction
Precision agriculture, also known as site-specific crop management, plays a crucial role in modern agriculture (Pedersen et al., 2017). It involves using spatio-temporal data to identify variability within a field and adjust crop treatments accordingly. Recent advancements in technology, such as yield monitors and Global Positioning Systems (GPS), have made it possible to collect yield data at geo-referenced points and create yield maps that visualize the variability within the field. However, yield maps can differ significantly from year to year due to factors such as weather, crop diseases, and crop management techniques. Predicting future yields is, therefore, challenging and requires analyzing years of yield maps, soil data, weather patterns, and crop management information.
Accurate prediction of crop yield is pivotal in aiding farmers and policymakers to make well-informed decisions regarding soil, crop management, marketing, and storage. Two common approaches for estimating crop yield include implementing process-based crop simulation models and statistical analysis of spatio-temporal data sets. Crop simulation models are mechanistic models that consider the crop's physiological characteristics and various environmental factors (Wang et al., 2002; Holzworth et al., 2014; Basso et al., 2013). However, these models can be challenging to implement as they require a large number of parameters and input variables that are not always readily available. On the other hand, statistical models rely on past observational data to identify relationships and patterns in the data that can be applied to predict future crop yields. Although statistical models do not directly incorporate plant growth mechanisms, they can be useful in forecasting crop yield.
There are several statistical models available for forecasting yield at a large scale (regional or state level), as discussed in previous studies (e.g., Newlands et al., 2014; Bussay et al., 2015; Paudel et al., 2021). However, only a few approaches are available for forecasting yield for an individual farm, i.e., site-specific yield forecasting. For any given year, Drummond et al. (2003) predict the yield within a site using methods such as stepwise multiple linear regression, projection pursuit regression, and neural networks. Other techniques, including Bayesian networks, regression trees, and artificial neural networks (ANN), have also been used for site-specific yield prediction. However, predicting site-specific yield for future years is more challenging than predicting yield within an individual site year.
The most common approaches for site-specific yield forecasting include fitting linear regression models or spatial econometric models, as demonstrated by (Peralta et al., 2016, Anselin et al., 2004, Schwalbert et al., 2018). Anselin et al. (2004) used yield monitor data to obtain site-specific yield forecasts by implementing a spatial econometric model. Peralta et al. (2016) and Schwalbert et al. (2018) used high-resolution satellite imagery data at the mid-growing season to identify within-field variability and used ordinary least square regression and spatial econometric models to forecast site-specific yield at the end of the season. Lambert et al. (2004) compared four spatial regression models that incorporate spatial correlation in the economic analysis of variable rate technology. Li et al. (2016) estimated site-specific crop yield response functions using varying coefficient models and developed a decision system that provides input prescriptions for producers. The existing methods for forecasting site-specific yield are either based on satellite imagery data or data with many time points. However, not much literature exists for forecasting yield based on historical yield maps for a short time series and high dimensional spatial data.
In this paper, we aim to address this gap in the field and develop a new approach for site-specific yield forecasting based on historical yield maps. Our study was motivated by a maize yield data set from a field in Mead, Nebraska, consisting of 7 years of historical yield maps. Each yield map has a spatial resolution of \\(10m\\), and the field size is \\(800m\\times 800m\\). Our goal is to estimate yield for the future growing season and predict the spatial pattern of yield distribution in the field, i.e., obtain yield maps. However, forecasting yield with the data at hand is challenging due to the short time series and noise in the high-resolution data. Existing methods for site-specific yield forecasting are not designed to handle such short-time series data with only 7 time points; hence, a new approach is required.
The spatially varying auto-regressive (SVAR) model employed by Shand et al. (2018) is a model with the potential to handle short-time series spatial data. Shand et al. (2018) used the model to predict HIV diagnosis rates in US states based on county-level HIV data, where the data was abundant in space but included only a few time points. Instead of creating a time series model for each county, the SVAR model jointly modeled the parameters for each county using a Gaussian copula. The SVAR model uses a spatial dependence structure that combines information from its neighborhood time series. The neighbors essentially act as \"replicates\", making the model's forecasts more reliable compared to the single time series approach.
However, the SVAR model was developed for county-level or lattice spatial data and cannot be directly applied to the yield data, which is continuous in nature. Due to the high dimensionality of the yield data, directly implementing the SVAR model on the yield data would be computationally expensive, with too many parameters to estimate. Additionally, the high-resolution data tends to be noisy and may result in lower prediction accuracy if fed directly into the model. To make the model forecasts reliable and the number of parameters reasonable, there is a need to reduce the dimension of the data and the noise. A common approach to address this issue in spatial statistics is to divide the field into blocks and aggregate the data within the block. However, this approach works well only for homogeneous spatial fields. For an inhomogeneous field, the blocking approach would eliminate the fine patterns in the data, leading to inaccurate yield predictions.
In this paper, we propose a novel two-stage approach for site-specific yield forecasting based on short-time series and high-resolution spatial data that addresses the above issues. In the first stage, we develop a clustering approach for dimension reduction and noise reduction, which retains the fine pattern of the spatial field. In the second stage, we apply a modified version of the SVAR model to obtain yield forecasts for the future growing season. Implementing the proposed method at three different sites in Nebraska, we demonstrate that our method provides finer resolution and more accurate yield maps than the existing approach. The proposed method can thus help implement more effective site-specific management strategies.
The rest of the paper is organized as follows: Section 2 describes the data and site. The details of the proposed methods are presented in Section 3. In Section 4, we implement the proposed model with maize yield data and compare it with existing models. Section 5 concludes the study. Technical details and results for two other independent sites are included in the Appendix.
## 2 Data and Site Description
The data were collected at three different sites in Nebraska, namely Mead, Brule, and Site 6, located on the University of Nebraska research and extension farms. The size of each field was approximately \\(64ha\\) (\\(800m\\times 800m\\)). Historical yield maps for at least seven years were available for each site, along with corresponding hydro-geophysical maps. Yield data was available for every \\(100m^{2}\\) of the field, with historical yield maps having a spatial resolution of \\(10m\\). Spatial maps of shallow and deep electrical conductivity (EC) and soil water content (SWC) with a \\(10m\\) resolution were also available for each field, based on the hydro-geophysical surveys conducted in the field. EC and SWC geophysical maps were pre-processed using empirical orthogonal function (EOF) analysis. The EOF analysis decomposes the observed SWC and EC variability measured by the hydro-geophysical surveys into a set of orthogonal spatial patterns (EOFs), which are invariant in time, and a set of time series expansion coefficients (ECs) which are invariant in space (Perry and Niemann, 2007). Using EOFs helps reduce individual survey noise and instrument error while preserving the dominant geophysical spatial patterns. For yield forecasting analysis, the first EOFs of the SWC and EC geophysical maps were used as predictors, along with relative elevations of the site. Complete details of the multivariate statistical EOF analysis can be found in (Perry and Niemann, 2007) and Korres et al. (2010). See Franz et al. (2020) for a complete description of the crop yield and geophysical datasets.
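For readers unfamiliar with the EOF decomposition, the following sketch illustrates the idea with a plain singular value decomposition of the survey-by-location anomaly matrix. It is a simplified stand-in for the multivariate EOF processing of Perry and Niemann (2007); the variable names are illustrative.

```python
import numpy as np

def eof_decomposition(surveys):
    """surveys: array of shape (n_surveys, n_locations), e.g. repeated SWC or EC
    maps flattened in space. Returns spatial patterns (EOFs), expansion
    coefficients (ECs) and the fraction of variance explained by each pattern."""
    anomalies = surveys - surveys.mean(axis=0, keepdims=True)  # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt                    # rows: spatial patterns, invariant in time
    ecs = U * s                  # columns: expansion coefficients, invariant in space
    explained = s**2 / np.sum(s**2)
    return eofs, ecs, explained

# the first EOF of, e.g., the Deep EC surveys, reshaped to the 10 m grid,
# would then serve as a spatial covariate in the yield models:
# eof1_map = eof_decomposition(ec_surveys)[0][0].reshape(field_shape)
```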
Crop rotation was done at the Mead site, where maize and soybean are grown in alternate years. Maize was planted at Brule and Site 6 during all years. Planting occurred between late April and early May, depending on the field and weather conditions. Irrigation was provided as required, starting mid-June through early September. Irrigation, herbicide, pesticide, and other crop management practices followed the standard best management practices prescribed for production-scale maize systems. However, information on these technicalities was not available for all sites and years.
Weather information for each site was collected from the nearest Nebraska Mesonet Station within a \\(20km\\) radius. Data was collected on rainfall totals (RT) and potential evaporation (PET) for each growing season, May-June (vegetation growth), July-August (reproduction/grain filling), and seasonal totals (May-September). Rainfall total is known to be a good predictor of average annual yield. PET provides information about the crop water demand, accounting for factors such as temperature, wind speed, and solar radiation. Information on the average storm depth (SD) and average inter-storm arrival rates (SA) is also included for each year. Table 1 provides details of each site.
## 3 Methods
The goal of this paper is to obtain one-year ahead forecasts for site-specific maize yield and to obtain yield maps of the field. We propose a model for forecasting yield at an individual farm level using information such as geophysical variables (soil water content and EC), weather conditions, relative elevation, and historical yield data of the field. The challenge with obtaining forecasts for the data at hand is that we have short time series data that is noisy and high dimensional. Implementing forecasting models directly on the data is not feasible and does not provide good forecasts. We thus propose a novel two-stage approach to obtain forecasts for a short time series and high-resolution spatial data. First, we develop a clustering approach for data aggregation. Second, we implement a modified version of the SVAR forecasting model for obtaining yield forecasts for the future growing season.
\\begin{table}
\\begin{tabular}{l l l l} \\hline \\hline
**Study Site** & **Landuse** & **Mesonet Station** & **Crop yield years** \\\\ \\hline
Mead & Rainfed maize and soybean rotation & Ithaca 3E & 2001 - 2017 \\\\
Brule & Irrigated maize & Big Springs & 2010 - 2016 \\\\
Site 6 & Irrigated maize & Ithaca 3E & 2001 - 2017 \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 1: Study Site Description
### Data aggregation
#### 3.1.1 Blocking approach
The idea behind spatial data aggregation is that two points that are close to each other in space are correlated. Aggregating these data points will help reduce noise and dimension while retaining helpful information. One way to perform spatial data aggregation is to divide the entire field into equal-sized blocks and average all the observations within each block. Averaging the observations within a block results in a single value corresponding to each block in the field. This method is referred to as data aggregation by blocking.
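A minimal sketch of this block averaging is shown below, assuming a square field stored as a 2-D array of 10 m cells that divides evenly into the chosen number of blocks.

```python
import numpy as np

def block_average(field, n_blocks_per_side):
    """Average a square field (2-D array) over equal-sized blocks, e.g. an
    80 x 80 grid of 10 m cells with n_blocks_per_side=8 gives 64 block values."""
    b = field.shape[0] // n_blocks_per_side                    # cells per block side
    trimmed = field[: b * n_blocks_per_side, : b * n_blocks_per_side]
    blocks = trimmed.reshape(n_blocks_per_side, b, n_blocks_per_side, b)
    return blocks.mean(axis=(1, 3))                            # one value per block
```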
Aggregating data within a region is helpful if the observations are homogeneous. However, the observations within a given region may not be homogeneous with respect to all the covariates. Looking at Figure 1, we observe that the geophysical maps for each covariate show different spatial patterns. If we define homogeneous regions within the field based on relative elevation, the corresponding regions for Deep EC may not contain homogeneous observations. In this situation, blocking will lead to information loss for Deep EC.
Suppose we divide the field into 64 equal-sized blocks and aggregate the data within each block. The observations within a block will not necessarily be homogeneous for all variables. Aggregating the data will not be efficient as it will eliminate the fine patterns in the field. Thus, data aggregation by blocking may lead to the loss of useful information. If we divide the field into smaller blocks, we might not achieve significant noise reduction, and we will also require the estimation of a larger number of model parameters. Therefore, there is a trade-off between the number of parameters to be estimated and avoiding over-aggregation.
#### 3.1.2 Clustering approach
In the case of an inhomogeneous field, an approach for reducing noise would be to identify similar observations and aggregate these similar observations, leading us to the clustering approach. Grouping observations that are similar and aggregating over these groups will reduce the noise in the data. It will also help identify patterns in the field, such as regions corresponding to low and high yields. The clustering approach will help retain the fine patterns in the EC map, whereas the blocking approach will not retain these fine patterns in the data. Before implementing clustering, we need to specify the variables used for clustering and the number of clusters.
#### Variables used for clustering
Ideally, we want to create groups that identify patterns in the response variable, i.e., the yield, as it will help identify within-field variability in the yield data. To achieve this, we need to identify variables correlated to the response and use these variables to perform clustering. It has been shown that there is a correlation between soil EC and yield, where lower values of soil EC are correlated to lower yield values (Eyinla and Oladunjoye, 2014). We also observed from the historical yield data that the future growing season yield is correlated to the current yield. Hence, historical yield and EC should be good predictors of within-field variability. The distribution of soil EC may stay the same over the years. However, the distribution of yield may be different from year to year. In practice, we suggest using variables correlated to the response or historically known to be good predictors of the response as clustering variables.
#### Number of clusters
Since the purpose behind clustering is to find homogeneous groups of observations in the data that help predict the response, these clusters need not represent the actual clusters in the data. The number of clusters will depend on what seems reasonable for the data set. Using a large number of clusters may not help with noise reduction and will require estimating a large number of parameters. In contrast, a small number of clusters may result in over-aggregation of the data and loss of useful information. We need to select a number of clusters that leads to a reasonable number of parameters to be estimated while retaining valuable information for forecasting yield. One can consider cluster validity statistics, such as a scree plot, as a starting point for selecting the number of clusters. We suggest trying a range of values from small to large and selecting the value that provides good forecasts.

Figure 1: Maps of the spatial covariates at the site in Mead, A: Map of Deep EC, B: Map of Soil Water Content, C: Map of Relative Elevation. Universal Transverse Mercator (UTM) is a map projection system with units of meters.
#### Implementation
We implemented k-means clustering using Deep EC and the current year's yield as clustering variables. Initially, we tried several models using different combinations of covariates and historical yield to determine the clustering variables that result in the best forecasting performance. We found that the model with Deep EC and the current year's yield gave the best forecasting results, which supported our intuition behind using these variables to perform clustering. We performed clustering for cluster sizes of 25 and 64, which allowed us to compare the forecasting performance of the models using clustering versus blocking as a data aggregation method. Once we obtained the blocks/clusters for the field, data aggregation was performed by taking the arithmetic average of all the observations within each block/cluster. To reduce the skewness in the yield data, we log-transformed the yield and then aggregated the data within each cluster. As a result, we end up with one value per block/cluster for the log-transformed yield and the covariates.
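The sketch below illustrates this clustering-based aggregation with k-means from scikit-learn. Standardizing the two clustering variables is our own choice, and the variable names and the default of 25 clusters are illustrative; this is not the exact code used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_and_aggregate(deep_ec, current_yield, log_yield, n_clusters=25, seed=0):
    """Cluster the grid points on (Deep EC, current-year yield) and return the
    cluster labels together with the mean log-transformed yield per cluster,
    i.e. the aggregated response passed to the forecasting model."""
    X = StandardScaler().fit_transform(np.column_stack([deep_ec, current_yield]))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    agg_log_yield = np.array([log_yield[labels == k].mean() for k in range(n_clusters)])
    return labels, agg_log_yield
```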
### The forecasting model
#### 3.2.1 Trend analysis
The maize yields show an increasing trend over time. The increase in yield over the years is primarily due to the advancement in technology and the use of hybrid seeds. The trend can be significantly affected by climatic conditions such as drought or individual storms (e.g., damage from wind or hail). Thus, weather information needs to be incorporated while fitting the trend model. We fitted a linear trend model for our data, considering the average log-transformed yield within the field as the response. We included predictors such as year, rainfall total (RT), potential ET (PET), storm depth rate (SD), and storm arrival rate (SA) for the season of July through August. Model selection was performed to select the best model for each site corresponding to the lowest mean squared error. The trend model selected for each site is summarized in the table below.
The fitted trend model was then used to de-trend the yield at each site, and normalized yield data was obtained by subtracting the average log-transformed yield from the aggregated data. Validating the trend model is challenging in the case of short time series. We suggest adopting a domain expert's knowledge of the topic or using trend models established in the past.
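As an illustration, the Mead specification in Table 2 can be fitted by ordinary least squares on the yearly field averages, for instance with statsmodels; the data frame and column names below are illustrative assumptions.

```python
import statsmodels.formula.api as smf

# one row per growing season: field-average log yield and July-August weather
# (columns assumed: mean_log_yield, year, RT, PET, SD)
def fit_trend(df):
    """Linear trend model for the Mead site, log(yield) ~ year + RT + PET + SD."""
    return smf.ols("mean_log_yield ~ year + RT + PET + SD", data=df).fit()

# de-trending: the normalized yield is the aggregated log yield minus the
# fitted field-average trend of the corresponding year
# trend = fit_trend(df).fittedvalues.values
# Z = aggregated_log_yield - trend[:, None]     # rows: years, columns: clusters
```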
#### 3.2.2 Bayesian forecasting model
We implemented a modified version of the SVAR model to obtain forecasts for the short time series. Let \\(Y_{i,t}\\) and \\(Z_{i,t}\\) denote the aggregated yield and the normalized yield, respectively, for the \\(i\\)th cluster (\\(i=1,2,\\ldots,n\\)) at time \\(t\\) (\\(t=1,2,\\ldots,T\\)). We model \\(Z_{i,t}\\) using a Bayesian hierarchical model as follows,
\\[Z_{i,t}=\\rho_{i}Z_{i,t-1}+\\epsilon_{i,t}, \\tag{1}\\]
where \\(\\epsilon_{i,t}\\stackrel{{ i.i.d.}}{{\\sim}}N(0,\\sigma^{2})\\), and \\(\\rho_{i}\\in(-1,1)\\), the AR(1) coefficient of the \\(i\\)th cluster. The prior of \\(\\mathbf{\\rho}=(\\rho_{1}, ,\\rho_{n})^{\\top}\\) is modeled using a Gaussian copula with covariance matrix \\(\\mathbf{\\Omega}\\). The copula approach allows modeling the dependence structure among \\(\\rho_{i}\\)s while providing the flexibility of choosing appropriate marginal distributions for each \\(\\rho_{i}\\). Hence, the estimation and inference of any single auto-correlation coefficient \\(\\rho_{i}\\) become more accurate and stable by combining the information from its neighborhood time series.
\\begin{table}
\\begin{tabular}{l l} \\hline \\hline
**Site** & **Trend model** \\\\ \\hline
Mead & \\(log(yield)=year+RT+PET+SD\\) \\\\
Brule & \\(log(yield)=year+RT+PET\\) \\\\
Site 6 & \\(log(yield)=year+RT+PET\\) \\\\ \\hline
\\end{tabular}
\\end{table}
Table 2: Trend model

In Shand et al. (2018), the covariance matrix \\(\\mathbf{\\Omega}\\) is modeled using the CAR model given by Leroux et al. (2000) with variance \\(\\tau_{\\rho}^{2}\\) and spatial correlation parameter \\(\\lambda_{\\rho}\\),
\\[\\mathbf{\\Omega}=\\tau_{\\rho}^{2}(1-\\lambda_{\\rho}\\mathbf{I}+\\lambda_{\\rho} \\mathbf{R}), \\tag{2}\\]
where \\(\\mathbf{R}\\) denotes the neighborhood matrix, the \\(i\\)th diagonal element of \\(\\mathbf{R}\\) represents the total number of neighbors for the group \\(i\\), and the \\((i,j)\\)th off-diagonal element is \\(-1\\) if \\(i\\) and \\(j\\) are neighbors and \\(0\\) otherwise.
When blocking is used for data aggregation, the neighborhood structure of the data is preserved. The blocks that share edges or nodes are considered neighbors. Typically, a block in the center of the field will have eight neighbors, and those on the field's border will have three to five neighbors. The SVAR model can be implemented using this neighborhood structure.
However, the neighborhood structure is not preserved when clustering is used for data aggregation. To model the dependence among \\(\\rho_{i}\\)s, we need to redefine the \"neighbors\" of a given cluster based on the cluster similarities rather than by spatial locations.
We used the average cluster separation matrix to define cluster neighbors. Average cluster separation is defined as the matrix of mean dissimilarities between points of every pair of clusters; it is the same as the dissimilarity matrix obtained using average linkage in hierarchical clustering Nielsen (2016). For \\(i=1,\\ldots,n\\), let \\(C_{i}=\\{x_{i1},x_{i2},\\ldots,x_{im_{i}}\\}\\) be the \\(i\\)th cluster, where \\(x_{ij}\\) is the \\(j\\)th object in the cluster and \\(m_{i}\\) is the size of the cluster. The \\((i,j)\\)th entry of the average cluster separation matrix is defined by the average distance of all pairs of objects in these two clusters,

\\[D(C_{i},C_{j})=\\frac{1}{m_{i}m_{j}}\\sum_{k=1}^{m_{i}}\\sum_{\\ell=1}^{m_{j}}d(x_{ik},x_{j\\ell}),\\]
where \\(d(\\cdot,\\cdot)\\) is the Euclidean distance.
Based on the concept of the \\(\\epsilon\\)-neighborhood graph (von Luxburg, 2007), we considered all clusters whose pairwise distances are smaller than \\(\\epsilon\\) as neighbors. Specifically, the clusters \\(C_{i}\\) and \\(C_{j}\\) are neighbors only if \\(D(C_{i},C_{j})<\\epsilon\\). Thus, the neighborhood matrix \\(\\mathbf{R}\\) under the clustering approach is well defined.
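A minimal sketch of the average cluster separation matrix and of the resulting neighborhood matrix \\(\\mathbf{R}\\) (diagonal: number of neighbors; off-diagonal: \\(-1\\) for neighbors, \\(0\\) otherwise) is given below; it assumes the clustering variables are stored as a points-by-features array.

```python
import numpy as np
from scipy.spatial.distance import cdist

def epsilon_neighborhood(points, labels, n_clusters, eps):
    """Average separation D(C_i, C_j) as the mean pairwise Euclidean distance
    between two clusters, and the neighborhood matrix R used in the SVAR prior."""
    D = np.zeros((n_clusters, n_clusters))
    for i in range(n_clusters):
        for j in range(i + 1, n_clusters):
            d = cdist(points[labels == i], points[labels == j]).mean()
            D[i, j] = D[j, i] = d
    A = (D < eps) & ~np.eye(n_clusters, dtype=bool)   # adjacency: neighbors if D < eps
    R = np.diag(A.sum(axis=1)) - A.astype(float)      # diagonal: #neighbors, off-diag: -1
    return D, R
```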
The model parameters were estimated using the MCMC algorithm. The prediction for normalized yield \\(Z_{i,t}\\) was obtained by sampling from the posterior predictive distribution using forward sampling. For each iteration, we have
\\[\\hat{Z}_{i,t}=\\hat{\\rho}_{i}Z_{i,t-1}+\\hat{e}_{i,t}. \\tag{3}\\]
For more details regarding the implementation of the SVAR model, refer to Shand et al. (2018).
Forecasts were obtained for each cluster using the above model. These forecasts were on the normalized yield data scale. The normalized yield forecasts were then back-transformed on the yield data scale by adding the predicted trend value and taking the exponential. The yield values for each spatial point location in the field were then obtained by assigning the yield value for the cluster to all the observations within the cluster. The above process allowed us to obtain fine-resolution yield maps.
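For illustration, a single posterior predictive draw and its back-transformation to a fine-resolution yield map could look as follows; `rho_draw` and `sigma_draw` stand for one MCMC draw of the model parameters and `trend_pred` for the predicted trend value of the forecast year, all names being illustrative.

```python
import numpy as np

def forecast_yield_map(z_last, rho_draw, sigma_draw, trend_pred, labels):
    """One draw of next-year yield per cluster (Eq. 3), back-transformed to the
    yield scale and mapped to every 10 m cell through its cluster label."""
    z_next = rho_draw * z_last + np.random.normal(0.0, sigma_draw, size=z_last.shape)
    log_yield_next = z_next + trend_pred          # add back the fitted trend
    yield_next = np.exp(log_yield_next)           # undo the log transformation
    return yield_next[labels]                     # fine-resolution yield map
```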
#### Handling missing data
Missing data is a common issue with many real-life data sets and can be an issue for parametric models. In the case of our data, the year 2009 is missing for all the sites. The correlation structure in the SVAR model assumes that the time difference is equally spaced; this assumption is violated since the yield for the year 2009 is missing. We provided a simple fix to handle the missing data issue. We considered the normalized yield to be approximately normally distributed, and then, using the property of conditional multivariate normal distribution, we obtained the conditional distribution of yield for the year 2009, given the observed yield data. For mathematical details, refer to Appendix A. The mean and variance parameters were estimated using the MCMC algorithm. A random sample was drawn from the conditional distribution of \\(2009\\) yield given the observed yield and was considered as the yield for \\(2009\\). The model parameters were updated with each MCMC iteration, and a random sample for the missing year was drawn from the updated conditional distribution during each iteration as well.
## 4 Results
We compared our proposed clustering-based SVAR model to four other models. Model \\(1\\) is the clustering-based SVAR model: it uses clustering for data aggregation and the SVAR model for forecasting. In Model \\(2\\) we used clustering for data aggregation and the random forest algorithm for forecasting. In Models \\(3\\) and \\(4\\) we used blocking for data aggregation and implemented the SVAR model and the random forest for forecasting, respectively. Models \\(1-4\\) were fitted using the normalized yield data as the response variable. Model \\(5\\) did not use data aggregation but was fitted using the log-transformed, de-trended yield as the response variable; the random forest algorithm was applied for forecasting.
The forecasting performance of these models was evaluated using prediction R-squared (\\(R^{2}\\)), mean squared prediction error (MSPE) and mean absolute prediction error (MAPE). Let \\(y_{i}\\) and \\(\\hat{y}_{i},\\ i=1,2,\\ldots,n\\) denote the observed yield and the predicted yield respectively corresponding to the i\\({}^{\\text{th}}\\) cluster/block for the year 2017. The performance metrics were defined as follows,
\\[R^{2}=1-\\frac{\\sum_{i=1}^{n}(y_{i}-\\hat{y}_{i})^{2}}{\\sum_{i=1}^{n}(y_{i}-\\bar{y })^{2}} \\tag{4}\\]
\\[\\text{MSPE}=\\frac{1}{n}\\sum_{i=1}^{n}(y_{i}-\\hat{y}_{i})^{2} \\tag{5}\\]
\\[\\text{MAE}=\\frac{1}{n}\\sum_{i=1}^{n}|y_{i}-\\hat{y}_{i}| \\tag{6}\\]
Table 3 presents the results for the Mead site across different models for clusters of size \\(n=25\\) and \\(n=64\\).
Table 3 shows that the clustering-based SVAR model with \\(n=25\\) clusters achieved the best forecasting performance. Compared to other models, this one had the highest prediction \\(R^{2}=0.81\\) and the lowest values for MSPE and MAPE. The predicted yield from this model was 11.717 Mg/Ha, while the actual average yield in 2017 was 12.023 Mg/Ha. The clustering-based random forest model with 25 clusters performed as the second-best among all the models.
The study shows that models that used clustering consistently outperformed those that used blocking for data aggregation. We also observed that models that used the SVAR model for forecasting performed better than the random forest-based models. Model 5 (without data aggregation) performed similarly to the blocking-based SVAR models. These findings suggest that implementing clustering as a data aggregation method leads to better forecasting performance. Finally, the study found that models with 25 clusters outperformed those with 64 clusters, likely due to the increased variability in the data.
Figures 2 and 3 display the yield maps comparing the clustering-based SVAR and the blocking-based SVAR model for \\(25\\) and \\(64\\) clusters/blocks, respectively.
\\begin{table}
\\begin{tabular}{l l c c c c c} \\hline \\hline
**Aggregation Method** & **Forecasting Model** & **\\# Clusters** & \\(R^{2}\\,(\\%)\\) & **MSPE** & **MAPE** & **Predicted Average** \\\\ \\hline
Clustering & SVAR & 25 & 81 & 0.141 & 0.322 & 11.717 \\\\
Clustering & Random Forest & & 74.153 & 0.192 & 0.381 & 11.739 \\\\ \\hline
Blocking & SVAR & 25 & 28.637 & 0.316 & 0.461 & 11.693 \\\\
Blocking & Random Forest & & 16.579 & 0.369 & 0.473 & 11.735 \\\\ \\hline
Clustering & SVAR & 64 & 77.238 & 0.285 & 0.397 & 11.718 \\\\
Clustering & Random Forest & & 65.131 & 0.436 & 0.443 & 11.738 \\\\ \\hline
Blocking & SVAR & 64 & 29.972 & 0.434 & 0.546 & 11.703 \\\\
Blocking & Random Forest & & 18.55 & 0.505 & 0.553 & 11.732 \\\\ \\hline
None & Random Forest & - & 28.556 & 0.827 & 0.677 & 11.726 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Forecasting performance at the Mead site

The study found that the clustering-based SVAR model generated yield maps with finer resolutions that were more representative of the true yield maps than the blocking-based method. The clustering-based model accurately identified the pattern of yield distribution in the field and provided more information compared to the blocking-based model. The reliable yield maps produced by the clustering-based model can help understand the within-field variability, and while the clusters cannot be used as management zones by themselves, they can help develop effective management strategies.
Furthermore, the study also examined the forecasting results for different values of \\(\\epsilon\\) in the \\(\\epsilon\\)-nearest neighborhood matrix. It was found that the results were not sensitive to the values of \\(\\epsilon\\) as long as each cluster had at least one neighbor. The forecasting results for different values of \\(\\epsilon\\) are presented in Appendix B.
To verify the effectiveness of our proposed clustering-based SVAR model, we conducted the same analysis on Brule and Site 6, which are two independent sites. The results are presented in Appendix C, which includes the forecasting accuracy tables (Table 5, 6) and the yield maps (Figure 4, 5, 6, 7). We found that for both sites, the clustering-based SVAR models with 64 clusters outperformed the other models and produced the most accurate yield maps. This confirms the effectiveness of our proposed model.
## 5 Conclusion

In this paper, we proposed a two-stage clustering-based SVAR approach for forecasting site-specific yield from short time series and high-resolution spatial data, and we applied it at three independent sites in Nebraska. However, due to the short time series, the method was validated using a single year, i.e., 2017. Future studies can evaluate the data over multiple years across different sites. Although further research is needed to obtain management areas based on clusters, this study is one of the few studies forecasting site-specific yield for a short time series based on yield-monitor data and geophysical maps. Existing studies make use of satellite image data or use data collected over long periods of time to obtain yield forecasts.
## Acknowledgements
We would like to thank Nathan Thorson of the Eastern Nebraska Research and Extension Center and the West Central Research and Extension Center for providing crop yield information and access to study sites.
## References
* Pedersen et al. (2017) Soren Marcus Pedersen, Kim Martin Lind, et al. _Precision Agriculture: Technology and Economic Perspectives_. Springer, 2017.
* Wang et al. (2002) Enli Wang, MJ Robertson, GL Hammer, Peter S Carberry, D Holzworth, Holger Meinke, SC Chapman, JNG Hargreaves, NI Huth, and G McLean. Development of a generic crop model template in the cropping system model apsim. _European journal of Agronomy_, 18(1-2):121-140, 2002.
* Holzworth et al. (2014) Dean P Holzworth, Neil I Huth, Peter G deVoil, Eric J Zurcher, Neville I Herrmann, Greg McLean, Karine Chenu, Erik J van Oosterom, Val Snow, Chris Murphy, et al. Apsim-evolution towards a new generation of agricultural systems simulation. _Environmental Modelling & Software_, 62:327-350, 2014.
* Basso et al. (2013) Bruno Basso, Davide Cammarano, and Elisabetta Carfagna. Review of crop yield forecasting methods and early warning systems. In _Proceedings of the first meeting of the scientific advisory committee of the global strategy to improve agricultural and rural statistics, FAO Headquarters, Rome, Italy_, volume 241, 2013.
* Newlands et al. (2014) Nathaniel K Newlands, David S Zamar, Louis A Kouadio, Yinsuo Zhang, Aston Chipanshi, Andries Potgieter, Souleymane Toure, and Harvey SJ Hill. An integrated, probabilistic model for improved seasonal forecasting of agricultural crop yield under environmental uncertainty. _Frontiers in Environmental Science_, 2:17, 2014.
* Bussay et al. (2015) Attila Bussay, Marijn van der Velde, Davide Fumagalli, and Lorenzo Seguini. Improving operational maize yield forecasting in hungary. _Agricultural Systems_, 141:94-106, 2015.
* Paudel et al. (2021) Dilli Paudel, Hendrik Boogaard, Allard de Wit, Sander Janssen, Sjoukje Osinga, Christos Pylianidis, and Ioannis N Athanasiadis. Machine learning for large-scale crop yield forecasting. _Agricultural Systems_, 187:103016, 2021.
* Drummond et al. (2003) Scott T Drummond, Kenneth A Sudduth, Anupam Joshi, Stuart J Birrell, and Newell R Kitchen. Statistical and neural methods for site-specific yield prediction. _Transactions of the ASAE_, 46(1):5, 2003.
* Peralta et al. (2016) Nahuel R Peralta, Yared Assefa, Juan Du, Charles J Barden, and Ignacio A Ciamplitti. Mid-season high-resolution satellite imagery for forecasting site-specific corn yield. _Remote Sensing_, 8(10):848, 2016.
* Anselin et al. (2004) Luc Anselin, Rodolfo Bongiovanni, and Jess Lowenberg-DeBoer. A spatial econometric approach to the economics of site-specific nitrogen management in corn production. _American Journal of Agricultural Economics_, 86(3):675-687, 2004.
* Schwalbert et al. (2018) Rai A Schwalbert, Telmo JC Amado, Luciana Nieto, Sebastian Varela, Geomar M Corassa, Tiago AN Horbe, Charles W Rice, Nahuel R Peralta, and Ignacio A Ciamplitti. Forecasting maize yield at field scale based on high-resolution satellite imagery. _Biosystems engineering_, 171:179-192, 2018.
* Lambert et al. (2004) Dayton M Lambert, James Lowenberg-Deboer, and Rodolfo Bongiovanni. A comparison of four spatial regression models for yield monitor data: A case study from argentina. _Precision Agriculture_, 5(6):579-600, 2004.
* Li et al. (2016) Xiaofei Li, Keith H Coble, Jesse B Tack, and Barry J Barnett. Estimating site-specific crop yield response using varying coefficient models. Technical report, 2016.
* Shand et al. (2018) Lyndsay Shand, Bo Li, Trevor Park, and Dolores Albarracin. Spatially varying auto-regressive models for prediction of new human immunodeficiency virus diagnoses. _J. R. Stat. Soc. Ser. C. Appl. Stat._, 67(4):1003-1022, 2018. ISSN 0035-9254. doi: 10.1111/rssc.12269. URL [https://doi.org/10.1111/rssc.12269](https://doi.org/10.1111/rssc.12269).
* Perry and Niemann (2007) Mark A Perry and Jeffrey D Niemann. Analysis and estimation of soil moisture at the catchment scale using eofs. _Journal of Hydrology_, 334(3-4):388-404, 2007.
* Korres et al. (2010) W Korres, CN Koyama, P Fiener, and K Schneider. Analysis of surface soil moisture patterns in agricultural landscapes using empirical orthogonal functions. _Hydrology & Earth System Sciences_, 14(5), 2010.
* Franz et al. (2020) Trenton E Franz, Sayli Pokal, Justin P Gibson, Yuzhen Zhou, Hamed Gholizadeh, Fatima Amor Tenorio, Daran Rudnick, Derek Heeren, Matthew McCabe, Matteo Ziliani, et al. The role of topography, soil, and remotely sensed vegetation condition towards predicting crop yield. _Field Crops Research_, 252:107788, 2020.
* Eyinla and Oladunjoye (2014) Dorcas S. Eyinla and Michael A. Oladunjoye. Improving quality agricultural practices in tropical environments through integrated geophysical methods. 2014.
* Leroux et al. (2000) Brian G. Leroux, Xingye Lei, and Norman Breslow. Estimation of disease rates in small areas: a new mixed model for spatial dependence. In _Statistical models in epidemiology, the environment, and clinical trials (Minneapolis, MN, 1997)_, volume 116 of _IMA Vol. Math. Appl._, pages 179-191. Springer, New York, 2000. doi: 10.1007/978-1-4612-1284-3_4. URL [https://doi.org/10.1007/978-1-4612-1284-3_4](https://doi.org/10.1007/978-1-4612-1284-3_4).
* Nielsen (2016) Frank Nielsen. _Hierarchical Clustering_, pages 195-211. Springer International Publishing, Cham, 2016. ISBN 978-3-319-21903-5. doi: 10.1007/978-3-319-21903-5_8. URL [https://doi.org/10.1007/978-3-319-21903-5_8](https://doi.org/10.1007/978-3-319-21903-5_8).
* Luxburg (2007) Ulrike von Luxburg. A tutorial on spectral clustering. _Stat. Comput._, 17(4):395-416, 2007. ISSN 0960-3174. doi: 10.1007/s11222-007-9033-z. URL [https://doi.org/10.1007/s11222-007-9033-z](https://doi.org/10.1007/s11222-007-9033-z).
## Appendix
### A. Missing data
Consider the normalized yield matrix \\(\\mathbf{Z}\\sim MVN(\\mathbf{\\mu},\\mathbf{\\Sigma})\\) with \\(n\\times T\\) observations. Let \\(\\mathbf{Z}\\) be partitioned as,
\\[\\mathbf{Z}=\\begin{bmatrix}\\mathbf{Z}_{2009}\\\\ \\mathbf{Z}_{-2009}\\end{bmatrix}\\text{with sizes }\\begin{bmatrix}n\\times 1\\\\ n(T-1)\\times 1\\end{bmatrix},\\]
where \\(\\mathbf{Z}_{-2009}\\) is the matrix of normalized yield for all years except the year 2009. Accordingly, we partition \\(\\mathbf{\\mu}\\) and \\(\\mathbf{\\Sigma}\\) as follows,
\\[\\mathbf{\\mu} =\\begin{bmatrix}\\mathbf{\\mu}_{1}\\\\ \\mathbf{\\mu}_{2}\\end{bmatrix}\\text{with sizes }\\begin{bmatrix}n\\times 1\\\\ n(T-1)\\times 1\\end{bmatrix},\\] \\[\\mathbf{\\Sigma} =\\begin{bmatrix}\\mathbf{\\Sigma}_{11}&\\mathbf{\\Sigma}_{12}\\\\ \\mathbf{\\Sigma}_{21}&\\mathbf{\\Sigma}_{22}\\end{bmatrix}\\text{with sizes }\\begin{bmatrix}n\\times n &n\\times n(T-1)\\\\ n(T-1)\\times T&n(T-1)\\times n(T-1)\\end{bmatrix},\\]
then the distribution of \\(\\mathbf{Z}_{2009}\\) conditional on \\(\\mathbf{Z}_{-2009}=\\mathbf{a}\\) is multivariate normal given by, \\((\\mathbf{Z}_{2009}|\\mathbf{Z}_{-2009}=\\mathbf{a})\\sim N(\\bar{\\mathbf{\\mu}},\\mathbf{ \\bar{\\Sigma}})\\), where
\\[\\bar{\\mathbf{\\mu}}=\\mathbf{\\mu}_{1}+\\mathbf{\\Sigma}_{12}\\mathbf{\\Sigma}_{22}^{-1}\\left( \\mathbf{a}-\\mathbf{\\mu}_{2}\\right),\\text{ and }\\mathbf{\\overline{\\Sigma}}=\\mathbf{\\Sigma}_{11}-\\mathbf{ \\Sigma}_{12}\\mathbf{\\Sigma}_{22}^{-1}\\mathbf{\\Sigma}_{21}.\\]
We update the estimates of \\(\\bar{\\mathbf{\\mu}}\\) and \\(\\mathbf{\\bar{\\Sigma}}\\) as one of the steps in the MCMC algorithm using Gibbs sampling. We then take a random sample from N(\\(\\bar{\\mathbf{\\mu}}\\), \\(\\mathbf{\\bar{\\Sigma}}\\)) and update the \\(\\mathbf{Z}\\) matrix in each step of the MCMC algorithm.
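A numpy sketch of one such conditional draw, following the partition above, is given below; in practice \\(\\mathbf{\\Sigma}_{22}\\) would be solved or factorized rather than explicitly inverted, but the inverse keeps the illustration close to the formulas.

```python
import numpy as np

def draw_missing_year(mu1, mu2, S11, S12, S22, a, rng=None):
    """Sample Z_2009 | Z_-2009 = a ~ N(mu_bar, Sigma_bar) using the partitioned
    mean and covariance; one draw is taken at every MCMC iteration."""
    rng = np.random.default_rng() if rng is None else rng
    K = S12 @ np.linalg.inv(S22)                  # Sigma_12 Sigma_22^{-1}
    mu_bar = mu1 + K @ (a - mu2)
    Sigma_bar = S11 - K @ S12.T                   # Sigma_21 = Sigma_12^T by symmetry
    return rng.multivariate_normal(mu_bar, Sigma_bar)
```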
### B. Results for different values of \\(\\epsilon\\)
We present the forecasting results obtained for different values of \\(\\epsilon\\) in Table 4.
* Case 1: Some of the clusters have no neighbors (\\(\\epsilon=3\\)),
* Case 2: Every cluster has at least one neighbor (\\(\\epsilon=6\\)),
* Case 3: Every cluster has at least two neighbors (\\(\\epsilon=16\\)).
At the Mead site, we observed that the outcomes for Cases 2 and 3 (\\(\\epsilon=6\\) and \\(\\epsilon=16\\)) are analogous to those for \\(\\epsilon=100\\), where all clusters are considered neighbors of each other. However, the forecasting performance is slightly reduced for Case 1 (\\(\\epsilon=3\\)), since certain clusters have no neighbors. Overall, the results are not sensitive to the value of \\(\\epsilon\\), provided that each cluster has at least one neighbor.
For the blocking-based SVAR model, we have also considered the exchangeable prior proposed in Shand et al. (2018) to determine if it can enhance the forecasting accuracy of the model. The exchangeable prior assumes that all blocks are neighbors of each other. The outcomes are shown in Table 4. However, using the exchangeable prior for the neighborhood matrix structure produces only minor improvements in the forecasting performance compared to using the spatial neighborhood matrix structure which considers the blocks on the border as neighbors.
\\begin{table}
\\begin{tabular}{l l l l l l l l} \\hline \\hline
**Aggregation Method** & **Forecasting Model** & **\\# Clusters** & \\(\\epsilon\\) & \\(R^{2}\\left(\\%\\right)\\) & **MSPE** & **MAPE** & **Predicted Average** \\\\ \\hline
Clustering & SVAR & 25 & 100 & 81 & 0.141 & 0.322 & 11.717 \\\\
Clustering & SVAR & 25 & 3 & 79.609 & 0.151 & 0.327 & 11.69 \\\\
Clustering & SVAR & 25 & 6 & 80.741 & 0.143 & 0.323 & 11.706 \\\\
Clustering & SVAR & 25 & 16 & 80.924 & 0.142 & 0.324 & 11.713 \\\\ \\hline
Blocking & SVAR & 25 & Spatial NB & 28.637 & 0.316 & 0.461 & 11.693 \\\\
Blocking & SVAR & 25 & Exchangeable & 31.244 & 0.304 & 0.456 & 11.705 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Forecasting performance at the Mead site for different values of \\(\\epsilon\\)
\\begin{table}
\\begin{tabular}{l l c c c c c} \\hline \\hline
**Aggregation Method** & **Forecasting Model** & **\\# Clusters** & \\(R^{2}\\left(\\%\\right)\\) & **MSPE** & **MAPE** & **Predicted Average** \\\\ \\hline Clustering & SVAR & 25 & 74.188 & 1.044 & 0.915 & 11.047 \\\\ Clustering & Random Forest & & 69.083 & 1.25 & 1.037 & 11.079 \\\\ \\hline Blocking & SVAR & 25 & 24.099
Figure 6: Yield maps for Site 6 for 25 clusters, A: Observed yield for 2017, B: Predicted yield for 2017 using clustering based SVAR model, C: Predicted yield for 2017 using blocking based SVAR model
Figure 7: Yield maps for Site 6 for 64 clusters, A: Observed yield for 2017, B: Predicted yield for 2017 using clustering based SVAR model, C: Predicted yield for 2017 using blocking based SVAR model
# Modeling soil organic carbon with Quantile Regression: Dissecting predictors' effects on carbon stocks
Luigi Lombardo\\({}^{1,2}\\)
S. Saia\\({}^{3}\\)
Calogero Schillaci\\({}^{4}\\)
P. Martin Mai\\({}^{2}\\)
Raphael Huser\\({}^{1}\\)
\\({}^{1}\\) Computer, Electrical and Mathematical Sciences & Engineering Division, KAUST,
Thuwal, Saudi Arabia
\\({}^{2}\\) Physical Sciences and Engineering Division, KAUST, Thuwal, Saudi Arabia
\\({}^{3}\\)Council for Agricultural Research and Economics (CREA)
Cereal and Industrial Crops Research Centre (CREA-CI), Foggia, Italy
\\({}^{4}\\)Department of Agricultural and Environmental Science, University of Milan, Italy
November 7, 2021
**Keywords:** Quantile Regression, R coding, Topsoil Organic Carbon, Digital Soil Mapping, Mediterranean agro-ecosystem
**Corresponding Author:** Luigi Lombardo*, Email: [email protected]
Soil Organic Carbon (SOC) plays a key role in various agricultural and ecological processes related to soil fertility, the carbon cycle and soil-atmosphere interactions, including CO\\({}_{2}\\) sequestration. Thus, knowledge of SOC is of crucial importance at both global and local scales, especially when managing natural and anthropic areas, and agricultural lands in particular. In this context, the scientific community has spent considerable efforts in mapping SOC, modeling its spatiotemporal variation and confirming its primary role in shaping ecosystem functioning (Ajami et al., 2016; Grinand et al., 2017; Ratnayake et al., 2014; Schillaci et al., 2017a).
Spatiotemporal studies can be found in various geographic contexts from Africa (Akpa et al., 2016), Asia (Chen et al., 2016), Australia (Henderson et al., 2005), Europe (Yigini and Panagos, 2016), North-America (West and Wali, 2002) to South-America (Araujo et al., 2016). The variability of the local landscape, available funding, mean gross income of the population in the area and temporal commitment affect the number of samples, their spatial density and distribution. As a result, there are experiments conducted on almost regular and dense grids, most of which focus on small areas (Lacoste et al., 2014; Taghizadeh-Mehrjardi et al., 2016), and others where the sampling strategy significantly varies across space (Mondal et al., 2016). The latter studies mainly correspond to regional or even greater scales (Reijneveld et al., 2009; Sreenivas et al., 2016), with only a few cases where an optimal sample density is maintained at a national level (Mulder et al., 2016). The characteristics of the environment under study can require the use of different predictors capable of explaining the variability of soil traits, topography and standing biocoenosis, especially (cropped or natural) phytocoenosis, the latter being efficiently explained by remotely sensed (RS) properties (Morellos et al., 2016; Peng et al., 2015).
Modeling procedures for SOC primarily aim at constructing present, past or predictive maps and at studying the role of each predictor over the target variable. Regarding the latter, the estimation of predictor contributions to a target variable such as SOC is of particular interest to efficiently obtain agro-environmental and social benefits (e.g. Rossel and Bouma, 2016).
Statistical applications provide quantitative ways to deal with such research questions. The current literature encompasses algorithms that can be clustered into interpolative and predictive. Pure interpolators are broadly used when the density of the samples is sufficient to regularly describe the variation of SOC across a given area. Examples can be found (Hoffmann et al., 2014; Piccini et al., 2014) with excellent performances reported. The weakness of these approaches becomes evident when using data sets with non-regular distribution in space (Dai et al., 2014; Miller et al., 2016). Conversely, regression-based predictive models hardly suffer from the spatial sampling scheme as they do not rely on the distribution across the geographic space in order to derive functional relations between SOC and dependent variables (Hobley et al., 2016).
Among these, linear regression models are a well-established tool for estimating how, on average, certain environmental properties affect SOC and SOC stock (Rodriguez-Lado and Martinez-Cortizas, 2015). However, they are bounded by definition to model the conditional mean, thus being unable to explore the effects of the same properties at different C contents or stock of the soil, especially at the boundaries of the distribution.
In the present work, Quantile Regression (hereafter QR, Koenker (2005)) is used to model SOC stock from a non-homogeneously sampled topsoil SOC dataset using soil texture, land use, topographic and remotely sensed covariates. In particular, QR is able to model the relationship between a set of covariates and specific percentiles of SOC. In classical regression approaches, the regression coefficients (also often called beta coefficients) represent the mean increase in the response variable produced by a one-unit increase in the associated covariates. Conversely, the beta coefficients obtained from QR represent the change in a specific quantile of the response variable produced by a one-unit increase in the associated covariates. In this way, QR allows one to study how certain covariates affect, for example, the SOC median (quantile \(\tau=0.5\)) or extremely low (e.g., \(\tau=0.05\)) or high (e.g., \(\tau=0.95\)) SOC values. Therefore, it gives a much more complete description of the effect of predictors on the whole SOC probability distribution (i.e., not just the mean) and thus offers the chance to study differential SOC responses to environmental factors.
Furthermore, when used for mapping purposes, QR also allows for soil mapping at given quantiles, providing analogous estimates to more common approaches by using the median instead of the mean.
In the present experiment we use a nested strategy to model SOC in Sicilian agricultural areas with QR: we initially aim at testing the overall performance of QR when modeling the SOC stock by segmenting its distribution into 19 quantiles (\(\tau=0.05\) to \(\tau=0.95\)). Subsequently, we examine the coefficients of each predictor for each of the quantiles. Ultimately, we compare the median prediction with available SOC benchmarks for the same study area to test the efficiency of QR for soil mapping purposes. The dataset used in this contribution is the same as that used in Schillaci et al. (2017b), where a Stochastic Gradient Treeboost is adopted.
## 2 Materials and methods
### Study area
Sicily with its approximate 25 thousand squared kilometers is the biggest Mediterranean island. More than 60% of its area is cropped. The natural/semi natural ecosystems include i) Mediterranean maquis, ii) dunes and coastal systems, iii) woods and forests. There are also 37 ancillary islands that are not considered in the present study. Sicily has several sub-climatic zones, all of which are included in hot-summer Mediterranean climate (Csa Koeppen) and warm-summer Mediterranean climate (Csb Koeppen) with mean annual temperatures usually higher than \\(15.8^{\\circ}\\) C. From the West to the South-East coasts, indicators of a semiarid environment can be observed over the year with low or no rainfall summer, high air temperatures and evapo-transpiration demand together with water deficit. The mountainous areas (Madonie, Sicani, Nebrodi and Peloritani ridges, physiography can be checked in Schillaci et al. (2017)) are scarcely cultivated mostly because of conservation policies acting in favor of the local temperate woodland. The continentality index, which is determined by the difference between the mean air temperature during summer and winter, is similar in all the climatic subregions.
According to the latest soil map published by Fantappie et al. (2010) using the World Reference Base (Group et al., 2014) soil classification, the dominant soils in Sicily are: Entisols (36%) and Inceptisols (34%), followed by Mollisols, Alfisols, Vertisols and Andisols. This climatic context plays an important role in the decay processes of organic residue (Lutzow et al., 2006) and in the stabilization of organic fractions. In particular, the local climatic setting facilitates the decomposition and mineralization of the organic matter.
### SOC Data
The available datasets represent the SOC stock (expressed in \(ton\cdot ha^{-1}\)) of the topsoils (Ap horizon, from 0 to \(30\,cm\) depth), primarily from agricultural areas (Figure 1). It has been calculated from the organic carbon content (expressed in \(g\cdot kg^{-1}\)) multiplied by the soil bulk density, the latter being derived by a pedotransfer function (Pellegrini et al., 2007). In total, 2202 samples are used for modeling purposes. See Schillaci et al. (2017b) for further information on the dataset.
Supplementary Figure 1 shows the variability associated with each of the considered quantiles. The dataset was provided by the Assessorato Regionale Territorio Ambiente (ARTA) as georeferenced SOC values derived by pedological profiles.
The adopted covariates and their interpretation are discussed in the Supplementary Materials, Predictors Section. The distribution of the aforementioned covariates is shown in Supplementary Figure 2 through their Empirical Cumulative Distribution Function. Prior to any analysis, we transformed some of the variables. This is shown and explained in the Supplementary Material (Figure 3 and Pre-processing Section, respectively).
Figure 1: SOC stock dataset and geographic contextualization.
### Statistical modeling using quantile regression
#### 2.3.1 Quantile regression
In classical regression analysis, the fluctuations in the mean of a response variable (e.g., \\(\\log(\\text{SOC})\\)) are typically explained through a linear function of a set of predictors. In the case where \\(n\\) responses \\(Y_{1},\\ldots,Y_{n}\\) are observed with their \\(p\\) respective predictors \\(x_{1i},\\ldots,x_{pi}\\) (here assumed to be continuous for simplicity), a statistical model may be formulated as
\\[Y_{i}=\\beta_{0}+\\beta_{1}x_{1i}+\\cdots+\\beta_{p}x_{pi}+\\varepsilon_{i},\\]
where the random variables \\(\\varepsilon_{i}\\) are typically assumed to be mutually independent and to follow a normal distribution with zero mean and finite variance \\(\\sigma^{2}\\). Under such a model, and if the predictors are linearly independent, the vector of unknown regression parameters \\(\\beta=(\\beta_{1},\\ldots,\\beta_{p})^{T}\\) may be estimated using the Ordinary Least Squares (OLS) estimator \\(\\hat{\\beta}_{OLS}\\), which may also be seen as minimizing the squared loss function, i.e.,
\\[\\hat{\\beta}_{OLS}=(X^{T}X)^{-1}X^{T}Y=\\min_{\\beta}\\|Y-X\\beta\\|^{2}=\\min_{\\beta }\\sum_{i=1}^{n}(Y_{i}-\\beta_{0}-\\beta_{1}x_{1i}-\\cdots-\\beta_{p}x_{pi})^{2}, \\tag{1}\\]
where \\(Y=(Y_{1},\\ldots,Y_{n})^{T}\\) is the vector of observations, and \\(X\\) is the \\(n\\)-by-\\((p+1)\\) design matrix, where the first column corresponds to the intercept and is a vector of ones, and each other column corresponds to a specific predictor, i.e., it contains the values \\(x_{k1},\\ldots,x_{kn}\\), \\(k=1,\\ldots,p\\). From the right-hand side of (1), the conditional _mean_ of \\(Y\\) may be estimated by \\(\\hat{\\beta}_{0;OLS}+\\hat{\\beta}_{1;OLS}x_{1}+\\cdots+\\hat{\\beta}_{p;OLS}x_{p}\\). In other words, this is a _point predictor_, focusing on a single feature (i.e., the mean) of the distribution of the response \\(Y\\).
More detailed information on the whole conditional (not necessarily Gaussian) _distribution_ of the response \\(Y\\) may be obtained using _quantile_ regression. By definition, for each probability \\(0\\leq\\tau\\leq 1\\), the \\(\\tau\\)-quantile \\(y_{\\tau}\\) of \\(Y\\) is the value exceeding \\((100\\times\\tau)\\%\\) of the data. Mathematically, one has \\(\\text{pr}(Y\\leq y_{\\tau})=\\tau\\), and the collection of all quantiles \\(\\{y_{\\tau}:0\\leq\\tau\\leq 1\\}\\) fully characterizes the probability distribution of \\(Y\\). The value \\(\\tau=0.5\\) corresponds to the _median_, while low and high quantiles (for low and high values of \\(\\tau\\), respectively) correspond to extreme values of \\(Y\\) lying in the lower and upper tails of the distribution, respectively.
By analogy with (1), the conditional \\(\\tau\\)-quantile may be estimated by minimizing an objective function, where the squared loss function is replaced by the quantile loss function. More precisely, computing
\\[\\hat{\\beta}_{\\tau}=\\min_{\\beta}\\sum_{i=1}^{n}L_{\\tau}(Y_{i}-\\beta_{0}-\\beta_ {1}x_{1i}-\\cdots-\\beta_{p}x_{pi}), \\tag{2}\\]
where the quantile loss function \\(L_{\\tau}\\) is defined as
\\[L_{\\tau}(x)=\\begin{cases}-2(1-\\tau)x,&x<0;\\\\ 2\\tau x,&x\\geq 0,\\end{cases}\\]
the conditional \(\tau\)-quantile \(y_{\tau}\) may then be estimated as \(\hat{y}_{\tau}=\hat{\beta}_{0;\tau}+\hat{\beta}_{1;\tau}x_{1}+\cdots+\hat{\beta}_{p;\tau}x_{p}\). When \(\tau=0.5\), \(L_{0.5}(x)=|x|\) is the absolute loss function, and \(\hat{y}_{0.5}\) corresponds to the estimated conditional median. In our application, we choose a sequence of 19 equispaced probabilities \(\tau=0.05,0.1,\ldots,0.95\) to fit separate quantile regression models, giving much deeper insight into the complete conditional distribution of the SOC values as a function of spatial predictors. By focusing on low (respectively high) quantiles, the regression coefficients inform us about the predictors mostly influencing the absence (respectively high concentrations) of SOC stock over space. By considering independent quantile regression models for different values of \(\tau\), this allows for the possibility that the importance of certain predictors may change according to the SOC level. More statistical details on quantile regression and its application may be found in Koenker (2005).
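For concreteness, the piecewise loss above can be written in a few lines of code. The sketch below is in Python with NumPy (an assumption, since the code accompanying this paper is written in R) and follows the definition of \(L_{\tau}\) verbatim.

```python
import numpy as np

def quantile_loss(x, tau):
    """Quantile loss L_tau(x) exactly as defined above.

    x   : array of residuals Y_i - (beta_0 + beta_1 x_1i + ... + beta_p x_pi)
    tau : quantile level in (0, 1)
    """
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, -2.0 * (1.0 - tau) * x, 2.0 * tau * x)

# For tau = 0.5 the loss reduces to the absolute loss |x|
print(quantile_loss([-2.0, 0.0, 3.0], tau=0.5))   # [2. 0. 3.]
# For tau = 0.95, under-prediction (positive residuals) is penalised more heavily
print(quantile_loss([-2.0, 3.0], tau=0.95))       # [0.2 5.7]
```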
Finding the estimated parameters \\(\\hat{\\beta}_{\\tau}\\) by optimizing (2) is not trivial, but robust algorithms have been implemented and made freely available in the R package quantreg. Model checking and validation may be performed using classical regression techniques with some minor adjustments. For example, to assess the goodness of fit, the coefficient of determination \\(R^{2}\\) is typically replaced by a similar measure based on the quantile loss, although the interpretation remains essentially the same. Similarly, to check the ability of the model to predict unobserved values, cross-validation combined with the quantile loss function is typically used, in order to be consistent with the fitting procedure, instead of using the mean squared error as in classical regression analysis.
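As noted above, the analyses in this paper rely on the R package quantreg; the snippet below sketches the same workflow in Python with statsmodels purely for illustration. The package choice, the synthetic data and the predictor names are assumptions and do not reproduce the authors' actual setup.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the SOC table: names and values are placeholders,
# not the covariates or measurements used in the paper.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rainfall": rng.uniform(400, 900, n),   # mean annual rainfall (mm)
    "temp":     rng.uniform(14, 19, n),     # mean annual temperature (deg C)
    "slope":    rng.uniform(0, 30, n),      # slope (%)
})
df["log_soc"] = 2.0 + 0.002 * df["rainfall"] - 0.05 * df["temp"] + rng.normal(0, 0.3, n)

taus = np.linspace(0.05, 0.95, 19)          # the 19 equispaced quantile levels
model = smf.quantreg("log_soc ~ rainfall + temp + slope", df)

# One independent quantile regression fit per level; one row of betas per tau.
betas = pd.DataFrame({round(t, 2): model.fit(q=t).params for t in taus}).T
print(betas)
```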
#### 2.3.2 Model building strategy, estimation and uncertainty assessment
The strategy adopted in the present work includes five steps:
1. We perform a preliminary multicollinearity analysis to exclude highly correlated covariates. When Pearson's correlation coefficients are above 0.7 or below -0.7, we remove one of two or more collinear covariates as suggested by (Pengelly and Maass, 2001). This is shown and explained in the Supplementary Material (Figure 4 and Pre-processing Section, respectively).
2. Categorical covariates are converted into dummy variables, one for each predictor level. Then, the most and least representative dummy classes are removed to avoid a singular design matrix and the resulting ill-defined parameter estimates. The least represented classes contain one to five SOC stock samples; removing them eliminates potential sources of noise in the modeling procedure, whereas the effects of the most frequent classes are carried by the model intercept. The most frequent classes account for a significant part of the data by definition, thus the interpretation of their contribution to the model is clearly important. To investigate their effects on SOC stock, we pre-run a separate, simpler model built only with the most frequent class of each covariate.
3. Model performances or predictive power is evaluated through leave-one-out cross-validation (Sammut and Webb, 2010). This allows for producing quality metrics based on quantile loss (Koenker and Bassett Jr, 1978). In a QR framework, the latter is equivalent to the \\(R^{2}\\) coefficient used in classical linear regression.
4. Model uncertainty over replicates is assessed through a non-parametric case-resampling bootstrap (Davison and Hinkley, 1997); a minimal code sketch of this resampling scheme is given after this list. In particular, 10000 replicates are generated by resampling the 2202 cases with replacement. As a result, 10000 replicates of the beta coefficient estimates for each predictor and categorical class are produced for each of the 19 quantiles considered in this study. Similarly, 19 sets of 10000 predictive maps are also computed. This procedure evaluates the variability of the modeling output and the reliability of the final estimates across replicates.
5. SOC regionalization is conducted by producing 19 distinct quantile predictive maps by using the original dataset without any resampling scheme to ensure the full predictive power for mapping purposes.
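The case-resampling bootstrap of step 4 can be sketched as follows (Python with statsmodels is an assumption, the paper uses R; 200 replicates are used here instead of 10000 only to keep the example fast, and `df` stands for a hypothetical table of SOC observations and covariates).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_betas(df, formula, tau, n_boot=200, seed=0):
    """Case-resampling bootstrap of the QR coefficients at a single quantile."""
    rng = np.random.default_rng(seed)
    n = len(df)
    replicates = []
    for _ in range(n_boot):
        sample = df.iloc[rng.integers(0, n, n)]        # resample cases with replacement
        fit = smf.quantreg(formula, sample).fit(q=tau)
        replicates.append(fit.params)
    return pd.DataFrame(replicates)                    # one row of betas per replicate

# betas_05 = bootstrap_betas(df, "log_soc ~ rainfall + temp + slope", tau=0.5)
# betas_05.quantile([0.025, 0.975])                    # 95% pointwise confidence intervals
```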
### Currently available SOC estimations in the study area
Three digital soil mapping products are currently available for the area under study: i) the ISRIC World Soil Information ([http://www.isric.org](http://www.isric.org), Hengl et al. (2014)), ii) the Global Soil Organic Carbon Estimates of the Harmonized World Soil Database ([http://esdac.jrc.ec.europa.eu/content/global-soil-organic-carbon-estimates](http://esdac.jrc.ec.europa.eu/content/global-soil-organic-carbon-estimates), Hiederer and Kochy (2011)) and iii) the European Joint Research Centre JRC European SOC map (Lugato et al., 2014). These layers represent the state of the art of digital soil mapping and are de facto the only SOC benchmarks for the globe and for Europe. According to Hengl et al. (2014), SOC distribution is calculated through Generalized Linear Models at a 1-km resolution using the GSIF package in R. Hiederer and Kochy (2011) use an analogous linear regression model and spatial resolution to regionalize the SOC data over the globe. Conversely, the JRC European estimates are calculated using a deterministic approach based on the agro-ecosystem SOC model CENTURY (Parton et al., 1988). The inclusion of such estimates in the present contribution allows us to compare the regional QR prediction to reliable, robust and well-tested analogous datasets. The comparison is based on the median QR prediction together with the aforementioned benchmarks. To accommodate differences in spatial resolution, we downscale all maps to the minimum common resolution (1-km cell size), where the resulting values per pixel represent the average SOC stock among the smaller pixels falling within a given 1-km cell.
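The harmonisation to the common 1-km grid amounts to block-averaging each finer map; a minimal sketch is given below, assuming the maps have already been co-registered and read into NumPy arrays, and that a coarse cell spans an integer number of finer pixels.

```python
import numpy as np

def block_average(soc_map, factor):
    """Average `factor` x `factor` blocks of fine pixels into one coarse cell."""
    h, w = soc_map.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the block size
    blocks = soc_map[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return np.nanmean(blocks, axis=(1, 3))         # NaNs mark no-data pixels

# e.g. a hypothetical 250 m product resampled to the 1-km grid
fine = np.random.rand(1000, 1200) * 80             # placeholder SOC stock values (t/ha)
coarse = block_average(fine, factor=4)
print(coarse.shape)                                # (250, 300)
```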
## 3 Results
Leave-one-out cross-validation performances appear in line with other methods in the literature. In particular, Schillaci et al. (2017) report an R\\({}^{2}\\) of 0.47 whereas the quantile loss reaches 0.49 for quantiles \\(\\tau=0.4,0.45\\) (see Figure 2). In addition, Figure 2 reveals that the quantile loss has a bell shape as a function of the quantile level. This implies that the predictive power decreases towards the boundaries of the distribution.
The uncertainty of estimated beta coefficients (assessed by means of the non-parametric case-resampling bootstrap) is presented in five separate subplots: Figure 3 presents boxplots of estimated parameters obtained from the 10000 bootstrap replicates for the simple model comprising only three categorical variables; The estimated parameters for the final reference model are summarized in Figures 4, 5, 6, and 7, which correspond to continuous predictors, Land use, Texture and Landform, respectively.
A similar, but less pronounced, decrease in the beta coefficients is shown for _Silty Clay_. Both _Clay_ and _Silty clay_ present a very low internal variability, especially at the upper quantiles. On the other hand, the sand texture class produces an increasing beta coefficient across the quantiles, from strongly negative on the left side of the distribution to almost 0 on the right side. However, the variability of the beta coefficient in each quantile for sand is very high, hindering its interpretation.
Coefficients for _Landform_ classes are summarized in Figure 7 (except for _Plains_) where unexpectedly, none of the Landform classes appear to have a clear influence over the SOC in the study area and no pattern across quantiles can be ascertained.
Predictive maps are shown in Figure 8. Here, variations in predicted SOC over the study area are evident in the extreme quantiles (\\(q\\leq 0.25\\) and \\(q\\geq 0.75\\)) but less pronounced in the central quantiles (\\(0.25<q<0.75\\)). Similarly, the variability (measured as inter-quartile range) shows an increasing trend through quantiles.
The qualitative comparison between the predicted median and the ISRIC, European and Global JRC benchmarks is shown in Figure 9. Among the available SOC stock benchmarks, the JRC European map and, to a certain degree, the ISRIC map are close to our median map in terms of degree of spatial variability (Figure 10).
Figure 3: Boxplots of estimated beta coefficients based on the simple model with 10000 bootstrap replicates, and plotted with respect to the quantile level \\(\\tau=0.05,\\ldots,0.95\\). The blue line represents 0 (i.e., no effect), while the red curves are 95% pointwise confidence intervals.
ISRIC frequently overestimates SOC stock in the study region. In particular, our predicted median and the ISRIC maps efficiently capture the pedo-genetic differences but not the differences within land use classes. JRC-EU better captures differences within arables, which were by far the most represented land use class. Finally, JRC-GL captures few spatial differences but, similarly to our predicted median, it is the only benchmark capturing the high SOC stocks in the southeastern areas.
The spatial relation between predictive maps is compressed for a numerical-only assessment in Figure 10.
Here, the reference predicted median is compared to the three benchmarks through i) pixel-by-pixel density plotting, ii) a quantile-quantile plot, and iii) residuals. Three observations can be made. ISRIC strongly overestimates the SOC stock compared to our QR-based model, with the two predictions coinciding only at low carbon concentrations. The qualitative similarity between the median and the JRC-EU predictions is once more confirmed from a quantitative perspective, with a quantile-quantile plot showing a slight but constant underestimation. Ultimately,
Figure 4: Boxplots of estimated beta coefficients for continuous predictors. These results are based on the final model with 10000 bootstrap replicates, and plotted with respect to the quantile level \\(\\tau=0.05,\\ldots,0.95\\). The blue line represents 0 (i.e., no effect), while the red curves are 95% pointwise confidence intervals.
JRC-GL shows the lowest residuals with respect to the QR reference together with a good agreement up to a concentration of approximately 45 \\(t/ha\\). However, from this threshold to the right tail of the distribution, the two predictions completely diverge one from the other.
## 4 Discussion
We present a Quantile Regression framework for modeling SOC stock data. This is applied to the semi-arid Sicilian territory, located in the middle of the Mediterranean Sea. We explore its application by evaluating its predictive performance and assess it as a tool to provide deeper information on predictor effects at different carbon contents, as well as to produce reliable soil maps.
Figure 5: Boxplots of estimated beta coefficients for each category of Land Use. These results are based on the final model with 10000 bootstrap replicates, and plotted with respect to the quantile level \\(\\tau=0.05,\\ldots,0.95\\). The blue line represents 0 (i.e., no effect), while the red curves are 95% pointwise confidence intervals. Numbers between parentheses correspond to the Corine 2000 codes. In particular, Mixed ecosystem corresponds to Land principally occupied by agriculture, with significant areas of natural vegetation (Corine 243).
In terms of predictive skills, QR shows comparable results (maximum R\\({}^{2}\\) of 0.49, in Figure 2) to those obtained with Stochastic Gradient Treeboost (R\\({}^{2}\\) of 0.47, Schillaci et al. (2017b)) using the same dataset.
Other experiments show equivalent or worse performances. Yigini and Panagos (2016) obtain an \\(R^{2}\\) coefficient of 0.40 at the European scale with regression-kriging, whereas Meersmans et al. (2008) report an \\(R^{2}\\) coefficient of 0.36 with multiple regression and Nussbaum et al. (2014) R\\({}^{2}\\) of 0.35, both at regional scales. Quality metric based on the quantile loss highlights a decreased performance near the left and right tails of the SOC stock distribution.
The simple model intercept (Figure 3) shows values bounded between 10 and 130 \(t/ha\), which are in line with the original dataset and, interestingly, exhibit a very low variability. This implies that the contribution of _Non-irrigated arables_, _Clay loam_, and, to a lesser extent, _Plains_ is very strong. Notably, the intercept of the final model (Figure 4),
Figure 6: Boxplots of estimated beta coefficients for each category of Texture. These results are based on the final model with 10000 bootstrap replicates, and plotted with respect to the quantile level \\(\\tau=0.05,\\ldots,0.95\\). The blue line represents 0 (i.e., no effect), while the red curves are 95% pointwise confidence intervals.
that also bears the effects of the _Non-irrigated arables_, _Clay loam_ and _Plains_, shows values very similar to the simple model but a higher variability. This implies that the greater model complexity due to the inclusion of other predictors (both for continuous and categorical) can produce high ranges of variation in the SOC stock.
_Mean Annual Rainfall_ and _log(Catchment Area)_ coefficients are constantly positive, confirming the influence of soil moisture on carbon sequestration as reported in several articles (e.g., Saiz et al., 2012). Nonetheless, these results partly disagree with Schillaci et al. (2017b), who found a scarce, but still positive, influence of the untransformed CA on the SOC stock of the same area, with a method capable of handling non-Gaussian distributed data. This difference points at the need to transform data even for non-strictly statistical predictive methods.
In contrast to _Mean Annual Rainfall_ and _log(Catchment Area)_, _Mean Annual Temperature_ shows negative and slightly varying beta coefficients across the whole SOC distribution. Recent surveys clearly highlight the balance between temperature and rainfall in shaping the amount and variation of background SOC and SOC stocks (Davidson et al., 2000; FAO, 2017; Schillaci et al., 2017).
However, the community still debates whether the temperature should have a positive correlation with SOC stocks (e.g., Conant et al., 2011, Sierra et al., 2015). In the present work, the strong and negative effect of the temperature supports the hypothesis that temperature negatively affects SOC accumulation in agricultural soils of Mediterranean areas even when
Figure 7: Boxplots of estimated beta coefficients for each category of Landform Classification. These results are based on the final model with 10000 bootstrap replicates, and plotted with respect to the quantile level \\(\\tau=0.05,\\ldots,0.95\\). The blue line represents 0 (i.e., no effect), while the red curves are 95% pointwise confidence intervals.
SOC or rainfall or both are high. This could depend on the erratic nature of rainfall and thus of water availability, which can remain low even at high rainfall when part of the rain is lost by runoff (Panagos et al., 2017). The unclear but apparently low temperature effect and the clear, positive rainfall effect at the lowest quantiles also suggest that when SOC is low, management of water availability rather than temperature mitigation should be put in place.
Ultimately, _SL_ beta coefficients across quantiles are almost constantly negative confirming the influence of erosion on carbon stocks (Olson et al., 2016).
From textural classes a general positive trend for mixed granulometries emerges. This is typical for Sicilian soils as sand classes do not have the capacity to fix organic matter while purely clayey soils are extremely variable. A peculiar effect actually characterizes the Clay class with a positive beta coefficient sign from quantile 0.05 to 0.50 aligning to zero values from the median to the 95 percentile. This can be interpreted as a strong clay protective
Figure 8: Predictive maps (left side) together with their associated variability (right side). The latter is measured as the interquartile range, i.e., the distance between the 75% and the 25% quantiles, calculated from the 10000 cross-validated maps. Greyed out regions correspond to no-data zones.
effect for small carbon contents up to a limit where other factors need to interplay in order to further increase the carbon fixation/absorption (Badagliacca et al., 2017; Grimm et al., 2008; Mondal et al., 2016).
Among the different land uses, strong positive relations can be recognized for _Vineyards_, _Olive Groves_, _Land principally occupied by agriculture, with significant areas of natural vegetation_, _Natural Grassland_ and _Sclerophyllous vegetation_. Vicente-Vicente et al. (2016) report carbon sequestration rates of 0.78 tC ha\({}^{-1}\) yr\({}^{-1}\) in Mediterranean vineyards. Similarly, Farina et al. (2017) suggest a potential SOC stock increase of 40.2% and 13.5% for vines and olives, respectively, in environments similar to those considered in this study. In our work, such a positive effect was found also at the lowest boundary of the SOC distribution. This has a direct implication for land use management when aiming to increase SOC in such fragile ecosystems compared to arables. In Sicily, arables are mostly winter cereals and grain legumes, which respectively reduce N availability for the microorganisms and have few residues.
Similarly, the positive effects of _Land principally occupied by agriculture, with significant areas of natural vegetation (Corine 243)_ suggest that in-field and in-farm crop and landscape and environmental diversification can also favor SOC accumulation irrespective of the initial
Figure 9: Available SOC-stock spatial-predictive maps in Sicily: Q50 corresponds to our median prediction, ISRIC is the SOC stock map from the International Soil Reference and Information Centre whereas JRC-EU and JRC-GL are the SOC stock benchmarks produced from the Joint Research Centre at the European and Global scale, respectively. Greyed out regions correspond to no-data zones.
SOC levels in semi-arid Mediterranean environments, as also found in continental north-European areas by Kaczynski et al. (2017). Their work covers the time window between 1971 and 2013, during which the authors highlight a marked increase in SOC stock from 2001 onwards, coinciding with crop production, as very high yields provided a very high input of carbon from crop residues. With respect to _Land principally occupied by agriculture, with significant areas of natural vegetation_, Tian et al. (2016) conducted a study in China to estimate carbon sequestration under different grassland quality conditions. Their conclusions show that the average sequestration rate was \(0.04\cdot 10^{12}\ kgC\cdot ha^{-1}\) and that this rate increases as the grassland quality increases, which also depends on the diversification of its composition.
As regards the Sclerophyllous vegetation, other studies have highlighted its contribution to SOC even in Mediterranean contexts (Munoz-Rojas et al., 2013).
In terms of soil mapping, the four maps (our median and the three benchmarks) agree in depicting higher SOC stock levels around the Etna volcano and generally at the foothills. This may be interpreted as a result of particle transport where Carbon-rich soil from reliefs are eroded and deposited at the bottom of mountain ranges and/or different geological substrates producing soils with contrasting ability to retain organic C (Costantini and L'Abate, 2016; Mondal et al., 2016). A similar agreement is produced in the central portion of the
Figure 10: Bivariate comparison between median and available benchmark. The first row shows a density scatterplot between our predicted median map and the three available benchmarks in each column. The second row presents the same information compressed in a quantile-quantile plot. The third row summarises the residuals. Red dashed lines correspond to linear fits with regression coefficients equal to 1.
study area, but with lower SOC concentrations. Conversely, the southeastern sector is shown to carry high SOC stocks in three maps, with the exception of the European JRC, whereas the Global JRC depicts less reasonable patterns and ISRIC overestimates the SOC stock with peaks well above any local measurement. Our SOC stock predictive map shows values as reasonable as those of JRC and spatial patterns as reasonable as those of ISRIC. This can clearly be attributed to a higher resolution, because the ISRIC, Global and European JRC products are continental or global and, at such scales, the landscape scale is often not represented. Nevertheless, QR was able to reach this level of detail, suggesting its use for different datasets and modeling scales.
## 5 Conclusion
QR performs similarly to other statistical methods and enables considerations at given sub-domains of the SOC stock distribution. The link between the SOC stock amount and the distribution of some Land Use classes (Vineyards, Olive orchards and Mixed ecosystems (Corine 243)) and the presence of clayey soils was positive and, above all, varied across the SOC distribution. This has direct implications for the management of agriculture at the regional level, since these crops are likely to simultaneously increase the gross income of the area and the ecosystem benefits, such as C sequestration in the soil.
The effects of variables like Vineyards or Clay change significantly through the SOC distribution. This suggests that classical linear regression methods may not recognize this trend and may ultimately generate very different SOC values at high or low carbon contents. Furthermore, advantages can be drawn from an agronomic point of view, as a better understanding of environmental effects at various SOC concentrations can improve management schemes and allow for sequestration-tailored practices that preserve yield and profitability. This paper shows that Quantile Regression has valid and interesting agronomic applications, as observed in a few recent examples (Barnwal and Kotani, 2013; Yu et al., 2016). To promote its applicability and reproducibility, the R code is made available in the Supplementary Materials.
## References
* Ajami et al. (2016) Mohammad Ajami, Ahmad Heidari, Farhad Khormali, Manouchehr Gorji, and Shamsollah Ayoubi. Environmental factors controlling soil organic carbon storage in loess soils of a subhumid region, northern iran. _Geoderma_, 281:1-10, 2016.
* Akpa et al. (2016) Stephen IC Akpa, Inakwu OA Odeh, Thomas FA Bishop, Alfred E Hartemink, and Ishaku Y Amapu. Total soil organic carbon and carbon sequestration potential in nigeria. _Geoderma_, 271:202-215, 2016.
* Araujo et al. (2016) Jane Kelly Silva Araujo, Valdomiro Severino de Souza Junior, Flavio Adriano Marques, Paul Voroney, and Regilene Angelica da Silva Souza. Assessment of carbon storage under rainforests in humic hapludox along a climosequence extending from the atlantic coast to the highlands of northeastern brazil. _Science of The Total Environment_, 568:339-349, 2016.
* Arguelles et al. (2016)Giuseppe Badagliacca, Paolo Ruisi, Robert M. Rees, and Sergio Saia. An assessment of factors controlling n2o and co2 emissions from crop residues using different measurement approaches. _Biology and Fertility of Soils_, 2017.
* Barnwal and Kotani (2013) Prabhat Barnwal and Koji Kotani. Climatic impacts across agricultural crop yield distributions: An application of quantile regression on rice crops in andhra Pradesh, india. _Ecological Economics_, 87:95-109, 2013.
* Chen et al. (2016) Long-Fei Chen, Zhi-Bin He, Xi Zhu, Jun Du, Jun-Jun Yang, and Jing Li. Impacts of afforestation on plant diversity, soil properties, and soil organic carbon storage in a semi-arid grassland of northwestern china. _Catena_, 147:300-307, 2016.
* Conant et al. (2011) Richard T Conant, Michael G Ryan, Goran I Agren, Hannah E Birge, Eric A Davidson, Peter E Eliasson, Sarah E Evans, Serita D Frey, Christian P Giardina, Francesca M Hopkins, et al. Temperature and soil organic matter decomposition rates-synthesis of current knowledge and a way forward. _Global Change Biology_, 17(11):3392-3404, 2011.
* Costantini and L'Abate (2016) Edoardo A.C. Costantini and Giovanni L'Abate. Beyond the concept of dominant soil: Preserving pedodiversity in upscaling soil maps. _Geoderma_, 271:243-253, 2016. ISSN 0016-7061. doi: 10.1016/j.geoderma.2015.11.024.
* Dai et al. (2014) Fuqiang Dai, Qigang Zhou, Zhiqiang Lv, Xuemei Wang, and Gangcai Liu. Spatial prediction of soil organic matter content integrating artificial neural network and ordinary kriging in tibetan plateau. _Ecological Indicators_, 45:184-194, 2014.
* Davidson et al. (2000) Eric A Davidson, Susan E Trumbore, and Ronald Amundson. Biogeochemistry: soil warming and organic carbon content. _Nature_, 408(6814):789-790, 2000.
* Davison and Hinkley (1997) Anthony Christopher Davison and David Victor Hinkley. _Bootstrap methods and their application_, volume 1. Cambridge university press, 1997.
* Fantappie et al. (2010) M Fantappie, G LAbate, and EAC Costantini. Factors influencing soil organic carbon stock variations in italy during the last three decades. In _Land degradation and desertification: assessment, mitigation and remediation_, pages 435-465. Springer, 2010.
* FAO (2017) FAO. Food and agriculture organization of the united nations, rome, italy. In _Global Symposium on Soil Organic Carbon_, 2017.
* Farina et al. (2017) Roberta Farina, Alessandro Marchetti, Rosa Francaviglia, Rosario Napoli, and Claudia Di Bene. Modeling regional soil c stocks and co 2 emissions under mediterranean cropping systems and soil types. _Agriculture, Ecosystems & Environment_, 238:128-141, 2017.
* Grimm et al. (2008) Rosina Grimm, T Behrens, Michael Marker, and Helmut Elsenbeer. Soil organic carbon concentrations and stocks on barro colorado islanddigital soil mapping using random forests analysis. _Geoderma_, 146(1):102-113, 2008.
* Grinand et al. (2017) Clovis Grinand, Guerric Le Maire, Ghislain Vieilledent, H Razakamanarivo, Tiana Razafimbelo, and Martial Bernoux. Estimating temporal changes in soil carbon stocks at ecoregional scale in Madagascar using remote-sensing. _International Journal of Applied Earth Observation and Geoinformation_, 54:1-14, 2017.
* Grinand et al. (2017)IOSS Working Group et al. World reference base for soil resources 2014 international soil classification system for naming soils and creating legends for soil maps. _FAO, Rome_, 2014.
* Henderson et al. (2005) Brent L Henderson, Elisabeth N Bui, Christopher J Moran, and DAP Simon. Australia-wide predictions of soil properties using decision trees. _Geoderma_, 124(3):383-398, 2005.
* Hengl et al. (2014) Tomislav Hengl, Jorge Mendes de Jesus, Robert A MacMillan, Niels H Batjes, Gerard BM Heuvelink, Eloi Ribeiro, Alessandro Samuel-Rosa, Bas Kempen, Johan GB Leenaars, Markus G Walsh, et al. Soilgrids1kmglobal soil information based on automated mapping. _PLoS One_, 9(8):e105992, 2014.
* Hiederer and Kochy (2011) Roland Hiederer and Martin Kochy. Global soil organic carbon estimates and the harmonized world soil database. _EUR_, 79:25225, 2011.
* Hobley et al. (2016) Eleanor U Hobley, Jeff Baldock, and Brian Wilson. Environmental and human influences on organic carbon fractions down the soil profile. _Agriculture, Ecosystems & Environment_, 223:152-166, 2016.
* Hoffmann et al. (2014) U Hoffmann, T Hoffmann, G Jurasinski, S Glatzel, and NJ Kuhn. Assessing the spatial variability of soil organic carbon stocks in an alpine setting (grindelwald, swiss alps). _Geoderma_, 232:270-283, 2014.
* Huang et al. (2014) Ni Huang, Li Wang, Yiqiang Guo, Pengyu Hao, and Zheng Niu. Modeling spatial patterns of soil respiration in maize fields from vegetation and soil property factors with the use of remote sensing and geographical information system. _PloS one_, 9(8):e105150, 2014.
* Kaczynski et al. (2017) Radoslaw Kaczynski, Grzegorz Siebielec, Marjoleine C Hanegraaf, and Hein Korevaar. Modelling soil carbon trends for agriculture development scenarios at regional level. _Geoderma_, 286:104-115, 2017.
* Koenker (2005) Roger Koenker. Quantile regression, 2005.
* Koenker and Jr (1978) Roger Koenker and Gilbert Bassett Jr. Regression quantiles. _Econometrica: journal of the Econometric Society_, pages 33-50, 1978.
* Lacoste et al. (2014) Marine Lacoste, Budiman Minasny, Alex McBratney, Didier Michot, Valerie Viaud, and Christian Walter. High resolution 3d mapping of soil organic carbon in a heterogeneous agricultural landscape. _Geoderma_, 213:296-311, 2014.
* Lugato et al. (2014) Emanuele Lugato, Panos Panagos, Francesca Bampa, Arwyn Jones, and Luca Montanarella. A new baseline of organic carbon stock in european agricultural soils using a modelling approach. _Global change biology_, 20(1):313-326, 2014.
* Lutzow et al. (2006) M v Lutzow, Ingrid Kogel-Knabner, K Ekschmitt, E Matzner, G Guggenberger, B Marschner, and H Flessa. Stabilization of organic matter in temperate soils: mechanisms and their relevance under different soil conditions-a review. _European Journal of Soil Science_, 57(4):426-445, 2006.
* Lutzow et al. (2014)J Meersmans, F De Ridder, Frank Canters, Sarah De Baets, and Marc Van Molle. A multiple regression approach to assess the spatial distribution of soil organic carbon (soc) at the regional scale (flanders, belgium). _Geoderma_, 143(1):1-13, 2008.
* Miller et al. (2016) Bradley A Miller, Sylvia Koszinski, Wilfried Hierold, Helmut Rogasik, Boris Schroder, Kristof Van Oost, Marc Wehrhan, and Michael Sommer. Towards mapping soil carbon landscapes: Issues of sampling scale and transferability. _Soil and Tillage Research_, 156:194-208, 2016.
* Mondal et al. (2016) Arun Mondal, Deepak Khare, and Sananda Kundu. Impact assessment of climate change on future soil erosion and soc loss. _Natural Hazards_, 82(3):1515-1539, 2016.
* Morellos et al. (2016) Antonios Morellos, Xanthoula-Eirini Pantazi, Dimitrios Moshou, Thomas Alexandridis, Rebecca Whetton, Georgios Tziotzios, Jens Wiebensohn, Ralf Bill, and Abdul M Mouazen. Machine learning based prediction of soil total nitrogen, organic carbon and moisture content by using vis-nir spectroscopy. _Biosystems Engineering_, 152:104-116, 2016.
* Mulder et al. (2016) VL Mulder, M Lacoste, AC Richer-de Forges, MP Martin, and D Arrouays. National versus global modelling the 3d distribution of soil organic carbon in mainland france. _Geoderma_, 263:16-34, 2016.
* Munoz-Rojas et al. (2013) Miriam Munoz-Rojas, A Jordan, LM Zavala, FA Gonzalez-Penaloza, D De la Rosa, Rafael Pino-Mejias, and Maria Anaya-Romero. Modelling soil organic carbon stocks in global change scenarios: a carbosoil application. _Biogeosciences_, 10(12):8253, 2013.
* Nussbaum et al. (2014) M Nussbaum, A Papritz, A Baltensweiler, and L Walthert. Estimating soil organic carbon stocks of swiss forest soils by robust external-drift kriging. _Geoscientific Model Development_, 7(3):1197-1210, 2014.
* Olson et al. (2016) Kenneth R Olson, Mahdi Al-Kaisi, Rattan Lal, and Larry Cihacek. Impact of soil erosion on soil organic carbon stocks. _Journal of Soil and Water Conservation_, 71(3):61A-67A, 2016.
* Panagos et al. (2017) Panos Panagos, Cristiano Ballabio, Katrin Meusburger, Jonathan Spinoni, Christine Alewell, and Pasquale Borrelli. Towards estimates of future rainfall erosivity in europe based on redes and worldclim datasets. _Journal of Hydrology_, 548:251-262, 2017.
* Parton et al. (1988) William J Parton, J WB Stewart, and C Vernon Cole. Dynamics of c, n, p and s in grassland soils: a model. _Biogeochemistry_, 5(1):109-131, 1988.
* Pellegrini et al. (2007) S Pellegrini, N Vignozzi, EAC Costantini, and G LAbate. A new pedotransfer function for estimating soil bulk density. In _Changing soils in a changing wold: the soils of tomorrow. Book of abstracts. 5th International congress of European society for soil conservation, Palermo_, pages 25-30, 2007.
* Peng et al. (2015) Yi Peng, Xiong Xiong, Kabindra Adhikari, Maria Knadel, Sabine Grunwald, and Mogens Humlekrog Greve. Modeling soil organic carbon at regional scale by combining multi-spectral images with laboratory spectra. _PloS one_, 10(11):e0142295, 2015.
* Peng et al. (2016)Bruce C Pengelly and Brigitte L Maass. Lablab purpureus (l.) sweet-diversity, potential use and determination of a core collection of this multi-purpose tropical legume. _Genetic Resources and Crop Evolution_, 48(3):261-272, 2001.
* Piccini et al. (2014) Chiara Piccini, Alessandro Marchetti, and Rosa Francaviglia. Estimation of soil organic matter by geostatistical methods: Use of auxiliary information in agricultural and environmental assessment. _Ecological Indicators_, 36:301-314, 2014.
* Ratnayake et al. (2014) RR Ratnayake, T Kugendren, and N Gnanavelrajah. Changes in soil carbon stocks under different agricultural management practices in north sri lanka. _Journal of the National Science Foundation of Sri Lanka_, 42(1), 2014.
* Reijneveld et al. (2009) Arjan Reijneveld, Joke van Wensem, and Oene Oenema. Soil organic carbon contents of agricultural land in the netherlands between 1984 and 2004. _Geoderma_, 152(3):231-238, 2009.
* Rodriguez-Lado and Martinez-Cortizas (2015) Luis Rodriguez-Lado and Antonio Martinez-Cortizas. Modelling and mapping organic carbon content of toposoils in an atlantic area of southwestern europe (galicia, nw-spain). _Geoderma_, 245:65-73, 2015.
* Rossel and Bouma (2016) Raphael A Viscarra Rossel and Johan Bouma. Soil sensing: A new paradigm for agriculture. _Agricultural Systems_, 148:71-74, 2016.
* Saiz et al. (2012) Gustavo Saiz, Michael I Bird, Tomas Domingues, Franziska Schrodt, Michael Schwarz, Ted R Feldpausch, Elmar Veenendaal, Gloria Djagbletey, Fidele Hien, Halidou Compaore, et al. Variation in soil carbon stocks and their determinants across a precipitation gradient in west africa. _Global change biology_, 18(5):1670-1683, 2012.
* Sammut and Webb (2010) Claude Sammut and Geoffrey I. Webb, editors. _Leave-One-Out Cross-Validation_, pages 600-601. Springer US, Boston, MA, 2010. ISBN 978-0-387-30164-8. doi: 10.1007/978-0-387-30164-8_469.
* Schillaci et al. (2017a) Calogero Schillaci, Marco Acutis, Luigi Lombardo, Aldo Lipani, Maria Fantappie, Michael Marker, and Sergio Saia. Spatio-temporal topsoil organic carbon mapping of a semi-arid mediterranean region: The role of land use, soil texture, topographic indices and the influence of remote sensing data to modelling. _Science of The Total Environment_, 601:821-832, 2017a.
* Schillaci et al. (2017b) Calogero Schillaci, Luigi Lombardo, Sergio Saia, Maria Fantappie, Michael Marker, and Marco Acutis. Modelling the topsoil carbon stock of agricultural lands with the stochastic gradient treeboost in a semi-arid mediterranean region. _Geoderma_, 286:35-45, 2017b.
* Sierra et al. (2015) Carlos A Sierra, Susan E Trumbore, Eric A Davidson, Sara Vicca, and I Janssens. Sensitivity of decomposition rates of soil organic matter with respect to simultaneous changes in temperature and moisture. _Journal of Advances in Modeling Earth Systems_, 7(1):335-356, 2015.
* Sierra et al. (2016)Kandrika Sreenivas, VK Dadhwal, Suresh Kumar, G Sri Harsha, Tarik Mitran, G Sujatha, G Janaki Rama Suresh, MA Fyzee, and T Ravisankar. Digital mapping of soil organic and inorganic carbon status in india. _Geoderma_, 269:160-173, 2016.
* Taghizadeh-Mehrjardi et al. (2016) R Taghizadeh-Mehrjardi, K Nabiollahi, and R Kerry. Digital mapping of soil organic carbon at multiple depths using different data mining techniques in baneh region, iran. _Geoderma_, 266:98-110, 2016.
* Tian et al. (2016) Zheng Tian, Xiuqin Wu, Erfu Dai, and Dongsheng Zhao. Soc storage and potential of grasslands from 2000 to 2012 in central and eastern inner Mongolia, china. _Journal of Arid Land_, 8(3):364-374, 2016.
* Vicente-Vicente et al. (2016) Jose Luis Vicente-Vicente, Roberto Garcia-Ruiz, Rosa Francaviglia, Eduardo Aguilera, and Pete Smith. Soil carbon sequestration rates under mediterranean woody crops using recommended management practices: a meta-analysis. _Agriculture, Ecosystems & Environment_, 235:204-214, 2016.
* West and Wali (2002) Tristram O West and Mohan K Wali. Modeling regional carbon dynamics and soil erosion in disturbed and rehabilitated ecosystems as affected by land use and climate. _Water, Air, & Soil Pollution_, 138(1):141-164, 2002.
* Yigini and Panagos (2016) Yusuf Yigini and Panos Panagos. Assessment of soil organic carbon stocks under future climate and land cover changes in europe. _Science of the Total Environment_, 557:838-850, 2016.
* Yu et al. (2016) Yang Yu, David Makowski, Tjeerd Jan Stomph, and Wopke van der Werf. Robust increases of land equivalent ratio with temporal niche differentiation: A meta-quantile regression. _Agronomy Journal_, 108(6):2269-2279, 2016. doi: 10.2134/agronj2016.03.0170.

**Abstract:** Soil Organic Carbon (SOC) estimation is crucial to manage both natural and anthropic ecosystems and has recently been put under the magnifying glass after the 2016 Paris agreement due to its relationship with greenhouse gases. Statistical applications have dominated SOC stock mapping at the regional scale so far. However, the community has hardly ever attempted to implement Quantile Regression (QR) to spatially predict the SOC distribution. In this contribution, we test QR to estimate SOC stock (0-30 \(cm\) depth) in the agricultural areas of a highly variable semi-arid region (Sicily, Italy, around 25,000 \(km^{2}\)) by using topographic and remotely sensed predictors. We also compare the results with available SOC stock benchmarks. The QR models produced robust performances and allowed us to recognize dominant effects among the predictors with respect to the considered quantile. This information, currently lacking, suggests that QR can discern predictor influences on SOC stock at specific sub-domains of each predictor. In this work, the predictive map generated at the median shows lower errors than those of the Joint Research Centre and International Soil Reference and Information Centre benchmarks. The results suggest the use of QR as a comprehensive and effective method to map SOC using legacy data in agro-ecosystems. The R code scripted in this study for QR is included.
# Weakly Supervised Semantic Segmentation of Satellite Images
Adrien Nivaggioli
_Qwant Research_
Paris, France
[email protected]
Hicham Randrianarivo
_Qwant Research_
Paris, France
[email protected]
## I Introduction
Semantic segmentation of satellite and aerial images could be incredibly helpful for fields like urban planning, disaster recovery, autonomous agriculture, environmental monitoring and many others. We now have access to large databases filled with more images than any manual method could handle (such as the USGS Earth Explorer1, ESA's Sentinel Mission2 or NASA's Earthdata Search3). The need for an automated study of those images is obvious, and there is an urgent demand for tools and methods that allow automatic interpretation of this huge amount of data. In the last few years, deep learning has become the essential tool for solving this kind of problem [8, 10]. For the task of semantic segmentation, several successful methods have appeared recently, like SegNet [3], FCN [18], U-Net [17] or PSPNet [22], with great results in satellite and aerial imagery [2, 13, 19].
Footnote 1: [https://earthexplorer.usgs.gov/](https://earthexplorer.usgs.gov/)
Footnote 2: [https://sentinel.esa.int/web/sentinel/home](https://sentinel.esa.int/web/sentinel/home)
Footnote 3: [https://search.earthdata.nasa.gov/](https://search.earthdata.nasa.gov/)
One of the biggest difficulties faced by those methods is the lack of pixel-level labels for those images. Indeed, the process of manually annotating each image is tedious. This is also an issue when trying to perform semantic segmentation of regular images, like those of the Pascal VOC [7] or the Common Objects in COntext (COCO) datasets [12]. To address this issue, methods relying on weaker types of labels, called Weakly Supervised Learning (WSL), started to appear. Those methods can use simple bounding-box annotations [9, 15], scribbles [11, 20], points [5], or a simple image-level class label [14, 16].
The method we use is divided into 4 steps (c.f. fig. 1) and is based on the work of Ahn and Kwak [1]. First, a classification network is trained using image-level annotations. Then, a second network, called an affinity network, is trained to learn the relationships between a pixel and its neighbourhood. After this, a random walk is performed, combining Class Activation Maps (CAM) and affinity labels to produce the segmentation labels. Finally, we can use the segmentation labels produced by the method to train a segmentation model.
**Contributions:** We use a new backbone better adapted to satellite imagery. We conduct several experiments on the loss function proposed by [1] to adapt its hyperparameters for satellite and aerial imagery. We train several state-of-the-art segmentation models (SegNet [3], PSPNet [22] and U-Net [17]) to validate the quality of the generated labels for the training of segmentation models. We modify the method in order to perform semantic segmentation using only the classification and the affinity networks (c.f. fig. 1). Finally, we compare our semantic segmentation results with fully supervised approaches on the validation set of the DEEPGLOBE dataset [6] and study the trade-off between information and quality of segmentation.
## II Method
In this section, we present the approach we follow for weakly based semantic segmentation. The method can be split into 4 different parts as shown in fig. 1 : Classification Network, Affinity Network, Random Walk and Segmentation Network.
### _Classification Network_
The first step of the method is the training of a classification network that will be used to identify the different categories present in the images. The main idea is to use the capacity of the CAMs that can be extracted from the Convolutional Neural Network (CNN) to localize the areas of the image that most influence the classification result [4, 16, 23]. For each image, the classification network will produce a set of CAMs \(\mathcal{M}=\{M_{c}\,|\,\,c\in\mathcal{C}\}\), each CAM \(M_{c}\) corresponding to the activation map of the class \(c\). Additionally, another CAM \(M_{bg}\) is defined to localize the background in the image (1).
\\[M_{bg}(x,y)=\\{1-\\max_{c\\in\\mathcal{C}}M_{c}(x,y)\\}^{\\alpha} \\tag{1}\\]
Usually, there is no background in aerial images, but this CAM prevents the network from making unsure predictions. Even if this means that a smaller portion of the original dataset will be usable to train the segmentation network, we favour precision over quantity. With that in mind, we conducted all of our experiments both with and without the background, in order to compare the results and assess which approach fits our case best.
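A minimal sketch of how the background map of Eq. (1) can be appended to the class CAMs is given below (a PyTorch-style illustration; the tensor layout and the normalisation of the CAMs to [0, 1] are assumptions).

```python
import torch

def add_background_cam(cams, alpha):
    """cams: tensor of shape (C, H, W) with class CAMs normalised to [0, 1].

    Returns a (C + 1, H, W) tensor whose last channel is the background CAM
    M_bg = (1 - max_c M_c)^alpha of Eq. (1).
    """
    cams = cams.clamp(0, 1)
    m_bg = (1.0 - cams.max(dim=0).values) ** alpha
    return torch.cat([cams, m_bg.unsqueeze(0)], dim=0)

cams = torch.rand(6, 64, 64)          # 6 DeepGlobe-style classes, placeholder values
with_bg = add_background_cam(cams, alpha=16)
labels = with_bg.argmax(dim=0)        # per-pixel class (index 6 = background / unsure)
```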
### _Affinity Network_
The second part of the method consists in training an Affinity Network which models relationships between pairs of pixels \\((i,j)\\) in the image. The AffinityNet is designed to extract a convolutional feature map \\(f^{\\text{aff}}\\) where each element can be seen as an affinity feature. The affinity between two pixels \\((i,j)\\) denoted by \\(W_{ij}\\) is defined as the similarity between the affinity features:
\\[W_{ij}=\\exp\\{-\\|f^{\\text{aff}}(x_{i},y_{i})-f^{\\text{aff}}(x_{j},y_{j})\\|_{1}\\} \\tag{2}\\]
Nevertheless, the affinity labels needed to train the AffinityNet are not directly available. A way to generate these affinity labels is to use the CAMs as partial sources of supervision. The values of the CAMs are used as a confidence score to determine whether a pixel belongs to a category \(c\) or not. A high \(\alpha\) is used to assess confident regions of classes \(c\in\mathcal{C}\), and a lower one is used to assess the areas the network is the most unsure about. The pixels that are left are considered neutral. Using the CAMs with high confidence, a binary label is given to each pair of pixels: if their classes are the same, their affinity label \(W_{ij}^{*}\) is \(1\), and \(0\) otherwise. The pairs containing a neutral pixel are considered neutral and ignored during the training (c.f. fig. 2).
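The sketch below illustrates one simple way of turning CAM scores into the three-way seed labels described above (confident foreground, confident background, neutral). The exact rule used to combine the two values of \(\alpha\) is not spelled out here, so the thresholding scheme and its values are assumptions made purely for illustration.

```python
import torch

def affinity_seed_labels(cams_with_bg, fg_thresh=0.3, bg_thresh=0.9):
    """cams_with_bg: (C + 1, H, W) scores, last channel = background (Eq. 1).

    Returns an (H, W) map with a class index on confident pixels and -1 (neutral)
    elsewhere; pairs drawn from neutral pixels are ignored during training.
    """
    n_cls = cams_with_bg.shape[0]
    scores, labels = cams_with_bg.max(dim=0)
    seeds = torch.full(labels.shape, -1, dtype=torch.long)      # neutral by default
    fg = (labels < n_cls - 1) & (scores > fg_thresh)             # confident foreground
    bg = (labels == n_cls - 1) & (scores > bg_thresh)            # confident background
    seeds[fg] = labels[fg]
    seeds[bg] = n_cls - 1
    return seeds

# A pair (i, j) then gets affinity label 1 if seeds[i] == seeds[j] != -1,
# 0 if both pixels are confident but of different classes, and is skipped otherwise.
```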
From there, we compute for each coordinate of the image its affinities with the other coordinates within a circle of fixed radius \\(\\gamma\\). The set of pairs of coordinates that have an affinity is defined by :
\[\mathcal{P}=\{(i,j)\,|\,\,d((x_{i},y_{i}),(x_{j},y_{j}))<\gamma,\ \forall i\neq j\} \tag{3}\]
The set \(\mathcal{P}\) is then divided into two subsets \(\mathcal{P}^{+}\) and \(\mathcal{P}^{-}\), where \(W_{ij}^{*}=1\) and \(W_{ij}^{*}=0\) respectively. The subset \(\mathcal{P}^{+}\) is further divided into two subsets, \(\mathcal{P}_{fg}^{+}\) for the foreground and \(\mathcal{P}_{bg}^{+}\) for the background regions.
Fig. 1: Inference pipeline of the original Affinity-Net, that requires both an image and a label as inputs. We propose to remove the dotted parts to transform the Affinity-Net into an independent segmentation network.
Fig. 2: Generating Affinity labels. The areas correspond to confident prediction of _Agriculture_ in yellow and _Forest_ in green. The black area is the _background_ and the white is _neutral_.
The model is trained using the following loss function :
\\[\\mathcal{L}=\\frac{\\mathcal{L}_{fg}^{+}}{a}+\\frac{\\mathcal{L}_{bg}^{+}}{b}+\\frac{ \\mathcal{L}^{-}}{c} \\tag{4}\\]
with
\\[\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}=1 \\tag{5}\\]
\\[\\mathcal{L}_{fg}^{+} =-\\frac{1}{|\\mathcal{P}_{fg}^{+}|}\\sum_{(i,j)\\in\\mathcal{P}_{fg}^{ +}}\\log W_{i,j} \\tag{6}\\] \\[\\mathcal{L}_{bg}^{+} =-\\frac{1}{|\\mathcal{P}_{bg}^{+}|}\\sum_{(i,j)\\in\\mathcal{P}_{bg}^{ +}}\\log W_{i,j}\\] (7) \\[\\mathcal{L}^{-} =-\\frac{1}{|\\mathcal{P}^{-}|}\\sum_{(i,j)\\in\\mathcal{P}^{-}}\\log \\left(1-W_{i,j}\\right) \\tag{8}\\]
This loss is quite similar to the one used by the authors of [1], but they suggest that \(a\) should be equal to \(b\). Since we want to be able to penalize the background further, we slackened this constraint: [1] used eq. 4 with \(a=b=4,c=2\), whereas we used \(a=6,b=2,c=3\).
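Written out, the class-balanced affinity loss of Eqs. (2) and (4)-(8) looks as follows. This is a PyTorch sketch under the assumption that the three pair sets have already been gathered as index tensors; the small constant is added only for numerical stability, and \(1/a+1/b+1/c=1\) holds for the values above, consistently with Eq. (5).

```python
import torch

def affinity_loss(feat, pairs_fg, pairs_bg, pairs_neg, a=6.0, b=2.0, c=3.0):
    """feat: (N, D) affinity features, one row per position of the feature map.

    pairs_*: (M, 2) index tensors into feat for P+_fg, P+_bg and P-.
    a, b, c follow Eq. (4); a=6, b=2, c=3 are the values used in this work.
    """
    eps = 1e-5

    def affinity(pairs):
        # Eq. (2): W_ij = exp(-||f_i - f_j||_1)
        d = (feat[pairs[:, 0]] - feat[pairs[:, 1]]).abs().sum(dim=1)
        return torch.exp(-d)

    l_fg = -torch.log(affinity(pairs_fg) + eps).mean()           # Eq. (6)
    l_bg = -torch.log(affinity(pairs_bg) + eps).mean()           # Eq. (7)
    l_neg = -torch.log(1.0 - affinity(pairs_neg) + eps).mean()   # Eq. (8)
    return l_fg / a + l_bg / b + l_neg / c                       # Eq. (4)
```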
Those labels can then be used to perform a random walk on the original CAMs in order to get proper segmentation labels.
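Schematically, this propagation multiplies the vectorised CAMs by a transition matrix derived from the predicted affinities. The sketch below uses a dense affinity matrix for readability (in practice it is sparse and restricted to the radius \(\gamma\)); the number of iterations and the sharpening exponent are assumptions, not values prescribed by this paper.

```python
import torch

def random_walk_refine(cams, affinity, n_iters=4, beta=8):
    """cams: (C, H, W) activation maps; affinity: (H*W, H*W) pairwise affinities."""
    c, h, w = cams.shape
    trans = affinity ** beta                                   # sharpen the affinities
    trans = trans / (trans.sum(dim=1, keepdim=True) + 1e-5)    # row-normalise -> transition matrix
    vec = cams.reshape(c, h * w)                               # one row per class
    for _ in range(n_iters):
        vec = vec @ trans.t()                                  # diffuse each class map along affinities
    return vec.reshape(c, h, w)
```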
From there, the original method uses those labels to train a regular segmentation network. We compared the results of 3 different networks, U-Net [17], PSPNet [22] and SegNet [3] using our generated labels.
We also tried a new approach that allows us to directly perform semantic segmentation with the trained classification and affinity networks. Indeed, [1] proposes to set to 0 the CAMs of the classes we know are not present in the image. While this increases the quality of the segmentation labels, it prevents us from using the AffinityNet on images for which we do not have any labels. To assess which CAMs need to be kept, we use the confidence scores output by the classification network.
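A sketch of this inference-time selection is shown below: class CAMs whose image-level classification score falls under a threshold are simply zeroed out before the random walk. The threshold value and the use of a sigmoid over multi-label logits are assumptions made for illustration.

```python
import torch

def select_cams(cams, class_logits, threshold=0.5):
    """cams: (C, H, W); class_logits: (C,) image-level logits from the classifier.

    Instead of relying on ground-truth image labels to keep or discard CAMs,
    keep only the classes the classifier itself is confident about.
    """
    keep = (torch.sigmoid(class_logits) > threshold).float()   # (C,)
    return cams * keep[:, None, None]
```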
As a backbone network, we used an extension of ResNet38 [21], composed of 74 convolutional layers (as opposed to 38 in ResNet38). We found that a deeper network greatly improved our performance, presumably because aerial images have a lower variance than regular ones, which makes it more difficult for the network to differentiate the classes.
## III Experiments and Results
### _Dataset_
We used the DeepGlobe dataset [6], composed of 803 aerial images with pixel-level labels. We kept 562 for training and the remaining 241 for validation. Each image has a size of 2448x2448 pixels and contains at least one of the following categories: Barren, Water, Urban, Forest, Agriculture, Rangeland, and Unknown. We split each image into 64 patches of size 306x306. Although the dataset comes with pixel-wise annotations, we reduced those labels to image level (i.e., which categories are present in the image). This means that no information about the localization and/or distribution of the categories is present in the final dataset. The DeepGlobe dataset also offers 171 aerial images without any labels that we used for testing. Indeed, we can upload our results to their CodaLab competition4 and compare our score to others, all of them using fully-supervised techniques.
Footnote 4: [https://competitions.codalab.org/competitions/18468](https://competitions.codalab.org/competitions/18468)
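As an illustration, the patching and label-reduction step described above could be implemented as follows; the helper name and the use of NumPy masks are ours, not part of the DeepGlobe toolkit, and mask values are assumed to lie in [0, num_classes).

```python
import numpy as np

def split_and_reduce(image, mask, num_classes=7, patch=306):
    """Split a 2448x2448 tile into 64 patches of 306x306 and keep only image-level labels (sketch)."""
    patches, labels = [], []
    n = image.shape[0] // patch                  # 2448 // 306 = 8 patches per side
    for r in range(n):
        for c in range(n):
            sl = np.s_[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            patches.append(image[sl])
            present = np.zeros(num_classes, dtype=np.float32)
            present[np.unique(mask[sl])] = 1.0   # which categories appear in the patch
            labels.append(present)               # localization information is discarded
    return patches, labels
```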
### _Quality of Segmentation Labels_
We used the training dataset to train both the classification network and the Affinity Network. Then, we generated segmentation labels from the validation dataset. Because we have the pixel-level labels of those images (even if they were not used during training), we can compare the quality of our segmentation labels to the ground truth provided by DeepGlobe, as shown in Fig. 3. [1] showed that their method was fairly insensitive to hyperparameters, but our dataset of aerial images is different from regular images, mostly because it has no background class. With that in mind, we tried modifying the loss of the AffinityNet, decreasing the background CAMs, and removing the background completely. We measured the precision and the recall of the predictions. The results show that removing the background altogether gives almost the best precision, but with a far better recall (cf. Table I).
Furthermore, we use the best generated labels to train three different segmentation networks, and compare the results with fully-supervised training of the same networks (cf. Table II).
### _Semantic Segmentation_
If we follow the method described in [1], we cannot create the Segmentation Labels without providing the image-level labels, even after training. We modified the method of [1] to directly perform the segmentation, without having to train
\\begin{table}
\\begin{tabular}{l l c}
\\hline \\hline
Method & Labels & Score (mIoU) \\% \\\\
\\hline
PSPNet & Full labels & 43.99 \\\\
PSPNet & Weak labels & 42.97 \\\\
U-Net & Full labels & 42.44 \\\\
U-Net & Weak labels & 39.25 \\\\
SegNet & Full labels & 37.10 \\\\
SegNet & Weak labels & 37.76 \\\\
\\hline \\hline
\\end{tabular}
\\end{table} TABLE II: Results of semantic segmentation on the DeepGlobe land classification challenge. Full labels are the pixel-level labels provided for the challenge; weak labels are the labels produced by our weakly supervised model.
\\begin{table}
\\begin{tabular}{c c c}
\\hline \\hline
\\(\\alpha\\) & Precision & Recall \\\\
\\hline
4 & 85.22 & 60.40 \\\\
16 & 86.52 & 66.47 \\\\
32 & 86.85 & 67.28 \\\\
\\(\\infty\\) & 88.38 & 87.31 \\\\
\\hline \\hline
\\end{tabular}
\\end{table} TABLE I: Quality of the segmentation labels produced by the AffinityNet. Parameters of Eq. 5 are \\(a=6\\), \\(b=2\\), \\(c=3\\). \\(\\alpha=\\infty\\) means the background has not been taken into account.
a separate segmentation network. We were able to evaluate our results and compare them to the best fully-supervised approaches of the DeepGlobe competition (cf. Table III).
## IV Conclusion
We adapted the method proposed in [1] to our dataset of aerial images. The quality of the pixel-level labels generated from image-level labels is quite good, and both label sets offer similar performance when used to train several segmentation networks. This means that less information in a dataset does not necessarily imply inferior results.
Furthermore, the semantic segmentation performed by our modification of the AffinityNet gives remarkable results, close to the best fully supervised ones.
Further work will be done to improve our results, both on the quality of the generated segmentation labels and the direct semantic segmentation.
Weakly supervised semantic segmentation of satellite images is a cornerstone to the transition from geolocalized text information to complete semantic segmentation.
## References
* [1] Jiwoon Ahn and Suha Kwak. Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation. In _IEEE Comput. Vis. Pattern Recognit._, 2018.
* [2] Nicolas Audebert, Bertrand Le Saux, and Sebastien Lefevre. Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps. In _IEEE Conf. Comput. Vis. Pattern Recognit._, 2017.
* [3] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. _IEEE Trans. Pattern Anal. Mach. Intell._, 2017.
* [4] Loris Bazzani, Alessandro Bergamo, Dragomir Anguelov, and Lorenzo Torresani. Self-Taught Object Localization with Deep Networks. In _IEEE Winter Conf. Appl. Comput. Vis._, 2016.
* [5] Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei. What's the point: Semantic segmentation with point supervision. _Eur. Conf. Comput. Vis._, 2016.
* [6] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images. Technical report, 2018.
* [7] Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The Pascal Visual Object Classes Challenge: A Retrospective. _Int. J. Comput. Vis._, 2014.
* [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In _IEEE Comput. Vis. Pattern Recognit._, 2016.
* [9] Anna Khoreva, Rodrigo Benenson, Jan Hosang, and Matthias Hein. Simple Does It : Weakly Supervised Instance and Semantic Segmentation. In _IEEE Conf. Comput. Vis. Pattern Recognit._, 2017.
* [10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In _Adv. Neural Inf. Process. Syst._, 2012.
* [11] Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, and Jian Sun. ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation. In _IEEE Conf. Comput. Vis. Pattern Recognit._, 2016.
* [12] TY Lin, Michael Maire, Serge Belongie, and James Hays. Microsoft COCO: Common Objects in Context. In _Eur. Conf. Comput. Vis._, 2014.
* [13] Diego Marcos, Michele Volpi, Benjamin Kellenberger, and Devis Tuia. Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. _ISPRS J. Photogramm. Remote Sens._, 2018.
* [14] Seong Joon Oh, Rodrigo Benenson, Anna Khoreva, Zeynep Akata, Mario Fritz, and Bernt Schiele. Exploiting saliency for object segmentation from image level labels. _IEEE Comput. Vis. Pattern Recognit._, 2017.
* [15] George Papandreou, Liang-Chieh Chen, Kevin Murphy, and Alan L. Yuille. Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation. In _IEEE Int. Conf. Comput. Vis._, 2015.
* [16] Pedro O. Pinheiro and Ronan Collobert. From image-level to pixel-level labeling with Convolutional Networks. _IEEE Conf. Comput. Vis. Pattern Recognit._, 07-12-June:1713-1721, 2015.
* [17] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In _Int. Conf. Med. Image Comput. Comput. Assist. Interv._, 2015.
* [18] Evan Shelhamer, Jonathan Long, and Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation. _IEEE Trans. Pattern Anal. Mach. Intell._, 2016.
* [19] Jamie Sherrah. Fully Convolutional Networks for Dense Semantic Labelling of High-Resolution Aerial Imagery. Technical report, 2016.
* [20] Paul Vernaza and Manmohan Chandraker. Learning random-walk label propagation for weakly-supervised semantic segmentation. _IEEE Conf. Comput. Vis. Pattern Recognit._, 2017.
* [21] Zifeng Wu, Chunhua Shen, and Anton van den Hengel. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Technical report, 2016.
* [22] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. _IEEE Conf. Comput. Vis. Pattern Recognit._, 2017.
* [23] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning Deep Features for Discriminative Localization. In _IEEE Comput. Vis. Pattern Recognit._, 2016.
Fig. 3: Example of segmentation labels. From left to right: Original Image, Ground Truth Label, Our Image-Level label, Predicted Label. Note that in order to generate the predicted label, we used only the image-level annotations and original images.
\\begin{table}
\\begin{tabular}{l c c}
\\hline \\hline
Method & Score (mIoU) & Rank \\\\
\\hline
Deep Aggregation Net & 53.58 & 1 \\\\
Dense Fusion Classmate Network & 52.64 & 2 \\\\
Ours - Weakly Supervised w/o background & 45.90 & 14 \\\\
Ours - Weakly Supervised w/ background & 32.32 & 33 \\\\
\\hline \\hline
\\end{tabular}
\\end{table} TABLE III: Ranking of our weakly-supervised method among fully-supervised ones.

_Abstract:_ When one wants to train a neural network to perform semantic segmentation, creating pixel-level annotations for each of the images in the database is a tedious task. If one works with aerial or satellite images, which are usually very large, it is even worse. With that in mind, we investigate how to use image-level annotations in order to perform semantic segmentation. Image-level annotations are much less expensive to acquire than pixel-level annotations, but we lose a lot of information for the training of the model. From the annotations of the images, the model must find by itself how to classify the different regions of the image. In this work, we use the method proposed by Ahn and Kwak [1] to produce pixel-level annotations from image-level annotations.
We compare the overall quality of our generated dataset with the original dataset.
In addition, we propose an adaptation of the AffinityNet that allows us to perform semantic segmentation directly.
Our results show that the generated labels lead to the same performance when training several segmentation networks. Also, the quality of the semantic segmentation performed directly by the AffinityNet and the random walk is close to that of the best fully-supervised approaches.
_Index Terms:_ Computer vision, weak learning, semantic segmentation, land cover classification
Ziyue Huang, Mingming Zhang, Qingjie Liu,, Wei Wang, Zhe Dong, and Yunhong Wang,
Ziyue Huang, Mingming Zhang, Qingjie Liu, and Yunhong Wang are with the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 1001911, China, and also with the Hangzhou Innovation Institute, Beihang University, Hangzhou 1300151, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).Wei Wang, and Zhe Dong are with the National Disaster Reduction Center of China, Beijing 100124, China (e-mail: [email protected]; [email protected]).
## I Introduction
Building detection plays a pivotal role in various remote sensing applications, including urban planning [1], earthquake disaster reduction [2], and mapping [3]. With the advent of Convolutional Neural Networks (CNNs) [4, 5], many studies [6, 7, 8] have been proposed to effectively detect buildings in remote sensing images. In this paper, we represent buildings using oriented bounding boxes (OBBs) [9] and use the object detection method to extract buildings.
As crucial components of human activities, buildings showcase significant disparities in appearance, shape, structure, and material. Such extensive heterogeneity, combined with the complexity of environments, poses a significant challenge to the detector's accuracy. To overcome the challenge, some studies [10, 11, 12, 13] incorporate contextual information in the detection process. These studies can be broadly categorized into multi-scale feature context modeling and relational context modeling [14]. Multi-scale context feature modeling aims to capture contextual features by utilizing dilated convolution [15] or pyramid pooling [16] to expand the receptive fields of features. However, existing methods [17, 18, 19, 20] lack direct long-range feature interactions [21], which limits their ability to comprehend the overall scene. Relational context modeling analyzes the spatial relationships between instances [12, 22, 23, 24] to improve detection accuracy. However, these methods have certain limitations when applied to building detection. The complex environment of building groups can lead to numerous low-quality or erroneous region proposals, which undermine the validity of relationship reasoning.
To overcome the limitations of current context-enhancement methods, we propose Context-Enhanced Detector (CEDet). CEDet introduces the Semantic Guided Contextual Mining (SGCM) module to facilitate multi-scale feature context modeling. The SGCM incorporates a self-attention mechanism to strengthen long-range feature dependencies. Moreover, SGCM integrates multi-scale features to generate rich semantic features encompassing global context and utilizes a semantic segmentation loss based on pseudo-masks to guide contextual information extraction. To enhance the modeling of relational context, CEDet adopts a multi-stage framework inspired by Cascade R-CNN [25], aiming to improve the quality of region proposals. Finally, CEDet introduces a Context Enhancement OR-CNN Head (CE Head), which separates the classification and regression tasks. CE Head incorporates the Instance Context Mining Module (ICMM) to acquire spatial relational contextual information. This integration significantly enhances the ability to identify buildings within intricate scenes. Our contributions could be summarized as follows:
1. We present Context-Enhanced Detector (CEDet), a novel approach for building detection in remote sensing images. CEDet incorporates a cascade structure to enhance the accuracy of the detection process.
2. To enhance the multi-scale feature context, we propose the Semantic Guided Contextual Mining (SGCM) module. The SGCM utilizes a self-attention mechanism to strengthen long-range feature dependencies. Additionally, a semantic segmentation loss based on pseudo-masks is introduced to guide the extraction of contextual information.
3. The Instance Context Mining Module (ICMM) is introduced to capture contextual information between instances by leveraging spatial relationships. This module significantly improves the detector's ability to identify buildings in complex environments.
4. Our CEDet achieves state-of-the-art (SOTA) performance on the CNBuilding-9P, CNBuilding-23P, and SpaceNet datasets. Compared with the baseline OR-CNN [26], CEDet demonstrates improvements of 2.1% in AP50 on CNBuilding-9P, 2.3% on CNBuilding-23P, and 2.4% on SpaceNet.
## II Related Work
### _Oriented Object Detection_
Oriented object detection has received extensive attention in remote sensing image understanding [26, 27, 28, 29, 30, 31], since the OBB representations can finely capture the appearance of objects in remote sensing images [9]. Ding et al. [27] proposed RoI-Transformer, which introduced rotated RoIAlign to extract spatially invariant features. Xu et al. [32] proposed Gliding Vertex, which employed the offset relative to the circumscribed rectangle to represent oriented objects. Yang et al. [29] proposed R3Det, utilizing the feature alignment module to address the problem of feature inconsistency in oriented detection. Han et al. [33] utilized an invariant rotation network and rotation covariant head to enhance the quality of rotation feature representation. Qin et al. [34] introduced an anchor-free oriented object detection model, achieving a balance between speed and accuracy. Han et al. [28] employed rigid deformable convolution [35] to achieve feature alignment for single-stage detectors. Several studies, including GWD [36], CSL [37], KFIoU [30], and KLD [31], have explored effective regression loss for oriented object detection. These methods aimed to address the challenges of discontinuity and aliasing in OBB representation. Li et al. [38] improved Reppoints [39] to enable oriented detection. Xie et al. [26] proposed oriented R-CNN by utilizing oriented RPN to generate high-quality rotation proposals, simplifying the pipeline of the two-stage detector. Experimental results demonstrate that oriented R-CNN achieves a favorable balance between speed, accuracy, and simplicity. Hence, we adopt oriented R-CNN as our baseline.
### _Building Segmentation_
Building extraction has been the focus of numerous methods due to the potential value of building information in various applications. Semantic segmentation serves as a typical task setting for building extraction. Alshehhi et al. [40] proposed a patch-based CNN architecture for building segmentation. Hamaguchi et al. [8] presented a multi-task U-Net network that incorporates contextual information through road extraction tasks. A Multi-scale learning strategy is also adopted to address the problem of scale variation. Yang et al. [41] designed a dense-attention network based on DenseNet [42]. They combined a spatial attention mechanism with DenseNet to fuse features at different scales dynamically. Griffiths et al. [43] integrated multi-source data (optical, lidar, and building footprint) and employed Morphological Geodesic Active Contours to enhance the quality of annotations. Wei et al. [44] proposed an improved fully convolutional network to obtain building footprint masks, which were then transformed into polygons using a polygon regularization algorithm. Zhu et al. [45] presented an adaptive polygon generation algorithm (APGA) that generates sequences of building vertices and arranges them to form polygons. Hosseinpour et al. [46] proposed CMGFNet, which incorporates digital surface model features with conventional optical features to obtain robust features. Visual transformer (ViT) [47] has demonstrated effective modeling of global dependencies. Based on this, Wang et al. [48] introduced BuildFormer, a method that leverages ViT to extract vital global features for finer building segmentation. Zorzi et al. [49] utilized Graph Neural Network (GNN) to propagate global information to vertices, enhancing detection accuracy. While building extraction based on semantic segmentation has made significant progress, it is essential to note that the masks predicted by semantic segmentation models may not effectively extract individual building instances. This limitation restricts the applicability of these methods in real-world scenarios.
### _Building Detection_
Building detection is more challenging than building segmentation, but it offers valuable instance-level information, which is crucial for downstream applications. PolyRNN [50] and PolyRNN++ [51] have approached instance-level segmentation as a contour polygon prediction task, inspiring subsequent building detection research. Li et al. [52] introduced PolyMapper, a serialization method that converts the graph structure of roads into a sequential structure, enabling simultaneous extraction of building footprints and roads. Xu et al. [53] proposed a centroid-aware head based on Mask R-CNN [54]. Li et al. [55] developed a hybrid model for building polygon extraction, utilizing multiple networks to obtain bounding boxes, segmentation masks, and corners of buildings, and employing Delaunay triangulation to construct building polygons. Liu et al. [56] improved HTC [57] with a more robust backbone and dynamic anchor strategy to achieve high-quality building extraction. Hu et al. [58] proposed a transformer-based polygonal building extraction model using a two-phase strategy for multi-task training. Zhao et al. [59] integrated a rotated detector into building detection, introducing RotSegNet. Rotation detection enables the extraction of rotation-invariant features and effective detection of buildings in dense areas. Compared to the segmentation methods, we are oriented to solve the problem of building detection in complex scenes, which can be further combined with the segmentation methods.
### _Context-Enhancement Method_
The incorporation of contextual information has been proven to be effective in improving the performance of detection [10, 22, 23, 24, 12, 20] and segmentation [11, 13, 14, 15, 17, 18, 60]. For instance, Chen et al. [15] proposed the ASPP module, which employs multiple parallel atrous convolutions to capture long-distance context. Jin et al. [14] introduced ISNet, which incorporates semantic-level and image-level contextual information to augment the features. Both \\(A^{2}\\)-FPN [18] and CATNet [17] proposed complex multi-scale feature interaction modules and employed global context aggregation to model global dependency. Zhou et al. [13]enhanced the network's ability to reason about contextual information by incorporating multi-scale features and graph reasoning. Zheng et al. [60] focused on modeling semantic relationships between pixels and proposed the HSDN, utilizing these semantic relations for semantic decoupling and feature enhancement. Chen et al. [11] explicitly established contextual relationships between low-level and high-level features.
Context modeling between instances is more common in detection tasks because the region features used by the two-stage detectors [26, 61] naturally correspond to instance features. Hu et al. [22] proposed an attention mechanism to automatically model the semantic and spatial relationships between regional features, enabling effective context modeling. Li et al. [23] introduced a task decoupling detection framework that leverages local spatial aggregation and global semantic interactions to capture relational context. CLT-Det [24] introduced a correlation transformer module to enhance the information interaction between regional features, improving contextual understanding. CAD-Net [10] utilized multi-scale regional features to obtain contextual information, allowing the network to focus on scale-specific features for improved performance. PCI [12] established dynamic proposal graphs for refining classification and regression, incorporating contextual information into the detection process. Nevertheless, it is imperative to acknowledge that within intricate remote sensing scenes, noise poses challenges in achieving effective relationship modeling, consequently influencing detection performance.
## III Method
As shown in Fig. 1, we propose the Context-Enhanced Detector (CEDet), a three-stage framework for high-accuracy building detection. Firstly, CEDet introduces a Semantic Guided Context Mining (SGCM) module to extract contextual information at the feature level. SGCM adopts semantic-guided multi-scale feature fusion to enhance the multi-scale features of the Feature Pyramid Network (FPN) [62]. Subsequently, oriented Regions of Interest (oriented RoIs) extracted by the oriented Region Proposal Network (oriented RPN) are refined with the enhanced features. Additionally, a Context Enhancement OR-CNN Head (CE Head) is designed for building classification and oriented bounding box regression. In the building classification branch, the Instance Context Mining Module (ICMM) is introduced to capture relational contextual information at the instance level. In addition, the offsets predicted by the oriented bounding box regression branch further improve the localization precision. We detail the main components of CEDet in the subsequent subsections.
### _Semantic Guided Context Mining_
Semantic Guided Context Mining (SGCM) module is proposed to extract contextual information by capturing the long-range dependence of features from FPN, including semantic-guided multi-scale feature fusion and simple feature reduction as shown in Fig. 2.
Given the multi-scale features \\(\\{P_{2},P_{3},P_{4},P_{5},P_{6}\\}\\) from FPN, a self-attention block [63] is first adopted to capture the long-range dependence within \\(P_{6}\\). Then, the features \\(\\{P_{2},P_{3},P_{4},P_{5},P_{6}\\}\\) go through independent \\(1\\times 1\\) convolutional layers and are scaled to the same size as \\(P_{3}\\) by bilinear interpolation. Subsequently, the multi-scale fused feature is obtained by adding the scaled features of all levels, so that each location establishes an implicit global relationship with the whole image.
After the multi-scale fused feature undergoes two \\(3\\times 3\\) convolutional layers, a single \\(1\\times 1\\) convolutional layer is employed to acquire the enriched semantic feature. Another \\(1\\times 1\\) convolutional layer is utilized to generate pseudo-masks for subsequent segmentation loss computation. Ultimately, the enhanced multi-scale features are attained through the element-wise addition of the rich semantic feature to the original multi-scale features extracted from FPN. This feature reduction process incorporates valuable semantic and multi-scale contextual information into the multi-scale features obtained from FPN and diminishes the semantic gap [64] among these features.
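A rough PyTorch sketch of the fusion and reduction phases is given below. The 256-channel FPN features, the use of nn.MultiheadAttention for the self-attention block, and the exact layer arrangement are assumptions rather than the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class SGCM(nn.Module):
    """Sketch of the Semantic Guided Context Mining module (assumed 256 FPN channels)."""

    def __init__(self, channels=256, num_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=8, batch_first=True)
        self.lateral = nn.ModuleList([nn.Conv2d(channels, channels, 1) for _ in range(5)])
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.reduce = nn.Conv2d(channels, channels, 1)        # rich semantic feature
        self.seg_head = nn.Conv2d(channels, num_classes, 1)   # pseudo-mask prediction

    def forward(self, feats):                                 # feats = [P2, P3, P4, P5, P6]
        p6 = feats[-1]
        b, c, h, w = p6.shape
        tokens = p6.flatten(2).transpose(1, 2)                # (B, HW, C)
        p6 = p6 + self.attn(tokens, tokens, tokens)[0].transpose(1, 2).reshape(b, c, h, w)
        feats = feats[:-1] + [p6]
        size = feats[1].shape[-2:]                            # resolution of P3
        fused = sum(F.interpolate(l(f), size=size, mode='bilinear', align_corners=False)
                    for l, f in zip(self.lateral, feats))
        fused = self.fuse(fused)
        semantic = self.reduce(fused)
        seg_logits = self.seg_head(fused)                     # supervised by the pseudo-masks
        out = [f + F.interpolate(semantic, size=f.shape[-2:], mode='bilinear', align_corners=False)
               for f in feats]
        return out, seg_logits
```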
To effectively extract contextual information, the fused feature within the proposed SGCM is supervised by a semantic segmentation loss \\(L_{seg}\\). Since ground truth masks are unavailable, we use OBB annotations to generate pseudo-masks that
Fig. 1: Context-Enhanced Detector (CEDet) is a three-stage detection model for high-accuracy building detection. Semantic Guided Context Mining (SGCM) module can enhance the multi-scale feature context. Context Enhancement OR-CNN Head (CE Head) adopts the decoupling structure and obtains the relationship contextual information by Instance Context Mining Module (ICMM).
serve as surrogate ground truth for calculating the semantic segmentation loss. Our experiments demonstrate that these pseudo-masks strongly resemble the actual ground truth masks in many scenes, primarily due to the prevalent rectangular shape of most buildings. Consequently, despite the lack of genuine ground truth, the predicted pseudo-masks effectively depict the outlines of buildings. Finally, the semantic segmentation loss is calculated by pixel-wise cross-entropy loss:
\\[L_{Seg}=CrossEntropy(M^{*},M_{pseudo}) \\tag{1}\\]
where \\(M^{*}\\) denotes the predicted mask, \\(M_{pseudo}\\) denotes the pseudo-mask, and \\(L_{Seg}\\) is the semantic segmentation loss.
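The pseudo-mask construction and the loss of Eq. 1 can be sketched as follows; rasterizing the OBBs with OpenCV, the angle convention in degrees, and matching the mask to the logit resolution are our assumptions, not details stated in the text.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def obb_pseudo_mask(obbs, height, width):
    """Rasterise oriented boxes (cx, cy, w, h, theta in degrees) into a binary pseudo-mask (sketch)."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for cx, cy, w, h, theta in obbs:
        corners = cv2.boxPoints(((cx, cy), (w, h), theta))    # 4 corners of the rotated box
        cv2.fillPoly(mask, [corners.astype(np.int32)], 1)
    return mask

def seg_loss(seg_logits, obbs):
    """Eq. (1): pixel-wise cross-entropy against the pseudo-mask (sketch).

    seg_logits is assumed to have shape (1, num_classes, h, w), with the OBBs already
    scaled to the same h x w resolution as the fused feature.
    """
    _, _, h, w = seg_logits.shape
    target = torch.from_numpy(obb_pseudo_mask(obbs, h, w)).long().unsqueeze(0)
    return F.cross_entropy(seg_logits, target)
```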
### _Instance Context Mining Module_
Spatial relationships [22, 23, 24] among building instances can help to identify buildings more accurately against complex backgrounds. Unlike contextual information extraction at the feature level, instance-level relationship modeling adopts radially symmetric feature aggregation, which brings stronger spatial invariance. Moreover, the introduction of RRoI Align [27] and the deepening of the semantic level reduce the interference of noise, which helps the reasoning of high-order relationships.
However, there are two problems with directly using oriented RoIs generated by oriented RPN for relational reasoning. Firstly, the presence of redundant RoIs introduces instability to the feature distribution of relational context, thereby increasing the difficulty of relational reasoning. Secondly, RoIs may contain multiple background regions, such as clusters of buildings, fragmented parts of buildings, and false detections. These background regions further hinder the extraction of meaningful relational context features. Therefore, we propose the Instance Context Mining Module (ICMM), which employs a feature aggregation method to extract spatial relational context and address the problems.
To achieve more efficient relational modeling, we use non-maximum suppression (NMS) to suppress the duplicate RoIs and the false RoIs. As shown in Fig. 3, both the RoIs \\(B\\in\\mathbb{R}^{N\\times 5}\\) and the corresponding RoI features \\(F\\in\\mathbb{R}^{N\\times C}\\) are fed into the ICMM, where \\(N\\) is the number of RoIs. Consistent with OR-CNN [26], the RoI features \\(F\\) are cropped from the FPN features by \\(7\\times 7\\) RRoI Align and mapped to \\(C=1024\\) dimensions by a flattening operation and two fully connected layers with ReLU.
Each RoI contains five elements \\((x,y,w,h,\\theta)\\), where \\((x,y)\\) is the center coordinates, \\((w,h)\\) is the width and height, and \\(\\theta\\) is the rotation angle of the RoI. Unlike standard GCN [65], RoIs are split into queries and keys for separate processing, which allows for better handling of contextual features. Query branch maintains all RoIs, termed as \\(B^{q}\\in\\mathbb{R}^{N\\times 5}\\). The corresponding features are mapped through a linear layer to obtain query features \\(F^{q}\\in\\mathbb{R}^{N\\times C}\\). The key branch uses non-maximum suppression (NMS) with an Intersection over Union (IoU) threshold of 0.5 to filter noises and obtain key RoIs \\(B^{k}\\in\\mathbb{R}^{M\\times 5}\\), where \\(M\\) is the number of RoIs after NMS. The corresponding features are mapped through an independent linear layer to obtain key features \\(F^{k}\\in\\mathbb{R}^{M\\times C}\\).
The normalized center distance is used to model building instance spatial relationships. Let the \\(i\\)-th RoI in \\(B^{q}\\) be \\(b^{q}_{i}=(x^{q}_{i},y^{q}_{i},w^{q}_{i},h^{q}_{i},\\theta^{q}_{i})\\), and the \\(j\\)-th RoI in \\(B^{k}\\) be \\(b^{k}_{j}=(x^{k}_{j},y^{k}_{j},w^{k}_{j},h^{k}_{j},\\theta^{k}_{j})\\), then the spatial relationship between \\(b^{q}_{i}\\) and \\(b^{k}_{j}\\) can be obtained by the exponential transformation
Fig. 3: Instance Context Mining Module (ICMM) uses spatial relationships between RoIs to extract instance-level contextual features.
Fig. 2: Semantic Guided Context Mining (SGCM) module has two phrases: (a): Fusion, which enhances contextual information, performs feature fusion and uses semantic loss for supervision. (b): Reduction, which enhances FPN features with the fused feature.
of center distances:
\\[\\Delta_{i,j}=\\left(\\frac{x_{i}^{q}-x_{j}^{k}}{w_{i}^{q}},\\frac{y_{i}^{q}-y_{j}^{k}}{h_{i}^{q}}\\right) \\tag{2}\\]
\\[S_{i,j}=S(b_{i}^{q},b_{j}^{k})=\\exp\\{-\\frac{||\\Delta_{i,j}||_{2}}{2}\\} \\tag{3}\\]
where \\(||\\cdot||_{2}\\) is the \\(L_{2}\\) norm, and \\(S=[S_{i,j}]\\in\\mathbb{R}^{N\\times M}\\) is the spatial relation matrix between queries and keys. The values of \\(w_{i}^{q}\\) and \\(h_{i}^{q}\\) are clipped to a minimum of 56 so that the normalized center distance between small RoIs and other RoIs does not become too large to establish an effective spatial relationship.
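In code, Eqs. 2-3 amount to a broadcasted distance computation; the sketch below assumes (x, y, w, h, theta) box tensors and uses the clipping value of 56 mentioned above.

```python
import torch

def spatial_relation(query_boxes, key_boxes, min_size=56.0):
    """Eqs. (2)-(3): spatial relation matrix S between query and key RoIs (sketch).

    query_boxes: (N, 5) tensor and key_boxes: (M, 5) tensor of (x, y, w, h, theta).
    """
    xq, yq = query_boxes[:, 0:1], query_boxes[:, 1:2]           # (N, 1)
    wq = query_boxes[:, 2:3].clamp(min=min_size)                # clip w, h to avoid huge
    hq = query_boxes[:, 3:4].clamp(min=min_size)                # normalised distances
    xk, yk = key_boxes[:, 0], key_boxes[:, 1]                   # (M,)
    dx = (xq - xk) / wq                                         # (N, M) by broadcasting
    dy = (yq - yk) / hq
    dist = torch.sqrt(dx ** 2 + dy ** 2)                        # L2 norm of Delta_ij
    return torch.exp(-dist / 2.0)                               # S_ij
```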
Then, a fixed threshold \\(t\\) is used to quantify the spatial relation matrix:
\\[A_{i,j}=\\begin{cases}0,&S_{i,j}<t\\\\ 1,&S_{i,j}\\geq t\\end{cases} \\tag{4}\\]
where \\(t\\) empirically sets to 0.1. \\(A=[A_{i,j}]\\in\\mathbb{R}^{N\\times M}\\) is the adjacency matrix describing spatial relations. After that, the discrete adjacency matrix is normalized by degree matrix \\(D\\in\\mathbb{R}^{N\\times N}\\) to obtain the final adjacency matrix \\(\\hat{A}\\):
\\[D_{i,j}=\\begin{cases}\\sum_{k}A_{i,k},&i=j\\\\ 0,&i\\neq j\\end{cases} \\tag{5}\\]
\\[\\hat{A}=D^{-1}A \\tag{6}\\]
Then key features \\(F^{k}\\) are multiplied by normalized adjacency matrix \\(\\hat{A}\\) to obtain context-enhanced features \\(F^{\\prime}\\in\\mathbb{R}^{N\\times C}\\). Query features are added to preserve the original semantic information:
\\[F^{\\prime}=\\mathrm{ReLU}(\\hat{A}F^{k}+F^{q}) \\tag{7}\\]
where \\(\\mathrm{ReLU}\\) denotes the ReLU activation function. ICMM can be repeated multiple times to capture richer instance relationships, and our experiments show that two ICMMs achieve a good trade-off between performance and efficiency. Finally, the ICMM-enhanced RoI features pass through two fully connected layers for building classification.
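Given the relation matrix S, one ICMM aggregation step of Eqs. 4-7 could be sketched as follows; the clamp guarding against RoIs with an empty neighborhood is our addition.

```python
import torch.nn.functional as F

def icmm_aggregate(S, query_feats, key_feats, t=0.1):
    """Eqs. (4)-(7): aggregate key features over the spatial adjacency graph (sketch).

    S           : (N, M) relation matrix from spatial_relation().
    query_feats : (N, C) features of all RoIs (after the query linear layer).
    key_feats   : (M, C) features of NMS-filtered RoIs (after the key linear layer).
    """
    A = (S >= t).float()                              # Eq. (4): threshold the relations
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)   # degree matrix, guarded against zeros
    A_hat = A / deg                                   # Eqs. (5)-(6): D^-1 A normalisation
    return F.relu(A_hat @ key_feats + query_feats)    # Eq. (7)
```

In the CE Head, this aggregation would simply be applied twice (with separate linear layers) before the two fully connected classification layers.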
### _Detection_
This subsection provides an overview of the proposed CEDet, as well as its training and inference processes. Using the multi-scale features obtained from FPN, the oriented RPN predicts the offsets and classification scores of anchors at each scale. We define the loss functions of the RPN as \\(L_{RPNCls}\\) and \\(L_{RPNReg}\\), which are consistent with OR-CNN [26]. Then, the oriented RPN outputs 2,000 oriented proposals as oriented RoIs. The SGCM module enhances the multi-scale features and is supervised by the semantic segmentation loss \\(L_{Seg}\\) calculated as defined in Eq. 1.
Then, based on the enhanced multi-scale features of FPN, an OR-CNN Head refines the oriented RoIs by predicting their offsets and classification scores, supervised by the classification loss \\(L_{H_{1}Cls}\\) and the regression loss \\(L_{H_{1}Reg}\\). Finally, the refined oriented RoIs \\(B_{1}\\in\\mathbb{R}^{2000\\times 5}\\) and the corresponding classification scores \\(C_{1}\\in\\mathbb{R}^{2000\\times 2}\\) are obtained from the offsets and scores predicted by the OR-CNN Head. Since building detection is a binary classification task, each classification score has two dimensions, representing the probabilities of background and building.
The training process of the CE Head is the same as that of the OR-CNN Head, except that ICMM is used to capture the relationships. The CE Head calculates the classification loss \\(L_{H_{2}Cls}\\) and the regression loss \\(L_{H_{2}Reg}\\), and obtains the final bounding boxes \\(B_{2}\\in\\mathbb{R}^{2000\\times 5}\\) and classification scores \\(C_{2}\\in\\mathbb{R}^{2000\\times 2}\\). The total loss during training is:
\\[\\begin{split} L_{Total}=L_{RPNCls}+L_{RPNReg}+L_{Seg}\\\\ +L_{H_{1}Cls}+L_{H_{1}Reg}\\\\ +L_{H_{2}Cls}+L_{H_{2}Reg}\\end{split} \\tag{8}\\]
In the inference stage, a series of predicted bounding boxes and corresponding classification scores are obtained through the same process. We take the final bounding boxes \\(B_{2}\\) as the output. Consistent with Cascade R-CNN [25], the average scores \\(C_{avg}=(C_{1}+C_{2})/2\\) are used as the final classification scores.
NMS is used as a post-processing step to handle redundant detection boxes, with the IoU threshold set to 0.1. Due to the high density of buildings, we keep a maximum of 300 boxes per image.
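The inference-time post-processing described above can be summarized as follows; `rotated_nms` stands in for any rotated NMS routine (e.g., a wrapper around mmcv.ops.nms_rotated), and the exact tensor shapes are assumptions.

```python
import torch

def cedet_postprocess(boxes2, scores1, scores2, rotated_nms, score_thr=0.05,
                      iou_thr=0.1, max_per_img=300):
    """CEDet inference post-processing (sketch).

    boxes2      : (K, 5) final boxes B2 from the CE Head.
    scores1     : (K,) building probabilities from the OR-CNN Head (C1).
    scores2     : (K,) building probabilities from the CE Head (C2).
    rotated_nms : assumed callable (boxes, scores, iou_thr) -> kept indices.
    """
    scores = (scores1 + scores2) / 2.0                 # average the stage scores
    keep = scores > score_thr                          # drop low-confidence detections
    boxes2, scores = boxes2[keep], scores[keep]
    keep = rotated_nms(boxes2, scores, iou_thr)        # suppress duplicates (IoU 0.1)
    keep = keep[scores[keep].argsort(descending=True)[:max_per_img]]
    return boxes2[keep], scores[keep]
```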
## IV Experiments
### _Dataset and Metrics_
We perform experiments on three datasets: (1) CNBuilding-9P is a challenging large-scale building dataset covering nine provinces in China: Gansu (GS), Guangdong (GD), Guangxi (GX), Hainan (HI), Hunan (HN), Jilin (JL), Shandong (SD), Sichuan (SC), and Zhejiang (ZJ). This dataset encompasses a multitude of intricate scenes, including urban areas, countryside landscapes, farmland, forested areas, and mountainous regions, thereby encompassing the entirety of conceivable building types, such as residential housing, factories, shopping centres, warehouses, stadiums, and more. Images in the CNBuilding-9P dataset are collected from GoogleEarth with 50,782 images and 1,210,968 buildings that vary significantly in size, structure, and appearance. Images in CNBuilding-9P are manually labeled with instance-level OBB annotations, and some examples are shown in Fig. 4. Detailed information regarding the images and building instances for each province's training and test sets can be found in Table I. The validation set is derived by randomly sampling one-tenth of the data from the training set. Subsequently, all comparison methods, including our proposed CEDet, are trained using the training and validation sets, and the resulting detection performances on the test set are reported. All ablation studies are exclusively conducted using CNBuilding-9P.
(2) CNBuilding-23P serves as an extension of the CNBuilding-9P dataset, encompassing an additional 14 areas: Chongqing (CQ), Fujian (FJ), Guizhou (GZ), Hebei (HE), Heilongjiang (HL), Hubei (HB), Jiangxi (JX), Liaoning (LN), Inner Mongolia (NM), Qinghai (QH), Shanxi (SX), Yunnan (YN), Tibet (XZ), and Other-Provinces (OP). With a total of 139,217 images and 3,623,425 buildings, CNBuilding-23P represents a scale almost three times larger than that of CNBuilding-9P. Figure 5 compares the square root distribution of building areas between CNBuilding-23P and CNBuilding-9P. While the area distributions and building densities of the two datasets exhibit similarities, CNBuilding-23P presents a greater number of small building instances, thereby introducing additional challenges. Due to the substantial scale of the CNBuilding-23P dataset, the experiment only reports the overall results obtained from this dataset.
(3) SpaceNet: SpaceNet is a public building dataset obtained from the DigitalGlobe WorldView 3 satellite [66]. We select Paris, Shanghai, and Khartoum for evaluation in our experiments. These areas contain 633 images with 16,853 building instances, 3,351 images with 71,294 building instances, and 923 images with 25,371 building instances, respectively. The polygon annotations in the dataset are transformed into rotated box representations using the minimum bounding rectangle algorithm. Afterward, the dataset is randomly split into training, testing, and validation subsets, with a ratio of 3:1:1, and the detection performance on the test set is reported.
### _Implementation Details_
All experiments are conducted using the MMRotate [67] framework. As for the backbone network, ResNet50 [5] is employed and initialized with pre-trained parameters from ImageNet [68] to match the default configuration in MMRotate [67]. During training, images are resized to \\(800\\times 800\\) using bilinear interpolation, and random horizontal flips with a probability of 0.5 are applied for data augmentation. Normalization is performed on the images using mean and variance values obtained from the ImageNet dataset. For image pre-processing in the test phase, the same procedures are followed as in training, except for the absence of data augmentation. Following the evaluation methodology used in DOTA [9], the performance is measured in terms of PASCAL VOC2012 average precision (AP) [69], with IoU thresholds of 0.5 and 0.75. These metrics are referred to as AP50 and AP75, respectively. An SGD optimizer is employed with an initial learning rate of 0.01. Models are trained for a total of 12 epochs, with the learning rate decreasing by a factor of 10 at epochs 8 and 11. To ensure stable training, a linear warm-up strategy [70] is implemented for the first 500 iterations, using a learning ratio of 0.333. Gradient clipping is also applied with a maximum normalized value of 35 to prevent gradient explosion. The experiments are conducted using two NVIDIA 2080TI graphics processing units (GPUs) with a total batch size of 4. For testing, NMS with an Intersection over Union (IoU) threshold of 0.1 is
Fig. 4: Some examples in the CNBuilding-9P. CNBuilding-9P dataset covers nine provinces in China and contains 50,782 images with 1,210,968 building instances. The first row is the original image, and the orange boxes in the second row represent the ground truth. Zoom in to see more details.
Fig. 5: The statistic of CNBuilding-9P and CNBuilding-23P, including the square root of building area and the number of buildings per images.
utilized to remove duplicated bounding boxes. Additionally, boxes with scores lower than 0.05 are discarded to further reduce false detections.
### _Comparison With Other Approaches_
This section compares the proposed CEDet with other detection methods on three building detection datasets: CNBuilding-9P, CNBuilding-23P, and SpaceNet. Table II, III, and IV detail the detection performance on different datasets.
_1) Results on CNBuilding-9P_: Table II presents the performance and inference speed of CEDet and other methods on CNBuilding-9P test set. Comparison methods include single-stage methods: Gliding Vertex [32] (Gliding Vertex), KLD [31] (KLD), and R3Det [29] with KFIoU [30] (KFIoU); and multi-stage methods: Oriented Faster R-CNN [71] (FR-O), RoI Transformer [27] (RoI-Transformer), Oriented R-CNN [26] (OR-CNN), and Oriented Cascade R-CNN (CR-O). Among them, CR-O is a variant that incorporates the structure of Cascade R-CNN [25] based on OR-CNN [26], accomplished by concatenating multiple OR-CNN Heads.
Since CEDet introduces a cascade structure, CR-O and OR-CNN are selected jointly as the baseline for comparison. All models employ ResNet50 with FPN [62] as the feature extractor and adopt the original configuration from MMRotate [67]. Table II presents the AP50 results for each province, as well as the overall performance across all nine provinces. Furthermore, Table II includes the AP75 metric, which measures the high-precision detection ability of the model [31]. The FPS column denotes the inference speed of the model when tested on a single GPU, encompassing the entire process of image processing, model inference, and post-processing operations.
Table II illustrates that the performance of single-stage methods is typically inferior to that of multi-stage methods. Among the multi-stage methods, both RoI-Transformer [27] and OR-CNN [26] exhibit higher AP75 compared to FR-O [61], suggesting that the utilization of oriented RoIs can enhance the precision of building detection. However, the multi-stage methods exhibit similar performance in terms of AP50, indicating that these approaches possess similar classification capabilities on the CNBuilding-9P dataset.
Our CEDet achieves 58.5% and 34.1% in terms of AP50 and AP75, respectively. Compared with the baseline OR-CNN, our CEDet exhibits significant improvements across all provinces: +1.1% on GS, +1.9% on GD, +1.1% on GX, +1.8% on HI, +1.7% on HN, +2.0% on JL, +2.7% on SD, +2.7% on SC, and +2.7% on ZJ. In summary, CEDet achieves an overall AP50 improvement of 2.1% over OR-CNN and outperforms the best method, FR-O [71], by 1.7%. This indicates that enhancing context can effectively improve the detector's ability to identify buildings in complex scenes. Additionally, due to the cascaded structure of CEDet, there is a 2.6% improvement in AP75 compared to OR-CNN. While CR-O [25], with its cascaded structure, performs closely in terms of AP75, it can be observed that CR-O's AP50 is approximately 1% lower than other multi-stage methods. This demonstrates that the performance improvement of CEDet does not arise from the addition of detection stages but rather from the effective context extraction modules.
Fig. 6 presents the visualized detection results of CEDet and OR-CNN on CNBuilding-9P. Compared to OR-CNN, CEDet produces fewer false detection boxes and achieves more accurate building detection. CEDet incorporates an additional detection head and employs SGCM and ICMM to enhance features. As a result, CEDet exhibits the lowest inference speed among all detection methods, with a reduction of over half compared to OR-CNN. Subsequent ablation studies will comprehensively analyze the impact of each module on the detector's inference speed.
#### IV-C2 Results on CNBuilding-23P
The CNBuilding-23P dataset is approximately three times larger than the CNBuilding-9P dataset, providing a more comprehensive evaluation of the detector's detection ability in diverse and complex scenes. As shown in Table III, our CEDet achieves 57.6% and 33.3% in terms of AP50 and AP75, respectively. CEDet consistently outperforms OR-CNN in all provinces, showcasing its superiority. Notably, CEDet exhibits significant improvements in the provinces of HI (+2.1%), HE (+2.2%), HL (+2.4%), HB (+3.6%), LN (+2.4%), OP (+2.1%), SX (+2.3%), SC (+3.2%), YN (+3.4%), and ZJ (+2.5%). Furthermore, CEDet achieves a 2.3% improvement over the baseline and a 2.2% improvement over FR-O (55.4%) in terms of overall AP50. These improvements demonstrate the effectiveness of CEDet in handling complex scenarios.
#### IV-C3 Results on SpaceNet [66]
Table IV presents the detection performances on SpaceNet, measured in terms of AP50. Across all areas, multi-stage methods demonstrate higher accuracy compared to single-stage methods, with our CEDet achieving the best performance among all the methods considered. Compared with baseline OR-CNN [26], CEDet improves 2.3% in the Khartoum area, 2.8% in the Shanghai area, and 2.0% in the Paris area. CEDet achieves a 2.4% improvement over the baseline and a 1.1% improvement over RoI-Transformer [27] in terms of the overall AP50.
### _Ablation Study_
#### IV-D1 Effectiveness of SGCM
SGCM plays a critical role in extracting multi-scale contextual information. By incorporating multi-scale feature fusion and reduction, each feature of FPN can access shared semantic features and acquire the ability to perceive contextual information at different scales [57, 64]. The inclusion of semantic segmentation loss directly contributes to the subsequent task by guiding the multi-scale features to develop explicit semantic recognition capabilities. Furthermore, this multi-task paradigm proves advantageous in obtaining more generalized features [57].
Table V presents the performance improvement achieved by each module of SGCM. Additionally, we choose HTC [57] and HSP [72] as comparison methods. HTC removes the self-attention block within SGCM and increases the 2-layer \\(3\\times 3\\) convolutional layers to 4 layers. On the other hand, HSP incorporates the ASPP module based on HTC to enhance the capability of capturing multi-scale features. In the fusion process, HSP replaces the 2-layer \\(3\\times 3\\) convolutional layers with the ASPP [15]. The atrous rates in the ASPP are set to \\((24,48,72)\\) to be consistent with HSP [72].
Compared with OR-CNN, using HTC results in a slight improvement of 0.18% in AP50. However, there is a slight decrease in performance when employing ASPP to increase the receptive field. In contrast, SGCM's self-attention block improves HTC's performance by an additional 0.35%, resulting in a total improvement of 0.53% over OR-CNN. It is noticed that semantic segmentation loss is essential for extracting context features. A comparison of the bottom two rows in Table V reveals that SGCM experiences a performance
Fig. 6: The visualization results of CEDet and OR-CNN [26] on CNBuilding-9P dataset. The pink boxes are the correct detections, defined as the IoU with the ground truth greater than 0.5. Red boxes are false detection boxes. Yellow boxes are ground truths that are not detected.
decrease of 0.76% in the absence of semantic segmentation supervision.
Fig. 7 illustrates the mask prediction obtained by SGCM. Despite the utilization of pseudo-labels for semantic segmentation, SGCM exhibits notable proficiency in accurately predicting the spatial extent of buildings, including those with intricately curved outlines, as exemplified in the second row.
#### IV-D2 Effectiveness of CE Head and ICMM
Leveraging relational context to enhance building detection capabilities from complex remote sensing images has been identified as a viable approach [24, 60]. In the proposed CEDet, CE Head incorporates ICMM to effectively capture relational contextual information via the spatial relationship between RoIs.
Instead of replacing the OR-CNN Head, CEDet adds the CE Head after the OR-CNN Head for two reasons. Firstly, the oriented RoIs extracted by the oriented RPN often lack sufficient quality, which hinders the effective extraction of semantic features with high confidence. As a result, the CE Head struggles to learn effective instance relations. Secondly, adopting the cascade structure has proven to enhance detection accuracy, and the joint decision-making of multi-stage classification scores further improves classification accuracy [25]. Additionally, to mitigate the feature ambiguity caused by the aggregation of contextual information, CE Head employs a decoupling structure [73] and exclusively utilizes ICMM in the classification branch, thereby minimizing its impact on regression.
Table VI compares the effects of these different designs. Replacing OR-CNN Head with CE Head results in a 1.3% improvement in AP50, while the improvement in AP75 is marginal. Subsequently, introducing the cascade structure leads to a further 0.5% improvement in AP50. Notably, the AP75 metric demonstrates significant improvement, surpassing OR-CNN by 2%.
The second row of Table VI presents the impact of replacing the original OR-CNN with the decoupling structure. In the original OR-CNN Head, RoI features are extracted using two fully connected layers, which are then utilized for classification and regression. Conversely, the decoupled OR-CNN Head utilizes four independent fully connected layers to extract RoI features and performs classification and regression separately. However, experiments demonstrate that the decoupled head fails to improve the detector's performance effectively. This suggests that the performance gain achieved by CE Head is not attributed to parameter increments but rather to the utilization of relational context.
In addition to the cascade structure, NMS is introduced to suppress noisy oriented RoIs. Table VII analyses this strategy. The results reveal a 0.25% improvement in AP50 and a 0.5% improvement in AP75 when NMS is applied. These findings suggest that removing noisy oriented RoIs facilitates ICMM in capturing relationship information more effectively.
ICMM can be stacked multiple times to obtain more decisive relational contextual information. The impact of the number of ICMMs is examined in Table VIII. The results indicate that the performance is not significantly affected by the number of ICMMs. Even with a single ICMM, there is an improvement in performance. Increasing the number of ICMMs does not lead to better performance but affects the inference speed. This observation can be attributed to the relatively straightforward relational context in the building dataset, where a simpler design is sufficient to extract the relational context effectively. As a result, the CE Head in CEDet employs only two ICMMs.
#### IV-D3 Combination of Modules
Table IX presents a comprehensive overview of the performance enhancements achieved by integrating various modules into the baseline OR-CNN [26].
Fig. 7: Visualization of the input images, target pseudo-masks, and the predicted masks.
The cascade structure [25] demonstrates effective performance improvement at AP75 (+2.4%). However, it does not enhance building classification ability, as evidenced by a slight decrease of 0.8% in AP50. Without sufficient exploitation of contextual information, the detector's recognition capability in complex scenes cannot be effectively improved. SGCM improves AP50 by 0.5% through multi-scale context mining. On the other hand, ICMM achieves a more significant improvement of 1.3% in AP50 through relation-based context mining. This can be attributed to ICMM's utilization of high-level semantic features for reasoning and its ability to model relations with rotation and scale invariance, facilitating deeper contextual information extraction. The combination of ICMM and the cascade structure further enhances performance, resulting in a 1.8% improvement over OR-CNN in AP50. Finally, the proposed CEDet with ICMM and SGCM leads to a remarkable 2.1% improvement compared with the baseline.
#### IV-D4 Different Backbones
A stronger backbone can lead to better generalization, thereby enhancing the recognition ability of the detector in complex real-world scenes. In the case of CEDet, features with enhanced generalization also contribute to the improved extraction capability of contextual information. Table X presents a performance comparison between OR-CNN [26] and CEDet under different backbone architectures on the CNBuilding-9P dataset, including ResNet50 [5], ResNet101 [5], Swin-T [75], and ResNeXt101 [74]. With ResNeXt101, OR-CNN achieves a 1.2% improvement in AP50 compared with ResNet50. However, using Swin-T as the backbone results in a decrease of 0.7% in AP50. It is worth noting that OR-CNN with different backbone architectures does not outperform CEDet with ResNet50. On the other hand, CEDet with the Swin-T backbone leads to a further improvement of 0.9% in AP50 compared with CEDet with the ResNet50 backbone, which highlights the effectiveness of the Swin-T backbone in enhancing the performance of CEDet.
## V Conclusion
This paper introduces CEDet, a novel approach that effectively leverages contextual information to achieve highly accurate building detection. CEDet adopts a multi-stage structure to enhance the ability of contextual feature extraction. The proposed SGCM module in CEDet addresses the issue of insufficient long-range feature interactions observed in existing multi-scale context extraction methods by utilizing a self-attention mechanism. A semantic segmentation loss based on pseudo-masks is also employed to supervise contextual feature extraction. The Instance Context Mining Module (ICMM) is proposed to capture contextual information between instances through spatial relationships, significantly enhancing the detector's accuracy. Finally, ablation experiments demonstrate the effectiveness of SGCM and ICMM. Moreover, our CEDet achieves outstanding performance on three benchmark datasets, i.e., CNBuilding-9P, CNBuilding-23P, and SpaceNet, further illustrating its superiority.
* [2] H. Ma, Y. Ren, and J. Yu, \"Detection of collapsed buildings in post-earthquake remote sensing images based on the improved YOLOv3,\" _Remote Sensing_, vol. 12, no. 1, p. 44, 2020.
* [5] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770-778.
* [16] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, \"Pyramid scene parsing network,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 2881-2890.
* [18] M. Hu, Y. Li, L. Fang, and S. Wang, \"A2-fpn: Attention aggregation based feature pyramid network for instance segmentation,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2021, pp. 15343-15352.
* [19] S. Huang, Z. Lu, R. Cheng, and C. He, \"Fapn: Feature-aligned pyramid network for dense image prediction,\" in _Proceedings of the IEEE/CVF international conference on computer vision_, 2021, pp. 864-873.
* [20] B. Wang, R. Ji, L. Zhang, and Y. Wu, \"Bridging multi-scale context-aware representation for object detection,\" _IEEE Transactions on Circuits and Systems for Video Technology_, 2022.
* [21] X. Wang, R. Girshick, A. Gupta, and K. He, \"Non-local neural networks,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 7794-7803.
* [22] H. Hu, J. Gu, Z. Zhang, J. Dai, and Y. Wei, \"Relation networks for object detection,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 3588-3597.
* [23] W. Li, Z. Chen, B. Li, D. Zhang, and Y. Yuan, \"Hd: Heterogeneous task decoupling for two-stage object detection,\" _IEEE Transactions on Image Processing_, vol. 30, pp. 9456-9469, 2021.
* [24] Y. Zhou, S. Chen, J. Zhao, R. Yao, Y. Xue, and A. El Saddik, \"Clt-det: Correlation learning based on transformer for detecting dense objects in remote sensing images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-15, 2022.
* [25] Z. Cai and N. Vasconcelos, \"Cascade r-cnn: Delving into high quality object detection,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 6154-6162.
* [26] X. Xie, G. Cheng, J. Wang, X. Yao, and J. Han, \"Oriented r-cnn for object detection,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 3520-3529.
* [27] J. Ding, N. Xue, Y. Long, G.-S. Xia, and Q. Lu, \"Learning roi transformer for oriented object detection in aerial images,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 2849-2858.
* [28] J. Han, J. Ding, J. Li, and G.-S. Xia, \"Align deep features for oriented object detection,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-11, 2021.
* [29] X. Yang, J. Yan, Z. Feng, and T. He, \"R3det: Refined single-stage detector with feature refinement for rotating object,\" in _Proceedings of the AAAI conference on artificial intelligence_, vol. 35, no. 4, 2021, pp. 3163-3171.
* [30] X. Yang, Y. Zhou, G. Zhang, J. Yang, W. Wang, J. Yan, X. Zhang, and Q. Tian, \"The kfou loss for rotated object detection,\" _arXiv preprint arXiv:2201.12558_, 2022.
* [31] X. Yang, X. Yang, J. Yang, Q. Ming, W. Wang, Q. Tian, and J. Yan, \"Learning high-precision bounding box for rotated object detection via kullback-leibler divergence,\" _Advances in Neural Information Processing Systems_, vol. 34, pp. 18 381-18 394, 2021.
* [32] Y. Xu, M. Fu, Q. Wang, Y. Wang, K. Chen, G.-S. Xia, and X. Bai, \"Gliding vertex on the horizontal bounding box for multi-oriented object detection,\" _IEEE transactions on pattern analysis and machine intelligence_, vol. 43, no. 4, pp. 1452-1459, 2020.
* [33] J. Han, J. Ding, N. Xue, and G.-S. Xia, \"Redet: A rotation-equivariant detector for aerial object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 2786-2795.
* [34] R. Qin, Q. Liu, G. Gao, D. Huang, and Y. Wang, \"Mrdet: A multihead network for accurate rotated object detection in aerial images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-12, 2021.
* [35] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, \"Deformable convolutional networks,\" in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 764-773.
* [36] X. Yang, J. Yan, Q. Ming, W. Wang, X. Zhang, and Q. Tian, \"Rethinking rotated object detection with gaussian wasserstein distance loss,\" in _International Conference on Machine Learning_. PMLR, 2021, pp. 11 830-11 841.
* [37] X. Yang and J. Yan, \"Arbitrary-oriented object detection with circular smooth label,\" in _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16_. Springer, 2020, pp. 677-694.
* [38] W. Li, Y. Chen, K. Hu, and J. Zhu, \"Oriented reppoints for aerial object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 1829-1838.
* [39] Z. Yang, S. Liu, H. Hu, L. Wang, and S. Lin, \"Reppoints: Point set representation for object detection,\" in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 9657-9666.
* [40] R. Alshehhi, P. R. Marpu, W. L. Woon, and M. Dalla Mura, \"Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 130, pp. 139-149, 2017.
* [41] H. Yang, P. Wu, X. Yao, Y. Wu, B. Wang, and Y. Xu, \"Building extraction in very high resolution imagery by dense-attention networks,\" _Remote Sensing_, vol. 10, no. 11, p. 1768, 2018.
* [42] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, \"Densely connected convolutional networks,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 4700-4708.
* [43] D. Griffiths and J. Boehm, \"Improving public data for building segmentation from convolutional neural networks (cnn) for fused airborne lidar and image data using active contours,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 154, pp. 70-83, 2019.
* [44] S. Wei, S. Ji, and M. Lu, \"Toward automatic building footprint delineation from aerial images using cnn and regularization,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 58, no. 3, pp. 2178-2189, 2019.
* [45] Y. Zhu, B. Huang, J. Gao, E. Huang, and H. Chen, \"Adaptive polygon generation algorithm for automatic building extraction,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2021.
* [46] H. Hosseinpour, F. Samardagegan, and F. D. Javan, \"Cmgfnet: A deep cross-modal gated fusion network for building extraction from very high-resolution remote sensing images,\" _ISPRS journal of photogrammetry and remote sensing_, vol. 184, pp. 96-1152, 2022.
* [47] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly _et al._, \"An image is worth 16x16 words: Transformers for image recognition at scale,\" _arXiv preprint arXiv:2010.11929_, 2020.
* [48] L. Wang, S. Fang, X. Meng, and R. Li, \"Building extraction with vision transformer,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-11, 2022.
* [49] S. Zorzi, S. Bazrafkan, S. Habenschuss, and F. Fraundorfer, \"Polyworld: Polygonal building extraction with graph neural networks in satellite images,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 1848-1857.
* [50] L. Castrejon, K. Kundu, R. Urtasun, and S. Fidler, \"Annotating object instances with a polygon-rnn,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 5230-5238.
* [51] D. Acuna, H. Ling, A. Kar, and S. Fidler, \"Efficient interactive annotation of segmentation datasets with polygon-rnn++,\" in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_, 2018, pp. 859-868.
* [52] Z. Li, J. D. Wegner, and A. Lucchi, \"Topological map extraction from overhead images,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 1715-1724.
* [53] L. Xu, Y. Li, J. Xu, and L. Guo, \"Gated spatial memory and centroid-aware network for building instance extraction,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-14, 2021.
* [54] K. He, G. Gkioxari, P. Dollar, and R. Girshick, \"Mask r-cnn,\" in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 2961-2969.
* [55] Z. Li, Q. Xin, Y. Sun, and M. Cao, \"A deep learning-based framework for automated extraction of building footprint polygons from very high-resolution aerial imagery,\" _remote Sensing_, vol. 13, no. 18, p. 630, 2021.
* [56] X. Liu, Y. Chen, M. Wei, C. Wang, W. N. Gongvalves, J. Marcato, and J. Li, \"Building instance extraction method based on improved hybrid task cascade,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 19, pp. 1-5, 2021.
* [57] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang _et al._, \"Hybrid task cascade for instance segmentation,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 4974-4983.
* [58] Y. Hu, Z. Wang, Z. Huang, and Y. Liu, \"Polybuilding: Polygon transformer for building extraction,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 199, pp. 15-27, 2023.
* [59] W. Zhao, J. Na, M. Li, and H. Ding, \"Rotation-aware building instance segmentation from high-resolution remote sensing images,\" _IEEE geoscience and remote sensing letters_, vol. 19, pp. 1-5, 2022.
* [60] C. Zheng, J. Nie, Z. Wang, N. Song, J. Wang, and Z. Wei, \"High-order semantic decoupling network for remote sensing image semantic segmentation,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 61, pp. 1-15, 2023.
* [61] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster r-cnn: Towards real-time object detection with region proposal networks,\" _Advances in neural information processing systems_, vol. 28, pp. 91-99, 2015.
* [62] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, \"Feature pyramid networks for object detection,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 2117-2125.
* [63] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, \"Attention is all you need,\" _Advances in neural information processing systems_, vol. 30, 2017.
* [64] J. Pang, K. Chen, J. Shi, H. Feng, W. Ouyang, and D. Lin, \"Libra r-cnn: Towards balanced learning for object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 821-830.
* [65] T. N. Kipf and M. Welling, \"Semi-supervised classification with graph convolutional networks,\" _arXiv preprint arXiv:1609.02907_, 2016.
* [66] A. Van Etten, D. Lindenbaum, and T. M. Bascatow, \"Spacenet: A remote sensing dataset and challenge series,\" _arXiv preprint arXiv:1807.01232_, 2018.
* [67] Y. Zhou, X. Yang, G. Zhang, J. Wang, Y. Liu, L. Hou, X. Jiang, X. Liu, J. Yan, C. Lyu, W. Zhang, and K. Chen, \"Mmrostate: A rotated object detection benchmark using pytorch,\" in _Proceedings of the 30th ACM International Conference on Multimedia_, 2022.
* [68] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" _Advances in neural information processing systems_, vol. 25, 2012.
* [69] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, \"The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,\" [http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html](http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html).
* [70] P. Goyal, P. Dollar, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, \"Accurate, large minibatch sgd: Training imagenet in 1 hour,\" _arXiv preprint arXiv:1706.02677_, 2017.
* [71] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster r-cnn: Towards real-time object detection with region proposal networks,\" _Advances in neural information processing systems_, vol. 28, 2015.
* [72] C. Xu, C. Li, Z. Cui, T. Zhang, and J. Yang, \"Hierarchical semantic propagation for object detection in remote sensing imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 58, no. 6, pp. 4353-4364, 2020.
* [73] Y. Wu, Y. Chen, L. Yuan, Z. Liu, L. Wang, H. Li, and Y. Fu, \"Rethinking classification and localization for object detection,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 10 186-10 195.
* [74] T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, and M. Li, \"Bag of tricks for image classification with convolutional neural networks,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 558-567.
The field of building detection from remote sensing images has made significant progress, but faces challenges in achieving high-accuracy detection due to the diversity in building appearances and the complexity of vast scenes. To address these challenges, we propose a novel approach called Context-Enhanced Detector (CEDet). Our approach utilizes a three-stage cascade structure to enhance the extraction of contextual information and improve building detection accuracy. Specifically, we introduce two modules: the Semantic Guided Contextual Mining (SGCM) module, which aggregates multi-scale contexts and incorporates an attention mechanism to capture long-range interactions, and the Instance Context Mining Module (ICMM), which captures instance-level relationship context by constructing a spatial relationship graph and aggregating instance features. Additionally, we introduce a semantic segmentation loss based on pseudo-masks to guide contextual information extraction. Our method achieves state-of-the-art performance on three building detection benchmarks, including CNBuilding-9P, CNBuilding-23P, and SpaceNet.
building detection, multi-scale context, relational context, cascade structure
Alexander Kokhanovsky
1Telespazio Belgium SPRL, Bratuskrasse 7, D-64293 Darmstadt, Germany 1
Simon Gascoin
2CNRS/CNES/IRD/INRAE/UPS, CESBIO, Universite de Toulouse, 31400 Toulouse, France; [email protected] 2
Laurent Arnaud
3Institut des Geosciences de l'Environnement (IGE), University Grenoble Alpes, CNRS, UMR 5001, 38041 Grenoble, France; [email protected] (L.A.); [email protected] (G.P.) 2
Ghislain Picard
3Institut des Geosciences de l'Environnement (IGE), University Grenoble Alpes, CNRS, UMR 5001, 38041 Grenoble, France; [email protected] (L.A.); [email protected] (G.P.) 2
## 1 Introduction
Sentinel-2 (S-2) is an Earth observation mission from the Copernicus Programme that systematically acquires optical imagery at high spatial resolution (10 m to 60 m) over land and coastal waters. The mission is a constellation of two twin satellites, S-2A and S-2B, launched on 23 June 2015 and 7 March 2017, respectively. It is planned that S-2C and S-2D will be launched during the next 10 years. The S-2 mission is equipped with the Multi-Spectral Instrument (MSI), which uses a push-broom concept; its design has been driven by the large (290 km) swath and high (10-60 m) spatial resolution requirements. The average revisit time is 5 days, with repeated observations of the same site acquired under the same viewing angles close to the nadir direction. MSI operates in the spectral range 0.44-2.2 \(\upmu\)m. The instrument has already been used for numerous applications including studies of lake ecological quality [1], winter wheat mapping [2] and seasonal agricultural land use mapping [3], etc. (see, e.g., [https://www.mdpi.com/journal/remotesensing/special_issues/sentinel2_sa](https://www.mdpi.com/journal/remotesensing/special_issues/sentinel2_sa), last accessed: 1 November 2021).
The climatic effects of snow cover depend not only on its extent [4] but also on its spectral albedo, which plays an important role in the modification of the backscattered solar energy on local and global scales [5, 6]. Snow albedo products are currently available at moderate resolution (i.e., 300 m from the Ocean and Land Colour Instrument (OLCI) [7, 8], 500 m from the MODerate Resolution Imaging Spectroradiometer (MODIS) [9, 10] and the Sea and Land Surface Temperature Radiometer (SLSTR) [11, 12], and 1 km from the Second Generation Global Imager (SGLI) [13]). However, they do not allow capturing the fine details of the spatial variability of the snow surface properties [14]. The technique presented here provides snow albedo products on the scale of 10-20 m and makes it possible to derive and validate subpixel snow cover products as obtained, e.g., from MODIS measurements [15, 16].
The task of this paper is to derive the spectral snow albedo using MSI/S-2A (B) 10-60 m spatial resolution reflectance data. The spatial resolution of the product depends on the spectral channels used. In addition, an algorithm to determine the total ozone column (TOC) over snow fields using MSI/S-2 is proposed. The retrieval algorithm is described in the next section. We also present the validation of the algorithm using ground-based snow spectral albedo and TOC measurements performed at Dome C (75\({}^{\circ}\)05\({}^{\prime}\)59\({}^{\prime\prime}\)S, 123\({}^{\circ}\)19\({}^{\prime}\)56\({}^{\prime\prime}\)E, 3233 m above sea level) in Antarctica.
## 2 The Determination of the Total Ozone and Spectral Snow Albedo Using MSI Measurements
This work is aimed at the retrievals of total ozone column and snow properties from MSI measurements over Antarctica. The atmospheric light scattering effects are rather weak [17; 18] in Antarctica for the MSI channels (see Table 1). Additionally, the concentration of light-absorbing impurities in most Antarctic snow is very low [19; 20] and can be neglected in the first iteration. We shall use the following approximation for the spectral top-of-atmosphere reflectance:
\\[\\Re=T_{g}R \\tag{1}\\]
where \\(T_{g}\\) is the atmospheric gaseous transmittance, \\(R\\) is the underlying surface-atmosphere reflectance under the assumption that there are no absorbing gases in the atmosphere. Ignoring weak atmospheric light scattering effects at the MSI channels, we can write for highly reflective clean Antarctic surfaces [7; 8; 21]:
\\[R=R_{0}\\exp\\Bigl{\\{}-\\sqrt{\\alpha L}\\Bigr{\\}} \\tag{2}\\]
where \(\alpha=\frac{4\pi\chi}{\lambda}\) is the bulk absorption coefficient of ice, \(\chi\) is the imaginary part of ice refractive index and \(\lambda\) is the wavelength. Here, \(R_{0}\) is the snow reflectance in the absence of absorption and \(L\) is the effective light absorption path (ELAP). In the case of dirty snow, Equation (2) must be modified to account for the snow pollutants as discussed in [7].
The task of this work is to retrieve the spectral snow albedo and total ozone concentration using MSI measurements in the spectral range 443-865 nm, where the accuracy of Equation (2) is higher as compared to the case of short-wave infrared wavelengths [7]. Additionally, the influence of all atmospheric gases on the value of transmittance \\(T_{g}\\) (except
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{**Sentinel-2**} & \\multicolumn{2}{c}{**Sentinel-2A**} & \\multicolumn{2}{c}{**Sentinel-2B**} & \\\\ \\cline{2-5} & **Central** & **Bandwidth** & **Central** & **Bandwidth** & **Spatial** \\\\
**Bands** & **Wavelength** & **(nm)** & **Wavelength** & **(nm)** & **Resolution** \\\\ \\hline Band 1 & 442.7 & 21 & 442.2 & 21 & 60 \\\\ Band 2 & 492.4 & 66 & 492.1 & 66 & 10 \\\\ Band 3 & 559.8 & 36 & 559 & 36 & 10 \\\\ Band 4 & 664.6 & 31 & 664.9 & 31 & 10 \\\\ Band 5 & 704.1 & 15 & 703.8 & 16 & 20 \\\\ Band 6 & 740.5 & 15 & 739.1 & 15 & 20 \\\\ Band 7 & 782.8 & 20 & 779.7 & 20 & 20 \\\\ Band 8 & 832.8 & 106 & 832.9 & 106 & 10 \\\\ Band 8A & 864.7 & 21 & 864 & 22 & 20 \\\\ Band 9 & 945.1 & 20 & 943.2 & 21 & 60 \\\\ Band 10 & 1373.5 & 31 & 1376.9 & 30 & 60 \\\\ Band 11 & 1613.7 & 91 & 1610.4 & 94 & 20 \\\\ Band 12 & 2202.4 & 175 & 2185.7 & 185 & 20 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: The specification of the MSI spectral channels. Band 9 is useful for the total water vapor column estimation and band 10 is used for the detection of Cirrus clouds. Bands 1β8 are used in atmospheric correction procedures. Bands 11 and 12 can be used to distinguish cloud and clear sky snow fields.
ozone) can be neglected in this spectral interval [8; 21]. This is especially valid in central Antarctica, where the atmosphere is thin and the air is dry.
We shall use the following approximation for the atmospheric ozone transmittance [8]:
\\[T_{g}(\\lambda)=\\exp\\{-KC_{abs}(\\lambda)\\} \\tag{3}\\]
where \\(K=mN\\), \\(m\\) is the air mass factor (AMF) depending on several parameters including the cosine of the solar zenith angle (SZA) \\(\\xi\\) and the cosine of the viewing zenith angle \\(\\eta\\), \\(N\\) is the total ozone column and \\(C_{abs}(\\lambda)\\) is the ozone absorption cross-section. The dependence of \\(C_{abs}(\\lambda)\\) on temperature and pressure is weak in the spectral range under study [22]. Therefore, it is not accounted for.
Summing up, the spectral MSI reflectance over pure snow fields under a clear sky is modelled as
\\[\\Re(\\lambda)=R_{a}\\exp\\{-KC_{abs}(\\lambda)-\\sqrt{L\\alpha(\\lambda)}\\} \\tag{4}\\]
where we assumed that light reflectance by snow in the absence of absorbers \\(R_{0}\\) can be substituted by the MSI measurements at band 1 (\\(R_{a}\\) ). The weak influences of atmospheric scattering effects on the value of MSI spectral reflectance at 442.7 nm can be accounted for, if needed, using either exact radiative transfer calculations for a given atmospheric model or various approximations [21]. These effects are ignored in this work.
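To make the forward model of Equation (4) concrete, a minimal sketch in Python is given below. It assumes that the user supplies the imaginary part of the ice refractive index and the ozone absorption cross-section at the wavelength of interest (e.g., interpolated from the tabulated data of [22; 24; 25]); the function name and argument conventions are ours and are only illustrative.

```python
import numpy as np

def toa_reflectance(wavelength_nm, r_a, K, L_mm, chi_ice, c_abs_ozone):
    """Forward model of Eq. (4): TOA reflectance over clean snow.

    wavelength_nm : wavelength [nm]
    r_a           : reflectance at the reference band (MSI band 1, 442.7 nm)
    K             : air mass factor times total ozone column [molecules cm^-2]
    L_mm          : effective light absorption path [mm]
    chi_ice       : imaginary part of the ice refractive index at wavelength_nm
    c_abs_ozone   : ozone absorption cross-section [cm^2 molecule^-1] at wavelength_nm
    """
    wavelength_cm = wavelength_nm * 1.0e-7            # nm -> cm
    alpha = 4.0 * np.pi * chi_ice / wavelength_cm     # bulk ice absorption coefficient [cm^-1]
    L_cm = 0.1 * L_mm                                 # mm -> cm
    return r_a * np.exp(-K * c_abs_ozone - np.sqrt(L_cm * alpha))
```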
Our simplified model of the MSI spectral reflectance in the visible and near-infrared given by Equation (4) makes it possible to determine two unknown constants (\\(K,L\\)) from Equation (4) analytically. Namely, it follows under the assumption that ozone absorption can be neglected at the wavelength \\(\\lambda_{c}\\) :
\\[K=\\frac{\\ln(R_{a}/R_{b})-\\sqrt{\\alpha_{b}L}}{C_{abs,b}},\\ L=\\frac{\\ln^{2}(R_{c }/R_{a})}{\\alpha_{c}} \\tag{5}\\]
where indices signify the spectral channels (\(\lambda_{a},\lambda_{b},\lambda_{c}\) ). The wavelength \(\lambda_{b}\) corresponds to the maximal ozone absorption in the MSI spectra (559.8 nm). It has a spatial resolution of 10 m. The channels \((\lambda_{a},\lambda_{c})\) are used for the determination of the effective light absorption path in the snowpack. We shall use the following wavelengths for MSI/S-2A (and similar channels for S-2B) in this work: \(\lambda_{a}=\) 442.7 nm and \(\lambda_{c}=\) 864.7 nm. They have the same bandwidths (21 nm) but different spatial resolutions (60 m for the channel located at \(\lambda_{a}\) and 20 m for the channel located at \(\lambda_{c}\) ). The channel located at \(\lambda_{c}=\) 864.7 nm corresponds to the maximal ice absorption in the MSI spectra in the range 443-865 nm (with negligible ozone absorption effects). The channel located at \(\lambda_{a}=\) 442.7 nm corresponds to the minimal effects of the ice and ozone absorption in the MSI visible range. The use of channels with different spatial resolutions is justified if the snow reflectance in channels 1 and 8A changes only weakly on the scale of 10-60 (20) m, which is true for Antarctic surfaces on average.
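The analytic inversion of Equation (5) can be coded directly. The sketch below uses the band values of Table 2 and assumes that the three reflectances have already been extracted from a Level-1C product; the variable and function names are illustrative, not part of the original algorithm description.

```python
import numpy as np

ALPHA_B = 7.48e-4     # ice absorption coefficient at 559.8 nm [cm^-1] (Table 2)
ALPHA_C = 3.49e-2     # ice absorption coefficient at 864.7 nm [cm^-1] (Table 2)
C_ABS_B = 3.87e-21    # ozone absorption cross-section at 559.8 nm [cm^2 molecule^-1] (Table 2)

def retrieve_k_and_l(r_a, r_b, r_c):
    """Two-parameter analytic retrieval of Eq. (5).

    r_a, r_b, r_c : TOA reflectances at 442.7, 559.8 and 864.7 nm.
    Returns K [molecules cm^-2] and the effective light absorption path L [cm].
    """
    L = np.log(r_c / r_a) ** 2 / ALPHA_C                        # ozone absorption neglected at 864.7 nm
    K = (np.log(r_a / r_b) - np.sqrt(ALPHA_B * L)) / C_ABS_B    # ozone term isolated at 559.8 nm
    return K, L
```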
The fit of the MSI spectral reflectance measured in the vicinity of the Dome C in Antarctica using Equations (4) and (5) is given in Figure 1. We extracted reflectance from MSI/Sentinel-2 level 1C products at the nearest pixel of the measurement site using Google Earth Engine [23]. The solar and sensor geometry parameters for each spectral band were obtained from the product metadata. The spectral range 400-1050 nm is shown in Figure 1. The calculations using Equation (4) were performed with the spectral resolution of 1 nm. The spectral imaginary part of the ice refractive index as suggested in [24] was used at wavelengths below 600 nm. For the larger wavelengths, data presented in [25] were used. The spectral absorption cross-section of ozone \\(C_{abs}(\\lambda)\\) measured in [22] at 203K was used. The corresponding parameters appearing in Equation (5) are given in Table 2. The linear interpolation of tabular spectra \\(C_{abs}(\\lambda)\\) and \\(\\chi(\\lambda)\\) on the 1 nm grid in the spectral interval 400-1050 nm was applied.
The retrieved values of the parameters appearing in Equation (4) for the case shown in Figure 1 are: \(K\) = 1.66 \(\times 10^{19}\)\(molecules/\)cm\({}^{2}\) and \(L\) = 2.13 mm. They were derived using Equation (5) and the data given in Table 2.
The total ozone column can be derived from the value of \\(K\\) as follows: \\(N\\) = \\(K\\)/\\(m\\). Assuming that \\(m\\) is given by the geometrical approximation (\\(m=\\frac{1}{\\xi}+\\frac{1}{\\eta}\\) ), one derives for the value of TOC for the case shown in Figure 1: \\(N\\) = 4.8458 \\(\\times 10^{18}\\)\\(molecule/\\)cm\\({}^{2}\\) or \\(N\\) = 180.4 DU, where we used the conversion factor \\(\\kappa\\) = 3.722 \\(\\times 10^{-17}\\)\\(DU\\)\\(\\times\\) cm\\({}^{2}/\\mathit{molecule}\\) to derive the value of \\(N\\) in Dobson Units (DU). The ground SAOZ (Systeme d'Analyse par Observations Zenithales) TOC measurements [27; 28] at the Dome C site give the value of TOC equal to 158 and 180 DU in morning and evening, respectively, which is close to the TOC derived using MSI/S-2 measurements.
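The conversion from \(K\) to a total ozone column in Dobson Units is a one-liner once the air mass factor is known. The snippet below reproduces the Dome C example with the geometrical AMF; the numbers are those quoted in the text and the helper name is our own.

```python
DU_PER_MOLECULE_CM2 = 3.722e-17   # conversion factor kappa [DU cm^2 molecule^-1]

def total_ozone_du(K, cos_sza, cos_vza):
    """Total ozone column in Dobson Units from K = m * N (geometrical air mass factor)."""
    m = 1.0 / cos_sza + 1.0 / cos_vza      # geometrical AMF
    N = K / m                              # total ozone column [molecules cm^-2]
    return N * DU_PER_MOLECULE_CM2

# Dome C case of Figure 1: K = 1.66e19 molecules/cm^2, cos(SZA) = 0.41, nadir viewing
print(round(total_ozone_du(1.66e19, 0.41, 1.0), 1))   # ~180 DU
```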
The value of \\(L\\) can be used to derive the effective absorption length \\(\\ell\\), which determines the snow spherical albedo [7]:
\\[r=\\exp\\Bigl{\\{}-\\sqrt{\\alpha\\ell}\\Bigr{\\}} \\tag{6}\\]
Namely, it follows [7]:
\\[\\ell=\\frac{R_{a}^{2}}{u^{2}(\\xi)u^{2}(\\eta)}L, \\tag{7}\\]
\\begin{table}
\begin{tabular}{c c c} \hline \(\mathbf{\alpha_{b}}\)**, cm\({}^{-1}\)** & \(\mathbf{\alpha_{c}}\)**, cm\({}^{-1}\)** & \(\mathbf{C_{abs,b}}\)**, cm\({}^{2}\)/\(\mathit{molecule}\)** \\ \hline
7.48 \\(\\times 10^{-4}\\) & 3.49\\(\\times 10^{-2}\\) & 3.87 \\(\\times 10^{-21}\\) \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: The parameters used in Equation (5).
Figure 1: The fit of MSI spectral reflectance over Dome C (Antarctica) (see Equation (4)). The MSI/ S-2 TOA reflectance observations were performed on 3 November 2020 during the ozone hole occurrence [26].
where [29]:
\[u(\mu)=\frac{3}{5}\mu+\frac{1+\sqrt{\mu}}{3} \tag{8}\]
The function \(u(\mu)\) describes the angular distribution of light escaping from semi-infinite nonabsorbing turbid media with light sources placed at infinity. Taking into account that \(R_{a}=0.92\), \(\xi=0.41\), \(\eta=1\), and \(L=2.13\) mm for the case shown in Figure 1, it follows that \(\ell=1.79\) mm. This makes it possible to derive both the spectral spherical \(r\) (see Equation (6)) and plane (\(r_{p}=r^{u(\xi)}\)) albedo [7]. Moreover, the ice grain effective diameter can be derived using the relationship proposed in [7]: \(d_{ef}=\ell/\sigma\), where \(\sigma\) is the shape factor, which depends on the assumed shape of snow grains. We shall use the value of \(\sigma=16\) close to that suggested in [7]. Therefore, it follows for the case shown in Figure 1: \(d_{ef}=0.11\) mm, which is a reasonable estimation for the snow surface at Dome C [6; 19]. In particular, values of \(d_{ef}=0.1\) mm have been found in the first 5 mm of snow at the South Pole (23 January 1986) [19]. The value of \(d_{ef}\) was two times smaller at the same location immediately after the snowfall [19].
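A compact implementation of Equations (6)-(8), together with the grain-size estimate \(d_{ef}=\ell/\sigma\), is sketched below. The shape factor of 16 and the Dome C numbers are taken from the text, while the function layout and names are only illustrative; the bulk ice absorption coefficient must be supplied for the wavelength of interest.

```python
import numpy as np

SHAPE_FACTOR = 16.0   # sigma, assumed shape factor of the snow grains

def escape_function(mu):
    """Escape function u(mu) of Eq. (8)."""
    return 0.6 * mu + (1.0 + np.sqrt(mu)) / 3.0

def albedo_and_grain_size(r_a, L_mm, cos_sza, cos_vza, alpha):
    """Spherical and plane albedo (Eqs. (6)-(7)) and effective grain diameter.

    alpha : bulk ice absorption coefficient [cm^-1] at the wavelength of interest.
    """
    ell_mm = r_a**2 / (escape_function(cos_sza)**2 * escape_function(cos_vza)**2) * L_mm  # Eq. (7)
    r_spherical = np.exp(-np.sqrt(alpha * 0.1 * ell_mm))      # Eq. (6), ell converted from mm to cm
    r_plane = r_spherical ** escape_function(cos_sza)         # plane albedo r_p = r^{u(xi)}
    d_eff_mm = ell_mm / SHAPE_FACTOR
    return r_spherical, r_plane, d_eff_mm

# Dome C case of Figure 1: R_a = 0.92, L = 2.13 mm, cos(SZA) = 0.41, nadir view, alpha at 864.7 nm
print(albedo_and_grain_size(0.92, 2.13, 0.41, 1.0, 3.49e-2))  # ell ~ 1.79 mm, d_eff ~ 0.11 mm
```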
The total ozone column over Dome C retrieved using the technique described above for November-December 2020 is shown in Figure 2 (MSI). The cloudy scenes were removed using MSI measurements at 2.2 \\(\\mu\\)m with the assumption that clouds have higher reflectance as compared to snow at this channel due to the smaller size of crystals in clouds as compared to snow on the surface. The threshold value of \\(R\\)(2.2 \\(\\mu\\)m) = 0.2 was used. The air mass factor was calculated as [30; 31].
\\[m=\\frac{1+s}{\\sqrt{2s+\\xi^{2}}}+\\frac{1+s}{\\sqrt{2s+\\eta^{2}}} \\tag{9}\\]
where \(s=\frac{h}{R}\), \(h=0.26-0.1\varphi\), \(\varphi\) is the latitude in degrees (negative in the southern hemisphere), \(R\) is the radius of the Earth, and \(h\) is the height of the ozone layer in km. Equation (9) coincides with the geometrical AMF at \(s=0\). We also show in Figure 2 the TOC derived from other satellite/ground observations (see Table 3 and [26] for details on the various satellite measurements) and from the European Centre for Medium-Range Weather Forecasts (ECMWF) re-analysis. Temporal (daily product) and spatial 1\({}^{\circ}\) averaging around the site was performed (except for MSI, where the data for a given pixel or the average of two pixels (S-2A, S-2B), if available, is given). It follows that all instruments show the existence of the ozone hole in early November and its disappearance in the last week of December 2020. Data from all instruments closely follow all temporal oscillations of the TOC (see, e.g., the TOC wave centered at day 45 (15 December) in Figure 2). There is a temporal/spatial mismatch between MSI/S-2 and the other satellite measurements. In particular, the MSI/S-2 measurements provide the total ozone column on the spatial scale of 10 m, which is not possible for other instruments (see Table 1). This could explain some of the differences seen in the TOC derived from MSI as compared to other instruments.
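The curvature-corrected air mass factor of Equation (9) and the simple 2.2 \(\mu\)m cloud screen described above can be written as shown below. This is a sketch only: the effective ozone-layer height is left as a user-supplied input, and the constant and function names are our own assumptions.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def air_mass_factor(cos_sza, cos_vza, ozone_layer_height_km):
    """Air mass factor of Eq. (9); reduces to 1/xi + 1/eta when s -> 0."""
    s = ozone_layer_height_km / EARTH_RADIUS_KM
    return ((1.0 + s) / np.sqrt(2.0 * s + cos_sza**2)
            + (1.0 + s) / np.sqrt(2.0 * s + cos_vza**2))

def is_cloud_free(reflectance_2200nm, threshold=0.2):
    """Cloud screen used for the MSI time series: snow is darker than cloud at 2.2 um."""
    return reflectance_2200nm < threshold
```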
The intercomparison of ground-based and satellite--retrieved albedo is given in Figures 3 and 4 for 9 December 2016 and 4-6 February 2018. The ground measurements of the plane albedo were performed in the spectral range 400-1050 nm. Further details on the ground measurements using the Autosolexs instrument are given in [7; 32; 33], where the same spectral albedo measurements were used to validate the OLCI/S-3 snow albedo retrievals. The time of ground (satellite) measurements on 9 December was 22:00(23:57)UTC at SZA = 65 degrees. Therefore, there was a 2 h temporal mismatch. The times of measurements for 4-6 February 2018 are specified in Table 4. The effective ice grain diameter retrieved from satellite measurements as specified above is shown in Table 4 as well. One can see that the temporal mismatch between ground (5 February, 22:50) and satellite (6 February, 00:07 UTC) measurement is just 1 h 17 min. It is assumed that the temporal change of albedo can be neglected at DOME C at the time interval 1-2 h.
Figure 2: The total ozone column retrieved using MSI/S-2 observations and also other ground and satellite observations (see Table 3) at DOME C (Antarctica) on NovemberβDecember 2020. The data derived from the ECMWF re-analysis is given as well.
\\begin{table}
\\begin{tabular}{c c c} \\hline
**Instrument** & **Abbreviation** & **Spatial Resolution** \\ \hline Global Ozone Monitoring Experiment & GOME-2 & 80 \(\times\) 40 km \\ \hline Ozone Mapping and Profiling Suite & OMPS & 50 \(\times\) 50 km \\ \hline Ozone Monitoring Instrument & OMI & 13 \(\times\) 24 km \\ \hline TROPOspheric Monitoring Instrument & TROPOMI & 7 \(\times\) 3.5 km \\ \hline Ocean and Land Colour Instrument & OLCI & 0.3 \(\times\) 0.3 km \\ \hline Multi Spectral Imager & MSI & from 0.01 \(\times\) 0.01 km to 0.06 \(\times\) 0.06 km (dependent on the channel, see Table 1) \\ \hline Systeme d'Analyse par Observations Zenithales (morning ground observations) & SAOZ-1 & ground zenith sky transmittance measurements (morning) \\ \hline Systeme d'Analyse par Observations Zenithales (evening ground observations) & SAOZ-2 & ground zenith sky transmittance measurements (evening) \\ \hline \end{tabular}
\\end{table}
Table 3: The instrumentation used in the determination of the total ozone column temporal changes over Dome C (Antarctica) shown in Figure 2. Further details on the satellite instrumentation used are given in [26].
It follows that ground and satellite measurements of spectral albedo give very similar results, which indirectly confirm the validity of our assumptions used in the retrieval procedure. We also note that although we used the spectral MSI/S-2 reflectance measurements at just two wavelengths (443 and 865 nm), the spectral albedo is estimated in a much larger spectral range useful also for the shortwave broadband albedo determination. This is due to the fact that the spectral pure plane snow albedo depends largely on two parameters--the effective absorption length and solar zenith angle. The difference between satellite and
Figure 4: The intercomparison of plane albedo derived from ground measurements on 5 February 2018 and satellite measurements on 4 and 6 February 2018.
Figure 3: (**a**,**b**) The snow spectral plane albedo derived from dual-channel (443 and 865 nm) satellite (blue line) and ground (red) measurements. The measurements were performed on 9 December 2016 at Dome C in Antarctica. The satellite-derived snow albedo is provided on a 10 nm spectral grid. The right panel gives the plane albedo in a broader spectral range not covered by ground measurements.
ground measurements depends on the wavelength, being smaller than 2% in the spectral range 400-1100 nm. This difference is below the error of the respective optical measurements. The largest differences (2%) occur around 1050 nm, where MSI/S-2 does not perform any measurements. The ELAP estimated from the 864.7 nm MSI-S2 channel may differ from that derived from 1020 nm due to vertical snow inhomogeneity effects and spectrally variable light penetration in snow layers. The difference observed in Figures 3 and 4 may be reduced if a new channel (say, 1020 nm) is added to the MSI in the future.
One can conclude from Figure 4 that spectral albedo does not change considerably during 2 days, if measurements are performed at almost the same time of day (say, 4 February (00:17UTC) and 6 February (00:07UTC)). They are close to the ground measurements performed on 5 February (22:50UTC), 1 h 17 min ahead of satellite measurements performed on 6 February (00:07UTC).
## 3 Conclusions
The climatic effects of snow surfaces depend both on the snow fraction and the snow albedo. The retrieval of snow fraction using MSI/S-2 measurements is performed in [34; 35] at 20 m resolution in open terrain. In this paper a simple algorithm to retrieve the snow albedo and total ozone column using the high spatial resolution single-view MSI/S-2 measurements over Antarctica is proposed. Therefore, multiple observations of the same ground pixel from different directions, as used, e.g., in the MODIS snow albedo retrieval algorithm [9], were not needed.
In addition, the algorithm allows the retrieval of the snow grain size on the scale of 10-20 m although this was not explicitly evaluated here. This algorithm should be useful for the understanding of intra-pixel total ozone and snow albedo variability in complement to satellite observations performed on a much coarser spatial resolution scale (0.3-1 km and even larger spatial scales). The algorithm can be extended for the case of polluted snow as discussed in [36].
It was shown that the MSI reflectance in the spectral range 443-865 nm over snow in Antarctica can be modelled using just two parameters (\\(K\\) and \\(L\\)). These parameters are proportional to the total ozone column and the effective diameter of ice grains, respectively. Therefore, we conclude that the main parameters influencing the MSI spectra in the range 443-865 nm are TOC and the effective diameter of snow grains with other effects of secondary importance for a clean Antarctic atmosphere. In particular, the influence of the molecular and aerosol light scattering on the MSI/S-2 reflectance spectra in Antarctica can be neglected in the first approximation. This is especially true for the channel located at 865 nm used for the effective grain diameter (see Equations (5)-(7)) and spectral albedo determination. We also assumed that the MSI reflectance at channel 1 (8A) is the same on the scale of 10 and 60 (20) m, which is true for most of the Antarctic surfaces. Otherwise, the channels (2, 3, 8, see Table 1) with the same spatial resolution (10 m) must be used to derive
\\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Date** & **Time (UTC)** & \(d_{\textit{eff}}\) **(mm)** & **Instrument** \\ \hline
4 February 2018 & 00:17 & 0.21 & MSI \\ \hline
4 February 2018 & 23:38 & 0.32 & MSI \\ \hline
5 February 2018 & 22:50 & 0.21 & Autosolexs \\ \hline
6 February 2018 & 00:07 & 0.21 & MSI \\ \hline
6 February 2018 & 23:47 & 0.36 & MSI \\ \hline \hline \end{tabular}
\\end{table}
Table 4: The times of the satellite/ground albedo retrievals/measurements over Dome C (Antarctica) shown in Figures 3 and 4. Further details on the ground measurements are given in [7]. The satellite SZA was in the range 65-67 degrees and the ground measurements were performed at SZA = 71.6 degrees. The value of \(d_{\textit{eff}}\) for the ground Autosolexs measurements has been retrieved using the same approach as discussed above for the spaceborne observations.
the pair (\\(K,L\\)) from Equation (4). It follows from Figure 1 that the reflectances at channels 1 and 2 and also 8 and 8A have close values. Therefore, two different sets of channels used in the retrieval process produce almost the same retrieval results. Alternatively, the parameters (\\(K,L\\)) can be retrieved using all MSI channels in the framework of the optimal estimation approach.
The determination of the effective light absorption path makes it possible to estimate also snow broadband albedo [29; 37] both from ground and satellite measurements. An important point is that the information on the effective absorption length makes it possible to retrieve both spectral and broadband snow albedo under the assumption of vertically and horizontally homogeneous snow surfaces.
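As a simple illustration of how the spectral albedo derived from the effective absorption length can be turned into a broadband value, the sketch below computes a solar-flux-weighted average. This generic numerical definition is our assumption, not the specific analytic formulation of [29; 37], and the incident solar spectrum must be supplied by the user.

```python
import numpy as np

def broadband_albedo(wavelength_nm, spectral_albedo, solar_flux):
    """Broadband albedo as the solar-flux-weighted average of the spectral albedo.

    wavelength_nm   : wavelength grid [nm]
    spectral_albedo : plane (or spherical) albedo on that grid, e.g. from Eq. (6)
    solar_flux      : incident solar spectral irradiance on the same grid
    """
    return (np.trapz(spectral_albedo * solar_flux, wavelength_nm)
            / np.trapz(solar_flux, wavelength_nm))
```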
The technique is applicable for pure snow cases common in Antarctica, where the atmospheric aerosol load is low [17; 38]. The retrieval is performed outside absorption bands of water vapor, oxygen and carbon dioxide. The Rayleigh optical thickness is smaller than 0.15 at the MSI channels at Arctic and Antarctic sites [18] and, therefore, the molecular scattering over the highly reflective surface can be neglected as well. The accuracy of the retrievals is expected to decrease for the cases of polluted snow and atmosphere.
Author Contributions: Conceptualization, A.K. and S.G.; methodology, A.K.; software, A.K., S.G.; validation, A.K., S.G., G.P. and L.A.; formal analysis, A.K.; investigation, A.K.; resources, A.K.; data curation, S.G., G.P., L.A.; writing--original draft preparation, A.K.; writing--review and editing, S.G., G.P., L.A.; visualization, A.K.; supervision, A.K.; project administration, A.K. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by European Space Agency (ESA) studies: ESRIN contract 4000118926/16/I-NB and the ESRIN contract 4000125043-ESA/AO/1-9101/17/I-NB EO SCIENCE FOR SOCIETY; and by Agence Nationale de la Recherche (France) studies: grant 1-JS56-005-01 (MONISNNOW).

Data Availability Statement: The data are available upon request from the first author.

Acknowledgments: The authors acknowledge with many thanks the use of total ozone measurements performed by the SAOZ team (saoz.obs.uvsq.fr, last accessed: 1 November 2021) and also derived using multiple ESA and NASA instruments (see Table 3). The authors thank ECMWF for providing forecasted TOC data.

Conflicts of Interest: The authors declare no conflict of interest.
## References
* (1) Free, G.; Bresciani, M.; Trodd, W.; Tierney, D.; O'Boyle, S.; Plant, C.; Deakin, J. Estimation of lake ecological quality from Sentinel-2 remote sensing imagery. _Hydrobiologia_**2020**, _847_, 1423-1438. [CrossRef]
* (2) Zhang, D.; Fang, S.; She, B.; Zhang, H.; Jin, N.; Xia, H.; Yang, Y.; Ding, Y. Winter wheat mapping based on Sentinel-2 Data in heterogeneous planting conditions. _Remote Sens._**2019**, _11_, 2647. [CrossRef]
* (3) Debella-Gilo, M.; Gjertsen, A.K. Mapping seasonal agricultural land use types using deep learning on Sentinel-2 image time series. _Remote Sens._**2021**, _13_, 289. [CrossRef]
* (4) Peng, X.; Zhang, T.; Frauenfeld, O.W.; Du, R.; Jin, H.; Mu, C. A holistic assessment of 1979-2016 global cryogenic extent. _Earth's Future_**2021**, \\(9\\), e2020EF001969. [CrossRef]
* (5) Flanner, M.G.; Zender, C.S.; Randerson, J.T.; Rasch, P.J. Present-day climate forcing and response from black carbon in snow. _J. Geophys. Res._**2007**, _112_, D11202. [CrossRef]
* (6) Flanner, M.G.; Arnheim, J.; Cook, J.M.; Dang, C.; He, C.; Huang, X.; Singh, D.; Skiles, S.M.; Whicker, C.A.; Zender, C.S. SNICAR-AD v3: A community tool for modeling spectral snow albedo. _Geosci. Model Dev. Discuss._**2021**. preprint, in review. [CrossRef]
* (7) Kokhanovsky, A.A.; Lamare, M.; Danne, O.; Brockmann, C.; Dumont, M.; Picard, G.; Arnaud, L.; Favier, V.; Jourdain, B.; Le Meur, E.; et al. Retrieval of snow properties from the Sentinel-3 Ocean and Land Colour Instrument. _Remote Sens._**2019**, _11_, 2280. [CrossRef]
* (8) Kokhanovsky, A.A.; Lamare, M.; Rozanov, V.V. Retrieval of the total ozone over Antarctica using Sentinel-3 Ocean and Land Colour Instrument. _J. Quant. Spectrosc. Radiat. Transf._**2020**, _251_, 107045. [CrossRef]
* (9) Schaaf, C.B.; Gao, F.; Strahler, A.H.; Lucht, W.; Li, X.; Tsang, T.; Strugnell, N.C.; Zhang, X.; Jin, Y.; Muller, J.-P.; et al. First operational BRDF, albedo nadir reflectance products from MODIS. _Remote Sens. Environ._**2002**, _83_, 135-148. [CrossRef]
* (10) Hall, D.K.; Riggs, G.A.; Salomonson, V.V.; Di Girolamo, N.E.; Bayr, K.J. MODIS snow-cover products. _Remote Sens. Environ._**2002**, _83_, 181-194. [CrossRef]* Mei et al. (2021) Mei, L.; Rozanov, V.; Pohl, C.; Vountas, M.; Burrows, J.P. The retrieval of snow properties from SLSTR Sentinel-3--Part 1: Method description and sensitivity study. _Cryosphere_**2021**, _15_, 2757-2780. [CrossRef]
* Mei et al. (2021) Mei, L.; Rozanov, V.; Jakel, E.; Cheng, X.; Vountas, M.; Burrows, J.P. The retrieval of snow properties from SLSTR Sentinel-3--Part 2: Results and validation. _Cryosphere_**2021**, _15_, 2781-2802. [CrossRef]
* Chen et al. (2021) Chen, N.; Li, W.; Fan, Y.; Zhou, Y.; Aoki, T.; Tanikawa, T.; Niwano, M.; Hori, M.; Shimada, R.; Matoba, S.; et al. Snow parameter retrieval (SPR) algorithm for GCOM-C/SGLI. _Remote Sens. Environ._**2021**, in press.
* Schweizer et al. (2008) Schweizer, J.; Kronholm, K.; Jamieson, J.B.; Birkeland, K.W. Review of spatial variability of snowpack properties and its importance for avalanche formation. _Cold Reg. Sci. Technol._**2008**, _51_, 253-272. [CrossRef]
* Kaufman et al. (2002) Kaufman, Y.J.; Kleidman, R.G.; Hall, D.K.; Martins, J.V.; Barton, J. Remote sensing of subpixel snow cover using 0.66 and 2.1 um channels. _Geophys. Res. Lett._**2002**, _29_, 1781. [CrossRef]
* Painter et al. (2009) Painter, T.H.; Rittger, K.; McKenzie, C.; Slaughter, P.; Davis, R.E.; Dozier, J. Retrieval of subpixel snow covered area, grain size, and albedo from MODIS. _Remote Sens. Environ._**2009**, _113_, 868-879. [CrossRef]
* Six et al. (2005) Six, D.; Fily, M.; Blarel, L.; Goloub, P. First aerosol optical thickness measurements at Dome C (east Antarctica), summer season 2003-2004. _Atmos. Environ._**2005**, _39_, 5041-5050. [CrossRef]
* Tomasi and Petkov (2015) Tomasi, C.; Petkov, B.H. Spectral calculations of Rayleigh--Scattering optical depth at Arctic and Antarctic sites using a two--term algorithm. _J. Geophys. Res._**2015**, _120_, 9514-9538. [CrossRef]
* Grenfell et al. (1994) Grenfell, T.C.; Warren, S.G.; Mullen, P.C. Reflection of solar radiation by the Antarctic snow surface at ultraviolet, visible, and near-infrared wavelengths. _J. Geophys. Res._**1994**, _99_, 18669-18684. [CrossRef]
* Kang et al. (2020) Kang, S.; Zhang, Y.; Qian, Y.; Wang, H. A review of black carbon in snow and ice and its impact on the cryosphere. _Earth-Sci. Rev._**2020**, _210_, 103346. [CrossRef]
* Kokhanovsky et al. (2020) Kokhanovsky, A.; Box, J.E.; Vandecrux, B.; Mankoff, K.D.; Lamare, M.; Smirnov, A.; Kern, M. The determination of snow albedo from satellite measurements using fast atmospheric correction technique. _Remote Sens._**2020**, _12_, 234. [CrossRef]
* Gorshelev et al. (2014) Gorshelev, V.; Serdyuchenko, A.; Weber, M.; Chehade, W.; Burrows, J.P. High spectral resolution ozone absorption cross-sections--Part 1: Measurements, data analysis and comparison with previous measurements around 293 K. _Atmos. Meas. Tech._**2014**, \\(7\\), 609-624. [CrossRef]
* Gorelick et al. (2017) Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. _Remote Sens. Environ._**2017**, _202_, 18-27. [CrossRef]
* Picard et al. (2016) Picard, G.; Libois, Q.; Arnaud, L. Refinement of the ice absorption spectrum in the visible using radiance profile measurements in Antarctic snow. _Cryosphere_**2016**, _10_, 2655-2672. [CrossRef]
* Warren and Brandt (2008) Warren, S.G.; Brandt, R.E. Optical constants of ice from the ultraviolet to the microwave: A revised compilation. _J. Geophys. Res._**2008**, _113_, D14. [CrossRef]
* Kokhanovsky et al. (2021) Kokhanovsky, A.; Iodice, F.; Lelli, L.; Zschaege, A.; De Quattro, N.; Gasbarra, D.; Retscher, C. Retrieval of total ozone column using high spatial resolution top-of-atmosphere measurements by OLCI/S-3 in the ozone Chappuis absorption bands over bright underlying surfaces. _J. Quant. Spectr. Rad. Transf._**2021**, _276_, 107903. [CrossRef]
* Pommereau and Goutail (1988) Pommereau, J.-P.; Goutail, F. O3 and NO2 ground-based measurements by visible spectrometry during arctic winter and spring 1988. _Geophys. Res. Lett._**1988**, _15_, 891-894. [CrossRef]
* Sarkissian et al. (1997) Sarkissian, A.; Vaughan, G.; Roscoe, H.K.; Bartlett, L.M.; O'Connor, F.; Drew, D.G.; Hughes, P.A.; Moore, D.M. Accuracy of measurements of total ozone by a SAOZ ground-based zenith sky visible spectrometer. _J. Geophys. Res._**1997**, _102_, 1379-1390. [CrossRef]
* Kokhanovsky (2021) Kokhanovsky, A. Snow broadband albedo. _Front. Environ. Sci._**2021**, \\(9\\), 443. [CrossRef]
* Iqbal (1983) Iqbal, M. _An Introduction to Solar Radiation_; Academic Press: New York, NY, USA, 1983; p. 101.
* Savastiouk and McElroy (2004) Savastiouk, V.; McElroy, C.T. Calculating air mass factors for ozone and Rayleigh air mass factor calculations for ground-based spectrometers. In Proceedings of the Quadrennial Ozone Symposium, Kos, Greece, 1-8 June 2004. [CrossRef]
* Picard et al. (2016) Picard, G.; Libois, Q.; Arnaud, L.; Verin, G.; Dumont, M. Development and calibration of an automatic spectral albedometer to estimate near-surface snow SSA time series. _Cryosphere_**2016**, _10_, 1297-1316. [CrossRef]
* Picard et al. (2016) Picard, G.; Libois, Q.; Arnaud, L.; Verin, G.; Dumont, M. Time-Series of Snow Spectral Albedo and Superficial Snow Specific Surface Area at Dome C in Antarctica, 2012-2015. PANGAEA. 2016. Available online: [https://tc.copernicus.org/articles/10/1297/2016/tc-10-1297-2016-assets.html](https://tc.copernicus.org/articles/10/1297/2016/tc-10-1297-2016-assets.html) (accessed on 1 November 2021). [CrossRef]
* Gascoin et al. (2020) Gascoin, S.; Grizzonnet, M.; Bouchet, M.; Salgues, G.; Hagolle, O. Theia snow collection: High-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data. _Earth Syst. Sci. Data_**2020**, _11_, 493-514. [CrossRef]
* Gascoin et al. (2020) Gascoin, S.; Barrou Dumont, Z.; Deschamps-Berger, C.; Marti, F.; Salgues, G.; Lopez-Moreno, J.I.; Revuelto, J.; Michon, T.; Schattan, P.; Hagolle, O. Estimating fractional snow cover in open terrain from Sentinel-2 using the Normalized Difference Snow Index. _Remote Sens._**2020**, _12_, 2904. [CrossRef]
* Kokhanovsky et al. (2021) Kokhanovsky, A.; Di Mauro, B.; Garzonio, R.; Colombo, R. Retrieval of dust properties from spectral snow reflectance measurements. _Front. Environ. Sci._**2021**, \\(9\\), 42. [CrossRef]
* Kokhanovsky (2021) Kokhanovsky, A. _Snow Optics_; Springer Nature: Cham, Switzerland, 2021.
* Kokhanovsky et al. (2020) Kokhanovsky, A.; Tomasi, C.; Smirnov, A.; Herber, A.; Neuber, R.; Ehrlich, A.; Lupi, A.; Petkov, B.H.; Mazzola, M.; Ritter, C.; et al. Remote sensing of Arctic atmospheric aerosols. In _Physics and Chemistry of the Arctic Atmosphere_; Kokhanovsky, A., Tomasi, C., Eds.; (Springer Polar Sciences); Springer: Cham, Switzerland, 2020; pp. 505-590.

We proposed a simple algorithm to retrieve the total ozone column and snow properties (spectral albedo and effective light absorption path) using the high spatial resolution single-view MSI/S-2 measurements over Antarctica. In addition, the algorithm allows the retrieval of the snow grain size on a scale of 10-20 m. This algorithm should be useful for the understanding of intra-pixel total ozone and snow albedo variability in complement to satellite observations performed on a much coarser spatial resolution scale (0.3-1 km and even larger spatial scales).
snow; albedo; snow grain size; light scattering; radiative transfer; inverse problems
Filacchione, G.1, Capaccioni, F.2, Clark, R.N.3, Nicholson, P.D.4, Cruikshank, D.P.5, Cuzzi, J.N.5, Lunine, J.I.4, Brown, R.H.6, Cerroni, P.2, Tosi, F.2, Ciarniello, M.2, Buratti, B.J.7, Hedman, M.M.4, Flamini, E.8
Footnote 1: affiliation: INAF-IAPS, Istituto di Astrofisica e Planetologia Spaziali, Area di Ricerca di Tor Vergata, via del Fosso del Cavaliere, 100, 00133, Rome, Italy. Email: [email protected]
Footnote 2: affiliation: INAF-IAPS, Istituto di Astrofisica e Planetologia Spaziali, Area di Ricerca di Tor Vergata, via del Fosso del Cavaliere, 100, 00133, Rome, Italy.
Footnote 3: affiliation: US Geological Survey, Federal Center, Denver, CO, 80228, USA
Footnote 4: affiliation: Cornell University, Astronomy Department, 418 Space Sciences Building, Ithaca, NY 14853, USA
Footnote 5: affiliation: NASA Ames Research Center, Moffett Field, CA 94035-1000, USA
Footnote 6: affiliation: Lunar Planetary Laboratory, University of Arizona, Kuiper Space Sciences 431A, Tucson, AZ, USA
Footnote 7: affiliation: NASA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
Footnote 8: affiliation: ASI, Italian Space Agency, viale Liegi 26, 00198, Rome, Italy
## 1 Introduction
Saturn, the sixth planet orbiting the Sun at an average distance of 9.56 AU, has one of the most complex systems of satellites and rings in our solar system. The composition of these objects is the result of a particular distribution of primordial material inside the protoplanetary disk (Morbidelli et al. 2007) and in the circum-planetary nebula (Coradini et al. 2010), and of the subsequent evolution caused by large impacts, meteoroid bombardment, weathering and interaction with magnetospheric and exogenic particles. The satellites' surfaces are further shaped by endogenic processes such as cryovolcanism, tectonism and resurfacing (Jaumann et al. 2009).
The entire Saturn system was formed in the outer part of the proto-planetary disk, in a region well beyond the snow-line, currently placed at a distance of about 2.7 AU from the Sun, where water and volatiles condense in ices (Lunine 2006). The study of Saturn's current ring and satellite composition is fundamental to constrain different formation scenarios and evolutionary models (Morbidelli et al. 2007). For this reason we have investigated how water ice and chromophores, e.g. organic and non-ice materials, are distributed on both ring and satellite surfaces by using observations in the 0.35-5.1 \\(\\mu m\\) spectral range returned by VIMS (Visual and Infrared Mapping Spectrometer) (Brown et al. 2004) aboard the Cassini mission.
The variability of water ice and chromophores across the icy surfaces of the Saturn system is traced using specific spectral indicators applied to a high-quality and extensive sub-set of VIMS data, consisting of 2264 disk-integrated observations of the icy satellites and several ring mosaics collected from 2004 until June 2010 (Filacchione et al. 2012). We have adopted three spectral indicators that represent the best tracers of the presence of chromophores and water ice: the spectral slopes in the 0.35-0.55 and 0.55-0.95 \(\mu m\) ranges and the 2 \(\mu m\) water ice band depth (Filacchione et al. 2007; Filacchione et al. 2010). Chromophore-rich surfaces generally appear redder in the visible spectral range and consequently have positive slopes. Water ice-rich surfaces are characterized by blue or neutral slopes and an intense 2.0 \(\mu\)m band depth. As discussed in Filacchione et al. (2012), the water ice band depth is related to the regolith grain size: for a fixed phase angle, small grains (10 \(\mu m\)) have a high reflectance at continuum level but a small band depth. On the contrary, large grains (100 \(\mu m\)) are characterized by a lower reflectance and a larger band depth.
VIMS spectral slopes, expressed in \\(\\mu m^{-1}\\), are measured through a best linear fit to the reflectance spectra in the 0.35-0.55 and 0.55-0.95 \\(\\mu m\\) ranges. Before the fit, the reflectance is normalized to 1 at 0.55 \\(\\mu m\\) to remove illumination effects and separate color variations from brightness (Cuzzi et al. 2009).
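A possible implementation of these spectral indicators is sketched below. The linear fits and the normalization at 0.55 \(\mu m\) follow the description above, while the continuum anchor wavelengths used for the 2 \(\mu m\) band depth are our assumption, since the exact band-depth definition is not spelled out here.

```python
import numpy as np

def spectral_indicators(wavelength_um, reflectance):
    """Visible spectral slopes and 2-um water-ice band depth of a (disk-integrated) spectrum.

    wavelength_um must be sorted in increasing order.
    """
    w = np.asarray(wavelength_um, dtype=float)
    # Normalize to 1 at 0.55 um to separate color variations from brightness
    r = np.asarray(reflectance, dtype=float) / np.interp(0.55, w, reflectance)

    def slope(lo, hi):
        sel = (w >= lo) & (w <= hi)
        return np.polyfit(w[sel], r[sel], 1)[0]      # best-fit slope [um^-1]

    slope_035_055 = slope(0.35, 0.55)
    slope_055_095 = slope(0.55, 0.95)

    # Band depth at 2 um relative to a linear continuum between two assumed shoulders
    shoulders = np.array([1.75, 2.25])               # continuum anchors [um] (assumption)
    r_shoulders = np.interp(shoulders, w, r)
    r_continuum = np.interp(2.0, shoulders, r_shoulders)
    band_depth_2um = 1.0 - np.interp(2.0, w, r) / r_continuum
    return slope_035_055, slope_055_095, band_depth_2um
```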
Since both band depth and spectral slopes increase with solar phase, as explained by (Hapke 1993) and verified on VIMS data by (Filacchione et al. 2012), we have minimized this effect by selecting only observations acquired at intermediate solar phase angles between 20\\({}^{\\circ}\\) and 40\\({}^{\\circ}\\); within this interval the variability of the spectral parameters due to photometry is minimal allowing us to perform a comparative analysis of the entire population of the rings and satellites under the same illumination conditions.
## 2 Chromophores distribution
The 0.35-0.55 \\(\\mu m\\) spectral slope is sensitive to the effects of intraparticle mixing (e.g. grains within grains) of the chromophores in water ice, while the 0.55-0.95 \\(\\mu m\\) slope is an intimate (e.g. salt and pepper mixture) and areal mixing marker (Ciarniello et al. 2011). Pure water ice is characterized by an almost neutral spectral behavior, resulting in high albedo, a slightly positive 0.35-0.55 \\(\\mu m\\) spectral slope and a small negative 0.55-0.95 \\(\\mu m\\) spectral slope, with some variations caused by the different effective grain sizes in the regolith (Filacchione et al. 2012). Adding chromophores causes the two slopes to increase at a rate that depends upon the contaminant's composition and fractional amount. Meanwhile, the 2 \\(\\mu m\\) band depth depends on the relative abundances of ice and dark material, as well as the regolith grain size.
As shown in Fig. 1, top panel, across the rings we measure an increase of the 0.35-0.55 \(\mu m\) spectral slope from the inner C ring (between 1.24-1.52 \(R_{S}\); 1 \(R_{S}\), or Saturn's radius, corresponding to 60,268 km) to the B ring (1.52-1.95 \(R_{S}\)), where the maximum reddening in the Saturnian system is measured (between 3 and 4 \(\mu m^{-1}\)). In the Cassini division (1.95-2.02 \(R_{S}\)) the slope decreases, before reaching another local maximum in the A ring (2.02-2.27 \(R_{S}\)).
Prometheus and Pandora, the two shepherd moons of the F ring, appear different in color: Prometheus is in fact redder than Pandora. Moreover, Prometheus is the only moon of the population that shows a reddening similar to that of the A and B ring particles, and for this reason it could be the missing link connecting the origins, evolution and composition of the rings and satellites: this result strengthens the hypothesis that the minor satellites were accreted from ring material during the viscous spreading of the young, massive disk from which the rings are thought to have formed (Charnoz et al. 2011; Porco et al. 2007).
Moving outwards, the slope is influenced by the presence of the E ring particles, released by the plumes emanating from the tiger stripes fractures at the south pole of Enceladus (Porco et al. 2006): starting from Janus and Epimetheus and out to Mimas, the abundance of chromophores decreases, reaching a minimum on Enceladus. Enceladus shows the bluest spectrum, a distinctive property caused by the presence of very fresh water ice with a negligible amounts of contaminants (Brown et al. 2006). Continuing to the other satellites orbiting within the outermost part of the E ring environment, from Tethys to Dione and Rhea, we observe a continuous increase of the reddening with the radial distance. However, both Calypso and Telesto, the two Tethys lagrangian satellites, and Helene, one of the two Lagrangians of Dione, appear bluer than Tethys and Dione themselves and similar to the color of Enceladus, indicating that the surfaces of these three small moons are fresher and probably coated by a significant amount of E ring particles. VIMS spectral results are supported by high-resolution images returned by Cassini-ISS in which the surfaces of these moons appear unusually smooth and lacking of small craters, compared to other Saturnian moons (Porco et al. 2007).
Regarding the inner regular satellites, the 0.35-0.55 \(\mu m\) slope has a bowl-shaped distribution with a minimum on Enceladus and maxima on the A-B rings in the inner part and on Rhea in the outer part. Rhea's reddening, equal to about 1.5 \(\mu m^{-1}\), is similar to that of the C ring and Cassini division particles. Beyond Titan (not considered in this analysis since the dense atmosphere does not allow us to accurately retrieve spectral indicators of the surface) we observe a linear decrease of the 0.35-0.55 \(\mu m\) slope moving from Hyperion to Iapetus. The decreasing trend continues at Phoebe, the outermost satellite orbiting at 215 \(R_{S}\), for which we have measured a value of about 0.3 \(\mu m^{-1}\) at about 90\({}^{\circ}\) solar phase: these data have not been used in this work because they are outside the 20\({}^{\circ}\)-40\({}^{\circ}\) range.
The radial trend of the 0.55-0.95 \(\mu m\) slope is shown in Fig. 1, central panel. Across the rings, the distribution shows two local maxima corresponding to the C ring and the Cassini division, where the slope reaches about 0.3 and 0.2 \(\mu m^{-1}\), respectively. The A and B rings appear more neutral, with slopes running between 0.0 and 0.1 \(\mu m^{-1}\). The large difference in 0.35-0.55 \(\mu m\) reddening discussed above between Prometheus and Pandora is substantially reduced in the 0.55-0.95 \(\mu m\) range, with a slightly higher reddening observed on Prometheus. Also in this case we observe a bowl-shaped distribution across the E ring environment, with a minimum corresponding to the orbit of Enceladus. Enceladus, like the three Lagrangian moons Calypso, Telesto and Helene, has a bluer spectrum, resulting in more negative 0.55-0.95 \(\mu m\) spectral slope values. Neutral spectral slopes characterize the remaining inner regular satellites. In this spectral range the maximum reddening occurs on the outer satellites: on Hyperion an average spectral slope of 0.7 \(\mu m^{-1}\) is measured between 0.55-0.95 \(\mu m\), while on Iapetus it is 0.4 and about 0.0 \(\mu m^{-1}\) on the dark leading and bright trailing hemispheres, respectively. Such high reddening values are the result of the dominant presence of dark materials such as organics (PAH), iron-silicate nanophases or carbon within water ice (Clark et al. 2012; Cruikshank et al. 2008; Cruikshank et al. 2007). Phoebe's 0.55-0.95 \(\mu m\) spectral slope, not included in Fig. 1, is equal to about -0.1 \(\mu m^{-1}\) at 90\({}^{\circ}\) solar phase.
## 3 Water ice distribution
The 2 \\(\\mu m\\) water ice band depth (Fig. 1, bottom panel) increases almost linearly between 0.5 to 0.7 moving from inner to outer C ring. B ring shows a dual distribution of the band depth: the inner part of the B ring, for radial distance \\(<\\) 1.66 \\(R_{S}\\), or \\(<\\)100.000 km, is below 0.75 while the outer part has higher values, up to 0.8. At the spatial resolution of this dataset (400 km/bin) the ripple between 1.66-1.74 \\(R_{S}\\), or 100.000-105.000 km, is evident where the band-depth variations are correlated with sharp swings in optical depth, which likely alter the collisional environment experienced by the ring particles. The collisional processes occurring in this region are responsible for water ice resurfacing, resulting in an increase in the measured band depth. The band depth drops to about 0.65 in the middle of the Cassini division and then increases to values similar to the outer B ring across the A ring. Prometheus and Pandora have in average the highest band depth (\\(>\\)0.8) among the satellites and as seen previously for the two spectral slopes, also in this case they appear very similar to A and B ring particles. With a band depth of about 0.6, Janus and Epimetheus instead, are more similar to C ring and Cassini division particles. Moving to the regular satellites, from Mimas to Iapetus bright trailing hemisphere, we find that the band depth distribution is basically constant around a value of 0.7 with some small negative deviations associated with Dione and Hyperion. It is interesting to note the 2 \\(\\mu m\\) band depth differences measured in the Lagrangian systems: a difference of about 0.1 is seen between Calypso (higher) and Telesto (lower) with Tethys lying in between. Similarly, Helene has a stronger band depth than Dione. This could be the consequence of the layering of fresh material coming from E ring environment, which causes differences in surface water ice composition and in regolith grain size between the lagragian moons: higher band depth can be explained with more pure water ice or with larger grains.
On average, Dione's band depth is the lowest among the inner regular satellites. This satellite shows a remarkable difference in the distribution of the band depth, which appears stronger on the leading hemisphere (indicated by l in Fig. 1, bottom panel), while on the trailing side (indicated by t), where the wispy terrains are located, the band depth is less intense. Another remarkable difference is seen between Iapetus' bright trailing hemisphere observations, where the 2 \(\mu m\) band depth is approximately 0.7, and the dark leading hemisphere, where it drops below 0.2. On Phoebe, not shown in Fig. 1, we have measured a 2 \(\mu m\) water ice band depth of about 0.15-0.2 at 90\({}^{\circ}\) solar phase.
## 4 Findings
VIMS data reveal striking differences among ring regions and satellites, ranging from Enceladus' and Calypso's bluish surfaces, which appear very bright, water ice-rich and almost uncontaminated, to the more distant Hyperion, Iapetus and Phoebe, where metals, organics and carbon dioxide are mixed within water ice, resulting in lower albedos and redder spectra (Cruikshank et al., 2008; Tosi et al., 2010). In general, ring particles appear to have peculiar properties, being the reddest objects in the Saturn system at visible wavelengths while maintaining sharp and intense water ice bands in the infrared range (Filacchione et al. 2012; Cuzzi et al. 2010; Nicholson et al. 2008; Hedman et al. 2013). This spectral behavior is compatible with crystalline water ice polluted by chromophores, e.g. organic material resulting from the irradiation of simple hydrocarbons (Johnson et al. 1983; Moore et al. 1983), nanophase iron or hematite (Clark et al. 2012; Clark et al. 2008), tholins in intimate mixing (Ciarniello et al. 2011), amorphous silicates (Poulet et al. 2003), carbonaceous particles (Cuzzi & Estrada 1998), or different combinations of these endmembers.
The three radial trends shown in Fig. 1 allow us to simultaneously retrieve and trace the distribution and mixing of chromophores within water ice particles across the Saturnian system: comparing the 0.35-0.55 \\(\\mu m\\) and 0.55-0.95 \\(\\mu m\\) spectral slopes profiles, it appears evident that these two quantities have opposite trends across the ring. The 0.55-0.95 \\(\\mu m\\) reddening becomes stronger across ring regions having low optical depth like the C ring and the Cassini division where the 0.35-0.55 \\(\\mu m\\) slope reaches the minimum values. These trends can be explained with the presence of a small fraction of inclusions of dark material distributed among the ice particles. In contrast, regions showing stronger band depth, like A and B rings, are characterized by higher reddening in the 0.35-0.55 \\(\\mu m\\) range while appearing more neutral in the 0.55-0.95 \\(\\mu m\\): such spectral behavior is compatible with the presence of UV-blue absorbing chromophores implanted in the water ice matrix. The ring particle's properties derived from VIMS radial profiles are therefore in agreement with the compositional trend resulting from the ballistic model (Cuzzi & Estrada 1998) in which the dark material, a residuum of meteoritic and cometary bombardment, accumulates in the Cassini division and C ring where the optical depth is lower.
Moving to the satellites, two distinctive zones characterize the 0.35-0.55 \\(\\mu m\\) and 0.55-0.95 \\(\\mu m\\) spectral slopes radial distributions: the inner satellites, orbiting within the E ring environment, and the outer satellites orbiting beyond Titan, which acts like a \"barrier\" between the rings and satellites inside of its orbit and the satellites and the giant dust ring of Phoebe (Verbiscer et al., 2009) outside its orbit. The 0.35-0.55 \\(\\mu m\\) visible slope of the inner satellites has a bowl-shaped distribution centered on Enceladus. The dominant exogenous process here is the release of particles from the plumes of Enceladus (Porco et al., 2006) which, after feeding the vast E ring region, impact with the embedded satellites. The outer satellites, Hyperion, Iapetus (and Phoebe), have distinctive compositional properties, differing from the inner ones with respect to the presence of carbon dioxide (Filacchione et al., 2010; Cruikshank et al., 2010; Tosi et al., 2010) and dark material, e.g., hydrocarbons, iron nanophases, and possibly tholins (Clark et al. 2012; Cruikshank et al. 2008; Coradini et al 2008). The linear decrease of the reddening observed moving outwards from Hyperion to Phoebe is compatible with the deposition of dust particles coming from Phoebe's ring. Therefore E ring and Phoebe's ring play similar roles in influencing the spectral properties of the moons orbiting within their neighborhoods: while the former spreads fresh and bright water ice particles in the inner Saturnian system, the latter causes a contamination of dark and organics-rich material on the outer moons.
## 5 Summary
In conclusion, the surface composition of the rings and icy satellites that we measure today is the result of the original chemistry of the circum-planetary nebula from which they condensed and of the subsequent dynamical evolution (meteoroid and exogenous particle impacts) and geochemical history (solar and high-energy particle irradiation, loss of high-volatility ices). While the reddening observed at visible wavelengths changes significantly across the Saturnian system, the strength of the water ice band is remarkably uniform once Phoebe's and Iapetus' dark material, possibly originating from Phoebe (Tosi et al. 2010), is removed. The reddening variations are probably caused by secular processes, while the water ice distribution seems to be more related to the primordial composition of the circum-planetary nebula. Therefore we can deduce that the chemistry at the time of formation has not been completely erased by the evolutionary processes of the rings and satellites. Such results can help us discriminate among the different formation scenarios and better understand the processes at the origin of the satellites and rings of the outer planets.
This research has made use of NASA's Astrophysics Data System and was completed thanks to the financial support of the Italian Space Agency (grant I/015/09/0) and NASA through the Cassini project.
## References
* Brown et al. (2004) Brown, R. H., et al. 2004, Space Sci. Rev., 115, 111
* Brown et al. (2006) Brown, R. H. et al. 2006, Science, 311, 1425
* Charnoz et al. (2011) Charnoz, S., et al. 2011, Icarus, 216, 535
* Ciarniello et al. (2011) Ciarniello, M., et al. 2011 Icarus, 214, 541
* Clark et al. (2008) Clark, R. N. et al. 2008, Icarus, 193, 372
* Clark et al. (2012) Clark, R. N., et al. 2012, Icarus, 218, 831
* Coradini et al. (2010) Coradini, A., Magni G. & Turrini, D. 2010, Space Sci. Rev., 153, 411.
* Coradini et al. (2008) Coradini, A. et al. 2008, Icarus, 193, 233
* Cruikshank et al. (2007) Cruikshank, D. P. et al. 2007, Nature, 448, 54
* Cruikshank et al. (2008) Cruikshank, D. P. et al. 2008, Icarus, 193, 334
* Cruikshank et al. (2010) Cruikshank, D. P., et al. 2010, Icarus, 206, 561
* Cuzzi & Estrada (1998) Cuzzi, J. N. & Estrada, J. N. 1998, Icarus, 132, 1
* Cuzzi et al. (2009) Cuzzi, J. et al. 2009, in Saturn from Cassini-Huygens, ed. M. K. Dougherty, L. W. Esposito, S. M. Krimigis (Springer Netherlands), 459
* Cuzzi et al. (2010) Cuzzi, J. N. et al. 2010, Science, 327, 1470
* Filacchione et al. (2007) Filacchione, G., et al. 2007, Icarus, 186, 259
* Filacchione et al. (2010) Filacchione, G., et al. 2010, Icarus, 206, 507
* Filacchione et al. (2012) Filacchione, G., et al. 2012, Icarus, 220, 1064
* Hapke (1993) Hapke, B. 1993, Theory of Reflectance and Emittance Spectroscopy (Cambridge, UK: Cambridge Univ. Press).
* Hedman et al. (2013) Hedman, M. M., et al. 2013, Icarus, 223, 105
* Jaumann et al. (2009) Jaumann, R. et al. 2009, in Saturn from Cassini-Huygens, ed. M. K. Dougherty, L. W. Esposito, S. M. Krimigis (Springer Netherlands), 637
* Johnson et al. (1983) Johnson, R. E., Lanzerotti, L. J., Brown, W. L., Augustyniak, W. M. & Mussil, C. 1983, A&A, 123, 343
* Lunine (2006) Lunine, J. I. 2006, in Meteorites and the Early Solar System II, ed. D. S. Lauretta and H. Y. McSween Jr. (Tucson, AZ: University of Arizona Press), 309
* Moore et al. (1983) Moore, M. H., Donn, B., Khanna, R. & AHearn, M. F., 1983, Icarus, 54, 388
* Morbidelli et al. (2007) Morbidelli, A., Tsiganis, K., Crida, A., Levison, H. F. & Gomes. R. 2007, AJ, 134, 1790
* Nicholson et al. (2008) Nicholson, P. D. et al. 2008, Icarus, 193, 182
* Porco et al. (2006) Porco, C. C., et al., 2006, Science, 311, 1393
* Porco et al. (2007) Porco, C. C., Thomas, P. C., Weiss, J. W., & Richardson, D. C. 2007, Science, 318, 1602
* Poulet et al. (2003) Poulet, P., Cruikshank, D. P., Cuzzi, J. N., Roush, T. L. & French, R. G. 2003, A&A, 412, 305
* Tosi et al. (2010) Tosi, F., Turrini, D., Coradini, A. & Filacchione, G. 2010, MNRAS, 403, 1113
* Verbiscer et al. (2009) Verbiscer, A., Skrutskie, F. & Hamilton, D. P. 2009, Nature, 461, 1098

Figure 1: Radial profiles of spectral indicators for Saturn's rings, minor and regular satellites: visible spectral slope 0.35-0.55 \(\mu m\) (top panel), 0.55-0.95 \(\mu m\) slope (center panel), water ice 2 \(\mu m\) band depth (bottom panel). The ring radial profiles span from the inner C ring (73,500 km) to the outer A ring (141,375 km) at 400 km/sample resolution. Co-orbiting satellites' radial distances are shown with an offset to improve visualization. Spectral quantities are computed from observations taken with solar phases ranging between 20\({}^{\circ}\) and 40\({}^{\circ}\) to minimize photometric effects on the radial trends. Legend: Prometheus (Pr), Pandora (Pa), Epimetheus (Ep), Janus (Ja), Mimas (M), Enceladus (E), Tethys (T), Calypso (Ca), Telesto (Te), Dione (D), Helene (He), Rhea (R), Titan (Ti), Hyperion (H), Iapetus (I). Leading and trailing hemisphere observations for Dione and Iapetus are indicated with (l) and (t), respectively.

Over the last eight years, the Visual and Infrared Mapping Spectrometer (VIMS) aboard the Cassini orbiter has returned hyperspectral images in the 0.35-5.1 \(\mu m\) range of the icy satellites and rings of Saturn. These very different objects show significant variations in surface composition, roughness and regolith grain size as a result of their evolutionary histories, endogenic processes and interactions with exogenic particles. The distributions of surface water ice and chromophores, i.e. organic and non-icy materials, across the Saturnian system are traced using specific spectral indicators (spectral slopes and absorption band depths) obtained from ring mosaics and disk-integrated satellite observations by VIMS. Moving from the inner C ring to Iapetus, we find a marked uniformity in the distribution of the abundance of water ice. On the other hand, the distribution of chromophores is much more concentrated in the ring particles and on the outermost satellites (Rhea, Hyperion and Iapetus). A reduction of red material is observed on the surfaces of the satellites orbiting within the E ring environment, probably due to fine particles from Enceladus' plumes. Once the exogenous dark material covering the Iapetus leading hemisphere is removed, the texture of the water ice-rich surfaces, inferred through the 2 \(\mu m\) band depth, appears remarkably uniform across the entire system.
# Automatic Detection of Natural Disaster Effect on Paddy Field from Satellite Images using Deep Learning Techniques
Tahmid Alavi Ishmann, Amin Ahsan Ali, Md Ahsaful Amin, A K M Mahbubur Rahman
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh
[email protected], [email protected], [email protected], [email protected]
## I Introduction
Rice serves as the staple food for 135 million people in Bangladesh. Natural disasters can have a significant impact on rice production, and it is crucial to accurately identify the affected areas to take preventive measures or provide governmental aid to affected individuals. However, collecting such data requires significant human and economic resources. Satellite images, such as Sentinel-2 images, can be used to automatically detect crop loss and segment the affected areas. In this research, we utilized NDVI as ground truth and DeepLabV3+ for semantic segmentation using both RGB and FCI images. The contributions of this study include identifying possible paddy loss areas, validating ground-truth data, and performing a comparative analysis of the performance using RGB and FCI images. Fig. 1 shows an overview of our work.
## II Background & Context
Previous work has estimated rice production, typically from the NDVI value. Initially, we wanted to focus on boro paddy loss due to the heatwave that occurred on 4th April 2021 [8], but we could not find any related work that estimates paddy loss due to a heatwave from remote-sensing data; this mainly led us to conduct this research. In addition, we annotated crop loss areas and used RGB image differences and FCI image differences as input to the DeepLabV3+ model. We also wanted to see how different band compositions perform at semantic segmentation for measuring crop loss regions.
From Sentinel-2, band-combination images consisting of multiple bands are used. Combination imagery such as False Color Infrared (FCI) is used for vegetation detection [3]. An RGB band combination was also used to experiment with and compare against FCI. For the ground truth, we used the Normalized Difference Vegetation Index (NDVI).
John Weier and David Herring (2000) first introduced NDVI as one of the techniques for measuring vegetation [9]. NDVI is calculated from how much visible and near-infrared light is reflected by vegetation. The majority of visible light that strikes healthy plants is absorbed, whereas a considerable part of near-infrared light is reflected. Vegetation that is unhealthy or sparse reflects more visible light but not as much near-infrared light. This is because chlorophyll, a pigment found in plant leaves, absorbs visible light (between 0.4 and 0.7 \(\mu m\)) and converts it to energy for photosynthesis.
On the other hand, the leaf cell structure reflects near-infrared light well (between 0.7 and 1.1 \(\mu m\)). These wavelengths of light are affected more by the number of leaves a plant has. The NDVI value for a given pixel always ranges from minus one (-1) to plus one (+1); a zero value implies no vegetation, whereas a value near +1 (0.8-0.9) represents the highest possible density of green leaves [9].
Fig. 1: Workflow
In a 2020 paper, the authors monitored paddy rice phenology using Sentinel-2A imagery and the NDVI band composition [5]. They verified the greenness value with CCTV footage of the paddy field and found that Sentinel-2A imagery can be used to estimate paddy rice phenology, and that the start and end of the paddy rice planting season can be determined using the NDVI greenness value. They found that the first phase of paddy vegetative growth starts at an NDVI value of 0.33. This NDVI value is very significant for us, because we later use it as our threshold.
## III Data set preparation
### _Study Area in Bangladesh_
According to The Daily Star's news of 9th April 2021, a massive nor'wester heat wave swept over the country on 4th April 2021 [8]. According to the Department of Agricultural Extension (DAE), 47,000 hectares of boro paddy (BRRI-29) were affected in Kishoreganj, Netrakona, Mymensingh, Sunamganj, Moulvibazar, Barishal and Patuakhali. We selected our study areas in Sunamganj, Kishoreganj and Netrokona, which are adjacent districts. According to the Department of Agricultural Extension 2020, boro is cultivated on 219,300 hectares in Sunamganj, 166,710 hectares in Kishoreganj and 184,530 hectares in Netrakona, which is 73%, 62%, and 22% respectively among the haor districts [4].
### _Source of satellite data_
Google Earth Engine (GEE) provides a large, publicly available catalog of geospatial datasets. We exported Sentinel-2 images from GEE for our study areas. Sentinel-2 carries an optical instrument payload that provides 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60 m spatial resolution [2]. These bands can be combined in various ways for our task.
As different channels capture different land textures, we use different combinations from Sentinel-2 data. These combinations are formulated in a certain way to accomplish specific tasks. In this research, RGB and FCI are the combinations of three bands. NDVI is a single band image that we calculate from other bands using a formula. Each index image has a specific characteristic that can detect specific classes.
Because vegetation strongly reflects near-infrared light and absorbs red light, the vegetation index is good for quantifying the amount of vegetation. The formula for the normalized difference vegetation index is (B8-B4)/(B8+B4); high values suggest a dense canopy [9].
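As a simple illustration, NDVI can be computed per pixel from the near-infrared (B8) and red (B4) bands as follows (a minimal numpy sketch; the array names are placeholders, not part of our actual pipeline):

```python
import numpy as np

def compute_ndvi(b8, b4, eps=1e-6):
    """NDVI = (B8 - B4) / (B8 + B4), computed per pixel."""
    b8 = b8.astype(np.float32)
    b4 = b4.astype(np.float32)
    return (b8 - b4) / (b8 + b4 + eps)  # eps avoids division by zero over water/no-data
```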
### _Ground Truth making_
#### Iii-A1 Making NDVI Different images
To make the ground truth, we subtract the second image's NDVI band composition from the first image's NDVI band composition. Fig. 2 illustrates the subtraction process. As we conduct our study on three districts, each with three years of data, we obtain nine NDVI difference images. Since NDVI is a single-band image, for better visualization we colored the differences in QGIS using the single-band pseudocolor render type: NDVI difference values equal to or above 0.33 are colored red and the rest yellow, as shown in Fig. 3. We then exported the rendered GeoTIFF images from QGIS.
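A minimal sketch of how such a ground-truth mask can be produced programmatically (instead of through QGIS rendering) is shown below; the 0.33 threshold follows the phenology result cited above, and the class codes are arbitrary placeholders:

```python
import numpy as np

def ndvi_difference_mask(ndvi_before, ndvi_after, threshold=0.33):
    """Label pixels whose NDVI dropped by at least `threshold` as crop loss."""
    diff = ndvi_before - ndvi_after
    mask = np.ones_like(diff, dtype=np.uint8)   # 1 = okay region (yellow)
    mask[diff >= threshold] = 2                 # 2 = crop loss area (red)
    mask[np.isnan(diff)] = 0                    # 0 = background / no data
    return mask
```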
Figs. 4 and 5 show the NDVI difference images for Sunamganj and Kishoreganj.
As expected, we can see that there is more NDVI difference in 2021 than in 2020. However, we can also see a large NDVI difference in 2019. This is because a 30-minute-long hailstorm struck south Sunamganj on the morning of 15-04-2019 [1]. Kishoreganj and Netrokona are both neighbouring districts of Sunamganj, so we can see the hailstorm's impact on Kishoreganj and Netrokona too. In the next subsection, we verify these outcomes with field-level interviews with the corresponding farmers.
Fig. 4: Sunamganj NDVI difference, from left to right: 2019, 2020 & 2021
Fig. 3: Coloring the grayscale NDVI difference as a binary class, where red marks the crop loss area & yellow indicates the okay region
Fig. 2: Illustration of the NDVI difference. As an example, we took 2021's NDVI Before 4th April and NDVI After 4th April of Sunamganj. The NDVI Before image is brighter (whiter) than the NDVI After image, which indicates some loss of vegetation in NDVI After
#### Iii-B2 Verify NDVI different as Ground Truth
As we have seen in [5], NDVI can monitor rice phenology. However, we need to be confident in our NDVI difference before using it as the ground truth for the DeepLabV3+ model. For that, we need to visit each pinpointed location.
To do this, we selected two districts, Sunamganj and Kishoreganj, because we observed similar NDVI patterns in Netrokona as well, and Kishoreganj, Sunamganj, and Netrokona are located adjacent to each other. If we can confirm that the Sunamganj and Kishoreganj NDVI difference areas are rice fields affected by the hailstorm in 2019 and the heatwave in 2021, then we can also take Netrokona's NDVI difference as correct and use all these NDVI difference images as ground truth.
We randomly picked 30 points in Sunamganj and Kishoreganj, as shown in Fig. 6. Then, using their latitude and longitude, we found their addresses with a reverse geocoding API. Here we used the Google Maps reverse geocoding API & the Barikoi reverse geocoding API.
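A minimal sketch of reverse geocoding one of the sampled points with the Google Maps Geocoding web API is shown below (the Barikoi API is used analogously); the API key and the example coordinates are placeholders:

```python
import requests

def reverse_geocode(lat, lng, api_key):
    """Return a human-readable address for a latitude/longitude pair."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": api_key},
        timeout=30,
    )
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else None

# Example (hypothetical point): address = reverse_geocode(24.87, 91.37, "YOUR_API_KEY")
```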
From these 30 places, we chose ten to visit in Sunamganj and Kishoreganj.
We mainly focus on these questions at each point:
* Was there a paddy field? (This question was to verify whether we can successfully recognize paddy field patterns from the Sentinel-2A image)
* If yes, was it boro or another variety of paddy?
* Was the field in the circle affected by the heatwave in April 2021, and if yes, by how much?
* Was the field in the circle affected by any natural disaster in 2020, and if yes, by how much?
* Was the field in the circle affected by a hailstorm in 2019, and if yes, by how much?
At all places, we talked with farmers, and they gave us similar information. We found that:
* All selected places were rice fields. They always cultivate rice there; at that time of the year, they grow boro.
* For 2021, they all said they faced boro crop loss due to the heatwave. They told us their crop was burnt by the hot air.
* In 2020 they did not face any major natural disaster and got very good boro production.
* For 2019, most of them mentioned the hailstorm, and they even said the hailstorm had a more devastating effect on the crop than 2021's heatwave.
From this ground-level evaluation, we conclude that our NDVI difference indicates the crop loss regions correctly.
## IV Experimental setup
### _Export Sentinel 2 images from Google Earth Engine (GEE)_
To assess crop loss in Bangladesh after the heatwave on April 4, 2021, we need to collect Sentinel-2 images before and after that date. However, clouds often cover Bangladesh's sky, and the Sentinel-2 cloud mask is not effective under heavy cloud cover. Therefore, we must set our date range strategically. We collected all available images in GeoTIFF format from 2019, 2020, and 2021 for Sunamganj, Kishoreganj, and Netrokona at a scale of 10 meters per pixel.
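A minimal Earth Engine Python sketch of such an export is given below; the collection ID, dates, cloud filter and bounding box are illustrative assumptions rather than the exact parameters we used:

```python
import ee

ee.Initialize()

# Rough Sunamganj bounding box (placeholder coordinates)
aoi = ee.Geometry.Rectangle([90.9, 24.6, 91.8, 25.3])

composite = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterBounds(aoi)
             .filterDate("2021-03-20", "2021-04-03")      # "before" window, prior to 4 April 2021
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median()
             .clip(aoi))

task = ee.batch.Export.image.toDrive(
    image=composite.select(["B2", "B3", "B4", "B8"]),     # blue, green, red, NIR
    description="sunamganj_before_heatwave",
    scale=10,                                             # 10 m per pixel
    region=aoi,
    fileFormat="GeoTIFF",
    maxPixels=1e13,
)
task.start()
```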
### _Input image_
For the input images, we exported from GEE the RGB difference between the before and after images and the FCI difference between the before and after images. The process of making the RGB difference is demonstrated in Fig. 7 and the FCI difference in Fig. 8. After exporting the images from GEE, we exported the rendered images with QGIS. We obtained a total of nine RGB difference images and nine FCI difference images, i.e., three RGB difference images and three FCI difference images for each district.
**RGB :** True color composite uses visible light bands red (B04), green (B03) and blue (B02) in the corresponding red, green and blue color channels, resulting in a natural-colored result [7].
**FCI :** The false color infrared band combination is meant to emphasize healthy and unhealthy vegetation. It uses the combination of near-infrared (B8), red (B04) and green (B03). Plants reflect near-infrared and green light while absorbing red. Plant-covered areas appear deep red due to their high near-infrared reflection, and denser plant growth is represented by a darker red color.
### _Pre-Processing_
#### Iv-C1 Split raster to feed model
The original image is split into 256×256 tiles to feed into the network. As an example, the original dimensions of Sunamganj's 2021 RGB difference image were 8987×7108. We zero-padded it to 9216×7168, a multiple of 256, and then split the image.
All split images are in GeoTIFF format. After splitting, a single scene yields 1008 GeoTIFFs for Sunamganj, 924 for Netrokona and 810 for Kishoreganj. We have three years of data for each district, so in total we obtained 3024 split images for Sunamganj, 2772 for Netrokona and 2430 for Kishoreganj. We also mapped each split image to its corresponding ground truth, in this case the corresponding split NDVI difference GeoTIFF.
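A minimal numpy sketch of the zero-padding and tiling step is shown below (band handling and file I/O omitted); with these sizes, the Sunamganj scene (8987×7108, padded to 9216×7168) yields 36 × 28 = 1008 tiles, matching the counts above:

```python
import numpy as np

def split_into_tiles(image, tile=256):
    """Zero-pad an (H, W, C) array to a multiple of `tile` and cut it into tiles."""
    h, w = image.shape[:2]
    pad_h = (tile - h % tile) % tile
    pad_w = (tile - w % tile) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")
    tiles = []
    for i in range(0, padded.shape[0], tile):
        for j in range(0, padded.shape[1], tile):
            tiles.append(padded[i:i + tile, j:j + tile])
    return tiles
```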
Fig. 5: Kishoreganj NDVI difference, from left to right: 2019, 2020 & 2021
Fig. 6: 2021 before-and-after NDVI difference of Sunamganj and Kishoreganj with 30 sparsely selected points
### _Class Label_
We can see that our NDVI difference has mainly three colors. Black is in the background, red for the crop loss area and yellow for rest. So, for our segmentation, we have three classes. We declared our classes in the CSV file. In our model, we read these classes from that file.
### _Train, Validation and Test Splits_
We have three districts' data as each district has three years of data. So, we have a total of nine images. But we have split our images into 256X256. So now, we have a total of 8226 GeoTIFF images. We will use Sunamganj's data which is 36.76% of our total dataset, as our train dataset, Netrokona's data which is 33.70% of our total dataset, as our validation dataset & Kisorganj data, which is 29.54% of our total dataset as our test dataset.
### _Evaluation Metrics_
**IoU :** We calculated the Intersection over Union (IoU) from the model's predictions and the ground truth over the test image pixels. IoU is calculated by dividing the area of overlap by the area of union.
\\[IoU=\\frac{Target\\cap Prediction}{Target\\cup Prediction}=\\frac{TP}{TP+FP+FN}\\]
**Mean IoU :** This is the IoU averaged over all classes. We first calculate the IoU for each class (three classes in our case) using the formula above, and then divide by the number of classes. We use this metric when evaluating the test results.
**Micro IoU :** Unlike the mean IoU, the micro IoU is computed globally over all classes. We used the micro IoU during the training and validation phases. In this experiment, it is calculated with the formula below.
\[MicroIoU=\frac{TP_{Background}+TP_{Loss}+TP_{OK}}{(TP_{Background}+TP_{Loss}+TP_{OK})+(FP_{Background}+FP_{Loss}+FP_{OK})+(FN_{Background}+FN_{Loss}+FN_{OK})}\]
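Both metrics can be implemented directly from these definitions; the sketch below assumes `pred` and `target` are integer class maps with classes {0: background, 1: loss, 2: okay} (an illustrative encoding):

```python
import numpy as np

def class_counts(pred, target, cls):
    tp = np.sum((pred == cls) & (target == cls))
    fp = np.sum((pred == cls) & (target != cls))
    fn = np.sum((pred != cls) & (target == cls))
    return tp, fp, fn

def mean_iou(pred, target, n_classes=3, eps=1e-9):
    """Average of per-class IoU values."""
    ious = []
    for c in range(n_classes):
        tp, fp, fn = class_counts(pred, target, c)
        ious.append(tp / (tp + fp + fn + eps))
    return float(np.mean(ious))

def micro_iou(pred, target, n_classes=3, eps=1e-9):
    """Single IoU computed from counts pooled over all classes."""
    tp = fp = fn = 0
    for c in range(n_classes):
        t, p, n = class_counts(pred, target, c)
        tp, fp, fn = tp + t, fp + p, fn + n
    return tp / (tp + fp + fn + eps)
```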
### _Augmentation_
Although splitting already provides plenty of training data, augmentation can further improve generalization. We augmented our data by flipping, rotating, panning, etc. [6]. Here we use Albumentations, a fast and flexible image augmentation library.
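A minimal Albumentations pipeline of the kind described above might look as follows; the specific transforms and probabilities are illustrative assumptions, not our exact configuration:

```python
import albumentations as A

train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.3),
])

# Applied identically to each image tile and its ground-truth mask:
augmented = train_transform(image=image_tile, mask=mask_tile)
image_aug, mask_aug = augmented["image"], augmented["mask"]
```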
### _Post-Processing_
After obtaining the output from DeepLabV3+, we assign the output images the same geotransform and projection as the input image, using Python's Geospatial Data Abstraction Library (GDAL), and save the split rasters. After all split images are predicted, we merge them into a single raster.
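A minimal GDAL sketch of re-attaching the georeferencing to a predicted tile is shown below; file names, the `pred_mask` array and the merge step are placeholders:

```python
from osgeo import gdal

src = gdal.Open("input_tile.tif")                       # the split input GeoTIFF
driver = gdal.GetDriverByName("GTiff")
dst = driver.Create("pred_tile.tif", src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Byte)
dst.SetGeoTransform(src.GetGeoTransform())              # copy geotransform from the input
dst.SetProjection(src.GetProjection())                  # copy projection from the input
dst.GetRasterBand(1).WriteArray(pred_mask)              # pred_mask: HxW array of class indices
dst.FlushCache()
dst = None

# All predicted tiles can then be mosaicked, e.g. with gdal.Warp("merged.tif", list_of_tile_paths).
```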
## V Experiment & Result
In our experiment, we used RGB and FCI difference images as input and applied augmentation to both the training input data and the corresponding ground truth masks. Our loss function was the Dice loss, and we used ResNet-101 as the encoder and softmax2d as the activation function. We used the Adam optimizer with a learning rate of 0.0001. The batch size was 16 for the training dataset and 1 for the validation dataset, with 2 workers. We trained for 110 epochs, saving the model with the best micro IoU score on the validation dataset. Training took approximately 4 hours 30 minutes on Colab Pro. After training, we predicted and stored the split test images, later merging them to obtain year-wise results. We perform the following experiments (a minimal sketch of this training setup is given after the list):
1. Train, Validate, and Test with RGB difference images.
2. Train, Validate, Test with FCI difference images.
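The sketch below illustrates the setup described above using the segmentation_models_pytorch library; the data-loading code is omitted, the masks are assumed to be one-hot encoded, and the exact training loop we used may differ:

```python
import torch
import segmentation_models_pytorch as smp
from segmentation_models_pytorch import utils as smp_utils

model = smp.DeepLabV3Plus(
    encoder_name="resnet101",
    encoder_weights="imagenet",
    in_channels=3,            # RGB-difference or FCI-difference tiles
    classes=3,                # background / loss / okay
    activation="softmax2d",
)
loss_fn = smp_utils.losses.DiceLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(110):
    model.train()
    for images, masks in train_loader:      # batch size 16; masks shaped (N, 3, H, W)
        optimizer.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        optimizer.step()
    # validate with batch size 1 and keep the checkpoint with the best micro IoU
```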
In Figs. 9 and 10, we present the Dice loss vs. epoch curves for both RGB and FCI images.
For the RGB difference images, our best micro IoU score on the validation dataset was 0.9581 at epoch 87. At that point, the Dice loss was 0.02386, while for the training dataset the IoU score was 0.9432 and the Dice loss was 0.02964. This was our saved model for RGB.
In Fig. 9, we can see some spikes for the validation dataset with RGB. As our batch size is 1 for validation, spikes are expected. We see these spikes particularly for RGB because the RGB images were noisier than the FCI images.
Fig. 8: Illustration of the FCI difference. As an example, we took 2021's FCI Before 4th April and FCI After 4th April of Sunamganj. The FCI Before image is redder than FCI After, which indicates some loss of vegetation in FCI After
Fig. 7: Illustration of the RGB difference. As an example, we took 2021's RGB Before 4th April and RGB After 4th April of Sunamganj. The RGB Before image is greener than RGB After, which indicates some loss of vegetation in RGB After
For the FCI difference images, our best IoU score on the validation dataset was 0.9693 at epoch 80. At that point the loss was 0.01686, while for the training dataset the IoU score was 0.9471 and the loss was 0.02758.
The micro IoU score was a little better for FCI at the training phase. However, the difference is minimal.
At the testing phase, we have three years of data. First, we describe the overall performance for RGB and FCI difference images in test data; later, we analyze separate performance for each year's image for test data. We provide the mean IoU and F1 here to compare the performance.
Table I shows that the mean Intersection over Union (IoU) score is 0.77 for RGB and 0.81 for FCI images. When specifically examining the crop loss areas, the IoU score is 0.41 for RGB and 0.52 for FCI. Analysis of the data in Table IV reveals a very low IoU score for both FCI and RGB in the affected areas during the 2021 heatwave. Fig. 13 shows few red pixels in the predicted masks for both FCI and RGB compared to the ground truth. However, Fig. 12 indicates that there were many false-positive loss-area pixels in the FCI result for the 2020 season, and Table III demonstrates that RGB performed better than FCI in the crop loss areas. During the hailstorm-affected year of 2019, the affected area's IoU score was greater than in other years, with a score of 0.49 for RGB and 0.61 for FCI from Table II. In Fig. 11, the predicted mask shows better detection of the red areas in the FCI image compared to RGB.
Fig. 11: 2019βs RGB and FCI difference images with ground truth mask and predicted mask
Fig. 12: 2020βs RGB and FCI difference images with ground truth mask and predicted mask
Fig. 10: Training vs Validation dice loss for FCI images
Fig. 9: Training vs Validation dice loss for RGB images
## VI Conclusion
This paper proposes an approach to develop ground truth data for paddy loss detection. It shows that subtracting Sentinel-2 NDVI before and after a disaster can be a way to develop ground truth for paddy loss areas. This method can be used to train various segmentation models for automatic segmentation of paddy loss areas. After training, both RGB and FCI images appear effective for automatic segmentation of paddy loss areas, although our IoU scores were not very high. For the loss-area IoU, we obtained a better result with FCI than with RGB. Another observation, from the year-wise segmentation results, is that the results were not the same for paddy loss due to the heatwave in 2021 and paddy loss due to the hailstorm in 2019. Since hailstorms and heatwaves affect paddy fields in different ways, different outcomes are expected. However, both RGB and FCI perform better for hailstorms. For very destructive disasters for paddy fields, such as hailstorms, tornadoes, cyclones and floods, RGB can be used. Our approach is vulnerable to heavily cloud-covered areas. In the future, we will carry out further study to develop the segmentation model with Synthetic Aperture Radar (SAR) data, which is not affected by clouds.
## Acknowledgment
This project is supported by Independent University Bangladesh and ICT Division of Bangladesh Government.
## References
* [1] Dhaka Tribune (2019) Hailstorms damage crops in Sunamganj, farmers devastated. [online] Available at: [https://archive.dhalstribune.com/banglassd8/nation/2019/04/18/hailistorms-damage-crops-in-sunamganj-farmers-devastated](https://archive.dhalstribune.com/banglassd8/nation/2019/04/18/hailistorms-damage-crops-in-sunamganj-farmers-devastated) [Accessed 16 January 2022].
* [2] ESA. Sentinel-2 Missions - Overview. [online] Available at: [https://sentinel.esa.int/web/sentinel/missions/sentinel-2/overview](https://sentinel.esa.int/web/sentinel/missions/sentinel-2/overview) [Accessed 16 January 2022].
* [3] J. H. Everitt, C. Yang, R. S. Fletcher, M. R. Davis, and D. L. Drawe (2004) Using aerial color-infrared photography and QuickBird satellite imagery for mapping wetland vegetation. Geocarto International, 19(4), pp. 15-22.
* [4]A. Rattananonghghak and W. Boongsoood (2019-10) Design of Machine Vision System for Sugarcane Buds or Rings Detection. Journal of Image and Graphics57 (3), pp. 102-106. External Links: ISSN 1078-jorig, Document Cited by: SSII-A.
This paper aims to detect rice field damage from natural disasters in Bangladesh using high-resolution satellite imagery. The authors developed ground truth data for rice field damage from the field level. At first, NDVI differences before and after the disaster are calculated to identify possible crop loss. The areas equal to and above the 0.33 threshold are marked as crop loss areas, as significant changes are observed there. The authors also verified crop loss areas by collecting data from local farmers. Later, different bands of satellite data (Red, Green, Blue) and (False Color Infrared) are used to detect the crop loss area. We used the NDVI difference images as ground truth to train the DeepLabV3+ model. With RGB, we got IoU 0.41 and with FCI, we got IoU 0.51. As FCI uses the NIR, Red and Green bands and NDVI is the normalized difference between the NIR and Red bands, the greater IoU score of FCI compared to RGB is expected. But RGB does not perform very badly here. So, where other bands are not available, RGB can be used to understand crop loss areas to some extent. The ground truth developed in this paper can be used for segmentation models with very high resolution RGB-only images such as Bing, Google etc.
Keywords: Semantic segmentation, DeepLabV3+, Sentinel-2, Google Earth Engine, Paddy field
# Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges
Anzhu Yu1, Wenjun Huang1, Qing Xu1, Qun Sun, Wenyue Guo, Song Ji, Bowei Wen, Chunping Qiu
1Authors share equal contribution. A. Yu, W. Huang, Q. Xu, Q. Sun, W. Guo, S. Ji, B. Wen and C. Qiu are with the PLA Strategic Support Force Information Engineering University, Zhengzhou 450001, China. Corresponding author: Wenjun Huang ([email protected]).
## I Introduction
Sea ice extraction (SIE) has been a crucial problem in many applications, such as polar navigation [1], terrain analysis [2], polar cartography [3] and polar expeditions [4]. With the rapid development of machine learning techniques, computational capability and data acquisition, the SIE problem has entered the deep learning era. Machine learning-based approaches are increasingly being introduced to detect, segment or map sea ice.
As a branch of machine learning, deep learning has attracted increasing attention for solving the SIE problem over the last five years, and the mapping or cartography problem can subsequently be solved on top of it. Most of the literature converts the SIE problem into another common topic, namely semantic segmentation, which determines the category of each pixel via a post-classification procedure after the category probabilities are regressed by deep convolutional neural networks. In recent years, there has been a growing body of research focusing on SIE. To gain insights into this field, we conducted a literature search using the keywords "sea ice extraction" and applied the Citespace [5] statistical algorithm to visualize the co-citation network of relevant publications from the past five years (Fig. 1). The visualization highlights key themes and research areas associated with SIE, with a particular emphasis on remote sensing and SAR. Currently, SIE primarily relies on remote sensing techniques such as visible/infrared remote sensing, passive microwave remote sensing, and active microwave remote sensing [6]. Visible/infrared remote sensing can provide texture information of sea ice, which is helpful for SIE tasks. However, it has certain limitations. Firstly, it is restricted in polar regions due to the occurrence of polar day and polar night phenomena. Additionally, the orbital inclination (typically \(97^{\circ}\)-\(98^{\circ}\)) and altitude of conventional remote sensing satellites affect observations in polar regions, leading to polar data gaps where effective observations are not possible. Consequently, polar orbit satellites are relied upon for conducting observations. Passive microwave remote sensing offers all-weather, global coverage capabilities and therefore holds certain advantages. Nevertheless, its drawback lies in relatively low spatial resolution: typical passive microwave instruments, such as AMSR-E and AMSR2, generally provide spatial resolutions at the kilometer level, which may not fulfill the requirements for detailed SIE and further mapping. In contrast, active microwave remote sensing techniques, such as SAR, offer higher resolution capabilities; SAR can achieve resolutions at the meter level, making it highly suitable for fine-scale sea ice mapping [7][8]. As a consequence, current research on SIE predominantly focuses on the application of active microwave remote sensing technologies, notably SAR. Besides, significant achievements have been made in SIE tasks through the utilization of optical remote sensing [9][10] and the integration of SAR with optical approaches [11][12][13]. In addition to the aforementioned satellite observations, some studies have utilized real-time ice monitoring using aerial images captured by cameras onboard icebreakers [14][15] and unmanned aerial vehicles (UAVs) [16][17]. These methods serve as valuable supplementary approaches for SIE tasks.
Machine learning methods have found significant application in the field of SIE. Recently, several reviews have provided summaries of sea ice remote sensing. In [18], the focus was on analyzing the advantages and disadvantages of sea ice classification methods based on SAR data. In [19], the advances of Global Navigation Satellite System-Reflectometry (GNSS-R) data in SIE, ice concentration estimation, ice type classification, ice thickness inversion, and ice elevation were reviewed. In [8], a comprehensive analysis of sea ice sensing using polarimetric SAR data was conducted, summarizing key geophysical parameters for SIE, including ice type, concentration, thickness, and motion, as well as SAR scattering characteristics analysis. However, these papers primarily focused on providing overviews of sea ice monitoring methods using SAR technology and lacked comprehensive summaries of specific technical approaches. Moreover, they predominantly concentrated on summarizing sea ice remote sensing methods and lacked a comprehensive overview of downstream tasks related to SIE, specifically applications. Therefore, this review aims to provide a comprehensive summary of the latest SIE methods developed in the past five years. It aims to systematically categorize and analyze these methods, taking into account the associated datasets and subsequent mapping applications. Additionally, this review incorporates the latest technological advancements to assess the challenges and future developments in SIE through the utilization of large-scale models.
The overall structure of this review is presented in Fig. 2. Section II of this review will provide detailed insights into recent methods for SIE. Section III will summarize the currently available open-source datasets related to ice. Section IV aims to outline downstream tasks and enumerate the generated geospatial information products resulting from ice extraction. Lastly, Section V will highlight areas where future developments are needed.
## II Method of sea ice extraction
### _Classical image segmentation methods_
In the early stages, research on SIE primarily relied on statistical algorithms. These algorithms generally combined probabilistic models and classical classification methods with texture or polarization features to generate sea ice type maps. There is a rich body of literature on classical image segmentation methods, and this section covers only some recent publications.
#### Ii-A1 Bayesian
Fig. 1: The co-citation network for SIE research. The frequency of the keywords is represented by the size of the nodes, the strength of their relationships by the width of the linking lines, and the publication year by the color of the nodes.

A new Bayesian risk function is proposed in [20] to minimize the likelihood ratio (LR) for supervised classification of polarimetric SAR data. A novel spatial criterion is also introduced to incorporate spatial contextual information into the classification method, achieving a sea ice classification accuracy of 99.9%. Bayes' theorem, as described in [21], is utilized to compute the posterior probabilities of each class at each observed location based on the texture features extracted from the gray-level co-occurrence matrix (GLCM) of the image. The work [22] labels each pixel in the SAR imagery as ice or water using MAP-Guided Ice Classification (MAGIC) [23] and models the labeled pixels with a Bernoulli distribution. The estimated ice concentration is then obtained by incorporating the labeled data into the Bayesian framework along with AMSR-E ice concentration data. The work [24] introduces a Gaussian Incidence Angle (GIA) classifier for sea ice classification, which replaces the constant mean vector in the multivariate Gaussian probability density function (PDF) of the Bayesian classifier with a linearly varying mean vector. The simplicity and fast processing time of the GIA classifier enable near real-time ice charting. The work [25] utilizes this GIA classifier to generate classified winter time series of sea ice in the regions covered during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) campaign, providing reliable support for navigation.
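To make the GIA idea concrete, the sketch below evaluates a Gaussian log-likelihood whose class mean varies linearly with incidence angle; it is purely illustrative and assumes that per-class parameters (slope, intercept, covariance) have already been fitted to training data, with all names being hypothetical:

```python
import numpy as np

def gia_log_likelihood(x, theta, slope, intercept, cov):
    """Log-likelihood of a backscatter feature vector x (e.g. [HH, HV] in dB)
    under a Gaussian whose mean varies linearly with incidence angle theta."""
    mu = intercept + slope * theta                 # linearly varying class mean
    d = x - mu
    inv_cov = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv_cov @ d + logdet + x.size * np.log(2.0 * np.pi))

def gia_classify(x, theta, class_params):
    """class_params: {label: (slope, intercept, cov)} estimated from training data."""
    scores = {c: gia_log_likelihood(x, theta, *p) for c, p in class_params.items()}
    return max(scores, key=scores.get)
```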
#### Ii-A2 Maximum Likelihood Estimation
In [26], Maximum Likelihood Estimation is used to compute the probabilities of ice and water in the observed SAR images. An unsupervised mixture Gaussian segmentation algorithm is proposed in [27], which provides reasonable sea ice classification results under similar incidence angle conditions. The work [28] applies logistic regression (LR) statistical techniques to demonstrate that the average and variance of texture features, specifically the GLCM, are most suitable for maximum likelihood supervised classification, thus extracting the sea ice density map of the western Antarctic Peninsula region.
#### Ii-A3 Thresholding Method
Zhu et al. [29] utilized the Delay-Doppler Map (DDM) of the Global Navigation Satellite System (GNSS) signals reflected by sea ice and seawater, which exhibit distinct scattering characteristics. The differential DDM, observed as the difference between two adjacent normalized DDMs, provides information about the differences between the two DDMs. By employing a thresholding method, the type of the reflecting surface can be determined, thus extracting the sea ice. Building upon this, Alexander et al. [30] proposed an adaptive probability threshold for automatic detection of ice and open water areas. Qiu et al. [9] discussed the textural and edge features of different sea ice types in various turbid regions, using the Yellow River Delta as an example, laying the foundation for the classification of sea ice types. Automatic extraction of sea ice can be achieved by employing the OTSU algorithm to determine the threshold automatically.
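As a minimal illustration of threshold-based extraction, the snippet below applies Otsu's method to a single-band image; the assumption that brighter pixels correspond to sea ice is scene-dependent and serves only as an example:

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_ice_mask(gray_image):
    """Binary ice mask from a single-band image using Otsu's automatic threshold."""
    t = threshold_otsu(gray_image)
    return (gray_image >= t).astype(np.uint8)   # 1 = ice (assumed brighter), 0 = water
```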
#### Ii-A4 Other Methods
Additionally, Zhang et al. [31] proposed an automatic classification method for SAR sea ice images combining Retinex and the Gaussian Mixture Model algorithm (R-gmm). Experimental results demonstrated that this algorithm effectively enhances the clarity of SAR imagery compared to the Single Scale Retinex Algorithm, GMM, and Markov Random Field (MRF)-based methods, thereby improving segmentation accuracy. Liu et al. [32] introduced a method based on curvelet transform and active contour to automatically detect the marginal ice zone (MIZ) in SAR imagery. In [33], a multi-scale strategy of the curvelet transform was further utilized to extract curve-like features from SAR images, distinguishing the MIZ from open water and consolidated ice areas. Xie et al. [34] employed the polarization ratio (PR) between VV and HH in SAR images calculated based on the roughness characteristics of the sea surface scattering and the X-Bragg backscatter model. This measurement comparison can differentiate between open water and sea ice, achieving an overall accuracy of approximately 96%. Mary et al. [35] utilized the coefficient of variation (COV) from co-pol/cross-pol SAR data to detect thin ice during the Arctic freezing period using a synergistic algorithm.
#### Ii-A5 Limitations
Generally, classical image segmentation methods exhibit high efficiency for simple segmentation tasks. However, as the complexity of the input image scenes increases, it becomes challenging to determine the appropriate thresholds for multiple-class objects. Moreover, the choice of thresholds is sensitive to image brightness and noise, which limits the generalization ability when applied to different
Fig. 2: Structure of this review.
scenes.
While classical methods have their strengths, these limitations pave the way for exploring alternative approaches to address the aforementioned challenges. By leveraging advanced techniques such as machine learning, probabilistic models, and adaptive algorithms, researchers have sought to overcome the issues associated with threshold-based segmentation. These alternative methods offer promising avenues to enhance segmentation accuracy, handle complex scenes, and mitigate the sensitivity to brightness and noise.
### _Machine learning-based methods_
Machine learning methods primarily leverage the polarimetric characteristics of sea ice images (HH, HV, HH/HV) and selected features such as GLCM texture features. These features are then fed to rule-based machine learning methods for classification, enabling the differentiation between sea ice and open-water areas. Furthermore, some approaches in the literature further refine the classification of sea ice, distinguishing between multi-year ice (MYI) and first-year ice (FYI), among other categories. We now discuss each method and its specific contributions to sea ice classification.
#### II-B1 Iterative region growing using semantics (IRGS)
Yu et al. [24] proposed an image segmentation method called IRGS. IRGS [36] models the backscatter characteristics using Gaussian statistics and incorporates a Markov random field (MRF) model to capture spatial relationships. It is an unsupervised classification algorithm that assigns arbitrary class labels to identified regions, with the mapping of class labels left for manual intervention by human operators. Building upon IRGS, several studies have addressed sea ice-water classification. In [23], a binary ice-water classification system called MAGIC was developed. Subsequently, in [37], the authors used glocal IRGS to capture the spatial contextual information of RADARSAT-2 SAR images and identified homogeneous regions using a hierarchical approach. Pretrained SVM models were then used to assign ice-water labels. The IRGS method, combined with modified energy functions and the contributions of glocal and SVM classification results, balanced the contextual and texture-based information. This method was tested in [38] with four different SAR data types: dual-polarization (DP) HH and HV channel intensity images, compact polarimetric (CP) RH and RV channel intensity images, all derived CP features, and quad-polarimetric (QP) images. The experimental results demonstrated that utilizing CP data achieved the best classification results, which were further supported by similar findings in [39] and [40]. The self-training IRGS (ST-IRGS) was introduced in [41], which integrated hierarchical region merging with conditional random fields (CRF) to iteratively reduce the number of nodes while utilizing edge strength for classification and region merging. The key feature of ST-IRGS is the embedded self-training procedure. Wang et al. [42] extensively tested IRGS on dual-polarization images for lake ice mapping, minimizing the impact of incidence angle. The experimental results demonstrated that the IRGS algorithm provides reliable ice-water classification with high overall accuracy.
As emerging image classification methods advance, IRGS has been seamlessly integrated with various classification techniques to enhance sea ice classification. In [43], IRGS segmentation was integrated with supervised labeling using RF. The IRGS segmentation algorithm incorporated spatial context and texture features from the ResNet, utilizing region pooling for ice-water classification [44]. In [45], a comparison was made between two benchmark pixel classifiers, SVM and RF, and two models, IRGS-SVM and IRGS-RF. The experimental results indicated that IRGS-RF achieved better performance and demonstrated stronger robustness. In [46], the IRGS algorithm was utilized to oversegment the input HH/HV scene into superpixels. A graph was constructed on the superpixels, and node features were extracted from the HH/HV images. With limited labeled data, a two-layer graph convolution was employed to learn the spatial relationships between nodes. In [47], the segmentation results from the IRGS algorithm were combined with pixel-based predictions from the Bayesian CNN, and by analyzing the uncertainty of SAR images, sea ice and water were distinguished.
These studies demonstrate the versatility of IRGS and its integration with different classification methodologies, leading to improved performance and enhanced classification accuracy in sea ice analysis.
#### II-B2 Random Forest (RF)
Han et al. [48] utilized texture features from backscatter intensity and GLCM as input variables for sea ice mapping and developed a high spatial resolution summer sea ice mapping model for KOMPSAT-5 EW SAR images using an RF model. Dabboor et al. [49], [50] employed the RF classification algorithm to identify effective compact polarimetric (CP) parameters and analyzed the discriminatory role of CP parameters for distinguishing between FYI and MYI. Gegiuc et al. [51] applied RF for estimating the ridge density of sea ice in C-band dual-polarization SAR images. Han et al. [52] evaluated four representative sea ice algorithms using binary classification with RF based on PM-measured sea ice concentration (SIC) data. Tan et al. [53] employed an RF feature selection method to determine optimal features for sea ice interpretation and implemented a semi-automated sea ice segmentation workflow. Murashkin et al. [54] utilized an RF classifier to investigate the importance of polarimetric and texture features derived from GLCM for the detection of leads. Marcaccio et al. [55] employed image object segmentation and an RF classifier for automated mapping of coastal ice to support studies of winter fish ecology in the Laurentian Great Lakes. Yang et al. [56] developed an RF model to extract lake ice conditions from satellite imagery. Park et al. [57] performed noise correction on dual-polarization images, carried out supervised texture-based image classification using the RF classifier, and achieved semi-automated SIE. Meanwhile, in [58], the first approach directly utilizing operational ice charts for training classifiers without any manual work was proposed based on RF.
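As a hedged sketch of the GLCM-texture-plus-RF pipeline described above (assuming scikit-image and scikit-learn; not the exact configuration of any cited study), per-patch texture statistics can be computed and fed to a random forest. The patches and ice/water labels below are random placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch_uint8: np.ndarray) -> np.ndarray:
    """Compute a small GLCM texture vector for one image patch (values 0-255)."""
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: 32x32 backscatter patches and ice/water labels
# (0 = open water, 1 = sea ice); both are random stand-ins for real samples.
patches = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(200)]
labels = np.random.randint(0, 2, 200)

X = np.stack([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```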
These studies demonstrate the diverse applications of RF in sea ice analysis, including sea ice mapping, classification of different ice types, feature selection, noise correction, and automated ice detection. The RF model has shown its effectiveness in leveraging various image features for accurate and efficient sea ice analysis and has contributed to advancements in sea ice research and monitoring.
#### II-B3 Multilayer Perceptron (MLP)
Ressel et al. [59, 60] compared the polarimetric backscattering behavior of sea ice in X-band and C-band SAR images. Features extracted from the images were fed into a trained Artificial Neural Network (ANN) for SIE. The experiments found that the most useful classification features were matrix-invariant features such as geometric strength, scattering diversity, and surface scattering fraction. In [61], further evidence was presented for the high reliability of neural network classifiers based on polarimetric features, demonstrating their suitability for near real-time operations in terms of performance, speed, and accuracy. The work [62] used neural networks to describe the mapping between image features and ice-water classification, with texture features extracted from co-polarized and cross-polarized backscatter intensities and autocorrelation. It was tested for ice-water classification in the Fram Strait, showing that the C-band reliably reproduced the contours of ice edges, while the L-band had advantages in areas with thin ice and calm water. Singha et al. [63] fed the extracted feature vectors into a neural network classifier for pixel-wise supervised classification. The classification process highlighted matrix-invariant features such as geometric strength, scattering diversity, and surface scattering fraction as the most informative. The findings were consistent for both X-band and C-band frequencies, with minor variations observed for L-band. Furthermore, the work [64] explored the influence of seasonal changes and incidence angle on sea ice classification using an ANN classifier. The study concluded that in dry and cold winters, the classifier could adapt to moderate differences associated with the incidence angle. Additionally, it was found that the incidence angle dependency of backscatter remained consistent across various Arctic regions and ice types.
Karvonen et al. [65] estimated ice concentration based on SAR image segmentation and an MLP, combining high-resolution SAR images with lower-resolution radiometer data. In [66], they further demonstrated that an MLP can estimate SIC from SAR alone, but the results were more reliable and accurate when SAR was combined with microwave radiometer data. Furthermore, in [67], they estimated the SIC and thickness in the Bohai Sea using dual-polarization SAR images from the 2012-2013 winter, AMSR2 radiometer data, and sea ice thickness data based on the High-resolution Ice Thickness and Surface Properties (HIGHTSI) model. Additionally, Yan et al. [68] demonstrated the feasibility of using TDS-1 satellite data for neural network-based sea ice remote sensing with a satellite-based GNSS-R digital data acquisition system. Their approach relied on an MLP neural network with backpropagation trained using the Levenberg-Marquardt (LM) algorithm (800 inputs, one hidden layer with 3 neurons, and one output). In a recent study [69], it was shown that an MLP outperformed LR in capturing nonlinear decision boundaries, thus reducing misclassifications in certain cases. In addition, the MLP combined epistemic (model) uncertainty estimation with heteroscedastic aleatoric uncertainty to allow estimation of uncertainty at each pixel location.
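A minimal sketch of an MLP regressor for SIC estimation in the spirit of the small networks described above (assuming scikit-learn); the single hidden layer with three neurons loosely mimics the design reported in [68], while the optimizer, feature vectors, and targets are placeholders rather than the cited configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder inputs (e.g., flattened DDM values or texture features) and
# SIC targets in [0, 1]; both are random stand-ins for real training data.
X_train = np.random.rand(500, 800)
y_train = np.random.rand(500)

# One hidden layer with a handful of neurons, loosely following the small
# architectures used for GNSS-R and radiometer-based SIC estimation.
mlp = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

X_new = np.random.rand(10, 800)
sic_pred = np.clip(mlp.predict(X_new), 0.0, 1.0)  # keep predictions in [0, 1]
```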
Overall, MLP has proven to be a valuable tool in sea ice remote sensing, providing accurate classification results and enabling the estimation of sea ice parameters. As research in this field continues, further advancements in MLP models and their integration with other data sources will contribute to a better understanding of sea ice dynamics, improved sea ice monitoring, and enhanced decision-making for various applications related to sea ice.
#### II-B4 Support Vector Machine (SVM)
Prior to the surge in popularity of deep learning, SVMs were among the most favored models due to their solid mathematical foundation and their ability to reach a global optimum (unlike linear models trained with gradient descent, which may only converge to local optima). SVMs are commonly employed for binary classification tasks and are defined as linear classifiers that maximize the margin in the feature space.
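A minimal sketch of margin-based ice/water classification with an SVM (assuming scikit-learn); the features, labels, RBF kernel, and regularization value are illustrative assumptions rather than settings from any cited study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder per-patch features (e.g., backscatter statistics and GLCM texture
# values); labels: 0 = open water, 1 = sea ice.
X = np.random.rand(300, 12)
y = np.random.randint(0, 2, 300)

# Standardization matters because the margin is defined in feature space.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
svm.fit(X, y)
print("Support vectors per class:", svm.named_steps["svc"].n_support_)
```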
The work [70] utilized backscattering coefficients, GLCM texture features, and SIC as the basis for SVM-based sea ice classification. Experimental results demonstrated that SVMs exhibit stronger robustness against normalization effects than Maximum Likelihood (ML) classification. Several cases [71, 72, 73, 74] showcased the effectiveness of SVMs in distinguishing open water areas from sea ice. In one study [75], combining Kalman filtering, GLCM, and SVM yielded better sea ice accuracy than the simple CNN models available at that time. Yan et al. [76, 77] proposed a simple yet effective feature selection (FS) approach and employed SVM classification, resulting in improved accuracy and robustness compared to NN, CNN, and NN-FS approaches. Furthermore, experiments indicated that SVMs require less data storage and fewer tuning parameters.
Additionally, researchers have explored combining SVM with other methods to enhance classification accuracy. For example, the work [78] integrated statistical distribution, region connection, multiple features, and SVM into a CRF model. Experimental comparisons revealed that SVM-CRF achieved the best performance. Moreover, using Transductive Support Vector Machines (TSVM) as the classifier achieved good performance on two hyperspectral images obtained from EO-1 [79].
In summary, SVMs were highly popular models in the field of sea ice classification before the rise of deep learning. They offer robustness, suitability for binary classification tasks, and the potential for integration with other techniques, contributing to their effectiveness in accurately distinguishing sea ice from other classes. Furthermore, SVMs have advantages such as lower data storage requirements and fewer tuning parameters.
#### II-B5 Others
In addition to the commonly used machine learning methods mentioned above, decision trees (DT), LR, and k-means have also been used in ice classification tasks. DT is commonly used to solve binary classification problems. For example, the work [80] employed a supervised classification model based on DT to differentiate ice lakes from water ice using the radiometric and textural properties of Landsat 8 OLI multispectral data. Furthermore, Lohse et al. [81] applied DT to multi-class problems by decomposing them into a series of binary questions, with each branch of the tree separating one class from all other classes using a selected feature set specific to that class. In the Fram Strait region, ice was accurately classified into categories such as grey ice, lead ice, deformed ice, level ice, grey-white ice, and open water. Komarov et al. [82] modeled the probability of ice presence in the study area using LR and automatically detected ice and open water from RADARSAT dual-polarized imagery. Additionally, based on the aforementioned modeling approach, they developed a multi-scale SAR ice-water inversion technique [83]. In [84], a multi-stage model was proposed for sea ice segmentation using superpixels. The preprocessing involved enhancing contrast and suppressing noise in high-resolution optical images, and the segmentation results were refined through superpixel generation, K-means classification, and post-processing.
Furthermore, various machine learning algorithms have been combined to better extract sea ice. Wang et al. [85] proposed a two-round weight voting strategy in ensemble learning. In the first round of voting, six base classifiers, namely naive Bayes, DT, K-Nearest Neighbors (KNN), LR, ANN, and SVM, were employed, and misclassified pixels were further refined through fine classification. Kim et al. [86] combined image segmentation, image correlation analysis, and machine learning techniques, specifically RF, extremely randomized trees, and LR, to develop a fast ice classification model. Liu et al. [87] selected KNN and SVM classifiers for single-feature-based sea ice classification, while classification based on multiple feature combinations was performed using the selected KNN classifier. In [88], a Gaussian Markov Random Field model for automatic classification was introduced, where the initial model parameters and the number of categories were determined by fitting the histogram of the imagery with a finite Gaussian mixture distribution. Experimental results show that it achieves good classification performance.
#### II-B6 Limitations
In summary, researchers have integrated different machine learning algorithms to improve SIE. The two-round weight voting strategy and LR have demonstrated favorable classification performance. Combining image segmentation, correlation analysis, and machine learning techniques has facilitated the development of fast ice classification models. Additionally, the Gaussian Markov Random Field technique and self-supervised learning approaches have shown promise in SAR sea ice image classification. However, these approaches often involve manual feature extraction prior to network training, which can be a labor-intensive and time-consuming process. Additionally, when dealing with complex image scenes, the training process can become intricate and challenging.
### _Deep learning-based methods_
Traditional approaches to sea ice classification rely heavily on manual feature extraction from remote sensing images and the construction of classifiers. However, this methodology entails significant human and time costs, and often yields less accurate results in complex scenarios. In contrast, deep learning offers the ability to automatically learn and extract features, enabling more effective handling of sea ice classification tasks. Deep learning methods, such as classification networks and semantic segmentation networks, have been widely applied in sea ice classification, showcasing remarkable performance in feature extraction and classification, thus significantly improving the accuracy of sea ice classification. In this section, we will discuss the applications of deep learning methods in sea ice classification and explore the performance of different models in this domain, as shown in Fig. 3.
#### II-C1 Supervised Learning
Early studies generally used simple CNN structures for sea ice classification. Wang et al. [89] were the first to employ CNN for SIC estimation from SAR images. Their work utilized a two-layer architecture consisting of convolutional and pooling layers, followed by a fully connected operation, eliminating the need for separate feature extraction or post-segmentation processing. The generated SIC maps exhibited an absolute average error of less than 10% compared to manually interpreted ice analysis charts. In [90], a fully convolutional neural network (FCNN) was proposed for estimating SIC from polarimetric SAR images. Experimental results showed slightly higher accuracy in SIC estimation using FCNN compared to CNN, along with additional computational efficiency. In [91], a three-layer CNN with convolutional and pooling operations, as well as non-linear transformations, was constructed. This CNN demonstrated reduced differences and biases between ice concentration and labels compared to MLP or ASI algorithms, highlighting the superiority of CNN. In [92], the CIFAR-10 CNN model was adapted to construct a CNN architecture, and experimental results demonstrated that CNN-based SIE achieved higher accuracy compared to traditional SVM methods. Yan et al. [93, 94] designed a classification-oriented CNN for SIE and a regression-based CNN for SIC estimation. The CNN comprised five 7x7 convolutional and pooling layers, followed by two fully connected layers. This was the first application of CNN technology to TDS-1 DDM data for SIE and SIC estimation. Compared to NN, this approach exhibited improved overall accuracy and required fewer parameters and less data preprocessing. Han et al. [95] utilized GLCM to extract spectral and spatial joint features from hyperspectral sea ice images and constructed a 3D-CNN for sea ice type classification. In [96], CNN was employed for sea ice type classification based on Sentinel-1 SAR data, distinguishing between four categories: ice-free, young ice, FYI, and old ice. Experimental comparisons with existing machine learning algorithms based on texture features and RF demonstrated improved accuracy and efficiency. CNN-based SIC estimation was shown to outperform earlier estimation algorithms in [97]. Additionally, Malmgren-Hansen et al. [98] tested CNN under the scenario of disparate resolutions between Sentinel-1 SAR and AMSR2 sensors and found that CNN was suitable for multi-sensor fusion with high robustness. Additionally, the integration of SE-Block into a 3D-CNN deep network was proposed in [99] to enhance the contribution of different spectra for sea ice classification. By optimizing the weights of various spectral features through the fusion of SE-Block, based on their respective contributions, the quality of samples was further improved. This approach enables highly accurate classification of small-sample remote sensing sea ice images.
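A hedged PyTorch sketch of the kind of small patch-based CNN regressor described above; the layer sizes, the dual-pol input, and the 32x32 patch size are illustrative and do not reproduce any specific cited architecture.

```python
import torch
import torch.nn as nn

class SmallSICNet(nn.Module):
    """Toy CNN mapping a dual-pol (HH, HV) patch to a single SIC value in [0, 1]."""
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),        # SIC as a fraction in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Illustrative forward/backward pass on random 32x32 dual-pol patches.
model = SmallSICNet()
patches = torch.randn(8, 2, 32, 32)
target_sic = torch.rand(8, 1)
loss = nn.functional.mse_loss(model(patches), target_sic)
loss.backward()
```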
Given the significant progress in deep learning, a wide range of mature classification and segmentation networks have been developed. Researchers have successfully applied these existing networks to achieve accurate SIE. By building upon these established networks, they have been able to effectively extract sea ice from various data sources and achieve accurate results. In [100], a hyperspectral sea ice image classification method based on principal component analysis (PCA) was proposed. A comparison was made among SVM, 1D-CNN, 2D-CNN, and 3D-CNN, showing promising results in sea ice classification with fewer training samples and shorter training time. Xu et al. [101] employed transfer learning to extract features from patches using AlexNet and applied a softmax classifier, achieving an overall classification accuracy of 92.36% on test data. They also improved SIC estimation by augmenting the training dataset with more independent samples of undersampled classes [102]. The impact of transfer learning, data augmentation, and input size on deep learning methods for binary classification of sea ice and open water, as well as multi-classification of different types of sea ice, was further investigated in [103]. Subsequently, DenseNet [104] was introduced and demonstrated excellent performance on the challenging ImageNet database. In [105], DenseNet was employed to extract SIC from SAR images, achieving errors of 5.24% and 7.87% on the training and testing sets, respectively. DenseNet161 was used in [106], where multi-scale techniques were employed for automatic detection of the MIZ in SAR images. Analysis of the DenseNet prediction results by Kruk et al. [107] revealed that neural networks faced greater challenges in distinguishing different types of ice samples compared to differentiating between water and ice samples. Lyu et al. [108] obtained SIE and classification results for the first time from real polarimetric SAR data using the Normalizer-Free ResNet (NFNet) [109]. The Sea Ice Residual Convolutional Network (AS-SI-Resnet) was proposed in [110], and experimental results demonstrated its superiority over MLP, AlexNet, and traditional SVM methods. The authors further considered spatial characteristics and temporal variations of sea ice and introduced long short-term memory (LSTM) networks to improve the accuracy of sea ice classification [111].
Building upon the outstanding performance of CNN in SIE tasks, researchers have further explored its application in larger datasets and research areas. Kortum et al. [112] combined convolutional neural networks with dense conditional random fields (DCRF) and incorporated additional spatio-temporal background data to enhance model robustness and achieve multi-seasonal ice classification. Zhang et al. [113] developed a deep learning framework called Multiscale MobileNet (MSMN), and experimental tests demonstrated an average improvement of 4.86% and 1.84% in classification accuracy compared to the SCNN and ResNet18 models, respectively. Singh Tamber et al. [114] trained a CNN using the binary cross-entropy (BCE) loss function to predict the probability of ice, and for the first time, explored the concept of augmented labels to enhance information acquisition in sea ice data.
In various domains, deep learning has made remarkable advancements in semantic segmentation in recent years. In particular, the U-Net network has been widely applied in various semantic segmentation tasks and has shown good segmentation performance. Researchers have also explored the application of the U-Net architecture in SIE. Ren et al. [115] proposed a U-Net-based model for sea ice and open water SAR image classification. This model can classify sea ice at the pixel level. Subsequently, the authors introduced a dual-attention mechanism, forming a dual-attention U-Net model (DAU-Net), which improved the segmentation accuracy compared to the U-Net model [116, 117]. Kang et al. [10] improved the decoding network and loss function, achieving excellent results in the 2021 High-Resolution Challenge. Baumhoer [118] used a modified U-Net for automatic extraction of Antarctic glacier and ice shelf fronts. Ji et al. [119] constructed the BAU-NET by adding a batch normalization layer and an adaptive moment estimation optimizer to the U-Net. In addition, an FCN inspired by the U-Net architecture was applied to SIC prediction [120]. Radhakrishnan et al. [121] proposed a novel training scheme using curriculum learning based on U-Net to make the model training more stable. Wang et al. [122] stacked U-Net models to generate aggregated sea ice classifiers. Stokholm et al. [123] studied the effect of increasing the number of layers and receptive field size in the U-Net model on extracting SIC from SAR data. RES-UNET-CRF (RUF) was proposed in [124], which leverages the advantages of residual blocks and Convolutional Conditional Random Fields (Conv-CRFs), as well as a dual-loss function. Experimental results show that the proposed RUF model is more effective compared to U-Net, DeepLabV3, and FCN-8. Song et al. [125] proposed a network called E-MPSPNet, which combines multi-scale features with scale-wise attention. Compared to mainstream segmentation networks such as U-Net, PSPNet, DeepLabV3, and HED-UNet, the proposed E-MPSPNet performs well and is relatively efficient. UNET++ was proposed in [126], and it performs well in medical image segmentation tasks.
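For reference, a much-reduced U-Net-style encoder-decoder for pixel-wise ice/water segmentation is sketched below (assuming PyTorch, dual-pol input patches, and two output classes); it keeps only two encoder scales plus skip connections, whereas the SIE models cited above are considerably deeper and often add attention or CRF refinement.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net for binary ice/water segmentation of dual-pol patches."""
    def __init__(self, in_channels: int = 2, num_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# Illustrative use: per-pixel logits over {water, ice} for a 128x128 patch.
model = TinyUNet()
logits = model(torch.randn(1, 2, 128, 128))   # -> (1, 2, 128, 128)
```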
Fig. 3: Chronological overview of the most relevant deep learning-based SIE methods.
Murashkin et al. [127] applied UNET++ to the task of mapping Arctic sea ice in Sentinel-1 SAR scenes. Feng et al. [128] proposed a joint super-resolution (SR) method to enhance the spatial resolution of original AMSR2 images. They used a DeepLabv3+ network to estimate SIC, which demonstrated good robustness in different regions of the Arctic at different times. In addition, Zhang et al. [129] combined semantic segmentation frameworks with a histogram modification strategy to delineate the calving fronts of Greenland's glaciers, finding that the combination of histogram normalization and DRN-DeepLabv3+ was the most suitable. A hierarchical deep learning-based pipeline was designed in [130], which significantly improved the classification performance in numerical analysis and visual evaluation compared to previous flat N-way classification methods.
In addition, Colin et al. [131] conducted segmentation research on ten marine meteorological processes using the fully supervised U-Net framework, demonstrating the superiority of supervised learning over weakly supervised learning in both qualitative and quantitative terms. Hoffman et al. [132] employed U-Net with satellite thermal infrared window data for sea ice lead detection. An improved U-Net was used for glacier ice segmentation in [133]; it introduced a new self-learning boundary-aware loss, which improved segmentation performance on debris-covered glacier ice. CNNs have not only been applied successfully to SIE tasks but have also been used to extract river and lake ice, enabling continuous monitoring of glacial lake evolution on Earth [134, 135, 136]. These studies provide useful references for deep learning-based SIE tasks.
With the growing popularity and falling cost of UAV technology, and considering its high spatiotemporal resolution, it has been widely applied in ice monitoring and can to some extent fill gaps in satellite imagery. Zhang et al. [14, 17] proposed the ICENET and ICENETv2 networks for fine-grained semantic segmentation of river ice from UAV images captured over the Yellow River. ICENET achieved good results in distinguishing open water, surface ice, and background. In addition to UAV imagery, a few studies have utilized in-situ digital sea ice images captured by airborne cameras. Compared to large-scale satellite images, information recorded by airborne cameras covers smaller spatial scales, providing more detailed information about the formation of the surrounding sea ice at higher resolution. Dowden et al. [137] constructed semantic segmentation datasets based on photographs taken from the Nathaniel B. Palmer icebreaker in the Ross Sea of Antarctica. SegNet and PSPNet architectures were used to establish detailed baseline experiments for the datasets. In [138], an automated SIE algorithm was integrated into a mobile device. In [139], considering the impact of raindrops on the segmentation results of captured images, raindrop removal techniques were developed to improve the classification performance. In [140], a semantic segmentation model based on a conditional generative adversarial network (cGAN) was proposed; this model is robust and reduces the effect of raindrops on the segmentation results. In addition, a fast online shipborne system was developed and validated in [15] for detecting ice and estimating its location to provide "ground truth" information supporting satellite observations. Ice-Deeplab [141] was developed to segment airborne images into three classes: Ocean, Ice, and Sky. Zhao et al. [142] improved the U-Net network by introducing VGG-16 and ResNet-50 encoders, constructing the new networks VU-Net and RU-Net, and achieved good results in tests on mid-to-high-latitude winter sea ice images captured by airborne cameras. Furthermore, a multi-label sea ice classification model embedded with SE modules was applied to airborne images [143], showing a significant improvement in accuracy compared to machine learning methods such as RF and gradient boosting decision trees [144].
Deep learning techniques have also found application in predicting SIC from daily observations of passive microwave sensors such as SMMR, SSM/I, and SSMI/S [145, 146, 147]. Chen et al. [148] have utilized passive microwave and reanalysis data to quantitatively predict SIC, thereby providing not only navigational assurance for human activities in the Arctic but also valuable insights for studying Arctic climate change. Additionally, Gao et al. [149, 150] have made significant contributions by employing collaborative representation and a transferred multilevel fusion network (MLFN) to detect and track sea ice variations from SAR images, which holds crucial importance for ensuring maritime safety and facilitating the extraction of natural resources.
#### II-C2 Semi-supervised Learning
The current research on SIE is often limited by the scarcity of available datasets. To extract accurate information from large-scale datasets when only a limited number of labeled samples is available, researchers have introduced SSL [151]. SSL is a technique that leverages unlabeled data to improve model performance. In the context of sea ice classification tasks, SSL can exploit unlabeled sea ice images to enhance the model's classification accuracy. Staccone et al. [152] presented an SSL method based on generative adversarial networks (GANs) for sea ice classification. The approach leverages labeled and unlabeled data from two different sources to acquire knowledge and achieve more accurate results. Khaleghian [153] proposed a Teacher-Student label propagation method based on SSL (TSLB-SSL) to deal with a small number of labeled samples. Experimental results demonstrated its superior generalization capability compared to state-of-the-art fully supervised and three other semi-supervised methods, namely semi-GANs, MixMatch, and LP-SSL. Jiang et al. [46] proposed a semi-supervised sea ice classification model (IRGS-GCN) that combines graph convolution to address this challenge. Furthermore, a weakly supervised CNN approach was proposed in [154] for ice floe extraction. This research leveraged a limited number of manually annotated ice masks as well as a larger dataset with weak annotations generated through a watershed segmentation model, requiring minimal effort. By effectively leveraging unlabeled or weakly labeled data, this method was able to build more accurate extraction models on limited labeled datasets.
#### II-C3 Unsupervised Learning
Due to ongoing technological advancements, unsupervised learning has emerged as a promising approach for sea ice classification tasks. Taking advantage of the principle that SAR imagery can depict the electromagnetic properties of sea ice, Huang et al. employ a guided learning approach based on physical characteristics, designing the structure and constraints of the models to better capture the scattering characteristics and information of sea ice in SAR imagery. By combining physical models, prior knowledge can be introduced into deep learning models, enhancing their interpretability and generalization capability. In their work [155], the scattering mechanism was encoded as topic compositions for each SAR image, serving as physical attributes to guide CNN in autonomously learning meaningful features. A novel objective function was designed to demonstrate the learning process of physical guidance. The unsupervised method achieved sea ice classification results comparable to supervised CNN learning methods. In another work [156], a novel physics-guided and injected learning (PGIL) unsupervised approach for SAR image classification was proposed. Compared to data-driven CNN and other pre-training methods, PGIL significantly improved classification performance with limited labeled data. Furthermore, in [157], uncertainty was embedded into transfer learning to estimate feature uncertainty during the learning process. Experimental results demonstrated that this method achieved better sea ice classification performance.
These studies all demonstrate that physics-guided learning can help address the issue of scarce sea ice data. Manual annotation of SAR imagery is time-consuming and expensive, making it challenging to obtain large-scale annotated data. However, physical characteristics can provide additional information to help models achieve more accurate classification and segmentation with limited labeled data. By leveraging physical models and prior knowledge, synthetic SAR imagery can be generated for model training and optimization, thereby alleviating the problem of data scarcity. Therefore, future research can focus on achieving a more comprehensive and accurate understanding and classification of SAR imagery by combining physical characteristics with deep learning methods.
#### II-C4 Limitations
The application of deep learning in sea ice classification has certain limitations. One of these limitations is its dependence on labeled sea ice data for training, yet currently, there is a lack of large-scale and representative benchmark datasets. Additionally, the absence of large-scale models like SAM poses a challenge in determining whether it is feasible to conduct large-scale training across different regions and latitudes to adapt to varying SIC tasks. Furthermore, research on multi-source data fusion in SIC is relatively limited. The challenge lies in leveraging the complementary characteristics of different data sources to improve the accuracy of SIC. Multi-source data fusion can encompass remote sensing images acquired from different sensors, meteorological data, and oceanic observation data, among others. By integrating and analyzing these diverse datasets, more comprehensive and accurate sea ice information can be obtained.
## III Accessible ice datasets
According to the guidelines established by the World Meteorological Organization (WMO), sea ice can be classified in multiple ways, taking into account factors such as the stages of its growth process, its movement patterns, and the horizontal dimensions of its surface. The predominant classification method found in the literature is based on the developmental stages of sea ice, which encompass frazil ice, nilas ice, FYI, and MYI. Additionally, some studies focus on specific tasks, such as the binary classification of open water and sea ice, as well as the multi-classification of different types of sea ice.
Currently, as researchers' interest in sea ice continues to grow, there is a rising availability of openly accessible datasets. In order to meet the demands of further experimental evaluation and to establish a standardized framework for future research, we have compiled a comprehensive database. This database encompasses all currently available open-source SAR-based, optical-based, airborne camera-based and drone-based datasets. A total of 12 datasets have been collected, accompanied by detailed descriptions of their sources. The emphasis is placed on key attributes such as sensor types, study areas, data sizes, and partitioning methods, ensuring a comprehensive and structured resource for the research community.
### _SAR-based datasets_
#### III-A1 Radiation characteristics of sea ice
SAR is the most commonly used active microwave data type and has been employed in 80% of SIC publications. The radar wavelength, polarization mode, and incidence angle of SAR have significant impacts on extraction performance. The specific parameters can be found in [7].
* **Radar wavelength** Much of the literature on sea ice classification has discussed the effectiveness of different radar wavelengths, including the X-band, L-band, and C-band. In summary, the X-band and Ku-band are suitable for winter sea ice monitoring, while the L-band offers advantages for summer sea ice monitoring. The C-band, which lies between the Ku-band and L-band, provides a balanced choice for sea ice monitoring across different seasons, and many sea ice monitoring tasks therefore opt for C-band SAR. The study [158] demonstrates that, compared to the C-band, the L-band is more accurate in detecting newly formed ice.
* **Polarization mode** Polarimetric techniques offer valuable insights for sea ice identification by capturing more detailed surface information using polarimetric SAR. This leads to improved classification of different sea ice types. For instance, the distinctive rough or deformed surfaces of FYI result in higher backscattering coefficients in cross-polarization. Conversely, MYI, known for its stronger volume scattering, exhibits higher backscattering coefficients in both co-polarization and cross-polarization. Notably, Nilas ice, characterized by its smooth surface and high salinity content, demonstrates consistently low backscattering coefficients across both polarizations in radar observations.
* **Incidence angle** In many scattering experiments, the dependence of sea ice backscattering coefficients on the incidence angle can be observed distinctly. When a radar illuminates a calm open water surface, the echo signal is prominent when the incidence angle is close to vertical or extremely small. However, as the incidence angle increases, backscattering from the sea surface weakens and the surface appears progressively smoother to the radar. Studies have shown that at higher frequency bands, increasing the incidence angle improves the classification accuracy between sea ice and open water. Additionally, the backscattering coefficients during the melting period of sea ice are also influenced by the incidence angle. For instance, in HH-polarized data, the backscattering coefficients obtained at small incidence angles are significantly higher, and they exhibit a linear relationship with increasing incidence angle.
#### III-A2 Datasets
**SI-STSAR-7 [159]** The dataset is a spatiotemporal collection of SAR imagery specifically designed for sea ice classification. It encompasses 80 Sentinel-1 A/B SAR scenes captured over two freeze-up periods in Hudson Bay, spanning from October 2019 to May 2020 and from October 2020 to April 2021. The dataset includes a diverse range of ice categories. The labels for the sea ice classes are derived from weekly regional ice charts provided by the Canadian Ice Service. Each data sample represents a 32x32 pixel patch of SAR imagery with dual-polarization (HH and HV) SAR data. These patches are derived from a sequence of six consecutive SAR scenes, providing a temporal dimension to the dataset.
**The TenGeoP-SARwv dataset [16]** The dataset is built upon the acquisition of Sentinel-1A wave mode (WV) data in VV polarization. It comprises over 37,000 SAR image patches, which are categorized into ten defined geophysical classes.
**SAR WV Semantic Segmentation** The dataset is a subset of The TenGeoP-SARwv dataset. It consists of three parts: training, validation, and testing. The images comprise 1200 samples and are stored as PNG format files with dimensions of 512x512x1 uint8. The label data is stored as npy files, represented by arrays of size 64x64x10, where each channel represents one of the ten meteorological classes.
**KoVMrMI** The dataset utilizes Sentinel-1 Interferometric Wide (IW) SAR data, including Single-Look Complex (SLC) and Ground Range Detected High-Resolution (GRDH) products in the HH channel. The GRDH images are annotated with seven types of sea ice in patches of size 256x256. The H/\\(\\alpha\\) labeling is obtained by processing the dual-polarization SLC data using SNAP software.
**SAR based Ice types/ice edge dataset for deep learning analysis** The dataset is specifically compiled for sea ice analysis in the northern region of the Svalbard archipelago, utilizing annotated polygons as references. It encompasses a total of 31 scenes and contains six distinct classes. The dataset is organized into data records, referred to as patches, which are extracted from the interior of each polygon using a stride of 10 pixels. Each class is represented by patches of different sizes, including 10x10, 20x20, 32x32, 36x36, and 46x46 pixels.
**AI4SeaIce [123]** The dataset consists of 461 Sentinel-1 SAR scenes matched with ice charts produced by the Danish Meteorological Institute during the period of 2018-2019. The ice charts provide information on SIC, development stage, and ice form in the form of manually drawn polygons. The dataset also includes measurements from the AMSR2 microwave radiometer sensor to supplement the learning of SIC, although the resolution is much lower than that of the Sentinel-1 data. Building upon the AI4SeaIce dataset, Song et al. [125] constructed an ice-water semantic segmentation dataset.
**Arctic sea ice cover product based on SAR [122]** The dataset is based on Sentinel-1 SAR and provides Arctic sea ice coverage data. Approximately 2500 SAR scenes per month are available for the Arctic region. Each S1 SAR image acquired in the Arctic has been processed to generate NetCDF sea ice coverage data, with each S1 image corresponding to one NC file. The spatial resolution of the SAR-derived sea ice cover is 400 m. Processed S1 data acquired over the Arctic from 2019 to 2021, together with the corresponding sea ice coverage data, have been released on the website.
### _Optical-based datasets_
#### III-B1 Common optical sensors
There are several types of optical sensors commonly used for ice classification:
* **MODIS** MODIS is an optical sensor widely used for ice classification. It is carried on the Terra and Aqua satellites. By observing the reflectance and emitted radiation of the Earth's surface, MODIS can provide valuable information about ice characteristics such as color, texture, and spectral properties.
* **VIIRS** VIIRS is an optical sensor with multispectral observation capabilities, used for monitoring and classifying the Earth's surface. It provides high-resolution imagery and has applications in ice classification.
* **Landsat series** The Landsat satellites carry sensors that provide multispectral imagery for land cover classification and monitoring, including ice classification. Sensors such as OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) on Landsat 8, as well as previous sensors like ETM+ (Enhanced Thematic Mapper Plus), have been extensively used in ice classification tasks.
* **Sentinel series** The European Space Agency's Sentinel satellite series includes a range of sensors for Earth observation, including multispectral and thermal infrared sensors. The multispectral sensor on Sentinel-2 is utilized for ice classification and monitoring, while the sensors on Sentinel-3 provide information such as ice surface temperature and color.
* **HY-1 (Haiyang-1)** HY-1 also contributes to ice classification and monitoring. The HY-1 satellite is a Chinese satellite mission dedicated to oceanographic observations, including the monitoring of sea ice. The HY-1 satellite carries the SCA (Scanning Multichannel Microwave Radiometer) sensor, which operates in the microwave frequency range. This sensor can provide measurements of SIC, sea surface temperature, and other related parameters. By detecting the microwave emissions from the Earth's surface, the SCA sensor can differentiate between open water and ice.
These optical sensors capture spectral information or radiation characteristics in different bands, enabling the acquisition of valuable data on ice morphology, types, and distribution. They play a crucial role in ice classification and monitoring. These sensors are widely employed in remote sensing and Earth observation, providing valuable data for ice monitoring and research purposes.
#### III-B2 Datasets
Compared to SAR-based datasets, there are fewer datasets based on optical imagery. To the best of our knowledge, there are currently two open-source optical imagery datasets available:
* **2021Gaofen** The dataset is based on HY-1 visible light images with a resolution of 50m. The scenes cover the surrounding region of the Bohai Sea in China. The provided images have varying sizes ranging from 512 to 2048 pixels and consist of over 2500 images. Each image has been manually annotated at the pixel level for sea ice, resulting in two classes: sea ice and background. The remote sensing images are stored in TIFF format and contain the R-G-B channels, while the annotation files are in PNG format with a single channel. In the annotation files, sea ice pixels are assigned a value of 255, and background pixels have a value of 0.
* **Arctic Sea Ice Image Masking** The dataset consists of 3392 satellite images of the Hudson Bay sea ice in the Canadian Arctic region, captured between January 1, 2016, and July 31, 2018. The images are acquired from the Sentinel-2 satellite and composed of bands 3, 4, and 8 (false color). Each image is accompanied by a corresponding mask that indicates the SIC across the entire image.
### _Datasets based on alternative acquisition methods_
Ice classification datasets based on alternative acquisition methods include imagery captured by icebreakers and drones.
* **Airborne camera-based datasets** The dataset is constructed from GoPro images captured during a two-month expedition conducted by the Nathaniel B. Palmer icebreaker in the Ross Sea, Antarctica [137]. The video clips captured can be found at [https://youtu.be/BNZu1uxNvlo](https://youtu.be/BNZu1uxNvlo). These images were manually annotated using the open-source annotation tool PixelAnnotationTool into four categories: ice, ship, ocean, and sky. The dataset was divided into three sets, namely training, validation, and testing, in an 8:1:1 ratio. Data augmentation was performed by horizontally flipping the images, resulting in a training dataset of 382 images.
* **River ice segmentation [160]** The dataset collects digital images and videos captured by drones during the winter seasons of 2016-2017 from two rivers in Alberta province: the North Saskatchewan River and the Peace River. The images in the dataset are segmented into three categories: ice, anchor ice, and water. The training set consists of 50 pairs, while the validation set includes 104 images; however, there are no labels available for the validation set.
* **NWPU_YRCC2 dataset** A total of 305 representative images were selected from videos and images captured by drones during aerial surveys of the Yellow River's Ningxia-Inner Mongolia section. These images contain four target classes and were cropped to a size of 1600 x 640 pixels. The majority of these images were collected during the freezing period. Each pixel of the images was labeled into one of four categories: coastal ice, drifting ice, water, and other, using Photoshop software. The dataset was split into training, validation, and testing sets in a ratio of 6:2:2, comprising 183, 61, and 61 images, respectively.
These datasets provide valuable resources for training and evaluating ice classification algorithms using imagery from icebreakers and drones. They contribute to the development of accurate and robust models for ice classification, utilizing alternative data sources.
## IV Applications
Given the progress in SIE and classification technologies, obtaining accurate spatial distribution and dynamic changes of sea ice has become increasingly vital. Through careful analysis and evaluation, a multitude of valuable geographic information products have been developed. These products play a pivotal role in various domains, including weather forecasting [161], maritime safety [162], resource development [149], and ecological conservation [163]. In this section, we discuss the specific applications derived from SIE and classification, as shown in Fig. 4.
### _Meteorological Forecasting and Climate Research_
The results of SIE and classification have significant applications in meteorological forecasting and climate prediction. By utilizing remote sensing techniques to extract and classify sea ice data, it becomes possible to improve the models that depict the interactions between the ocean and the atmosphere, further enhancing our understanding of the sea ice response to climate change [164]. Analysis in [161] reveals the potential value of sea ice observation data. The authors emphasize the regional variations in sea ice trends and highlight the lack of comprehensive records regarding marine connections. They utilize observation data to establish extensive Arctic and regional sea ice trends, enabling the identification and selection of climate models with optimal predictive capabilities on a global scale. These models subsequently provide more accurate predictions of future sea ice changes, which are closely linked to vital marine pathways in the Arctic region.
Fig. 4: The extracted sea ice information finds significant applications in various domains, including meteorological forecasting and climate research, maritime and ocean navigation, and geographic information products.
Furthermore, the extraction and classification of sea ice hold significant implications for monitoring climate change. This is due to the high albedo [165] of sea ice, which greatly alters the energy balance of the ocean. Additionally, sea ice exhibits low thermal conductivity, exerting a significant influence on the heat exchange between the ocean and the atmosphere. Thus, sea ice serves as a crucial indicator of climate change. Through regular extraction and classification of sea ice, we can monitor its temporal and spatial variations, analyze the trends of sea ice retreat and formation, and provide data support for climate change research. Research outlined in [163] evaluates Arctic amplification and sea surface changes by observing the anomalies in Arctic sea ice extent, thickness, snow depth, and ice concentration in comparison to the mean state during different periods (2011-2018).
Hence, the application of SIE and classification is crucial for meteorological forecasting, climate prediction, and climate change monitoring. By utilizing remote sensing techniques to extract and classify sea ice data, we can enhance the predictive capabilities of climate models, delve deeper into the interactions between sea ice and the climate system, and assess and monitor the trends and impacts of climate change.
### _Maritime and Ocean Navigation_
Accurate extraction and classification of sea ice data play a vital role in maritime and ocean navigation. By utilizing remote sensing techniques to extract and classify sea ice information, it becomes possible to efficiently generate valuable products such as sea ice distribution maps, ice edge charts, and route planning tools. These products serve as crucial aids for ships, enabling them to navigate safely and avoid ice-prone areas.
The Arctic Northeast Passage (NEP) has undergone remarkable changes in sea ice conditions, significantly impacting both the environment and navigational capabilities [166]. Research indicates a continued reduction in Arctic sea ice, leading to the shortening of trade routes in the Arctic Ocean and potentially affecting the global economy [167]. The work [168], focusing on the Arctic NEP, examined the influence of sea ice variations on the future accessibility of the route. While reduced sea ice has made it relatively easier for vessels to traverse the Arctic NEP, challenges and risks still persist. Another work [169] analyzed changes in sea ice volume and age, assessing the accessibility and navigable regions of the Arctic route.
Furthermore, the extent and thickness of sea ice hold significant importance for navigation, as emphasized in [170]. MYI, known for its thickness and hardness, poses substantial risks to ships. In contrast, younger and thinner ice enables icebreakers and regular cargo vessels to navigate more freely along ice-free coastal areas during the summer [171]. A recent study [172] investigated the impact of sea ice conditions. Similarly, research [173] revealed that sea ice thickness has a greater impact on vessel speed than ice concentration, underscoring its pivotal role in successful transit through the Arctic route. Therefore, future research endeavors should focus on enhancing the spatial and temporal resolution of sea ice monitoring to accurately evaluate the navigational capabilities of critical straits and regions.
Recent achievements have been made in this domain. A study [174] utilized high-quality, co-located satellite data and observation-calibrated reanalysis data to analyze sea ice changes along Arctic shipping routes. This research investigated the spatiotemporal distribution characteristics, melt/freeze timing, and variations across trans-Arctic routes using datasets such as NSIDC SIC and daily PIOMAS SIT products. Additionally, by incorporating optimal interpolation sea surface temperature (SST) and SIC data, another study [175] examined the spatiotemporal distribution characteristics of SST and SIC above 60\\({}^{\\circ}\\)N in the Arctic, along with their interrelationships. These findings hold crucial implications for Arctic shipping and sea ice forecasting, contributing to enhanced navigation and decision-making in the region.
### _Geographic Information Products_
In recent years, significant advancements have been made in utilizing remote sensing techniques to generate geographic information products related to ice and polar regions. These applications encompass various aspects, including mapping, GIS, and algorithmic approaches. Reference [176] highlights the positive impact of Interferometric Synthetic Aperture Radar (InSAR) technology on Antarctic topographic mapping, not only at scales as small as 1:25,000 but also in thematic analysis and monitoring. By employing multiple radar images and D-InSAR techniques, it becomes possible to monitor subtle centimeter-level changes, offering tremendous potential for studying Antarctic glacier movement, mass balance, and global environmental changes. In a similar vein, the work [177] demonstrates the production of polar remote sensing products using very high-resolution satellite (VHRS) imagery, which proves to be an effective alternative to costlier aerial photographs or ground surveys. Moreover, the work [178] utilizes high-resolution ICESat laser altimetry to observe the dynamic changes in the grounding line of Greenland and Antarctic ice sheets, revealing a widespread thinning phenomenon across Greenland's latitudes and intensified thinning along critical Antarctic grounding lines.
Furthermore, the work [179] introduces the Ship Navigation Information Service System (SNISS), an advanced ship navigation information system based on geospatial data. SNISS offers a macroscopic perspective to develop optimal navigation routes for the Arctic NEP and provides ice image retrieval and automated data processing for key straits. Similarly, the work [180] develops RouteView, an interactive ship navigation system for Arctic navigation based on geospatial big data. By incorporating reinforcement learning and deep learning technologies, RouteView calculates the optimal routes for the next 60 days and extracts sea ice distribution. These studies have the potential to enhance the safety of vessels navigating the NEP and drive the development of augmented reality (AR) information extraction methods. Arctic sea ice distribution maps serve as valuable aids for route planning, enabling vessels to avoid ice-covered areas and ensure sufficient water depth for safe passage. In addition, PolarView is a ship navigation and monitoring system specifically designed for polar regions. It offers real-time vessel positioning and navigation information, including sea ice coverage, ship route planning, and hazard zone alerts. In the realm of path planning optimization, a sophisticated maze path planning algorithm with weighted regions has been proposed in research [162].
As remote sensing techniques continue to advance and polar observation data becomes increasingly accessible, a variety of geographic information integration and visualization platforms have emerged. One notable platform is Quantarctica [181], which has been specifically designed as a comprehensive visualization platform for mapping Antarctica, the Southern Ocean, and the islands surrounding Antarctica. It encompasses scientific data from nine disciplines, including sea ice, providing a wealth of information for researchers. Another significant resource is the International Bathymetric Chart of the Southern Ocean (IBCSO) [182], which offers detailed information about the bathymetry of the Southern Ocean. This dataset serves as a valuable resource for marine science research and the exploration of marine resources in the region. For terrain data in polar regions, ArcticDEM is a prominent system that enables terrain analysis, glacier research, hydrological modeling, and more. Its comprehensive dataset contributes to a better understanding of the physical characteristics of the polar regions. To access a wide range of information about the polar regions, the ArcticWeb platform serves as a comprehensive polar information hub. It offers various resources including maps, satellite imagery, weather data, and sea ice information. This integrated platform facilitates access to vital information for researchers, scientists, and policymakers working in the polar regions. Additionally, there are online systems dedicated to sea ice monitoring and prediction. IceMap utilizes satellite data and numerical models to provide real-time sea ice coverage maps, thickness estimates, and predictive simulations. It assists users in monitoring the state and trends of sea ice, providing valuable insights for various applications. For studying Arctic sea ice changes, the PIOMAS system offers simulation and analysis capabilities. It provides information on Arctic sea ice thickness, volume, and distribution, which are crucial for climate research and analysis of ice conditions. In terms of monitoring snow and ice cover thickness in polar regions, the SnowSAT remote sensing system employs radar and laser altimetry data to deliver high-resolution measurements. This data is valuable for understanding snow depth and ice cover thickness, aiding in researches related to climate change and polar ecosystems. Lastly, the Sea Ice Index, an online system provided by the U.S. National Snow and Ice Data Center, offers monitoring capabilities for global sea ice coverage and changes. It provides satellite-based sea ice indices and spatiotemporal distribution maps, enabling effective climate monitoring, environmental conservation, and management of marine resources in polar regions. These systems collectively contribute to a comprehensive understanding of the polar regions and their dynamic characteristics. Moving forward, it is crucial to enhance the analytical capabilities of these systems by incorporating structured modeling of sea ice, enabling more sophisticated geographical analysis and providing better support for various applications in polar environments.
From glacier change observations to information system integration, and from ship navigation to route planning, these applications provide valuable data and tools for scientists, governments, policymakers, and related industries, helping them better understand and manage sea ice resources. In addition, scholars have conducted research on polar mapping and achieved significant results. Wang et al. [183] identified three map projection methods commonly used for the Antarctic region: the Polar Stereographic, Transverse Mercator, and Lambert Conformal Conic projections, all of which are conformal (equal-angle) projections. Fig. 5 shows several commonly used projection visualizations of the Arctic region. The Quantarctica system utilizes the Antarctic Polar Stereographic projection (EPSG:3031). Because of the unique geographical position of the polar regions, commonly used map projections all have limitations there, and targeted research is needed to address specific issues.
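As a small illustration of working with these projections, the sketch below uses the pyproj library to convert geographic coordinates into the Antarctic Polar Stereographic projection (EPSG:3031) used by Quantarctica and back again; the sample coordinates are arbitrary and purely illustrative.

```python
from pyproj import Transformer

# Transformer from geographic coordinates (EPSG:4326, lon/lat) to the
# Antarctic Polar Stereographic projection (EPSG:3031) used by Quantarctica.
to_ps = Transformer.from_crs("EPSG:4326", "EPSG:3031", always_xy=True)

# Example: project a point near the Antarctic coast (lon, lat in degrees).
x, y = to_ps.transform(166.67, -77.85)   # projected metres in EPSG:3031

# The inverse transform recovers the geographic coordinates.
to_geo = Transformer.from_crs("EPSG:3031", "EPSG:4326", always_xy=True)
lon, lat = to_geo.transform(x, y)
```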
### _Others_
Sea ice information is critical for the development of natural resources in coastal areas. Extracting and classifying sea ice can help assess its impact on activities such as fishing [184], oil and gas extraction [185], and submarine cable laying [186], providing important references for decision-makers.
Sea ice is an essential component of the polar ecosystem. Its freezing and melting not only help to balance temperature changes in the polar regions, but also affect the stability of ocean temperature, salinity, and stratification, thereby influencing global ocean circulation [187]. Extracting and classifying sea ice can generate information such as sea ice boundaries, ice-water interfaces, and cracks, which is useful for ecological research and conservation efforts.
The results of SIE and classification can be used in various fields of marine science [188], [189], including ocean physics, marine biology, and marine geology. By analyzing the characteristics and distribution of sea ice, changes and evolutionary processes of the marine environment can be inferred.
Fig. 5: Several Projection Visualizations in the Arctic Region: (a) The projection center is at the North Pole, characterized by a circular boundary. The map is symmetrically and uniformly distorted in all directions from the North Pole as the center. (b) The projection center is shifted away from the North Pole. The map still has a circular boundary, but the center is no longer the North Pole, and the distortion of the projection is not symmetric. (c) Rectangular maps are commonly used to display the entire polar region. (d) Vertical map: the Universal Transverse Mercator projection is used to simultaneously depict the North and South Poles. (e) The projection center is shifted, resulting in a non-global polar effect, with the coordinate range forming a sector-shaped area.
## V Challenges in ice detection
There are several issues and challenges in SIE tasks. Firstly, a major problem is the limited availability of data sources, which restricts the accuracy and spatiotemporal resolution of SIC. The scarcity and discontinuity of existing data sources make it difficult to comprehensively capture and analyze sea ice features. Secondly, current SIC techniques have limited accuracy under complex sea ice conditions. Sea ice exhibits diverse variations in morphology, density, thickness, and other characteristics, which traditional algorithms struggle to cope with. Moreover, complex sea ice features such as cracks, ridges, and leads undergo intricate changes that are difficult to capture and represent using conventional methods. Additionally, the ability to detect underwater ice is limited, making it challenging to obtain parameters such as its morphology and thickness. To address these issues, further exploration is needed in terms of detection methods, modeling approaches, and mapping applications.
### _Exploration Methods Aspect_
#### V-A1 Multi-sensor integration
Current research in SIE primarily relies on optical imagery, SAR imagery, or aerial photography captured by airborne cameras. Different sensors have their own characteristics and limitations in observing sea ice, and a single sensor may not provide comprehensive information. By introducing multi-sensor integration, the advantages of various sensors can be fully exploited to compensate for the limitations of any single sensor and obtain more comprehensive and accurate sea ice data. Multi-sensor integration can combine different technological approaches, such as microwave radar, optical sensors, and acoustic techniques, to acquire more comprehensive information about sea ice. For example, combining radar and optical sensor data enables simultaneous extraction of sea ice geometry and surface features, facilitating more precise SIE and monitoring. Moreover, multi-sensor integration can also fuse data obtained from ground-based observations, satellite remote sensing, UAVs, and other platforms, providing multi-scale and multi-angle sea ice observations and thereby a more comprehensive understanding of the spatiotemporal variations of sea ice.
Furthermore, establishing a continuous monitoring system using multiple sensors allows for dynamic monitoring and analysis of sea ice through long time series of remote sensing observations. By utilizing satellite remote sensing and other data sources, long-term monitoring of sea ice changes can be achieved to reveal its seasonal and interannual variations. This enhances the reliability and consistency of data, enables multi-scale and all-weather sea ice observations, and improves the capability of sea ice monitoring and prediction. These advancements provide more comprehensive and accurate data support for sea ice research and related applications.
#### V-A2 Underwater ice detection
Currently, remote sensing techniques are the primary means of SIE, employing sensors carried on satellites, aircraft, and UAVs to obtain image data of sea ice. Common remote sensing techniques include optical remote sensing, SAR, and multispectral remote sensing, which provide information on the spatial distribution, morphological features, cracks, and ice floes of sea ice. In addition, close-range images of sea ice can be acquired by mounting imaging devices on ships; shipborne observations provide higher accuracy and local-scale sea ice information. Furthermore, UAVs equipped with sensors such as cameras and thermal infrared cameras enable high-resolution observations and measurements of sea ice. UAV technology offers high maneuverability and flexibility, allowing more detailed information about sea ice to be collected.
However, remote sensing methods are primarily suited to detecting and observing the sea ice surface, while direct remote sensing of underwater ice, such as subsea ice caps, is relatively challenging. Due to the absorption and scattering properties of water, remote sensing techniques have limited penetration and detection capability underwater. Yet the detection of underwater ice is crucial for navigation and hydrographic surveying, as it has significant implications for ship and navigation safety: the presence of underwater ice can lead to collisions, obstruction of navigation, or structural damage to vessels. Therefore, accurate detection and localization of underwater ice are essential for safe navigation planning and guidance.
Some remote sensing techniques and sensors can still provide some information about underwater ice under specific conditions. Sonar remote sensing is a technique that uses sound waves for detection and imaging in underwater environments. It can provide relevant information about underwater ice, such as the morphology of the ice bottom surface and ice thickness, by measuring the time and intensity of sound waves propagating in water. Sonar remote sensing finds widespread applications in the study of subsea ice caps and marine surveying. Additionally, technologies such as lasers and radars can also be used to some extent for underwater ice detection. Laser depth sounders can measure the distance and shape of underwater objects, providing information about ice thickness. Radar systems can penetrate to a certain depth underwater and detect the presence of underwater ice layers when operating at appropriate frequency bands.
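In its simplest form, the sonar-based estimate described above amounts to converting the two-way acoustic travel time into a range and subtracting it from the instrument depth. The sketch below is a deliberately simplified illustration with an assumed constant sound speed; operational processing must correct for sound-speed profiles, instrument tilt, and sea level variation.

```python
def ice_draft_from_sonar(two_way_travel_time_s, instrument_depth_m,
                         sound_speed_mps=1450.0):
    """Estimate sea ice draft from an upward-looking sonar return.

    A simplified sketch: the range to the ice underside is half the two-way
    travel time multiplied by an assumed constant sound speed, and the draft
    is the instrument depth minus that range.
    """
    range_to_ice_m = 0.5 * two_way_travel_time_s * sound_speed_mps
    return instrument_depth_m - range_to_ice_m

# Example: a mooring at 50 m depth measuring a 0.064 s two-way travel time
# gives a range of ~46.4 m, i.e. roughly 3.6 m of ice draft.
draft = ice_draft_from_sonar(0.064, 50.0)
```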
### _Model Approaches Aspect_
#### V-B1 Multi-source data fusion model
The monitoring of sea ice primarily relies on SAR remote sensing technology, which can penetrate meteorological conditions such as cloud, snowfall, and polar night to obtain high-resolution sea ice information. SAR also has the advantage of being sensitive to the structure and morphological changes of sea ice, enabling the identification and differentiation of different sea ice types and supporting more accurate monitoring and prediction. A few studies also utilize optical remote sensing technologies, such as visible light and infrared satellite imagery. However, optical remote sensing is limited under cloud cover, polar night, and other conditions, making it difficult to obtain clear sea ice information. Furthermore, because of the complexity and variability of sea ice, relying on a single optical remote sensing technology can lead to misclassification and omission errors.
Therefore, some studies have fully exploited the complementarity of optical and SAR data in sea ice classification and have fused the two to extract sea ice information in the study area. Li et al. [11] analyzed the imaging characteristics of sea ice in detail and achieved fusion by solving the Poisson equation on Sentinel-1 and Sentinel-2 images to derive the optimal pixel values. Compared with the original optical images, the fused images exhibit richer spatial details, clearer textures, and more diverse material textures and colors. The constructed OceanTDL 5 model is then employed for SIE.
In addition to directly fusing heterogeneous images, Han et al. [12] proposed a fusion of the features extracted from both sources. They first utilized an improved Spatial Pyramid Pooling (SPP) network to extract different-scale sea ice texture information from SAR images based on depth. The Path Aggregation Network (PANet) was employed to extract multi-level features, including spatial and spectral information, of different types of sea ice from the optical images. Finally, these extracted low-level features were fused to achieve sea ice classification. In their work [13], they further introduced a Gate Fusion Network (GFN) to adaptively adjust the feature contributions from the two heterogeneous data sources, thereby improving the overall classification accuracy.
Han's work primarily focuses on feature-level fusion of SAR and optical images. In addition, input-level fusion and decision-level fusion have been demonstrated to be effective [190, 191, 192], yielding favorable results in land use classification tasks. However, in the context of sea ice classification, it is crucial to consider the influence of different spectral bands on the radiative properties of sea ice. For instance, a simple approach replaces one of the R, G, or B channels in the RGB image with a single SAR band. Experiments found that replacing the B band yielded the best results, as the B band exhibits the weakest texture characteristics while SAR better reflects the radiative properties of sea ice. Another approach concatenates a single SAR band with the RGB three-channel image to form a four-channel input. However, because pretrained backbones expect three-channel inputs, some weights cannot be loaded directly when pretraining the four-channel model, which can lead to suboptimal outcomes.
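The two input-level fusion options just described can be sketched in a few lines; the example below assumes co-registered optical (RGB) and SAR patches normalized to [0, 1], and the array names and shapes are illustrative.

```python
import numpy as np

def replace_blue_with_sar(rgb, sar):
    """Input-level fusion by substituting the B channel with the SAR band.

    rgb : (H, W, 3) float array in [0, 1]; sar : (H, W) float array in [0, 1].
    Replacing B is suggested because its texture is weakest, so the SAR
    backscatter contributes the most complementary information.
    """
    fused = rgb.copy()
    fused[..., 2] = sar
    return fused                       # still (H, W, 3), usable by RGB-pretrained models

def stack_sar_as_fourth_channel(rgb, sar):
    """Alternative fusion: concatenate SAR as a fourth channel.

    Yields an (H, W, 4) array; ImageNet-pretrained weights expect 3 input
    channels, so the first convolution layer must be adapted, which is the
    pretraining difficulty mentioned above.
    """
    return np.concatenate([rgb, sar[..., None]], axis=-1)

# Toy example with random data standing in for co-registered S2 and S1 patches.
rgb = np.random.rand(256, 256, 3).astype(np.float32)
sar = np.random.rand(256, 256).astype(np.float32)
fused3 = replace_blue_with_sar(rgb, sar)
fused4 = stack_sar_as_fourth_channel(rgb, sar)
```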
#### V-B2 Unsupervised Deep Learning
Deep learning methods currently face challenges in the classification of remote sensing images, and a major one is the extensive manual annotation required. Moreover, accurate labeling of sea ice categories relies on expert knowledge, so large-scale sea ice datasets for research purposes remain scarce. Unsupervised deep learning presents a promising solution to this problem. By leveraging pre-training techniques such as transfer learning and self-supervised learning, unsupervised approaches can learn informative features for different sea ice types, enabling effective sea ice classification.
Existing studies generally focus on specific regions of interest, such as the Greenland area. However, imagery varies across regions, and sea ice distribution patterns differ as well; consequently, testing the same model in different regions yields substantial discrepancies in the results. To tackle this challenge, the work [74] proposed integrating texture features derived from gray-level co-occurrence matrices into the extraction and classification of training samples. Unsupervised generation of training samples replaced the costly and labor-intensive process of manual annotation. Moreover, the method produced adaptable training samples that better accommodate the pronounced fluctuations in sea ice conditions within the Arctic MIZ; this concept has undergone initial testing on a subset of Gaofen-3 images. In response to the scarcity of labeled pixels in remote sensing images, the work [193] presents an effective approach to sea ice classification from two perspectives: first, a feature extraction method is developed that extracts contextual features from the classification map; second, an iterative learning paradigm is established. Experimental results demonstrate that, with limited training data available, training and classifying sea ice image representations with comprehensive exemplar representation under mutual guidance provides insights for addressing the scarcity of labeled sea ice data.
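The GLCM texture features mentioned above can be computed with standard tools. The sketch below is a minimal example, assuming SAR patches quantized to 32 gray levels and using scikit-image's graycomatrix/graycoprops; the chosen offsets and properties are illustrative rather than those of [74].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(patch, levels=32):
    """Compute simple GLCM texture descriptors for one SAR image patch.

    patch : 2D array of backscatter values (quantized here to `levels` gray levels).
    Returns a small feature vector (contrast, homogeneity, energy, correlation)
    averaged over four offset directions.
    """
    q = np.round(patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

# Features from many patches can then be clustered (e.g. with k-means) to
# propose ice / open-water training samples instead of annotating them manually.
```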
Therefore, in response to the limitations of annotated datasets in sea ice research, unsupervised deep learning emerges as a highly promising avenue. By directly extracting insights from unlabeled data itself, it serves as a powerful tool for automatic feature learning, representation learning, and clustering. Unsupervised deep learning methods exploit the intrinsic structures and patterns within sea ice imagery, enabling the automatic extraction of informative features without the reliance on external labels or manual feature engineering. Within the realm of sea ice classification tasks, unsupervised deep learning techniques, such as autoencoders, GANs, and variational autoencoders (VAEs), excel at acquiring meaningful representations from unlabeled sea ice data. These approaches discover similarities, textures, shapes, and other discernible patterns inherent in sea ice images, thereby transforming them into valuable feature representations. Moreover, the utilization of extensive unlabeled sea ice data for training purposes expands the available dataset, consequently enhancing the generalizability and robustness of sea ice classification models across varying timeframes, locations, and sensor conditions.
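As one concrete form of such unsupervised representation learning, the following minimal PyTorch sketch trains a small convolutional autoencoder on unlabeled sea ice patches; the architecture, 64x64 patch size, and hyperparameters are illustrative assumptions rather than a model from the surveyed literature.

```python
import torch
import torch.nn as nn

class SeaIceAutoencoder(nn.Module):
    """A minimal convolutional autoencoder for unlabeled sea ice patches.

    The encoder compresses a 1-channel 64x64 patch into a small latent vector;
    reconstruction is only a pretext task, and the latent vectors can later be
    clustered or fed to a lightweight classifier with few labels.
    """
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SeaIceAutoencoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch of unlabeled 64x64 patches.
patches = torch.rand(8, 1, 64, 64)
optim.zero_grad()
recon, latent = model(patches)
loss = loss_fn(recon, patches)
loss.backward()
optim.step()
```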
However, the application of unsupervised deep learning methods to SIC tasks introduces certain challenges. Primarily, the absence of external labels as supervision signals may yield inaccurate or ambiguous feature representations. Therefore, it is imperative to design suitable objective functions and loss functions to guide the unsupervised learning process, ensuring the acquired features effectively facilitate the classification and analysis of sea ice images. Additionally, training unsupervised learning models may necessitate increased computational resources and time due to the involvement of complex network architectures and larger-scale datasets. Furthermore, evaluating the performance of unsupervised learning methods and conducting comparative analyses to discern the strengths and weaknesses of different approaches represent inherently challenging tasks in this domain.
#### V-B3 Constructing an ICE-SAM large model
The Segment Anything Model (SAM) [194], originally designed for segmenting natural images, is capable of segmenting a wide variety of objects. We applied this model to the task of sea ice classification, and the segmentation results are shown in Fig. 6.
SAM demonstrates high precision in sea ice image segmentation, effectively distinguishing different types of sea ice. However, the model itself cannot directly determine the specific category of each segment, i.e., it cannot associate the segmentation results with predefined sea ice categories. To address this issue, we introduce the CLIP model [195] as an auxiliary classifier, since it is capable of jointly understanding images and text. We use the segmented sea ice image patches as inputs and compare them with a range of predefined sea ice category names. Through this comparative analysis, the CLIP model comprehends the connection between image content and category names and identifies the best-matching category. Consequently, we can classify the sea ice image patches into their respective sea ice categories and obtain a specific category name for each sea ice region. Thus, the role of the CLIP model in sea ice image segmentation is to provide inference capability for sea ice category names. By leveraging its joint understanding of images and text, the CLIP model establishes the association between segmentation results and category names, enabling more comprehensive and detailed sea ice classification information. This approach allows for a more comprehensive understanding of sea ice features and attributes, providing more accurate data support for sea ice monitoring and research.
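A minimal sketch of this SAM-plus-CLIP idea is given below, using the public segment-anything and CLIP packages. The checkpoint path, input file name, candidate category names, and prompt template are illustrative assumptions and do not reproduce the exact configuration behind Fig. 6.

```python
import numpy as np
import torch
import clip                                   # https://github.com/openai/CLIP
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Class-agnostic segmentation with SAM (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
image = np.array(Image.open("s2_scene.png").convert("RGB"))   # H x W x 3 uint8
masks = mask_generator.generate(image)

# 2. Zero-shot labeling of each segment with CLIP.
clip_model, preprocess = clip.load("ViT-B/32", device=device)
labels = ["open water", "new ice", "young ice", "first-year ice", "multi-year ice"]
text_tokens = clip.tokenize([f"a satellite photo of {l}" for l in labels]).to(device)

results = []
for m in masks:
    x, y, w, h = [int(v) for v in m["bbox"]]
    patch = Image.fromarray(image[y:y + h, x:x + w])            # crop the segment
    patch_input = preprocess(patch).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = clip_model(patch_input, text_tokens)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]
    results.append((m["bbox"], labels[int(probs.argmax())]))    # segment -> category name
```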
### _Cartographic Applications Aspect_
#### V-C1 Polar Geographic Information Systems (GIS)
Researchers have developed various GIS platforms and tools specifically tailored to polar regions to support the processing, analysis, and visualization of polar environments and related data. In the early stages, a web-based GIS [196] was developed to provide online access, exploration, visualization, and analysis of archived sea ice data. Subsequently, systems such as PolarView, SNISS [179], and RouteView [180] were designed for polar navigation planning and ship navigation. These systems offer functionalities such as voyage planning, vessel position monitoring, and channel information retrieval, utilizing real-time data and model analysis to facilitate safe and efficient navigation in polar waters. However, these systems integrate only limited information, and the analysis paths they consider are relatively narrow, so their somewhat idealized outcomes have only limited reference value. Furthermore, with the increasing availability of polar observation data, several geographic information integration and visualization platforms have emerged. For example, Quantarctica [181], IBCSO [182], ArcticDEM, and ArcticWeb provide functionalities for visualizing polar geographic data, scientific data querying, map generation, and analysis. Online systems dedicated to sea ice monitoring and prediction, such as IceMap, PIOMAS, SnowSAT, and the Sea Ice Index, offer real-time sea ice coverage data, thickness estimation, and predictive simulations.
The aforementioned systems primarily encompass ship navigation and monitoring, sea ice monitoring and prediction, polar mapping and geospatial information display, ice thickness measurement, climate research, and environmental protection. These GISs generally employ a layered architectural framework consisting of a data layer, an application layer, and a user interface layer. The data layer is responsible for storing and managing various polar-related geographic data, generally organized and stored in databases or file systems. These data can originate from multiple sources such as satellite observations, remote sensing imagery, marine surveys, meteorological stations, and vessels. The application layer is dedicated to processing and analyzing polar geospatial data, providing various functionalities and services. Within these polar systems, the application layer includes functions such as sea ice monitoring and prediction, navigation planning and guidance, map creation and visualization, and geospatial analysis and modeling. The functionalities within the application layer are typically implemented through algorithms, models, and tools, enabling data processing, analysis, and generating corresponding results and products. The user interface layer is responsible for presenting and displaying geospatial data, functionalities, and results to users, facilitating interaction and visualization of the system's capabilities.
However, most existing systems primarily focus on data integration and visualization, lacking comprehensive geospatial analysis capabilities. In order to achieve geospatial analysis functions for polar regions (taking sea ice as an example), the architectural design and expansion of polar systems can be further improved. Here are some suggested feature enhancements and architectural directions:
* **Data Integration and Management.** Polar systems should integrate sea ice data from multiple sources and manage them in a unified and standardized manner. This includes satellite observations, remote sensing imagery, marine measurements, and more. To enable structured modeling and geospatial analysis, the data integration and management module should incorporate functionalities such as data cleansing, format conversion, quality control, and metadata management.
* **Structured Modeling.** The system needs to develop algorithms and models for structured modeling of sea ice, transforming raw sea ice data into structured representations with geospatial information. This involves modeling sea ice morphology, density, thickness, distribution, and the relationships between sea ice and other geographical features. The sea ice structured modeling module should consider the spatiotemporal characteristics of sea ice and establish associations with the geographic coordinate system.
* **Geospatial Analysis Capabilities.** The system should provide a wide range of geospatial analysis functions to extract useful geospatial information from the sea ice structured model. This may include spatiotemporal analysis of sea ice changes, thermodynamic property analysis, analysis of sea ice interactions with the marine environment, and more. The geospatial analysis module should support various analysis methods and algorithms, along with interactive visualization and result presentation.
* **Real-time Data and Updates.** To ensure timeliness, the system should support real-time acquisition and updates of sea ice data. This can be achieved through real-time connections with data sources such as satellite observations, buoys, UAVs, and more. Additionally, the system should possess efficient and scalable data storage and processing capabilities to handle large-scale data processing requirements.
Future systems can further expand their architectural framework by incorporating technologies such as distributed computing, cloud computing, and artificial intelligence to enhance system performance and scalability. Furthermore, strengthening data sharing, standardization, and interoperability can facilitate data integration and functional consolidation among different systems, enabling a higher level of integration and collaborative work. These extended functionalities will enhance the overall performance and practicality of polar systems, providing comprehensive support for scientific research, navigation safety, and environmental protection, among other domains.
#### V-C2 Polar Map Projections
The unique shape and geographical attributes of the Earth's surface in polar regions make mapping challenging, hence research on polar cartographic projections has always been an important topic.
Specifically, Bian et al. [197] introduced the concept of complex variable isometric latitude based on the Gauss projection complex variable function. They overcame the limitations of traditional Gauss projections and established a unified and comprehensive "integrated representation" of the Gauss projection in polar regions. Building upon this foundation, through rigorous mathematical derivations, they provided theoretically rigorous direct and inverse expressions for the Gauss projection that fully represent polar regions, together with the corresponding scale factors and meridian convergence formulas. This approach addresses the impracticality of traditional Gauss projection formulas in polar regions and is of significant importance in improving the mathematical system of the Gauss projection. It can be applied to the entire polar region and has important reference value for compiling polar maps and for polar navigation [198]. Furthermore, research [199] demonstrates that the non-singular Gauss projection formula for polar regions meets the requirements of continuous projection within the polar region, providing a theoretical basis for the production of polar charts. Owing to its conformal property, the Gauss projection better preserves directional relationships, is of significant reference value for producing topographic maps along the central meridian in polar regions, and can serve the current need for polar navigation charts along the Arctic route; it also has advantages over the gnomonic (sundial) projection when applied to polar regions. Currently, most globally released Antarctic sea ice distribution maps are presented in a polar stereographic projection, which cannot be used directly for mainstream tiled map publication. The work [200] converts polar azimuthal stereographic sea ice charts to the mainstream web Mercator projection, applies appropriate image resampling to generate tiles, and stores the numbered tiles by scale level, ultimately enabling the publication and sharing of sea ice image maps.
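To make the reprojection-and-tiling workflow concrete, the sketch below reprojects a sea ice chart from Antarctic Polar Stereographic (EPSG:3031) to web Mercator (EPSG:3857) with rasterio; the file names are placeholders, and the tiling step is only indicated rather than reproduced from [200].

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

src_path = "sea_ice_chart_epsg3031.tif"       # placeholder input in polar stereographic
dst_path = "sea_ice_chart_webmercator.tif"
dst_crs = "EPSG:3857"                          # web Mercator used by tiled web maps

with rasterio.open(src_path) as src:
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    meta = src.meta.copy()
    meta.update(crs=dst_crs, transform=transform, width=width, height=height)

    with rasterio.open(dst_path, "w", **meta) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform, src_crs=src.crs,
                dst_transform=transform, dst_crs=dst_crs,
                resampling=Resampling.bilinear)   # resampling choice drives tile quality

# Tiles for each zoom level can then be produced from the reprojected raster,
# e.g. with `gdal2tiles.py sea_ice_chart_webmercator.tif tiles/`.
# Note that web Mercator cannot represent areas poleward of about 85 degrees,
# so the immediate vicinity of the pole is clipped during reprojection.
```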
In recent years, there has been a relative lack of research on the latest developments in polar cartographic projections. The current major challenges include severe distortion of commonly used projection methods in polar regions and the difficulty of finding a suitable balance between equal area and equal angle properties. Additionally, polar regions generally possess highly complex data, such as sea ice distribution and ice sheet changes. Therefore, another challenge in polar projection is how to effectively visualize and present the geographical information of polar regions. To more effectively visualize and present geographic information of the polar regions to meet the needs of different users, there are several potential research prospects and directions for future development, including:
* **Novel polar projection methods.** Researchers can continue to explore and develop new polar projection methods to address the existing issues in current projection methods. This may involve introducing more complex mathematical models or adopting new technologies such as machine learning and artificial intelligence to achieve more accurate and geographically realistic polar projections.
Fig. 6: SAM segmentation results applied to Sentinel-2 imagery. (a) Sentinel-2 imagery and (b) SAM segmentation results. It can be observed that the first column accurately segments the image, the second and fifth columns can easily differentiate sea ice, the third and sixth columns do not perform segmentation, and the segmentation result in the fourth column is excessively detailed.
* **Multiscale and multi-resolution polar projections.** Polar regions encompass a wide range of scales, from local glaciers to the entire polar region, requiring map projections at different scales. Therefore, researchers can focus on how to perform effective polar projections at various scales and resolutions to meet diverse application requirements and data accuracy needs.
* **Dynamic polar projections.** The geographical environment in polar regions undergoes frequent changes, such as the melting of sea ice and glacier movements. Researchers can investigate how to address this dynamism by developing dynamic polar projection methods that can adapt to changes in the geographical environment, as well as techniques for real-time updating and presentation of geographic information.
* **Multidimensional polar projections.** In addition to spatial dimensions, data in polar regions also involve multiple dimensions such as time, temperature, and thickness. Researchers can explore how to effectively process and present multidimensional data within polar projections, enhancing the understanding of polar region changes and features.
## VI Conclusion
This review provides a summary and overview of the methods used for SIE in the past five years, including classical image segmentation methods, machine learning-based methods, and deep learning-based methods. In addition, we have compiled a list of currently available open-source datasets for ice classification and segmentation, and explored the application aspects of SIE from multiple perspectives. Finally, we have identified potential research directions based on the challenges encountered in detection methods, model approaches, and cartographic applications.
## Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grant 42101458, Grant 41801388, and Grant 42101455 and the Fund Project of ZhongYuan Scholar of Henan Province of China under Grant number 202101510001.
## References
* [1] M. Dabboor and M. Shokr, "A new likelihood ratio for supervised classification of fully polarimetric sar data: An application for sea ice type mapping," _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 84, pp. 1-11, 2013.
* [46]M. Abadi, S. Agarwal, and A. Agarwal (2018) Multi-resolution polar projection method for the spatial dimension of polar projection. IEEE Transactions on Image Processing12consecutive day images,\" _Annals of Glaciology_, vol. 56, no. 69, pp. 285-294, 2015.
* [25] F. L. Hillebrand, I. D. de Carvalho Barreto, U. F. Bremer, J. Arigony-Neto, C. W. M. Junior, J. C. Simoes, C. N. da Rosa, and J. B. de Jesus, \"Application of textural analysis to map the sea ice concentration with sentinel la in the western region of the antarctic peninsula,\" _Polar Science_, vol. 29, no. 100719, 2021.
* [26] Y. Zhu, K. Yu, J. Zou, and J. Wickert, \"Sea ice detection using gnss-r delay-doppler maps from uk techdemosat-1,\" in _2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2017, pp. 4110-4113.
* [27] A. S. Komarov and M. Buehner, \"Adaptive probability thresholding in automated ice and open water detection from radarsat-2 images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 4, pp. 552-556, 2018.
* [28] X. Zhang and S. Ren, \"Automatic classification of sar image based on r-gmm algorithm,\" in _2018 11th international congress on image and signal processing, BioMedical Engineering and Informatics (CISP-BMEI)_. IEEE, 2018, pp. 1-5.
* [29] J. Liu, K. A. Scott, A. Gawish, and P. Fieguth, \"Automatic detection of the ice edge in sar imagery using curvelet transform and active contour,\" _Remote Sensing_, vol. 8, no. 6, p. 480, 2016.
* [30] J. Liu, K. A. Scott, and P. W. Fieguth, \"Detection of marginal ice zone in synthetic aperture radar imagery using curvelet-based features: a case study on the canadian east coast,\" _Journal of Applied Remote Sensing_, vol. 13, no. 1, pp. 014505-014505, 2019.
* [31] T. Xie, W. Perrie, C. Wei, and L. Zhao, \"Discrimination of open water from sea ice in the labrador sea using quad-polarized synthetic aperture radar,\" _Remote Sensing of Environment_, vol. 247, p. 111948, 2020.
* [32] M. R. Keller, C. M. Gifford, N. S. Winstael, W. C. Walton, and J. E. Dietz, \"Active/passive multiple polarization sea ice detection during initial freeze-up,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 7, pp. 5434-5448, 2020.
* [33] Q. Yu and D. A. Clausi, \"Irgs: Image segmentation using edge penalties and region growing,\" _IEEE transactions on pattern analysis and machine intelligence_, vol. 30, no. 12, pp. 2126-2139, 2008.
* [34] S. Leigh, Z. Wang, and D. A. Clausi, \"Automated ice-water classification using dual polarization sar satellite imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 9, pp. 5529-5539, 2013.
* [35] M. Ghanbari, D. A. Clausi, L. Xu, and M. Jiang, \"Contextual classification of sea-ice types using compact polarimetric sar data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 10, pp. 7476-7491, 2019.
* 2020 IEEE International Geoscience and Remote Sensing Symposium_, 2020, pp. 1456-1459.
* [37] M. Ghanbari, D. A. Clausi, and L. Xu, \"Cp-irgs: A region-based segmentation of multilook complex compact polarimetric sar data,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 14, pp. 6559-6571, 2021.
* [38] F. Li, D. A. Clausi, L. Wang, and L. Xu, \"A semi-supervised approach for ice-water classification using dual-polarization sar satellite imagery,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_, 2015, pp. 28-35.
* [39] J. Wang, C. R. Duguay, D. A. Clausi, V. Pinard, and S. E. Howell, \"Semi-automated classification of lake ice cover using dual polarization radarsat-2 imagery,\" _Remote Sensing_, vol. 10, no. 11, p. 1727, 2018.
* [40] M. Hoekstra, M. Jiang, D. A. Clausi, and C. Duguay, \"Lake ice-water classification of radarsat-2 images by integrating irgs segmentation with pixel-based random forest labeling,\" _Remote Sensing_, vol. 12, no. 9, p. 1425, 2020.
* [41] M. Jiang, L. Xu, and D. A. Clausi, \"Sea ice-water classification of radarsat-2 imagery based on residual neural networks (resnet) with regional pooling,\" _Remote Sensing_, vol. 14, no. 13, p. 3025, 2022.
* [42] M. Jiang, D. A. Clausi, and L. Xu, \"Sea-ice mapping of radarsat-2 imagery by integrating spatial curvature with textural features,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 15, pp. 7964-7977, 2022.
* [43] M. Jiang, X. Chen, L. Xu, and D. A. Clausi, \"Semi-supervised sea ice classification of sar imagery based on graph convolutional network,\" in _IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2022, pp. 1031-1034.
* [44] X. Chen, K. A. Scott, L. Xu, M. Jiang, Y. Fang, and D. A. Clausi, \"Uncertainty-incorporated ice and open water detection on dual-polarized sar sea ice imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2023.
* [45] H. Han, S.-H. Hong, H.-c. Kim, T.-B. Chae, and H.-J. Choi, "A study of the feasibility of using kompsat-5 sar data to map sea ice in the chukchi sea in late summer," _Remote Sensing Letters_, vol. 8, no. 5, pp. 468-477, 2017.
* [46] M. Dabboor, B. Montpetit, and S. Howell, \"Assessment of the high resolution sar mode of the radarsat constellation mission for first year ice and multiyear ice characterization,\" _Remote Sensing_, vol. 10, no. 4, p. 594, 2018.
* [47] Dabboor, Mohammed and Montpetit, Benoit and Howell, Stephen, \"Assessment of simulated compact polarimetry of the high resolution radarsat constellation mission sar mode for multiyear and first year sea ice characterization,\" in _IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2018, pp. 2420-2423.
* [48] A. Gegiuc, M. Simils, J. Karvonen, M. Lensu, M. Mikynen, and J. Vainio, \"Estimation of degree of sea ice ridging based on dual-polarized c-band sar data,\" _The Cryosphere_, vol. 12, no. 1, pp. 343-364, 2018.
* [49] H. Han and H.-c. Kim, "Evaluation of summer passive microwave sea ice concentrations in the chukchi sea based on kompsat-5 sar and numerical weather prediction data," _Remote Sensing of Environment_, vol. 209, pp. 343-362, 2018.
* [50] W. Tan, J. Li, L. Xu, and M. A. Chapman, \"Semiautomated segmentation of sentinel-1 sar imagery for mapping sea ice in labrador coast,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 11, no. 5, pp. 1419-1432, 2018.
* [51] D. Murashkin, G. Spreen, M. Huntenmann, and W. Dierking, \"Method for detection of leads from sentinel-1 sar images,\" _Annals of Glaciology_, vol. 59, no. 76p2, pp. 124-136, 2018.
* [52] J. V. Marcaccio, J. Gardner Costa, J. L. Brooks, C. M. Boston, S. J. Cooke, and J. D. Midwood, "Automated coastline ice mapping with sar can inform winter fish ecology in the laurentian great lakes," _Canadian Journal of Remote Sensing_, vol. 48, no. 1, pp. 19-36, 2022.
* [53] X. Yang, T. M. Pavelsky, L. P. Bendezan, and S. Zhang, \"Simple method to extract lake ice condition from landsat images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-10, 2021.
* [54] J.-W. Park, A. A. Korosov, M. Babiker, J.-S. Won, M. W. Hansen, and H.-C. Kim, \"Classification of sea ice types in sentinel-1 synthetic aperture radar images,\" _The Cryosphere_, vol. 14, no. 8, pp. 2629-2645, 2020.
* [55] J.-W. Park, A. Korosov, M. Babiker, and H.-C. Kim, \"Automated sea ice classification using sentinel-1 imagery,\" in _IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2019, pp. 4008-4011.
* [56] R. Ressel and S. Singha, \"Comparing near coincident space borne c and x band fully polarimetric sar data for arctic sea ice classification,\" _Remote Sensing_, vol. 8, no. 3, p. 198, 2016.
* [57] R. Ressel, S. Singha, and S. Lehner, \"Evaluating suitability of pol-sar (terrasar-x, radarsat-2) for automated sea ice classification,\" in _Land Surface and Cryosphere Remote Sensing III_, vol. 9877. SPIE, 2016, pp. 137-150.
* [58] R. Ressel, S. Singha, S. Lehner, A. Rosel, and G. Spreen, \"Investigation into different polarimetric features for sea ice classification using x-band synthetic aperture radar,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 9, no. 7, pp. 3131-3143, 2016.
* [59] W. Aldenhoff, C. Heuze, and L. E. Eriksson, "Comparison of ice/water classification in fram strait from c- and l-band sar imagery," _Annals of Glaciology_, vol. 59, no. 76pt2, pp. 112-123, 2018.
* [60] S. Singha, M. Johansson, N. Hughes, S. M. Hvidegaard, and H. Skourop, \"Arctic sea ice characterization using spaceborne fully polarimetric l-, c-, and x-band sar with validation by airborne measurements,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, pp. 3715-3734, 2018.
* [61] S. Singha, A. M. Johansson, and A. P. Doulgeris, \"Robustness of sar sea ice type classification across incidence angles and seasons at l-band,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 12, pp. 9941-9952, 2020.
* [62] J. Karvonen, \"A sea ice concentration estimation algorithm utilizing radiometer and sar data,\" _The Cryosphere_, vol. 8, no. 5, pp. 1639-1650, 2014.
* [63] Karvonen, Juha, "Baltic sea ice concentration estimation using sentinel-1 sar and amsr2 microwave radiometer data," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no.
* [67] J. Karvonen, L. Shi, B. Cheng, M. Simila, M. Malkynen, and T. Vihma, \"Bohai sea ice parameter estimation based on thermodynamic ice model and earth observation data,\" _Remote Sensing_, vol. 9, no. 3, p. 234, 2017.
* [68] Q. Yan, W. Huang, and C. Moloney, "Neural networks based sea ice detection and concentration retrieval from gnss-r delay-doppler maps," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 10, no. 8, pp. 3789-3798, 2017.
* [69] N. Asadi, K. A. Scott, A. S. Komorov, M. Buehner, and D. A. Clausi, \"Evaluation of a neural network with uncertainty for detection of ice and water in sar imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 1, pp. 247-259, 2020.
* [70] H. Liu, H. Guo, and L. Zhang, \"Svm-based sea ice classification using textural features and concentration from radarsat-2 dual-pol sensor data,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 8, no. 4, pp. 1601-1613, 2014.
* [71] N. Zakhvatkina, A. Korosov, S. Muckenhuber, S. Sandven, and M. Babiker, \"Operational algorithm for ice-water classification on dual-polarized radarsat-2 images,\" _The Cryosphere_, vol. 11, no. 1, pp. 33-46, 2017.
* [72] H. Liu, H. Guo, X.-M. Li, and L. Zhang, \"An approach to discrimination of sea ice from open water using sar data,\" in _2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2016, pp. 4865-4867.
* [73] D.-B. Hong and C.-S. Yang, "Automatic discrimination approach of sea ice in the arctic ocean using sentinel-1 extra wide swath dual-polarized sar data," _International journal of remote sensing_, vol. 39, no. 13, pp. 4469-4483, 2018.
* [74] X.-M. Li, Y. Sun, and Q. Zhang, \"Extraction of sea ice cover by sentinel-1 sar based on support vector machine with unsupervised generation of training data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 4, pp. 3040-3053, 2020.
* [75] L. Zhang, H. Liu, X. Gu, H. Guo, J. Chen, and G. Liu, "Sea ice classification using terrasar-x scansar data with removal of scalloping and interscan banding," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 12, no. 2, pp. 589-598, 2019.
* [76] Q. Yan and W. Huang, \"Detecting sea ice from techdemosat-1 data using support vector machines with feature selection,\" _IEEE journal of selected topics in applied earth observations and remote sensing_, vol. 12, no. 5, pp. 1409-1416, 2019.
* [77] Yan, Qingyun and Huang, Weimin, \"Sea ice concentration estimation from techdemosat-1 data using support vector regression,\" in _2019 IEEE Radar Conference (RadarConf)_. IEEE, 2019, pp. 1-6.
* [78] T. Zhu, F. Li, G. Heygster, and S. Zhang, \"Antarctic sea-ice classification based on conditional random fields from radarsat-2 dual-polarization satellite images,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 9, no. 6, pp. 2451-2467, 2016.
* [79] Y. Han, Y. Zhao, Y. Zhang, J. Wang, S. Yang, Z. Hong, and S. Cao, "A cooperative framework based on active and semi-supervised learning for sea ice classification using eo-1 hyperion data," _Transactions of the Japan Society for Aeronautical and Space Sciences_, vol. 62, no. 6, pp. 318-330, 2019.
* [80] K. Barbieux, A. Charristi, and B. Merminod, "Icy lakes extraction and water-ice classification using landsat 8 oli multispectral data," _International journal of remote sensing_, vol. 39, no. 11, pp. 3646-3678, 2018.
* [81] J. Lohse, A. P. Doulgeris, and W. Dierking, \"An optimal decision-tree design strategy and its application to sea ice classification from sar imagery,\" _Remote Sensing_, vol. 11, no. 13, p. 1574, 2019.
* [82] A. S. Komarov and M. Buehner, "Automated detection of ice and open water from dual-polarization radarsat-2 images for data assimilation," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no. 10, pp. 5755-5769, 2017.
* [83] Komarov, Alexander S and Buehner, Mark, \"Ice concentration from dual-polarization sar images using ice and water retrievals at multiple spatial scales,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 2, pp. 950-961, 2020.
* [84] S. Chen, Y. Yan, J. Ren, B. Hwang, S. Marshall, and T. Durrani, \"Superpixel based sea ice segmentation with high-resolution optical images: Analysis and evaluation,\" in _Communications, Signal Processing, and Systems: Proceedings of the 10th International Conference on Communications, Signal Processing, and Systems, Vol. 2_. Springer, 2022, pp. 474-482.
* [85] B. Wang, L. Xia, D. Song, Z. Li, and N. Wang, \"A two-round weight voting strategy-based ensemble learning method for sea ice classification of sentinel-1 imagery,\" _Remote Sensing_, vol. 13, no. 19, p. 3945, 2021.
* [86] M. Kim, H.-C. Kim, J. Im, S. Lee, and H. Han, "Object-based landfast sea ice detection over west antarctica using time series alos palsar data," _Remote Sensing of Environment_, vol. 242, p. 111782, 2020.
* [87] M. Liu, R. Yan, J. Zhang, Y. Xu, P. Chen, L. Shi, J. Wang, S. Zhong, and X. Zhang, \"Arctic sea ice classification based on costant swim data at multiple small incidence angles,\" _Remote Sensing_, vol. 14, no. 1, p. 91, 2022.
* [88] S. Ren and X. Zhang, \"A new gmrf self-supervised algorithm applied to sar image classification,\" _Journal of the Indian Society of Remote Sensing_, vol. 49, no. 7, pp. 1569-1580, 2021.
* [89] L. Wang, K. A. Scott, L. Xu, and D. A. Clausi, \"Sea ice concentration estimation during melt from dual-pol sar scenes using deep convolutional neural networks: A case study,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 8, pp. 4524-4533, 2016.
* [90] L. Wang, K. A. Scott, D. A. Clausi, and Y. Xu, "Ice concentration estimation in the gulf of st. lawrence using fully convolutional neural network," in _2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2017, pp. 4991-4994.
* [91] L. Wang, K. A. Scott, and D. A. Clausi, \"Sea ice concentration estimation during freeze-up from sar imagery using a convolutional neural network,\" _Remote Sensing_, vol. 9, no. 5, p. 408, 2017.
* [92] J. Li, C. Wang, S. Wang, H. Zhang, Q. Fu, and Y. Wang, \"Gaofen-3 sea ice detection based on deep learning,\" in _2017 Progress in Electromagnetics Research Symposium-Fall (PIERS-FALL)_. IEEE, 2017, pp. 933-939.
* [93] Q. Yan and W. Huang, "Sea ice sensing from gnss-r data using convolutional neural networks," _IEEE geoscience and remote sensing letters_, vol. 15, no. 10, pp. 1510-1514, 2018.
* [94] Yan, Qingyun and Huang, Weimin, \"Convolutional neural networks-based sea ice detection from tds-1 data,\" in _2018 18th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM)_. IEEE, 2018, pp. 1-2.
* [95] Y. Han, Y. Gao, Y. Zhang, J. Wang, and S. Yang, \"Hyperspectral sea ice image classification based on the spectral-spatial-joint feature with deep learning,\" _Remote Sensing_, vol. 11, no. 18, p. 2170, 2019.
* [96] H. Boulze, A. Korosov, and J. Brajard, \"Classification of sea ice types in sentinel-1 sar data using convolutional neural networks,\" _Remote Sensing_, vol. 12, no. 13, p. 2165, 2020.
* [97] J. Karvonen, "Baltic sea ice concentration estimation from c-band dual-polarized sar imagery by image segmentation and convolutional neural networks," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-11, 2021.
* [98] D. Malmgren-Hansen, L. T. Pedersen, A. A. Nielsen, M. B. Kreiner, R. Saldo, H. Skriver, J. Lavelle, J. Buus-Hinkler, and K. H. Krane, "A convolutional neural network architecture for sentinel-1 and amsr2 data fusion," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 3, pp. 1890-1902, 2020.
* [99] Y. Han, C. Wei, R. Zhou, Z. Hong, Y. Zhang, and S. Yang, "Combining 3d-cnn and squeeze-and-excitation networks for remote sensing sea ice image classification," _Mathematical Problems in Engineering_, vol. 2020, pp. 1-15, 2020.
* [100] Y. Han, X. Shi, S. Yang, Y. Zhang, Z. Hong, and R. Zhou, \"Hyperspectral sea ice image classification based on the spectral-spatial-joint feature with the pca network,\" _Remote Sensing_, vol. 13, no. 12, p. 2253, 2021.
* [101] Y. Xu and K. A. Scott, \"Sea ice and open water classification of sar imagery using cnn-based transfer learning,\" in _2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2017, pp. 3262-3265.
* [102] Xu, Yan and Scott, K Andrea, \"Impact of intermediate ice concentration training data on sea ice concentration estimates from a convolutional neural network.\" _International Journal of Remote Sensing_, vol. 40, no. 15, pp. 5799-5811, 2019.
* [103] S. Khaleghian, H. Ullah, T. Kremer, N. Hughes, T. Eltoft, and A. Marinoni, \"Sea ice classification of sar imagery based on convolution neural networks,\" _Remote Sensing_, vol. 13, no. 9, p. 1734, 2021.
* [104] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, \"Densely connected convolutional networks,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 4700-4708.
* [105] C. L. Cooke and K. A. Scott, \"Estimating sea ice concentration from sar: Training convolutional neural networks with passive microwave data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 7, pp. 4735-4747, 2019.
* [106] A. S. Nagi, M. S. Minhas, L. Xu, and K. A. Scott, \"A multi-scale technique to detect marginal ice zones using convolutional neural networks,\" in _IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2020, pp. 3035-3038.
* [107] R. Kruk, M. C. Fuller, A. S. Komarov, D. Islefson, and I. Jeffrey, \"Proof of concept for sea ice stage of development classification using deep learning,\" _Remote Sensing_, vol. 12, no. 15, p. 2486, 2020.
* [108] H. Lyu, W. Huang, and M. Mahdianpari, \"Eastern arcie sea ice sensing: First results from the radarsat constellation mission data,\" _Remote Sensing_, vol. 14, no. 5, p. 1165, 2022.
* [109] A. Brock, S. De, S. L. Smith, and K. Simonyan, \"High-performance large-scale image recognition without normalization,\" in _International Conference on Machine Learning_. PMLR, 2021, pp. 1059-1071.
* [110] W. Song, M. Li, Q. He, D. Huang, C. Perra, and A. Liotta, \"A residual convolution neural network for sea ice classification with sentinel-1 sar imagery,\" in _2018 IEEE International Conference on Data Mining Workshops (ICDMW)_. IEEE, 2018, pp. 795-802.
* [111] W. Song, M. Li, W. Gao, D. Huang, Z. Ma, A. Liotta, and C. Perra, \"Automatic sea-ice classification of sar images based on spatial and temporal features learning,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 12, pp. 9887-9901, 2021.
* [112] K. Kortum, S. Singha, and G. Spreen, \"Robust multiclassonal ice classification from high-resolution x-band sar,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-12, 2022.
* [113] J. Zhang, W. Zhang, Y. Hu, Q. Chu, and L. Liu, \"An improved sea ice classification algorithm with geofen-3 dual-polarization sar data based on deep convolutional neural networks,\" _Remote Sensing_, vol. 14, no. 4, p. 906, 2022.
* [114] M. S. Tamber, K. A. Scott, and L. T. Pedersen, \"Accounting for label errors when training a convolutional neural network to estimate sea ice concentration using operational ice charts,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 15, pp. 1502-1513, 2022.
* [115] Y. Ren, H. Xu, B. Liu, and X. Li, \"Sea ice and open water classification of sar images using a deep learning model,\" in _IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2020, pp. 3051-3054.
* [116] Y. Ren, X. Li, X. Yang, and H. Xu, \"Development of a dual-attention u-net model for sea ice and open water classification on sar images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 19, pp. 1-5, 2021.
* [117] Ren, Yibin and Li, Xiaofeng and Yang, Xiaofeng and Xu, Huan, \"Sea ice detection from sar images based on deep fully convolutional networks,\" in _Artificial Intelligence Oceanography_. Springer Nature Singapore, 2023, pp. 253-276.
* [118] C. A. Baumhoer, A. J. Dietz, C. Kneisel, and C. Kuenzer, \"Automated extraction of antarctic glacier and ice shelf fronts from sentinel-1 imagery using deep learning,\" _Remote Sensing_, vol. 11, no. 21, p. 2529, 2019.
* [119] W. Ji, Z. Fang, D. Feng, and X. Ge, \"Semantic segmentation of arctic sea ice in summer from remote sensing satellite images based on baut,\" _Journal of Applied Remote Sensing_, vol. 16, no. 4, p. 046514, 2022.
* [120] I. De Gelis, A. Colin, and N. Longpe, \"Prediction of categorized sea ice concentration from sentinel-1 sar images based on a fully convolutional network,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 14, pp. 5831-5841, 2021.
* [121] K. Radhakrishnan, K. A. Scott, and D. A. Clausi, \"Sea ice concentration estimation: Using passive microwave and sar data with a u-net and curriculum learning,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 14, pp. 5339-5351, 2021.
* [122] Y.-R. Wang and X.-M. Li, \"Arctic sea ice cover data from spaceborne synthetic aperture radar by deep learning,\" _Earth System Science Data_, vol. 13, no. 6, pp. 2723-2742, 2021.
* [123] A. Stokholm, T. Wulf, A. Kucik, R. Saldo, J. Buus-Hinkler, and S. M. Hvidegaard, \"A&seqice: Toward solving ambiguous sar textures in convolutional neural networks for automatic sea ice concentration charting,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-13, 2022.
* [124] A. S. Nagi, D. Kumar, D. Sola, and K. A. Scott, \"Ruf: Effective sea ice face segmentation using end-to-end res-unet-crf with dual loss,\" _Remote Sensing_, vol. 13, no. 13, p. 2460, 2021.
* [125] W. Song, H. Li, Q. He, G. Gao, and A. Liotta, \"E-mpspnet: Ice-water sar scene segmentation based on multi-scale semantic features and edge supervision,\" _Remote Sensing_, vol. 14, no. 22, p. 5753, 2022.
* [126] Z. Zhou, M. M. Rahman Siddiquee, N. Thjabkash, and J. Liang, \"Unet++: A nested u-net architecture for medical image segmentation,\" in _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and SH International Workshop, ML-CDS 2018, Held in Comjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4_. Springer, 2018, pp. 3-11.
* [127] D. Murashkin and A. Frost, \"Arctic sea ice mapping using sentinel-1 sar scenes with a convolutional neural network,\" in _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_. IEEE, 2021, pp. 5660-5663.
* [128] T. Feng, X. Liu, and R. Li, \"Super-resolution-aided sea ice concentration estimation from arnsr2 images by encoder-decoder networks with atrous convolution,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2022.
* [129] E. Zhang, L. Liu, L. Huang, and K. S. Ng, \"An automated, generalized, deep-learning-based method for delineating the calving fronts of greneland glaciers from multi-sensor remote sensing imagery,\" _Remote Sensing of Environment_, vol. 254, p. 112265, 2021.
* [130] X. Chen, K. A. Scott, M. Jiang, Y. Fang, L. Xu, and D. A. Clausi, \"Sea ice classification with dual-polarized sar imagery: A hierarchical pipeline,\" in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2023, pp. 224-232.
* [131] A. Colin, R. Fablet, P. Tandeo, R. Husson, C. Peuvreux, N. Longpe, and A. Mouche, \"Semantic segmentation of metoceanic processes using sar observations and deep learning,\" _Remote Sensing_, vol. 14, no. 4, p. 851, 2022.
* [132] J. P. Hoffman, S. A. Ackerman, Y. Liu, J. R. Key, and I. L. McConnell, \"Application of a convolutional neural network for the detection of sea ice leads,\" _Remote Sensing_, vol. 13, no. 22, p. 4571, 2021.
* [133] B. Aryal, K. E. Miles, S. A. V. Zesati, and O. Fuentes, \"Boundary aware u-net for glacier segmentation,\" _arXiv preprint arXiv:2301.11454_, 2023.
* [134] N. Saberi, K. A. Scott, and C. Duguay, \"Incorporating aleator uncertainties in lake ice mapping using radarsat-2 sar images and cnns,\" _Remote Sensing_, vol. 14, no. 3, p. 644, 2022.
* [135] Z. Ma, Z. Liu, J. Pu, L. Xu, K. Li, L. Wanggu, R. Wu, Y. Ma, Y. Chen, and C. Duguay, \"Deep convolutional neural network with random field model for lake ice mapping from sentinel-1 imagery,\" _International Journal of Remote Sensing_, vol. 42, no. 24, pp. 9351-9375, 2021.
* [136] S. Wang, M. V. Peppa, W. Xiao, S. B. Maharjan, S. P. Joshi, and J. P. Mills, \"A second-order attention network for glacial lake segmentation from remotely sensed imagery,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 189, pp. 289-301, 2022.
* [137] B. Dowden, O. De Silva, W. Huang, and D. Oldford, \"Sea ice classification via deep neural network semantic segmentation,\" _IEEE Sensors Journal_, vol. 21, no. 10, pp. 11 879-11 888, 2020.
* [138] N. Balasooriya, B. Dowden, J. Chen, O. De Silva, and W. Huang, \"In-situ sea ice detection using deepclouds's semantic segmentation,\" in _OCANS 2021: San Diego-Portre_. IEEE, 2021, pp. 1-7.
* [139] N. M. Alsharay, Y. Chen, O. A. Dobre, and O. De Silva, \"Improved sea-ice identification using semantic segmentation with raindrop removal,\" _IEEE Access_, vol. 10, pp. 21599-21607, 2022.
* [140] N. M. Alsharay, O. A. Dobre, Y. Chen, and O. De Silva, \"Sea-ice classification using conditional generative adversarial networks,\" _IEEE Sensors Letters_, vol. 7, no. 4, pp. 1-4, 2023.
* [141] C. Zhang, X. Chen, and S. Ji, \"Semantic image segmentation for sea ice parameters recognition using deep convolutional neural networks,\" _International Journal of Applied Earth Observation and Geoinformation_, vol. 112, p. 102885, 2022.
* [142] J. Zhao, L. Chen, J. Li, and Y. Zhao, \"Semantic segmentation of sea ice based on u-net network modification,\" in _2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)_. IEEE, 2022, pp. 1151-1156.
* [143] L. Chen, J. Zhao, K. Tian, and Y. Zhao, \"Am-resnet: An attention-based multi-label classification network,\" in _2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)_. IEEE, 2022, pp. 380-384.
* [144] L. Chen, J. Zhao, W. Li, and Y. Zhao, \"Navigation environment detection in ice area based on vibration of ship main engine,\" in _2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)_. IEEE, 2022, pp. 1162-1166.
* [145] Y. J. Kim, H.-C. Kim, D. Han, S. Lee, and J. Im, \"Prediction of monthly arcie sea ice concentrations using satellite and reanalysis data based on convolutional neural networks,\" _The Cryosphere_, vol. 14, no. 3, pp. 1083-1014, 2020.
* [147] Q. Zheng, W. Li, Q. Shao, G. Han, and X. Wang, \"A mid-and long-term arctic sea ice concentration prediction model based on deep learning technology,\" _Remote Sensing_, vol. 14, no. 12, p. 2889, 2022.
* [148] X. Chen, R. Valencia, A. Soleymani, and K. A. Scott, \"Predicting sea ice concentration with uncertainty quantification using passive microwave and reanalysis data: A case study in baffin bayr,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 61, pp. 1-13, 2023.
* [149] Y. Gao, F. Gao, J. Dong, and S. Wang, \"Sea ice change detection in sar images based on collaborative representation,\" in _IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2018, pp. 7320-7323.
* [150] Gao, Yunhao and Gao, Feng and Dong, Junyu and Wang, Shengke, \"Transferred deep learning for sea ice change detection from synthetic-aperture radar images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 16, no. 10, pp. 1655-1659, 2019.
* [151] G.-J. Qi and J. Luo, \"Small data challenges in big data era: A survey of recent progress on unsupervised and semi-supervised methods,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 44, no. 4, pp. 2168-2187, 2020.
* [152] F. Staccone, \"Deep learning for sea-ice classification on synthetic aperture radar (sar) images in earth observation. classification using semi-supervised generative adversarial networks on partially labeled data,\" 2020.
* [153] S. Khaleghian, H. Ullah, T. Kraemer, T. Eltoft, and A. Marinoni, \"Deep semisupervised teacher-student model based on label propagation for sea ice classification,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 14, pp. 10 761-10 772, 2021.
* [154] B. C. Goncalves and H. J. Lynch, \"Fine-scale sea ice segmentation for high-resolution satellite imagery with weakly-supervised cnns,\" _Remote Sensing_, vol. 13, no. 18, p. 3562, 2021.
* [155] Z. Huang, C. O. Dumitru, and J. Ren, \"Physics-aware feature learning of sar images with deep neural networks: A case study,\" in _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_. IEEE, 2021, pp. 1264-1267.
* [156] Z. Huang, X. Yao, Y. Liu, C. O. Dumitru, M. Datcu, and J. Han, \"Physically explainable cm for sar image classification,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 190, pp. 25-37, 2022.
* [157] Y. Liu, Z. Huang, and J. Han, \"Aleatoric uncertainty embedded transfer learning for sea-ice classification in sar images,\" in _IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2022, pp. 4980-4983.
* [158] M. S. Mahmud, V. Nandan, S. Singha, S. E. Howell, T. Geldsetzer, J. Yackel, and B. Montpeti, \"C- and 1-band sar signatures of arctic sea ice during freeze-up,\" _Remote Sensing of Environment_, vol. 279, p. 113129, 2022.
* [159] W. Song, W. Gao, Q. He, A. Liotta, and W. Guo, \"Si-stasz-7: A large sar images dataset with spatial and temporal information for classification of winter sea ice in Hudson bayr,\" _Remote Sensing_, vol. 14, no. 1, p. 168, 2022.
* [160] A. Singh, H. Kalke, M. Loewen, and N. Ray, \"River ice segmentation with deep learning,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 58, no. 11, pp. 7570-7579, 2020.
* [161] T. Rogers, J. Walsh, T. Rupp, L. Brigham, and M. Sfraga, \"Future arctic marine access: analysis and evaluation of observations, models, and projections of sea ice,\" _The Cryosphere_, vol. 7, no. 1, pp. 321-332, 2013.
* [162] K.-Y. Chang, S. He, C. Chou, S. L. Kao, and A. Chiou, \"Route planning and cost analysis for travelling through the arctic northeast passage using public 3d gis,\" _International Journal of Geographical Information Science_, vol. 29, no. 8, pp. 1375-1393, 2015.
* [163] J.-L. Chen, S.-C. Kang, X.-H. Meng, and Q.-L. You, \"Assessments of the arctic amplification and the changes in the arctic sea surface,\" _Advances in Climate Change Research_, vol. 10, no. 4, pp. 193-202, 2019.
* [164] M. Malkynen, J. Haapala, G. Aulicino, B. Balan-Sarojini, M. Balmaseda, A. Gegiuc, F. Girard-Arthuin, S. Hendricks, G. Heygster, L. Istomina, _et al._, \"Seatellite observations for detecting and forecasting sea-ice conditions: A summary of advances made in the species project by the eu's horizon 2020 programme,\" _Remote Sensing_, vol. 12, no. 7, p. 1214, 2020.
* [165] E. Jakel, J. Stapf, M. Wendisch, M. Nicolaus, W. Dorn, and A. Rinke, \"Validation of the sea ice surface albedo scheme of the regional climate model hihman-naosim using aircraft measurements during the acdtol/pascal campaigns,\" _The Cryosphere_, vol. 13, no. 6, pp. 1695-1708, 2019.
* [166] R. Lei, H. Xie, J. Wang, M. Lepparanta, I. Jonsdottir, and Z. Zhang, \"Changes in sea ice conditions along the arctic northeast passage from 1979 to 2012,\" _Cold Regions Science and Technology_, vol. 119, pp. 132-144, 2015.
* [167] N. Melia, K. Haines, and E. Hawkins, \"Sea ice decline and 21st century trans-arctic shipping routes,\" _Geophysical Research Letters_, vol. 43, no. 18, pp. 9720-9728, 2016.
* [168] J. Chen, S. Kang, C. Chen, Q. You, W. Du, M. Xu, X. Zhong, W. Zhang, and J. Chen, \"Changes in sea ice and future accessibility along the arctic northeast passage,\" _Global and Planetary Change_, vol. 195, p. 103319, 2020.
* [169] J. Chen, S. Kang, W. Du, J. Guo, M. Xu, Y. Zhang, X. Zhong, W. Zhang, and J. Chen, \"Perspectives on future sea ice and navigibility in the arctic,\" _The Cryosphere_, vol. 15, no. 12, pp. 5473-5482, 2021.
* [170] A. Buixade Farre, S. R. Stephenson, L. Chen, M. Czub, Y. Dai, D. Demelchev, Y. Elinov, P. Graczyk, H. Gythe, K. Keil, _et al._, \"Compractical arctic shipping through the northeast passage: routes, resources, governance, technology, and infrastructure,\" _Polar Geography_, vol. 37, no. 4, pp. 298-324, 2014.
* [171] X. Zhou, C. Min, Y. Yang, J. C. Landy, L. Mu, and Q. Yang, \"Revisiting trans-arctic maritime navigibility in 2011-2016 from the perspective of sea ice thickness,\" _Remote Sensing_, vol. 13, no. 14, p. 2766, 2021.
* [172] Y. Wang, K. Liu, R. Zhang, L. Qian, and Y. Shan, \"Feasibility of the northeast passage: The role of vessel speed, route planning, and icebreaking assistance determined by sea-ice conditions for the container shipping market during 2020-2030,\" _Transportation Research Part E: Logistics and Transportation Review_, vol. 149, p. 102235, 2021.
* [173] C. Shi-Yi, S. Kern, L. Xin-Qing, H. Feng-Ming, Y. Yu-Fang, and X. Cheng, \"Navigability of the northern sea route for arc? ice-class vessels during winter and spring sea-ice conditions,\" _Advances in Climate Change Research_, vol. 13, no. 5, pp. 676-687, 2022.
* [174] Y. Cao, S. Liang, L. Sun, J. Liu, X. Cheng, D. Wang, Y. Chen, M. Yu, and K. Feng, \"Trans-arctic shipping routes expanding faster than the model projections,\" _Global Environmental Change_, vol. 73, p. 102488, 2022.
* [175] M. Yang, Y. Qiu, L. Huang, M. Cheng, J. Chen, B. Cheng, and Z. Jiang, \"Changes in sea surface temperature and sea ice concentration in the arctic ocean over the past two decades,\" _Remote Sensing_, vol. 15, no. 4, p. 1095, 2023.
* [176] C. ZHOU, E. Dongchen, and M. LIAO, \"Feasibility of insar application to antarctic mapping,\" _Geomatics and Information Science of Wuhan University_, vol. 29, no. 7, pp. 619-623, 2004.
* [177] Z. Kurczynski, S. Rozycki, and P. Bylina, \"Mapping of polar areas based on high-resolution satellite images: the example of the henryk atrocwbial polish anticotic station,\" _Reports on Geodesy and Geoinformatics_, vol. 104, no. 1, pp. 65-78, 2017.
* [178] H. D. Pritchard, R. J. Arthur, D. G. Vaughan, and L. A. Edwards, \"Extensive dynamic thinning on the margins of the greemland and antarctic ice sheets,\" _Nature_, vol. 461, no. 7266, pp. 971-975, 2009.
* [179] A. Wu, T. Che, X. Li, and X. Zhu, \"A ship navigation information service system for the arctic northeast passage using 3d gis based on big earth data,\" _Big Earth Data_, vol. 6, no. 4, pp. 453-479, 2022.
* [180] Wu, Adan and Che, Tao and Li, Xin and Zhu, Xiaowen, \"Routeview: an intelligent route planning system for ships saling through arctic ice zones based on big earth data,\" _International Journal of Digital Earth_, vol. 15, no. 1, pp. 1588-1613, 2022.
* [181] K. Matsuoka, A. Skoglund, G. Roth, J. de Pomereeu, H. Griffiths, R. Headland, B. Herried, K. Katsumata, A. Le Brocq, K. Licht, _et al._, \"Quantarctic, an integrated mapping environment for antarctic, the southern ocean, and sub-antarctic islands,\" _Environmental Modelling & Software_, vol. 140, p. 105015, 2021.
* [182] B. Dorschel, L. Hehemans, S. Viquerat, F. Warnke, S. Dreutter, Y. S. Tenberge, D. Accettella, L. An, F. Barrios, E. Bazhenova, _et al._, \"The international bathymmetric chart of the southern ocean version 2,\" _Scientific data_, vol. 9, no. 1, p. 275, 2022.
* [183] W. Qinghua, E. Dongchen, C. Chumming, _et al._, \"Popularmap-projectionsininanactactaindenriappelligio tion,\" _ChineseJournalofPolar Research_, vol. 14, no. 3, p. 2262 6233,cables on the marine environment: Knowledge gaps, recommendations and future directions,\" _Renewable and Sustainable Energy Reviews_, vol. 96, pp. 380-391, 2018.
* [187] S. Li and W. Liu, \"Impacts of arctic sea ice loss on global ocean circulations and interbasian ocean heat exchanges,\" _Climate Dynamics_, vol. 59, no. 9-10, pp. 2701-2716, 2022.
* [188] J. Yan, J. Jung, Q. Lin, M. Zhang, S. Xu, and S. Zhao, \"Effect of sea ice retreat on marine aerosol emissions in the southern ocean, antarctica,\" _Science of the Total Environment_, vol. 745, p. 140773, 2020.
* [189] M. Streejith, R. PG, B. P. Kumar, A. Raj, and T. Nair, \"Exploring the impact of southern ocean sea ice on the indian ocean swells,\" _Scientific Reports_, vol. 12, no. 1, pp. 1-9, 2022.
* [190] X. Li, G. Zhang, H. Cui, S. Hou, S. Wang, X. Li, Y. Chen, Z. Li, and L. Zhang, \"McaNet: A joint semantic segmentation framework of optical and sar images for land use classification,\" _International Journal of Applied Earth Observation and Geoinformation_, vol. 106, p. 102638, 2022.
* [191] X. Li, L. Lei, Y. Sun, M. Li, and G. Kuang, \"Collaborative attention-based heterogeneous gated fusion network for land cover classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 5, pp. 3829-3845, 2020.
* [192] W. Li, K. Sun, W. Li, J. Wei, S. Miao, S. Gao, and Q. Zhou, \"Aligning semantic distribution in fusing optical and sar images for land use classification,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 199, pp. 272-288, 2023.
* [193] X. Sun, X. Zhang, W. Huang, Z. Han, X. Lyu, and P. Ren, \"Sea ice classification using mutually guided contexts,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 61, pp. 1-19, 2023.
* [194] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, _et al._, \"Segment anything,\" _arXiv preprint arXiv:2304.02643_, 2023.
* [195] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, _et al._, \"Learning transferable visual models from natural language supervision,\" in _International conference on machine learning_. PMLR, 2021, pp. 8748-8763.
* [196] S. Li, C. Xiong, and Z. Ou, \"A web gis for sea ice information and an ice service archive,\" _Transactions in GIS_, vol. 15, no. 2, pp. 189-211, 2011.
* [197] B. Shaofeng, L. Zhongmei, and L. Houpu, \"The non-singular formula of gauss projection in polar regions by complex numbers,\" _Acta Geodaeticar of Cartographica Sinica_, vol. 43, no. 4, pp. 348-352, 2014.
* [198] L. Zhongmei, B. Shaofeng, J. Lixin, C. Cheng, and L. Qiang, \"Forward and inverse expressions of polar gauss projection without zoning limitations,\" _Acta Geodaetica et Cartographica Sinica_, vol. 46, no. 6, p. 780, 2017.
* [199] X. ZHANG, S. BIAN, and Z. LI, \"Comparisons between gauss and gnomonic projections in polar regions,\" _Geomatics and Information Science of Wuhan University_, vol. 40, no. 5, pp. 667-672, 2015.
* [200] T. Lu, A. Songtao, E. Dongchen, G. Hongqing, S. Quan, X. Ning, and Z. Hongyang, \"Application of sea ice map projection transformation and tile cutting over the antarctic ocean,\" _Chinese Journal of Polar Research_, vol. 24, no. 3, p. 284, 2012. | The deep learning, which is a dominating technique in artificial intelligence, has completely changed the image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has reached a new era. We present a comprehensive review of four important aspects of SIE, including algorithms, datasets, applications, and the future trends. Our review focuses on researches published from 2016 to the present, with a specific focus on deep learning-based approaches in the last five years. We divided all relegated algorithms into 3 categories, including classical image segmentation approach, machine learning-based approach and deep learning-based methods. We reviewed the accessible ice datasets including SAR-based datasets, the optical-based datasets and others. The applications are presented in 4 aspects including climate research, navigation, geographic information systems (GIS) production and others. It also provides insightful observations and inspiring future research directions.
On the impact of key design aspects in simulated Hybrid Quantum Neural Networks for Earth Observation
Lorenzo Papa, Alessandro Sebastianelli, Gabriele Meoni, and Irene Amerini,
L. Papa and I. Amerini are with the Department of Computer, Control and Management Engineering, Sapienza University of Rome, Italy, IT, 00185. E-mail: {papa, amerini}@diag.uniroma1.it. A. Sebastianelli is with the \(\phi\)-lab at European Space Agency (ESA), Frascati, Italy, IT, 00078. E-mail: [email protected]. G. Meoni is with the \(\phi\)-lab, ESA, Frascati, Italy, IT, 00078 and with the Advanced Concepts and Studies Office, ESA, Keplerlaan 1, 2201 AZ Noordwijk, Netherlands, NL. The work has been developed during the visiting research period of L. Papa at the \(\phi\)-lab, European Space Agency (ESA), Frascati, Italy. Manuscript received April 19, 2021; revised August 16, 2021.
## I Introduction
The advent of quantum computing has introduced revolutionary opportunities for tackling machine learning (ML) tasks from a new and powerful perspective. Quantum computing leverages the principles of superposition, entanglement, and quantum interference, which enable computations that are far beyond the capabilities of classical systems. In recent years, traditional ML approaches, especially deep learning (DL), have shown outstanding performance across several domains, including image recognition, natural language processing, and Earth Observation (EO). However, as the pursuit of higher accuracy and efficiency continues to grow, the limitations of traditional computing become more evident.
As a result, the integration of quantum computing with traditional DL paradigms has emerged as a promising research frontier across various domains. This development is primarily driven by the ability of quantum algorithms to process and encode high-dimensional data in ways that classical systems find challenging, thereby offering the potential for improved accuracy and efficiency. Furthermore, quantum-enhanced models are capable of exploring larger solution spaces more effectively, which may lead to faster training convergence, improved generalization, and superior performance, especially in tasks involving large-scale and complex datasets.
Following this research trend, several works on remote sensing data have investigated the use of such innovative, powerful technologies for EO tasks. However, despite the growing interest in the combination of quantum computing (QC) and DL for EO (DL4EO), the majority of the existing research has focused primarily on advancing hybrid models from an architectural perspective, i.e., on (convolutional) encoding and/or quantum circuit components. Despite such significant advances, these efforts address only a portion of the broader challenges and potential associated with the application of quantum-enhanced models in EO tasks. Consequently, building on prior work and motivated by Zaidenberg et al. [1], this study explores several key aspects that are central to advancing the field of hybrid quantum DL models and their applications in EO tasks. More in detail, the rationale of this work is threefold:
1. Starting from Zaidenberg et al. [1], this study aims to evaluate the behavior of quantum computing libraries used to train quantum neural network (QNN) architectures.
2. Investigate and compare the sensitivity of both quantized and non-quantized neural networks to different initializations (i.e., seed values). Specifically, we will examine the convergence behavior of the chosen architectures when subject to different starting conditions.
3. Explore the potential of hybrid quantum architectures by incorporating simple single-qubit quantum circuits into Vision Transformer (ViT) structures. More in detail, the objective is to push the boundaries of the work proposed by Zaidenberg et al. [1] by assessing the performance of these novel hybrid quantum ViT (HQViT) models on EO tasks and comparing their behavior to their non-quantized counterparts.
However, while quantum computing has the potential to offer new prospects compared to traditional DL frameworks, it still faces several significant challenges and limitations. Concerns include hardware stability, error rates, scalability, and the difficulty of developing effective quantum algorithms. These challenges may limit the practical deployment of quantum-enhanced neural networks and must be taken into account when assessing their potential in real-world applications.
Summarizing, (1) by evaluating different quantum libraries, this research seeks to uncover potential performance discrepancies and challenges that may arise when implementing quantum-enhanced neural networks. The second point (2) is crucial for understanding how initialization impacts the stability and training efficacy of quantum and classical networks, as well as for evaluating their sensitivity to initial parameter choices, which is a common concern in neural network training. Finally, the third study (3) is motivated by the rising interest in hybrid quantum-classical techniques, which exploit quantum components to augment the capabilities of classical neural architectures in specific domains such as image classification and remote sensing.
As a result, while considering quantum limitations, this work intends to contribute to the growing body of knowledge on QNNs by systematically investigating the interactions between quantum libraries, initialization values, and hybrid model structures, with a particular focus on their application to EO tasks.
The rest of this paper is organized as follows: Section II reviews the relevant literature on quantum and non-quantum deep learning approaches for EO. Section III outlines the three case studies, highlighting their respective challenges. Section IV provides a detailed description of the dataset and the implementation specifics required to replicate the reported experiments. Section V presents and analyzes the experimental results, while Section VI provides final thoughts and discusses future research directions.
## II Related Works
The rapid advancements in both DL and quantum computing have generated significant interest in the EO domain in recent years. Consequently, we review recent related studies applied to EO tasks. This section provides a comprehensive overview of hybrid approaches, key challenges, and the potential of quantum computing in EO.
Zeng et al. [2] (2020) laid the groundwork by introducing the Quantum Mechanism Effect Spectral Clustering (QMESC) model. Their model leverages quantum mechanics to tackle pixel mixture challenges in hyperspectral images, using Green's function to accurately decompose mixed pixels and identify cluster centers with quantum potential energy.
The following year, Zaidenberg et al. [1] (2021) further advanced this field by developing a QNN model for remote sensing image classification using the EuroSAT dataset [3]. Their study emphasizes the speed and feasibility of QML for EO, showcasing performance on par with classical models. The authors focus on qubit decoherence and data processing on Noisy Intermediate-Scale Quantum (NISQ) devices, underscoring the need for improvements in data handling and model scalability for future applications. In the same year, Otgonbaatar and Datcu [4] (2021) explored quantum annealing with a D-Wave quantum computer for feature selection in hyperspectral images. Their Mutual Information-based method identifies the most informative spectral bands, demonstrated on the Indian Pine dataset. By employing quantum classifiers like Qboost, their approach achieved comparable or improved accuracy over classical methods, illustrating quantum annealing's potential for remote sensing data processing. Sebastianelli et al. [5] (2022) built upon [1], introducing a hybrid quantum convolutional neural network (HQCNN) that incorporates quantum layers within a classical CNN for enhanced land-use classification. Tested on the EuroSAT dataset, the authors show that HQCNNs can improve on traditional DL models by leveraging entanglement for improved classification accuracy. This work highlights the potential of quantum circuits for EO, paving the way for future applications with hybrid architectures. Furthermore, Mate et al. [6] (2022) proposed an ansatz-free optimization technique for quantum circuits, parameterizing circuits in the Lie algebra to simplify optimization and enhance training speed. This approach enables flexible exploration of quantum circuits, avoiding the constraints of fixed architectures. Tested on both toy and image classification tasks, their method demonstrates the computational advantages of unitary optimization, adding robustness to quantum machine learning models. Expanding on hybrid quantum-classical approaches, Otgonbaatar et al. [7] (2022) investigated such networks for large-scale EO data processing. They identified real-world problems suitable for quantum computing and proposed encoding strategies on NISQ devices. Their comparisons between hybrid models and conventional techniques underscore the potential for quantum computing to handle big data challenges, even amid hardware limitations. Moreover, Gupta et al. [8] (2022) examined the integration of classical neural networks with Projected Quantum Kernel (PQK) features for Land Use and Land Cover tasks using Sentinel-2 data. They found that PQK significantly improved training accuracy, highlighting the advantages of QML in handling multispectral EO data. This study suggests promising avenues for future applications of quantum-enhanced features in remote sensing.
Further developments in 2023 saw Gupta et al. [9] investigating PQK features for multispectral classification. They achieved substantial accuracy gains, underscoring the utility of quantum kernels for complex EO datasets. Chang et al. [10] introduced Equivariant Quantum Convolutional Neural Networks (EquivQCNN), which leverage planar symmetries to enhance generalization and performance, particularly in data-limited scenarios, while highlighting the potential for symmetry-based quantum models in EO. Furthermore, Nammouchi et al. [11] (2023) provided a comprehensive review of QML applications in climate change and sustainability, emphasizing quantum methods' potential in areas like energy systems and disaster prediction. They also discuss challenges with current quantum hardware, suggesting that QML could improve model accuracy and data processing efficiency in climate research, with potential expansions into modeling extreme events. Moreover, Otgonbaatar et al. [12] (2023) first explored hybrid quantum transfer learning, combining classical VGG16 with QML for high-dimensional EO datasets. They compared real amplitude and strongly entangling quantum networks, finding that the latter often yielded better accuracy due to their local effective dimension, despite challenges related to limited quantum resources. Subsequently, in another study, Otgonbaatar et al. [13] (2023) employed quantum-inspired tensor networks to enhance deep learning models for Earth science tasks. They focused on compressing physics-informed neural networks (PINNs) and improving the spectral resolution of hyperspectral images, achieving computational efficiency without compromising accuracy.
Recently, Fan et al. [14] (2024) presented two HQCNNs for land cover classification using Sentinel-2 multispectral images. Their models combine quantum computing for feature extraction and classical methods for classification, achieving a performance boost over traditional CNNs. Similar to previous studies, this research also underlines the advantages of hybrid convolutional models in handling large EO datasets with improved accuracy and transferability. Moreover, Meyer et al. [15] (2024) investigated a different approach by applying quantum reinforcement learning to cognitive synthetic aperture radar (SAR) data for ship detection in maritime monitoring. Their two-stage approach integrates variational quantum circuits for scene adaptation and resource optimization, demonstrating how quantum methods could enhance SAR systems' adaptability and efficiency in EO.
This timeline of advancements demonstrates the growing potential of quantum computing in EO, from quantum-enhanced clustering and feature selection to hybrid architectures and reinforcement learning. These studies collectively underscore the transformative impact quantum computing could have on EO, offering promising directions for future research and applications. Consequently, this research study aims to build on this body of knowledge by exploring, through three case studies, less-investigated quantum aspects, i.e., quantum libraries, sensitivity to initialization, and attention-based quantum structures.
## III Cases of Study
This section presents the three key areas of investigation in this study: quantum libraries in Section III-A, model robustness in Section III-B, and architectural design in Section III-C. More in detail, we first describe the quantum computing libraries utilized for training quantum neural networks, examining their strengths and limitations. Next, we look into the sensitivity to initialization of both quantized and non-quantized models by analyzing the impact of different random seed initializations. Finally, we define the architectures employed in this study and introduce the novel hybrid quantum ViTs.
### _Quantum Libraries_
Quantum computing has emerged as a revolutionary field capable of solving complex problems that are intractable for conventional computers, i.e., challenges that are too difficult or highly time-consuming. This capability is mainly motivated by the fact that, unlike classical computers, which process information using bits (0s and 1s), quantum computers use quantum bits (qubits), allowing them to perform several calculations simultaneously. As researchers and developers explore this new frontier, various quantum computing libraries have been developed to facilitate the design, simulation, and execution of quantum algorithms. In this domain, two well-known frameworks are Qiskit and PennyLane, each offering specific features and capabilities that cover various elements of quantum computing and its integration with traditional machine learning techniques.
**Qiskit** has been developed by IBM; it is a comprehensive framework that offers a wide range of tools for designing, simulating, and executing quantum circuits. One of its key features is the ability to access real quantum hardware through the IBM Quantum platform, which allows for practical experimentation. Qiskit's modular architecture enables users to work with specific components, such as Qiskit Terra for circuit creation, Qiskit Aer for simulation, and Qiskit Ignis for error mitigation; this structure covers a wide range of applications and requirements. Additionally, Qiskit benefits from extensive documentation and a large community, which facilitates learning and troubleshooting. However, compared with PennyLane, the required interaction between multiple components can make Qiskit trickier to use, demanding a substantial investment of time and effort. Furthermore, while Qiskit provides access to quantum devices, performance and availability can be influenced by hardware limitations, such as qubit count and coherence time. Moreover, as quantum circuits scale in size, managing complexity and ensuring effective execution on available hardware becomes increasingly challenging.
**PennyLane** has been developed by Xanadu; it is specifically designed for hybrid quantum-classical computations, and it integrates with popular machine learning libraries like PyTorch and TensorFlow. This integration allows for the efficient development of hybrid quantum models, which is one of PennyLane's key advantages. The framework supports differentiable quantum programming, enabling users to optimize quantum circuits alongside traditional neural networks through backpropagation techniques. Moreover, PennyLane's design is flexible, supporting multiple quantum hardware platforms and simulators and providing researchers with a wide range of experimental choices. However, PennyLane also has its drawbacks. For instance, even if the framework supports several backends, users may find limited access to real quantum devices, depending on the platform they choose. Additionally, the learning curve associated with understanding the hybrid model concept and differentiable programming can pose challenges to the model's convergence behavior.
In summary, each quantum library has its advantages and disadvantages that pose challenges in its implementation and usage. Moreover, the development of hybrid models that effectively leverage both classical and quantum components does not come without limitations, even with the support of such powerful libraries. Furthermore, the research field of quantum computing is evolving rapidly, necessitating continuous learning and adaptation to new features and best practices within these libraries. Motivated by these considerations, in this first case study, we aim to investigate the practical usage and the models' convergence behavior in hybrid quantum settings in order to understand each framework's strengths and weaknesses.
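As a concrete point of comparison, the following sketch defines the single-qubit building block used throughout this work (a Hadamard gate followed by a parameterized Y-rotation, see Section III-C) in both libraries. The rotation angle, the device choices, and the statevector inspection are illustrative assumptions rather than the exact experimental code.

```python
import numpy as np
import torch
import pennylane as qml
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

theta = np.pi / 4  # hypothetical rotation angle, for illustration only

# --- Qiskit: build the circuit and inspect its ideal statevector ---
qc = QuantumCircuit(1)
qc.h(0)             # Hadamard: create a superposition
qc.ry(theta, 0)     # parameterized rotation around the Y axis
probs_qiskit = Statevector(qc).probabilities()   # ideal P(|0>), P(|1>)

# --- PennyLane: the same circuit as a differentiable node (Torch interface) ---
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(angle):
    qml.Hadamard(wires=0)
    qml.RY(angle, wires=0)
    return qml.expval(qml.PauliZ(0))

angle = torch.tensor(theta, requires_grad=True)
expval = circuit(angle)      # differentiable w.r.t. the rotation angle
expval.backward()            # gradients flow through the quantum node
```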
### _Sensitivity to initialization_
In DL, the concept of sensitivity to initialization refers to how the initial weights and biases affect the training dynamics, convergence rate, and final performance of the model. Stability, on the other hand, measures the consistency of a model's performance across different runs under varying initial conditions, such as different random seeds. Sensitivity to initialization is therefore crucial in DL research scenarios, as it influences how the optimization process navigates the high-dimensional loss landscape. More in detail, the weights of neural network architectures are typically initialized randomly, following a given distribution (e.g., Normal or Uniform) whose samples are governed by a random value (seed). From a mathematical point of view, given a loss function \(\mathcal{L}(\theta)\), where \(\theta\) represents the parameters of the model, in a standard training procedure such a function is minimized (or maximized) through an optimization algorithm. Equation 1 reports how the parameters are updated at each time step (\(t+1\)):
\\[\\theta_{t+1}=\\theta_{t}-\\eta\
abla\\mathcal{L}(\\theta_{t}) \\tag{1}\\]
We indicate with \\(\\eta\\) the learning rate, and with \\(\
abla\\mathcal{L}(\\theta_{t})\\) the gradient of the loss function with respect to the parameters at time step \\(t\\). Building on this formulation, the second case study of this work aims to explore \\(\\theta_{0}\\), i.e., the initialization of \\(\\theta\\) at time \\(t_{0}\\). This focus is motivated by the fact that DL models may converge to suboptimal (local) minima or present divergent behavior due to inadequate initialization, particularly in complex loss surfaces characterized by local minima and saddle points. Generally speaking, we investigate and compare classical DL models with their quantum-enhanced counterparts, detailed in the next section, under various initialization values/conditions. The objective is to examine the stability and convergence behaviors of novel techniques in comparison to traditional approaches within convolutional and transformer structures. This concern is, in fact, particularly relevant in the context of hybrid quantum models, where the interplay between classical and quantum layers may present specific challenges in ensuring stable and reliable convergence. More in detail, in our scenario, the quantum layer/circuit is added to a conventional convolutional or transformer architecture in order to increase the feature space by leveraging quantum properties and potentially enhancing the model's performances. However, such a layer may also introduce additional sensitivity and variability. These factors, together with quantum noise and gate fidelity, may significantly impact the stability of such hybrid models. Mathematically speaking, the output state of the quantum layer, reported in Equation 2, can be expressed as a unitary transformation applied to the input state vector \\(|\\psi_{in}\\rangle\\):
\\[|\\psi_{\\text{out}}\\rangle=U(\\theta)|\\psi_{\\text{in}}\\rangle \\tag{2}\\]
where \\(U(\\theta)\\) is a unitary operator parameterized by \\(\\theta\\), representing the sequence of quantum gates applied to the input state.
Thus, the comparative analysis of classical and quantum-enhanced models in this study aims to provide insights into the benefits and trade-offs associated with quantum integration into traditional architectures. Moreover, by examining convergence behaviors across multiple seed values, the study aims to explore robustness and stability while offering guidance for the development of future quantum-classical hybrid neural networks.
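To make the multi-start protocol concrete, the sketch below shows how a single seed can be propagated to every source of randomness that determines \(\theta_{0}\) before each run; the `build_model` and `train_and_evaluate` helpers are hypothetical placeholders for any of the architectures compared in this study.

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Fix all pseudo-random generators that influence weight initialization
    # and data shuffling, so that theta_0 is fully determined by the seed.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

accuracies = []
for seed in [0, 12, 123, 1000, 1234]:             # subset of the seeds listed in Section IV-B
    set_seed(seed)
    model = build_model()                          # hypothetical constructor (e.g., NN4EOv1 or HQNN4EOv1)
    accuracies.append(train_and_evaluate(model))   # hypothetical training/evaluation routine
```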
### _Architectures_
In this last section, we formally describe quantized and traditional architectures while introducing innovative hybrid quantum Vision Transformers (ViTs), which, to the best of our knowledge, are employed for the first time in EO tasks. Specifically, this study examines four pairs of architectural structures: three convolutional architectures, namely NN4EOv1, NN4EOv2, and NN4EOv3 in their traditional forms, and their quantized counterparts, HQNN4EOv1, HQNN4EOv2, and HQNN4EOv3, which were originally derived from Zaidenberg et al. [1] and reduced in terms of number of convolutional operations to understand their behavior. The fourth pair is ViT-based and is referred to as ViT and HQViT. These architectural structures are graphically represented in Figure 1 and described below.
Before going into the details of each architectural structure, we formally introduce their fundamental elements, i.e., the convolution operation employed in the CNN-based NN4EO models, the self-attention mechanism used in ViT, and the elementary quantum layer utilized in their hybrid quantum configurations.
The **2D Convolution** operation is the foundational operation in image processing and a key component of well-established CNN architectures. This operation involves a filter (kernel), which slides over the input image to produce an output feature map. The primary objective of convolution is to extract an image's features, such as edges, textures, or patterns. More in detail, given an input image \\(I\\) and a kernel \\(K\\), the convolution produces an output feature map \\(O\\). Thus, the value of the output \\(O\\) at the pixel position \\((i,j)\\) is computed as follows:
\\[O(i,j)=\\sum_{m=-a}^{a}\\sum_{n=-b}^{b}I(i+m,j+n)\\cdot K(m+a,n+b) \\tag{3}\\]
Where \\(I(i,j)\\) represents the pixel value at the coordinates \\((i,j)\\) in the input image, while \\(K(m,n)\\) denotes the value at position \\((m,n)\\) within the kernel, which has dimensions of \\((2a+1)\\times(2b+1)\\). The parameters \\(a\\) and \\(b\\) represent the half-widths of the kernel in the vertical and horizontal directions, respectively. Consequently, as the kernel slides along the image, it computes a weighted sum of the pixel values covered by the kernel. This process effectively captures local patterns and translates the original image into a more abstract representation. Such a procedure enables subsequent convolutional layers to learn and extract increasingly complex features. To summarize, the kernel and its parameters significantly influence the performances of the convolution operation, as well as the types of features/information extracted from the image.
The **self-attention mechanism**, introduced by Vaswani et al. [16], is the key component of the attention block employed in ViT architectures. This mechanism is specifically designed to capture long-range relationships in image data by operating on embedded images or feature patches. In particular, the self-attention operation allows each patch to relate to all others within the sequence, thereby increasing the DL model's receptive field with respect to conventional local convolutional operations. Mathematically, given an input sequence of embedded patches, self-attention computes three matrices: the query (\(Q\)), key (\(K\)), and value (\(V\)). Subsequently, as detailed in Equation 4, the self-attention is computed by performing the dot-product interactions between queries and keys, scaled by the dimensionality \(\sqrt{d_{k}}\), followed by a softmax operation in order to generate attention scores, which are then applied to the values (\(V\)).
\\[A(Q,K,V)=\\text{Softmax}\\left(\\frac{Q\\cdot K^{T}}{\\sqrt{d_{k}}}\\right)\\cdot V \\tag{4}\\]
As detailed in Papa et al. [17], the time and memory complexity of this operation is \(\mathcal{O}(n^{2})\) due to the quadratic cost of computing \(A(Q,K,V)\), making this operation particularly powerful but computationally expensive for large input sizes.
Furthermore, this elementary operation can be parallelized into a multi-head self-attention (MSA) mechanism, in which multiple self-attention layers are executed simultaneously. This solution allows the model to focus on different areas/characteristics of the input features at the same time. More in detail, given the input features \\(X\\), the output features \\(X_{out}\\) resulting from the execution of an attention block can be mathematically formulated as follows:
\\[\\begin{split}& X_{MSA}=\\text{Norm}(\\text{MSA}(X,X))+X\\\\ & X_{out}=\\text{Norm}(\\text{FNN}(X_{MSA}))+X_{MSA}\\end{split} \\tag{5}\\]
Here, Norm denotes a normalization process, whereas FNN indicates a feed-forward network.
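A compact PyTorch rendering of Equations (4) and (5) is sketched below; the two attention heads match the ViT configuration used in this study, while the token dimension, the feed-forward expansion factor, and the activation are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_attention(Q, K, V):
    # Equation (4): softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

class AttentionBlock(nn.Module):
    # Equation (5): X_MSA = Norm(MSA(X, X)) + X ; X_out = Norm(FNN(X_MSA)) + X_MSA
    def __init__(self, dim: int, heads: int = 2):
        super().__init__()
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.fnn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))

    def forward(self, x):                           # x: (batch, num_patches, dim)
        x = self.norm1(self.msa(x, x, x)[0]) + x
        return self.norm2(self.fnn(x)) + x

tokens = torch.randn(1, 64, 32)                     # e.g., 64 embedded patches of dimension 32
out = AttentionBlock(dim=32)(tokens)                # same shape as the input
```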
The **quantum layer** is a key component in quantum neural networks. It consists of a sequence of quantum gates that perform unitary transformations on qubits, allowing the manipulation and entanglement of quantum states. Quantum layers can also be designed to operate as the quantum equivalent of classical neural network layers. Similar to classical layers, such a qubit-based layer enables the encoding, processing, and transformation of input data within quantum circuits. Here, the fundamental elements of the elementary quantum circuit used in this work are described. Specifically, the structure of the quantum circuit, described below, has been derived from Zaidenberg et al. [1].
Generally speaking, the core of quantum computing is the concept of qubit, a two-level quantum system that can be represented on the Bloch sphere. The Bloch sphere provides a geometric representation of the qubit's state, where any point on the sphere corresponds to a valid qubit state. The north pole represents the state (\\(|0\\rangle\\)), and the south pole represents (\\(|1\\rangle\\)); mathematically, a qubit can be expressed as a linear combination of its basis states as reported in Equation 6.
\\[|\\psi\\rangle=\\alpha|0\\rangle+\\beta|1\\rangle \\tag{6}\\]
Here, (\\(\\alpha\\)) and (\\(\\beta\\)) are complex coefficients satisfying the normalization condition (\\(|\\alpha|^{2}+|\\beta|^{2}=1\\)). Moreover, following
Fig. 1: Graphical representation of the three reference models employed in this research study. Each traditional architecture, i.e., NN4EOv1, NN4EOv2, NN4EOv3, and ViT, is composed of a sequence of convolutional-self-attention layers (in orange/yellow) in addition to fully connected layers for classification. Differently, the quantum models, i.e., HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT, are developed by stacking a quantum circuit to the fully connected layers of traditional models. Within the same architectural design, double lines are used to distinguish between traditional and hybrid designs, while a Bloch sphere represents the single qubit circuit.
the single-qubit circuit previously reported, several key operations are performed: (1) the qubit is initialized to a specific state, typically \\(|0\\rangle\\). Then, (2) the Hadamard (\\(H\\)) gate is used in order to create a superposition state. The action of the Hadamard gate on the basis states is defined as:
\\[H|0\\rangle=\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle),\\quad H|1\\rangle=\\frac{1}{ \\sqrt{2}}(|0\\rangle-|1\\rangle)\\]
Where the matrix representation of the Hadamard gate is:
\\[H=\\frac{1}{\\sqrt{2}}\\begin{pmatrix}1&1\\\\ 1&-1\\end{pmatrix}\\]
Subsequently, (3) a rotation gate (\(R_{y}(\theta)\)) allows manipulation of the qubit's state. In our case, the rotation around the Y-axis of the Bloch sphere is given by the following formula:
\\[R_{y}(\\theta)=e^{-i\\frac{\\theta}{2}Y}=\\cos\\left(\\frac{\\theta}{2}\\right)I-i\\sin \\left(\\frac{\\theta}{2}\\right)Y\\]
where \\(Y\\) is the Pauli-Y matrix:
\\[Y=\\begin{pmatrix}0&-i\\\\ i&0\\end{pmatrix}\\]
Finally, (4) the qubit's state is measured by collapsing its superposition into one of the basis states, i.e., the probability of measuring state \\(|0\\rangle\\) or \\(|1\\rangle\\) is given by:
\\[P(0)=|\\alpha|^{2},\\quad P(1)=|\\beta|^{2}\\]
Once the fundamental elements of each compared architecture have been introduced, we discuss and present the four architectural structures along with their respective quantum configurations. More in detail, we leverage three convolutional models and a ViT model. A block diagram representation of these networks and their quantum counterparts is reported in Figure 1. As can be noticed, all the architectures leverage fully connected layers in order to perform the final classification. More in detail, the three convolutional architectures are composed of concatenations of convolutional blocks. Each block is composed of a two-dimensional convolution with a \(5\times 5\) kernel, followed by a \(2\times 2\) max pooling layer and a ReLU activation function. Additionally, following Zaidenberg et al. [1], NN4EOv2 and NN4EOv3 employ two fully connected layers, in which the first one matches the flattened output features from the previous encoding part and compacts the information into \(64\) output neurons, while the second layer outputs the binary classification probability through a single neuron. Differently, in NN4EOv1, and similarly to the ViT-based model, a single fully connected layer is used. However, the CNN-based models differ in the number of subsequent convolutional blocks, as illustrated in Figure 1 (orange blocks), i.e., NN4EOv1, NN4EOv2, and NN4EOv3 are respectively composed of one, two, and three convolutional blocks, counting respectively \(6.6K\), \(18K\), and \(68K\) trainable parameters. Furthermore, the ViT model implemented for this study has been intentionally designed to maintain a simple architecture, because the main objective of the third study proposed in this work is not to develop a highly complex model but rather to demonstrate the potential effectiveness of integrating quantum circuits with the ViT structure for EO tasks. More in detail, the input image is divided into \(8\times 8\) patches, which are then processed through a Multi-Head Self-Attention (MSA) layer with two attention heads, and finally fed into a single fully connected layer that takes the encoded features as input and returns a single prediction with a single neuron. This minimalistic design ensures a lightweight model architecture, resulting in fewer than \(34K\) trainable parameters.
On the other hand, quantum models, i.e., HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT, leverage the same architectural structure as their traditional counterparts, i.e., NN4EOv1, NN4EOv2, NN4EOv3, and ViT respectively, while adding the previously introduced single-qubit circuit for the final classification stage. The objective of this integration is to introduce quantum processing capabilities into traditional models, aiming to exploit quantum effects such as superposition and entanglement to potentially enhance classification performance on EO tasks.
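The following sketch illustrates how such a hybrid model can be assembled, using the one-block HQNN4EOv1 as an example; the channel width, the way the classical logit is encoded into the rotation angle, and the mapping of the expectation value to a probability are assumptions made for illustration and not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def single_qubit_circuit(inputs, weights):
    qml.Hadamard(wires=0)                                   # superposition
    qml.AngleEmbedding(inputs, wires=[0], rotation="Y")     # encode the classical logit
    qml.RY(weights[0], wires=0)                             # trainable rotation
    return qml.expval(qml.PauliZ(0))

class HQNN4EOv1Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                        # one convolutional block (assumed 16 channels)
            nn.Conv2d(3, 16, kernel_size=5), nn.MaxPool2d(2), nn.ReLU())
        self.fc = nn.Linear(16 * 30 * 30, 1)                 # single output neuron for 64x64 RGB inputs
        self.quantum = qml.qnn.TorchLayer(single_qubit_circuit, weight_shapes={"weights": (1,)})

    def forward(self, x):
        z = self.fc(self.encoder(x).flatten(1))              # classical logit, shape (batch, 1)
        q = self.quantum(z)                                  # expectation value in [-1, 1]
        return (q + 1) / 2                                   # map to a [0, 1] probability for BCE
```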
## IV Experimental Setup
In this section, we detail the experimental setup used to evaluate the studies just described in the EO domain. The experimental setup is divided into two parts. Firstly, in Section IV-A, we outline the characteristics of the training dataset and the preprocessing steps applied to it. Then, in Section IV-B, we describe the implementation details, including the software libraries, training protocols, and hyperparameters used to train and evaluate the models.
### _Training Dataset_
The study was conducted using the EO application scenario. More in detail, quantum architectures have been investigated in order to tackle the image classification task, specifically the identification of scenes in the EuroSat dataset [3]. This dataset is composed of Sentinel-2 data covering 13 spectral bands and is divided into \\(10\\) classes, with a total of \\(27000\\) labeled and georeferenced images. Moreover, following the training protocol proposed in Zaidenberg et al. [1], and in order to simplify the task given the innovative use of hybrid quantum vision transformers in the research field of EO, the number of classes has been reduced to two, resulting in multiple binary classification tasks. Precisely, at training time, the dataset has been subsequently split into training and validation sets with a division factor of \\(20\\%\\). Out of the 13 available bands, only the RGB bands have been selected.
### _Implementation Details_
The study has been implemented using PyTorch1 (v12.4.1) deep learning API. All models have been trained from scratch, following the training protocol outlined in [1], while the Binary Cross Entropy loss function has been used in order to perform the binary classification across all possible pairwise combinations of the 10 dataset's classes. We identify such classes with numbers ranging from \\(0\\) to \\(9\\), which correspond to highway (0), forest (1), sea lake (2), herbaceous vegetation (3), river (4), industrial (5), residential (6), pasture (7), permanent crop (8), and annual crop (9). Specifically, Adam optimizer [18] has been employed with \\(\\beta_{1}=0.9\\), \\(\\beta_{2}=0.999\\), and an initial learning rate of \\(0.0001\\) for a total of \\(20\\) epochs with a batch size of \\(1\\), and no data augmentation techniques applied to the training dataset. Additionally, preliminary studies involving quantum circuits have been conducted using Qiskit (v1.2.0), while Penymale (v0.28.0) has been employed to facilitate GPU support for quantum computations. For robustness investigations, multi-start experiments has been performed using \\(k=10\\) distinct seed values, specifically: \\(0\\), \\(12\\), \\(123\\), \\(1000\\), \\(1234\\), \\(10000\\), \\(12345\\), \\(100000\\), \\(123456\\), \\(1234567\\). Once the training phase has been concluded, we quantitatively evaluate the trained models using the accuracy metric (\\(Acc\\)), which is widely adopted in the literature. Moreover, we evaluate the stability of reference models through the accuracy variance (\\(\\sigma^{2}(Acc)\\)) across the \\(k\\) training/seeds, as reported below.
\\[\\sigma^{2}(Acc)=\\frac{1}{k}\\sum_{i=1}^{k}(Acc_{i}-\\overline{Acc})^{2}\\]
Where \\(Acc_{i}\\) is the accuracy performance of the \\(i\\)-th run, and \\(\\overline{Acc}\\) is the mean accuracy across all runs. For completeness, we remind that the lower the variance, the higher the stability, i.e., a highly stable model will exhibit minimal accuracy variability and lower variance, suggesting that the model's training dynamics are robust to random factors.
## V Results and Discussion
This section will quantitatively analyze and compare the performance of eight models, namely four quantum and their respective non-quantum configurations. More in detail, Section V-A compares well-known quantum libraries to investigate their impact on QNNs training. Subsequently, in Section V-B, the impact of varying initialization values on model performances and stability will be analyzed. Lastly, Section V-C will present a comparative analysis between HQViT and ViT for EO classification tasks.
### _Comparison of Quantum Libraries_
In this first set of experiments, we are going to compare the different performances of well-known quantum libraries. As introduced in Section IV-B, we train four reference hybrid quantum models using Qiskit (v1.2.0) and PennyLane (v0.28.0) versions based on the same training configuration and a fixed seed value equal to 1699806. Due to the high number of experiments, we report the obtained results in the attached Appendix. More in details we report Tables III, V, VII, and IX respectively for HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT Qiskit configuration and in Tables IV, VI, VIII, and X for the same architectures in the PenyLane configuration. Moreover, in order to give a broader overview, we report in Table I a summary of the performed tests presenting a comparison between quantum models trained using the two previously introduced quantum computing libraries, i.e., Qiskit and PennyLane. More in detail, we report the average accuracy (\\(\\overline{Acc}\\)) and the average value in which the best model has been saved at training time (\\(k\\)*). The latter information can give us an overview of the amount of epochs needed for a model in order to reach convergence (a local minimum).
Based on the reported results, both Qiskit and PennyLane libraries exhibit strong performance across all models, with only minor variations in accuracy and \\(k\\)*. For instance, in the HQNN4EOv2 model, PennyLane achieves slightly better results in both accuracy, equal to \\(92.51\\%\\) and computational efficiency \\(k\\)* = \\(16.11\\) compared to Qiskit, which achieves an accuracy of \\(92.35\\%\\) and a slightly higher \\(k\\)* value equal to \\(16.36\\). Differently, HQNN4EOv1, HQNN4EOv3, and HQViT models present a different scenario, where Qiskit slightly surpasses PennyLane in terms of accuracy, achieving the highest score. However, also in this scenario, PennyLane remains competitive with a close accuracy of \\(91.80\\%\\), \\(93.15\\%\\), and \\(87.77\\%\\) with respect to \\(91.93\\%\\), \\(93.45\\%\\), and \\(87.77\\%\\) respectively for HQNN4EOv1, HQNN4EOv3, and HQViT while achieving better convergence performances with a lower \\(k\\)* epochs needed for Qiskit full convergence. These results suggest that while Qiskit performs slightly better in terms of accuracy in certain cases, PennyLane consistently demonstrates faster convergence behavior.
Furthermore, from a more detailed analysis of the tables presented in the Appendix, i.e., when comparing each pair of trained classes among the four-compared quantum models, the obtained results reveal that out of the 184 training sessions, i.e., 46 possible binary class configurations for each quantum-enhanced model, Qiskit and PennyLane obtains similar performances. More in detail, across the 184 training sessions, Qiskit and PennyLane achieve the same performances, i.e., Quiskit outperforms PennyLane in \\(45.6\\%\\) (\\(84/184\\)) instances, PennyLane outperforms Quiskit in \\(44.6\\%\\) (\\(84/184\\)), while in \\(16\\) sessions, both frameworks yield identical accuracy results.
In conclusion, both Qiskit and PennyLane perform well in terms of accuracy for EO classification tasks. However, PennyLane shows a potential advantage in computational efficiency, making it a valuable tool for scaling quantum models in resource-constrained environments. The latter assumption is motivated by the fact that PennyLane achieves a constant advantage in terms of \\(k\\)*, highlighting its potential for more efficient execution, especially when dealing with larger quantum circuits or more complex tasks. Additionally, PennyLane's integration with PyTorch and its support for GPU acceleration further enhance its suitability for hybrid quantum-classical learning. These features suggest that PennyLane may be more advantageous in contexts where computational resources are limited, or efficiency is a key priority.
### _Study on the Stability Towards Initialization Values_
In this section, which is related to the second case study of this work, we investigate and compare the stability and estimation performances of reference models. Similar to the previous section, due to the extensive number of conducted experiments, we report the average class-wise results in the Appendix. Specifically, the results for NN4EOv1, NN4EOv2, NN4EOv3, and ViT are detailed in Tables XIX, XXI, XXIII, XXV respectively. Similarly, the outcomes for HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT are provided in Tables XX, XXII, XXIV, and XXVI, respectively. However, in order to provide a more general overview, we show in Table II a summary of all the experiments, reporting the mean accuracy (\\(\\overline{Acc}\\)) and mean-variance (\\(\\overline{\\sigma}^{2}\\)) across all classes over the \\(k=10\\) seeds.
Based on the obtained results, it can be noted that quantum-based models, specifically HQNN4EOv3 and HQViT, demonstrate advantages in terms of accuracy, achieving mean accuracy of \\(93.47\\%\\) and \\(88.78\\%\\), respectively, i.e., a \\(0.5\\%\\) boost when compared to their traditional counterparts. Moreover, HQNN4EOv1 is also able to achieve small improvements with respect to its traditional variance. This improvement may indicate that hybrid quantum models, even if with a small boost, can enhance model performance. However, even if the improvement is limited, the quantum model is able to obtain higher estimations with a lower variance compared with its traditional configuration. Similarly, the HQNN4EOv3 model not only shows superior accuracy but also exhibits reduced variance compared to its traditional version. Such results may suggest that quantum-enhanced models can provide more consistent and stable performance across multiple initialization values. However, it is important to acknowledge that the benefits of quantum models are not uniform across all compared architectures. For instance, HQViT, while showing improved prediction performances, achieves a higher variance when compared with its traditional counterpart. This observation may underscore the need for careful parameter tuning when incorporating quantum elements. However, despite these challenges, the results reported in Table II, shows that even the simple integration of a single qubit can yield to performance gains; suggesting that quantum layers, even in their simplest forms, can enhance traditional models.
In summary, we can assess that a careful design of the initialization and optimization strategies is essential to mitigate instability and achieve reliable performance.
### _Towards Hybrid Quantum Vision Transformers for Earth Observation_
In this section, we report the quantitative evaluation of experiments performed for the third case study of this work. More in detail, we investigate the potential of HQViT architectures for EO tasks by comparing the performance estimation of a traditional ViT model with its quantum-enhanced counterpart; both the models have been detailed in Section III. The objective of this study, inspired by Zaidenberg et al. [1] on CNN models, is to determine whether the integration of quantum circuits, even in their simplest structure, can positively contribute over traditional ViT approaches.
However, do the the large amount of performed experiments, we report the class-wide results in the Appendix in Tables XVII and XVIII. Moreover, in order to give a faster look at the model's performances, we can refer to Table II.
Based on the obtained results, it can be noticed that the average accuracy (\\(\\overline{Acc}\\)) of the HQViT model is marginally higher (\\(88.78\\)) when compared to the traditional ViT (88.37) model. This finding may indicate a present, albeit modest, improvement in the performance of the quantum-enhanced ViT structure. Consequently, the results suggest that even a minimal quantum integration can introduce qualitative improvements, potentially paving the way for more sophisticated and efficient quantum-augmented models.
In conclusion, this third research study and respective set of experiments computed over minimal ViT-based architectural setups is thought of as proof-of-concept in order to highlight the potential of quantum computing in machine learning models. Moreover, the HQViT model shows that quantum-enhanced vision transformers can positively influence ViT-based models, encouraging future research into advanced quantum architectures and their integration into deep learning frameworks.
## VI Conclusions and Future Works
This study investigates less-explored aspects of quantum DL applications for EO tasks. More in detail, building upon Zaidenberg et al. [1], three cases of study are investigated. Firstly, we compare the convergence behavior of well-known quantum libraries, i.e., Quiskit and PennyLane, in order to understand their potential in training hybrid quantum models. This first case of the study reveals that both libraries provide benefits for QNN models achieving comparable classification performances and convergence behaviors; however, PennyLane easily integrates with PyTorch GPU libraries, which is advantageous for researchers. Secondly, we investigate the sensitivity/stability of quantum and traditional counterparts with respect to the initialization values (seeds). This second case of the study reveals that both types of architecture need a careful design of the initialization hyperparameters in order to mitigate possible instabilities; however, over \\(k=10\\) different trials, quantum models show higher (averaged) accuracy valueswith comparable (averaged) variance. These results underline the effective contribution of quantum structures into hybrid architectures, even with elementary circuits, i.e., in our case, a single-bit module. Finally, the third case study investigates the use of such a single qubit circuit embedded into a transformer-based architecture. More in detail, by combining a simple (2 heads) multi-head attention layer with the previously introduced circuit, we show that, even with a higher variance due to the initialization values, the HQViT model is able to achieve an average boost of almost \\(0.5\\%\\) when compared with its traditional counterpart. This finding pushes the boundaries of prior research on classical convolution-based models by demonstrating the advantages of hybrid quantum architectures in complex real-world applications like EO.
In summary, this study provides evidence that quantum computing libraries and quantum circuits may offer significant advantages even with simple DL architectural structures. Additionally, the successful integration of quantum circuits into ViT models for EO tasks may open new research trends for further exploration. Consequently, building on such findings, future research may focus on investigating if more extensive quantum circuits may reduce the variance with respect to the initialization values while leading to more stable models and optimizing hybrid quantum ViT architectures with more architectural-oriented structures for EO tasks and more complex EO applications in order to take advantage of such kind of global processing with respect to convolutional-based models.
## References
* [1]D. A. Zaidenberg, A. Sebastianielli, D. Spiller, B. Le Saux, and S. L. Ullo (2021) Advantages and bottlenecks of quantum machine learning for remote sensing. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, pp. 5680-5683. Cited by: SSI.
* [2]S. Otgonbaatar, M. Datcu, X. X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [3]S. Otgonbaatar, M. Datcu, X. X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [4]S. Otgonbaatar, M. Datcu, and D. Kranzlmuller (2022) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
[MISSING_PAGE_POST]
. Ogtonbaatar and M. Datcu (2021) A quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [36]S. Ogtonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2022) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [37]S. Ogtonbaatar, M. Datcu, X. X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [38]S. Ogtonbaatar, M. Datcu, X. X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [39]S. Ogtonbaatar and D. Kranzlmuller (2022) Quantum-inspired tensor network for earth science. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium, pp. 788-791. Cited by: SSI.
* [40]S. Ogtonbaatar and M. Datcu (2021) Quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [41]S. Ogtonbaatar and M. Datcu (2021) A quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [42]S. Ogtonbaatar, M. Datcu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [43]S. Ogtonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2023) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [44]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [45]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [46]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [47]S. Ogtonbaatar and M. Datcu (2021) Quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [48]S. Ogtonbaatar and D. Kranzlmuller (2022) Quantum-inspired tensor network for earth science. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium, pp. 788-791. Cited by: SSI.
* [49]S. Ogtonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2022) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [50]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [51]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [52]S. Ogtonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2022) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [53]S. Ogtonbaatar and M. Datcu (2021) Quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [54]S. Ogtonbaatar and M. Datcu (2021) A quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing14, pp. 7057-7065. Cited by: SSI.
* [55]S. Ogtonbaatar, M. Datcu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [56]S. Ogtonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2023) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observation and Remote Sensing. Cited by: SSI.
* [57]S. Ogtonbaatar and D. Kranzlmuller (2022) Quantum-inspired tensor network for earth science. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium, pp. 788-791. Cited by: SSI.
* [58]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [59]S. Ogtonbaatar, M. Datcu, X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
* [60]S. Ogtonbaatar, M. Datcu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation. Cited by: SSI.
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 99.36 (16) & 99.36 (18) & 79.27 (19) & 73.90 (20) & 95.00 (20) & 93.27 (20) & 88.00 (20) & 80.00 (20) & 81.82 (20) \\\\ \\hline
1 & 99.36 (16) & - & 88.50 (20) & 97.92 (13) & 96.36 (16) & 100.00 (4) & 100.00 (10) & 92.30 (16) & 99.36 (19) & 97.67 (18) \\\\ \\hline
2 & 99.36 (18) & 88.50 (20) & - & 94.75 (20) & 97.00 (20) & 100.00 (6) & 99.92 (15) & 90.60 (19) & 99.45 (20) & 95.42 (19) \\\\ \\hline
3 & 79.27 (19) & 97.92 (13) & 94.75 (20) & - & 89.45 (18) & 95.09 (19) & 88.42 (20) & 91.20 (20) & 74.55 (18) & 84.42 (20) \\\\ \\hline
4 & 73.90 (20) & 96.36 (16) & 97.00 (20) & 89.45 (18) & - & 97.10 (14) & 98.64 (20) & 82.44 (20) & 91.70 (18) & 88.27 (19) \\\\ \\hline
5 & 95.00 (20) & 100.00 (4) & 100.00 (6) & 95.09 (19) & 97.10 (14) & - & 94.53 (20) & 100.00 (12) & 95.90 (17) & 98.55 (18) \\\\ \\hline
6 & 93.27 (20) & 100.00 (10) & 99.92 (15) & 88.42 (20) & 98.64 (20) & 94.55 (20) & - & 96.00 (17) & 91.55 (19) & 98.17 (18) \\\\ \\hline
7 & 88.90 (20) & 92.30 (16) & 90.60 (19) & 91.20 (20) & 82.44 (20) & 100.00 (12) & 96.00 (17) & - & 89.11 (14) & 92.40 (16) \\\\ \\hline
8 & 80.00 (20) & 99.36 (19) & 99.45 (20) & 74.55 (18) & 91.70 (18) & 95.90 (17) & 91.55 (19) & 89.11 (14) & - & 82.73 (20) \\\\ \\hline
9 & 81.82 (20) & 97.67 (18) & 95.42 (19) & 84.42 (20) & 88.27 (19) & 98.55 (18) & 98.17 (18) & 92.40 (16) & 82.73 (20) & - \\\\ \\hline \\end{tabular}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 99.00 (19) & 98.09 (20) & 79.45 (20) & 75.90 (20) & 94.30 (16) & 90.18 (20) & 92.56 (16) & 80.30 (19) & 89.45 (20) \\\\ \\hline
1 & 99.00 (19) & - & 93.25 (20) & 97.83 (19) & 96.82 (18) & 100.00 (6) & 99.75 (17) & 92.30 (10) & 98.91 (13) & 98.17 (20) \\\\ \\hline
2 & 98.09 (20) & 93.25 (20) & - & 95.75 (16) & 96.55 (20) & 99.82 (8) & 99.50 (6) & 87.40 (17) & 99.09 (20) & 95.83 (20) \\\\ \\hline
3 & 79.45 (20) & 97.83 (19) & 95.75 (16) & - & 90.09 (20) & 96.18 (18) & 92.25 (18) & 91.70 (17) & 72.55 (20) & 85.83 (19) \\\\ \\hline
4 & 75.90 (20) & 96.82 (18) & 96.55 (20) & 90.09 (20) & - & 97.90 (19) & 96.55 (15) & 90.78 (18) & 92.20 (20) & 91.64 (19) \\\\ \\hline
5 & 94.30 (16) & 100.00 (6) & 99.82 (8) & 96.18 (18) & 97.90 (19) & - & 95.18 (19) & 100.00 (4) & 97.10 (16) & 98.73 (11) \\\\ \\hline
6 & 90.18 (20) & 99.75 (17) & 99.50 (6) & 92.25 (18) & 96.55 (15) & 95.18 (19) & - & 98.70 (14) & 95.18 (17) & 98.42 (19) \\\\ \\hline
7 & 92.56 (16) & 92.30 (10) & 87.40 (17) & 91.70 (17) & 90.78 (18) & 100.00 (4) & 98.70 (14) & - & 93.22 (18) & - & 91.70 (20) \\\\ \\hline
8 & 80.30 (19) & 98.91 (13) & 99.09 (20) & 72.55 (20) & 92.20 (20) & 97.10 (16) & 95.18 (17) & 93.22 (18) & - & 89.18 (19) \\\\ \\hline
9 & 89.45 (20) & 98.17 (20) & 95.83 (20) & 85.83 (19) & 91.64 (19) & 98.73 (11) & 98.42 (19) & 91.70 (20) & 89.18 (19) & - \\\\ \\hline \\end{tabular} TABLE VII: HQWN4EOv3 - PennyLane - Avg Test Accuracy 93.15 and best model saved at epoch 15.46
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 99.00 (20) & 97.45 (11) & 79.27 (17) & 76.20 (18) & 94.60 (17) & 89.73 (18) & 91.22 (10) & 79.70 (17) & 89.45 (18) \\\\ \\hline
1 & 99.00 (20) & - & 92.17 (16) & 98.08 (19) & 96.09 (16) & 100.00 (2) & 99.83 (19) & 92.10 (19) & 98.55 (18) & 98.17 (15) \\\\ \\hline
2 & 97.45 (11) & 92.17 (16) & - & 95.50 (19) & 93.09 (19) & 99.91 (8) & 99.91 (8) & 99.42 (7) & 86.30 (20) & 99.09 (18) & 96.00 (20) \\\\ \\hline
3 & 79.27 (17) & 98.08 (19) & 95.50 (19) & - & 89.91 (20) & 96.09 (17) & 92.42 (20) & 92.00 (19) & 71.55 (18) & 83.67 (18) \\\\ \\hline
4 & 76.20 (18) & 96.09 (16) & 93.09 (19) & 89.91 (20) & - & 98.10 (17) & 95.45 (17) & 91.67 (20) & 91.80 (20) & 91.00 (17) \\\\ \\hline
5 & 94.60 (17) & 100.00 (2) & 99.91 (8) & 96.09 (17) & 98.10 (17) & - & 95.09 (16) & 99.89 (4) & 97.30 (17) & 99.45 (18) \\\\ \\hline
6 & 89.73 (18) & 99.83 (19) & 99.42 (7) & 92.42 (20) & 95.45 (17) & 95.09 (16) & - & 99.20 (20) & 96.45 (17) & 98.83 (11) \\\\ \\hline
7 & 91.22 (10) & 92.10 (19) & 86.30 (20) & 92.00 (19) & 91.67 (20) & 99.89 (4) & 99.20 (20) & - & 89.67 (18) & 92.30 (19) \\\\ \\hline
8 & 79.70 (17) & 98.55 (18) & 99.09 (18) & 71.55 (18) & 91.80 (20) & 97.30 (17) & 96.45 (17) & 89.67 (18) & - & 89.00 (17) \\\\ \\hline
9 & 89.45 (18) & 98.17 (15) & 96.00 (20) & 83.67 (18) & 91.00 (17) & 99.45 (18) & 98.83 (11) & 92.30 (19) & 89.00 (17) & - \\\\ \\hline \\end{tabular} TABLE IX: HQVit - Qiskit - Avg Test Accuracy 87.95 and best model saved at epoch 16.25
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 99.00 (20) & 97.45 (11) & 79.27 (17) & 76.20 (18) & 94.60 (17) & 89.73 (18) & 91.22 (10) & 79.70 (17) & 89.45 (18) \\\\ \\hline
1 & 99.00 (20) & - & 92.17 (16) & 98.08 (19) & 96.09 (16) & 100.00 (2) & 99.83 (19) & 92.10 (19) & 98.55 (18) & 98.17 (15) \\\\ \\hline
2 & 97.45 (11) & 92.17 (16) & - & 95.50 (19) & 93.09 (19) & 99.91 (8) & 99.92 (7) & 86.30 (20) & 99.09 (18) & 96.00 (20) \\\\ \\hline
3 & 79.27 (17) & 98.08 (19) & 95.50 (19) & - & 89.91 (20) & 96.09 (17) & 92.42 (20) & 92.00 (19) & 71.55 (18) & 83.67 (18) \\\\ \\hline
4 & 76.20 (18) & 96.09 (16) & 93.09 (19) & 89.91 (20) & - & 98.10 (17) & 95.45 (17) & 91.67 (20) & 91.80 (20) & 91.00 (17) \\\\ \\hline
5 & 94.60 (17) & 100.00 (2) & 99.91 (8) & 96
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 98.49 & 98.06 & 80.74 & 71.37 & 94.33 & 88.81 & 90.02 & 77.46 & 84.90 \\\\ & (99.27, 97.18) & (99.36, 96.72) & (83.82, 77.82) & (75.2, 69.1) & (95.9, 93.2) & (96.18, 84.91) & (93.44, 83.22) & (79.7, 72.0) & (88.27, 79.64) \\\\ \\hline
1 & 98.49 & - & 84.62 & 97.38 & 94.71 & 99.98 & 99.47 & 95.22 & 98.87 & 98.94 \\\\
99.27, 97.18) & - & (93.75, 74.08) & - & (98.26, 96.58) & (96.73, 92.36) & (100.00, 99.82) & (99.92, 99.08) & (99.42, 91.9) & (99.64, 97.45) & (99.58, 97.5) \\\\ \\hline
2 & 98.06 & 84.62 & 98.26 & 98.26 & 98.26 & 98.27 & 99.10 & 99.71 & 99.76 \\\\
3 & 99.36 & - & 99.35 & 95.39 & 99.55 & 99.75 & 91.08 & 99.71 & 99.76 \\\\
3 & 99.35 & 99.27 & 93.75 & 99.48 & 99.12 & 99.10 & 99.80 & 99.92, 99.33 & (95.1, 88.6) & (98.73, 95.0) & (95.67, 93.58) \\\\ \\hline
3 & 80.74 & 97.38 & 93.95 & - & 88.20 & 96.29 & 91.36 & 89.27 & 70.89 & 83.89 \\\\
3 & 98.27 & 98.25 & 96.58 & 96.08 & 91.25 & - & (90.18, 84.27) & (97.09, 95.64) & (94.75, 88.92) & (92.1, 82.8) & (93.45, 86.18) \\\\ \\hline
4 & 71.37 & 97.41 & 95.39 & 88.20 & 97.03 & 95.16 & 85.95 & 96.09 & 88.27 \\\\
5 & 97.11 & 96.73 & 92.36 & 97.55 & 99.45 & (90.18, 84.27) & - & (98.2, 96.1) & (97.09, 92.64) & (92.44, 71.67) & (92.0, 89.0) & (91.36, 84.55) \\\\ \\hline
5 & 99.43 & 99.98 & 99.65 & 96.29 & 97.03 & 93.65 & 99.60 & 96.87 & 99.88 \\\\
3 & 99.52 & (1000, 99.82) & (1000, 98.5) & (97.90, 99.56) & (98.2, 96.1) & - & (95.91, 84.27) & (99.80, 99.88) & (97.05, 95.99) & (93.96, 96.73) \\\\ \\hline
6 & 88.81 & 99.47 & 99.75 & 91.36 & 95.16 & 93.65 & - & 97.54 & 94.39 & 98.74 \\\\
6 & 99.68 & 99.92 & 99.93 & (94.75, 88.75) & - & (97.09, 93.18) & (97.90, 99.18) & (99.22, 96.3) & (99.33, 95.27) & (99.12, 97.67) \\\\ \\hline
7 & 99.02 & 92.52 & 91.08 & 89.27 & 85.95 & 99.60 & 97.54 & 89.61 & 91.88 \\\\
93.48, 83.22 & (94.2, 91.5) & (95.1, 88.86) & (92.1, 82.9) & (92.44, 71.67) & (99.89, 97.88) & (99.3, 93.5) & - & (92.44, 87.33) & (93.0, 90.9) \\\\ \\hline
8 & 77.46 & 98.87 & 97.11 & 70.89 & 90.69 & 96.87 & 94.39 & 89.61 & 82.41 \\\\
(97.72, 7.20) & (96.64, 97.45) & (98.73, 95.0) & (74.56, 66.12) & (92.80, 89.0) & (97.5, 95.5) & (92.71, 27) & (92.44, 87.33) & - & (88.36, 76.82) \\\\ \\hline
9 & 84.90 & 98.84 & 94.76 & 83.89 & 88.27 & 98.83 & 98.74 & 91.88 & 82.41 \\\\
(98.27, 79.64) & (98.58, 97.93) & (95.67, 93.58) & (91.0, 80.33) & (91.0, 84.55) & (99.36, 76.33) & (99.67, 96.76) & (93.0, 90.9) & (88.36, 76.82) & - \\\\ \\hline \\end{tabular}
\\end{table} TABLE XII: HN4EOV1 - MEAN (MAX, MIN) values with 10 seeds
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline
0 & - & 98.11 & 98.40 & 81.45 & 74.45 & 94.50 & 88.71 & 91.74 & 78.64 & 87.72 \\\\
0 & - & (98.73, 97.27) & (99.45, 97.09) & (84.36, 77.28) & (74.7, 71.3) & (55.4, 93.6) & (92.45, 83.18) & (93.78, 89.56) & (81.65, 75.6) & (81.75, 86.7) \\\\ \\hline
1 & 98.11 & - & 90.22 & 97.54 & 94.51 & 99.93 & 99.34 & 92.32 & 98.82 & 98.32 \\\\
0 & (98.73, 97.27) & - & (94.83, 83.5) & (98.08, 97.0) & (96.36, 91.73) & (100.00, 99.73) & (99.83, 98.5) & (93.41, 90) & (99.73, 97.91) & (99.09, 97.58) \\\\ \\hline
2 & 98.40 & 90.22 & 99.22 & 95.46 & 96.18 & 99.89 & 99.67 & 90.44 & 97.92 & 95.39 \\\\
0 & (99.45, 97.09) & (94.0, 83.5) & - & (97.33, 93.67) & (97.91, 88.36) & (100.00, 99.73) & (100.00, 99.73) & (100.00, 98.42) & (95.6, 78.9) & (99.45, 92.09) & (96.33, 94.67) \\\\ \\hline
3 & 81.45 & 97.54 & 95.46 & - & 80.09 & 96.32 & 93.01 & 91.51 & 71.73 & 85.14 \\\\
4 & (94.36, 77.82) & (98.08, 97.07) & (97.33, 93.67) & (90.64, 86.73) & - & (90.64, 86.73) & (97.55, 91.69) & (93.47, 87.3) & (76.82, 67.0) & (89.75, 77.75) \\\\ \\hline
4 & 74.45 & 94.51 & 96.18 & 89.09 & 98.06 & 94.44 & 89.22 & 90.68 & 90.62 \\\\
4 & (77.4, 71.3) & (96.36, 91.73) & (97.81, 88.36) & (90.64, 86.73) & - & (98.7, 97.1) & (97.64, 92.27) & (92.33, 87.44) & (91.88, 84.0) & (92.64, 87.64) \\\\ \\hline
5 & 94.50 & 99.93 & 99.89 & 96.32 & 98.06 & 99.07 & 99.62 & 97.01 & 99.09 \\\\
0 & (95.4, 93.6) & (100.00, 97.93) & (97.55, 94.64) & (98.71, 97.1) & - & (94.73, 91.0) & (99.89, 99.33) & (97.96, 96.0) & (99.73, 97.82) \\\\ \\hline
6 & 88.71 & 99.34 & 99.67 & 93.01 & 94.44 & 93.07 & 98.99 & 95.75 & 98.84 \\\\
0 & (92.45, 83.18) & (99.83, 98.98) & (100.00, 98.42) & (94.75, 91.92) & (97.64, 92.27) & (94.73, 91.0) & - & (99.66, 96.7) & (99.14, 97.3) \\\\ \\hline
7 & 91.74 & 92.32 & 90.44 & 91.51 & 89.52 & 96.2 & 98.59 & - & 99.46 & 91.68 \\\\
0 & (93.88, 95.34) & (93.10, 95.89) & (93.4, 87.09) & (93.4, 87.3) & (93.23, 87.44) & (93.99, 93.3) & (96.9, 96.5) & (92.89, 88.56) & (92.6, 90.8) \\\\ \\hline
8 & 78.64 & 98.82 & 97.92 & 71.73 & 90.68 & 97.01 & 95.75 & 94.66 & - & 85.61 \\\\
8 & (81.6, 75.6) & (99.73, 97.191) & (99.54, 92.09) & (76.82, 67.0) & (91.88, 88.4) & (96.91, 94.73) & (92.89, 88.56) & (92.88, 88.56) & (92.89, 88.56) \\\\ \\hline
9 & 87.72 & 98.23 & 95.39 & 85.14 & 90.62 & 99.09 & 98.84 & 91.68 & 85.61 \\\\
9 & (90.27, 84.18) & (99.0, 97.58) & (96.33, 94.67) & (99.75, 77.75) & (92.64, 87.64) & (99.73, 97.82) & (99.42, 97.17) & (92.6, 90.8) & (99.73, 82.09) \\\\ \\hline \\end{tabular}
\\end{table} TABLE XVI HQNN4EOV3 - mean (max, min) values with 10 seeds
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline - & 9.21 & 13.87 & 6.31 & 1.35 & 1.41 & 1.98 & 1.64 & 1.90 & 3.95 \\\\ \\hline
9.21 & - & 7.09 & 0.39 & 2.92 & 0.02 & 1.08 & 0.63 & 0.29 & 0.31 \\\\ \\hline
13.87 & 7.09 & - & 1.27 & 24.59 & 0.03 & 1.05 & 8.49 & 9.55 & 0.67 \\\\ \\hline
6.31 & 0.39 & 1.27 & - & 0.95 & 0.48 & 2.58 & 1.19 & 6.26 & 11.12 \\\\ \\hline
1.35 & 2.92 & 24.59 & 0.95 & - & 0.82 & 0.79 & 5.40 & 0.52 & 1.22 \\\\ \\hline
1.41 & 0.02 & 0.03 & 0.48 & 0.82 & - & 1.84 & 0.18 & 0.41 & 0.14 \\\\ \\hline
1.98 & 1.08 & 1.05 & 2.58 & 0.79 & 1.84 & - & 1.42 & 1.01 & 0.27 \\\\ \\hline
1.64 & 0.63 & 8.49 & 1.19 & 5.40 & 0.18 & 1.42 & - & 1.55 & 0.26 \\\\ \\hline
1.90 & 0.29 & 9.55 & 6.26 & 0.52 & 0.41 & 1.01 & 1.55 & - & 7.78 \\\\ \\hline
3.95 & 0.31 & 0.67 & 11.12 & 1.22 & 0.14 & 0.27 & 0.26 & 7.78 & - \\\\ \\hline \\end{tabular}
TABLE XXi
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline - & 0.33 & 0.68 & 3.68 & 3.85 & 0.48 & 12.03 & 7.70 & 4.96 & 8.16 \\\\ \\hline
0.33 & - & 35.97 & 0.31 & 0.98 & 0.00 & 0.05 & 0.49 & 0.12 & 0.15 \\\\ \\hline
0.68 & 35.97 & - & 1.95 & 3.65 & 0.31 & 0.05 & 4.94 & 1.10 & 0.39 \\\\ \\hline
3.68 & 0.31 & 1.95 & - & 4.75 & 0.22 & 1.70 & 7.24 & 8.80 & 11.79 \\\\ \\hline
3.85 & 0.98 & 3.65 & 4.75 & - & 0.23 & 2.06 & 30.40 & 0.65 & 4.24 \\\\ \\hline
0.48 & 0.00 & 0.51 & 0.22 & 0.23 & - & 2.88 & 0.10 & 0.36 & 0.67 \\\\ \\hline
12.03 & 0.05 & 0.05 & 1.70 & 2.06 & 2.88 & - & 3.26 & 1.22 & 0.68 \\\\ \\hline
7.70 & 0.49 & 4.94 & 7.24 & 30.40 & 0.10 & 3.26 & - & 2.41 & 0.36 \\\\ \\hline
4.96 & 0.12 & 1.10 & 8.80 & 0.65 & 0.36 & 1.22 & 2.41 & - & 15.07 \\\\ \\hline
8.16 & 0.15 & 0.39 & 11.79 & 4.24 & 0.67 & 0.68 & 0.36 & 15.07 & - \\\\ \\hline \\end{tabular}
TABLE XXi
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline - & 0.22 & 0.57 & 2.85 & 2.82 & 0.28 & 6.64 & 1.41 & 2.38 & 3.14 \\\\ \\hline
0.22 & - & 10.52 & 0.13 & 1.38 & 0.01 & 0.14 & 0.58 & 0.22 & 0.19 \\\\ \\hline
0.57 & 10.52 & - & 1.43 & 6.97 & 0.01 & 0.18 & 19.13 & 4.48 & 0.19 \\\\ \\hline
2.85 & 0.13 & 1.43 & - & 1.62 & 0.47 & 0.64 & 4.00 & 12.89 & 15.87 \\\\ \\hline
2.82 & 1.38 & 6.97 & 1.62 & - & 0.31 & 2.06 & 1.42 & 2.52 \\\\ \\hline
0.28 & 0.01 & 0.01 & 0.47 & 0.31 & - & 1.75 & 0.03 & 0.25 & 0.33 \\\\ \\hline
6.64 & 0.14 & 0.18 & 0.64 & 2.06 & 1.75 & - & 0.31 & 0.87 & 0.41 \\\\ \\hline
1.41 & 0.58 & 19.13 & 4.00 & 1.42 & 0.03 & 0.31 & - & 1.66 & 0.29 \\\\ \\hline
2.38 & 0.22 & 4.48 & 12.89 & 1.24 & 0.25 & 0.87 & 1.66 & - & 7.81 \\\\ \\hline
3.14 & 0.19 & 0.19 & 15.87 & 2.52 & 0.33 & 0.41 & 0.29 & 7.81 & - \\\\ \\hline \\end{tabular}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline - & 0.13 & 2.54 & 2.97 & 2.86 & 0.58 & 7.98 & 1.35 & 2.91 & 6.83 \\\\ \\hline
0.13 & - & 4.05 & 0.14 & 1.39 & 0.00 & 0.07 & 0.68 & 0.32 & 0.21 \\\\ \\hline
2.54 & 4.05 & - & 1.22 & 0.46 & 0.00 & 0.17 & 16.69 & 2.26 & 0.69 \\\\ \\hline
2.97 & 0.14 & 1.22 & - & 0.95 & 0.43 & 1.45 & 1.59 & 10.05 & 7.06 \\\\ \\hline
2.86 & 1.39 & 0.46 & 0.95 & - & 0.42 & 2.68 & 4.86 & 1.16 & 2.82 \\\\ \\hline
0.58 & 0.00 & 0.00 & 0.43 & 0.42 & - & 2.46 & 0.03 & 0.08 & 0.10 \\\\ \\hline
7.98 & 0.07 & 0.17 & 1.45 & 2.68 & 2.46 & - & 0.13 & 1.24 & 0.49 \\\\ \\hline
1.35 & 0.68 & 16.69 & 1.59 & 4.86 & 0.03 & 0.13 & - & 1.54 & 0.35 \\\\ \\hline
2.91 & 0.32 & 2.26 & 10.05 & 1.16 & 0.08 & 1.24 & 1.54 & - & 13.99 \\\\ \\hline
6.83 & 0.21 & 0.69 & 7.06 & 2.82 & 0.10 & 0.49 & 0.35 & 13.99 & - \\\\ \\hline \\end{tabular}
TABLE XIV: ViT - Variance of Test Accuracy across 10 seed
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\ \\hline - & 0.49 & 10.95 & 6.44 & 1.70 & 3.24 & 3.10 & 10.08 & 2.15 & 3.27 \\\\ \\hline
0.49 & - & 8.03 & 0.08 & 0.63 & 0.11 & 0.44 & 2.19 & 0.12 & 0.25 \\\\ \\hline
10.95 & 8.03 & - & 0.56 & 3.21 & 0.67 & 1.08 & 5.15 & 0.58 & 1.23 \\\\ \\hline
6.44 & 0.08 & 0.56 & - & 1.62 & 6.64 & 16.37 & 1.58 & 2.66 & 15.73 \\\\ \\hline
1.70 & 0.63 & 3.21 & 1.62 & - & 0.67 & 0.58 & 3.64 & 0.51 & 1.01 \\\\ \\hline
3.24 & 0.11 & 0.67 & 6.64 & 0.67 & - & 1.36 & 1.85 & 1.50 & 10.26 \\\\ \\hline
3.10 & 0.44 & 1.08 & 16.37 & 0.58 & 11.36 & - & 0.79 & 7.04 & 2.85 \\\\ \\hline
10.08 & 2.19 & 5.15 & 1.58 & 3.64 & 1.85 & 0.79 & - & 2.63 & 0.27 \\\\ \\hline
2.15 & 0.12 & 0.58 & 2.66 & 0.51 & 1.50 & 7.04 & 2.63 & - & 0.97 \\\\ \\hline
3.27 & 0.25 & 1.23 & 15.73 & 1.01 & 10.26 & 2.85 & 0.27 & 0.97 & - \\\\ \\hline \\end{tabular} | Quantum computing has introduced novel perspectives for tackling and improving machine learning tasks. Moreover, the integration of quantum technologies together with well-known deep learning (DL) architectures has emerged as a potential research trend gaining attraction across various domains, such as Earth Observation (EO) and many other research fields. However, prior related works in EO literature have mainly focused on convolutional architectural advancements, leaving several essential topics unexplored. Consequently, this research investigates through three cases of study fundamental aspects of hybrid quantum machine models for EO tasks aiming to provide a solid groundwork for future research studies towards more adequate simulations and looking at the post-NISO era. More in detail, we firstly (1) investigate how different quantum libraries behave when training hybrid quantum models, assessing their computational efficiency and effectiveness. Secondly, (2) we analyze the stability/sensitivity to initialization values (i.e., seed values) in both traditional model and quantum-enhanced counterparts. Finally, (3) we explore the benefits of hybrid quantum attention-based models in EO applications, examining how integrating quantum circuits into VITs can improve model performance.
Quantum Computing, Quantum Machine Learning, Earth Observation, Remote Sensing | Write a summary of the passage below. | 232 |
for Classifying Satellite Images Time Series
Vivien Sainte Fare Garnot
LASTIG, ENSG, IGN, Univ Gustave Eiffel,
F-94160 Saint-Mande, France
[https://www.umr-lastig.fr/](https://www.umr-lastig.fr/)
Loic Landrieu
LASTIG, ENSG, IGN, Univ Gustave Eiffel,
F-94160 Saint-Mande, France
[https://www.umr-lastig.fr/](https://www.umr-lastig.fr/)
## 1 Introduction
Time series of remote sensing data, such as satellites images taken at regular intervals, provide a wealth of useful information for Earth monitoring. However, they are also typically very large, and their analysis is resource-intensive. For example, the Sentinel satellites gather over 25 Tb of data every year in the EU. While exploiting the spatial structure of the data poses a challenge on its own, we focus in this paper on the efficient extraction of discriminative temporal features from sequences of spatial descriptors.
Among the many possible approaches to handling time-series of remote sensing data, one can concatenate observations in the temporal dimension [7], use temporal statistics [8], histograms [1], time-kernels [12], or shapelets [16]. Probabilistic graphical models such as Conditional Random Fields can also be used to exploit the temporal structure of the data [2].
Deep learning-based methods are particularly well-suited for dealing with the large amount of data collected by satellite sensors. Neural networks can either model the temporal dimension independently of the spatial dimensions with recurrent neural networks [4] or one-dimensional convolutions [9], or jointly with convolutional recurrent networks [10] or 3D convolutions [6].
More recently, the self-attention mechanism introduced by Vaswani _et al._[13], initially developed for Natural Language Processing (NLP), has been successfully used and adapted to remote sensing tasks [11, 5]. In Section 2.1, we present these approaches and their differences in greater details.
In this paper, we introduce the Lightweight Temporal Attention Encoder (L-TAE), a novel attention-based network focusing on memory and computational efficiency. Our approach is based on the Temporal Attention Encoder (TAE) of Garnot _et al._[5], with several modifications meant to avoid redundant computations and parameters, while retaining a high degree of expressiveness and adaptability. We evaluate the performance of our approach on the open-access dataset Sentinel2-Agri [5]. With an equal parameter count, our algorithm outperforms all state-of-the-art competing methods in terms of precision and computational efficiency. Our method allows for efficient parameters usage, as our L-TAE outperforms TAEs with close to 10 times the parameter count, as well as recurrent units over 300 times larger.
## 2 Method
Throughout this section, we consider a generic input time series of length \\(T\\) comprised of \\(E\\)-dimensional feature vectors \\(\\mathbf{e}=[e^{(1)},\\cdots,e^{(T)}]\\in\\mathbb{R}^{E\\times T}\\). For example, such vectors can be a sequence of learned embeddings of super-spectral satellite images.
### 2.1 Multi-Headed Self-Attention
In its original iteration [13], self-attention--initially designed for text translation--consists of the following steps:
(i) compute a triplet of key-query-value \\(k^{(t)},q^{(t)},v^{(t)}\\) for each position \\(t\\) of the input sequence with a shared linear layer applied to \\(e^{(t)}\\),
(ii) compute attention masks representing the compatibility (dot-product) between the queries at each position and the keys corresponding to previous elements in the sequence,
(iii) associate to each position of the sequence an output defined as the sum of the previous values weighted by the corresponding attention mask.
This process is done in parallel for \\(H\\) different sets of independent parameters--or heads--whose outputs are then concatenated. This scheme allows each head to specialize in detecting certain characteristics of the feature vectors.
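For concreteness, the following PyTorch sketch illustrates steps (i)-(iii) in their simplest form. It is only an illustrative rendition and not the reference implementation of [13]: attending over the full sequence rather than only previous elements, and the single output projection, are simplifying assumptions.

```python
# Minimal multi-headed self-attention sketch of steps (i)-(iii).
# e: (T, E) sequence of feature vectors; full (non-causal) attention for brevity.
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_in, d_k, n_head):
        super().__init__()
        self.n_head, self.d_k = n_head, d_k
        # (i) shared linear layer producing a key-query-value triplet per head
        self.to_qkv = nn.Linear(d_in, 3 * n_head * d_k)
        self.out = nn.Linear(n_head * d_k, d_in)

    def forward(self, e):                              # e: (T, E)
        T = e.shape[0]
        qkv = self.to_qkv(e).view(T, 3, self.n_head, self.d_k)
        q, k, v = qkv.unbind(dim=1)                    # each of shape (T, H, K)
        # (ii) attention masks: scaled dot-product between queries and keys
        att = torch.einsum('thk,shk->hts', q, k) / self.d_k ** 0.5
        att = att.softmax(dim=-1)                      # (H, T, T)
        # (iii) outputs: values weighted by the masks, heads concatenated
        o = torch.einsum('hts,shk->thk', att, v).reshape(T, -1)
        return self.out(o)                             # (T, E)
```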
Russwurm _et al._[11] propose to apply this architecture to embed sequences of satellite observations by max-pooling the resulting sequence of outputs in the temporal dimension. Garnot _et al._[5] introduce the TAE, a modified self-attention scheme. First, they propose to directly use the input embeddings as values (\\(v^{(t)}=e^{(t)}\\)), taking advantage of the end-to-end training of the image embedding functions alongside the TAE. Additionally, they define a single master query \\(\\hat{q}\\) for each sequence, computed from the temporal average of the queries. This master query is compared to the sequence of keys to produce a single attention mask of dimension \\(T\\) used to weight the temporal mean of values into a single feature vector.
### 2.2 Lightweight Attention
We build on this effort to adapt multi-headed self-attention to the task of sequence embedding. Our focus is on efficiency, both in terms of parameter count and computational load.
_Channel Grouping:_ we propose to split the \\(E\\) channels of the input elements into \\(H\\) groups of size \\(E^{\\prime}=E/H\\), with \\(H\\) being the number of heads\\({}^{1}\\), in the manner of Wu _et al._[14]. We denote by \\(e_{h}^{(t)}\\) the \\(h\\)-th group of input channels of the \\(t\\)-th element of the input sequence (1).
Footnote 1: \\(E\\) and \\(H\\) are typically powers of \\(2\\) and \\(E>H\\), ensuring that \\(E^{\\prime}\\) remains an integer.
We encode the number of days elapsed since the beginning of the sequence into an \\(E^{\\prime}\\)-dimensional positional vector \\(p\\) of characteristic scale \\(\\tau=1000\\) (2). In order for each head to access this information, \\(p\\) is duplicated and added to each channel group. Each head operates in parallel on its corresponding group of channels, thus accelerating the costly computation of keys and queries. This also allows for each head to specialize alongside its channel group, and avoid redundant operations between heads.
Figure 1: The proposed L-TAE module processing an input sequence \\(\\mathbf{e}\\) of \\(T\\) vectors of size \\(E\\), with \\(H=3\\) heads and keys of size \\(K\\). The channels of the input embeddings are distributed among heads. Each head uses a learnt query \\(\\hat{q}_{h}\\), while a linear layer \\(\\mathrm{FC}_{h}\\) maps inputs to keys. The outputs of all heads are concatenated into a vector with the same size as the input embeddings, regardless of the number of heads.
_Query-as-Parameter:_ We define the \\(K\\)-dimensional master query \\(q_{h}\\) of each head \\(h\\) as a model parameter instead of the results of a linear layer. The immediate benefit is a further reduction of the number of parameters, while the lack of flexibility is compensated by the larger number of available heads.
_Attention Masks:_ As a result, only the keys are obtained with a learned linear layer (3), while values are bypassed (\\(v^{(t)}=e^{(t)}\\)), and the queries are model parameters. The attention masks \\(a_{h}\\in\\left[0,1\\right]^{T}\\) of each head \\(h\\) are defined as the scaled _softmax_ of the dot-product between the keys and the master query (4). The outputs \\(o_{h}\\) of each heads are defined as the sum in the temporal dimension of the corresponding inputs weighted by the attention mask \\(a_{h}\\) (5). Finally, the heads outputs are concatenated into a vector of size \\(E\\) and processed by a multi-layer perceptron MLP to the desired size (6). In Figure 1, we represent a schematic representation of our network. The different steps of the L-TAE can also be condensed by the following operations, for \\(h=1\\cdots H\\) and \\(t=1\\cdots T\\):
\\[e_{h}^{(t)}=\\left[e^{(t)}\\left[(h-1)E^{\\prime}+i\\right]\\right]_{i=1}^{E^{\\prime}}\\tag{1}\\]
\\[p^{(t)}=\\left[\\sin\\left(\\mathrm{day}(t)/\\tau^{\\frac{i}{E^{\\prime}}}\\right)\\right]_{i=1}^{E^{\\prime}}\\tag{2}\\]
\\[k_{h}^{(t)}=\\mathrm{FC}_{h}(e_{h}^{(t)}+p^{(t)})\\tag{3}\\]
\\[a_{h}=\\mathrm{softmax}\\left(\\frac{1}{\\sqrt{K}}\\left[q_{h}\\cdot k_{h}^{(t)}\\right]_{t=1}^{T}\\right)\\tag{4}\\]
\\[o_{h}=\\sum_{t=1}^{T}a_{h}[t]\\left(e_{h}^{(t)}+p^{(t)}\\right)\\tag{5}\\]
\\[o=\\mathrm{MLP}([o_{1},\\cdots,o_{H}]).\\tag{6}\\]
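The following non-batched PyTorch sketch condenses Eqs. (1)-(6). The underlined default sizes of Table 2 are used (\\(E=256\\), \\(H=16\\), \\(K=8\\)), while the MLP widths, the random initialization of the master queries, and the absence of normalization layers are simplifying assumptions; the sketch is not a substitute for the released implementation (Section 4).

```python
# Simplified, non-batched L-TAE sketch following Eqs. (1)-(6).
# e: (T, E) input embeddings; days: (T,) float tensor of days since sequence start.
import torch
import torch.nn as nn

class LTAE(nn.Module):
    def __init__(self, d_in=256, n_head=16, d_k=8, mlp=(256, 128), tau=1000.0):
        super().__init__()
        assert d_in % n_head == 0
        self.h, self.d_g, self.d_k, self.tau = n_head, d_in // n_head, d_k, tau
        # Eq. (3): one linear layer per head, mapping a channel group to a key
        self.fc_k = nn.ModuleList([nn.Linear(self.d_g, d_k) for _ in range(n_head)])
        # Query-as-parameter: one learnt K-dimensional master query per head
        self.q = nn.Parameter(torch.randn(n_head, d_k))
        layers, dims = [], (d_in,) + tuple(mlp)
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        self.mlp = nn.Sequential(*layers)                            # Eq. (6)

    def forward(self, e, days):
        T = e.shape[0]
        i = torch.arange(self.d_g, dtype=torch.float)
        p = torch.sin(days[:, None] / self.tau ** (i / self.d_g))    # Eq. (2), (T, E')
        e = e.view(T, self.h, self.d_g) + p[:, None, :]              # Eq. (1) + position
        heads = []
        for h in range(self.h):
            k = self.fc_k[h](e[:, h])                                # Eq. (3), (T, K)
            a = torch.softmax(k @ self.q[h] / self.d_k ** 0.5, dim=0)  # Eq. (4), (T,)
            heads.append((a[:, None] * e[:, h]).sum(dim=0))          # Eq. (5), (E',)
        return self.mlp(torch.cat(heads))                            # Eq. (6), output o
```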
### 2.3 Spatio-temporal classifier
Our proposed L-TAE temporal encoder is meant to be learned alongside a spatial encoding module and a decoder module in an end-to-end fashion (7). The spatial encoder \\(S\\) maps a sequence of raw inputs \\(X^{(t)}\\) to a sequence of learned features \\(e^{(t)}\\), computed independently at each position of the sequence. The decoder \\(D\\) maps the output \\(o\\) of the L-TAE to a target vector \\(y\\), such as class logits in the case of a classification task.
\\[\\left[X^{(t)}\\right]_{t=1}^{T}\\xrightarrow[]{S}\\left[e^{(t)}\\right]_{t=1}^{T }\\xrightarrow[]{\\text{L-TAE}}\\;\\;o\\;\\xrightarrow[]{D}y. \\tag{7}\\]
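A possible composition of the three modules is sketched below; the spatial encoder and decoder passed to it are hypothetical placeholders standing in for the PSE and the MLP decoder used in our experiments.

```python
# Sketch of the pipeline of Eq. (7): S applied date by date, then L-TAE, then D.
# `spatial_encoder` and `decoder` are hypothetical stand-ins (e.g., a PSE and an MLP).
import torch
import torch.nn as nn

class SpatioTemporalClassifier(nn.Module):
    def __init__(self, spatial_encoder, temporal_encoder, decoder):
        super().__init__()
        self.s, self.t, self.d = spatial_encoder, temporal_encoder, decoder

    def forward(self, x, days):                          # x: (T, ...) raw observations
        e = torch.stack([self.s(x_t) for x_t in x])      # [e^(t)]: (T, E)
        return self.d(self.t(e, days))                   # class logits y
```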
## 3 Numerical Experiment
### 3.1 Dataset
We evaluate our proposed method with the public dataset _Sentinel2-Agri_[5], comprised of 191 703 sequences of 24 superspectral images of agricultural parcels from January to October. The acquisitions have a spatial resolution of 10m per pixel and 10 spectral bands. Each parcel is annotated within a 20 class nomenclature of agricultural crops.
### 3.2 Metric and Protocol
We use two classification metrics to assess the performance of predictions: the Overall Accuracy (OA) and the mean Intersection-over-Union (mIoU). The former accounts for the precision of the prediction regardless of the class distribution, while the latter computes the IoU for each class and averages the results over the class set. Given that the dataset is unbalanced (4 classes represent 90% of the samples) the mIoU gives a more faithful assessment of the performance.
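Both metrics can be computed from a confusion matrix; the short reference implementation below follows their standard definitions (rows index the ground truth, columns the predictions).

```python
# Overall Accuracy and mean Intersection-over-Union from a confusion matrix.
import numpy as np

def overall_accuracy(cm):
    return np.diag(cm).sum() / cm.sum()

def mean_iou(cm):
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp   # TP + FP + FN per class
    return np.mean(tp / np.maximum(union, 1e-12))
```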
We propose two evaluation protocols to assess the efficiency of our proposed light-weight temporal attention encoder:
* We assess the performance of our method and several state-of-the-art parcel classification algorithms on the dataset Sentinel2-Agri. In order to perform a fair comparison, we chose configurations corresponding to around 150k parameters for all methods. We report the results in Table 1 alongside the theoretical number of floating point operations (in FLOPs) required for the sequence embedding modules to process a single sequence at inference time.
* We complement this first experiment by comparing the performance of different configurations of sequence embedding algorithms, and plot the performance with respect to the number of parameters. In order to remove the effects of the different spatial encoders, we use the same spatial encoder (a pixel set encoder [5]) in all experiments. We only adapt the last linear layer of the spatial encoder to produce embeddings of the desired dimensions.
### 3.3 Evaluated Methods
We evaluate the performance of recent algorithms operating on satellite image time series in order to assess the relative improvement offered by our proposed method.
* **PSE+TAE** The approach proposed by Garnot _et al._[5]. They use a Pixel-Set Encoder (PSE) module to encode each image independently, and process the resulting sequence of embeddings with a TAE module. The decoder \\(D\\) is a 2-layer MLP.
* **PSE+L-TAE** Our proposed method. We keep the same architecture as the PSE+TAE, and replace the TAE by our L-TAE network.
* **CNN+GRU** A similar approach [4] to PSE+TAE, with a CNN instead of the PSE and a Gated Recurrent Unit [3] instead of the TAE.
* **CNN+TempCNN** Another variation of this architecture, with a two-dimensional CNN to encode the images and a one-dimensional CNN processing the temporal dimension independently. This architecture is based on the work of Pelletier _et al._[9].
* **Transformer** Russwurm _et al._ were the first to introduce self-attention methods to the classification of remote sensing images. In their work [11], the image statistics are simply averaged over the parcels' pixels, while the resulting sequence is processed by a Transformer network [13]. The output sequence of embeddings is max-pooled along the temporal dimension to produce a single embedding for the input sequence.
* **ConvLSTM** Russwurm _et al._[10] combine the embedding of the spatial and temporal dimensions by using a ConvLSTM network [15]. This work has been adapted to process parcels instead of pixels [5].
* **Random Forest** We use the temporal concatenation scheme of Bailly _et al._ to train a random forest of 100 trees using the parcel-wise mean and standard deviation of the spectral bands.
### 3.4 Analysis
In Table 1, we report the performances of competing methods (taken from [5]) and the L-TAE architecture, all obtained with a 5-fold cross-validation scheme. Our proposed L-TAE architecture outperforms other methods on this dataset both in overall accuracy and mIoU. While the OA is essentially unchanged compared to the TAE, the increase of 0.8 mIoU points is noteworthy since our model is not only simpler but also less computationally demanding by almost an order of magnitude.
We would like to emphasize that FLOP counts do not necessarily reflect the computational speed of the model in practice. In our non-distributed implementation, the total inference times are dominated by loading times and the spatial embedding module. However, this metric serves to illustrate the simplicity and efficiency of our network.
Furthermore, our network maintains a high precision even with a drastic decrease in the parameter count, as illustrated in Figure 2. We evaluate the four
\\begin{table}
\\begin{tabular}{l c c c} & OA & mIoU & MFLOPs \\\\ \\hline PSE+L-TAE (ours) & **94.3**\\(\\pm\\)0.2 & **51.7**\\(\\pm\\)0.4 & **0.18** \\\\ PSE+TAE [5] & 94.2 \\(\\pm\\)0.1 & 50.9 \\(\\pm\\)0.8 & 1.7 \\\\ CNN+GRU [4] & 93.8 \\(\\pm\\)0.3 & 48.1 \\(\\pm\\)0.6 & 3.6 \\\\ CNN+TempCNN [9] & 93.3 \\(\\pm\\)0.2 & 47.5 \\(\\pm\\)1.0 & 0.81 \\\\ Transformer [11] & 92.2 \\(\\pm\\)0.3 & 42.8 \\(\\pm\\)1.1 & 1.1 \\\\ ConvLSTM [10] & 92.5 \\(\\pm\\)0.5 & 42.1 \\(\\pm\\)1.2 & - \\\\ Random Forest [2] & 91.6 \\(\\pm\\)1.7 & 32.5 \\(\\pm\\)1.4 & - \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Performance of our proposed models and competing approaches parameterized to all have 150k parameters approximately. MFLOPs is the number of floating points operations (in \\(10^{6}\\)FLOPs) _in the temporal feature extraction module_ and for one sequence. This only applies to networks which have a clearly separated temporal module.
best performing sequence embedding modules (L-TAE, TAE, GRU, TempCNN) in the previous experiment with different configurations, ranging from \\(9k\\) to \\(3M\\) parameters. These algorithms all operate with the same decoder and spatial module: a PSE and decoder layer totaling 31k parameters. The smallest L-TAE configuration, with only \\(9k\\) parameters, achieves a better mIoU score than a TAE with almost \\(110k\\) parameters, a TempCNN with over \\(700k\\) parameters, and a GRU with \\(3M\\) parameters. See Table 4 in the Appendix for the detailed configurations corresponding to each points.
In Figure 3, we represent the average attention masks of a 16-head L-TAE for two different classes. We observe that the masks of the different heads focus on narrow and distinct time-extents, _i.e._ display a high degree of specialization. We also note that the masks are adaptive to the parcels crop types. This suggests that the attention heads are able to cater the learned features to the plant types considered. We argue that our channel grouping strategy, in which each head processes distinct time-stamped features, allows for this specialization and leads to an efficient use of the trainable parameters.
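Such class-averaged masks can be obtained by a simple pass over the validation set; the sketch below assumes, for illustration only, that the temporal encoder also returns its attention tensor of shape \\((H,T)\\).

```python
# Class-averaged attention masks (as in Figure 3); assumes the model returns
# (prediction, attention) where attention has shape (H, T) for one parcel.
import torch
from collections import defaultdict

def class_averaged_masks(model, loader):
    sums, counts = defaultdict(float), defaultdict(int)
    model.eval()
    with torch.no_grad():
        for e, days, label in loader:        # one parcel sequence per item
            _, att = model(e, days)          # att: (H, T), assumed output
            sums[int(label)] += att
            counts[int(label)] += 1
    return {c: sums[c] / counts[c] for c in sums}
```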
### 3.5 Ablation Study and Robustness Assessment
In Table 2, we report the performance of our proposed L-TAE architecture with different configurations of the following hyper-parameters: number of heads \\(H\\), dimension of keys \\(K\\), and number of channels \\(E\\) in the input sequence. We note that our model retains a consistent performance throughout all configurations.
Figure 2: Performance (in mIoU) of different approaches plotted with respect to the number of parameters in the sequence embedding module. The number of parameters is given on a logarithmic scale. The shaded areas depict the observed standard deviation of mIoU across the five cross-validation folds. The L-TAE outperforms other models across all model sizes, and the smallest 9k-parameter L-TAE instance yields better mIoU than the 100k-parameter TAE model.
_Number of heads:_ The number of heads seems to only have a limited effect on the performance. We hypothesize that while a higher number of heads \\(H\\) is beneficial, a smaller group size \\(E^{\\prime}\\) is however detrimental.
_Key Dimension:_ Our experiments show that smaller key dimensions than the typical values used in NLP or for the TAE (\\(K=32\\)) perform better on our problem. Even 2-dimensional keys allow the L-TAE to achieve performances similar to the TAE.
_Input Dimension:_ The variation in performance observed with larger input embeddings is expected: it corresponds to a richer representation. However, the returns are decreasing on the considered dataset with respect to the number of incurred parameters.
_Query-as-Parameter:_ In order to evaluate the impact of our different design choices, we train a variation of our network with the same master-query scheme as the TAE. The larger resulting linear layer increases the size of the model for a total of 170k parameters, resulting in a mIoU of only 49.7. This indicates that the query-as-parameter scheme is not only beneficial in terms of compactness but also performance.
Figure 3: Average attention masks of the L-TAE for parcels of classes Spring Cereal (left) and Summer Cereal (right), for a model with 16 heads (from top to bottom). The masks illustrate how each head focuses on short temporal intervals which depend on crop type.
### 3.6 Computational Complexity
In Table 3, we report the asymptotic complexity of different sequence embedding algorithms. For the L-TAE, the channel grouping strategy removes the influence of \\(H\\) in the computation of keys and outputs compared to a TAE or a Transformer. The complexity of the L-TAE is also lower than the GRU's as \\(M\\), the size of the hidden state, is typically larger than \\(K\\) (130 vs 8 in the experiments presented in Table 1).
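As a back-of-the-envelope illustration with the default configuration of Table 2 (\\(T=24\\), \\(E=256\\), \\(K=8\\)) and a GRU with hidden state size \\(M=130\\), and assuming for the sake of comparison that both modules receive the same \\(E\\)-dimensional inputs, the dominant terms are

\\[TEK=24\\cdot 256\\cdot 8\\approx 4.9\\times 10^{4}\\quad\\text{vs.}\\quad MT(E+M)=130\\cdot 24\\cdot 386\\approx 1.2\\times 10^{6},\\]

i.e., roughly a 25-fold reduction, of the same order as the MFLOP gap reported in Table 1.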
## 4 Conclusion
We presented a new lightweight network for embedding sequences of observations such as satellite time-series. Thanks to a channel grouping strategy and the definition of the master query as a trainable parameter, our proposed approach is more compact and computationally efficient than other attention-based architectures. Evaluated on an open-access satellite dataset, the L-TAE performs better than state-of-the-art approaches, with significantly fewer parameters and a reduced computational load, opening the way for continent-scale automated analysis of Earth observation.
Our implementation of the L-TAE can be accessed in the open-source repository: github.com/VSAinteuf/lightweight-temporal-attention-pytorch.
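To make the channel-grouping and query-as-parameter design concrete, the following is a minimal PyTorch sketch of a single attention step in the spirit of the L-TAE; the class name, tensor shapes, and default sizes are our own assumptions, and the positional encoding, grouped normalization, and MLP decoder of the released implementation are omitted, so it should be read as an illustration rather than a reproduction of the repository code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LTAEAttentionSketch(nn.Module):
    """Channel-grouped temporal attention with a trainable master query.

    Each of the H heads only sees its own slice of E' = E / H channels and
    attends over the T dates of the sequence with a learned query vector.
    """

    def __init__(self, in_channels: int = 256, n_heads: int = 16, d_key: int = 8):
        super().__init__()
        assert in_channels % n_heads == 0
        self.n_heads, self.d_key = n_heads, d_key
        self.d_group = in_channels // n_heads            # E' = E / H
        # Key projection applied to each channel group (shared across heads
        # here for brevity; per-head projections are a straightforward change).
        self.key_proj = nn.Linear(self.d_group, d_key)
        # The master query is a trainable parameter, not computed from inputs.
        self.master_query = nn.Parameter(torch.randn(n_heads, d_key))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, E) sequence of per-date feature vectors.
        b, t, e = x.shape
        groups = x.view(b, t, self.n_heads, self.d_group)      # (b, T, H, E')
        keys = self.key_proj(groups)                            # (b, T, H, K)
        scores = torch.einsum("bthk,hk->bth", keys, self.master_query)
        attn = F.softmax(scores / self.d_key ** 0.5, dim=1)     # over the T dates
        pooled = torch.einsum("bth,bthc->bhc", attn, groups)    # per-head pooling
        return pooled.reshape(b, e)                             # concatenated heads


x = torch.randn(4, 24, 256)                 # batch of 4 parcels, 24 dates
print(LTAEAttentionSketch()(x).shape)       # torch.Size([4, 256])
```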
\begin{table}
\begin{tabular}{ccc}
\begin{tabular}{ccc}
\(H\) & Params. & mIoU \\ \hline
2 & 114k & 51.6 \\
4 & 118k & 51.0 \\
8 & 127k & 51.2 \\
\underline{16} & 143k & \textbf{51.7} \\
32 & 176k & 51.2 \\ \hline
\end{tabular} &
\begin{tabular}{ccc}
\(K\) & Params. & mIoU \\ \hline
2 & 118k & 50.7 \\
4 & 127k & 51.3 \\
\underline{8} & 143k & \textbf{51.7} \\
16 & 176k & 50.8 \\
32 & 242k & 51.2 \\ \hline
\end{tabular} &
\begin{tabular}{ccc}
\(E\) & Params. & mIoU \\ \hline
32 & 46k & 49.6 \\
64 & 59k & 49.6 \\
128 & 65k & 51.1 \\
\underline{256} & 143k & \textbf{51.7} \\
512 & 254k & 51.4 \\ \hline
\end{tabular}
\end{tabular}
\end{table}
Table 2: Impact of several hyper-parameters on the performance of our method. Underlined, the default parameter values in this study; in **bold**, the best performance.
\begin{table}
\begin{tabular}{cccc}
Method & Keys & Mask & Output \\ \hline
L-TAE & \(O(TEK)\) & \(O(HTK)\) & \(O(EX)\) \\
TAE & \(O(HTEK)\) & \(O(HTK)\) & \(O(HEX)\) \\
Transf. & \(O(HTEK)\) & \(O(HT^{2}K)\) & \(O(HEX)\) \\
GRU & \multicolumn{2}{c}{\(O\left(MT(E+M)\right)\)} & \(O(MX)\) \\ \hline
\end{tabular}
\end{table}
Table 3: Asymptotic complexity of different temporal extraction modules for the computation of keys, attention masks, and output vectors. For the GRU, the complexity of the memory update is given in the Keys and Mask columns. \(X\) is the size of the output vector. \(M\) is the size of the hidden state of the GRU.
## Acknowledgments
This research was supported by the AI4GEO project: [http://www.ai4geo.eu/](http://www.ai4geo.eu/) and the French Agriculture Paying Agency (ASP).
## Appendix
In Table 4, we give the exact configurations used to obtain the values in Figure 2.
## References
* [1] Bailly, A., Malinowski, S., Tavenard, R., Chapel, L., Guyet, T.: Dense bag-of-temporal-sift-words for time series classification. International Workshop on Advanced Analysis and Learning on Temporal Data (2015)
\begin{table}
\begin{tabular}{lcccc}
Parameters & E & H & K & MLP \\ \hline
\textbf{L-TAE} & & & & \\ \hline
9 k & 128 & 8 & 8 & 128 \\
34 k & 128 & 16 & 8 & 128 - 128 \\
112 k & 256 & 16 & 8 & 256 - 128 \\
288 k & 512 & 32 & 8 & 512 - 128 \\
740 k & 1024 & 32 & 8 & 1024 - 256 - 128 \\
3840 k & 2048 & 64 & 8 & 2048 - 1024 - 256 - 128 \\ \hline
\textbf{TAE} & & & & \\ \hline
19 k & 64 & 2 & 8 & 128 - 128 \\
39 k & 64 & 4 & 8 & 256 - 128 \\
76 k & 128 & 4 & 8 & 512 - 128 \\
195 k & 256 & 4 & 8 & 1024 - 128 \\
360 k & 256 & 4 & 8 & 1024 - 256 - 128 \\
641 k & 256 & 8 & 8 & 2048 - 256 - 128 \\
2592 k & 1024 & 8 & 16 & 8192 - 256 - 128 \\ \hline
\end{tabular}
\begin{tabular}{lc}
Parameters & Hidden Size \\ \hline
\textbf{GRU} & \\ \hline
15 k & 32 \\
37 k & 64 \\
134 k & 156 \\
296 k & 256 \\
636 k & 400 \\
3545 k & 1024 \\ \hline
\end{tabular}
\begin{tabular}{lc}
Parameters & Kernels \\ \hline
\textbf{TempCNN} & \\ \hline
14 k & 16 - 16 - 16 \\
45 k & 32 - 32 - 32 \\
136 k & 64 - 64 \\
296 k & 128 - 128 \\
702 k & 128 - 128 - 128 \\
3362 k & 64 - 128 - 256 \\ \hline
\end{tabular}
\end{table}
Table 4: Configurations of the L-TAE, TAE, GRU, and TempCNN instances used to obtain Figure 2.
* [2] Bailly, S., Giordano, S., Landrieu, L., Chehata, N.: Crop-rotation structured classification using multi-source Sentinel images and LPIS for crop type mapping. IGARSS (2018)
* [3] Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR (2014)
* [4] Garnot, V.S.F., Landrieu, L., Giordano, S., Chehata, N.: Time-space tradeoff in deep learning models for crop classification on satellite multi-spectral image time series. IGARSS (2019)
* [5] Garnot, V.S.F., Landrieu, L., Giordano, S., Chehata, N.: Satellite image time series classification with pixel-set encoders and temporal self-attention. CVPR (2020)
* [6] Ji, S., Zhang, C., Xu, A., Shi, Y., Duan, Y.: 3D convolutional neural networks for crop classification with multi-temporal remote sensing images. Remote Sensing (2018)
* [7] Kussul, N., Lemoine, G., Gallego, F.J., Skakun, S.V., Lavreniuk, M., Shelestov, A.Y.: Parcel-based crop classification in ukraine using Landsat-8 data and Sentinel-1A data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2016)
* [8] Pelletier, C., Valero, S., Inglada, J., Champion, N., Dedieu, G.: Assessing the robustness of random forests to map land cover with high resolution satellite image time series over large areas. Remote Sensing of Environment (2016)
* [9] Pelletier, C., Webb, G.I., Petitjean, F.: Temporal convolutional neural network for the classification of satellite image time series. Remote Sensing (2019)
* [10] Russwurm, M., Korner, M.: Convolutional LSTMs for cloud-robust segmentation of remote sensing imagery. NeurIPS Workshop (2018)
* [11] Russwurm, M., Korner, M.: Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536 (2019)
* [12] Tavenard, R., Malinowski, S., Chapel, L., Bailly, A., Sanchez, H., Bustos, B.: Efficient temporal kernels between feature sets for time series classification. ECML PKDD (2017)
* [13] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. NeurIPS (2017)
* [14] Wu, Y., He, K.: Group normalization. ECCV (2018)
* [15] Xingjian, S., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.K., Woo, W.c.: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. NeurIPS (2015)
* [16] Ye, L., Keogh, E.: Time series shapelets: a new primitive for data mining. ACM SIGKDD (2009)

Abstract: The increasing accessibility and precision of Earth observation satellite data offers considerable opportunities for industrial and state actors alike. This calls however for efficient methods able to process time-series on a global scale. Building on recent work employing multi-headed self-attention mechanisms to classify remote sensing time sequences, we propose a modification of the Temporal Attention Encoder of Garnot _et al._ [5]. In our network, the channels of the temporal inputs are distributed among several compact attention heads operating in parallel. Each head extracts highly-specialized temporal features which are in turn concatenated into a single representation. Our approach outperforms other state-of-the-art time series classification algorithms on an open-access satellite image dataset, while using significantly fewer parameters and with a reduced computational complexity.
Keywords: Time Sequence · Self-Attention · Multi-Headed Attention · Sentinel Satellite
# Parallel, Self Organizing, Consensus Neural Networks
Homayoun Valafar
University of Georgia, CCR
Athens, GA 30602
Ph: (706) 542-4401
Email: [email protected]
Faramarz Valafar
University of Georgia, CCR
Athens, GA 30602
Ph: (706) 542-4436
Email: [email protected]
Okan Ersoy
Purdue University, MSEE
West Lafayette, IN 47907
Ph: (317) 494-6162
Email:[email protected]
## Introduction
_Parallel, Self-organizing, Consensual Neural Network (PSCNN)_ is an alternative to conventional cascaded neural networks. This network is a predecessor of _Hierarchical Neural Networks_ [1]. _PSCNN_ offers better performance [2], a faster algorithm that can even be considered for real-time execution, self-organization for optimal performance, and a better, closer emulation of the human brain for perception experiments such as speech and vision.
_PSCNN_ is an architecture that creates a purely parallel environment for the operation of neurons. This architecture not only simulates the concept of modularity in the human brain but also self-organizes the number of stages needed to achieve optimal performance.
_PSCNN_ is a unification of several smaller and more primitive modules. Each of these modules can be chosen at will, but in this work it is a fully connected, feed-forward, single-stage network. These modules are designed to be completely independent of one another during the training and recall procedures. Therefore, all of the modules can be trained simultaneously and independently. This feature allows a highly parallel operation during the training procedure, which has not been possible in the past. Furthermore, this feature of _PSCNN_ allows learning of a massive amount of data with very high performance in a relatively short time. An example of a 4-stage _PSCNN_ is shown in Figure 1. In this figure, NLT stands for Nonlinear Transformation.
## Input and Output to PSCNN Networks
The inputs to each _PSCNN_ module are the original training set, transformed non-linearly several times (the number of transformations depends on the module number). These transformations can be of any kind (as long as they are one-to-one, onto, and nonlinear), such as an FFT or DFT followed by point-wise non-linearities, or even a simple binary operation such as perfect shuffling or two's or one's complement [3]. In this research, a more drastic binary operation was used, namely the Gray code of a binary code. Although the input to each module is radically changed, the desired output of each module remains the same as the original desired output. All the initial weights are selected as random numbers.
Gray code [4] is a binary representation of numbers, like two's complement and others. The advantage of Gray coding is that numbers which appear very similar to each other in the one's or two's complement
Figure 1: A diagram of a four module PSCNN.
representation will be very dissimilar in Gray code. This makes it easier to separate similar and otherwise hard-to-detect vectors. The Gray code transformation has the additional property of rotating only a part of the binary space. This nonlinear rotation of the space is very effective in reducing the distances between far points while increasing the distances between near points of the original binary space. This advantage of the Gray code transformation is fully explored and illustrated in the XOR problem.
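To illustrate the transformation used here, the snippet below (with our own helper names) computes the standard binary-reflected Gray code with a single XOR and shift, together with its inverse; it is only meant to show the kind of nonlinear re-mapping of the binary space described above.

```python
def binary_to_gray(n: int) -> int:
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)


def gray_to_binary(g: int) -> int:
    """Inverse transform: fold the shifted code back onto itself."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n


# Consecutive integers differ in a single bit of their Gray code, while codes
# that look close in plain binary can end up far apart, and vice versa.
for n in range(8):
    print(f"{n}: binary={n:03b}  gray={binary_to_gray(n):03b}")
```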
## 2 Training of PSCNN Networks
The core learning algorithm of any neural network may consist of one of several available methods. The delta rule (gradient descent) was selected as the core training algorithm in our experiments. Even though more powerful and effective minimization (learning) algorithms exist, we chose the delta rule for two reasons: first, to more convincingly establish the success of this algorithm/architecture, and second, to be able to compare the results with other results obtained with delta-rule learning. Therefore, one clear way of improving the performance of such networks is to employ a more powerful minimization algorithm.
Training of each module starts with a regular update of weights based on the simple delta-rule learning algorithm, as sketched below. The training is performed on the locally transformed data of each module. The training of each module is often limited to a certain number of epochs and not allowed to converge to a local or global minimum point; this is again done to establish the effectiveness of this network. After training is terminated, the architecture selects the required number of modules and discards the remaining ones, if any, in order to achieve the required performance. Due to practical constraints, and to prevent the size of the network from exceeding a certain limit, an upper limit on the number of modules that _PSCNN_ can create is established ahead of time. This upper limit should reflect the computational resources available to the algorithm. Once the required modules have been selected, the _accept_ or _reject_ boundaries are determined (explained in the next section). At this point the training of the system is complete and testing or recall may start.
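A minimal NumPy sketch of how a single module could be trained independently with the delta rule on its own transformed copy of the training set is given below; the sigmoid activation, array shapes, and the fixed epoch cap are our assumptions and do not reproduce the exact experimental setup.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def train_module(x, y, epochs=50, lr=0.2, seed=0):
    """Delta-rule training of one single-stage, fully connected module.

    x: (n_samples, n_inputs) transformed inputs seen by this module only.
    y: (n_samples, n_outputs) desired outputs (identical for every module).
    Training is deliberately cut off after a fixed number of epochs.
    """
    rng = np.random.default_rng(seed)
    w = rng.uniform(-2.5, 2.5, size=(x.shape[1], y.shape[1]))
    for _ in range(epochs):
        out = sigmoid(x @ w)
        # Gradient-descent step on the squared output error.
        w += lr * x.T @ ((y - out) * out * (1.0 - out))
    return w


# Every module receives a differently transformed copy of the same training
# set, so all calls to train_module can run independently and in parallel.
```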
## 3 Adjustment of Output Boundaries
Output boundaries are defined for each output neuron in order to allow each output of each module to express its level of certainty with regard to the classification task. If a neural network is not given the option of abstaining from a decision, it is likely to produce a wrong output. As a result, a correct classification produced by a minority of modules can be overridden by the incorrect classification produced by the majority. Therefore, allowing the option of making no decision, or an unsure decision, increases the likelihood that rare but correct and confident classification results emerge.
Each output neuron in this network (in all modules) is designed to establish 5 different output regions and produce an output corresponding to the region it falls into. Each region carries a different weight in establishing the final, combined decision of the entire network. The 5 output regions are as follows and are illustrated in Figure 2.
\\begin{tabular}{l l} \\(\\bullet\\) & Definite zero (output = -1). \\\\ \\(\\bullet\\) & Indefinite zero (output = -0.5). \\\\ \\(\\bullet\\) & No decision (output = 0). \\\\ \\(\\bullet\\) & Indefinite one (output = 0.5). \\\\ \\(\\bullet\\) & Definite one (output = 1). \\\\ \\end{tabular}
The thresholds for each of these regions are established at the end of training. After the termination of training, these thresholds can be set according to different rules. The following rules were used to establish the thresholds in the experiments conducted in this research.
\\begin{tabular}{l l} \\(\\bullet\\) & The maximum number below which no false 0 outputs are produced is the definite 0 threshold. \\\\ \\(\\bullet\\) & The maximum number above which no correct 0 outputs are produced is the upper threshold for indefinite 0 region. \\\\ \\(\\bullet\\) & The minimum number above which no false 1 outputs are produced is the definite 1 threshold. \\\\ \\(\\bullet\\) & The minimum number below which no correct 1 outputs are produces is the lower threshold for indefinite 1 regions. \\\\ \\(\\bullet\\) & Any remaining region between the indefinite regions is the undecided region. \\\\ \\end{tabular}
Note that, by definition, it is not possible for the two definite regions to overlap; it is, however, possible for the indefinite regions to overlap. In this scenario, the common region is marked as the undecided region.
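The following sketch (with our own function and threshold names) shows how a raw activation could be mapped onto the five output levels of Figure 2 once the four boundaries have been fixed, with any overlap of the two indefinite regions treated as undecided, as described above.

```python
# Output codes of the five certainty regions of a single output neuron.
REGION_OUTPUTS = {
    "definite_zero": -1.0,
    "indefinite_zero": -0.5,
    "no_decision": 0.0,
    "indefinite_one": 0.5,
    "definite_one": 1.0,
}


def quantize_activation(a, t_def0, t_indef0_hi, t_indef1_lo, t_def1):
    """Map a raw activation onto the five-level output of Figure 2.

    The four thresholds are assumed to have been fixed after training.
    An activation inside the overlap of the two indefinite regions is
    re-labelled as undecided and therefore maps to 0.
    """
    if a <= t_def0:
        return REGION_OUTPUTS["definite_zero"]
    if a >= t_def1:
        return REGION_OUTPUTS["definite_one"]
    in_indef0 = a <= t_indef0_hi
    in_indef1 = a >= t_indef1_lo
    if in_indef0 and not in_indef1:
        return REGION_OUTPUTS["indefinite_zero"]
    if in_indef1 and not in_indef0:
        return REGION_OUTPUTS["indefinite_one"]
    return REGION_OUTPUTS["no_decision"]
```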
## 4 Output of the Network
The logic unit determines the final output of the network by averaging the outputs of all of the modules. The highest output is considered to be the final output. Other methods of determining the final classification result based on the
Figure 2: An example of 5 output regions for any given output neuron.
results of the modules can be implemented. For example, to further eliminate possible ambiguity in the decision, one can modify the output of each neuron to be determined by the following equation.
\[O=O_{r}+\frac{\Delta_{r}}{\left\|r\right\|}\]
where \(O\) is the final output, \(O_{r}\) is the output value corresponding to region \(r\), \(\Delta_{r}\) is the distance by which the activation level falls into the region, and \(\left\|r\right\|\) is the total size of the region. This modified output determination allows the resolution of competition among the output neurons of different classes. For example, if two neurons both produce an output of one but one is 0.4 units into the definite region and the other is 0.35, then the first neuron wins over the second one.
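As a rough illustration of the logic unit, the sketch below averages the quantized outputs of all modules for each class and applies the refinement \(O=O_{r}+\Delta_{r}/\left\|r\right\|\) for tie-breaking; the function names and array layout are ours.

```python
import numpy as np


def consensus_decision(module_outputs):
    """Combine the quantized outputs of all modules into a final class.

    module_outputs: (n_modules, n_classes) array of values in
    {-1, -0.5, 0, 0.5, 1}. The logic unit averages over modules and the
    class with the highest mean output is returned as the final decision.
    """
    mean_per_class = np.asarray(module_outputs, dtype=float).mean(axis=0)
    return int(np.argmax(mean_per_class)), mean_per_class


def refined_output(region_output, depth_into_region, region_size):
    """Tie-breaking refinement O = O_r + delta_r / ||r|| from the equation above."""
    return region_output + depth_into_region / region_size


# Example: a neuron 0.4 units into its region beats one that is only 0.35 in.
print(refined_output(1.0, 0.4, 1.0) > refined_output(1.0, 0.35, 1.0))  # True
```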
## 3 Testing of PSCNN Networks
During the testing, the test vector is transformed with the proper transformation specific to each module and then fed into the module. Each module classifies the input. A final decision-making module, which can be a small computer, analog circuit or even a logic circuit, gathers the responses from all the modules and makes a final decision based on majority voting.
## 4 Experimental Results
### X-Or Problem
Several sets of data were tested on this network to study its performance in comparison to backpropagation networks. A simple yet difficult problem, namely the exclusive-or (XOR) problem, was tested first to discover the behavior and operation of this network. This problem has been studied thoroughly by scientists and engineers during the past several years using various networks.
Studying the solution of the PSCNN network to the exclusive-or problem helps in understanding the functional mechanism of this algorithm. The first step in the study of PSCNN's solution is the examination of the original problem space illustrated in Figure 3. It is common knowledge that a single-stage neural network is not capable of solving this problem. The solution requires at least a two-stage network; with backpropagation selected as the learning algorithm, the problem can be solved with a step size of less than 0.75, 2-40 hidden neurons, and 558-6857 epochs of training. In contrast, PSCNN succeeded in solving this problem with as few as two modules, with step sizes between 0.2 and 0.9 and fewer than 50 training epochs. PSCNN's solution to this problem is illustrated in Figure 4.
### Remote Sensing Problem
The data set for this experiment is based on multispectral Earth-observation remote sensing data called Flight Line C1 (FLC1). The geographic location of the FLC1 is the southern part of Tippecanoe County, Indiana. This multispectral image was collected with an airborne scanner in June 1966 at noon. The FLC1 consists of 12 band signals, with each band corresponding to a different farm product. Only 4 of the
Figure 3: X-Or problem space.
Figure 4: The first module solution of PSCNN to the XOR problem.
12 spectral bands are used in this experiment. These 4 farm products consisted of alfalfa, corn, oats, and red clover.
The purpose of this experiment is to identify the farm product based on the observed shade of gray. The length of the input vector for each PSCNN module is 64 and the length of the output vector is 4. The learning rate \(\rho\) is set to 0.9/k, where k is the number of iterations. The initial weights were randomly generated as numbers between -2.5 and 2.5. The results of these experiments are shown in Figure 5, which contains the performance of PSCNN after 10, 50, and 500 training iterations versus the number of modules allocated to the problem. Note that the best performance obtained from a two-stage neural network (with an optimal number of hidden neurons) trained with back-propagation was 88%.
## 5 Conclusion and Discussion
PSCNN offers many attractive features for engineers as well as scientists in other disciplines. The following is a list of some of the advantages that PSCNN offers:
* No need for guessing the number of hidden neurons, since it is possible to achieve the same performance as a multi-stage network with several single-stage networks.
* Much more forgiving in selecting the step size, since finding a near-optimal minimum point is sufficient for each module.
* It is possible to extract more detailed information regarding the spatial geometry of the clusters in space by examining the success of each module and its transformation in the task of classification.
* PSCNN can be implemented on a parallel machine to utilize its parallel functionality for speedup of classification.
The results of the XOR problem demonstrate how powerful and effective this network is at combining partial solutions in order to provide a global solution to the problem. Another feature of PSCNN is its ability to self-organize, which allows the network to take the optimal path towards maximum performance.
## References
* [1] O. K. Ersoy and D. Hong, \"Parallel, self-organizing, hierarchical neural networks,\" _IEEE Trans. Neural Networks_, vol. 1, no. 2, pp. 167-178, Jun. 1990.
* [2] _Conference Proceedings_, 1995, vol. 4, pp. 2056-2061.
* [3] D. E. Rumelhart and J. L. Mcclelland, _Parallel distributed processing: explorations in the microstructure of Cognition. Volume 1. Foundations_, vol. 327, no. 6122. MIT Press, Cambridge, MA, 1986.
* [4] _Conference Proceedings_, 1996, vol. 1.
Figure 5: Performance of PSCNN for the remote sensing problem as a function of the number of modules.

Abstract: A new neural network architecture (PSCNN) is developed to improve performance and speed of such networks. The architecture has all the advantages of the previous models, such as self-organization, and possesses some other superior characteristics such as input parallelism and decision making based on consensus. Due to the properties of this network, it was studied with respect to implementation on a Parallel Processor (Ncube Machine) as well as a regular sequential machine. The architecture self-organizes its own modules in a way to maximize performance. Since it is completely parallel, both recall and learning procedures are very fast. The performance of the network was compared to backpropagation networks in problems of language perception, remote sensing, and binary logic (exclusive-or). PSCNN showed superior performance in all cases studied.
Keywords: Parallel, Self-organizing, Consensus, Neural Network.
# Learning without Exact Guidance: Updating Large-scale High-resolution Land Cover Maps from Low-resolution Historical Labels
Zhuohong Li\\({}^{1}\\), Wei He\\({}^{1}\\), Jiepan Li\\({}^{1}\\), Fangxiao Lu\\({}^{1}\\), Hongyan Zhang\\({}^{1,2}\\)
\\({}^{1}\\)Wuhan University \\({}^{2}\\)China University of Geosciences
{ashelee, weihe1990, jiepanli, fangxiaolu}@whu.edu.cn, [email protected]
## 1 Introduction
Land-cover mapping is a semantic segmentation task that gives each pixel of remote-sensing images a land-cover class such as \"cropland\" or \"building\" [14]. The land-cover data should be continuously updated since nature and human activities frequently change the landscape [37]. As sensors and satellites developed, massive high-resolution (HR) remote-sensing images (\\(\\leq\\) 1 meter/pixel) could be easily obtained [1]. Rapid large-scale HR land-cover mapping is even more critical to facilitate downstream applications as the up-to-date HR land-cover data can accurately describe the land surface [21, 27, 55]. However, the complex ground details reflected by HR images and various landforms over wide-span areas still challenge the periodic updating of large-scale HR land-cover maps [28].
The advanced methods for HR land-cover mapping have been dominated by the convolutional neural network (CNN) for many years. Although CNN-based models can finely capture local details for semantic segmentation of HR images, the intrinsic locality of convolution operations still limits their implementation in various landforms across larger areas [2]. Recently, Transformer has achieved tremendous success in semantic segmentation [5, 18, 34] and large-scale applications of Earth observation [11, 41, 48]. It adopts multi-head self-attention mechanisms to model global contexts but struggles in the representation of local details due to the shortage of low-level features [10, 48]. Besides, current methods with either CNN or Transformer structures generally rely on sufficient exact training labels by adopting a fully supervised strategy
Figure 1: Illustration of resolution mismatched issue in using the HR remote-sensing image (**Source**) and LR historical labels (**Guide**) to generate HR land-cover results (**Target**).
[20, 32, 39]. However, creating accurate HR land-cover labels for large-scale geographic areas is extremely time-consuming and laborious [6, 37].
Fortunately, many low-resolution (LR) land-cover data with large coverage have already emerged in the past decades [9, 22, 44, 56]. Utilizing these LR historical land-cover data as alternative guidance is a way to alleviate the scarcity of HR labels [29]. Nevertheless, the unmatched training pairs of HR images and inexact LR labels posed a challenge for fully supervised methods. Moreover, due to the different applied scenarios, existing weakly supervised semantic segmentation methods for natural scenes (e.g., learning from bounding box or image-level labels) are not applicable in handling the challenge as well [15, 23, 24, 57].
Distinctively, the incorrect samples of LR land-cover labels are brought by satellites in different spatial resolutions during Earth observation. As shown in Figure 1, the objects in a \\(60m\\times 60m\\) area can be clearly observed from the HR (1 m/pixel) image \\(\\mathbf{X}\\). However, in the LR (30 m/pixel) label \\(\\mathbf{Y}\\), the area is only labeled by four pixels. To produce the 1-m land-cover result \\(\\mathbf{\\widehat{Y}}\\), a labeled pixel \\(y_{1}\\) needs to provide guiding information for 900 target pixels \\(\\{\\hat{y}_{1},\\hat{y}_{2}\\cdots\\hat{y}_{900}\\}\\), which raises a serious geospatial mismatch. How to reasonably exploit LR labels as the only guidance for semantic segmentation of large-scale HR satellite images is a particular problem shared in the fields of Earth observation and computer vision [28, 31, 37]. By summarizing the state-of-the-art methods of exploiting LR labels for large-scale HR land-cover mapping, there are still two main problems:
1. _For wide-span application areas, existing feature extractors struggle to jointly capture local details from HR images and model global contexts across various landforms at once [29, 54]._
2. _For the mismatch of training pairs, existing pipelines, as shown in Figure 2 (a), either still rely on partial HR labels or require non-end-to-end optimization with human interventions [12, 27]._
To resolve these problems, as shown in Figure 2 (b), we propose the Paraformer as an HR-label-free, end-to-end framework to guide large-scale HR land-cover mapping with LR land-cover labels. Specifically, Paraformer parallelly hybrids a downsampling-free CNN branch with a Transformer branch to jointly capture local and global contexts from the large-scale HR images and adopts a pseudo-label-assisted training (PLAT) module to dig up reliable information from LR labels for framework training.
The main contributions of this study are summarized as follows: **(a)** We introduce an efficient, weakly supervised Paraformer to facilitate large-scale HR land-cover mapping by getting rid of the well-annotated HR labels and human interventions during framework training; **(b)** a downsampling-free CNN branch is parallelly hybridized with a Transformer branch to capture features with both high spatial resolution and deep-level representation. The structure aims to globally adapt large-scale, various land-forms and locally preserve HR ground details; **(c)** the PLAT module iteratively intersects primal predictions and LR labels to constantly refine labeled samples for guiding the framework training. It provides a concise way to update large-scale HR land-cover maps from LR historical data.
## 2 Related Work
**Land-cover mapping approach:** In the early stage, pixel-to-pixel classification methods, such as the decision tree [19], random forest [7], and support vector machine [40], were popular for land-cover mapping of multi-spectral LR images. However, these methods generally ignore contextual information and produce fragmented results in HR cases, as optical HR images contain abundant spatial details but limited spectral features [29]. With the development of data-driven semantic segmentation, many CNN-based models have been widely used in land-cover mapping of HR images [37, 52, 53]. As an alternative architecture, the Transformer shows great power in capturing global contexts with sequence-to-sequence modeling [3, 10, 30] and demonstrates outstanding performance in many large-scale applications of Earth observation, such as building extraction [25, 41], road detection [11], and land-object classification [47]. In addition, many works have explored ways to reduce labeling effort by producing finer labels with the Segment Anything Model (SAM) [35, 50]. However, sufficient exact training labels remain the foundation for large-scale applications of both CNN- and Transformer-based methods, and the scarcity of HR labels still keeps these fully supervised approaches from large-scale HR land-cover mapping.
**Land-cover labeled data:** Creating large-scale HR labels via manual and semi-manual annotation is extremely time-consuming and expensive [17, 36]. Therefore, existing HR land-cover data is generally limited to small scales. E.g., the LoveDA dataset contains 0.3-m land-cover data, covering
Figure 2: Two modes of large-scale HR land-cover mapping with LR labels. (a) Existing modes either reply on partial HR labels or require non-end-to-end training with human interventions. (b) **Paraformer** aims to form a mode that is HR-label-free and end-to-end trainable.
536.15 \\(km^{2}\\) of China [46]. The Agri-vision dataset contained 0.1-m labeled data, covering 560 \\(km^{2}\\) of the USA [13]. In the contract, the LR land-cover data generally has a larger coverage. E.g., the United States Geological Survey cyclically updates 30-m land-cover data covering the whole USA [49]. The European Space Agency (ESA) has updated an annual 10-m global land-cover data since 2020 [44]. These LR data can be seen as an alternative label source for guiding large-scale HR land-cover mapping. However, massive inexactly labeled samples still hinder them from being practicable.
**Strategies for LR historical label mining:** To alleviate the scarcity of accurate labels in large-scale HR land-cover mapping, many studies have made efforts to mine reliable information from LR labels. E.g., a label super-resolution network was designed to constrain the inexact parts of LR labels by using the statistical distribution inferred from HR labels [31, 37]. A multi-stage framework, named WESUP, was built for 10-m land-cover mapping with 30-m labels [12]. In WESUP, multiple models were trained to refine clean samples from LR labels. Similarly, the winning approach of the 2021 IEEE GRSS Data Fusion Contest (DFC) deployed a shallow CNN to refine the 30-m labels, and multiple models were then trained with pseudo-labels to create the 1-m land-cover map of Maryland, USA [27]. Moreover, a low-to-high network (L2HNet) was proposed to select confident parts of LR labels via weakly supervised loss functions [28]. To produce 1-m land-cover maps across China with available 10-m labels, seven L2HNets were selectively trained to adapt to wide-span geographic areas [29].
Different from these approaches, which either still rely on partial HR labels or require human interventions, Paraformer is designed as an HR-label-free, end-to-end framework to facilitate large-scale HR land-cover mapping.
## 3 Methodology
To jointly capture local and global contexts and reasonably exploit LR labels for large-scale HR land-cover mapping, Paraformer combines parallel CNN and Transformer branches with a PLAT module. In this section, the three components are introduced sequentially.
### CNN-based resolution-preserving branch
As a basic feature extractor of Paraformer and also the main structure of the previous L2HNet V1 [28], the CNN branch is designed to capture local contexts from HR images and preserve spatial details by preventing feature downsampling. As shown in Figure 3 (a), the CNN branch is constructed from five serially connected resolution-preserving (RP) blocks. Each RP block contains parallel convolution layers with sizes of \(1\times 1\), \(3\times 3\), and \(5\times 5\), whose strides are set to 1 to maintain the feature size. Partly similar to the inception module [42], the channel numbers of the different scales' layers in each block are inversely proportional to their kernel sizes and are set to 128, 64, and 32. With this setting, the RP blocks capture features with a proper receptive field instead of downsampling the feature maps with a deep encoder-decoder pattern. The serial blocks preserve the spatial resolution of features by relying mostly on \(1\times 1\) kernels, while the \(3\times 3\) and \(5\times 5\) kernels capture the necessary surrounding information. Furthermore, the multi-scale feature maps are concatenated and reduced to 128 channels to lighten the branch.
Figure 3: Overall workflow of Paraformer. The framework only takes the HR images and LR labels as training input and includes three components: (a) CNN-based resolution-preserving branch, (b) Transformer-based global-modeling branch, and (c) Pseudo-Label-Assisted Training (PLAT) module.
Besides, a shortcut connection is adopted between blocks for residual learning and detail preservation.
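A minimal PyTorch sketch of one RP block is given below, using the channel sizes stated above (128, 64, and 32 for the 1×1, 3×3, and 5×5 branches) and stride-1 convolutions; the activation functions, the absence of normalization layers, and the 128-channel input stem assumed before the first block are our simplifications rather than the exact released architecture.

```python
import torch
import torch.nn as nn


class RPBlock(nn.Module):
    """One resolution-preserving block: parallel 1x1 / 3x3 / 5x5 convolutions
    with 128 / 64 / 32 output channels, stride 1, concatenation, a 1x1
    reduction back to 128 channels, and a residual shortcut."""

    def __init__(self, channels: int = 128):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 128, kernel_size=1)
        self.conv3 = nn.Conv2d(channels, 64, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, 32, kernel_size=5, padding=2)
        self.reduce = nn.Conv2d(128 + 64 + 32, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.conv1(x), self.conv3(x), self.conv5(x)], dim=1)
        y = self.reduce(self.act(y))
        return self.act(y + x)          # shortcut keeps fine spatial details


# Five serial RP blocks; a small stem mapping the 4-band image to 128 channels
# is assumed upstream and omitted here.
cnn_branch = nn.Sequential(*[RPBlock() for _ in range(5)])
feat = cnn_branch(torch.randn(1, 128, 224, 224))
print(feat.shape)                        # torch.Size([1, 128, 224, 224]), no downsampling
```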
### Transformer-based global-modeling branch
Ground objects of the same land-cover class may have distinctive attributes in HR images and be annotated differently in LR labels. Figure 4 shows typical cases of lakes and rivers located in different areas. Considering that the intrinsic locality of the CNN branch hinders adaptation to the various landforms of large-scale areas, we further hybridize the CNN branch with a Transformer branch, which aims at capturing global contexts and building long-range support among dispersed geographic areas. As shown in Figure 3 (b), the Transformer branch contains 12 transformer layers. Each layer includes layer normalization, multi-head self-attention, and a multi-layer perceptron. The feature maps extracted by each RP block are concatenated and fed into the Transformer branch. Specifically, the extracted features from the CNN branch are downsampled and embedded into a hidden feature layer, and the Transformer branch then encodes the dense feature patches to capture global contexts. Subsequently, the encoded features are progressively upsampled to the size of the HR images and classified into the final results. During the upsampling process, the output features of each stage are concatenated with the pre-encoded features, which brings massive local contextual information into the final feature maps.
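The sketch below illustrates, in simplified form, how the CNN feature maps could be tokenized, encoded by a stack of 12 transformer layers, and upsampled back to the HR resolution; the patch size, hidden width, and head count are our assumptions, and the positional encoding as well as the stage-wise skip concatenation with pre-encoded features described above are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalBranchSketch(nn.Module):
    """Tokenize CNN feature maps, encode them with 12 transformer layers, and
    upsample the encoded grid back to the HR resolution."""

    def __init__(self, channels=128, d_model=256, n_layers=12, patch=16):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(channels, d_model, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, feat):
        tokens = self.embed(feat)                        # (B, D, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = self.encoder(tokens.flatten(2).transpose(1, 2))
        grid = seq.transpose(1, 2).reshape(b, d, h, w)
        # The real framework upsamples stage by stage and concatenates the
        # pre-encoded CNN features at every stage; a single interpolation is
        # used here for brevity.
        return F.interpolate(grid, scale_factor=self.patch, mode="bilinear",
                             align_corners=False)


global_feat = GlobalBranchSketch()(torch.randn(1, 128, 224, 224))
print(global_feat.shape)                                 # torch.Size([1, 256, 224, 224])
```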
### Pseudo-Label-Assisted Training module
To reasonably guide large-scale HR land-cover mapping with weak LR labels, as shown in Figure 3 (c), a weakly supervised PLAT module is adopted to optimize the framework training. The PLAT module aims to screen out uncertain samples and dig up reliable information from the LR labels. Specifically, the two parts of the PLAT module are explained as follows. For the CNN branch, we use classifier\({}^{(1)}\), which is constructed from \(3\times 3\) convolution layers, to generate the primal prediction\({}^{(1)}\) based on the extracted HR feature maps. Then we calculate the Cross-Entropy (CE) loss between prediction\({}^{(1)}\), represented as \(\mathbf{\hat{Y}}^{\prime}\), and the LR label, represented as \(\mathbf{Y}\). Formally, by regarding \(H\), \(W\), and \(L\) as the height, width, and number of land-cover classes of the patch, the CE loss of the CNN branch is written as:
\\[\\mathcal{L}_{\\mathrm{ce}}(\\mathbf{Y},\\mathbf{\\hat{Y}}^{\\prime})=\\frac{\\sum_{i= 0}^{W}\\sum_{j=0}^{H}\\left[\\sum_{l=1}^{L}y_{ij}^{(l)}\\log(\\hat{y}_{ij}^{\\prime(l )})\\right]}{H\\times W}. \\tag{1}\\]
As the final output of the framework, prediction\({}^{(2)}\) is classified from the concatenated feature maps of the CNN and Transformer branches and is represented as \(\mathbf{\hat{Y}}^{\prime\prime}\). During each training iteration, we take the simple but effective **intersection** of prediction\({}^{(1)}\) and the LR label to generate mask labels. Specifically, the inconsistent samples in the mask labels are set to void values to remove them from the loss calculation. Moreover, since the predictions of the CNN branch contain HR textural information that is highly consistent with the images, the mask labels also outline fine edges and retain stable labeled samples. Finally, the proposed Mask-Cross-Entropy (MCE) loss is calculated between prediction\({}^{(2)}\) and the mask labels. Formally, the MCE loss is written as:
\\[\\mathcal{L}_{\\mathrm{mce}}(\\mathbf{M}\\cdot\\mathbf{Y},\\mathbf{\\hat{Y}}^{\\prime \\prime})=\\frac{\\sum_{i=0}^{W}\\sum_{j=0}^{H}\\left[\\sum_{l=1}^{L}y_{ij}^{(l)}m_{ i}\\log(\\hat{y}_{ij}^{\\prime\\prime(l)})\\right]}{\\mathrm{Sum}(\\mathbf{M}(i,j)=1)}. \\tag{2}\\]
In Eq. 2, \(\mathbf{M}\) is the **intersected** mask with the size of \(H\times W\), and \(m_{ij}\), \(i\in\left[0,H\right]\), \(j\in\left[0,W\right]\), is the element of \(\mathbf{M}\) at position \((i,j)\), which can simply be represented as:
\\[m_{ij}=\\left\\{\\begin{array}{l}1|\\,Y_{ij}={Y^{\\prime}}_{ij}\\\\ 0|\\,Y_{ij}\
eq{Y^{\\prime}}_{ij}.\\end{array}\\right. \\tag{3}\\]
The total loss of the Paraformer is the combination of the two branches' losses, which is written as:
\\[\\mathcal{L}_{\\mathrm{total}}=\\mathcal{L}_{\\mathrm{ce}}+\\mathcal{L}_{\\mathrm{ mce}}. \\tag{4}\\]
## 4 Experiments
### Study areas and using data
To comprehensively evaluate Paraforormer on various landforms and different LR labels, the experiments are conducted on two large-scale datasets.
**The Chesapeake Bay dataset** is sampled from the largest estuary in the USA and organized into 732 non-overlapping tiles, where each tile has a size of 6000 \\(\\times\\) 7500 pixels [37]. The specific data includes:
Figure 4: Example of the local mismatch/match in two regions. The edge of water is marked with yellow boundaries. Region 1 shows dispersed lakes around urban areas with unmatched annotation. Region 2 shows a large-scale river with matched annotation.
1. _The HR images (1 m/pixel)_ are from the U.S. Department of Agriculture's National Agriculture Imagery Program (NAIP). The images contained four bands of red, green, blue, and near-infrared [33].
2. _The LR historical labels (30 m/pixel)_ are from the USGS's National Land Cover Database (NLCD) [49], including 16 land-cover classes.
3. _The ground truths (1 m/pixel)_ are from the Chesapeake Bay Conservancy Land Cover (CCLC) project.
**The Poland dataset** contains 14 provinces of Poland and is organized into 403 non-overlapping tiles, where each tile has a size of 1024 x 1024 pixels. The specific data includes:
1. _The HR images (0.25m and 0.5 m/pixel)_ are from the LandCover.ai [4] dataset. The images contained three bands of red, green, and blue.
2. _The LR historical labels_ are collected from three types of 10-m land-cover data and one type of 30-m data, which are named FROM_GLC10 [9], ESA_GLC10 [44], Esri_GLC10 [22], and GLC_FCS30 [56].
3. _The HR ground truths_ are from the OpenEarthMap [51] dataset with seven land-cover classes.
### Implementation Detail and Metrics
In the experiments, all methods only take LR land-cover data as training labels. Paraformer is trained by the AdamW optimizer with a patch size of 224\\(\\times\\)224 and batch size of 8. The learning rate is set to 0.01 and would decrease by 10%
Figure 5: Demonstration of the training data and visual comparisons of the **Paraformer** and other typical methods on the Chesapeake Bay dataset with 16 classes. (a) HR image. (b) LR label. (c) land-cover mapping result of Parafomer. (dβh) land-cover mapping results of five typical methods.
Figure 6: Six typical areas with finer observation scale on the Chesapeake Bay dataset. The first row shows the LR labels (**Guide**). The second row shows the HR images (**Source**). Third row shows the HR results (**Target**) produced by **Paraformer**.
when the loss stopped dropping over eight epochs. The metric of mean intersection over union (mIoU) is calculated between the results and the HR ground truths after their land-cover classes are unified into four base classes. The compared methods include: Random Forest (RF) is a pixel-to-pixel method widely used in large-scale land-cover mapping [7]. TransUNet [10], ConViT [18], CoAtNet [16], MobileViT [34], and EfficientViT [5] are CNN-Transformer hybrid methods for semantic segmentation. UNetformer [48] and DC-Swin [47] are dedicated CNN-Transformer methods for remote-sensing images. UNet [38], HRNet [45], and LinkNet [8] are typical CNN-based semantic segmentation methods which are widely adopted in HR land-cover mapping [52, 37, 53]. SkipFCN [26] and SSDA [43] are shallow CNN-based methods for updating 1-m land-cover change maps from 30-m labels, which won first and second place
\\begin{table}
\\begin{tabular}{c l c c c c c c c} \\hline \\hline \\multirow{2}{*}{Resolution gap} & \\multirow{2}{*}{Method} & \\multicolumn{5}{c}{mIoU (\\(\\%\\)) of six states in the Chesapeake Bay watershed} \\\\ & & Delaware & New York & Maryland & Pennsylvania & Virginia & West Virginia & **Average** \\\\ \\hline \\multirow{7}{*}{\\(30\\times\\)} & **Paraformer** & **65.57** & **71.43** & **70.20** & **60.04** & 68.01 & 52.62 & **64.65** \\\\ & L2HNet [28] & 61.77 & 68.12 & 65.24 & 58.52 & **69.39** & **55.43** & 63.08 \\\\ & TransUNet [10] & 53.15 & 60.53 & 60.42 & 51.08 & 66.21 & 47.52 & 56.49 \\\\ & ConViT [18] & 55.26 & 60.71 & 61.58 & 53.94 & 59.80 & 49.11 & 56.73 \\\\ & CoAtNet [16] & 56.89 & 62.83 & 61.25 & 53.57 & 65.67 & 51.34 & 58.59 \\\\ & MobileViT[34] & 58.03 & 61.32 & 61.84 & 55.53 & 57.04 & 48.64 & 57.07 \\\\ & EfficientViT[5] & 53.72 & 61.28 & 59.48 & 51.38 & 57.34 & 48.76 & 55.33 \\\\ & UNetFormer[48] & 58.85 & 65.11 & 61.34 & 59.10 & 60.84 & 47.20 & 58.74 \\\\ & DC-Swin[47] & 59.65 & 65.99 & 58.60 & 58.06 & 64.11 & 48.15 & 59.09 \\\\ & UNet [38] & 54.16 & 58.79 & 56.42 & 53.21 & 57.34 & 46.11 & 54.34 \\\\ & HRNet [45] & 52.11 & 56.21 & 50.76 & 50.03 & 57.48 & 45.42 & 52.00 \\\\ & LinkNet [8] & 58.27 & 62.05 & 52.96 & 52.11 & 48.71 & 48.93 & 53.84 \\\\ & SkipFCN [26] & 60.97 & 64.83 & 59.44 & 55.37 & 64.72 & 54.66 & 60.00 \\\\ & SSDA [43] & 57.91 & 61.54 & 54.85 & 51.71 & 57.71 & 47.15 & 55.15 \\\\ & RF [7] & 59.35 & 55.03 & 55.26 & 51.07 & 52.29 & 54.36 & 54.56 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: The quantitative comparison of the Paraformer and other methods on six states of the Chesapeake Bay watershed. All methods were trained with the 1-m images and 30-m labels. The mIoU (\\(\\%\\)) of different methods was calculated between their results and the 1-m ground truth.
\begin{table}
\begin{tabular}{cl|ccccccccc}
\hline \hline
Max gap & LR label & \textbf{Paraformer (ours)} & L2HNet [28] & TransUNet [10] & ConViT [18] & MobileViT [34] & DC-Swin [47] & HRNet [45] & SkipFCN [26] & RF [7] \\ \hline
\(40\times\) & FROM\_GLC10 [9] & \textbf{56.57} & 50.15 & 38.44 & 39.36 & 41.03 & 43.56 & 43.66 & 27.14 & 21.48 \\
\(40\times\) & ESA\_GLC10 [44] & \textbf{55.19} & 52.13 & 35.58 & 36.09 & 38.42 & 40.05 & 49.81 & 28.34 & 26.97 \\
\(40\times\) & Esri\_GLC10 [22] & \textbf{55.07} & 50.78 & 37.79 & 38.78 & 38.50 & 39.91 & 46.65 & 28.18 & 19.36 \\
\(120\times\) & GLC\_FCS30 [56] & \textbf{49.39} & 43.62 & 26.20 & 29.16 & 29.57 & 30.14 & 41.46 & 23.67 & 17.02 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The quantitative comparison on the Poland dataset. The mIoU (\%) of the Paraformer and other methods trained with three types of 10-m labels (i.e., FROM_GLC10, ESA_GLC10, and Esri_GLC10) and one type of 30-m label (i.e., GLC_FCS30) are demonstrated.
Figure 7: Visual results of **Paraformer** in the Poland dataset. The demonstration area is one of the training pieces sampled from the large-scale training regions. (a–e) the training pairs of HR images (0.5 m/pixel) and four types of LR labels, including ESA_GLC (10 m/pixel), FROM_GLC (10 m/pixel), Esri_GLC (10 m/pixel), and GLC_FCS30 (30 m/pixel). (f–g) the ground truth (0.5 m/pixel) and the mapping results of Paraformer with different LR labels.
in the 2021 IEEE GRSS DFC [27]. L2HNet is a state-of-the-art method designed for weakly supervised land-cover mapping [28].
### Comparison Results
**Comparison on the Chesapeake Bay dataset:** Table 1 and Figure 5 show the comparisons on the Chesapeake Bay dataset. From the quantitative results, Paraformer shows superiority in the states of Delaware, New York, Maryland, and Pennsylvania. The L2HNet shows better results in Virginia and West Virginia. On average, Paraformer has the most accurate HR land-cover mapping results over the entire area, with a mIoU of 64.65%. As shown in Figure 5 (c), the visual result of Paraformer is more consistent with the HR image compared with other methods. Unlike the fully supervised semantic segmentation task, the unmatched training pairs can cause serious misguidedness during the model training. E.g., as the rough results shown in Figure 5 (f) and (g), UNet and HRNet over-downsample the features and encourage results to fit LR labels instead of being consistent with the HR images. Furthermore, quantitative results reveal that UNet, LinkNet, and HRNet have insufficient performance, with mIoU of 54.34%, 53.84%, and 52.00%. Although the compared CNN-Transformer methods (e.g., TransUNet) combine local and global contextual information, the structure does not focus on preserving the feature resolution or dealing with the geospatial mismatch. As a result, TransUNet shows a weak performance in visual results, shown in Figure 5 (h), and has a mIoU of 56.49%. Furthermore, SkipFCN, SSDA, and RF use small receptive fields or pixel-to-pixel strategies to extract features with fine land details. However, due to the lack of deep-level feature representation and global contextual information, SkipFCN, SSDA, and RF obtain a mIoU of 59.99%, 55.15%, and 54.56%, respectively. As an example shown in Figure 5 (e), RF finely predicts ground details but incorrectly classifies rivers, lakes, and pastures. To further demonstrate the effect of Paraformer on different landscapes, we sample six typical areas in Figure 6. The visual results indicate that the complex ground details among various landforms of HR land-cover maps can be well updated from the LR historical land-cover labels.
**Comparison on the Poland dataset:** In the experiments with the Poland dataset, all methods were used to produce 0.25/0.5-m land-cover maps of 14 provinces of Poland by exploiting four LR labels separately. These LR labels include 10-m FROM_GLC10, ESA_GLC10, Esri_GLC10, and 30-m GLC_FCS30. As shown in Table 2, Paraformer is compared with eight representative methods (i.e., weakly supervised, CNN-Transformer, CNN-based, and pixel-to-pixel approaches) under a more extreme geospatial mismatch. Compared with the state-of-the-art method, Paraformer has an increase in mIoU of 6.42%, 3.06%, and 4.29% when exploiting the 10-m labels. When resolving 30-m labels with a max resolution gap of 120\(\times\), Paraformer has a mIoU of 49.39%, an increase of 5.77% compared with L2HNet. The typical CNN-based method has an average mIoU of 46.71% among the 10-m cases and 41.46% in the 30-m case. SkipFCN and RF have the lowest mIoU among all methods, which shows the difficulty of dealing with extremely unmatched situations. Moreover, the quantitative results of Paraformer in the four cases reveal that the proposed framework
Figure 8: Example of training data and different outputs of Paraformer sampled from the Chesapeake Bay dataset with four unified classes. (a) HR images. (b) LR labels. (c) the primal prediction from the CNN branch. (d) Mask label, as the intersection of (b) and (c). The **black areas** are set to void without supervised information. (e–f) the incorrect samples (in pink) of the LR label and mask label. (g) the final results of Paraformer. (h) HR ground truth.
obtains stable results from different LR labels. Figure 7 shows the visual results of Paraformer among the four cases. With the parallel CNN-Transformer structure and the PLAT module, Paraformer is able to recover clear ground details (e.g., vegetation and roads) even if they are only roughly labeled in local areas. In general, Paraformer shows the potential to robustly update large-scale HR land-cover maps from available LR historical labels.
### Ablation experiments
In this section, ablation experiments were conducted on the Chesapeake Bay dataset to evaluate different components of Paraformer. Each ablation in Table 3 is explained as follows: (1) the sole CNN branch is independently trained by calculating the CE loss with LR labels; (2) the sole Transformer branch embeds HR images instead of features from the CNN branch and calculates the CE loss with LR labels; (3) the hybrid structure without PLAT directly calculates the CE loss with the LR labels.
By ablating the PLAT module, the results obtained an average mIoU of 62.81%, which indicates a 1.84% decrease compared with the 64.65% of Paraformer. By ablating the CNN and Transformer branches, the results of the sole CNN branch obtained a mIoU of 60.15% and had a 4.5% decrease. Results of the sole Transformer branch obtained the lowest mIoU of 56.49% and had the most obvious decrease (8.16%). Figure 8 shows different outputs of Paraformer, where the inexact LR labels are gradually refined during framework training. The final result shown in Figure 8 (g) indicates both fine ground details and accurate land-cover patterns that are consistent with the ground truth. Moreover, Figure 9 shows the visualized contexts captured by the CNN branch, Transformer branch, and hybrid structure. Figure 9 (b) indicates that the CNN branch mostly focuses on capturing local details (e.g., the edges of roads, single houses, and shrubs). Figure 9 (c) indicates that the Transformer branch captures the feature in object scale, focusing on intact land objects of building areas and parking spots. The hybrid structure shows a strong response to the obvious objects with both fine edges and intact areas.
In general, the ablation results demonstrate two findings: **(1)** The PLAT module can stably optimize the framework training and reasonably exploit the LR labels during the large-scale HR land-cover mapping process. **(2)** The parallel CNN and Transformer branches are indispensable parts of the framework, which construct a more robust feature extractor to bridge local and global contextual information.
## 5 Conclusion
In this paper, a weakly supervised CNN-Transformer framework, Paraformer, is proposed to update large-scale HR land-cover maps in an HR-label-free, end-to-end manner. Experiments on two datasets show that Paraformer outperforms other approaches in guiding semantic segmentation of large-scale HR remote-sensing images with easy-access LR land-cover data. Further analysis reveals that the Paraformer can robustly adapt various landforms of wide-span areas and stably exploit different LR labels in producing accurate HR land-cover maps. The ablation studies demonstrate the effectiveness of the parallel CNN-Transformer structure and the PLAT module. Moreover, intermediate results of each training process and visualized contexts of each branch are demonstrated to transparently explain the components of Paraformer. In general, the proposed Paraformer has the potential to become an effective method for facilitating large-scale HR land-cover mapping.
## Acknowledgments
This work has been supported by the National Key Research and Development Program of China (grant no. 2022YFB3903605) and the National Natural Science Foundation of China (grant no.42071322).
\begin{table}
\begin{tabular}{l cccccc|c|cc}
\hline \hline
Ablation method & Delaware & New York & Maryland & Pennsylvania & Virginia & West Virginia & \textbf{Average} & Params & FLOPs \\ \hline
Paraformer & \textbf{65.57} & \textbf{71.43} & \textbf{70.20} & \textbf{60.04} & \textbf{68.01} & \textbf{52.62} & \textbf{64.65} & 109.4M & 141.3G \\
Sole CNN branch & 59.57 & 67.87 & 64.30 & 53.86 & 65.26 & 50.01 & 60.15 & 4.5M & 56.1G \\
Sole Transformer branch & 53.15 & 60.53 & 60.42 & 51.08 & 66.22 & 47.52 & 56.49 & 96.9M & 83.3G \\
Hybrid without PLAT & 62.69 & 70.39 & 67.15 & 58.33 & 67.47 & 50.83 & 62.81 & 109.4M & 141.3G \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The ablation results of the Paraformer on six states of the Chesapeake Bay watershed (mIoU, \%). The sole CNN branch, sole Transformer branch, and Hybrid without PLAT aim to investigate the contribution of the CNN branch, Transformer branch, and PLAT module, respectively.
Figure 9: Demonstration of the extracted contexts from the ablation methods. (a) the original HR image. (b) the contexts extracted by the sole CNN branch. (c) the contexts extracted by the sole Transformer branch. (d) the contexts extracted by the CNN-Transformer hybrid backbone.
## References
* [1] Land-cover classification with high-resolution remote sensing images using transferable deep models. _Remote Sensing of Environment_, 237:111322, 2020.
* [2] Cross-spatiotemporal land-cover classification from vhr remote sensing images with deep learning based domain adaptation. _ISPRS Journal of Photogrammetry and Remote Sensing_, 191:105-128, 2022.
* [3] Unetformer: A unet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 190:196-214, 2022.
* [4] Adrian Boguszewski, Dominik Batorski, Natalia Ziemba-Jankowska, Tomasz Dziedzic, and Anna Zambrzycka. Land-cover. ai: Dataset for automatic mapping of buildings, woodlands, water and roads from aerial imagery. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1102-1110, 2021.
* [5] Han Cai, Junyan Li, Muyan Hu, Chuang Gan, and Song Han. Efficientvit: Lightweight multi-scale attention for high-resolution dense prediction. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 17302-17313, 2023.
* [6] Yinxia Cao and Xin Huang. A coarse-to-fine weakly supervised learning method for green plastic cover segmentation using high-resolution remote sensing images. _ISPRS Journal of Photogrammetry and Remote Sensing_, 188:157-176, 2022.
* [7] Jonathan Cheung-Wai Chan and Desire Paelinckx. Evaluation of random forest and adaboost tree-based ensemble classification and spectral band selection for ecotope mapping using airborne hyperspectral imagery. _Remote Sensing of Environment_, 112(6):2999-3011, 2008.
* [8] Abhishek Chaurasia and Eugenio Culurciello. Linknet: Exploiting encoder representations for efficient semantic segmentation. In _2017 IEEE visual communications and image processing (VCIP)_, pages 1-4. IEEE, 2017.
* [9] Bin Chen, B Xu, Z Zhu, C Yuan, H Ping Suen, J Guo, N Xu, W Li, Y Zhao, JJSB Yang, et al. Stable classification with limited sample: Transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. _Sci. Bull_, 64(370-373):3, 2019.
* [10] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medical image segmentation. _arXiv preprint arXiv:2102.04306_, 2021.
* [11] Keyan Chen, Zhengxia Zou, and Zhenwei Shi. Building extraction from remote sensing images with sparse token transformers. _Remote Sensing_, 13(21):4441, 2021.
* [12] Yujia Chen, Guo Zhang, Hao Cui, Xue Li, Shasha Hou, Jinhao Ma, Zhijiang Li, Haifeng Li, and Huabin Wang. A novel weakly supervised semantic segmentation framework to improve the resolution of land cover product. _ISPRS Journal of Photogrammetry and Remote Sensing_, 196:73-92, 2023.
* [13] Mang Tik Chiu, Xingqian Xu, Yunchao Wei, Zilong Huang, Alexander G Schwing, Robert Brunner, Hrant Khachatrian, Hovnatan Karapetyan, Ivan Dozier, Greg Rose, et al. Agriculture-vision: A large aerial image database for agricultural pattern analysis. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2828-2838, 2020.
* [14] J. Cihlar. Land cover mapping of large areas from satellites: Status and research priorities. _International Journal of Remote Sensing_, 21(6-7):1093-1114, 2000.
* [15] Jifeng Dai, Kaiming He, and Jian Sun. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_, 2015.
* [16] Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. _Advances in neural information processing systems_, 34:3965-3977, 2021.
* [17] Runmin Dong, Weizhen Fang, Haohuan Fu, Lin Gan, Jie Wang, and Peng Gong. High-resolution land cover mapping through learning with noise correction. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-13, 2021.
* [18] Stephane d'Ascoli, Hugo Touvron, Matthew L Leavitt, Ari S Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. In _International Conference on Machine Learning_, pages 2286-2296. PMLR, 2021.
* [19] Mark A Friedl and Carla E Brodley. Decision tree classification of land cover from remotely sensed data. _Remote sensing of environment_, 61(3):399-409, 1997.
* [20] Raffaele Gaetano, Dino Ienco, Kenji Ose, and Remi Cresson. A two-branch cnn architecture for land cover classification of pan and ms imagery. _Remote Sensing_, 10(11):1746, 2018.
* [21] Nicolas Girard, Dmitriy Smirnov, Justin Solomon, and Yuliya Tarabalka. Polygonal building extraction by frame field learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5891-5900, 2021.
* [22] Krishna Karra, Caitlin Kontigis, Zoe Statman-Weil, Joseph C Mazzariello, Mark Mathis, and Steven P Brumby. Global land use/land cover with sentinel 2 and deep learning. In _2021 IEEE international geoscience and remote sensing symposium IGARSS_, pages 4704-4707. IEEE, 2021.
* [23] Jungbeom Lee, Seong Joon Oh, Sangdoo Yun, Junsuk Choe, Eunji Kim, and Sungroh Yoon. Weakly supervised semantic segmentation using out-of-distribution data. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 16897-16906, 2022.
* [24] Jing Li, Junsong Fan, and Zhaoxiang Zhang. Towards noiseless object contours for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 16856-16865, 2022.
* [25] Jiepan Li, Wei He, Weinan Cao, Liangpei Zhang, and Hongyan Zhang. Unet: An uncertainty-aware network for building extraction from remote sensing images. _IEEE Transactions on Geoscience and Remote Sensing_, 62:1-13, 2024.
* [26] Zhuohong Li, Fangxiao Lu, Hongyan Zhang, Guangyi Yang, and Liangpei Zhang. Change cross-detection based on label improvements and multi-model fusion for multi-temporal remote sensing images. In _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_, pages 2054-2057. IEEE, 2021.
* [27] Zhuohong Li, Fangxiao Lu, Hongyan Zhang, Lilin Tu, Jiayi Li, Xin Huang, Caleb Robinson, Nikolay Malkin, Nebojsa Jojic, Pedram Ghamisi, et al. The outcome of the 2021 ieee grss data fusion contest--track msd: Multitemporal semantic change detection. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 15:1643-1655, 2022.
* [28] Zhuohong Li, Hongyan Zhang, Fangxiao Lu, Ruoyao Xue, Guangyi Yang, and Liangpei Zhang. Breaking the resolution barrier: A low-to-high network for large-scale high-resolution land-cover mapping using low-resolution labels. _ISPRS Journal of Photogrammetry and Remote Sensing_, 192:244-267, 2022.
* [29] Zhuohong Li, Wei He, Mofan Cheng, Jingxin Hu, Guangyi Yang, and Hongyan Zhang. Sinolc-1: the first 1 m resolution national-scale land-cover map of China created with a deep learning framework and open-access data. _Earth System Science Data_, 15(11):4749-4780, 2023.
* [30] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 10012-10022, 2021.
* [31] Kolya Malkin, Caleb Robinson, Le Hou, Rachel Soobitsky, Jacob Czawlytko, Dimitris Samaras, Joel Saltz, Lucas Joppa, and Nebojsa Jojic. Label super-resolution networks. In _International Conference on Learning Representations_, 2018.
* [32] Diego Marcos, Michele Volpi, Benjamin Kellenberger, and Devis Tuia. Land cover mapping at very high resolution with rotation equivariant cnns: Towards small yet accurate models. _ISPRS journal of photogrammetry and remote sensing_, 145:96-107, 2018.
* [33] Aaron E Maxwell, Timothy A Warner, Brian C Vanderbilt, and Christopher A Ramezan. Land cover classification and feature extraction from national agriculture imagery program (naip) orthoimagery: A review. _Photogrammetric Engineering & Remote Sensing_, 83(11):737-747, 2017.
* [34] Sachin Mehta and Mohammad Rastegari. Mobilevit: lightweight, general-purpose, and mobile-friendly vision transformer. _arXiv preprint arXiv:2110.02178_, 2021.
* [35] Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, Wesley Nunes Goncalves, Ana Paula Marques Ramos, Jonathan Li, and Jose Marcato Junior. The segment anything model (sam) for remote sensing applications: From zero to one shot. _International Journal of Applied Earth Observation and Geoinformation_, 124:103540, 2023.
* [36] Bruce Pengra, Jordan Long, Devendra Dahal, Stephen V Stehman, and Thomas R Loveland. A global reference database from very high resolution commercial satellite data and methodology for application to landsat derived 30 m continuous field tree cover data. _Remote sensing of environment_, 165:234-248, 2015.
* [37] Caleb Robinson, Le Hou, Kolya Malkin, Rachel Soobitsky, Jacob Czawlytko, Bistra Dilkina, and Nebojsa Jojic. Large scale high-resolution land cover mapping with multi-resolution data. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12726-12735, 2019.
* [38] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_, pages 234-241. Springer, 2015.
* [39] Elif Sertel, Burak Ekim, Paria Etehadi Osgouei, and M Erdem Kabadayi. Land use and land cover mapping using deep learning based segmentation approaches and vhr worldview-3 images. _Remote Sensing_, 14(18):4558, 2022.
* [40] Dee Shi and Xiaojun Yang. Support vector machines for land cover mapping from remote sensor imagery. _Monitoring and Modeling of Global Changes: A Geomatics Perspective_, pages 265-279, 2015.
* [41] Zhongyu Sun, Wangping Zhou, Chen Ding, and Min Xia. Multi-resolution transformer network for building and road segmentation of remote sensing image. _ISPRS International Journal of Geo-Information_, 11(3):165, 2022.
* [42] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2015.
* [43] Lilin Tu, Jiayi Li, and Xin Huang. High-resolution land cover change detection using low-resolution labels via a semi-supervised deep learning approach-2021 ieee data fusion contest track msd. In _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_, pages 2058-2061. IEEE, 2021.
* [44] Ruben Van De Kerchove, Daniele Zanaga, Wanda Keersmaecker, Niels Souverijns, Jan Wevers, Carsten Brockmann, Alex Grosu, Audrey Paccini, Oliver Cartus, Maurizio Santoro, et al. Esa worldcover: Global land cover mapping at 10 m resolution for 2020 based on sentinel-1 and 2 data. In _AGU Fall Meeting Abstracts_, pages GC45I-0915, 2021.
* [45] Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, et al. Deep high-resolution representation learning for visual recognition. _IEEE transactions on pattern analysis and machine intelligence_, 43(10):3349-3364, 2020.
* [46] Junjue Wang, Zhuo Zheng, Xiaoyan Lu, and Yanfei Zhong. Loveda: A remote sensing land-cover dataset for domain adaptive semantic segmentation. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)_, 2021.
* [47] Libo Wang, Rui Li, Chenxi Duan, Ce Zhang, Xiaoliang Meng, and Shenghui Fang. A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images. _IEEE Geoscience and Remote Sensing Letters_, 19:1-5, 2022.
* [48] Libo Wang, Rui Li, Ce Zhang, Shenghui Fang, Chenxi Duan, Xiaoliang Meng, and Peter M Atkinson. Unetformer: A unet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 190:196-214, 2022.
* [49] James Wickham, Stephen V Stehman, Daniel G Sorenson, Leila Gass, and Jon A Dewitz. Thematic accuracy assessment of the nlcd 2016 land cover for the conterminous united states. _Remote Sensing of Environment_, 257:112357, 2021.
* [50] Qiusheng Wu and Lucas Prado Osco. samgeo: A python package for segmenting geospatial data with the segment anything model (sam). _Journal of Open Source Software_, 8(89):5663, 2023.
* [51] Junshi Xia, Naoto Yokoya, Bruno Adriano, and Clifford Broni-Bediako. Openearthmap: A benchmark dataset for global high-resolution land cover mapping. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 6254-6264, 2023.
* [52] Jie Xie, Leyuan Fang, Bob Zhang, Jocelyn Chanussot, and Shutao Li. Super resolution guided deep network for land cover classification from remote sensing images. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-12, 2021.
* [53] Yue Xu, Jianya Gong, Xin Huang, Xiangyun Hu, Jiayi Li, Qiang Li, and Min Peng. Luojia-hssr: A high spatial-spectral resolution remote sensing dataset for land-cover classification with a new 3d-hrnet. _Geo-spatial Information Science_, pages 1-13, 2022.
* [54] Naoto Yokoya, Pedram Ghamisi, Ronny Hansch, and Michael Schmitt. 2020 ieee grss data fusion contest: Global land cover mapping with weak supervision [technical committees]. _IEEE Geoscience and Remote Sensing Magazine_, 8(1):154-157, 2020.
* [55] Hongyan Zhang, Wenbin Liu, and Liangpei Zhang. Seamless and automated rapeseed mapping for large cloudy regions using time-series optical satellite imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 184:45-62, 2022.
* [56] Xiao Zhang, Liangyun Liu, Xidong Chen, Yuan Gao, Shuai Xie, and Jun Mi. Glc_fcs30: Global land-cover product with fine classification system at 30 m using time-series landsat imagery. _Earth System Science Data_, 13(6):2753-2776, 2021.
* [57] Tianfei Zhou, Meijie Zhang, Fang Zhao, and Jianwu Li. Regional semantic contrast and aggregation for weakly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 4299-4309, 2022.
**Supplementary Material - Learning without Exact Guidance: Updating Large-scale High-resolution Land Cover Maps from Low-resolution Historical Labels**
In this supplementary material, we provide a detailed description of the proposed framework and the dataset organization, and we present additional experimental results. These three parts are presented sequentially.
## Appendix A. Details of Paraformer
In the proposed Paraformer, a robust feature extractor hybridizes a downsampling-free CNN branch with a Transformer branch in parallel. To show the structures of the two branches more clearly, Figures S1 and S2 illustrate the basic units of the CNN and Transformer branches, respectively.
In this section, we focus on illustrating the basic units of the CNN branch in detail. The resolution preserving (RP) block shown in Figure S1 was firstly proposed in our previous work: L2HNet1. Here, we use \\(\\mathbf{I}^{(b)}\\), \\(\\mathbf{M}^{(b)}\\), and \\(\\mathbf{F}^{(b)}\\) to denote the input, middle, and fusion feature maps of the \\(b\\)-th block. Specifically, the input feature map of the first block is generated by a 3 \\(\\times\\) 3 convolution input layer with four input channels (i.e., the R-G-B-NIR bands of the images) and \\(C_{I}\\) output channels. Therefore, the input feature map of the first block can be expressed as \\(\\mathbf{I}^{(1)}\\in\\mathbb{R}^{N\\times C_{I}\\times H_{I}\\times W_{I}}\\), where \\(N\\) represents the batch size and \\(C_{I}\\times H_{I}\\times W_{I}\\) represents the channels, height, and width of the map, respectively. For the operation symbols, we represent a one-stride \\((n\\times n)\\) convolutional layer with \\(C_{1}\\) input channels and \\(C_{2}\\) output channels as \\(W_{C_{1},C_{2}}^{n\\times n}\\) (with padding when \\(n=3,5\\)). In addition, the batch normalization layer with the rectified linear unit (ReLU) function is simply denoted by \\(bn(\\cdot)\\), and * represents the convolution operator. Based on this, the multi-scale feature fusion process from \\(\\mathbf{I}^{(b)}\\) to \\(\\mathbf{M}^{(b)}\\) can be described as:
Footnote 1: [https://doi.org/10.1016/j.isprjsprs.2022.08.008](https://doi.org/10.1016/j.isprjsprs.2022.08.008)
\\[\\mathbf{M}^{(b)}=\\text{concat}\\begin{bmatrix}bn(\\mathbf{I}^{(b)}*W_{C_{I},C_ {I}}^{1\\times 1}),\\\\ bn(\\mathbf{I}^{(b)}*W_{C_{I},\\frac{C_{I}}{2}}^{3\\times 3}),\\\\ bn(\\mathbf{I}^{(b)}*W_{C_{I},\\frac{C_{I}}{4}}^{5\\times 5})\\end{bmatrix}.\\] (S1)
As shown in Eq. (S1), the kernel numbers of the multi-scale convolutional layers are set to \\(\\omega=\\{\\sqrt{2^{(1-n)}}\\}_{n=1,3,5}\\), which is inversely proportional to their kernel sizes.
Subsequently, we adopt a 1 \\(\\times\\) 1 convolutional layer after the concatenation of the multi-scale layers to reduce the dimensions of \\(\\mathbf{M}^{(b)}\\) from \\(C_{I}\\left(1+1/2+1/4\\right)\\) to \\(C_{I}\\), thus keeping the blocks lightweight. In addition, to maintain the shallow features and put residual learning into effect, a shortcut connection is adopted from \\(\\mathbf{I}^{(b)}\\) to \\(\\mathbf{F}^{(b)}\\). As a result, the final \\(\\mathbf{F}^{(b)}\\) can be described as:
\\[\\mathbf{F}^{(b)}=bn(\\mathbf{M}^{(b)}*W_{C_{I}(1+1/2+1/4),C_{I}}^{1\\times 1})+ \\mathbf{I}^{(b)}.\\] (S2)
From Eqs. (S1)-(S2), \(\mathbf{F}^{(b)}\) is a multi-scale fusion feature map with the same size, channels, and resolution as \(\mathbf{I}^{(b)}\). With this structure, the RP block combines multi-scale feature fusion with residual learning, preventing the loss of feature resolution caused by over-downsampling. Furthermore, after the feature fusion of several RP blocks, the predictions and corresponding CP maps are generated through a classifier constructed from a SoftMax function and a 1 \(\times\) 1 convolutional layer \(W_{C_{I},L}^{1\times 1}\), where \(C_{I}=128\) is the number of channels maintained throughout the backbone and \(L\) is the number of output channels, determined by the number of land-cover categories.
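As a concrete illustration of Eqs. (S1)-(S2), the following is a minimal PyTorch sketch of an RP block. The channel width defaults to \(C_{I}=128\) as stated above; the class name `RPBlock` and all other implementation details are illustrative assumptions rather than the reference implementation of L2HNet or Paraformer.

```python
import torch
import torch.nn as nn

class RPBlock(nn.Module):
    """Minimal sketch of the resolution-preserving (RP) block, Eqs. (S1)-(S2)."""
    def __init__(self, channels: int = 128):
        super().__init__()
        # Stride-1 multi-scale convolutions with channel widths C_I, C_I/2, C_I/4 (bn(.) = BatchNorm + ReLU).
        self.conv1 = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                   nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(channels, channels // 2, 3, padding=1),
                                   nn.BatchNorm2d(channels // 2), nn.ReLU(inplace=True))
        self.conv5 = nn.Sequential(nn.Conv2d(channels, channels // 4, 5, padding=2),
                                   nn.BatchNorm2d(channels // 4), nn.ReLU(inplace=True))
        # 1x1 fusion that maps C_I * (1 + 1/2 + 1/4) channels back to C_I.
        fused = channels + channels // 2 + channels // 4
        self.fuse = nn.Sequential(nn.Conv2d(fused, channels, 1),
                                  nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = torch.cat([self.conv1(x), self.conv3(x), self.conv5(x)], dim=1)  # Eq. (S1)
        return self.fuse(m) + x                                              # Eq. (S2): residual shortcut

# Example: the spatial resolution of the input is preserved (no downsampling).
x = torch.randn(2, 128, 64, 64)
assert RPBlock(128)(x).shape == x.shape
```

The shortcut connection and the stride-1 multi-scale convolutions are what keep the feature resolution equal to the input resolution throughout the CNN branch.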
Moreover, the basic unit of the Transformer branch is shown in Figure S2; it includes layer normalization (Layer Norm), multi-head self-attention (MSA), and a multi-layer perceptron (MLP).
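For completeness, a minimal sketch of such a Transformer unit is shown below, assuming the standard pre-norm arrangement; the embedding dimension, number of heads, and MLP expansion ratio are placeholders and not Paraformer's actual hyper-parameters.

```python
import torch
import torch.nn as nn

class TransformerUnit(nn.Module):
    """Minimal sketch of the Transformer basic unit: LayerNorm -> MSA and LayerNorm -> MLP, with residuals."""
    def __init__(self, dim: int = 256, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # tokens: (batch, sequence, dim)
        h = self.norm1(tokens)
        tokens = tokens + self.attn(h, h, h, need_weights=False)[0]  # multi-head self-attention
        return tokens + self.mlp(self.norm2(tokens))                 # multi-layer perceptron
```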
## Appendix B. Details of the study area and data
In this section, we demonstrate the details of two large-scale datasets. Figures S3 and S4 show the location, coverage, and data samples of the Chesapeake Bay dataset and the Poland dataset. Tables S1 and S2 show the land-cover class unifying relations between the LR labels and HR ground truths.
**The Chesapeake Bay dataset:** The Chesapeake Bay, the largest estuary in the USA, is about 320 kilometers long from its northern headwaters in the Susquehanna River to its outlet in the Atlantic Ocean. The Chesapeake Bay watershed covers about 160,000 \(km^{2}\) of the surrounding drainage basin. It includes six administrative states of the USA: New York, Pennsylvania, Delaware, Maryland, Virginia, and West Virginia. The watershed contains various landforms with abundant ecological communities and diverse flora, which brings challenges for large-scale high-resolution (HR) land-cover mapping. The Chesapeake Bay dataset, compiled by Microsoft2, contains 1-meter resolution images and a 30-meter resolution land-cover product as the training data pairs, as well as a 1-meter resolution ground reference for assessment. Figure S3 illustrates the location, Digital Elevation Model (DEM), number of tiles, and data samples of the Chesapeake Bay dataset. In more detail, the data sources are as follows:
Footnote 2: [https://illa.science/datasets/chesapeakelandcover](https://illa.science/datasets/chesapeakelandcover)
1. The HR remote sensing images with 1-meter resolution were captured by the airborne platform of the U.S. Department of Agriculture's National Agriculture Imagery Program (NAIP). The images contained four bands of red, green, blue, and near-infrared.
2. The rough historical land-cover products with 30-meter resolution were collected from the National Land Cover Database of the United States Geological Survey (USGS). The NLCD data contains 16 land-cover types and is utilized as the labels during the training process of the proposed Paraformer framework.
3. The HR ground references with 1-meter resolution were obtained from the Chesapeake Bay Conservancy Land Cover (CCLC) project. The CCLC data were interpreted based on the 1-meter NAIP imagery and LiDAR data containing six land-cover types. In this paper, the CCLC data were only used as the ground reference for quantitative and qualitative assessment and were not involved in the framework training or optimization process.
**The Poland dataset:** The Republic of Poland has a territory traversing the Central European Plain and extends from the Baltic Sea in the north to the Sudeten and Carpathian Mountains in the south. Topographically, with a flat, long coastline and hilly, mountainous terrain, the landscape of Poland is characterized by diverse landforms, river systems, and ecosystems. The Poland dataset covers 14 Provinces of Poland, including the Provinces of Pomorskie, Lodzkie, Lubuskie, Dolnoslaskie, and so on. The dataset contains 0.25-meter resolution images, three kinds of 10-meter resolution land-cover products, and a 30-meter resolution land-cover product, which are combined into training data pairs in different ways. Figure S4 demonstrates the location, DEM, number of tiles, and data samples of the Poland dataset. In more detail, the data sources are as follows:
1. The HR remote sensing images with 0.25-meter and 0.5-meter resolution were collected from the Land-Cover.ai dataset, where the image sources are from the public geodetic resource used in the Land Parcel Identification System (LPIS). The images contained three bands of red, green, and blue.
2. The rough historical labeled data with 10-meter resolution were collected from three types of global land-cover products which were (1) The FROM_GLC10 provided by the Tsinghua University, (2) The ESA_WorldCover v100 provided by the European Space Agency (ESA), and (3) The ESRI 10-meter global land cover (abbreviated as ESRI_GLC10) provided by the ESRI Inc. and IO Inc. The 30-meter resolution labeled data were collected from the 30-meter global land-cover product GLC_FCS30 provided by the Chinese Academy of Sciences (CAS).
3. The HR ground references were obtained from the OpenEarthMap dataset provided by the University of Tokyo. The ground references were interpreted based on the 0.25-meter and 0.5-meter resolution LPIS imagery and contained five land-cover types.
## Appendix C. Supplementary experiment results
To comprehensively demonstrate the performance of Paraformer, we sequentially illustrate supplementary experiment results as follows:
**Visual results of the Chesapeake Bay dataset:** Figures S5-S7 demonstrate one large-scale and two small-scale visual comparisons between Paraformer and four typical methods. From these visual results, Paraformer is able to update accurate HR land-cover maps from the HR image source under LR label guidance. TransUNet shows clear urban patterns but underestimates the built-up areas. UNet, as a typical CNN-based encoder-decoder framework, produces a rough result consistent with the LR labels. L2HNet, as the state-of-the-art method for updating HR land-cover results from LR labels, shows accurate edges of land objects but still has incorrect fragments in the results. RF, as a pixel-to-pixel learning method, has the finest edges but lacks contextual information, which causes insufficient results overall (underestimating the water and low vegetation).
**Visual results of the Poland dataset:** Figures S8-S11 show the visual comparison between Paraformer and the other three typical methods trained with different LR land-cover labels. From the visual results, Paraformer is able to refine a clear land-cover pattern from different types of LR land-cover labels. Even though some classes are missing from the demonstration patches, Paraformer can jointly capture the local and global contexts and produce HR results that are consistent with the HR images.
**Further discussion:** In this part, we provide more details on the loss fluctuation and on supplementary large-scale experiments in China. Figure S12 shows the loss functions \(\mathcal{L}_{\mathrm{ce}}\) and \(\mathcal{L}_{\mathrm{mce}}\) during framework training. The two training losses decrease stably in the six states of the Chesapeake Bay dataset. This further indicates the robustness of the pseudo-label-assisted training (PLAT) module in learning from inexact LR labels. To further discuss the applicability of Paraformer, we conduct large-scale experiments over the whole of Wuhan City, China.
Based on our previous work on SinoLC-1 (i.e., the first 1-m land-cover map of China), we regard the intersected results of three 10-m land-cover products (ESA_GLC10, Esri_GLC10, and FROM_GLC10) as the LR training labels for 1-m Google Earth images. As shown in Fig. S13 (a), the 1-m Google Earth image reveals clear land details. Fig. S13 (b-d) demonstrates three types of 10-m land-cover products. Compared with the original 1-m SinoLC-1 shown in Fig. S13 (e), Paraformer is able to refine a more accurate urban pattern, shown in Fig. S13 (f). For the whole of Wuhan City, the reported overall accuracy (OA) of SinoLC-1 is \(72.40\%\). The updated results of the proposed Paraformer reach \(74.98\%\), a \(2.58\%\) improvement.
Figure S6: Sample A of the training data and visual comparisons of **Paraformer** and other typical methods on the Chesapeake Bay dataset with four unified classes. (a) HR image. (b) LR label. (c) HR ground truth. (d) Land-cover mapping result of Paraformer. (e-h) Land-cover mapping results of four typical methods.
Figure S8: The visual results of the Poland dataset with 10-m ESA_GLC10 training labels. (a) The 0.5-m image. (b) The 10-m label sampled from the ESA_GLC10. (c) Result of Paraformer. (d) Result of L2HNet. (e) Result of RF. (f) Result of UNet.
Figure S10: The visual results of the Poland dataset with 10-m Esri_GLC10 training labels. (a) The 0.25-m image. (b) The 10-m label sampled from the Esri_GLC10. (c) Result of Paraformer. (d) Result of L2HNet. (e) Result of RF. (f) Result of UNet.
Figure S12: Demonstration of the loss functions \(\mathcal{L}_{\mathrm{ce}}\) and \(\mathcal{L}_{\mathrm{mce}}\) during framework training. Sub-figures (a)-(e) demonstrate the training process in six states of the Chesapeake Bay dataset.

**Abstract:** Large-scale high-resolution (HR) land-cover mapping is a vital task to survey the Earth's surface and resolve many challenges facing humanity. However, it is still a non-trivial task hindered by complex ground details, various landforms, and the scarcity of accurate training labels over a wide-span geographic area. In this paper, we propose an efficient, weakly supervised framework (Paraformer) to guide large-scale HR land-cover mapping with easy-access historical land-cover data of low resolution (LR). Specifically, existing land-cover mapping approaches reveal the dominance of CNNs in preserving local ground details but still suffer from insufficient global modeling in various landforms. Therefore, we design a parallel CNN-Transformer feature extractor in Paraformer, consisting of a downsampling-free CNN branch and a Transformer branch, to jointly capture local and global contextual information. Besides, facing the spatial mismatch of training data, a pseudo-label-assisted training (PLAT) module is adopted to reasonably refine LR labels for weakly supervised semantic segmentation of HR images. Experiments on two large-scale datasets demonstrate the superiority of Paraformer over other state-of-the-art methods for automatically updating HR land-cover maps from LR historical labels.
Footnote *: Indicates equal contribution. \({}^{\dagger}\)Corresponding author. The code and data are released at [https://github.com/LifhuoHong/Paragformer](https://github.com/LifhuoHong/Paragformer).
# Dynamics of B-cell repertoires and emergence of cross-reactive responses in COVID-19 patients with different disease severity
Zachary Montague\({}^{1*}\), Huibin Lv\({}^{2*}\), Jakub Otwinowski\({}^{3}\), William S. DeWitt\({}^{4,5}\), Giulio Isacchini\({}^{3,6}\), Garrick K. Yip\({}^{2}\), Wilson W. Ng\({}^{2}\), Owen Tak-Yin Tsang\({}^{7}\), Meng Yuan\({}^{8}\), Hejun Liu\({}^{8}\), Ian A. Wilson\({}^{8,9}\), J. S. Malik Peiris\({}^{2}\), Nicholas C. Wu\({}^{10,11,12\#}\), Armita Nourmohammad\({}^{1,3,5\#}\), Chris Ka Pun Mok\({}^{2\#}\)
## Introduction
The novel coronavirus SARS-CoV-2, which causes the severe respiratory disease COVID-19, has now spread to 216 countries and caused more than 120 million infections with a mortality rate around 2.2% (WHO, 2021). COVID-19 patients show varying disease severity ranging from asymptomatic to requiring intensive care. While epidemiological and clinical data report that many factors such as age, gender, genetic background, and preexisting conditions are associated with disease severity, host immunity against the virus infection is the crucial component of controlling disease progression (Ellinghaus et al., 2020; Guan et al., 2020; McKechnie and Blish, 2020; Vabret et al., 2020; Wu et al., 2020). Shedding light on signatures of a protective immune response against SARS-CoV-2 infections can help elucidate the nature of COVID-19 and guide therapeutic developments as well as vaccine design and assessment.
Adaptive immunity is considered as one of the core protective mechanisms in humans against infectious diseases. A vast diversity of surface receptors on B- and T-cells enables us to recognize and counter new or repeated invasions from a multitude of pathogens (Janeway et al., 2005; Nielsen and Boyd, 2018). In particular, antibodies produced by B-cells can provide long-lasting protection against specific pathogens through neutralization or other antibody-mediated immune mechanisms (Janeway et al., 2005). During the early phase of an infection, antigens of a pathogen are recognized by a group of naive B-cells, which then undergo affinity maturation in a germinal center through somatic hypermutation and selection. The B-cell receptors (BCRs) of mature B-cells can react strongly to infecting antigens, resulting in B-cell stimulation, clonal expansion, and ultimately secretion of high-affinity antibodies in the blood (Burnet, 1959, 1960; Cyster and Allen, 2019). The specificity of a BCR is determined by a number of features such as V-, (D-), or J-gene usage and length and sequence composition of the HCDR3 region. It has been found that SARS-CoV-2-specific IgG antibodies can be detected in plasma samples of COVID-19 patients starting from the first week post-symptom onset (Perera et al., 2020). These antibodies bind to different antigens including the spike protein and nucleoprotein as well as other structural or non-structural proteins (Hachim et al., 2020). In addition, multiple studies have isolated SARS-CoV-2-specific B-cells from COVID-19 patients and determined their germline origin (Barnes et al., 2020; Brouwer et al., 2020; Cao et al., 2020; Chi et al., 2020; Han et al., 2020; Hansen et al., 2020; Hurlburt et al., 2020; Ju et al., 2020; Kreer et al., 2020; Kreye et al., 2020; Liu et al., 2020; NoyPorat et al., 2020; Robbiani et al., 2020; Rogers et al., 2020; Seydoux et al., 2020, 2020; Shi et al., 2020; Wu et al., 2020; Yuan et al., 2020; Zost et al., 2020). However, we still lack a comprehensive view of patients' entire BCR repertoires during SARS-CoV-2 infections.
Antibody repertoire sequencing has advanced our understanding of the diversity of adaptive immune repertoires and their response to pathogens (Boyd et al., 2009; Georgiou et al., 2014; Kreer et al., 2020; Robins, 2013). A few studies have performed BCR repertoire bulk sequencing to characterize the statistical signatures of the immune response to SARS-CoV-2 (Galson et al., 2020; Nielsen et al., 2020; Niu et al., 2020; Schultheiss et al., 2020). However, these studies have limited data on the dynamics of BCR repertoires, which could otherwise provide significant insight into responses specific to the infection. Moreover, they do not probe the composition of plasma B-cells during infection, which is the direct indicator of antibody production within an individual.
In this study, we have established a principled statistical approach to study the statistics and dynamics of bulk and plasma B-cell repertoires and to characterize the immune responses in 19 COVID-19 patients with different disease severities. By combining information from the statistics of sequence features in BCR repertoires, the expanding dynamics of clonal lineages during infection, and sharing of BCRs among COVID-19 patients, we identified 38 clonal lineages that are potential candidates for a response to SARS-CoV-2. Importantly, eight of these lineages contain BCRs from the plasma B-cell repertoire and, hence, are likely to have been secreting antibodies during infection. Moreover, using single-cell sequencing, we have verified the reactivity of BCRs shared among individuals to the epitopes of the receptor-binding domain (RBD) and the N-terminal domain (NTD) of SARS-CoV-2. Lastly, we identified cross-reactive responses to SARS-CoV-1 in some of the COVID-19 patients and a natural emergence of a previously isolated SARS-reactive antibody (Pinto et al., 2020) in three patients.
## Results
**Strong correlation between composition of bulk and plasma B-cell repertoires.** We obtained total RNA from the PBMC isolated from 19 patients infected with SARS-CoV-2 and three healthy individuals (see Methods, and Tables S1, S2 for details). To broaden our healthy control pool, we also incorporated into our analyses IgG B-cells from ten individuals in the Great Repertoire Project (GRP) (Briney et al., 2019). Sequence statistics for the first three biological replicates pooled together for each individual from the GRP are shown in Table S3 (see Methods). The patients showed different severities of symptoms, forming three categories of infected cohorts: two patients with mild symptoms, 12 patients with moderate symptoms, and five patients with severe symptoms. Specimens from all but one patient were collected over two or more time points during the course of the infection (Table S1). In addition to the bulk repertoire, we also isolated CD38\\({}^{+}\\) plasma B-cells from PBMC samples over at least two time points from seven patients in this cohort (six moderate, one severe) and from seven additional patients (two asymptomatic, three mild, two moderate) and three healthy individuals (Figure S1 and Table S4). The sampled time points for all patients in this study are indicated in Fig. 1 and Tables S1 and S4. IgG heavy chains of B-cell repertoires were sequenced by next-generation sequencing, and the statistics of the collected BCR read data from each sample are shown in Tables S1 and S2. Statistical models were applied to analyze the length of the HCDR3 region, IGHV- or IGHJ-gene usage, and expansion and sharing of specific clonal lineages (Fig. 1).
The bulk repertoire is a collection of all BCRs circulating in the blood, including receptors from naive, memory, and plasma B-cells. Plasma B-cells are actively producing antibodies, and their receptors are more likely to be engaged in responding to an ongoing infection. Interestingly, the abundances of B-cell clonal lineages in the bulk and the plasma are strongly correlated (Fig. S3A), with Pearson correlations ranging from 0.55 to 0.88 and significance p-values \(<5\times 10^{-8}\) across patients; correlations and p-values for each patient are given in Fig. S3. The significant correspondence between the bulk and plasma B-cell repertoires in Fig. 2 indicates that samples from the bulk, which cover a larger depth, are representative of functional immune responses, at least in the course of the infection.
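As an illustration of this comparison, the per-patient correlation can be computed as follows; the two abundance arrays are hypothetical stand-ins for the matched bulk and plasma counts of each clonal lineage, not the actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical matched abundances (one entry per clonal lineage) for a single patient.
bulk_abundance = np.array([120, 35, 8, 260, 14, 5, 90], dtype=float)
plasma_abundance = np.array([60, 20, 3, 150, 9, 2, 40], dtype=float)

# Correlate log-abundances, since lineage sizes span several orders of magnitude.
r, p = stats.pearsonr(np.log10(bulk_abundance), np.log10(plasma_abundance))
print(f"Pearson r = {r:.2f}, p-value = {p:.1e}")
```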
**B-cell repertoires differ in receptor compositions across cohorts.** We aimed to investigate whether cohorts with different disease severities can be distinguished by molecular features of their B-cell repertoires. Since sequence features of immune receptors (e.g. HCDR3 length and V- or J-gene usage) are often associated with their binding specificity, we used statistical methods to
Figure 1: **Roadmap for analysis of BCR repertoires. Top:** We collected bulk blood IgG BCR samples from three healthy individuals and 19 COVID-19 patients where two patients had mild symptoms, 12 had moderate symptoms, and five had severe symptoms (different markers and colors); see Methods. We also collected CD38\\({}^{+}\\) plasma B-cells from PBMC samples of seven patients in this cohort (six moderate, one severe) and from seven additional patients (two asymptomatic, three mild, two moderate), and three healthy individuals (Fig. S2, Tables S1, S2). Samples were collected at different time points during infection (shown in center for bulk repertoires). We distinguished between productive receptors and unproductive receptors that had frameshifts due to V(D)J recombination. Line segments of varying lengths represent full V(D)J rearrangements (colors). In each patient, we constructed clonal lineages for productive and unproductive BCRs and inferred the naive progenitor of the lineage (Methods). **Bottom:** 1. Using the set of unproductive inferred naive BCRs, we inferred a model to characterize the null probability for generation of receptors \\(P_{\\text{gen}}(\\sigma)\\)(Marcou et al., 2018). We inferred a selection model (Sethna et al. 2020) to characterize the deviation from the null among inferred naive productive BCRs, with the probability of entry to the periphery \\(P_{\\text{post}}(\\sigma)\\) and selection factors \\(q_{f}(\\sigma)\\), dependent on receptor sequence features. 2. Based on temporal information of sampled BCRs, we identified clonal lineages that showed significant expansion during infection. 3. We identified progenitors of clonal lineages shared among individuals and assessed the significance of these sharing statistics based on the probabilities to find each receptor in the periphery. The shared expanding clonal lineages that contain plasma B-cells, are likely candidates for secreting responsive antibodies during infection. We verified reactivity of receptors to SARS-CoV-2 antigenic epitopes using sorted single-cell data. We also identified previously characterized monoclonal antibodies (mAbs) specific to SARS-CoV-2 and SARS-CoV-1.
activating and forming a clonal lineage in response to an infection. In particular, the subset of lineages that contain plasma B-cell receptors can signal specific responses for antibody production against the infecting pathogen. Statistics of unique sequences in the bulk and the plasma B-cell repertoires, on the other hand, contain information about the size of the circulating lineages. Importantly, these statistical ensembles are relatively robust to PCR amplification biases that directly impact read abundances (see Methods for error correction and processing of reads).
IGHV genes cover a large part of pathogen-engaging regions of BCRs, including the three complementarity-determining regions HCDR1, HCDR2, and a portion of HCDR3. Therefore, we investigated if there are any differences in V-gene usage across cohorts, which may indicate preferences relevant for response to a particular pathogen. We found that the variation in V-gene usage among individuals within each cohort was far larger than differences among cohorts both in the bulk (Fig. 3A) and in the plasma B-cell repertoire (Fig. S3B). Data from unique sequences also indicated large background amplitudes due to vast differences in the sizes of lineages within a repertoire (Figs. S2A, S3E). Similarly, IGHJ-gene usage was also comparable across different cohorts for both bulk and plasma B-cell repertoires (Figs. 2D, S2C, and S3D,G). Moreover, we do not see a significant distinction in statistics of gene usage between the bulk and the plasma B cell repertoires (Figs. 2, S2 for bulk and Fig. S3 for plasma B-cells). Our results suggest that the SARS-CoV-2 V-gene specific responses are highly individualized at the repertoire level.
HCDR3 is part of the variable chain of B-cell receptors and is often a crucial region in determining specificity. Importantly, HCDR3 is highly variable in its sequence content and length due to insertion and deletion of sequence fragments at the VD and DJ junctions of the germline receptor. Therefore, differential characteristics of the HCDR3 sequence in BCR repertoires of different cohorts can signal preferences for sequence features specific to a class of antigens. We found that HCDR3s of lineages in COVID-19 patients with moderate and severe symptoms are significantly longer than in the healthy controls both from this study and from the GRP (Briney et al., 2019) (see Fig. 3B-C; One-way ANOVA statistics for differences in mean HCDR3 length: Healthy-Moderate: \\(F_{1,13}=15.7\\), p-value \\(=1.6\\times 10^{-3}\\); Healthy-Severe: \\(F_{1,6}=37.5\\), p-value \\(=8.7\\times 10^{-4}\\); GRP-Moderate: \\(F_{1,20}=34.0\\), p-value \\(=1.1\\times 10^{-5}\\); GRP-Severe: \\(F_{1,13}=41.5\\), p-value \\(=2.2\\times 10^{-5}\\)). The difference between HCDR3 length in healthy individuals and patients with mild symptoms were less significant. These differences are also observed at the level of unique productive BCRs (Fig. S2B). These findings are consistent with previous reports of longer HCDR3 lengths in COVID-19 patients (Galson et al., 2020; Nielsen et al., 2020; Schultheiss et al., 2020). It should be noted that despite differences in experimental protocols, the HCDR3 length of the healthy cohort from this study and from GRP (Briney et al., 2019) are comparable to each other (Figs. 2B-C, S2B). In addition, we found no significant difference between the HCDR3 length of the unproductive BCR repertoires of healthy individuals and COVID-19 patients (Figs. S2E), which should reflect biases in the generation of receptors prior to functional selection. Taken together, these finding indicate that BCRs with a longer HCDR3 tend to be preferentially elicited in repertoires of individuals responding to SARS-CoV-2 infections. This preference seems to have a functional significance as longer HCDR3 is also observed among monoclonal antibodies (mAbs) specific to the receptor binding domain (RBD) and the N-terminal domain (NTD) of SARS-CoV-2 (Fig. 2B) which were identified in previous studies (Brouwer et al., 2020; Han et al., 2020; Hurlburt et al., 2020; Kreye et al., 2020; Pinto et al., 2020; Robbiani et al., 2020; Wu et al., 2020; Zost et al., 2020).
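The cohort comparisons reported above are standard one-way ANOVAs, presumably on per-individual HCDR3 length summaries, consistent with the reported degrees of freedom; a minimal sketch with invented per-individual means (three healthy, 12 moderate, and five severe individuals, matching the cohort sizes) is shown below.

```python
from scipy import stats

# Hypothetical mean HCDR3 length (amino acids) per individual in each cohort.
healthy = [15.1, 15.3, 14.9]
moderate = [15.9, 16.2, 15.8, 16.0, 16.4, 15.7, 16.1, 15.9, 16.3, 16.0, 15.8, 16.2]
severe = [16.3, 16.6, 16.1, 16.4, 16.5]

# One-way ANOVA for each pairwise cohort comparison, as reported in the text.
for name, cohort in [("moderate", moderate), ("severe", severe)]:
    f, p = stats.f_oneway(healthy, cohort)
    print(f"healthy vs {name}: F = {f:.1f}, p-value = {p:.2e}")
```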
**Differential selection on B-cell repertoires in response to SARS-CoV-2.** Longer HCDR3 sequences in COVID-19 patients can introduce more sequence diversity at the repertoire level. Quantifying sequence diversity of a B-cell repertoire can be very sensitive to the sampling depth in each individual. Despite progress in the quality of high-throughput repertoire sequencing techniques, sequenced BCRs still present a highly under-sampled view of the entire repertoire. To characterize the diversity of repertoires and the statistics of sequence features that make up this diversity, we inferred principled models of repertoire generation and selection for the entry of receptors into the periphery (Methods) (Elhanati et al., 2014; Marcou et al., 2018; Sethna et al., 2020). To do so, we first used data from unproductive lineage progenitors of B-cell receptors in the bulk repertoire to infer the highly non-uniform baseline model that characterizes the probability \\(P_{\\text{gen}}(\\sigma)\\) to generate a given receptor sequence, dependent on its sequence features including the V-, D-, and J- gene choices and also the inserted and deleted sequences at the VD and DJ junctions (Elhanati et al., 2014; Marcou et al., 2018; Sethna et al., 2020) (Fig. 1 and Methods). The resulting model reflects the biased preferences in generating BCRs in the bone marrow by V(D)J recombination.
Figure 3: **Differential statistics of immune repertoires across cohorts.****(A)** The distribution of the log-probability to observe a sequence \\(\\sigma\\) in the periphery \\(\\log_{10}P_{\\text{post}}(\\sigma)\\) is shown as a normalized probability density function (PDF) for inferred naive progenitors of clonal lineages in cohorts of healthy individuals and the mild, moderate, and severe cohorts of COVID-19 patients. Full lines show distributions averaged over individuals in each cohort, and shadings indicate regions containing one standard deviation of variation among individuals within a cohort. **(B)** Clustering of cohorts based on their pairwise Jensen-Shannon divergences \\(D_{JS}\\) as a measure of differential selection on cohorts is shown (Methods). **(C)** The bar graph shows how incorporating different features into a SONIA model contributes to the fractional Jensen-Shannon divergence between models trained on different cohorts. The error bars show the variations of these estimates over five independently inferred models (Methods). Logo plots show the expected differences in the log-selection factors for amino acid usage, \\(\\langle\\Delta\\log Q_{\\text{cohort}}(a)\\rangle=(\\log Q_{\\text{cohort}}(a)-\\log Q _{\\text{healthy}}(a))\\) for the **(D)** mild, **(E)** moderate, and **(F)** severe COVID-19 cohorts. The expectation values \\(\\langle\\cdot\\rangle\\) are evaluated on the mixture distribution \\(\\frac{1}{2}\\Big{(}\\text{P}_{\\text{post}}^{\\text{cohort}}+\\text{P}_{\\text{ post}}^{\\text{healthy}}\\Big{)}\\). Positively charged amino acids (lysine, K; arginine, R; and histidine, H) are shown in blue while negatively charged amino acids (aspartate, D, and glutamate, E) are shown in red. All other amino acids are grey. Positions along the HCDR3 are shown up to 10 residues starting from the 3β (positive position values) and the 5β ends (negative position values). **(G)** The bar graph shows the average mean difference between the log-selection factors for IGHV-gene usage for the mild (green), moderate (yellow), and severe (red) COVID-19 cohorts, with the mean computed using the mixture distribution \\(\\frac{1}{2}\\Big{(}\\text{P}_{\\text{post}}^{\\text{cohort}}+\\text{P}_{\\text{ post}}^{\\text{healthy}}\\Big{)}\\) and the average taken over the mean differences of 30 independently trained SONIA models for each cohort. Error bars show one standard deviation for the estimated mean, due to variations in the inferred SONIA models.
The functional, yet pathogen-naive BCRs that enter the periphery experience selection through processes known as central tolerance (Janeway et al., 2005). In addition, the inferred progenitors of clonal lineages in the IgG repertoire have undergone antigen-dependent selection that led to expansion of their clonal lineages in response to an infection. These two levels of selection make sequence features of functional lineage progenitors distinct from the pool of unproductive BCRs that reflects biases of the generation process prior to any selection. In addition, differential selection on receptor features can be used to quantify a distance between repertoires of different cohorts that reflect their functional differences in responses to immune challenges (Isacchini et al., 2021).
To identify these distinguishing sequence features, we inferred a selection model for lineage progenitors (Methods). We characterized the probability to observe a clonal lineage ancestor in the periphery as \\(P_{\\text{post}}(\\sigma)\\sim P_{\\text{gen}}(\\sigma)e^{\\Sigma_{f:\\text{features }}q_{f}(\\sigma)}\\), which deviates from the inferred generation probability of the receptor \\(P_{\\text{gen}}(\\sigma)\\) by selection factors \\(q_{f}(\\sigma)\\)(Isacchini et al., 2020, 2020, 2021; Sethna et al., 2020). These selection factors \\(q_{f}(\\sigma)\\) depend on sequence features, including IGHV-gene and IGHJ-gene usages, HCDR3 length, and amino acid preferences at different positions in the HCDR3 (Methods) (Elhanati et al., 2014; Isacchini et al., 2020, 2020, 2021; Marcou et al., 2018; Sethna et al., 2020). Importantly, the inferred selection models are robust to the differences in the sample size of the repertoires, as long as enough data is available to train the models (Methods and Fig. S4C-F). As a result, selection models offer a robust approach to compare functional differences even between repertoires with widely different sample sizes, as is the case for our cohorts (Methods and Fig. S4C-F).
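The log-linear form of the selection model can be made explicit with a small sketch: given a receptor's generation probability and a set of feature-dependent selection factors \(q_{f}\), the (unnormalized) \(P_{\text{post}}\) follows directly. The feature set and numerical values below are invented for illustration; in practice both \(P_{\text{gen}}\) and the \(q_{f}\) are inferred with the modeling framework cited above.

```python
import numpy as np

# Hypothetical selection factors q_f for a few receptor features.
q = {("HCDR3_length", 18): 0.35, ("IGHV", "IGHV3-53"): 0.10, ("IGHJ", "IGHJ6"): -0.05}

def p_post_unnormalized(p_gen: float, features: list) -> float:
    """P_post(sigma) ~ P_gen(sigma) * exp(sum_f q_f(sigma)), over the features present in sigma."""
    return p_gen * np.exp(sum(q.get(f, 0.0) for f in features))

# A receptor with generation probability 1e-12 and the three features above.
sigma_features = [("HCDR3_length", 18), ("IGHV", "IGHV3-53"), ("IGHJ", "IGHJ6")]
print(p_post_unnormalized(1e-12, sigma_features))
```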
The distribution of the log-probability \\(\\log_{10}P_{\\text{post}}(\\sigma)\\) for the inferred progenitors of clonal lineages observed in individuals from different cohorts is shown in Fig. 3A. We find an overabundance of BCR lineages with progenitors that have a low probability of entering the periphery (i.e., a lower \\(P_{\\text{post}}(\\sigma)\\)) in COVID-19 patients compared to healthy individuals (Fig. 3A). A similar pattern is observed at the level of generation probability \\(P_{\\text{gen}}(\\sigma)\\) for functional receptors in the healthy versus COVID-19 infected individuals (Fig. S4A). Notably, the inferred selection models from the GRP healthy repertoires are comparable to the healthy cohort in this study (Fig. S4B). Thus, the overabundance of rare receptors in COVID-19 patients is likely to be linked to functional responses associated with the stimulation of the repertoires against SARS-CoV-2.
We estimated the diversity of the repertoires in each cohort by evaluating the entropy of receptor sequences generated by the respective repertoire models (see Methods). In particular, diverse repertoires that contain B-cell lineages with rare receptors (i.e., those with a lower \\(P_{\\text{post}}(\\sigma)\\)), should have larger entropies. Based on this analysis, we find that immune repertoires are more diverse in COVID-19 patients compared to healthy individuals (Fig. 3A and Methods). Specifically, the entropy (i.e., diversity) of BCR bulk repertoires grows with severity of the disease, from 39.18 bits in the healthy cohort to 40.81 \\(\\pm\\) 0.03 bits in the mild cohort, to 41.03 \\(\\pm\\) 0.25 bits in the moderate cohort, and to 41.32 \\(\\pm\\) 0.11 bits in the severe cohort (see Methods). The error bars indicate variations over different models inferred in each of the COVID-19 cohorts, from repertoires subsampled to the same size as the healthy control (Methods). As indicated in Fig. S4, the models inferred from subsampled repertoires are highly consistent within each cohort.
Selection factors \\(q_{f}(\\sigma)\\) determine the deviation in preferences for different sequence features of BCRs in each cohort, including their HCDR3 length and composition and IGHV-gene usages. A comparison of selection factors among cohorts can characterize their distinctive sequence features. To quantify the selection differences across cohorts, we evaluated the Jensen-Shannon divergence (\\(D_{\\text{JS}}\\)) between repertoires of different cohorts, which measures the distance between the features of their receptor repertoire distributions (Isacchini et al., 2021) (Methods). Clustering of the cohorts based on their pairwise Jensen-Shannon divergences indicates that repertoires diverge with growing disease severity, and the COVID-19 cohorts are more similar with each other than with the healthy cohort (Fig. 3B, Methods).
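Both the entropy and the Jensen-Shannon divergence reported here are functionals of the inferred receptor distributions and can be estimated once those distributions (or samples from them) are available. The sketch below uses toy categorical distributions as stand-ins for two cohort models \(P_{\text{post}}\); it illustrates the definitions only and is not the estimation procedure of the Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two cohort models P_post: categorical distributions over a receptor "universe".
p_healthy = rng.dirichlet(np.ones(1000))
p_covid = rng.dirichlet(np.ones(1000))

def entropy_bits(p):
    """Shannon entropy H = -sum p log2 p (computed exactly here; by Monte Carlo for real models)."""
    return float(-np.sum(p * np.log2(p)))

def jensen_shannon_bits(p, q):
    """D_JS(p, q) = H((p + q)/2) - (H(p) + H(q))/2, in bits."""
    m = 0.5 * (p + q)
    return entropy_bits(m) - 0.5 * (entropy_bits(p) + entropy_bits(q))

print(f"H(healthy)           = {entropy_bits(p_healthy):.2f} bits")
print(f"D_JS(healthy, covid) = {jensen_shannon_bits(p_healthy, p_covid):.3f} bits")
```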
The inferred selection models enabled us to quantify how different receptor features affect the pairwise divergence \\(D_{\\text{JS}}\\) of BCR repertoires (Methods). In particular, we found that HCDR3 length contributes the most to differences in receptor distributions between the healthy and COVID-19 cohorts (Fig. 3C), consistent with the significant differences in the HCDR3 length distributions shown in Fig. 2C. In addition, we found that the amino acid composition of HCDR3 is the second most distinguishing factor between repertoires (Fig. 3C), indicating that negatively charged amino acids are slightly suppressed at the center of HCDR3s in COVID-19 cohorts compared to healthy repertoires (Fig. 3D-F). The selection differences of IGHV- and IGHJ-gene usages between the healthy and the COVID-19 patients are insignificant (Figs. 3C,G), consistent with our previous analysis of lineage characteristics in Fig. 2A,D. Taken together, HCDR3 length and composition represents the molecular features that are most distinguishable at the repertoire level across different cohorts. Nonetheless, further work is necessary to understand the molecular underpinnings that may make these receptor features apt in response to a SARS-CoV-2 challenge.
### Expansion of BCR clonal lineages over time indicates responses to SARS-CoV-2.
Next, we examined the dynamics of BCR repertoires in the COVID-19 patients. The binding level (measured by OD\({}_{450}\) in ELISA assays) of both IgM and IgG antibodies against the receptor-binding domain (RBD) or N-terminal domain (NTD) of SARS-CoV-2 increased in most of the COVID-19 patients in our study over the course of their infection (Figs. 4A, S5). We expected that the increase of OD\({}_{450}\) binding level is associated with activation of specific B-cells, resulting in an increase in mRNA production of the corresponding BCRs. Detecting expansion of specific clonal lineages is challenging due to subsampling of the repertoires. In fact, only a limited overlap of BCR lineages was found if we simply compared the data between different time points or between replicates of a repertoire sampled at the same time point (Fig. S6). To identify expanding clonal lineages, we examined lineages only in patients whose plasma showed an increase in binding level (OD\({}_{450}\)) to the RBD of SARS-CoV-2 and compared the sequence abundance of those lineages in the bulk repertoire that appeared in two or more time points (Figs. 4A, S6 and Methods). Using a hypothesis test with a false discovery rate of 7.5%, as determined by analyzing replicate data (Methods, Fig. S6), we detected significant expansion of clonal lineages of receptors harvested from the bulk repertoire within all investigated patients. The results reflect a dynamic repertoire in all patients, ranging from 5% to 15% of lineages with significant expansion and large changes in sequence abundances over time (Figs. 4, S6). The expanding lineages had HCDR3 lengths comparable to the rest of the repertoire in COVID-19 patients (Fig. S6). Moreover, we observed expanding lineages to show V-gene preferences comparable to those of previously identified antibodies against SARS-CoV-2 (RBD). This includes the abundance of IGHV4-59, IGHV4-39, IGHV3-23, IGHV3-53, IGHV3-66, IGHV2-5, and IGHV2-70 (Brouwer et al., 2020; Ju et al., 2020; Pinto et al., 2020; Rogers et al., 2020). However, it should be noted that these preferences in V-gene usage among expanding lineages are comparable to the overall biases in V-gene usage within patients, and expanded lineages roughly make up 25% of lineages with a given V gene (Fig. 4C). Therefore, our results suggest that the overall response to SARS-CoV-2 is not driven by only a specific class of IGHV gene. We expect clonal expansions to reflect responses to SARS-CoV-2 during infection. Indeed, we observed that expanding lineages (based on the bulk data) show an over-representation of receptors harvested from plasma B-cells, which are likely to be associated with antibody-secreting B-cells (Fig. 4D and Methods); patient-specific significance p-values are reported in the caption of Fig. 4D.

Figure 4: **Dynamics of BCR repertoires during infection.** **(A)** The binding level (measured by OD\({}_{450}\) in ELISA assay) of the IgM (left) and IgG (right) repertoires to SARS-CoV-2 (RBD) epitopes increases over time in most individuals. **(B)** The log-ratio of BCR (mRNA) abundance at late time versus early time is shown for all clonal lineages that are present in at least two time points (see Methods). Each panel shows dynamics of lineages for a given individual, as indicated in the label. The analysis is shown for individuals in whom the binding level (OD\({}_{450}\)) of the IgG repertoire increases over time (shown in **(A)**). The count density indicates the number of lineages at each point. Lineages that show a significant expansion over time are indicated in red (see Methods for estimation of associated p-values). **(C)** IGHV-gene usage of lineages is shown for non-expanded (left) and expanded (middle) lineages in all individuals (colors). The right panel shows, for each patient (colors), the fraction of expanded lineages with a given IGHV gene as the number of expanded lineages divided by the total number of lineages with that given IGHV gene. The size of the circles indicates the total number of lineages in each category. **(D)** Boxplots of log\({}_{10}\) relative read abundance in the plasma B-cell (Methods) are shown for expanding (red) and non-expanding (cyan) lineages that contain reads from the plasma B-cell in different patients. Receptors from the plasma B-cell are significantly more abundant in expanding lineages in a number of patients based on the ANOVA test statistics: patient 3: \(F_{1,42}=5.4\), p-value = 0.02; patient 5: \(F_{1,31}=0.5\), p-value = 0.5; patient 7: \(F_{1,49}=0.01\), p-value = 0.91; patient 9: \(F_{1,42}=4.1\), p-value = 0.04; patient 10: \(F_{1,42}=2.9\), p-value = 0.1; patient 13: \(F_{1,64}=7.7\), p-value = 0.007.
### Sharing of BCRs among individuals.
Despite the vast diversity of BCRs, we observe a substantial number of identical progenitors of BCR clonal lineages among COVID-19 patients (Fig. 5) and among healthy individuals from our dataset and from the GRP (Fig. S7). Previous work has also identified sharing of BCRs among COVID-19 patients, which was interpreted by the authors as evidence for large-scale convergence of immune responses (Galson et al., 2020; Nielsen et al., 2020; Schultheiss et al., 2020). Although BCR sharing can be due to convergent response to common antigens, it can also arise from convergent recombination leading to the same receptor sequence (Elhanati et al., 2018; Pogorelyy et al., 2018) or simply from experimental biases. Therefore, it is imperative to formulate a null statistical model to identify the outliers among shared BCRs as candidates for common responses to antigens. Convergent recombination defines a null expectation for the amount of sharing within a cohort based on only the underlying biases for receptor generation within a repertoire (Elhanati et al., 2018; Pogorelyy et al., 2018) (Methods). Intuitively, sharing is more likely among commonly generated receptors (i.e., with a high \\(P_{\\text{post}}(\\sigma)\\)) and within cohorts with larger sampling (Methods). Importantly, rare receptors (i.e., with a low \\(P_{\\text{post}}(\\sigma)\\)) that are shared among individuals in a common disease group can signal commonality in function and a response to a common antigen, as previously observed for TCRs in response to a yellow fever vaccine (Pogorelyy et al., 2018) and CMV and diabetes (Pogorelyy et al., 2018).
We used the receptors' probabilities \(P_{\text{post}}(\sigma)\) to assess the significance of sharing by identifying a probabilistic threshold to limit the shared outliers both among the COVID-19 patients (dashed line in Fig. 5) and the healthy individuals (dashed lines in Fig. S7). Out of a total of 40,128 (unique) progenitors of clonal lineages reconstructed from the pooled bulk+plasma B-cell repertoires (Fig. 5A, Tables S1, S4), we found 10,146 progenitors to be shared among at least two individuals, and 761 of these lineages contained receptors found in the plasma B-cell data of at least one individual. 167 of the 10,146 lineage progenitors were classified as rare, having a probability of occurrence below the indicated threshold (dashed line) in Fig. 5B, with 30 of them containing receptors harvested from plasma B-cells, indicating a significant over-abundance of plasma B-cells among the rare, shared receptors (p-value = 7.2 \(\times\) 10\({}^{-6}\)). Moreover, we found that 615 lineages shared a common sequence ancestor in at least two individuals and had expanded in at least one of the individuals (Fig. 5C-D). 38 of these shared, expanding lineages stemmed from rare naive progenitors (below the dashed line in Fig. 5B, D), eight of which contain receptors found in the plasma B-cell data of at least one individual. The over-abundance of plasma B-cell receptors in the rare, shared expanding lineages is significant (p-value = 0.04). The sharing of these rare, expanding BCRs among COVID-19 patients, with an over-abundance of receptors associated with antibody production in the plasma B-cell data, indicates a potentially convergent response to SARS-CoV-2; these receptors are listed in Table S5.

Figure 5: **Sharing of BCRs among patients.** **(A)** The histogram shows the number of clonal lineages that share a common progenitor in a given number of individuals, indicated on the horizontal axis. **(B)** The density plot shows the distribution of \(\log_{10}P_{\mathrm{post}}\) for progenitors of clonal lineages shared in a given number of individuals, indicated on the horizontal axis. Histogram bin size is 0.5. The scaling of sequence counts sets the maximum of the density in each column to one. Sharing of rare lineages with \(\log_{10}P_{\mathrm{post}}\) below the dashed line is statistically significant (Methods). Green diamonds indicate clonal lineages below the dashed line with significant expansion in at least one of the individuals. Orange triangles indicate clonal lineages below the dashed line that contain reads from the plasma B-cell repertoire in at least one of the individuals. **(C, E)** The histograms show the number of clonal lineages that share a common progenitor in a given number of individuals and that have significantly expanded during infection in at least one of the individuals (C), or that contain reads from the plasma B-cell repertoire in at least one of the individuals (E). **(D, F)** The scatter plots with transparent overlapping markers show \(\log_{10}P_{\mathrm{post}}\) for progenitors of clonal lineages shared in a given number of individuals that have expanded (D), or contain reads from the plasma B-cell repertoire (F), in at least one individual. The dashed line is the same as in (B).
Interestingly, we found that 24% of receptors in the 38 rare shared, expanding lineages contain multiple cysteines in their HCDR3s, in contrast to only 10% of the receptors in the whole repertoire. Such sequence patterns with cysteine pairs in the HCDR3 have been associated with stabilization of the HCDR3 loop by forming disulfide bonds with particular patterns and spacings of the cysteines (Lee et al., 2014; Prabakaran and Chowdhury, 2020). Disulfide bonds in the HCDR3 can decrease the conformational flexibility of the loop, thus decreasing the entropic cost of binding to improve the affinity of the receptor (Almagro et al., 2012). The significantly larger fraction of multi-cysteine HCDR3s among the candidate SARS-CoV-2 responsive receptors (p-value = 0.013 based on binomial sampling) indicates an underlying molecular mechanism for developing a potent response to SARS-CoV-2.
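The enrichment calculation above amounts to a one-sided binomial test. The sketch below is illustrative only: the counts passed to it are placeholders rather than the study's exact numbers, and the helper names are ours.

```python
# Hedged sketch: test whether multi-cysteine HCDR3s are over-represented among
# candidate SARS-CoV-2-responsive receptors relative to the repertoire-wide rate.
from scipy.stats import binom

def has_multiple_cysteines(hcdr3):
    """True if an HCDR3 amino-acid string contains two or more cysteines."""
    return hcdr3.count("C") >= 2

def cysteine_enrichment_pvalue(n_candidates, n_multi_cys, background_rate):
    """One-sided binomial p-value: probability of observing at least n_multi_cys
    multi-cysteine receptors among n_candidates receptors if each carries >=2
    cysteines independently with probability background_rate."""
    return binom.sf(n_multi_cys - 1, n_candidates, background_rate)

# Illustrative call mirroring the text (~24% observed vs. a ~10% background rate):
print(cysteine_enrichment_pvalue(n_candidates=50, n_multi_cys=12, background_rate=0.10))
```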
### Presence of SARS-CoV-2 and SARS-CoV-1 specific neutralizing antibodies within repertoires.
To further investigate the functional response in the repertoire of COVID-19 patients, we performed single-cell sequencing on pooled samples from all patients, sorted for reactivity to RBD or NTD epitopes of SARS-CoV-2 (Methods). This analysis suggests that about 0.2% of these single cells are RBD-reactive as opposed to only 0.02% that are NTD-reactive (Fig. S1). This inferred fraction of reactive antibodies is consistent with previous estimates (Kreer et al., 2020).
Next, we characterized the sequence features of RBD- and NTD-sorted antibodies. The IGHV-gene usage of these reactive receptors is shown in Fig. 6 and is compared to gene usage in monoclonal antibodies (mAbs) identified in previous studies (Brouwer et al., 2020; Han et al., 2020; Hurlburt et al., 2020; Kreye et al., 2020; Pinto et al., 2020; Robbiani et al., 2020; Wu et al., 2020; Zost et al., 2020). Despite the broad range of IGHV-gene usages associated with epitope reactivity, the sorted single-cell data show IGHV-gene preferences in common with the previously identified mAbs against SARS-CoV-2 epitopes. This includes an abundance of IGHV1-69, IGHV4-59, IGHV3-30-3, IGHV3-33, IGHV1-18, IGHV5-51, and IGHV1-46 against RBD, and IGHV3-23, IGHV4-59, IGHV4-39, IGHV3-21, and IGHV3-48 against NTD (Fig. 7A). Similarly, we observe consistent biases in V- and J-gene usages of the \(\kappa\) and \(\lambda\) light chains for the sorted single-cell data and the verified mAbs (Fig. S8). Moreover, the HCDR3 length distributions of the sorted single-cell data are comparable to those of the verified mAbs (Fig. S8). The average HCDR3 length for both the verified mAbs and the sorted single-cell receptors is comparable to that of bulk repertoires from COVID-19 patients, which is significantly longer than that of healthy individuals (Fig. 2B).
Figure 6: **Statistics of BCRs reactive to RBD and NTD epitopes.** **(A)** The relative counts for IGHV-gene usage are shown for known mAbs (Table S8) reactive to RBD (pink) and NTD (green) epitopes of SARS-CoV-2 and for receptors obtained from single-cell sequencing of the pooled sample from all patients (Methods), sorted for RBD (yellow) and NTD (blue) epitopes. **(B)** The histogram shows the number of NTD-sorted receptors from single-cell sequencing (Table S6) and RBD- and NTD-specific verified mAbs (Table S7) found in the bulk+plasma B-cell repertoires of a given number of individuals (Methods), indicated on the horizontal axis. **(C)** The distribution of the log-probability to observe a sequence \(\sigma\) in the periphery \(\log_{10}P_{\text{post}}\left(\sigma\right)\) is shown as a normalized probability density function (PDF) for inferred naive progenitors of known RBD- and NTD-specific mAbs and for RBD- and NTD-sorted receptors from single-cell sequencing. \(P_{\text{post}}(\sigma)\) values were evaluated based on the repertoire model created from patients with moderate symptoms. The corresponding \(\log_{10}P_{\text{post}}\) distribution for bulk repertoires of the moderate cohort (similar to Fig. 3A) is shown in black as a reference. **(D)** Similar to (C) but restricted to receptors that are found in the bulk+plasma B-cell repertoire of at least one patient in the cohort (Tables S6, S7). Colors are consistent between panels, and the number of samples used to evaluate the statistics in each panel is indicated in the legend.
To characterize how SARS-CoV-2 reactive receptors make up the patients' repertoires, we mapped the heavy chain receptors from the sorted single-cell data onto BCR lineages constructed from the bulk+plasma B-cell data in the COVID-19 patients (Methods, Table S6). We found that 13 (from 237) RBD-sorted and 13 (from 330) NTD-sorted antibodies from the single-cell data matched receptor lineages in at least one individual (Fig. 6B). Interestingly, we found a broad sharing of these antibodies with 10 RBD- and 6 NTD-sorted single cells present in at least two patients (Fig. 6B).
In repertoires of the COVID-19 patients, we found that several HCDR3s matched with SARS-CoV-2-specific mAbs that were previously isolated in other studies (Brouwer et al., 2020; Han et al., 2020; Hurlburt et al., 2020; Kreye et al., 2020; Pinto et al., 2020; Robbiani et al., 2020; Wu et al., 2020; Zost et al., 2020). Specifically, a total of 20 mAb families specific to SARS-CoV-2 epitopes were found to be close in sequence to HCDR3s in our data (with up to one amino acid difference), among which are 14 RBD-specific, one NTD-specific, and five S1-specific (reactive to either RBD or NTD) mAbs (Fig. 7B, Table S7). Interestingly, nine of these mAbs are shared among at least two individuals, and the NTD-specific antibody is found in eight individuals (Fig. 7B).
In addition, we found that two patients had exact HCDR3 matches to a previously identified antibody, S304, that has cross-reactivity to SARS-CoV-1 and SARS-CoV-2 (Pinto et al., 2020). We also observed in one patient an HCDR3 with only one amino acid difference to this antibody (Table S7). Importantly, the plasma in these patients showed a substantial binding level (OD\\({}_{450}\\)) to SARS-CoV-1 (Fig. S5), which indicates a possibility of cross-reactive antibody responses to SARS-CoV-1 and SARS-CoV-2.
We also investigated the matches between the RBD- and NTD-sorted single-cell receptors and the verified mAbs from previous studies (Brouwer et al., 2020; Han et al., 2020; Hurlburt et al., 2020; Kreye et al., 2020; Pinto et al., 2020; Robbiani et al., 2020; Wu et al., 2020; Zost et al., 2020). Although we found no matches between the heavy chain CDR3s of sorted single-cell receptors and the verified mAbs, we found a large number of matches between the \(\kappa\) and \(\lambda\) light chain CDR3s of the two sets (Fig. S8). Notably, 59 of 142 \(\mathrm{IG}_{\kappa}\) and 47 of 110 \(\mathrm{IG}_{\lambda}\) receptors from the RBD-reactive single cells, and 1 of 202 \(\mathrm{IG}_{\kappa}\) and 22 of 155 \(\mathrm{IG}_{\lambda}\) receptors from the NTD-reactive single cells, matched light chain CDR3s of mAbs in the respective subsets (Fig. S8). Given the low sequence diversity of light chain receptors, it remains to be seen whether these matches between the light chain mAbs and the sorted single-cell data are statistically significant--a question that would require modeling the generation and selection of the light chain receptor repertoire.
Lastly, we observed that the previously verified mAbs have a lower probability \(P_{\text{post}}(\sigma)\) of generation and entry to the periphery compared to the overall repertoire (Fig. 6C). This is in part expected, since the selection models used to evaluate these probabilities were trained on different repertoires than those from which the mAbs were originally harvested. Consistently, the evaluated probabilities for the sorted single-cell receptors are within the range of the bulk repertoire (Fig. 6C), as the two datasets were derived from the same cohort. It should also be noted that all of the verified mAbs and the sorted receptors from the single-cell data that we could match to the patients' repertoires have a relatively high probability \(P_{\text{post}}(\sigma)\) (Fig. 6D). This is not surprising, as it is very unlikely for rare BCRs (with small \(P_{\text{post}}(\sigma)\)) to be shared across different cohorts. Overall, our results are encouraging for vaccine development since they indicate that even common antibodies can confer specific responses against SARS-CoV-2.
## Discussion
COVID-19 will remain an ongoing threat to public health until an effective SARS-CoV-2 vaccine is available globally. Understanding the human B-cell immune response to SARS-CoV-2 is critical for vaccine development and assessment (Wec et al., 2020). A repertoire of immune receptor sequences represents a unique snapshot of the history of immune responses in an individual (Boyd et al., 2009; Georgiou et al., 2014; Kreer et al., 2020; Robins, 2013), and the changes in a repertoire during an infection can signal specific responses to pathogens (Horns et al., 2019; Nourmohammad et al., 2019). Identifying signatures of a functional response to a given pathogen from a pool of mostly unspecific BCRs collected from the blood is challenging--it is a problem of finding a needle in a haystack. Therefore, principled statistical inference approaches are necessary to extract functional signal from such data. Here, we systematically characterize the B cell repertoire response to SARS-CoV-2 in COVID-19 patients with different disease severity by combining evidence from the overall statistics of repertoires together with dynamics of clonal lineages during infection and the sharing of immune receptors among patients.
At the repertoire level, we showed that the HCDR3 of BCRs in COVID-19 patients are significantly longer than HCDR3 in healthy individuals, and the amino acid composition of this receptor region varies among cohorts of patients with mild, moderate, and severe symptoms. Moreover, we observed large-scale sharing of B-cell receptors among COVID-19 patients, consistent with previous findings in COVID-19 patients (Galson et al., 2020; Nielsen et al., 2020; Schultheiss et al., 2020). Sharing of receptors among individuals can signal common immune responses to a pathogen. However, BCR sharing can also be due to convergent recombination leading to the same receptor sequence or other experimental biases that influence statistics of shared sequences. These statistical nuances can substantially sway conclusions drawn from the sharing analysis and, therefore, should be carefully accounted for. Here, we established a null expectation of BCR sharing due to convergent recombination by inferring a model of receptor generation and migration to the periphery and used this null model to identify sequence outliers. Our analysis identified a subset of rare BCRs shared among COVID-19 patients, which appears to signal convergent responses to SARS-CoV-2.
Bulk B-cell repertoires predominantly contain a mixture of naive, memory, and plasma-B cells. At the early stages of viral infection, antigen-specific plasma B-cells may develop, which act as antibody factories and confer neutralization against the infecting pathogen (Wrammert et al., 2008). Almost all prior work on immune repertoires has focused on bulk repertoires, which are often easier to sample from and to analyze. Moreover, functional studies, using single-cell sequencing of antigen-sorted B-cell receptors, have often been disconnected from the large-scale analysis of receptor repertoires. Our study synergizes data from bulk and plasma B-cell sequencing with antigen-sorted single-cell B-cell receptors to draw a more complete picture of the human immune response to SARS-CoV-2. Importantly, our joint longitudinal analysis of the bulk and the plasma B-cell repertoires in COVID-19 patients brings insight into the dynamics of antigen-specific B-cells as well as the statistics of receptor sequence features associated with responses to SARS-CoV-2.
In addition to the statistics of repertoires, we observed that the activity of many B-cell lineages (i.e. mRNA production) in COVID-19 patients significantly increases during infection, accompanied by an increase in the binding level (OD\\({}_{450}\\)) of the patients' plasma to the RBD and NTD of SARS-CoV-2. Dynamics of clonal lineages during an infection provide significant insights into the characteristics of responsive antibodies (Horns et al., 2019; Nourmohammad et al., 2019). By taking advantage of data collected at multiple time points in most patients, we identified expanded lineages shared among patients and found 38 clonal lineages that are candidates for a response specific to SARS-CoV-2 antigens (Fig. 5, Table S5). Importantly, the over-representation of plasma B-cells among these shared expanding lineages signifies their potential role in mounting protective antibody responses against SARS-CoV-2. It should be noted that none of these 38 clonal lineages matched with the verified mAbs. This is in part expected since the verified mAbs that matched the bulk repertoires have relatively high probabilities \\(P_{\\text{post}}\\) (Fig. 6C), whereas these 38 lineages are chosen explicitly to be rare.
Our analysis of repertoire dynamics has identified a large-scale expansion of B-cell clonal lineages (5-15% of lineages) over the course of COVID-19 infections. However, it is hard to imagine that all of these expanding clones, which account for a sizeable portion of the repertoire, are engaged in responding to SARS-CoV-2 specifically. In contrast, our single-cell analysis identified only about 0.2% of receptors as reactive to RBD and only 0.02% as reactive to NTD epitopes (Fig. S1)--an estimate that is consistent with previous findings (Kreer et al., 2020). This disparity raises an outstanding question: why do we observe such a large-scale expansion of clonal lineages during an acute immune response?
Identifying antibodies with cross-reactive neutralization abilities against viruses in the SARS family is of significant interest. While cross-neutralizing antibodies have been isolated from COVID-19 patients (Brouwer et al., 2020; Liu et al., 2020; Zhou et al., 2020), it remains unclear how prevalent they are. Interestingly, in nine patients, we see a substantial increase in the binding level (OD\({}_{450}\)) of their plasma to SARS-CoV-1 epitopes during the course of COVID-19 infection. Moreover, in three patients, we identify a BCR identical or nearly identical to the heavy chain of antibody S304 (Pinto et al., 2020), which was previously isolated from a patient who recovered from a SARS-CoV-1 infection. This antibody was shown to be moderately cross-reactive to both SARS-CoV-1 and SARS-CoV-2, and our results further indicate a possibility for such cross-reactive antibodies to emerge naturally in response to SARS-CoV-2 (Brouwer et al., 2020; Lv et al., 2020; Rogers et al., 2020). Taken together, our findings provide substantial insight and strong implications for devising vaccines and therapies with broad applicability against SARS-CoV-2.
## Materials and Methods
### Data and code availability
BCR repertoire data and single-cell data can be accessed through:
[https://www.ncbi.nlm.nih.gov/bioproject/PRJNA645245](https://www.ncbi.nlm.nih.gov/bioproject/PRJNA645245)
[https://www.ncbi.nlm.nih.gov/bioproject/PRJNA679920](https://www.ncbi.nlm.nih.gov/bioproject/PRJNA679920)
All codes for data processing and statistical analysis can be found at: [https://github.com/StatPhysBio/covid-BCR](https://github.com/StatPhysBio/covid-BCR)
### Experimental Procedures
**Cell lines.** Sf9 cells (_Spodoptera frugiperda_ ovarian cells, female, ATCC catalogue no. CRL-1711) and High Five cells (_Trichoplusia ni_ ovarian cells, female; Thermo Fisher Scientific, Waltham, United States (US), catalogue number: B85502) were maintained in HyClone (GE Healthcare, Chicago, US) insect cell culture medium.
**Sample collection and PBMC isolation.** Specimens of heparinized blood were collected from the RT-PCR-confirmed COVID-19 patients at the Infectious Disease Centre of the Princess Margaret Hospital, Hong Kong. The study was approved by the institutional review board of the Hong Kong West Cluster of the Hospital Authority of Hong Kong (approval number: UW20-169). All study procedures were performed after informed consent was obtained. Day 1 of clinical onset was defined as the first day of the appearance of clinical symptoms. The severity of the COVID-19 cases was classified based on the adaptation of the Sixth Revised Trial Version of the Novel Coronavirus Pneumonia Diagnosis and Treatment Guidance. The severity of the patients was categorized as follows: Mild - no sign of pneumonia on imaging, mild clinical symptoms; Moderate - fever, respiratory symptoms, and radiological evidence of pneumonia; Severe - dyspnea, respiratory frequency \(>\)30/min, blood oxygen saturation \(\leq\)93%, partial pressure of arterial oxygen to fraction of inspired oxygen ratio \(<\)300, and/or lung infiltrates \(>\)50% within 24 to 48 hours; Critical - respiratory failure, septic shock, and/or multiple organ dysfunction or failure or death.
The blood samples were first centrifuged at 3000 xg for 10 minutes at room temperature for plasma collection. The remaining blood was diluted with an equal volume of PBS buffer, transferred onto the Ficoll-Paque Plus medium (GE Healthcare), and centrifuged at 400 xg for 20 minutes. Peripheral blood mononuclear cell (PBMC) samples were then collected and washed with cold RPMI-1640 medium three times. The isolated PBMC samples were finally stored in cell freezing solution (10% DMSO + 90% FBS) and kept at -80\({}^{\mathrm{o}}\)C until use.
**RNA extraction and reverse transcription.** Total RNA was extracted from \(5\times 10^{5}\) PBMC using the RNeasy Mini isolation kit (Qiagen) according to the manufacturer's protocol. Reverse transcription of the RNA samples was performed using the ProtoScript® II Reverse Transcriptase kit (New England Biolabs, NEB) with random hexamer primers according to the manufacturer's protocol. The thermal cycling conditions were designed as follows: 25\({}^{\mathrm{o}}\)C for 5 minutes, 42\({}^{\mathrm{o}}\)C for 60 minutes, and 80\({}^{\mathrm{o}}\)C for 5 minutes. The resulting cDNA samples were stored in a -80\({}^{\mathrm{o}}\)C freezer before PCR was performed.
**Amplification of B cell repertoire from the samples by PCR.** The cDNA samples were used as a template to amplify the antibody IgG heavy chain gene with six FR1-specific forward primers and one constant region-specific reverse primer using the Phusion® High-Fidelity DNA Polymerase. The primer sequences were the same as previously described (Wu et al., 2015); primer sequences are listed in Table S2. The thermal cycling conditions were set as follows: 98\({}^{\circ}\)C for 30 seconds; 30 cycles of 98\({}^{\circ}\)C for 10 seconds, 58\({}^{\circ}\)C for 15 seconds, and 72\({}^{\circ}\)C for 30 seconds; and 72\({}^{\circ}\)C for 10 minutes. Then 10 ng of the PCR product was used as a template for the next round of gene amplification with sample-specific barcode primers. The thermal cycling conditions were set as follows: 98\({}^{\circ}\)C for 3 min; 30 cycles of 98\({}^{\circ}\)C for 10 seconds, 58\({}^{\circ}\)C for 15 seconds, and 72\({}^{\circ}\)C for 15 seconds; and a final extension at 72\({}^{\circ}\)C for 10 min using Phusion® High-Fidelity DNA Polymerase. The PCR product was purified with the QIAquick Gel Extraction Kit (Qiagen) and quantified by NanoDrop spectrophotometer (Thermo Fisher).
**Protein expression and purification.** The receptor-binding domain (RBD, residues 319-541) and N-terminal domain (NTD, residues 14 to 305) of the SARS-CoV-2 spike protein (GenBank: QHD43416.1) as well as the RBD (residues 306-527) and NTD (residues 14-292) of SARS-CoV-1 spike protein (GenBank: ABF65836.1) were cloned into a customized pFastBac vector (Lv et al., 2020; Wec et al., 2020b). The RBD and NTD constructs were fused with an N-terminal gp67 signal peptide and a C-terminal His\\({}_{6}\\) tag. Recombinant bacmid DNA was generated using the Bac-to-Bac system (Life Technologies, Thermo Fisher Scientific). Baculovirus was generated by transfecting purified bacmid DNA into Sf9 cells using FuGENE HD (Promega, Madison, US) and subsequently used to infect suspension cultures of High Five cells (Life Technologies) at a multiplicity of infection (moi) of 5 to 10. Infected High Five cells were incubated at 28 \\({}^{\\circ}\\)C with shaking at 110 rpm for 72 h for protein expression. The supernatant was then concentrated using a Centramate cassette (10 kDa molecular weight cutoff for RBD, Pall Corporation, New York, USA). RBD and NTD proteins were purified by Ni-NTA Superflow (Qiagen, Hilden, Germany), followed by size exclusion chromatography and buffer exchange to phosphate-buffered saline (PBS).
**CD38\({}^{+}\) plasma B-cell enrichment**. CD38\({}^{+}\) plasma B-cells were isolated from the PBMC samples by performing two subsequent magnetic separation steps according to the manufacturer's protocol (Plasma Cell Isolation Kit II, human, Miltenyi Biotec). Briefly, non-plasma B-cells are labeled with magnetic beads combined with cocktail antibodies and separated using the MACS column. Then, CD38\({}^{+}\) plasma B-cells are directly labeled with CD38 MicroBeads and isolated from the pre-enriched B cell pool. Purified CD38\({}^{+}\) plasma B-cells were eluted and washed in PBS containing 2% (v/v) fetal bovine serum (FBS) and kept for the following RNA isolation step. In order to test the purity of the CD38\({}^{+}\) plasma B cells, we also added staining antibodies, 10 ul of Anti-human CD19-BV510 (BioLegend) and CD38-PE-Cy7 (BioLegend), and incubated them for 15 minutes in the dark in the refrigerator (2-8\({}^{\circ}\)C). Cells were finally fixed with 4% PFA for 20 minutes on ice. The stained samples were acquired by flow cytometry on a FACS Attune (Invitrogen) and analyzed with FlowJo software (Fig. S1).
**RBD and NTD protein-specific binding B-cell enrichment.**
B-cells were enriched from the PBMC samples according to the manufacturer's protocol (B Cell Isolation Kit II, human, Miltenyi Biotec). Briefly, non-B-cells are labeled with a cocktail of biotin-conjugated antibodies and separated by the MACS column. Purified B-cells were eluted and kept in PBS buffer with 2% (v/v) FBS. The enriched B cells were then incubated with 2 ug of Biotin-RBD or NTD protein for 30 min at 4\({}^{\circ}\)C. After incubation, Anti-Biotin MicroBeads were added and incubated for 30 min. RBD- and NTD-specific bead-bound B cells were washed, eluted in PBS, and stored on ice until use. In order to test the purity of the RBD- or NTD-specific B cells, we also added staining antibodies, 10 ul of Anti-human CD19-BV510 (BioLegend), and 2 ug of SARS-CoV-2 RBD-PE or NTD-PE and incubated them for one hour in the dark in the refrigerator (2-8\({}^{\circ}\)C). Cells were finally fixed with 4% PFA for 20 minutes on ice. The stained samples were acquired by flow cytometry on a FACS Attune (Invitrogen) and analyzed with FlowJo software (Fig. S1).
**Single B-cell 5' mRNA and VDJ sequencing.**
After RBD- or NTD-specific B-cell enrichment, cells were counted using 0.4% (w/v) trypan blue stain solution under the microscope and directly loaded on the 10X Chromium™ Single Cell A Chip. Single B-cell lysis and RNA first-strand synthesis were then carried out following the 10X Chromium™ Single Cell 5' Library & Gel Bead Kit protocol. The RNA samples were used for the subsequent B-cell VDJ library construction following the Chromium™ Single Cell V(D)J Enrichment Kits protocol. VDJ library sequencing was performed on a NovaSeq PE150, and the sequencing data were processed by Cell Ranger.
**ELISA.** A 96-well enzyme-linked immunosorbent assay (ELISA) plate (Nunc MaxiSorp, Thermo Fisher Scientific) was first coated overnight with 100 ng per well of purified recombinant protein in PBS buffer. The plates were then blocked with 100 ul of Chonblock blocking/sample dilution ELISA buffer (Chondrex Inc, Redmond, US) and incubated at room temperature for 1 h. Each human plasma sample was diluted to 1:100 in Chonblock blocking/sample dilution ELISA buffer. Each sample was then added into the ELISA plates for a two-hour incubation at 37\({}^{\circ}\)C. After extensive washing with PBS containing 0.1% Tween 20, each well in the plate was further incubated with the anti-human IgG secondary antibody (1:5000, Thermo Fisher Scientific) for 1 hour at 37\({}^{\circ}\)C. The ELISA plates were then washed five times with PBS containing 0.1% Tween 20. Subsequently, 100 ul of HRP substrate (Ncm TMB One; New Cell and Molecular Biotech Co. Ltd, Suzhou, China) was added into each well. After 15 min of incubation, the reaction was stopped by adding 50 ul of 2 M H\({}_{2}\)SO\({}_{4}\) solution and analyzed on a Sunrise (Tecan, Männedorf, Switzerland) absorbance microplate reader at 450 nm wavelength.
### Statistical Inference and Methods
**BCR preprocessing.** We used a similar procedure for processing of the bulk and the plasma B-cell receptor repertoires. For initial processing of the raw reads, we used pRESTO (version 0.5.13) (Vander Heiden et al., 2014) to assemble paired-end reads, remove sequences with a mean quality score less than 30, mask primer subsequences, and collapse duplicate sequences into unique sequences. The small fraction of paired-end reads that overlapped were assumed to be anomalous and were discarded from the analysis. Additionally, after preprocessing with pRESTO, we discarded unique reads that contained ambiguous calls (N's) in their receptor sequence.
**BCR error correction.** We performed two rounds of error correction on sequences that passed the quality control check. In the first round, we clustered singletons and other low-frequency sequences into larger sequences if they were similar in sequence. The intent of this round was to correct for sequencing errors (e.g. from reverse transcription of mRNA to cDNA) that caused large abundance clones to be split into many similar sequences. We used two parameters: \(\Delta_{r}=1.0\), the marginal Hamming distance tolerance per decade in log-ratio abundance (each \(\log_{10}\) unit allowing \(\Delta_{r}\) additional sequence differences), and \(\Delta_{a}=1.0\), the marginal abundance tolerance of clusterable sequences per decade in log-ratio abundance (each \(\log_{10}\) unit allowing abundance \(\Delta_{a}\) higher as clusterable). For example, a sequence with abundance \(a_{1}\) and a Hamming distance \(d\) away from a higher abundance sequence with abundance \(a_{2}\) was absorbed into the latter only if \(d\leq\Delta_{r}\ \log_{10}\frac{a_{2}}{a_{1}}\) and \(a_{1}\leq\Delta_{a}\log_{10}\frac{a_{2}}{a_{1}}\). We used the output of this first round as input for the second round of error correction, in which we more aggressively targeted correction of reverse transcriptase errors. In the second round, we used two different parameters to assess sequence similarity: \(d_{\text{thresh}}=2.0\), the Hamming distance between sequences, and \(a_{\text{thresh}}=1.0\), the ratio of sequence abundances. A sequence with abundance \(a_{1}\) and a Hamming distance \(d\) away from a sequence of larger abundance \(a_{2}\) was absorbed into the latter only if \(d\leq d_{\text{thresh}}\) and the ratio of the sequence abundances was greater than \(a_{\text{thresh}}\), i.e. \(\frac{a_{2}}{a_{1}}\geq a_{\text{thresh}}\). This round of error correction allows much larger abundance sequences to potentially be clustered than is possible in the first round. For both of the above steps, we performed clustering greedily and approximately by operating on sequences sorted by descending abundance, assigning the counts of the lower abundance sequence to the higher abundance one iteratively.
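As a rough illustration of the first error-correction round, the sketch below applies the absorption rule above to a dictionary of unique sequences and their abundances. It is a simplification for clarity, not the optimized greedy pass used in the actual pipeline, and it assumes equal-length sequences.

```python
# Hedged sketch of the first round of abundance-based error correction.
import math

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def error_correct_round1(counts, delta_r=1.0, delta_a=1.0):
    """counts: dict mapping unique sequence -> abundance.
    A low-abundance sequence (a1) is absorbed into a higher-abundance neighbour
    (a2) if d <= delta_r*log10(a2/a1) and a1 <= delta_a*log10(a2/a1)."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    corrected = dict(counts)
    for i, seq in enumerate(ordered):
        if corrected[seq] == 0:          # already absorbed elsewhere
            continue
        for parent in ordered[:i]:
            a1, a2 = corrected[seq], corrected[parent]
            if a2 <= a1 or len(parent) != len(seq):
                continue
            log_ratio = math.log10(a2 / a1)
            if hamming(seq, parent) <= delta_r * log_ratio and a1 <= delta_a * log_ratio:
                corrected[parent] += a1   # assign counts to the larger sequence
                corrected[seq] = 0
                break
    return {s: c for s, c in corrected.items() if c > 0}
```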
After error correction, the sequences still contained a large number of singletons, i.e. sequences with no duplicates (Tables S1, S4). We discarded these singletons from all analyses that relied on statistics of unique sequences (i.e., the results presented in Figs. S2A-C and S3E-G).
**Unproductive BCRs.** Due to a larger sequencing depth in healthy individuals, we were able to reconstruct relatively large unproductive BCR lineages. Unproductive sequences are BCRs that are generated but, due to a frameshift or insertion of stop codons, are never expressed. These BCRs reside with productive (functional) BCRs in a nucleus and undergo hypermutation during B-cell replication and, therefore, provide a suitable null expectation for generation of BCRs in immune repertoires.
**Clonal lineage reconstruction.** To identify BCR clonal lineages, we first grouped sequences by their assigned IGHV gene, IGHJ gene, and HCDR3 length and then used single-linkage clustering with a threshold of 85% Hamming distance. A similar threshold has been suggested previously by (Gupta et al., 2017) to identify BCR lineages. Defining size as the sum of the number of unique sequences per time point within a lineage, clusters of size smaller than three were discarded from most analyses. They were retained only for training IGoR and SONIA models and were not discarded in the sharing analysis only if the progenitor of that small cluster was also a progenitor of a cluster of size at least three in another patient. For each cluster, there may have been multiple inferred naive sequences, as this was an uncertain estimate. Therefore, the most common naive sequence was chosen to be the naive progenitor of the lineage. When the most common naive sequence of a productive lineage contained a stop codon, the progenitor of the lineage was chosen iteratively by examining the next most common naive sequence until it did not contain any stop codons. If all inferred naive sequences in a productive lineage had a stop codon, that lineage was discarded from the analysis. Tables S1 and S4 show the statistics of constructed clonal lineages in each individual for the bulk repertoire and combined bulk+plasma B-cell repertoire, respectively.
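A minimal sketch of this lineage-reconstruction step is given below, assuming the 85% threshold means two HCDR3s are linked when they are at least 85% identical (normalized Hamming distance of at most 0.15); the receptor records and field names are hypothetical.

```python
# Hedged sketch: group receptors by (IGHV, IGHJ, HCDR3 length), then cluster
# HCDR3s by single linkage under a normalized Hamming distance cutoff.
from collections import defaultdict

def normalized_hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def single_linkage(seqs, max_dist=0.15):
    """Connected components of equal-length HCDR3s linked below max_dist."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if normalized_hamming(seqs[i], seqs[j]) <= max_dist:
                parent[find(i)] = find(j)
    clusters = defaultdict(list)
    for i in range(len(seqs)):
        clusters[find(i)].append(i)
    return list(clusters.values())

def reconstruct_lineages(receptors):
    """receptors: list of dicts with hypothetical keys v_gene, j_gene, hcdr3."""
    groups = defaultdict(list)
    for idx, r in enumerate(receptors):
        groups[(r["v_gene"], r["j_gene"], len(r["hcdr3"]))].append(idx)
    lineages = []
    for members in groups.values():
        seqs = [receptors[i]["hcdr3"] for i in members]
        for cluster in single_linkage(seqs):
            lineages.append([members[i] for i in cluster])
    return lineages
```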
**Mapping of single-cell data onto reconstructed clonal lineages:** Like the repertoire datasets, the single-cell sequences were annotated by abstar (Briney and Burton, 2018). For each receptor acquired by single-cell sequencing, we identified a subset of reconstructed clonal lineages from the bulk repertoire which had identical HCDR3 length as the sequence and which also had an IGHV gene which was 90% similar to that of the single-cell receptor. This flexibility in V-gene choice would identify functionally homologous receptors and associate a receptor to a lineage with a sequence divergence in the V-segment, compatible with the expectation under somatic hypermutations (Lee et al., 2017). A single-cell sequence was matched to a reconstructed clonallineage from this subset if its HCDR3 could be clustered with other members of the lineages, using single-linkage clustering with a similarity threshold of 85% Hamming distance (similar to the criteria for lineage reconstruction for bulk repertoires).
**Inference of generation probability and selection for BCRs.** We used IGoR (version 1.4) (Marcou et al., 2018) to obtain a model of receptor generation. This model characterized the probability of generation \\(P_{\\text{gen}}(\\sigma)\\) of a receptor dependent on the features of the receptor, including the IGHV, IGHD, and IGHJ genes and the deletion and insertion profiles at the VD and DJ junctions. To characterize the parameters of this model, we trained IGoR on the progenitors of unproductive lineages, regardless of size, pooled from the bulk repertoire of all individuals, restricted to progenitors whose HCDR3 began with a cysteine and ended with a tryptophan. For consistency with our receptor annotations based on abstar, we used abstar's genomic templates and the HCDR3 anchors of abstar's reference genome as inputs for IGoR's genomic templates and HCDR3 anchors. \\(P_{\\text{gen}}(\\sigma)\\) distributions of the healthy and COVID-19 cohorts in this study are shown in Fig. S4A.
We used SONIA (version 0.45) (Sethna et al., 2020) to infer a selection model for progenitors of productive clonal lineages. The SONIA model evaluated selection factors \(q\) to characterize the deviation in the probability \(P_{\text{post}}(\sigma)\) to observe a functional sequence in the periphery from the null expectation based on the generation probability \(P_{\text{gen}}(\sigma)\): \(P_{\text{post}}(\sigma)=\frac{1}{Z}P_{\text{gen}}(\sigma)\,e^{\sum_{f:\,\text{features}}q_{f}(\sigma)}\), where \(Z\) is the normalization factor and \(q_{f}(\sigma)\) are selection factors dependent on the sequence features \(f\). These sequence features include IGHV-gene and IGHJ-gene usages and HCDR3 length and amino acid composition (Sethna et al., 2020).
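In code, the post-selection probability amounts to reweighting the generation probability by the exponentiated sum of feature-specific selection factors. The sketch below is schematic: the feature encoding and the example numbers are ours, not SONIA's internal representation.

```python
# Hedged sketch of the post-selection receptor probability
# P_post(sigma) = (1/Z) * P_gen(sigma) * exp(sum_f q_f(sigma)).
import math

def p_post(p_gen, features, q, log_z):
    """`features` lists the sequence features active for this receptor (e.g. its
    IGHV gene, IGHJ gene, HCDR3 length); `q` maps each feature to its selection
    factor; `log_z` is log(Z), the model's normalization constant."""
    energy = sum(q.get(f, 0.0) for f in features)
    return p_gen * math.exp(energy - log_z)

# Illustrative call with made-up selection factors and probabilities:
q = {"IGHV4-59": 0.3, "IGHJ4": -0.1, ("HCDR3_len", 18): 0.2}
print(p_post(p_gen=1e-12,
             features=["IGHV4-59", "IGHJ4", ("HCDR3_len", 18)],
             q=q, log_z=0.05))
```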
In our analysis, we used the SONIA left-right model with independent IGHV- and IGHJ-gene usages (Sethna et al., 2020). We used the output from IGoR as the receptor generation model for SONIA. We trained four cohort-specific SONIA models on progenitors of productive lineages, regardless of size, pooled from the bulk repertoire of all individuals within a cohort, restricted to progenitors whose HCDR3 began with a cysteine and ended with a tryptophan. 150 epochs, \\(L_{2}\\) regularization with strength 0.001, and 500,000 generated sequences were used to train each SONIA model. Fig. 3 shows the distributions for the probabilities of observing productive receptors sampled from each cohort \\(P_{\\text{post}}(\\sigma)\\) and the correlation of feature-specific selection factors \\(q_{f}\\) among cohorts. A SONIA model was also trained on all the productive lineage progenitors in the GRP dataset (Briney et al., 2019) and used 5,000,000 generated sequences, keeping the other parameters unchanged. We refrain from comparing directly \\(P_{\\text{post}}(\\sigma)\\) associated with GRP BCRs to BCRs in this study due to experimental differences.
It should be noted that the (pre-selection) generation model \\(P_{\\text{gen}}(\\sigma)\\) inferred by IGoR (Marcou et al., 2018) is robust to sequence errors due to experimental errors or hypermutations in the IgG repertoires. However, hypermutations in BCRs could introduce errors in inference of selection models and estimation of receptor probabilities by SONIA (Sethna et al., 2020). Therefore, we have restricted our selection analyses to only the inferred progenitors of clonal lineages. Although the inferred progenitors of lineages can still deviate from the _true_ (likely IgM naive) progenitors, the selection models inferred from _ensembles_ of inferred progenitors in IgG repertoires seem to be comparable to the models inferred from the IgM repertoires (Ruiz Ortega et al., 2021). The resulting selection models, trained on either true or inferred progenitors, reflect preferences for sequence features of unmutated receptors, including IGHV- and IGHJ- genes and HCDR3 length and composition, but they do not account for the hypermutation preferences that may distinguish one cohort from another.
**Characterizing the robustness of selection inference.** To test the sensitivity of the inferred selection models to the size of the training sets, we down-sampled the receptor data of each COVID-19 cohort to a size comparable to the smallest cohort, i.e., the healthy repertoire sequenced in this study. This down-sampling resulted in two independent training datasets for the mild COVID-19 cohort, 13 independent training datasets for the moderate COVID-19 cohort, and three independent training datasets for the severe COVID-19 cohort. Though this down-sampling resulted in over 400 independent training datasets for the GRP, we elected to use only 15. We then inferred a separate selection model with SONIA for each of these training datasets and used each model to evaluate the receptor log-probabilities \(\log_{10}P_{\text{post}}(\sigma)\) for a set of 500,000 generated receptors. The evaluated probabilities are strongly correlated between models inferred from the down-sampled data in each cohort, with a Pearson correlation of \(\mathrm{r}>0.99\) and \(\mathrm{p}\)-value = 0 (Fig. S4C-F).
We used a similar approach to compare the selection model inferred from the healthy repertoires sequenced in this study and the GRP study (Briney et al., 2019). Fig. S4B shows that, using the model inferred with our healthy repertoire and 30 down-sampled independently inferred selection models using the GRP dataset, the evaluated log-probabilities \\(\\log_{10}P_{\\mathrm{post}}(\\sigma)\\) based on these two datasets are strongly correlated, with a Pearson correlation of \\(\\mathrm{r}>0.99\\) and \\(\\mathrm{p}\\)-value = 0 (Fig. S4B).
**Characterizing repertoire diversity.** We quantified the diversity of each cohort by evaluating the entropy of receptor sequences in each cohort. Entropy can be influenced by the size of the training dataset for the selection models. To produce reliable estimates of repertoires' diversities (and entropies), we used the procedure described above to learn independent selection models for subsampled repertoires in each cohort. We then used the inferred IGoR and SONIA models to generate 500,000 synthetic receptors based on each of the subsampled, cohort-specific models. We evaluated cohort entropies \\(H\\) as the expected log-probabilities to observe a functional sequence in the respective cohort: \\(H=-\\sum_{\\sigma}P_{\\mathrm{post}}(\\sigma)\\log P_{\\mathrm{post}}(\\sigma)\\) ; the estimates based on the generated receptors are reported in the main text. The error bars reported for these entropy estimates are due to variations across the inferred models in each cohort.
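Because the entropy is the expected surprise under \(P_{\text{post}}\), it can be estimated by averaging \(-\log_{2}P_{\text{post}}(\sigma)\) over receptors generated from the model, as in the sketch below (the probability values shown are fabricated placeholders).

```python
# Hedged sketch of the Monte Carlo entropy estimate: receptors sigma_i are
# sampled from the inferred model and represented here only by their evaluated
# probabilities P_post(sigma_i); H = E[-log2 P_post] is the mean surprise.
import math

def repertoire_entropy_bits(p_post_of_generated):
    """p_post_of_generated: iterable of P_post(sigma) values for receptors
    generated from the same model; returns the entropy estimate in bits."""
    vals = [-math.log2(p) for p in p_post_of_generated if p > 0]
    return sum(vals) / len(vals)

# Illustrative call on fabricated probabilities:
print(repertoire_entropy_bits([1e-12, 3e-13, 5e-11, 2e-12]))
```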
For comparison, we also evaluated the entropy estimated on the repertoire data in each cohort, which showed a similar pattern to the estimates from the generated cohorts (in the main text). Specifically, the entropy of BCR repertoires estimated from the data follows: \(39.8\pm 0.3\) bits in healthy individuals, \(41.9\pm 0.7\) bits for patients in the mild cohort, \(42.7\pm 0.3\) bits for patients in the moderate cohort, and \(42.9\pm 0.5\) bits for patients in the severe cohort. The error bars indicate the standard error due to differences among individuals within a cohort.
**Comparing selection between repertoires of cohorts.** Selection models enable us to characterize the sequence features of immune repertoires that differ between cohorts. We evaluated the Jensen-Shannon divergence \(D_{JS}(r,r^{\prime})\) between the distribution of repertoires \(r\) and \(r^{\prime}\), \(P^{r}_{\text{post}}\) and \(P^{r^{\prime}}_{\text{post}}\), defined as \[D_{JS}(r,r^{\prime})=\frac{1}{2}\sum_{\sigma:\,\text{sequences}}P_{\text{post}}^{r}(\sigma)\log\frac{P_{\text{post}}^{r}(\sigma)}{\left(P_{\text{post}}^{r}(\sigma)+P_{\text{post}}^{r^{\prime}}(\sigma)\right)/2}+\frac{1}{2}\sum_{\sigma:\,\text{sequences}}P_{\text{post}}^{r^{\prime}}(\sigma)\log\frac{P_{\text{post}}^{r^{\prime}}(\sigma)}{\left(P_{\text{post}}^{r}(\sigma)+P_{\text{post}}^{r^{\prime}}(\sigma)\right)/2}=\frac{1}{2}\sum_{\sigma:\,\text{sequences}}P_{\text{post}}^{r}(\sigma)\log\frac{2\,Q^{r}(\sigma)}{Q^{r}(\sigma)+Q^{r^{\prime}}(\sigma)}+\frac{1}{2}\sum_{\sigma:\,\text{sequences}}P_{\text{post}}^{r^{\prime}}(\sigma)\log\frac{2\,Q^{r^{\prime}}(\sigma)}{Q^{r}(\sigma)+Q^{r^{\prime}}(\sigma)}\]
where we used the relationship between a receptor's generation probability \(P_{\text{gen}}(\sigma)\) and its probability after selection \(P_{\text{post}}^{r}(\sigma)\), using the inferred selection factor \(Q^{r}(\sigma)=\frac{1}{Z}\,e^{\sum_{f:\,\text{features}}q_{f}^{r}(\sigma)}\) in repertoire \(r\): \(P_{\text{post}}^{r}(\sigma)=P_{\text{gen}}(\sigma)\,Q^{r}(\sigma)\). The Jensen-Shannon divergence \(D_{JS}(r,r^{\prime})\) is a symmetric measure of distance between two repertoires, which we can calculate using their relative selection factors (Isacchini et al., 2021). Fig. 3 shows the expected partial Jensen-Shannon divergences evaluated over five independent realizations of 100,000 generated sequences for each partial selection model. The error bars show the variations of these estimates over the five independent realizations in this procedure.
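A Monte Carlo estimate of this divergence only requires the selection factors \(Q^{r}\) and \(Q^{r^{\prime}}\) evaluated on receptors sampled from each model; the sketch below assumes those values have already been computed and is our simplification of the procedure.

```python
# Hedged sketch of the Monte Carlo Jensen-Shannon divergence between two
# repertoire models r and r', using only their selection factors Q evaluated
# on receptors sampled from each model.
import math

def jsd(q_r_on_r, q_rp_on_r, q_r_on_rp, q_rp_on_rp):
    """Each argument is a list of Q factors: e.g. q_rp_on_r[i] is Q^{r'} of the
    i-th receptor sampled from model r. Returns the divergence in nats."""
    term_r = sum(math.log(2 * qr / (qr + qrp))
                 for qr, qrp in zip(q_r_on_r, q_rp_on_r)) / len(q_r_on_r)
    term_rp = sum(math.log(2 * qrp / (qr + qrp))
                  for qr, qrp in zip(q_r_on_rp, q_rp_on_rp)) / len(q_r_on_rp)
    return 0.5 * term_r + 0.5 * term_rp
```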
**Clonal lineage expansion.** We studied clonal lineage expansion of BCR repertoires in individuals that showed an increase in the binding level (OD\\({}_{450}\\)) of their plasma to SARS-CoV-2 (RBD) during infection (Figs. 5A, S5): patients 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14. Other individuals showed no increase in IgG binding to SARS-CoV-2 (RBD), either due to already high levels of binding at early time points or to natural variation and noise (Fig. S5). Our expansion test compared two time points. Therefore, for individuals with three time points, we combined data from different time points such that the separated times coincided with larger changes in binding levels (OD\\({}_{450}\\)). Specifically, we combined the last two time points for patients 2 and 7 and the first two time points for patient 9. In addition, we combined replicates at the same time point and filtered out small lineages with size less than three, where size was defined as the sum of the amount of unique sequences per time-point within a lineage.
To test for expansion, we compared lineage abundances (i.e., total number of reads in a lineage) between early and late time points. Many lineages appeared only in one time point due to the sparse sampling of clonal lineages and the cells that generate them (Fig. S8). Therefore, we tested for expansion only for lineages that had nonzero abundances at both time points.
Our expansion test relied on comparing the relative abundance of a given lineage with other lineages. However, due to primer-specific amplification biases, abundances were not comparable between reads amplified with different primers. Therefore, in our analysis we only compare a lineage with all other lineages that were amplified with the same primer.
We applied a hypergeometric test (Fisher's exact test) to characterize significance of abundance fold change for a focal lineage. A similar method was used to study clonal expansion in TCRs (DeWitt et al., 2015). For each focal clonal lineage \\(i\\) (in a given individual), we defined a \\(2\\ \\times 2\\) contingency matrix \\(\\mathcal{C}\\),
\\[\\mathcal{C}=\\begin{pmatrix}n_{i}^{\\text{early}}&N_{/i}^{\\text{early}}\\\\ n_{i}^{\\text{late}}&N_{/i}^{\\text{late}}\\end{pmatrix}\\]
where \\(n_{i}^{\\text{early}}\\) and \\(n_{i}^{\\text{late}}\\) are the abundances of the focal lineage at the early and late time, and \\(N_{/i}^{\\text{early}}\\) and \\(N_{/i}^{\\text{late}}\\) are the total abundances of all reads (with the same primer) minus those from lineage \\(i\\) at the early and late times. The ratio \\(\\frac{\\frac{n_{i}^{\\text{late}}}{n_{i}^{\\text{early}}}}{\\frac{N_{/i}^{\\text{late }}}{n_{/i}^{\\text{early}}}}\\) describes the fold change, or odds ratio, of lineage \\(i\\) relative to the rest of the reads in the same primer group. Based on the contingency matrix \\(\\mathcal{C}\\), one-sided p-values for Fisher's exact test were calculated using the \"fisher.test\" function in R version 4.0. Fold change and p-values are shown in Fig. S6G.
To determine a significance threshold for the Fisher's exact test, we examined the replicate data from samples collected at the same time point in each individual, because we did not expect any significant expansion among replicates. We performed the expansion test on pairs of replicates (Fig. S6C) and compared the empirical cumulative distributions of the time-point and replicate expansion data (Fig. S6E,F) (Storey, 2002; Storey and Tibshirani, 2003). We chose a p-value threshold of \(10^{-300}\), at which there were 12.3 times as many significant expansions as in the replicate data, and therefore the false discovery rate was approximately \(1/(1+12.3)=0.075\).
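The threshold choice can be summarized as in the sketch below: the same test is run on same-day replicate pairs, and the false discovery rate at a candidate threshold is approximated from the ratio of discoveries. This assumes the two sets of comparisons are of comparable size; any normalization for unequal numbers of tests is omitted here.

```python
# Hedged sketch of the replicate-based false-discovery calibration.
def empirical_fdr(pvalues_timepoints, pvalues_replicates, threshold):
    """Approximate the FDR at a p-value threshold as 1/(1 + ratio), where ratio
    is the number of significant time-point comparisons per significant
    replicate comparison (replicates serve as the no-expansion control)."""
    hits_time = sum(p <= threshold for p in pvalues_timepoints)
    hits_rep = sum(p <= threshold for p in pvalues_replicates)
    if hits_rep == 0:
        return 0.0  # no replicate discoveries at this threshold
    return 1.0 / (1.0 + hits_time / hits_rep)
```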
**Significance of BCR sharing among individuals.** The probability that receptor \\(\\sigma\\) is shared among a given number of individuals due to convergent recombination can be evaluated based on the probability to observe a receptor in the periphery \\(P_{\\text{post}}(\\sigma)\\), the size of the cohort \\(M\\), and the size of the repertoire (sequence sample size) \\(N\\). First, we evaluated the probability \\(\\rho(\\sigma;N)\\) that receptor \\(\\sigma\\) with probability \\(P_{\\text{post}}(\\sigma)\\) appears at least once in a sample of size \\(N\\),
\\[\\rho(\\sigma;N)=1-\\left(1-P_{\\text{post}}(\\sigma)\\right)^{N}\\simeq 1-e^{-NP_{\\text{post}}}\\]
The probability that receptor \\(\\sigma\\) is shared among \\(m\\) individuals out of a cohort of \\(M\\) individuals, each with a (comparable) sample size \\(N\\), follows the binomial distribution,
\\[P_{\\text{share}}(\\sigma;m,M,N)=\\binom{M}{m}[\\rho(\\sigma;N)]^{m}[1-\\rho(\\sigma; N)]^{M-m}\\]
We aimed to identify shared receptors that were outliers such that their probability of sharing is too small to be explained by convergent recombination or other biases in the data. To do so, we identified the receptors with the smallest sharing probabilities \\(P_{\\text{share}}\\) and found a threshold of \\(P_{\\text{post}}\\) (dashed lines in Fig. 6 and Fig. S11) at the 2% quantile of \\(P_{\\text{share}}\\) in the data. Specifically, since \\(P_{\\text{share}}\\) is a function of \\(P_{\\text{post}}\\) and \\(m\\) (number of individuals sharing), for each \\(m\\) we solved for \\(P_{\\text{post}}\\) such that \\(P_{\\text{share}}\\ =\\ c\\), and tuned the constant \\(c\\) such that only 2% of the data lay below \\(P_{\\text{share}}\\). This was a conservative choice to identify the rare shared outliers in the data.
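The sketch below spells out this null model: \(\rho\) and \(P_{\text{share}}\) follow the formulas above, while the threshold function uses the small-\(\rho\) approximation \(P_{\text{share}}\approx\binom{M}{m}\rho^{m}\) to invert a cutoff \(c\). The closed-form inversion and the example numbers are our simplifications, not the exact quantile-matching procedure used in the analysis.

```python
# Hedged sketch of the convergent-recombination null model for receptor sharing.
from math import comb, exp, log

def rho(p_post, n_sample):
    """Probability that a receptor with probability p_post appears at least
    once in a sample of n_sample receptors."""
    return 1.0 - exp(-n_sample * p_post)

def p_share(p_post, m, n_individuals, n_sample):
    """Binomial probability that the receptor is observed in exactly m of
    n_individuals repertoires, each of (comparable) sample size n_sample."""
    r = rho(p_post, n_sample)
    return comb(n_individuals, m) * r**m * (1.0 - r)**(n_individuals - m)

def p_post_threshold(m, n_individuals, n_sample, c):
    """Approximate P_post below which sharing among m individuals is rarer than
    the cutoff c, using P_share ~ C(M, m) * rho^m (valid for rho << 1)."""
    rho_star = (c / comb(n_individuals, m)) ** (1.0 / m)
    return -log(1.0 - rho_star) / n_sample

# Illustrative call with placeholder cohort and sample sizes:
print(p_post_threshold(m=3, n_individuals=19, n_sample=100_000, c=1e-4))
```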
## Acknowledgments
This work was supported by DFG grant (SFB1310) on Predictability in Evolution (A.N., Z.M., J.O., G.I.), the Max Planck Society through MPRG funding (A.N., Z.M., J.O., G.I.), Department of Physics at the University of Washington (A.N., Z.M.), Royalty Research Fund at the University of Washington (A.N., Z.M.), NIH NIAID F31AI150163 (W.S.D.), Calmette and Yersin scholarship from the Pasteur International Network Association (H.L.), Bill and Melinda Gates Foundation OPP1170236 (I.A.W.), a startup fund at the University of Illinois at Urbana-Champaign (N.C.W.), US National Institutes of Health (contract no. HHSN272201400006C) (J.S.M.P.), National Natural Science Foundation of China (NSFC)/Research Grants Council (RGC) Joint Research Scheme (N_HKU737/18) (C.K.P.M. and J.S.M.P.), and the Research Grants Council of the Hong Kong Special Administrative Region, China (Project no. T11-712/19-N) (J.S.M.P.). We acknowledge the support of the clinicians who facilitated this study, including Drs Wai Shing Leung, Jacky Man Chun Chan, Thomas Shiu Hong Chik, Chris Yau Chung Choi, John Yu Hong Chan, Daphne Pui-Lin Lau, and Ying Man Ho; the dedicated clinical team at Infectious Diseases Centre, Princess Margaret Hospital, Hospital Authority of Hong Kong; and the patients who kindly consented to participate in this investigation. We also thank the Center for PanorOmic Sciences (CPOS), LKS Faculty of Medicine, and University of Hong Kong for their support on next-generation sequencing and acknowledge the use of the computational infrastructure provided by the Hyak supercomputer system funded by the student technology fund (STF) at the University of Washington.
## Author Contributions
Z.M., H.Lv, J.O., I.A.W., J.S.M.P., N.C.W., A.N., and C.K.P.M. conceived and designed the study. O.T.-Y.T. organized patient recruitment, data collection, and sampling. H.Lv, G.K.Y., W.W.N., and C.K.P.M. prepared the next-generation sequencing libraries and performed the ELISA experiments. M.Y., H.Liu, and N.C.W. expressed and purified the proteins. Z.M., J.O., W.S.D., G.I., and A.N. analyzed the data and performed the modelling work and statistical inference. Z.M., H.Lv, N.C.W., A.N., and C.K.P.M. wrote the paper. All authors reviewed and edited the paper.
## Competing Interests
The authors declare no competing interests.
## References
* Almagro et al. (2012) Almagro, J.C., Raghunathan, G., Beil, E., Janecki, D.J., Chen, Q., Dinh, T., LaCombe, A., Connor, J., Ware, M., Kim, P.H., et al. (2012). Characterization of a high-affinity human antibody with a disulfide bridge in the third complementarity-determining region of the heavy chain. J Mol Recognit _25_, 125-135.
* Barnes et al. (2020) Barnes, C.O., West, A.P., Huey-Tubman, K.E., Hoffmann, M.A.G., Sharaf, N.G., Hoffman, P.R., Koranda, N., Gristick, H.B., Gaebler, C., Muecksch, F., et al. (2020). Structures of human antibodies bound to SARS-CoV-2 spike reveal common epitopes and recurrent features of antibodies. Cell _182_, 828-842.e16.
* Boyd et al. (2009) Boyd, S.D., Marshall, E.L., Merker, J.D., Maniar, J.M., Zhang, L.N., Sahaf, B., Jones, C.D., Simen, B.B., Hanczaruk, B., Nguyen, K.D., et al. (2009). Measurement and clinical monitoring of human lymphocyte clonality by massively parallel VDJ pyrosequencing. Sci Transl Med \\(1\\), 12ra23.
* Briney and Burton (2018) Briney, B., and Burton, D.R. (2018). Massively scalable genetic analysis of antibody repertoires. BioRxiv 10.1101/447813.
* Briney et al. (2019) Briney, B., Inderbitzin, A., Joyce, C., and Burton, D.R. (2019). Commonality despite exceptional diversity in the baseline human antibody repertoire. Nature _566_, 393-397.
* Brouwer et al. (2020) Brouwer, P.J.M., Caniels, T.G., Straten, K. van der, Snitselaar, J.L., Aldon, Y., Bangaru, S., Torres, J.L., Okba, N.M.A., Claireaux, M., Kerster, G., et al. (2020). Potent neutralizing antibodies from COVID-19 patients define multiple targets of vulnerability. Science _369_, 643-650.
* Burnet (1959) Burnet, F.M. (1959). The clonal selection theory of acquired immunity (Vanderbilt University Press).
* Burnet (1960) Burnet, F.M. (1960). Immunity as an aspect of general biology. In Mechanisms of Antibody Formation, M. Holub, and J. Jaroskova, eds. (Prague: Publishing House of Czech. Acad. Sci.), pp. 15-21.
* Cao et al. (2020) Cao, Y., Su, B., Guo, X., Sun, W., Deng, Y., Bao, L., Zhu, Q., Zhang, X., Zheng, Y., Geng, C., et al. (2020). Potent neutralizing antibodies against SARS-CoV-2 identified by high-throughput single-cell sequencing of convalescent patients' B cells. Cell.
* Chi et al. (2020) Chi, X., Yan, R., Zhang, J., Zhang, G., Zhang, Y., Hao, M., Zhang, Z., Fan, P., Dong, Y., Yang, Y., et al. (2020). A neutralizing human antibody binds to the N-terminal domain of the spike protein of SARS-CoV-2. Science.
* Cyster and Allen (2019) Cyster, J.G., and Allen, C.D.C. (2019). B cell responses: cell interaction dynamics and decisions. Cell _177_, 524-540.
* DeWitt et al. (2015) DeWitt, W.S., Emerson, R.O., Lindau, P., Vignali, M., Snyder, T.M., Desmarais, C., Sanders, C., Utsugi, H., Warren, E.H., McElrath, J., et al. (2015). Dynamics of the cytotoxic T cell response to a model of acute viral infection. J Virol _89_, 4517-4526.
* Elhanati et al. (2014) Elhanati, Y., Murugan, A., Callan, C.G., Mora, T., and Walczak, A.M. (2014). Quantifying selection in immune receptor repertoires. Proc Natl Acad Sci U S A _111_, 9875-9880.
* Elhanati et al. (2018) Elhanati, Y., Sethna, Z., Callan, C.G., Mora, T., and Walczak, A.M. (2018). Predicting the spectrum of TCR repertoire sharing with a data-driven model of recombination. Immunol Rev _284_, 167-179.
* Ellinghaus et al. (2020) Ellinghaus, D., Degenhardt, F., Bujanda, L., Buti, M., Albillos, A., Invernizzi, P., Fernandez, J., Prati, D., Baselli, G., Asselta, R., et al. (2020). Genomewide association study of severe COVID-19 with respiratory failure. N Engl J Med _383_, 1522-1534.
* Galson et al. (2020) Galson, J.D., Schaetzle, S., Bashford-Rogers, R.J.M., Raybould, M.I.J., Kovaltsuk, A., Kilpatrick, G.J., Minter, R., Finch, D.K., Dias, J., James, L., et al. (2020). Deep sequencing of B cell receptor repertoires from COVID-19 patients reveals strong convergent immune signatures. BioRxiv 10.1101/2020.05.20.106294.
* Georgiou et al. (2014) Georgiou, G., Ippolito, G.C., Beausang, J., Busse, C.E., Wardemann, H., and Quake, S.R. (2014). The promise and challenge of high-throughput sequencing of the antibody repertoire. Nat Biotechnol _32_, 158-168.
* Guan et al. (2020) Guan, W., Ni, Z., Hu, Y., Liang, W., Ou, C., He, J., Liu, L., Shan, H., Lei, C., Hui, D.S.C., et al. (2020). Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med _382_, 1708-1720.
* Gupta et al. (2017) Gupta, N.T., Adams, K.D., Briggs, A.W., Timberlake, S.C., Vigneault, F., and Kleinstein, S.H. (2017). Hierarchical clustering can identify B Cell clones with high confidence in Ig repertoire sequencing data. J Immunol _198_, 2489-2499.
* Hachim et al. (2020) Hachim, A., Kavian, N., Cohen, C.A., Chin, A.W., Chu, D.K., Mok, C.K.P., Tsang, O.T., Yeung, Y.C., Perera, R.A., Poon, L.L., et al. (2020). Beyond the spike: identification of viral targets of the antibody response to SARS-CoV-2 in COVID-19 patients. MedRxiv 10.1101/2020.04.30.20085670.
* Han et al. (2020) Han, X., Wang, Y., Li, S., Hu, C., Li, T., Gu, C., Wang, K., Shen, M., Wang, J., Hu, J., et al. (2020). A rapid and efficient screening system for neutralizing antibodies and its application for the discovery of potent neutralizing antibodies to SARS-CoV-2 S-RBD. BioRxiv 10.1101/2020.08.19.253369.
* Hansen et al. (2020) Hansen, J., Baum, A., Pascal, K.E., Russo, V., Giordano, S., Wloga, E., Fulton, B.O., Yan, Y., Koon, K., Patel, K., et al. (2020). Studies in humanized mice and convalescent humans yield a SARS-CoV-2 antibody cocktail. Science _369_, 1010-1014.
* Horns et al. (2019) Horns, F., Vollmers, C., Dekker, C.L., and Quake, S.R. (2019). Signatures of selection in the human antibody repertoire: selective sweeps, competing subclones, and neutral drift. Proc Natl Acad Sci U S A _116_, 1261-1266.
* Hurlburt et al. (2020) Hurlburt, N.K., Seydoux, E., Wan, Y.-H., Edara, V.V., Stuart, A.B., Feng, J., Suthar, M.S., McGuire, A.T., Stamatatos, L., and Pancera, M. (2020). Structural basis for potent neutralization of SARS-CoV-2 and role of antibody affinity maturation. Nat Commun _11_, 5413.
* Isacchini et al. (2020) Isacchini, G., Sethna, Z., Elhanati, Y., Nourmohammad, A., Walczak, A.M., and Mora, T. (2020a). Generative models of T-cell receptor sequences. Phys Rev E _101_, 062414.
* Isacchini et al. (2020b) Isacchini, G., Olivares, C., Nourmohammad, A., Walczak, A.M., and Mora, T. (2020b). SOS: online probability estimation and generation of T-and B-cell receptors. Bioinformatics _36_, 4510-4512.
* Isacchini et al. (2021) Isacchini, G., Walczak, A.M., Mora, T., and Nourmohammad, A. (2021). Deep generative selection models of T and B cell receptor repertoires with soNNia. Proc Natl Acad Sci U S A _118_, e2023141118.
* Janeway et al. (2005) Janeway, C.A., Travers, P., Walport, M., and Shlomchik, M.J. (2005). Immunobiology: the immune system in health and disease, 6 edn (New York: Garland Science).
* Ju et al. (2020) Ju, B., Zhang, Q., Ge, J., Wang, R., Sun, J., Ge, X., Yu, J., Shan, S., Zhou, B., Song, S., et al. (2020). Human neutralizing antibodies elicited by SARS-CoV-2 infection. Nature _584_, 115-119.
* Kreer et al. (2020a) Kreer, C., Zehner, M., Weber, T., Ercanoglu, M.S., Gieselmann, L., Rohde, C., Halwe, S., Korenkov, M., Schommers, P., Vanshylla, K., et al. (2020a). Longitudinal isolation of potent near-germline SARS-CoV-2-neutralizing antibodies from COVID-19 patients. Cell _182_, 843-854.e12.
* Kreer et al. (2020b) Kreer, C., Gruell, H., Mora, T., Walczak, A.M., and Klein, F. (2020b). Exploiting B Cell receptor analyses to inform on HIV-1 vaccination strategies. Vaccines (Basel) \\(8\\).
* Kreye et al. (2020) Kreye, J., Reincke, S.M., Kornau, H.-C., Sanchez-Sendin, E., Max Corman, V., Liu, H., Yuan, M., Wu, N.C., Zhu, X., Lee, C.-C.D., et al. (2020). A SARS-CoV-2 neutralizing antibody protects from lung pathology in a COVID-19 hamster model. BioRxiv 10.1101/2020.08.15.252320.
* Lee et al. (2017) Lee, D.W., Khavrutskii, I.V., Wallqvist, A., Bavari, S., Cooper, C.L., and Chaudhury, S. (2017). BRILIA: Integrated Tool for High-Throughput Annotation and Lineage Tree Assembly of B-Cell Repertoires. Front Immunol \\(7\\).
* Lee et al. (2014) Lee, P.S., Ohshima, N., Stanfield, R.L., Yu, W., Iba, Y., Okuno, Y., Kurosawa, Y., and Wilson, I.A. (2014). Receptor mimicry by antibody F045-092 facilitates universal binding to the H3 subtype of influenza virus. Nat Commun \\(5\\), 3614.
* Liu et al. (2020a) Liu, H., Wu, N.C., Yuan, M., Bangaru, S., Torres, J.L., Caniels, T.G., van Schooten, J., Zhu, X., Lee, C.-C.D., Brouwer, P.J.M., et al. (2020a). Cross-neutralization of a SARS-CoV-2 antibody to a functionally conserved site is mediated by avidity. BioRxiv 10.1101/2020.08.02.233536.
* Liu et al. (2020b) Liu, L., Wang, P., Nair, M.S., Yu, J., Rapp, M., Wang, Q., Luo, Y., Chan, J.F.-W., Sahi, V., Figueroa, A., et al. (2020b). Potent neutralizing antibodies against multiple epitopes on SARS-CoV-2 spike. Nature _584_, 450-456.
* Lv et al. (2020) Lv, H., Wu, N.C., Tsang, O.T.-Y., Yuan, M., Perera, R.A.P.M., Leung, W.S., So, R.T.Y., Chan, J.M.C., Yip, G.K., Chik, T.S.H., et al. (2020). Cross-reactive Antibody Response between SARS-CoV-2 and SARS-CoV Infections. Cell Reports _31_, 107725.
* Marcou et al. (2018) Marcou, Q., Mora, T., and Walczak, A.M. (2018). High-throughput immune repertoire analysis with IGoR. Nat Commun \\(9\\), 561.
* McKechnie and Blish (2020) McKechnie, J.L., and Blish, C.A. (2020). The innate immune system: fighting on the front lines or fanning the flames of COVID-19? Cell Host & Microbe _27_, 863-869.
* Nielsen and Boyd (2018) Nielsen, S.C.A., and Boyd, S.D. (2018). Human adaptive immune receptor repertoire analysis-past, present, and future. Immunol Rev _284_, 9-23.
* Nielsen et al. (2020) Nielsen, S.C.A., Yang, F., Jackson, K.J.L., Hoh, R.A., Roltgen, K., Jean, G.H., Stevens, B.A., Lee, J.-Y., Rustagi, A., Rogers, A.J., et al. (2020). Human B cell clonal expansion and convergent antibody responses to SARS-CoV-2. Cell Host & Microbe _28_, 516-525.e5.
* Niu et al. (2020) Niu, X., Li, S., Li, P., Pan, W., Wang, Q., Feng, Y., Mo, X., Yan, Q., Ye, X., Luo, J., et al. (2020). Longitudinal Analysis of T and B Cell Receptor Repertoire Transcripts Reveal Dynamic Immune Response in COVID-19 Patients. Front Immunol _11_, 582010.
* Nourmohammad et al. (2019) Nourmohammad, A., Otwinowski, J., Luksza, M., Mora, T., and Walczak, A.M. (2019). Fierce selection and interference in B-Cell repertoire response to chronic HIV-1. Mol Biol Evol _36_, 2184-2194.
* Noy-Porat et al. (2020) Noy-Porat, T., Makdasi, E., Alcalay, R., Mechaly, A., Levy, Y., Bercovich-Kinori, A., Zauberman, A., Tamir, H., Yahalom-Ronen, Y., Israeli, M., et al. (2020). A panel of human neutralizing mAbs targeting SARS-CoV-2 spike at multiple epitopes. Nat Commun _11_, 4303.
* Perera et al. (2020) Perera, R.A., Mok, C.K., Tsang, O.T., Lv, H., Ko, R.L., Wu, N.C., Yuan, M., Leung, W.S., Chan, J.M., Chik, T.S., et al. (2020). Serological assays for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Euro Surveill _25_, 2000421.
* Pinto et al. (2020) Pinto, D., Park, Y.-J., Beltramello, M., Walls, A.C., Tortorici, M.A., Bianchi, S., Jaconi, S., Culap, K., Zatta, F., De Marco, A., et al. (2020). Cross-neutralization of SARS-CoV-2 by a human monoclonal SARS-CoV antibody. Nature _583_, 290-295.
* Pogorelyy et al. (2018) Pogorelyy, M.V., Minervina, A.A., Chudakov, D.M., Mamedov, I.Z., Lebedev, Y.B., Mora, T., and Walczak, A.M. (2018a). Method for identification of condition-associated public antigen receptor sequences. ELife \\(7\\), e33050.
* Pogorelyy et al. (2018b) Pogorelyy, M.V., Minervina, A.A., Touzel, M.P., Sycheva, A.L., Komech, E.A., Kovalenko, E.I., Karganova, G.G., Egorov, E.S., Komkov, A.Y., Chudakov, D.M., et al. (2018b). Precise tracking of vaccine-responding T cell clones reveals convergent and personalized response in identical twins. Proc Natl Acad Sci U S A _115_, 12704-12709.
* Prabakaran and Chowdhury (2020) Prabakaran, P., and Chowdhury, P.S. (2020). Landscape of non-canonical cysteine in human VH repertoire revealed by immunogenic analysis. Cell Rep _31_, 107831.
* Robbiani et al. (2020) Robbiani, D.F., Gaebler, C., Muecksch, F., Lorenzi, J.C.C., Wang, Z., Cho, A., Agudelo, M., Barnes, C.O., Gazumyan, A., Finkin, S., et al. (2020). Convergent antibody responses to SARS-CoV-2 in convalescent individuals. Nature _584_, 437-442.
* Robins (2013) Robins, H. (2013). Immunosequencing: applications of immune repertoire deep sequencing. Curr Opin Immunol _25_, 646-652.
* Rogers et al. (2020) Rogers, T.F., Zhao, F., Huang, D., Beutler, N., Burns, A., He, W.-T., Limbo, O., Smith, C., Song, G., Woehl, J., et al. (2020). Isolation of potent SARS-CoV-2 neutralizing antibodies and protection from disease in a small animal model. Science _369_, 956-963.
* Ruiz Ortega et al. (2021) Ruiz Ortega, M., et al. (2021). Private communication.
* Schultheiss et al. (2020) Schultheiss, C., Paschold, L., Simnica, D., Mohme, M., Willscher, E., von Wenserski, L., Scholz, R., Wieters, I., Dahlke, C., Tolosa, E., et al. (2020). Next-generation sequencing of T and B cell receptor repertoires from COVID-19 patients showed signatures associated with severity of disease. Immunity _53_, 442-455.e4.
* Sethna et al. (2020) Sethna, Z., Isacchini, G., Dupic, T., Mora, T., Walczak, A.M., and Elhanati, Y. (2020). Population variability in the generation and selection of T-cell repertoires. PLoS Comput Biol _16_, e1008394.
* Seydoux et al. (2020) Seydoux, E., Homad, L.J., MacCamy, A.J., Parks, K.R., Hurlburt, N.K., Jennewein, M.F., Akins, N.R., Stuart, A.B., Wan, Y.-H., Feng, J., et al. (2020a). Analysis of a SARS-CoV-2-infected individual reveals development of potent neutralizing antibodies with limited somatic mutation. Immunity _53_, 98-105.e5.
* Seydoux et al. (2020) Seydoux, E., Homad, L.J., MacCamy, A.J., Parks, K.R., Hurlburt, N.K., Jennewein, M.F., Akins, N.R., Stuart, A.B., Wan, Y.-H., Feng, J., et al. (2020b). Characterization of neutralizing antibodies from a SARS-CoV-2 infected individual. BioRxiv 10.1101/2020.05.12.091298.
* Shi et al. (2020) Shi, R., Shan, C., Duan, X., Chen, Z., Liu, P., Song, J., Song, T., Bi, X., Han, C., Wu, L., et al. (2020). A human neutralizing antibody targets the receptor-binding site of SARS-CoV-2. Nature _584_, 120-124.
* Storey (2002) Storey, J.D. (2002). A direct approach to false discovery rates. J R Stat Soc Series B Stat Methodol _64_, 479-498.
* Storey and Tibshirani (2003) Storey, J.D., and Tibshirani, R. (2003). Statistical significance for genomewide studies. Proc Natl Acad Sci U S A _100_, 9440-9445.
* Vabret et al. (2020) Vabret, N., Britton, G.J., Gruber, C., Hegde, S., Kim, J., Kuksin, M., Levantovsky, R., Malle, L., Moreira, A., Park, M.D., et al. (2020). Immunology of COVID-19: current state of the science. Immunity _52_, 910-941.
* Vander Heiden et al. (2014) Vander Heiden, J.A., Yaari, G., Uduman, M., Stern, J.N.H., O'Connor, K.C., Hafler, D.A., Vigneault, F., and Kleinstein, S.H. (2014). pRESTO: a toolkit for processing high-throughput sequencing raw reads of lymphocyte receptor repertoires. Bioinformatics _30_, 1930-1932.
* Wec et al. (2020a) Wec, A.Z., Haslwanter, D., Abdiche, Y.N., Shehata, L., Pedreno-Lopez, N., Moyer, C.L., Bornholdt, Z.A., Lilov, A., Nett, J.H., Jangra, R.K., et al. (2020a). Longitudinal dynamics of the human B cell response to the yellow fever 17D vaccine. Proc Natl Acad Sci U S A _117_, 6675-6685.
* Wec et al. (2020b) Wec, A.Z., Wrapp, D., Herbert, A.S., Maurer, D., Haslwanter, D., Sakharkar, M., Jangra, R.K., Dieterle, M.E., Lilov, A., Huang, D., et al. (2020b). Broad sarbecovirus neutralizing antibodies define a key site of vulnerability on the SARS-CoV-2 spike protein. BioRxiv 10.1101/2020.05.15.096511.
* WHO (2021) WHO (2021). Coronavirus disease (COVID-19) pandemic.
* Wrammert et al. (2008) Wrammert, J., Smith, K., Miller, J., Langley, W.A., Kokko, K., Larsen, C., Zheng, N.-Y., Mays, I., Garman, L., Helms, C., et al. (2008). Rapid cloning of high-affinity human monoclonal antibodies against influenza virus. Nature _453_, 667-671.
* Wu et al. (2020a) Wu, J.T., Leung, K., Bushman, M., Kishore, N., Niehus, R., de Salazar, P.M., Cowling, B.J., Lipsitch, M., and Leung, G.M. (2020a). Estimating clinical severity of COVID-19 from the transmission dynamics in Wuhan, China. Nat Med _26_, 506-510.
* Wu et al. (2020b) Wu, Y., Wang, F., Shen, C., Peng, W., Li, D., Zhao, C., Li, Z., Li, S., Bi, Y., Yang, Y., et al. (2020b). A noncompeting pair of human neutralizing antibodies block COVID-19 virus binding to its receptor ACE2. Science _368_, 1274-1278.
* Wu et al. (2015) Wu, Y.-C., Kipling, D., and Dunn-Walters, D. (2015). Assessment of B cell repertoire in humans. Methods Mol Biol _1343_, 199-218.
* Yuan et al. (2020) Yuan, M., Wu, N.C., Zhu, X., Lee, C.-C.D., So, R.T.Y., Lv, H., Mok, C.K.P., and Wilson, I.A. (2020). A highly conserved cryptic epitope in the receptor binding domains of SARS-CoV-2 and SARS-CoV. Science _368_, 630-633.
* Zhou et al. (2020) Zhou, D., Duyvesteyn, H.M.E., Chen, C.-P., Huang, C.-G., Chen, T.-H., Shih, S.-R., Lin, Y.-C., Cheng, C.-Y., Cheng, S.-H., Huang, Y.-C., et al. (2020). Structural basis for the neutralization of SARS-CoV-2 by an antibody from a convalescent patient. Nat Struct Mol Biol _27_, 950-958.
* Zost et al. (2020) Zost, S.J., Gilchuk, P., Chen, R.E., Case, J.B., Reidy, J.X., Trivette, A., Nargi, R.S., Sutton, R.E., Suryadevara, N., Chen, E.C., et al. (2020). Rapid isolation and profiling of a diverse panel of human monoclonal antibodies targeting the SARS-CoV-2 spike protein. Nat Med _26_, 1422-1427.
## Supplementary Information
Dynamics of B-cell repertoires and emergence of cross-reactive responses in COVID-19 patients with different disease severity
Montague _et al._
**Figure S2. Bulk repertoire sequence statistics.** **(A-C)** Similar statistics are shown as in Fig. 3 (A, C-D), but for unique receptors excluding singletons (Methods). Unique BCRs in healthy individuals (our control and the Great Repertoire Project (GRP) by Briney et al., 2019) show significantly shorter HCDR3s compared to moderate and severe cohorts. ANOVA statistics for mean HCDR3 length between cohorts: Healthy-Mild: \(F_{1,3}=8.7\), p-value = 0.06; Healthy-Moderate: \(F_{1,13}=17.2\), p-value = 0.001; Healthy-Severe: \(F_{1,6}=10.0\), p-value = 0.020; GRP-Mild: \(F_{1,10}=11.3\), p-value = 0.0073; Healthy-GRP: \(F_{1,11}=0.074\), p-value = 0.791; GRP-Moderate: \(F_{1,20}=34.0\), p-value = 0.000011; GRP-Severe: \(F_{1,13}=41.5\), p-value = 0.000022. **(D-F)** Similar statistics are shown as in Fig. 2 (A, C-D), but for unproductive lineage progenitors. The differences in the statistics of HCDR3 length between the unproductive repertoires of healthy individuals and the COVID-19 cohorts are insignificant (ANOVA p-value \(>\) 0.01). Colors are consistent across panels.
**Figure S3. Sequence features of immune receptors in the plasma B-cell repertoire across cohorts.****(A)** Scatter plot shows \\(\\log_{10}\\) relative abundance of clonal lineages constructed from the plasma B-cell and bulk repertoire data from all time points and replicates in each patient (colors). To avoid primer-specific amplification biases, the relative abundance is estimated as the total read count of a clonal lineage relative to the total reads in the data associated with a specific primer amplification. Lineages with only bulk reads or only plasma reads are displayed as having \\(\\log_{10}\\) relative abundance = 1e-8. Pearson correlations (r) between abundances of lineages which were present in both the bulk and the plasma B-cell repertoires and the corresponding p-values are indicated in the legend for each patient. **(B-D)** Similar statistics are shown as in Fig. 3 (A,C,D), but for progenitors of clonal lineages with minimum size of three, in which at least one BCR is found in the plasma B-cell repertoire data; statistics of these lineages are reported in Tables S1, S2. Smaller read counts in the plasma B-cell data compared to the bulk do not allow for comparative analysis of receptor statistics across cohorts. **(D-F)** Similar statistics are shown as in Fig. S2 (A-C), but for unique receptors harvested from the plasma B-cell repertoires. Statistics of these receptors in each individual is described in Table S2. Smaller read counts in the plasma B-cell data compared to the bulk don't allow for comparative analysis of receptor statistics across cohorts. Colors are consistent across panels.
**Figure S4. Robustness of SONIA selection models.** **(A)** The distribution of the log-generation probability of a sequence \(\sigma\), \(\log_{10}\mathrm{P_{gen}}(\sigma)\), evaluated using the inferred generation models by the IGoR software (Marcou et al., 2018), is shown as a normalized probability density function (PDF) for inferred naïve progenitors of productive clonal lineages in cohorts of healthy individuals and the mild, moderate, and severe cohorts of COVID-19 patients (colors). Full lines show distributions averaged over individuals in each cohort, and shadings indicate regions containing one standard deviation of variation among individuals within a cohort. **(B)** The scatterplot shows \(\log P_{\mathrm{post}}\) obtained by evaluating 500,000 generated sequences using the inferred selection (SONIA) models (Sethna et al., 2020) trained on the healthy cohort (x-axis) and 30 SONIA models trained on independent samples of the GRP dataset (Briney et al., 2019) down-sampled to the size of the healthy cohort in this study (7,161 receptors) (y-axis). The scatterplots show all unique pairwise comparisons between SONIA models trained on independent subsets within each cohort for **(C)** GRP (30 models), and COVID-19 patients with **(D)** mild (two models), **(E)** moderate (13 models), and **(F)** severe (three models) symptoms (Methods). The Pearson correlations for the pairwise model comparisons are shown in each panel.
**Figure S5. ELISA binding assays for IgG and IgM repertoires against SARS-CoV-2 and SARS-CoV.** Plasma binding levels (measured by OD\\({}_{450}\\) in ELISA) against RBD, NTD, and S2 subdomain of SARS-CoV-2 and against RBD and NTD epitopes of SARS-CoV are shown. As seen in binding assays, many individuals developed a cross-reactive response to SARS-CoV-2 and SARS-CoV. Some individuals showed no increase in IgG binding to SARS-CoV-2 RBD due to already high levels at sampling time or natural variation. For the expansion analysis (Fig. 4), we analyzed only individuals whose IgG repertoires showed an increase in binding to SARS-CoV-2 (RBD): 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, and 14.
**Figure S6. Expansion supplement.** BCR repertoires are highly under-sampled, and relatively few BCR lineages appear in multiple time points and replicates. **(A)** Fraction of lineages present in only one time point before (blue) and after (red) filtering out small lineages (i.e., those with less than three unique sequences per time point) are shown. **(B)** Fraction of lineages present in only one replicate before (blue) and after (red) filtering out small lineages (i.e., those with less than three unique sequences per time point) are shown. **(C)** The log-ratio of abundance of receptors for all clonal lineages present in two replicates is shown. Each panel shows the test result for a given patient, as indicated in the label. The count density indicates the number of lineages at each point. Lineages that show a significant expansion over time are indicated in red. Since this is replicate data and represents a null model, red points indicate false positives. **(D)** \(\log_{10}\) p-values of the expansion test versus \(\log_{10}\) fold change (or odds ratio) for replicate data are shown. Color indicates density of points, and p-values of zero are displayed at the minimum nonzero value. See Methods for normalization, data processing, and hypothesis test. **(E)** Empirical cumulative density functions (CDF) of expansion data from multiple time points (red) and replicate data (blue) show that many more tests in expansion data result in low p-values compared to replicate data. **(F)** Ratio of empirical cumulative density functions (CDF) indicates that at a significance threshold of \(10^{-300}\) there are roughly 12.3 times more positives than false positives. **(G)** \(\log_{10}\) p-values of the expansion test versus \(\log_{10}\) fold change (or odds ratio) for data corresponding to Fig. 4B is shown. Color indicates density of points, and p-values of zero are displayed at the minimum nonzero value. See Methods for normalization, data processing, and hypothesis test. **(H)** Fraction of lineages expanded for different individuals is shown. HCDR3 length distributions of expanded and non-expanded lineages, **(I)** with each lineage having equal weight, and **(J)** with each lineage weighted by the number of unique sequences per time point (excluding singletons) are shown.
**Figure S7. Sharing of BCRs among healthy individuals.****(A)** The density plot shows the distribution of \\(\\log_{10}P_{\\text{post}}\\) for progenitors of clonal lineages shared in a given number of healthy individuals, indicated on the horizontal axis; histogram bin size is 0.5. The clonal lineages are constructed from the bulk data (Tables S1). The counts in each bin are scaled such that the maximum is equal to one for each column. The numbers above each column indicate the total number of sequences in the respective column. Sharing of rare lineages with \\(\\log_{10}P_{\\text{post}}\\) below the dashed line is statistically significant (see Methods). **(B)** Similar statistics as in **(A)** are shown but for healthy individuals in the Great Repertoire Project (Briney et al., 2019). Sharing of rare lineages with \\(\\log_{10}P_{\\text{post}}\\) below the black dashed line is statistically significant (see Methods). For comparison, the dashed line in **(A)** is shown as a red dashed line in **(B)** and extended to eight individuals.
**Figure S8. Sequence features of heavy and light chain receptors in sorted single cells and monoclonal antibodies.** The bar graphs show the relative counts for **(A)** the \(\kappa-\)chain IGKV-gene usage and **(B)** the \(\lambda-\)chain IGLV-gene usage for the verified mAbs reactive to RBD (pink) and NTD (green) epitopes of SARS-CoV-2 (Table S7) and the light chain receptors obtained from the RBD- (yellow) and NTD- (blue) sorted single cell data (Methods). Distributions of the lengths of **(C)** HCDR3 (heavy chain), **(D)** KCDR3 (\(\kappa-\)chain), and **(E)** LCDR3 (\(\lambda-\)chain) amino acid sequences are shown. **(F)** IGHJ-gene usage, **(G)** IGKJ-gene usage, and **(H)** IGLJ-gene usage of the sorted single cells is shown in relative counts in bar graphs. Colors are consistent between panels and the number of samples used to evaluate the statistics in each panel is indicated in the legend. **(I-L)** Circos plots show matches between the light chain CDR3 sequences of progenitors in the sorted single cell dataset (black) and light chain CDR3 sequences in the verified mAbs (colors) for RBD-reactive **(I)** IG\({}_{\kappa}\), and **(J)** IG\({}_{\lambda}\) sequences, and for NTD-reactive **(K)** IG\({}_{\kappa}\), and **(L)** IG\({}_{\lambda}\) sequences. Different colors indicate different studies from which mAbs were pooled. The reference to each study, the total number of mAbs in the study, and the number of mAbs with matching light chain CDR3 to the single cell data are reported in each panel.
Table S1. **Statistics of BCR repertoire sequence data from healthy individuals, and COVID-19 patients.** The information about individuals in each cohort is shown. Detailed statistics of the processed data for productive BCRs are shown for read abundance, number of singletons, and number of unique sequences for all replicates and sampled time-points in each individual. For each individual, the number of lineages with more than two and ten unique sequences across all time points are shown in separate columns. Read statistics for unproductive receptors pooled from all individuals are shown separately.
| **5'-end primer** | **Sequence (5'-3')** |
| --- | --- |
| IGHV1 | CCTCAGTGAAAGGTCTCCTGCAAGG |
| IGHV2 | TCCTGGCCTGGTGAACCCACACA |
| IGHV3 | GGTCCCTGAGACTCTCCTGTGCA |
| IGHV4 | TCGGAGACCCTGTCCCTCACCTG |
| IGHV5 | CAGTCTGAGCAGAGGTGAAA |
| IGHV6 | CCTGTGCCATCTCCGGGACAGTG |
| **3'-end primer** | **Sequence (5'-3')** |
| CHG-R | GCGCCTGAGTTCCACGAC |
**Table S2. List of primers used for PCR amplification of B-cell repertoire samples.**
**Table S3. Statistics of IgG BCR repertoire sequence data from individuals in the Great Repertoire Project.** Because of the massive amount of data provided by the Great Repertoire Project (Briney et al., 2019), only the first three biological replicates were used for each individual to be comparable to the data sampled in this study. Detailed statistics of the processed data for productive BCRs are shown for the number of unique sequences pooled from all replicates for each individual. For each individual, the number of lineages with more than two and ten unique sequences are shown in separate columns. Read statistics for unproductive receptors pooled from all individuals and replicates are shown separately.
**Table S4.xlsx**
**Table S4. Statistics of plasma B-cell repertoire sequence data from COVID-19 patients.** The information about individuals in each cohort is shown. Detailed statistics of the processed data for productive BCRs are shown for read abundance, number of singletons, and number of unique sequences for all replicates and sampled time points in each individual. For each individual, the amounts of lineages with more than two and ten unique sequences across all time points are shown in separate columns which are split further by whether the lineages also contained bulk reads.
**Table S5.xlsx**
**Table S5. Rare expanding BCRs shared among individuals.** The list of 38 rare progenitors of clonal lineages (i.e., with \\(P_{\\text{post}}\\) below the dashed line in Fig. 6) that exhibit lineage expansion in at least one individual is shown. These receptors are indicated by green diamonds in Fig. 6. The presence of these lineages in the plasma B-cell repertoire is indicated in the last column (orange triangles in Fig. 6). The 38 rare expanding lineage progenitors shown here are shared among four to 12 COVID-19 patients.
**Table S7.xlsx**
**Table S7. Verified antibodies detected in BCR repertoires of COVID-19 patients.** HCDR3 and IGHV-gene of verified monoclonal antibodies responsive to SARS-CoV-2 (RBD, NTD, and S1) or SARS-CoV-1 epitopes, whose HCDR3 sequences match with a receptor (with up to Hamming distance of one amino acid) in the bulk+plasma B-cell repertoires of patients in this study are shown. Each row indicates a monoclonal antibody family, whose members have similar HCDR3, up to one amino acid difference. Mutations in the repertoire-matched receptors with respect to the original HCDR3 are in red. Single amino acid mutation differences in HCDR3s of monoclonal antibody families are shown in cyan. Patient ID for each repertoire-matched receptor is indicated in the last column. The complete list of verified antibodies is given in Table S8.
**Table S8.xlsx**
**Table S8. Complete list of verified monoclonal antibodies.**
# Non-local tensor completion for multitemporal remotely sensed images inpainting
Teng-Yu Ji, Naoto Yokoya, Xiao Xiang Zhu, and Ting-Zhu Huang
The work of T.-Y. Ji and T.-Z. Huang was supported by NSFC (6177203, 61402082). The work of N. Yokoya was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI 15K20955 and Alexander von Humboldt Fellowship for postdoctoral researchers. The work of X. X. Zhu has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087]), as well as from Helmholtz Association under the framework of the Young Investigators Group SiPEO (VH-NG-1018, www.sipeo.bgu.tum.de). _(Corresponding authors: Xiao Xiang Zhu and Ting-Zhu Huang.)_ T.-Y. Ji and T.-Z. Huang are with the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China (e-mail: [email protected]; [email protected]). N. Yokoya is with the Department of Advanced Interdisciplinary Studies, University of Tokyo 153-8904, Japan, the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Wessling 82234, Germany, and Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Munich 80333, Germany (e-mail: [email protected]). X. X. Zhu is with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany, and also with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany (e-mail: [email protected]).
## I Introduction
Remotely sensed images are important tools for exploring the properties of our living environment and have been used in many applications, such as hyperspectral unmixing [1, 2, 3, 4, 5, 6, 7], classification [8, 9, 10, 11, 12, 13, 14], and target detection [15, 16, 17, 18, 19, 20]. However, these applications are largely limited by the missing information that is introduced during data acquisition by defective sensors and/or poor atmospheric conditions. For example, three-quarters of the detectors (in band 6) of the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) are ineffective [21, 22], the scan line corrector (SLC) of the Landsat enhanced thematic mapper plus (ETM+) sensor has permanently failed [23, 24], and the ozone monitoring instrument (OMI) onboard the Aura satellite suffers from a row anomaly problem. On the other hand, clouds cover approximately 35% of the Earth's surface at any one time [25]. Owing to these two factors, missing information is inevitable in optical remotely sensed images, particularly in multitemporal image analysis. Thus, reconstructing the missing information is highly desirable.
Recently, many reconstruction methods for remotely sensed images have been proposed, which can be classified into four categories: spatial-based, spectral-based, temporal-based, and hybrid methods. The spatial-based methods take advantage of the relationships between different pixels in the spatial dimension without any other spectral and temporal auxiliary images and include interpolation methods [26], propagated diffusion methods [27, 28], variation-based methods [21, 29, 30, 31, 32, 33], and exemplar-based methods [34, 35]. This kind of method cannot reconstruct a large missing area because there is not enough reference information.
The spectral-based methods borrow the correlative information from other spectral data to reconstruct the missing area. The basic idea of this kind of method is to estimate the relationship of the known areas between the complete and incomplete bands and then use the relationship to reconstruct the missing area. The typical example of this kind of method is Aqua MODIS band 6 inpainting. For example, Wang et al. [22], Rakwatin et al. [36], and Shen et al. [37] reconstructed the missing area of band 6 by considering the spectral relationship with band 7 because these two bands are highly correlated. Gladkova et al. [38] and Li et al. [39] took the relationships between band 6 and the other six bands into consideration. These methods can reconstruct a large area and get a better result than the spatial-based methods. However, for most remotely sensed images, all bands contain the same missing areas. In this case, the spectral-based methods fail to produce a promising result.
The third class of methods reconstructs the missing area by making use of other data taken at the same location but at different periods. Temporal-based methods have been widely studied for remotely sensed image inpainting, especially cloud removal. Clouds vary with time, so the missing areas in different images are diverse. The basic temporal-based approach is to replace the missing area with the same area acquired at a different period [23, 25, 40]. Inspired by temporal filter methods for one-dimensional signal denoising, many researchers developed temporal filter methods by regarding the temporal fibers as signals [41, 42, 43]. Recently, temporal learning model-based methods exploit compressed sensing and regression technologies to reconstruct the missing information [44, 45]. More recently, Wang et al. [46] proposed a temporally contiguous robust matrix completion model for cloud removal by making the best use of the temporal correlations: low-rankness in time-space and temporal smoothness. Because this method (ALM-IPG) relies on the local temporal correlation, i.e., the temporally contiguous property, it performs well on data whose adjacent temporal images are similar.
The above three classes of methods make use of only one kind of relationship (spatial, spectral, or temporal). In some cases they are powerful, but in others they are not. To obtain better results, hybrid methods were introduced to extract complementary information from two or three domains. This kind of method includes joint spatio-temporal methods [23, 47], joint spatio-spectral methods [48], and joint spectral-temporal methods [49]. Recently, Li et al. [50] proposed the patch matching-based multitemporal group sparse representation (PM-MTGSR), which makes use of the local sparsity in the temporal domain and the non-local similarity in the spatial domain to reconstruct the missing information. PM-MTGSR is thus a joint spatio-temporal method, i.e., it also exploits the relationships of only two domains.
The hybrid methods perform better than each of the three single-domain classes. This indicates that results improve when a method takes advantage of more of the latent structure in the observed data. In this paper, we present a new methodology that makes full use of spatial, spectral, and temporal relationships for the reconstruction of missing data in multitemporal remotely sensed images. The proposed method is designed to handle not only temporally contiguous data but also data with large differences between adjacent temporal images. Low-rank tensor-based methods characterize the global correlations along each dimension. Inspired by this, the paper introduces a non-convex low-rank approximation of the tensor rank to make the best use of the global correlations in the spatial, spectral, and temporal domains. A similar concept has been applied to time series analysis of radar data [51]. To take advantage of the similarities in the three domains, we group similar patches and consider the low-rankness of each group. Experimental results on both cloud removal and destriping show that our low-rank approach achieves more accurate reconstruction than other state-of-the-art approaches.
This paper is organized as follows. Some notations are introduced in Section II. Section III describes the proposed algorithm for the multitemporal remotely sensed image reconstruction. Section IV presents the experimental results and discussion, and the conclusion is given in Section V.
## II Preliminary
In this paper, we use non-bold lower-case letters for scalars, e.g., \(x\), bold lower-case letters for vectors, e.g., \(\mathbf{x}\), bold upper-case letters for matrices, e.g., \(\mathbf{X}\), and bold calligraphic letters for tensors, e.g., \(\mathbf{\mathcal{X}}\). Moreover, we also use bold upright upper-case letters for collections of variables, e.g., \(\mathbf{M}=(\mathbf{\mathcal{M}}_{1},\cdots,\mathbf{\mathcal{M}}_{N})\). An \(N\)-order tensor is defined as \(\mathbf{\mathcal{X}}\in\mathbb{R}^{J_{1}\times\cdots\times J_{N}}\), and \(x_{j_{1},\cdots,j_{N}}\) is its \((j_{1},\cdots,j_{N})\)-th component.
**Fibers** are the higher-order analogue of matrix rows and columns. A fiber is defined by fixing every index but one. For example, \(\mathbf{x}_{j_{2}\cdots j_{N}}=(x_{1,j_{2},\cdots,j_{N}},\cdots,x_{J_{1},j_{2},\cdots,j_{N}})\) is one of the mode-1 fibers of an \(N\)-order tensor \(\mathbf{\mathcal{X}}\in\mathbb{R}^{J_{1}\times\cdots\times J_{N}}\). The mode-\(n\) **unfolding** of a tensor \(\mathbf{\mathcal{X}}\) is denoted as \(\mathbf{X}_{(n)}\in\mathbb{R}^{J_{n}\times\prod_{j\neq n}J_{j}}\), whose columns are the mode-\(n\) fibers of \(\mathbf{\mathcal{X}}\) in the lexicographical order. Fig. 1 shows the mode-\(n\) \((n=1,2,3)\) fibers and unfoldings of a 3-order tensor. The inverse operator of unfolding is denoted as "fold", i.e., \(\mathbf{\mathcal{X}}=\text{fold}_{n}(\mathbf{X}_{(n)})\). The \(n\)**-rank** of an \(N\)-order tensor \(\mathbf{\mathcal{X}}\), denoted as \(\text{rank}_{n}(\mathbf{\mathcal{X}})\), is the rank of \(\mathbf{X}_{(n)}\), and the rank of \(\mathbf{\mathcal{X}}\) based on the \(n\)-rank is defined as an array: \(\text{rank}(\mathbf{\mathcal{X}})=(\text{rank}_{1}(\mathbf{\mathcal{X}}),\cdots,\text{rank}_{N}(\mathbf{\mathcal{X}}))\). The tensor \(\mathbf{\mathcal{X}}\) is low-rank if \(\mathbf{X}_{(n)}\) is low-rank for all \(n\). Please refer to [52] for an extensive overview.
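To make the unfolding and folding operators concrete, the following minimal NumPy sketch implements them for a tensor of arbitrary order; the helper names `unfold` and `fold` are ours and only illustrate the definitions above (the exact column ordering convention differs between implementations, but the resulting \(n\)-rank is unaffected):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-`mode` unfolding: the columns of the result are the mode-`mode` fibers."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1))

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(np.reshape(matrix, full), 0, mode)

X = np.arange(2 * 3 * 4).reshape(2, 3, 4)          # a small 3-order tensor (cf. Fig. 1)
X1 = unfold(X, 0)                                  # shape (2, 12): twelve mode-1 fibers
assert np.allclose(fold(X1, 0, X.shape), X)        # fold is the inverse operator
print([np.linalg.matrix_rank(unfold(X, k)) for k in range(3)])   # the n-rank of X
```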
## III Methodology
Missing information is inevitable in the observation process for remotely sensed images. The existing methods that characterize the correlation mostly rely on interpolation [26], sparse [50], smooth [46], and low-rank technologies [46], no matter which of the four method classes (spatial, spectral, temporal, or hybrid) is adopted. For example, PM-MTGSR [50] characterizes the local relationships in the temporal domain using the group sparse technology, and ALM-IPG [46] characterizes the local and global correlations in the temporal domain using the smooth and low-rank technologies, respectively. Although these two methods take advantage of spatial and temporal relationships, they favor the relationships in the temporal domain. Recently, low-rank tensor-based methods have attracted much attention for the completion of high-dimensional images because the tensor rank can characterize the correlations in different domains [53, 54]. Combined with the definition of the \(n\)-rank, which is a vector composed of the ranks of the mode-\(n\) unfoldings, it can be seen from Fig. 1 that the rank of the mode-\(n\) unfolding describes the correlations of the mode-\(n\) fibers. To present the motivation in detail, we analyze the low-rankness of some 4-order tensor groups stacked from 3-order similar patches that are extracted from the 4-temporal cloud-free Landsat-8 data ("Image 3" in Fig. 7, Section IV); see Tab. I. For example, the first group is of size \(4\times 4\times 3\times 708\), where the four dimensions correspond to the numbers of pixels, observations, spectral channels, and patches, and it has 8496 mode-1 fibers, 8496 mode-2 fibers, 11328 mode-3 fibers, and 48 mode-4 fibers. The dimensions of the spaces (DimSpac) generated by the mode-1, -2, -3, and -4 fibers are 2, 3, 2, and 13, respectively. This means that the mode-\(n\) \((n=1,\cdots,4)\) fibers are highly correlated, i.e., it is possible to reconstruct the
missing area using tensor low-rank technology. Inspired by this, we introduce the tensor rank to characterize the global correlations in the spatial, spectral, and temporal domains in order to reconstruct the missing information of remotely sensed images after grouping similar patches. It should be noted that missing areas are detected before their reconstruction.
Fig. 1: Mode-\(n\) fibers and corresponding unfoldings of a 3-order tensor.
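The low-rankness argument above can also be checked numerically: for a 4-order group of similar patches, the numerical rank of every mode-\(n\) unfolding can be computed directly. The sketch below (with the hypothetical helper `mode_ranks` and an illustrative relative singular-value threshold) shows how such DimSpac values would be obtained for a real group; the random array is only a size placeholder and is of course not low-rank:

```python
import numpy as np

def mode_ranks(group, tol=1e-2):
    """Numerical rank of every mode-n unfolding of a patch group;
    tol is a relative singular-value threshold (illustrative choice)."""
    ranks = []
    for mode in range(group.ndim):
        unf = np.reshape(np.moveaxis(group, mode, 0), (group.shape[mode], -1))
        s = np.linalg.svd(unf, compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    return ranks

# e.g., a group of 708 similar 4x4x3 patches stacked along the last axis
group = np.random.rand(4, 4, 3, 708)
print(mode_ranks(group))   # small values indicate highly correlated mode-n fibers
```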
The flowchart of the proposed NL-LRTC is shown in Fig. 2. The proposed NL-LRTC method consists of three parts. The method first reshapes the observed 4-order tensor into a 3-order tensor so that the pixels at the different periods but the same location become adjoining. Next, we search and group similar patches in a searching window. Last, the missing information of every group is reconstructed using the low-rank tensor completion method.
### _Data Rearrangement_
The observed multitemporal remotely sensed data set \\(\\boldsymbol{\\mathcal{Y}}\\in\\mathbb{R}^{m\\times n\\times b\\times t}\\) is a 4-order tensor, where \\(m\\times n\\) denotes the number of pixels of remotely sensed images, \\(b\\) denotes the number of spectral channels of remote sensors, and \\(t\\) is the number of time series. The indices set \\(\\boldsymbol{\\Omega}\\in\\mathbb{R}^{m\\times n\\times b\\times t}\\) is also a 4-order tensor, where the position \\((i,j,k,l)\\in\\mathbb{Z}^{m}\\times\\mathbb{Z}^{n}\\times\\mathbb{Z}^{b}\\times \\mathbb{Z}^{t}\\) is covered by cloud if \\(\\boldsymbol{\\Omega}_{i,j,k,l}=0\\) and is cloud free if \\(\\boldsymbol{\\Omega}_{i,j,k,l}=1\\).
To make the best use of correlations between different periods and find more accurate similar patches, it is necessary to reshape the observed data. The reshaping process is illustrated in the first step of Fig. 2. PM-MTGSR also reshaped the data before searching the similar patches. The difference between PM-MTGSR and our method is that PM-MTGSR reshapes the 4-order tensor into a matrix, while the result of our reshaping is a 3-order tensor. The difference leads to another difference between these two methods: the similar patches in our method are 3-order tensors but matrices in PM-MTGSR.
As the description above indicates, the aim is to reshape the observed data into a 3-order tensor to take advantage of the temporal correlations. Specifically, we stack the mode-1 slices at the same locations and different periods one by one, i.e., the observed 4-order tensor \(\boldsymbol{\mathcal{Y}}\) is reshaped into a 3-order tensor \(\hat{\boldsymbol{\mathcal{Y}}}\in\mathbb{R}^{m\times tn\times b}\), defined by \(\hat{\boldsymbol{\mathcal{Y}}}_{u,v,w}=\boldsymbol{\mathcal{Y}}_{i,j,k,l}\) when \(u=i\), \(v=(j-1)t+l\), and \(w=k\). Similarly, we reshape the indices tensor \(\boldsymbol{\Omega}\in\mathbb{R}^{m\times n\times b\times t}\) into a 3-order tensor \(\hat{\boldsymbol{\Omega}}\in\mathbb{R}^{m\times tn\times b}\), defined by \(\hat{\boldsymbol{\Omega}}_{u,v,w}=\boldsymbol{\Omega}_{i,j,k,l}\) when \(u=i\), \(v=(j-1)t+l\), and \(w=k\). Let \(\boldsymbol{\mathcal{X}}\) be the recovered data we are seeking and \(\hat{\boldsymbol{\mathcal{X}}}\in\mathbb{R}^{m\times tn\times b}\) be the corresponding reshaped 3-order tensor.
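In NumPy, this rearrangement (and its inverse, which is needed to write the reconstruction back into the original layout) reduces to an axis permutation followed by a reshape; the sketch below assumes the convention \(v=(j-1)t+l\) stated above, and the function names are ours:

```python
import numpy as np

def rearrange(Y):
    """Reshape an (m, n, b, t) tensor into an (m, t*n, b) tensor so that the t temporal
    samples of each original column become adjacent, i.e., v = (j - 1) * t + l."""
    m, n, b, t = Y.shape
    return Y.transpose(0, 1, 3, 2).reshape(m, n * t, b)   # (m, n, t, b) -> merge n and t

def rearrange_back(Y_hat, n, t):
    """Inverse mapping, used to write the reconstruction back into the original layout."""
    m, nt, b = Y_hat.shape
    return Y_hat.reshape(m, n, t, b).transpose(0, 1, 3, 2)

Y = np.random.rand(8, 10, 3, 4)          # placeholder (m, n, b, t) data
Y_hat = rearrange(Y)                     # shape (8, 40, 3); the same mapping is applied to Omega
assert np.allclose(rearrange_back(Y_hat, 10, 4), Y)
```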
### _Grouping of Similar Patches_
This section searches and groups similar patches for the missing-area pixels of the reshaped data \(\hat{\boldsymbol{\mathcal{Y}}}\). The second step of Fig. 2 shows the process of similar patch searching after reshaping. The red square denotes the target patch \(\hat{\boldsymbol{\mathcal{Y}}}_{i,j}=\hat{\boldsymbol{\mathcal{Y}}}(i:(i+w-1),j:(j+w-1),:)\in\mathbb{R}^{w\times w\times b}\), where \((i,j)\) denotes the coordinate of the target patch and \(w\times w\times b\) denotes the patch size. Given the target patch, similar patches are searched for within a surrounding window of radius \(r\) in the reshaped data \(\hat{\boldsymbol{\mathcal{Y}}}\). According to the reshaping procedure, only the information within a square window of size \((2(r/t)+1)\times(2(r/t)+1)\) in the original data \(\boldsymbol{\mathcal{Y}}\) is used. Given a similarity indicator between the target patch and a candidate patch and an indicator threshold \(\gamma_{2}\), a candidate is regarded as similar when its indicator exceeds the threshold. There are many indicators for measuring the similarity between two patches [50, 55], such as the Euclidean distance, the mean relative error, normalized cross-correlation, and cosine coefficients. This work adopts the normalized cross-correlation defined as:
\\[Q=\\frac{\\sum\\limits_{j_{1}\\cdots j_{N}}(x_{j_{1},\\cdots,j_{N}}-\\mu_{\\boldsymbol {\\mathcal{X}}})(y_{j_{1},\\cdots,j_{N}}-\\mu_{\\boldsymbol{\\mathcal{Y}}})}{ \\sqrt{\\sum\\limits_{j_{1}\\cdots j_{N}}(x_{j_{1},\\cdots,j_{N}}-\\mu_{\\boldsymbol{ \\mathcal{X}}})^{2}}\\sqrt{\\sum\\limits_{j_{1}\\cdots j_{N}}(y_{j_{1},\\cdots,j_{N} }-\\mu_{\\boldsymbol{\\mathcal{Y}}})^{2}}},\\]
where \\(\\boldsymbol{\\mathcal{X}},\\boldsymbol{\\mathcal{Y}}\\in\\mathbb{R}^{J_{1}\\times \\cdots\\times J_{N}}\\), \\(\\mu_{\\boldsymbol{\\mathcal{X}}},\\mu_{\\boldsymbol{\\mathcal{Y}}}\\) are the mean values of \\(\\boldsymbol{\\mathcal{X}}\\) and \\(\\boldsymbol{\\mathcal{Y}}\\), respectively. The mean value of an \\(N\\)-order tensor \\(\\boldsymbol{\\mathcal{X}}\\) is defined as \\(\\mu_{\\boldsymbol{\\mathcal{X}}}:=\\frac{1}{N}\\sum_{j_{1},\\cdots,j_{N}}x_{j_{1}, \\cdots,j_{N}}\\).
After the search for similar patches is completed, these 3-order patches are stacked into a 4-order tensor \(\hat{\boldsymbol{\mathcal{Y}}}_{G}\in\mathbb{R}^{w\times w\times b\times n}\), where \(n\) (with a slight abuse of notation) denotes the number of similar patches in the group. The corresponding index set \(\hat{\boldsymbol{\Omega}}_{G}\) is obtained from the coordinates of the patches in \(\hat{\boldsymbol{\mathcal{Y}}}_{G}\).
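A minimal sketch of the similarity search is given below; `ncc` implements the normalized cross-correlation \(Q\), and `group_similar_patches` scans the search window and stacks the accepted patches into a \(w\times w\times b\times n\) group. Both names are ours, and in practice the missing entries should be excluded or pre-filled before computing \(Q\):

```python
import numpy as np

def ncc(x, y):
    # normalized cross-correlation Q between two equally sized patches
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.linalg.norm(xc) * np.linalg.norm(yc) + 1e-12
    return float(np.sum(xc * yc) / denom)

def group_similar_patches(Y_hat, i, j, w, r, gamma2):
    """Stack every patch in the search window of radius r around the target (i, j)
    whose NCC with the target exceeds gamma2 into a (w, w, b, n) group.
    Assumes the target patch lies fully inside the image."""
    m, N, b = Y_hat.shape
    target = Y_hat[i:i + w, j:j + w, :]
    patches, coords = [target], [(i, j)]
    for u in range(max(0, i - r), min(m - w, i + r) + 1):
        for v in range(max(0, j - r), min(N - w, j + r) + 1):
            if (u, v) == (i, j):
                continue
            cand = Y_hat[u:u + w, v:v + w, :]
            if ncc(target, cand) >= gamma2:
                patches.append(cand)
                coords.append((u, v))
    return np.stack(patches, axis=-1), coords
```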
### _Low-rank Reconstruction_
In this section, we propose a low-rank method to reconstruct the missing information in the 4-order tensor \\(\\hat{\\boldsymbol{\\mathcal{Y}}}_{G}\\) obtained in the last subsection. Different from the low-rank matrix methods which consider only one mode correlation, e.g., [46], NL-LRTC studies the low-rankness of \\(\\hat{\\boldsymbol{\\mathcal{Y}}}_{G}\\) from four aspects, i.e., NL-LRTC considers the correlations of mode-\\(i(i=1,2,3,4)\\) using rank\\((\\hat{\\boldsymbol{\\mathcal{Y}}}_{G})\\). In fact, the four dimensions of \\(\\hat{\\boldsymbol{\\mathcal{Y}}}_{G}\\) denote spatial, temporal, spectral, and patch similarity, respectively. As mentioned in the description about tensor rank previously with Fig. 1, rank\\((\\hat{\\boldsymbol{\\mathcal{Y}}}_{G})\\) takes advantage of all of the spatial, spectral, and temporal relationships. This can be found in Fig. 3 where NL-LRTC considers the low-rankness of the group of similar patches by analyzing the low-rankness of four unfolding matrices that can describe the correlations of the spatial, temporal, and spectral domains. In contrast, the group of PM-MTGSR only describes the spatial and temporal domains relationships (seen Fig. 3 of [50]). This is another difference between NL-LRTC and PM-MTGSR. Tensor nuclear norm and the corresponding algorithm (HaLRTC) were developed in order to make it possible to minimize the tensor rank [53]. However, the tensor nuclear norm cannot treat the different singular values accurately according to their different importance. For the proposed non-convex surrogate of tensor rank, the larger singular values that are associated with the major projection orientations and are more important can be shrunk less to preserve the major data components [56, 57]. This is one of the differences between HaLRTC and NL-LRTC. Another difference is that HaLRTC is without the patch strategy.
In the last section, \(\hat{\mathbf{\mathcal{Y}}}_{G}\) and \(\hat{\mathbf{\Omega}}_{G}\) were obtained. Next, we reconstruct the missing areas group by group by estimating the complete group \(\hat{\mathbf{\mathcal{X}}}_{G}\) with the following model:
\\[\\begin{split}\\underset{\\hat{\\mathbf{\\mathcal{X}}}_{G}}{\\text{min}} \\text{ logDet}(\\hat{\\mathbf{\\mathcal{X}}}_{G},\\mathbf{\\varepsilon})=\\sum_{i=1}^{4}\\alpha_ {i}L(\\hat{\\mathbf{X}}_{G_{(i)}})\\\\ \\text{s.t.}\\ \\ \\hat{\\mathbf{\\mathcal{X}}}_{G_{\\mathbf{\\Omega}_{G}}}=\\hat{\\mathbf{ \\mathcal{Y}}}_{G_{\\mathbf{\\Omega}_{G}}},\\end{split} \\tag{1}\\]
where \\(L(\\hat{\\mathbf{X}}_{G_{(i)}})=\\text{log det}((\\hat{\\mathbf{X}}_{G_{(i)}}\\hat{\\mathbf{X}}_{ G_{(i)}}^{T})^{1/2}+\\varepsilon_{i}\\mathbf{I}_{i})\\), \\(\\mathbf{I}_{i}\\) is the Identity matrix, \\(\\alpha_{i}\\)s are constants satisfying \\(\\alpha_{i}\\geq 0\\) and \\(\\sum_{i=1}^{N}\\alpha_{i}=1\\), \\(\\mathbf{\\varepsilon}=(\\varepsilon_{1},\\cdots,\\varepsilon_{N})^{T}>0\\), and \\(\\hat{\\mathbf{X}}_{G_{(i)}}\\) is the mode-\\(i\\) unfolding of \\(\\hat{\\mathbf{\\mathcal{X}}}_{G}\\).
The \\(L(\\hat{\\mathbf{X}}_{G_{(i)}})\\) can be approximated by using the first-order Taylor expansion:
\\[\\begin{split} L(\\hat{\\mathbf{X}}_{G_{(i)}})\\approx&\\sum _{j=1}^{J_{i}}\\frac{\\sigma_{j}(\\hat{\\mathbf{X}}_{G_{(i)}})}{\\sigma_{j}^{k}(\\hat{ \\mathbf{X}}_{G_{(i)}})+\\varepsilon_{i}}+\\text{constant}\\\\ =&(\\mathbf{\\omega}^{k})^{T}\\mathbf{\\sigma}+\\text{constant} \\stackrel{{\\triangle}}{{=}}L_{\\mathbf{\\omega}^{k}}(\\hat{\\mathbf{X}}_{G_{(i )}}),\\end{split} \\tag{2}\\]
where \\(\\sigma_{j}^{k}(\\hat{\\mathbf{X}}_{G_{(i)}})\\)s are the solutions obtained in the \\(k\\)-th iteration, \\(\\mathbf{\\omega}^{k}=(1/(\\sigma_{1}^{k}(\\hat{\\mathbf{X}}_{G_{(i)}})+\\varepsilon_{i}), \\cdots,1/(\\sigma_{j}^{k}(\\hat{\\mathbf{X}}_{G_{(i)}})+\\varepsilon_{i}))^{T}\\), \\(\\mathbf{\\sigma}=(\\sigma_{1}(\\hat{\\mathbf{X}}_{G_{(i)}}),\\cdots,\\sigma_{J_{i}}(\\hat{ \\mathbf{X}}_{G_{(i)}}))^{T}\\), and \\((J_{1},J_{2},J_{3},J_{4})=(w,w,b,n)\\). From the approximate function (2), we can see that the proposed function logDet indeed shrinks the larger singular values less.
Next, we present a computationally efficient algorithm that is based on the alternating direction method of multipliers (ADMM) [58, 59, 60] to solve the problem (1) by replacing \\(L(\\hat{\\mathbf{X}}_{G_{(i)}})\\) with \\(L_{\\mathbf{\\omega}^{k}}(\\hat{\\mathbf{X}}_{G_{(i)}})\\) and introducing some auxiliary values. Thus, the problem (1) can be rewritten as:
\\[\\begin{split}\\underset{\\hat{\\mathbf{X}}_{G},\\mathbf{\\mathcal{M}}_{1}, \\ldots,\\mathbf{\\mathcal{M}}_{4}}{\\text{min}}&\\mathbf{1}_{\\hat{\\mathbf{ \\mathcal{X}}}_{G}}^{\\hat{\\mathbf{\\Omega}}_{G}}(\\hat{\\mathbf{\\mathcal{X}}}_{G})+\\sum_{i=1 }^{4}\\alpha_{i}\\mathbf{L_{\\mathbf{\\omega}^{k}}}(\\mathbf{M}_{i,(i)})\\\\ &\\text{s.t.}\\ \\ \\ \\ \\ \\ \\mathbf{\\mathcal{M}}_{1}=\\hat{\\mathbf{\\mathcal{X}}}_{G}, \\cdots,\\mathbf{\\mathcal{M}}_{4}=\\hat{\\mathbf{\\mathcal{X}}}_{G},\\end{split} \\tag{3}\\]
where \\(\\mathbf{1}_{\\hat{\\mathbf{\\mathcal{X}}}_{G}}^{\\mathbf{\\Omega}_{G}}(\\hat{\\mathbf{\\mathcal{X}} }_{G})=0\\) if \\(\\hat{\\mathbf{\\mathcal{X}}}_{G_{\\mathbf{\\Omega}_{G}}}=\\hat{\\mathbf{\\mathcal{Y}}}_{G_{\\mathbf{ \\Omega}_{G}}}\\), otherwise \\(\\mathbf{1}_{\\hat{\\mathbf{\\mathcal{Y}}}_{G}}^{\\hat{\\mathbf{\\Omega}}_{G}}(\\hat{\\mathbf{ \\mathcal{X}}}_{G})=\\infty\\).
By attaching the Lagrangian multiplier \\(\\{\\Lambda_{i}\\}_{i=1}^{4}\\) that have the same size with \\(\\hat{\\mathbf{\\mathcal{X}}}_{G}\\) to the linear constraint, the augmented Lagrangian function of (3) is given by:
\\[\\begin{split}&\\mathcal{L}(\\hat{\\mathbf{\\mathcal{X}}}_{G},\\mathbf{\\mathcal{M}}_{1}, \\ldots,\\mathbf{\\mathcal{M}}_{4},\\Lambda_{1},\\ldots\\Lambda_{4})=\\mathbf{1}_{\\hat{\\mathbf{ \\mathcal{Y}}}_{G}}^{\\mathbf{\\Omega}_{G}}(\\hat{\\mathbf{\\mathcal{X}}}_{G})+\\\\ &\\sum_{i=1}^{4}\\left(\\alpha_{i}L_{\\mathbf{\\omega}^{k}}(\\mathbf{M}_{i,(i)})+ \\frac{\\beta}{2}\\|\\hat{\\mathbf{\\mathcal{X}}}_{G}-\\mathbf{\\mathcal{M}}_{i}+\\frac{\\Lambda _{i}}{\\beta}\\|_{F}^{2}\\right),\\end{split} \\tag{4}\\]
where \\(\\beta\\) is the penalty parameter for the violation of the linear constraints, and \\(\\langle\\cdot,\\cdot\\rangle\\) is the sum of the elements of the Hadamard product. Thus, \\(\\hat{\\mathbf{\\mathcal{X}}}_{G}\\) and \\(\\{\\mathbf{\\mathcal{M}}_{i}\\}_{i=1}^{4}\\) can be obtained by minimizing the augmented Lagrangian function (4), alternately. The Lagrangian multipliers are updated as \\(\\Lambda_{i}^{k+1}=\\Lambda_{i}^{k}+\\beta(\\hat{\\mathbf{\\mathcal{X}}}_{G}^{k+1}-\\mathbf{ \\mathcal{M}}_{i}^{k+1})\\) for \\(i=1,\\ldots,4\\).
Fig. 2: Illustration of the proposed NL-LRTC method. "Height" denotes one of the spatial modes, "Width" denotes the other spatial mode, and "Width\(\times\)Time" means that this mode contains information from both the spatial (width) and temporal modes. The proposed method comprises three steps: (1) reshape the 4-order observed data into 3-order data; (2) search and group similar patches; (3) reconstruct each group using the low-rank tensor completion method.
First, \\(\\hat{\\mathbf{X}}_{G}\\) is obtained by solving the following optimization subproblem:
\\[\\min_{\\hat{\\mathbf{X}}_{G}}\\left\\{\\mathbf{1}_{\\mathbf{\\mathcal{Y}}_{G}}^{\\mathbf{\\Omega}_{G}}( \\hat{\\mathbf{X}}_{G})+\\sum_{i=1}^{4}\\frac{\\beta}{2}\\|\\hat{\\mathbf{X}}_{G}-\\mathbf{\\mathcal{ M}}_{i}^{k}+\\frac{\\Lambda_{i}^{k}}{\\beta}\\|_{F}^{2}\\right\\}. \\tag{5}\\]
It is obvious that the objective function of (5) is differentiable, thus \\(\\hat{\\mathbf{X}}_{G}^{k+1}\\) has a closed form solution:
\\[\\hat{\\mathbf{X}}_{G}^{k+1}=\\frac{1}{4\\beta}\\left(\\sum_{i=1}^{4}(\\beta\\mathbf{\\mathcal{ M}}_{i}^{k}-\\Lambda_{i}^{k})\\right)_{\\hat{\\mathbf{\\Omega}}_{G}^{c}}+\\hat{\\mathbf{ \\mathcal{Y}}}_{G_{\\mathbf{\\Omega}_{G}}}, \\tag{6}\\]
where \\(\\hat{\\mathbf{\\Omega}}_{G}^{c}\\) is the complementary set of the indices set \\(\\hat{\\mathbf{\\Omega}}_{G}\\).
Next, \\(\\{\\mathbf{\\mathcal{M}}_{i}\\}_{i=1}^{4}\\)-subproblems are solved. Note that \\(\\mathbf{\\mathcal{M}}_{i}\\)-subproblems are independent, and thus we can solve them separately. Without loss of generality, the typical variable \\(\\mathbf{\\mathcal{M}}_{i}\\) is solved through the following problem:
\\[\\min_{M_{i,(i)}}\\frac{\\alpha_{i}}{\\beta}L_{\\mathbf{\\omega}^{k}}(\\mathbf{M}_{i,(i)})+ \\frac{1}{2}\\|\\mathbf{M}_{i,(i)}-\\hat{\\mathbf{X}}_{G_{(i)}}^{k+1}-\\frac{\\Lambda_{i,(i)} ^{k}}{\\beta}\\|_{F}^{2}, \\tag{7}\\]
where \\(\\mathbf{\\omega}^{k}=(1/(\\sigma_{1}(M_{i,(i)}^{k})+\\varepsilon_{i}),\\cdots,1/( \\sigma_{J_{i}}(M_{i,(i)}^{k})+\\varepsilon_{i}))^{T}\\), \\((J_{1},J_{2},J_{3},J_{4})=(w,w,b,n)\\). \\(\\mathbf{M}_{i,(i)}^{k+1}\\) can be obtained using a thresholding operator [56, 61],
\\[\\mathbf{M}_{i,(i)}^{k+1}=\\mathbf{U}^{k}(\\mathbf{\\Sigma}^{k}-\\tau\\text{diag}(\\mathbf{\\omega}^ {k}))_{+}(\\mathbf{V}^{k})^{T}, \\tag{8}\\]
where \\(\\mathbf{U}^{k}\\mathbf{\\Sigma}^{k}(\\mathbf{V}^{k})^{T}\\) is the SVD of \\(\\hat{\\mathbf{X}}_{G_{(i)}}^{k+1}+\\frac{1}{\\beta}\\Lambda_{i,(i)}^{k}\\) and \\((\\mathbf{X})_{+}=\\max\\{\\mathbf{X},0\\}\\). Thus, \\(\\mathbf{\\mathcal{M}}_{i}^{k+1}=\\text{fold}_{i}(\\mathbf{M}_{i,(i)}^{k+1})\\).
Based on the previous derivation, we develop the low-rank method to reconstruct missing information in multitemporal remotely sensed images, as outlined in **Algorithm 2**. Then the proposed NL-LRTC method is outlined in **Algorithm 1**.
```
0: Data \\(\\mathbf{\\mathcal{Y}}\\) and index set \\(\\mathbf{\\Omega}\\), radius of searching window \\(r\\), patch size \\(w\\), and indicator threshold \\(\\gamma_{2}\\), parameters \\(\\beta\\), \\(\\alpha\\), and \\(\\varepsilon\\).
1: Obtain the 3D tensors \\(\\hat{\\mathbf{\\mathcal{Y}}}\\) and \\(\\hat{\\mathbf{\\Omega}}\\) by rearranging the data \\(\\mathbf{\\mathcal{Y}}\\) and \\(\\mathbf{\\Omega}\\), respectively;
2:while\\(\\hat{\\mathbf{\\Omega}}^{c}\
eq 0\\)do
3: Find \\((i,j)\\) subject to \\(\\hat{\\mathbf{\\Omega}}_{i,j}=0\\), that means the pixel \\((i,j)\\) is covered by clouds;
4: Search the similar patches for patch \\(\\hat{\\mathbf{\\mathcal{Y}}}_{i,j}\\) in the searching window;
5: Stack these similar patches as a group \\(\\hat{\\mathbf{\\mathcal{Y}}}_{G}\\), and obtain the corresponding index set \\(\\hat{\\mathbf{\\Omega}}_{G}\\);
6: Estimate the missing pixel values in \\(\\hat{\\mathbf{\\mathcal{Y}}}_{G}\\) using Algorithm 2 and set \\(\\hat{\\mathbf{\\Omega}}_{G}=1\\);
7: Replace the corresponding entries in \\(\\hat{\\Omega}\\) and \\(\\hat{\\mathbf{\\mathcal{Y}}}\\) with \\(\\hat{\\mathbf{\\Omega}}_{G}\\) and \\(\\hat{\\mathbf{\\mathcal{X}}}_{G}\\), respectively.
8:endwhile
9: Recovered data \\(\\mathbf{\\mathcal{X}}\\).
```
**Algorithm 1** NL-LRTC for multitemporal remotely sensed images inpainting.
## IV Experiments and Discussion
### _Test Data_
The proposed reconstruction method, NL-LRTC, for multitemporal remotely sensed images is applied to three data images inpainting.
Fig. 3: Illustration for how to exploit the low-rankness of the group of similar patches using four unfolding matrices. The mode-1 unfolding is of size \\(w\\times wbn\\), the mode-2 unfolding is of size \\(w\\times wbn\\), the mode-3 unfolding is of size \\(b\\times w^{2}n\\), and the mode-4 unfolding is of size \\(n\\times w^{2}b\\).
sets for simulated and real-data experiments. The first data set was taken over Munich, Germany, by Landsat-8. The data set has nine bands, and three bands with 30-m resolution (red, green, blue) are used. The data set was acquired over the Munich suburbs (which consist of forests, mountains, hills, etc.) and includes six temporal images denoted as \"M102014\", \"M012015\", \"M022015\", \"M032015\", \"M042015\", and \"M082015\", where \"MXXYYYY\" means the data is taken over Munich in XX-th month YYYY-year; see Fig. 4. The second data set was taken over Beijing, China, by Sentinel-2 with six spectral bands at a ground sampling distance of 20 meters (bands 5, 6, 7, 8A, 11, 12). The data set was acquired over the Beijing suburbs (which consist of villages, mountains, etc.) and includes five temporal images denoted as \"BJ122015\", \"BJ032016\", \"BJ072016\", \"BJ082016\", and \"BJ092016\", where \"BJXXYYYY\" means the data is taken over Beijing in XX-th month YYYY-year; see Fig. 5. The third data set was taken over Eure, France, by Sentinel-2 and atmospheric correction has been processed by MAYA [62]. The data set includes four temporal images and four spectral bands at a ground sampling distance of 10 meters (bands 2, 3, 4, and 8), see Fig. 6. In our experiments, the observed multitemporal remotely sensed data contain four different temporal data, namely, the observed tensor \\(\\mathcal{Y}\\) is of size \\(m\\times n\\times b\\times 4\\). For the first data set, \"M032015\", \"M042015\", and \"M082015\" are the reference data; four subimages of \"M012015\" and \"M022015\" are used for the simulated experiments in that the size of the tested images is \\(512\\times 512\\) in the spatial domain; \"M102014\" is used for the real experiment in that the size of the tested images is \\(1080\\times 1920\\) in the spatial domain. For the Munich area, the surface reflectance is changed with time due to snow, seasonal change of vegetation, etc. Thus, \"M012015\", \"M022015\", and \"M102014\" are greatly different from the other reference data. We also study how NL-LRTC performs when the temporal difference is not so great with the second data set. For this data set, \"BJ092016\" and \"BJ072016\" are used for simulated and real experiment, respectively. The other temporal data are as reference data. The size of tested Beijing images is \\(256\\times 256\\) in the spatial domain. We test another real experiment with \"EU082017\" in the third data set whose size is \\(400\\times 400\\) in the spatial domain.
### _Performance Evaluation_
In the simulated experiments, the performance of multitemporal remotely sensed images reconstruction is quantitatively evaluated by peak signal-to-noise ratio (PSNR) [21], structural similarity (SSIM) index [63], metric Q [64], average gradient (AG) [65], and blind image quality assessment (BIQA) [66]. The PSNR and SSIM assess the recovered image by comparing it with the original image from the gray-level fidelity and the structure-level fidelity aspects, respectively. The metric Q, AG, and BIQA assess the recovered image without the reference image based on the human vision system. Given a reference image \\(\\tilde{\\mathbf{X}}\\in\\mathbb{R}^{m\\times n}\\), the PSNR of a reconstructed image \\(\\mathbf{X}\\in\\mathbb{R}^{m\\times n}\\) is computed by the standard formula
\\[\\text{PSNR}(\\mathbf{X},\\tilde{\\mathbf{X}})=10\\log_{10}\\frac{N\\tilde{\\mathbf{X}}_{\\text{ max}}^{2}}{\\|\\tilde{\\mathbf{X}}-\\mathbf{X}\\|_{F}^{2}}, \\tag{9}\\]
where \\(N=mn\\) denotes the number of the pixels in the image, and \\(\\tilde{\\mathbf{X}}_{\\text{max}}\\) is the maximum pixel value of the original image. The SSIM of the estimated image \\(\\mathbf{X}\\) is defined by
\\[\\text{SSIM}(\\mathbf{X},\\tilde{\\mathbf{X}})=\\frac{(2\\mu_{\\mathbf{X}}\\mu_{\\tilde{\\mathbf{X}}}+c_ {1})(2\\sigma_{\\mathbf{X}\\tilde{\\mathbf{X}}}+c_{2})}{(\\mu_{\\mathbf{X}}^{2}+\\mu_{\\tilde{\\mathbf{X} }}^{2}+c_{1})(\\sigma_{\\mathbf{X}}^{2}+\\sigma_{\\tilde{\\mathbf{X}}}^{2}+c_{2})}, \\tag{10}\\]
where \\(\\mu_{\\mathbf{X}}\\) and \\(\\mu_{\\tilde{\\mathbf{X}}}\\) represent the average gray values of the recovered image \\(\\mathbf{X}\\) and the original clear image \\(\\tilde{\\mathbf{X}}\\), respectively. \\(\\sigma_{\\mathbf{X}}\\) and \\(\\sigma_{\\tilde{\\mathbf{X}}}\\) represent the standard deviation of \\(\\mathbf{X}\\) and \\(\\tilde{\\mathbf{X}}\\), respectively. \\(\\sigma_{\\mathbf{X}\\tilde{\\mathbf{X}}}\\) represents the covariance between \\(\\mathbf{X}\\) and \\(\\tilde{\\mathbf{X}}\\). The metric Q of an image is defined by
\\[\\text{Q}(\\mathbf{X})=s_{1}\\frac{s_{1}-s_{2}}{s_{1}+s_{2}}, \\tag{11}\\]
where \\(s_{1}\\) and \\(s_{2}\\) are two singular values of the gradient matrix of the image \\(\\mathbf{X}\\). The AG is computed by
\\[\\text{AG}(\\mathbf{X})=\\frac{1}{(m-1)(n-1)}\\sum_{i=1}^{m-1}\\sum_{j=1}^{n-1}\\sqrt{( \\Delta_{1}x_{i,j}^{2}+\\Delta_{2}x_{i,j}^{2})/2}, \\tag{12}\\]
where \\(\\Delta_{1}x_{i,j}\\) and \\(\\Delta_{2}x_{i,j}\\) are the first differences along both directions, respectively. The BIQA can be calculated
Fig. 4: Data set taken by Landsat-8. βMXXYYYYβ means the image is taken over Munich in XX-th month YYYY-year.
Fig. 5: Band 6 of Beijing data. βBIXXYYYYβ means the image is taken over Beijing in XX-th month YYYY-year.
Fig. 6: RGB bands (bands 4, 3, and 2) of Eure data. βEUXXYYYYβ denotes the image taken over Eure in XX-th month YYYY-year.
according to [66] and its code is available online1. In the real experiments, the performance is quantitatively evaluated by the metric Q, AG, and BIQA. In our experiments, the PSNR, SSIM, Q, AG, and BIQA values of a multispectral image are the average values of those for all bands. For all the five indicators, the larger the value, the better the results.
Footnote 1: [https://cn.mathworks.com/matlabcentral/fileexchange/30800-blind-image-quality-assessment-through-anisotropy](https://cn.mathworks.com/matlabcentral/fileexchange/30800-blind-image-quality-assessment-through-anisotropy)
Without any special instructions, the parameters are set as following: the number of time series \\(t=4\\), patch size \\(w=4\\) (patch size \\(w\\) should be multiples of \\(t\\) due to the rearrangement procedure), indicator thresholding value \\(\\gamma_{2}\\) is 0.91, radius of searching windows \\(r=100\\) (\\(r/t>20\\) is recommended to make the search region large enough), searching step is 2 (half of patch size \\(w\\)), penalty parameter \\(\\beta=1\\) or \\(10\\), and \\(\\varepsilon=10^{-4}\\), \\(10^{-2}\\) for the images whose value ranges are \\([0,255]\\) and \\([0,1]\\) respectively. Three of the most advanced missing information reconstruction methods, HaLRTC [53], ALM-IPG [46], and PM-MTGSR [50], are compared with NL-LRTC. The parameters for these three compared methods are tuned to maximize the reconstruction PSNR value for each data set. All the experiments are performed under Windows 10 and Matlab Version 9.0.0.341360 (R2016a) running on a desktop with an Inter(R) Core(TM) i7-6700 CPU at 3.40 GHz and 16 GB of memory.
### _Simulated Experiments_
In this section, the simulated experiments are presented to test NL-LRTC. The test data of Munich are \"M012015\" and \"M022015\". To assess the performance of NL-LRTC fully and efficiently, three subimages of \"M012015\" and one subimage of \"M022015\" shown in Fig. 7 are tested in the simulated experiments. These subimages are of size \\(512\\times 512\\times 3\\). The structure and details of these subimages are different: the data set \"Image 1\" is cut from Fig. 4(b) (red square) and corresponding areas of Fig. 4(d)-(f) and mostly contains vegetation areas with relatively low contrast due to flat terrains; \"Image 2\" is cut from Fig. 4(b) (green square) and corresponding areas of Fig. 4(d)-(f) and mainly contains mountains, hills, and rivers; and \"Image 3\" and \"Image 4\" are extracted from Fig. 4(b) (blue square) and (c) (white square), respectively, and the corresponding areas of Fig. 4(d)-(f). The data sets \"Image 3\" and \"Image 4\" contain both characteristics of \"Image 1\" and \"Image 2\". The simulated clouds and stripes removal results are shown in Fig. 8, where Exps. 1-4 are for clouds removal and Exps. 5 and 6 are for stripes removal.
results of band 6 reconstructed by the four methods for Exp. 4 are shown in Fig. 8. The Exp. 4 shows that the results by all the four methods are almost visually similar. One can get a similar conclusion from the scatter plots shown in Fig. 10. The points on the scatter plots of all the four methods are mostly distributed surrounding the blue diagonal, but there are also a few points deviating from the diagonal line. In this experiment, we can see that the results obtained by ALM-IPG are improved compared to Exp. 1-3 because ALM-IPG mainly studies the smoothness of adjacent temporal data. In conclusion, when the
Fig. 8: Simulated experiments, Exps. 1β6, for clouds and stripes removal. Exps. 1β3 are cloud removal for βImages 1β3β, respectively; Exp. 4 is cloud removal for Beijing data (results for band 6 are shown in this figure); Exps. 5 and 6 are stripes removal for βImage 3β and βImage 4β, respectively.
temporal difference is not large, all the four methods perform in a similarly effective manner.
Besides the cloud removal experiments, the destriping experiments (Exps. 5 and 6) are also conducted using \"Image 3\" and \"Image 4\" shown in Fig. 7: the first column is the simulated corrupted data, and the other three columns are the supplementary data. Some regular diagonal and random vertical stripes are manually added into \"Image 3\" and \"Image 4\", respectively, as shown in Fig. 8. It is obvious that the results by NL-LRTC are the best visually. The scatter plots comparing the original and reconstructed pixel values in the missing areas for Exps. 5 and 6 are shown in Fig. 10. The points on the scatter plot of HaLRTC and ALM-IPG results obviously deviate from the diagonal. The points for PM-MTGSR and NL-LRTC are better, but the points of PM-MTGSR distributed in the direction orthogonal to the diagonal line are wider than those of NL-LRTC. That means the scatter plots of our method are the best. Overall, the proposed method obtains the best results for the removal of stripes.
The quantitative comparison is shown in Tab. II. The table shows that all the four methods can recover better results compared to the corrupted image itself. PSNR and SSIM evaluate the recovered image by comparing with the ground truth image. By analyzing the PSNR and SSIM results, HaLRTC obtains the worst results for all the experiments. This is because HaLRTC only takes advantage of the low-rankness of the observed data. ALM-IPG obtains better results than HaLRTC, because it considers the low-rankness and temporal continuous property simultaneously. However, it assumes the smoothness of adjacent temporal images, the results depend on the similarity of the adjacent temporal images. This algorithm is suited to process high-temporal-resolution images, such as videos. PM-MTGSR obtains better results than HaLRTC and ALM-IPG because it makes use of the patch similarity. NL-LRTC also takes the patch similarity into consideration and makes the best use of low-rankness of the three different dimensions. Thus NL-LRTC obtains the best results. For Exp. 4, since the difference between the cloud-contained and reference data is not great, the result of ALM-IPG is better than those of Exps. 1-3. Although the PSNR value of PM-MTGSR is higher than that of NL-LRTC, the difference is slight. Moreover, the SSIM value of NL-LRTC is higher than that of PM-MTGSR. The Q, AG, and BIQA results also show NL-LRTC obtains the best results for Exps. 1-4. The elapsed times of PM-MTGSR and NL-LRTC are longer than those of HaLRTC and ALM-IPG because they are patch-based methods in which searching similar patches costs much more time. NL-LRTC is much faster than PM-MTGSR because PM-MTGSR processes multispectral images band by band, while NL-LRTC reconstructs the missing areas of all band at the same time. In conclusion, the proposed method performs the best for cloud and stripe removal when the temporal difference is large and can also obtain promising results when the temporal difference is slight.
Next, two simulated data containing more than one piece of cloud are tested. The recovered results are shown in Fig. 11, which are for the \"Image 3\" taken by Landsat-8 (Exp. 7) and Beijing data taken by Sentinel-2 (Exp. 8). Exp. 7 and Exp. 8 perform the similar results with Exps. 1-3. and Exp. 4, respectively. The results recovered by PM-MTGSR and NL-LRTC in Exp. 7 are visually similar, but are visually better than those reconstructed by HaLRTC and ALM-IPG. Exp. 8 shows all the four methods obtain the visually similar results. Tab. III shows that, for both Exps. 7 and 8, NL-LRTC obtains the best quantitative results.
At last, we analyze the impact of the number of time series on the reconstruction performance by changing the number (\\(t=2,4,\\cdots,16\\)). The test data were taken over Munich by Landsat-8 on between December, 2014 and April, 20172. The SSIM and PSNR values with respect to the number of time series are displayed in Fig. 12. This figure shows that reconstruction results are becoming better with increasing of the number of time series. When the number of time series reach to a large amount, the PSNR and SSIM values reach to the highest with a little fluctuation. This is because more temporal data not only provide more correlative information but also contain more interference information especially when the acquisition times of the cloud-contained and reference data are far form each other.
Footnote 2: The sixteen test data were taken during four years: 2014 (December), 2015 (January-April, June, August, October), 2016 (April-September), and 2017 (January, February).
### _Real Experiments_
In this section, real-data experiments are undertaken. The experimental data are \"M102014\", \"BJ072016\", and \"EU082017\". The cloud detection is not our focus in this work and is complex for different kinds of atmospheric conditions. For the Landsat data \"M102014\", the cloud is detected via a modified version of the thresholding-based cloud detection method proposed in [46]; see Appendix A for more details. Beijing data \"BJ072016\" contains shadows that cannot be detected by a simple thresholding method. Thus, for \"BJ072016\",
Fig. 9: Zoom results of PM-MTGSR and NL-LRTC for Exps. 1β3. From left to right: original data, corrupted data, zoom part of results reconstructed by PM-MTGSR and NL-LRTC, respectively. From top to bottom: zoom results for Exps. 1, 2, 3, respectively. For the corrupted data, the black areas are missing.
the mask for clouds and their shadows is manually drawn. For the Eure data \"EU082017\", the cloud detection is processed by MAYA [62]. The corresponding recovery results are shown in Figs. 13, 14, and 15. For \"M102014\" (see Fig. 13), the color composite images of reconstruction areas obtained by HaLRTC, ALM-IPG, and PM-MTGSR are obviously different from the known areas. The reconstruction area of NL-LRTC shows a more natural visual effect. For \"BJ072016\" (see Fig. 14), the recovery results by ALM-IPG have obvious stairs in the edge of missing and known areas. HaLRTC and PM-MTGSR fail in reconstructing clear details. NL-LRTC shows better results containing more clear details and being more natural compared to the known area. For \"EU082017\" (see Fig. 15), the recovery results by HaLRTC, ALM-IPG, and PM-MTGSR are visually worse than that by NL-LRTC, that means the missing areas recovered by NL-LRTC are in harmony with the know areas. Moreover, quantitative results for Figs. 13, 14,
data, all the three index values (Q, AG, and BIQA) for NL-LRTC are better than those for HaLRTC, ALM-IPG, and PM-MTGSR. For the Landsat-8 real data, the Q and BIQA values for NL-LRTC are worse than those for HaLRTC, ALM-IPG, and PM-MTGSR. However, the difference is slight. The AG value for NL-LRTC is better than the other three compared methods. The real-data experiments also demonstrate that the proposed method is effective.
## V Conclusion
In this paper, a non-local low-rank tensor completion (NL-LRTC) method has been proposed to reconstruct the missing
Fig. 11: Cloud removal results for βImage 3β and Beijing data containing more than one piece of cloud. Exp. 7 is for βImage 3β and Exp. 8 is for Beijing data.
Fig. 12: SSIM and PSNR values with respect to the numbers of time series.
information in the multitemporal remotely sensed images. By proposing a non-convex approximation for tensors rank, all the three domains (spatial, spectral, and temporal) relationships were considered in NL-LRTC. To take advantage of the spatial correlations, we grouped the 3-order similar patches into a 4-order tensor and considered the tensor low-rankness. Because NL-LRTC made use of the global correlations of all the three domains, it is good at processing not only the temporally contiguous data but also the data that have large differences between the adjacent temporal images regarding the characteristics and conditions of the Earth's surface. In the simulations with various image data sets, NL-LRTC showed comparable or better results than HaLRTC, ALM-IPG, and PM-MTGSR, which are three of the state-of-the-art algorithms. For the real-data experiments, our method obtained visually more natural and quantitatively better reconstruction results.
## Appendix A Cloud Detection
In this section, we present an automatic thresholding method for cloud detection motivated by the algorithm of [46]. This method assumes that most cloud values in the remotely sensed images are larger than other cloud free values, i.e., clouds are predominantly white. Given a 4-order observation remotely sensed image \\(\\boldsymbol{\\mathcal{Y}}\\in\\mathbb{R}^{m\\times n\\times b\\times t}\\), where \\(m\\times n\\) denotes the number of pixels of remotely sensed images, \\(b\\) denotes the number of spectral channels of remote sensors, and \\(t\\) is the number of time series. In this research, we talk about how to restore the image at one time according to the other times. Suppose \\(\\boldsymbol{\\mathcal{Y}}^{t_{1}}\\) taken at time \\(t_{1}\\) is the cloud contained image, then the other images (denoted as \\(\\boldsymbol{\\mathcal{Y}}^{t_{1}}\\)) are the references. The cloud detector is to produce a set of indices \\(\\boldsymbol{\\Omega}\\in\\mathbb{R}^{m\\times n\\times b\\times t}\\), where the position \\((i,j,k,l)\\in\\mathbb{Z}^{n}\\times\\mathbb{Z}^{n}\\times\\mathbb{Z}^{b}\\times \\mathbb{Z}^{l}\\) is covered by cloud if \\(\\boldsymbol{\\Omega}_{i,j,k,l}=0\\) and is cloud free if \\(\\boldsymbol{\\Omega}_{i,j,k,l}=1\\). Note that all the spectral bands of the practical remotely sensed images taken at the same position and same period will be covered by the same clouds, i.e, \\(\\boldsymbol{\\Omega}(i,j,k,t_{1})=0,\\forall k\\in\\{1,2,\\cdots,b\\}\\) if exist \\(k_{1}\\in\\{1,\\cdots,b\\}\\) subject to \\(\\boldsymbol{\\Omega}(i,j,k_{1},t_{1})=0\\).
In this research, there are some other cloud free references. The key point of the detection method is to maximize the similarity of cloud contained and free images in the existing
Fig. 14: Results for real Sentinel-2 data taken over Beijing.
Fig. 13: Results for real Landsat experiment.
Fig. 15: Results for real Sentinel-2 data taken over Eure, France. In this figure, the images are shown in color format using bands 4, 3, and 2.
region \\(\\mathbf{\\Omega}\\). Given a similar function \\(f(\\mathbf{x},\\mathbf{y})\\), find the indices set \\(\\mathbf{\\Omega}\\) by optimizing the following problem:
\\[\\mathbf{\\tilde{\\Omega}}=\\max_{\\mathbf{\\Omega}}f(\\mathbf{\\mathcal{Y}}_{\\mathbf{ \\Omega}}^{t_{1}},\\mathbf{\\mathcal{Y}}_{\\mathbf{\\Omega}}^{t_{1}}). \\tag{13}\\]
The function can be any similarity functions, such as correlation coefficients, cosine coefficients, generalized Dice coefficients, and generalized Jaccard coefficients. Here, we use the correlation coefficients. Besides the mentioned method (13), one can also minimize the distance function such as Euclidean distance, the mean absolute error (MAE), and the mean relative error (MRE) to get the indices set \\(\\mathbf{\\Omega}\\). The thresholding produce is detailedly summarized in Algorithm 3.
```
0: temporal sequence of cloudy images \\(\\mathbf{\\mathcal{Y}}\\), parameter step for increase the thresholding value \\(s\\).
1: Obtain the initial indices set \\(\\mathbf{\\Omega}_{0}=\\mathbf{\\mathcal{Y}}^{t_{1}}>0\\);
2: Set \\(\\mathbf{\\tilde{\\Omega}}=\\mathbf{\\Omega}_{0}\\) and \\(\\gamma_{1}=0\\);
3:while\\(f(\\mathbf{\\mathcal{Y}}_{\\mathbf{\\Omega}}^{t_{1}},\\mathbf{\\mathcal{Y}}_{\\mathbf{ \\Omega}}^{t_{2}})\\) increase do
4: Update the thresholding value \\(\\gamma_{1}=\\gamma_{1}+s\\);
5: Calculate the correlation \\(f(\\mathbf{\\mathcal{Y}}_{\\mathbf{\\Omega}}^{t_{1}},\\mathbf{\\mathcal{Y}}_{\\mathbf{ \\Omega}}^{t_{2}})\\).
6:endwhile
7:\\(\\mathbf{\\tilde{\\Omega}}\\): initial guess for index set.
```
**Algorithm 3** Thresholding method.
The above described thresholding method regards the stationary white background as the cloud, as seen in Fig. 17. It is better that these white objects remain in the reconstructed images. Fortunately, the clouds usually cover a big continuous area, this fact motivates us to delete the discrete points. To this end, we propose a KNN-like method. In detail, the pixel in \\(\\mathbf{\\tilde{\\Omega}}^{c}\\) is not regarded as cloud if most of its surrounding pixels are not cloud, i.e., most of its neighbor pixels are in \\(\\mathbf{\\tilde{\\Omega}}\\). The procedure for finding the white background is shown in Fig. 16.
The two stages of cloud detection is summarized in Algorithm 4 below.
```
0: temporal sequence of cloudy images \\(\\mathbf{\\mathcal{Y}}\\), parameter thresholding value \\(\\gamma_{1}\\), and parameter \\(r\\) in the similar k-nearest-neighbors search.
1: Obtain the initial guess \\(\\mathbf{\\Omega}\\) via Algorithm 3: \\(\\mathbf{\\Omega}=\\mathbf{\\tilde{\\Omega}}\\);
2:for\\((m,n,:,l)\
otin\\mathbf{\\tilde{\\Omega}}\\)do
3: Extract the patch \\(\\mathbf{P}\\) with the center \\((m,n,:,l)\\) and a radius of \\(r_{1}\\): \\[\\mathbf{P}(i,j,:,l)=\\mathbf{\\Omega}(i,j,:,l),\\] where \\(\\|(i,j)-(m,n)\\|_{\\infty}<r_{1}\\);
4: Calculate the percent (\\(p\\)) of the cloud pixels number;
5:if\\(p<0.5\\)then
6:\\(\\mathbf{\\Omega}=\\mathbf{\\Omega}\\cup\\{(m,n,:,l)\\}\\).
7:endif
8:endfor
9:\\(\\mathbf{\\Omega}\\): index set of non-cloudy pixels.
```
**Algorithm 4** Cloud detection.
## Acknowledgment
The authors would like to thank Prof. H. Shen and X. Li from Wuhan University for sharing the codes of the PM-MTGSR method.
## References
* [1] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, \"Sparse unmixing of hyperspectral data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 49, no. 6, pp. 2014-2039, 2011.
* [2] M.-D. Iordache, J. M. Bioucas-Dias and A. Plaza, \"Total variation spatial regularization for sparse hyperspectral unmixing,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 11, pp. 4484-4502, 2012.
* [3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, \"Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 5, no. 2, pp. 354-379, 2012.
* [4] X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons, \"Deblurring and sparse unmixing for hyperspectral images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 51, no. 7-1, pp. 4045-4058, 2013.
* [5] N. Yokoya, T. Yairi, and A. Iwasaki, \"Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 2, pp. 528-537, Feb 2012.
* [6] N. Yokoya, J. Chanussot, and A. Iwasaki, \"Nonlinear unmixing of hyperspectral data using semi-nonnegative matrix factorization,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 2, pp. 1430-1437, Feb 2014.
Fig. 16: Modified cloud detection procedure. The black pixels denote 0 (cloud), others denote 1. In this figure, \\(r\\)=3. For the red target pixel, it should be the white background rather than cloud, because in the search window, most pixels are 1. While for the blue target pixel, it should be cloud.
Fig. 17: Illustration of the proposed cloud detection procedure. (a) cloud contained image, (b) removing the detected cloud, (c) cloud detected via Alg. 3, (c) cloud detected via Alg. 4.
* [7] J. Bieniarz, E. Aguilera, X. X. Zhu, R. Muller, and P. Reinartz, \"Joint Sparsity Model for Multilook Hyperspectral Image Umixing,\"_IEEE Geoscience and Remote Sensing Letters_, vol. 12, no. 4, pp.696-700, 2014.
* [8] D. Lunga, S. Prasad, M. M. Crawford, and O. Ersoy, \"Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning,\" _IEEE Signal Processing Magazine_, vol. 31, no. 1, pp. 55-66, Jan 2014.
* [9] Y. Tarabalka, M. Fauvel, J. Chanussot, and J. A. Benediktsson, \"SVM-and MRF-based method for accurate classification of hyperspectral images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 7, no. 4, pp. 736-740, Oct 2010.
* [10] J. C. Harsanyi and C.-I. Chang, \"Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 32, no. 4, pp. 779-785, Jul 1994.
* [11] T. Matsuki, N. Yokoya, and A. Iwasaki, \"Hyperspectral tree species classification of Japanese complex mixed forest with the aid of lidar data,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 8, no. 5, pp. 2177-2187, May 2015.
* [12] P. Ghamisi, R. Souza, J. A. Benediktsson, X. X. Zhu, L. Rittner, and R. A. Lottou, \"Extinction profiles for the classification of remote sensing data,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 10, pp. 5631-5645, Oct 2016.
* [13] L. Mou, P. Ghamisi, X. Zhu, \"Deep Recurrent Neural Networks for Hyperspectral Image Classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no. 5, pp. 3639-3655, 2017.
* [15] D. Manolakis, D. Marden, and G. A. Shaw, \"Hyperspectral image processing for automatic target detection applications,\" _Lincoln Laboratory Journal_, vol. 14, no. 1, pp. 79-116, 2003.
* [16] R. M. Willett, M. F. Duarte, M. A. Davenport, and R. G. Baraniuk, \"Sparsity and structure in hyperspectral imaging : Sensing, reconstruction, and target detection,\" _IEEE Signal Processing Magazine_, vol. 31, no. 1, pp. 116-126, Jan 2014.
* [17] N. M. Nasrabadi, \"Hyperspectral target detection : An overview of current and future challenges,\" _IEEE Signal Processing Magazine_, vol. 31, no. 1, pp. 34-44, Jan 2014.
* [18] N. Yokoya, N. Miyamura, and A. Iwasaki, \"Detection and correction of spectral and spatial misregistrations for hyperspectral data using phase correlation method,\" _Applied Optics_, vol. 49, no. 24, pp. 4568-4575, Aug 2010.
* [19] N. Yokoya and A. Iwasaki, \"Object detection based on sparse representation and hough voting for optical remote sensing imagery,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 8, no. 5, pp. 2053-2062, May 2015.
* [20] M. Shahzad and X. X. Zhu, \"Automatic detection and reconstruction of 2-D3-D building shapes from spaceborne TomsAR point clouds,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 3, pp. 1292-1310, March 2016.
* [21] H. Shen, X. Li, L. Zhang, D. Tao, and C. Zeng, \"Compressed sensing-based inpainting of Aqua moderate resolution imaging spectroradiometer band 6 using adaptive spectrum-weighted sparse bayesian dictionary learning,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 2, pp. 894-906, 2014.
* [22] L. Wang, J. J. Qu, X. Xiong, X. Hao, Y. Xie, and N. Che, \"A new method for retrieving band 6 of Aqua MODIS,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 3, no. 2, pp. 267-270, April 2006.
* 194, 2013.
* [24] H. Shen, X. Li, Q. Cheng, C. Zeng, G. Yang, H. Li, and L. Zhang, \"Missing information reconstruction of remote sensing data: A technical review,\" _IEEE Geoscience and Remote Sensing Magazine_, vol. 3, no. 3, pp. 61-85, 2015.
* [25] C.-H. Lin, K.-H. Lai, Z.-B. Chen, and J.-Y. Chen, \"Patch-based information reconstruction of cloud-contaminated multitemporal images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 1, pp. 163-174, Jan 2014.
* [26] C. Yu, L. Chen, L. Su, M. Fan, and S. Li, \"Kriging interpolation method and its application in retrieval of MODIS across optical depth,\" in _19th International Conference on Geoinformatics_, June 2011, pp. 1-6.
* [27] A. Maalouf, P. Carre, B. Augereau, and C. Fernandez-Maloigne, \"A bandlet-based inpainting technique for clouds removal from remotely sensed images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 47, no. 7, pp. 2363-2371, July 2009.
* [28] C. Ballester, M. Bertallino, V. Caselles, G. Sapiro, and J. Verdera, \"Filling-in by joint interpolation of vector fields and gray levels,\" _IEEE Transactions on Image Processing_, vol. 10, no. 8, pp. 1200-1211, Aug 2001.
* [29] A. Bugeau, M. Bertallino, V. Caselles, and G. Sapiro, \"A comprehensive framework for image inpainting,\" _IEEE Transactions on Image Processing_, vol. 19, no. 10, pp. 2634-2645, Oct 2010.
* [30] W. He, H. Zhang, L. Zhang, and H. Shen, \"Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 1, pp. 178-188, 2016.
* [31] Q. Cheng, H. Shen, L. Zhang, and P. Li, \"Impainting for remotely sensed images with a multichannel nonlocal total variation model,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 1, pp. 175-187, 2014.
* [32] H. Shen and L. Zhang, \"A MAP-based algorithm for destriping and inpainting of remotely sensed images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 47, no. 5, pp. 1492-1502, May 2009.
* 286, 2010.
* [34] A. Criminisi, P. Perez, and K. Toyama, \"Region filling and object removal by exemplar-based image inpainting,\" _IEEE Transactions on Image Processing_, vol. 13, no. 9, pp. 1200-1212, Sept 2004.
* [35] A. A. Efros and T. K. Leung, \"Texture synthesis by non-parametric sampling,\" in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_, 1999.
* [36] P. Rakwatin, W. Takeuchi, and Y. Yasuoka, \"Restoration of Aqua MODIS band 6 using histogram matching and local least squares fitting,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 47, no. 2, pp. 613-627, Feb 2009.
* [37] H. Shen, C. Zeng, and L. Zhang, \"Recovering reflectance of AQUA MODIS band 6 based on within-class local fitting,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 4, no. 1, pp. 185-192, March 2011.
* [38] I. Gladkova, M. D. Grossberg, F. Shahriar, G. Bonev, and P. Romanov, \"Quantitative restoration for MODIS band 6 on Aqua,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 6, pp. 2409-2416, June 2012.
* [39] X. Li, H. Shen, L. Zhang, H. Zhang, and Q. Yuan, \"Deal pixel completion of aqua MODIS band 6 using a robust M-estimator multiregression,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 11, no. 4, pp. 768-772, April 2014.
* [40] C. Zeng, H. Shen, M. Zhong, L. Zhang, and P. Wu, \"Reconstructing MODIS LST based on multitemporal classification and robust regression,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 12, no. 3, pp. 512-516, March 2015.
* 625, 2010.
* [42] W. Zhu, Y. Pan, H. He, L. Wang, M. Mou, and J. Liu, \"A changing-weight filter method for reconstructing a high-quality NDVI time series to preserve the integrity of vegetation phenology,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 4, pp. 1085-1094, April 2012.
* [43] A. Savitzky and M. J. E. Golay, \"Smoothing and differentiation of data by simplified least squares procedures.\" _Analytical Chemistry_, vol. 36, no. 8, pp. 1627-1639, 1964.
* [44] L. Lorenzi, F. Melgani, and G. Mercier, \"Missing-area reconstruction in multispectral images under a compressive sensing perspective,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 51, no. 7, pp. 3998-4008, July 2013.
* [45] X. Li, H. Shen, L. Zhang, H. Zhang, Q. Yuan, and G. Yang, \"Recovering quantitative remote sensing products contaminated by thick clouds and shadows using multitemporal dictionary learning,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 11, pp. 7086-7098, Nov 2014.
* [46] J. Wang, P. A. Olsen, A. R. Conn, and A. C. Lozano, \"Removing clouds and recovering ground observations in satellite image sequences via temporally contiguous robust matrix completion,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016.
- 68, 2014.
* [48] S. Benabdlekader and F. Melgani, \"Contextual spatiospectral postreconstruction of cloud-contaminated images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 5, no. 2, pp. 204-208, April 2008.
* [49] X. Li, H. Shen, L. Zhang, and H. Li, \"Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 106, pp. 1-15, 2015.
* [50] X. Li, H. Shen, H. Li, and L. Zhang, \"Patch matching-based multi-tiemporant group sparse representation for the missing information reconstruction of remote-sensing images,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 9, no. 8, pp. 3629-3641, 2016.
* [51] J. Kang, Y. Wang, M. Schmitt, X. Zhu, \"Object-based Multipass InSAR via Robust Low Rank Tensor Decomposition,\" _IEEE Transactions on Geoscience and Remote Sensing_, in press.
* [52] T. G. Kolda and B. W. Bader, \"Tensor decompositions and applications,\" _SIAM Review_, vol. 51, no. 3, pp. 455-500, 2009.
* [53] J. Liu, P. Musialski, P. Wonka, and J. Ye, \"Tensor completion for estimating missing values in visual data,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 35, no. 1, pp. 208-220, 2013.
* [54] T.-Y. Ji, T.-Z. Huang, X.-L. Zhao, T.-H. Ma, and G. Liu, \"Tensor completion using total variation and low-rank matrix factorization,\" _Information Sciences_, vol. 26, pp. 243-257, 2016.
* [55] M. M. Dera and E. Deza, \"In Encyclopedia of Distances. Springer, 2009, pp. 1-583.
* [56] T.-Y. Ji, T.-Z. Huang, X.-L. Zhao, T.-H. Ma, and L.-J. Deng, \"A non-convex tensor rank approximation for tensor completion,\" _Applied Mathematical Modelling_, vol. 48, pp. 410-422, 2017.
* [57] S. Gu, L. Zhang, W. Zuo, and X. Feng, \"Weighted nuclear norm minimization with application to image denoising,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2014.
* [58] Z. Lin, M. Chen, and Y. Ma, \"The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,\" _arXiv preprint arXiv:1009.5055_, 2010.
* [59] B. He, M. Tao, and X. Yuan, \"Alternating direction method with gaussian back substitution for separable convex programming,\" _SIAM Journal on Optimization_, vol. 22, no. 2, pp. 313-340, 2012.
* [60] X.-L. Zhao, F. Wang, and M. K. Ng, \"A new convex optimization model for multiplicative noise and blur removal,\" _SIAM Journal on Imaging Sciences_, vol. 7, no. 1, pp. 456-475, 2014.
* [61] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang, \"Compressive sensing via nonlocal low-rank regularization,\" _IEEE Transactions on Image Processing_, vol. 23, no. 8, pp. 3613-3632, 2014.
* [62] V. Lonjou, C. Desjardins, O. Hogolle, B. Petruacci, T. Tremas, M. Dejus, A. Makarau, and S. Aure, \"Maccacc-ator joint algorithm (MAIA),\" in _Proceedings of SPIE Remote Sensing_, vol. 10001, 2016, pp. 1-13.
* [63] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, \"Image quality assessment: from error visibility to structural similarity,\" _IEEE Transactions on Image Processing_, vol. 13, no. 4, pp. 600-612, 2004.
* [64] X. Zhu and P. Milanfar, \"Automatic parameter selection for denoising algorithms using a no-reference measure of image content,\" _IEEE Transactions on Image Processing_, vol. 19, no. 12, pp. 3116-3132, Dec 2010.
* [65] Z. Li, Z. Jing, X. Yang, and S. Sun, \"Color transfer based remote sensing image fusion using non-separable wavelet frame transform,\" _Pattern Recognition Letters_, vol. 26, no. 13, pp. 2006-2014, 2005.
* [66] S. Gabarda and G. Cristobal, \"Blind image quality assessment through anisotropy,\" _Journal of the Optical Society of America A_, vol. 24, no. 12, pp. B42-B51, Dec 2007.
\\begin{tabular}{c c} & Teng-Yu Ji received the B.S. degree from the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China, in 2012, where he is currently pursuing the Ph.D. degree with the School of Mathematical Sciences. His current research interests include tensor decomposition and applications, including tensor completion and remotely sensed image reconstruction. \\\\ \\end{tabular} \\begin{tabular}{c c} & Naoto Yokoya (S'10-M'13) received the M.Sc. and Ph.D. degrees in aerospace engineering from the University of Tokyo, Tokyo, Japan, in 2010 and 2013, respectively. From 2012 to 2013, he was a Research Fellow with Japan Society for the Promotion of Science, Tokyo, Japan. Since 2013, he is an Assistant Professor with the University of Tokyo. From 2015 to 2017, he was also an Alexander von Humboldt Research Fellow with the German Aerospace Center (DLR), Oberpfaffenhofen, and Technical University of Munich (TUM), Munich, Germany. His research interests include image analysis and data fusion in remote sensing. Since 2017, he is a Co-chair of IEEE Geoscience and Remote Sensing Image Analysis and Data Fusion Technical Committee. \\\\ \\end{tabular} \\begin{tabular}{c c} & Xiao Xiang Zhu (S'10-M'12-SM'14) received the Master (M.Sc.) degree, her doctor of engineering (Dr.-Ing.) degree and her \"Habilitation\" in the field of signal processing from Technical University of Munich (TUM), Munich, Germany, in 2008, 2011 and 2013, respectively. She is currently the Professor for Signal Processing in Earth Observation (www.sipco.bgu.tum.de) at Technical University of Munich (TUM) and German Aerospace Center (DLR); the head of the Team Signal Analysis at DLR; and the head of the Helmholtz Young Investigator Group \"SiPEO\" at DLR and TUM. Prof. Zhu was a guest scientist or visiting professor at the Italian National Research Council (CNR-IREA), Naples, Italy, Fudan University, Shanghai, China, the University of Tokyo, Tokyo, Japan and University of California, Los Angeles, United States in 2009, 2014, 2015 and 2016, respectively. Her main research interests are remote sensing and Earth observation, signal processing, machine learning and data science, with a special application focus on global urban mapping. Dr. Zhu is a member of young academy (Junge Akademie/Junges Kolleg) at the Berlin-Brandenburg Academy of Sciences and Humanities and the German National Academy of Sciences Leopoldina and the Bavarian Academy of Sciences and Humanities. She is an associate Editor of IEEE Transactions on Geoscience and Remote Sensing. \\\\ \\end{tabular}
\\begin{tabular}{c c} & Ting-Zhu Huang received the B. S., M. S., and Ph. D. degrees in Computational Mathematics from the Department of Mathematics, Xian Jiaotong University, Xian, China. He is currently a professor in the School of Mathematical Sciences, UESTC. He is currently an editor of The Scientific World Journal, Advances in Numerical Analysis, J. Appl. Math., J. Pure and Appl. Math.: Adv. and Appl., J. Electronic Sci. and Tech. of China, etc. His current research interests include scientific computation and applications, numerical algorithms for image processing, numerical linear algebra, preconditioning technologies, and matrix analysis with applications, etc. \\\\ \\end{tabular} | _This is the pre-acceptance version, to read the final version please go to IEEE Transactions on Geoscience and Remote Sensing on IEEE Xplore._ Remotely sensed images may contain some missing areas because of poor weather conditions and sensor failure. Information of those areas may play an important role in the interpretation of multitemporal remotely sensed data. The paper aims at reconstructing the missing information by a non-local low-rank tensor completion method (NL-LRTC). First, non-local correlations in the spatial domain are taken into account by searching and grouping similar image patches in a large search window. Then low-rankness of the identified 4-order tensor groups is promoted to consider their correlations in spatial, spectral, and temporal domains, while reconstructing the underlying patterns. Experimental results on simulated and real data demonstrate that the proposed method is effective both qualitatively and quantitatively. In addition, the proposed method is computationally efficient compared to other patch based methods such as the recent proposed PM-MTGSR method._
Multitemporal remotely sensed images, missing information reconstruction, tensor completion. | Give a concise overview of the text below. | 225 |
mdpi/00148f1c_27f3_462e_b8aa_21d7f516cc40.md | Using Steel Slag for Dissolved Phosphorus Removal: Insights from a Designed Flow-Through Laboratory Experimental Structure
Linhua Wang 1, Chad Penn 2, Chi-hua Huang 2, Stan Livingston 2 and Junhua Yan 1,2

1 Key Laboratory of Vegetation Restoration and Management of Degraded Ecosystems, South China Botanical Garden, Chinese Academy of Sciences, Guangzhou 510650, China; [email protected] (L.W.); [email protected] (J.Y.)

2 National Soil Erosion Research Laboratory, USDA-ARS, West Lafayette, IN 47906, USA; [email protected] (C.P.); [email protected] (C.-h.H.); [email protected] (S.L.)
## 1 Introduction
Eutrophication is characterized by dissolved nutrient enrichment that stimulates the growth of aquatic plants and algae, which may cause issues such as oxygen depletion, drinking water shortages, and fishery and recreational water degradation [1, 2]. Eutrophication is partly caused by increases in nitrogen and phosphorus (P) inputs, which are required nutrients for plant and algae growth [3]. Phosphorus is a primary limiting constituent, particularly in the dissolved reactive form, which is readily assimilated in aquatic ecosystems. Dissolved reactive P has been shown to be a major contributor to the re-eutrophication of Lake Erie, where an annual average of 2792 Mg dissolved P was deposited between 2009 and 2013 [4]. This P is delivered from both point and non-point sources that discharge into surface water bodies. Dolan and Chapra [5] reported that approximately 70% of total P comes from non-point sources, mostly from surface runoff and tile drainage from agricultural lands. Thus, non-point sources have been identified as the main source of dissolved P and deserve great consideration for improving agricultural water quality.
One potential management strategy to reduce P export from agricultural fields is to remove dissolved P in surface runoff through the use of sorption materials. Phosphorus sorption materials (PSMs) can rapidly and strongly remove P from water. These PSMs are often byproducts of industrial or natural origin, such as steel slag [6], drinking water and mine drainage residuals [7], fly ash [8], bauxite waste [9], Fe oxides [10], and gypsum [11], which are rich in calcium, aluminum, or iron. The main mechanism for removing P is through adsorption onto metal oxides and oxyhydroxides, and precipitation as calcium phosphates, thus transforming dissolved P into an insoluble state [12; 13; 14]. A P removal structure constructed with PSMs is considered an engineering technology for reducing P loss. Types of P removal structures include modular boxes, buried beds, and ditch filters, which have proven to be effective practices for reducing dissolved P in surface runoff [6; 15]. The type of P removal mechanism can influence the design of a P removal structure, due to the speed and efficiency of the various reactions. Specifically, this is manifested in the necessary retention time (RT) and PSM mass required for a structure.
Tile drainage is a common management practice implemented on agricultural lands with poorly drained soils or high subsurface water tables. Tiles or pipes buried below the soil surface are designed to remove excess water from the plant root zone, thereby providing suitable conditions for crop growth, management, and harvest operations. King et al. [16] estimated that approximately 1.8-2.8 \\(\\times\\) 10\\({}^{5}\\) km\\({}^{2}\\) of cropland are tile drained in the US Midwest. Although substantial improvements in crop productivity have resulted from tile drainage, it has also caused adverse environmental impacts. On one hand, tile drainage can reduce surface runoff and soil erosion, and consequently minimizes P loss in runoff. On the other hand, water drained artificially through the tiles may increase the total drainage yield from a watershed. Numerous studies in the US Midwest have investigated the proportion of tile drainage water in total watershed discharge. It has been estimated that 42-86% of stream water in agricultural watersheds comes from tile drainage [16]. Similarly, Williams et al. [17] investigated the contribution of tile drainage to total discharge in the Upper Big Walnut Creek watershed in NE Indiana and found that the proportion of discharge from tile drainage reached 47%. Since a large amount of watershed discharge is attributed to tile drainage, it is conceivable that a substantial portion of dissolved P is exported via tile drainage.
Traditionally, surface runoff has been regarded as the principal pathway for the transport of P from agricultural lands [18]. However, Smith et al. [19] found that 25-80% of the dissolved P was exported via tile drainage from agricultural fields in the St. Joseph River watershed in northeast Indiana. Moreover, Ruark et al. [20] reported that tile drainage contributed 16-58% of the dissolved P export in Wisconsin. Gentry et al. [21] also investigated dissolved P transport from agricultural lands to streams in the tile-drained Big Ditch watershed of the Sangamon River in east-central Illinois. Thus, tile drains represent a potential interception point with regard to reducing dissolved P loads.
Flow-through experiments have emerged as an important and widely used technique to evaluate PSMs for reducing P loads to surface water [7]. Field and laboratory flow-through experiments have demonstrated that steel slag possesses an appreciable capacity to remove P [15; 22; 23; 24]. As an alternative to a single-segment flow-through setup, this study proposed a multisegmented flow-through experimental design to investigate P and steel slag interactions throughout the length of the slag bed. The objectives of this study were (i) to assess the P removal efficiency as affected by slag mass and RT, and (ii) to investigate changes in P and Ca concentrations and pH as water moves through a horizontal slag column. An improved understanding of these objectives will provide additional insights into how slag removes dissolved P from flowing water, thus improving our ability to design more effective P removal structures in the field.
## 2 Materials and Methods
### Experimental Equipment Description
This experiment was conducted at the Agricultural Research Service, National Soil Erosion Research Laboratory, located in West Lafayette, Indiana, US. In this study, electric arc furnace (EAF) steel slag was used as the PSM and obtained from Edw. Levy Company, Dearborn, MI (US). Steel slag, a byproduct of steel making, is produced during the separation of the molten steel from impurities in steel-making furnaces. These impurities consist of carbon as gaseous carbon monoxide, and liquid oxides of silicon and manganese, which combine with lime (CaO) to form the solid steel slag. The principal components of steel slag are lime (CaO content: 28-55%) and silica (SiO\\({}_{2}\\) content: 12-34%). Many applications utilizing the physical and chemical characteristics of steel slag have been developed for use in a broad range of fields. In this study, the experimental steel slag was equilibrated in deionized (DI) water for 24 h, and the resulting solution was analyzed via inductively coupled plasma optical emission spectroscopy (ICP-OES, Optima 8300, Perkin Elmer, USA) to determine the solubility of several elements using standard wavelengths specified by the manufacturer. The results showed that the concentrations of Ca, Si, Al, K, and Zn were 32.95, 20.82, 0.34, 0.29, and 0.02 mg L\\({}^{-1}\\), respectively. Previous studies have shown that EAF steel slag does not release trace metals to solution at appreciable concentrations [22; 24; 25]. Since un-sieved steel slag would decrease the hydraulic conductivity in flow-through experiments [15], air-dried steel slag was sieved to 5-8 mm and prepared for the study.
To better understand the process of dissolved P removal by steel slag, this experiment was conducted using multisegmented flow-through columns constructed with PVC pipes. The setup included four filter segments (S1, S2, S3, and S4), sampling chambers, autosamplers, a water tank, a pump, and plastic connection pipes. Each flow-through segment consisted of a 1 m long, 11 cm diameter pipe. To retain the steel slag in the 1 m pipe segment, a steel mesh (<2 mm) was attached to the inflow side, and a perforated cap was installed at the discharge side. A 30 cm long segment was used to connect each 1 m segment and also served as the sampling chamber (Figure 1). At the end of each 1 m test segment, an autosampler was used to collect water samples for laboratory analysis. At the end of this 4-segment flow column, drainage occurred through a vertical pipe to ensure a submerged condition for the steel slag during the flow-through experiment (Figure 1). The flow inlet segment was a 1 m vertical PVC pipe with an overflow pipe to keep a constant water head and a stable flow rate (Figure 1). The excess water at the flow inlet drained back to the water tank.
### Flow-Through Experiments
#### 2.2.1 Preparation
Fifteen kilograms of steel slag was packed into each segment. Before starting the flow-through experiment, a flow test was conducted for each 1-m segment to avoid interference from porosity differences on the flow rate. Equal porosity among segments was assumed when there were no significant differences in flow rate among the individual segments. Afterwards, the four segments were connected and prepared for the flow-through experiment.
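As an illustration only, the short sketch below shows how such a flow-rate comparison could be run; the flow-rate values, the choice of a one-way ANOVA, and the 0.05 significance level are all assumptions for illustration and are not taken from this study.

```python
# Minimal sketch of the pre-test for equal flow rates among the four 1-m segments.
# The flow-rate values are hypothetical, and the one-way ANOVA at alpha = 0.05 is
# an assumed choice of test, not the procedure reported here.
from scipy.stats import f_oneway

flow_rates = {  # L/s, replicate bucket-and-stopwatch measurements per segment (hypothetical)
    "S1": [0.130, 0.128, 0.132],
    "S2": [0.129, 0.131, 0.130],
    "S3": [0.131, 0.129, 0.130],
    "S4": [0.128, 0.130, 0.131],
}

f_stat, p_value = f_oneway(*flow_rates.values())
equal_porosity_assumed = p_value > 0.05  # no significant difference -> assume equal porosity
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, equal porosity assumed: {equal_porosity_assumed}")
```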
Figure 1: Schematic diagram of the multisegmented flow-through equipment in this study.
#### 2.2.2 Sampling and Measurement
In this study, the P concentration and load refer to dissolved P (PO\\({}_{4}{}^{3-}\\)-P). The inflow P solution was prepared by adding potassium dihydrogen phosphate (KH\\({}_{2}\\)PO\\({}_{4}\\)) to DI water and thoroughly mixing the solution in the water tank. Two target P inflow concentrations were examined (2.5 and 5.0 mg L\\({}^{-1}\\)) and labeled as P\\({}_{\\text{conc.0}}\\). All flow-through experiments were duplicated for each P concentration. To ensure all segments reached an equilibrium state, the flow-through experiment was conducted for 2 h. The inflow P solution was collected every 30 min from a valve at the inlet and was used to measure the initial P concentration and pH. The autosampler was programmed to take a composite sample every 6 min, consisting of three subsamples collected at the 2nd, 4th, and 6th minute marks. After each 2 h experiment, all samples were divided into two parts: one part was used for pH measurement, and the other was immediately filtered through a 0.45 \\(\\upmu\\)m nylon filter (Thermo Fisher Scientific, Waltham, MA, US) using a syringe, for subsequent elemental analysis. Ten milliliters were poured into an acid-washed bottle. Then, samples were acidified with concentrated nitric acid (HNO\\({}_{3}\\)) for preservation. All the acidified samples were stored in a 4 \\({}^{\\circ}\\)C cooler. Considering that the formation of calcium phosphate is the dominant P removal mechanism for slag [26; 27; 28; 29]:
\\[\\text{Ca}^{2+}+\\text{H}_{2}\\text{PO}_{4}{}^{-}+2\\text{H}_{2}\\text{O}\\longleftrightarrow \\text{CaHPO}_{4}\\cdot 2\\text{H}_{2}\\text{O}\\ \\text{(brushite)}+\\text{H}^{+} \\tag{1}\\]
Thus, all samples were analyzed for P (wavelength: 213.617 nm) and Ca\\({}^{2+}\\) (wavelength: 317.933 nm) concentrations by ICP-OES.
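Because brushite formation consumes Ca and P in a 1:1 molar ratio (Equation (1)), it can be convenient to express the measured mass concentrations on a molar basis. The snippet below is a minimal sketch of that conversion; the inflow/outflow values are placeholders rather than data from this study, and only the molar masses are standard constants.

```python
# Convert ICP-OES mass concentrations (mg/L) to molar concentrations (mmol/L) so that
# removed Ca and P can be compared against the 1:1 stoichiometry of brushite (Equation (1)).
# The example concentrations are placeholders, not measured values.
MOLAR_MASS = {"P": 30.97, "Ca": 40.08}  # g/mol

def to_mmol_per_l(conc_mg_l: float, element: str) -> float:
    """mg/L -> mmol/L using the element's molar mass."""
    return conc_mg_l / MOLAR_MASS[element]

p_removed_mg_l = 2.5 - 1.6     # hypothetical inflow minus outflow P
ca_removed_mg_l = 33.0 - 31.8  # hypothetical inflow minus outflow Ca

ratio = to_mmol_per_l(ca_removed_mg_l, "Ca") / to_mmol_per_l(p_removed_mg_l, "P")
print(f"Ca:P molar ratio of removal = {ratio:.2f} (1.0 expected for pure brushite)")
```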
#### 2.2.3 Flow Rate Measurement
The discharge flow rate was measured every 30 min using a bucket and stopwatch during the 2 h flow-through experiment. The results showed that the discharge flow rate was 0.13 L s\\({}^{-1}\\) under a constant 1.0 m water head at the inflow. In addition, the RT represents the time required for the solution to pass through a filter segment. Thus, the RT can be calculated based on the flow rate, segment volume, and steel slag porosity in each segment, as shown in Table 1.
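For reference, the single-segment RT can be reproduced from the pipe geometry and measured flow rate as sketched below; the porosity of 0.45 is an illustrative assumption chosen so that the result matches the 33 s single-segment RT in Table 1, not a measured property of the packed slag.

```python
# Sketch of the RT calculation: RT = (segment volume * porosity) / flow rate.
# Pipe geometry and flow rate follow the text; the porosity of 0.45 is an assumed,
# illustrative value that reproduces the single-segment RT listed in Table 1.
import math

flow_rate_l_s = 0.13      # measured discharge rate, L/s
segment_length_m = 1.0    # one filter segment
diameter_m = 0.11         # pipe inner diameter
porosity = 0.45           # assumed for illustration

segment_volume_l = math.pi * (diameter_m / 2) ** 2 * segment_length_m * 1000.0
pore_volume_l = segment_volume_l * porosity
rt_single_s = pore_volume_l / flow_rate_l_s

for n_segments in (1, 2, 3, 4):
    print(f"{n_segments} segment(s): RT ~ {n_segments * rt_single_s:.0f} s")
```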
### Data Analysis
The P loading into each segment is a function of the P concentration (mg L\\({}^{-1}\\)) and volume of water (L) treated. With the 4-segment design and sample collection at the end/beginning of each flow segment, a wide range of testing scenarios can be achieved in a single run, i.e., single (S1, S2, S3, and S4), double (S1 + S2, S2 + S3, and S3 + S4), triple (S1 + S2 + S3 and S2 + S3 + S4), and quadruple (S1 + S2 + S3 + S4) segments. Accordingly, Table 1 shows the filter length, steel slag mass, and RT in each scenario. Consequently, one flow-through run contains ten testing scenarios on how steel slag may affect P transport.

Table 1: Summary of steel slag mass, retention time (RT), and filter length for each scenario, and the relationship between CP\\({}_{\\text{rem}}\\) and CP\\({}_{\\text{add}}\\) (CP\\({}_{\\text{rem}}\\) = \\(k\\) \\(\\times\\) CP\\({}_{\\text{add}}\\) + \\(b\\)) for inflow P concentrations of 2.5 and 5.0 mg L\\({}^{-1}\\), denoted (2.5) and (5.0) in the column headers. CP\\({}_{\\text{rem}}\\): cumulative removed P, mg kg\\({}^{-1}\\); CP\\({}_{\\text{add}}\\): cumulative added P, mg kg\\({}^{-1}\\); \\(k\\): slope; \\(b\\): intercept.

| Scenario | Filter Segment | Filter Length (m) | Steel Slag Mass (kg) | RT (s) | \\(k\\) (2.5) | \\(b\\) (2.5) | R\\({}^{2}\\) (2.5) | \\(k\\) (5.0) | \\(b\\) (5.0) | R\\({}^{2}\\) (5.0) |
|---|---|---|---|---|---|---|---|---|---|---|
| Single | S1 | 1.0 | 15 | 33 | 0.20 | 8.53 | 0.99 | 0.11 | 16.95 | 0.98 |
| Single | S2 | 1.0 | 15 | 33 | 0.26 | 3.34 | 0.98 | 0.18 | 8.37 | 0.95 |
| Single | S3 | 1.0 | 15 | 33 | 0.37 | 0.50 | 0.99 | 0.24 | 3.02 | 0.99 |
| Single | S4 | 1.0 | 15 | 33 | 0.45 | 0.04 | 0.99 | 0.33 | −0.78 | 0.99 |
| Double | S1 + S2 | 2.0 | 30 | 66 | 0.41 | 4.81 | 0.99 | 0.27 | 1.11 | 0.97 |
| Double | S2 + S3 | 2.0 | 30 | 66 | 0.53 | 1.30 | 0.99 | 0.38 | 4.39 | 0.98 |
| Double | S3 + S4 | 2.0 | 30 | 66 | 0.65 | −0.07 | 0.99 | 0.49 | 0.49 | 0.99 |
| Triple | S1 + S2 + S3 | 3.0 | 45 | 99 | 0.63 | 2.18 | 0.99 | 0.45 | 6.38 | 0.98 |
| Triple | S2 + S3 + S4 | 3.0 | 45 | 99 | 0.74 | 0.33 | 0.99 | 0.59 | 1.68 | 0.99 |
| Quadruple | S1 + S2 + S3 + S4 | 4.0 | 60 | 132 | 0.80 | 0.79 | 0.99 | 0.64 | 2.25 | 0.99 |
With the known inflow and outflow P concentrations and flow rate for each testing segment, the P added to a filter segment at each time interval was calculated by integrating the tested inflow P concentration with the flow volume. The P load after treatment by a segment at each time interval was calculated by integrating the tested outflow P concentration with the flow volume. The P (mg kg\\({}^{-1}\\)) added to the structure was calculated using the inflow P concentration, flow rate, time interval and steel slag mass. Phosphorus removal and efficiency can be calculated according to a simple mass balance:
\\[\\mathrm{P_{rem.}(mg\\ kg^{-1})}=\\frac{(Q\\times t)\\times(P_{conc.}In-P_{conc.}Out)} {M} \\tag{2}\\]
\\[\\mathrm{P_{rem.}(\\%)}=\\frac{(Q\\times t)\\times(P_{conc.}In-P_{conc.}Out)}{(Q \\times t)\\times P_{conc.}In} \\tag{3}\\]
where Q (L min\\({}^{-1}\\)) is the flow rate; M (kg) is the mass of the steel slag in a scenario; Pconc.In and Pconc.Out (mg L\\({}^{-1}\\)) are the inflow and outflow P concentration; Prem.(mg kg\\({}^{-1}\\)) and Prem.(\\(\\%\\)) is the dissolved P removal expressed in terms of slag mass or percentage of inflow P over a certain flow interval time. To present the relationship between the total input P (mg kg\\({}^{-1}\\)) and P removal (mg kg\\({}^{-1}\\)) or P removal efficiency (\\(\\%\\)), we used the exponential model [30; 31]:
\\[\\mathrm{TP_{rem.}(mg\\ kg^{-1})}=\\mathrm{P_{rem.MAX}\\times(1-exp\\ (-a\\times TP_{add}))} \\tag{4}\\]
\\[\\mathrm{TP_{rem.}(\\%)}=b\\times\\mathrm{e^{-k\\times TP_{add}}} \\tag{5}\\]
where TPrem. (mg kg\\({}^{-1}\\)) is the amount of total P removed for any given value of TPadd (mg kg\\({}^{-1}\\)), which is the total P input expressed per unit mass of steel slag. Prem.MAX (mg kg\\({}^{-1}\\)) is the estimated maximum P retention capacity. TPrem. (\\(\\%\\)) is the P removal efficiency for a given value of TPadd. The variables \\(a\\), \\(b\\), and \\(k\\) are constants. The regression model was constructed using SPSS (Statistics Package for Social Science) [32]. The corresponding figures were developed using Sigma Plot 10.0 [33].
## 3 Results and Discussion
### Dynamic Changes of Phosphorus Removal by Steel Slag
Figure 2a and b shows the changes in P concentrations after flowing through each filter segment (Table 2) under various inflow conditions. The P concentration drastically increased during the first 15 min before coming to a steady concentration for the remaining period, as also observed by Hua et al. [34] and Yin et al. [35] using steel slag via column experiments. Meanwhile, the discrete removal also decreased with time (i.e., P loading) and varied as a function of the segment length (i.e., slag mass; Figure 3). The discrete P removal efficiency for a single segment was between 15\\(\\%\\) and 92\\(\\%\\), with an average of 33\\(\\%\\) P removed under the 2.5 mg L\\({}^{-1}\\) inflow. For an initial P concentration of 5.0 mg L\\({}^{-1}\\), the P removal efficiency varied in the range of 9\\(\\%\\) to 85\\(\\%\\), with an average of 24\\(\\%\\) P reduced in a single segment. In general, the 5 mg L\\({}^{-1}\\) inflow P concentration had overall lower removal efficiency for any given time compared to the 2.5 mg L\\({}^{-1}\\) concentration, due to the fact that the higher inflow P concentration loaded twice the amount of P to the slag. However, when normalized for slag mass, the P removal efficiency was similar for the two different inflow P concentrations
Phosphorus removal for the second, third, and fourth segment was initially low before increasing to much higher levels, unlike the first segment (Figure 3). This was due to the fact that initially, the P concentration entering into the second, third, and fourth segments were very low as they came from output of the first, second, and third segments, respectively (Figure 2), while the first segment received a constant inflow concentration of either 2.5 or 5 mg L\\({}^{-1}\\). Based on the thermodynamic equilibrium shown in Reaction 1, a lesser P concentration has less chemical potential for precipitating calcium phosphate. Notice that reaction 1 involves protons, such that an increase in protons (i.e., decrease in pH) will push the reaction to the left and prevent calcium phosphate precipitation. Reaction 1 also illustrates the fact that precipitation of calcium phosphate will depress the pH by producing protons. Clearly, a decrease in soluble calcium and P will reduce the potential for calcium phosphate to precipitate. In all cases, P removal decreases with time and P loading (Figure 3) because soluble Ca is being depleted and pH is decreasing (Equation (1); Figure 2c-f). Electric arc furnace steel slag is an appreciable source of soluble calcium with total and water soluble concentrations of large particles (i.e., what was used in the current study) ranging from 160 to 340 g kg\\({}^{-1}\\) and 250 to 5000 mg kg\\({}^{-1}\\), respectively [22; 25; 31].
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline \\multirow{2}{*}{**Filter Segment**} & **Inflow** & \\multicolumn{3}{c}{**2.5 mg L\\({}^{-1}\\)**} & \\multicolumn{3}{c}{**5.0 mg L\\({}^{-1}\\)**} \\\\ \\cline{2-7} & **Outflow** & **P (mg L\\({}^{-1}\\))** & **Ca2\\({}^{+}\\) (mg L\\({}^{-1}\\))** & **pH** & **P (mg L\\({}^{-1}\\))** & **Ca2\\({}^{+}\\) (mg L\\({}^{-1}\\))** & **pH** \\\\ \\hline \\multirow{2}{*}{S1} & Range & 0β2.3 & 5.1β50.4 & 7.6β10.2 & 0.5β4.9 & 3.8β50.5 & 7.3β10.6 \\\\ & Mean & 1.8 & 7.7 & 8.8 & 4.1 & 8.1 & 8.0 \\\\ \\hline \\multirow{2}{*}{S2} & Range & 0β1.6 & 8.6β53.2 & 8.0β10.7 & 0.4β4.4 & 7.5β57.7 & 7.9β10.9 \\\\ & Mean & 1.3 & 13.1 & 9.5 & 3.3 & 11.3 & 9.3 \\\\ \\hline \\multirow{2}{*}{S3} & Range & 1.0β1.2 & 10.9β74.4 & 7.9β10.8 & 0.4β3.8 & 7.9β50.1 & 7.4β10.8 \\\\ & Mean & 0.8 & 16.5 & 9.7 & 2.5 & 12.7 & 8.7 \\\\ \\hline \\multirow{2}{*}{S4} & Range & 0β0.7 & 14.4β63.4 & 8.2β10.6 & 0.4β2.7 & 5.0β37.1 & 7.8β10.8 \\\\ & Mean & 0.5 & 21.1 & 9.7 & 1.7 & 12.3 & 9.1 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: The range and mean concentrations of phosphorus (P) and calcium (Ca2\\({}^{+}\\)) and pH of treated water from each segment described in Table 1.
Figure 2: Phosphorus (P; (**a,b**)), calcium (Ca; (**c,d**)) concentrations, and pH (**e,f**) with time after treatment by each slag segment during the 2 h flow-through test. Figures (**a,c,e**) are for the inflow P treatment of 2.5 mg L\\({}^{-1}\\) and (**b,d,f**) are for 5.0 mg P L\\({}^{-1}\\). The green line indicates inflow P concentrations.
The differences in Ca\\({}^{2+}\\) and pH between the segments explain the differences in P removal, specifically the observation that P removal efficiency increased with segment number. For example, notice that pH and Ca\\({}^{2+}\\) increase with segment, i.e., the previous segment depleted more Ca\\({}^{2+}\\) and depressed pH more than the downstream segment, meaning better conditions for calcium phosphate precipitation occurred in the downstream segments (Figure 2c-f) as Ca\\({}^{2+}\\) flowed out of each segment and contributes to the P removal in the following segment. Downstream segments not only accumulated Ca\\({}^{2+}\\) from the previous segment, but also consumed less Ca\\({}^{2+}\\) because the lower dissolved P concentration flowing into them required less Ca\\({}^{2+}\\) for calcium phosphate precipitation compared to upstream segments that received higher input dissolved P concentrations. Keep in mind that slag had a finite ability to provide soluble Ca and buffer the pH to a high level, and therefore P removal will correspond to the remaining soluble Ca and elevated pH that can be provided as Ca and pH are diminished with further P removal (Reaction 1, Figures 2 and 3, and Table 2). These results are also supported by McGrath et al. (2016), who reported that a decreased P removal was accompanied by a lower pH and Ca\\({}^{2+}\\) concentration in solution.
Similarly, Figure 4 and Table 1 illustrates cumulative P removed (CP\\({}_{\\text{rem}}\\)) as a function of cumulative P added (CP\\({}_{\\text{add}}\\)) for individual segments and a combination of several segments. The slope of each line quantifies the ability of each segment scenario to remove dissolved P; the larger the slope, the greater the amount of CP\\({}_{\\text{rem}}\\) for any given CP\\({}_{\\text{add}}\\) value. Notice that among each individual segment (Table 1), the slope value (i.e., \\(k\\) value) increased in the order of segment \\(1<2<3<4\\), indicating a greater ability to remove dissolved P. As previously discussed, the greater ability of the downstream segments to remove dissolved P when normalized on a cumulate per mass basis compared to upstream segments is due to the higher solution pH and Ca\\({}^{2+}\\) levels that flow into the downstream segments (Figure 2c-f and Table 2). Figure 4 shows the same phenomenon for any number of segments; in every case, the downstream segments possessed greater slopes and therefore were more efficient in P removal compared to the upstream segments.
Figure 3: Discrete phosphorus (P) removal with time, among each slag segment during the 2 h flow-through test, expressed as both mg P removed kg\\({}^{-1}\\) slag (**c**,**d**) and percentage (**a**,**b**). Figures (**a**,**c**) are for the inflow P treatment of 2.5 mg L\\({}^{-1}\\), while (**b**,**d**) are 5.0 mg L\\({}^{-1}\\).
In this regard, these results clearly revealed that P removal in slag proceeds as a \"front\" as P-rich water flows through the slag columns, in the same manner that Ca\\({}^{2+}\\) and pH are depleted in that same moving front. Eventually with continued loading, the entire slag column will become equilibrated and contain similar concentrations of solution P, Ca\\({}^{2+}\\), and pH.
### Total P Removal under Varied Steel slag Mass and Phosphorus Input
Although the initial influent P concentrations were 2.5 and 5.0 mg L\\({}^{-1}\\), the average influent P concentration of each segment ranged from 0.9 to 4.9 mg L\\({}^{-1}\\). The RTs were 33, 66, 99, and 132 s, corresponding to the single, double, triple, and quadruple segments, respectively (Table 1 and Figure 1). Retention time was proportionally increased with the filter mass due to a greater total pore volume.
With regard to the impact of RT, Figure 5 allows for comparison since the P removal resulting from combination of different segments was normalized as a function of slag mass (i.e., TP\\({}_{\\rm add}\\) in units of mg kg\\({}^{-1}\\)). For example, while each individual segment possessed the same RT of 33 seconds, a combination of segments would increase the RT accordingly (Table 1). However, since an increasing number of segments also possess an increasing mass (Table 1), it is necessary to first normalize P addition and P removal based on slag mass. Obviously, a larger slag mass, such as two segments (30 kg), will remove more P than any single segment (15 kg), and normalization for slag mass allows for a true comparison of RT. For example, at an inflow P concentration of 5 mg L\\({}^{-1}\\), total addition of 50 mg P kg\\({}^{-1}\\) slag results in cumulative P removal of 58, 49, 40, and 32 mg kg\\({}^{-1}\\) for quadruple, triple, double, and single segments, respectively (Figure 5), representing RT of 132, 99, 66, and 33 s. In general,
Figure 4: Cumulative phosphorus (P) removal (CP\\({}_{\\rm rem}\\)) expressed as a function of cumulative P added (CP\\({}_{\\rm add}\\)) for each individual slag segment and combination of successive segments. (**aβd**) are the single, double, triple, and quadruple segments for the inflow P treatment of 2.5 mg L\\({}^{-1}\\), respectively. (**eβh**) are the single, double, triple, and quadruple segments for the inflow P treatment of 5.0 mg L\\({}^{-1}\\), respectively.
P removal increased with increased RT (i.e., segment length normalized for slag mass). These findings are consistent with the results reported by Yin et al. (2019), who stated that an enhanced RT could achieve a higher P removal under flow-through conditions. Additionally, Barca et al. (2019) concluded that the limited availability of Ca\\({}^{2+}\\) released from a smaller amount of steel slag and shorter RT caused a lower P removal efficiency, as observed in the scenarios with the single or double segments in this study. Consequently, these results affirm those of Stoner et al. (2017) and Penn et al. (2019) that a longer RT will increase dissolved P removal in electric arc furnace steel slag. The increased P removal with increasing RT is a consequence of the P sorption mechanism, which in this case is calcium phosphate precipitation.
In addition, P removal capacity is a fundamental factor for designing P removal structures, which was 60 mg kg\\({}^{-1}\\) in this study (via Equation (4)), as shown in Figure 6a. In comparison, Table 3 summarizes previously reported P removal capacity of steel slag under the conditions indicated for each study such as inflow P concentrations, RT, and laboratory vs. field scale. Generally, the tested steel slag mass, RT, and experimental methods varied widely. Even with the variation in scale, inflow P concentrations, and variations between the slag materials itself, Table 3 supports our results in that longer RTs promote greater P removal. For treating non-point drainage water, structures with a relatively short RT are necessary due to the high flow rates and need to limit the footprint of the structures. Therefore, the results of the current study, along with Penn et al. (2019), Penn et al. (2019), and Klimeski et al. (2020) illustrate that under a short RT (i.e., less than 30 min), P removal is expected to be in the range of 20-60 mg kg\\({}^{-1}\\), depending on the characteristics of the slag material and inflow P concentrations. Penn et al. (2019) illustrated that P removal among different slag materials will vary dramatically based on their ability to buffer pH to a high level and supply soluble Ca, as well as the RT
Figure 5: (a) Total phosphorus (P) added (TP\\({}_{\\text{add}}\\)) and (b) corresponding total P removed (TP\\({}_{\\text{rem}}\\)), as a function of retention time when considering all segment lengths. Blue circles indicate TP\\({}_{\\text{add}}\\) and corresponding TP\\({}_{\\text{rem}}\\) when a TP\\({}_{\\text{add}}\\) 50 mg kg\\({}^{-1}\\), as described in the text.
and inflow P concentration chosen to be employed. Therefore, further study is needed to optimize the relationship between the removal efficiency, RT, steel slag mass, and its costs for designing a P removal structure.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline
**Steel Slag** & **P conc.** & **RT** & **Flow Rate (ml min\\({}^{-1}\\))** & **P Removal (mg kg\\({}^{-1}\\))** & **Experiment Type** & **References** \\\\ \\hline
2.1 & 0β10 & 2.4β9.5 h & 2.5β10 & 3700 & Laboratory Flow-through & Hua et al. [34] \\\\ \\hline
18.3 & 20 & 24 h & 2.1β2.8 & 2200 & Laboratory Flow-through & Drizo et al. [27] \\\\ \\hline
20 & 0.05β5.3 & 5β24 min & 333β1167 & 3200 & Laboratory Flow-through & Klimeski et al. [30] \\\\ \\hline
45.36 & 10 & 24 h & 20 & 910 & Laboratory Flow-through & Barca et al. [37] \\\\ \\hline
45.36 & 10 & 24 h & 26.7 & 810 & Laboratory Flow-through & [37] \\\\ \\hline
60 & 0.84β4.87 & 0.5β2 min & 7800 & 61 & Laboratory Flow-through & Current study \\\\ \\hline
454 & 0.11β0.60 & 10 min & 0.4β6.4 & 59 & Laboratory Flow-through & Penn et al. [31] \\\\ \\hline
2712 & 0.50 & 19.3 min & 29.8 & 25.9 & Field Flow-through & Penn et al. [22] \\\\ \\hline
7000 & 0.05β0.25 & 10 minβ50 h & 600β180,000 & 60 & Field Flow-through & Klimeski et al. [30] \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Summary of phosphorus (P) removal by slag, among several studies conducted under a variety of experimental conditions.
Figure 6: Predicted and measured total phosphorus (P) removal (TP\\({}_{\\text{rem}}\\)) as a function of total P added to the slag, expressed as a function of slag mass (**a**) and as a percentage (**b**).
Figure 6 shows the relationship between the total added P and P removal for all segment lengths. Each point represents the ratio of the total removed to the total added P during a 2 h flow-through experiment. The solid line shows the exponential relationship derived from the experimental data. Knowing the P loading via tile drainage water in a specific agricultural field and the desired lifetime for a structure, one can determine the steel slag mass needed to achieve a desired P removal goal [15; 38]. For example, Algoazany et al. [39] investigated dissolved P transport through tile drains from an intensively tile-drained field in the Little Vermilion River watershed, eastern Illinois, USA. The average annual P load was 160, and 116 g P ha\\({}^{-1}\\) y\\({}^{-1}\\) from two agricultural fields with an area of 4.86 and 3.34 ha, respectively, from 1994 to 2000. For a hypothetical P removal structure constructed with the same slag utilized and a 50% P removal goal of the 5-y load, the estimated required steel slag mass is 45 and 22 Mg for these two fields, respectively. Thus, these results provide information for designing a filter unit for a specific tile-drained field site with a given P loading mass, and desired structure lifetime and removal efficiency.
## 4 Conclusions
This study was conducted to investigate P removal using a steel slag filter test column separated into four segments in order to further elucidate the process of P removal. As expected, P concentration decreased with filter segment length due to contact with an increasing mass of slag, which provided less opportunity for downstream segments to remove P. Similarly, Ca\\({}^{2+}\\) and pH decreased with further exposure to P in all segments, and were greater in downstream segments compared to upstream segments, illustrating how Ca\\({}^{2+}\\) was consumed and H\\({}^{+}\\) was produced with precipitation of calcium phosphate. Downstream slag segments were more efficient at removing P than upstream segments because they were exposed to more favorable conditions for calcium phosphate precipitation, specifically higher Ca\\({}^{2+}\\) concentrations and pH. These results showed that P was removed in a moving front as Ca\\({}^{2+}\\) and slag pH buffer capacity were consumed. When P input and removal was normalized for mass of slag, an increase in RT increased P removal, concomitant with the calcium phosphate precipitation mechanism shown in previous studies. The estimated removal capacity of the steel slag was 61 mg kg\\({}^{-1}\\), which was similar to previous studies conducted on slag samples with a similar RT (less than 30 min) and particle size (>5 mm). Results emphasize the importance of designing field scale structures with sufficient RT to accommodate the formation of calcium phosphate.
Conceptualization, C.-h.H. and L.W.; methodology, S.L. and L.W.; formal analysis, L.W.; writing--original draft preparation, L.W.; writing--review and editing, C.-h.H., C.P. and J.Y.; supervision, C.-h.H., and C.P.; project administration, C.-h.H.; funding acquisition, C.-h.H. All authors have read and agreed to the published version of the manuscript.
This research was funded by National Soil Erosion Research Laboratory-USDA.
We thank Crumley Amber, Hofmann Brenda, Graef Rhonda and McAfee Scott for their help during the flow through experiments and laboratory analysis.
The authors declare no conflict of interest.
## References
* (1) Smith, V.H.; Schindler, D.W. Eutrophication science: Where do we go from here? _Trends Ecol. Evol._**2009**, _24_, 201-207. [CrossRef]
* (2) Schindler, D.W. Recent advances in the understanding and management of eutrophication. _Limmod. Oceanogr._**2006**, _51_, 356-363. [CrossRef]
* (3) Bennett, E.; Carpenter, S.; Caraco, N. Human impact on erodable phosphorus and eutrophication: A global perspective. _BioScience_**2001**, _51_, 227-234. [CrossRef]
* (4) Maccoux, M.J.; Dove, A.; Backus, S.M.; Dolan, D.M. Total and soluble reactive phosphorus loadings to Lake Erie. _J. Great Lakes Res._**2016**, _42_, 1151-1165. [CrossRef]
* (5) Dolan, D.M.; Chapra, S.C. Great Lakes total phosphorus revisited: 1. Loading analysis and update (1994-2008). _J. Great Lakes Res._**2012**, _38_, 730-740. [CrossRef]* _Penn et al. (2007)_ Penn, C.J.; Bryant, R.B.; Kleinman, P.J.A.; Allen, A.L. Removing dissolved phosphorus from drainage ditch water with phosphorus spring materials. _J. Soil Water Conserv._**2007**, _62_, 269-276.
* _Stoner et al. (2012)_ Stoner, D.; Penn, C.J.; McGrath, J.; Warren, J. Phosphorus Removal with By-Products in a Flow-Through Setting. _J. Environ. Qual._**2012**, _41_, 654-663. [CrossRef]
* _Gustafsson et al. (2008)_ Gustafsson, J.P.; Renman, A.; Renman, G.; Poll, K. Phosphate removal by mineral-based sorbents used in filters for small-scale wastewater treatment. _Water Res._**2008**, _42_, 189-197. [CrossRef]
* Hedstrom (2006) Hedstrom, A. Reactive filter systems for small scale wastewater treatment: A literature review. _Vatten_**2006**, _62_, 253-263.
* _Lyngsie et al. (2014)_ Lyngsie, G.; Borggaard, O.; Hansen, H.C.B. A three-step test of phosphate sorption efficiency of potential agricultural drainage filter materials. _Water Res._**2014**, _51_, 256-265. [CrossRef]
* _Feyereisen et al. (2015)_ Feyereisen, G.W.; Francesconi, W.; Smith, D.R.; Papiernik, S.K.; Krueger, E.S.; Wente, C.D. Effect of Replacing Surface Inlets with Blind or Gravel Inlets on Sediment and Phosphorus Subsurface Drainage Losses. _J. Environ. Qual._**2015**, _44_, 594-604. [CrossRef] [PubMed]
* _Shilton et al. (2013)_ Shilton, A.; Chen, L.; Elemetri, I.; Pratt, C.; Pratt, S. Active slag filters: Rapid assessment of phosphorus removal efficiency from effluent as a function of retention time. _Environ. Technol._**2013**, _34_, 195-200. [CrossRef]
* _Penn et al. (2011)_ Penn, C.J.; Bryant, R.B.; Callahan, M.P.; McGrath, J. Use of Industrial By-products to Sorb and Retain Phosphorus. _Commun. Soil Sci. Plant Anal._**2011**, _42_, 633-644. [CrossRef]
* _Pratt et al. (2007)_ Pratt, C.; Shilton, A.; Pratt, S.; Haverkamp, R.G.; Bolan, N. Phosphorus Removal Mechanisms in Active Slag Filters Treating Waste Stabilization Pond Effluent. _Environ. Sci. Technol._**2007**, _41_, 3296-3301. [CrossRef]
* _Penn et al. (2016)_ Penn, C.J.; Bowen, J.; McGrath, J.; Nairn, R.W.; Fox, G.; Brown, G.; Wilson, S.; Gill, C. Evaluation of a universal flow-through model for predicting and designing phosphorus removal structures. _Chemosphere_**2016**, _151_, 345-355. [CrossRef] [PubMed]
* _King et al. (2015)_ King, K.W.; Williams, M.R.; Fausey, N.R. Contributions of systematic tile drainage to watershed-scale phosphorus transport. _J. Environ. Qual._**2015**, _44_, 486-494. [CrossRef]
* _Williams et al. (2015)_ Williams, M.R.; King, K.W.; Fausey, N.R. Contribution of tile drains to basin discharge and nitrogen export in a headwater agricultural watershed Agric. _Water Manag._**2015**, _158_, 42-50. [CrossRef]
* _Dougherty et al. (2004)_ Dougherty, W.J.; Fleming, N.K.; Cox, J.W.; Chittleborough, D.J. Phosphorus Transfer in Surface Runoff from Intensive Pasture Systems at Various Scales. _J. Environ. Qual._**2004**, _33_, 1973-1988. [CrossRef]
* _Smith et al. (2015)_ Smith, D.R.; Francesconi, W.; Livingston, S.J.; Huang, C.-H. Phosphorus losses from monitored fields with conservation practices in the Lake Erie Basin, USA. _Ambio_**2015**, _44_, S319-S331. [CrossRef]
* _Ruark et al. (2012)_ Ruark, M.; Madison, A.; Cooley, E.; Stuntebeck, T.; Komiskey, M. Phosphorus loss from tile drains: Should we be concerned? In Proceedings of the 2012 Wisconsin Crop Management Conference, Madison, WI, USA, 10-12 January 2012; Volume 51, pp. 9-14. Available online: [https://extension.soils.wisc.edu/wp-content/uploads/sites/68/2014/02/2012_wcmc_proc.pdf](https://extension.soils.wisc.edu/wp-content/uploads/sites/68/2014/02/2012_wcmc_proc.pdf) (accessed on 17 December 2018).
* _Gentry et al. (2007)_ Gentry, L.E.; David, M.B.; Royer, T.V.; Mitchell, C.A.; Starks, K.M. Phosphorus Transport Pathways to Streams in Tile-Drained Agricultural Watersheds. _J. Environ. Qual._**2007**, _36_, 408-415. [CrossRef]
* _Penn et al. (2012)_ Penn, C.J.; McGrath, J.; Rounds, E.; Fox, G.; Heeren, D. Trapping Phosphorus in Runoff with a Phosphorus Removal Structure. _J. Environ. Qual._**2012**, _41_, 672-679. [CrossRef] [PubMed]
* _Barca et al. (2012)_ Barca, C.; Gerente, C.; Meyer, D.; Chazarc, F.; Andres, Y. Phosphate removal from synthetic and real wastewater using steel slags produced in Europe. _Water Res._**2012**, _46_, 2376-2384. [CrossRef] [PubMed]
* _Penn et al. (2020)_ Penn, C.; Livingston, S.; Shedekar, V.; King, K.; Williams, M. Performance of Field-Scale Phosphorus Removal Structures Utilizing Steel Slag for Treatment of Subsurface Drainage. _Water_**2020**, _12_, 443. [CrossRef]
* _Penn and Bowen (2018)_ Penn, C.J.; Bowen, J.M. _Design and Construction of Phosphorus Removal Structures for Improving Water Quality_; Springer: Berlin/Heidelberg, Germany, 2018; pp. 91-94.
* _Claveau-Mallet et al. (2013)_ Claveau-Mallet, D.; Wallace, S.; Comeau, Y. Removal of phosphorus, fluoride and metals from a gypsum mining leachate using steel slag filters. _Water Res._**2013**, _47_, 1512-1520. [CrossRef] [PubMed]
* _Drizo et al. (2002)_ Drizo, A.; Comeau, Y.; Forget, C.; Chapuis, R.P. Phosphorus Saturation Potential: A Parameter for Estimating the Longevity of Constructed Wetland Systems. _Environ. Sci. Technol._**2002**, _36_, 4642-4648. [CrossRef] [PubMed]
* _Bowden et al. (2006)_ Bowden, L.I.; Johnson, K.L.; Jarvis, A.; Robinson, H.; Ghazireh, N.; Younger, P. The use of basic oxygen steel furnace slag (bos) as a high surface area media for the removal of iron from circum neutral mine water. _J. Am. Soc. Min. Reclam._**2006**, _2006_, 234-246. [CrossRef]* _Eveborn et al. (2009)_ Eveborn, D.; Gustafsson, J.P.; Hesterberg, D.; Hillier, S. XANES Speciation of P in Environmental Samples: An Assessment of Filter Media for on-Site Wastewater Treatment. _Environ. Sci. Technol._**2009**, _43_, 6515-6521. [CrossRef]
* _Klimeski et al. (2015)_ Klimeski, A.; Uusitalo, R.; Turtola, E. Variations in phosphorus retention by a solid material while scaling up its application. _Environ. Technol. Innov._**2015**, \\(4\\), 285-298. [CrossRef]
* Penn and McGrath (2011) Penn, C.J.; McGrath, J. Predicting Phosphorus Sorption onto Steel Slag Using a Flow-through approach with Application to a Pilot Scale System. _J. Water Resour. Prot._**2011**, \\(3\\), 235-244. [CrossRef]
* IBM (2010) IBM. _IBM SPSS Statistics for Windows, Version 19.0_; IBM Corporation: Armonk, NY, USA, 2010.
* _Systat Sigma Plot. Version 10.0_; Systat Software Inc.: San Jose, CA, USA, 2008.
* _Hua et al. (2016)_ Hua, G.; Salo, M.W.; Schmit, C.G.; Hay, C.H. Nitrate and phosphate removal from agricultural subsurface drainage using laboratory woodchip bioreactors and recycled steel byproduct filters. _Water Res._**2016**, _102_, 180-189. [CrossRef]
* _Yin et al. (2017)_ Yin, H.; Yan, X.; Gu, X. Evaluation of thermally-modified calcium-rich attapulgite as a low-cost substrate for rapid phosphorus removal in constructed wetlands. _Water Res._**2017**, _115_, 329-338. [CrossRef] [PubMed]
* _McGrath et al. (2012)_ McGrath, J.; Penn, C.J.; Coale, F.J. A modelling approach to the design of in-situ agricultural drainage filters. _Soil Use Manag._**2012**, _29_, 155-161. [CrossRef]
* _Barca et al. (2014)_ Barca, C.; Meyer, D.; Liira, M.; Drissen, P.; Comeau, Y.; Andres, Y.; Chazarenc, F. Steel slag filters to upgrade phosphorus removal in small wastewater treatment plants: Removal mechanisms and performance. _Ecol. Eng._**2014**, _68_, 214-222. [CrossRef]
* _Penn et al. (2017)_ Penn, C.; Chagas, I.; Klimeski, A.; Lyngsie, G. A Review of Phosphorus Removal Structures: How to Assess and Compare Their Performance. _Water_**2017**, \\(9\\), 583. [CrossRef]
* _Algoazany et al. (2007)_ Algoazany, A.S.; Kalita, P.K.; Czapar, G.F.; Mitchell, J.K. Phosphorus Transport through Subsurface Drainage and Surface Runoff from a Flat Watershed in East Central Illinois, USA. _J. Environ. Qual._**2007**, _36_, 681-693. [CrossRef]
# Trends in deep learning for medical hyperspectral image analysis
Uzair Khan¹, Paheding Sidike², Colin Elkin¹, Vijay Devabhaktuni¹

¹Department of Electrical and Computer Engineering, Purdue University Northwest

²Department of Applied Computing, Michigan Technological University
## 1 Introduction
Medical imaging refers to images used to aid in clinical work relative to the human body, such as surgical procedures, diagnosis of impending diseases, or simply analyzing and studying body functions, and is primarily based on radiological research. For the last couple of decades in particular, modern imaging techniques such as X-rays, magnetic resonance imaging (MRI), and ultrasound have had a significant impact not only on the analysis of medical symptoms but also on the development of further imaging techniques for improved examination. A computed tomography (CT) scan, for example, is an X-ray procedure that produces cross-sectional images of the body and now also helps to assess brain or head-related injuries [1]. Another circumstance in which X-rays have driven further progress in research and application is mammography, where a low-energy X-ray photon beam is used to diagnose breast cancer, presently a common medical problem worldwide [2]. Such computer-assisted applications of X-ray imaging are more commonly known as computer-aided diagnosis (CAD) [3]. Medical imaging also frequently employs image fusion, in which the final output image contains more meaningful information obtained from multiple input images [4]. An emerging imaging method in the discipline of biomedicine is Terahertz (THz) imaging, which is currently working to overcome its limitations with the aid of nanotechnology and is becoming a highly promising, and safer, alternative to traditional methodologies [5]. The primary goal of any effort to obtain medical images for diagnosis is a non-invasive and inexpensive methodology. While the aforementioned techniques provide the desired output, each method is either invasive in some form, not economical, or in some cases both. This leads professionals in the medical field and in academia alike to look towards alternative imaging methods that better satisfy the criteria of being non-invasive and low-cost.
Hyperspectral imaging (HSI) is a developing technique within the medical imaging modalities and offers noninvasive disease diagnosis. An HSI cube comprises many images acquired at adjacent narrow wavelengths, or spectral bands (often numbering in the hundreds), and reconstructs a reflectance spectrum for every pixel in the image [6]. This is done by separating light using a spectral separator consisting of bandpass filter(s) and accumulating it on a focal plane detector (typically a complementary metal oxide semiconductor (CMOS) sensor) to form the image. Hyperspectral imaging has traditionally been used for remote sensing [7][8], agriculture [9], food safety and quality assessment [10][11], image enhancement [12], disaster monitoring [13][14], feature extraction [15], classification [16], object detection [17][18][19], and recently even for the conservation of art [20]. For medical imaging, it is principally obtained by targeting tissue samples with transmitted light and is used to diagnose and detect various types of cancers [21] and other medical conditions. With cancer being the second leading cause of death in the US, this has a significant impact on the medical community and its research towards eliminating cancer. Medical Hyperspectral Imaging (MHSI) has previously aided in successfully distinguishing tumors from normal tissues in a rat breast tumor model, providing a clear indication of how influential MHSI can be for future research on breast cancer [22]. It has also proven its significance in detecting cancerous cells from normal tissue specimens for head and neck cancer [23]. Gastric cancer, one of the most common cancers worldwide, is another disease whose future diagnosis will be significantly simplified thanks to MHSI [24]. Furthermore, MHSI has facilitated cancer detection of the head and neck in surgical specimens [25]. All of these MHSI applications to medical diagnosis have led to the development of multiple algorithms for classifying cancerous tissue in a sample more accurately and efficiently [26], which further motivates discussion of the different techniques for the processing and analysis of MHSI.
Artificial intelligence, and its most common subset known as machine learning, is a highly popular approach for processing hyperspectral images and extracting meaningful data from them. Machine learning (ML) algorithms make use of data and statistical models to learn and identify patterns in order to complete specific tasks and make decisions with or without human supervision. Several such ML algorithms are utilized when examining hyperspectral images and, consequently, when identifying and classifying differences in a tissue specimen in MHSI. While ML algorithms can be rudimentarily divided into supervised and unsupervised learning models, they tend to develop into increasingly complex models as we delve further into deep learning (DL) [27], a branch of ML that is influenced by the structure and function of the human brain. In supervised learning, a model is fashioned from a dataset containing a number of input features and outputs, or labels. The model is formed by finding the optimal model parameters from a training sample of the dataset, and it is subsequently used to predict outcomes based on the minimized cost function. Unsupervised learning processes data without any particular output structure and is trained to find patterns, typically creating groupings based on clusters.
Traditional ML algorithms used for MHSI applications lean towards classification models for identification and diagnosis; these include \(k\)-nearest neighbors (kNN) [28], linear discriminant analysis (LDA) [29], and support vector machines (SVM) [30][31][32][33], with the latter being the most prominently applied. With ML already heavily facilitating the processing of MHSI, the next logical step is to apply deep learning to achieve a more cost-effective and more accurate prediction model, which would provide a more comprehensive diagnosis for these medical problems. Deep learning in general has taken off extraordinarily since as early as 2012 [34][35], and deep learning for MHSI has been no different in this regard. This subset of ML has already proven to outperform traditional ML techniques in the context of head and neck surgery [36] and looks promising for MHSI in a broader sense. Deep learning has also been applied to the classification task in the previously listed example related to head and neck cancer and shows a significant improvement in accuracy over SVM and kNN [37].
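As a concrete illustration of such a traditional baseline, the sketch below fits an RBF-kernel SVM to per-pixel spectra and evaluates it on a held-out split. The data, band count, and labels are synthetic placeholders rather than values from any cited study; the sketch is only meant to show the typical workflow that DL methods are compared against.

```python
# Minimal sketch of a traditional ML baseline for MHSI-style data: an SVM that
# classifies each pixel's spectrum as tumor vs. normal. All data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))          # 1000 pixels x 128 spectral bands (synthetic)
y = rng.integers(0, 2, size=1000)         # 0 = normal, 1 = tumor (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)    # normalize each band using training statistics only

clf = SVC(kernel="rbf", C=1.0)            # RBF-kernel SVM, a common MHSI baseline
clf.fit(scaler.transform(X_train), y_train)
y_pred = clf.predict(scaler.transform(X_test))
print("test accuracy:", accuracy_score(y_test, y_pred))
```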
Our end goal in this paper is to highlight the key DL methods being used for MHSI and the challenges to the successful application of DL in MHSI. The DL methods implemented for MHSI in the coming decade will have significant implications for future studies in several key areas of the medical discipline, including cancer research. The rest of the survey is organized as follows: Section 2 introduces the fundamentals of DL methods that are currently being used as well as several emerging algorithms. Section 3 discusses which DL methods are presently implemented for MHSI as well as additional algorithms that could be applied in the coming decade. Section 4 ultimately discusses the current challenges faced by DL for MHSI.
As previously mentioned, research interest in DL methods for MHSI has surged since as recently as 2012. This can easily be traced back to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) conducted in 2012, in which a deep learning method, discussed later in Section 2, broke prior records by providing accuracy results close to 41% better than the best previous attempt [38]. Since then, more and more researchers have explored DL methodologies in a wide array of disciplines. Our survey covers papers that specifically target medical hyperspectral imaging with deep learning techniques. For papers to be considered valid for our purpose, we examined keywords only with major publishers comprising IEEE, Elsevier, Springer, SPIE, MDPI, the International Journal of Biomedical Imaging, the Journal of Biomedical Optics, and a handful of small publishers via Google Scholar as well as the publishers' respective search engines, with the latest being published in December 2019. While numerous publications already exist on deep learning for medical image analysis in general, our topic of discussion is highly specific, so we cross-checked all sections within the publications to verify that they address deep learning applied to medical hyperspectral imaging. For situations in which a published work corresponded to multiple papers, we only considered the papers with greater significance in regard to their contributions. The goal of including these papers in this survey, as mentioned earlier, was to capture the benefits of deep learning methodologies, how they are contributing to medical hyperspectral imaging, and the challenges faced in effectively applying these deep learning techniques to medical hyperspectral imaging.
Figure 1: Chart representing the number of papers published in the last decade. The bars depict the number of papers published, grouped into two-year intervals, for each category of DL methodology.
With the rise of a global pandemic in the shape of the coronavirus known as COVID-19 in late December of 2019, the race to find solutions for its detection and eventually its termination in the form of a vaccine heated up severely during early 2020. COVID-19 frequently presents as pneumonia, and as a result, affected patients can display severe respiratory disease and, in the worst cases, death. In terms of applications of DL for medical imaging with regard to COVID-19, DL is currently being utilized heavily to detect the presence of COVID-19 during the screening of potential patients. The predominant testing technique currently employed to detect COVID-19 is reverse transcription-polymerase chain reaction (RT-PCR). X-ray procedures such as CT scans are playing a pivotal role in early diagnosis, and it is in this domain that DL is providing significant results [39]. With several DL methodologies [40][41] already cropping up to assist in the detection of COVID-19, it may not be long before DL also starts supplementing the eradication of the virus. For the purposes of this paper, however, we could not find any paper discussing DL in MHSI for COVID-19 within our search criteria.
## 2 Deep Learning Methods
Machine learning methodologies are typically divided into either supervised or unsupervised learning algorithms. A learning algorithm that makes use of a dataset comprising a set of input features and an output label to obtain a predictive model is known as a supervised learning algorithm. The output label in the dataset for supervised learning can correspond to either a classification or a regression problem. A classification problem categorizes the output into discrete values, such as type A and type B. A regression problem provides the output as a continuous value, which could represent a real quantity such as dollars or a more symbolic numerical value, such as a normalized score. The learning part in supervised learning refers to finding the optimal weights, or parameters, that minimize the cost (or loss) function. This means that no further reduction in the loss can be achieved and that the weights cannot be further improved, which results in the best-fit model for the given inputs, bias, and learning rate. After successfully creating the model, a portion of the data is used to test the accuracy and/or efficiency of the model. This is done simply by comparing the output of the model against the actual value from the dataset. While such comparisons are also made in the process of minimizing the cost function, the goal of testing the model is to assess how well it generalizes to unseen data.
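The following minimal sketch makes these ideas concrete: a logistic model's weights and bias are adjusted by gradient descent to reduce a cross-entropy-style cost on a training split, and the resulting model is then scored on held-out samples. All data are randomly generated for illustration.

```python
# Bare-bones supervised learning loop: iteratively adjust weights W and bias b
# to reduce a cost on the training set, then evaluate on held-out samples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                                      # 200 samples, 10 features
true_w = rng.normal(size=10)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)     # synthetic binary labels

X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]
W, b, lr = np.zeros(10), 0.0, 0.1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(500):                              # gradient descent on the cost function
    p = sigmoid(X_train @ W + b)
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = np.mean(p - y_train)
    W -= lr * grad_w
    b -= lr * grad_b

test_acc = np.mean((sigmoid(X_test @ W + b) > 0.5) == y_test)
print("held-out accuracy:", test_acc)
```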
An unsupervised learning algorithm is used to figure out the underlying structure of a dataset that does not contain a definite output. The algorithm does this by either clustering the data into categories or by finding associative properties among the input parameters. Since the initial weights and parameters are selected ambiguously, the resulting output is not always the same, as a different model may be obtained during every training process. In the following sections, we will expand upon various deep learning methods, which are built upon the ML fundamentals previously stated.
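A minimal clustering example of this kind is sketched below, where k-means groups unlabeled spectra into two clusters without ever seeing output labels; the two synthetic "tissue types" are an assumption made purely for illustration.

```python
# Unsupervised learning sketch: cluster unlabeled spectra into groups with k-means.
# No output labels are used; the algorithm only looks for structure in the inputs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
spectra = np.vstack([rng.normal(0.2, 0.05, size=(100, 64)),   # two synthetic "tissue types"
                     rng.normal(0.8, 0.05, size=(100, 64))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spectra)
print(np.bincount(kmeans.labels_))   # how many spectra fell into each cluster
```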
### Supervised Learning Methods
#### 2.1.1 Neural Networks
Neural networks (also known as multilayer perceptrons) are a category of learning algorithms built upon the idea and structure of the human brain, as the name suggests, and they lay the foundation for the majority of deep learning methods. A neural network contains neurons (mathematically referred to as perceptrons) as its base unit, each of which comprises an activation number and other parameters such as weights W and bias b. The mapping from one layer to the next can be expressed as
\\[\\alpha^{(1)}=\\sigma(Wa^{(0)}+b), \\tag{1}\\]
where \\(\\alpha^{(0)}\\) represents the initial activation numbers, or the input features, and \\(\\alpha^{(1)}\\) refers to the activation numbers for the next layer. The \\(\\sigma\\) refers to the transfer function, which is traditionally denoted as either a sigmoid function or a step function. To form a fully constructed layer at each interval, we obtain the dot product between the weight vector and the input features vector. Thisrepresent a single layer of neurons, which has a simple feedforward mechanism, i.e. the neuron takes in a single input, performs the operation on it, and then passes it on to the next layer. A multi-layer perceptron (MLP) consists of two or more layers of neurons or perceptrons, which are also known as hidden layers. All of these layers from MLP combine together to form the basis of what is more commonly known as a Deep Neural Network (DNN). While there are different additional DNN techniques on which we will be expanding next, MLP is one of the most basic DNN architectures.
#### 2.1.2 Convolutional Neural Networks
A convolutional neural network (CNN) is an exceedingly popular deep learning algorithm that is used to classify an input image by recognizing patterns and features in order to differentiate between objects. In fact, one of the earliest perceptron concepts in 1957 was designed with the purpose of classifying an input image as man or woman [42]. The weights in a CNN architecture are designed to perform convolution operations on the input image rather than on a flattened feature vector. In a convolution layer, the spatial structure is preserved, along with any temporal dependencies, through the usage of filters or kernels. This means that, instead of assigning a weight to each input feature of a stretched-out input matrix, a small filter with appropriate dimensions relative to the input image is applied to perform an element-wise dot product between the filter and a chunk of the image, which is thereby 'convolved' to obtain an activation. Note that the depth of the filter should match the depth of the image to make this convolution possible. This process is repeated over all spatial locations of the image, thereby providing us with a final activation map over the spatial region.
CNNs are the predominant classification architecture widely used in medical image analysis. One example of a simpler architecture is LeNet [43], introduced over two decades ago, which, despite being a rather shallow network, displayed the basic concept of a CNN in a simple and elegant fashion, using the tangent function as the activation function. Later on, AlexNet [38] shattered expectations at the ILSVRC in 2012. Bearing similar characteristics to LeNet, AlexNet [38] also made use of kernels with a larger receptive field in layers closer to the input and smaller kernels closer to the output, with the key difference being the adoption of rectified linear units (ReLU) as the activation function, which has since become the activation function of choice for modern CNNs due to higher classification accuracy. It can be duly noted that the rise in popularity of deep learning techniques coincides with this series of events. This has led to the exploration of several architectures with farther-reaching hidden layers. Building upon the base of deeper networks, more intricate layers have been introduced, which further reduce the error rate of classification while
Figure 2: Visual representation of (a) a simple neural network: a machine learning algorithm modelled on the biological neural network of the human brain; (b) a deep neural network: a neural network comprising more than one hidden layer of neurons in its architecture.
also doing so more efficiently. In ILSVRC 2014, GoogLeNet [44] used "inception" blocks that essentially reduced the number of operations performed at each layer by making use of smaller sets of convolutions. The inception module applies convolution layers of different sizes in parallel, thereby allowing the final filter concatenation to stack the outputs together. Later, ResNet [45] was introduced, which is composed of residual blocks. The residual block, as the name suggests, learns residual features and provides a shortcut that skips one or more layers. This further extended the efficiency of deeper models, thereby helping to solve the vanishing gradient problem that commonly afflicts deep learning models.
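To make the structure of such networks concrete, the sketch below defines a small LeNet-style CNN in PyTorch with two convolution/pooling stages followed by a fully connected classifier. The channel counts, patch size, and two-class output are illustrative assumptions, not the configuration of any of the cited architectures.

```python
# A minimal LeNet-style CNN for classifying small image patches (e.g., tissue
# patches); layer sizes are illustrative only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

patches = torch.randn(4, 3, 32, 32)                  # a batch of 4 RGB 32x32 patches
logits = SmallCNN()(patches)
print(logits.shape)                                   # torch.Size([4, 2])
```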
#### 2.1.3 U-net
Segmentation is a popular task not only in the medical image analysis community but also in the field of computer vision in general. While a classification CNN assigns a label to an entire image, segmentation provides inference by predicting a label for each pixel, which facilitates enclosing core object locations and thus separates the image into distinct regions. While a naive sliding-window approach leads to the unfortunate overlap of neighboring patches, with the same convolutions being computed multiple times, several solutions have been proposed to overcome this ordeal, the most prominent of which is known as a Fully Convolutional Network (FCN). The core idea behind the FCN is to take the original CNN with arbitrarily sized input images and recast the fully connected layers as convolutions to produce the segmented output. However, this still results in a degraded feature map due to the propagation through several pooling layers.
The U-net architecture [46] provides a solution to this issue, built upon the foundation of the FCN. This architecture consists of the basic FCN supplemented by upsampling layers in place of pooling layers in its expanding path, which results in an increase in resolution for the final output image. The network is trained with stochastic gradient descent, and the energy function is computed with the aid of a soft-max applied pixel-wise across the final feature map, where the soft-max is defined as
\\[p_{k}(x)=\\frac{e^{a_{k}(x)}}{\\sum_{k^{\\prime}=1}^{K}e^{a_{k}(x)}}\\quad\\quad, \\tag{2}\\]
where \(a_{k}(x)\) denotes the activation in feature channel \(k\) at pixel position \(x\), and \(K\) is the number of classes. This soft-max is then used within an energy function defined as
\\[E=\\sum w(x)\\text{log}(p_{k(x)}(x)), \\tag{3}\\]
Figure 3: Convolutional Neural Network (CNN): a deep learning architecture containing several convolutional and pooling layers, which are then connected at the output. This output in turn is used to classify the input image provided. The CNN has proved exceedingly popular in computer vision and visual image analysis as of late due to its high accuracy and precision.
where \\(w\\) is the weight matrix for the model. This energy function is in short, a combination of the soft-max layer over the final feature map with the cross-entropy loss function.
#### 2.1.4 Recurrent Neural Networks
Recurrent neural networks (RNNs) were developed to handle sequences of vectors over time, which is something CNNs cannot accomplish, as they are restricted to handling fixed-size input vectors and producing fixed-size outputs. In addition, CNNs operate with a fixed number of layers. RNNs, by contrast, can have input and output vectors of varying lengths, which makes them invaluable for tackling problems in the natural language processing (NLP) domain [47], in which the input is a constantly evolving sequence. Another way in which RNNs differ from traditional CNNs is that they are not purely feedforward systems; instead, they loop the output of each hidden layer back into itself. For a classification problem, the output of the hidden layer at the previous step is used, along with the current input, as input to the hidden layer. This can be represented as
\\[h_{t}=\\sigma(Wx_{t}+Vh_{t-1}+b), \\tag{4}\\]
where \\(x_{t}\\) is the input vector, \\(h_{t}\\) is the hidden layer vector, W and V are the weight matrices, and \\(b\\) is the bias vector.
The RNN, however, also experiences the same vanishing gradient problem during training as regular DNNs. Several solutions involving special memory units have been proposed for RNNs, with the Long Short-Term Memory (LSTM) cell [34] being one of the most popular, as well as one of the earliest.
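The recurrence of Eq. (4) can be written out directly in NumPy, as in the sketch below, where the same weights are reused at every time step and the sequence length is free to vary; all dimensions are illustrative.

```python
# NumPy implementation of the recurrence in Eq. (4): the hidden state h_t depends
# on the current input x_t and the previous hidden state h_{t-1}.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
input_dim, hidden_dim, seq_len = 5, 3, 7
W = rng.normal(size=(hidden_dim, input_dim))     # input-to-hidden weights
V = rng.normal(size=(hidden_dim, hidden_dim))    # hidden-to-hidden weights
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                         # initial hidden state
for t in range(seq_len):                         # sequence length can vary freely
    x_t = rng.normal(size=input_dim)
    h = sigmoid(W @ x_t + V @ h + b)             # Eq. (4)
print(h)
```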
Figure 4: U-net architecture: a deep learning algorithm based upon the Fully Convolutional Network (FCN), which is itself built upon the foundation of the CNN. The majority of segmentation techniques involve U-net in one way or another, with it forming the core component of many proposed segmentation models.
Figure 5: A simple Recurrent Neural Network (RNN): a neural network in which the previous output is also used as input to modify the state of the hidden layer neurons. RNNs have the capability of processing inputs of varying lengths without a change in the model size, which is invaluable for constantly evolving datasets.
### Unsupervised Learning Methods
#### 2.2.1 Auto-encoders
An autoencoder is a type of neural network that is primarily trained to provide a learned representation of the input. In other words, an autoencoder generates a replica of the provided input after undergoing a handful of operations. These operations are performed over a hidden layer in which the model 'encodes' and then 'decodes' the input to provide the mapped output. Although this process might seem meaningless on the surface, it gives us the opportunity to see how the data is projected onto a lower dimension. This is because the hidden layer has the smallest dimension in the network, yet it encompasses all the information needed to reconstruct the output for the same class of input. This is particularly useful in the case of anomaly detection, in which the autoencoder is trained on inputs from a single class of data. Since an anomalous input produces an incoherent reconstruction from the autoencoder's hidden layer, the anomalies can be discovered easily via their large reconstruction error.
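The sketch below shows a small fully connected autoencoder in PyTorch trained to reconstruct "normal" synthetic samples, with the reconstruction error then used to flag an out-of-range sample as an anomaly; the dimensions and data are assumptions for illustration only.

```python
# Small fully connected autoencoder: the bottleneck layer is lower-dimensional
# than the input, and reconstruction error can flag anomalies.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

normal_data = torch.rand(256, 64)                 # synthetic "normal" samples
for _ in range(200):                              # train to reconstruct normal data
    opt.zero_grad()
    loss = loss_fn(model(normal_data), normal_data)
    loss.backward()
    opt.step()

anomaly = torch.rand(1, 64) * 5.0                 # out-of-range sample
err = loss_fn(model(anomaly), anomaly).item()     # large reconstruction error flags an anomaly
print("reconstruction error:", err)
```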
#### 2.2.2 Restricted Boltzmann Machines
Restricted Boltzmann Machines (RBMs) are a relatively simple deep learning system comprising two layers: visible (input) and hidden. They form the basis of deep belief networks. The nodes (neurons) of each layer form inter-layer connections while 'restricting' intra-layer connections, which is how the model derives its name. The RBM comprises bidirectional communication between the layers and is thus a generative model that uses the hidden layer to fashion new data points. It does so by defining an energy function for a joint state (y, z) of the visible and hidden units as
\\[E(y,z)=\\sum_{i}a_{i}y_{i}-\\sum_{j}b_{j}z_{j}-\\sum_{i}\\sum_{j}y_{i}w_{i,j}z_{j}, \\tag{5}\\]
where \\(a\\) and \\(b\\) are the biases, \\(w\\) is the weight matrix, and \\(y\\) and \\(z\\) are the states for hidden unit \\(j\\) and visible unit \\(i\\). The pair of possible hidden and visible vector is computed by finding the probability as
\\[p(y,z)=\\frac{1}{z}e^{-E(y,z)}, \\tag{6}\\]
where Z is known as the partition function. RBMs are primarily used to pre-train a NN by generating its initial weights and to form the foundation for other deep learning methods such as the Deep Belief Network (DBN). These DBNs are then used for many different applications, including cyber security [48], NLP [49], and of course medical image analysis [50].
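For very small binary units, Eqs. (5) and (6) can be evaluated exactly, as in the NumPy sketch below, where the partition function Z is computed by brute force over all joint states; this is feasible only at toy sizes and is meant solely to make the definitions concrete.

```python
# Exact evaluation of the RBM energy (Eq. 5) and joint probability (Eq. 6)
# for tiny binary visible/hidden vectors.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n_vis, n_hid = 3, 2
a = rng.normal(size=n_vis)            # visible biases
b = rng.normal(size=n_hid)            # hidden biases
W = rng.normal(size=(n_vis, n_hid))   # visible-to-hidden weights

def energy(y, z):                     # Eq. (5)
    return -a @ y - b @ z - y @ W @ z

states = [np.array(s) for s in itertools.product([0, 1], repeat=n_vis + n_hid)]
Z = sum(np.exp(-energy(s[:n_vis], s[n_vis:])) for s in states)   # partition function

y, z = np.array([1, 0, 1]), np.array([0, 1])
print("p(y, z) =", np.exp(-energy(y, z)) / Z)     # Eq. (6)
```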
Figure 4: Comprehensive layout of an autoencoder: a type of neural network that is used to learn an efficient coding of the data, which is particularly useful for dimensionality reduction of a given dataset.
#### 2.2.3 Generative Adversarial Networks (GANs)
GANs, used widely in image, video and audio generation scenarios [51], are (as the name suggests) a generative architecture based largely on a probabilistic formulation for unlabeled datasets, providing an alternative to maximum likelihood estimation. A GAN architecture pits two neural networks against one another with the purpose of generating synthetic samples that are comparable to actual data. In each training iteration, the two neural networks repeatedly improve at their respective tasks. This procedure continues until the output from the generator resembles the actual sample data as closely as possible.
The two neural networks in question are the generator network and the discriminator network, which compete against each other as displayed in the GAN block diagram (Fig. 6). Training can be framed as a min-max game, in which the value function \(V(D,G)\) is described as

\\[\\min_{G}\\max_{D}V(D,G)=\\mathbb{E}_{x\\sim p_{data}(x)}[\\log D(x)]+\\mathbb{E}_{z\\sim p_{z}(z)}[\\log(1-D(G(z)))], \\tag{7}\\]

where \(x\) is drawn from the real data distribution \(p_{data}\) and \(z\) is a noise vector drawn from a prior \(p_{z}\) and passed through the generator.
Figure 5: RBM block diagram: a stochastic neural network with bidirectional connections between layers (and consequently generative), comprising hidden layers (h1, h2, & h3) and an input layer \(x\), used to learn probability distributions over the given set of inputs.
Figure 6: Generative Adversarial Network block diagram: an architecture that pits two networks against each other in order to learn the patterns of the input data and produce outputs that resemble actual samples of the input as closely as possible.
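A minimal version of the adversarial training loop implied by Eq. (7) is sketched below in PyTorch, using the common non-saturating generator loss on toy one-dimensional data. Network sizes, the latent dimension, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):                      # stand-in for samples from p_data
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    # Discriminator update: maximize log D(x) + log(1 - D(G(z))).
    x = real_batch()
    z = torch.randn(x.size(0), latent_dim)
    fake = G(z).detach()
    loss_D = bce(D(x), torch.ones(x.size(0), 1)) + \
             bce(D(fake), torch.zeros(x.size(0), 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update (non-saturating form): maximize log D(G(z)).
    z = torch.randn(x.size(0), latent_dim)
    loss_G = bce(D(G(z)), torch.ones(x.size(0), 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```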
## 3 DL Methods for MHSI
### 3.1 Classification
Classification of pathological images has been one of the earliest areas, not just of MHSI but of medical image analysis in general, in which deep learning techniques have had a significant influence. Typical procedures for processing MHSI using classification include applications such as cell classification [52] in order to identify and classify cancerous cells. On the surface this is sometimes synonymous with detection, in which the architecture is designed to detect traces of cancer in samples as opposed to just classifying the cells themselves; detection is discussed in the next section. Classification techniques for MHSI often make use of transfer learning, which has proven to be quite useful in these scenarios. MHSI typically involves small datasets comprising medical diagnosis images (typically hundreds or thousands), as opposed to the broader field of computer vision, in which datasets can consist of millions of sample images; this is one of the reasons why transfer learning has been feasible for this discipline.
Transfer learning is fundamentally the use of a pre-trained network to classify input images from testing samples, and as mentioned earlier, the smaller datasets make it far more practical than training a network from scratch to learn new input features every time for essentially similar input image datasets. The implementation of deep learning for MHSI took some time to catch on compared to other areas of research. Earlier implementations of deep learning in MHSI began with the classification of benign and malignant tissue or cell samples using ANNs, typically MLPs [21]. A particular study exploring HSI for characterizing kidney stones [53] made use of Principal Component Analysis (PCA) to determine appropriate variables for a simple ANN model comprising a hidden layer with four nodes, which was used to classify the type of kidney stone from the HSI, and a similar approach was also undertaken to classify different cancer types [54][55]. There have also been situations in which ANNs provided inferior results, which in turn highlights one of the recurring problems for deep learning, namely the absence of larger datasets. One such study compared the performance of four supervised algorithms, pitting an ANN against random forest, SVM and k-nearest neighbors [56]. The paper found that for the 11 patient samples analyzed, SVM produced the best results, although the authors remarked that a larger training dataset may lead to better performance in general.
In more recent years, however, CNNs have become the prevalent choice for the task of tissue/cell classification. For instance, one paper used PCA and transfer learning for CNNs with kernel fusion [57] to perform classification in MHSI. This publication made use of a Gabor kernel (implemented to obtain spatial features) together with the CNN kernel to improve upon a conventional CNN for MHSI classification, with the proposed model showing improved performance. CNNs have also been implemented to classify blood cell MHSI in a similar fashion [52][58][59], where increased pixel size for the MHSI produced better classification accuracy, further proving the potential of CNNs for MHSI. A CNN model showed promising prospects for classifying head and neck cancer [37], even in an animal model [60]. A proposed CNN model also produced successful results for the classification of oral cancer diagnoses [61], and its performance was verified by applying the same dataset to an SVM and a DBN. The majority of the proposed CNN models discussed in this section build upon the traditional CNN architecture, adapted to better suit the requirements of the medical diagnosis under examination.
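As a concrete illustration of the transfer-learning recipe described above, the sketch below reuses a CNN pre-trained on natural images and retrains only its final layer on a small set of labeled tissue patches. The choice of ResNet-18, the use of three input channels (e.g., selected or PCA-reduced bands), and the two-class setup are assumptions made for illustration and do not correspond to any specific paper reviewed here.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
for p in model.parameters():                     # freeze the pre-trained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new head, e.g. benign vs. malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(16, 3, 224, 224)           # stand-in for band-reduced HSI patches
labels = torch.randint(0, 2, (16,))
for epoch in range(10):                          # only the new head is updated
    logits = model(patches)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```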
### Detection
Detection techniques in general refer to object detection for a given input image. For MHSI, detection techniques are commonly used to detect malignant cancerous samples, although they have also been used for the reconstruction of tissue surfaces. While ANNs have been used to detect such cancerous samples [62], the majority of detection techniques implemented for MHSI use CNNs to classify the pixels of the image, and this pixel-wise classification is then used to detect malignant cancer cells/tissues [63][64]. Once the pixel-wise classification is obtained from the CNN, it can be post-processed to detect the presence of cancer or serve other purposes for the given input sample. One related study implemented a CNN-based model to reconstruct a tissue surface using an endoscopic probe, which displayed potential for practical applications [65], and another study applied an FCN to investigate tissue surface samples using an endoscopic probe in a similar fashion [66].
In recent years, CNNs have been particularly useful in MHSI for head and neck squamous cell carcinoma detection [67][68], although AEN has also been applied in a similar scenario [69]. Among the papers considered, CNNs were also used to aid brain tumor resection surgeries in real time [70] and for skin cancer detection [71]. These publications strongly suggest that although CNNs are well suited to classification, as discussed in the prior subsection, they also hold potential for detection, supporting a promising outlook for future research on detection techniques in MHSI.
Figure 7: Visual representation of classification of a cancerous tissue sample.
### Segmentation
Segmentation provides outlines for certain parts of images that dictate the position, volume, and size of the relative objects in an image, as discussed in Section 2 of this study. For medical image diagnosis, this is particularly useful, as it could clearly outline certain organs, or noteworthy parts of a medical image. This is significant for medical diagnosis in which brain, liver or other important organs need to be distinguished from a medical image. While segmentation has been extensively used for medical diagnosis over the years [72][73], in the case of MHSI, we could only find one notable paper that used a segmentation technique, specifically for the purpose of retinal image analysis [74].
This paper implemented a dense-FCN (the FCN being the foundation of U-net [46]) to segment the retinal image. By applying \(k\)-means clustering to the input data, the study was able to lessen the complexity in order to aid the segmentation process, and it validated the approach against alternatives involving other ML algorithms such as SVM and random forest. Its findings suggest that spectral data may provide the opportunity for improved optic disc and macula segmentation, which is important for retinal imaging analysis, thereby motivating further research on segmentation techniques for MHSI.
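One plausible form of the \(k\)-means preprocessing mentioned above is sketched below: the spectral bands of the hyperspectral cube are clustered and averaged within each cluster to obtain a lower-dimensional input for the segmentation network. This is only one interpretation; the exact preprocessing used in the reviewed paper may differ, and the cube dimensions and number of clusters here are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

cube = np.random.rand(128, 128, 60)            # stand-in for an H x W x bands cube
h, w, n_bands = cube.shape
bands = cube.reshape(-1, n_bands).T            # one row per spectral band

k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(bands)

# Average the bands assigned to each cluster to form a reduced k-channel image.
reduced = np.stack([cube[..., labels == c].mean(axis=-1) for c in range(k)], axis=-1)
print(reduced.shape)                           # (128, 128, 8), fed to the segmentation net
```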
Figure 8: Standard workflow for cancer detection using deep learning on medical image analysis.
Figure 9: Classical representation of an FCN for the purposes of segmentation.
### 3.4 Other
DL applications in MHSI go beyond the traditional machine learning task domains discussed above. This can be observed in a handful of papers considered in this study. For example, one study utilized a GAN with the aid of an autoencoder, using MATLAB ML tools, to determine tissue oxygen saturation from hyperspectral input data; the results are useful for the detection of ischaemia, a potentially fatal condition that affects the blood supply to different organs of the body, particularly the heart muscle [75].
Another similar case in which blood oxygenation is determined using MATLAB tools was discussed in a 2017 study [76]. In this circumstance, the NN fitting tool was used to retrieve the parameters necessary to determine the oxygen saturation levels. Lastly, in another 2017 study, a GAN was employed again to assist in the task of staining lung histology imaging, which was then used to further study the necessary tissue samples [77]. Overall, these studies suggest a broader field of applications of DL for MHSI in the near future. Tables 1 and 2 below list the relevant publications we obtained for this paper. Table 1 details the titles of the papers and the category of DL task each paper pertains to, while Table 2 lists the deep learning methodologies used and example applications.

\\begin{table}
\\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \\hline
**Publication Title** & **Category** \\\\ \\hline
Cell classification using convolutional neural networks in medical hyperspectral imagery [52] & classification \\\\ \\hline
Towards virtual H\\&E staining of hyperspectral lung histology images using conditional generative adversarial networks [77] & other \\\\ \\hline
Hyperspectral image segmentation of retinal vasculature, optic disc and macula [74] & segmentation \\\\ \\hline
Hyperspectral imaging for cancer detection and classification [54] & classification \\\\ \\hline
Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks [65] & detection \\\\ \\hline
Probe-based rapid hybrid hyperspectral and tissue surface imaging aided by fully convolutional networks [66] & detection \\\\ \\hline
Hyperspectral system for imaging of skin chromophores and blood oxygenation [76] & other \\\\ \\hline
Medical hyperspectral imaging: a review [21] & classification \\\\ \\hline
Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging [37] & classification \\\\ \\hline
Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model [60] & classification \\\\ \\hline
Convolutional neural network for medical hyperspectral image classification with kernel fusion [57] & classification \\\\ \\hline
Blood cell classification based on hyperspectral imaging with modulated Gabor and CNN [58] & classification \\\\ \\hline
Medical hyperspectral image classification based on end-to-end fusion deep neural network [59] & classification \\\\ \\hline
Tissue classification of oncologic esophageal resectates based on hyperspectral data [56] & classification \\\\ \\hline
A dual stream network for tumor detection in hyperspectral images [62] & detection \\\\ \\hline
Adaptive deep learning for head and neck cancer detection using hyperspectral imaging [69] & detection \\\\ \\hline
Hyperspectral imaging of head and neck squamous cell carcinoma for cancer margin detection in surgical specimens from 102 patients using deep learning [67] & detection \\\\ \\hline
Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm [61] & classification \\\\ \\hline
Hyperspectral imaging for head and neck cancer detection: specular glare and variance of the tumor margin in surgical specimens [68] & detection \\\\ \\hline
Cancer detection using hyperspectral imaging and evaluation of the superficial tumor margin variance with depth [64] & detection \\\\ \\hline
Surgical aid visualization system for glioblastoma tumor identification based on deep learning and in-vivo hyperspectral images of human patients [70] & detection \\\\ \\hline
Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks [63] & detection \\\\ \\hline
Convolutional neural networks in skin cancer detection using spatial and spectral domain [71] & detection \\\\ \\hline
Estimation of tissue oxygen saturation from RGB images and sparse hyperspectral signals based on conditional generative adversarial network [75] & other \\\\ \\hline
Design of a multilayer neural network for the classification of skin ulcers' hyperspectral images: a proof of concept [55] & classification \\\\ \\hline
Hyperspectral imaging based method for fast characterization of kidney stone types [53] & classification \\\\ \\hline
\\end{tabular}
\\end{table}
Table 1: Papers using deep learning techniques for MHSI.
## 4 Challenges in and Future of DL MHSI
The boom in deep learning research following the ILSVRC in 2012 has had substantial effects on a variety of disciplines [35], particularly medical image analysis and diagnosis, as is evident from the surge in papers published [78][79]. While some domains in the medical community have already felt the impact of deep learning techniques, particularly radiology, where the profession of practicing radiologists has been significantly affected [80], MHSI has yet to experience a similar fate. This may simply be due to the smaller number of researchers working on MHSI in general. However, judging from the trend observed in the number of publications applying deep learning to MHSI, a similar outcome may unfold over the coming years. With deep learning methodologies improving efficiency and delivering more accurate results in areas where general medical image analysis seemed to have reached an impasse [35], the MHSI field is likely to be impacted as well. Ultimately, further research in MHSI would also provide the essential requirement of large available datasets, the absence of which was an obstacle for deep learning as a whole prior to the last couple of decades.

\\begin{table}
\\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \\hline
**Methodology** & **Example application** \\\\ \\hline
ANN & This review paper discusses the use of ANN for classification for MHSI [21] \\\\ \\hline
CNN & Classifying cancerous tissue samples from neck and head regions using CNN [37] \\\\ \\hline
\\end{tabular}
\\end{table}
Table 2: Deep learning methodologies used in MHSI and their applications.
With such deep-rooted research in deep learning techniques, it is clearly evident that no single technique trumps all others, as the needs and requirements vary from one situation to another. However, CNNs appear to be the prevalent choice, as the bulk of publications considered in this paper utilize them in some way or another. Meanwhile, different CNN architecture variants are still implemented for particular circumstances, such as in [37] and [68]: although both investigate the same subject of head and neck cancer, they each use a different approach to tackle the challenge using CNNs. Issues for deep learning in MHSI also stem from several existing DL woes, one of which relates to the broader problems of classification and detection, particularly class imbalance in classification performed for the purpose of object detection. Detection systems first perform a pixel-wise classification, as discussed in the earlier section, and the class balance during training is typically biased towards non-object classes, which are easier to discern amongst the samples and may distort the overall detection process.
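A common mitigation for the class imbalance described above is to weight the per-pixel classification loss by approximate inverse class frequency, as in the short PyTorch sketch below; the two-class setup and the 20:1 weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Suppose background pixels outnumber tumor pixels roughly 20:1.
class_weights = torch.tensor([1.0, 20.0])
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2, 64, 64)         # per-pixel scores: (batch, classes, H, W)
target = torch.randint(0, 2, (4, 64, 64))  # per-pixel labels
loss = loss_fn(logits, target)             # rare-class errors are weighted more heavily
```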
## 5 Conclusions
Despite the boom in deep learning, its rigorous application to medical image analysis, and the benefits of hyperspectral imaging for medical analysis, there had not yet been any papers published that review the work dedicated to the implementation of deep learning for medical hyperspectral image analysis. In this paper, we discussed deep learning techniques for medical hyperspectral imaging and the relevant papers published on the topic within the year range of 2012-2019. In short, similar to the trend observed for deep learning since the iconic ILSVRC 2012, there has been a definite boost in research and papers published on deep learning for MHSI and medical image analysis in general. We also summarized the different methodologies presented in the papers we found, as well as what the future of DL for MHSI may look like. The trend for the majority of papers implementing DL for MHSI is the use of CNNs to classify blood cells/tissue samples in order to aid cancer detection and analysis. We could only find one paper that discussed the use of an FCN for segmentation of retinal imaging. Lastly, there were also isolated uses of GANs and MATLAB tools to determine tissue oxygenation levels, which could potentially be used for detecting ischaemia (a condition directly dependent on blood supply to vital organs), and for staining of lung histology imaging.
**Conflicts of Interest:** The authors declare no potential conflict of interest and have no relevant financial motives for this article.
## Appendix A (Search Criteria)
Google Scholar was used to find the relevant publications considered for this paper. The key search words used were "deep learning" and "medical hyperspectral imaging"; we found that searches for more specific topics, such as individual techniques, yielded results less relevant to our purpose. In addition to Google Scholar, we also used the same search words on the publishers' own search engines for the papers considered for this article, which resulted in the discovery of papers that had not previously appeared in Google Scholar.
## Appendix B (Datasets)
The majority of datasets utilized by the publications were obtained either directly through experimentation using various imaging systems or via contribution from hospitals, laboratories, or clinical storage. Some examples of testing systems implemented include CRI Maestro for vivo imaging system [69], more commonly used Liquid Crystal Tunable Filters (LCTF) [58][57][75][59], and Hybrid endoscopic apparatus (ICL SLHSI) [65]. Several papers also used actual medical samples, such as patients undergoing surgical cancer resection [64][37] and other contributions of local patient samples from hospitals, medical centers, and laboratories. The only largely available repository in use by a paper was BioGPS UCI repository [61].
## Appendix C (Software)
Various software packages were implemented across all the publications discussed in this paper, the majority of which implemented Tensorflow (wherever stated). Tensorflow is an open-source machine learning platform based on Python, which is supported by its vast number of libraries, tools, and community resources. The complete list of software and the papers utilizing those software can be found below-
## References
* [1] I. G. Stiell, G. A. Wells, K. L. Vandemheen, C. M. Clement, H. Lesiuk, A. Laupacis, R. D. McKnight, R. Verbeek, R. J. Brison, D. Cass, M. A. Eisenhauer, G. H. Greenberg and J. Worthington, "The Canadian CT Head Rule for patients with minor head injury," _The Lancet,_ vol. 357, no. 9266, pp. 1391-1396, 2001.
Figure 10: Sample dataset from BioGPS repository: collection of genome signatures of ovarian tissue that can be utilized to identify ovarian cancer associated fibroblasts (CAFs) gene signatures [82].
* [2] S. Procza, C. Avilab, J. Feya, G. Roqueb, M. Schuetza and E. Hamannc, \"X-ray and gamma imaging with Medipix and Timepix detectors in medical research,\" _Radiation Measurements,_ vol. 106104, no. 127, 2019.
* [3] K. Doi, \"Computer-aided diagnosis in medical imaging: historical review, current status and future potential,\" _Computerized Medical Imaging and Graphics,_ no. 31, p. 198-211, 2007.
* [4] F. E.-Z. A. El-Gamal, M. Elmogy and A. Atwan, \"Current trends in medical image registration and fusion,\" _Egyptian Informatics Journal,_ pp. 99-124, 2016.
* [5] A. Stylianou and M. A. Talias, "Nanotechnology-supported THz medical imaging," _F1000Research,_ vol. 2, pp. 100-100, 2013.
* [6] M. A. Calin, S. V. Parasca, D. Savastru and D. Manea, "Hyperspectral Imaging in the Medical Field: Present and Future," _Applied Spectroscopy Reviews,_ vol. 49, no. 6, pp. 435-447, 2014.
* [7] S. Paheding, V. K. Ansari and V. Sagan, \"Progressively Expanded Neural Network (PEN Net) for hyperspectral image classification: A new neural network paradigm for remote sensing image analysis,\" _ISPRS Journal of Photogrammetry and Remote Sensing,_ vol. 146, pp. 161-181, 2018.
* [8] J. Ren, J. Zabalza, S. Marshall and J. Zheng, \"Effective Feature Extraction and Data Reduction in Remote Sensing using Hyperspectral Imaging,\" _IEEE Signal Processing Magazine,_ vol. 31, no. 4, pp. 149-154, 2014.
* [9] D. Lorente, N. Aleixos, J. Gomez-Sanchis, S. Cubero, O. L. Garcia-Navarrete and J. Blasco, \"Recent Advances and Applications of Hyperspectral Imaging for Fruit and Vegetable Quality Assessment,\" _Food and Bioprocess Technology,_ vol. 5, no. 4, pp. 1121-1142, 2012.
* [10] D.-W. S. Yao-Ze Feng, \"Application of Hyperspetral Imaging in Food Safety Inspection and Control: A Review,\" _Critical Reviews in Food Science and Nutrition,_ vol. 52, no. 11, pp. 1039-1058, 2012.
* Part I: Fundamentals,\" _Innovation Food Science & Emerging Technologies,_ vol. 19, pp. 1-14, 2013.
* [12] S. Paheding, Y. Diskin, S. Arigela and V. K. Asari, \"Visibility improvement of shadow regions using hyperspectral band integration,\" in _Proc. SPIE 9244, Image and Signal Processing for Remote Sensing XX_, Amsterdam, 2014.
* [13] M. S. Alam, R. P. Gollapalli and S. Paheding, \"Identification and detection of oil and oil-derived substances at the surface and subsurface levels via hyperspectral imaging,\" in _Optical Pattern Recognition XXIII. Vol. 8398. International Society for Optics and Photonics_, Baltimore, 2012.
* [14] M. S. Alam and S. Phaeding, \"Trends in Oil Spill Detection via Hyperspectral Imaging,\" in _2012 7th International Conference on Electrical and Computer Engineering_, Dhaka, 2012.
* [15] A. Essa, P. Sidike and V. Asari, \"Volumetric Directional Pattern for Spatial Feature Extraction in Hyperspectral Imagery,\" _IEEE GEOSCIENCE AND REMOTE SENSING LETTERS_, vol. 14, no. 7, pp. 1056-1060, 2017.
* [16] S. Phaeding, C. Chen, V. Asari, Y. Xu and W. Li, \"Classification of hyperspectral image using multiscale spatial texture features,\" in _IEEE 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing_, 2016.
* [17] S. Phaeding, V. K. Asari and M. S. Alam, \"Multiclass Object Detection With Single Query in Hyperspectral Imagery Using Class-Associative Spectral Fringe-Adjusted Joint Transform Correlation,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 2, pp. 1196-1208, 2016.
* [18] D. Manolakis and G. Shaw, \"Detection Algorithms for Hyperspectral Imaging Applications,\" _IEEE Signal Processing Magazine,_ vol. 19, no. 1, pp. 29-43, 2002.
* [19] S. Phaeding, V. K. Asari and M. S. Alam, \"Multiple object detection in hyperspectral imagery using spectral fringe-adjusted joint transform correlator,\" in _Proc. SPIE 9405, Image Processing: Machine Vision_, San Francisco, 2015.
* [20] C. Fischer and I. Kakoulli, \"Multispectral and hyperspectral imaging technologies in conservation: current research and potential applications,\" _Studies in Conservation,_ vol. 51, pp. 3-16, 2006.
* [21] G. Lu and B. Fei, \"Medical hyperspectral imaging: a review,\" _Journal of Biomedical Optics,_ vol. 010901, no. 19, 2014.
* [22] S. V. Panasyuk, S. Yang, D. V. Faller, D. Ngo, R. A. Lew, J. E. Freeman and A. E. Rogers, \"Medical Hyperspectral Imaging to Facilitate Residual Tumor Identification During Surgery,\" _Cancer Biology & Therapy,_ vol. 6, no. 3, pp. 439-446, 2007.
* [23] G. Lu, J. V. Little, X. Wang, H. Zhang, M. R. Patel, C. C. Griffith, M. W. El-Deiry, A. Y. Chen and B. Fei, \"Detection of Head and Neck Cancer in Surgical Specimens Using Quantitative Hyperspectral Imaging,\" _Clinical Cancer Research,_ vol. 23, no. 18, pp. 5426-5436, 2017.
* [24] S. Kiyotoki, J. Nishikawa, T. Okamoto, K. Hamabe, M. Saito, A. Goto, Y. Fujita, Y. Hamamoto, Y. Takeuchi, S. Satori and I. Sakaidaa, \"New method for detection of gastric cancer by hyperspectral imaging: a pilot study,\" _Journal of Biomedical Optics,_ vol. 18, no. 2, 2013.
* [25] G. Lu, J. V. Little, X. Wang, H. Zhang, M. R. Patel, C. C. Griffith, M. W. El-Deiry, A. Y. Chen and B. Fei, \"Detection of Head and Neck Cancer in Surgical Specimens Using Quantitative Hyperspectral Imaging,\" _Clinical Cancer Research,_ vol. 23, no. 18, pp. 5426-5436, 2017.
* [26] R. Pike, S. K. Patton, G. Lu, L. V. Halig, D. Wang, Z. G. Chen and B. Fei, \"A minimum spanning forest based hyperspectral image classification method for cancerous tissue detection,\" _SPIE Medical Imaging,_ vol. 9034, pp. 15-20, 2014.
* [27] Y. LeCun, Y. Bengio and G. Hinton, \"Deep learning,\" _Nature,_ vol. 521, p. 436-444, 2015.
* [28] G. Lu, X. Qin, D. Wang, S. Muller, H. Zhang, A. Chen, Z. G. Chen and B. Fei, \"Hyperspectral imaging of neoplastic progression in a mouse model of oral carcinogenesis,\" in _Proc. SPIE, Medical Imaging 2016: Biomedical Applications in Molecular, Structural, and Functional Imaging_, San Diego, 2016.
* [29] A. Madooei, R. M. Abdlaty, L. Doerwald-Munoz, J. Hayward, M. S. Drew, Q. Fang and J. Zerubia, \"Hyperspectral image processing for detection and grading of skin erythema,\" _Proc. SPIE, Medical Imaging 2017: Image Processing,_ vol. 10133, pp. 1-7, 2017.
* [30] H. Akbari, Y. Kosugi, K. Kojima and N. Tanaka, \"Blood Vessel Detection and Artery-Vein Differentiation Using Hyperspectral Imaging,\" in _31st Annual International Conference of the IEEE EMBS_, Minneapolis, 2009.
* [31] Z. Liu, D. Z. J.-q. Yan, Q.-L. Li and Q.-l. Tang, \"Classification of hyperspectral medical tongue images for tongue diagnosis,\" _Computerized Medical Imaging and Graphics,_ vol. 31, pp. 672-678, 2007.
* [32] R. Archibald and G. Fann, \"Feature Selection and Classification of Hyperspectral Images With Support Vector Machines,\" _IEEE Geoscience and Remote Sensing Letters,_ vol. 4, no. 4, pp. 674-677, 2007.
* [33] S. Vyas, A. Banerjee, L. Garza, S. Kang and P. Burlina, \"Hyperspectral signature analysis of skin parameters,\" in _Proc. SPIE, Medical Imaging 2013: Computer-Aided Diagnosis_, Lake Buena Vista, 2013.
* [34] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, S. Paheding, M. S. Nasrin, B. C. V. Essen, A. A. S. Awwal and V. K. Asari, \"The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches,\" 2018.
* [35] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, M. Hasan, B. C. V. Essen, A. A. S. Awwal and V. K. Asari, \"A State-of-the-Art Survey on Deep Learning Theory and Architectures,\" _Electronics,_ vol. 8, no. 3, p. 292, 2019.
* [36] M. G. Crowson, J. Ranisau, A. Eskander, A. Babier, B. Xu, R. R. Kahmke, J. M. Chen and T. C. Y. Chan, \"A Contemporary Review of Machine Learning in Otolaryngology-Head and Neck Surgery,\" _The Laryngoscope,_ vol. 0, pp. 1-7, 2019.
* [37] M. Halicek, G. Lu, J. V. Little, X. Wang, M. Patel, C. C. Griffith, M. W. El-Deiry, A. Y. Chen and B. Fei, \"Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging,\" _Journal of Biomedical optics,_ vol. 22, no. 6, 2017.
* [38] A. Krizhevsky, I. Sutskever and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" in _Advances in Neural Information Processing Systems_, 2012.
* [39] L. Brunese, F. Mercaldo, A. Reginelli and A. Santone, \"Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays,\" _Computer Methods and Programs in Biomedicine,_ vol. 196, 2020.
* [40] S. Minaee, R. Kafieh, M. Sonka, S. Yazdani and G. J. Soufi, \"Deep-COVID: Predicting COVID-19 From Chest X-Ray Images Using Deep Transfer Learning,\" _Medical Image Analysis,_ vol. 65, 2020.
* [41] A. I. Khan, J. L. Shah and M. M. Bhat, \"CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images,\" _Computer Methods and Programs in Biomedicine,_ vol. 196, 2020.
* [42] H. Wang and B. Raj, \"On the Origin of Deep Learning,\" _Carnegie Mellon University,_ 2017.
* [43] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, \"Gradient-based learning applied to document recognition,\" _Proceedings of the IEEE,_ vol. 86, no. 11, pp. 2278-2324, 1998.
* [44] C. Szegedy, W. L. Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, \"Going deeper with convolutions,\" in _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, Boston, 2015.
* [45] K. He, X. Zhang, S. Ren and J. Sun, \"Deep Residual Learning for Image Recognition,\" in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, Las Vegas, 2016.
* [46] O. Ronneberger, P. Fischer and T. Brox, \"U-Net: Convolutional Networks for Biomedical Image Segmentation,\" in _Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015_, Cham, Springer International Publishing, 2015, pp. 234--241.
* [47] W. Yin, K. Kann, M. Yu and H. Schutze, \"Comparative Study of CNN and RNN for Natural Language Processing,\" 7 February 2017.
* [48] M. Z. Alom, V. Bontupalli and T. M. Taha, \"Intrusion detection using deep belief networks,\" in _National Aerospace and Electronics Conference (NAECON)_, Dayton, OH, 2015.
* [49] R. Sarikaya, G. E. Hinton and A. Deoras, \"Application of Deep Belief Networks for Natural Language Understanding,\" _IEEE/ACM Transactions on Audio, Speech, and Language Processing,_ pp. 778-784, 19 April 2014.
* [50] A. Khatami, A. Khosravi, T. Nguyen, C. P. Lim and S. Nahavandi, \"Medical image analysis using wavelet transform and deep belief networks,\" _Expert Systems with Applications,_ vol. 86, pp. 190-198, 15 November 2017.
* [51] S. A. Jalalifar, H. Hasani and H. Aghajan, \"Speech-Driven Facial Reenactment Using Conditional Generative Adversarial Networks,\" 20 March 2018.
* [52] X. Li, W. Li, X. Xu and W. Hu, \"Cell classification using convolutional neural networks in medical hyperspectral imagery,\" in _2017 2nd International Conference on Image, Vision and Computing (ICIVC)_, Chengdu, 2017.
* [53] F. Blanco, M. Lopez-Mesas, S. Serranti, G. Bonifazi, J. Havel and M. Valiente, \"Hyperspectral imaging based method for fast characterization of kidney stone types,\" _Journal of Biomedical Optics,_ vol. 17, no. 7, p. 076027, 2012.
* [54] M. Nathan, A. Kabatznik and A. Mahmood, \"Hyperspectral imaging for cancer detection and classification,\" in _2018 3rd Biennial South African Biomedical Engineering Conference (SAIBMEC)_, Stellenbosch, 2018.
* [55] D. C. Jaramillo, J. E. Escobar, J. Galeano and M. C. Torres-Madronero, \"Design of a multilayer neural network for the classification of skin ulcers' hyperspectral images: a proof of concept,\" in _15th International Symposium on Medical Information Processing and Analysis_, Medelin, 2019.
* [56] M. Maktabi, H. Kohler, M. Ivanova, B. Jansen-Winkeln, J. Takoh, S. Niebisch, S. M. Rabe, T. Neumuth, I. Gockel and C. Chalopin, \"Tissue classification of oncologic esophageal resectates based on hyperspectral data,\" _International Journal of Computer Assisted Radiology and Surgery,_ vol. 14, no. 10, p. 1651-1661, 2019.
* [57] Q. Huang, W. Li and X. Xie, "Convolutional neural network for medical hyperspectral image classification with kernel fusion," in _International Conference on Biological Information and Biomedical Engineering_, Shanghai, 2018.
* [58] "Blood cell classification based on hyperspectral imaging with modulated Gabor and CNN," 2019.
* [59] "Medical hyperspectral image classification based on end-to-end fusion deep neural network," 2019.
* [60] L. Ma, G. Lu, D. Wang, X. Wang, Z. G. Chen, S. Muller, A. Chen and B. Fei, \"Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model,\" in _SPIE Medical Imaging: Biomedical Applications in Molecular, Structural, and Functional Imaging_, Orlando, 2017.
* [61] P. R. Jeyaraj and E. R. S. Nadar, \"Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm,\" _Journal of Cancer Research and Clinical Oncology,_ vol. 145, p. 829-837, 2019.
* [62] P. Weijtmans, C. Shan, T. Tan, S. B. d. Koning and T. Ruers, \"A Dual Stream Network for Tumor Detection in Hyperspectral Images,\" in _2019 IEEE 16th International Symposium on Biomedical Imaging_, Venice, 2019.
* [63] M. Halicek, J. V. Little, X. Wang, A. Y. Chen and B. Fei, \"Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks,\" _Journal of Biomedical Optics,_ vol. 24, no. 3, p. 036007, 2019.
* [64] M. Halicek, H. Fabelo, S. Ortega, J. V. Little, X. Wang, A. Y. Chen, G. M. Callico, L. L. Myers, B. D. Sumer and B. Fei, \"Cancer detection using hyperspectral imaging and evaluation of the superficial tumor margin variance with depth,\" in _SPIE Medical Imaging_, San Diego, 2019.
* [65] J. Lin, N. T. Clancy, Y. H. Ji Qi, T. Tatla, D. Stoyanov, L. Maier-Hein and D. S. Elson, "Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks," _Medical Image Analysis,_ vol. 48, pp. 162-176, 2018.
* [66] "Probe-based rapid hybrid hyperspectral and tissue surface imaging aided by fully convolutional networks," in _Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2016_, 2016.
* [67] M. Halicek, J. D. Dormer, J. V. Little, A. Y. Chen, L. Myers, B. D. Sumer and B. Fei, \"Hyperspectral Imaging of Head and Neck Squamous Cell Carcinoma for Cancer Margin Detection in Surgical Specimens from 102 Patients Using Deep Learning,\" _Cancers,_ vol. 11, no. 9, p. 1367, 2019.
* [68] M. Halicek, H. Fabelo, S. Ortega, J. V. Little, X. Wang, A. Y. Chen, G. M. Callico, L. Myers, B. D. Sumer and B. Fei, \"Hyperspectral imaging for head and neck cancer detection: specular glare and variance of the tumor margin in surgical specimens,\" _Journal of Medical Imaging,_ vol. 6, no. 3, p. 035004, 2019.
* [69] L. Ma, G. Lu, D. Wang, X. Qin, Z. G. Chen and B. Fei, \"Adaptive deep learning for head and neck cancer detection using hyperspectral imaging,\" _Visual Computing for Industry, Biomedicine, and Art,_ vol. 2, no. 18, 2019.
* [70] H. Fabelo, M. Halicek, S. Ortega, A. Szolna, J. Morera, R. Sarmiento, G. M. Callico and B. Fei, \"Surgical aid visualization system for glioblastoma tumor identification based on deep learning and in-vivo hyperspectral images of human patients,\" in _Proceedings Volume 10951, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling_, San Diego, 2019.
* [71] I. Polonen, S. Rahkonen, L. Annala and N. N., \"Convolutional neural networks in skin cancer detection using spatial and spectral domain,\" in _Proceedings Volume 10851, Photonics in Dermatology and Plastic Surgery 2019_, San Francisco, 2019.
* [72] L. Bi, D. Feng and J. Kim, \"Dual-Path Adversarial Learning for Fully Convolutional Network (FCN)-Based Medical Image Segmentation,\" _The Visual Computer,_ vol. 32, p. 1043-1052, 2018.
- 42835, 28 March 2019.
* [74] A. Garifullin, P. Koobi, P. Ylitepsa, K. Adjers, M. Hauta-Kasari, H. Uusitalo and L. Lensu, \"Hyperspectral Image Segmentation of Retinal Vasculature, Optic Disc and Macula,\" in _Digital Image Computing: Techniques and Applications (DICTA)_, Canberra, 2018.
* [75] Q. Li, J. Lin, N. T. Clancy and D. S. Elson, \"Estimation of tissue oxygen saturation from RGB images and sparse hyperspectral signals based on conditional generative adversarial network,\" _InternationalJournalofComputerAssistedRadiologyandSurgery,_ vol. 14, p. 987-995, 2019.
* [76] E. Zherebtsov, A. Popov, A. Doronin, I. Meglinski and A. Bykov, \"Hyperspectral system for Imaging of skin chromophores and blood oxygenation,\" in _European Conferences on Biomedical Optics_, Munich, 2017.
* [77] N. Bayramoglu, M. Kaakinen, L. Eklund and J. Heikkila, \"Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks,\" in _IEEE International Conference on Computer Vision (ICCV)_, Venice, 2017.
* [78] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. v. d. Laak, B. v. Ginneken and C. I. Sanchez, \"A survey on deep learning in medical image analysis,\" _Medical Image Analysis,_ vol. 42, pp. 60-88, 2017.
* [79] N. Siddique, S. Paehding, C. Elkin and V. Devabhaktuni, \"U-Net and its variants for medical image segmentation: theory and applications,\" _arXiv:2011.01118_, 2020.
* [80] C. Liew, \"The future of radiology augmented with Artificial Intelligence: A strategy for success,\" _European Journal of Radiology,_ vol. 102, pp. 152-156, 2018.
* [81] D. Manolakis and G. Shaw, \"Detection Algorithms for Hyperspectral Imaging Applications,\" _IEEE Signal Processing Magazine,_ vol. 19, no. 1, pp. 29-43, 2002.
* [82] T.-L. Yeung, K.-K. Wong, M. J. Birrer and S. C. Mok, "Dataset: A cancer associated fibroblasts (CAFs) specific gene signature in high grade serous ovarian cancer," 19 September 2014. [Online]. Available: [http://biogps.org/dataset/E-GEOD-40595/a-cancer-associated-fibroblasts-cafs-specific-gene/](http://biogps.org/dataset/E-GEOD-40595/a-cancer-associated-fibroblasts-cafs-specific-gene/).
Kristina Monakhova
1, \\({}^{\\dagger}\\), \\({}^{*}\\) Vi Tran
2, \\({}^{\\dagger}\\) Grace Kuo
1 and Laura Waller1
## 1 Introduction
Compressive imagers aim to recover more samples than they measure by leveraging compressive sensing, which guarantees signal recovery for underdetermined systems given certain assumptions about signal sparsity as well as incoherence in the measurement domain [1, 2]. For optical imaging, compressive imagers have been demonstrated for many applications, including single-pixel and coded-aperture cameras [3, 4, 5]. These imagers have been shown to be effective for a number of compressive sensing tasks, such as single-shot lightfield imaging [6], 3D imaging [7], hyperspectral [8, 9], and high-speed video imaging [10, 11, 12, 13]. Recently, there has been a push to make compressive imagers more accessible by using inexpensive and compact hardware [14, 15]. One such promising method includes lensless, mask-based imagers, which remove the lens of a traditional camera and instead place a phase or amplitude mask near the sensor to randomize the measurement, approximately achieving the measurement domain incoherence required for compressive sensing. These lensless mask-based cameras can be incredibly compact, with the mask only millimeters from the sensor, and assembly does not require precise alignment [16, 17, 18]. Mask-based lensless imagers have been demonstrated for compact single-shot 3D fluorescence microscopy [19, 20, 21, 22], hyperspectral imaging [23], and high speed single-shot video [24]. These sorts of cameras are very promising for a number of imaging modalities; however, their performance depends on the fidelity of the reconstruction algorithm. Since the algorithm must solve an underdetermined inverse problem with assumptions on signalsparsity, the choice of algorithm and imaging priors used can significantly impact results.
Traditionally, images from compressive cameras are recovered by solving a convex optimization problem, minimizing both a least-squares loss based on the physics of the imaging system and a hand-chosen prior term, which enforces sparsity in some domain, Fig. 1(a). For successful image recovery by compressive sensing, there must be both incoherence in the sensing basis and sufficient sparsity in the sample domain. Lensless mask-based cameras utilize multiplexing optics to fulfill the incoherent sampling requirement, mapping each pixel in the scene to many sensor pixels in a pseudo-random way. The prior term enforces sparsity and ensures successful signal recovery. Over the years, a number of different hand-picked priors have been used for compressive imaging, such as sparsity in wavelets, total-variation (TV), and learned dictionaries. TV, which is based on gradient sparsity, has been particularly popular [25]. These methods are effective at solving imaging inverse problems, however they have recently been outperformed by deep learning-based methods, which incorporate non-linearity and network structures that may be better suited to image representation [26]. Recent work on plug and play (PnP) priors [27, 28, 29, 30] offers some hope of combining inverse problems with state of the art denoisers (both deep and not, e.g. BM3D [31]); however, PnP still relies on using hand-picked denoisers or pre-trained networks that may not be well-suited for a given application.
Recently, several deep-learning based approaches have shown improved reconstruction quality for imaging inverse problems [26, 32, 33]; however, they rely on having a large dataset of experimental labeled ground truth and measurement pairs. In these methods, a deep neural network is used to approximate the imaging inverse problem, Fig. 1(b). The network is typically trained end-to-end and requires large datasets of labeled image and ground truth pairs. The network can be agnostic to the imaging system physics, or the network can incorporate knowledge of the physics for faster convergence and reduced training data requirements [34, 35]. For high-dimensional imaging applications, such as 3D and hyperspectral lensless imaging, obtaining ground truth datasets can be impractical, costly, or impossible. While there has been some work in training networks from synthetic data, these methods can suffer from model-mismatch if the synthetic data does not match the experimental system [36].

Figure 1: **Comparison of different reconstruction approaches**. (a) Traditional reconstructions solve a convex optimization problem with a data-fidelity term based on the known forward model and a regularization term which acts as the image prior. (b) Deep learning methods require thousands or more labeled image pairs in order to train a neural network to approximate the reconstruction. (c) Our untrained deep network (UDN) uses a neural network as the image prior, but does not require any training data. The system forward model is used to calculate the loss between the estimated image and the measurement, which is then used to update the network weights.
Recent work using unsupervised learning with untrained networks is especially promising for a number of imaging applications - leveraging the structure of neural networks without needing any training data. Untrained networks, such as deep image prior [37] and deep decoder [38], have shown that the structure of neural networks can be effective at serving as a prior on image statistics without any training. An untrained deep network with randomly initialized weights is used as an image generator, outputting the recovered image. The network weights are updated through a loss function comparing the generated image with the input data, for example, a noisy image. This method and several related papers have been shown to be particularly effective for simulated image denoising, deblurring, and super-resolution [39, 40]. For many computational imaging problems, the measurements do not resemble the reconstructed image; instead, the scene is related to the measurement through a forward model that describes the physics of the image formation problem. For this class of problems, untrained networks can be paired with differentiable imaging forward models in which the network weights are updated based on a loss between the measurement and the generated measurement from the network output passed through the imaging forward model, Fig. 1(c). This has been shown to be effective on several imaging modalities, such as phase imaging [41], MRI [42, 43, 44], diffraction tomography [45], and several small-scale compressive sensing problems [46]. Here, we extend this framework to compressive lensless photography.
We propose a reconstruction method for compressive lensless imaging based on untrained networks, which we call untrained deep network (UDN) reconstructions. Our approach uses a deep network for the image prior, but requires no training data. We present a general differentiable imaging model that could be used for multiple types of compressive lensless imaging, and test it on three different cases: 2D lensless imaging with erasures, single-shot video, and single-shot hyperspectral imaging.
To the best of our knowledge, untrained networks have not been used for compressive optical imaging before, thus providing both a stress test of untrained deep networks on several challenging underdetermined experimental systems, as well as bringing improved image quality to compressive lensless imaging. We provide simulation and experimental results, showing improved performance over existing methods that do not utilize training data. Our results indicate that untrained networks can serve as effective image priors for these systems, providing better reconstructions in cases where it is not possible to obtain labeled ground truth data.
## 2 Imaging models
Our system architecture is based on the lensless mask-based camera, DiffuserCam [17, 47], which consists of a thin diffuser (a smooth pseudorandom phase optic) which is placed a few millimeters in front of the sensor, Fig. 2(a). Light from each point in the scene refracts through the diffuser to create a high-contrast pseudorandom caustic pattern on the sensor plane. Since each point in the world maps to many pixels on the sensor, we need only a subset of the sensor pixels to reconstruct the full 2D image, given sufficient sparsity. This can be useful, for example, to compensate for dead sensor pixels, which act like an erasure pattern; or, we can capture the full 2D sensor image and use compressed sensing to recover higher-dimensional information. The higher-dimensional data (e.g. time, wavelength) must be physically encoded into different subsets of the sensor pixels; then compressed sensing is applied to recover a 3D datacube.
We demonstrate our UDN on two examples of three-dimensional data recovery from a single-shot 2D measurement: (1) high-speed video [24] and (2) hyperspectral imaging [23]. The high-speed video example takes advantage of the sensor's built-in rolling shutter, which exposes different rows of pixels at each time point, Fig. 2(b), effectively acting like an erasure pattern at each time point. From this data, one can use compressed sensing to recover a separate 2D image for each time point; the result is a full-resolution video at the framerate of the rolling shutter (the line scan rate). For the hyperspectral imaging example, the DiffuserCam device has an added spectral filter array in front of the sensor, with 64 different color channels, Fig. 2(b). Each of the 64 wavelengths thus map to a different subset of the sensor pixels, similarly acting like an erasure pattern for each wavelength. From this data, one can use compressed sensing to recover a hyperspectral datacube (2D spatial + 1D spectral) from a single 2D measurement. In both examples, we choose to reconstruct the full datacube simultaneously, rather than a sequential set of 2D reconstructions, since this allows us to add priors along the time/wavelength dimension.
We now go into detail on the modeling, starting with the 2D image formation model of DiffuserCam, and building to our three example scenarios: 2D imaging with erasures, high-speed video, and hyperspectral imaging. These formulations will be used to create the differential forward model within our network architecture.
### 2D lensless imaging
As described in [17, 47], we model our lensless camera as shift-invariant: we assume the pattern on the sensor from each point in the world is a translated version of the on-axis pattern, called the point spread function (PSF). We can therefore model the sensor response \\(\\mathbf{b}[x,y]\\) as a convolution of the scene \\(\\mathbf{v}[x,y]\\) with an on-axis PSF \\(\\mathbf{h}[x,y]\\):
\\[\\mathbf{b}[x,y] =\\mathrm{crop}\\Big{(}\\mathbf{v}[x,y]*\\mathbf{h}[x,y]\\Big{)} \\tag{1}\\] \\[=\\mathbf{A}_{\\mathrm{2D}}\\mathbf{v}. \\tag{2}\\]
where \\([x,y]\\) represents the discrete image coordinates, \\(*\\) represents a discrete 2D linear convolution and the crop function accounts for the finite sensor size. For 2D imaging, we assume that objects are placed beyond the hyperfocal distance of the imager so that the PSF does not vary with depth. In addition, we assume that there is no wavelength variation in the PSF as demonstrated in [23].
### 2D imaging with erasures
For the case of 2D imaging with erasures, we multiply the result of Eq. 1 with a binary mask, denoted \\(\\mathbf{M}[x,y]\\), that zeros out a subset of the sensor pixels to model the erasure pattern:
\\[\\mathbf{b}[x,y] =\\mathbf{M}[x,y]\\cdot\\mathrm{crop}\\Big{(}\\mathbf{v}[x,y]*\\mathbf{ h}[x,y]\\Big{)} \\tag{3}\\] \\[=\\mathbf{A}_{\\mathrm{2D\\;erasures}}\\mathbf{v}. \\tag{4}\\]
As we show in Sec. 5, compressed sensing enables recovery of a full image, even in the presence of a significant fraction of dead or missing pixels. Although commercial sensors are screened to minimize dead pixels, this scenario is useful for exploring the limits of compressive sensing in lensless cameras since we can synthetically increase the percentage of erasures to increase the ratio of data reconstructed to data measured.
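Extending that sketch to Eqs. (3)-(4) only requires an element-wise multiplication with a binary erasure mask; the 50% erasure fraction below is an arbitrary illustrative choice, and the snippet reuses the names defined in the previous sketch.

```python
erasure_fraction = 0.5
M = (np.random.rand(*measurement.shape) > erasure_fraction).astype(float)  # binary mask
measurement_erased = M * A_2d(scene, psf)       # b = M . crop(v * h)
```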
### Higher dimensions: video and hyperspectral
Figure 2: **Imaging systems:** Our lensless 2D camera consists of a diffuser placed a small distance away from the sensor. Each point in the world maps to a high-contrast caustic pattern (PSF). (a) Since the PSF covers a large area of the sensor, we can erase a random subset of the pixels and still recover the full image. (b) The cameraβs rolling shutter can be used to encode temporal information into the measurement. As points flick on/off, the PSFs are filtered by the sensorβs rolling shutter function, which reads one row at a time (or two in the case of dual shutter, as shown here). The measurement is a sum of the rows captured at different time points, each of which contains information from the entire 2D scene. Hence, temporal information is encoded vertically across the sensor, with earlier time points at the top and bottom of the image and later time points towards the center. (c) Single-shot hyperspectral imaging is achieved with a spectral filter array in front of the sensor, which maps each band of wavelengths to a subset of the sensor pixels.

Rather than sampling only a subset of pixels in order to reconstruct a 2D image, compressed sensing can instead be used to reconstruct 3D datasets from fully-sampled 2D images. If the higher-dimensional data is physically encoded into different subsets of the pixels, one can use Eq. 3 to recover a collection of 2D images from a single acquisition. As described previously, our single-shot video example uses the rolling shutter to encode temporal information into different pixels, and our hyperspectral example uses a spectral filter array to encode wavelength information. We denote this extra dimension (time/wavelength) generically as the \(k\)-dimension. Sequential recovery using Eq. 3 prevents incorporating priors along the \(k\)-dimension, so we use the following model that depends on the full datacube:
\\[\\mathbf{b} =\\sum_{k=0}^{N_{k}}\\mathbf{M}_{k}\\left[x,y\\right]\\cdot\\mathrm{ crop}\\Big{(}\\mathbf{v}[x,y,k]*\\mathbf{h}[x,y]\\Big{)} \\tag{5}\\] \\[=\\mathbf{A}_{k\\text{-}\\text{D}}\\mathbf{v}. \\tag{6}\\]
Here, \\(N_{k}\\) is the number of discrete points along the \\(k\\)-dimension, and \\(\\mathbf{M}_{k}\\left[x,y\\right]\\) is a masking function, which depends on \\(k\\), and selects the sensor pixels corresponding to each video frame/wavelength. The convolution, \\(*\\) is only over the two spatial dimensions.
For the high-speed video case \\(\\mathbf{M}_{k}\\left[x,y\\right]\\), referred to as the shutter function, is based on the rolling shutter. Sensors typically have either a single shutter or a dual shutter, which we use in this work. For a single shutter, the sensor reads the pixels one horizontal line at a time, moving from the top to the bottom of the sensor; for a dual-shutter, the sensor reads pixels from two horizontal lines that move from the top and bottom of the sensor towards the middle, Fig. 2(b).
For the hyperspectral case \\(\\mathbf{M}_{k}\\left[x,y\\right]\\), referred to as the filter function, is determined by the spectral filter array. Each filter pixel acts as narrow-band spectral filter which integrates light within a certain wavelength range and blocks out light at other wavelengths. We approximate this as a finite sum across spectral bands with a non-binary filter function.
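A minimal sketch of the \(k\)-dimensional model in Eqs. (5)-(6) is given below: each slice of the datacube is convolved with the PSF, cropped, weighted by its own mask \(\mathbf{M}_{k}\) (rolling-shutter rows or spectral-filter pattern), and summed into a single 2D measurement. It reuses the A_2d function and psf array from the earlier sketch, and the masks here are random placeholders rather than a real shutter or filter function.

```python
def A_kD(v_cube, h, masks):
    """v_cube: (H, W, Nk) datacube, h: (H, W) PSF, masks: (H, W, Nk) per-slice masks."""
    b = np.zeros(h.shape)
    for k in range(v_cube.shape[-1]):
        b += masks[..., k] * A_2d(v_cube[..., k], h)   # mask each slice, then sum
    return b

Nk = 8
cube = np.random.rand(256, 256, Nk)                    # stand-in for the scene datacube
masks = (np.random.rand(256, 256, Nk) > 0.5).astype(float)
b = A_kD(cube, psf, masks)                             # single 2D measurement
```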
## 3 Inverse problem
Given our sensor measurement \\(\\mathbf{b}\\), our goal is to recover the scene \\(\\mathbf{v}\\). For the 2D erasures scenario \\(\\mathbf{v}\\) is a 2D image, whereas for single-shot video \\(\\mathbf{v}\\) is a video consisting of two spatial and one temporal dimension, and for single-shot hyperspectral \\(\\mathbf{v}\\) is a hyperspectral volume consisting of two spatial and one spectral dimension. In all cases, the problem is underdetermined, since we aim to recover more than we measure. First, we describe the traditional reconstruction methods based on convex optimization which will serve as our baseline comparison, and then we describe our untrained network reconstruction method.
### Traditional inverse problem
Due to incoherence of the measurement system that comes from our use of a multiplexing phase optic (diffuser), compressive sensing can be utilized to recover the image. By compressive sensing theory, we can formulate our inverse problem as follows:
\\[\\mathbf{\\hat{v}}=\\arg\\min_{\\mathbf{v}\\geq 0}\\frac{1}{2}\\|\\mathbf{b}-\\mathbf{A} \\mathbf{v}\\|_{2}^{2}+\\tau R(\\mathbf{v}), \\tag{7}\\]
where \\(R(\\mathbf{v})\\) is a prior on the scene, and is often of the form \\(\\|\\mathbf{D}\\mathbf{v}\\|_{1}\\), where \\(\\mathbf{D}\\) is a sparsifying transform. \\(\\mathbf{A}\\) is our forward model, which can be of the form \\(\\mathbf{A}_{\\text{2D}}\\), \\(\\mathbf{A}_{\\text{2D erasures}}\\), or \\(\\mathbf{A}_{\\text{k-}\\text{D}}\\).
In practice, 2D or 3D TV priors work well for a variety of scenes, and are implemented by defining the regularizer term as \\(R(\\mathbf{v})=\\|\\nabla_{xyk}\\mathbf{v}\\|_{1}\\), where \\(\\nabla_{xyk}=[\\nabla_{x}\\,\\nabla_{y}\\,\\nabla_{k}]^{T}\\) is the matrix of forward finite differences in the \\(x\\), \\(y\\), and \\(k\\) directions. Convex optimization approaches such as fast iterative shrinkage-thresholding algorithm (FISTA) [48] or alternating direction method of multipliers (ADMM) [49] can be used to efficiently solve this problem. In each case, the prior is hand-tuned for the given application by varying the amount of regularization through the tuning parameter \\(\\tau\\), which trades off data-fidelity and regularization. As a baseline comparison, we use FISTA with 2DTV for our 2D imaging with erasures problem and weighted anisotropic 3DTV for our single-shot video and single-shot hyperspectral imaging [50]. We include an additional comparison using PnP-BM3D, PnP-BM4D [51], and a pretrained PnP denoiser network [29] in Supplement 1.
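For concreteness, a minimal NumPy sketch of the (unweighted) anisotropic TV regularizer value is given below; this is an illustration rather than the baseline solver itself, and per-axis weights would be introduced for the weighted variant.

```python
import numpy as np

def anisotropic_tv(v):
    """||grad_xyk v||_1 via forward finite differences along y, x, and k.

    v: (Ny, Nx, Nk) datacube; a 2D image can be passed as (Ny, Nx, 1).
    """
    dy = np.abs(np.diff(v, axis=0)).sum()   # differences along y
    dx = np.abs(np.diff(v, axis=1)).sum()   # differences along x
    dk = np.abs(np.diff(v, axis=2)).sum()   # differences along k (time/wavelength)
    return dy + dx + dk
```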
### Untrained network
In this work, we propose to instead use an untrained network for the image reconstruction. It has been shown that neural networks are good at representing and generating natural images; thus, optimizing over the weights of an image generator network, rather than solving for the image directly, can be an effective way to regularize the reconstruction with a deep prior.
Rather than solving for the image \\(\\mathbf{v}\\) directly as before, we solve for the image indirectly as the output of the generator network \\(G(z;\\mathbf{W})\\). This output image \\(\\mathbf{v}_{gen}\\) is then passed through our imaging forward model \\(\\mathbf{A}\\) and compared against the measured image \\(\\mathbf{b}\\). The network weights are updated based on the difference between the generated measurement and the true measurement, which corresponds to solving the following problem:
\\[\\mathbf{W}^{*}= \\arg\\min_{\\mathbf{W}}\\frac{1}{2}\\|\\mathbf{b}-\\mathbf{A}G(z;\\mathbf{W})\\|_{2}^{2} \\tag{8}\\] \\[= \\arg\\min_{\\mathbf{W}}\\frac{1}{2}\\|\\mathbf{b}-\\mathbf{A}\\mathbf{v}_{gen}\\|_{2}^{2}, \\tag{9}\\]
where our network \\(G(z;\\mathbf{W})\\) has a fixed input \\(z\\) and randomly initialized weights \\(\\mathbf{W}\\). The network output \\(\\mathbf{v}_{gen}\\) is the reconstructed image. \\(z\\) can be thought of as a latent code to our generator network and is randomly initialized and held constant. We update the weights of the network via backpropogation in order to minimize the loss between the actual measurement and the generated measurement (Fig. 3). This process must be repeated for each image reconstruction, since there are no 'training' and 'testing' phases as there are for deep learning methods with labeled data.
As in [37], we utilize an encoder-decoder architecture with skip connections and keep the input to the network fixed. See Supplement 1 for details on our network architecture and hyperparameters for each experiment.

Figure 3: **Overview of our Untrained Deep Network (UDN) reconstruction pipeline**. An untrained network with randomly initialized weights takes in a fixed, random input vector. The network outputs a sequence of images along the \\(k\\)-axis (for video imaging, one image for each time point), and we pass this output volume into our known imaging forward model (defined by the PSF calibration image and known erasure function) to generate a single simulated measurement. We compare the generated measurement with our captured measurement and use the difference between the two to update the untrained network parameters.
## 4 Implementation details
For the 2D erasures case, we utilize an existing 2D lensless imaging dataset which consists of pairs of 2D lensless camera measurements and corresponding lensed camera ground truth images [34]. This dataset is advantageous because it includes experimental lensless measurements with real sensor noise and other non-idealities, but also provides labeled ground truth data from an aligned lensed camera. For our simulation results, we utilize the ground truth images from the dataset to generate simulated measurements using the forward model in Eq. 3; for our experimental results, we directly use the lensless camera measurements. For both, we synthetically add erasures to the measurement by point-wise multiplying it with an erasure function with 0%, 50%, 90%, 95%, and 99% erasures, picking a random subset of indices to erase.
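The erasure step itself is a point-wise multiplication with a random binary mask; a small NumPy sketch (an illustration, with hypothetical names) is:

```python
import numpy as np

def add_random_erasures(measurement, erase_frac, seed=0):
    """Zero out a random fraction (e.g., 0.5, 0.9, 0.95, 0.99) of sensor pixels."""
    rng = np.random.default_rng(seed)
    mask = np.ones(measurement.shape)
    idx = rng.choice(measurement.size, size=int(erase_frac * measurement.size),
                     replace=False)
    mask.flat[idx] = 0.0                      # erased pixel locations
    return measurement * mask, mask           # masked measurement and the mask
```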
For single-shot video and single-shot hyperspectral imaging, there does not exist experimental data with associated ground truth. For our experimental data analysis, we utilize experimental raw sensor measurements from [24] and [23]. This includes measurements taken from a PCO Edge 5.5 sCMOS camera with a dual rolling shutter and a homemade random diffuser (for single-shot video), and measurements from a hyperspectral DiffuserCam with a spectral filter array and Luminit diffuser [52] (for hyperspectral imaging). For our simulations, we use the PSF and shutter/filter functions from these systems to simulate our sensor measurements.
We implement our network and differentiable forward model with mean square error (MSE) loss in PyTorch and run our reconstructions on a Titan X GPU with the ADAM optimizer throughout training. We perform early stopping to obtain the best reconstructions, as described in [37], with reconstructions ranging from 1,000 - 100,000 iterations, which generally takes several hours. This process must be completed for every new image; unlike with deep learning methods that use training data, there is no distinction between training and testing, and instead the network parameters must be re-optimized for every reconstruction. For reproducibility, training code will be available upon publication of the paper at [https://github.com/Waller-Lab/UDN/](https://github.com/Waller-Lab/UDN/).
## 5 Results
We compare the results of our untrained reconstruction method against the standard FISTA reconstruction with TV regularization. For all three cases, we present both simulation results as well as experimental validation. For the FISTA reconstructions, we reconstructed images with three different amounts of TV regularization, since more regularization leads to improved image quality and lower mean squared error, but tends to blur high-frequency information. Less regularization reveals high-frequency information, but at the price of increased reconstruction artifacts. When ground truth is available (in all simulations and in experiment for 2D erasures), we compare our reconstructed images to the ground truth via a variety of quality metrics: mean squared error (MSE), learned perceptual similarity metric (LPIPS) based on AlexNet and VGG [53], the structural similarity index measure (SSIM), and the multi-scale structural similarity measure (MS-SSIM) [54]. MSE is a standard image metric, but tends to favor low-frequency information. SSIM and MS-SSIM are both perception-based metrics, with MS-SSIM using multiple image scales and achieving better perceptual performance than SSIM. Both LPIPS metrics are learned perceptual similarity metrics based on deep networks, with LPIPS VGG being closer to a traditional perceptual similarity metric. Each metric has its strengths and weaknesses, and therefore we provide comparisons using each of the metrics when possible (MS-SSIM requires a certain minimum image size which precludes us from using it for the single-shot video and single-shot hyperspectral cases, and LPIPS only works on RGB color images, so we cannot use it for single-shot hyperspectral). For MSE, LPIPS Alex, and LPIPS VGG, a lower score is better, whereas for SSIM and MS-SSIM, a higher score is better.
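For reference, the sketch below shows one way these metrics can be computed in Python, assuming the `lpips` package and a recent `scikit-image` release; it is an illustration only, the helper name is hypothetical, and MS-SSIM (e.g., via the `pytorch_msssim` package) would be added analogously.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

lpips_alex = lpips.LPIPS(net='alex')   # learned perceptual metrics
lpips_vgg = lpips.LPIPS(net='vgg')

def image_metrics(recon, gt):
    """recon, gt: float RGB arrays in [0, 1] with shape (H, W, 3)."""
    mse = float(np.mean((recon - gt) ** 2))
    ssim = structural_similarity(recon, gt, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp_alex = float(lpips_alex(to_t(recon), to_t(gt)))
    lp_vgg = float(lpips_vgg(to_t(recon), to_t(gt)))
    return {'MSE': mse, 'SSIM': ssim, 'LPIPS-Alex': lp_alex, 'LPIPS-VGG': lp_vgg}
```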
### 2D compressive imaging
#### 5.1.1 Simulation
First, we simulate a noise-free 2D measurement with increasing numbers of randomly-distributed pixel erasures (0%, 50%, 90%, 95%, and 99%), then compare reconstruction results using both FISTA and our UDN (Fig. 4). FISTA uses high, medium, and low amounts of TV corresponding to \\(\\tau\\)= 1e-4, 5.5e-4, 1e-5, respectively.
We compare our performance against the ground truth images on our five image quality metrics for an increasing percentage of erasures in Fig. 4(b) and show the corresponding images in Fig. 4(a) for our method vs. the best FISTA result (based on the LPIPS-Alex metric). From this, we can see that FISTA and our method perform similarly well for 0%-95% erasures, but our method has improved performance for 99% erasures. This shows that TV regularization is a sufficient prior for smaller numbers of erasures in the absence of noise or model non-idealities, but as the problem becomes severely underdetermined, our UDN provides a better image prior than TV and leads to improved image quality on all of our performance metrics.

Figure 4: **2D Compressive imaging with erasures: simulation and experimental results**. Reconstruction results for increasing numbers of pixel erasures, showing the measurement along with the best fast iterative shrinkage-thresholding algorithm (FISTA) reconstruction (based on the LPIPS Alex metric) and the UDN reconstruction for (a) simulated measurements and (b) experimental measurements with synthetically added erasures. (c) and (d) compare our performance against the ground truth for several quality metrics (arrows indicate which direction is better), showing that in simulation UDN outperforms FISTA only at 99% erasures, whereas in experiments with real noise and non-idealities, UDN outperforms FISTA on the perceptual image metrics, LPIPS Alex and MS-SSIM, as well as on MSE.
#### 5.1.2 Experimental
Using experimentally captured images from [34], we synthetically add random pixel erasures to the measured data and compare our reconstructions against the ground truth lensed images using the five image metrics described above. As a baseline, we compare against FISTA with high, medium, and low amounts of TV (\\(\\tau\\)= 1e-1, 1e-2, 1e-3), shown in Fig. 4(d). A visual comparison with the best FISTA reconstruction (based on the LPIPS-Alex metric) is shown in Fig. 4(c). Unlike in the noise-free simulation, our method consistently performs better than FISTA for all amounts of erasures on the MSE, LPIPS Alex, and MS-SSIM metric, resulting in a clearer and sharper reconstruction. Thus, our method gives better performance improvements over FISTA for real-world datasets, likely because the UDN provides a better image prior and outperforms TV in the presence of real measurement noise and imaging non-idealities.

Figure 5: **Simulation results on single-shot video**. We recover a 38-frame video from a single measurement (bottom left). We show four sample reconstructed video frames in the top four rows, and plot the quality metrics for all frames below. Here we see that our reconstruction has sharper features across frames, enabling superior recovery especially for the first and last frames, where FISTA has more pronounced reconstruction artifacts. We achieve better MSE, LPIPS, and SSIM scores across frames (bottom right). See Visualization 1 for the full video.
TV is a very commonly used prior that is computationally efficient, but other priors can be incorporated using the PnP framework. See Supplement 1 for additional comparisons against PnP-BM3D and a PnP pretrained denoiser network, demonstrating that the UDN outperforms these methods as well.
### Single-shot Video
#### 5.2.1 Simulation
Using an experimentally captured PSF and shutter function from [24], we simulate a compressive video measurement, shown in Fig. 5 top left, using a 38-frame video. Fig. 5 shows the results of our reconstruction compared to the FISTA result for several frames of the video (with \\(\\tau\\) = 1e-2, 1e-3, 5e-4 for FISTA). It is evident that our method generates a more visually appealing result with fewer artifacts, while preserving high-resolution features throughout the frames. This is quantified in Fig. 5(bottom), showing that UDN has a significantly better performance on all metrics. For each of the FISTA results, the values are worse at the beginning and end of the video sequence, which is consistent with [24]. Our method similarly has worse performance for the first and last frames, but the difference is not as pronounced, resulting in more uniform image quality throughout the recovered video.

Figure 6: **Experimental results on single-shot video**. We recover 72 frames from a single measurement (top left). The rows show four sample reconstructed frames from our 72-frame reconstruction, with both our UDN method and FISTA with three different amounts of TV. Here we see that our reconstruction has sharper features across frames and better captures motion within the video. See Visualization 2 for the full video and Visualization 3 for a second experimental example.
#### 5.2.2 Experimental
Next, we recover 72-frame videos from a single experimental measurement, Fig. 6. When compared to the FISTA reconstructions (with \\(\\tau\\) = 1e-2, 1e-3, 1e-4), we can see that our method has more uniform image quality throughout. The foam dart has significant artifacts in the FISTA reconstruction for the first and last scenes of the video, even disappearing for low TV. Our method appears to capture the dart motion throughout the video frames and has fewer noticeable artifacts than the FISTA reconstruction. There is no ground truth available for this data, but our method seems to produce a more realistic and visually appealing reconstruction than FISTA in terms of the dart motion. FISTA with smaller amounts of TV is able to better recover the text on the book in the scene, but at the expense of reduced image quality and more artifacts throughout the video. Our method is not quite able to recover the book text, but has more uniform image quality and better captures the motion in the scene.

Figure 7: **Simulation results on hyperspectral data**. We display a false-color image of the recovered 64-channel hyperspectral volume (top row), along with four selected spectral slices. The quality metrics are plotted for each wavelength at bottom right. Here we see that our reconstruction has sharper features and fewer artifacts than FISTA, and achieves a better MSE and SSIM score across all wavelengths. In addition, UDN achieves better cosine similarity (\\(\\theta_{\\text{avg}}\\)) to the ground truth spectra than FISTA. See Visualization 4 for the full hyperspectral volume.
### Single-shot hyperspectral
#### 5.3.1 Simulation
Finally, we recover a hyperspectral volume with 64 wavelengths from a single simulated measurement. We simulate the measurement using an experimentally captured PSF and filter function from [23], along with a ground truth hyperspectral image from [55] with 64 spectral channels. Figure 7 shows the results of our reconstruction compared to FISTA with high, medium, and low amounts of TV (\\(\\tau\\)= 3e-7, 6e-7, and 3e-6). We can see that UDN preserves more features across wavelengths and has fewer artifacts than FISTA. We compare the MSE and SSIM across the methods, Fig. 7(bottom). We can see that UDN has significantly better MSE and SSIM values than FISTA across wavelengths. In addition, we report the average cosine distance between the spectral profiles in the reconstruction and ground truth images (Fig. 7). The cosine distance is especially important for hyperspectral imaging due to its role in hyperspectral classification. UDN provides a lower average cosine distance than FISTA, indicating that UDN achieves a better recovery of the spectral profiles. We note that FISTA with high TV achieves a better cosine distance than FISTA with low TV, but at the price of worse spatial resolution. Our UDN method achieves both better spatial quality (MSE and SSIM) as well as better spectral performance (cosine distance) than FISTA.

Figure 8: **Experimental results on hyperspectral data**. We show a false-color image of the recovered 32-channel hyperspectral volume (top row), along with four spectral slices. Here we see that our reconstruction has sharper features and fewer artifacts across wavelengths. See Visualization 5 for the full hyperspectral volume.
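A small sketch of this spectral-fidelity measure is given below (an illustration with a hypothetical function name): the average cosine distance between per-pixel spectra of the reconstructed and ground-truth hyperspectral cubes.

```python
import numpy as np

def mean_spectral_cosine_distance(recon, gt, eps=1e-12):
    """recon, gt: (H, W, L) hyperspectral cubes; returns the mean cosine distance."""
    r = recon.reshape(-1, recon.shape[-1])
    g = gt.reshape(-1, gt.shape[-1])
    cos_sim = np.sum(r * g, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(g, axis=1) + eps)
    return float(np.mean(1.0 - cos_sim))     # 0 means identical spectral shapes
```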
#### 5.3.2 Experimental
We test our performance on an experimentally captured measurement [23] of a Thorlabs plush dog illuminated by a broadband light source, recovering 32 spectral channels from a single measurement (downsampled 2\\(\\times\\) spectrally). We compare against FISTA with low and high TV (\\(\\tau\\)= 3e-4 and 5e-5) along with the best reconstruction from [23] (FISTA TV + low rank), down-sized to match our reconstruction size. While ground truth images do not exist for this data, UDN appears to provide more consistent image quality and retains more of the details across wavelengths than FISTA (Fig. 8).
## 6 Discussion
Untrained networks offer a number of distinct advantages for compressive lensless imaging. First, they do not require any training data, as opposed to deep learning-based methods. Training data is especially hard or impossible to acquire for higher-dimensional imaging, such as for high-speed video, hyperspectral, or 3D imaging, so this feature is particularly useful. Untrained networks can serve as a better prior for certain high-dimensional imaging problems off-the-shelf, potentially enabling better reconstructions for a number of compressive imaging modalities.
Currently, the main limitations of untrained networks are: 1) memory constraints and 2) speed. First, many of our reconstructions are GPU memory limited. To take advantage of accelerated GPU processing, the entire untrained network must fit in memory on the GPU, limiting the size of reconstructions that we can process, and we find that we can process much larger images and volumes with FISTA than we can with UDN. In this work, our single-shot video and single-shot hyperspectral measurements are downsized between 2-16\\(\\times\\) from the original size in order to fit on the GPU, limiting our resolution. Looking forward, larger GPU sizes or clever computational techniques to better utilize GPUs for memory-limited problems could improve this and enable the reconstruction of larger images and volumes. Next, the speed of the untrained network reconstructions is generally an order of magnitude slower than standard FISTA reconstructions. Thus, our method is best suited for applications where a real-time reconstruction is not needed, since typical untrained reconstructions take between 1-5 hours. As machine learning speeds and processors improve, we expect these factors to be less limiting.
Given the benefits and limitations of untrained networks, we envision this reconstruction method to be useful for a certain subset of problems where it is difficult or impossible to obtain ground truth data for training, but where there are few or no time constraints on the reconstruction. Because no training data is needed, this method can be applied to many different imaging problems without needing to collect new datasets. UDNs are especially promising for imaging dense, natural scenes where TV can be a poor prior.
One interesting open problem for untrained reconstructions is the network choice. In our experiments, we utilized a convolutional network with an encoder-decoder structure and found that this worked well; however, our network tended to blur out high frequency features and there may be other architectures that could potentially provide better priors. For instance, deep decoder [38] has been demonstrated for similar tasks, but we found that for our application it required more iterations to converge and was outperformed by an encoder-decoder structure, demonstrating that the choice of network architecture is important and application-specific. Although our network worked well for photographic scenes, other network architectures may be more appropriate for other types of images, such as fluorescent biological targets which may have very different statistics than photographic scenes.
## 7 Conclusion
We have demonstrated that untrained networks can improve the image quality for lensless compressive imaging systems. We tested untrained networks on 2D lensless imaging with varying amounts of erasures, and we demonstrated their effectiveness on single-shot compressive video and single-shot hyperspectral imaging, in which we recover a full 72-frame video or 32 to 64 spectral slices, respectively, from a single measurement. In each case, we showed both in simulation and experiment that untrained networks can have better image quality than compressive-sensing-based minimization using total-variation regularization, demonstrating that non-linear networks can be a better prior than TV for dense, natural scenes. We believe that untrained networks are especially promising for situations in which training data is difficult or impossible to obtain, providing a better imaging prior for underdetermined reconstructions.
**Funding.** Placeholder
**Acknowledgments.** Vi Tran acknowledges support from the Transfer-to-Excellence REU program, funded by the Hopper Dean Foundation and hosted by the Center for Energy Efficient Electronics Science (NSF Award 0939514).
**Disclosures.** The authors declare no conflicts of interest.
**Data Availability Statement.** All data will be available here: [https://github.com/Waller-Lab/UDN/](https://github.com/Waller-Lab/UDN/) upon publication of the paper.
**Supplemental document.** See Supplement 1 for supporting content.
## References
* [1] E. J. Candes, \"Compressive sampling,\" in _Proceedings of the international congress of mathematicians_, vol. 3 (European Mathematical Society, 2006), pp. 1433-1452.
* [2] E. J. Candes and M. B. Wakin, \"An introduction to compressive sampling,\" IEEE signal processing magazine **25**, 21-30 (2008).
* [3] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, \"Single-pixel imaging via compressive sampling,\" IEEE signal processing magazine **25**, 83-91 (2008).
* [4] M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, \"An architecture for compressive imaging,\" in _2006 international conference on image processing_, (IEEE, 2006), pp. 1273-1276.
* [5] G. Huang, H. Jiang, K. Matthews, and P. Wilford, \"Lensless imaging by compressive sensing,\" in _2013 IEEE International Conference on Image Processing_, (IEEE, 2013), pp. 2101-2105.
* [6] K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, \"Compressive light field photography using overcomplete dictionaries and optimized projections,\" ACM Transactions on Graph. (TOG) **32**, 1-12 (2013).
* [7] A. Levin, R. Fergus, F. Durand, and W. T. Freeman, \"Image and depth from a conventional camera with a coded aperture,\" ACM transactions on graphics (TOG) **26**, 70-es (2007).
* [8] A. Wagadarikar, R. John, R. Willett, and D. Brady, \"Single disperser design for coded aperture snapshot spectral imaging,\" Appl. optics **47**, B44-B51 (2008).
* [9] M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, \"Single-shot compressive spectral imaging with a dual-disperser architecture,\" Opt. express **15**, 14013-14027 (2007).
* [10] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, \"Video from a single coded exposure photograph using a learned over-complete dictionary,\" in _2011 International Conference on Computer Vision, (IEEE, 2011)_, pp. 287-294.
* [11] D. Reddy, A. Veeraraghavan, and R. Chellappa, \"P2c2: Programmable pixel compressive camera for high speed imaging,\" in _CVPR 2011_, (IEEE, 2011), pp. 329-336.
* [12] P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, \"Coded aperture compressive temporal imaging,\" Opt. express **21**, 10526-10545 (2013).
* [13] L. Gao, J. Liang, C. Li, and L. V. Wang, \"Single-shot compressed ultrafast photography at one hundred billion frames per second,\" Nature **516**, 74-77 (2014).
* [14] D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, \"Compact snapshot hyperspectral imaging with diffracted rotation,\" ACM Transactions on Graph. (Proc. SIGGRAPH 2019) **38**, 117:1-13 (2019).
* [15] A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, \"Imaging with nature: Compressive imaging using a multiply scattering medium,\" Sci. reports **4**, 5552 (2014).
* [16] M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, \"Flatcam: Thin, lensless cameras using coded aperture and computation,\" IEEE Transactions on Comput. Imaging **3**, 384-397 (2016).
* [17] N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, \"DiffuserCam: lensless single-exposure 3D imaging,\" Optica **5**, 1-9 (2018).
* [18] T. Shimano, Y. Nakamura, K. Tajima, M. Sao, and T. Hoshizawa, \"Lensless light-field imaging with fresnel zone aperture: quasi-coherent coding,\" Appl. optics **57**, 2841-2850 (2018).
* [19] J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, "Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope," Sci. advances **3**, e1701548 (2017).
* [20] G. Kuo, F. L. Liu, I. Grossrubatscher, R. Ng, and L. Waller, "On-chip fluorescence microscopy with a random microlens diffuser," Opt. Express **28**, 8384-8399 (2020).
* [21] K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, and L. Waller, \"Miniscope3d: optimized single-shot miniature 3d fluorescence microscopy,\" Light. Sci. & Appl. **9**, 1-13 (2020).
* [22] F. L. Liu, G. Kuo, N. Antipa, K. Yanny, and L. Waller, \"Fourier diffuserscope: Single-shot 3d fourier light field microscopy with a diffuser,\" arXiv preprint arXiv:2006.16343 (2020).
* [23] K. Monakhova, K. Yanny, N. Aggarwal, and L. Waller, \"Spectral DiffuserCam: Lensless snapshot hyperspectral imaging with a spectral filter array,\" Optica **7**, 1298-1307 (2020).
* [24] N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, \"Video from stills: Lensless imaging with rolling shutter,\" in _2019 IEEE International Conference on Computational Photography (ICCP)_, (IEEE, 2019), pp. 1-8.
* [25] C. Li, \"An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,\" Ph.D. thesis, Rice University (2010).
* [26] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, \"Deep convolutional neural network for inverse problems in imaging,\" IEEE Transactions on Image Process. **26**, 4509-4522 (2017).
* [27] E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, \"Plug-and-play methods provably converge with properly trained denoisers,\" in _International Conference on Machine Learning_, (PMLR, 2019), pp. 5546-5557.
* [28] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, \"Plug-and-play priors for model based reconstruction,\" in _2013 IEEE Global Conference on Signal and Information Processing_, (IEEE, 2013), pp. 945-948.
* [29] K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, and R. Timofte, \"Plug-and-play image restoration with deep denoiser prior,\" arXiv preprint arXiv:2008.13751 (2020).
* [30] Y. Romano, M. Elad, and P. Milanfar, \"The little engine that could: Regularization by denoising (red),\" SIAM J. on Imaging Sci. **10**, 1804-1844 (2017).
* [31] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, \"Image denoising with block-matching and 3d filtering,\" in _Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning_, vol. 6064 (International Society for Optics and Photonics, 2006), p. 606414.
* [32] A. Sinha, J. Lee, S. Li, and G. Barbastathis, \"Lensless computational imaging through deep learning,\" Optica **4**, 1117-1125 (2017).
* [33] M. T. McCann, K. H. Jin, and M. Unser, \"Convolutional neural networks for inverse problems in imaging: A review,\" IEEE Signal Process. Mag. **34**, 85-95 (2017).
* [34] K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, \"Learned reconstructions for practical mask-based lensless imaging,\" Opt. Express **27**, 28075-28090 (2019).
* [35] S. S. Khan, V. Sundar, V. Boominathan, A. N. Veeraraghavan, and K. Mitra, \"Flatnet: Towards photorealistic scene reconstruction from lensless measurements,\" IEEE Transactions on Pattern Analysis Mach. Intell. (2020).
* [36] X. Zhang, Q. Chen, R. Ng, and V. Koltun, \"Zoom to learn, learn to zoom,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, (2019), pp. 3762-3770.
* [37] D. Ulyanov, A. Vedaldi, and V. Lempitsky, \"Deep image prior,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, (2018), pp. 9446-9454.
* [38] R. Heckel and P. Hand, \"Deep decoder: Concise image representations from untrained non-convolutional networks,\" arXiv preprint arXiv:1810.03982 (2018).
* [39] G. Mataev, P. Milanfar, and M. Elad, \"DeepRED: Deep image prior powered by RED,\" in _Proceedings of the IEEE International Conference on Computer Vision Workshops_, (2019), pp. 0-0.
* [40] J. Liu, Y. Sun, X. Xu, and U. S. Kamilov, \"Image restoration using total variation regularized deep image prior,\" in _ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, (IEEE, 2019), pp. 7715-7719.
* [41] E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, \"Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,\" Optica **7**, 559-562 (2020).
* [42] K. Gong, C. Catana, J. Qi, and Q. Li, \"Pet image reconstruction using deep image prior,\" IEEE transactions on medical imaging **38**, 1655-1665 (2018).
* [43] J. Cui, K. Gong, N. Guo, C. Wu, X. Meng, K. Kim, K. Zheng, Z. Wu, L. Fu, B. Xu, Z. Zhu, J. Tian, H. Liu, and Q. Li, \"Pet image denoising using unsupervised deep learning,\" Eur. journal nuclear medicine molecular imaging **46**, 2780-2789 (2019).
* [44] K. H. Jin, H. Gupta, J. Yerly, M. Stuber, and M. Unser, \"Time-dependent deep image prior for dynamic MRI,\" arXiv preprint arXiv:1910.01684 (2019).
* [45] K. C. Zhou and R. Horstmeyer, \"Diffraction tomography with a deep image prior,\" Opt. Express **28**, 12872-12896 (2020).
* [46] D. Van Veen, A. Jalal, M. Soltanolkotabi, E. Price, S. Vishwanath, and A. G. Dimakis, \"Compressed sensing with deep image prior and learned regularization,\" arXiv preprint arXiv:1806.06438 (2018).
* [47] G. Kuo, N. Antipa, R. Ng, and L. Waller, \"DiffuserCam: diffuser-based lensless cameras,\" in _Computational Optical Sensing and Imaging_, (Optical Society of America, 2017), pp. CTu3B-2.
* [48] A. Beck and M. Teboulle, \"A fast iterative shrinkage-thresholding algorithm for linear inverse problems,\" SIAM J. on Imaging Sci. **2**, 183-202 (2009).
* [49] S. Boyd, N. Parikh, and E. Chu, _Distributed optimization and statistical learning via the alternating direction method of multipliers_ (Now Publishers Inc, 2011).
* [50] U. S. Kamilov, \"A parallel proximal algorithm for anisotropic total variation minimization,\" IEEE Transactions on Image Process. **26**, 539-548 (2016).
* [51] M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, \"Nonlocal transform-domain filter for volumetric data denoising and reconstruction,\" IEEE transactions on image processing **22**, 119-133 (2012).
* [52] Luminit, \"Technical Data and Downloads,\" [http://www.luminitco.com/downloads/data-sheets](http://www.luminitco.com/downloads/data-sheets) (2017). [Online; accessed 19-July-2008].
* [53] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, \"The unreasonable effectiveness of deep features as a perceptual metric,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition,_ (2018), pp. 586-595.
* [54] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in _The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003,_ vol. 2 (IEEE, 2003), pp. 1398-1402.
* [55] R. Ennis, F. Schiller, M. Toscani, and K. R. Gegenfurtner, "Hyperspectral database of fruits and vegetables," JOSA A. **35**, B256-B266 (2018).
Chujun Huang
Mingjie Shao
Wing-Kin Ma
Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR of China
Anthony Man-Cho So
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong SAR of China
## 1 Introduction
Simplex identification via split augmented Lagrangian (SISAL) is an algorithm developed by Jose M. Bioucas-Dias in 2009 [1]. It appears in a 4-page conference paper, with open source code (in MATLAB). It basically deals with a simplex-structured matrix factorization problem from hyperspectral imaging; the problem is famously known as hyperspectral unmixing (HU) in the community of hyperspectral remote sensing. It is worth mentioning that HU is not only a key topic in hyperspectral imaging [2, 3], it also has strong relationships with non-negative matrix factorization and the various machine learning applications thereof; see, e.g., [4, 5] and the references therein.
The development of SISAL revolves around problem formulation and optimization algorithm design. SISAL has a unique place in the course of history of HU: it offered one of the first, and most pioneering, practical algorithms for a promising but difficult-to-implement strategy for HU, namely, simplex volume minimization (SVMin). It has become a benchmark and has been frequently used by researchers. By the authors' understanding, the reasons boil down to one: _it works well in practice_. SISAL has good running speed, scales well with the data sizes (very large ones) computationally, delivers reasonably good unmixing results, and demonstrates resilience to noise and modeling error effects. SISAL shows powerful intuitions by its inventor. As an article to pay tribute to Bioucas-Dias' tremendous insights to hyperspectral imaging, allow us to quote a saying by Steve Jobs: \"Intuition is a very powerful thing, more powerful than intellect, in my opinion.\"
This article serves as an endeavor to continue the legacy of Bioucas-Dias' SISAL. It can also be regarded as the sequel of [6]. The SISAL work has left some open questions. First and foremost, SISAL requires tuning of a regularization parameter. That parameter has an impact on SISAL's noise resilience behaviors. It is not clear how we should choose that parameter, apart from empirical or human experience. To make the story more complicated, SISAL was motivated by the noiseless case, and the subsequent explanation of why SISAL works in the noisy case was intuitive. Our question is whether there exists an alternative explanation for the noisy case. To answer that, we pursue a probabilistic simplex component analysis (SCA) framework, wherein we employ a principled formulation, namely, the maximum likelihood, to deal with the problem under a pertinent statistical model (to be specified later). This statistical strategy for unmixing is different from SISAL or SVMin, which is geometric. The former, by principle, has the upper hand in the noisy case; it also frees us from parameter tuning. We will show that SISAL can be seen as an approximation scheme of probabilistic SCA. Moreover, the connections we build suggest a different concept: Rather than considering parameter tuning, we should work on a more general formulation of SISAL, which is induced from probabilistic SCA and has no pre-selected parameter (except for the noise variance which can be estimated from data).
Some prior work on the aforementioned direction should be recognized. The links between SVMin (but not SISAL) and statistical inference were noted in earlier works [7, 8], [9, Appendix]. The prequel of this article [6] describes the connections between SVMin and probabilistic SCA more explicitly, but it only showed similarities, not a direct connection, between SISAL and probabilistic SCA. This article shows a close connection between SISAL and probabilistic SCA, compared to the previous work. Curiously, a simple second-order statistics observation (to be shown in Section 3.4) provides the very crucial piece of jigsaw to complete the puzzle.
Second, it is intriguing to study the optimization aspects of SISAL. The problem formulated in SISAL is non-convex, and Bioucas-Dias derived a successive convex approximation algorithm to tackle the problem. The algorithm can be seen as a first-order method, as will be elaborated upon later, and it is worth mentioning that, in 2009, non-convex first-order optimization was not as extensively studied as today. As mentioned, the algorithm proved to be a success in practice. Our question is whether the SISAL algorithm actually possesses any form of guarantees of finding a stationary point, leveraging our much better understanding of non-convex first-order optimization today. We will see that the SISAL algorithm can be viewed as an instance of the proximal gradient method, with line search along the feasible direction. There are, however, caveats that prevent us from directly claiming convergence to a stationary point--a key component in the objective function does not have a Lipschitz gradient, and its domain is the set of all invertible matrices (which is a non-convex set). In this connection we should mention that, in the current non-convex first-order optimization literature, it is very common to assume the aforementioned component to have a Lipschitz gradient. We will confirm that the SISAL algorithm, with a minor adjustment, can indeed guarantee convergence to a stationary point (more accurately, limit-point convergence). This is made possible by establishing associations between the SISAL algorithm and the line-search-based proximal gradient framework in [10].
Our endeavor to re-explain SISAL also gives rise to new insights for algorithms. Through connecting SISAL and probabilistic SCA, we see a more general formulation that resembles SISAL. The new formulation replaces SISAL's penalty term with a probabilistic penalty term, and it has the regularization parameter (which requires tuning in SISAL) eliminated. We custom-design a practical algorithm for the formulation (which is more difficult than the SISAL), and we will illustrate by numerical experiments that this probabilistic SISAL performs well under the high SNR regime. We also study a SISAL variant that is easier to work with from an optimization algorithm design viewpoint, and numerical results suggest that the variant is computationally competitive.
We organize this paper as follows. Section 2 provides the problem statement and reviews the formulation of SISAL. Section 3 studies probabilistic SCA, shows how probabilistic SCA and SISAL are connected, and, in the process, reveals new formulations. Section 4 considers the optimization aspects of SISAL, particularly, the stationarity guarantee of SISAL. Section 5 develops a practical algorithm for the new formulation of probabilistic SISAL. Section 6 provides synthetic and semi-real data experiments. Section 7 concludes this work.
Our basic notations are as follows. The sets of all real, non-negative and positive numbers are denoted by \\(\\mathbb{R},\\mathbb{R}_{+},\\mathbb{R}_{++}\\), respectively; boldface lowercase letters, such as \\(\\mathbf{x}\\), represent column vectors; boldface capital letters, such as \\(\\mathbf{X}\\), represent matrices; we may use the notation \\((x_{1},\\ldots,x_{n})\\) to represent a column vector; the superscripts \\({}^{\\top}\\), \\({}^{-1}\\) and \\({}^{\\dagger}\\) denote transpose, inverse and pseudo-inverse, respectively; \\(\\det(\\mathbf{X})\\) denotes the determinant of \\(\\mathbf{X}\\); \\(\\text{Diag}(x_{1},\\ldots,x_{n})\\) denotes a diagonal matrix with the \\(i\\)th diagonal element given by \\(x_{i}\\); \\(\\mathbf{0}\\) and \\(\\mathbf{1}\\) denote all-zero and all-one vectors of appropriate sizes, respectively; \\(\\mathbf{x}\\geq\\mathbf{0}\\) means that \\(\\mathbf{x}\\) is element-wise non-negative, and similarly \\(\\mathbf{X}\\geq\\mathbf{0}\\) means that \\(\\mathbf{X}\\) is element-wise non-negative; \\(\\|\\!\\cdot\\!\\|\\) denotes the Euclidean norm for both vectors and matrices; \\(\\text{conv}(\\mathbf{A})=\\{\\mathbf{y}=\\mathbf{A}\\mathbf{x}\\mid\\mathbf{x}\\geq 0,\\mathbf{1}^{\\top}\\mathbf{x}=1\\}\\) denotes the convex hull of the columns of \\(\\mathbf{A}\\); \\(p(\\mathbf{x};\\mathbf{\\theta})\\) denotes the probability distribution of a random variable \\(\\mathbf{x}\\), with the distribution parameter given by \\(\\mathbf{\\theta}\\); \\(p(\\mathbf{x},\\mathbf{y};\\mathbf{\\theta})\\) denotes the joint probability distribution of two random variables \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\), with distribution parameter \\(\\mathbf{\\theta}\\); \\(p(\\mathbf{x}|\\mathbf{y};\\mathbf{\\theta})\\) denotes the probability distribution of \\(\\mathbf{x}\\) conditioned on \\(\\mathbf{y}\\), with distribution parameter \\(\\mathbf{\\theta}\\); \\(\\mathbb{E}[\\cdot]\\) denotes the expectation. More notations will be defined in appropriate places.
## 2 Background
### Problem Statement
The problem of interest, in its most basic form, is as follows. We are given a collection of data points \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\in\\mathbb{R}^{M}\\). We postulate that
\\[\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}, \\tag{1}\\]
where \\(\\mathbf{A}_{0}\\in\\mathbb{R}^{M\\times N}\\), with \\(M\\geq N\\); \\(\\mathbf{s}_{t}\\) is a latent (and thus unknown) variable. The latent variables lie in the unit simplex, i.e., \\(\\mathbf{s}_{t}\\geq\\mathbf{0},\\mathbf{1}^{\\top}\\mathbf{s}_{t}=1\\). The matrix \\(\\mathbf{A}_{0}\\) is unknown. The problem is to recover \\(\\mathbf{A}_{0}\\) from \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\). Note that after recovering \\(\\mathbf{A}_{0}\\), we can recover \\(\\mathbf{s}_{t}\\) by solving the regression problem \\(\\min_{\\mathbf{s}_{t}\\geq\\mathbf{0},\\mathbf{1}^{\\top}\\mathbf{s}_{t}=1}\\|\\mathbf{y}_{t}-\\mathbf{A}_{0}\\mathbf{s }_{t}\\|^{2}\\). For convenience, the above problem of recovering \\(\\mathbf{A}_{0}\\) from \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\) will be called SCA in the sequel.
From a geometrical viewpoint, SCA is a problem of finding the vertices of a hidden simplex from a collection of data points that lie in that simplex. To be specific, observe from (1) that \\(\\mathbf{y}_{t}\\in\\mathrm{conv}(\\mathbf{A}_{0})\\); or, in words, the data points lie in \\(\\mathrm{conv}(\\mathbf{A}_{0})\\). The set \\(\\mathrm{conv}(\\mathbf{A}_{0})\\) is a simplex under the assumption of full-column rank \\(\\mathbf{A}_{0}\\), and, by the definition of simplices, the vertices of \\(\\mathrm{conv}(\\mathbf{A}_{0})\\) are the columns of \\(\\mathbf{A}_{0}\\).1 Hence, the \\(\\mathbf{y}_{t}\\)'s are simplicially distributed data, and recovering \\(\\mathbf{A}_{0}\\) is the same as finding the vertices. Such a viewpoint is commonly used in the context of hyperspectral unmixing; see, e.g., [2, 3]. From a statistical viewpoint, SCA is reminiscent of latent factor analyses such as independent component analysis (ICA). Specifically, they share the common goal of exploiting the underlying natures of the latent variables, which are based upon further postulates on the statistics of the \\(\\mathbf{s}_{t}\\)'s, to recover \\(\\mathbf{A}_{0}\\). Note that unit-simplex distributed \\(\\mathbf{s}_{t}\\)'s do not have element-wise independent entries, the latter being the key postulate of ICA.
Footnote 1: We should recall that a set \\(\\mathcal{S}\\subseteq\\mathbb{R}^{m}\\) is called a simplex if it takes the form \\(\\mathcal{S}=\\mathrm{conv}(\\mathbf{A})\\), where \\(\\mathbf{A}=[\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{n}]\\in\\mathbb{R}^{m\\times n}\\) has \\(\\{\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{n}\\}\\) being affinely independent. A simplex \\(\\mathrm{conv}(\\mathbf{A})\\) has the property that the set of vertices of \\(\\mathrm{conv}(\\mathbf{A})\\) is \\(\\{\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{n}\\}\\). Also, it should be noted that if \\(\\mathbf{A}\\) has full column rank, then \\(\\{\\mathbf{a}_{1},\\ldots,\\mathbf{a}_{n}\\}\\) is affinely independent; the converse is not true.
An important application of SCA is hyperspectral unmixing (HU) in remote sensing [2, 3]. In fact, HU has provided strong motivations for researchers to study SCA, and one can argue that HU is central to the developments of SCA. A concise problem statement of HU is as follows. We are given a hyperspectral image taken from a scene. The image is represented by \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\), where each \\(\\mathbf{y}_{t}\\in\\mathbb{R}^{M}\\) is a collection of reflectance measurements over \\(M\\) (over a hundred) fine-resolution spectral bands at a particular pixel. Under some assumptions we may postulate that \\(\\mathbf{y}_{t}\\) follows the SCA model (1) [2]. In particular, each column of \\(\\mathbf{A}_{0}\\) describes the spectral response of a distinct material (or endmember), and each \\(\\mathbf{s}_{t}\\) describes the proportional distribution (or abundance) of the various materials at pixel \\(t\\). The problem of HU is to identify the unknown materials and how they compose the scene, specifically, by uncovering the materials' spectral responses and the proportional distributions from the image. The problem is, in essence, SCA. The reader is referred to [2, 3, 6, 7, 8, 9, 11, 12] for further details of HU.
SCA has strong connections with non-negative matrix factorization (NMF). To describe, consider an NMF data model \\(\\mathbf{z}_{t}=\\mathbf{B}\\mathbf{c}_{t}\\) for \\(t=1,\\ldots,T\\), where \\(\\mathbf{B}\\geq\\mathbf{0}\\) and \\(\\mathbf{c}_{t}\\geq\\mathbf{0}\\) for all \\(t\\). Note that \\(\\mathbf{c}_{t}\\) may not satisfy \\(\\mathbf{1}^{\\top}\\mathbf{c}_{t}=1\\). Consider normalizing the data points \\(\\mathbf{z}_{t}\\)'s by \\(\\mathbf{y}_{t}=\\mathbf{z}_{t}/(\\mathbf{1}^{\\top}\\mathbf{z}_{t})\\). One can show that
\\[\\mathbf{y}_{t}=\\sum_{i=1}^{N}\\underbrace{\\mathbf{b}_{i}}_{:=\\mathbf{a}_{i,0}}\\underbrace{ \\frac{\\mathbf{1}^{\\top}\\mathbf{b}_{i}c_{i,t}}{\\sum_{j=1}^{N}\\mathbf{1}^{\\top}\\mathbf{b}_{j}c_{ j,t}}}_{:=s_{i,t}}=\\mathbf{A}_{0}\\mathbf{s}_{t},\\]
where \\(\\mathbf{b}_{i}\\) and \\(\\mathbf{a}_{i,0}\\) denote the \\(i\\)th column of \\(\\mathbf{B}\\) and \\(\\mathbf{A}_{0}\\), respectively, and the above defined \\(\\mathbf{s}_{t}\\) is seen to satisfy \\(\\mathbf{s}_{t}\\geq\\mathbf{0}\\) and \\(\\mathbf{1}^{\\top}\\mathbf{s}_{t}=1\\); see [4, 5] and the references therein. Thus, NMF can be cast as an SCA problem by the above normalization process. It is worth noting that the application of SCA to NMF does not exploit the non-negativity of \\(\\mathbf{A}_{0}\\) in general; rather, it focuses on leveraging the structures of the unit-simplex-distributed \\(\\mathbf{s}_{t}\\)'s to recover \\(\\mathbf{A}_{0}\\). The reader is referred to [4, 5] for details.
### Simplex Volume Minimization and SISAL
There are various ways to tackle SCA, and, among them, simplex volume minimization (SVMin) stands as a powerful approach. SVMin is built on the geometrical intuition that, if we can find a simplex that circumscribes all the data points and yields the minimum volume, that simplex is expected to be the ground-truth simplex \\(\\text{conv}(\\mathbf{A}_{0})\\); see the literature [2, 3, 4, 5] for more inspirations. The problem of finding the minimum-volume data circumscribing simplex can be formulated as
\\[\\begin{split}\\min_{\\mathbf{A}\\in\\mathbb{R}^{M\\times N}}\\text{vol}( \\mathbf{A}):=(N-1)!\\cdot(\\det(\\bar{\\mathbf{A}}^{\\top}\\bar{\\mathbf{A}}))^{1/2}\\\\ \\text{s.t.}\\ \\mathbf{y}_{t}\\in\\text{conv}(\\mathbf{A}),\\quad t=1,\\ldots,T, \\end{split} \\tag{2}\\]
where \\(\\text{vol}(\\mathbf{A})\\) is the volume of the simplex \\(\\text{conv}(\\mathbf{A})\\)[13] (we assume that every feasible point \\(\\mathbf{A}\\) of (2) has full column rank); \\(\\bar{\\mathbf{A}}=[\\ \\mathbf{a}_{1}-\\mathbf{a}_{N},\\ldots,\\mathbf{a}_{N-1}-\\mathbf{a}_{N}\\ ]\\), with \\(\\mathbf{a}_{i}\\) being the \\(i\\)th column of \\(\\mathbf{A}\\). Recent studies have revealed that SVMin is more than an intuition. It is shown that, under some technical conditions which should hold for sufficiently well-spread \\(\\mathbf{s}_{t}\\)'s, the optimal solution to the SVMin problem (2) is the ground truth \\(\\mathbf{A}_{0}\\) or its column permutation [11, 14, 15]. In other words, SVMin is equipped with provable recovery guarantees.
SISAL [1] is arguably the most popular algorithm for SVMin. Here we shed light onto how SVMin is formulated in SISAL. Bioucas-Dias, the author of SISAL, derived the SISAL formulation in an intuitively powerful way. In particular, he focused on rewriting SVMin to a form that is algorithmically friendly to handle. Assume \\(M=N\\); this is not a problem since we can apply dimensionality reduction to project the data points to a lower dimensional space [2, 3]. SISAL starts with the following variation of writing the SVMin problem
\\[\\begin{split}\\min_{\\mathbf{A}\\in\\mathbb{R}^{N\\times N},\\mathbf{S}\\in \\mathbb{R}^{N\\times T}}|\\det(\\mathbf{A})|\\\\ \\text{s.t.}\\ \\mathbf{Y}=\\mathbf{A}\\mathbf{S},\\ \\mathbf{S}\\geq\\mathbf{0},\\ \\mathbf{S}^{ \\top}\\mathbf{1}=\\mathbf{1},\\end{split} \\tag{3}\\]
where \\(\\mathbf{Y}=[\\ \\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\ ]\\). In particular the above problem replaces the simplex volume \\(\\text{vol}(\\mathbf{A})\\propto(\\det(\\bar{\\mathbf{A}}^{\\top}\\bar{\\mathbf{A}}))^{1/2}\\) in problem (2) with \\(|\\det(\\mathbf{A})|\\)--which is easier to work with. The first key idea leading to SISAL is to perform a transformation
\\[\\mathbf{B}=\\mathbf{A}^{-1},\\]
for which we assume that every feasible point \\(\\mathbf{A}\\) of problem (3) is invertible. By \\(\\mathbf{Y}=\\mathbf{A}\\mathbf{S}\\Longleftrightarrow\\mathbf{B}\\mathbf{Y}=\\mathbf{S}\\), we can transform problem (3) to
\\[\\begin{split}\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}}1/|\\det(\\mathbf{B} )|\\\\ \\text{s.t.}\\ \\mathbf{B}\\mathbf{Y}\\geq\\mathbf{0},\\ \\mathbf{Y}^{\\top}\\mathbf{B}^{ \\top}\\mathbf{1}=\\mathbf{1}.\\end{split} \\tag{4}\\]
The transformed problem above is a non-convex optimization problem with convex constraints, and in this regard we should note that the constraint \\(\\mathbf{Y}=\\mathbf{A}\\mathbf{S}\\) in the SVMin problem (3) is non-convex. The second idea, which looks minor but will be relevant to a key aspect later, is to assume that
\\[\\mathbf{Y}^{\\top}\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{1}\\qquad\\Longleftrightarrow\\qquad\\mathbf{B} ^{\\top}\\mathbf{1}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}. \\tag{5}\\]Note that (5) is true for \"\\(\\Longrightarrow\\)\", but (5) is not necessarily true for \"\\(\\Longleftarrow\\)\" when we are given an arbitrary \\(Y\\). Applying (5), we rewrite problem (4) as
\\[\\begin{split}\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}}& \\ 1/|\\det(\\mathbf{B})|\\\\ \\text{s.t.}&\\ \\mathbf{B}\\mathbf{Y}\\geq\\mathbf{0},\\ \\mathbf{B}^{\\top} \\mathbf{1}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}.\\end{split} \\tag{6}\\]
The constraint \\(\\mathbf{B}\\mathbf{Y}\\geq\\mathbf{0}\\), albeit convex, is a number of \\(NT\\) linear inequalities. These linear inequalities are unstructured, meaning that there is no special structure that we can utilize to handle the inequalities efficiently. When \\(T\\) is large, which is often the case in practice, forcing the numerous linear inequalities to hold can be a computational challenge. The third idea, which is a compromise, is to approximate the constraint \\(\\mathbf{B}\\mathbf{Y}\\geq\\mathbf{0}\\) by soft constraints. This gives rise to the final formulation of SISAL:
**Formulation 1, SISAL Formulation by Bioucas-Dias [1]:**
\\[\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}} -\\log(|\\det(\\mathbf{B})|)+\\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\text{ hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t})\\] \\[\\text{s.t.} \\ \\mathbf{B}^{\\top}\\mathbf{1}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1},\\]
where \\(\\text{hinge}(x)=\\max\\{-x,0\\}\\) is a hinge function, and it serves as a penalty function for non-negative \\(x\\); \\(\\mathbf{b}_{i}\\) denotes the \\(i\\)th row of \\(\\mathbf{B}\\); \\(\\lambda>0\\) is a pre-selected penalty parameter; recall \\(\\mathbf{B}=\\mathbf{A}^{-1}\\).
Our description of the formulation of SISAL is complete. Let us summarize the ideas that led to the SISAL formulation:
1. use the SVMin formulation (3), which considers \\(M=N\\) and replaces the simplex volume \\(\\text{vol}(\\mathbf{A})\\) in (2) with \\(|\\det(\\mathbf{A})|\\);
2. apply the variable transformation \\(\\mathbf{B}=\\mathbf{A}^{-1}\\);
3. assume that the equivalence in (5) is true;
4. apply the soft constraint approximations, replacing the constraints \\(\\mathbf{B}\\mathbf{Y}\\geq\\mathbf{0}\\) with a penalty function \\(\\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\text{hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t})\\) in the objective function.
All these operations aim at simplifying the problem for efficient optimization. Interestingly, it has recently been shown that, except for operation iv), and under appropriate model assumptions, all the above operations lead us to the same problem as the basic SVMin formulation in (2).
**Proposition 1** ([16]): _Suppose that the data points exactly follow the data model \\(\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}\\), with \\(M=N\\); that \\(\\mathbf{A}_{0}\\) has full column rank; and that \\(\\mathbf{S}=[\\ \\mathbf{s}_{1},\\ldots,\\mathbf{s}_{T}\\ ]\\) has full row rank. Then, the SVMin problem (2) is equivalent to problem (6). Particularly, given any feasible point \\(\\mathbf{A}\\) of problem (2), (a) \\(\\mathbf{A}\\) is invertible; (b) both sides of the implication in (5) are true; (c) it holds that \\(\\text{vol}(\\mathbf{A})=C\\cdot|\\det(\\mathbf{A})|\\) for some constant \\(C\\)._
### Why is SISAL Successful?
There are two reasons for the success of SISAL. The first is with computational efficiency. Bioucas-Dias built a specialized algorithm for Formulation 1, which is a combination of successive convex approximation and the variable splitting augmented Lagrangian method. The result is a computationally efficient algorithm that scales well with the data size \\(T\\), particularly compared to other SVMin algorithms that deal with the hard constraint \\(\\mathbf{BY}\\geq\\mathbf{0}\\). The second is with noise robustness. The reader may have noticed that the SISAL formulation was derived under a data model that postulates that every data point is perfectly drawn from \\(\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}\\)--with no noise. As it turns out, the key success of SISAL lies in the noisy case. The soft constraint approximation, which was at first introduced to avoid the hard constraint \\(\\mathbf{BY}\\geq\\mathbf{0}\\), provides SISAL with resilience to noise effects. It was noticed that SISAL can be robust to outlying data points, while SVMin algorithms that faithfully implement the hard constraint \\(\\mathbf{BY}\\geq\\mathbf{0}\\) may not. This gives SISAL a significant advantage in practice.
SISAL does have a weakness. It is not clear how the penalty parameter \\(\\lambda\\) should be chosen, and usually it is manually tuned.
## 3 SISAL as Probabilistic SCA, and Beyond
Intriguingly, we can provide an explanation of why SISAL works in the noisy case. The idea is to build a connection between SISAL and a probabilistic SCA framework, and this is the focus of this section.
### Probabilistic SCA
To put into context, consider a noisy data model
\\[\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}+\\mathbf{v}_{t},\\quad t=1,\\ldots,T, \\tag{7}\\]
where \\(\\mathbf{v}_{t}\\) is noise. The model is accompanied with the following assumptions:
* \\(\\mathbf{A}_{0}\\) is square and invertible;
* every \\(\\mathbf{s}_{t}\\) is uniformly distributed on the unit simplex; or, equivalently, every \\(\\mathbf{s}_{t}\\) follows a Dirichlet distribution with concentration parameter \\(\\mathbf{1}\\);
* every \\(\\mathbf{v}_{t}\\) is Gaussian distributed with mean zero and covariance \\(\\sigma^{2}\\mathbf{I}\\);
* the \\(\\mathbf{s}_{t}\\)'s are independent and identically distributed (i.i.d.), the \\(\\mathbf{v}_{t}\\)'s are i.i.d., and the \\(\\mathbf{s}_{t}\\)'s are independent of the \\(\\mathbf{v}_{t}\\)'s.
Our point of departure is the maximum-likelihood (ML) estimator
\\[\\hat{\\mathbf{A}}\\in\\arg\\max_{\\mathbf{A}\\in\\mathbb{R}^{N\\times N}} \\frac{1}{T}\\sum_{t=1}^{T}\\log p(\\mathbf{y}_{t};\\mathbf{A})\\] (8) s.t. \\[\\mathbf{A}\\text{ is invertible,}\\]
where \\(p(\\mathbf{y};\\mathbf{A})\\) is the probability distribution of a data point \\(\\mathbf{y}\\) parameterized by \\(\\mathbf{A}\\), which will be specified shortly. The ML estimator (8) has been shown to possess a desirable identifiability characteristic [6]. In addition, ML estimation is deemed a principled and powerful approach for estimating \\(\\mathbf{A}_{0}\\) in the noisy case, and the same type of ML estimation is also seen in probabilistic forms of principal component analysis (PCA) and ICA [17, 18, 19, 20].
### Approximating the Likelihood
The expression of \\(p(\\mathbf{y};\\mathbf{A})\\) and how we handle it hold the first key of connecting SISAL and the ML estimator. To derive \\(p(\\mathbf{y};\\mathbf{A})\\), let \\(p(\\mathbf{y},\\mathbf{s};\\mathbf{A})\\) be the joint distribution of a data point \\(\\mathbf{y}\\) and its associated latent variable \\(\\mathbf{s}\\) (parameterized by \\(\\mathbf{A}\\)). From the model in (7) and its accompanying assumptions, \\(p(\\mathbf{y},\\mathbf{s};\\mathbf{A})\\) is given by
\\[p(\\mathbf{y},\\mathbf{s};\\mathbf{A}) =p(\\mathbf{y}|\\mathbf{s};\\mathbf{A})p(\\mathbf{s}), \\tag{9}\\] \\[p(\\mathbf{y}|\\mathbf{s};\\mathbf{A}) =\\mathcal{N}(\\mathbf{y};\\mathbf{A}\\mathbf{s},\\sigma^{2}\\mathbf{I}),\\] (10) \\[p(\\mathbf{s}) =(N-1)!\\cdot\\mathds{1}_{\\Delta}(\\mathbf{s}),\\quad\\Delta=\\{\\mathbf{s}\\in \\mathbb{R}^{N}_{++}\\mid\\mathbf{1}^{\\top}\\mathbf{s}=1\\}, \\tag{11}\\]
where \\(p(\\mathbf{s})\\) is the latent prior; \\(p(\\mathbf{y}|\\mathbf{s};\\mathbf{A})\\) is the distribution of \\(\\mathbf{y}\\) conditioned on \\(\\mathbf{s}\\) (and parameterized by \\(\\mathbf{A}\\)); \\(\\mathcal{N}(\\mathbf{x};\\mathbf{\\mu},\\mathbf{\\Sigma})\\) denotes a real-valued multivariate Gaussian distribution function with mean \\(\\mathbf{\\mu}\\) and covariance \\(\\mathbf{\\Sigma}\\);
\\[\\mathds{1}_{\\mathcal{X}}(\\mathbf{x})=\\left\\{\\begin{array}{ll}0&\\text{if }\\mathbf{x}\\notin\\mathcal{X}\\\\ 1&\\text{if }\\mathbf{x}\\in\\mathcal{X}\\end{array}\\right..\\]
The distribution \\(p(\\mathbf{y};\\mathbf{A})\\) is the marginalization of \\(p(\\mathbf{y},\\mathbf{s};\\mathbf{A})\\) over \\(\\mathbf{s}\\):
\\[p(\\mathbf{y};\\mathbf{A})=\\int p(\\mathbf{y},\\mathbf{s};\\mathbf{A})\\mathrm{d}\\mu(\\mathbf{s}), \\tag{12}\\]
where \\(\\mu\\) is the Lebesgue measure on \\(\\{\\mathbf{s}\\in\\mathbb{R}^{N}\\mid\\mathbf{1}^{\\top}\\mathbf{s}=1\\}\\). At first sight, and by intuition, one may be tempted to further write (12) as
\\[p(\\mathbf{y};\\mathbf{A})=\\int_{\\mathbb{R}^{N}}p(\\mathbf{y},\\mathbf{s};\\mathbf{A})\\mathrm{d}\\mathbf{s}. \\tag{13}\\]
But the correct way should be
\\[p(\\mathbf{y};\\mathbf{A})=\\int_{\\mathbb{R}^{N-1}}p(\\mathbf{y},(\\mathbf{s}_{1:N-1},1-\\mathbf{1}^{ \\top}\\mathbf{s}_{1:N-1});\\mathbf{A})\\mathrm{d}\\mathbf{s}_{1:N-1},\\]
where \\(\\mathbf{s}_{1:N-1}=(s_{1},\\ldots,s_{N-1})\\), and we use the relation \\(\\mathbf{1}^{\\top}\\mathbf{s}=1\\) to explicitly represent \\(s_{N}\\) by \\(s_{N}=1-\\mathbf{1}^{\\top}\\mathbf{s}_{1:N-1}\\). Simply speaking, (13) does not consider the mathematical caveat that \\(\\mathds{1}_{\\Delta}(\\mathbf{s})\\) is not measurable on \\(\\mathbb{R}^{N}\\). There is however a simple trick to get around this caveat and thereby allow us to use (13) (which is simpler), as we will study later.
The function in (12) requires us to solve an integral. Unfortunately, that integral is intractable in general. To be more precise, we do not know if there exists a simple analytical expression or a computationally efficient method to solve the integral, given an arbitrary instance of \\(\\mathbf{y},\\mathbf{A},N\\). As with many scientific and engineering studies, we pursue approximations and heuristics. Firstly, we adopt a quasi latent prior
\\[p(\\mathbf{s})\\simeq C\\cdot\\mathds{1}_{\\hat{\\Delta}}(\\mathbf{s}),\\quad\\hat{\\Delta}=\\{\\mathbf{s}\\in\\mathbb{R}^{N}_{++}\\mid|\\mathbf{1}^{\\top}\\mathbf{s}-1|<\\delta/2\\}, \\tag{14}\\]
where \\(\\delta>0\\) is given and is small; \\(C\\) is a normalizing constant. Clearly, (14) should closely approximate the true latent prior when \\(\\delta\\) is very small. Since the quasi latent prior (14) is measurable on \\(\\mathbb{R}^{N}\\), we can use the expression (13) and write
\\[p(\\mathbf{y};\\mathbf{A})\\simeq C\\int_{\\mathbb{R}^{N}}\\mathcal{N}(\\mathbf{y};\\mathbf{A}\\mathbf{s}, \\sigma^{2}\\mathbf{I})\\mathbbm{1}_{\\hat{\\Delta}}(\\mathbf{s})\\mathrm{d}\\mathbf{s}. \\tag{15}\\]
Let \\(\\mathbf{B}=\\mathbf{A}^{-1}\\). By the change of variable \\(\\mathbf{x}=\\mathbf{A}\\mathbf{s}\\), (15) can be rewritten as
\\[p(\\mathbf{y};\\mathbf{A}) \\simeq C|\\det(\\mathbf{B})|\\int_{\\mathbb{R}^{N}}\\mathcal{N}(\\mathbf{y};\\bm {x},\\sigma^{2}\\mathbf{I})\\mathbbm{1}_{\\hat{\\Delta}}(\\mathbf{B}\\mathbf{x})\\mathrm{d}\\mathbf{x}\\] \\[=C|\\det(\\mathbf{B})|\\int_{\\mathbb{R}^{N}}\\mathcal{N}(\\mathbf{x};\\mathbf{y}, \\sigma^{2}\\mathbf{I})\\mathbbm{1}_{\\hat{\\Delta}}(\\mathbf{B}\\mathbf{x})\\mathrm{d}\\mathbf{x}. \\tag{16}\\]
By another change of variable \\(\\mathbf{v}=\\mathbf{x}-\\mathbf{y}\\), we can further rewrite (16) as
\\[p(\\mathbf{y};\\mathbf{A}) \\simeq C|\\det(\\mathbf{B})|\\int_{\\mathbb{R}^{N}}\\mathcal{N}(\\mathbf{v}; \\mathbf{0},\\sigma^{2}\\mathbf{I})\\mathbbm{1}_{\\hat{\\Delta}}(\\mathbf{B}(\\mathbf{y}+\\mathbf{v})) \\mathrm{d}\\mathbf{v}\\] \\[=C|\\det(\\mathbf{B})|\\cdot\\mathrm{Prob}(\\mathbf{B}(\\mathbf{y}+\\mathbf{v})\\in\\hat{ \\Delta}), \\tag{17}\\]
where \\(\\mathbf{v}\\sim\\mathcal{N}(\\mathbf{0},\\sigma^{2}\\mathbf{I})\\). By noting the definition of \\(\\hat{\\Delta}\\) in (14), the probability term in (17) can be expressed as
\\[\\mathrm{Prob}(\\mathbf{B}(\\mathbf{y}+\\mathbf{v})\\in\\hat{\\Delta})=\\mathrm{Prob}\\left(\\mathbf{b} _{1}^{\\top}(\\mathbf{y}+\\mathbf{v})>0,\\ldots,\\mathbf{b}_{N}^{\\top}(\\mathbf{y}+\\mathbf{v})>0,|\\mathbf{1}^ {\\top}\\mathbf{B}(\\mathbf{y}+\\mathbf{v})-1|<\\delta/2\\right), \\tag{18}\\]
where \\(\\mathbf{b}_{i}\\) denotes the \\(i\\)th row of \\(\\mathbf{B}\\). For convenience, let
\\[\\mathcal{E}_{i} =\\{\\mathbf{b}_{i}^{\\top}(\\mathbf{y}+\\mathbf{v})>0\\},\\quad i=1,\\ldots,N, \\tag{19a}\\] \\[\\mathcal{E}_{N+1} =\\{|\\mathbf{1}^{\\top}\\mathbf{B}(\\mathbf{y}+\\mathbf{v})-1|<\\delta/2\\}, \\tag{19b}\\]
and write
\\[\\mathrm{Prob}\\left(\\mathbf{B}(\\mathbf{y}+\\mathbf{v})\\in\\hat{\\Delta}\\right)=\\mathrm{Prob} \\left(\\cap_{i=1}^{N+1}\\mathcal{E}_{i}\\right).\\]
The following heuristic is very crucial.
**Heuristic 1**: _Approximate (18) by_
\\[\\mathrm{Prob}\\left(\\cap_{i=1}^{N+1}\\mathcal{E}_{i}\\right)\\approx\\prod_{i=1}^{ N+1}\\mathrm{Prob}(\\mathcal{E}_{i}).\\]
We will discuss how to make sense of Heuristic 1 in the next subsection. One can show from (19a) that
\\[\\mathrm{Prob}(\\mathcal{E}_{i})=\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}}{\\sigma \\|\\mathbf{b}_{i}\\|}\\right),\\quad i=1,\\ldots,N,\\]
where \\(\\Phi(x)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{x}e^{-z^{2}/2}\\mathrm{d}z\\) is the standard Gaussian cumulative distribution function; the idea is that, for \\(\\mathbf{v}\\sim\\mathcal{N}(\\mathbf{0},\\sigma^{2}\\mathbf{I})\\), we have \\(\\mathbf{b}_{i}^{\\top}(\\mathbf{y}+\\mathbf{v})\\sim\\mathcal{N}(\\mathbf{b}_{i}^{\\top}\\mathbf{y},\\sigma^{2}\\|\\mathbf{b}_{i}\\|^{2})\\). Similarly, one can show from (19b) that
\\[\\mathrm{Prob}(\\mathcal{E}_{N+1})\\approx\\delta\\cdot\\mathcal{N}(0;\\mathbf{1}^{\\top}\\mathbf{B}\\mathbf{y}-1,\\sigma^{2}\\|\\mathbf{B}^{\\top}\\mathbf{1}\\|^{2})\\]
for a very small \\(\\delta\\); again, the idea is that, for \\(\\mathbf{v}\\sim\\mathcal{N}(\\mathbf{0},\\sigma^{2}\\mathbf{I})\\), we have \\(\\mathbf{1}^{\\top}\\mathbf{B}(\\mathbf{y}+\\mathbf{v})-1\\sim\\mathcal{N}(\\mathbf{1}^{\\top}\\mathbf{B}\\mathbf{y}-1,\\sigma^{2}\\|\\mathbf{B}^{\\top}\\mathbf{1}\\|^{2})\\). Putting the components together, we obtain an approximate expression of \\(p(\\mathbf{y};\\mathbf{A})\\) as follows
\\[p(\\mathbf{y};\\mathbf{A})\\approx\\delta C|\\det(\\mathbf{B})|\\cdot\\left(\\prod_{i=1}^{N}\\Phi \\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}}{\\sigma\\|\\mathbf{b}_{i}\\|}\\right)\\right)\\cdot \\mathcal{N}(0;\\mathbf{1}^{\\top}\\mathbf{B}\\mathbf{y}-1,\\sigma^{2}\\|\\mathbf{B}^{\\top}\\mathbf{1}\\|^{2}). \\tag{20}\\]
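To see what the approximation (20) amounts to computationally, the following NumPy/SciPy sketch evaluates the resulting approximate average log-likelihood \\(\\frac{1}{T}\\sum_{t=1}^{T}\\log p(\\mathbf{y}_{t};\\mathbf{A})\\) up to an additive constant independent of \\(\\mathbf{B}\\); it is our own illustration, not code from any released package.

```python
import numpy as np
from scipy.stats import norm

def approx_avg_loglik(B, Y, sigma):
    """Approximate average log-likelihood implied by (20), up to additive constants.

    B: N x N candidate for A^{-1};  Y: N x T data matrix;  sigma: noise std."""
    row_norms = np.linalg.norm(B, axis=1)                        # ||b_i||
    phi_term = norm.logcdf((B @ Y) / (sigma * row_norms[:, None])).sum() / Y.shape[1]
    r = B.T @ np.ones(B.shape[0])                                # B^T 1
    resid = Y.T @ r - 1.0                                        # 1^T B y_t - 1 for each t
    gauss_term = norm.logpdf(resid, loc=0.0, scale=sigma * np.linalg.norm(r)).mean()
    _, logabsdet = np.linalg.slogdet(B)
    return logabsdet + phi_term + gauss_term
```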
### Insights Revealed and Discussion
Allow us to pause for a moment to examine what the ML problem looks like under the likelihood approximation derived in the preceding subsection. By applying (20) to the ML problem (8), the following formulation can be shown.
**Formulation 2, An Approximate Formulation of the ML Problem (8), Principally by Heuristic 1:**
\\[\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}}\\ -\\log(|\\det(\\mathbf{B})|)+g(\\mathbf{B})-\\frac{1}{T }\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{ \\sigma\\|\\mathbf{b}_{i}\\|}\\right),\\]
where we recall \\(\\Phi(x)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{x}e^{-z^{2}/2}\\mathrm{d}z\\);
\\[g(\\mathbf{B})=\\log(\\|\\mathbf{B}^{\\top}\\mathbf{1}\\|)+\\frac{\\|\\mathbf{Y}^{\\top}\\mathbf{B}^{\\top}\\mathbf{ 1}-\\mathbf{1}\\|^{2}}{2\\sigma^{2}T\\|\\mathbf{B}^{\\top}\\mathbf{1}\\|^{2}}.\\]
As a minor point of note for Formulation 2, we do not explicitly write down the constraint of invertible \\(\\mathbf{B}\\), which comes from the constraint of invertible \\(\\mathbf{A}\\) in the ML problem (8). This is because \\(-\\log|\\det(\\mathbf{B})|=+\\infty\\) for non-invertible matrices, which means that the invertible matrix constraint is already taken care of.
Let us compare Formulation 2 and the SISAL formulation (Formulation 1). We see that both have penalty terms related to negative \\(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}\\). To better illustrate, Fig. 1 plots \\(-\\log\\Phi(x)\\) and the hinge function. It is observed that \\(-\\log\\Phi(x)\\) is monotone decreasing, and it gives stronger outputs as \\(x\\) is more negative. Hence we may see \\(-\\log\\Phi(x)\\) as a penalty function for negative \\(x\\), serving a similar aim as the hinge function. Moreover, the constraint \\(\\mathbf{B}^{\\top}\\mathbf{1}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}\\) in the SISAL formulation, which comes from \\(\\mathbf{Y}^{\\top}\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{1}\\), is seen to bear some resemblance to the penalty function \\(g\\) in Formulation 2. In the next subsection, we will put forth another element that will bring Formulation 2 even closer to the SISAL formulation. Some discussions are as follows.
**Remark 1**: Some related work should be mentioned. In [6], we derived an approximate ML formulation similar to Formulation 2. We applied an approximation similar to Heuristic 1, but we did not use the quasi latent prior in (14). As a result, our previous approximate ML formulation is still not as similar to SISAL as Formulation 2.
**Remark 2**: We return to the question of how we can make sense of Heuristic 1. Here is our intuition: By the probability result \\(\\mathrm{Prob}\\left(\\cap_{i=1}^{N+1}\\mathcal{E}_{i}\\right)\\leq\\mathrm{Prob}( \\mathcal{E}_{i})\\) for any \\(i\\), we have
\\[\\mathrm{Prob}\\left(\\cap_{i=1}^{N+1}\\mathcal{E}_{i}\\right)\\leq\\left(\\prod_{i=1}^ {N+1}\\mathrm{Prob}(\\mathcal{E}_{i})\\right)^{1/(N+1)}.\\]
From the above inequality, we can show that
\\[-\\frac{1}{T}\\sum_{t=1}^{T}\\log p(\\mathbf{y}_{t};\\mathbf{A})\\geq-\\log(|\\det(\\mathbf{B})|)+\\frac{1}{N+1}\\left[g(\\mathbf{B})-\\frac{1}{T}\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b}_{i}\\|}\\right)\\right], \\tag{21}\\]
which is a lower-bound approximation and has a clearer rationale. Empirically, however, we found that (21) tends to underestimate the negative log-likelihood value \\(-\\frac{1}{T}\\sum_{t=1}^{T}\\log p(\\mathbf{y}_{t};\\mathbf{A})\\) quite significantly. Instead, removing the scaling \\(1/(N+1)\\) from (21) gives better results. As future work, it would be interesting to analyze the approximation accuracy of Heuristic 1 or to study better approximations under the genre of Heuristic 1.
### Bringing SISAL and ML Closer
We start with an assumption that does not seem to make sense at first. Let
\\[\\mathbf{p}=\\mathbf{A}_{0}^{-\\top}\\mathbf{1},\\]
and _suppose_ that we know \\(\\mathbf{p}\\). Consider the following modified ML problem
\\[\\begin{split}\\max_{\\mathbf{A}\\in\\mathbb{R}^{N\\times N}}& \\frac{1}{T}\\sum_{t=1}^{T}\\log p(\\mathbf{y}_{t};\\mathbf{A})\\\\ \\text{s.t.}&\\mathbf{A}^{-\\top}\\mathbf{1}=\\mathbf{p},\\quad\\mathbf{A }\\text{ is invertible,}\\end{split} \\tag{22}\\]
wherein we include our prior information of \\(\\mathbf{p}\\) to better guide the estimation. By applying the preceding likelihood approximation to problem (22) (or by adding the constraint \\(\\mathbf{A}^{-\\top}\\mathbf{1}=\\mathbf{p}\\) to Formulation 2), we have the following formulation.

Figure 1: Comparison of \\(-\\log\\Phi(x)\\) and the hinge function.
**Formulation 3, An Approximate Formulation of the modified ML Problem (22), Principally by Heuristic 1:**
\\[\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}}-\\log(|\\det(\\mathbf{B})|)-\\frac{ 1}{T}\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{ t}}{\\sigma\\|\\mathbf{b}_{i}\\|}\\right)\\] \\[\\text{s.t. }\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p}.\\]
Formulation 3 is very similar to the SISAL formulation (Formulation 1) if \\(\\mathbf{p}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}\\). In fact, we have this surprising result.
**Fact 1** ([16]): _Suppose that the data points \\(\\mathbf{y}_{t}\\)'s follow the noiseless model \\(\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}\\) (with \\(M=N\\)); that \\(\\mathbf{A}_{0}\\) has full column rank; and that \\(\\mathbf{S}=[\\ \\mathbf{s}_{1},\\ldots,\\mathbf{s}_{T}\\ ]\\) has full row rank. Then,_
\\[(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}=\\mathbf{A}_{0}^{-\\top}\\mathbf{1}.\\]
Fact 1 was shown in [16], and we shall not repeat the proof. Rather, we are interested in its extension to the noisy case.
**Fact 2**: _Suppose that the data points \\(\\mathbf{y}_{t}\\)'s follow the model in (7) and the accompanying assumptions. Let \\(\\mathbf{\\mu}_{y}=\\mathbb{E}[\\mathbf{y}_{t}]\\) and \\(\\mathbf{R}_{yy}=\\mathbb{E}[\\mathbf{y}_{t}\\mathbf{y}_{t}^{\\top}]\\) be the mean and correlation matrix of \\(\\mathbf{y}_{t}\\), respectively. Then,_
\\[(\\mathbf{R}_{yy}-\\sigma^{2}\\mathbf{I})^{-1}\\mathbf{\\mu}_{y}=\\mathbf{A}_{0}^{-\\top}\\mathbf{1}.\\]
_Proof of Fact 2:_ Let \\(\\mathbf{R}_{ss}=\\mathbb{E}[\\mathbf{s}_{t}\\mathbf{s}_{t}^{\\top}]\\), \\(\\mathbf{\\mu}_{s}=\\mathbb{E}[\\mathbf{s}_{t}]\\). It can be verified that \\(\\mathbf{R}_{ss}\\) is positive definite. Also, from the data model (7), we can show that
\\[\\mathbf{R}_{yy}=\\mathbf{A}_{0}\\mathbf{R}_{ss}\\mathbf{A}_{0}^{\\top}+\\sigma^{2}\\mathbf{I},\\quad\\mathbf{ \\mu}_{y}=\\mathbf{A}_{0}\\mathbf{\\mu}_{s}.\\]
It follows that
\\[(\\mathbf{R}_{yy}-\\sigma^{2}\\mathbf{I})^{-1}\\mathbf{\\mu}_{y}=(\\mathbf{A}_{0}\\mathbf{R}_{ss}\\mathbf{A}_ {0}^{\\top})^{-1}\\mathbf{A}_{0}\\mathbf{\\mu}_{s}=\\mathbf{A}_{0}^{-\\top}\\mathbf{R}_{ss}^{-1}\\mathbf{ \\mu}_{s}.\\]
It can be shown that \\(\\mathbf{R}_{ss}^{-1}\\mathbf{\\mu}_{s}=\\mathbf{1}\\). Specifically,
\\[\\mathbf{R}_{ss}\\mathbf{1}=\\mathbb{E}[\\mathbf{s}_{t}\\underbrace{\\mathbf{s}_{t}^{\\top}\\mathbf{1}}_{ =1}]=\\mathbb{E}[\\mathbf{s}_{t}]=\\mathbf{\\mu}_{s}.\\]
The proof is complete. Note that this result also applies to a more general case wherein \\(\\mathbf{s}_{t}\\) follows a (possibly non-uniform) \\(\\Delta\\)-supported distribution with positive definite \\(\\mathbf{R}_{ss}\\). \\(\\blacksquare\\)
Fact 2 provides us with an implication that, in practice, we can estimate \\(\\mathbf{p}\\) by
\\[\\hat{\\mathbf{p}}=(\\hat{\\mathbf{R}}_{yy}-\\sigma^{2}\\mathbf{I})^{-1}\\hat{\\mathbf{\\mu}}_{y},\\quad\\hat{\\mathbf{R}}_{yy}=\\frac{1}{T}\\sum_{t=1}^{T}\\mathbf{y}_{t}\\mathbf{y}_{t}^{\\top},\\quad\\hat{\\mathbf{\\mu}}_{y}=\\frac{1}{T}\\sum_{t=1}^{T}\\mathbf{y}_{t}. \\tag{23}\\]
Our final touch is to explain how the penalty terms for negative \\(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}\\) in Formulation 3 and the SISAL formulation are related. We start from the direction of Formulation 3. Consider the following result.
**Fact 3**: **([21], [22, footnote 1])** _It holds that \\(\\Phi(x)\\leq\\frac{1}{2}e^{\\sqrt{\\frac{2}{\\pi}}x}\\). Also, as a direct consequence,_
\\[-\\log\\Phi(x)\\geq-\\log\\left(\\max\\left\\{\\frac{1}{2}e^{\\sqrt{\\frac{2}{\\pi}}x},1 \\right\\}\\right)=\\max\\left\\{\\log(2)-\\sqrt{\\frac{2}{\\pi}}x,0\\right\\}.\\]
Using Fact 3, the penalty terms of Formulation 3 can be approximated by
\\[-\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b}_ {i}\\|}\\right) \\geq\\max\\left\\{\\log(2)-\\sqrt{\\frac{2}{\\pi}}\\frac{\\mathbf{b}_{i}^{\\top }\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b}_{i}\\|},0\\right\\}\\] \\[\\geq\\max\\left\\{-\\sqrt{\\frac{2}{\\pi}}\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y }_{t}}{\\sigma\\|\\mathbf{b}_{i}\\|},0\\right\\}\\] \\[=\\sqrt{\\frac{2}{\\pi}}\\frac{1}{\\sigma\\|\\mathbf{b}_{i}\\|}\\text{hinge}( \\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}). \\tag{24}\\]
The normalizing term \\(\\|\\mathbf{b}_{i}\\|\\) is hard to deal with. By pretending as if \\(\\|\\mathbf{b}_{i}\\|\\) were a constant, and by setting \\(\\sqrt{\\frac{2}{\\pi}}\\frac{1}{\\sigma\\|\\mathbf{b}_{i}\\|T}=\\lambda\\) for some pre-selected \\(\\lambda>0\\), we have
\\[-\\frac{1}{T}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b}_{ i}\\|}\\right)\\approx\\lambda\\cdot\\text{hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}). \\tag{25}\\]
Now, we are ready to draw our main conclusion: _SISAL can be explained as an approximation of the ML estimator_ (22). In particular, the connection is made by applying Fact 1 and (25) to Formulation 3.
### A Hinge-Square Variant of SISAL
The explanation of SISAL as an approximate ML estimator in the preceding subsection gives us a new insight, namely, that the hinge function serves as a surrogate of the penalty function \\(-\\log\\Phi(x)\\) from the ML viewpoint. In that regard, we can choose a different surrogate of \\(-\\log\\Phi(x)\\). From Fig. 1 we see that, as \\(x\\) becomes more negative, the hinge function is a poor approximation of \\(-\\log\\Phi(x)\\). Consider the following result.
**Fact 4**: **(Chernoff bound; see, e.g., [21])** _It holds that, for \\(x\\leq 0\\), \\(\\Phi(x)\\leq\\frac{1}{2}e^{-x^{2}/2}\\). Also, as a direct consequence, we may approximate_
\\[-\\log\\Phi(x)\\approx-\\log\\left(\\frac{1}{2}e^{-\\max\\{-x,0\\}^{2}/2}\\right)=\\log (2)+\\frac{1}{2}\\text{hinge}(x)^{2}.\\]
Fig. 2 compares the above surrogate and \\(-\\log\\Phi(x)\\). We see that this new surrogate approximates \\(-\\log\\Phi(x)\\) better for negative \\(x\\). By approximating
\\[-\\frac{1}{T}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b}_ {i}\\|}\\right)\\approx\\lambda\\cdot\\text{hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t})^{2} +\\text{constant}, \\tag{26}\\]
as before, we have the following variant of SISAL.
**Formulation 4, H\\({}^{2}\\)-SISAL; a Chernoff bound-based heuristic of the approximate ML problem in Formulation 3, or a hinge-square variant of SISAL in Formulation 1:**
\\[\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}} -\\log(|\\det(\\mathbf{B})|)+\\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\text{ hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t})^{2}\\] s.t. \\[\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p},\\]
where \\(\\lambda>0\\) is a pre-selected penalty parameter.
Observe that the difference between Formulation 4 and the SISAL formulation (Formulation 1) is that the former puts a square on the hinge function. From an optimization viewpoint, this H\\({}^{2}\\)-SISAL formulation has the advantage that the hinge-square penalty terms, as well as the whole objective function, are continuously differentiable.
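Since the whole objective of Formulation 4 is continuously differentiable once the constraint is handled separately, its value and gradient are easy to code. The sketch below (our own, with hypothetical names) evaluates \\(f_{0}\\) of (34) and its gradient; the gradient of the hinge-square term follows from \\(\\frac{d}{dx}\\,\\text{hinge}(x)^{2}=-2\\,\\text{hinge}(x)\\).

```python
import numpy as np

def h2_sisal_value_and_grad(B, Y, lam):
    """f0(B) = -log|det B| + lam * sum hinge(BY)^2 and its gradient; see (34)."""
    H = np.maximum(-(B @ Y), 0.0)                 # hinge(b_i^T y_t), elementwise
    _, logabsdet = np.linalg.slogdet(B)
    value = -logabsdet + lam * np.sum(H ** 2)
    grad = -np.linalg.inv(B).T - 2.0 * lam * (H @ Y.T)
    return value, grad
```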
## 4 SISAL as an Algorithm, and More
Having explored the formulation aspects with SISAL, we turn to the algorithmic aspects. To facilitate our subsequent development, let us introduce some notations. Let \\(f:\\mathbb{R}^{n}\\to\\mathbb{R}\\cup\\{+\\infty\\}\\) be an extended real-valued function. We denote \\(\\text{dom}\\,f=\\{\\mathbf{x}\\in\\mathbb{R}^{n}\\mid f(\\mathbf{x})<+\\infty\\}\\) as the domain of \\(f\\); \\(\\nabla f(\\mathbf{x})\\) as the gradient of \\(f\\) (when \\(f\\) is differentiable at \\(\\mathbf{x}\\));
\\[\\text{prox}_{f}(\\mathbf{x})\\in\\arg\\min_{\\mathbf{z}\\in\\mathbb{R}^{n}}\\frac{1}{2}\\|\\mathbf{z}-\\mathbf{x}\\|^{2}+f(\\mathbf{z})\\]
as a proximal operator associated with \\(f\\). We also denote \\(\\langle\\cdot,\\cdot\\rangle\\) as the inner product;
\\[\\Pi_{\\mathcal{X}}(\\mathbf{x})\\in\\arg\\min_{\\mathbf{z}\\in\\mathcal{X}}\\|\\mathbf{z}-\\mathbf{x}\\|^ {2}\\]
as a projection of \\(\\mathbf{x}\\) onto a closed set \\(\\mathcal{X}\\subseteq\\mathbb{R}^{n}\\);
\\[\\mathbb{I}_{\\mathcal{X}}(\\mathbf{x})=\\left\\{\\begin{array}{ll}+\\infty&\\text{if }\\mathbf{x}\\notin\\mathcal{X}\\\\ 0&\\text{if }\\mathbf{x}\\in\\mathcal{X}\\end{array}\\right.\\]
as the indicator function associated with \\(\\mathcal{X}\\). Furthermore, we say that \\(f\\) has Lipschitz continuous gradient on \\(\\mathcal{X}\\) if \\(\\nabla f\\) is Lipschitz continuous on \\(\\mathcal{X}\\); i.e., there exists \\(\\alpha>0\\) such that \\(\\|\\nabla f(\\mathbf{x})-\\nabla f(\\mathbf{y})\\|\\leq\\alpha\\|\\mathbf{x}-\\mathbf{y}\\|\\) for all \\(\\mathbf{x},\\mathbf{y}\\in\\mathcal{X}\\).

Figure 2: Comparison of \\(-\\log\\Phi(x)\\) and a hinge-square based function.
### The SISAL Algorithm
To describe the algorithm used in SISAL, we start with describing the basic natures of the SISAL problem. Recall from Formulation 1 the SISAL problem:
\\[\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N},\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p}}\\ f(\\mathbf{B})= \\underbrace{-\\log|\\det(\\mathbf{B})|}_{:=f_{0}(\\mathbf{B})}+\\lambda\\sum_{t=1}^{T}\\sum_ {i=1}^{N}\\text{hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}), \\tag{27}\\]
where \\(\\mathbf{p}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}\\). The problem is non-convex and non-smooth: the second term of \\(f\\), which has the hinge function involved, is convex and non-differentiable; \\(f_{0}\\) is non-convex and continuously differentiable on its domain \\(\\operatorname{dom}f_{0}\\); \\(\\operatorname{dom}f_{0}\\) is the set of all invertible matrices on \\(\\mathbb{R}^{N\\times N}\\); \\(f_{0}\\) does _not_ have Lipschitz continuous gradient on \\(\\operatorname{dom}f_{0}\\). If one wants to find an off-the-shelf optimization method that offers some form of guarantee of finding a stationary point of problem (27), that will not be immediately obvious. The non-triviality comes in two ways:
1. Implementation: One can actually apply an off-the-shelf method from the recent advances of optimization, particularly, first-order optimization. Take the proximal gradient method as an example. One needs to choose the step size, which is typically guided by the Lipschitz constant of \\(\\nabla f_{0}\\). The absence of Lipschitz continuous \\(\\nabla f_{0}\\) in our problem necessitates a different strategy to deal with the problem. Also, the problem domain, the set of all invertible matrices, is non-standard at first sight.
2. Theory: The Lipschitz continuity of \\(\\nabla f_{0}\\) is needed in most convergence proofs. Again, we do not have Lipschitz continuous \\(\\nabla f_{0}\\).
Back in 2009, Bioucas-Dias dealt with the problem by successive convex approximation. The ideas are to form a quadratic approximation of \\(f_{0}\\) at a given point \\(\\tilde{\\mathbf{B}}\\in\\operatorname{dom}f_{0}\\)
\\[f_{0}(\\mathbf{B})\\approx f_{0}(\\tilde{\\mathbf{B}})+\\langle\\nabla f_{0}(\\tilde{\\mathbf{B}}),\\mathbf{B}-\\tilde{\\mathbf{B}}\\rangle+\\frac{\\mu}{2}\\|\\mathbf{B}-\\tilde{\\mathbf{B}}\\|^{2}:=g_{\\mu}(\\mathbf{B},\\tilde{\\mathbf{B}}),\\]
for some \\(\\mu>0\\); and to solve, iteratively,
\\[\\mathbf{B}^{k+1}=\\arg\\min_{\\mathbf{B}\\in\\mathbb{R}^{N\\times N},\\mathbf{B}^{\\top}\\mathbf{1}= \\mathbf{p}}g_{\\mu_{k}}(\\mathbf{B},\\mathbf{B}^{k})+\\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N} \\text{hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}),\\quad k=0,1,2,\\cdots \\tag{28}\\]
for some \\(\\mu_{k}>0\\) for all \\(k\\). The problems encountered in (28) are convex (in fact, strictly convex). Bioucas-Dias solved these problems by a variable splitting augmented Lagrangian algorithm, which is now more popularly known as the alternating direction method of multipliers (ADMM). That ADMM algorithm exploits the problem structure of (28) and is computationally efficient. But (28) has a caveat: depending on how \\(\\mu_{k}\\) is chosen, a new iterate \\(\\mathbf{B}^{k+1}\\) may not be invertible; and when that happens, the successive convex optimization in (28) will crash.
Algorithm 1 is the actual form of the SISAL algorithm. Intuitively, we expect that there should exist a \\(\\theta_{k}\\in(0,1]\\), no matter how small it may be, such that \\(\\mathbf{B}^{k+1}=\\mathbf{B}^{k}+\\theta_{k}(\\tilde{\\mathbf{B}}^{k}-\\mathbf{B}^{k})\\) remains invertible. As mentioned, empirical studies suggest that SISAL works. This leads to an intriguing, and previously unanswered, basic question: Does Algorithm 1 have any guarantee of finding a stationary point of problem (27)?
### Line Search-Based Proximal Gradient Method
Our study found that the optimization framework by Bonettini _et al._[10] can be used to answer the question. To put into context, consider a problem
\\[\\min_{\\mathbf{x}\\in\\mathbb{R}^{n}}f(\\mathbf{x}):=f_{0}(\\mathbf{x})+f_{1}(\\mathbf{x}), \\tag{29}\\]
where \\(f_{0}\\) is continuously differentiable on its domain \\(\\operatorname{dom}f_{0}\\); \\(\\operatorname{dom}f_{0}\\) is open; \\(f_{1}\\) is convex, proper, lower semicontinuous, and bounded from below; \\(\\operatorname{dom}f_{1}\\) is closed and nonempty. For this problem, a point \\(\\bar{\\mathbf{x}}\\in\\operatorname{dom}f\\) is called a stationary point of problem (29) if the directional derivative of \\(f\\), defined as \\(f^{\\prime}(\\mathbf{x};\\mathbf{d})=\\lim_{t\\downarrow 0}(f(\\mathbf{x}+t\\mathbf{d})-f(\\mathbf{x}))/t\\), satisfies \\(f^{\\prime}(\\bar{\\mathbf{x}};\\mathbf{d})\\geq 0\\) for all \\(\\mathbf{d}\\in\\mathbb{R}^{n}\\). To describe the method, let
\\[h_{\\mu}(\\mathbf{z},\\mathbf{x})=\\langle\\nabla f_{0}(\\mathbf{x}),\\mathbf{z}-\\mathbf{x}\\rangle+\\frac{\\mu}{2}\\|\\mathbf{z}-\\mathbf{x}\\|^{2}+f_{1}(\\mathbf{z})-f_{1}(\\mathbf{x}),\\qquad\\mu>0.\\]
Consider the following line search-based proximal gradient (LSB-PG) method: given \\(\\beta\\in(0,1)\\), \\(\\mathbf{x}^{0}\\in\\operatorname{dom}f\\), recursively compute
\\[\\mathbf{y}^{k}=\\arg\\min_{\\mathbf{z}\\in\\mathbb{R}^{n}}h_{\\mu_{k}}(\\mathbf{z},\\mathbf{x}^{k})=\\operatorname{prox}_{\\mu_{k}^{-1}f_{1}}(\\mathbf{x}^{k}-\\mu_{k}^{-1}\\nabla f_{0}(\\mathbf{x}^{k})),\\quad\\text{for some }\\mu_{k}>0, \\tag{30}\\] \\[\\mathbf{x}^{k+1}=\\mathbf{x}^{k}+\\theta_{k}(\\mathbf{y}^{k}-\\mathbf{x}^{k}), \\tag{31}\\]
for \\(k=0,1,2,\\cdots\\), where \\(\\theta_{k}\\in(0,1]\\) is chosen such that
\\[f(\\mathbf{x}^{k}+\\theta_{k}(\\mathbf{y}^{k}-\\mathbf{x}^{k}))\\leq f(\\mathbf{x}^{k})+\\beta\\theta _{k}h_{\\mu_{k}}(\\mathbf{y}^{k},\\mathbf{x}^{k}). \\tag{32}\\]
To be precise, we use an Armijo line search rule to find \\(\\theta_{k}\\): find the smallest non-negative integer \\(j\\) such that
\\[f(\\mathbf{x}^{k}+\\delta^{j}(\\mathbf{y}^{k}-\\mathbf{x}^{k}))\\leq f(\\mathbf{x}^{k})+\\beta\\delta^{j}h_{\\mu_{k}}(\\mathbf{y}^{k},\\mathbf{x}^{k}), \\tag{33}\\]
for some given \\(\\delta\\in(0,1)\\), and then choose \\(\\theta_{k}=\\delta^{j}\\). It is worth noting that (32) is a sufficient decrease condition with the objective value, since \\(h_{\\mu_{k}}(\\mathbf{y}^{k},\\mathbf{x}^{k})\\leq 0\\). Also, the framework in [10] is much more general than the LSB-PG, and here we reduce the framework to the above minimal form which is enough to answer our question.
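For concreteness, here is a minimal sketch of the LSB-PG iterations (30)-(33) in NumPy; the callables `f0_grad`, `f`, `f1` and `prox_f1`, as well as the default parameter values, are our own placeholders rather than anything prescribed in [10].

```python
import numpy as np

def lsb_pg(x0, f0_grad, f, f1, prox_f1, mu=1.0, beta=0.5, delta=0.5, n_iter=200):
    """Line search-based proximal gradient method, a minimal sketch of (30)-(33).

    f(x) = f0(x) + f1(x); prox_f1(v, mu) returns the prox of (1/mu)*f1 at v."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = prox_f1(x - f0_grad(x) / mu, mu)                     # step (30)
        d = y - x
        h = np.vdot(f0_grad(x), d) + 0.5 * mu * np.vdot(d, d) + f1(y) - f1(x)
        theta = 1.0
        while f(x + theta * d) > f(x) + beta * theta * h and theta > 1e-12:
            theta *= delta                                       # Armijo rule (33)
        x = x + theta * d                                        # step (31)
    return x
```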
The LSB-PG method is equipped with the following stationarity guarantee.
**Proposition 2** (a rephrased, simplified, version of Corollary 3.1 in [10]): _Consider problem (29) and its associated LSB-PG method in (30)-(33). Suppose \\(\\operatorname{dom}f_{0}\\supseteq\\operatorname{dom}f_{1}\\). Also, assume that \\(\\{\\mu_{k}\\}\\subset[\\mu_{\\min},\\mu_{\\max}]\\) for some \\(0<\\mu_{\\min}\\leq\\mu_{\\max}<+\\infty\\), and that \\(\\{\\mathbf{x}_{k}\\}\\) has a limit point. Then any limit point of \\(\\{\\mathbf{x}_{k}\\}\\) is a stationary point of problem (29)._
As we will discuss in the next subsection, the application of the LSB-PG method to the SISAL problem does not have \\(\\operatorname{dom}f_{0}\\supseteq\\operatorname{dom}f_{1}\\) satisfied. This led us to rework the whole proof to see if the above assumption can be relaxed. The answer, fortunately, is yes.
**Corollary 1**: _The same stationarity result in Proposition 2 holds if we replace \\(\\operatorname{dom}f_{0}\\supseteq\\operatorname{dom}f_{1}\\) by \\(\\operatorname{dom}f_{0}\\cap\\operatorname{dom}f_{1}\\neq\\emptyset\\). As a comment, the assumption of open \\(\\operatorname{dom}f_{0}\\) plays a crucial role._
The proof of Corollary 1 is a meticulous re-examination of the whole proof of Corollary 3.1 in [10], including the proof of the theorems and propositions that precede it. We shall omit the proof. The following remark describes the unique aspect of proving Corollary 1, and the reader may choose to skip it and jump to the next subsection for the application of Corollary 1 to the SISAL problem.
**Remark 3**: We discuss the key proof differences of Proposition 2 and Corollary 1. In the proof, an important issue is to show that there exists a \\(\\theta_{k}\\in(0,1]\\) such that the sufficient decrease condition (32) holds. To achieve the latter, a prerequisite is to ensure \\(\\mathbf{x}^{k+1}\\in\\operatorname{dom}f_{0}\\). One can readily see from (30)-(31) that \\(\\mathbf{y}^{k}\\in\\operatorname{dom}f_{1}\\), and then \\(\\mathbf{x}^{k+1}\\in\\operatorname{dom}f_{1}\\) (due to the convexity of \\(\\operatorname{dom}f_{1}\\)). For the case of \\(\\operatorname{dom}f_{0}\\supseteq\\operatorname{dom}f_{1}\\), or Proposition 2, we automatically get \\(\\mathbf{x}^{k+1}\\in\\operatorname{dom}f_{0}\\). For the case of \\(\\operatorname{dom}f_{0}\\nsubseteq\\operatorname{dom}f_{1}\\), or Corollary 1, we need to leverage on the assumption of open \\(\\operatorname{dom}f_{0}\\). Since \\(\\operatorname{dom}f_{0}\\) is open, there exists \\(\\epsilon_{k}>0\\) such that, for any \\(\\mathbf{u}\\in\\mathbb{R}^{n}\\) with \\(\\|\\mathbf{u}\\|\\leq\\epsilon_{k}\\), we have \\(\\mathbf{x}^{k}+\\mathbf{u}\\in\\operatorname{dom}f_{0}\\). This implies that there must exist a \\(\\theta_{k}>0\\), no matter how small it is, such that \\(\\mathbf{x}^{k}+\\theta_{k}(\\mathbf{y}^{k}-\\mathbf{x}^{k})\\in\\operatorname{dom}f_{0}\\). The above is the distinct part of the proof of Corollary 1.
### Stationarity Guarantee of SISAL
Now we apply the framework in the preceding subsection to the SISAL problem. Let
\\[f_{0}(\\mathbf{B}) =\\ -\\log|\\det(\\mathbf{B})|,\\] \\[f_{1}(\\mathbf{B}) =\\ \\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\operatorname{hinge}(\\mathbf{b}_{ i}^{\\top}\\mathbf{y}_{t})+\\mathbb{I}_{\\mathcal{B}}(\\mathbf{B}),\\quad\\mathcal{B}=\\ \\{\\mathbf{B}\\in\\mathbb{R}^{N\\times N}\\mid\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p}\\},\\]
and let \\(\\mu_{k}=\\mu\\) for some pre-selected constant \\(\\mu>0\\). We observe that the SISAL algorithm in Algorithm 1 is very similar to the LSB-PG method in (30)-(33), with \\(\\beta\\) being nearly zero. Or, more specifically, if we modify Algorithm 1 by changing the line search in Step 5 to the Armijo rule in (33), the algorithm is, faithfully, an instance of the LSB-PG method. To answer the question of stationarity guarantees, note that \\(\\operatorname{dom}f_{0}\\) is the set of all invertible matrices on \\(\\mathbb{R}^{N\\times N}\\), while \\(\\operatorname{dom}f_{1}=\\mathcal{B}\\). Clearly, we have \\(\\operatorname{dom}f_{0}\\nsubseteq\\operatorname{dom}f_{1}\\), and Proposition 2 is not applicable. Corollary 1 is applicable if \\(\\operatorname{dom}f_{0}\\) is open. In fact, it is known in topology that the set of invertible matrices is open.2 Let us conclude. By Corollary 1, the SISAL algorithm, upon a minor modification with its line search rule, is equipped with a stationarity guarantee.
Footnote 2: For the reader's interest, here is a simple proof by matrix analysis. Let \\(\\mathcal{S}\\) be the set of invertible matrices on \\(\\mathbb{R}^{N\\times N}\\). Let \\(\\mathbf{X}\\in\\mathcal{S}\\), and let \\(\\sigma_{1}\\geq\\cdots\\geq\\sigma_{N}>0\\) be its singular values. Let \\(\\epsilon>0\\). Let \\(\\mathbf{Y}\\) be any matrix such that \\(\\|\\mathbf{X}-\\mathbf{Y}\\|\\leq\\epsilon\\), and let \\(d_{1}\\geq\\cdots\\geq d_{N}\\geq 0\\) be its singular values. By the singular value inequality \\(\\|\\mathbf{X}-\\mathbf{Y}\\|^{2}\\geq\\sum_{i=1}^{N}|\\sigma_{i}-d_{i}|^{2}\\), we have \\(|\\sigma_{N}-d_{N}|\\leq\\epsilon\\). Hence, by choosing \\(\\epsilon<\\sigma_{N}\\), we get \\(d_{N}\\geq\\sigma_{N}-\\epsilon>0\\), so \\(\\mathbf{Y}\\) is invertible. It follows that \\(\\mathcal{S}\\) is open.
### Application to H\\({}^{2}\\)-SISAL and Formulation 3
It is exciting to point out that we can also use the LSB-PG method in Section 4.2 to deal with the H\\({}^{2}\\)-SISAL problem in Formulation 4. Specifically we choose
\\[f_{0}(\\mathbf{B})=-\\log(|\\det(\\mathbf{B})|)+\\lambda\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\text{ hinge}(\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t})^{2},\\quad f_{1}(\\mathbf{B})=\\mathbb{I}_{\\mathcal{B}} (\\mathbf{B}); \\tag{34}\\]
note that we put the (continuously differentiable) hinge-square penalty term into \\(f_{0}\\), which is different compared to SISAL. The resulting LSB-PG method has the proximal operation (30) reduced to
\\[\\bar{\\mathbf{B}}^{k}=\\text{prox}_{\\mu_{k}^{-1}f_{1}}(\\mathbf{B}^{k}-\\mu_{k}^{-1}\\nabla f_{0}(\\mathbf{B}^{k}))=\\Pi_{\\mathcal{B}}(\\mathbf{B}^{k}-\\mu_{k}^{-1}\\nabla f_{0}(\\mathbf{B}^{k})),\\]
which has a simple closed form and is cheap to compute. We should recall that the proximal operation in SISAL has no closed form and requires us to call a solver (ADMM). We take advantage of the computational efficiency of the proximal operation by considering the following rule of choosing \\(\\mu_{k}\\): find the smallest non-negative integer \\(j\\) such that
\\[f(\\bar{\\mathbf{B}}^{k,j})\\leq f(\\mathbf{B}^{k})+\\beta h_{\\nu c^{j}}(\\bar{\\mathbf{B}}^{k,j},\\mathbf{B}^{k}), \\tag{35a}\\] \\[\\bar{\\mathbf{B}}^{k,j}=\\Pi_{\\mathcal{B}}(\\mathbf{B}^{k}-(\\nu c^{j})^{-1}\\nabla f_{0}(\\mathbf{B}^{k})), \\tag{35b}\\]
for some given \\(\\nu>0,c>1\\), and then choose \\(\\mu_{k}=\\nu c^{j}\\). Consequently, the sufficient decrease condition (32) will be satisfied for \\(\\theta_{k}=1\\), and we can simply set \\(\\theta_{k}=1\\), \\(\\mathbf{B}^{k+1}=\\bar{\\mathbf{B}}^{k,j}\\). Note that this is a typical scheme in proximal gradient methods (see, e.g., [23]), and (35) is popularly called the backtracking line search. We should also mention that the above LSB-PG scheme is identical to the projected gradient method, with a suitably chosen step size. By Corollary 1, this LSB-PG scheme is equipped with a stationarity guarantee under the assumption that the \\(\\mu_{k}\\)'s found by the backtracking line search are bounded.
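The projection \\(\\Pi_{\\mathcal{B}}\\) onto \\(\\mathcal{B}=\\{\\mathbf{B}\\mid\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p}\\}\\) is a simple column-wise shift; the text does not spell out the expression, so the closed form below is our own derivation (each column of the input is shifted so that its entries sum to the corresponding entry of \\(\\mathbf{p}\\)), and it is what makes the proximal operation cheap.

```python
import numpy as np

def project_onto_B(X, p):
    """Euclidean projection of an N x N matrix X onto {B : B^T 1 = p}."""
    N = X.shape[0]
    col_sums = X.sum(axis=0)                       # 1^T X, one sum per column
    return X - np.outer(np.ones(N), col_sums - p) / N
```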
Our actual algorithm, shown in Algorithm 2, is an extrapolated variant of the above scheme. Note that, by choosing \\(\\alpha_{k}=0\\), Algorithm 2 reduces to the previous LSB-PG scheme.
```
1:given: an invertible starting point \\(\\mathbf{B}^{0}\\); a constant \\(\\beta\\in(0,1)\\); and an extrapolation sequence \\(\\{\\alpha_{k}\\}\\), typically the FISTA sequence [23]
2:\\(k=0\\), \\(\\mathbf{B}^{-1}=\\mathbf{B}^{0}\\)
3:repeat
4:\\(\\mathbf{B}^{k}_{\\text{ex}}=\\mathbf{B}^{k}+\\alpha_{k}(\\mathbf{B}^{k}-\\mathbf{B}^{k-1})\\)
5:\(\mathbf{B}^{k+1}=\Pi_{\mathcal{B}}(\mathbf{B}^{k}_{\text{ex}}-\mu_{k}^{-1}\nabla f_{0}(\mathbf{B}^{k}_{\text{ex}}))\), where \(\mu_{k}\) is chosen such that \(f(\mathbf{B}^{k+1})\leq f(\mathbf{B}^{k}_{\text{ex}})+\beta h_{\mu_{k}}(\mathbf{B}^{k+1},\mathbf{B}^{k}_{\text{ex}})\), done by the backtracking line search (35); \(f_{0}\) is given in (34)
6:\(k=k+1\)
7:until a stopping rule is satisfied
8:output:\(\mathbf{B}^{k}\)
```
**Algorithm 2** H\\({}^{2}\\)-SISAL, an extrapolated proximal gradient scheme for Formulation 4
Our consideration is more from the practical side. The LSB-PG framework does not cover the extrapolated variant, and hence it is not known if Algorithm 2 is equipped with stationarity guarantees. On the other hand, we want to leverage on the merits of extrapolation demonstrated in prior works. It is known that, when \\(f_{0}\\) is convex and has Lipschitz continuous gradient, the extrapolated proximal gradient method can lead to faster convergence rates than the proximal gradient method,both provably and empirically [24]; and that, when \\(f_{0}\\) is non-convex and has Lipschitz continuous gradient, the extrapolated proximal gradient method is shown to yield some stationarity guarantee [25, 26], and similar methods were empirically found to lead to faster convergence speeds in some applications [27, 28, 11, 22]. Our empirical experience with Algorithm 2 is good in terms of runtime speed and stability.
We should further note that all the developments in this subsection apply to the approximate ML problem in Formulation 3; we only need to change \\(f_{0}\\) to
\\[f_{0}(\\boldsymbol{B})=-\\log(|\\det(\\boldsymbol{B})|)-\\frac{1}{T}\\sum_{t=1}^{T} \\sum_{i=1}^{N}\\log\\Phi\\left(\\frac{\\boldsymbol{b}_{i}^{\\top}\\boldsymbol{y}_{t} }{\\sigma\\|\\boldsymbol{b}_{i}\\|}\\right)\\]
(this \\(f_{0}\\) can be shown to be continuously differentiable on the set of all invertible matrices). Unfortunately, by our numerical experience, the adaptation of Algorithm 2 (with or without extrapolation) to Formulation 3 is not promising: its convergence tends to be slow, and numerical instability could happen if one is not careful enough. The culprit is most likely the normalizing terms \\(\\|\\boldsymbol{b}_{i}\\|\\): the term \\(1/\\|\\boldsymbol{b}_{i}\\|\\) becomes very large for small \\(\\|\\boldsymbol{b}_{i}\\|\\), and the occurrence of such an event can cause numerical instability. These setbacks drove us to rethink our strategy for dealing with Formulation 3.
## 5 Probabilistic SISAL via Inexact Block Coordinate Descent
In this section we devise an algorithm for tackling the approximate ML problem in Formulation 3, with a focus on practicality and efficiency in our design.
### Reformulation and Inexact Block Coordinate Descent
As mentioned previously, the normalizing terms \\(\\|\\boldsymbol{b}_{i}\\|\\) in the objective function are troublesome. We deal with them by considering the change of variable
\\[\\boldsymbol{B}=\\boldsymbol{D}\\boldsymbol{C},\\ \\ \\boldsymbol{C}=\\begin{bmatrix} \\boldsymbol{c}_{1}^{\\top}\\\\ \\vdots\\\\ \\boldsymbol{c}_{N}^{\\top}\\end{bmatrix},\\ \\ \\boldsymbol{D}=\\begin{bmatrix}d_{1}&&\\\\ &\\ddots&\\\\ &&d_{N}\\end{bmatrix},\\ \\ d_{i}>0,\\ \\ \\boldsymbol{c}_{i}\\in\\mathcal{U}:=\\{ \\boldsymbol{c}\\in\\mathbb{R}^{N}\\ |\\ \\|\\boldsymbol{c}\\|=1\\},\\ \\forall i.\\]
Applying the above transformation to Formulation 3 leads to the following reformulation
\\[\\begin{split}\\min_{\\boldsymbol{C}\\in\\mathbb{R}^{N\\times N}, \\boldsymbol{d}\\in\\mathbb{R}^{N}}&-\\log|\\det(\\boldsymbol{C})|-\\sum_{i=1}^{N }\\log d_{i}-\\frac{1}{T}\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi(\\boldsymbol{c}_{ i}^{\\top}\\bar{\\boldsymbol{y}}_{t})\\\\ \\text{s.t.}&\\boldsymbol{C}^{\\top}\\boldsymbol{d}=\\boldsymbol{p},\\ \\boldsymbol{C}\\in \\mathcal{U}^{N},\\end{split} \\tag{36}\\]
where, for convenience, we denote \\(\\bar{\\boldsymbol{y}}_{t}=\\boldsymbol{y}_{t}/\\sigma\\), \\(\\mathcal{U}^{N}=\\{\\boldsymbol{C}=[\\ \\boldsymbol{c}_{1},\\ldots,\\boldsymbol{c}_{N}\\ ]^{\\top}\\ |\\ \\boldsymbol{c}_{i}\\in\\mathcal{U}\\ \\forall i\\}\\), and \\(\\boldsymbol{d}=(d_{1},\\ldots,d_{N})\\); note \\(\\operatorname{dom}\\left(-\\log\\right)=\\mathbb{R}_{++}\\). The upshot of the reformulation in (36) is that the normalizing terms disappear. The new challenges are that we are now faced with unit modulus constraints, and handling both the equality constraint \\(\\boldsymbol{C}^{\\top}\\boldsymbol{d}=\\boldsymbol{p}\\) and the unit modulus constraints is difficult. We make a compromise by considering a penalized alteration of problem (36)
\\[\\min_{\\boldsymbol{C}\\in\\mathcal{U}^{N},\\boldsymbol{d}\\in\\mathbb{R}^{N}}\\ F_{\\eta}(\\boldsymbol{C},\\boldsymbol{d}):=-\\log|\\det(\\boldsymbol{C})|-\\sum_{i=1}^{N}\\log d_{i}-\\frac{1}{T}\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi(\\boldsymbol{c}_{i}^{\\top}\\bar{\\boldsymbol{y}}_{t})+\\eta\\|\\boldsymbol{C}^{\\top}\\boldsymbol{d}-\\boldsymbol{p}\\|^{2} \\tag{37}\\]
for a given penalty parameter \\(\\eta>0\\) that is presumably large. Observe that \\(F_{\\eta}\\) is convex in \\(\\mathbf{d}\\), and non-convex in \\(\\mathbf{C}\\).
We employ a block coordinate descent (BCD) strategy to handle problem (37). The first layer of our algorithm is shown in Algorithm 3. We minimize \\(F_{\\eta}\\) over \\(\\mathbf{C}\\) and \\(\\mathbf{d}\\) in an alternating fashion. To be more precise, the minimization of \\(F_{\\eta}\\) over \\(\\mathbf{C}\\in\\mathcal{U}^{N}\\) is only approximate since the problem is non-convex. Moreover, we gradually increase \\(\\eta\\). By experience, gradually increasing \\(\\eta\\) is better than applying a large fixed \\(\\eta\\). The second layer of our design deals with the computations of the coordinate minimizers in Steps 5-6 of Algorithm 3, which is detailed next.
```
1:given: an invertible starting point \\(\\mathbf{B}^{0}\\), a starting penalty value \\(\\eta>0\\), \\(c>1\\), and a rule for increasing \\(\\eta\\)
2:\\(k=0\\), \\(\\mathbf{d}^{0}=(\\|\\mathbf{b}^{0}_{1}\\|,\\ldots,\\|\\mathbf{b}^{0}_{N}\\|)\\), \\(\\mathbf{C}^{0}=[\\ \\mathbf{b}^{0}_{1}/d^{0}_{1},\\ldots,\\mathbf{b}^{0}_{N}/d^{0}_{N}\\ ]^{\\top}\\)
3:repeat
4:repeat
5:\\(\\mathbf{d}^{k+1}=\\arg\\min_{\\mathbf{d}\\in\\mathbb{R}^{N}}F_{\\eta}(\\mathbf{C}^{k},\\mathbf{d})\\) by Algorithm 4 with \\(\\mathbf{d}^{k}\\) as the starting point
6:\\(\\mathbf{C}^{k+1}\\approx\\arg\\min_{\\mathbf{C}\\in\\mathcal{U}^{N}}F_{\\eta}(\\mathbf{C},\\mathbf{d}^ {k+1})\\) by Algorithm 5 with \\(\\mathbf{C}^{k}\\) as the starting point
7:\\(k=k+1\\)
8:until a stopping rule is satisfied
9:\\(\\eta=\\eta\\,c\\)
10:until a stopping rule is satisfied
11:output:\(\mathbf{B}^{k}=\mathbf{D}^{k}\mathbf{C}^{k}\), where \(\mathbf{D}^{k}=\text{Diag}(\mathbf{d}^{k})\)
```
**Algorithm 3** Pr-SISAL, an inexact BCD algorithm for the altered problem (37) of Formulation 3
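Steps 2 and 11 of Algorithm 3 convert between \\(\\mathbf{B}\\) and the pair \\((\\mathbf{C},\\mathbf{d})\\); a minimal sketch of this bookkeeping (our own helper names) is as follows.

```python
import numpy as np

def split_B(B):
    """B -> (C, d) with b_i = d_i * c_i and ||c_i|| = 1 (Step 2 of Algorithm 3)."""
    d = np.linalg.norm(B, axis=1)
    return B / d[:, None], d

def merge_B(C, d):
    """(C, d) -> B = Diag(d) C (Step 11 of Algorithm 3)."""
    return d[:, None] * C
```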
### Coordinate Minimization Over \\(\\mathbf{d}\\)
Let us first consider the coordinate minimization over \\(\\mathbf{d}\\) in Step 5 of Algorithm 3. The problem amounts to solving
\\[\\min_{\\mathbf{d}\\in\\mathbb{R}^{N}}\\ f(\\mathbf{d}):=\\underbrace{\\eta\\|\\mathbf{C}^{\\top}\\mathbf{ d}-\\mathbf{p}\\|^{2}}_{:=f_{0}(\\mathbf{d})}\\underbrace{-\\sum_{i=1}^{N}\\log(d_{i})}_{:=f_{1 }(\\mathbf{d})}. \\tag{38}\\]
The above problem is convex. It also falls into the scope of proximal gradient methods (cf. Section 4.2), with Lipschitz continuous \\(\\nabla f_{0}\\). We employ the (standard) extrapolated proximal gradient method to compute the solution to problem (38). The algorithm is shown in Algorithm 4. Note that
\\[\\text{prox}_{\\mu^{-1}f_{1}}(\\mathbf{d})=\\left(\\tfrac{d_{1}+\\sqrt{d_{1}^{2}+4/\\mu} }{2},\\cdots,\\tfrac{d_{N}+\\sqrt{d_{N}^{2}+4/\\mu}}{2}\\right). \\tag{39}\\]
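As a quick sanity check of (39), the prox can be coded in one line; the sketch below (our own) applies it elementwise, which is all Algorithm 4 needs.

```python
import numpy as np

def prox_neg_log(d, mu):
    """Prox of (1/mu) * (-sum_i log d_i) at d, i.e., the closed form in (39)."""
    d = np.asarray(d, dtype=float)
    return 0.5 * (d + np.sqrt(d ** 2 + 4.0 / mu))
```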
### Coordinate Minimization Over \\(\\mathbf{C}\\)
Next, consider the coordinate minimization over \\(\\mathbf{C}\\). The problem can be presented as
\\[\\min_{\\mathbf{C}\\in\\mathbb{R}^{N\\times N}}\\ f(\\mathbf{C}):=\\underbrace{-\\log|\\det(\\mathbf{C})|-\\frac{1}{T}\\sum_{t=1}^{T}\\sum_{i=1}^{N}\\log\\Phi(\\mathbf{c}_{i}^{\\top}\\bar{\\mathbf{y}}_{t})+\\eta\\|\\mathbf{C}^{\\top}\\mathbf{d}-\\mathbf{p}\\|^{2}}_{:=f_{0}(\\mathbf{C})}+\\underbrace{\\mathbb{I}_{\\mathcal{U}^{N}}(\\mathbf{C})}_{:=f_{1}(\\mathbf{C})} \\tag{40}\\]
We begin by considering the proximal gradient method:
\\[\\mathbf{C}^{k+1}=\\text{prox}_{\\mu_{k}^{-1}f_{1}}(\\mathbf{C}^{k}-\\mu_{k}^{-1}\\nabla f_{0}(\\mathbf{C}^{k}))=\\Pi_{\\mathcal{U}^{N}}(\\mathbf{C}^{k}-\\mu_{k}^{-1}\\nabla f_{0}(\\mathbf{C}^{k})), \\tag{41}\\]
where \\(\\mu_{k}>0\\) is chosen such that the sufficient decrease condition is satisfied, and it is done by the backtracking line search (cf. (35)); we have
\\[\\Pi_{\\mathcal{U}^{N}}(\\mathbf{C})=[\\ \\Pi_{\\mathcal{U}}(\\mathbf{c}_{1}),\\ldots,\\Pi_{\\mathcal{U}}(\\mathbf{c}_{N})\\ ]^{\\top},\\quad\\Pi_{\\mathcal{U}}(\\mathbf{c})=\\left\\{\\begin{array}{ll}\\mathbf{c}/\\|\\mathbf{c}\\|&\\text{if }\\mathbf{c}\\neq\\mathbf{0}\\\\ \\text{any }\\mathbf{u}\\in\\mathcal{U}&\\text{if }\\mathbf{c}=\\mathbf{0}\\end{array}\\right.\\]
The method, by operations, is the same as the standard proximal gradient method. But the problem does not fall within the scope of the stationarity-guaranteed LSB-PG framework, because \\(\\mathcal{U}^{N}\\) is non-convex. We adopt this method mostly based on practicality: It is simple, and the same method or similar methods have been used in practice [29, 30, 31], with reasonable results demonstrated. Moreover, as a supporting argument, the method is shown to be equipped with some stationarity guarantee under the assumption of Lipschitz continuous \\(\\nabla f_{0}\\) [30].
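The projection \\(\\Pi_{\\mathcal{U}^{N}}\\) is just a row-wise normalization; a minimal sketch (our own, with the zero-row case handled by an arbitrary unit vector, as the definition allows) is given below.

```python
import numpy as np

def project_rows_to_unit_sphere(C):
    """Row-wise projection onto the unit sphere, i.e., Pi_{U^N}(C)."""
    C = np.array(C, dtype=float)
    for i in range(C.shape[0]):
        n = np.linalg.norm(C[i])
        if n > 0.0:
            C[i] /= n
        else:
            C[i] = 0.0
            C[i, 0] = 1.0          # any unit vector is a valid projection of a zero row
    return C
```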
The above method is just a vanilla version of our actual algorithm. There is a practical issue: the computation of \\(\\nabla f_{0}\\) is expensive, and the direct use of the proximal gradient method can be slow in terms of the runtimes. To give an idea, let us show \\(\\nabla f_{0}\\):
\\[\\nabla f_{0}(\\mathbf{C})=-\\mathbf{C}^{-\\top}-\\frac{1}{T}\\sum_{t=1}^{T}\\begin{bmatrix}\\frac{1}{\\Phi(\\mathbf{c}_{1}^{\\top}\\bar{\\mathbf{y}}_{t})}\\frac{1}{\\sqrt{2\\pi}}e^{-(\\mathbf{c}_{1}^{\\top}\\bar{\\mathbf{y}}_{t})^{2}/2}\\bar{\\mathbf{y}}_{t}^{\\top}\\\\ \\vdots\\\\ \\frac{1}{\\Phi(\\mathbf{c}_{N}^{\\top}\\bar{\\mathbf{y}}_{t})}\\frac{1}{\\sqrt{2\\pi}}e^{-(\\mathbf{c}_{N}^{\\top}\\bar{\\mathbf{y}}_{t})^{2}/2}\\bar{\\mathbf{y}}_{t}^{\\top}\\end{bmatrix}+2\\eta\\,\\mathbf{d}(\\mathbf{C}^{\\top}\\mathbf{d}-\\mathbf{p})^{\\top}.\\]
We see that computing \\(\\nabla f_{0}\\) requires evaluating \\(\\Phi\\) for a number of \\(NT\\) times (recall that \\(T\\) is large in practice). The function \\(\\Phi\\) does not have a closed form and is evaluated by a numerical method. While this should not be an issue when we are required to call \\(\\Phi\\) a few times, the problem here requires us to evaluate \\(\\Phi\\) numerous times (and at every iteration).
To reduce the number of times \\(\\Phi\\) is called, and thereby alleviate the computational burden, we consider a combination of the majorization-minimization (MM) method and the proximal gradient method. Recall the idea of MM: i) build a surrogate of \\(f\\) by finding a majorant \\(g(\\mathbf{C},\\tilde{\\mathbf{C}})\\) of \\(f\\) at \\(\\tilde{\\mathbf{C}}\\), i.e., \\(f(\\mathbf{C})\\leq g(\\mathbf{C},\\tilde{\\mathbf{C}})\\) for all \\(\\mathbf{C},\\tilde{\\mathbf{C}}\\), and \\(f(\\mathbf{C})=g(\\mathbf{C},\\mathbf{C})\\); ii) handle the problem by recursively solving \\(\\mathbf{C}^{k+1}=\\min_{\\mathbf{C}}g(\\mathbf{C},\\mathbf{C}^{k})\\). Consider the following fact.
**Fact 5** ([32] and the references therein): _It holds that, for any \\(\\tilde{x}\\in\\mathbb{R}\\),_
\\[-\\log\\Phi(x)\\leq g(x,\\tilde{x}):=\\frac{1}{2}|x+w(\\tilde{x})|^{2}+r(\\tilde{x}),\\]
_where \\(r(\\tilde{x})\\) does not depend on \\(x\\);_
\\[w(\\tilde{x})=-\\tilde{x}-\\frac{1}{\\Phi(\\tilde{x})}\\frac{1}{\\sqrt{2\\pi}}e^{- \\tilde{x}^{2}/2}.\\]
_Also, we have \\(g(x,x)=-\\log\\Phi(x)\\)._
Let us apply Fact 5 to build a majorant of \\(f_{0}\\):
\\[g_{0}(\\mathbf{C},\\tilde{\\mathbf{C}})=-\\log|\\det(\\mathbf{C})|+\\frac{1}{2T}\\sum_{t=1}^{T} \\sum_{i=1}^{N}\\left|\\mathbf{c}_{i}^{\\top}\\bar{\\mathbf{y}}_{t}-w(\\tilde{\\mathbf{c}}_{i}^{ \\top}\\bar{\\mathbf{y}}_{t})\\right|^{2}+\\eta\\|\\mathbf{C}^{\\top}\\mathbf{d}-\\mathbf{p}\\|^{2}+r( \\tilde{\\mathbf{C}}), \\tag{42}\\]
for some \\(r\\) that does not depend on \\(\\mathbf{C}\\). Also, let \\(g(\\mathbf{C},\\tilde{\\mathbf{C}})=g_{0}(\\mathbf{C},\\tilde{\\mathbf{C}})+f_{1}(\\mathbf{C})\\), which is a majorant of \\(f\\). We carry out MM, in an inexact sense, by approximating \\(\\mathbf{C}^{k+1}=\\arg\\min_{\\mathbf{C}}g(\\mathbf{C},\\mathbf{C}^{k})\\) via the proximal gradient method. By doing so, we hope that the number of times \\(\\Phi\\) is called can be reduced: the evaluations of \\(\\Phi\\) happen in the majorant construction step (42), but not in the (more computationally intensive) proximal gradient iterations. Our high-level algorithm description is complete, and the algorithm is shown below. Note that the actual proximal gradient method we employ is extrapolated.
```
1:given: an invertible starting point \\(\\mathbf{C}^{0}\\); and an extrapolation sequence \\(\\{\\alpha_{k}\\}\\), typically the FISTA sequence [23]
2:\\(k=0\\),
3:repeat % MM iterations
4: compute \\(w((\\mathbf{c}_{i}^{k})^{\\top}\\bar{\\mathbf{y}}_{t})\\) for all \\(i,t\\)
5:\\(l=0\\), \\(\\mathbf{C}^{k,-1}=\\mathbf{C}^{k,0}=\\mathbf{C}^{k}\\)
6:repeat % extrapolated proximal gradient iterations
7:\\(\\mathbf{C}_{\\mathrm{ex}}^{k,l}=\\mathbf{C}^{k,l}+\\alpha_{l}(\\mathbf{C}^{k,l}-\\mathbf{C}^{k,l-1})\\)
8:\(\mathbf{C}^{k,l+1}=\Pi_{\mathcal{U}^{N}}(\mathbf{C}_{\mathrm{ex}}^{k,l}-\mu_{k,l}^{-1}\nabla g_{0}(\mathbf{C}_{\mathrm{ex}}^{k,l},\mathbf{C}^{k}))\), where \(\mu_{k,l}\) is chosen such that \[g_{0}(\mathbf{C}^{k,l+1},\mathbf{C}^{k})\leq g_{0}(\mathbf{C}_{\mathrm{ex}}^{k,l},\mathbf{C}^{k})+\langle\nabla g_{0}(\mathbf{C}_{\mathrm{ex}}^{k,l},\mathbf{C}^{k}),\mathbf{C}^{k,l+1}-\mathbf{C}_{\mathrm{ex}}^{k,l}\rangle+\tfrac{\mu_{k,l}}{2}\|\mathbf{C}^{k,l+1}-\mathbf{C}_{\mathrm{ex}}^{k,l}\|^{2}\] (i.e., sufficient decrease) is satisfied, and it is done by the backtracking line search; \(g_{0}\) is given in (42)
9:\\(l=l+1\\)
10:until a stopping rule is satisfied
11:\\(\\mathbf{C}^{k+1}=\\mathbf{C}^{k,l}\\)
12:\\(k=k+1\\)
13:until a stopping rule is satisfied
14:output:\\(\\mathbf{C}^{k}\\)
```
**Algorithm 5** a combined MM and extrapolated proximal gradient algorithm for \\(\\min_{\\mathbf{C}\\in\\mathcal{U}^{N}}F_{\\eta}(\\mathbf{C},\\mathbf{d})\\)
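The only per-data-point quantity that the MM step (Step 4 of Algorithm 5) needs is the shift \\(w(\\cdot)\\) of Fact 5. A numerically careful way to evaluate it (our own sketch, using the log-pdf/log-cdf trick to avoid underflow of \\(\\Phi\\)) is:

```python
import numpy as np
from scipy.stats import norm

def w_shift(x):
    """w(x) = -x - phi(x)/Phi(x) from Fact 5, evaluated elementwise and stably."""
    x = np.asarray(x, dtype=float)
    return -x - np.exp(norm.logpdf(x) - norm.logcdf(x))
```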
## 6 Numerical Results
Now we proceed to numerical results. While we focused on giving a novel explanation of SISAL, the study itself showed new possibilities which we would like to examine by numerical experiments. The most interesting one is the approximate ML estimator in Formulation 3, which resembles a SISAL variant that adopts a probabilistic penalty term. This probabilistic SISAL does not have the regularization parameter \\(\\lambda\\), and we want to see how well it works compared to SISAL (which requires tuning \\(\\lambda\\)). Also we are interested in the hinge-square SISAL variant in Formulation 4, in terms of runtimes.
### Settings of the Algorithms
The implementations of the hinge-square and probabilistic SISAL formulations in Formulations 4 and 3 are accomplished by Algorithms 2 and 3, respectively. For convenience, Algorithms 2 and 3 will be called H\\({}^{2}\\)-SISAL and Pr-SISAL, respectively, in the sequel. We first specify the dimensionality reduction (DR) preprocessing, which is required by the SISAL algorithms. The standard PCA is used to perform DR. To be specific, let \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\in\\mathbb{R}^{M}\\) be the data points. We compute \\(\\hat{\\mathbf{R}}_{yy}=\\frac{1}{T}\\sum_{t=1}^{T}\\mathbf{y}_{t}\\mathbf{y}_{t}^{\\top}\\), compute the \\(N\\)-principal eigenvector matrix \\(\\mathbf{U}\\in\\mathbb{R}^{M\\times N}\\) of \\(\\hat{\\mathbf{R}}_{yy}\\), and take \\(\\hat{\\mathbf{y}}_{t}=\\mathbf{U}^{\\top}\\mathbf{y}_{t}\\in\\mathbb{R}^{N}\\) as the dimension-reduced data points. Pr-SISAL or H\\({}^{2}\\)-SISAL is then applied to \\(\\hat{\\mathbf{y}}_{1},\\ldots,\\hat{\\mathbf{y}}_{T}\\) to get an estimate of \\(\\hat{\\mathbf{A}}_{0}=\\mathbf{U}^{\\top}\\mathbf{A}_{0}\\), and we use the relation \\(\\mathbf{A}_{0}=\\mathbf{U}\\tilde{\\mathbf{A}}_{0}\\) to form the estimate of \\(\\mathbf{A}_{0}\\). In this connection, it is worth noting that, for the case of \\(M\\geq N+1\\), we can also estimate the noise power \\(\\sigma^{2}\\) from \\(\\hat{\\mathbf{R}}_{yy}\\), specifically, by taking the \\((N+1)\\)th eigenvalue of \\(\\hat{\\mathbf{R}}_{yy}\\) as the estimate of \\(\\sigma^{2}\\); this is a commonly-used trick in statistical signal processing [33, Chapter 4.5].
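A minimal sketch of this preprocessing (our own; `Y` has the raw data points as columns) is given below; it returns the dimension-reduced data, the principal subspace, and the noise-power estimate taken from the \\((N+1)\\)th eigenvalue.

```python
import numpy as np

def pca_dr_and_noise_power(Y, N):
    """PCA dimensionality reduction to R^N and noise power estimate (assumes M >= N+1)."""
    M, T = Y.shape
    R_yy = (Y @ Y.T) / T
    eigval, eigvec = np.linalg.eigh(R_yy)          # eigenvalues in ascending order
    U = eigvec[:, ::-1][:, :N]                     # N principal eigenvectors
    sigma2 = eigval[::-1][N]                       # (N+1)th largest eigenvalue
    return U.T @ Y, U, sigma2
```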
The settings of Pr-SISAL in Algorithm 3 are as follows. The vector \\(\\mathbf{p}\\) is estimated by (23). The starting point is generated by expanded vertex component analysis (VCA), a built-in function of SISAL and a slight modification of the output by the VCA algorithm [12]. We set the initial value of \\(\\eta\\) to \\(1\\) and set \\(c=5\\). We stop the inner loop (Steps 4-8) if \\(\\operatorname{rc}(\\mathbf{B}^{k+1},\\mathbf{B}^{k}):=\\|\\mathbf{B}^{k+1}-\\mathbf{B}^{k}\\|/\\|\\mathbf{ B}^{k}\\|\\leq 10^{-7}\\) ( \\(\\operatorname{rc}\\) stands for relative change) or if the number of inner loops exceeds \\(4\\times 10^{5}\\). We stop the outer loop if the number of outer loops exceeds \\(10\\). For the sub-algorithm Algorithm 4, we stop if \\(\\operatorname{rc}(\\mathbf{d}^{k+1},\\mathbf{d}^{k})\\leq 10^{-5}\\). For the sub-algorithm Algorithm 5, we stop the MM loop and the proximal gradient loop if \\(\\operatorname{rc}(\\mathbf{C}^{k+1},\\mathbf{C}^{k})\\leq 10^{-5}\\) and \\(\\operatorname{rc}(\\mathbf{C}^{k,l+1},\\mathbf{C}^{k,l})\\leq 10^{-3}\\), respectively. The extrapolation sequence \\(\\{\\alpha_{k}\\}\\) in Algorithms 4 and 5 is chosen as the (standard) FISTA sequence [23].
The settings of H\\({}^{2}\\)-SISAL in Algorithm 2 are as follows. We choose \\(\\mathbf{p}=(\\mathbf{Y}^{\\top})^{\\dagger}\\mathbf{1}\\). The starting point is generated by expanded VCA. The FISTA extrapolation sequence is used. We stop Algorithm 2 if \\(\\operatorname{rc}(\\mathbf{B}^{k+1},\\mathbf{B}^{k})\\leq 10^{-6}\\).
We will benchmark Pr-SISAL and H\\({}^{2}\\)-SISAL against SISAL itself, VCA [12], ISA-PRISM and VIA-PRISM [6]. SISAL and VCA have open source codes, and we use them directly. The stopping rule of SISAL is that the number of iterations exceeds \\(250\\). ISA-PRISM is an importance sampling scheme for implementing the ML estimator (8), and VIA-PRISM is a variational inference approximation scheme for the ML estimator (8). We run ISA-PRISM only for small \\(N\\), due to its demanding computational cost to achieve reasonable performance for large \\(N\\). We stop ISA-PRISM when the number of iterations exceeds \\(100\\), and we use rejection sampling, with \\(500\\) initial samples, to implement ISA-PRISM. We stop VIA-PRISM when the number of iterations exceeds \\(500\\). Also, our VIA-PRISM implementation has some differences from that in the original work [6]; we replace the optimization algorithm for the variational variables, Algorithm 1 in [6], with a projected gradient algorithm, which was found to be more efficient.
### Comparisons of SISAL, H\\({}^{2}\\)-SISAL and Pr-SISAL By Simulations
We conduct our simulations by the following way. We generate the data points \\(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{T}\\) by the model in (7), i.e., \\(\\mathbf{y}_{t}=\\mathbf{A}_{0}\\mathbf{s}_{t}+\\mathbf{v}_{t}\\), where the \\(\\mathbf{s}_{t}\\)'s are i.i.d. uniform distributed on the unit simplex; the \\(\\mathbf{v}_{t}\\)'s are i.i.d. Gaussian with mean zero and covariance \\(\\sigma^{2}\\mathbf{I}\\). In addition, for each simulation trial, \\(\\mathbf{A}_{0}\\) is drawn from an element-wise i.i.d. \\([0,1]\\) distribution; we also restrict the condition number of the admitted \\(\\mathbf{A}_{0}\\) to be no greater than \\(100\\). We use a number of \\(100\\) simulation trials to evaluate the mean square error (MSE)
\\[\\mathsf{MSE}(\\mathbf{A}_{0},\\hat{\\mathbf{A}})=\\min_{\\mathbf{P}\\in\\mathcal{P}}\\frac{1}{MN} \\|\\mathbf{A}_{0}-\\hat{\\mathbf{A}}\\mathbf{P}\\|^{2},\\]
where \\(\\hat{\\mathbf{A}}\\) denotes an estimate of \\(\\mathbf{A}_{0}\\) by some algorithm; \\(\\mathcal{P}\\) is the set of all permutation matrices on \\(\\mathbb{R}^{N\\times N}\\). We should also note that the signal-to-noise ratio (SNR) is defined as
\\[\\mathsf{SNR}=\\frac{\\frac{1}{T}\\sum_{t=1}^{T}\\|\\mathbf{A}_{0}\\mathbf{s}_{t}\\|^{2}}{M \\sigma^{2}}\\]
Fig. 3 compares Pr-SISAL and SISAL for various values of \((M,N)\) and for \(T=1,000\). Our observations are as follows. First, the recovery performance of SISAL varies from one choice of \(\lambda\) to another. There is no single \(\lambda\) that works best for all SNRs, which suggests the need for parameter tuning in practice. Second, Pr-SISAL performs unsatisfactorily at low SNRs, particularly when compared to VIA-PRISM. But we also see that the performance of Pr-SISAL improves drastically once the SNR exceeds a certain threshold. Also, for \((M,N)=(10,5)\), Pr-SISAL achieves performance close to that of the ML estimator by ISA-PRISM when the SNR is high enough. These results indicate that Pr-SISAL is a good estimator for the high-SNR regime.
Fig. 4 compares H\({}^{2}\)-SISAL and SISAL under the same settings as above. We see that H\({}^{2}\)-SISAL works reasonably well and is comparable to SISAL. Also, H\({}^{2}\)-SISAL behaves differently for different regularization parameters \(\lambda\), which suggests that H\({}^{2}\)-SISAL requires parameter tuning in practice (just like SISAL).
We move on to the comparison of computational efficiency. Tables 1-2 illustrate some runtime results. The runtimes were measured on a small server with an Intel Core i7-5820K CPU and 64GB of memory, with implementations in MATLAB 2019a. H\({}^{2}\)-SISAL is seen to run faster than SISAL. Pr-SISAL, in comparison, is slow, although this is so far the best algorithm we have been able to build for the difficult probabilistic SISAL formulation. The reader will see from the additional simulation results in Appendix A that the proximal gradient method used for SISAL and H\({}^{2}\)-SISAL is even slower when applied to probabilistic SISAL.
Figure 3: Comparison of Pr-SISAL, SISAL and VIA-PRISM. The lines are the average MSEs, while the shaded areas show the standard deviations of the MSEs.
Figure 4: Comparison of H\\({}^{2}\\)-SISAL and SISAL.
### 6.3 A Semi-Real Data Experiment
We further test Pr-SISAL using real data. The application of interest is hyperspectral unmixing (HU). The real data set used in our experiment is the Cuprite hyperspectral image [34]; we will simply call it Cuprite for convenience. Cuprite is interesting in the sense that, among the popular and publicly available data sets in hyperspectral remote sensing, it is the only one that has more than 10 materials (to the best of our knowledge). Cuprite has been used to demonstrate many HU algorithms, e.g., [6, 12, 35, 36], and real data experiments with Cuprite have almost become a standard. An illustration of the Cuprite image is shown in Fig. 5(a).
The settings of our experiment are as follows. We largely follow the standard procedure in the literature [6, 12, 35, 36], particularly, the one in [6]. Some additional details are as follows. We adopt the band selection in [36]. It was argued that Cuprite is composed of 12 materials, namely, those shown in Table 3; we refer the reader to [37] and the references therein for details. The ground-truth \\(\\mathbf{A}_{0}\\) corresponds to the reference spectral responses of those materials, taken from the USGS library [38]. We test VCA, VIA-PRISM, SISAL, H\\({}^{2}\\)-SISAL and Pr-SISAL. For all the tested algorithms, we additionally do the following: we apply the data normalization preprocessing,
\\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c|c} \hline \hline \(T\) & 1000 & 2000 & 3000 & 4000 & 5000 & 6000 & 7000 & 8000 \\ \hline \hline SISAL, \(\lambda=0.1\) & 0.119 & 0.201 & 0.295 & 0.353 & 0.401 & 0.455 & 0.539 & 0.587 \\ \hline H\({}^{2}\)-SISAL, \(\lambda=10.0\) & 0.064 & 0.096 & 0.139 & 0.192 & 0.230 & 0.246 & 0.281 & 0.325 \\ \hline Pr-SISAL & 23.145 & 24.656 & 50.609 & 56.395 & 75.753 & 75.278 & 100.100 & 100.917 \\ \hline VIA-PRISM & 0.986 & 1.600 & 2.276 & 2.860 & 3.349 & 3.928 & 4.746 & 4.961 \\ \hline \hline \end{tabular}
\\end{table}
Table 2: Average runtimes (in sec.) of SISAL, H\\({}^{2}\\)-SISAL, Pr-SISAL and VIA-PRISM. \\((M,N)=(20,10)\\), \\(\\mathsf{SNR}=30\\)dB.
Figure 5: Cuprite image; constructed by RGB bands.
described in Section 2.1, to the data points before DR; also, for Pr-SISAL and VIA-PRISM, we estimate the noise variance \\(\\sigma^{2}\\) by the eigenvalue method described in Section 6.1. Moreover, some of the stopping rules are modified: We stop SISAL if the number of iterations exceeds \\(1,000\\); we stop the inner loop of Pr-SISAL if \\(\\operatorname{rc}(\\mathbf{B}^{k+1},\\mathbf{B}^{k})\\leq 2\\times 10^{-7}\\) or if the number of iterations exceeds \\(10^{7}\\). We evaluate the recovery performance by the spectral angle distance (SAD)
\\[\\mathsf{SAD}(\\mathbf{a}_{0,i},\\hat{\\mathbf{a}}_{\\pi_{i}})=\\cos^{-1}\\left(\\frac{\\mathbf{a}_ {0,i}^{\\top}\\hat{\\mathbf{a}}_{\\pi_{i}}}{\\|\\mathbf{a}_{0,i}\\|\\|\\hat{\\mathbf{a}}_{\\pi_{i}}\\|} \\right),\\]
where \\(\\mathbf{a}_{0,i}\\) and \\(\\hat{\\mathbf{a}}_{i}\\) denote the \\(i\\)th column of \\(\\mathbf{A}_{0}\\) and \\(\\hat{\\mathbf{A}}\\), respectively; \\(\\mathbf{\\pi}=(\\pi_{1},\\ldots,\\pi_{N})\\) is a set of permutation indices for \\(\\{1,\\ldots,N\\}\\) (i.e. \\(\\pi_{i}\\in\\{1,\\ldots,N\\}\\) and \\(\\pi_{i}\
eq\\pi_{j}\\) for all \\(i\
eq j\\)), obtained by minimizing \\(\\sum_{i=1}^{N}\\mathsf{SAD}(\\mathbf{a}_{0,i},\\hat{\\mathbf{a}}_{\\pi_{i}})\\) over all possible permutations.
Table 3 shows the SADs of the tested algorithms. We see that all the algorithms give reasonable SAD performance, with VCA achieving the best average SAD. We also see that SISAL and H\\({}^{2}\\)-SISAL, with the regularization parameter tuned to \\(\\lambda=0.001\\) and \\(\\lambda=0.01\\), respectively, provide comparable performance to Pr-SISAL. But note that Pr-SISAL has no parameter to manually tune.
We also consider an experiment that puts a twist on the Cuprite data experiment. Specifically, we randomly pick some pixels and replace them with outliers; see Fig. 5(b) for an illustration. Our aim is to examine how robust the algorithms are. The experimental settings are the same as above, except that we additionally select \(100\) pixels at random and replace them with randomly selected spectral responses from the USGS library [38].
Table 4 displays the SAD performance of the tested algorithms over \(10\) trials (the locations and spectral responses of the outliers are changed at each trial). It is seen that VCA gives the worst average SAD, which suggests that VCA is sensitive to outliers. The other algorithms, including the newly proposed H\({}^{2}\)-SISAL and Pr-SISAL, are more robust, as indicated by their SAD performance. Fig. 6 shows the estimated spectral signatures \(\hat{\mathbf{a}}_{i}\) of the various materials from one random trial.
\\begin{table}
\\begin{tabular}{c||c|c|c|c|c|c|c} \\hline \\hline \\multirow{2}{*}{EndmemberAlg.} & \\multirow{2}{*}{VCA} & \\multicolumn{2}{c|}{SISAL} & \\multicolumn{2}{c|}{H\\({}^{2}\\)-SISAL} & \\multirow{2}{*}{Pr-SISAL} & \\multirow{2}{*}{VIA-PRISM} \\\\ \\cline{3-3} \\cline{5-8} & & \\(\\lambda=0.001\\) & \\(\\lambda=0.01\\) & \\(\\lambda=0.01\\) & \\(\\lambda=0.1\\) & \\\\ \\hline \\hline Alunite & 2.07 & 4.55 & 6.82 & **1.65** & 3.83 & 3.27 & 4.54 \\\\ \\hline Andradite & 2.07 & 2.35 & 5.66 & 2.37 & 3.69 & **1.89** & 3.10 \\\\ \\hline Buddingtonite & **2.11** & 5.20 & 3.68 & 2.92 & 3.19 & 3.43 & 3.88 \\\\ \\hline Dumorierite & **2.66** & 3.25 & 8.07 & 3.32 & 6.49 & 3.51 & 3.39 \\\\ \\hline Kaolinite\\({}_{1}\\) & 2.51 & 2.22 & 2.78 & **2.16** & 3.06 & 2.67 & 3.90 \\\\ \\hline Kaolinite\\({}_{2}\\) & **1.99** & 2.48 & 7.77 & 2.29 & 6.20 & **1.99** & 2.79 \\\\ \\hline Muscovite & **2.12** & 2.80 & 3.15 & 6.07 & 4.30 & 3.64 & 2.67 \\\\ \\hline Montmorillonite & 1.74 & 2.53 & 3.88 & 1.99 & 2.77 & **1.27** & 3.22 \\\\ \\hline Nontronite & **1.97** & 3.81 & 2.84 & 3.03 & 3.72 & 2.75 & 3.14 \\\\ \\hline Pyrope & 2.10 & 1.45 & 3.93 & 1.94 & 2.76 & 1.70 & **1.32** \\\\ \\hline Sphene & **1.49** & 3.19 & 7.85 & 3.47 & 6.95 & 4.49 & 1.83 \\\\ \\hline Chalcedony & 2.86 & 3.82 & 3.85 & 3.09 & 3.38 & **1.59** & 4.35 \\\\ \\hline \\hline Average SAD & **2.14** & 3.14 & 5.02 & 2.86 & 4.19 & 3.13 & 3.07 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: SAD performances on the Cuprite dataset. The best SADs among all the tested algorithms are marked in **bold**.
We observe that SISAL, H\\({}^{2}\\)-SISAL and Pr-SISAL yield good recovery; VCA and VIA-PRISM are not as promising in comparison.
## 7 Conclusions
In this article we showed that the famous SISAL algorithm, developed by Bioucas-Dias for hyperspectral unmixing in 2009, can be explained as a probabilistic method for SCA. In particular, SISAL was derived under the noiseless case, and our study provides an explanation of why SISAL can be robust to noise. Moreover, we gave a positive answer to the question of whether the SISAL algorithm can guarantee convergence to a stationary point. This was done by casting SISAL as an instance of a proximal gradient framework in non-convex first-order optimization. Furthermore, by connecting SISAL and probabilistic SCA, we also found new SCA formulations that resemble SISAL. To allow us to numerically study the new SCA formulations, we built customized algorithms for them. The potential of the new algorithms was demonstrated by numerical experiments.
## References
* [1] J. Bioucas-Dias, \"A variable splitting augmented Lagrangian approach to linear spectral unmixing,\" in _Procedings of the First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing_. IEEE, 2009.
* [2] J. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, \"Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,\" _IEEE J. Sel. Topics Appl. Earth Observ._, vol. 5, no. 2, pp. 354-379, 2012.
* [3] W.-K. Ma, J. M. Bioucas-Dias, T.-H. Chan, N. Gillis, P. Gader, A. J. Plaza, A. Ambikapathi, and C. Y. Chi, \"A signal processing perspective on hyperspectral unmixing,\" _IEEE Signal Process. Mag._, vol. 31, no. 1, pp. 67-81, 2014.
\\begin{table}
\\begin{tabular}{c||c|c|c|c|c|c|c} \\hline \\hline \\multirow{2}{*}{EndmemberAlg.} & \\multirow{2}{*}{VCA} & \\multicolumn{2}{c|}{SISAL} & \\multicolumn{2}{c|}{H\\({}^{2}\\)SISAL} & \\multirow{2}{*}{Pr-SISAL} & \\multirow{2}{*}{VIA-PRISM} \\\\ \\cline{3-3} \\cline{5-8} & & \\(\\lambda=0.001\\) & \\(\\lambda=0.01\\) & \\(\\lambda=0.01\\) & \\(\\lambda=0.1\\) & \\(\\lambda=0.1\\) \\\\ \\hline \\hline Alunite & 9.64\\(\\pm\\)4.59 & 4.74\\(\\pm\\)0.26 & 6.72\\(\\pm\\)1.21 & **2.82\\(\\pm\\)**1.30 & 5.84\\(\\pm\\)1.50 & 3.91\\(\\pm\\)0.77 & 11.65\\(\\pm\\)2.72 \\\\ \\hline Andradite & 8.38\\(\\pm\\)5.21 & 3.45\\(\\pm\\)0.48 & 7.50\\(\\pm\\)1.97 & 2.95\\(\\pm\\)0.61 & 6.16\\(\\pm\\)0.96 & **2.27**\\(\\pm\\)0.31 & 3.31\\(\\pm\\)0.41 \\\\ \\hline Buddingtonite & 13.42\\(\\pm\\)4.14 & 4.07\\(\\pm\\)1.12 & 3.93\\(\\pm\\)0.56 & **3.23\\(\\pm\\)**0.69 & 5.49\\(\\pm\\)0.90 & 3.47\\(\\pm\\)0.31 & 3.85\\(\\pm\\)1.02 \\\\ \\hline Dumortierite & 12.43\\(\\pm\\)3.74 & **2.93\\(\\pm\\)**0.83 & 6.51\\(\\pm\\)1.31 & 3.17\\(\\pm\\)0.52 & 5.38\\(\\pm\\)0.79 & 3.17\\(\\pm\\)0.54 & 6.85\\(\\pm\\)2.87 \\\\ \\hline Kaolinite\\({}_{1}\\) & 9.00\\(\\pm\\)4.05 & **2.33\\(\\pm\\)**0.43 & 4.42\\(\\pm\\)1.48 & 3.18\\(\\pm\\)0.72 & 5.41\\(\\pm\\)1.01 & 2.39\\(\\pm\\)0.28 & 4.38\\(\\pm\\)1.47 \\\\ \\hline Kaolinite\\({}_{2}\\) & 7.33\\(\\pm\\)4.86 & 2.53\\(\\pm\\)0.75 & 5.39\\(\\pm\\)2.09 & 2.59\\(\\pm\\)0.56 & 5.52\\(\\pm\\)1.57 & **2.34**\\(\\pm\\)0.59 & 3.36\\(\\pm\\)1.06 \\\\ \\hline Muscovite & 15.40\\(\\pm\\)5.50 & **3.11**\\(\\pm\\)0.59 & 5.14\\(\\pm\\)2.25 & 3.66\\(\\pm\\)1.24 & 5.30\\(\\pm\\)1.32 & 3.25\\(\\pm\\)0.58 & 4.57\\(\\pm\\)0.64 \\\\ \\hline Montmorillonite & 10.31\\(\\pm\\)3.65 & 3.47\\(\\pm\\)0.57 & 3.29\\(\\pm\\)0.24 & 2.31\\(\\pm\\)0.91 & 3.42\\(\\pm\\)0.53 & **2.11**\\(\\pm\\)0.48 & 2.79\\(\\pm\\)0.28 \\\\ \\hline Nontronite & 5.92\\(\\pm\\)2.96 & 3.66\\(\\pm\\)0.57 & 3.75\\(\\pm\\)0.64 & 3.33\\(\\pm\\)0.98 & 4.46\\(\\pm\\)1.03 & **2.58**\\(\\pm\\)0.42 & 3.36\\(\\pm\\)0.73 \\\\ \\hline Pyrope & 12.59\\(\\pm\\)3.87 & 2.72\\(\\pm\\)1.00 & 5.79\\(\\pm\\)2.17 & 3.44\\(\\pm\\)0.89 & 5.15\\(\\pm\\)1.45 & **2.62**\\(\\pm\\)0.53 & 3.11\\(\\pm\\)0.65 \\\\ \\hline Sphene & 11.96\\(\\pm\\)1.34 & **2.35\\(\\pm\\)**0.91 & 5.91\\(\\pm\\)1.37 & 2.99\\(\\pm\\)0.62 & 6.30\\(\\pm\\)2.09 & 3.69\\(\\pm\\)0.66 & 9.85\\(\\pm\\)1.39 \\\\ \\hline Chalcedony & 14.61\\(\\pm\\)4.89 & 2.68\\(\\pm\\)0.40 & 4.96\\(\\pm\\)2.06 & 2.98\\(\\pm\\)0.86 & 5.78\\(\\pm\\)1.11 & **2.58**\\(\\pm\\)0.78 & 6.27\\(\\pm\\)4.76 \\\\ \\hline \\hline Average SAD & 10.91 & 3.17 & 5.28 & 3.05 & 5.35 & **2.86** & 5.28 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: SAD performances on the Cuprite dataset with outliers. The best SADs averaged over 10 trials among all the tested algorithms are marked in **bold**.
Figure 6: Estimated spectrums of Cuprite. Algorithms: VCA, SISAL with \\(\\lambda=0.001\\), H\\({}^{2}\\)-SISAL with \\(\\lambda=0.01\\), Pr-SISAL, and VIA-PRISM.
* [4] X. Fu, K. Huang, N. D. Sidiropoulos, and W.-K. Ma, \"Nonnegative matrix factorization for signal and data analytics: Identifiability, algorithms, and applications,\" _IEEE Signal Process. Mag._, vol. 36, pp. 59-80, 2019.
* [5] N. Gillis, _Nonnegative Matrix Factorization_. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2021.
* [6] R. Wu, W.-K. Ma, Y. Li, A. M.-C. So, and N. D. Sidiropoulos, \"Probabilistic simplex component analysis,\" _IEEE Trans. Signal Process._, vol. 70, pp. 582-599, 2022.
* [7] J. M. Nascimento and J. M. Bioucas-Dias, \"Learning dependent sources using mixtures of Dirichlet: Applications on hyperspectral unmixing,\" in _Procedings of the First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing_. IEEE, 2009.
* [8] J. Nascimento and J. Bioucas-Dias, \"Hyperspectral unmixing based on mixtures of Dirichlet components,\" _IEEE Trans. Geosci. Remote Sens._, vol. 50, pp. 863-878, 2012.
* [9] N. Dobigeon, S. Moussaoui, M. Coulon, J.-Y. Tourneret, and A. O. Hero, \"Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery,\" _IEEE Trans. Signal Process._, vol. 57, pp. 4355-4368, 2009.
* [10] S. Bonettini, I. Loris, F. Porta, and M. Prato, \"Variable metric inexact line-search-based methods for nonsmooth optimization,\" _SIAM J. Optim._, vol. 26, pp. 891-921, 2016.
* [11] X. Fu, K. Huang, B. Yang, W.-K. Ma, and N. D. Sidiropoulos, \"Robust volume minimization-based matrix factorization for remote sensing and document clustering,\" _IEEE Trans. Signal Process._, vol. 64, pp. 6254-6268, 2016.
* [12] J. Nascimento and J. Bioucas-Dias, \"Vertex component analysis: A fast algorithm to unmix hyperspectral data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 43, pp. 898-910, 2005.
* [13] P. Gritzmann, V. Klee, and D. Larman, \"Largest \\(j\\)-simplices in \\(n\\)-polytopes,\" _Discrete Comput. Geom._, vol. 13, pp. 477-515, 1995.
* [14] C.-H. Lin, W.-K. Ma, W.-C. Li, C.-Y. Chi, and A. Ambikapathi, \"Identifiability of the simplex volume minimization criterion for blind hyperspectral unmixing: The no-pure-pixel case,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, pp. 5530-5546, 2015.
* [15] X. Fu, W.-K. Ma, K. Huang, and N. D. Sidiropoulos, \"Blind separation of quasi-stationary sources: Exploiting convex geometry in covariance domain,\" _IEEE Trans. Signal Process._, vol. 63, pp. 2306-2320, 2015.
* [16] W.-K. Ma, \"On hyperspectral unmixing,\" in _Proceedings of the IEEE International Geoscience and Remote Sensing Symposium_, 2021, online available: [https://arxiv.org/pdf/2106.14177.pdf](https://arxiv.org/pdf/2106.14177.pdf).
* [17] M. E. Tipping and C. M. Bishop, \"Probabilistic principal component analysis,\" _J. R. Stat. Soc. Ser. B. Stat. Methodol._, vol. 61, no. 3, pp. 611-622, 1999.
* [18] D. T. Pham and P. Garat, \"Blind separation of mixture of independent sources through a quasi-maximum likelihood approach,\" _IEEE Trans. Signal Process._, vol. 45, no. 7, pp. 1712-1725, 1997.
* [19] H. Attias, \"Independent factor analysis,\" _Neural Comput._, vol. 11, no. 4, pp. 803-851, 1999.
* [20] I. Khemakhem, D. Kingma, R. Monti, and A. Hyvarinen, \"Variational autoencoders and nonlinear ICA: A unifying framework,\" in _Proceedings of the 23th International Conference on Artificial Intelligence and Statistics_, vol. 108. PMLR, 2020, pp. 2207-2217.
* [21] S. Verdu, _Multiuser Detection_. Cambridge University Press, 1998.
* [22] M. Shao, Q. Li, W.-K. Ma, and A. M.-C. So, \"A framework for one-bit and constant-envelope precoding over multiuser massive MISO channels,\" _IEEE Trans. Signal Process._, vol. 67, pp. 5309-5324, 2019.
* [23] A. Beck, _First-Order Methods in Optimization_. Philadelphia, PA, USA: SIAM, 2017, vol. 25.
* [24] A. Beck and M. Teboulle, \"A fast iterative shrinkage-thresholding algorithm for linear inverse problems,\" _SIAM J. Imaging Sci._, vol. 2, pp. 183-202, 2009.
* [25] S. Ghadimi and G. Lan, \"Accelerated gradient methods for nonconvex nonlinear and stochastic programming,\" _Math. Program._, vol. 156, pp. 59-99, 2016.
* [26] Y. Xu and W. Yin, \"A globally convergent algorithm for nonconvex optimization based on block coordinate update,\" _J. Sci. Comput._, vol. 72, pp. 700-734, 2017.
* [27] ----, \"A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,\" _SIAM J. Imaging Sci._, vol. 6, pp. 1758-1789, 2013.
* [28] R. Wu, H.-T. Wai, and W.-K. Ma, \"Hybrid inexact BCD for coupled structured matrix factorization in hyperspectral super-resolution,\" _IEEE Trans. Signal Process._, vol. 68, pp. 1728-1743, 2020.
* [29] N. Boumal, \"Nonconvex phase synchronization,\" _SIAM J. Optim._, vol. 26, pp. 2355-2377, 2016.
* [30] J. Tranter, N. D. Sidiropoulos, X. Fu, and A. Swami, \"Fast unit-modulus least squares with applications in beamforming,\" _IEEE Trans. Signal Process._, vol. 65, pp. 2875-2887, 2017.
* [31] M. Shao, Q. Li, W.-K. Ma, and A. M.-C. So, \"Minimum symbol error rate-based constant envelope precoding for multiuser massive MISO downlink,\" in _Proceedings of Statistical Signal Processing Workshop (SSP)_. IEEE, 2018.
* [32] M. Shao and W.-K. Ma, \"Divide and conquer: One-bit MIMO-OFDM detection by inexact expectation maximization,\" in _Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2021, pp. 4890-4894.
* [33] P. Stoica and R. L. Moses, _Spectral Analysis of Signals_. New Jersey, US: Prentice Hall, Inc., 2005.
* [34] G. Vane, R. O. Green, T. G. Chrien, H. T. Enmark, E. G. Hansen, and W. M. Porter, \"The airborne visible/infrared imaging spectrometer (AVIRIS),\" _Remote Sensing of Environment_, vol. 44, pp. 127-143, 1993.
* [35] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, \"A convex analysis based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,\" _IEEE Trans. Signal Process._, vol. 57, pp. 4418-4432, 2009.
* [36] J. Li, A. Agathos, D. Zaharie, J. M. Bioucas-Dias, A. Plaza, and X. Li, \"Minimum volume simplex analysis: A fast algorithm for linear hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 9, pp. 5067-5082, 2015.
* [37] F. Zhu, \"Hyperspectral unmixing: ground truth labeling, datasets, benchmark performances and survey,\" _arXiv preprint arXiv:1708.05125_, 2017.
* [38] R. N. Clark, G. A. Swayze, R. Wise, K. E. Livo, T. Hoefen, R. F. Kokaly, and S. J. Sutley, \"USGS digital spectral library splib06a,\" _U.S. Geological Survey, Digital Data Series 231_, 2007.
## Appendix A Additional Simulation Results
We display two more numerical results for Pr-SISAL. The first concerns Heuristic 1, which is used to build the approximate ML formulation in Formulation 3. To put this into context, let us write down a slightly more general form of Formulation 3:
\\[\\min_{\\mathbf{B}^{\\top}\\mathbf{1}=\\mathbf{p}}\\ -\\log(|\\det(\\mathbf{B})|)-\\frac{\\tau}{T}\\sum_{t=1}^ {T}\\sum_{i=1}^{N}\\log\\Phi\\left(\\frac{\\mathbf{b}_{i}^{\\top}\\mathbf{y}_{t}}{\\sigma\\|\\mathbf{b }_{i}\\|}\\right), \\tag{43}\\]
where \\(\\tau>0\\), and Formulation 3 is the special case of \\(\\tau=1\\). In Remark 2, we argue that \\(\\tau=1/(N+1)\\) is arguably equipped with a better rationale (lower-bound approximation of the ML objective), but eventually the heuristic (and, intuitively, more progressive) choice of \\(\\tau=1\\) prevails in terms of approximating the ML problem better in practice. We want to illustrate that. Fig. 7 shows the performance of formulation in (43) for different values of \\(\\tau\\) and for \\((M,N)=(10,5)\\), \\(T=1,000\\); the simulation is done by exactly the same way as in Section 6.2. We see that \\(\\tau=1/(N+1)\\) does not work well, except for very high SNRs. We also try \\(\\tau=N+1\\) (more progressive than \\(\\tau=1\\)), and the result is not as good as \\(\\tau=1\\).
The second result concerns the implementations of Formulation 3. It was mentioned that the proximal gradient method can be used to handle Formulation 3, but the results are not promising. Here we show the results. We implement Formulation 3 using the same proximal gradient algorithm as in Algorithm 2, with or without extrapolation. We stop the algorithm if \(\operatorname{rc}(\mathbf{B}^{k+1},\mathbf{B}^{k})\leq 10^{-8}\) or if the number of iterations exceeds \(4\times 10^{5}\). Fig. 8 and Table 5 show the MSE and runtime performance, respectively, for \((M,N)=(20,10)\), \(T=1,000\); the simulation settings are the same as before. There, \"Pr-SISAL\", \"Pr-SISAL, PG\" and \"Pr-SISAL, EPG\" refer to the inexact BCD algorithm in Algorithm 3, the proximal gradient algorithm and the extrapolated proximal gradient algorithm, all for Formulation 3. We see that all the implementations yield similar MSE performance, but the proximal gradient implementations are very slow.
Figure 7: Performance of the formulation in (43) for different values of \\(\\tau\\).
Footnote: Chujun Huang and Mingjie Shao contributed equally to this work.
# Feasibility Study of Low-Cost Image-Based Heritage Documentation in Nepal
H. K. Dhonju\\({}^{\\star}\\)
W. Xiao\\({}^{\\star}\\)
V. Sarhosis\\({}^{\\star}\\)
J. P. Mills\\({}^{\\star}\\)
S. Wilkinson\\({}^{\\star}\\)
Z. Wang\\({}^{\\star}\\)
L. Thapa\\({}^{\\star}\\)
U. S. Panday\\({}^{\\star}\\)
\\({}^{\\star}\\)International Centre for Integrated Mountain Development (ICIMOD), Kathmandu, Nepal - [email protected]
\\({}^{\\star}\\)School of Civil Engineering and Geosciences, Newcastle University, NE1 7RU, Newcastle upon Tyne, UK - [email protected]
\\({}^{\\star}\\) Cadastral Survey Division, Survey Department, Ministry of Land Reform and Management, Nepal
\\({}^{\\star}\\) Department of Civil and Geomatics Engineering, Kathmandu University, Nepal
## 1 Introduction
Located on a ridge of the Tibetan and Indian Plates, Nepal is extremely prone and sensitive to natural disasters. Among the most devastating of events, Nepal experienced a 7.8 magnitude earthquake on 25 April 2015. The impact of the earthquake was extensive on historically and culturally important heritage sites that have great spiritual and religious value. According to Nepal's Department of Archaeology, around 750 heritage structures, e.g. temples and shrines, of significant cultural and religious value were affected. Many heritage structures were already in danger, e.g. Kathmandu Valley was once listed on the UNESCO List of World Heritage in Danger, and became more vulnerable after this devastating earthquake. Some sites were totally destroyed, and some were partially damaged and are likely to collapse in the future. Critically, the heritage that remains intact now needs to be protected and preserved.
The importance of cultural and historical heritage structural documentation is gaining momentum and is well recognized internationally (Remondino and Rizzi, 2010). There is an imperative need and responsibility to protect heritage sites and conserve cultural and religious values for future generations. There is increasing demand to document and preserve heritage digitally for later stage usage such as visualization. The continuous development of new sensors, data capture capability, 3D modelling technologies, and online visualization can contribute significantly to 3D documentation, conservation, and digital presentation of heritage structures, which attracts growing research interest in this field (Remondino, 2011).
Over recent years, it has become increasingly common to use digitization and 3D modelling for the preservation and conservation of heritage sites, owing to advances in lidar (light detection and ranging)-based and image-based modelling and visualization techniques oriented towards virtual reality. 3D reconstruction methods based on laser scanning and on automated image-based techniques are both widely applied to heritage documentation. In many cases, it is best to take advantage of both techniques by fusing different data sources. However, given that expensive laser scanning equipment may not be available in developing countries, this paper studies the feasibility of low-cost, easy-to-use, and high-quality image-based surveying techniques accompanied by open-source 3D reconstruction methods for the documentation of complex heritage structures.
Kathmandu Valley is characterised by rich ancient inscriptions, sculptures and monuments of various sizes and shapes, and was listed as a UNESCO World Heritage Site in 1979. Figure 1 shows poorly maintained heritage temples, damaged by the earthquake, in the centre of Kathmandu Durbar Square. Owing to uncontrolled urbanization and loss of historic fabric, the world heritage sites in Kathmandu Valley were placed on the danger list in 2003. To control this trend, an integrated management plan (IMP) was introduced by the Government of Nepal and the sites were withdrawn from the danger list in 2007 (Acharya and Pradhananga, 2013). Implementation of the IMP was less effective due to difficulties faced in restructuring the institutions for restoration and reconstruction processes. Traditionally, communities actively participated in the conservation, preservation and restoration of heritage (Shakya et al., 2013; Tiwari, 2013). Vulnerability and risks to heritage structures are rapidly growing due to uncontrolled urban expansion, as well as seismic vulnerability and hazards. This leads to the need for disaster risk reduction by mainstreaming cultural heritage into disaster management and by developing tools that make use of appropriate recent technology (Jigyasu, 2013; Maskey, 2013).
Figure 1: Heritage temples in Kathmandu Durbar Square.
## 2 Related Work
Recent technological advances in documentation have demonstrated the potential for 3D registration and reconstruction of archaeological and cultural heritage monuments (Pavlidis et al., 2007). The techniques can be broadly categorised into: i) photogrammetry, close-range and image-based modelling (Hanke and Grussenmeyer, 2002; Remondino and El-Hakim, 2006; Remondino and Menna, 2008; Santagati et al., 2013), and ii) laser scanning or lidar-based modelling (Guarnieri et al., 2004; Mills and Barber, 2004). Unmanned aerial vehicle (UAV) platforms have been used for photogrammetric 3D modelling (Remondino, 2011; Eisenbeiss, 2004; Nex and Remondino, 2014), and their usage for laser scanning is still under rapid development. However, the laser scanning technique is not trivial and requires a certain degree of expertise and skill when performing archaeological and heritage field work (Reu et al., 2013). Moreover, the technique is often not cost-effective due to the expensive equipment and its time-consuming nature.
A substantial body of literature has discussed different techniques for low-cost 3D documentation of cultural heritage (Remondino, 2011; Boochs et al., 2007; Guidi et al., 2007; Kersten and Lindstaedt, 2012; Reu et al., 2013). Mills et al. (2000), for example, investigated early low-cost PC software against state-of-practice photogrammetric instrumentation. More recently, Reu et al. (2013) proposed a cost-effective 3D registration of archaeological heritage. They used the software package PhotoScan (Professional Edition), which allows the extraction of 3D point clouds from ordinary 2D images using the structure-from-motion (SfM) approach and dense stereo-matching. They claim to be cost-effective compared to traditional methods and to provide a fully automated system with high accuracy and simple 3D model generation. However, costs are incurred for acquiring software licenses, and high technical skills and knowledge are necessary. Similarly, Boochs et al. (2007) and Kersten and Lindstaedt (2012) demonstrated free or low-cost 3D reconstruction methods for archaeological and heritage objects with the combined use of photogrammetric and computer vision techniques. The methods are targeted particularly at non-technical users and offer different levels of geometric accuracy.
In essence, documentation is a necessary and important step within the scope of conservation, preservation and restoration of archaeological and cultural heritage structures (Boochs et al., 2007). Moreover, the heritage sites in the Kathmandu Valley are very specific in terms of the complexity and uniqueness of the ancient structures (Kersten and Lindstaedt, 2012) and their vulnerable spatial locations. In the context of Nepal, a feasibility study is proposed using low-cost 3D image-based modelling for heritage conservation, with the aim of safeguarding heritage in the Kathmandu Valley.
## 3 Photogrammetric Documentation
Generally, for different applications, one needs to address different aspects of reconstruction, namely high geometric accuracy, comprehensive detail capture, photorealism, high automation level, low cost, portability, application flexibility, and model size efficiency (El-Hakim et al., 2004). The priority and importance of these specifications depend on the purpose of 3D modelling, for instance, whether to restore the geometry of the heritage structure for future regeneration or to document it for virtual reality targeted at tourism. For the former purpose, accurate dimensions and detailed carving information of the structure are important and mandatory, whereas for the latter, textures and photo-realistic details are more important than the accurate dimensions of the model.
In this study, the importance of documentation and its specification for actual 3D image-based surveying, modelling techniques and methodologies, including their limitations and potential, will be discussed. Examples of UNESCO world heritage structure 3D reconstructions are presented and discussed.
### 3.1 Photogrammetric modelling for heritage conservation
The photogrammetric reconstruction procedure, i.e. SfM, has been validated in many applications, including heritage documentation. Figure 2 illustrates the workflow of photogrammetric 3D reconstruction for heritage conservation.
**Input:** For photogrammetric 3D modelling, optical images from cameras on different platforms are the primary source of data input. High-resolution, well-calibrated surveying cameras will produce high-quality images, and hence better 3D models are expected. However, low-cost consumer-grade cameras, or even mobile phone cameras, can yield useful models, and their use for heritage documentation is worthy of further investigation.
Besides photographic data, laser scanning data have long been used for accurate 3D modelling, e.g. building information modelling (BIM). There is an entire industrial pipeline for range-based modelling, from data acquisition, to registration, to (semi-)automatic vectorization, and finally presentation. The use of images to complement laser scanning data, which are often purely geometric, is also well studied. However, the limitation is that a laser scanner is often very expensive. Therefore, it is not widely used in many developing countries such as Nepal. This paper will therefore focus solely on image-based modelling.
Ground-based image/laser scanning data acquisition has limited points of view (for instance, building rooftops cannot be reached), hence full coverage of an object of interest usually cannot be obtained. One popular technique is therefore the use of UAVs, which have very flexible acquisition perspectives and can attain large coverage with varied spatial resolution. The main concern with using UAVs is security and safety, due to which flight permission can be difficult to obtain.
In recent years, crowd-sourced information collection and processing has also become popular. Crowd-sourced photogrammetric 3D reconstructions can offer unique opportunities for the digital interactive visualization of lost
Figure 2: Diagram of photogrammetric reconstruction workflow for heritage structure conservation.
heritage (Vincent et al., 2015). This technique provides a platform to preserve and revive lost heritage in order to recall the memory of that heritage through digital preservation schemes. Even though the true geometry of heritage is difficult to recover using crowd-sourced data, it can still help to address community interests. In the end, the textured, photo-realistic 3D model can add value to visualization, remembrance and digital documentation.
**Output**: The primary product of photogrammetric reconstruction is a group of points, usually called a point cloud, representing the shape of the structure. Based on this point cloud, other data formats of the structure can be generated, such as mesh models, geometric models, and structural models (detailed construction elements of an object). These models can be used for different aspects of heritage conservation.
**Models**: The most common product of photogrammetric 3D modelling is a mesh model, which is triangulated from the original point cloud. As this kind of mesh model is textured and photorealistic, it is widely used for visualization purposes. As for heritage conservation, it can be used in online virtual tourism to attract tourists. Mesh models can also be converted to printable formats, such as STL files, for 3D printing.
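As a brief illustration of this step, the sketch below meshes a point cloud and writes it to an STL file using the open-source Open3D library. Open3D is not used in this paper, and the file names and Poisson reconstruction depth are purely hypothetical choices.

```python
# Illustrative only: mesh a photogrammetric point cloud and export it for 3D
# printing. Library, parameters and file names are our own assumptions, not
# part of the paper's workflow.
import open3d as o3d

pcd = o3d.io.read_point_cloud("temple_points.ply")   # hypothetical input cloud
pcd.estimate_normals()                               # normals needed for meshing
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_triangle_normals()                      # STL export expects normals
o3d.io.write_triangle_mesh("temple_mesh.stl", mesh)  # printable mesh
```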
The other usage of the photogrammetric point cloud data is to generate geometric models by vectorization, such as AutoCAD and CityGML models. These models maintain accurate spatial and geometric features, and thus can be integrated into geospatial databases for better urban planning and design, and preserved for documentation and reconstruction purposes. Some heritage structures may have existing design CAD models, but new models based on real-world data acquisition can validate and update them, and can be of even greater cultural value.
Based on geometric models, an even more detailed structural model capturing the exact geometry of individual bricks and mortar in a historic masonry structure can be constructed. Such a structural model can then be coupled with structural engineering tools, such as advanced 3D finite and discrete element methods of analysis (Bui et al., 2017; Giamundo et al., 2014), to assess the robustness and vulnerability of historic structures to extreme natural disasters, e.g. earthquakes and flooding (Sarhosis et al., 2016). In addition, advanced algorithms for automatic detection of structural damage and material degradation can be developed to assist with the structural health monitoring and assessment of historic structures. Such tools can be extremely useful to structural engineers, since accurate information about the current state of the structure can be obtained and predictions can be made about the long-term behaviour of the structure subjected to various loading conditions.
### 3.2 Consumer-grade cameras for 3D modelling
To assess the feasibility of low-cost image-based reconstruction methods for Nepalese heritage structures, images acquired from a DSLR camera and a mobile phone camera (Table 1) were used for 3D reconstruction using commercial software, Bentley's ContextCapture (former Acute3D1). Preliminary tests using an open source 3D reconstruction package, VisualSfM, generated recognisable, but still far from usable, structures. Both DSLR (Figure 3) and phone (Figure 4) cameras can generate visually appealing 3D models, especially when the structure is relatively small. Detailed information can be extracted from these image-based reconstructions.
Footnote 1: www.acute3d.com

As mentioned, the original point cloud can be transferred into different model formats for various applications. For instance, cracks induced by structural damage can be captured by the 3D model (Figure 5), showing the potential of using this technique for structural assessment and analysis.
One of the reasons to assess the usability of low-cost 3D modelling, such as using phone cameras, is that non-experts (ordinary people with only limited knowledge of photogrammetric 3D reconstruction) can contribute to heritage preservation and documentation. Images of heritage sites taken by tourists can be used for 3D reconstruction. Moreover, civilians can use their phones to take images to help document their nation's cultural treasures. Figure 6 shows an endangered monument, awaiting demolition, which is covered by scaffolding. In its present state it is no longer possible to document the structure as it has mostly been obscured. However, images taken by local people prior to the erection of the scaffolding have been used to generate a useful 3D model.
With the ubiquity of mobile phones, crowdsourcing has become an important method of data collection, and has been successfully applied to 3D modelling. There are currently a number of research projects related to crowdsourcing for heritage conservation, such as Curious Travellers3 and Project Mosul4. These projects encourage volunteers to upload their images of heritage sites; however, they often do not have any specifications for the photographs, such as a requirement for geotags.
Footnote 3: www.visualisingheritage.org
### 3.3 Practical limitations of image-based documentation
There are clear advantages of image-based documentation, such as photorealism, portability, application flexibility, and low cost, making it an obvious choice. However, there are also other aspects to consider in order to achieve high quality modelling, for example geometric accuracy, level of detail, completeness, automation level, and model size efficiency. Photogrammetric modelling is useful in certain situations, but there are practical limitations.
Firstly, lighting conditions play an important role in image acquisition. It is better to have bright and clear images to produce a higher quality model. In some cases, artificial lighting might be needed to capture ideal images. Photogrammetric 3D modelling relies on representative common pixels (corresponding feature points) from different images taken from various perspectives to reconstruct the camera locations and the geometry of the object. Therefore, it might fail on structures on which there are repetitive patterns. It is better to capture the salient parts of the object, and artificial targets can be used to strengthen the image matching. In addition, imaging angles can also influence the modelling result, as camera lenses, especially low-cost ones, will generate image distortions, even if these can be rectified during the reconstruction process. It is recommended to take orthogonal photos, meaning a viewing direction perpendicular to the object surface. Finally, accessibility is a major constraint for image acquisition. For example, to model a tall and large building, it is always difficult to take images from ideal locations and angles. In dense urban environments, it is common to only have limited access to buildings.
The architecture of many Nepali heritage sites differs from others in terms of the complexity and uniqueness of the structures. There are usually many hand-carved sculptures for both decoration and support of the upper levels, as shown in Figure 7. Such structures can create challenges for image-based reconstruction. Their complexity makes them difficult to model, and they can be difficult to access. Moreover, lighting conditions can be poor as the sculptures are often shadowed by the roof.
Figure 8: 3D model of a typical Hindu temple in Nepal. Areas covered by the roof are poorly reconstructed.
Figure 6: A monument covered by scaffolds and its 3D model produced using crowdsourced data before scaffolding.
Figure 7: Changu Narayan temple imaged from the ground.

Image-based reconstruction produces decent results at the foot of this type of structure, but sculptures such as supporting pillars are difficult to recover, as are sections shadowed by the roof. Figure 8 illustrates this problem, where structures beneath the roof are incomplete and badly reconstructed. Besides the challenges of complex structures, the 3D modelling process also introduces errors and noise due to incorrect dense image matching.
## 4 Comparison and Evaluation
To evaluate the geometric quality of low-cost image-based 3D reconstruction, the phone camera (shown in Table 1) and a high-precision laser scanner (Leica HDS P20) were used to scan the same building, and the two different point clouds were registered and compared. As laser scanning was not available on site in Nepal, a historic building in the UK was taken as an example (Figure 10). 48 images were taken and used for the 3D modelling. Six stations were planned for the laser scanning, and artificial targets were used to accurately register the scans to form a complete point cloud. The two different point clouds were co-registered manually using CloudCompare5.
Footnote 5: http://www.cloudcompare.org
One thing to note is that the image-based models are not scaled, meaning that the model does not reflect the real-world structure size. However, this can be addressed by assigning a single scale factor, which can be easily obtained by tape measurement. For the comparison, the scale is approximated during the co-registration procedure while minimising the point-to-point distances between the two point clouds using iterative closest point (ICP). Then, the point-to-point distances reflect the consistency between these two modelling techniques. Besides the point-to-point distance, the distance from a target point to the nearest local surface in the reference point cloud can also be used. This surface can be a least-squares plane (fitted using either a certain number of points or a fixed radius), a triangulated surface, or a quadric surface. Only the point-to-point distance comparison is presented in Figure 10.
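A minimal sketch of such a point-to-point comparison is given below, assuming two already co-registered and scaled point clouds. The paper performed this step in CloudCompare, so the SciPy-based code and the 2 cm / 5 cm thresholds are used here purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_stats(cloud, reference, thresholds=(0.02, 0.05)):
    """cloud, reference : (n, 3) and (m, 3) arrays of co-registered, scaled
    points (metres). Returns nearest-neighbour distances and, for each
    threshold, the fraction of points closer than it."""
    tree = cKDTree(reference)                 # reference = laser-scanned cloud
    dists, _ = tree.query(cloud, k=1)         # nearest-neighbour distances
    fractions = {t: float(np.mean(dists <= t)) for t in thresholds}
    return dists, fractions
```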
The comparison demonstrates that the photogrammetric method using a phone camera generates a comparable point cloud to that from laser scanning, as more than half of the distances (53.4%) were under 2 cm and the majority of the distances (83.6%) were smaller than 5 cm (Figure 9). Here, image distortion was not rectified, and the laser scanned point cloud was simply treated as ground truth, meaning the errors in laser scan registration were not taken into consideration. This accuracy should suffice for the majority of modelling purposes, e.g. documentation, visualisation, and wire-frame model generation.
## 5 Discussion and Conclusion
Photogrammetric 3D modelling for heritage documentation is a well-studied topic. This paper has studied the feasibility of low-cost 3D reconstruction methods for the purpose of heritage conservation in developing countries such as Nepal. Ideally, all data sources, including ground-based cameras, UAVs, laser scanning and crowdsourcing, should be investigated and used for heritage documentation. Range-based and image-based modelling methods both have pros and cons, and can be complementary to each other.
In general, simple image acquisition is capable of generating point clouds that are comparable to laser scanning. However, due to the often complex structure and shape of cultural heritage objects, a more sophisticated and well planned image acquisition scheme is recommended. Low-cost image-based documentation is still of great value in terms of visualization of simple models.
Even very simple 2D images can be useful for documentation purposes. Geotagged images can help to identify the exact number of heritage structures, as well as their structural health and vulnerability. There are thousands of heritage structures that are in danger and in need of special attention and consideration to prolong their lives, especially in remote areas of the world. Therefore, easy-to-use crowdsourcing techniques should be developed in order to encourage the public to participate in heritage conservation.
Furthermore, ground-level acquisition cannot provide comprehensive coverage of tall buildings and structures, hence a complementary imaging platform such as a UAV is sought. Especially in the case of larger sites, aerial platforms can be significantly more efficient. A hierarchical image acquisition and reconstruction strategy will be tested in future work.
Figure 10: Comparison of image-based 3D modelling point cloud (top left) with laser scanned point cloud (top right). Data were registered manually and compared using CloudCompare5.
Figure 9: Histogram of distance between two point clouds.
## Acknowledgements
The authors would like to acknowledge the UK Engineering and Physical Sciences Research Council (EPSRC) for the financial support of the 'Disaster Risk Reduction of Heritage Structures in Nepal' project, and the UNESCO Kathmandu Office, NSET, KVPT, the Department of Archaeology, and other local partners in Nepal for their tremendous support.
## References
* Acharya and Pradhananga (2013) Acharya, K.P. and Pradhananga, S., 2013. Review of the integrated management plan of Kathmandu Valley World Heritage property. In _International Symposium on Revisiting Kathmandu, Safeguarding Living Urban Heritage_. pp. 127-132.
* Boochs et al. (2007) Boochs, F., Heinz, G., Huxhagen, U. and Muller, H., 2007. Low-cost image based system for nontechnical experts in cultural heritage documentation and analysis. In _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. pp. 165-170.
* Bui et al. (2017) Bui, T.T., Limam, A., Sarhosis, V. and Hijaj, M., 2017. Discrete element modelling of the in-plane and out-of-plane behaviour of masonry walls constructed with dry joints. _Engineering Structures_, 136, pp.277-294.
* Eisenbeiss (2004) Eisenbeiss, H., 2004. A mini unmanned aerial vehicle (UAV): system overview and image acquisition. _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 36(5/W1).
* El-Hakim et al. (2004) El-Hakim, S.F., Beraldin, J.-A., Picard, M. and Godin, G., 2004. Detailed 3D reconstruction of large-scale heritage sites with integrated techniques. _IEEE Computer Graphics and Applications_, 24(3), pp.21-29.
* Giamundo et al. (2014) Giamundo, V., Sarhosis, V., Lignola, G.P., Sheng, Y. and Manfredi, G., 2014. Evaluation of different computational modelling strategies for the analysis of low strength masonry structures. _Engineering Structures_, 73(73), pp.160-169.
* Guarnieri et al. (2004) Guarnieri, A., Vettore, A., El-Hakim, S. and Gonzo, L., 2004. Digital photogrammetry and laser scanning in cultural heritage survey. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 35, p.B5.
* Guidi et al. (2007) Guidi, G., Remondino, F., Morlando, G., Del Mastio, A., Uchededu, F. and Pelagotti, A., 2007. Performances evaluation of a low cost active sensor for cultural heritage documentation. In _Conference on Optical 3-D Measurement Techniques_. pp. 59-69.
* Hanke and Grussenmeyer (2002) Hanke, K. and Grussenmeyer, P., 2002. Architectural Photogrammetry: basic theory, procedures, tools. In _ISPRS Commission_. pp. 1-2.
* Jigyasu (2013) Jigyasu, R., 2013. International initiatives for disaster risk management of cultural heritage. In _International Symposium on Revisiting Kathmandu, Safeguarding Living Urban Heritage_. pp. 277-282.
* Kersten and Lindstaedt (2012) Kersten, T.P. and Lindstaedt, M., 2012. Image-Based Low-Cost Systems for Automatic 3D Recording and Modelling of Archaeological Finds and Objects. In _International Conference on Progress in Cultural Heritage Preservation_. pp. 1-10.
* Maskey (2013) Maskey, P.N., 2013. Disaster risk of culture heritage sites of the Kathmandu Valley. In _International Symposium on Revisiting Kathmandu, Safeguarding Living Urban Heritage_. pp. 283-290.
* Mills and Barber (2004) Mills, J. and Barber, D., 2004. Geomatics Techniques for Structural Surveying. _Journal of Surveying Engineering_, 130(2), pp.56-64.
* Mills et al. (2000) Mills, J.P., Peirson, G.C., Newton, I. and Bryan, P.G., 2000. Photogrammetric investigation into the suitability of desktop image measurement software for architectural recording. In _International Archives of Photogrammetry and Remote Sensing_. pp. 525-532.
* Nex and Remondino (2014) Nex, F. and Remondino, F., 2014. UAV for 3D mapping applications: a review. _Applied Geomatics_, 6(1), pp.1-15.
* Pavlidis et al. (2007) Pavlidis, G., Koutsoudis, A., Arnaoutoglou, F., Tsioukas, V. and Chamzas, C., 2007. Methods for 3D digitization of cultural heritage. _Journal of cultural heritage_, 8(1), pp.93-98.
* Remondino (2011) Remondino, F., 2011. Heritage recording and 3D modeling with photogrammetry and 3D scanning. _Remote Sensing_, 3(6), pp.1104-1138.
* Remondino and El-Hakim (2006) Remondino, F. and El-Hakim, S., 2006. Image-based 3D modelling: a review. _The Photogrammetric Record_, 21(115), pp.269-291.
* Remondino and Menna (2008) Remondino, F. and Menna, F., 2008. Image-based surface measurement for close-range heritage documentation. _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XXXVII(B5-1), pp.199-206.
* Remondino and Rizzi (2010) Remondino, F. and Rizzi, A., 2010. Reality-based 3D documentation of natural and cultural heritage sites--techniques, problems, and examples. _Applied Geomatics_, 2(3), pp.85-100.
* Reu et al. (2013) Reu, J.D., Plets, G., Verhoeven, G., Smedt, P.D., Bats, M., Cherrette, B., Maeyer, W.D., Deconvck, J., Herremans, D. and Laloo, P., 2013. Towards a three-dimensional cost-effective registration of the archaeological heritage. _Journal of Archaeological Science_, 40(2), pp.1108-1121.
* Santagati et al. (2013) Santagati, C., Inzerillo, L. and Di Paola, F., 2013. Image-based modeling techniques for architectural heritage 3D digitalization: limits and potentialities. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XL(5), pp.555-560.
* Sarhosis et al. (2016) Sarhosis, V., Asteris, P., Wang, T., Hu, W. and Han, Y., 2016. On the stability of ancient colonnades under static and dynamic conditions. _Bulletin of Earthquake Engineering_, pp.1-22.
* Shakya et al. (2013) Shakya, L., Takada, M., Morishige, S. and Okubo, T., 2013. Community involvement in management of communal space in Patan Historic City. In _International Symposium on Revisiting Kathmandu, Safeguarding Living Urban Heritage_. pp. 197-206.
* Tiwari (2013) Tiwari, S.R., 2013. Community participation in heritage affairs. In _International Symposium on Revisiting Kathmandu, Safeguarding Living Urban Heritage_. pp. 189-196.
* Vincent et al. (2015) Vincent, M.L., Gutierrez, M.F., Coughenour, C., Manuel, V., Bendicho, L.-M., Remondino, F. and Fritsch, D., 2015. Crowdsourcing the 3D digital reconstructions of lost cultural heritage. In _IEEE Digital Heritage_. pp. 171-172.
# Examining CEOs' Moral Reasoning in the Automotive Industry
Beatriz Garcia-Ortega, Blanca de-Miguel-Molina and Javier Galan-Cubillo
\({}^{1}\) Department of Management, Universitat Politecnica de Valencia, Camino de Vera s/n, Building 7D, 46022 Valencia, Spain; [email protected]
\({}^{2}\) Doctoral Program in Business Management, Universitat Politecnica de Valencia, 46022 Valencia, Spain; [email protected]
Footnote 1: email: {[email protected]}
Received: 15 September 2019; Accepted: 25 October 2019; Published: 27 October 2019
## 1 Introduction
The power and influence of CEOs (Chief Executive Officers) have grown in recent decades, in some cases contributing to the collapse of companies and to the financial crisis. Thus, the moral integrity of CEOs is under constant scrutiny [1]. The moral obligation that business has to society is stressed by corporate social responsibility (CSR) [2], while corporate social responsibility is strongly influenced by top-level managers [3].
Trevino and Brown [4] defined the role of a leader as that of a moral manager whose proactive efforts may both positively and negatively influence the behaviors of their followers. Along the same vein, Trevino et al. [5] related the effectiveness of ethical management with the communication of the importance of ethical standards. In all, Weber concluded that there is an expanded view of moral leadership: \"leaders must be individuals of moral character, as well as people-oriented leaders who communicate the importance of good values to the firm\" [6] (p. 168).
The automotive industry is striving to be more sustainable [7; 8; 9]. It is one of the most globalized sectors in the world with a highly dynamic market, increasing competition, and huge price and cost pressure. It is immersed at a crossroads with a deep transformational challenge towards cleaner energies ahead with new forms of mobility lurking and society being more and more aware of itsside effects. It must become vigilant and demanding with tightened up regulations to fight global climate change.
Recent scandals in the automotive industry have redoubled the interest in this sector. In 2014, authorities began to report discrepancies in emission tests, starting with the Volkswagen Group (VW) [10], which had been using a defeat device in diesel engines to cheat emission tests. The company pleaded guilty and was condemned to pay a high fine, and the CEO and other former executives were sentenced to prison. Meanwhile, other main players, such as FCA, PSA, Nissan, Renault, Daimler, Ford, and Suzuki, have been caught carrying out, or suspected of, similar practices. In the same vein, three of the main German car manufacturing groups (Daimler, BMW, and VW) have been accused of colluding to restrict the rollout of emission-cleaning technology, and the long-time CEO of Renault and Nissan is currently facing legal problems. Indeed, most of the top car manufacturers have been related, in one way or another, to unethical practices, especially over the last five years.
The related body of literature provides evidence of the influence of CEO discourse and moral reasoning on CSR and on overall company ethical values, which may be decisive in avoiding the questionable practices and scandals affecting sectors such as the automotive industry. However, there is a scarcity of research focused on assessing CEOs' moral reasoning in their discourse in such sectors. This paper aims to fill this gap. Through the analysis of letters written by CEOs in annual and sustainability reports, our research strives to provide a diagnosis of the moral reasoning of the CEOs leading the main automotive firms over recent years, of the extent to which their moral reasoning is evolving and being redressed, of how diverse that evolution is depending on different factors, of the relationship between moral reasoning and ethical behavior and scandals, and of the degree to which scandals and other issues influence and shape the discourse of CEOs. In connection with this, we introduce a new concept--"tone 'into' the top". For such purposes, several hypotheses are established and tested. We also provide clues to enhance the performance of top managers and open new lines of research.
This paper is structured as follows: After this introduction, Section 2 provides a review of the relevant literature and develops the research hypothesis. Section 3 explains the data and methodology applied in our research. Section 4 presents the results and discussion. Finally, conclusions are shown in Section 5.
## 2 Literature Discussion
### The Role and Influence of CEOs
The literature emphasizes the role and influence of CEOs from different perspectives in terms of an organization's core values and decision-making processes, stakeholders and society, CSR policies, etc. The CEO is the most important leader of a company as they play a central role in top management [11]. Senior management has the potential to create mental settings in their organization by embedding their beliefs, values, and assumptions in their organizational culture, and CEOs have gained power and influence over the years [1]. Schein [12] stated that leaders play a key role in shaping and controlling organizational culture. Leader behavior influences the ethical culture of an organization [13; 14; 15]. Leaders represent relevant role models and guides for their followers [4], and followers tend to imitate their leaders, whether their influence is good or bad [16; 17; 18]. Leader's ethics shape their workplace decision-making processes [19; 20; 21].
The influence of CEOs is not just circumscribed to their organization. They are exposed to the stakeholders and society in general, and they fulfill a promotional function for the company [22]. Apart from their obvious role in transmitting the image of an economically successful company, the need to present their companies as socially responsible has increasingly grown in the last decades. The impulse from top to bottom and sustainability communication are two of the key success factors identified by Colsman [23] on the implementation of a corporate sustainability program. CSR has become a strategic tool for CEOs [24]. CSR is strongly influenced by top-level managers [3], while the CSR engagement of companies positively influences stakeholders' attitudes and behavioral intentions as well as their corporate image and reputation [25]. Socially responsible organizations are perceived as ethical [26]. Dennis et al. [27] stated that CEOs engage in philanthropy because they want to obtain legitimacy from influential stakeholders and make society a better place. Moreover, Connor [28] showed the importance of leaders in a company in the process of gaining, maintaining, and rebuilding trust, while Wang and Wanjek [29] explained the managerial implications of handling the post-crisis reputation of the Volkswagen emissions scandal. In some cases, CEOs may also promote greenwashing practices that are not necessarily successful in achieving their purpose [30]. Hence, the literature has widely recognized CEOs' strong leverage over their own organizations, stakeholders, and society.
### Moral Reasoning of CEOs
#### 2.2.1 The Concept and Its Implications
Cunningham [31] defined the tone at the top as the shared set of values in an organization emanating from the most senior executives, which creates an unwritten cultural code. Mahadeo [32] describes tone at the top as "the ethical (or unethical) atmosphere created in the workplace by the leadership of an organization". Amernic et al. [1] highlight that the tone at the top offers clues on how CEOs project themselves to stakeholders. The concept "tone at the top" will sometimes be referred to in this paper as the "moral tone at the top" to reinforce the aspect of morality or ethics in its definition.
The importance and usefulness of assessing the moral tone at the top is broadly reflected in the literature. It has a critical influence on the work environment, integrity, values, moral principles, and competence of employees [1,33]. Cheng et al. [34] concluded that a leader's ethics influence their behaviors. Research such as that by Avolio and Gardner [35] or Brown and Trevino [36] proposes that ethical leader behavior brings a positive outcome to a CEO's performance.
Thoms [37] concluded that ethical integrity in leadership is directly linked to the organizational moral structure and found a correlation between highly ethical management and business success. Along the same vein, Johnson [38] found that ethical leadership improves organizational performance and profitability. Shin et al. [39] and de Luque et al. [40] showed evidence that ethical leadership enhances organizational performance. Tourish et al. [41] suggested that the tone at the top could be one of the key factors in leadership's contribution to a company's success. D'Aquila and Bean [42] provided evidence on how leaders are able to foster ethical decisions or, on the contrary, to encourage unethical responses.
Several studies link CEO ethical leadership to the ethical climate and cultural enhancement [43,44], and even to the improvement of a firm's performance under the conditions of a strong corporate ethics program [45]. Moreover, Akker et al. [46] (p. 116) established that \"the more leaders act in ways followers feel is the appropriate ethical leader behavior, the more that leader will be.\" In addition, Spraggon and Bodolica [47] offered, through their research on relational governance and emotional self-regulation, an interesting explanation of how moral reasoning may shape governance mechanisms and help to better understand the decision-making process. The assessment of the moral reasoning of CEOs is a direct tool to assess the tone at the top, to the point that it is often used indistinctively in literature [6].
In all, Staicu et al. [48] (p. 81) concluded that the "tone at the top describes and influences the general business climate within an organization via ethical or non-ethical decision making performed by the top, and determines to some extent, in turn, the ethical behavior of all the people forming that organization". They also inferred that the culture and behavior in an organization can be shaped by setting the proper tone at the top in order to steer employees in the same and proper direction, and they presented evidence of how a poor tone or moral failure at the top may have a decisive influence on the crisis and collapse of companies, the latter also supported by Arjoon [49] and Argandona [50].
In order to emphasize the transfer of values into the organization and the environment by the tone at the top, some authors have coined the term \"tone 'from' the top\" [48,51]. Therefore, the moral tone at the top or the moral reasoning of CEOs is of particular interest, especially in terms of its practical implications, as a shaper of values and behaviors across an organization, as a tool to predict moral behaviors leading to right or wrong decision-making, and ultimately, as a key factor in a company's success or failure.
An assessment of the moral tone at the top and an understanding of its implications may help CEOs to consider engaging in programs to enhance their moral reasoning levels. Weber highlighted the importance of ethical education training for such a purpose, and further studies have reported conclusive, beneficial results for students, even following short programs.
The moral tone at the top gains even more relevance when a company's performance is contested. People become more aware of ethical concerns when scandals emerge, and these put the company's reputation at serious risk [55]. Beelitz and Merkl-Davies examined the use of CEO discourse to restore legitimacy after a nuclear power plant incident. Amernic et al. [1] linked the major crises of companies to a dysfunctional tone at the top. Greenwashing practice is a clear example of a dysfunctional tone at the top, and it may negatively influence the whole organization to engage in unethical practices. Recent scandals in the automotive sector, as a clear consequence of unethical behaviors, bring ethical concerns into even more focus and strengthen the interest in assessing the moral tone at the top, particularly in this sector.
#### 2.2.2 Assessing the Moral Reasoning of CEOs: Weber's Method
Different methodologies have been developed in the literature to assess the moral reasoning of CEOs. For our research, we apply the method proposed by Weber [57], who adapted Kohlberg's stages of moral development theory to the business organization context to enhance the predictability of managerial ethical behaviors.
Kohlberg's is one of the leading theories in the cognitive moral development field. Pettifor et al. defined moral reasoning or moral judgment as the way in which individuals determine whether a course of action is morally right, for example by evaluating different avenues of action and taking ethical principles into account when defining their position on an ethical issue.
Moral reasoning is positively related to moral behaviors and is necessary for moral decision-making. Kohlberg's theory aims to explain human reasoning processes and how individuals tend to evolve to become more advanced in their moral judgments; it considers moral reasoning a major element of moral or ethical behavior. The theory, originally conceived by the psychologist Jean Piaget and further developed and enhanced by Lawrence Kohlberg and his associates, holds that moral reasoning has six identifiable development stages, each more adequate at responding to moral dilemmas than its predecessor. This stage model groups the stages into three levels of morality: pre-conventional, conventional, and post-conventional. Each level contains two stages, with the second stage of each level representing a more advanced and organized form of reasoning than the first. An overview of this model is described herein:
1. Pre-conventional level: Individuals show an egocentric orientation toward satisfying personal needs, ignoring the consequences that this might entail for others. Their obedience to the norms (laws and regulations) established by the authority is basically motivated by the avoidance of punishment (stage 1) or by reward and the exchange of favors (stage 2).
2. Conventional level: Individuals adhere to commonly held societal conventions, contributing to the system's maintenance and the preservation of social order. More attention is paid to achieving interpersonal harmony and improving relationships, creating a consensus-based culture in the workplace, living up to the expectations of the group, and fulfilling mutually agreed obligations. In comparison to the pre-conventional level, individuals move from a selfish approach to one concerned with others. Stage 3 is based on other people's approval, circumscribed to a workgroup, circle of friends, etc., where the main motivation is fear of authority and social condemnation. Stage 4 extends to actions evaluated in terms of laws and social conventions; compliance with society as a whole, and not only with the closest group, gains relevance.
3. Post-conventional (principled) level: Individuals make judgments about right and wrong based on their principles. Although these are not shared by the majority, moral autonomy is achieved. At stage 5 of the principled level, also known as the "ethics of social contract", the behavior of an individual is determined with respect to individual rights, and laws are seen as flexible tools for improving human purposes. Exceptions to certain rules are possible provided those rules are not consistent with one's personal values, with individual rights and majority interests, or are considered to be against the common good or well-being of society. Laws or rules that are not consistent with the common good are considered morally bad and should be changed. Individuals at this stage pursue "as much good for as many people as possible", as agreed by the majority. Stage 6, named the universal ethical principle orientation, is identified as the highest stage of functioning and features abstract reasoning and the universality of ethical principles. The perspective not only of the majority but of every person or group potentially affected by a decision is considered.
Kohlberg's stages of moral development theory has been questioned and criticized by different researchers [70,71,72,73], criticism that, in some cases, helped Kohlberg and other researchers shape and improve the theory [67]. Furthermore, McCauley et al. [74] and Peterson and Seligman [75] brought up arguments in favor of this model, relating the impact of leaders' moral development to their managerial performance. Moreover, it could be argued that moral reasoning might not necessarily lead to moral behaviors. However, a correlation was found between how someone scores on the scale and what their moral behavior is like, with people at higher levels behaving more responsibly, consistently, and predictably [76]. In fact, many authors have based their research on Kohlberg's theory in recent years for different purposes, among them Kipper [77], Doyle et al. [78], Morilly [79], Weber [6], Hoover [80], Franklin [81], Daniels [82], Lin and Ho [83], Galla [84], Hyppolite [85], and Chavez [86].
Moreover, the literature relates moral reasoning to leadership performance. Turner et al. [87] concluded that managers who scored at higher levels of Kohlberg's moral reasoning scale displayed greater evidence of transformational leadership behaviors. Orth et al. [88] found that leaders tend to improve their ability to exercise emotional self-control as they approach the highest level of moral reasoning, and this emotional self-control is a key ingredient for achieving success [89]. However, as Caniels et al. [90] highlighted, the different stages of moral reasoning should not be regarded as mutually exclusive, but as cumulative sets of governance tools that develop as a manager moves up the moral reasoning ladder. Furthermore, as Spraggon and Bodolica [47] propose, the moral reasoning level shown by a CEO is indicative of the type of governance mechanisms used, and a manager's higher level of moral reasoning may be complemented by lower levels.
As mentioned earlier, Weber [57] devised an adaptation of Kohlberg's method to the business organization context. While Kohlberg intended to assess the development of moral reasoning from childhood to adulthood, Weber empirically tested an adapted method that eliminated the needless aspects that could hinder the achievement of results when applying the method to the measurement of managers' moral reasoning.
This comprehensive adaptation of an abbreviated scoring guide, presented in the methodology in Section 3.2 (Table 2), enables a simpler yet reliable system for analyzing written content to evaluate and categorize CEOs' moral language into one of the moral development stages defined by Kohlberg. Weber did not find a significant difference in the results or reliability when applying this simplified method.
Weber [6] applied this adapted method to measure the moral reasoning level of CEOs in the automotive industry with interesting results that will also be contrasted with ours as part of our research. Kipper [77] also applied this adapted method to a different context with relevant conclusions. In all, we consider this method to be the most appropriate for the purpose and scope of our research.
#### 2.2.3 Moral Reasoning in CEOs' Letters
The CEO's letter is the most read section [91; 92] and one of the most important parts of a company's annual report [93; 22; 94], which is normally included at the beginning of the report. It intends to offer a broad overview of the company's performance throughout a year, including additional financial, but also non-financial, explanations, interpretations, expectations, and future objectives, with a promotional function by conveying a positive image of the company [22]. It sometimes falls into greenwashing practices [95] and triggers decision-making on investments or funding. To put it simply, a CEO letter aims to inform and to persuade.
Trevino et al. [96] stated that the notion of a moral manager is based on three concepts: modeling through visible actions, the use of rewards and discipline, and communicating about ethics and values. The CEO's letter has a relevant role in sustainability communication, which is one of the main success factors in CSR. The related body of literature recognizes the CEO's letter as a rich source for investigating CSR and acknowledges the relevance of its rhetoric in communicating values [6,97,98,99,100,101]. CEO discourse is also used to gain legitimacy, credibility, and trust from stakeholders [56,97].
CEO discourse is an attempt at creating shared meanings and cultures [1]. The semantics used by CEOs may reveal important aspects of the CEO's leadership-through-language [103] and are expected to reflect or underlie the ethical components of decision-making processes [4]. The CEO's letter is indeed a valuable tool to assess the mindset, values, and ethical aspects of management [1,6,104,105].
The publication of the CEO's letter is voluntary, and its structure, information content, or rhetoric is not subject to predetermined rules [102]. Therefore, the moral tone at the top may naturally emerge, bearing in mind the strict scrutiny of financial analysts, shareholders, regulators, and journalists as the main constraints [106], as well as society above all.
Whether the CEO actually writes the letter in full or with assistance, it is a written document signed by the CEO. Thus, the CEO takes responsibility for a public and accessible document that stakeholders expect every year [1,6]. The CEO's letter is, therefore, a valuable source for assessing the moral reasoning of CEOs.
### Hypotheses
The CEO's letter is the most read section of annual reports [91; 92; 94] and is a means to gain legitimacy, credibility, and trust [97]. A proper tone at the top allows gaining, maintaining, and rebuilding reputation and trust [28]. Thus, we might expect leaders to use this valuable tool for this purpose by showing higher ethical values. Moreover, leaders showing higher ethical values are more prone to represent salient ethical role models for their followers and to attract their attention [107].
Nonetheless, according to Spraggon and Bodolica [47], CEOs cannot stop being individual human beings and members of an organization, so the adoption of higher ethical values does not imply that they abandon the part of their moral reasoning rooted in lower stages (i.e., part of the motivation still being to follow the rules), which is also needed for the governance of the company. Schwartz et al. [108] concluded that ethical and legal obligations (more related to lower ethical levels) are not mutually exclusive but reinforce each other. However, higher levels could be expected to become more predominant as they grow more influential on CEOs' motivations, while lower levels could be expected to be less emphasized or to remain underlying in their discourse. By reaching higher levels of moral reasoning, we argue, CEOs will still keep satisfying their needs as individuals, but this will be increasingly based on the satisfaction of succeeding at offering benefits to society. Therefore, Hypothesis 1 is stated as follows:
**Hypothesis 1 (H1)**.: _CEOs in the automotive industry tend to show an increasing predominance of higher levels of moral reasoning over the years._
According to the stakeholder-driven principle, CSR is seen as a response to external pressure and scrutiny from stakeholders [109]. The automotive industry, by its own nature, is subject to above-average exposure to society [6] and to tremendous pressure and scrutiny from society to behave more responsibly and become more sustainable, even more so after recent scandals. When scandals emerge, the reputation of companies is put at risk [55,110,111]. Cagle and Baucus [112] stated that scandals in business tend to improve moral reasoning, since individuals feel that they cannot ignore the ethical aspects [113]. Scandals in this globalized sector create a scenario of "high moral intensity" in which moral or ethical considerations gain weight in the decision process [114]. People invest more time and energy in situations of high moral intensity and use more sophisticated moral reasoning [114,115]. The automotive sector, owing to its intrinsic characteristics, can be considered a paradigm of the high-moral-intensity scenario, even more so when scandals or conflicts occur. CEOs could also be expected to pretentiously show higher ethical values and fake commitment to sustainability by adopting greenwashing postulates [95,116]. Hence, the authors expect CEOs in the automotive industry to react and shape their moral reasoning towards higher stages with the aim of recovering their reputation and the trust of stakeholders. Therefore, Hypothesis 2 is stated as follows:
**Hypothesis 2 (H2)**.: _When a company is affected by a scandal, its CEO will be more prone to reacting and shaping their message to show a higher level of moral reasoning._
Furthermore, Christensen and Kohls [117] proposed that during a crisis in a company, individuals with a higher level of moral reasoning show greater capacity to make the right ethical decision. Weber [57] predicted that the assessment of moral reasoning of managers could lead to greater predictability of managerial and organizational ethical behaviors. Further research follows the same line [1; 33; 42; 48; 118]. Amernic et al. [1] linked major crises of companies to a dysfunctional tone at the top. This takes us to our third hypothesis:
**Hypothesis 3 (H3)**.: _When a firm is affected by a scandal, it is more likely to be preceded by CEO moral reasoning at lower stages._
Likewise, the institutional theory states that political, educational, and cultural factors influence the CSR approach of companies [119]. Gatti and Seele [98] provided evidence of such influence, but on the other hand, exposed common CSR trends among companies from the same market sector. Paul [120] stated that leading companies are expected to establish practices and norms that other firms might be likely to follow.
Studies indicate that changes are related to adapting to trends, especially in terms of society's expectations about the behaviors of firms and the evolution of their economic performance [121]. For example, Fehre and Weber [122] found that the CEOs of German-listed companies talked less about CSR, including social issues, in times of crisis.
In the automotive market sector, with its persistent and growing globalization, increasingly confluent ethical behaviors could be expected over the years, in spite of the political, educational, or cultural factors of different countries or continents, or even of factors related to the CEO's personality or background. Therefore, our fourth hypothesis is stated as follows:
**Hypothesis 4 (H4)**.: _CEOs in the automotive industry are more likely to evolve over the years into a more uniform level of moral reasoning with a lower influence of factors stated by institutional theory._
## 3 Method
This section presents the data and process followed to test the hypotheses stated in the previous section.
### Data
We analyzed the moral language used in the CEOs' letters included in the annual sustainability or social responsibility reports from the top 15 automotive companies involved in vehicle production in the world during 2017 (Table 1), which was the most current ranking found at the beginning of the research. The reports are publicly available on their websites, thus available for public review and assessment. The sample provides comprehensive data from companies from America, Europe, and Asia, all of them being global players.
The time frame criteria were to select the latest available material for each of the chosen companies by using the available annual reports from 2013 to 2018, which included the two years before the last wave of scandals started to unfold. The material that served as a basis for this research consists of 90 letters. It was important to analyze the letters from the same period to ensure they were issued under the same circumstances to be equally comparable. Thus, we were able to compare different companies from the same sector under the same global context.
To our knowledge, this is the first study to collect data from such an extensive period of time and such a wide range of companies using the methodology described below. On the one hand, by examining a period of several years, we were able to better appraise and modulate the overall tone of each CEO, as well as to sense any trend or pattern. On the other hand, we could draw on the previous research carried out in the same sector by Weber [6], with which we are able to compare our results and findings and observe the evolution of the morality shown by CEOs from an even wider perspective.
In order for a CEO's letter to be considered in our study, firstly, it had to be clearly written or dictated by the top management. Secondly, it had to be written in a first-person style (using "I", "our", "we"), so letters written in merely descriptive terms were discarded. No letter had to be discarded due to these constraints. Most letters included a picture of the CEO, which further reinforced the idea that they were transmitting their own discourse. Table 4 in Section 4.1 compiles the list of letters and the management signing them.
### Analysis Methodology
As discussed in the literature review, we used Kohlberg's stages of moral development theory, as adapted to the business context by Weber [57], as the basis for our research and conducted a deep assessment of the selected CEO letters through close reading. We applied an iterative process with a qualitative and interpretive approach based on the cycle "individual deep review-joint discussion-joint confirmatory analysis". The diverse backgrounds of the authors granted various perspectives when analyzing the texts and enhanced the interpretive process in comparison to a single-perspective approach.
**Table 1.** Ranking of companies involved in vehicle production worldwide in 2017.

| Rank | Company | Country | Approx. Number of Vehicles Produced (Millions) |
|------|---------|---------|------------------------------------------------|
| 1 | TOYOTA | Japan | 10.5 |
| 2 | VOLKSWAGEN | Germany | 10.4 |
| 3 | HYUNDAI | South Korea | 7.2 |
| 4 | GENERAL MOTORS (GM) | USA | 6.9 |
| 5 | FORD | USA | 6.4 |
| 6 | NISSAN | Japan | 5.8 |
| 7 | HONDA | Japan | 5.2 |
| 8 | FCA (FIAT-CHRYSLER) | Netherlands/Italy | 4.6 |
| 9 | RENAULT | France | 4.2 |
| 10 | PSA | France | 3.6 |
| 11 | SUZUKI | Japan | 3.3 |
| 12 | SAIC | China | 2.9 |
| 13 | DAIMLER AG | Germany | 2.5 |
| 14 | BMW | Germany | 2.5 |
| 15 | GEELY | China | 2 |

Source: OICA (International Organization of Motor Vehicle Manufacturers).
The coding process was carried out in four steps, following the criteria of Weber and Krippendorff. Firstly, one author prepared a matrix in which the rows listed the sentences appearing in each CEO's letter and the columns listed the indicators for detecting the stages in those letters (see Table 2). In the second step, the authors marked in the matrix the indicators found in each letter. In the third step, coincidences and discrepancies in the coding of the researchers were checked. In single sentences where only one stage was evident, the coincidences between researchers accounted for 100%. Even so, these codes were revised again to ensure both reliability and validity, confirming that the coding coincided and was assigned to the correct stage. However, doubts arose in those sentences in which two or three stages could be characterized, where coincidences were around 75%. For this reason, the last step consisted of re-analyzing these sentences. To do this, the sentences were divided into sections and then checked by the three authors to define which stages actually appeared in the narrative. In this way, coincidence in these codes was also achieved, assuring both reliability and validity. Examples of the final coding are shown in Table 3.
**Table 2.** Abbreviated guide for identifying the stages of moral reasoning in CEOs' letters (adapted from Weber [57]).

| Stage | Overall Description | Further Explanation | Indicators or Clues in Letters |
|-------|---------------------|---------------------|--------------------------------|
| #1 | Concern over the consequences of personal harm | | Seeking avoidance of punishment |
| #2 | Concern over the consequences of personal needs | Concern for personal satisfaction; a sense of duty to oneself | Focus on self-performance or business; ambition for company or CEO success; ambition to create or bring value or opportunities for the company |
| #3 | Concern over the consequences to an immediate group | Concern over personal relationships with others; a sense of duty due to how others will perceive me or my actions, or due to the consequences they may have for others | Focus on stakeholders and how the company interacts with them; showing business and CEO integrity and ethical behavior; taking stakeholders' needs into account, creating value, or bringing benefits for them |
| #4 | Concern over the consequences to the larger societal group; a sense of duty to a professional responsibility or group | A sense of duty due to a commitment to a code, oath, or principle; concern for social order and harmony | Explicit commitment, concern, responsibility, or motivation towards society and its norms, international guidelines, agreed principles or conventions, and human rights, beyond immediate stakeholders; commitment to the planet and environmental protection by fulfilling the existing normative and guideline framework |
| #5 | A "social contract" to protect everyone's rights | The greatest good for the greatest number of people affected | Emphasis on ethical behaviors, embedded culture, and core values; personal commitment of the CEO by their own conviction, with proactive initiatives beyond existing norms, guidelines, and conventions that will improve the existing framework |
| #6 | Universal principles of justice and fairness | Universal laws governing behaviors and superseding society's laws | Emphasis on ethical behaviors, embedded culture, and core values; personal commitment of the CEO by their own conviction, with proactive initiatives |
**Table 3.** Examples of moral reasoning assessment carried out from CEOs' (chief executive officers') letters.

**Examples of Stage 1:** Not found.

**Examples of Stage 2:**
- "Above all, new trends and new technologies ultimately mean one thing: new business opportunities." (Volkswagen, 2014). This sentence refers to Stage 2, as the main focus is on business performance.
- "By including sustainability considerations in all our business decisions, we create added value for the company." (BMW, 2014). This sentence refers to Stage 2, as the ultimate motivation of the sustainability considerations is creating value for the company.
- "We believe that sustainable action makes our business model more competitive and secures our company's future growth" (BMW, 2014) and "The sustainability of our performance, in terms of growth and profit, will be the main objective." (Renault, 2016). These two sentences refer to Stage 2, as the focus is on self-performance.

**Examples of Stage 3:**
- "Everyone at VW is working most diligently and with great commitment to rebuild the high esteem this Group rightly enjoyed for so long." (Volkswagen, 2015). Stage 3: cleaning up the company's image, trying to recuperate trust, and focusing on how others perceive the company.
- "When we talk about openness, we also mean that we intend to pay even greater attention to how our stakeholders, as well as outside experts, view our work." (Volkswagen, 2016). Stage 3, as it focuses on stakeholders and how they perceive the business.
- "Over the last year, Volkswagen also substantially extended the Company's voluntary commitment to behave ethically and with integrity." (Volkswagen, 2016). Stage 3, intending to show integrity and ethical behaviors.
- "We also aim to offer our employees an inclusive work environment, where everyone feels respected and valued." (FCA, 2018). Stage 3, taking into account stakeholders' needs.

**Example of Stage 2 and Stage 3 combined:**
- "Our ambition is to create lasting value: for the company, its employees and shareholders, but also for the countries and regions in which we operate." (Volkswagen, 2013). This sentence indicates Stage 2 (ambition to create value for the company) along with Stage 3 (creating value for stakeholders and considering their needs).

**Examples of Stage 4:**
- "We want to use our engineering and technological expertise to help solve some of today's most urgent social, environmental, and safety challenges." (Nissan, 2016). Stage 4, concern for society, the environment, and safety.
- "The Volkswagen Group feels committed to sharing this joint responsibility for our planet. Environmental and climate protection are guiding principles of our actions" and "For us as carmakers, climate protection is particularly relevant ... Our goal is emission-free mobility." (Volkswagen, 2018). Stage 4, caring for the planet and environmental and climate protection.
- "We understand that society's expectations of Honda are shifting towards a long term, sustainability-focused perspective. In response to these changes ..." (Honda, 2014). Stage 4, reactive role towards society's expectations.
In Table 3, we provide examples of the moral reasoning assessment carried out on the letters, with an explanation under each extract.
On the other hand, we assessed each letter as a whole with the aim of complementing the first approach. We intended to better identify and screen the CEOs' communicative intentions [124] from the actual message and overall moral tone conveyed. We considered factors such as the degree of repetition or reiteration of certain ideas or messages and the relative weight and emphasis of different contents within the letter, and we evaluated the extent to which certain argumentation undermined or reinforced other contents. For example, we looked for contradictions between well-sounding slogans or mottos and the rest of the discourse. The last step was to share our separate findings and judgments and discuss them to enrich each other's views and reach a final consensus on the overall categorization of the stage(s). Where divergences in interpretation were found, the plan was to discuss them together against the scoring system taken as reference. In general, the degree of coincidence was complete after discussing and complementing individual views. In any case, through this qualitative approach, we were not looking for an exact figure but for the identification of the overall stage(s).
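For illustration purposes only, the inter-coder coincidence check described above can be expressed as a simple computation. The following is a minimal sketch, assuming that each researcher's codes are stored as one set of stages per sentence; the function names and data layout are our own illustrative choices and do not come from Weber's or Krippendorff's instruments.

```python
# Minimal sketch of the coincidence check used in the coding process (illustrative only).
# Assumption: each coder's annotations are stored as a list of per-sentence stage sets,
# e.g. {2, 3} for a sentence coded at Stages 2 and 3.
from typing import List, Set


def coincidence_rate(coder_a: List[Set[int]], coder_b: List[Set[int]]) -> float:
    """Share of sentences for which two coders assigned exactly the same stage set."""
    assert len(coder_a) == len(coder_b), "Both coders must code the same sentences"
    if not coder_a:
        return 0.0
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)


def sentences_to_reanalyze(coder_a: List[Set[int]], coder_b: List[Set[int]]) -> List[int]:
    """Indices of sentences with diverging codes, to be split into sections and re-checked jointly."""
    return [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]


if __name__ == "__main__":
    # Toy example: four sentences coded by two researchers.
    coder_a = [{2}, {3}, {2, 3}, {4}]
    coder_b = [{2}, {3}, {3}, {4}]
    print(f"Coincidence: {coincidence_rate(coder_a, coder_b):.0%}")           # 75%
    print("Re-analyze sentences:", sentences_to_reanalyze(coder_a, coder_b))  # [2]
```

In such a sketch, sentences flagged by the second function would be the ones divided into sections and jointly re-examined, mirroring the fourth step of the process.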
## 4 Results
In this section, we present the results of the evaluation of the CEOs' letters by company throughout the six years of analysis and discuss relevant aspects found in relation to our research questions.
### Letter Assessment
Most letters from the same company presented a similar structure, and even similar content, over the years. It became evident that a template is often used and that certain ideas, slogans, or mottos are repeated, even when the CEO changes.
In most cases, the CEO issued the letter alone. In other cases, the CEO and chairman issued the letter either separately or together and seldom included directors or members of the Sustainability Board. Only one of the CEOs was a woman (GM), and a couple of women co-signed some letters.
In Table 4, we list the name(s) of the management signing the letters and the stage categorization result for each letter. Several of these companies were affected by relevant confirmed scandals during the period. Other companies, such as GM, PSA, Daimler, or BMW, have recently been questioned, but the facts have not been proven or are only starting to be revealed.
**Table 4.** Summary of CEOs and other top executives signing the letters, with stage categorization.

| Company | Management signing the letters | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---------|--------------------------------|------|------|------|------|------|------|
| Volkswagen | Mr. Martin Winterkorn (2013-2014, with Mr. Bernd Osterloh in 2013); Mr. Matthias Müller (2015-2017); Mr. Herbert Diess (2018) | 2/3 | 2 | 3 | 3 | 2/3 | 2/3/4 |
| BMW | Mr. Norbert Reithofer (2013-2014); Mr. Harald Krüger (2015-2018) | 2 | 2 | 2/3 | 2/3 | 2/3 | 2/3/4 |
| Daimler | Mr. Dieter Zetsche, with changing members of the Sustainability Board | 3/4 | 3/4 | 2/3/4 | 2/3/4 | 2/3 | 2/3 |
| FCA | Mr. Sergio Marchionne (2013-2017); Mr. Mike Manley (2018) | 2/3/4 | 2/3/4 | 2/3/4 | 2/3/4 | 2/3/4 | 2/3 |
| PSA | Mr. Philippe Varin (2013); Mr. Carlos Tavares (2014-2018) | 2/3 | 4/5 | 4/5 | 4/5 | 4/5 | 4/5 |
| Renault | Mr. Carlos Ghosn | 2/3 | 2/3 | 2/3 | 2/3 | 2/3 | 2/3 |
| Nissan | Mr. Carlos Ghosn (2013-2016); Mr. Hiroto Saikawa (2017-2018) | 4 | 4 | 4 | 4 | 4 | 4 |
| Honda | Mr. Takanobu Ito, succeeded by Mr. Takahiro Hachigo | 2/3 | 3/4 | 3/4 | 3/4 | 2/3/4 | 2/3/4 |
| Toyota | Mr. Akio Toyoda | 4 | 4 | 4 | 4 | 4/5 | 4/5 |
| Suzuki | Mr. Osamu Suzuki with other members of the board (2013-2015); Mr. Toshihiro Suzuki (2016-2018) | 3/4 | 3/4 | 3/4 | 2/3 | 2/3 | 2/3 |
| Hyundai | Mr. Mong-Koo Chung, Mr. Choong Ho Lee, and Mr. Won Hee Lee (successively) | 2/3/4 | 2/3/4 | 3/4 | 3/4 | 3/4 | no report found |
| SAIC Motor | Mr. Hu Maoyuan (2013, no letter, "Board's discussion"); Mr. Chen Hong (2014-2017) | 2/3 | 2/3 | 2/3 | 2/3 | 2/3 | no report found |
| Geely | Mr. Li Shufu | 2/3 | 2/3 | 2/3/4 | 2/3/4 | 2/3 | 2/3 |
| Ford | Mr. Alan Mulally (CEO) and Mr. William Clay Ford (executive chairman), separate letters in 2013-2014; CEO and executive chairman co-signing from 2015 | 2/3/4 and 3/4/5 | 2/3/4 and 3/4/5 | 3/4/5 | 4/5 | 4/5 | 4/5 |
| General Motors | Mr. Dan Akerson (2013); Ms. Mary Barra (from 2014) | no report found | 3 | 2/3/4 | 2/3/4/5 | 2/3/4 | 2/3/4 |

Source: own elaboration from the analysis of CEO letters.

In Table 5, we compile the results of our research in terms of the moral reasoning scores of CEOs at the beginning and at the end of our period of analysis. For a broader perspective in time, we also include the results obtained by Weber [6], who analyzed the moral reasoning of letters from 2008/2009 using a similar approach. In Figure 1, we represent the frequencies of each stage. We have grouped the results every two years with the aim of reducing the bias related to a particular year.
**Table 5.** Moral reasoning scores grouped every two years.

| Region | Company | 2008/2009 | 2013/2014 | 2017/2018 |
|--------|---------|-----------|-----------|-----------|
| GERMANY | VW | 2 | 2/3 | 2/3/4 |
| GERMANY | BMW | 2 | 2 | 2/3/4 |
| GERMANY | DAIMLER | 4 | 3/4 | 2/3 |
| FRANCE-ITALY | FCA | 2 | 2/3/4 | 2/3 |
| FRANCE-ITALY | RENAULT | - | 2/3 | 2/3 |
| FRANCE-ITALY | PSA | 2/3 | 2/3 | 4/5 |
| JAPAN-KOREA | NISSAN | 2/3 | 4 | 4 |
| JAPAN-KOREA | HONDA | 4 | 2/3/4 | 2/3/4 |
| JAPAN-KOREA | TOYOTA | 3/4 | 4 | 4/5 |
| JAPAN-KOREA | SUZUKI | - | 3/4 | 2/3 |
| JAPAN-KOREA | HYUNDAI | 2/3 | 2/3/4 | 3/4 |
| CHINA | SAIC | - | 2/3 | 2/3 |
| CHINA | GEELY | - | 2/3 | 2/3 |
| USA | FORD | 2 | 2/3/4/5 | 4/5 |
| USA | GM | 2 | 2/3/4 | 2/3/4 |

Source: own elaboration from the analysis of CEO letters.
### Discussion of Joint Results
#### 4.2.1 Introduction
We observed a good number of changes in CEOs over our period of analysis, some of them as a result of scandals (Volkswagen, Suzuki). During the period 2013-2018, only three companies out of 15 kept the same CEO, whereas six out of the 15 companies kept the same CEO in the period 2009-2013.
During our period of analysis, Nissan was controlled by Renault, and the letters of both Renault and Nissan from 2013 to 2016 were signed by Carlos Ghosn, which offered us a singular opportunity to compare the moral tone shown by the same manager in two companies. Remarkably, we found different moral reasoning stages when comparing Nissan with Renault. In spite of the emphasis on, and reiteration of, good intentions in Nissan's letters over the years, we found a couple of extracts, one from a 2015 letter and one from 2014, partly at stage 2. In both paragraphs, we were able to identify the nearly hidden motivation of self-interest that was more clearly and consistently evidenced in Renault's discourses.
Regarding Ford, during the years 2013 and 2014, the CEO and the chairman presented their letters separately, which gave us a good opportunity to compare their moral reasoning under the same context. From 2015 onwards, coinciding with the beginning of scandals, the letters were co-signed by both the CEO and the chairman.
#### 4.2.2 Hypothesis 1
As stated by Spraggon and Bodolica [47], the adoption of higher ethical values does not necessarily imply the absence of lower stages. The consequences of the financial crisis were still latent, especially during 2013, although gradually losing impact, which could explain the evolution of stage 2. In any case, we might interpret a trend towards higher stages of moral reasoning. At present, none of the management analyzed showed moral reasoning purely at the pre-conventional level (stage 2), which was the case for nearly 50% of companies in 2008/2009. It can be argued that 40% of companies still showed stages not above 2/3 in 2017/2018, but this percentage was above 70% in 2008/2009. If we look at the higher stages, there was a noticeable increase in CEO reasoning at stage 4, especially between 2008/2009 and 2013/2014. Furthermore, in 2013/2014, we started to see some glimpses of stage 5, while in 2017/2018, nearly 30% of companies were consistently showing the post-conventional level of moral reasoning. Thus, in relation to the first hypothesis:
**Hypothesis 1 (H1).**_CEOs in the automotive industry tend to show an increasing predominance of higher levels of moral reasoning over the years._
Figure 1: Frequencies of moral reasoning scores grouped every two years. Case 1: 15 companies from our research. Case 2: 11 companies from Weber's research [6]. Source: own elaboration.
While some companies evolved positively (PSA, Toyota, Ford...) and others remained steady (Renault, Saic, Geely...) or even experienced some setback (Daimler, Suzuki...), we found, on average, as predicted, a generally positive trend in the sector, progressing towards higher conventional and even post-conventional levels, thus contributing to gaining reputation and trust [28]. Therefore, H1 can be confirmed.
From the results, while testing this first hypothesis, we found relevant implications. Taking into account the evidence found in the literature of the positive effects of highly ethical management [37; 38; 39; 40; 41], managers could track the evolution of their own moral reasoning scores over time and compare them with those of their competitors as well as become more aware of their transmitted ethical level, also considering their influence on ethical decisions [42] and on building an ethical climate and enhancing their company's culture [43; 44; 48].
#### 4.2.3 Hypothesis 2
Moreover, concerning our second hypothesis:
**Hypothesis 2 (H2)**.: _When a company is affected by a scandal, its CEO will be more prone to reacting and shaping their message to show a higher level of moral reasoning._
We found that the context may clearly influence the discourse and moral reasoning of CEOs. Immediately after the Volkswagen scandal, the new CEO eliminated from his discourse any mention of business performance (Stage 2) and showed an increased emphasis and focus on stakeholders, particularly in terms of highlighting the integrity of the firm, trying to keep or regain reputation, intentional greenwashing [95], and seeking customer trust and loyalty (Stage 3). Letters are used intentionally for such purposes, as proposed by de-Miguel-Molina [97] or Connor [28], since after a scandal, the reputation of a company is put at risk [55]. In the case of Suzuki, the subsequent change in chairman also involved some increase in moral reasoning, whereas in the cases of Nissan and Renault, the moral tone remained at a similar stage. The change was probably not noticeable in the latter cases due to the lower repercussions of the issue (Renault) or because moral reasoning was already at a relatively high level (stage 4, Nissan). Thus, we may conclude that H2 tends to be fulfilled to a greater or lesser degree depending on the magnitude of the scandal and the level of moral reasoning prior to the scandal. Through this second hypothesis, we bring new evidence on the reactive role of CEOs in shaping their discourse and showing higher levels of moral reasoning to different degrees depending on the intensity of the scandal, with a greater influence when coming from lower scores.
We could also infer that the financial crisis influenced the moral reasoning shown at different periods in time. For example, especially during 2013, with the echoes of the crisis, we found a remarkable emphasis on business performance and economic results, which, in most cases, has lost weight over the years or even disappeared.
#### 4.2.4 Hypothesis 3
In relation to our third hypothesis:
**Hypothesis 3 (H3)**.: _When a firm is affected by a scandal, it is more likely to be preceded by CEO moral reasoning at lower stages._
At first sight, there was no clear relationship between the moral tone at the top and the appearance of scandals, at least when we looked at the short or immediate term.
When analyzing the three companies with the highest moral reasoning scores (stages 4/5) at present, we only found Toyota to be free of suspicion of unethical practices, whereas PSA's emission tests were contested during 2017 (facts not proven), and Ford has recently been under investigation for some anomalies in the modeling of emission tests. Thus, higher stages of moral reasoning do not necessarily guarantee or involve ethical behaviors, or at least this is hard to prove under such an approach.
However, when looking with a greater perspective, we can see that Toyota was the only one scoring relatively and consistently high in terms of moral reasoning, both 10 and 5 years ago, whereas Ford and PSA came from lower stages. The same happened with FCA, which had low scores years ago. It could be argued that lower stages of moral reasoning may have an impact on the organization over time, and this footprint may appear or be revealed a while later.
The evidence in the case of VW is even clearer. We easily identified consistently lower stages of moral reasoning of the CEO during the years prior to the scandal. During the two years immediately after the scandal, the moral reasoning was mostly focused at stage 3, eliminating self-focus, in order to show integrity and with the will to win back the stakeholders' trust. Since 2017, the discourse has resumed its focus on business performance while keeping its attention on stakeholders. This is an example of how circumstances may shape the tone at the top. Nonetheless, the moral reasoning of CEOs is a relevant factor that should be assessed along with other elements to predict a crisis or scandal in a company.
Further evidence can be inferred from the case of the alleged collusion to restrict the rollout of emission-cleaning technology affecting the German companies in our research, since we observed that they come from relatively low CEO scores over the years.
In all, in line with previous research [1; 48; 57], we may confirm our third hypothesis, with the additional consideration that the CEO's moral tone footprint endures over time.
In general, it is clear that the evolution of the moral reasoning of CEOs has not been effective or quick enough to prevent the new scandals that have loomed over previous years and continue to loom at present.
This third hypothesis confirms a connection between the lower levels of moral reasoning of CEOs and scandals and may help them become more aware of the importance of showing higher ethical values to influence the organization positively, in line with Staicu et al. [48], and to prevent scandals and crises, as predicted by Amernic et al. [1].
#### 4.2.5 Hypothesis 4
Finally, with regard to our fourth Hypothesis:
**Hypothesis 4 (H4).**_CEOs in the automotive industry are more likely to evolve over the years into a more uniform level of moral reasoning, with less influence of factors stated by institutional theory._
Here, the hypothesis cannot be confirmed. As shown in Table 5, comparing 2013/2014 vs. 2017/2018 and looking at companies individually, we found great gaps in the scores, from 2/3 to 4/5, without clear geographical or cultural patterns that could explain them. There are also significant differences among regions, with China showing consistently lower scores, which continue to the present day. On average, the European and American companies are evolving towards higher levels, while Asian companies remain steady, although this is hard to assess since there are no clear patterns when looking individually at each company.
As per the evidence found, in some cases a change in CEO involves a clear change in the moral tone at the top (PSA from 2013 to 2014, BMW from 2014 to 2015, FCA from 2017 to 2018, Suzuki from 2015 to 2016). In other cases, either the companies themselves are more influential in shaping the discourse of the CEO, or the CEO adapts their discourse to the corporate culture and values of the company. Carlos Ghosn, being the CEO of both Nissan and Renault during the period 2013-2016, is the most striking case, showing different stages of moral reasoning depending on the company. In addition, companies like Hyundai, Nissan, or Honda presented changes in CEO without noticeable changes in their tones, at least in the short term. In the case of Honda, the moral tone at the top only remained unchanged for two years. Finally, the context (i.e., the VW scandal) alters the discourse and the moral reasoning, although in this case it may be circumstantial, lasting for a limited period of two years in the case of VW.
Under this background, it is difficult to foresee a more uniform level of moral reasoning, although, as seen before, there is a certain general trend towards embracing higher stages.
Moreover, we found several examples showing that when letters are co-signed by other members of the top management, the moral reasoning score is higher. The co-signed letter of VW 2013 showed a balanced combination of stages 2 and 3, while one year later the letter signed by the same CEO alone was merely at stage 2. Something similar happened in the case of Suzuki. During the years 2013 to 2015, the letters were co-signed and the score was at stages 3-4, whereas from 2016 onwards, coinciding with a change in CEO, the letters were found to be at stages 2-3 only. In Suzuki's case, the mere fact that the new CEO removed other members of the top management from the letter might be indicative of a more individualistic behavior, and so it could also explain the lower level of moral reasoning shown. In the case of Ford, from 2015 onwards, coinciding with the change of its CEO, the letters are co-signed by the CEO and the chairman, whereas previously they wrote separate letters. When written together, the moral tone became assimilated to that of the signer who had shown a moral tone at higher stages. Again, this could be explained by a less individualistic approach of the CEO and chairman.
In all, the outcome obtained through the testing of our last hypothesis is that, in spite of extended globalization and the trend that leading companies set practices and norms that other firms might follow, as noted by Paul [120], factors such as those stated by institutional theory appear to carry relevant weight in terms of the moral reasoning of CEOs, as found by Matten and Moon [119] in relation to the CSR approach of companies.
## 5 Conclusions
This paper aimed to assess the moral reasoning trends of CEOs in the automotive industry by gauging their relation to ethical behaviors and scandals, as well as analyzing the influence of scandals and other factors on the CEOs' moral reasoning. For this purpose, we applied Weber's method to the CEO letters in the annual reports of the top 15 automotive companies in vehicle production in 2017 for the period between 2013 and 2018. After the introduction, we developed an extensive literature review that led us to our research hypotheses and methodology. Then, we carried out an assessment of the moral reasoning stages of each letter and dissected and analyzed the outcomes.
From the results obtained, we may infer several relevant conclusions and findings. The first one is that, at present, most top automotive company CEOs are operating at the conventional level of moral reasoning, with some of them getting closer to the desirable post-conventional reasoning level, although nearly half of them have still not reached stage 4.
Secondly, when compared with the results of Weber's earlier study [6], we observed a certain trend towards attaining higher stages of CEO moral reasoning, which should be a positive sign, although this was quite variable depending on the individual companies. Contrary to the results from Weber and Gillespie [125] and Weber [6] that placed most firm managers at stages 2 or 3 only, at present the corresponding percentage of our sample has dropped to 40%. If we remove the two Chinese companies from the equation, the results are even more encouraging. Furthermore, the overall positive trend could lead to further positive global feedback, since leading companies, such as the automotive ones, may be expected to establish practices and norms that other firms might be likely to follow [120].
However, the scandals and suspicions globally affecting the automotive sector show that there is still some way to go, and further complementary approaches should be considered when seeking a way to predict the ethical behaviors of companies or foresee potential problems. It appears evident that the rise in moral tone apparently shown in the letters of CEOs has not resulted in higher ethical behavior in some cases, at least in the short term.
Moreover, as a third important conclusion, we noticed that the moral reasoning shown is not exclusively inherent to the CEO in question but is also influenced by the context (company values, scandals, external pressures, economic situation, etc.), and CEOs may intentionally modulate their discourse and moral reasoning, in line with Hyland [22], i.e., through greenwashing intention [95]. In addition, we found several cases that led us to think about a positive correlation between higher or lower stages of moral reasoning and a higher or lower ethical performance of the company, as predicted in our literature review, especially when adopting a broader perspective in time and not just the short term. It is reasonable to presume that any bad behaviors revealed at present may be the consequence of inappropriate ethical behaviors a while ago. Thus, it is important to consider a sufficient period of time in this type of analysis.
Last but not least, one of the collateral and significant outcomes of this research is that complementary to the extended convention that CEOs are decisive in influencing the culture and values in a company [1; 33; 48], the values embedded in a company may also decisively influence the moral tone of CEOs. This is clearly evidenced in the discourse of Carlos Ghosn, who adopted a different tone in his letters depending on the company he was leading (Renault or Nissan). We could also infer this from the cases where a new CEO did not change the discourse, although this process may be progressive and may need a certain period of time for each context, as evidenced in the case of Honda, where a new CEO started to reshape the discourse only after two years.
When we recall the definition of the tone at the top as a shared set of values in an organization emanating from the most senior executives which creates an unwritten cultural code [31], we can add that such a set of values may be deeply rooted in a company in such a way that the CEO may fit more into them rather than permeating the company with their own values. We argue that the moral reasoning of a CEO might be reshaped to a greater or lesser extent depending on different factors, such as management resilience, empathy, power over the board of directors, time in the company or at management, stakeholders' pressure, etc. It is reasonable to think that there is not a fixed pattern for this interaction, and it will all depend on each context and a confluence of factors.
Following the analogy of \"tone 'from' the top\" [48; 51], we coined the concept \"tone 'into' the top\" to reflect how the organizational core values or factors, such as scandals or crises, may modulate the CEO discourse and moral reasoning shown.
These findings, along with the implications of showing a certain stage of moral reasoning or moral tone at the top, may be taken into account by companies in the sector to seek how to enhance their overall performance and as a criterion to anticipate possible conflicts as well as a tool of assessment when appointing a top manager.
With regard to CEOs, the results may be of interest in order for them to become more aware of their moral reasoning and its consequences as a starting point to improve their own performance and message for the benefit of their companies, stakeholders, and society and, also, to consider the possibility to enroll in education programs for ethical level enhancement, the usefulness of which was noted by Weber [52], among others. Research on the existing moral tone at the top and its relevance may also encourage governments and business schools to promote or reinforce education in business ethics. In this direction, Pandey et al. [126] showed how education in mindfulness may have a positive impact on moral reasoning.
### Limitations and Future Scope of Research
The qualitative assessment carried out involved unavoidable subjectivity and bias, and this obviously may have had an impact on the results. However, it was considered by the authors to be a fair enough way to identify patterns and trends and to obtain revealing conclusions.
This research was based on the written communication of CEOs and not their performance. Staicu et al. [48] highlighted that it is important for leaders to not only express the values of the company but also to set an example with their own actions. Moreover, there is the possibility that some CEOs might be aware of the importance of presenting themselves to have high moral principles, which could intentionally affect their discourse in their letters. Bryce [127] exposed how a CEO with high knowledge about moral principles may use it for their own interest. We also found some further evidence of this in our research. Future studies could investigate the relationship between CEOs' moral discourse and crises or conflicts with stakeholders or society over time under different contexts, and we could also propose ways to test their message against their actual performance.
Another open research avenue is the assessment of how the company and the CEO or top management interact and influence each other to create or reshape a set of values over time, and of the mechanisms and factors involved in this process. The conceptual framework developed by Kulkarni and Ramamoorthy [128] may be of great support for this mission.
Finally, this methodology of research could be broadened to a wider range of companies in the sector and a longer time frame and could also be applied to other sectors with similar or different contexts to gather further findings and conclusions.
B.G.-O. and J.G.-C. conceived the study and carried out the investigation; J.G.-C. was in charge of the formal analysis, resources and original draft; all the authors established the methodology; B.G.-O. and B.d.-M.-M. carried out the review and editing; B.d.-M.-M. led the project administration, and supervised and approved the final version.
This research received no external funding.
We thank the reviewers for sharing their valuable and constructive comments to improve this manuscript. We are grateful for the means provided by our Department of Business Organization to ease our research works. Finally, we thank the journal for considering our research for publication and guiding us through the process.
The authors declare no conflict of interest.
## References
* _Amernic et al. (2010)_ Amernic, J.; Craig, R.; Tourish, D. _Measuring and Assessing Tone at the Top using Annual Report CEO Letters;_ The Institute of Chartered Accountants of Scotland: Edinburgh, UK, 2010.
* Swanson (2008) Swanson, D.L. Top managers as drivers for corporate social responsibility. In _The Oxford Handbook of Corporate Social Responsibility_; Oxford University Press: Oxford, UK, 2008; pp. 227-248.
* Waldman and Siegel (2008) Waldman, D.A.; Siegel, D. Defining the socially responsible leader. _Leadership. Q._**2008**, _19_, 117-131. [CrossRef]
* Trevino et al. (2003) Trevino, L.K.; Brown, M.; Hartman, L.P. A qualitative investigation of perceived executive ethical leadership: Perceptions from inside and outside the executive suite. _Hum. Relat._**2003**, _56_, 5-37. [CrossRef]
* Trevino and Brown (2004) Trevino, L.K.; Brown, M.E. Managing to be ethical: Debunking five business ethics myths. _Acad. Manag. Perspect._**2004**, _18_, 69-81. [CrossRef]
* Weber (2010) Weber, J. Assessing the \"Tone at the Top\": The moral reasoning of CEOs in the automobile industry. _J. Bus. Ethics_**2010**, _92_, 167-182. [CrossRef]
* Sukitsch et al. (2015) Sukitsch, M.; Engert, S.; Baumgartner, R. The implementation of corporate sustainability in the European automotive industry: An analysis of sustainability reports. _Sustainability_**2015**, \\(7\\), 11504-11531. [CrossRef]
* Wells (2013) Wells, P. Sustainable business models and the automotive industry: A commentary. _IIMB Manag. Rev._**2013**, _25_, 228-239. [CrossRef]
* Stoycheva et al. (2018) Stoycheva, S.; Marchese, D.; Paul, C.; Padoan, S.; Juhmani, A.S.; Linkov, I. Multi-criteria decision analysis framework for sustainable manufacturing in automotive industry. _J. Clean Prod._**2018**, _187_, 257-272. [CrossRef]
* Jung and Sharon (2019) Jung, J.C.; Sharon, E. The Volkswagen emissions scandal and its aftermath. _Glob. Bus. Organ. Excell._**2019**, _38_, 6-15. [CrossRef]
* Thomasson (2009) Thomasson, A. Exploring the ambiguity of hybrid organisations: A stakeholder approach. _Financ. Account. Manage_**2009**, _25_, 353-366. [CrossRef]
* Schein (1992) Schein, E.H. _How Can Organisations Learn Faster?: The problem of Entering the Green Room_; Massachusetts Institute of Technology: Cambridge, MA, USA, 1992.
* Kalshoven et al. (2013) Kalshoven, K.; Den Hartog, D.N.; De Hoogh, A.H. Ethical leadership and follower helping and courtesy: Moral awareness and empathic concern as moderators. _Appl. Psychol._**2013**, _62_, 211-235. [CrossRef]
* Grey (2005) Grey, C. Managerial Ethics: A Quantitative Correlational Study of Values and Leadership Styles of Veterinary Managers. Ph.D Thesis, University of Phoenix, Phoenix, UZ, USA, March 2005.
* Hood (2003) Hood, J.N. The relationship of leadership style and CEO values to ethical practices in organisations. _J. Bus. Ethics_**2003**, _43_, 263-273. [CrossRef]
* (16) Kaptein, M.; Wempe, J.F.D.B. _The Balanced Company: A Theory of Corporate Integrity_; Oxford University Press: Oxford, UK, 2002.
* (17) Lasthuizen, K.M. Leading to Integrity: Empirical Research into the Effects of Leadership on Ethics and Integrity. Ph.D. Thesis, Faculty of Social Sciences, VU University, Amsterdam, The Netherlands, 2008.
* (18) Ho, Y.H.; Lin, C.Y. The moral judgment relationship between leaders and followers: A comparative study across the Taiwan Strait. _J. Bus. Ethics_**2016**, _134_, 299-310. [CrossRef]
* (19) Jackson, R.W.; Wood, C.M.; Zboja, J.J. The dissolution of ethical decision-making in organisations: A comprehensive review and model. _J. Bus. Ethics_**2013**, _116_, 233-250. [CrossRef]
* (20) Angelidis, J.; Ibrahim, N.A. The impact of emotional intelligence on the ethical judgment of managers. _J. Bus. Ethics_**2011**, _99_, 111-119. [CrossRef]
* (21) Kish-Gephart, J.J.; Harrison, D.A.; Trevino, L.K. Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. _J. Appl. Psychol._**2010**, _95_, 1. [CrossRef]
* (22) Hyland, K. Persuasion and context: The pragmatics of academic metadiscourse. _J. Pragmat._**1998**, _30_, 437-455. [CrossRef]
* (23) Colsman, B. Nachhaltigkeitscontrolling. In _Nachhaltigkeitscontrolling_; Springer: Berlin, Germany, 2013; pp. 43-90.
* (24) Porter, M.E.; Kramer, M.R. The link between competitive advantage and corporate social responsibility. _Harc. Bus. Rev._**2006**, _84_, 78-92.
* (25) Shanahan, F.; Seele, P. Shorting Ethos: Exploring the relationship between Aristotle's Ethos and Reputation Management. _Corp. Reput. Rev._**2015**, _18_, 37-49. [CrossRef]
* (26) Overall, J. Unethical behavior in organisations: Empirical findings that challenge CSR and egoism theory. _Bus. Ethics_**2016**, _25_, 113-127. [CrossRef]
* (27) Dennis, B.S.; Buchholtz, A.K.; Butts, M.M. The nature of giving: A theory of planned behavior examination of corporate philanthropy. _Bus. Soc._**2009**, _48_, 360-384. [CrossRef]
* (28) Connor, M. _Toyota Recall: Five Critical Lessons_. Available online: [http://business-ethics.com/2010/01/31/2123-toyota-recall-five-critical-lessons/](http://business-ethics.com/2010/01/31/2123-toyota-recall-five-critical-lessons/) (accessed on 8 March 2019).
* (29) Wang, Y.; Wanjek, L. How to fix a lie? The formation of Volkswagen's post-crisis reputation among the German public. _Corp. Reput. Rev._**2018**, _21_, 84-100. [CrossRef]
* (30) Cooper, S.A.; Raman, K.K.; Yin, J. Halo effect or fallen angel effect? Firm value consequences of greenhouse gas emissions and reputation for corporate social responsibility. _J. Account. Public Policy_**2018**, _37_, 226-240. [CrossRef]
* (31) Cunningham, C. Section 404 compliance and 'tone at the top'. _Finance. Exec._**2005**, _21_, 6-7.
* (32) Mahadeo, S. How management can prevent fraud by example. _Fraud_**2006**, _12_, 2007.
* (33) Bruinsma, C.; Wemmenhove, P. Tone at the Top is Vital! A Delphi Study. _ISACA J._**2009**, \\(3\\), 1-4.
* (34) Cheng, J.W.; Chang, S.C.; Kuo, J.H.; Cheung, Y.H. Ethical leadership, work engagement, and voice behavior. _Ind. Manag. Data Syst._**2014**, _114_, 817-831. [CrossRef]
* (35) Avalio, B.J.; Gardner, W.L. Authentic leadership development: Getting to the root of positive forms of leadership. _Leadst. Q._**2005**, _16_, 315-338. [CrossRef]
* (36) Brown, M.E.; Trevino, L.K. Ethical leadership: A review and future directions. _Leadst. Q._**2006**, _17_, 595-616. [CrossRef]
* (37) Thoms, J.C. Ethical integrity in leadership and organisational moral culture. _Leadsthip_**2008**, \\(4\\), 419-442. [CrossRef]
* (38) Johnson, C.E. _Meeting the Ethical Challenges of Leadership: Casting Light or Shadow_; Sage Publications: Thousand Oaks, CA, USA, 2017.
* (39) Shin, Y.; Sung, S.Y.; Choi, J.N.; Kim, M.S. Top management ethical leadership and firm performance: Mediating role of ethical and procedural justice climate. _J. Bus. Ethics_**2015**, _129_, 43-57. [CrossRef]
* (40) De Luque, M.S.; Washburn, N.T.; Waldman, D.A.; House, R.J. Unrequited profit: How stakeholder and economic values relate to subordinates' perceptions of leadership and firm performance. _Adm. Sci. Q._**2008**, _53_, 626-654. [CrossRef]
* (41) Tourish, D.; Craig, R.; Amernic, J. Transformational leadership education and agency perspectives in business school pedagogy: A marriage of inconvenience? _Brit. J. Manag._**2010**, _21_, s40-s59. [CrossRef]
* (42) D'Aquila, J.M.; Bean, D.F. Does a tone at the top that fosters ethical decisions impact financial reporting decisions: An experimental analysis. _Int. Bus. Econ. Res. J._**2003**, \(2\), 41-54. [CrossRef]
* Shin (2012) Shin, Y. CEO ethical leadership, ethical climate, climate strength, and collective organisational citizenship behavior. _J. Bus. Ethics_**2012**, _108_, 299-312. [CrossRef]
* Huhtala et al. (2013) Huhtala, M.; Kangas, M.; Lamsa, A.M.; Feldt, T. Ethical managers in ethical organisations? The leadership-culture connection among Finnish managers. _Leadersh. Org. Dev. J._**2013**, _34_, 250-270. [CrossRef]
* Eisenbeiss et al. (2015) Eisenbeiss, S.A.; Van Knippenberg, D.; Fahrbach, C.M. Doing well by doing good? Analyzing the relationship between CEO ethical leadership and firm performance. _J. Bus. Ethics_**2015**, _128_, 635-651. [CrossRef]
* Akker et al. (2009) Akker, L.; Heres, L.; Lasthuizen, K.; Six, F. Ethical leadership and trust: It's all about meeting expectations. _Int. J. Leadership. Stud._**2009**, \\(5\\), 102-122.
* Spraggon and Bodolica (2015) Spraggon, M.; Bodolica, V. Trust, authentic pride, and moral reasoning: A unified framework of relational governance and emotional self-regulation. _Bus. Ethics_**2015**, _24_, 297-314. [CrossRef]
* Staicu et al. (2018) Staicu, A.M.; Tatomir, R.I.; Linca, A.C. Determinants and Consequences of \"Tone at the Top\". _IJAME_**2018**, \\(2\\), 76-88.
* Arjoon (2000) Arjoon, S. Virtue theory as a dynamic theory of business. _J. Bus. Ethics_**2000**, _28_, 159-178. [CrossRef]
* Argandora (2012) Argandora, A. _Tres dimensions etias de la crisis financiera_; Documento de Investigacion. DI-944. Catedra \"La Caixa\" de Responsabilidad Social de la Empresa y Gobierno Corporativo; IESE Bus. Sch. Univ. Navarra: Pamplona, Spain, 2012.
* Bandsuch et al. (2008) Bandsuch, M.R.; Pate, L.E.; Thies, J. Rebuilding stakeholder trust in business: An examination of principle-centered leadership and organisational transparency in corporate governance. _Bus. Soc. Rev._**2008**, _113_, 99-127. [CrossRef]
* Weber (1990) Weber, J. 'Managers' Moral Reasoning: Assessing Their Responses to Three Moral Dilemmas. _Hum. Relat._**1990**, _43_, 687-702. [CrossRef]
* Cummings et al. (2010) Cummings, R.; Maddux, C.D.; Cladianos, A.; Richmond, A. Moral reasoning of education students: The effects of direct instruction in moral development theory and participation in moral dilemma discussion. _Teach. Coll. Rec._**2010**, _112_, 621-644.
* Jones (2009) Jones, D.A. A novel approach to business ethics training: Improving moral reasoning in just a few weeks. _J. Bus. Ethics_**2009**, _88_, 367-379. [CrossRef]
* Garcia-Madariaga and Rodriguez-Rivera (2017) Garcia-Madariaga, J.; Rodriguez-Rivera, F. Corporate social responsibility, customer satisfaction, corporate reputation, and firms' market value: Evidence from the automobile industry. _Span. J. Mark. ESIC_**2017**, _21_, 39-53. [CrossRef]
* Beelitz and Merkl-Davies (2012) Beelitz, A.; Merkl-Davies, D.M. Using discourse to restore organisational legitimacy: 'CEO-speak' after an incident in a German nuclear power plant. _J. Bus. Ethics_**2012**, _108_, 101-120. [CrossRef]
* Weber (1991) Weber, J. Adapting Kohlberg to enhance the assessment of managers' moral reasoning. _Bus. Ethics Q._**1991**, \\(1\\), 293-318. [CrossRef]
* Pettifor et al. (2002) Pettifor, J.L.; Estay, I.; Paquet, S. Preferred strategies for learning ethics in the practice of a discipline. _Can. Psychol. Psychol. Can._**2002**, _43_, 260. [CrossRef]
* Kohlberg (1964) Kohlberg, L. Development of moral character and moral ideology. _Rev. Child Dev. Res._**1964**, \\(1\\), 381-431.
* Rest (1979) Rest, J.R. _Revised Manual for the Defiting Issues Test: An Objective Test of Moral Judgment Development_; Minnesota Moral Research Projects: Minneapolis, MN, USA, 1979.
* Trevino (1992) Trevino, L.K. Moral reasoning and business ethics: Implications for research, education, and management. _J. Bus. Ethics_**1992**, _11_, 445-459. [CrossRef]
* Rest (1984) Rest, J.R. The major components of morality. In _Monality, Moral Behavior, and Moral Development_; Wiley: New York, NY, USA, 1984; pp. 24-38.
* Piaget (2013) Piaget, J. _The Moral Judgment of the Child_; Routledge: Abingdon, UK, 2013.
* Kohlberg (1973) Kohlberg, L. Continuities in childhood and adult moral development revisited. In _Life-Span Developmental Psychology_; Elsevier: Amsterdam, The Netherlands, 1973; pp. 179-204.
* Kohlberg (1981) Kohlberg, L. _Essays on Moral Development_; Harper & Row: New York, NY, USA, 1981.
* Kohlberg (1984) Kohlberg, L. _The Psychology of Moral Development: Essays on Moral Development_; Harper & Row: New York, NY, USA, 1984.
* Colby et al. (1987) Colby, A.; Kohlberg, L.; Speicher, B.; Candee, D.; Hewer, A.; Gibbs, J.; Power, C. _The Measurement of Moral Judgement: Volume 2, Standard Issue Scoring Manual_; Cambridge University Press: Cambridge, UK, 1987.
* Trevino (1986) Trevino, L.K. Ethical decision making in organisations: A person-situation interactionist model. _Acad. Manag. Rev._**1986**, _11_, 601-617. [CrossRef]
* Weber and Wasieleski (2001) Weber, J.; Wasieleski, D. Investigating influences on managers' moral reasoning: The impact of context and personal and organisational factors. _Bus. Soc._**2001**, _40_, 79-110. [CrossRef]
* Gilligan (1977) Gilligan, C. In a different voice: Women's conceptions of self and of morality. _Haro. Educ. Rev._**1977**, _47_, 481-517. [CrossRef]
* Gilligan (1982) Gilligan, C. New maps of development: New visions of maturity. _Am. J. Orthopsychiatr._**1982**, _52_, 199. [CrossRef] [PubMed]
* Harkness and Edwards (1981) Harkness, S.; Edwards, C.P.; Super, C.M. Social roles and moral reasoning: A case study in a rural African community. _Dev. Psychol._**1981**, _17_, 595. [CrossRef]
* Carpendale and Rohlberg (2000) Carpendale, J.I. Kohlberg and Piaget on stages and moral reasoning. _Dev. Rev._**2000**, _20_, 181-205. [CrossRef]
* McCauley et al. (2006) McCauley, C.D.; Drath, W.H.; Palus, C.J.; O'Connor, P.M.; Baker, B.A. The use of constructive-developmental theory to advance the understanding of leadership. _Leadersh. Q._**2006**, _17_, 634-653. [CrossRef]
* Peterson and Seligman (2004) Peterson, C.; Seligman, M.E. _Character Strengths and Virtues: A Handbook and Classification_; Oxford University Press: Oxford, UK, 2004.
* Crain (2015) Crain, W. _Theories of Development: Concepts and Applications_; Psychology Press: New York, NY, USA, 2015.
* Kipper (2017) Kipper, K. A Neo-Kohlbergian approach to moral character: The moral reasoning of Alfred Herrhausen. _J. Glob. Responsol._**2017**, \\(8\\), 196-211. [CrossRef]
* Doyle and Hughes (2013) Doyle, E.; Hughes, J.F.; Summers, B. An empirical analysis of the ethical reasoning of tax practitioners. _J. Bus. Ethics_**2013**, _114_, 325-339. [CrossRef]
* Morilly (2013) Morilly, S.W. Ethical leadership: An assessment of the level of moral reasoning of managers in a South African short-term insurance company. Master's Thesis, University of the Western Cape, Cape Town, South Africa, November 2013.
* Hoover (2010) Hoover, K.F. _Values and organisational culture perceptions: A study of relationships and antecedents to managerial moral judgment_; Bowling Green State University: Bellville, South Africa, 2010.
* Franklin (2010) Franklin, R.S. _Exploring the Moral Development and Moral Outcomes of Authentic Leaders_; ProQuest: Ann Arbor, MI, USA, 2010.
* Daniels (2009) Daniels, D.M. _Ethical Leadership and Moral Reasoning: An Empirical Investigation_; ProQuest: Ann Arbor, MI, USA, 2009.
* Lin and Ho (2009) Lin, C.-Y.; Ho, Y.-H. Cultural influences on moral reasoning capacities of purchasing managers: A comparison across the Taiwan Strait. _Soc. Behav. Pers. Int. J._**2009**, _37_, 203-208. [CrossRef]
* Galla (2007) Galla, D. Moral Reasoning of Finance and Accounting Professionals: An Ethical and Cognitive Moral Development Examination. Ph.D. Thesis, Nova Southeastern University, Fort Lauderdale, FL, USA, 2007.
* Hyppolite (2004) Hyppolite, F.A. _The Influence of Organisational Culture, Ethical Views and Practices in Local Government: A Cognitive Moral Development Study_; ProQuest: Ann Arbor, MI, USA, 2004.
* Chavez (2004) Chavez, J. Morality and Moral Reasoning in the Banking Industry: An Ethical and Cognitive Moral Development Examination. Ph.D. Thesis, Nova Southeastern University, Fort Lauderdale, FL, USA, 2004.
* Turner et al. (2002) Turner, N.; Barling, J.; Epitropaki, O.; Butcher, V.; Milner, C. Transformational leadership and moral reasoning. _J. Appl. Psychol._**2002**, _87_, 304-311. [CrossRef] [PubMed]
* Orth and Robins (2010) Orth, U.; Robins, R.W.; Soto, C.J. Tracking the trajectory of shame, guilt, and pride across the life span. _J. Pers. Soc. Psychol._**2010**, _99_, 1061-1071. [CrossRef] [PubMed]
* Muraven and Tice (1998) Muraven, M.; Tice, D.M.; Baumeister, R.F. Self-control as a limited resource: Regulatory depletion patterns. _J. Pers. Soc. Psychol._**1998**, _74_, 774-789. [CrossRef] [PubMed]
* Caniels et al. (2012) Caniels, M.C.; Gelderman, C.J.; Vermeulen, N.P. The interplay of governance mechanisms in complex procurement projects. _J. Purch. Supply Manag._**2012**, _18_, 113-121. [CrossRef]
* Fuoli and Paradis (2014) Fuoli, M.; Paradis, C. A model of trust-repair discourse. _J. Pragmat._**2014**, _74_, 52-69. [CrossRef]
* Toppinen et al. (2015) Toppinen, A.; Hanninen, V.; Lahtinen, K. ISO 26000 in the assessment of CSR communication quality: CEO letters and social media in the global pulp and paper industry. _Soc. Responsol. J._**2015**, _11_, 702-715. [CrossRef]
* Fanelli and Grasselli (2006) Fanelli, A.; Grasselli, N.I. Defeating the Minotaur: The construction of CEO charisma on the US stock market. _Organ. Stud._**2006**, _27_, 811-832. [CrossRef]
* Zaman Mir et al. (2009) Zaman Mir, M.; Chatterjee, B.; Shiraz Rahaman, A. Culture and corporate voluntary reporting: A comparative exploration of the chairperson's report in India and New Zealand. _Manag. Audit. J._**2009**, _24_, 639-667. [CrossRef]
* Siano et al. (2017) Siano, A.; Vollero, A.; Conte, F.; Amabile, S. "More than words": Expanding the taxonomy of greenwashing after the Volkswagen scandal. _J. Bus. Res._**2017**, _71_, 27-37. [CrossRef]
* (96) Trevino, L.K.; Hartman, L.P.; Brown, M. Moral person and moral manager: How executives develop a reputation for ethical leadership. _Calif. Manag. Rev._**2000**, _42_, 128-142. [CrossRef]
* (97) De-Miguel-Molina, B.; Chirivella-Gonzalez, V.; Garcia-Ortega, B. CEO letters: Social license to operate and community involvement in the mining industry. _Bus. Ethics_**2019**, _28_, 36-55. [CrossRef]
* (98) Gatti, L.; Seele, P. CSR through the CEO's pen. _Uuy UmweltWirtschaftsForum_**2015**, _23_, 265-277. [CrossRef]
* (99) Van Alstine, J.; Barkemeyer, R. Business and development: Changing discourses in the extractive industries. _Resour. Policy_**2014**, _40_, 4-16. [CrossRef]
* (100) Makela, H. On the ideological role of employee reporting. _Crit. Perspect. Account._**2013**, _24_, 360-378. [CrossRef]
* (101) Marais, M. CEO rhetorical strategies for corporate social responsibility (CSR). _Soc. Bus. Rev._**2012**, \\(7\\), 223-243. [CrossRef]
* (102) Abrahamson, E.; Amir, E. The information content of the president's letter to shareholders. _J. Bus. Finan. Account._**1996**, _23_, 1157-1182. [CrossRef]
* (103) Amernic, J.H.; Craig, R. _CEO-Speak: The Language of Corporate Leadership_; McGill-Queen's Press-MQUP: Montreal, QC, Canada, 2006.
* (104) Amernic, J.H.; Craig, R.J. Guidelines for CEO-speak: Editing the language of corporate leadership. _Strategy Leadership._**2007**, _35_, 25-31. [CrossRef]
* (105) Craig, R.; Amernic, J. Are there language markers of hubris in CEO letters to shareholders? _J. Bus. Ethics_**2018**, _149_, 973-986. [CrossRef]
* (106) Smith, M.; Taffler, R.J. The chairman's statement-a content analysis of discretionary narrative disclosures. _Account. Audit Account._**2000**, _13_, 624-647. [CrossRef]
* (107) Jordan, J.; Brown, M.E.; Trevino, L.K.; Finkelstein, S. Someone to look up to: Executive-follower ethical reasoning and perceptions of ethical leadership. _J. Manag._**2013**, _39_, 660-683. [CrossRef]
* (108) Schwartz, M.S.; Dunfee, T.W.; Kline, M.J. Tone at the top: An ethics code for directors? _J. Bus. Ethics_**2005**, _58_, 79-100. [CrossRef]
* (109) Maignan, I.; Ralston, D.A. Corporate social responsibility in Europe and the US: Insights from businesses' self-presentations. _J. Int. Bus. Stud._**2002**, _33_, 497-514. [CrossRef]
* (110) Claeys, A.S.; Cauberghe, V.; Vyncke, P. Restoring reputations in times of crisis: An experimental study of the Situational Crisis Communication Theory and the moderating effects of locus of control. _Public Relat. Rev._**2010**, _36_, 256-262. [CrossRef]
* (111) Coombs, W.T. Protecting organisation reputations during a crisis: The development and application of situational crisis communication theory. _Corp. Reput. Rev._**2007**, _10_, 163-176. [CrossRef]
* (112) Cagle, J.A.; Baucus, M.S. Case studies of ethics scandals: Effects on ethical perceptions of finance students. _J. Bus. Ethics_**2006**, _64_, 213-229. [CrossRef]
* (113) Meyer, R.D.; Dalal, R.S.; Bonaccio, S. A meta-analytic investigation into the moderating effects of situational strength on the conscientiousness-Performance relationship. _J. Organ. Behv._**2009**, _30_, 1077-1102. [CrossRef]
* (114) Jones, T.M. Ethical decision making by individuals in organisations: An issue-contingent model. _Acad. Manag. Rev._**1991**, _16_, 366-395. [CrossRef]
* (115) May, D.R.; Pauli, K.P. The role of moral intensity in ethical decision making: A review and investigation of moral recognition, evaluation, and intention. _Bus. Soc._**2002**, _41_, 84-117. [CrossRef]
* (116) Pizzi, S. The Relationship between Non-financial Reporting, Environmental Strategies and Financial Performance. Empirical Evidence from Milano Stock Exchange. _Adm. Sci._**2018**, \\(8\\), 76. [CrossRef]
* (117) Christensen, S.L.; Kohls, J. Ethical decision making in times of organisational crisis: A framework for analysis. _Bus. Soc._**2003**, _42_, 328-358. [CrossRef]
* (118) Soltani, B. The anatomy of corporate fraud: A comparative analysis of high profile American and European corporate scandals. _J. Bus. Ethics_**2014**, _120_, 251-274. [CrossRef]
* (119) Matten, D.; Moon, J. \"Implicit\" and \"explicit\" CSR: A conceptual framework for a comparative understanding of corporate social responsibility. _Acad. Manag. Rev._**2008**, _33_, 404-424. [CrossRef]
* (120) Paul, K. Corporate sustainability, citizenship and social responsibility reporting: A website study of 100 model corporations. _JCC_**2008**, _32_, 63-78.
* (121) Tengblad, S.; Ohlsson, C. The framing of corporate social responsibility and the globalization of national business systems: A longitudinal case study. _J. Bus. Ethics_**2010**, _93_, 653-669. [CrossRef]
* Fehre and Weber (2016) Fehre, K.; Weber, F. Challenging corporate commitment to CSR: Do CEOs keep talking about corporate social responsibility (CSR) issues in times of the global financial crisis? _Manag. Res. Rev._**2016**, _39_, 1410-1430. [CrossRef]
* Krippendorff (2018) Krippendorff, K. _Content Analysis. An Introduction to Its Methodology_; SAGE Publications: Thousand Oaks, CA, USA, 2018.
* Skorczynska Sznajder et al. (2016) Skorczynska Sznajder, H.T.; Gimenez Moreno, R.O.S.A. Variation in Letters to Shareholders from British, Polish and Spanish Companies. A Comparative Study. _J. Intercult. Commun._**2016**, _40_, 1-21.
* Weber and Gillespie (1998) Weber, J.; Gillespie, J. Differences in ethical beliefs, intentions, and behaviors: The role of beliefs and intentions in ethics research revisited. _Bus. Soc._**1998**, _37_, 447-467. [CrossRef]
* Pandey et al. (2018) Pandey, A.; Chandwani, R.; Navare, A. How can mindfulness enhance moral reasoning? An examination using business school students. _Bus. Ethics_**2018**, _27_, 56-71. [CrossRef]
* Dodd (2003) Dodd, R. Pipe dreams: Greed, ego, and the death of Enron. By Robert Bryce, with a foreword by Molly Ivins. _Challenge_**2003**, _46_, 106-109.
* Kulkarni and Ramamoorthy (2013) Kulkarni, S.; Ramamoorthy, N. Intra-firm transfer of best practices in moral reasoning: A conceptual framework. _Bus. Ethics_**2013**, _23_, 15-33. [CrossRef]
Keywords: moral tone; moral reasoning; discourse analysis; CEO letter; CEO; automotive industry; CSR
arxiv-format/2005_04108v1.md | # Computational modeling of Human-nCoV protein-protein interaction network
Sovan Saha
Anup Kumar Halder
Soumyendu Sekhar Bandyopadhyay
Jadavpur University, Computer Science and Engineering, Kolkata, 700032, India
Piyali Chatterjee
Netaji Subhash Engineering College, Computer Science and Engineering, Kolkata, 700152, India
Mita Nasipuri
Jadavpur University, Computer Science and Engineering, Kolkata, 700032, India
Subhadip Basu
## Introduction
COVID-19 evolved in the Chinese city of Wuhan (Hubei province)[1]. The first case of a human being affected by nCoV was observed on 31 December 2019[2]. Soon it expanded its adverse effect to almost all nations within a very short span of time[3]. The World Health Organization (WHO) observed that the massive, disastrous outbreak of nCoV was mainly due to mass community spreading and declared a global health emergency on 30 January 2020[4]. After proper assessment, WHO presumed its fatality rate to be 4%[5], which urged global researchers to work together to discover a proper treatment for this pandemic[6, 7]. Coronaviridae is the family to which a coronavirus belongs. It also infects birds and mammals besides affecting human beings. Though the common symptoms of coronavirus infection are common cold, cough, etc., it can be accompanied by severe acute respiratory disease along with multiple organ failure, leading to human death. Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) were the two major outbreaks, in 2003 and 2012 respectively, before SARS-CoV2. The source of origin of SARS was located in Southern China. Its fatality rate was within 14%-15%[8], due to which 774 people lost their lives among 8804 affected cases. Saudi Arabia was marked as the base for the commencement of MERS. 858 persons among 2494 infected cases were defeated in their battle against the MERS virus, generating a much higher fatality rate of 34.4%[9] when compared to that of SARS. All three epidemic creators, SARS, MERS and SARS-CoV2, biologically belong to the genus Betacoronavirus under the family Coronaviridae. Both structural and non-structural proteins are involved in the formation of SARS-CoV2. Of the two, structural proteins like the envelope (E) protein, membrane (M) protein, nucleocapsid (N) protein and the spike (S) protein play a major role in transmitting the disease by binding with the receptors after entering the human body[10]. No clinically proven and approved vaccine for nCoV is available till date, though researchers have been trying hard to develop one. So there is an urgent need to understand and analyse the mechanism of disease transmission of this new virus.
Host-pathogen protein-protein interaction networks (PPIN) are very significant for understanding the mechanism of transmission of infection, which is essential for the development of new and more effective therapeutics, including rational drug design. Progression of infection and disease results from the interaction of proteins between pathogen and host. The pathogen plays an active role in spreading infection and has proved to be an acute threat to human lives, as it has the mutative capability to adapt itself against drugs. Pathogen-host PPINs permit pathogenic microorganisms to utilize host capabilities by manipulating the host mechanisms in order to escape the immune responses of the host [11, 12, 13]. Detection of target proteins through the analysis of pathogen and host PPIN is the central point of this research study [14, 15]. Topologically significant proteins having a higher degree of interactions are generally found to be essential drug targets. However, proteins having fewer interactions, or which are topologically not significant, may still be involved in the mechanism of infection because of some biological pathway relevance.
However, clinically validated Human-nCoV protein interaction data is limited in the current literature. This has motivated us to develop a new computational model for nCoV-Human PPI network. We have subsequently validated the proteins involved in the host-pathogen interactions with respect to potential Food and Drug Administration (FDA) drugs for COVID-19 treatment. Key aspects of this research work are highlighted below:
* It has been reported that SARS-CoV has \(\thicksim\) 89% [16, 17] genetic similarity with nCoV.
* SARS-CoV-Human protein-protein interaction network has also been studied widely and available in literature [18, 19, 20].
* Recently we have developed a computational model to identify potential spreader proteins in a Human-SARS CoV interaction network using SIS model [21].
* Sequence information of some of the nCoV proteins have been released [22].
* Gene ontological (GO) information (Biological Process (BP), Molecular Function (MF), Cellular Component (CC)) of some of the nCoV proteins are also available [22, 23].
* Recently we have also developed a method to predict interaction affinity between proteins from the available GO graph [24].
* Assessment of interaction affinity between nCoV proteins with potential Human target/bait proteins, which are susceptible to SARS-CoV infection, has been done.
* Fuzzy affinity thresholding is done to detect High Quality nCoV-Human PPIN. The selected human proteins are considered as level-1 human spreader nodes of nCoV.
* Level-2 spreader nodes in the nCoV-Human PPIN are detected using the spreadability index and validated by the SIS model [21, 25].
* Validation of our developed model is done with respect to the target proteins of the potential FDA drugs for COVID-19 treatment [26].
## Results
Our developed computational model of the nCoV-Human PPIN contains high quality interactions (HQI) and proteins identified by fuzzy affinity thresholding and by the spreadability index validated by the SIS model, respectively. The sources of input and the generated results play a crucial role in any computational model, which is also true for our proposed model.
### Overview of the data sets
The SARS-CoV-Human PPIN serves as a baseline for our model. The potential level-1 and level-2 human spreaders of SARS-CoV become the possible candidate set for selecting level-1 human spreaders of SARS-CoV2. Various datasets have been curated for this purpose, as outlined below:
#### Human PPIN
The dataset [27, 28] consists of all possible interactions between human proteins that are experimentally documented in humans. Human proteins are represented as nodes while the physical interactions between proteins are represented by edges. It is a collection of 21557 nodes and includes 342353 edges/interactions.
#### SARS-CoV PPIN
The dataset[18] consists of interactions between SARS-CoV proteins. It contains 7 unique proteins and 17 interacting edges, out of which only the densely connected proteins are considered rather than the isolated ones, since the former play a more active role in transmission of infection than the latter.
#### SARS-CoV-Human PPIN
The dataset[18] comprises 118 interactions between SARS-CoV and Human proteins. It is used to fetch the level-1 human interactions of SARS-CoV.
#### SARS-CoV2 Proteins
This data is collected from the pre-released dataset of available SARS-CoV2 proteins from UniProtKB[22] ([https://covid-19.uniprot.org/](https://covid-19.uniprot.org/)), which includes 14 reviewed SARS-CoV2 proteins.
#### GO Graph and Protein-GO annotations
Three types (CC, MF and BP) of GO graph are collected from GO Consortium[23, 29] ([http://geneontology.org/](http://geneontology.org/)). Protein to GO-annotation map is retrieved from UniProtKB database.
#### Potential COVID-19 FDA drugs
Seven potential FDA drugs, Lopinavir[30], Ritonavir[31], Hydroxychloroquine[32, 33], Azithromycin[33], Remdesivir[34, 35, 36], Favipiravir[37, 38] and Darunavir[39], have been identified from the DrugBank[40] published white paper[26]; these have been used for validation in our proposed model.
### Selection of spreader nodes in Human-SARS CoV interaction network using spreadability index validated by SIS model:
The SARS-CoV-Human PPIN (up to level-2) is formed by combining the SARS-CoV-Human and Human PPIN datasets. The SARS-CoV-Human dataset generates the direct level-1 human interactions of SARS-CoV
Figure 1: Computational model for the selection of spreader nodes in Human-SARS CoV PPIN by spreadability index. Red colored nodes represent SARS-CoV proteins while blue colored nodes are the selected spreader nodes in it. Deep green colored nodes represent level-1 human connected proteins with SARS-CoV proteins while yellow colored nodes represent the selected human spreaders in it. Light green colored nodes represent level-2 human spreaders of SARS-CoV.
while human-Human PPIN dataset is used to fetch the corresponding level-2 human interactions. Potential spreader nodes are identified using spreadability index which has been validated by SIS model (see Figure 1) [21]. The selected spreader nodes in SARS-COV-Human PPIN are highlighted in Table S1, Table S2 and Table S3 in the supplementary document. The network view of SARS-CoV-Human PPIN at each level and various selected thresholds are also available online (SARS-CoV Level-1 human spreaders, Level-1 & Level-2:high human spreaders at high threshold of spreadability index, and Level-1 & Level-2:low human spreaders at low threshold of spreadability index).
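To make this assembly step concrete, the short sketch below builds such a two-level host-pathogen network from generic edge lists using the networkx library; the protein identifiers, edge lists and helper name are illustrative assumptions rather than the actual supplementary data.

```python
import networkx as nx

def build_two_level_network(virus_human_edges, human_human_edges):
    """Merge pathogen-host and host-host edge lists into one PPIN (up to level-2)."""
    g = nx.Graph()
    g.add_edges_from(virus_human_edges)            # direct SARS-CoV -> human contacts
    viral = {v for v, _ in virus_human_edges}
    level1 = {h for _, h in virus_human_edges}     # level-1 human interactors

    # keep only the human-human edges that touch a level-1 protein
    for a, b in human_human_edges:
        if a in level1 or b in level1:
            g.add_edge(a, b)

    level2 = set(g.nodes()) - viral - level1       # newly reached human neighbours
    return g, level1, level2

# toy usage with hypothetical protein identifiers
vh = [("SARS-CoV_N", "P1"), ("SARS-CoV_S", "P2")]
hh = [("P1", "P3"), ("P2", "P4"), ("P4", "P5")]
graph, l1, l2 = build_two_level_network(vh, hh)
print(sorted(l1), sorted(l2))                      # ['P1', 'P2'] ['P3', 'P4']
```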
### Identification of the nCoV-Human protein interactions using Fuzzy PPI model:
The GO information can be useful to infer the binding affinity of any pair of interacting proteins using three different types of GO hierarchical relationship graphs (CC, MF and BP) [23]. The fuzzy PPI model has been proposed to find the interaction affinity between the SARS-CoV2 and Human proteins using GO based information (please see Figure 2 and Methods for details). To identify the interactors of SARS-CoV2 in human using the Fuzzy PPI model, a set of candidate proteins is selected as the L1 and L2 spreader nodes of SARS-CoV using the SIS model (as depicted in Figure 1). The Fuzzy PPI model is constructed from the ontological relationship graphs by evaluating the affinity between all possible GO pairs annotated from any target protein pair, and finally the fuzzy score of interaction affinity of a protein pair is computed from these GO pair-wise interaction affinities into a range of [0,1] (details are discussed in the Methods). The heatmap representation of fuzzy interaction affinities (with score \(\geq 0.2\) for very high specificity \(\sim 99\%\)) is shown in supplementary Figure S1 and Table S4. The high quality interactions (HQI) are retrieved at threshold 0.4 (almost \(\sim\) 99.98% specificity), which results in a total of 78 interactions between SARS-CoV2 and Human (37 human proteins). The interaction networks predicted from the Fuzzy PPI model are shown in Figure 3.
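The thresholding step that turns fuzzy affinities into HQI is simple to express; a minimal sketch is shown below, where the affinity dictionary and protein names are placeholders and the 0.4 cut-off follows the specificity analysis described above.

```python
def extract_hqi(affinity, threshold=0.4):
    """Keep only the SARS-CoV2-human pairs whose fuzzy affinity reaches the threshold."""
    return {pair: score for pair, score in affinity.items() if score >= threshold}

# hypothetical fuzzy affinity scores in [0, 1]
affinity = {("nCoV_N", "HUMAN_A"): 0.62,
            ("nCoV_S", "HUMAN_B"): 0.35,
            ("nCoV_M", "HUMAN_C"): 0.41}
hqi = extract_hqi(affinity)                  # two pairs survive the 0.4 cut-off
level1_spreaders = {human for _, human in hqi}
```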
### Identification of Human spreader proteins for nCoV
Human proteins present in the high quality interactions of the nCoV-Human PPIN, fetched by applying the fuzzy affinity threshold, are considered as level-1 spreaders. From these level-1 spreaders, the corresponding level-2 human interactions are obtained using the Human PPIN dataset. The spreadability index is then computed for these level-2 human proteins for the identification of level-2 human spreader nodes. The selection is also verified by the SIS model. The selected spreader nodes in the nCoV-Human PPIN are highlighted in Table S4, Table S5 and Table S6 in the supplementary document. A sample computational model
Figure 4: Derived nCoV-Human PPIN with human spreader proteins from proposed computational model. Blue, yellow and green colored nodes denote nCoV spreaders, its human level-1 and level-2 spreaders. level-1 human spreaders are detected by applying fuzzy affinity thresholding while level-2 human spreaders are identified by spreadability index validated by SIS model.
of the nCoV-Human PPIN under high threshold has been highlighted in Figure 4. The network view of the SARS-CoV2-Human PPIN at each level and various selected thresholds is also available online (SARS-CoV2 Level-1 human spreaders, Level-1 & Level-2: high spreaders at high threshold of spreadability index and Level-1 & Level-2: low human spreaders at low threshold of spreadability index).
### Validation using potential FDA drugs for COVID-19:
After proper assessment of all potential drugs mentioned in the DrugBank [40] white paper [26], seven drugs, Lopinavir [30], Ritonavir [31], Hydroxychloroquine [32, 33], Azithromycin [33], Remdesivir [34, 35, 36], Favipiravir [37, 38] and Darunavir [39], are identified as showing expected results to some extent in the clinical trials done for SARS-CoV2 till date. All approved human protein targets for each of the seven drugs are fetched from the advanced search section [41] of DrugBank [40, 42]. These targets, when searched in our proposed model of the nCoV-Human PPIN, are found to play an active role as spreader nodes. This reveals that the selected spreader nodes are of biological importance in transmitting infection in a network, which actually makes them the protein drug targets of the potential FDA drugs for COVID-19. The target protein hits in our nCoV-Human PPIN for each of the 7 potential FDA drugs are highlighted in Figure 5. It can be observed that 4 target protein hits are obtained for Hydroxychloroquine, 3 target proteins for Ritonavir, 2 target protein hits for each of Lopinavir, Darunavir and Azithromycin, and 1 target protein hit for Remdesivir and Favipiravir. Out of these protein targets, ACE2 is the most important one, since it is considered to be one of the crucial human receptors through which nCoV transmits infection deep inside the human cell [43, 44, 45].
## Discussion
In any host-pathogen interaction network, identification of spreader nodes is crucial for disease prognosis. Not every protein in an interaction network has intense disease spreading capability. In this work, we have used the SARS-CoV-Human PPIN and identified the spreader nodes at both level-1 and level-2 using the SIS model. These spreader nodes are considered for computing the protein interaction affinity score to unmask the level-1 human spreaders of nCoV. GO annotations have also been taken into consideration along with PPIN properties to make this model more effective and significant. With the gradual progress of the work, it has been observed that the selected human spreader nodes, identified by our proposed model, emerge as the potential protein targets of the FDA approved drugs for COVID-19. The basic hypotheses of the work may be listed as follows:
Figure 5: Validation of our developed computational model with respect to the target proteins of the FDA accepted drugs for COVID-19 treatment. Yellow and green colored nodes denote level-1 and level-2 human spreader of nCoV which acts as the drug protein targets.
* There is a genetic overlap of \\(\\sim\\) 89% (as suggested by ICTV) between SARS-CoV and SARS-CoV2, which also leads to a significant overlap in spreader proteins between human-SARS-COV and human-SARS-COV2 protein-interaction network.
* Fuzzy PPI approach can assess protein interaction affinities at very high specificity with respect to benchmark datasets as shown in Figure 6. High specificity signifies very low false positive rate at a given threshold. Thus, at 0.4 threshold (\\(\\sim\\) 99.9% specificity), the proposed model evaluates high quality positive interactions in Human-nCoV PPIN.
Finally, we propose that the developed computational model effectively identifies Human-nCoV PPIs with high specificity. nCoV-Human interactions are inferred from another pandemic initiator, SARS-CoV, which is believed to be highly genetically similar to nCoV. We also identify the human spreader proteins (up to level-2) using the spreadability index, validated through the SIS model. Due to the high network density of the human interaction network, the number of proteins increases with the transition from one level to another. Hence, our proposed model is also capable of identifying human spreader proteins at level-2 by using the spreadability index validated by the SIS model.
Target proteins of the potential FDA drugs for COVID-19 are found to overlap with the spreader nodes of the proposed computational nCoV-Human protein interaction model. Target proteins of seven potential FDA drugs, Lopinavir[30], Ritonavir[31], Hydroxychloroquine[32], Azithromycin[33], Remdesivir[34], Favipiravir[37, 38] and Darunavir[39], for COVID-19, as mentioned in the DrugBank white paper[26], overlap with the spreader nodes of the proposed in silico nCoV-Human protein interaction model (see Figure 5). Though clinical trials for a COVID-19 vaccine are still under way in 2020, three out of the seven, _i.e._ Remdesivir[46], Hydroxychloroquine[47] and Favipiravir[47], are found to be the most promising as well as effective ones, and their protein targets R1AB_SARS2, TLR9, ACE2, CYP3A4 and ABCB1 are also successfully identified as spreader nodes by our proposed model. This assessment reveals that these spreader nodes indeed have biological relevance to disease propagation.
## Methods
Our developed computational model for the nCoV-Human PPIN consists of two important methodologies: 1) identification of spreader nodes by the spreadability index along with validation by the SIS model, and 2) the Fuzzy PPI model.
### Identification of spreader nodes by spreadability index along with the validation of SIS model:
In the nCoV-Human PPIN, nCoV acts as the pathogen while the human host proteins act as bait. The transmission of infection
Figure 6: Specificity at different threshold (x-axis) of binding affinity obtained from Fuzzy PPI model for complete human proteome interaction network. At 0.2 onward threshold, it produces high specificity with respect to benchmark positive and negative interaction data. High quality interactions are extracted at 0.4 threshold with \\(\\sim\\) 99.9% specificity.
starts when a pathogen enters a host body and infects its proteins, which in turn affect their directly or indirectly connected neighborhood proteins. Considering this mode of transmission, the PPINs of human and SARS-CoV are used to detect spreader nodes. Spreader nodes are those node proteins which actually transmit the disease quickly among their neighbors. Not all the nodes in a PPIN are spreaders, so proper detection of spreader nodes is crucial. Spreader nodes are therefore identified by the spreadability index, which measures the transmission capability of a node protein.
The compactness of a PPIN and its transmission capability are evaluated using centrality analysis. Nodes having a high centrality value are usually considered spreader nodes, or the most critical nodes in a network.
Spreadability index [21] is a centrality-based measure that combines three major topological neighborhood-based features of a network: 1) node weight [48], 2) edge ratio [49] and 3) neighborhood density [49]. Nodes having a high spreadability index are considered spreader nodes. The spreader nodes thus identified are also validated by the SIS model [25]. The SIS model is implemented with the motive of casting the SARS-CoV and SARS-CoV2 outbreaks into a disease model over proteins based on their present infection status. A protein can be in one of three states: 1) S: susceptible, meaning that every protein is initially susceptible, i.e., not yet infected but at risk of getting infected; 2) I: infected, meaning that the protein is already infected by the disease; and 3) S: susceptible again, meaning that a protein becomes susceptible once more after recovering from the infected state. The model generates the overall infection capability of a node after a certain number of iterations. The sum of the infection capabilities of the top selected spreader nodes is computed by this model and compared against the sum obtained for the top critical nodes selected by other existing centrality measures such as betweenness centrality (BC) [50], closeness centrality (CC) [51], degree centrality (DC) [52] and local average centrality (LAC) [53].
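As an illustration of this validation step, the following is a minimal discrete-time SIS simulation on a protein graph. The infection probability, recovery probability, number of steps and the way infection capability is averaged over runs are illustrative choices for the sketch, not the exact parameters of the SIS implementation in [25].

```python
import random
import networkx as nx

def sis_infection_capability(g, seed_protein, beta=0.3, gamma=0.2, steps=50, runs=100):
    """Average fraction of the network infected at the end of a run when `seed_protein` starts infected."""
    total = 0.0
    for _ in range(runs):
        infected = {seed_protein}
        for _ in range(steps):
            # susceptible neighbours of infected nodes become infected with probability beta
            newly = {n for i in infected for n in g.neighbors(i)
                     if n not in infected and random.random() < beta}
            # infected nodes recover and become susceptible again with probability gamma
            recovered = {i for i in infected if random.random() < gamma}
            infected = (infected | newly) - recovered
        total += len(infected) / g.number_of_nodes()
    return total / runs

# rank candidate spreaders of a toy network by simulated infection capability
g = nx.erdos_renyi_graph(50, 0.08, seed=1)
scores = {n: sis_infection_capability(g, n) for n in g.nodes()}
top5 = sorted(scores, key=scores.get, reverse=True)[:5]
print(sum(scores[n] for n in top5))   # analogous to the summed SIS infection rate in Table 1
```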
Our proposed method of selecting spreader nodes[21] has performed better in comparison to the other existing

| Rank | Protein | E_out | E_in | Edge Ratio | Neighborhood Density | Node Weight | Spreadability Index |
|---|---|---|---|---|---|---|---|
| 1 | Node 3 | 6 | 3 | 1.75 | 6.94 | 2.83 | 14.99 |
| 2 | Node 9 | 5 | 4 | 1.20 | 7.07 | 3.00 | 11.48 |
| 3 | Node 6 | 5 | 2 | 2.00 | 3.93 | 2.60 | 10.46 |
| 4 | Node 8 | 6 | 2 | 2.33 | 2.27 | 3.25 | 8.55 |
| 5 | Node 1 | 5 | 4 | 1.20 | 4.21 | 3.40 | 8.45 |

Table 1: Computation of spreadability index for the synthetic network of Figure 7, along with validation of the selected top 5 spreader nodes by the SIS model (sum of the SIS infection rates of the top 5 nodes = 1.19).
Figure 7: Synthetic protein-protein interaction network. It is comprised of 10 nodes and 25 edges.
A synthetic PPIN is considered in Figure 7 to demonstrate the entire methodology of the spreadability index. A computational analysis comparing the spreadability index of our proposed model with one of the other methodologies, BC, is highlighted in Table 1 and supplementary Table S7. \\(E_{out}^{S_{i}}\\) is the total number of edges outgoing from the ego network \\(S_{i}\\) of node \\(i\\), whereas \\(E_{in}^{S_{i}}\\) denotes the total number of interconnections in the neighborhood of node \\(i\\)[49]. For node 3, \\(E_{out}^{S_{3}}\\) is 6 while \\(E_{in}^{S_{3}}\\) is 3, which highlights the fact that node 3 has the highest transmission ability from its ego network to the outside compared to the other nodes. Node 3 also has the highest spreadability index, yet BC fails to rank node 3 in first position; the same can be observed for some other nodes in the synthetic network. Besides, the SIS validation shows that the top-ranked spreader nodes selected by the proposed model have the highest infection capability in comparison to the other ranked nodes.
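The numbers in Table 1 are consistent with taking the edge ratio as \\((E_{out}^{S_{i}}+1)/(E_{in}^{S_{i}}+1)\\) and the spreadability index as edge ratio multiplied by neighborhood density plus node weight (e.g. node 3: \\(1.75\\times 6.94+2.83\\approx 14.99\\)). The sketch below follows that reading; it is an illustration of how the ranking could be reproduced, not the reference implementation, and the node-weight and neighborhood-density values are assumed to be precomputed as defined in [48, 49].

```python
import networkx as nx

def ego_edge_counts(G, i):
    """E_out: edges leaving the ego network of node i;
    E_in: interconnections among the neighbors of node i."""
    ego = set(G.neighbors(i)) | {i}
    e_out = sum(1 for u in ego for v in G.neighbors(u) if v not in ego)
    nbrs = set(G.neighbors(i))
    e_in = sum(1 for u, v in G.edges() if u in nbrs and v in nbrs)
    return e_out, e_in

def spreadability_index(e_out, e_in, neighborhood_density, node_weight):
    edge_ratio = (e_out + 1) / (e_in + 1)      # e.g. node 3: (6+1)/(3+1) = 1.75
    return edge_ratio * neighborhood_density + node_weight

# Reproducing the top-ranked entry of Table 1 (node 3):
print(spreadability_index(6, 3, 6.94, 2.83))   # ~14.99
```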
### Fuzzy PPI Model for potential SARS-CoV2-Human interaction identification:
The binding affinity between any two interacting proteins can be estimated by combining the semantic similarity scores of the GO terms associated with the proteins [54, 55, 56, 15, 24, 57]. A greater number of semantically similar GO annotations between a protein pair indicates a higher interaction affinity. The Fuzzy PPI model is a hybrid approach [24] that utilizes both topological features [57] of the GO graph and the information content [58, 56, 59] of the GO terms.
GO is organized in three independent directed acyclic graphs (DAGs): molecular function (MF), biological process (BP), and cellular component (CC) [23]. The nodes in each GO graph represent GO terms and the edges represent different hierarchical relationships. In this work, the two most important relations, _'is a'_ and _'part of'_, have been used as GO relations [29].
The semantic similarity between any two proteins is estimated by considering the similarities between all pairs of their annotating gene ontology (GO) terms belonging to a particular ontological graph. The similarity of a GO term pair is determined by considering certain topological properties (shortest path length) of the GO graph and the average information content (IC) [60] of the disjunctive common ancestors (DCAs) [54, 55] of the GO terms, as proposed in [24]. The Fuzzy PPI model first relies on a fuzzy clustering of the GO graph, where the selection of GO terms as cluster centers is based on the level of association of each GO term within the GO graph. The cluster centers are selected based on the proportion measure of GO terms. The proportion measure for any GO term t is computed as \\(PrM(t)=(|An(t)|+|Dn(t)|)/|O(t)|\\), where \\(An(t)\\) and \\(Dn(t)\\) represent the ancestors and descendants of term \\(t\\) and \\(O(t)\\) is the total number of GO terms in ontology O. A higher value of the proportion measure signifies a higher coverage of ancestors and descendants associated with the specific node. The GO terms for which this proportion measure is above a predefined threshold are selected as cluster centers. In this work, the cluster centers are selected based on the threshold values suggested in [24, 15].
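As a small illustration, the cluster-center selection can be written directly from the proportion-measure formula above. The sketch assumes the ancestor and descendant sets of each GO term have already been extracted from the DAG; the threshold is left as a parameter since the exact values are taken from [24, 15].

```python
def proportion_measure(term, ancestors, descendants, n_terms):
    """PrM(t) = (|An(t)| + |Dn(t)|) / |O(t)| for one GO term."""
    return (len(ancestors[term]) + len(descendants[term])) / n_terms

def select_cluster_centers(terms, ancestors, descendants, n_terms, threshold):
    # GO terms whose coverage of ancestors and descendants exceeds the
    # predefined threshold are taken as fuzzy cluster centers.
    return [t for t in terms
            if proportion_measure(t, ancestors, descendants, n_terms) > threshold]
```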
After selecting the cluster centers, the degree of membership of a GO term to each of the selected cluster centers is calculated using its shortest path length to the corresponding cluster center. The membership of a GO term to a cluster decreases as its shortest path length to the cluster center increases. The membership function is defined as \\(MmF_{c}(t)=e^{-(x-c_{i})^{2}/2k^{2}}\\), where \\(c_{i}\\) is the \\(i\\)-th center, \\(k\\) is the width of the membership function and \\(x\\) is the shortest path length from \\(t\\) to \\(c_{i}\\). The differences in membership values between the GO pair \\(t_{i}\\) and \\(t_{j}\\) with respect to each cluster center are computed to find the weight parameter, defined as \\(Wt(t_{i},t_{j})=1-maxD\\big{(}t_{i},t_{j}\\big{)}\\). This weight value determines how dissimilar two GO terms are with respect to the cluster centers. Next, the shared information content (SIC) of a GO term pair \\((t_{i},t_{j})\\) is computed from the three GO graphs as the average IC [60] of its DCAs: \\(SIC\\big{(}t_{i},t_{j}\\big{)}=\\sum_{a\\in DCA(t_{i},t_{j})}IC(a)/|DCA(t_{i},t_{j})|\\), where \\(DCA\\big{(}t_{i},t_{j}\\big{)}\\) represents the disjunctive common ancestors of GO terms \\(t_{i}\\) and \\(t_{j}\\). The semantic similarity of a protein pair \\((P_{i},P_{j})\\) for each GO type (CC, MF and BP) is estimated by taking the maximum similarity over all possible GO pairs from the annotations of proteins \\(P_{i}\\) and \\(P_{j}\\) for that GO type. The binding affinity of the protein pair \\((P_{i},P_{j})\\) is defined as the average of the CC, MF and BP based semantic similarities. The fuzzy score of binding affinity is computed by normalizing the binding affinity using max-min normalization. In this work, the fuzzy binding affinity score is computed between the protein pairs of SARS-CoV2 and human proteins using the available ontological information. Finally, with a high specificity threshold (please see Figure 6), high quality interactions are extracted for human-SARS-CoV2.
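A compact sketch of the building blocks of this scoring pipeline is given below for illustration. The Gaussian membership is evaluated on the shortest path length to each center, maxD is read as the maximum absolute membership difference over the cluster centers, and the width k is an assumed value; how the weight and SIC are combined into a single GO-pair score follows [24] and is not reproduced here.

```python
import math

def membership(path_len, k=2.0):
    """Gaussian membership of a GO term to a cluster center, evaluated on the
    shortest path length to that center (k is an assumed width)."""
    return math.exp(-(path_len ** 2) / (2 * k ** 2))

def weight(memb_i, memb_j):
    """Wt(t_i, t_j) = 1 - maxD, with maxD read as the maximum absolute
    difference of memberships over all cluster centers."""
    return 1.0 - max(abs(a - b) for a, b in zip(memb_i, memb_j))

def shared_ic(dca_terms, ic):
    """SIC(t_i, t_j): average information content of the DCAs."""
    return sum(ic[a] for a in dca_terms) / len(dca_terms)

def protein_similarity(go_pair_scores):
    """Per-ontology protein-pair similarity: maximum over all GO pairs
    annotating the two proteins."""
    return max(go_pair_scores)

def binding_affinity(sim_cc, sim_mf, sim_bp):
    # Average of the CC, MF and BP based semantic similarities; a final
    # max-min normalization over all pairs yields the fuzzy score.
    return (sim_cc + sim_mf + sim_bp) / 3.0
```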
## References
* [1] Wang, C., Horby, P. W., Hayden, F. G. & Gao, G. F. A novel coronavirus outbreak of global health concern. _The Lancet_ 395, 470-473 (2020).
* [2] World-Health-Organization Coronavirus disease (COVID-19) outbreak.
* [3] World Map \\(|\\) CDC.
* [4] Statement on the second meeting of the International Health Regulations (2005) Emergency Committee regarding the outbreak of novel coronavirus (2019-nCoV).
* [5] Statement on the meeting of the International Health Regulations (2005) Emergency Committee regarding the outbreak of novel coronavirus 2019 (n-CoV) on 23 January 2020.
* [6] Huang, C. _et al._ Clinical features of patients infected with 2019 novel coronavirus in wuhan, china. _The Lancet_ 395, 497-506 (2020).
* [7] Heymann, D. L. Data sharing and outbreaks: best practice exemplified. _The Lancet_ 395, 469-470 (2020).
* [8] Organization, W. H. _et al._ Update 49-sars case fatality ratio, incubation period. World Heal. Organ. Geneva (2003).
* [9] WHO \\(|\\) Middle East respiratory syndrome coronavirus (MERS-CoV).
* [10] Chen, Y., Liu, Q. & Guo, D. Emerging coronaviruses: genome structure, replication, and pathogenesis. _J. medical virology_ 92, 418-423 (2020).
* [11] Dyer, M. D., Murali, T. & Sobral, B. W. Computational prediction of host-pathogen protein-protein interactions. _Bioinformatics_ 23, i159-i166 (2007).
* [12] Dutta, P., Halder, A. K., Basu, S. & Kundu, M. A survey on ebola genome and current trends in computational research on the ebola virus. _Briefings functional genomics_ 17, 374-380 (2018).
* [13] Dyer, M. D., Murali, T. & Sobral, B. W. Supervised learning and prediction of physical interactions between human and hiv proteins. _Infect. Genet. Evol._ 11, 917-923 (2011).
* [14] Saha, S., Sengupta, K., Chatterjee, P., Basu, S. & Nasipuri, M. Analysis of protein targets in pathogen-host interaction in infectious diseases: a case study on plasmodium falciparum and homo sapiens interaction network. _Briefings functional genomics_ 17, 441-450 (2018).
* [15] Halder, A. K., Dutta, P., Kundu, M., Basu, S. & Nasipuri, M. Review of computational methods for virus-host protein interaction prediction: a case study on novel ebola-human interactions. _Briefings functional genomics_ 17, 381-391 (2018).
* [16] China releases genetic data on new coronavirus, now deadly \\(|\\) CIDRAP.
* [17] Chan, J. F.-W. _et al._ Genomic characterization of the 2019 novel human-pathogenic coronavirus isolated from a patient with atypical pneumonia after visiting wuhan. _Emerg. microbes & infections_ 9, 221-236 (2020).
* [18] Pfefferle, S. _et al._ The sars-coronavirus-host interactome: identification of cyclophilins as target for pan-coronavirus inhibitors. _PLoS pathogens_ 7 (2011).
* [19] Von Brunn, A. _et al._ Analysis of intraviral protein-protein interactions of the sars coronavirus orfeome. _PloS one_ 2 (2007).
* [20] Fung, T. S. & Liu, D. X. Human coronavirus: Host-pathogen interaction. _Annu. review microbiology_ 73, 529-557 (2019).
* [21] Saha, S., Chatterjee, P., Basu, S. & Nasipuri, M. Detection of spreader nodes and ranking of interacting edges in human-sars-cov protein interaction network. _bioRxiv_ (2020).
* [22] Uniprot: the universal protein knowledgebase. _Nucleic acids research_ 45, D158-D169 (2017).
* [23] Consortium, G. O. The gene ontology (go) database and informatics resource. _Nucleic acids research_ 32, D258-D261 (2004).
* [24] Dutta, P., Basu, S. & Kundu, M. Assessment of semantic similarity between proteins using information content and topological properties of the gene ontology graph. _IEEE/ACM transactions on computational biology bioinformatics_ 15, 839-849 (2017).
* [25] Bailey, N. T. _et al._ _The mathematical theory of infectious diseases and its applications_ (Charles Griffin & Company Ltd, 5a Crendon Street, High Wycombe, Bucks HP13 6LE., 1975).
* [26] Lucy Chin, Jordan Cox, Safiya Esmail, Mark Franklin, D. L. COVID-19 : Finding the Right Fit Identifying Potential Treatments Using a Data-Driven Approach. _Drugbank_ White Pap. (2020).
* [27] Agrawal, M., Zitnik, M., Leskovec, J. _et al._ Large-scale analysis of disease pathways in the human interactome. In PSB, 111-122 (World Scientific, 2018).
* [28] BioSNAP: Network datasets: Human protein-protein interaction network.
* [29] Botstein, D. _et al._ Gene ontology: tool for the unification of biology. _Nat genet_ 25, 25-29 (2000).
* [30] Harrison, C. Coronavirus puts drug repurposing on the fast track. Nat. biotechnology 38, 379-381 (2020).
* [31] Cao, B. _et al._ A trial of lopinavir-ritonavir in adults hospitalized with severe covid-19. _New Engl. J. Medicine_ (2020).
* [32] Food and Drug Administration _et al._ Emergency use authorization. http://www.fda.gov/emergencypreparedness/counterterrorism/medicalcountermeasures/MCMLegalRegulatoryandPolicyFramework/ucm182568 (2014).
* [33] Gautret, P. _et al._ Hydroxychloroquine and azithromycin as a treatment of covid-19: results of an open-label non-randomized clinical trial. _Int. journal antimicrobial agents_ 105949 (2020).
* [34] De Wit, E. _et al._ Prophylactic and therapeutic remdesivir (gs-5734) treatment in the rhesus macaque model of mers-cov infection. _Proc. Natl. Acad. Sci._ 117, 6771-6776 (2020).
* [35] Emergency Access to Remdesivir Outside of Clinical Trials.
* [36] Remdesivir Clinical Trials.
* Focus Taiwan.
* Full Text View
* [40] Wishart, D. S. _et al._ Drugbank: a knowledgebase for drugs, drug actions and drug targets. _Nucleic acids research_ 36, D901-D906 (2008).
* [42] DrugBank.
* [43] Mourad, J.-J. & Levy, B. I. Interaction between raas inhibitors and ace2 in the context of covid-19. _Nat. Rev. Cardiol._ 1-1 (2020).
* [44] ACE-2 is shown to be the entry receptor for SARS-CoV-2: R&D Systems.
* [45] Patel, A. B. & Verma, A. Covid-19 and angiotensin-converting enzyme inhibitors and angiotensin receptor blockers: what is the evidence? _Jama_ (2020).
* [46] Trial shows Covid-19 patients recover with Gilead's remdesivir.
* Full Text View
* [48] Wang, S. & Wu, F. Detecting overlapping protein complexes in ppi networks based on robustness. _Proteome science_ 11, S18 (2013).
* [49] Samadi, N. & Bouyer, A. Identifying influential spreaders based on edge ratio and neighborhood diversity measures in complex networks. Computing 101, 1147-1175 (2019).
* [50] Anthonisse, J. The rush in a directed graph, stichting mathematisch centrum, amsterdam (1971).
* [51] Sabidussi, G. The centrality index of a graph. _Psychometrika_ 31, 581-603 (1966).
* [52] Jeong, H., Mason, S. P., Barabasi, A.-L. & Oltvai, Z. N. Lethality and centrality in protein networks. _Nature_ 411, 41-42 (2001).
* [53] Li, M., Wang, J., Chen, X., Wang, H. & Pan, Y. A local average connectivity-based method for identifying essential proteins from the network level. _Comput. biology chemistry_ 35, 143-150 (2011).
* [54] Couto, F. M., Silva, M. J. & Coutinho, P. M. Semantic similarity over the gene ontology: family correlation and selecting disjunctive ancestors. _In Proceedings of the 14th ACM international conference on Information and knowledge management_, 343-344 (2005).
* [55] Couto, F. M., Silva, M. J. & Coutinho, P. M. Measuring semantic similarity between gene ontology terms. _Data & knowledge engineering_ 61, 137-152 (2007).
* [56] Resnik, P. Using information content to evaluate semantic similarity in a taxonomy. _arXiv preprint cmp-lg/9511007_ (1995).
* [57] Jain, S. & Bader, G. D. An improved method for scoring protein-protein interactions using semantic similarity within the gene ontology. _BMC bioinformatics_ 11, 562 (2010).
* [58] Lin, D. _et al._ An information-theoretic definition of similarity. _In Icml_, vol. 98, 296-304 (1998).
* [59] Jiang, J. J. & Conrath, D. W. Semantic similarity based on corpus statistics and lexical taxonomy. _arXiv preprint cmp-lg/9709008_ (1997).
* [60] Shannon, C. E. A mathematical theory of communication. _ACM SIGMOBILE Mob. Comput. Commun. Rev._ 5, 3-55 (2001).
## Acknowledgements
This work is partially supported by the CMATER research laboratory of the Computer Science and Engineering Department, Jadavpur University, India, and the PURSE-II and UPE-II grants. Subhadip Basu acknowledges Department of Biotechnology grant (BT/PR16356/BID/7/596/2016), Government of India. For research fellowship support, Anup Kumar Halder acknowledges the Visvesvaraya PhD Scheme for Electronics & IT, an initiative of the Ministry of Electronics & Information Technology (MeitY), Government of India. We acknowledge Prof. Jacek Sroka (Institute of Informatics, University of Warsaw) for his contribution toward the development of the fuzzy PPI methods.
## Author contributions statement
S.S., A.K.H. and S.B. conceived the idea of the research and wrote the manuscript. S.S. and A.K.H. conducted the experiment(s). P.C, S.S.B., M.N. and S.B. analyzed the results. M.N., P.C and S.B. reviewed the manuscript.
## Additional information
**Competing interests:** The authors declare no competing interests.
All _Supplementary information_ is freely available for academic and research purposes only.
All queries should be sent to the corresponding author's email: [email protected]
Jesus Garcia Fernandez
Ismail Alaoui Abdellaoui
Siamak Mehrkanoon
[email protected] Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands
## 1 Introduction
Renewable energy systems have received increasing attention in recent years. In this context, the ability to accurately predict weather elements is crucial to the effective use of weather element resources. In particular, it has been shown that weather forecasting affects sectors like agriculture, forestry, transportation and healthcare among others, thus having a major impact on the global economy [1; 2; 3; 4; 5]. More importantly, weather prediction can be used to save thousands of human lives by making it possible to forecast various types of natural disasters like tornadoes and flash floods [6; 7].
Classical approaches to weather forecasting heavily relied on the laws of thermodynamics, the Navier-Stokes equations, the statistical properties of the data, as well as the various properties of the atmosphere [8; 9; 10; 11]. This set of methods belongs to the Numerical Weather Prediction (NWP) approaches and generally requires a large amount of compute resources, since the processing is done on supercomputers [12].
Furthermore, it has been shown that NWP based approaches might suffer from computational instability, mainly due to the initial conditions of the models [13]. Recent data-driven approaches, on the other hand, simulate an entire system in order to predict its next state, relying primarily on historical data to perform the forecasting. Based on the success of machine learning models (i.e. support vector machines, random forests, gaussian processes and neural networks) in forecasting time series, these approaches have also been used for weather data [14; 15; 16; 17; 18; 19; 20]. In particular, neural network based models can use either shallow or deep architectures. As opposed to shallow networks that require domain knowledge and feature engineering, deep convolutional neural networks are less constrained by domain expertise. Indeed, these networks are capable of extracting the underlying complex patterns of the data by stacking multiple nonlinear layers. Deep learning based models have already shown promising results in weather elements forecasting as well as several other domains such as biomedical signal analysis, healthcare, neuroscience and dynamical systems among others [21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
The main contribution of this paper is to extend U-Net based deep learning architectures to learn the underlying complex mapping between three-dimensional input data and two-dimensional output data. These architectures are then employed to perform multi-feature coastal weather forecasting as shown in Fig. 1. More precisely, we show that among the four novel proposed models, the one that uses a combination of asymmetric, parallel convolutions and skip connections inside each residual block outperforms the other models for the satellite imagery prediction of a coastal area of the Netherlands.
This paper is organized as follows. The literature review of satellite imagery for weather forecasting is discussed in section 2. The proposed models are presented in section 3. Section 4 introduces the used dataset. The experimental results, the corresponding discussion and finally the conclusion are given in sections 5 and 6 respectively.
## 2 Related Work
Weather forecasting based on deep learning models has recently gained a lot of attention due to the rapid advancement of neural network techniques and the availability of weather data [35]. The authors in [36] used a deep convolutional neural network to predict thunderstorms and heavy rains. The model was then compared against traditional machine learning models such as random forests and support vector machines. The authors in [37] incorporated multiple ConvLSTM layers to predict the precipitation rate using radar data. Moreover, multi-stream convolutional neural networks combined with a self-attention mechanism [38; 39] have been studied for precipitation forecasting [40].
In this paper we are interested in weather forecasting based on satellite imagery. Several approaches have been discussed in the literature for performing frame prediction from satellite data for weather forecasting using modern data-driven approaches. For instance, the authors in [41] used satellite data in combination with deep learning techniques to perform sea surface temperature (SST) prediction in a subarea of the East China sea. The main type of layer used in this work was the ConvLSTM layer, and the model was compared to three different models: a support vector regression (SVR) model, a persistence model simulating a naive forecast, and a third model based on LSTM layers. It was shown that the ConvLSTM model outperformed the other models for 10 days ahead prediction in a recursive fashion.
Similarly to the previous work, sea surface temperature forecasting is also performed using deep learning and remote sensing imagery from satellite data in [42]. The methodology discussed in [42] is based on a multi-input convolutional neural network that processes the inputs at different spatial resolutions. The end goal of this work was to establish a relationship between sea surface temperature forecasting and tropical instability waves.
In [43], eight cyclone datasets are used for two main objectives: classifying whether the given image contains a storm or not as well as predicting the storm's location. In contrast with the previous works, this methodology is not end-to-end since multiple preprocessing steps are performed before training the deep learning model. In particular, multiple optical flow based techniques are used to perform temporal interpolation and the result of this processing is then fed to the deep learning models. The neural networks used are existing approaches that provide a fast inference time, namely YOLO [44] and RetinaNet [45].
Another similar research work in [46] performs precipitation nowcasting using artificial neural networks and satellite data. In this work, thermal infrared image prediction is first performed in order to get an estimation of the predicted precipitation. Hourly data is used and the neural network model is compared to other approaches such as linear interpolation, steady state methodology and persistence prediction.
Furthermore, future frame prediction is an active field of research in computer vision because of the several use cases where it can be applied such as anomaly detection, video prediction among others. In this context, within the field of deep learning, two main approaches are generally considered: autoencoders and generative adversarial neural networks [47; 48; 49; 50].
## 3 Proposed models
We aim at proposing a model that accurately maps a set of input images to a set of output images. To this end, the UNet architecture [32] is used as the core model and is enriched by incorporating more advanced elements suitable for the task under study. The U-Net architecture was initially designed for medical image segmentation and has a structure similar to that of an autoencoder. A first contracting part, where features are extracted from the input image, is followed by an expanding part that performs classification on each pixel.
In this paper, we propose an extended UNet architecture. Additionally, modern enhancement techniques such as residual connections [51], inception modules [52; 53] and asymmetric convolutions [54] are taken into account when designing these models. Residual or skip connections have been shown to improve the performance of deep networks by avoiding the vanishing of small gradients. Inception modules, on the other hand, apply convolutions with different kernels at the same level to capture features from larger and smaller areas in parallel. In the same way, asymmetric convolutions allow us to enlarge the network, and thus its learning capacity, while at the same time reducing the number of parameters.
In what follows, we propose four different models, each one being an extended version of the previous one. These models are 3DDR-UNet, Res-3DDR-UNet, InceptionRes-3DDR-UNet and AsymmInceptionRes-3DDR-UNet.
### 3D Dimension Reducer UNet (3DDR-UNet)
This section introduces the 3D Dimension Reducer UNet (3DDR-UNet), which is based on the UNet core architecture [32]. The classical UNet model is a fully convolutional neural network containing two parts: a contraction part or encoder and an expansion part or decoder. The first part is composed of stacked convolutions and pooling operations to extract features and capture the context in the input. The second, symmetric part combines the features extracted in the contraction part with an upsampled output. In this way, the network expands the data to its original size and projects the learned features onto the pixel space to perform an accurate classification of each pixel.
Here, we propose the 3DDR-UNet architecture, which manipulates 3-dimensional data in the encoder and 2-dimensional data in the decoder. This configuration allows the network to capture spatial and temporal dependencies from a stack of 2-dimensional images in the contracting part. The first input dimension (the time dimension) is then reduced from n (the number of time-steps or lags, which is 10 in our case) to 1 in the middle of the network before the data moves towards the expanding part. Those extracted and combined features are later used to reconstruct a single image in the decoder. The reduction of the first input dimension is carried out by convolutions with kernel size \\(n\\times 1\\times 1\\) (\\(10\\times 1\\times 1\\) in our case) and valid padding. The output of these operations is a weighted average over the different time-steps (i.e. lags) of the input. Essentially, this architecture extracts features in the encoder part and averages them in a weighted fashion over the first dimension before they are fed into the decoder part. The number of convolutional filters grows exponentially after each pooling from n to 16n in the encoder part, and shrinks again to n in the decoder part, with a kernel size of \\(3\\times 3\\times 3\\). The size of both the pooling and the upsampling operations is set to \\(1\\times 2\\times 2\\). In this way, the temporal dimension of the data remains unchanged during pooling and upsampling, while the spatial dimension of the data (second and third dimensions) is reduced and later upsampled.
Given the nature of the task, we train the network to perform a regression of every pixel. Traditionally, UNet is used for segmentation tasks. Here, as opposed to segmentation tasks, in which each pixel belongs to a class, we first normalize both the input and output data. The Mean Squared Error (MSE) metric is then used during training to minimize the difference between the predicted value of each pixel and the ground truth value. The architecture of the 3DDR-UNet network is shown in Fig. 2.
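For illustration, a shortened Keras sketch of this design is given below. It follows the ingredients stated above (10 lags, \\(3\\times 3\\times 3\\) convolutions, \\(1\\times 2\\times 2\\) pooling and upsampling, a \\(10\\times 1\\times 1\\) valid convolution that collapses the lag dimension, and a per-pixel regression output), but it only uses two resolution levels instead of the full depth that grows to 16n filters, and the exact layer arrangement of the released model may differ.

```python
from tensorflow.keras import layers, models

def reduce_lags(x, filters, lags=10):
    # 10x1x1 convolution with valid padding: a learned weighted average over the lags
    return layers.Conv3D(filters, (lags, 1, 1), padding="valid", activation="relu")(x)

def build_3ddr_unet_sketch(n=16, lags=10, size=128, channels=4):
    inp = layers.Input(shape=(lags, size, size, channels))
    # Contracting (3D) path: spatial pooling only, the lag dimension is preserved
    c1 = layers.Conv3D(n, (3, 3, 3), padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling3D(pool_size=(1, 2, 2))(c1)
    c2 = layers.Conv3D(2 * n, (3, 3, 3), padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling3D(pool_size=(1, 2, 2))(c2)
    # Collapse the lag dimension from 10 to 1 before the expanding path
    b = reduce_lags(p2, 4 * n, lags)
    # Expanding path; skip connections are also reduced over the lags
    u1 = layers.UpSampling3D(size=(1, 2, 2))(b)
    u1 = layers.concatenate([u1, reduce_lags(c2, 2 * n, lags)])
    u1 = layers.Conv3D(2 * n, (3, 3, 3), padding="same", activation="relu")(u1)
    u2 = layers.UpSampling3D(size=(1, 2, 2))(u1)
    u2 = layers.concatenate([u2, reduce_lags(c1, n, lags)])
    u2 = layers.Conv3D(n, (3, 3, 3), padding="same", activation="relu")(u2)
    out = layers.Conv3D(channels, (1, 1, 1), activation="relu")(u2)  # per-pixel regression
    return models.Model(inp, out)   # input (10,128,128,4) -> output (1,128,128,4)
```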
### Residual 3D Dimension Reducer UNet (Res-3DDR-UNet)
The second proposed model, the Residual 3D Dimension Reducer UNet (Res-3DDR-UNet), is an extension of the 3DDR-UNet model introduced previously. In order to augment its learning capacity, we scale up the model by adding a generous number of convolutional operations. We use three convolutional layers per block and a skip connection from the output of the first layer to the final activation to avoid the vanishing of small gradients. All these operations form a residual block.
Further, the outputs of the last convolutions in the block are normalized using a batch normalization layer. This normalization also aims at increasing the robustness of the network and alleviating the vanishing gradient problem.
Following the lines of [51], we skip two convolutional layers in each block, which enhances performance and speeds up training. As a result of these changes, the number of trainable parameters grows by 50% compared to the 3DDR-UNet. It should be noted that the kernel sizes of the convolutions, pooling and upsampling, as well as the loss function, are the same as the ones used in 3DDR-UNet. The architecture of the Res-3DDR-UNet is depicted in Fig. 3.
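A minimal sketch of such a residual block is shown below, reading the description as: a first convolution, then two convolutions bridged by the skip connection, addition before the final activation, and batch normalization at the end. The exact placement of activations and normalization in the released code may differ.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    h1 = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(x)
    h = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(h1)
    h = layers.Conv3D(filters, (3, 3, 3), padding="same")(h)
    h = layers.add([h1, h])              # skip connection over the two inner convolutions
    h = layers.Activation("relu")(h)     # final activation after the addition
    return layers.BatchNormalization()(h)
```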
Figure 1: Prediction process overview. The time steps from \\(t\\)-\\(d\\) to \\(t\\) correspond to the lags of the models to generate the prediction of the time step \\(t\\)+\\(h\\), where \\(d\\) is the number of lags and \\(h\\) is the number of time steps ahead.
### Inception Residual 3D Dimension Reducer UNet (InceptionRes-3DDR-UNet)
Motivated by the effectiveness of inception modules in CNN classifiers [52; 55], we include similar modules in our residual blocks. Within this module, the data stream is split into parallel convolutions with different kernel sizes, and the branches are later concatenated again. This structure is motivated by the ability to extract various features through the use of multiple kernel sizes applied to the same data. After these parallel operations, the features are concatenated and combined with a 1\\(\\times\\)1\\(\\times\\)1 convolution, which is equivalent to a weighted average. Hence, the network learns over time to favor the branches with the most suitable kernels. Essentially, this module allows the network to employ different kernels for the task and to give more importance to the most relevant ones.
Here in particular, we use three parallel branches with \\(1\\times 1\\times 1\\), \\(3\\times 3\\times 3\\) and \\(5\\times 5\\times 5\\) kernels. As suggested in [55], we approximate the \\(5\\times 5\\times 5\\) convolution by two sequential \\(3\\times 3\\times 3\\) convolutions, leading to a reduction in the computational cost. In addition, a \\(1\\times 1\\times 1\\) convolution is included at the beginning of each branch to reduce the dimensionality of the data and thus the computational cost. Furthermore, inspired by the performance of [53], we keep the residual connection that skips the parallel branches. The number of convolutional filters, the kernel sizes of pooling and upsampling, as well as the loss function are the same as those of the previous models. The architecture of InceptionRes-3DDR-UNet is shown in Fig. 4.
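The sketch below illustrates one way to assemble such an inception residual block: a \\(1\\times 1\\times 1\\) bottleneck at the head of each branch, a \\(5\\times 5\\times 5\\) branch approximated by two stacked \\(3\\times 3\\times 3\\) convolutions, a \\(1\\times 1\\times 1\\) mixing convolution after concatenation, and a residual connection skipping the branches. The \\(1\\times 1\\times 1\\) projection on the shortcut (to match channel counts) and the placement of activations are assumptions.

```python
from tensorflow.keras import layers

def inception_residual_block(x, filters):
    b1 = layers.Conv3D(filters, (1, 1, 1), padding="same", activation="relu")(x)

    b2 = layers.Conv3D(filters, (1, 1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(b2)

    b3 = layers.Conv3D(filters, (1, 1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(b3)
    b3 = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(b3)  # ~5x5x5

    merged = layers.concatenate([b1, b2, b3])
    merged = layers.Conv3D(filters, (1, 1, 1), padding="same")(merged)  # weighted average of branches
    shortcut = layers.Conv3D(filters, (1, 1, 1), padding="same")(x)     # channel matching (assumed)
    return layers.Activation("relu")(layers.add([shortcut, merged]))
```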
### Asymmetric Inception Residual 3D Dimension Reducer UNet (AsymmInceptionRes-3DDR-UNet)
Driven by the need to reduce the number of parameters in InceptionRes-3DDR-UNet, we introduce a lighter, yet more effective model. Here we use asymmetric convolutions [54] to lower the complexity of the parallel convolutions. As shown in Fig. 5, each kernel is decomposed into three simpler ones that are applied consecutively. The resulting combination of operations is an approximation of the original operation with considerably fewer parameters (see [54] for more details). As a consequence of this reduction in parameters, we can afford to remove the \\(1\\times 1\\times 1\\) convolution at the beginning of each branch within the asymmetric inception residual block, whose purpose was to reduce the complexity of the data and thus make the model lighter. Furthermore, we add two more parallel branches in each block to make the model comparable to the previous ones in terms of parameters. The number of convolutional filters, the
Figure 2: Architecture of 3DDR-UNet model. The annotations above the convolutions correspond to the output shape of those convolutions. Also, the number of filters is indicated below. We can appreciate that the first part reduces the dimensionality and then the second part upsamples the data to its original size (except for the temporal dimension). Between the reduction and expansion parts, intermediate convolutions reduce the temporal dimension (tags) from 10 to 1.
Figure 4: Architecture of InceptionRes-3DDR-UNet model. The annotations above the inception residual blocks correspond to the output shape of such blocks. Similarly to the previous models, the first part reduces the dimensionality, and then the second part upsamples the data to its original size (except for the temporal dimension). Between the reduction and expansion parts, intermediate convolutions reduce the temporal dimension (lags) from 10 to 1.
Figure 3: Architecture of Res-3DDR-UNet model. The annotations above the residual blocks correspond with the output shape of such blocks. Similarly to 3DDR-UNet, the first part reduces the dimensionality, and then the second part upsamples the data to its original size (except for the temporal dimension). Between the reduction and expansion parts, intermediate convolutions reduce the temporal dimension (lags) from 10 to 1.
kernel sizes of pooling and upsampling, as well as the loss function, are the same as in the previously described models. The architecture of AsymmInceptionRes-3DDR-UNet is depicted in Fig. 6.
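The decomposition of Fig. 5 can be written as three consecutive one-dimensional convolutions. The sketch below assumes a kernel size of 3 and ReLU activations between the factors; roughly, it replaces the \\(k^{3}\\) weights of a full 3D kernel by \\(3k\\) weights per input-output channel pair.

```python
from tensorflow.keras import layers

def asymmetric_conv3d(x, filters, k=3):
    """Approximate a k x k x k convolution with k x 1 x 1, 1 x k x 1 and 1 x 1 x k
    convolutions applied consecutively (cf. Fig. 5)."""
    h = layers.Conv3D(filters, (k, 1, 1), padding="same", activation="relu")(x)
    h = layers.Conv3D(filters, (1, k, 1), padding="same", activation="relu")(h)
    return layers.Conv3D(filters, (1, 1, k), padding="same", activation="relu")(h)
```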
## 4 Data Description
The data used in this paper consists of satellite images. It is provided by Copernicus 1, the observation program led by the European Commission and the European Space Agency (ESA). Specifically, it is part of the dataset \"Atlantic - European North West Shelf - Ocean Physics Analysis and Forecast\" [56], covering a geographical area with longitude from E 002\\({}^{\\circ}\\)000 to E 006\\({}^{\\circ}\\)000 and latitude from N 51\\({}^{\\circ}\\)600 to N 53\\({}^{\\circ}\\)400. The spatial resolution is approximately 1.5 km, so every pixel represents a region of size \\(1.5\\times 1.5\\) km. We chose such a geographical area since it covers both the land and sea of the Netherlands. Furthermore, the selected observations start on 01/03/2017 and end on 13/02/2019, with an hourly temporal resolution. The dataset consists of four weather variables, i.e. Eastward current velocity (EastCUR), Northward current velocity (NorthCUR), Seawater salinity (SAL) and Sea surface height (SSH).
Footnote 1: [https://marine.copernicus.eu/](https://marine.copernicus.eu/)
Therefore, each time-step of each variable is represented by a \\(135\\times 135\\) image (see Fig. 7). For reproducibility purposes, the code as well as the dataset are available on Github 2. More details on the included variables can be found in the official documentation3.
Footnote 2: [https://github.com/jesusgf96/Sea-Elements-Prediction-UNet-Based-Models](https://github.com/jesusgf96/Sea-Elements-Prediction-UNet-Based-Models)
Footnote 3: [https://resources.marine.copernicus.eu/documents/FUM/CMEMS-NWS-PUM-004-013.pdf](https://resources.marine.copernicus.eu/documents/FUM/CMEMS-NWS-PUM-004-013.pdf)
## 5 Experimental Results
### Data Preprocessing
As the data contains sea elements, the pixels in the image that represent the ground should not be taken into account by the network. In practice, the pixels corresponding to the land are masked initially. Hence, we apply a MinMax scaling with a boundary of 0.1 and 1 to the pixels representing the sea elements as follows:
\\[x=0.1+\\frac{((x-x_{min})*(1-0.1))}{x_{max}-x_{min}}. \\tag{1}\\]
Then the pixels representing the ground are assigned a zero value. In this way, the pixels that belong to the ground are invisible to the models because of the ReLU activation function used within them. Moreover, we crop the images by seven pixels from the right and the bottom sides, resulting in a \\(128\\times 128\\) shape, which is suitable for the subsequent convolutional and pooling operations. The data is arranged in such a way that the resulting object is a four-dimensional array \\(\\mathcal{T}\\in\\mathbb{R}^{L\\times H\\times W\\times V}\\), where \\(L\\) is the number of time-steps, which makes up the time dimension. \\(H\\) and \\(W\\) refer to the size of the image and form the spatial dimensions. The last element \\(V\\) corresponds to the sea variables.
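A small NumPy sketch of this preprocessing is given below for illustration. It assumes a boolean land mask is available and applies the min-max scaling of Eq. (1) per variable over the whole array; whether the scaling statistics are computed per variable and over which data split is an assumption here.

```python
import numpy as np

def preprocess(frames, land_mask):
    """frames: (time, 135, 135, variables); land_mask: (135, 135) boolean, True on land.
    Returns the cropped (time, 128, 128, variables) array with sea pixels scaled to
    [0.1, 1] (Eq. 1) and land pixels set to 0."""
    frames = frames[:, :128, :128, :].astype("float32")
    mask = land_mask[:128, :128]
    out = np.zeros_like(frames)
    for v in range(frames.shape[-1]):
        sea_values = frames[..., v][:, ~mask]
        x_min, x_max = sea_values.min(), sea_values.max()
        scaled = 0.1 + (frames[..., v] - x_min) * (1 - 0.1) / (x_max - x_min)
        scaled[:, mask] = 0.0              # land stays invisible to the ReLU-based models
        out[..., v] = scaled
    return out
```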
### Experimental Setup
For all the models, we use all the variables as input, and we perform a prediction of the same variables. The number of convolutional filters is chosen in such a way that all models contain comparable total number of trainable parameters.
In our experiments, the number of lags is set to 10, as it was empirically found to yield better performance compared to other tested lag values. Therefore, the model receives ten hours of information to predict one single time-step, which translates into an input of shape (10, 128, 128, 4) and an output of shape (1, 128, 128, 4). In addition, to test the predictive ability of the models as well as their robustness, four different experiments are carried out, consisting of 12, 24, 48, and 72 hours ahead predictions for all the variables. The models are trained with data spanning one year, from 01/03/2017 to 01/03/2018, i.e. 8760 hours of training data in total. The validation data is composed of 2016 hours, representing 504 hours from each season (spring, summer, autumn and winter). This validation data corresponds to a period of time after the training data. Similarly, the test data is composed of another 2016 hours after the training data. The specific days used for both validation and test can be found in Table 1.
### Training
The same training setup is used in all the models. As mentioned previously in section 3, the Mean Squared Error (MSE) is used as the loss function to minimize the differences between the predicted and the ground truth image. Adam optimization
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline & & **Spring** & **Summer** & **Autumn** & **Winter** \\\\ \\hline \\multirow{2}{*}{**Validation**} & **From** & 01/04/2018 & 01/07/2018 & 01/10/2018 & 01/01/2019 \\\\ & **To** & 22/04/2018 & 22/07/2018 & 22/10/2018 & 22/01/2019 \\\\ \\hline \\multirow{2}{*}{**Test**} & **From** & 23/04/2018 & 23/07/2018 & 23/10/2018 & 23/01/2019 \\\\ & **To** & 13/05/2018 & 12/08/2018 & 13/11/2018 & 13/02/2019 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Time-steps used for validation and test.
Figure 5: Example of kernel decomposition in the asymmetric convolution operation.
method [57] is applied to optimize the loss function. The batch size and the dropout rate are set to 16 and 0.5 respectively. We also implement a checkpoint callback that monitors the validation loss, and we let the models train for 100 epochs in each of the experiments. The best results are then saved based on the performance of the models on the validation data.
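This training setup translates almost directly into a few lines of Keras. The snippet below is a sketch: `x_train`, `y_train`, `x_val` and `y_val` are placeholders for the preprocessed arrays, the checkpoint file name is arbitrary, and the optimizer's learning rate is left at its default since it is not specified in the text.

```python
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam

# x_train, y_train, x_val, y_val: preprocessed input/target arrays (see above)
model = build_3ddr_unet_sketch()          # or any of the four proposed architectures
model.compile(optimizer=Adam(), loss="mse")

checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=16, epochs=100,
          callbacks=[checkpoint])
```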
### Results and discussion
This section presents the results obtained from the described experiments. The obtained MSEs of all the models for each of the configurations are tabulated in Table 2. These results correspond to testing the models on the four combined seasons. It can be observed that for all the models the prediction error increases as the number of hours ahead increases. The AsymmInceptionRes-3DDR-UNet model performs better than the other models in almost all the scenarios. This improvement in performance becomes more apparent as the number of hours ahead increases. In Table 3, the test MSE of each model is displayed separately for each season. Similar to the previous results, AsymmInceptionRes-3DDR-UNet outperforms the other discussed models in most of the setups. One may also observe that seasons with more changing weather conditions, such as winter, make it more challenging for the networks to learn the underlying complex patterns. All four models yield noticeably better predictions in seasons with more stable weather, like summer. Furthermore, a comparison of the final number of convolutional layers of each model can be found in Fig. 8. Fig. 9 shows the obtained MSE of the test data for each sea element. Fig. 10 (a,b,c,d) corresponds to 12, 24, 48 and 72 hours ahead prediction respectively. In general, we observe that seawater salinity is the most challenging variable to predict among the variables considered in this study.
An example of the 48h ahead forecast with the AsymmInceptionRes-3DDR-UNet model during winter is shown in Fig. 11. As can be seen, the forecast is considerably accurate, even when it comes to seawater velocity (EastCUR and NorthCUR), which contains quite different areas. The obtained results suggest that the inclusion of parallel branches of convolutions, present in the InceptionRes-3DDR-UNet and AsymmInceptionRes-3DDR-UNet models, has led to a more noticeable performance improvement.
\\begin{table}
\\begin{tabular}{l c c c c} \\hline \\hline & \\multicolumn{4}{c}{**Hours ahead**} \\\\ \\hline
**Model** & **12h** & **24h** & **48h** & **72h** \\\\ \\hline
**3DDR-UNet** & 5.40e-02 & 7.78e-02 & 1.21e-01 & 1.69e-01 \\\\
**Res-3DDR-UNet** & 6.34e-02 & 7.99e-02 & 1.42e-01 & 1.77e-01 \\\\
**InceptionRes-3DDR-UNet** & 5.19e-02 & 7.08e-02 & 1.20e-01 & 1.47e-01 \\\\
**AsymmInceptionRes-3DDR-UNet** & 5.15e-02 & 7.56e-02 & 1.17e-01 & 1.41e-01 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Test MSE of all models in all four seasons.
Figure 6: Architecture of AsymmInceptionRes-3DDR-UNet model. The annotations above the asymmetric inception residual blocks correspond to the output shape of such blocks. Similarly to the previous models, the first part reduces the dimensionality, and then the second part upsamples the data to its original size (except for the temporal dimension). Between the reduction and expansion parts, intermediate convolutions reduce the temporal dimension (lags) from 10 to 1.
Figure 7: Example of the sea surface height in meters of the studied region.
## 6 Conclusion
In this paper, four new models based on the U-Net architecture are introduced for multi-step ahead coastal sea elements prediction. The proposed models are examined under different setups, i.e. different seasons and numbers of hours ahead. Among the discussed models, AsymmInceptionRes-3DDR-UNet and InceptionRes-3DDR-UNet have shown superior performance thanks to the use of parallel convolutions. However, the incorporation of asymmetric convolutions and additional parallel branches makes the AsymmInceptionRes-3DDR-UNet perform slightly better than the latter, yielding the most promising results on the studied tasks. The scripts and models used in this paper can be found at [https://github.com/jesusgf96/Sea-Elements-Prediction-UNet-Based-Models](https://github.com/jesusgf96/Sea-Elements-Prediction-UNet-Based-Models).
## Acknowledgment
Simulations were performed with computing resources granted by RWTH Aachen University.
## References
* (1) R. W. Katz, A. H. Murphy, Economic value of weather and climate forecasts, Cambridge University Press, 2005.
* (2) R. G. da Silva, M. H. D. M. Ribeiro, S. R. Moreno, V. C. Mariani, L. dos Santos Coelho, A novel decomposition-ensemble learning framework for multi-step ahead wind energy forecasting, Energy 216 (2021) 119174.
* (3) S. R. Moreno, V. C. Mariani, L. dos Santos Coelho, Hybrid multi-stage decomposition with parametric model applied to wind speed forecasting in brazilian northeast, Renewable Energy 164 (2021) 1508-1526.
* (4) Z. Liu, R. Hara, H. Kita, Hybrid forecasting system based on data area division and deep learning neural network for short-term wind speed forecasting, Energy Conversion and Management 238 (2021) 114136.
* (5) S. R. Moreno, R. G. da Silva, V. C. Mariani, L. dos Santos Coelho, Multi-step wind speed forecasting based on hybrid multi-stage decomposition model and long short-term memory neural network, Energy Conversion and Management 213 (2020) 112869.
* (6) J. Henderson, E. R. Nielsen, G. R. Herman, R. S. Schumacher, A hazard multiple: Overlapping tornado and flash food warnings in a national weather service forecast office in the southeastern united states, Weather and Forecasting 35 (4) (2020) 1459-1481.
* (7) K. M. Simmons, D. Sutter, Wsr-88d radar, tornado warnings, and tornado casualties, Weather and Forecasting 20 (3) (2005) 301-310.
* (8) A. C. Lorenc, Analysis methods for numerical weather prediction, Quarterly Journal of the Royal Meteorological Society 112 (474) (1986) 1177-1194.
* (9) P. Bauer, A. Thorpe, G. Brunet, The quiet revolution of numerical weather prediction, Nature 525 (7567) (2015) 47-55.
* (10) H. R. Glahn, Statistical weather forecasting (1985).
* (11) A. Holtslag, E. De Bruijn, H. Pan, A high resolution air mass transformation model for short-range weather forecasting, Monthly Weather Review 118 (8) (1990) 1561-1575.
* (12) K. Saito, H. Seko, T. Kuroda, T. Fujita, T. Kawabata, K. Aonashi, T. Tsuyuki, Next generation supercomputer project toward cloud resolving nwp, CAS/JSC WGNE Res. Act. Atmos. Ocea. Model 41 (2011) 5-19.
* (13) J. Zhongzhen, Z. Qingcun, Problems on nonlinear computational instability in nwp, Journal of the Meteorological Society of Japan. Ser. II 64 (1986) 255-261.
Figure 8: Comparison of the number of parameters and convolutional layers between models.
\\begin{table}
\\begin{tabular}{l l c c c c} \\hline \\hline \\multirow{2}{*}{** Season**} & \\multirow{2}{*}{**Model**} & **12h ahead** & **24h ahead** & **48h ahead** & **72h ahead** \\\\ \\hline \\multirow{4}{*}{**Spring**} & **3DDR-UNet** & 7.45e-02 & 9.95e-02 & 1.37e-01 & 2.14e-01 \\\\ & **Res-3DDR-UNet** & 8.37e-02 & 1.01e-01 & 1.65e-01 & 1.69e-01 \\\\ & **InceptionRes-3DDR-UNet** & 6.63e-02 & 9.37e-02 & 1.43e-01 & 1.78e-01 \\\\ & **AsymmInceptionRes-3DDR-UNet** & 6.98e-02 & 9.56e-02 & 1.38e-01 & 1.66e-01 \\\\ \\hline \\multirow{4}{*}{**Sommer**} & **3DDR-UNet** & 4.10e-02 & 7.92e-02 & 1.37e-01 & 1.69e-01 \\\\ & **Res-3DDR-UNet** & 5.23e-02 & 6.85e-02 & 1.23e-01 & 1.97e-01 \\\\ & **InceptionRes-3DDR-UNet** & 4.37e-02 & 6.55e-02 & 1.06e-01 & 1.49e-01 \\\\ & **AsymmInceptionRes-3DDR-UNet** & 4.03e-02 & 7.88e-02 & 1.04e-01 & 1.43e-01 \\\\ \\hline \\multirow{4}{*}{**Autumn**} & **3DDR-UNet** & 3.36e-02 & 4.81e-02 & 6.95e-02 & 7.89e-02 \\\\ & **Res-3DDR-UNet** & 3.97e-02 & 5.61e-02 & 8.92e-02 & 1.27e-01 \\\\ & **InceptionRes-3DDR-UNet** & 3.29e-02 & 4.31e-02 & 6.98e-02 & 6.75e-02 \\\\ & **AsymmInceptionRes-3DDR-UNet** & 2.97e-02 & 4.66e-02 & 6.79e-02 & 7.11e-02 \\\\ \\hline \\multirow{4}{*}{**Winter**} & **3DDR-UNet** & 6.62e-02 & 8.51e-02 & 1.62e-01 & 1.90e-01 \\\\ & **Res-3DDR-UNet** & 7.66e-02 & 9.44e-02 & 1.88e-01 & 1.99e-01 \\\\ & **InceptionRes-3DDR-UNet** & 6.41e-02 & 8.18e-02 & 1.63e-01 & 1.71e-01 \\\\ & **AsymmInceptionRes-3DDR-UNet** & 6.51e-02 & 8.28e-02 & 1.58e-01 & 1.62e-01 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Test MSE of all models in different seasons.
Figure 9: MSE of individual sea elements for different number of hours ahead. The predictions are performed with the (a) 3DDR-UNet, (b) Res-3DDR-UNet, (c) InceptionRes-3DDR-UNet, and (d)AsymmInceptionRes-3DDR-UNet trained models.
Figure 10: MSE of individual sea elements for the different models. The predictions are performed using (a) 12h, (b) 24h, (c) 48h, and (d) 72h ahead.
* (14) K.-j. Kim, Financial time series forecasting using support vector machines, Neurocomputing 55 (1-2) (2003) 307-319.
* (15) G. Dudek, Short-term load forecasting using random forests, in: Intelligent Systems' 2014, Springer, 2015, pp. 821-828.
* (16) A. Girard, C. E. Rasmussen, J. Q. Candela, R. Murray-Smith, Gaussian process priors with uncertain inputs application to multiple-step ahead time series forecasting, in: Advances in neural information processing systems, 2003, pp. 545-552.
* (17) Y. Radhika, M. Shashi, Atmospheric temperature prediction using support vector machines, International journal of computer theory and engineering 1 (1) (2009) 55.
* (18) K. Rasouli, W. W. Hsieh, A. J. Cannon, Daily streamflow forecasting by machine learning methods with weather and climate inputs, Journal of Hydrology 414 (2012) 284-293.
* (19) K. Trebing, T. Stanczyk, S. Mehrkanoon, SmaAT-UNet: Precipitation nowcasting using a small attention-unet architecture, Pattern Recognition Letters 145 (2021) 178-186.
* (20) K. Trebing, S. Mehrkanoon, Wind speed prediction using multidimensional convolutional neural networks, in: IEEE Symposium Series on Computational Intelligence (IEEE-SSCI), 2020, pp. 713-720.
* (21) S. Webb, Deep learning for biology, Nature 554 (7693) (2018).
* (22) S. Mehrkanoon, J. A. K. Suykens, Deep hybrid neural-kernel networks using random fourier features, Neurocomputing 298 (2018) 46-54.
* (23) S. Mehrkanoon, Deep neural-kernel blocks, Neural Networks 116 (2019) 46-55.
* (24) I. Alaoui Abdellaoui, J. Garcia Fernandez, C. Sahinli, S. Mehrkanoon, Enhancing brain decoding using attention augmented deep neural networks, in: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2021, pp. 183-188.
* (25) S. Mehrkanoon, T. Falck, J. A. K. Suykens, Approximate solutions to ordinary differential equations using least squares support vector machines, IEEE transactions on neural networks and learning systems 23 (9) (2012) 1356-1367.
* (26) S. Mehrkanoon, J. A. K. Suykens, Learning solutions to partial differential equations using ls-svm, Neurocomputing 159 (2015) 105-116.
* (27) S. Mehrkanoon, S. Mehrkanoon, J. A. K. Suykens, Parameter estimation of delay differential equations: an integration-free ls-svm approach, Communications in Nonlinear Science and Numerical Simulation 19 (4) (2014) 830-841.
* (28) S. Mehrkanoon, Cross-domain neural-kernel networks, Pattern Recognition Letters 125 (2019) 474-480.
* (29) I. A. Abdellaoui, S. Mehrkanoon, Symbolic regression for scientific discovery: an application to wind speed forecasting, in: Accepted for publication in the Proc. of IEEE Symposium Series on Computational Intelligence (IEEE-SSCI), 2021.
* (30) T. Stanczyk, S. Mehrkanoon, Deep graph convolutional networks for wind speed prediction, in: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2021, pp. 147-152.
* (31) A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in neural information processing systems, 2012, pp. 1097-1105.
* (32) O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, 2015, pp. 234-241.
* (33) T. Falk, D. Mai, R. Bensch, O. Cicek, A. Abdulkadir, Y. Marrakchi, A. Bohm, J. Deubner, Z. Jackel, K. Seiwald, et al., U-net: deep learning for cell counting, detection, and morphometry, Nature methods 16 (1) (2019) 67-70.
* (34) Y. Han, J. C. Ye, Framing u-net via deep convolutional framelets: Application to sparse-view ct, IEEE transactions on medical imaging 37 (6) (2018) 1418-1429.
* (35) S. Mehrkanoon, Deep shared representation learning for weather elements forecasting, Knowledge-Based Systems 179 (2019) 120-128.
* (36) K. Zhou, Y. Zheng, B. Li, W. Dong, X. Zhang, Forecasting different types of convective weather: A deep learning approach, Journal of Meteorological Research 33 (5) (2019) 797-809.
* (37) S. Kim, S. Hong, M. Joh, S.-k. Song, Deeprain: Convlstm network for precipitation prediction using multichannel radar data, arXiv preprint arXiv:1711.02316 (2017).
Figure 11: AsymmInceptionRes-3DDR-UNet's prediction of all variables 48 hours ahead in winter. The data is scaled back to its original values.
* (38) A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in neural information processing systems, 2017, pp. 5998-6008.
* (39) J. Ho, N. Kalchbrenner, D. Weissenborn, T. Salimans, Axial attention in multidimensional transformers, arXiv preprint arXiv:1912.12180 (2019).
* (40) C. K. Sønderby, L. Espeholt, J. Heek, M. Dehghani, A. Oliver, T. Salimans, S. Agrawal, J. Hickey, N. Kalchbrenner, Metnet: A neural weather model for precipitation forecasting, arXiv preprint arXiv:2003.12140 (2020).
* (41) C. Xiao, N. Chen, C. Hu, K. Wang, Z. Xu, Y. Cai, L. Xu, Z. Chen, J. Gong, A spatiotemporal deep learning model for sea surface temperature field prediction using time-series satellite data, Environmental Modelling & Software 120 (2019) 104502.
* (42) G. Zheng, X. Li, R.-H. Zhang, B. Liu, Purely satellite data-driven deep learning forecast of complicated tropical instability waves, Science Advances 6 (29) (2020) eaba1482.
* (43) S. Shakya, S. Kumar, M. Goswami, Deep learning algorithm for satellite imaging based cyclone detection, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020) 827-839.
* (44) J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788.
* (45) T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollar, Focal loss for dense object detection, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980-2988.
* (46) G. Rivolta, F. Marzano, E. Coppola, M. Verdecchia, Artificial neural-network technique for precipitation nowcasting from satellite imagery (2006).
* (47) W. Liu, W. Luo, D. Lian, S. Gao, Future frame prediction for anomaly detection-a new baseline, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6536-6545.
* (48) V. Patraucean, A. Handa, R. Cipolla, Spatio-temporal video autoencoder with differentiable memory, arXiv preprint arXiv:1511.06309 (2015).
* (49) J.-T. Hsieh, B. Liu, D.-A. Huang, L. F. Fei-Fei, J. C. Niebles, Learning to decompose and disentangle representations for video prediction, in: Advances in Neural Information Processing Systems, 2018, pp. 517-526.
* (50) Y. Zhao, B. Deng, C. Shen, Y. Liu, H. Lu, X.-S. Hua, Spatio-temporal autoencoder for video anomaly detection, in: Proceedings of the 25th ACM international conference on Multimedia, 2017, pp. 1933-1941.
* (51) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
* (52) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9.
* (53) C. Szegedy, S. Ioffe, V. Vanhoucke, A. Alemi, Inception-v4, inception-resnet and the impact of residual connections on learning, arXiv preprint arXiv:1602.07261 (2016).
* (54) H. Yang, C. Yuan, B. Li, Y. Du, J. Xing, W. Hu, S. J. Maybank, Asymmetric 3d convolutional neural networks for action recognition, Pattern recognition 85 (2019) 1-12.
* (55) C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for computer vision, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818-2826.
* (56) Atlantic - European North West Shelf - Ocean Physics Analysis and Forecast dataset, Ocean Sci. 15 (2019) 1133-1158, https://doi.org/10.5194/os-15-1133-2019 [The impact of a new high-resolution ocean model on the Met Office North-West European Shelf forecasting system].
* (57) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). | The supply and demand of energy is influenced by meteorological conditions. The relevance of accurate weather forecasts increases as the demand for renewable energy sources increases. The energy providers and policy makers require weather information to make informed choices and establish optimal plans according to the operational objectives. Due to the recent development of deep learning techniques applied to satellite imagery, weather forecasting that uses remote sensing data has also been the subject of major progress. The present paper investigates multiple steps ahead frame prediction for coastal sea elements in the Netherlands using U-Net based architectures. Hourly data from the Copernicus observation programme spanned over a period of 2 years has been used to train the models and make the forecasting, including seasonal predictions. We propose a variation of the U-Net architecture and further extend this novel model using residual connections, parallel convolutions and asymmetric convolutions in order to introduce three additional architectures. In particular, we show that the architecture equipped with parallel and asymmetric convolutions as well as skip connections outperforms the other three discussed models.
keywords: Coastal sea elements, Time-series satellite data, Deep learning, Convolutional neural networks, U-Net
arxiv-format/1809_04794v1.md | # Considering Gut Biofeedback for Emotion Regulation
Jelena Mladenovic
Inria, France. [email protected]
## Introduction
Recent developments in Human-Computer Interaction (HCI) and in physiological and affective computing have brought to light the need for robust, wearable physiological sensors. So far, using physiological sensors a person can: (1) consciously monitor/regulate their bodily functions through biofeedback for well-being [10], (2) (un)consciously adapt an environment or task, which can for instance increase immersion in gaming [14], or (3) consciously manipulate an external device using only physiological (neural) activity, as in active Brain-Computer Interfaces, for example to control wheelchairs or to communicate [18]. Measures of electrodermal activity (EDA), cardiac function, facial muscle activity, and respiration have frequently been used to assess emotional states [9]. Nowadays, wearable devices exist for measuring EDA and heart rate, such as the Empatica E4 smartwatch. Remarkably, however, the gastrointestinal system has often been neglected by affective research. Even though humans regularly experience a "gut feeling" or "butterflies in the stomach", they often overlook the importance of this phenomenon as an actual physiological process. Yet studies have shown that the gut can indeed play an important role in affective disorders [2]. Still, non-invasive, robust physiological measurements and wearable devices for such phenomena have not yet been developed. The possibility of assisting users in regulating the internal processes of the gut, and thus the emotions that arise with these physiological processes, has not yet been seriously considered.
In this paper we briefly explain what the gut signal is and how such a modality can be useful for inferring and regulating emotions through biofeedback. We also tackle some fundamental questions about emotions that are often taken lightly in the HCI community.
### Gastro-intestinal tract
The gastro-intestinal (GI) tract comprises the mouth, esophagus, stomach and intestines. The GI tract communicates bidirectionally with the Central Nervous System (CNS) through the sympathetic and parasympathetic systems [12], which is why researchers often refer to the gut-brain axis. The GI tract is governed by the enteric nervous system, which can act independently from the CNS and contains over 500 million neurons, which is why it is also called the "second brain". Moreover, there has recently been much interest in the gut microbiota, the microorganisms that inhabit the gut, which have been shown to play a role in stress regulation in mice [12].
The electrogastrogram (EGG) is a reliable and noninvasive method of recording gastric myoelectrical activity [11]. The gastric myoelectrical activity paces the contraction of the stomach. The normal frequency of the electrogastric wave is 3 cycles per minute (cpm), and is termed normogastria [8]. It is worth noting that amplifiers typically used for electroencephalography (assessing brain activity) have been shown to be equally useful for EGG, for example in [4] using an affordable and open-source device, OpenBCI. Recent studies have shown that EGG can be a valuable measure of emotion [15]. Individuals often report a "nervous stomach" when contractions become too frequent (tachygastria, 4-9 cpm) during stressful experiences [16]. Participants reacted with tachycardia during horror movies, but with a reduced frequency of gastric waves during a relaxation session [19]. Gastric slow waves have also been shown to be useful for predicting the experience of disgust [5].
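To make the above frequency bands concrete, the following minimal sketch (in Python, not taken from any of the cited works) shows how a recorded EGG trace could be reduced to its dominant frequency in cycles per minute and coarsely labeled; the sampling rate, filter order, and the 2-4 cpm / 4-9 cpm thresholds are illustrative choices that simply mirror the numbers quoted above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def dominant_gastric_frequency(egg, fs=2.0):
    """Return the dominant frequency (in cycles per minute) of an EGG trace.

    egg : 1-D array of EGG samples, fs : sampling rate in Hz.
    The gastric band of interest (~1-10 cpm) corresponds to ~0.017-0.17 Hz.
    """
    low, high = 1.0 / 60.0, 10.0 / 60.0             # band edges in Hz
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, egg)                   # zero-phase band-pass
    freqs, psd = welch(filtered, fs=fs, nperseg=min(len(filtered), 512))
    band = (freqs >= low) & (freqs <= high)
    peak_hz = freqs[band][np.argmax(psd[band])]      # strongest in-band component
    return peak_hz * 60.0                            # convert Hz -> cpm

def label_rhythm(cpm):
    if 2.0 <= cpm <= 4.0:
        return "normogastria (~3 cpm)"
    if 4.0 < cpm <= 9.0:
        return "tachygastria"
    return "bradygastria / out of band"

# toy example: a synthetic 3 cpm wave with noise, 10 minutes long
fs = 2.0
t = np.arange(0, 600, 1 / fs)
egg = np.sin(2 * np.pi * (3 / 60) * t) + 0.2 * np.random.randn(t.size)
cpm = dominant_gastric_frequency(egg, fs)
print(f"dominant frequency: {cpm:.1f} cpm -> {label_rhythm(cpm)}")
```

In a biofeedback setting, such a per-window estimate could then drive the visual, audio or haptic feedback discussed below.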
Individuals clearly react emotionally with their gut, just as the gut influences their emotions. As such, we advocate that it could be interesting to propose biofeedback specifically aimed at regulating a "nervous stomach".
### Biofeedback for gut awareness
Biofeedback is a system that externalizes one's internal bodily activity, for example in visual, audio or haptic modalities. It assists people in becoming aware of their internal processes or physiological activity, as a technique of interoception, known to be beneficial for well-being [3]. Notice that biofeedback is built on the assumption that being aware of one's physiological processes creates or modulates an emotion. In other words, the perception of physiological changes contributes to the content of conscious experiences of emotion [13]. Biofeedback thus externalizes such phenomena and enables people to consciously examine and regulate their internal states and their experience of emotions. As the gut clearly plays an important role in human emotion, we believe it could be beneficial to build a wearable EGG device that records one's gut contractions and feeds them back, as depicted in Figure 1. Interestingly, the use of biofeedback could also expose the relationship between experiencing bodily activity and experiencing an emotion. In experiments where people were given fake biofeedback to manipulate their emotions toward images of individuals, the perception of external audio stimuli dominated over their autonomic perception [17]. This leads us to ask whether the perceived physiological process is more important than the actual one.
### Relation between physiology and emotion
The sympathetic nervous system, governing the fight-or-flight mechanisms, influences sweat secretion, increases heart rate, constricts blood vessels in gastrointestinal organs, inhibits contractions in the digestive tract, and much more. These physiological changes are recognized as measures of emotion and expressed as stress, anxiety, fear, etc. This assumption follows James' theory [6], in which feeling (the experience of emotion) exists due to physiological changes in one's own body. James argued that seeing a fearful stimulus would first trigger emotional responses (increases in sympathetic activity), and that the perception of these physiological changes would form the basis for our conscious experience of emotion. Today, in affective neuroscience, James' theory has been revised and updated, e.g., by acknowledging the role of emotions in decision-making [1], or by distinguishing "the conscious experience of an emotion (feeling), its expression (physiological response), and semantic knowledge about it (recognition)" [13]. Taking the role of the GI tract into consideration more often might help to reconcile antagonistic views of emotion. For example, the authors of [7] described a dissociation between autonomic response and affect through the study of patients with brain lesions. In this experiment, patients without autonomic responses would not sweat but would still be able to experience emotions related to music excerpts, while patients with different lesions, incapable of judging the music, displayed EDA responses. As such, lacking a link between physiology and emotions, the authors "opposed" James' theory. Nevertheless, since the enteric nervous system can function independently from the autonomic system, we believe the physiology could still have contributed to the emotional perception of music.
## Conclusion
With this paper we hope to foster discussions among HCI practitioners about the study of gut signals. To further discover how the body contributes to the experience of emotion and _vice versa_, it can be useful to include EGG as an additional tool for emotion recognition. Also, affordable and mobile biosignal amplifiers could enable the creation of a new biofeedback mechanism through which individuals could learn how to regulate their emotions related to the gut.
## Acknowledgment
I wish to thank Jeremy Frey and Angela Vujic for insightful discussions and for proofreading this paper.
## References
* [1] Antoine Bechara, Hanna Damasio, and Antonio R Damasio. 2000. Emotion, decision making and the orbitofrontal cortex. _Cerebral cortex_ (2000).
* [2] EJ Bennett, C Piesse, K Palmer, CA Badcock, CC Tennant, and JE Kellow. 1998. Functional gastrointestinal disorders: psychological, social, and somatic features. _Gut_ 42, 3 (1998), 414-420.
* [3] Norman Farb, Jennifer Daubenmier, Cynthia J. Price, Tim Gard, Catherine Kerr, Barnaby D. Dunn, Anne Carolyn Klein, Martin P. Paulus, and Wolf E. Mehling. 2015. Interoception, contemplative practice, and health. _Front. Psychol._ 6, June (2015), 763.
* [4] Armen A Gharibans, Benjamin L Smarr, David C Kunkel, Lance J Kriegstfeld, Hayat M Mousa, and Todd P Coleman. 2018. Artifact Rejection Methodology Enables Continuous, Noninvasive Measurement of Gastric Myoelectric Activity in Ambulatory Subjects. _Scientific reports_ 8, 1 (2018), 5019.
* [5] Neil A Harrison, Marcus A Gray, Peter J Gianaros, and Hugo D Critchley. 2010. The embodiment of emotional feelings in the brain. _J. Neurosci._ 30, 38 (2010).
* [6] William James. 1884. What is an emotion? _Mind_ 9, 34 (1884), 188-205.
* [7] Erica L Johnsen, Daniel Tranel, Susan Lutgendorf, and Ralph Adolphs. 2009. A neuroanatomical dissociation for emotion induced by music. _International Journal of Psychophysiology_ 72, 1 (2009), 24-33.
* [8] Kenneth L Koch and Robert Morris Stern. 2004. _Handbook of electrogastrography_. Oxford University.
* [9] A Mayer and CB Saper. 2000. Non-conscious brain processing indexed by psychophysiological measures. _The biological basis for mind body interactions_ (2000).
* [10] Michael G. McKee. 2008. Biofeedback: An overview in the context of heart-brain medicine. _Cleveland Clinic Journal of Medicine_ 75, SUPPL_2 (2008), 31-34.
* [11] Thomas S Nelsen and Shoichi Kohatsu. 1968. Clinical electrogastrography and its relationship to gastric surgery. _The American Journal of Surgery_ (1968).
* [12] Nobuyuki Sudo, Yoichi Chida, Yuji Aiba, Junko Sonoda, Naomi Oyama, Xiao-Nian Yu, Chiharu Kubo, and Yasuhiro Koga. 2004. Postnatal microbial colonization programs the hypothalamic-pituitary-adrenal system for stress response in mice. _J. Physiol._ (2004).
* [13] Naotsugu Tsuchiya and Ralph Adolphs. 2007. Emotion and consciousness. _Trends in cog. sciences_ (2007).
* [14] Bram van de Laar, Hayretin Gurkok, Danny Plass-Oude Bos, Mannes Poel, and Anton Nijholt. 2013. Experiencing BCI control in a popular computer game. _IEEE TCIAIG_ 5, 2 (2013), 176-184.
* [15] Eduardo PM Vianna and D Tranel. 2006. Gastric myoelectrical activity as an index of emotional arousal. _International Journal of Psychophysiology_ 61, 1 (2006).
* [16] Angela Vujic. 2018. Gut Brain Computer Interfacing. _International BCI Meeting '18 Master Class_ (2018).
* [17] Stanley B Woll and Miles E McFall. 1979. The effects of false feedback on attributed arousal and rated attractiveness in female subjects. _J. Pers._ (1979).
* [18] Jonathan R Wolpaw, Niels Birbaumer, Dennis J McFarland, Gert Pfurtscheller, and Theresa M Vaughan. 2002. Brain-computer interfaces for communication and control. _Clin. Neurophy._ (2002).
* [19] J Yin, D Levanon, and JDZ Chen. 2004. Inhibitory effects of stress on postprandial gastric myoelectrical activity and vagal tone in healthy subjects. _Neurogastroenterology & Motility_ (2004).

Recent research in the enteric nervous system, sometimes called the second brain, has revealed the potential of the digestive system in predicting emotion. Even though people regularly experience changes in their gastrointestinal (GI) tract which influence their mood and behavior multiple times per day, robust measurements and wearable devices for such phenomena are not yet well developed. However, other manifestations of the autonomic nervous system such as electrodermal activity, heart rate, and facial muscle movement have been extensively used as measures of emotions or in biofeedback applications, while neglecting the gut.
Electrogastrography; Physiological sensors; Gut Brain axis; biofeedback
arxiv-format/2211_07044v2.md | SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation
Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M. Albrecht, Xiao Xiang Zhu
Y. Wang, N. A. A. Braham, C. Liu are with the Chair of Data Science in Earth Observation, Technical University of Munich (TUM), and the Remote Sensing Technology Institute, German Aerospace Center (DLR). Z. Xiong, X. X. Zhu are with the Chair of Data Science in Earth Observation, Technical University of Munich (TUM). C. M. Albrecht is with the Remote Sensing Technology Institute, German Aerospace Center (DLR).
## I Introduction
Self-supervised learning (SSL) has attracted wide attention in the remote sensing (RS) community for its ability to learn generic representations from unlabeled data. Numerous studies in the literature have proven the potential of SSL in Earth observation (EO) beyond natural images [1]. Despite the focus SSL for EO receives, only limited effort has been dedicated to providing large-scale datasets and benchmarks for pre-training. On the one hand, relying on computer vision datasets like ImageNet [2] is not a preferred option due to the domain gap. On the other hand, while RS datasets like SEN12MS [3] or SeCo [4] exist, they are limited by geospatial overlap, sparse geographical distribution, or a lack of seasonal or multimodal diversity. Therefore, large EO-specific datasets for unsupervised pre-training need to be developed.
In this work, we introduce a large-scale, globally distributed, multi-temporal and multi-sensor dataset _SSL4EO-S12: Self-Supervised Learning for Earth Observation - Sentinel-1/2_. The dataset samples 250K locations around the globe, each providing Sentinel-2 L1C, Sentinel-2 L2A, and Sentinel-1 GRD images with four snapshots from different seasons (in total 3 million 2640m\\(\\times\\)2640m patches). Additionally, we guarantee optimal geospatial coverage by avoiding the overlap of the randomly sampled locations. This renders SSL4EO-S12 the largest and most generic multi-spectral/SAR dataset in the RS literature [5].
We demonstrate the potential of SSL4EO-S12 dataset through a series of extensive experiments. Specifically, we evaluate four representative SSL algorithms--namely: MoCo [6], DINO [7], MAE [8], and data2vec [9]--on three different downstream tasks: scene classification, semantic segmentation and change detection. Our results indicate that pre-training on SSL4EO-S12 improves the downstream performance compared to existing datasets. Moreover, our ablation studies prove the benefits of RS-specific data augmentations including multi-sensor, multi-temporal and atmospheric correction.
## II Related work
**Self-supervised learning** Over the past years, self-supervised learning (SSL) has reached important milestones in computer vision, especially through contrastive methods with joint-embedding architectures. These methods get trained to promote similarity between augmented views of the same input, thereby enforcing invariance to data augmentation. Several families of such methods emerge: 1) contrasting negative samples for which the representations are encouraged to be dissimilar [6]; 2) Knowledge distillation between an asymmetric teacher-student network [7]; 3) redundancy reduction among the embedding dimensions; 4) clustering latent features to common prototypes from different views [10]. Meanwhile, recent developments in masked image modeling (MIM) reveal promising results in generative methods, which reconstruct the masked input at pixel-[8] or feature-[9] level.
We benchmark four representative methods, MoCo [6], DINO [7], MAE [8], and data2vec [9], on the proposed dataset. This way, we cover a reasonably diverse set of methods from different categories: MoCo contrasts negative samples, DINO represents a distillation method, MAE is based on masked reconstruction, and data2vec combines the masking mechanism with a joint-embedding architecture.
**Pre-training datasets** Pre-trained models on ImageNet are widely used for various computer vision tasks. However, this is less appropriate in the context of RS: 1) RS images are not object-centric; 2) there exist various types of sensors in RS; 3) temporal effects yield variations on the ground surface.
Therefore, EO-specific datasets are needed to provide the above in-domain knowledge. The literature has proven the benefits of pre-training on existing labeled RS datasets [11, 12], yet there are limitations such as class bias, and temporal and geographical coverage.
Consequently, there is a need for large-scale pre-training datasets in RS. Two datasets closely related to our efforts are SEN12MS [3] and SeCo [4]. However, SEN12MS is limited by temporal coverage, SeCo has only optical data, and both datasets contain strongly overlapping patches which limit the geospatial coverage. With the above in mind, our proposed SSL4EO-S12 dataset provides an improved spatio-temporal coverage by sampling more locations and removing overlapping patches, enclosing multiple seasons, and including Sentinel-1 as well as two Sentinel-2 products (Table I).
## III SSL4EO-S12 Dataset
### _Data curation & assembly_
The SSL4EO-S12 dataset (Figure 1) exploits openly available SAR/optical satellite data collected by the European Space Agency's Sentinel mission. Following a well-organized baseline provided by SeCo [4], we utilize the Google Earth Engine [14] to download and process the data. We restrict image patches to the vicinity of the 10,000 most populated cities1 in the world (top-10k) to guarantee reasonable global coverage. To obtain diverse land cover, we sample 251,079 locations near these cities, following a Gaussian distribution peaking at the city center with a standard deviation of 50km--assuming most of the variability is concentrated in the downtown and suburbs of cities [4]. At each location, we download 4 images drawn from the four annual seasons to capture seasonal variation. We search for Sentinel-2 tiles with a cloud coverage lower than 10%. We also filter out most overlapping patches with an efficient grid search strategy. In total, we obtain about one million S1-GRD/S2-L1C/S2-L2A image triplets.
Footnote 1: [https://simplemaps.com/data/world-cities](https://simplemaps.com/data/world-cities)
**Data identification.** The collection of SSL4EO-S12 differs from SeCo mainly by introducing overlap filtering and multiple sensors (**bold** below). The workflow is shown as follows:
1. Uniformly sample one city from top-10k populated cities;
2. Sample one location from a Gaussian distribution with a standard deviation of 50km around the city center;
3. **Check if a 2640m\\(\\times\\)2640m image patch centered around that location has significant overlap with previous patches. If not, continue to 4, otherwise return to 1;**
4. For a 30-day interval around four reference dates (Mar 20, Jun 21, Sep 22, Dec 21) in 2021 (additionally look for 2020 as a buffer), check if there exist Sentinel-2 tiles with less than 10% of cloud coverage (**for both L1C and L2A**) and corresponding **Sentinel-1 GRD tiles**;
5. If there exist valid Sentinel-1/2 tiles close to all the four dates, process and download them into curated image patches, otherwise return to 1.
Fig. 1: Sample images of SSL4EO-S12 dataset assembled.
**Overlap filtering.** A simple way to check significant overlap between two patches is to calculate the distance between the two centers. If the distance is smaller than 3/4 of the patch width, there is a non-negligible overlap (roughly 25% or more). Naively, we would need to execute this computation for every new patch relative to all existing patches. However, this becomes inefficient when the number of patches grows large, 250k+ for us. Therefore, we employ a grid search strategy to perform efficient overlap filtering. Instead of calculating the distance to all previous patches, we distribute the patch center coordinates into 360×180 geographical longitude-latitude, one-by-one-degree grid cells. For each new patch, we convert the center coordinates into integer grid coordinates. Subsequently, we search for existing patches within this grid cell and calculate distances only to those local patches. Assuming potential overlap of sampled patches from distinct grid cells is statistically negligible, we significantly reduce computing time compared to a global overlap search. Indeed, for SSL4EO-S12 we record an overlap for approx. 3% of the tiles in densely populated Tokyo, 1.5% in Chicago, and below 1% for locations such as Beijing, Munich, Kampala, and Brasilia.
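The following sketch illustrates the grid search described above; it is a simplified re-implementation rather than the released download script, the equirectangular distance approximation and the example coordinates are illustrative, and the 1980 m threshold encodes the 3/4-patch-width rule for 2640 m patches.

```python
import math
from collections import defaultdict

PATCH_WIDTH_M = 2640.0
MIN_DIST_M = 0.75 * PATCH_WIDTH_M           # 3/4 patch width -> reject as overlap

grid = defaultdict(list)                    # (int lon, int lat) -> accepted centres

def approx_distance_m(p, q):
    """Equirectangular approximation, adequate for km-scale distances."""
    lon1, lat1, lon2, lat2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def try_add_patch(lon, lat):
    """Accept the candidate centre only if no accepted patch in the same
    1x1 degree cell lies closer than MIN_DIST_M; return True if accepted."""
    cell = (int(math.floor(lon)), int(math.floor(lat)))
    for other in grid[cell]:
        if approx_distance_m((lon, lat), other) < MIN_DIST_M:
            return False
    grid[cell].append((lon, lat))
    return True

# usage: feed it candidate centres sampled around a city
print(try_add_patch(139.69, 35.68))   # True  (first patch near Tokyo)
print(try_add_patch(139.70, 35.68))   # False (~900 m away, overlaps)
print(try_add_patch(139.75, 35.68))   # True  (~5.4 km away)
```

As noted above, candidates falling in neighboring cells are never compared against each other, which is exactly why a small residual overlap can remain near cell boundaries.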
### _Data characteristics & volume_
The presented SSL4EO-S12 dataset contains 251,079 globally distributed Sentinel-1 dual-pol SAR, Sentinel-2 top-of-atmosphere multispectral, and Sentinel-2 surface reflectance multispectral triplets over four seasonal timestamps. As of summer 2022, SSL4EO-S12 constitutes the biggest geospatial-temporal, multimodal dataset in terms of medium-resolution PolSAR and multi-spectral imagery serving more than 3 million images. The total data volume equates to an uncompressed size of \\(251,079\\times 4\\times[2\\cdot 4B+(13+12)\\cdot 2B]\\times 264^{2}\\approx 3.7TB\\).
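As a quick sanity check of the volume estimate above, the same arithmetic can be spelled out explicitly (2 float32 SAR bands plus 13 + 12 uint16 optical bands per 264 × 264 pixel patch, four seasons per location, as in the formula):

```python
locations, seasons, pixels = 251_079, 4, 264 * 264
bytes_per_pixel = 2 * 4 + (13 + 12) * 2    # S1: 2 float32 bands; S2 L1C+L2A: 25 uint16 bands
total_bytes = locations * seasons * pixels * bytes_per_pixel
print(f"{total_bytes / 1024**4:.2f} TiB uncompressed")   # ~3.69, i.e. the ~3.7 TB quoted above
```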
Figure 2 depicts the geospatial distribution of the SSL4EO-S12 dataset, highlighting the dense coverage across the globe. Figure 3 depicts the effect of overlap filtering around Tokyo area.
## IV Experimental setup
We evaluate SSL4EO-S12 dataset by self-supervised pre-training and transfer learning on RS downstream tasks. Specific implementation details are provided in the appendix.
### _Self-supervised pre-training_
We perform pre-training using four representative SSL methods: _MoCo-v2/v3_[15, 16], _DINO_[7], _MAE_[8], and _data2vec_[9]. We pre-train ResNet [17] backbones with MoCo(-v2) and DINO, and Vision Transformer (ViT) [18] backbones for all four SSL methods listed above. Unless explicitly noted, Sentinel-2 L1C is used for pre-training. To utilize multi-temporal information, we use RandomSeasonContrast as a data augmentation strategy, i.e., for MoCo and DINO, the input views are randomly picked from two seasons. For MAE and data2vec, one random season is assigned for each patch.
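For illustration, RandomSeasonContrast can be realized as a thin dataset wrapper like the sketch below, where the two views handed to MoCo/DINO come from two independently drawn seasons, while the masked-modeling branch (MAE/data2vec) draws a single season per patch; the class name, the `patches` container, and the `transform` argument are placeholders rather than the released implementation.

```python
import random
from torch.utils.data import Dataset

class RandomSeasonContrast(Dataset):
    """Each item holds the four seasonal acquisitions of one location.

    For contrastive methods the two views are taken from two randomly drawn
    (possibly identical) seasons before the usual photometric/geometric
    augmentations; for masked modeling only one random season is used."""

    def __init__(self, patches, transform, contrastive=True):
        self.patches = patches         # list of [img_t1, img_t2, img_t3, img_t4]
        self.transform = transform     # e.g. crop / flip / jitter pipeline
        self.contrastive = contrastive

    def __len__(self):
        return len(self.patches)

    def __getitem__(self, idx):
        seasons = self.patches[idx]
        if self.contrastive:
            a, b = random.choice(seasons), random.choice(seasons)
            return self.transform(a), self.transform(b)   # two views, two seasons
        return self.transform(random.choice(seasons))     # one random season
```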
Pre-training one ResNet/ViT model for 100 epochs takes 7-25 hours on 4 NVIDIA A100 GPUs, as shown in Table II.
### _Transfer learning_
The pre-trained models are transferred to various downstream tasks. For
* _scene classification_, we evaluate EuroSAT [19] (single-label land cover classification), BigEarthNet [13] (multi-label land cover classification), and So2Sat-LCZ42 [20] (local climate zone classification, culture-10 version).
* _semantic segmentation_, we include DFC2020 [21] (land cover segmentation) and OSCD [22] (change detection).
We perform commonly used linear probing (freezing the pre-trained encoder) and fine-tuning for the downstream tasks. The results are reported in percentage scores.
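Concretely, linear probing trains only a linear classifier on top of the frozen encoder, whereas fine-tuning updates all weights. A minimal PyTorch sketch of the probing setup is given below; the ResNet50 backbone, 13-band input, 10-class head, and learning rate are illustrative and do not reproduce the exact benchmark configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_linear_probe(num_classes, in_channels=13, checkpoint=None):
    encoder = resnet50(weights=None)
    # first conv adapted to multi-spectral input instead of 3-channel RGB
    encoder.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                              padding=3, bias=False)
    encoder.fc = nn.Identity()                      # expose 2048-d features
    if checkpoint is not None:                      # SSL weights, if available
        encoder.load_state_dict(torch.load(checkpoint), strict=False)
    for p in encoder.parameters():                  # linear probing: freeze all
        p.requires_grad = False
    head = nn.Linear(2048, num_classes)             # the only trainable part
    return nn.Sequential(encoder, head)

model = build_linear_probe(num_classes=10)          # e.g. EuroSAT
model[0].eval()                                     # keep frozen BatchNorm stats fixed
optimizer = torch.optim.SGD(model[1].parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 13, 224, 224)                    # a toy batch of S2 patches
y = torch.randint(0, 10, (8,))
loss = criterion(model(x), y)                       # one illustrative step
loss.backward()
optimizer.step()
```

Fine-tuning differs only in that the encoder parameters stay trainable and are included in the optimizer.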
## V Benchmark results
### _Classification_
#### V-A1 Comparison of SSL methods
We first benchmark different SSL methods through linear probing on EuroSAT, BigEarthNet, and So2Sat-LCZ42. As detailed in Table III, all methods outperform random initialization (rand.init.) by a substantial margin. As expected, linear probing on BigEarthNet with all labels performs worse than fully _supervised_ training. Promisingly, the gap stays below 5%. On small datasets like BigEarthNet with 10% labels or EuroSAT, linear probing provides results comparable to supervised training within approx. \\(\\pm 1\\%\\). The trends are slightly different for So2Sat-LCZ42, where the training and testing sets are built upon different cities with a challenging geographical split. Because of this significant domain shift, adding labeled training data does not necessarily improve the testing performance. In fact, fitting the training data distribution does not guarantee out-of-distribution generalization. Nevertheless, the best pre-trained models with linear probing beat the supervised baseline by at least 1% up to about 4%.
Fig. 2: Geographical distribution of SSL4EO-S12 dataset.
Fig. 3: Image patches without (left) and with (right) overlap filtering in Tokyo metropolitan area. We plot red circles of radius 1.32km (132 pixels) for better visualization.
Furthermore, we benchmark fine-tuning results in Table IV. All self-supervised methods outperform supervised learning with a margin from 1% to 6%. Top SSL models score 99.1% on EuroSAT (MoCo/DINO) and over 90% on BigEarthNet (MoCo/DINO). Comparing linear probing and fine-tuning results, one interesting phenomenon shows up: in linear probing, contrastive methods (MoCo and DINO) consistently score better than their image-masking (MAE and data2vec) counterparts.
#### Iv-A2 Comparison of pre-training datasets
To compare SSL4EO-S12 with other RS pre-training datasets, we report corresponding linear probing results pre-trained with MoCo-v2 (ResNet50 backbone) in Table V. Similar to SSL4EO-S12, RandomSeasonContrast is used to pick one timestamp image for each geospatial patch in the SeCo dataset. In the first set of comparisons, we use RGB bands only. SSL4EO-S12 significantly outperforms ImageNet by about 10%, SeCo by about 6%, and SEN12MS by 1.7% to 3.5%.
In a second set of experiments we evaluate all multispectral bands. The results indicate performance gains consistent with the RGB setting when comparing SSL4EO-S12 with SEN12MS and SeCo. In addition, pre-training on SSL4EO-S12 outperforms pre-training on BigEarthNet both on BigEarthNet itself and on EuroSAT (both are EU-only). This demonstrates SSL4EO-S12's benefit for model transferability, learning valuable knowledge from a larger scale and wider geographical coverage.
#### Iv-A3 Comparison of different amounts of labels
Figure 4 visualizes performance results of transfer learning on BigEarthNet with a varying fraction of labeled samples. Compared to the supervised baseline, self-supervised pre-training on SSL4EO-S12 provides significant benefits when the amount of labeled samples is limited. In fact, fine-tuning on 10% of the labels outperforms 50%-labels supervised training; and with ViT-S/16, fine-tuning on 50% of the labels outperforms 100%-labels supervised training.
### _Segmentation_
#### Iv-B1 Land cover segmentation
We use the DFC2020 [21] dataset to evaluate land cover semantic segmentation. We pre-train ResNet50 with MoCo-v2 on SSL4EO-S12 L1C products, and fine-tune a DeepLabv3+ [23] for segmentation. Table VI lists results with notable improvements when compared to SeCo pre-training. However, SSL4EO-S12 performs worse than SEN12MS in average accuracy (AA) and mean intersection over union (mIoU). This can be expected, since DFC2020 was built with direct reference to SEN12MS and the two have similar data distributions. Nevertheless, the results are still comparable, proving again the transferability of the proposed dataset.
#### Iv-B2 Change detection
We evaluate the pre-trained models for change detection on the OSCD [22] dataset. We pre-train ResNet50 with MoCo-v2 on SSL4EO-S12 L1C products, freeze the backbone, and fine-tune a U-Net [24] for segmentation. The differences in feature maps between two timestamps are input to the network. As Table VII indicates, pre-training on SSL4EO-S12 yields superior performance in recall and F1-score compared to SeCo and SEN12MS. While SSL4EO-S12 performs worse in precision, this is largely an artifact of the significant class imbalance: predicting all pixels as unchanged would already result in a good precision score.
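To illustrate the setup, the sketch below captures the core idea only: a frozen pre-trained encoder embeds the two acquisitions, their feature maps are differenced, and a small convolutional head (a stand-in for the U-Net decoder used in the benchmark) predicts per-pixel change; the layer choices, shapes, and the use of a single feature scale are simplifications.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ChangeDetector(nn.Module):
    """Siamese feature differencing with a frozen backbone (sketch)."""

    def __init__(self, backbone):
        super().__init__()
        # keep features up to the last residual stage (1/32 resolution, 2048 ch)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.decoder = nn.Sequential(              # stand-in for the U-Net head
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, img_t1, img_t2):
        diff = torch.abs(self.encoder(img_t1) - self.encoder(img_t2))
        logits = self.decoder(diff)                # per-pixel change logit
        return nn.functional.interpolate(          # back to input resolution
            logits, size=img_t1.shape[-2:], mode="bilinear", align_corners=False)

backbone = resnet50(weights=None)
backbone.conv1 = nn.Conv2d(13, 64, 7, stride=2, padding=3, bias=False)
model = ChangeDetector(backbone)
t1 = torch.randn(2, 13, 96, 96)                    # OSCD-style 96x96 crops
t2 = torch.randn(2, 13, 96, 96)
print(model(t1, t2).shape)                         # torch.Size([2, 1, 96, 96])
```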
## VI Additional studies
We complete our benchmark by reporting a set of additional results to document key characteristics of the SSL4EO-S12 dataset, namely: multi-temporal, multimodal, multi-product-level, and data scale. For all studies, we pre-train ResNet50 with MoCo-v2 as a common setting.
### _Ablation studies_
#### Vi-A1 Benefits of multimodality
While Section V employs only optical data for fair comparison to existing literature, we highlight the benefits of multimodal pre-training in this section. We integrate SAR data by early fusion, and use RandomSensorDrop[12] as an additional data augmentation strategy. During training, the model gets fed random combinations of SAR/optical patches, thus learning both inner- and inter-modality representations. Then, the pre-trained model gets transferred to different scenarios where either both modalities or a single one is available. We compare multimodal pre-training (MM) to uni-modal pre-training (S1/2) on BigEarthNet. Table VIII presents results with notable improvement of 1%-3% for 100% and 1% label splits. While single-modality pre-training already works well for both Sentinel-2 and Sentinel-1 data, pre-training exploiting both modalities further improves performance.
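A minimal sketch of the early-fusion input and the RandomSensorDrop idea is given below; the channel counts follow SSL4EO-S12, but the equal drop probabilities are illustrative and this is not the exact augmentation of [12].

```python
import random
import torch

S1_CHANNELS, S2_CHANNELS = 2, 13     # VV/VH and the 13 L1C bands

def random_sensor_drop(s1, s2):
    """Early fusion of SAR and optical patches; with equal probability keep
    both sensors, only SAR, or only optical (the dropped sensor is zeroed so
    the fused tensor keeps a fixed channel layout)."""
    mode = random.choice(["both", "s1_only", "s2_only"])
    if mode == "s1_only":
        s2 = torch.zeros_like(s2)
    elif mode == "s2_only":
        s1 = torch.zeros_like(s1)
    return torch.cat([s1, s2], dim=0)          # (2 + 13) x H x W fused input

s1 = torch.randn(S1_CHANNELS, 264, 264)
s2 = torch.randn(S2_CHANNELS, 264, 264)
print(random_sensor_drop(s1, s2).shape)        # torch.Size([15, 264, 264])
```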
#### Vi-A2 Ablation of seasonal information
We evaluate the effectiveness of multi-temporal information by replacing seasonal _augmentation_ (cf. Section IV) by _random_ season: the same randomly selected season for the two positive views; and _fixed_ season: the same season for each patch during training. We pre-train on a 50k subset of SSL4EO-S12, and evaluate on BigEarthNet-10% and EuroSAT. Table IX clearly proves the benefits of seasonal augmentation.
#### Vi-A3 Atmospheric correction as data augmentation
The motivation to include Sentinel-2 L1C and L2A products in SSL4EO-S12 is to match corresponding downstream tasks. However, these product levels with or without atmospheric correction can also be considered natural data augmentation for SSL. Accordingly, we conduct an ablation study on a 50k SSL4EO-S12 subset utilizing Sentinel-2 L1C, L2A or both (L1C+L2A). Table X summarizes our findings: 1) models pre-trained on the same product level as the downstream task have a slight edge (\\(\\sim 1\\%\\)) over models trained on the other product level, and 2) pre-training on both modalities generates a notable improvement of up to 4% compared to pre-training on single modality.
#### Vi-A4 Impact of pre-training scale
An aspect relevant to large-scale data mining in Earth observation is scaling of results with training data volume: why don't we add more images to SSL4EO-S12? One reason concerns computational costs. We believe the current dataset (1M patches for each Sentinel product) is comparable to the scale of ImageNet, and can serve as a good baseline in remote sensing for further development. Moreover, as observed by [25], saturating downstream performance kicks in beyond 500k pre-training images on ImageNet, with 250k images yielding acceptable results with as little as 1-2% accuracy loss. We observe such a trend in our dataset, too.
Fig. 4: BigEarthNet (BE) performance depending on amount of labels available to train downstream task. We report linear probing and fine-tuning results with ResNet50 and ViT-S/16 encoders pre-trained using MoCo-v2.
As demonstrated by Table XI, we pre-train on various amounts of data and report linear probing results for BigEarthNet-10%. While using 50% (500K) or less of the pre-training data yields significant performance drops, the remaining gap becomes marginal from 75% (750K) on. Note this saturation effect also depends on the model size.
### _Representation visualization_
We qualitatively evaluate the data representations learned from self-supervised pre-training by visualizing the latent distributions with t-SNE (Figure 5). We pre-train a ResNet50 with MoCo-v2 on SSL4EO-S12, and transfer the frozen encoder to EuroSAT to calculate one 128d representation vector for each image. We then visualize all the vectors with t-SNE, and compare the distribution with a randomly initialized encoder.
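The procedure behind Figure 5 can be sketched in a few lines: embed every image with the frozen encoder, stack the 128-dimensional vectors, and project them to 2-D with t-SNE. In the snippet below the encoder and images are random stand-ins; in practice they would be the pre-trained MoCo backbone (with its 128-d projection head) and the EuroSAT patches.

```python
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

@torch.no_grad()
def embed_images(encoder, images, batch_size=64):
    """Run the frozen encoder over a tensor of images, return numpy features."""
    encoder.eval()
    feats = [encoder(images[i:i + batch_size])
             for i in range(0, len(images), batch_size)]
    return torch.cat(feats).numpy()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(13 * 64 * 64, 128))  # stand-in
images = torch.randn(512, 13, 64, 64)                                # stand-in
features = embed_images(encoder, images)

coords = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(features)
print(coords.shape)   # (512, 2): one 2-D point per image, ready to scatter by class
```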
## VII Conclusion
In this work, we present SSL4EO-S12--a large-scale multimodal, multi-temporal unlabeled dataset for self-supervised learning (SSL) in Earth observation. An extensive benchmark on various SSL methods and remote sensing applications proves the promising benefits of the proposed dataset.
SSL4EO-S12 has some limitations: 1) there's little coverage of polar regions; 2) geographical bias exists due to cloud filtering; 3) it is not strictly free of geospatial overlap; 4) medium-resolution radar and multispectral images are a limited subset of Earth observation data. Despite these, we believe SSL4EO-S12 renders a valuable basis to advance self-supervised pre-training and large-scale data mining in remote sensing.
## Acknowledgments
This work is jointly supported by the Helmholtz Association through the Framework of Helmholtz AI (grant number: ZT-I-PF-5-01) - Local Unit \"Munich Unit @ Aeronautics, Space and Transport (MASTr)\" and Helmholtz Excellent Professorship \"Data Science in Earth Observation - Big Data Fusion for Urban Research\"(grant number: W2-W3-100), by the German Federal Ministry of Education and Research (BMBF) in the framework of the international future AI lab \"AI4EO - Artificial Intelligence for Earth Observation: Reasoning, Uncertainties, Ethics and Beyond\" (grant number: 01DD20001) and by German Federal Ministry for Economic Affairs and Climate Action in the framework of the \"national center of excellence ML4Earth\" (grant number: 50EE2201C). The computing resources were supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@FZJ partition.
## References
* [1] Y. Wang et al. (2022) Self-Supervised Learning in Remote Sensing: A Review. In IEEE Geoscience and Remote Sensing Magazine.
* [2] J. Deng et al. (2009) ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition.
* [3] M. Schmitt et al. (2019) SEN12MS - a curated dataset of georeferenced multi-spectral Sentinel-1/2 imagery for deep learning and data fusion. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
* [4] O. Mañas et al. (2021) Seasonal contrast: unsupervised pre-training from uncurated remote sensing data. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
* [5] Z. Xiong et al. (2022) EarthNets: empowering AI in Earth Observation. In arXiv:2210.04936.
* [6] K. He et al. (2020) Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[MISSING_PAGE_POST]
* [13] G. Sumbul et al. (2019) BigEarthNet: a large-scale benchmark archive for remote sensing image understanding. In IEEE International Geoscience and Remote Sensing Symposium.
* [15] X. Chen et al. (2020) Improved baselines with momentum contrastive learning. In arXiv:2003.04297.
* [16] X. Chen, S. Xie, and K. He (2021) An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
* [17] K. He et al. (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
* [18] A. Dosovitskiy et al. (2020) An image is worth 16x16 words: transformers for image recognition at scale. In arXiv:2010.11929.
Fig. 5: t-SNE visualization of EuroSAT image representations. One color represents one class. Left: random-encoded features; right: SSL-encoded features. SSL-encoded features are well clustered even without label information.
* [19] Patrick Helber et al. \"Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification\". In: _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ (2019).
* [20] Xiao Xiang Zhu et al. \"So2Sat LCZ42: A benchmark data set for the classification of global local climate zones\". In: _IEEE Geoscience and Remote Sensing Magazine_ (2020).
* [21] Michael Schmitt et al. _IEEE GRSS Data Fusion Contest_. 2020.
* [22] Rodrigo Caye Daudt et al. \"Urban change detection for multispectral earth observation using convolutional neural networks\". In: _IEEE International Geoscience and Remote Sensing Symposium_. 2018.
* [23] Liang-Chei Chen et al. \"Encoder-decoder with atrous separable convolution for semantic image segmentation\". In: _Proceedings of the European conference on computer vision (ECCV)_. 2018, pp. 801-818.
* [24] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. \"U-net: Convolutional networks for biomedical image segmentation\". In: _International Conference on Medical image computing and computer-assisted intervention_. 2015.
* [25] Elijah Cole et al. \"When does contrastive visual representation learning work?\" In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2022.
## Appendix-1: Additional dataset information
### **Sentinel-1/2**
The proposed SSL4EO-S12 dataset exploits freely available SAR/optical satellite images from European Space Agency's Sentinel mission (under the CC-BY license).
The Sentinel-1 mission [26] consists of two polar-orbiting satellites, equipped with C-band SAR sensors, which enables them to acquire imagery regardless of the weather. For the Sentinel-1 images in the SSL4EO-S12 dataset, ground-range-detected (GRD) products with both VH and VV polarization acquired in the interferometric wide swath (IW) mode were used. These images contain the \\(\\sigma^{0}\\) backscatter coefficient in dB scale. The image resolution is 10m.
The Sentinel-2 mission [27] comprises two polar-orbiting satellites in the same orbit, equipped with multi-spectral imaging sensors. For Sentinel-2 images in the SSL4EO-S12 dataset, both level-1C top-of-atmosphere reflectance (13 bands) and level-2A atmospherically corrected surface reflectance (12 bands) were included. The image resolution ranges between 10m (visible and NIR), 20m (red edge and SWIR) and 60m (aerosols).
### **Dataset statistics**
Table XII and XIII present the mean and standard deviation of each band for each product of the proposed SSL4EO-S12 dataset.
### **Data storage**
The SSL4EO-S12 dataset is stored in GeoTiff format for each band of each patch. The file structure is shown in Figure 6, where s1/s2a/s2c represents Sentinel-1 / Sentinel-2 level-2A / Sentinel-2 level-1C, and t1 - t4 represent 4 seasons. Raw files (extracted GeoTiff) occupy about 500GB/800GB/800GB disk storage for S1/S2A/S2C, and compressed tar.gz files occupy about 450GB/500GB/500GB correspondingly. If converting to uint8 and encoding with jpeg, a lossy dataset occupies less than 50 GB for each product. We later show this won't affect much the downstream performance.
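Under the layout of Figure 6, reading one seasonal acquisition back into an array amounts to stacking the per-band GeoTiffs. The sketch below assumes band files are named after the band (e.g. `B2.tif`) inside each `patch_id/season` folder, which is an assumption about the naming rather than a documented guarantee; it uses rasterio and resamples every band to a common 264 × 264 grid (a no-op for bands already stored at that size).

```python
import os
import numpy as np
import rasterio
from rasterio.enums import Resampling

S2C_BANDS = ["B1", "B2", "B3", "B4", "B5", "B6", "B7",
             "B8", "B8A", "B9", "B10", "B11", "B12"]

def load_patch(root, product, patch_id, season, bands=S2C_BANDS, size=264):
    """Stack the single-band GeoTiffs of one patch/season into a (C, H, W) array.

    Expected layout: root/product/patch_id/season/<band>.tif"""
    folder = os.path.join(root, product, patch_id, season)
    stack = []
    for band in bands:
        with rasterio.open(os.path.join(folder, band + ".tif")) as src:
            stack.append(src.read(1, out_shape=(size, size),
                                  resampling=Resampling.bilinear))
    return np.stack(stack).astype(np.float32)

# example (root path and patch id are placeholders):
# x = load_patch("ssl4eo-s12", "s2c", "0000001", "t1") / 10000.0  # reflectance scaling
```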
### **Metadata**
Each patch comes with a metadata file that collects the image properties of this patch. See Table XX and XXI for details.
### **Example visualization**
Figure 7 visualizes an example geospatial tile of SSL4EO-S12.
Fig. 6: SSL4EO-S12 file structure.
Fig. 7: Sample visualization of one tile from SSL4EO-S12 dataset. The rows top-down present grayscale and false-color imagery based on the Sentinel-1 GRD product, Sentinel-2 level-1C, and Sentinel-2 level-2A multispectral data with corresponding columns representing the four seasons spring, summer, fall, and winter from left to right.
## Appendix-2 Implementation details
### _Pre-training_
We use Sentinel-2 level-1C images for the main pre-training experiments, which are pre-processed by converting to uint8 for efficiency (divided by 10000 and multiplied by 255, see Section VII-B5). We use 4 NVIDIA A100 GPUs with a total batch size of 256 for all the pre-training experiments. For the main experiments, we pre-train ResNet50 or Vit-S/16 for 100 epochs. Training time varies between different methods from 7 (MAE) to 25 (DINO) hours. The total experiments (including parameter tuning) take about 70k core hours (1400 GPU hours).
#### Vii-A1 MoCo
We pre-train the MoCo-v2/v3 models using their default settings following the publicly available repository ([https://github.com/facebookresearch/moco](https://github.com/facebookresearch/moco) and [https://github.com/facebookresearch/moco-v3](https://github.com/facebookresearch/moco-v3)). We use RandomResizedCrop, RandomBrightness/Contrast (to have a partial color jittering for multiple bands), RandomGrayscale, RandomGaussianBlur, RandomHorizontalFlip and RandomSeasonContrast as data augmentations. For MoCo-v2 (ResNet50), we use SGD optimizer and cosine learning rate schedule with a learning rate 0.03. For MoCo-v3 (ViT-S/16), we use AdamW optimizer and cosine schedule with a learning rate 1.5e-4.
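Since standard color jittering assumes three-channel RGB input, the RandomBrightness/Contrast augmentation mentioned above has to act on all spectral bands at once; a band-agnostic sketch is shown below, with illustrative jitter factors and application probability.

```python
import random
import torch

class RandomBrightnessContrast:
    """Apply one global brightness and contrast factor to every band of a
    (C, H, W) tensor, so that 13-band Sentinel-2 input is handled uniformly."""

    def __init__(self, brightness=0.4, contrast=0.4, p=0.8):
        self.brightness, self.contrast, self.p = brightness, contrast, p

    def __call__(self, x):
        if random.random() > self.p:
            return x
        b = 1.0 + random.uniform(-self.brightness, self.brightness)
        c = 1.0 + random.uniform(-self.contrast, self.contrast)
        mean = x.mean(dim=(-2, -1), keepdim=True)        # per-band mean
        return ((x - mean) * c + mean) * b               # contrast, then brightness

x = torch.rand(13, 224, 224)
print(RandomBrightnessContrast()(x).shape)               # torch.Size([13, 224, 224])
```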
#### Vii-A2 DINO
We pre-train the DINO models using its default settings following the publicly available repository ([https://github.com/facebookresearch/dino](https://github.com/facebookresearch/dino)). The data augmentations include those of MoCo, as well as additional Multi-Crop and Solarization. For ResNet50, we use SGD optimizer and cosine learning rate schedule with a learning rate 0.03. For ViT-S/16, we use AdamW optimizer and cosine learning rate schedule with a learning rate 1.5e-4.
#### Vii-A3 MAE
We pre-train the MAE models using its default settings following the publicly available repository ([https://github.com/facebookresearch/mae](https://github.com/facebookresearch/mae)). The mask ratio is set to 0.7. The data augmentations include RandomResizedCrop, RandomHorizontalFlip and RandomSeason. We use AdamW optimizer and cosine learning rate schedule with a learning rate 1.5e-4.
#### Vii-A4 data2vec
We pre-train the data2vec models using its default settings following the publicly available repository ([https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec)). The data augmentations include Resize/CenterCrop, RandomHorizontalFlip and RandomSeason. We use AdamW optimizer and cosine learning rate schedule with a learning rate 1e-3.
### _Downstream tasks_
Below are additional implementation details for the downstream tasks.
#### Vii-B1 EuroSAT
We split EuroSAT into 21600 training and 5400 testing images for evaluation. The data augmentations are RandomResizedCrop/RandomHorizontalFlip for training, and Resize/CenterCrop for testing. We resize the images to 224x224 for better performance (see Section VII-B5). The batch size is 256. We use CrossEntropyLoss and SGD optimizer with step decay learning rate (divided by 10 at epoch 60 and 80) for 100 epochs. We use simple grid search strategy to find suitable learning rates for linear probing and fine-tuning.
#### Vii-B2 BigEarthNet
We use 311667 training and 103944 testing images from BigEarthNet for evaluation. Different settings of the amount of labels affect only the training split. The data augmentations are RandomResizedCrop/RandomHorizontalFlip for training, and Resize/CenterCrop for testing. We use a cropping scale of 0.8 to avoid strong occlusions (BigEarthNet is a multi-label dataset). We resize the images to 224x224 for better performance. The batch size is 256. We use MultiLabelSoftMarginLoss and SGD optimizer with step decay learning rate (divided by 10 at epochs 60 and 80) for 100 epochs. We use a simple grid search strategy to find suitable learning rates for linear probing and fine-tuning.
#### Vii-B3 So2Sat-LCZ42
We use 352366 training and 24119 testing Sentinel-2 images from So2Sat-LCZ42 for evaluation. Different settings of the amount of labels affect only the training split. We use the _culture-10_ version of So2Sat-LCZ42: the training data and the testing data are from different cities. The data augmentations are RandomResizedCrop/RandomHorizontalFlip for training, and Resize/CenterCrop for testing. We resize the images to 224x224 for better performance. The batch size is 256. We use CrossEntropyLoss and SGD optimizer with step decay learning rate (divided by 10 at epochs 60 and 80) for 100 epochs. We use a simple grid search strategy to find suitable learning rates for linear probing and fine-tuning.
#### Vii-B4 DFC2020
We use 5128 training and 986 testing Sentinel-2 images from DFC2020 dataset for evaluation. The batch size is set to 8 and we train the models for 50 epochs. We use CrossEntropyLoss and SGD optimizer with momentum 0.9 and weight decay 5e-4. The initial learning rate is 1e-3, which is decayed by a factor of 0.9 in every epoch until 1e-4.
#### Vii-B5 OSCD
This dataset is composed of 24 pairs of multispectral images from Sentinel-2 in total. Following [22], we use 14 of them for training, and the rest for testing. In fine-tuning stage, we adopt settings similar to those in [4]. That is, the original images are split into non-overlapping patches of \\(96\\times 96\\) pixels as inputs, which leads to 827 and 285 patches for training and testing, respectively. The batch size is set to 32, and we in total train 50 epochs. We use Adam optimizer with a weight decay of 1e-4. The initial learning rate is 1e-3, and decreases exponentially with a multiplicative factor of 0.95 for every epoch. The resulting models are evaluated on the whole test set for overall precision, recall and F1 score (with a default threshold 0.5).
## Appendix-3 Additional experimental results
### **Effect of data pre-processing**
We show the influence of data pre-processing (for pre-training) in Table XIV, where int16 means 16 bits raw input, uint8 means compressed 8 bits input (divided by 10000 and multiplied by 255), uint8-n means normalization by mean and standard deviation, and L2A/L1C means with/without atmospheric correction. The results show similar performance between 16 bit and 8 bit, supporting compressed input as it saves a lot of storage space and computing time. The results also show comparable performance for L1C and L2A, as well as for the use of normalization. Therefore, in our main experiments, we use level-1C, uint8, unnormalized data for pre-training.
### **Effect of input image size**
We analyze the impact of image resolution on pre-training and transfer learning in Table XV. We clearly observe the advantage of upsampling the input image size. Therefore, in our main experiments, we upscale the downstream input images to 224x224 for better performance.
### **Effect of MAE masking ratio**
Table XVI shows the influence of masking ratios in MAE during pre-training. We find 70% to be the best masking ratio, which is similar to natural images as reported in MAE paper, where 75% is the best. It is also promising to see that the model still learns good representations even with 90% pixels masked.
### **Effect of different pre-training protocols**
#### Vi-D1 Different ImageNet pre-training protocols
Table XVII shows a comparison of different ImageNet pre-training protocols. We pre-train ResNet50 with MoCo-v2 for self-supervised pre-training, and report fine-tuning results on BigEarthNet. The table shows that ImageNet pre-training provides good representations that can be generalized well to remote sensing images with RGB but not all bands. It can also be seen that when using RGB, self-supervised pre-training on ImageNet can further improve the downstream performance in remote sensing compared to supervised pre-training.
#### Vi-D2 **Supervised pre-training on RS datasets**
Table XVIII shows a comparison of supervised and unsupervised pre-training on remote sensing datasets. We do supervised pre-training on BigEarthNet and self-supervised pre-training (MoCo-v2, ResNet50) on both BigEarthNet and SSL4EO-S12. We evaluate the pre-trained models on EuroSAT. The results show that self-supervised pre-training outperforms supervised pre-training on remote sensing data.
### _Additional dataset comparison results_
Table XIX reports fine-tuning results pre-trained on different datasets, complementing the main paper. The most notable difference is ImageNet's catch-up in performance compared to the other geospatial pre-training datasets. However, a \(\sim 5\%\) margin when compared to SSL4EO-S12 persists. This observation is characteristic of the data _domain gap_: while pre-training on ImageNet learns good representations, the weights' distribution is shifted towards natural images, which can be further adjusted to remote sensing data with fine-tuning. We note that fine-tuning is more computationally expensive compared to linear probing, and Table XIX demonstrates that pre-training on SSL4EO-S12 outperforms all other datasets for downstream classification.
## Appendix-4: Metadata
## Datasheets for Datasets
Here we answer the questions outlined in the datasheets for datasets paper by Gebru et al. [28].
### _Motivation_
**For what purpose was the dataset created?** The dataset was created for unsupervised pre-training in Earth observation. By integrating global coverage, multiple modalities and multiple timestamps, the dataset is intended to serve for diverse applications in remote sensing. The dataset fills the gap between multiple existing pre-training datasets, e.g. domain gap of ImageNet, regional coverage of BigEarthNet [13], single modality of SeCo [4], single timestamp of SEN12MS [3], and patch overlap of both SEN12MS and SeCo.
**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset was created by the lab \"Data Science in Earth Observation\" at Technical University of Munich and German Aerospace Center.
**Who funded the creation of the dataset?** The creation of the dataset was funded by the Helmholtz Association through the Framework of Helmholtz AI (grant number: ZT-I-PF-5-01) - Local Unit \"Munich Unit @Aeronautics, Space and Transport (MASTr)\". The computing resources for benchmark experiments were supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@FZJ partition.
### _Composition_
**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?** This dataset only contains satellite images. In addition we provide meta-data for these images, which contain information about data acquisition.
**How many instances are there in total (of each type, if appropriate)?** The dataset contains 251079 geographical patches, each patch including 3 product types and 4 seasons. In total there are 1M patches each for Sentinel-1 GRD, Sentinel-2 L1C and Sentinel-2 L2A, resulting in 1.5TB as three tar.gz files.
**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of all Sentinel-1/2 satellite images. While the dataset can still be extended, we make it as representative as possible by ensuring global coverage, multiple modalities and multiple timestamps.
**What data does each instance consist of?** Sentinel-1/2 images along with meta-data captured from the space.
**Is there a label or target associated with each instance?** No, our dataset is unlabeled. However, each patch is bound with geographical location and acquisition time, thus a match to other labeled maps is possible.
**Is any information missing from individual instances?** No.
**Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)?** Not applicable, though geographic location / acquisition time / product type / other properties can be extracted if needed.
**Are there recommended data splits (e.g., training, development/validation, testing)?** The dataset is intended for unsupervised pre-training. Users are free to use either the full split or a subset (either a subset of modalities or a subset of geographical patches) based on their targeted applications.
**Are there any errors, sources of noise, or redundancies in the dataset?** Yes, as mentioned in the data collection section, there are two kinds of noise/redundancies: first, potential overlap around grid cell boundaries; second, potential noise of clouds from inaccurate cloud filtering.
**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?** The dataset is self-contained.
**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)?** No.
**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No.
**Does the dataset identify any subpopulations (e.g., by age, gender)?** No.
**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** No.
**Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?**
### _Collection process_
**How was the data associated with each instance acquired?** The data was collected from the publicly available Sentinel-1/2 database.
**What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Google Earth Engine with Python were used to collect the data.
**If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?** The patch locations are Gaussian sampled around a city center (50km) which is uniformly sampled from top-10k populated cities across the globe. The timestamps are sampled from four seasons (dates around Mar 20th, Jun 21st, Sep 22nd and Dec 21st) in the year 2020/2021.
**Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** The data was automatically collected and verified by the authors.
**Over what timeframe was the data collected?** The data was collected by the authors between February and March 2022. The images within the dataset were captured in the year 2020/2021.
**Were any ethical review processes conducted (e.g., by an institutional review board)?** No.
**Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?** The data was collected from open sources.
**Were the individuals in question notified about the data collection?** N/A.
**Did the individuals in question consent to the collection and use of their data?** N/A.
**If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?** N/A.
**Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?** N/A.
### _Preprocessing/cleaning/labeling_
**Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** The data was pre-processed online during the collection/downloading process: filtering out cloudy patches and overlapping patches. No further pre-processing was done.
**Was the \"raw\" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** No. The cloudy and overlapping patches were removed before downloading.
**Is the software used to preprocess/clean/label the instances available?** Yes, we use Google Earth Engine with Python which is freely available.
### _Uses_
**Has the dataset been used for any tasks already?** In this paper we use the dataset to benchmark several self-supervised methods on several downstream tasks.
**Is there a repository that links to any or all papers or systems that use the dataset?** Yes we will organize and maintain all related information at [https://github.com/zhu-xlab/SSL4EO-S12](https://github.com/zhu-xlab/SSL4EO-S12).
**What (other) tasks could the dataset be used for?** The main function of this dataset is to provide a pre-training dataset for both the study of self-supervised learning, and specific downstream applications. The dataset can also be used as a baseline for further pre-training datasets in Earth observation. In addition, the dataset can be used directly for applications like image retrieval, domain adaptation and style transfer.
**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** We do not unify the orbiting (ascending/descending) of Sentinel-1 data, which should be taken into consideration for SAR related applications. However, the orbiting information can be found in the meta-data and the dataset can be further processed for targeting applications.
**Are there tasks for which the dataset should not be used?** The authors are not aware of any specific task that should be avoided.
### _Distribution_
**Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?** Yes, the dataset is publicly available.
**How will the dataset be distributed (e.g., tarball on website, API, GitHub)?** The dataset is distributed as a tarball on mediaTUM. Access to the dataset can be found at [https://github.com/zhu-xlab/SSL4EO-S12](https://github.com/zhu-xlab/SSL4EO-S12).
**When will the dataset be distributed?** Starting from June 2022.
**Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?** CC-BY.
**Have any third parties imposed IP-based or other restrictions on the data associated with the instances?** No.
**Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?** No.
### _Maintenance_
**Who is supporting/hosting/maintaining the dataset?** The dataset is hosted by mediaTUM and supported/maintained by the authors.
**How can the owner/curator/manager of the dataset be contacted (e.g., email address)?** The authors can be reached at their email addresses: {yi.wang, nassim.aitalibraham, conrad.albrecht, chenying.liu}@dlr.de, and {zhitong.xiong, xiaoxiang.zhu}@tum.de.
**Is there an erratum?** If errors are found an erratum will be added.
**Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?** Any updates will be posted and the dataset will be versioned.
**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?** N/A.
**Will older versions of the dataset continue to be supported/hosted/maintained?** Depending on the updates (if there are), we will either continue hosting the older versions or make a clear update log that older versions can be generated from the newest version.
**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?** Yes, please feel free to reach out to us.
### _Author statement of responsibility_
The authors confirm all responsibility in case of violation of rights and confirm the licence associated with the dataset.
Self-supervised pre-training bears potential to generate expressive representations from large-scale Earth observation (EO) data without human annotation. However, most existing pre-training in the field is based on ImageNet or medium-size, labeled remote sensing (RS) datasets. In this paper, we share an unlabeled dataset SSL4EO-S12:
_Self-Supervised Learning for Earth Observation - Sentinel-1/2_ to assemble a large-scale, global, multimodal, and multi-seasonal corpus of satellite imagery. We demonstrate SSL4EO-S12 to succeed in self-supervised pre-training for a set of representative methods: MoCo-v2, DINO, MAE and data2vec, and multiple downstream applications including scene classification, semantic segmentation and change detection. Our benchmark results prove the effectiveness of SSL4EO-S12 compared to existing datasets. The dataset, related source code, and pre-trained models are available at [https://github.com/zhu-xlab/SSL4EO-S12](https://github.com/zhu-xlab/SSL4EO-S12).
Self-supervised learning, dataset, benchmark.
# Decentralized, Robust and Efficient Services for Autonomous, Real-Time Management of Urban Emergency Situations
Frederic Le Mouel, Carlos J. Barrios Hernandez, Oscar Carrillo, Gabriel Pedraza
## 1 Introduction
One of the citizen services that raises important challenges in smart cities is the one used to manage emergency situations. We present the ALERT project - _Autonomous Liable Emergency service in Real-Time_ - a citizen service for emergency situations.
An emergency refers to a situation in which decisions must be made within a very short time, and the consequences of these decisions can be vital (an earthquake, a terrorist attack, etc.).
Digital platforms are essential in this kind of situation: they make it possible to collect data in order to characterize the emergency and to react quickly by making a decision. These emergency situations can, however, themselves severely disrupt the digital tools.
Building on the expertise of the CITI (Golchay, Le Mouel, Ponge, & Stouls, 2016; Lebre, 2016) and SC3 (Barrios et al., 2016) laboratories, INSA Lyon and the Universidad Industrial de Santander (UIS) propose to work on two key challenges that guarantee this service:
* Reliable service architecture: (1) efficient and real-time, (2) distributed at different scales, (3) fault-tolerant and persistent
The following sections detail the progress already made on the subject.
## Results
### Distributed Service Architecture
### _Bio-Inspired Algorithms Based on Ant Behavior for Traffic Management_
The fluidity of vehicular traffic is a major challenge and is a good example of a service illustrating the decentralized and collaborative aspects. We propose a crowdsourcing-based vehicular service in which the vehicle is treated as an ant searching for its path in real time (cf. Figure 3) (Lebre, 2016). During their journeys, vehicles exchange their knowledge of the network with other vehicles (cf. Figure 4). They therefore compute their route with only partial information about the network.
The results can be very good under normal traffic conditions (KPP in Figure 5(a)) and with an algorithm that adapts dynamically in the event of a disaster - here an earthquake (PPE in Figure 5(b)).
Figure 3: Use of bio-inspired models for a vehicular traffic service
Figure 2: Distributed service architecture - Spontaneous proximity cloud
## Discussion
A deployment in Bucaramanga seems to us an interesting choice, as the city is expanding rapidly and is particularly well positioned in terms of digital infrastructure deployment.
Likewise, the SC3 laboratory is located there and holds a leading position in Colombia in data management and high-performance computing.
## Acknowledgments
The authors thank the researchers of the CATAI Collaboration ([http://www.sc3.uis.edu.co/catai](http://www.sc3.uis.edu.co/catai)): Michel Riveill (I3S), Jose-Tiberio Hernandez and Harold Castro (UniAndes), Yves Denneulin and Claudia Roncancio (LIG), Frederic Merienne (ParisTech), as well as Regis Guillaume and Enrique Sanchez-Albarracin (French Embassy in Colombia), Marie-Ange Lebre (Valeo/INSA Lyon), Eric Menard (Valeo), and Roya Golchay (INSA Lyon).
## Sources
Barrios, C., Pedraza, G., Hernandez, J. T., Castro, H., Riveill, M., Roncancio, C., Denneulin, Y. (2016). Rapport d'activite de collaboration franco-colombienne catal en informatique avancee pour le developpement durable.
Barrios, C., Puleo, R., Cruz, J., Bedoya, D., Briceño, Y., Diaz Toro, G. J., Nuñez de Villavicencio, L. (2012). Un Modelo de Autosostenibilidad y Servicio para Computacion Avanzada en Latinoamerica inspirado en Aplicacion como Servicio (AaaS). Segunda Conferencia de Directores de Tecnologia Gestion de las TI en Ambientes Universitarios - TICAL 2012.
Burgos, D. (2015). Simulacion y visualizacion de la dinamica del comportamiento de multitudes usando aceleradores graficos (PhD Thesis). Universidad Industrial de Santander, Bucaramanga, Colombie.
Golchay, R., Le Mouel, F., Ponge, J., & Stouls, N. (2016, novembre). Spontaneous proximity clouds : Making mobile devices to collaborate for resource and data sharing. In Proceedings of the 12th eai international conference on collaborative computing : Networking, applications and worksharing (Collaboratecom'2016). Beijing, China. Consulte sur [https://hal.inria.fr/hal-01391114](https://hal.inria.fr/hal-01391114) (BestPaper)
* Lebre, M.-A. (2016). De l'impact d'une decision locale et autonome sur les systemes de transport intelligent a differentes echelles (PhD Thesis). Universite de Lyon, INSA Lyon, Lyon, France.

The globalization of trade and the organization of work are currently causing a significant migration flow towards cities. This urban growth calls for new urban planning in which digital technology plays an increasingly prominent role - data capture making it possible to understand changes and to decide how to respond to them. These digital environments are, however, severely strained in times of crisis (natural disasters, terrorism, accidents, etc.). Building on the expertise of the CITI laboratory at INSA Lyon and the SC3 laboratory at the Universidad Industrial de Santander, we propose to create the ALERT project - _Autonomous Liable Emergency service in Real-Time_. With decentralized, reliable and efficient services that are as close as possible to citizens, decisions can be made in real time, locally and in a relevant manner, without any risk of disconnection from a central authority. This information gathering and decision-making will involve the population through participatory and social approaches.
# A Temporal Downscaling Model for Gridded Geophysical Data with Enhanced Residual U-Net
Liwen Wang, Qian Li, Xuan Peng and Qi Lv

College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China; [email protected] (L.W.); [email protected] (X.P.); [email protected] (Q.L.)
**Keywords:** temporal downscaling; U-Net; flow regularization; residual blocks; ERA5
## 1 Introduction

Temporal downscaling is the process of transforming data recorded at a coarse temporal resolution into a representation that exhibits higher temporal resolution. This technique is especially pertinent in the fields of climate science and meteorology, where it is used to refine the granularity of datasets, such as temperature records, precipitation amounts, or wind speeds, allowing for a more detailed and nuanced understanding of weather and climate phenomena over time. The primary goal of temporal downscaling is to interpolate or estimate the values of a variable at times between the recorded data points. The core function of this technology is to provide researchers and decision-makers with more detailed temporal data series, thereby enhancing the understanding of climate variability and extreme events [11]. For instance, temporal downscaling techniques can offer us a more precise comprehension of the frequency and intensity of extreme heat waves, diurnal patterns of precipitation, and other critical climatic features. In applications, temporal downscaling techniques play a significant role in various fields such as climate research, agriculture, water resources management, renewable energy, and urban planning [12]. In agriculture, for example, understanding the rainfall and temperature patterns during critical stages of crop growth is essential, and temporal downscaling techniques can provide important information for this purpose [13]. In the domain of renewable energy, particularly for wind and solar power, high-resolution temporal data can offer robust support for energy dispatch and storage strategies [14]. Moreover, with the rapid progress of urbanization, temporal downscaling techniques are becoming increasingly important for urban planning and design. For example, comprehending how urban heat island effects vary over time can assist urban planners in better designing and implementing mitigation measures [15]. The challenge lies in accurately capturing the dynamics that occur between these points. This is not a simple task, as it requires an understanding of the physical processes involved and their representation in the time series data.
Temporal downscaling methods fall into two primary categories: dynamical downscaling and statistical downscaling, each offering unique approaches to improve temporal resolution [16]. Dynamical downscaling is based on the use of mathematical and physical equations to simulate atmospheric processes at finer temporal scales than those offered by Global Circulation Models (GCMs) [17] or Regional Climate Models (RCMs) [18]. This approach integrates complex numerical weather prediction models with surface models, capturing nuanced atmospheric behaviors, especially in regions with complex geographical features such as mountainous terrains or coastal areas. The strength of dynamical downscaling lies in its ability to incorporate a physical understanding of atmospheric processes, though it demands substantial computational resources and expertise in numerical modeling. The results' accuracy largely depends on the boundary conditions provided by larger-scale models, highlighting a dependency on the quality of these inputs. On the other hand, statistical downscaling employs statistical techniques to establish relationships between large-scale atmospheric variables and local-scale climate variables. Instead of directly simulating physical processes, it uses historical data to train models that can project fine-scale climate details based on outputs from GCMs or RCMs. The methods in statistical downscaling range from simple regression models to sophisticated machine learning algorithms, with the choice depending on the specific study requirements. Its major advantage is computational efficiency, offering a practical approach to generating high-resolution temporal data. Both methods have their respective limitations [19]. Although dynamical downscaling offers a physically consistent representation of climate processes, it is computationally intensive [20]. Statistical downscaling, though more practical and less resource-intensive, operates under certain assumptions that may not hold in a changing climatic context. The decision to use either method depends on the study's goals, available resources, and the balance between the need for physical accuracy and computational feasibility. This article mainly discusses the latter, and all references to temporal downscaling in the following text refer to statistical downscaling.
Temporal downscaling has been the subject of extensive research over the past few years [21]. Traditional methods primarily rely on statistical models like polynomial regression or autoregressive integrated moving average models to interpolate between temporal data points [22]. Although useful for linear trends, these methods often fall short when applied to geophysical data characterized by complex, non-linear temporal dynamics [23].
Recent advancements in machine learning have facilitated the development of more sophisticated downscaling techniques. Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks have been applied to downscaling tasks, showing improved performance over traditional statistical methods [24; 25; 26; 27]. However, these machine learning-based techniques still face challenges in capturing intrinsic temporal dynamics and spatial relationships simultaneously [28].
The primary objective of this paper is to introduce a method for temporal downscaling of gridded geophysical data, combining flow regularization techniques with a Residual U-Net architecture, as depicted in Figure 1. The contributions of this paper are threefold:
1. We introduce an enhanced residual U-Net architecture for the downscaling of geophysical data. Unlike traditional U-Net architectures, this enhanced model incorporates advection loss in addition to regression loss for training the entire network, so that the model is not overly reliant on fitting to the data (as in pure regression loss) but also considers the underlying physical processes that drive changes in the atmosphere, which allow for a deeper network that can capture complex patterns without succumbing to issues like overfitting or vanishing gradient problems. The depth and architecture of the enhanced residual U-Net are also effective at capturing multi-scale temporal features, a quality lacking in many existing temporal downscaling methods.
2. We introduce the concept of flow regularization, which has been traditionally leveraged in computer vision tasks, to the domain of geophysical data downscaling. This addition serves as an auxiliary constraint that guides the model to adhere to the physical laws governing the movement and interaction of geophysical fields with higher accuracy than existing techniques.
3. We validate our model using multiple real-world geophysical data sets, comparing its performance against existing methods in terms of accuracy, computational efficiency, and fidelity of temporal features.
Figure 1: Overview of the residual U-Net model for temporal downscaling. The model consists of an encoder and a decoder, each with four residual blocks. It takes in grid data and outputs data with higher temporal resolution by performing temporal downscaling. After the encoder, the intermediate features are resampled to generate auxiliary flow information, which is then used to calculate the advection loss.
The paper is structured as follows: Section 2 provides a comprehensive review of related work, focusing on the principles of U-Net architectures and residual connections. Section 3 introduces the data sets used for the experiments. Section 4 provides a description of our proposed model, which employs enhanced residual U-Net for temporal downscaling. Section 5 presents the results, offering a comparative analysis with existing methodologies. Section 6 discusses the influence of the input grid data pixel size. Finally, Section 7 concludes the paper by summarizing key findings and outlining avenues for future research.
## 2 Related Work
### Temporal Downscaling
Temporal downscaling serves as a crucial technique in various scientific applications [29; 30; 31; 32], particularly in environmental modeling where high-frequency fluctuations often matter. The traditional ways to tackle this issue have primarily been statistical. Linear interpolation methods were among the earliest approaches, providing a quick yet overly simplistic way to fill in data between given time points. Soon after, Fourier-based methods were explored to address some of the linear assumptions but found limited applicability due to the inherent cyclical assumptions in the Fourier series [33]. Autoregressive Integrated Moving Average (ARIMA) models gained traction for their capabilities in capturing some level of non-linearity and seasonality [22]. Machine learning techniques like Support Vector Machines (SVMs) and Random Forests have been applied to the temporal downscaling problem as well [19]. Although these methods capture non-linearity better than linear interpolation, they often require extensive feature engineering and parameter tuning. Additionally, they fall short in integrating multi-scale features and incorporating flow information.
### Regularization
In geoscience, regularization techniques are often employed as a critical enforcement mechanism to ensure that model predictions align with physical realities [34]. Some studies have utilized methods such as Total Variation Regularization to maintain crisp boundaries and smooth transitions in geological formations [35]. Others have opted for more intricate, physics-based regularization frameworks like the Hamilton-Jacobi-Bellman equations to enforce dynamic consistency in fluid flow models [36]. Hydrological models frequently make use of an energy balance constraint as a regularization term to confirm the thermodynamic plausibility of predicted water cycles [37]. More recently, advanced methods have emerged that integrate machine learning with physical laws to create hybrid models [38]. These 'physics-informed' models use regularization terms sourced from governing equations, like the Navier-Stokes equations for fluid dynamics or the Laplace equation for potential fields, as constraints during the learning process [38; 39]. Nonetheless, many of these approaches often come with a trade-off between adherence to physical laws and computational efficiency [40; 41]. Our flow regularization technique with advection loss strikes a balance by ensuring compliance with the physical laws that govern geophysical fields, while also being computationally practical.
### Residual Connections
In recent years, residual connections have emerged as a critical innovation in the realm of deep learning architectures, particularly in convolutional neural networks (CNNs). The seminal work by He, et al. [42] introduced residual connections in their ResNet model, demonstrating that these connections alleviate the vanishing gradient problem, thus enabling the training of much deeper networks. Residual architectures have been adopted in various disciplines beyond image classification, including object detection and segmentation. In geoscience applications, residual connections have shown promising results in tasks such as seismic interpretation and subsurface reservoir modeling [43]. These architectures facilitate the learning of hierarchical features from geological data by promoting the flow of gradients throughout the network. By creating shortcuts between layers, residual connections allow for a more efficient and effective propagation of errors during backpropagation, improving the network's capacity to learn complex mappings.
### U-Net
The U-Net architecture, originally designed for biomedical image segmentation, has shown unparalleled success in various domains requiring complex spatial hierarchies. The architecture follows an encoder-decoder structure, capturing context in the encoding layers and using the decoding layers to reconstruct spatial details. One of the most distinguishing features of the U-Net is its use of skip connections, allowing it to preserve high-frequency details that would otherwise be lost during the encoding process.
In recent years, the U-Net architecture has seen several adaptations and modifications to suit different tasks [44; 45; 46; 47]. For example, 3D U-Nets have been developed to process volumetric data, and Temporal U-Nets have been explored to capture time-related changes in videos [48]. However, integrating temporal downscaling with U-Net's predominantly spatial-focused architecture remains an open challenge. Few works have attempted to adapt U-Net architectures for time series data, but these generally involve straightforward adaptations that do not fully utilize temporal dependencies. Similarly, while ResNet have been used in conjunction with LSTMs for sequence modeling, their application in temporal downscaling is yet to be fully realized.
In this light, our work aims to fill this gap by proposing a hybrid architecture that leverages the spatial prowess of U-Net, the learning capabilities of ResNet, and the flow regularization techniques, specifically tailored for the task of temporal downscaling in gridded geophysical data.
## 3 Study Area and Dataset
Our investigation targets the geographic region defined by longitudes 112°E to 118°E and latitudes 22°N to 28°N, with a grid resolution of 0.25° × 0.25°, as depicted in Figure 2. The selected region for our downscaling experiments was primarily influenced by its diverse climatic conditions and geographical significance. This area includes a variety of climatic zones, with distinct features ranging from coastal regions to varying inland topographies. This diversity presents an ideal scenario to test the effectiveness of our downscaling model in different climatic settings. Data for this area were sourced from the ERA5 reanalysis dataset. The training dataset is comprised of 21,912 sets, each containing samples from three consecutive hours, spanning the years 2010 to 2019, for a total of 87,648 h. For validation, we use a test set consisting of 2196 sets from the year 2020, also collected at three-hour intervals, totaling 8784 h. Our model aims to downscale these data to a finer one-hour temporal resolution. We evaluate the model's performance across three meteorological variables: 2 m surface air temperature, 850 hPa geopotential height, and 850 hPa relative humidity. These variables are experimented with separately.
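As an illustration of how such training windows can be assembled once the hourly fields have been extracted, the sketch below pairs the bracketing 3-hourly frames of each window with the two skipped hours as targets (the file name, array layout, and exact pairing/stride are our assumptions, not a description of the released preprocessing code):

```python
import numpy as np

# era5: hourly fields for one variable, shape (T, H, W), e.g., T = 87648 hours of
# 2 m temperature over the 0.25-degree grid covering 112-118E, 22-28N.
era5 = np.load("t2m_2010_2019.npy")  # hypothetical pre-extracted array

inputs, targets = [], []
for t in range(0, era5.shape[0] - 3, 3):
    # Coarse 3-hourly observations bracket the window ...
    inputs.append(np.stack([era5[t], era5[t + 3]]))        # (2, H, W)
    # ... and the two skipped hours are the downscaling targets.
    targets.append(np.stack([era5[t + 1], era5[t + 2]]))   # (2, H, W)

inputs = np.stack(inputs)    # (N, 2, H, W)
targets = np.stack(targets)  # (N, 2, H, W)
```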
## 4 Model
In this section, we describe the architecture and components of our enhanced residual U-Net model. We detail how the model integrates residual blocks, auxiliary flow information, and advection loss to perform temporal downscaling of geophysical data.
### Problem Definition
In the field of geophysical data analysis, the problem of temporal downscaling aims to refine the time resolution of observed data, thereby providing more frequent measurements. Specifically, given a dataset \\(X=\\{x_{t_{1}},x_{t_{2}},\\ldots,x_{t_{N}}\\}\\big{(}X\\in\\mathbb{R}^{N\\times H\\times W }\\big{)}\\) at a coarser temporal resolution of three hours, the objective is to estimate a fine-grained dataset \\(Y=\\Big{\\{}y_{t_{1}^{\\prime}},y_{t_{2}^{\\prime}},\\ldots,y_{t_{3N}^{\\prime}} \\big{\\}}\\big{(}Y\\in\\mathbb{R}^{3N\\times H\\times W}\\big{)}\\) at a one-hour resolution, where \\(t_{i}^{\\prime}=t_{i}/3\\) for \\(i=1,2,\\ldots,3N\\). Here, \\(N\\) represents the number of samples, \\(H\\) denotes the horizontal dimensions, and \\(W\\) denotes the vertical dimensions. The primary goal is to minimize the discrepancy between the ground truth and the predicted over the fine-grained temporal intervals. Mathematically, this can be formulated as
\\[\\min_{\\Theta}\\mathcal{L}\\Big{(}Y_{\\text{true}},Y_{\\text{pred}}\\Big{)}=\\min_{ \\Theta}\\left[\\sum_{i=1}^{3N-1}\\|Y_{i}-\\hat{Y}_{i}\\|^{2}+\\min_{\\Theta}\\sum_{i= 1}^{3N-2}\\|Y_{i}-\\hat{Y}_{i}\\|^{2}\\right]. \\tag{1}\\]
Here, \\(\\Theta\\) represents the parameters of the Enhanced Residual U-Net model, and \\(\\mathcal{L}\\) is the loss function.
### Residual U-Net
In this work, we introduce an architecture, enhanced residual U-Net, designed for the temporal downscaling of gridded geophysical data. This architecture merges the high-level feature extraction capabilities of U-Net with the robustness of Residual Networks (ResNet) to produce an efficient and scalable model (see Figure 3).
Figure 2: Study area. The red square indicates the study area.
The architecture is constructed from two main components: an encoder and a decoder. The encoder is responsible for downscaling the input tensor, thereby extracting high-level features. The decoder, on the other hand, upscales these high-level features to reconstruct the output tensor. These operations are standard in any U-Net architecture; however, our model introduces several enhancements.
One of the enhancements in our architecture is the introduction of residual blocks following key convolutional layers in the encoder section. The architecture's depth is primarily achieved through its deeper residual blocks, and each residual block comprises three 3 \\(\\times\\) 3 convolutional layers with ReLU activations [49]. The outputs of these layers are summed with the original input using a skip connection and these residual blocks help the model to learn complex features with reduced risk of vanishing or exploding gradients.
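A minimal PyTorch sketch of such a residual block (three 3 x 3 convolutions with batch normalization and ReLU plus an additive skip; the 1 x 1 projection used when the channel count changes is our assumption, not taken from the authors' code):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Three 3x3 conv layers with BN/ReLU and an additive skip connection."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        layers = []
        ch = in_ch
        for _ in range(3):
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        self.body = nn.Sequential(*layers)
        # 1x1 projection so the skip matches the channel count when it changes.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x) + self.skip(x)
```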
The encoder section is composed of a succession of four deeper residual blocks, each with distinct channel configurations--64, 128, 256, and 512 channels. Every deeper residual block comprises three convolutional layers, each followed by batch normalization and ReLU activation functions. This series of operations enriches the representation of the input data by sequentially increasing the number of channels. The architecture also incorporates max-pooling layers after each block to reduce the spatial dimensions of the feature maps. Subsequent to each max-pooling operation, the spatial dimensions are halved, thereby focusing on the extraction of high-level features.
The decoder section reverses the operations conducted by the encoder. It employs a series of up-convolutional layers paired with concatenation operations that merge high-level features from the encoder. Each up-convolutional layer also employs a ReLU activation function and effectively doubles the spatial dimensions. Similar to the encoder, residual blocks are also introduced in the decoder. These are positioned after each up-convolutional
Figure 3: The architecture of the residual U-Net. It consists of an encoder section on the left, a decoder section on the right, and an auxiliary flow information layer in between. The encoder features four residual blocks, each containing two convolutional layers with batch normalization and ReLU activation functions, responsible for reducing feature dimensions while capturing initial patterns from the input. The decoder also contains four residual blocks and uses transposed convolutions for upsampling. Skip connections merge the output from each encoder Residual Block with its corresponding decoder block, ensuring the preservation of spatial information across scales.
layer and function in the same manner as their encoder counterparts. These blocks refine the combined high-level and low-level features. The network concludes with a \\(1\\times 1\\) convolutional layer, which condenses the 64-channel feature map into a two-channel output.
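The encoder-decoder wiring described above can be sketched as follows, reusing the `ResidualBlock` from the previous snippet (two input and two output channels are assumed, matching the two-channel output mentioned above; this is a structural sketch under our own naming, not the authors' implementation):

```python
import torch
import torch.nn as nn

class ResidualUNet(nn.Module):
    def __init__(self, in_ch: int = 2, out_ch: int = 2, widths=(64, 128, 256, 512)):
        super().__init__()
        self.enc = nn.ModuleList()
        ch = in_ch
        for w in widths:                       # 64 -> 128 -> 256 -> 512
            self.enc.append(ResidualBlock(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList()
        self.dec = nn.ModuleList()
        for w in reversed(widths[:-1]):        # 256, 128, 64
            self.up.append(nn.ConvTranspose2d(ch, w, kernel_size=2, stride=2))
            self.dec.append(ResidualBlock(2 * w, w))   # concatenation doubles the channels
            ch = w
        self.head = nn.Conv2d(ch, out_ch, kernel_size=1)   # 64 -> 2-channel output

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)                # keep high-resolution features for skip connections
                x = self.pool(x)               # halve the spatial dimensions
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)                          # double the spatial dimensions
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)
```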
### Flow Regularization Using Advection Loss
Conventional methods often miss capturing the evolving patterns. To address this limitation, we incorporate flow information with advection loss into our enhanced residual U-Net model, and this section details the mathematical and computational elements of this approach (see Figure 4).
Advection refers to the transport of a scalar field driven by flow regularization. Mathematically, it can be represented as a transformation function, \\(\\text{Advect}(Y_{t},F)\\), which takes in a geophysical field at time \\(t\\), denoted as \\(Y_{t}\\), and flow information \\(F\\), and returns an approximated field at time \\(t+1\\), represented as \\(\\hat{Y}_{t+1}\\). The principle behind advection is rooted in fluid dynamics and is used widely in computational fluid dynamics simulations and meteorological models. By adopting an advection transformation, we impose an auxiliary constraint on our neural network model, compelling it to learn physically meaningful dynamics.
The advection loss is introduced as an additional term in the loss function and is defined as \\(L_{\\text{advection}}=\\left\\|Y_{t+1}-\\left(\\text{Advect}(Y_{t},F)+\\hat{Y}_{t} \\right)\\right\\|_{2}^{2}\\). In essence, this loss measures the difference between the true field at \\(t+1\\) and the advected field \\(\\hat{Y}_{t+1}\\). It guides the network
Figure 4: Overview of flow information extraction from the intermediate features post-encoder phase. Following encoding, these intermediate features undergo specific convolutional operations and resampling procedures to yield the flow information. The bar chart illustrates the Mean Absolute Percentage Error (MAPE) for various flow pixel resolutions across different epochs. The \\(x\\)-axis represents the training epochs, ranging from 100 to 1000, while the \\(y\\)-axis represents the MAPE in percentages.
to learn a more accurate representation of the data and acts as a regularization term, reducing overfitting while still ensuring that the model learns the dynamics of the field.
For the computation of \\(\\text{Advect}(Y_{t},F)\\), spatial interpolation is employed. Given a 2D geophysical field \\(Y_{t}\\) and corresponding flow information, which is also a 2D tensor but with two channels representing the velocity vectors \\(\\left(F_{x},F_{y}\\right)\\), each point \\(\\left(x,y\\right)\\) in \\(Y_{t}\\) is shifted according to the velocity vector at that point. Specifically, \\(F_{x}\\) and \\(F_{y}\\) are the latitudinal and longitudinal components of the velocity at each point in a given field. This means that for each point, \\(F_{x}\\) indicates the rate and direction of movement along the horizontal axis, while \\(F_{y}\\) represents the same along the vertical axis. The new coordinates \\(\\left(x_{\\text{new}},y_{\\text{new}}\\right)\\) are calculated as \\(\\left(x+F_{x},y+F_{y}\\right)\\). Bilinear interpolation is used to estimate the value of the advected field \\(\\hat{Y}_{t+1}\\) at these new coordinates.
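A sketch of this advection step using bilinear resampling, here written with `torch.nn.functional.grid_sample` (the normalization of pixel offsets to the [-1, 1] grid convention is our assumption about how the warp is realized):

```python
import torch
import torch.nn.functional as F

def advect(y_t: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp y_t (B, C, H, W) by flow (B, 2, H, W) holding per-pixel (Fx, Fy) offsets."""
    b, _, h, w = y_t.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h, device=y_t.device, dtype=y_t.dtype),
                            torch.arange(w, device=y_t.device, dtype=y_t.dtype),
                            indexing="ij")
    x_new = xs.unsqueeze(0) + flow[:, 0]   # (B, H, W)
    y_new = ys.unsqueeze(0) + flow[:, 1]
    # Normalize to [-1, 1] as required by grid_sample (align_corners=True convention).
    grid = torch.stack([2.0 * x_new / (w - 1) - 1.0,
                        2.0 * y_new / (h - 1) - 1.0], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(y_t, grid, mode="bilinear", align_corners=True)
```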
The final loss function incorporating both the L2 loss and the advection loss is formulated as follows:
\\[\\mathcal{L}=\\|\\hat{Y}_{t}-Y_{t}\\|_{2}^{2}+\\lambda\\|Y_{t+1}-\\left(\\text{Advect} (Y_{t},F)+\\hat{Y}_{t}\\right)\\|_{2}^{2}. \\tag{2}\\]
In this equation, \\(\\|\\hat{Y}_{t}-Y_{t}\\|_{2}^{2}\\) is the L2 loss, representing the squared Euclidean distance between the predicted output \\(\\hat{Y}_{t}\\) and the ground-truth \\(Y_{t}\\). The term \\(\\lambda\\|Y_{t+1}-\\left(\\text{Advect}(Y_{t},F)+\\hat{Y}_{t}\\right)\\|_{2}^{2}\\) is the weighted advection loss, and the weight of \\(\\lambda=0.3\\) is applied to balance the contribution of advection loss against the L2 loss. \\(Y_{t+1}\\) refers to the next immediate true data sample, which is either an hour or two hours apart.
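Putting the two terms together, Equation (2) can be computed as in the sketch below, reusing the `advect` helper above (`lam = 0.3` as stated; the mean reduction over pixels and batch is our assumption about the implementation):

```python
import torch

def total_loss(y_hat_t, y_t, y_tp1, flow, lam: float = 0.3):
    """Regression term plus weighted flow-regularization (advection) term, Eq. (2)."""
    reg = torch.mean((y_hat_t - y_t) ** 2)                          # L2 fit to the target hour
    adv = torch.mean((y_tp1 - (advect(y_t, flow) + y_hat_t)) ** 2)  # advection consistency
    return reg + lam * adv
```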
## 5 Experiments
The model was implemented using PyTorch and the training was performed on a machine equipped with eight NVIDIA Tesla A5000 GPUs. During training, we employed the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. The model was trained for 1000 epochs. A decay rate of 0.9 for the learning rate was applied every 200 epochs to ensure convergence. The loss function used in training was a combination of the Mean Squared Error (MSE) loss and the advection loss, as described in Section 4.3.
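This optimization setup maps onto standard PyTorch components as sketched below (the data loader, the stand-in flow head, and the reuse of `ResidualUNet` and `total_loss` from earlier sketches are placeholders, not the released training script):

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = ResidualUNet()                                    # sketched in Section 4.2
flow_head = nn.Conv2d(2, 2, kernel_size=3, padding=1)     # stand-in for the auxiliary flow branch
params = list(model.parameters()) + list(flow_head.parameters())
optimizer = Adam(params, lr=1e-4)
scheduler = StepLR(optimizer, step_size=200, gamma=0.9)   # decay by 0.9 every 200 epochs

train_loader = []  # placeholder: replace with a DataLoader over the 3-hour windows (batch size 32)

for epoch in range(1000):
    for x, y_true, y_next in train_loader:
        y_pred = model(x)
        flow = flow_head(x)                                # auxiliary flow field (cf. Figure 1)
        loss = total_loss(y_pred, y_true, y_next, flow)    # Eq. (2), sketched earlier
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```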
### Quantitative Comparison with Conventional Methods
In this section, we conduct a quantitative evaluation of our enhanced residual U-Net model against several benchmark methods, focusing on three critical atmospheric variables: 2 m air temperature, geopotential height, and relative humidity. For a robust comparison, we employ three well-established metrics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Linear interpolation was directly applied by averaging adjacent temporal data points, assuming uniform change over time. Cubic spline interpolation employed a piecewise third-order polynomial, enhancing smoothness and fitting the data's curvature better than the linear method. We also trained three computer vision models specifically for weather data, training from scratch rather than using pre-trained weights. These models, typically used for video frame interpolation, were employed to handle meteorological inputs by converting meteorological fields from single-channel data to three-channel format, without modifying the main backbone. The models' input and output layers were the only components modified to process our weather dataset.
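For reference, the three evaluation metrics and the simplest baseline can be written compactly (a sketch; the epsilon guard in MAPE is our addition to avoid division by zero):

```python
import numpy as np

def rmse(pred, true):
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mae(pred, true):
    return float(np.mean(np.abs(pred - true)))

def mape(pred, true, eps=1e-6):
    return float(np.mean(np.abs(pred - true) / (np.abs(true) + eps)) * 100.0)

def linear_baseline(x_t, x_t3):
    """Linearly interpolate the two skipped hours between 3-hourly fields x_t and x_t3."""
    return (2.0 * x_t + x_t3) / 3.0, (x_t + 2.0 * x_t3) / 3.0
```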
As depicted in Table 1, it is clear that our enhanced residual U-Net model surpasses other techniques across all three metrics and for each atmospheric variable examined. Specifically, for 2 m air temperature, our model yields RMSE and MAE values of 0.20 and 0.17, respectively; for geopotential height, the corresponding values are 0.72 and 0.62; and for relative humidity, these metrics stand at 0.64, 0.46, and 0.59%.
When contrasted with conventional methods like linear interpolation and cubic spline--whose performance metrics are considerably higher in terms of RMSE and MAE--the superiority of our model becomes evident. The results are presented in Table 1. Although these deep learning-based models outperform linear interpolation and cubic spline, they still fail to match the superior performance of our enhanced residual U-Net model.
### Visual and Qualitative Analysis
To assess the effectiveness of our model, we selected a case of the 2 m temperature fields for 21 January 2020, between the hours of 10:00 and 17:00. As depicted by Figure 5, our model accurately reproduces the nonlinear variability in the temperature field.
Focusing on specific intervals, for the 11:00 downscaling between 10:00 and 13:00, the temperature patterns exhibit distinct characteristics. For example, the coastal areas, which originally showed a relatively warmer temperature at 10:00, start showing moderate cooling due to oceanic influences. In contrast, the central regions, which were cooler at 10:00, warm up slightly, likely due to increased solar radiation. This is captured with an RMSE of 0.22, MAE of 0.19, and MAPE of 0.06%. As we move to 12:00, the temperature in the valley regions starts showing minor fluctuations, likely due to local wind patterns. The RMSE improves to 0.20, MAE drops to 0.17, and the MAPE remains stable at 0.06%, emphasizing the model's competence in capturing these subtle dynamics.
Between 14:00 and 17:00, the 15:00 downscaling indicates that urban areas start to experience heat island effects, with temperature spikes in densely populated zones. These spikes contrast with adjacent rural or forested areas that show a more stable temperature profile. At this point, the RMSE is 0.20, MAE is 0.18, and the MAPE is stable at 0.06%. At 16:00, there is a notable decrease in temperature in the mountainous regions, likely due to the shadows cast by the changing sun angle. This dynamic is reflected with an RMSE of 0.19, MAE of 0.16, and a consistent MAPE of 0.06%.
Table 1: Comparison of different methods for temporal downscaling.

| Method | 2 m Temperature (RMSE/MAE/MAPE) | Geopotential Height (RMSE/MAE/MAPE) | Relative Humidity (RMSE/MAE/MAPE) | Parameters (Million) |
| --- | --- | --- | --- | --- |
| Linear Interpolation | 0.51/0.42/0.14% | 1.79/1.55/0.09% | 1.61/1.15/1.47% | / |
| Cubic Spline | 0.42/0.35/0.12% | 1.50/1.30/0.08% | 1.35/0.96/1.23% | / |
| ConvLSTM [26] | 0.31/0.25/0.10% | 1.20/1.03/0.07% | 0.90/0.64/0.82% | 11.1 |
| Super SloMo [27] | 0.25/0.22/0.08% | 1.24/1.10/0.07% | 0.93/0.65/0.83% | 19.8 |
| RIFE [50] | 0.23/0.20/0.07% | 0.87/0.74/0.05% | 0.75/0.51/0.65% | 9.8 |
| **Enhanced Residual U-Net** | **0.20/0.17/0.06%** | **0.72/0.61/0.05%** | **0.64/0.45/0.59%** | **11.0** |
Figure 5: Ground truth and model-generated downscaled results. Points at 10 a.m., 1 p.m., 2 p.m., and 5 p.m. are the modelβs input, while the data at 11 a.m., 12 p.m., 3 p.m., and 4 p.m. are model-generated outputs. The black square indicates the study area.
### Ablation Studies
In our ablation studies, we examine the individual components of the enhanced residual U-Net model to understand their significance in achieving overall performance metrics (see Figure 6). We set the performance of the full model as the baseline, which exhibits RMSE and MAE values of 0.20 and 0.17 for 2 m temperature, 0.72 and 0.62 for geopotential height, and 0.64 and 0.46 for relative humidity. To remove multi-scale features in the U-Net for our ablation study, we simply omit the skip connections. This is accomplished by not using the 'torch.cat' operation to merge features from the encoder and decoder, thus preventing the combination of high-resolution details with low-resolution context. Upon removing the multi-scale features, we observed a discernible decrease in predictive accuracy across all variables, with RMSE and MAE values for 2 m temperature rising to 0.34 and 0.28, respectively (see Table 2). This confirms the importance of multi-scale features in capturing the complexity of geophysical data. When we omitted the flow regularization, there was a significant performance decline: RMSE and MAE values for 2 m temperature rose to 0.23 and 0.19, indicating the benefits of incorporating flow regularization to capture temporal dynamics effectively. Lastly, reducing the architectural depth of the model led to a less pronounced, yet still noticeable, decline in performance. For instance, RMSE for the 2 m temperature increased to 0.26, and MAE rose to 0.22, underlining the model's depth's role in capturing the intricacies of geophysical data. Overall, the degradation in model performance upon the removal of each component underscores their collective importance, reinforcing the need for their inclusion in the final architecture.
Table 2: Ablation study on the effect of various components on the model's performance.

| Method | 2 m Temperature (RMSE/MAE/MAPE) | Geopotential Height (RMSE/MAE/MAPE) | Relative Humidity (RMSE/MAE/MAPE) |
| --- | --- | --- | --- |
| Without Multi-scale Features | 0.34/0.28/0.10% | 1.12/1.01/0.06% | 0.92/0.33/0.42% |
| Without Residual Identities | 0.28/0.23/0.08% | 1.08/0.96/0.06% | 0.71/0.29/0.36% |
| Without Flow Regularization | 0.23/0.19/0.06% | 0.96/0.82/0.05% | 0.87/0.34/0.44% |
| Reduced Architectural Depth | 0.26/0.22/0.07% | 1.05/0.95/0.06% | 0.74/0.28/0.36% |
| **Full Model (Baseline)** | **0.20/0.17/0.06%** | **0.72/0.61/0.05%** | **0.64/0.45/0.59%** |
Figure 6: Visual comparison of ablation study results. We downscale the 2 m temperature fields from 10 a.m. to 1 p.m. on 21 January 2020, to obtain results for 11 a.m. Subfigure d depicts the wind field at 11 a.m., which offers indicative insights into temperature evolution.
In our ablation study concerning the advection loss weight \\(\\lambda\\), we explore the statistical significance of its calibration for the accuracy of temporal downscaling in geophysical data. The detailed line chart in our manuscript illustrates how the mean absolute error (MAE) varies with different \\(\\lambda\\) settings. With \\(\\lambda\\) set to 0.3, the model achieves the lowest MAE values, suggesting that this level maximizes the benefit of incorporating flow information while also allowing for local atmospheric dynamics, such as radiative heating or cooling, to be adequately represented. On the other hand, a high \\(\\lambda\\) value, such as 0.9, results in increased MAE, indicating a diminished ability to capture these local changes, as the model overly emphasizes adherence to the advected state. Conversely, the absence of advection loss (\\(\\lambda=0\\)) leads to increased errors, highlighting the necessity of this term for improving downscaling accuracy.
## 6 Discussion
In our analysis, we found that the input grid data pixel size significantly influences the model's Mean Absolute Error (MAE), forming a U-shaped pattern, as depicted in Figure 7. For smaller pixel sizes, specifically at 64 pixels, the MAE was around 0.26. The convolutional layers in this case are restricted to localized features, missing the larger spatial context that is crucial for accurate downscaling. Conversely, the MAE reaches its minimum value of 0.17 at an optimal pixel size of 224 (see Figure 8). Beyond this optimal point, the MAE starts to increase again, climbing to approximately 0.33 at a pixel size of 320. This suggests that while the model is effective in capturing global features at larger pixel sizes, it fails to grasp finer details, leading to increased error.
This performance is further illuminated when comparing the MAE curves between the full model and the reduced-depth model. The full model achieved a lower minimum MAE value of 0.17 as opposed to the reduced model's 0.22. This can largely be attributed to the residual modules in the full model, which allow for a more efficient and resilient feature extraction process. These modules facilitate better generalization across varying pixel sizes, thus accounting for the full model's more effective U-shaped performance curve in MAE across a broader range of pixel sizes.
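The pixel-size experiment described here (and in the caption of Figure 8) amounts to resampling the fixed study area to different grid dimensions before it enters the network; a sketch using bicubic interpolation (the dummy 25 x 25 patch stands in for the 0.25-degree grid of the study area, and the evaluation loop is omitted):

```python
import torch
import torch.nn.functional as F

def resample(field: torch.Tensor, size: int) -> torch.Tensor:
    """Bicubically resize a (B, C, H, W) field to size x size pixels."""
    return F.interpolate(field, size=(size, size), mode="bicubic", align_corners=False)

# Sweep the input resolutions examined in Figure 8.
for size in (64, 128, 224, 320):
    x_resized = resample(torch.randn(1, 2, 25, 25), size)
```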
Figure 7: Model performance with different \\(\\lambda\\) values.
In addition to the performance metrics discussed earlier, another significant advantage of our model is its capability to support real-time inference (see Figure 9). A set of experiments was conducted to evaluate the model's speed performance across multiple hardware configurations--A40, V100, and A5000. The inference time was observed at different intermediate snapshot levels: 9, 18, 27, 36, and 45. Remarkably, even at 45 intermediate snapshots, the inference time did not exceed 44 ms on A5000, and it was even lower on A40 and V100 setups, clocking at approximately 39 ms and 38 ms, respectively. This rapid inference time positions our model as not only accurate but also highly practical for real-time applications in weather forecasting and climate studies.
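Latency figures of this kind are typically obtained with explicit device synchronization around the forward pass; a minimal sketch (model and input are placeholders):

```python
import time
import torch

@torch.no_grad()
def time_inference(model, x, warmup=5, runs=50):
    """Average forward-pass latency in milliseconds on the current device."""
    for _ in range(warmup):
        model(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000.0 / runs
```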
To assess the impact of spatial domain size on temporal downscaling accuracy, our study conducted tests across three increasingly localized areas within the ERA5 dataset's 0.25\\({}^{\\circ}\\) resolution grid (see Figure 10). Area I spans longitudes 112\\({}^{\\circ}\\)E to 118\\({}^{\\circ}\\)E, Area II narrows down to 113\\({}^{\\circ}\\)E to 117\\({}^{\\circ}\\)E, and Area III further reduces to 114\\({}^{\\circ}\\)E to 116\\({}^{\\circ}\\)E. The intent was to understand how the extent of spatial information affects the prediction quality for a specific grid cell's temporal downscaling. The results, as shown in Table 3, indicate a nuanced relationship between spatial domain size and downscaling accuracy. In Areas I, II, and III, we observed RMSE, MAE, and MAPE values of 0.20/0.17/0.06%, 0.17/0.18/0.06%, and 0.17/0.14/0.05%, respectively. This pattern suggests that as the spatial domain becomes more localized, the model's performance slightly improves in terms of RMSE and MAPE, while the MAE shows a minor increase from Area II to Area III. Our findings suggest that reducing the spatial domain helps the model to focus on more relevant atmospheric features specific to the area, enhancing the precision of temporal downscaling. This is particularly evident in Area III, where the smallest spatial extent was associated with the lowest RMSE and MAPE, indicating a refined prediction capability. However, the slight increase in MAE from Area II to III highlights the complexity of balancing spatial resolution
Figure 8: Influence of field pixels. In the experiments, the size of the selected area remains unchanged, but the dimensions of the input data are altered through bicubic interpolation.
with predictive accuracy. The increasing error rates in temporal downscaling from Areas I (22\\({}^{\\circ}\\)N-28\\({}^{\\circ}\\)N) to Area V (62\\({}^{\\circ}\\)N-68\\({}^{\\circ}\\)N) can be attributed to the complex atmospheric dynamics and pronounced seasonal variations typical of higher latitudes. Area I shows the lowest error rates (RMSE: 0.20, MAE: 0.17), indicating better model performance in lower latitudes. In contrast, Areas IV and V exhibit progressively higher errors, with Area V reaching an RMSE of 0.26 and an MAE of 0.20. These higher latitudes face challenges like greater temperature shifts between seasons, more complex weather systems like jet streams, and data sparsity due to fewer weather stations. These factors combined make accurate temporal downscaling more challenging in higher latitude regions.
This study primarily uses a 5 \\(\\times\\) 5 convolutional kernel size for temporal downscaling. The comparison between different kernel sizes--3 \\(\\times\\) 3, 5 \\(\\times\\) 5, and 7 \\(\\times\\) 7--reveals differences in their performance, as shown in Table 4. The 5 \\(\\times\\) 5 kernel size showed the most consistent and favorable results across the Full Model and the Reduced Depth model in terms of RMSE, MAE, and MAPE metrics. The 3 \\(\\times\\) 3 kernel displayed slightly higher errors than the 5 \\(\\times\\) 5 kernel, indicating slightly diminished performance in capturing temporal dependencies. Meanwhile, the 7 \\(\\times\\) 7 kernel size resulted in higher error metrics for both models, indicating reduced accuracy in temporal downscaling. The overall trend suggests that the 5 \\(\\times\\) 5 kernel size excels in extracting relevant features and capturing temporal patterns more effectively within the context of the temporal downscaling task carried out in this research.
Table 3: Generalization test at different spatial resolutions on 2 m temperature fields.

| Method | Area I (RMSE/MAE/MAPE) | Area II (RMSE/MAE/MAPE) | Area III (RMSE/MAE/MAPE) | Area IV (RMSE/MAE/MAPE) | Area V (RMSE/MAE/MAPE) |
| --- | --- | --- | --- | --- | --- |
| **Full model** | **0.20/0.17/0.06%** | **0.17/0.18/0.06%** | **0.17/0.14/0.05%** | **0.23/0.19/0.07%** | **0.26/0.20/0.07%** |
| Reduced Depth | 0.26/0.22/0.07% | 0.23/0.21/0.07% | 0.24/0.17/0.06% | 0.28/0.24/0.08% | 0.30/0.27/0.09% |
Figure 9: Comparison of inference time under different GPU conditions.
## 7 Conclusions
In this study, we introduced the enhanced residual U-Net, which incorporates advection loss in addition to regression loss for training and combines the strengths of U-Net and ResNet to address the challenge of temporal downscaling in gridded geophysical data. The architecture is specifically designed to harness both local and global features within the data, thereby producing a robust and versatile model capable of delivering high-quality downscaling results. The residual U-Net has been applied in many other fields [51; 52; 53; 54] (including spatial downscaling), and we are the first to apply it in the domain of temporal downscaling. This design choice not only enhances the learning capability of the network but also alleviates issues related to the vanishing gradient problem, allowing for deeper and more effective networks. We also introduced a custom loss function that combines Mean Squared Error (MSE) with a spatial regularization term, which collectively ensures both the fidelity and spatial coherence of the downscaled output.
Our experimental results, based on a comprehensive evaluation using multiple gridded geophysical datasets, validated the effectiveness of the enhanced residual U-Net model. We demonstrated that the architecture outperformed traditional downscaling methods and other state-of-the-art machine learning approaches in key metrics, including RMSE and MAE, while maintaining computational efficiency.
In conclusion, the enhanced residual U-Net architecture stands as a robust and efficient solution for the temporal downscaling of gridded geophysical data. Its design features, including the use of residual blocks and a custom loss function, make it a highly promising tool for both academic research and practical applications in the field of geoscience.
Future work could further enhance this architecture by incorporating additional techniques for feature selection or by tailoring the network to different kinds of geophysical data. Moreover, real-world applicability of this model could be tested in other domains
Table 4: Performance of models with different convolutional kernel sizes.

| Method | Kernel 3 × 3 (RMSE/MAE/MAPE) | Kernel 5 × 5 (RMSE/MAE/MAPE) | Kernel 7 × 7 (RMSE/MAE/MAPE) |
| --- | --- | --- | --- |
| **Full model** | **0.23/0.18/0.06%** | **0.20/0.17/0.06%** | **0.25/0.24/0.08%** |
| Reduced Depth | 0.26/0.24/0.08% | 0.26/0.22/0.07% | 0.31/0.26/0.09% |
Figure 10: Illustration of five selected areas, labeled Area I to Area V, showcasing the variations in both spatial extent and latitude. Area I, Area II, and Area III represent regions where the spatial domain progressively decreases in size, providing a comparative perspective on how varying spatial scales influence the modelβs performance. Additionally, Area I, Area IV, and Area V are arranged in ascending order of latitude, allowing for an examination of the impact of latitudinal differences on the effectiveness of temporal downscaling.
requiring high-fidelity downscaling, providing a broader utility beyond the specific use-case studied here.
**Author Contributions:** Conceptualization, L.W. and Q.L. (Qian Li); methodology, L.W.; software, L.W.; validation, Q.L. (Qian Li) and Q.L. (Qi Lv); formal analysis, L.W.; investigation, Q.L. (Qian Li); resources, X.P.; data curation, L.W.; writing--original draft preparation, L.W.; writing--review and editing, Q.L. (Qi Lv); visualization, Q.L. (Qi Lv); supervision, Q.L. (Qian Li); project administration, Q.L. (Qian Li); funding acquisition, Q.L. (Qian Li). All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the National Natural Science Foundation of China (Grant No. U2242201, 42075139, 42105146, 41305138), the China Postdoctoral Science Foundation (Grant No. 2017M621700), the Hunan Province Natural Science Foundation (Grant No. 2021JC0009, 2021J30773, 2023JJ30627), and the Fengyun Application Pioneering Project (FY-APP-2022.0605).
**Data Availability Statement:** All data necessary to reproduce the results of this work can be downloaded at the ERA5 Climate Data Store via [https://doi.org/10.24381/cds.bd0915c6](https://doi.org/10.24381/cds.bd0915c6) and [https://doi.org/10.24381/cds.adbb2d47](https://doi.org/10.24381/cds.adbb2d47), accessed on 12 April 2023.
**Conflicts of Interest:** The authors declare no conflicts of interest.
**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
# A Bibliometric Study on Integrated Solar Combined Cycles (ISCC), Trends and Future Based on Data Analytics Tools
Miguel Angel Reyes-Belmonte
Department of Chemical and Energy Technology, School of Experimental Sciences and Technology (ESCET), Rey Juan Carlos University, 28933 Mostoles, Madrid, Spain; [email protected]
Received: 28 August 2020; Accepted: 2 October 2020; Published: 6 October 2020
Keywords: data analytics; ISCC; combined cycles; CSP; solar energy; bibliometric studies
programs such as the 2030 Climate and Energy framework, regarding its target to achieve a 32% share for renewable energy and 40% cuts in greenhouse gas emissions (from 1990) [7].
In this near-to-mid future scenario with a high penetration of renewable energy sources, new grid challenges and difficulties--such as curtailment, service disruptions or negative bid prices--may appear [8]. That is based on the extensive deployment of the so-called non-dispatchable renewable energy resources, such as wind energy and solar photovoltaics. Despite the competitive cost of those technologies, they cannot meet users' grid demands when they are not coupled to energy storage systems, which translates into the aforementioned difficulties. Notwithstanding the latest advances and cost reductions seen in electrochemical energy storage systems (batteries) for wind and PV plants [9], the storage of large amounts of electricity at a competitive price has not yet been solved. Recently, different alternatives (rather than electrochemical storage) for wind and photovoltaics have been discussed, such as Compressed Air Energy Storage (CAES) systems [10] or Thermal Energy Storage (TES) systems [11] based on liquid molten salts. Even the so-called 'Carnot battery' concept, in which the coal steam generators of conventional thermal power plants are replaced by molten salt electric heaters and TES tanks that absorb surplus renewable electricity, has been proposed [12]. However, in those cases, exergy destruction appears, based on the multiple energy conversions involved (electrochemical, thermal, mechanical and electrical).
A simpler alternative with a proven track record in commercial applications gained attention a couple of decades ago. That technology is known as Concentrating Solar Thermal (CST) energy, which uses mirrors and optical devices to reflect and focus solar beams onto a particular area where a device--a solar reactor or receiver, depending on the final application--is located [13]. In particular, a thermal fluid (water, air, or molten salts, typically) can be passed through that receiver in order to absorb solar radiation and convert it into high fluid enthalpy. Later, that hot fluid can be stored in TES devices and/or transferred to the working fluid (steam or air) to run the turbine of a power cycle in order to generate electricity. That series of transformations is known as Concentrating Solar Power (CSP), which still accounts for less than 1% of all electricity generation, with a total of 6.45 GW of installed power, with Spain, the US, and recently China and MENA region countries being the main contributors [14]. Recently, great interest has been focused on CSP based on production cost reductions, with bidding projects such as Cerro Dominador at 11.4 c$/kWh (2014), or the DEWA project (under construction) with a bid of 7.3 c$/kWh for a combined solar tower and parabolic trough plant in Dubai. The cost reduction in the technology appears to be based on learning curve effects, the scaling-up of technologies and the larger number of players joining that technology.
Nowadays, new ideas and proposals are seen as the next development steps for CSP technology. Some of those ideas include hybrid concepts [15], whether they are applied together with conventional thermal power plants such as coal hybridization [16] or hybrid CSP/PV plant configurations [17; 18]. Also under investigation are advances in working fluids, such as supercritical steam [19] or supercritical CO\\({}_{2}\\) [20; 21; 22]; the use of high temperature heat transfer fluids [23]; high temperature TES [24] and high temperature receivers [25; 26]; as well as the use of highly efficient power cycles [27; 28; 29]. The latter is one of the main hot topics in thermal energy conversion technologies, and in CSP in particular. The main feature drawing attention to CSP technologies is their ability to decouple energy harvesting from electricity generation when coupled to a TES system. Besides this, such a TES system is inexpensive (compared to equivalent thermochemical energy storage solutions) and it allows large bulk storage.
Regarding the utilization of CO\\({}_{2}\\) under supercritical conditions for electricity generation by means of a power cycle, it has gained incredible attention over the last couple of years, mainly for CSP and heat recovery applications. In fact, that technology is seen as the philosopher's stone for electricity generation in the near future, with theoretical conversion efficiencies above 50% for the medium temperature range (550-650 \\({}^{\\circ}\\)C), and which could exceed 60% for temperatures in the range of 900 \\({}^{\\circ}\\)C [30]. There is a vast literature on that topic that even led to a bibliometric economic review [31]. Despite the great interest of researchers and scientists in that topic [32], several scholars listed a number of challenges and difficulties that are limiting the deployment of that technology [33; 34]. The main ones can be summarized as its corrosive and solvent nature at high temperature and pressure, and its very high handling pressures (around 300 bar), which make its direct storage more difficult. On the other hand, there is also a long list of benefits, such as its high density at the compressor inlet (close to a liquid, but with a viscosity and diffusivity as high as a gas); its high energy density (which is related to the compactness of its designs); its critical temperature, which is close to ambient conditions (31 \\({}^{\\circ}\\)C); its stability; its non-flammable, non-toxic nature; and obviously the abovementioned very high efficiency for a moderate temperature range. Despite the high working pressures of sCO\\({}_{2}\\) cycles, one needs to keep in mind that the power cycle operates at a moderate pressure ratio (around 3.0) compared to conventional Brayton cycles, which implies smaller and more compact turbines, and the fact that its critical point appears at a lower pressure than the supercritical conditions for water steam (220 bar), which results in fewer turbine stages and reduced pumping losses. Furthermore, the sCO\\({}_{2}\\) cycle might operate at a temperature close to 1000 \\({}^{\\circ}\\)C, due to the fluid's stability [35]. Despite the general interest in that technology, the abovementioned technical challenges are still conditioning its further realization.
On the contrary, it is widely known that existing Combined Cycle power technologies allow for very high electricity conversion efficiencies (above 50%) based on their highly recuperative heat nature. This is achieved by combining high temperature energy conversion through gas Brayton cycles together with medium temperature two-phase Rankine power cycles. The former are characterized by their very high working temperatures (above 1000 \\({}^{\\circ}\\)C), which translates into high efficiency potential regarding the second law of thermodynamics. However, compressing a gas (typically air) is a highly energy demanding process, and the divergence appearing on the enthalpy-entropy diagram, together with the limited expansion ratios on the turbine side, results in a very high temperature of the exhaust gases, which compromises their high efficiency prospects. Nonetheless, modern gas turbine technologies allow for conversion efficiencies in the range of 35% to 45%, depending on the turbine power [36]. Rankine power cycles are characterized by their low energy consumption during the fluid compression process (a pump is required in order to increase the pressure of a liquid) compared to the very high energy that can be extracted from the steam phase in a turbine. However, the use of a two-phase working fluid requires a high latent energy consumption during the evaporation process inside the boiler, which reduces their high efficiency prospects. The newest advances in Rankine cycles include feedwater chain preheating, steam reheating, and the use of supercritical once-through boilers, which lead to efficiencies close to 45% for typical water steam temperatures [37].
Despite both the maturity of CSP technology (with more than 20 years of commercial experience) and the maturity of Combined Cycles, the standard water-steam subcritical Rankine cycle has been imposed as the only commercial solution for large CSP installations (whether solar tower plants, parabolic trough plants, or linear Fresnel plants). This is based on its suitability for being coupled with molten salt central receivers (that can be heated up to 565 \\({}^{\\circ}\\)C) and parabolic trough collectors (up to 400 \\({}^{\\circ}\\)C). However, the application of Combined Cycles to Concentrating Solar Power, also known as the Integrated Solar Combined Cycle (ISCC), would meet both requirements regarding conversion efficiency improvement and generation cost reductions.
In that context, this paper aims to analyse the different approaches and the growing interest in the ISCC concept through a bibliometric study and the analysis of the publication trends in that topic. In order to do so, a data analytics study was performed, which provided some interesting facts about the main working groups on the ISCC topic, the most common keywords defining ISCC research, and the main hubs, countries and connections among researchers. That evidence will help to elucidate the research future of ISCC while helping scholars to focus their research in the CSP and renewable energy fields.
In that context, the bibliometric methodology that is described in this paper could serve as a tool for researchers to approach any scientific topic from a Big data perspective. Indeed, during their whole scientific career, it is crucial for scholars to learn from relevant works from the same area of expertise, in order to discard, reject or support their assumptions based on similar research works. Usually, this stage is commonly known as the literature review, and it is the cornerstone of any research activity, from Masters theses, to PhD theses, to research papers, and it is even crucial for successful applications for project proposals and funding schemes. Despite the importance of that stage, the depth of that analysis depends in practice on previous expertise and personal experience, since it is typically addressed as a human-based activity that requires many years of experience. Nowadays, that exercise becomes even harder to attain, due to the increasing number of research papers being published, the appearance of new journals and platforms, and the infamous motto "publish or perish". Fortunately, the developments in Big data, data mining and data analytics that have become popular in social networks analysis and in behavioural sciences [38] can also be applied to Energy research and other technical sciences. Big data treatment through nodes and networks analysis has great potential for engineering research applications, and in particular to the topic of ISCC, since it allows researchers and scholars to understand trends and topics related to ISCC technologies, nurture future collaborations among different research groups and researchers, and to increase their awareness of the topic's importance. Last but not least, it establishes data analytics as one of the core activities of the scientific method by providing researchers with a powerful, and yet unfamiliar, tool. Besides this, it is proposed as a methodology to thoroughly address a research topic that was rather manual, time consuming and biased, up until this time. The application of bibliometric studies for renewable energy topics is quite recent; some examples can be found, such as the use of community detection tools for scientific collaboration analysis [39], or keywords trend evolution for the study of interactions between the economy, energy and the environment [40]. Bibliometric studies have also shown their potential as a tool for the analysis of the research impact of a country [41], or to analyse the global transition to low-carbon electricity [42]. In particular, and related to solar energy, very few bibliometric studies could be found, with most of them having been published recently [43; 44; 45].
This paper has been organized according to the following structure; firstly, energy and technology contexts are presented. After that, the works' relevance is discussed, together with the data mining source and the research method employed. Later, corpus data is analysed using VOSviewer software, which is a tool based on the use of network data for the creation of maps, and for visualizing and exploring them [46]. Finally, conclusions are drawn, and recommendations are compared to another hot topic in the CSP field.
## 2 Materials and Methods
### Data Source
The data corpus used in this work was retrieved from the Web of Science (WOS) Core Collection using the search queries (keywords) "Integrated Solar Combined Cycle" and "Solar Combined Cycle" for the time period between 1990 and 2020 (only papers published before 14th of July were accounted for in the 2020 analytics). Both works published as journal articles and conference proceedings that can be retrieved from the WOS have been considered for the analysis. Figure 1 shows the distribution of the retrieved publications according to the WOS thematic areas, with the total number of publications for each category indicated between brackets. As it can be observed, the main topic area on ISCC publications refers to Energy Fuels (1161), followed by Thermodynamics (424), Mechanics (255), Engineering Mechanical (234), Green Sustainable Science Technology (228), Engineering Chemical (136), Environmental Sciences (81), Environmental Engineering (61), Chemistry Physical (47) and Electrochemistry (47).
It must be pointed out that the number of publications indicated between brackets is not cumulative, since the same work might be classified under different categories in the WOS. Further refinement was applied in order to exclude the retrieved references that were related to different Integrated Solar Technologies, such as PV-only research. After that refinement, the total number of publications retrieved was 1277, which constitutes the data corpus for this study. Further insight into this search is shown in Figure 2, where it can be observed that 75.7% of the retrieved documents were journal articles, 18.9% were proceeding papers, and less than 5% were review papers. Editorial material, early access, corrections and notes accounted for less than 1% altogether.
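The corpus construction described above can also be scripted so that the refinement is reproducible. The sketch below is illustrative rather than the exact procedure used for this study: the export file name (savedrecs.txt), the use of the pandas library and the PV-only exclusion heuristic are assumptions, while the field tags (PY for year, DT for document type, TI for title) follow the standard WOS tab-delimited export.

```python
# Minimal sketch (assumed workflow): load a WOS tab-delimited export and refine
# it into the working corpus used for the bibliometric analysis.
import pandas as pd

# Standard WOS export fields: PY = year, DT = document type, TI = title.
records = pd.read_csv("savedrecs.txt", sep="\t", quoting=3, dtype=str)

# Keep only the 1990-2020 window considered in this study.
records["PY"] = pd.to_numeric(records["PY"], errors="coerce")
records = records[(records["PY"] >= 1990) & (records["PY"] <= 2020)]

# Illustrative refinement: drop PV-only works that do not mention combined cycles.
pv_only = records["TI"].str.contains("photovoltaic", case=False, na=False) & \
          ~records["TI"].str.contains("combined cycle", case=False, na=False)
corpus = records[~pv_only]

print(len(corpus), "documents in the corpus")
print(corpus["DT"].value_counts(normalize=True))  # article / proceedings shares
```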
Figure 3 shows the publication and citation trends in Integrated Solar Combined Cycles (ISCC) over the last 30 years.
Figure 1: Corpus data according to the Web of Science thematic categories.
Figure 2: Distribution of the scientific production on ISCC, according to the type of document.
As it can be observed, the number of publications related to ISCC has significantly increased since 2011, and especially since 2016. In fact, a baseline of 100 publications per year has been sustained from 2014 onwards, which gives an idea of the interest in that technology. This trend has continued in 2020, in which 100 ISCC publications were reported before 14 July. A similar trend can be found in the total number of citations, with more than 5500 in 2019. Both the increasing numbers of ISCC publications and citations agree with the trend observed in CSP's installed capacity, which confirms the interest in and deployment of the technology [47].
Figure 4 shows the summary report generated by the ISCC search on the WOS. As it can be observed, for the time period of the analysis, ISCC publications were cited 28,201 times in total in 17,768 different items indexed within the Web of Science Core Collection. Removing self-citation, that number was reduced to 24,934 citations in 17,010 different items. Dividing the sum of the times cited by the total number of publications results in an average of 22.08 citations per item, as can be observed in Figure 4.
### Analysis Methodology
For the data analytics, the data corpus retrieved from the WOS search was exported, including the full record and cited references of each publication, and analysed using graph analysis software. The VOSviewer [48] software tool was chosen, since it is free software for constructing and visualizing bibliometric networks, and it also offers a text mining functionality that can be used to construct and visualize the co-occurrence networks of the relevant terms extracted from a body of scientific literature. These networks may for instance include journals, researchers, or individual publications, and they can be constructed based on citations, bibliographic coupling, co-citations, or co-authorship relations. A diagram of the methodology followed in this work
Figure 4: Citation report for 1277 results from the WOS Core Collection between 1990 and 2020, including the total number of publications, citations and average citations per item regarding the ISCC topic.
Figure 3: Evolution in the number of publications related to the ISCC topic.
can be found in Figure 5. As it can be seen, the first stage comprised filtering the data using the research question, time period and thematic areas applied to the WOS Core Collection. The result of that filtering stage is the corpus data of the ISCC topic. That corpus data was analysed applying different bibliometric indicators and using the VOSviewer tool for network mapping.
## 3 Results
Several indicators were considered for the bibliometric network study, including co-authorship analysis, co-occurrence analysis and citation analysis, among others. Each study can be performed using different units of analysis, such as authors, sources, organizations, countries, documents or keywords.
### Co-Authorship Analysis
For the analysis, papers with a large number of authors were ignored by applying a threshold of 25 maximum authors for a single publication. In addition, full counting criteria were applied in the analysis, which means that if an author co-authors a document with, for example, 10 other authors, each of the 10 co-authorship links has a weight of 1 instead of 1/10. That has an impact on bibliometric network construction, since link thickness and node sizes depend on that counting. Applying that criterion, a total number of 3568 different authors have published at least one paper related to the ISCC topic, as can be found in Table 1. Indeed, this table shows the number of different authors fulfilling a minimum publication criterion applied to the ISCC corpus data.
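As an illustration of the full counting rule and of the author thresholds of Table 1, the following sketch builds the co-authorship pair weights directly from the corpus. It assumes the `corpus` DataFrame of the previous sketch, the WOS author field AU (authors separated by semicolons) and the times-cited field TC; it is not the exact VOSviewer implementation.

```python
# Minimal sketch (assumed data layout): full counting of co-authorship links and
# filtering of authors by minimum publications and citations.
import pandas as pd
from collections import Counter
from itertools import combinations

pub_count, cite_count, link_weight = Counter(), Counter(), Counter()

for _, row in corpus.iterrows():                       # `corpus` from the sketch above
    authors = [a.strip() for a in str(row.get("AU", "")).split(";") if a.strip()]
    if not authors or len(authors) > 25:               # discard hyper-authored papers
        continue
    cites = pd.to_numeric(row.get("TC"), errors="coerce")
    cites = 0 if pd.isna(cites) else int(cites)
    for author in authors:
        pub_count[author] += 1
        cite_count[author] += cites
    for a, b in combinations(sorted(set(authors)), 2):
        link_weight[(a, b)] += 1                       # full counting: weight 1 per joint paper

# Table 1 style filter: at least two publications and ten citations per author.
selected = {a for a, n in pub_count.items() if n >= 2 and cite_count[a] >= 10}
print(len(selected), "authors meet the threshold")
```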
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline \\multirow{2}{*}{**Minimum Number of Publications**} & **Number of Authors** & **Number of Authors** \\\\ & **(Citations \\(\\geq\\) 0)** & **(Citations \\(\\geq\\) 10)** \\\\ \\hline
1 & 3568 & 1873 \\\\
2 & 576 & 460 \\\\
3 & 237 & 220 \\\\
4 & 111 & 108 \\\\
5 & 71 & 71 \\\\
7 & 34 & 34 \\\\
10 & 8 & 8 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Authorship publications filtering criteria.
Figure 5: Methodology diagram.
As it can be deduced, a total number of 3568 authors have one publication related to the ISCC topic, and this number is reduced down to 576 for authors with two publications, and to 71 authors with five publications. There were only eight authors with 10 publications on the ISCC topic since 1990. Those numbers do not consider any other requirement regarding the minimum number of citations of an author. In order to account for the relevance of their works, a minimum criterion of at least 10 citations of an author's documents (according to the WOS) was considered. As it can be observed, the impact of that filtering is only noticeable for those authors holding just one publication, whose number was reduced by half.
For co-authorship network mapping, a minimum number of two publications per author with 10 citations was chosen, which gives a significant number of nodes meeting the thresholds (460). For each of the 460 authors, the total strength of the co-authorship links with other authors was calculated. Some of the authors in the network are not connected to each other in the graph, since they may have co-authored only with authors that did not meet the minimum number of publications and citations criteria. The largest set of connected items consists of 241 items that were finally represented in the network map of Figure 6. This graphical information is useful for ISCC researchers and scholars, since it provides a clear idea about who the main authors publishing in this topic are, and the co-authorship connections among them. Furthermore, the size of each node is related to the number of publications. Table A1 from Appendix A contains detailed information about the clusters shown in Figure 6.
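The largest set of connected items can be extracted with any graph library; the sketch below uses networkx together with the `selected` authors and `link_weight` counters of the previous sketch, so it is an assumed continuation of that workflow rather than the VOSviewer procedure itself.

```python
# Minimal sketch: build the co-authorship graph and keep its largest connected component.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(selected)
for (a, b), w in link_weight.items():
    if a in selected and b in selected:
        G.add_edge(a, b, weight=w)

# Only the largest set of connected items is drawn (241 of the 460 authors here).
largest = max(nx.connected_components(G), key=len)
print(len(largest), "authors in the largest connected component")

# Total link strength of an author = sum of the weights of its co-authorship edges.
strength = {a: sum(d["weight"] for _, _, d in G.edges(a, data=True)) for a in largest}
```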
As it can be observed in Figure 6, some western authors appeared to be duplicated since they signed their documents using different forms of their names, which is considered as two different authors by the software. The software allows replacing a researcher's first name by its initial to merge such duplicates, but doing so could lead to an incorrect analysis, mainly with regard to Asian researchers sharing the same surname and initial. In order to verify the quality of the results, the generated table of authors was analysed, and the duplicate cases were revised, such as the ones found corresponding to renowned authors I. Dincer, who also published as Ibrahim Dincer; A. Steinfeld, who published as A Steinfeld; or D. Yogi Goswami, who also appeared as
Figure 6: Co-authorship network for those authors who had published more than two ISCC papers.
Dy Goswami. Applying that filter to the co-authorship network, a version of Table 2 including the top authors in ISCC was created. Another interesting parameter that can be deduced from Table 2 is the average number of citations each ISCC paper from those renowned authors received. As it can be seen, the highest ratio corresponds to D. Goswami and A. Steinfeld, with more than 50 citations for each ISCC paper, while the lowest ratio for the top 10 author list is 6.7 for B. Laurent at The Royal Institute of Technology, KTH.
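The duplicate-author check can also be automated before the counts of Table 2 are compiled. The sketch below is an illustrative heuristic, not the exact cleaning procedure used here: names are keyed on surname plus first initial, which merges 'Dincer, I.' with 'Dincer, Ibrahim' but, as noted above, may wrongly merge distinct researchers sharing the same surname and initial.

```python
# Minimal sketch (assumed heuristic): merge author name variants by surname + initial.
from collections import defaultdict

def author_key(name: str) -> str:
    """'Dincer, Ibrahim' or 'Dincer, I.' -> 'dincer, i'"""
    surname, _, given = name.partition(",")
    initial = given.strip()[:1]
    return f"{surname.strip().lower()}, {initial.lower()}"

merged_pubs, merged_cites = defaultdict(int), defaultdict(int)
for name, n_pubs in pub_count.items():        # counters from the earlier sketch
    key = author_key(name)
    merged_pubs[key] += n_pubs
    merged_cites[key] += cite_count[name]

# Citations per publication, the last ratio reported in Table 2.
ratio = {k: merged_cites[k] / merged_pubs[k] for k in merged_pubs if merged_pubs[k]}
```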
Regarding the co-authorship analysis by organizations (authors' institutions), there were 1102 different institutions that had published at least one paper on the ISCC topic. However, only 85 of them published at least 5 publications, as can be observed from Table 3.
A minimum number of 5 publications (and 10 citations) was chosen for the bibliometric analysis, which resulted in 85 nodes. Some of them were not connected in the network, since they had no joint publications with any of the other filtered institutions. Therefore, the largest set of connected items (69) was represented instead in Figure 7. Based on that figure, the different collaborations among the institutions can be observed. This kind of analysis is relevant because it allows scholars to identify the main institutions publishing on ISCC (node size) and the linking relations (joint publications) among them. For example, Universidad Politecnica de Madrid had joint publications in ISCC together with Universidad Carlos III and UNED. Meanwhile, North China Electric Power University had joint publications in ISCC together with Hunan University, the Chinese Academy of Sciences, Tianjin University, Nanyang Technological University, Huazhong University, the University of Pennsylvania and the Technical University of Denmark. Table A2 from Appendix A contains the detailed information about the clusters shown in Figure 7.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Author** & **Number of ISCC Publications** & **Total Number of Citations** & **Citations/Publications** & **Institution** \\\\ \\hline Dincer, I. & 24 & 758 & 31.6 & Ontario Tech University \\\\ Jin, H. & 22 & 448 & 20.4 & Chinese Academy of Sciences \\\\ Goswami, D. & 19 & 993 & 52.3 & University of South Florida \\\\ Wang, J. & 17 & 620 & 36.5 & North China Electric Power University \\\\ Liu, Q. & 15 & 243 & 16.2 & Guizhou University \\\\ Laurent, B. & 14 & 94 & 6.7 & The Royal Institute of Technology KTH \\\\ Spelling, J. & 14 & 180 & 12.9 & The Royal Institute of Technology KTH \\\\ Markides, CN. & 14 & 544 & 38.9 & Imperial College \\\\ Steinfeld, A. & 14 & 751 & 53.6 & ETH Zurich \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Top 10 authors on ISCC topics.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
**Minimum Number of Publications** & \\begin{tabular}{c} **Number of Institutions** \\\\ **(Citations \\(\\geq\\) 0)** \\\\ \\end{tabular} &
\\begin{tabular}{c} **Number of Institutions** \\\\ **(Citations \\(\\geq\\) 10)** \\\\ \\end{tabular} \\\\ \\hline
1 & 1102 & 624 \\\\
2 & 344 & 284 \\\\
3 & 190 & 179 \\\\
4 & 125 & 122 \\\\
5 & 85 & 85 \\\\
7 & 55 & 55 \\\\
10 & 29 & 29 \\\\
12 & 17 & 17 \\\\
15 & 11 & 11 \\\\
20 & 5 & 5 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Institutions publishing on ISCC topic by filtering criteria.
Based on that analysis, it can be observed, in Table 4, that the Chinese Academy of Sciences is the main institution regarding ISCC publications, with 52 contributions in total (and another 28 under the University of Chinese Academy of Sciences affiliation). As it can be observed, five out of the first 10 institutions publishing about ISCC are from China. Another interesting parameter that can be deduced from Table 4 is the ratio between the total number of citations per institution and the number of published papers. As it can be observed, the institutions with fewer publications in Table 4 (National Technical University of Athens and Xi'an Jiao Tong University) exhibit the highest ratios, with almost 40 cites per document; on the contrary, the Chinese Academy of Sciences--which was the institution with the most publications (52) and the most citations (736)--had the lowest ratio, with around 14 cites per document.
Those numbers were also confirmed through a co-authorship analysis based on the different countries. As it can be observed from Table 5, the corpus publication on ISCC came from 78 different countries. That number was reduced to 43 countries with five publications on that topic, and to 31 countries with ten publications. There are seven countries with at least 50 publications, and only four countries published 100 or more papers on the ISCC topic.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Institution** & **Number of ISCC Publications** & **Total Number of Citations** & **Citations/Document (+)** \\\\ \\hline Chinese Academy of Sciences & 52 & 736 & 14.2 \\\\ North China Electric Power University & 41 & 625 & 15.2 \\\\ University of Tehran & 30 & 509 & 17.0 \\\\ University of Chinese Academy of Sciences & 28 & 373 & 13.3 \\\\ Islamic Azad University & 20 & 283 & 14.2 \\\\ Shanghai Jiao Tong University & 18 & 361 & 20.1 \\\\ Imperial College London & 17 & 335 & 19.7 \\\\ National Technical University of Athens & 17 & 664 & 39.1 \\\\ Xi'an Jiao Tong University & 16 & 647 & 40.4 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: List of top 10 institutions publishing about the ISCC topic, by filtering criteria.
Figure 7: Organization network for those groups which have published more than five ISCC papers.
As it can be observed, when the criterion about having at least 10 citations by country is introduced, the number of countries meeting the requirement reduces to 68, and if the number of citations is increased to 100, the number is reduced to 38. As the number of publications increases (above 10), the citation criteria show no effect. For network mapping representation, a minimum number of five publications with 10 citations by country was chosen, which led to 43 nodes. However, the largest set of connected items consists of 42 items (Morocco is the only country not meeting the criterion) that were chosen for the graphical representation in Figure 8. This kind of analysis is relevant because it allows an understanding of the main connections (joint publications) between different countries, and the identification of common collaborations among research institutions. Table 13 from Appendix A contains detailed information about the clusters shown in Figure 8.
From Figure 8, the level of collaboration (nodes linking) among different countries in joint publications regarding the chosen criteria (at least five joint publications with a minimum number of 10 citations per country) can be deduced. In the case of Spain, its node is connected with another 19 nodes: France, Israel, India, Egypt, Canada, England, Sweden, Iran, China, the United States, Denmark, Norway, Belgium, Chile, Greece, Germany, Italy, the Netherlands, and Switzerland, which confirms a high level of collaboration with other countries in joint publications. On the contrary, Algeria only had
Figure 8: Co-authorship map regarding countries' connections.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Minimum Number of** & **Number of Countries** & **Number of Countries** & **Number of Countries** \\\\
**Publications** & **(Citations \\(\\geq\\) 0)** & **(Citations \\(\\geq\\) 10)** & **(Citations \\(\\geq\\) 100)** \\\\ \\hline
1 & 78 & 68 & 38 \\\\
2 & 65 & 62 & 38 \\\\
5 & 43 & 43 & 37 \\\\
10 & 31 & 31 & 31 \\\\
20 & 20 & 20 & 20 \\\\
50 & 7 & 7 & 7 \\\\
100 & 4 & 4 & 4 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Countries publishing on ISCC, by filtering criteria.
joint publications with France (meeting the chosen criteria), and Jordan only had publications with Poland and United States.
Table 6 shows the top 10 countries publishing on the ISCC topic; as it can be observed, China is the main contributor in this topic, with 241 papers and 4108 citations in the last 30 years, followed by the United States (202), Spain (115) and Italy (102). Another interesting parameter that can be inferred from Table 6 is the ratio between the total number of citations and the number of documents. At one end, Germany is the 6th country in terms of ISCC research, with 83 publications (most of them from the DLR German Aerospace Centre). However, each of those publications was cited almost 30 times on average. On the contrary, Indian publications on ISCC were cited 12 times on average.
### Co-Occurrence Analysis
A co-occurrence analysis was performed based on all of the keywords, with a full counting method. Considering all of the appearing keywords, Table 7 shows the number of occurrences of a keyword. As it can be observed from the table, there are at least 4534 different keywords that appear at least once in the data corpus (1277 total publications). This results in an average number of 3.55 keywords per publication. In order to consider only the more relevant keywords, a minimum number of occurrences was considered. In doing so, it can be concluded that 394 different keywords appeared in at least five different papers, 71 appeared in at least 25 papers, and six keywords appeared in 100 publications from the corpus.
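The occurrence and co-occurrence counts behind Table 7 and Figure 9 can be reproduced from the exported keyword field. The sketch below assumes the `corpus` DataFrame of the earlier sketch and the WOS author-keyword field DE (keywords separated by semicolons), and follows the full counting convention used throughout this work.

```python
# Minimal sketch: keyword occurrence and co-occurrence counts over the corpus.
from collections import Counter
from itertools import combinations

occurrences, co_occurrence = Counter(), Counter()
for raw in corpus["DE"].dropna():
    keywords = sorted({k.strip().lower() for k in raw.split(";") if k.strip()})
    occurrences.update(keywords)                       # one occurrence per document
    co_occurrence.update(combinations(keywords, 2))    # one co-occurrence per pair

top_100 = [k for k, _ in occurrences.most_common(100)]  # the nodes shown in Figure 9
print(occurrences.most_common(10))                      # compare with Table 8
print(sum(occurrences.values()) / len(corpus), "keywords per publication on average")
```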
For the network graph representation, the 100 most common keywords were chosen, as can be seen in Figure 9. As it can be observed, most of the keywords are related to modelling topics, based on keywords like 'optimization', 'design', 'exergy analysis', 'thermodynamic analysis', 'multiobjective optimization', 'model', or 'simulation'. Apart from those common keywords, some others, such as thermoeconomic analysis, performance analysis, exergoeconomic analysis or parametric analysis, also appeared in Figure 9. Despite the abundance of keywords related to modelling and simulation, another series of keywords related to different technologies, such as direct steam generation, CO\\({}_{2}\\) capture, hybrid plants, biomass or desalination, also appear, which gives an idea of combined applications of
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Country** & **Number of Documents** & **Total Number of Citations** & **Citations Per Document** \\\\ \\hline China & 241 & 4108 & 17.0 \\\\ United States & 202 & 4553 & 22.5 \\\\ Spain & 115 & 2641 & 23.0 \\\\ Italy & 102 & 1516 & 14.9 \\\\ Iran & 97 & 1877 & 19.4 \\\\ Germany & 83 & 2483 & 29.9 \\\\ England & 77 & 2000 & 26.0 \\\\ India & 48 & 575 & 12.0 \\\\ Canada & 44 & 1053 & 23.9 \\\\ Australia & 44 & 1209 & 27.5 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: The main countries publishing on the ISCC topic.
\\begin{table}
\\begin{tabular}{c c} \\hline \\hline
**Minimum Number of Occurrences** & **Number of Keywords** \\\\ \\hline
1 & 4534 \\\\
5 & 394 \\\\
10 & 189 \\\\
25 & 71 \\\\
50 & 30 \\\\
75 & 17 \\\\
100 & 6 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 7: Minimum number of occurrences of a keyword.
ISCC with other technologies. This kind of analysis and graphical representation is relevant because it allows us an understanding of the connection (co-ocurrence of keywords) between the topics of ISCC research. As it can be observed, a clear interest for low and medium temperature applications of ISCC is also evident from the occurrence of keywords such as 'parabolic trough', 'organic Rankine cycles', 'combined heat and power' and 'cooling and refrigeration'. Table 15 in Appendix A contains detailed information about the clusters shown in Figure 9.
Table 8 summarizes the list of the top 10 keywords and their frequency of occurrence. As was mentioned above, most keywords are related to simulations, such as 'optimization' (249 times), 'performance' (237), 'design' (175), 'exergy analysis' (98) and 'thermodynamic analysis' (94). We can also observe the growing interest in low grade heat recovery in ISCC by the high occurrence of the 'Organic Rankine Cycle' keyword (94).
### Citation Analysis
Citation bibliometric mapping can be performed through different units of analysis: documents, sources, authors, organizations and countries. Based on the number of documents, it is clear that
\\begin{table}
\\begin{tabular}{c c} \\hline \\hline
**Keyword** & **Occurrences** \\\\ \\hline Optimization & 249 \\\\ Performance & 237 \\\\ Energy & 205 \\\\ Design & 175 \\\\ Solar Energy & 159 \\\\ System & 136 \\\\ Exergy analysis & 98 \\\\ Systems & 96 \\\\ Thermodynamic analysis & 94 \\\\ Organic Rankine Cycle & 94 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 8: Top 10 list of keywords in ISCC topic publications.
Figure 9: The 100 most common keywords used for ISCC topics.
there are 1277 publications with a minimum number of citations of 0 (the total corpus); meanwhile, 833 publications were cited at least five times, 392 publications were cited at least 20 times, 152 publications were cited at least 50 times, and 52 documents accumulated at least 100 citations each, as can be observed in Table 9.
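A count of this kind is straightforward to reproduce from the exported times-cited field; the short sketch below, which assumes the `corpus` DataFrame and TC field introduced earlier, generates the rows of Table 9.

```python
# Minimal sketch: number of publications reaching each minimum citation level (Table 9).
import pandas as pd

citations = pd.to_numeric(corpus["TC"], errors="coerce").fillna(0)
for threshold in (0, 1, 5, 20, 50, 100, 150, 200):
    print(f"citations >= {threshold}: {int((citations >= threshold).sum())} publications")
```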
The publications with 50 or more citations were selected for the network mapping; according to Table 9, there are 152 publications meeting that requirement. Furthermore, the tool suggests the representation of only the largest set of connected items, which numbers 107. In other words, 107 of the 152 publications with 50 citations (or more) each cite at least one of the other works from that list, so that a link between two nodes exists, as can be observed in Figure 10. This kind of analysis and graphical representation is relevant because it allows an understanding of how the most relevant publications (in terms of the total number of citations) are connected to each other. Table 10 in Appendix A contains detailed information about the clusters shown in Figure 10.
As can be observed in Figure 10, there is a high interconnection among the nodes, which means that the most cited articles cited each other. Furthermore, each node contains full reference information and the hyperlink to the internet website hosting the article. The latter is very practical since one can quickly access the full publication online. From the analysis, it can be concluded that the most cited articles are Mills (2004) [49], with 472 citations, and Behar (2013), with 335 [50]. The detailed list of the most cited articles retrieved from the WOS on the ISCC topic is gathered in Table 10.
Figure 10: Publications with more than 50 citations, and their connections.
\\begin{table}
\\begin{tabular}{c c} \\hline \\hline
**Minimum Number of Citations of a Document** & **Number of Publications** \\\\ \\hline
0 & 1277 \\\\
1 & 1108 \\\\
5 & 833 \\\\
20 & 392 \\\\
50 & 152 \\\\
100 & 52 \\\\
150 & 26 \\\\
200 & 9 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 9: Citation analysis.
Based on the sources--which means journals and indexed conference proceedings at the WOS--a total number of 129 different sources were retrieved for ISCC related publications. Figure 11 shows the network mapping for all of the sources with five or more ISCC publications (41). Nevertheless, the largest set of connected items that is represented in the network numbers 39. That map also contains information about the relevance of each node (publications of each source) and the connections between the journals. These refer to which journals were cited by each node; for example, publications from the _Sustainability_ node cited publications from _Renewable Energy_, _Energy_, _Applied Energy_, _Energy Conversion and Management_, _Sustainable Energy Technologies_ and _Energy Technology_, or vice-versa, which is translated as a link between those nodes. This kind of analysis and graphical representation is relevant because it allows an understanding of which sources are the most cited, and how those sources are connected (citations) amongst themselves. Table 6 in Appendix A contains detailed information about the clusters shown in Figure 11.
Based on the information provided in Figure 11, summary Table 11 is provided in order to analyze the most frequent sources for publications on the topic of ISCC. The information was sorted in terms of the total number of documents. As it can be observed, _Energy Conversion and Management_ was the preferred platform for ISCC publications (158 documents), whereas _Solar Energy_ was the source that received the most citations (4438 citations). In order to normalize that information, the ratio of citations per document was introduced. As can be observed, papers published in _Renewable and Sustainable Energy Reviews_ received the highest number of citations, with almost 54 cites per published document on average, followed by _Solar Energy_, with almost 44. On the other hand, _Energies_ showed the lowest ratio, where each of its ISCC related publications (33 documents) received an average of four citations.
## 4 Discussion
The results presented in Section 3 (Results) and the tables gathered in Appendix A are relevant for solar energy scholars because some trends and research topics can be derived from them. Based on the detailed keyword information presented in Table 11, it can be deduced that most publications addressed modelling and performance optimization, with 857 keyword appearances. In particular, medium and low temperature studies were performed, as is indicated by the 59 keyword appearances for 'parabolic trough studies' and 246 keyword appearances for 'Organic Rankine Cycles' (ORC). Furthermore, a great interest in ISCC applications for combined heat and power generation as related keywords appear in 217 publications. It is also relevant that thermal energy storage studies were applied to ISCC topics, with 165 publications including related TES keywords. Hydrogen production was also related to ISCC publications, with 99 keyword appearances. To a lesser extent, 'life-cycle-analysis' (LCA) was considered for ISCC studies, with 82 keyword appearances. The less studied topics related to ISCC were 'desalination' (26 keyword appearances), 'phase-change material energy storage' (24 keyword appearances), 'CO\\({}_{2}\\) capture' (22 keyword appearances) and 'hybrid plants' (20 keyword appearances).
In this section, the ISCC topic is also compared to another hot energy topic: supercritical CO\\({}_{2}\\) cycles (sCO\\({}_{2}\\)). As mentioned in the introduction, sCO\\({}_{2}\\) cycles for electricity production are one of the hottest research topics, with a clear interest for CSP application. Furthermore, both technologies are highly efficient and advanced solutions for electricity generation, both are of interest for CSP applications, and, in some cases, they were studied together. For this reason, the bibliometric study presented in this work was compared to a recent bibliometric study for sCO\\({}_{2}\\) [31].
For comparison purposes, bibliometric indicators such as the number of publications, the total number of citations, authors, institutions, countries, and the most relevant publications were compared and analyzed for both technologies, and are gathered in Table 12. The bibliometric parameters and indicators were normalized due to both studies covering a different period of time; for the case of this work, the ISCC publications were collected from 1990 to July 2020, while for the sCO\\({}_{2}\\) bibliometric analysis, the timespan covered 2000 to 2019.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Source** & **Number of Documents** & **Citations** & **Ratio Citations/Document** \\\\ \\hline Energy Conversion and Management & 158 & 3054 & 19.3 \\\\ Energy & 124 & 3212 & 25.9 \\\\ Applied Energy & 114 & 3655 & 32.1 \\\\ Solar Energy & 101 & 4438 & 43.9 \\\\ Applied Thermal Engineering & 91 & 2665 & 29.3 \\\\ Renewable Energy & 66 & 1877 & 28.4 \\\\ Renewable and Sustainable Energy Reviews & 50 & 2690 & 53.8 \\\\ Journal of Solar Energy Engineering-Transactions of the ASME & 43 & 1230 & 28.6 \\\\ International Journal of Hydrogen Energy & 38 & 815 & 21.4 \\\\ Energies & 33 & 142 & 4.3 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 11: Most frequent sources for ISCC topic publications.
As it can be observed in the table, both technologies are hot topics for Energy research, with 1277 ISCC publications and 724 for sCO\\({}_{2}\\). Furthermore, those topics have grabbed the attention of a large number of researchers and institutions from all over the world. In general, it could be said that the ISCC topic involved double the number of researchers and institutions compared to the sCO\\({}_{2}\\) topic. However, if normalized indicators are compared, similar numbers can be observed, such as, for example, the average number of publications per year, which was 42.6 for the ISCC topic and 38.1 for sCO\\({}_{2}\\). Likewise, comparing the ratio between the total number of publications and the number of institutions, the average publishing ratio for sCO\\({}_{2}\\) (1.33) is higher than that for ISCC (1.16). Comparing the total number of citations, the differences between the ISCC and sCO\\({}_{2}\\) topics become more evident; for example, the number of citations per year for ISCC papers reaches 940, while for sCO\\({}_{2}\\) it is 511. One of the reasons behind that trend could be the fact that research interest in sCO\\({}_{2}\\) cycles appeared later than for ISCC, due to the technical difficulties discussed for supercritical CO\\({}_{2}\\) compared to the mature technology of Combined Cycles, as was discussed in the introduction. That can be confirmed from the publishing evolutions of both topics shown in Table 13.
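The normalization behind those figures is a simple division of the raw totals by the number of years, authors, countries, institutions and sources of each study. The sketch below reproduces the per-year and per-publication ratios of Table 12; the year counts (30 for ISCC, 19 for sCO\\({}_{2}\\)) are those implied by the reported ratios.

```python
# Minimal sketch: normalized bibliometric indicators, as in Table 12.
topics = {
    "ISCC": {"years": 30, "items": 1277, "cites": 28201},
    "sCO2": {"years": 19, "items": 724,  "cites": 9710},
}

for name, t in topics.items():
    print(name,
          "publications/year = %.1f" % (t["items"] / t["years"]),
          "citations/year = %.1f"    % (t["cites"] / t["years"]),
          "citations/publication = %.1f" % (t["cites"] / t["items"]))
```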
As can be observed, in 2010 there were 30 publications on ISCC but only 11 for sCO\\({}_{2}\\). Despite the rapid development of sCO\\({}_{2}\\) research in recent years, leading to 131 publications in 2019, more papers were still reported for the ISCC topic (174), which explains the lower citation ratio of sCO\\({}_{2}\\).
Comparing the most relevant countries in terms of publishing, it can be observed in Table 14 that, for both the ISCC and sCO\\({}_{2}\\) topics, China and United States are the main publishing countries, while Spain is the third highest publishing country for ISCC, as is South Korea for sCO\\({}_{2}\\).
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline & & **ISCC** & & & **sCO\\({}_{2}\\)** & \\\\ \\hline
**Year** & **Publications (P)** & **Citations (C)** & **C/P** & **Publications (P)** & **Citations (C)** & **C/P** \\\\ \\hline
2010 & 30 & 539 & 18.0 & 11 & 110 & 10.0 \\\\
2015 & 105 & 2354 & 22.4 & 54 & 985 & 18.2 \\\\
2019 & 174 & 6017 & 34.6 & 131 & 218 & 1.7 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 13: Comparison between the ISCC and sCO\\({}_{2}\\) topics for the selected years of study.
\\begin{table}
\\begin{tabular}{c c c c c c c c c c c c} \\hline \\hline \\multicolumn{6}{c}{**ISCC (1990–2020)**} & \\multicolumn{6}{c}{**sCO\\({}_{2}\\) (2000–2019)**} \\\\ \\hline
**Items** & **Cites** & **Authors** & **Countries** & **Institutions** & **Sources** & **Items** & **Cites** & **Authors** & **Countries** & **Institutions** & **Sources** \\\\ \\hline
1277 & 28201 & 3568 & 78 & 1102 & 129 & 724 & 9710 & 1378 & 55 & 543 & 94 \\\\
\\multicolumn{6}{c}{Publication Ratio (PR)} & \\multicolumn{6}{c}{Publication Ratio (PR)} \\\\ PR/year & PR/author & PR/country & PR/institution & PR/source & & PR/year & PR/author & PR/country & PR/institution & PR/source & \\\\
42.6 & 0.36 & 16.4 & 1.16 & 9.9 & & 38.1 & 0.53 & 13.2 & 1.33 & 7.7 & \\\\
\\multicolumn{6}{c}{Citations Ratio (CR)} & \\multicolumn{6}{c}{Citations Ratio (CR)} \\\\ CR/year & CR/author & CR/country & CR/institution & CR/source & & CR/year & CR/author & CR/country & CR/institution & CR/source & \\\\
940.0 & 7.9 & 361.6 & 25.6 & 218.6 & & 511.0 & 7.05 & 176.5 & 17.88 & 103.3 & \\\\
\\multicolumn{6}{c}{Citations/Publication: 22.1} & \\multicolumn{6}{c}{Citations/Publication: 13.4} \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 12: Comparison between ISCC and sCO\\({}_{2}\\) topics in terms of publications and citations.
It can also be observed in Table 14 that the average citation ratio (total number of citations divided by the number of publications) for those countries for the ISCC topic (around 22 citations per publication) is higher than for the sCO\\({}_{2}\\) topic (around 15 citations per publication). Despite the high research output from those countries, some differences were found regarding the main publishing institutions. For the ISCC topic, two Chinese organisations and one Iranian organization were the most productive, while for the sCO\\({}_{2}\\) topic, institutions from China, South Korea and United States were relevant. Regarding the most relevant authors, it is interesting that authors in sCO\\({}_{2}\\) topics had more publications than the authors retrieved for the ISCC topic; however, the citation ratio (the number of citations divided by the total number of publications from an author) was higher for ISCC topics. The relevance of the conference proceedings for sCO\\({}_{2}\\) topics can also be observed, where the Proceedings of the ASME Turbo Expo were the first publishing platform in terms of documents (113), while the 'Energy' journal was in second place for both ISCC and sCO\\({}_{2}\\).
Table 15 compares the most cited publications for both the ISCC and sCO\\({}_{2}\\) topics. As it can be observed, most of the relevant publications for ISCC had more total citations than for the sCO\\({}_{2}\\) topic, but the ratio of citations/year was half of the ratio observed for the sCO\\({}_{2}\\) topic. The publishing sources also differed, with the exception of _Renewable and Sustainable Energy Reviews_.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline & & \\multicolumn{4}{c}{**ISCC**} & \\multicolumn{4}{c}{**sCO\\({}_{2}\\)**} \\\\ \\hline & & \\multicolumn{4}{c}{**Most productive countries**} & \\\\ \\hline & Country & Publications & Citations & & Publications & Citations & & \\\\ & C & (P) & (C) & C/P & Country & (P) & (C) & C/P \\\\
1\\({}^{\\text{st}}\\) & China & 241 & 4108 & 17.0 & United States & 242 & 3622 & 15.0 \\\\
2\\({}^{\\text{nd}}\\) & United States & 202 & 4553 & 22.5 & China & 159 & 1812 & 11.4 \\\\
3\\({}^{\\text{rd}}\\) & Spain & 115 & 2641 & 23.0 & South Korea & 85 & 1368 & 16.1 \\\\ \\hline & & \\multicolumn{4}{c}{**Most productive Institutions**} & \\\\ \\hline & Institution & Publications & Citations & & & Publications & Citations & \\\\
1\\({}^{\\text{st}}\\) & Chinese Academy of & (P) & (C) & C/P & Institution & (P) & (C) & C/P \\\\ & Sciences, China & 52 & 736 & 14.2 & Xiβan Jiaotong & 57 & 880 & 15.4 \\\\ & North China Electric & & & & & & \\\\
2\\({}^{\\text{nd}}\\) & Power University, China & 41 & 625 & 15.2 & Institute of Science and Technology, South Korea & 53 & 913 & 17.2 \\\\
3\\({}^{\\text{rd}}\\) & University of Tehran, Iran & 30 & 509 & 17.0 & Laboratory, United States & 39 & 336 & 8.6 \\\\ \\hline & & \\multicolumn{4}{c}{**Most productive authors**} & \\\\ \\hline & & \\multicolumn{4}{c}{**Most productive authors**} & \\\\ \\hline & Author & Publications & Citations & & & & & \\\\ & (P) & (C) & C/P & Author & Publications & Citations & & \\\\ & & (P) & (C) & Lee, J,I, Korea & & (P) & (C) & C/P \\\\
1\\({}^{\\text{st}}\\) & Theoer, I, Ontario & & & & & & \\\\ & Tech Univeristy, & 24 & 758 & 31.6 & Advanced Institute of & 44 & 801 & 18.2 \\\\ & Oshawa, Canada & & & & & & \\\\ & Jin, H, Chinese & & & & & & \\\\
2\\({}^{\\text{nd}}\\) & Academy of Sciences, & & & & & & \\\\ & China & & & & & & \\\\ & Goswami, D, & & & & & & \\\\
3\\({}^{\\text{rd}}\\) & University of South & 19 & 993 & 52.3 & Argonne National & 26 & 264 & 10.1 \\\\ & Florida, United & & & & & & \\\\ & States & & & & & & \\\\ \\hline & & \\multicolumn{4}{c}{**Main publishing sources**} & \\\\ \\hline & Source & Publications & Citations & & & & & \\\\ & (P) & (C) & C/P & Source & Publications & Citations & & \\\\
1\\({}^{\\text{st}}\\) & Energy Conversion & & & & & & \\\\ & and Management & 158 & 3054 & 19.3 & Proceedings of the & 113 & 605 & 5.4 \\\\
2\\({}^{\\text{nd}}\\) & Energy & 124 & 3212 & 25.9 & Energy & 52 & 1593 & 30.6 \\\\
3\\({}^{\\text{rd}}\\) & Applied Energy & 114 & 3655 & 32.1 & Applied Thermal & 42 & 748 & 17.8 \\\\ & & & & & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 14: Comparison between ISCC and sCO\\({}_{2}\\) topics in terms of the most productive outputs.
Finally, it can be observed in Table 16 that the most frequent keywords for the ISCC topic were quite general ('optimization', 'performance', 'energy'), while those for sCO\\({}_{2}\\) were the name of the topic itself ('supercritical carbon dioxide', 'Brayton cycle','supercritical CO\\({}_{2}\\) Brayton cycle'). It was also observed in the detailed keywords information provided in Table A4 that the most common keywords for the ISCC topic were related to modelling and simulations.
## 5 Conclusions
In recent years, the growing interest in higher conversion efficiencies for CSP applications has led to an increasing number of papers covering the topic of Integrated Solar Combined Cycle technologies. In particular, ISCC interest is based on its ability to increase the contribution of renewable energy sources into the global energy mix at a very high plant efficiency, whether in a pure solar configuration or in hybrid arrangements. Based on the presented bibliometric study, the following conclusions can be summarized:
* There is a growing interest in ISCC topics, as can be observed from the increasing number of publications and citations. This trend sharpened in 2011.
* The most productive countries in terms of the number of publications were China (241), the United States (202) and Spain (115), which have citation/publication ratios similar to the overall average (22.1 citations per work). A similar trend was observed regarding the most productive institutions, with two of them being from China (the Chinese Academy of Sciences and North China Electric Power University).
* However, the most renowned researcher on the ISCC topic was Ibrahim Dincer from Ontario Tech University (Canada), with 24 publications and 758 citations. The second and third most productive authors were from China (the Chinese Academy of Sciences) and the United States (the University of South Florida). Despite their large scientific production, the most cited papers were from Mills (published in _Solar Energy_), Behar (_Renewable and Sustainable Energy Reviews_) and Schuster (_Applied Thermal Engineering_).
* It was interesting that none of those most-cited articles were published at the main publishing sources: _Energy Conversion and Management_ (with 158 publications), _Energy_ (124) and _Applied Energy_ (114).
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**ISCC**} & \\multicolumn{2}{c}{**sCO\\({}_{2}\\)**} \\\\ \\hline
1st & Optimization & 249 & Supercritical carbon dioxide & 178 \\\\
2nd & Performance & 237 & Brayton cycle & 93 \\\\
3rd & Energy & 205 & Supercritical CO\\({}_{2}\\) Brayton cycle & 86 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 16: Comparison between ISCC and sCO\\({}_{2}\\) topics in terms of the most frequent keywords.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline
**Institution** & **Links** & **Total Link Strength** & **Documents** & **Institution** & **Links** & **Total Link Strength** & **Documents** \\\\ \\hline \\multicolumn{6}{c}{**Cluster 1**} & \\multicolumn{6}{c}{**Cluster 2**} \\\\ \\hline CNR & 4 & 4 & 5 & CIEMAT & 1 & 1 & 5 \\\\ Natl Tech Univ Athens & 3 & 3 & 17 & Colorado Sch & 4 & 5 & 6 \\\\ UNED & 2 & 3 & 5 & DLR & 4 & 4 & 7 \\\\ \\multicolumn{6}{c}{German} \\\\ Univ Carlos III Madrid & 3 & 4 & 6 & Aerospace CTR & 1 & 1 & 12 \\\\ Univ Ferrara & 1 & 1 & 5 & Naft Renewable & 3 & 4 & 9 \\\\ Univ Naples Federico II & 1 & 1 & 7 & Sandia Natl Labs & 4 & 4 & 7 \\\\ Univ Politech Madrid & 2 & 4 & 11 & Stanford Univ & 3 & 3 & 5 \\\\ Urmia Univ & 5 & 6 & 9 & Univ Calif & 1 & 1 & 9 \\\\ Urmia Univ Technology & 2 & 3 & 5 & & & \\\\ \\hline \\multicolumn{6}{c}{**Cluster 3**} & \\multicolumn{6}{c}{**Cluster 4**} \\\\ \\hline ETH & 4 & 7 & 10 & Delft Univ & 4 & 4 & 6 \\\\ Paul Scherrer Inst & 2 & 6 & 12 & MIT & 3 & 5 & 11 \\\\ San Diego State Univ & 2 & 2 & 10 & Politech Milan & 4 & 6 & 9 \\\\ Sharif Univ Technol & 2 & 3 & 6 & Politech Torino & 3 & 3 & 5 \\\\ Swiss Fed Inst Technol & 4 & 4 & 7 & Rhein Westfal Th & 4 & 5 & 8 \\\\ Aachen & 4 & 5 & 8 & Aachen & 4 & 5 & 8 \\\\ Univ Mohaghegh Ardab & 2 & 5 & 7 & Univ Brescia & 2 & 5 & 6 \\\\ Univ Tabriz & 4 & 7 & 10 & Univ Seville & 2 & 2 & 11 \\\\ Weizmann Inst Sci & 2 & 3 & 9 & & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 10: Detailed cluster information for co-authorship analysis by organizations (detailed information for Figure 7).
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline
**Country** & **Links** & **Total Link Strength** & **Documents** & **Country** & **Links** & **Total Link Strength** & **Documents** \\\\ \\hline \\multicolumn{6}{c}{**Cluster 1**} & \\multicolumn{6}{c}{**Cluster 2**} \\\\ \\hline Belgium & 7 & 10 & 14 & Canada & 11 & 31 & 44 \\\\ Brazil & 5 & 8 & 17 & Cyprus & 5 & 6 & 6 \\\\ Chile & 5 & 9 & 7 & Egypt & 4 & 6 & 22 \\\\ Finland & 4 & 4 & 7 & England & 19 & 54 & 77 \\\\ Norway & 8 & 9 & 11 & Pakistan & 7 & 10 & 7 \\\\ Portugal & 5 & 5 & 11 & Saudi Arabia & 10 & 25 & 31 \\\\ Spain & 19 & 59 & 115 & Turkey & 8 & 15 & 29 \\\\ Sweden & 16 & 25 & 41 & United Arab & 6 & 6 & 13 \\\\ & & & & & & \\\\ \\hline \\multicolumn{6}{c}{**Cluster 3**} & \\multicolumn{6}{c}{**Cluster 4**} \\\\ \\hline Australia & 14 & 24 & 44 & Germany & 16 & 45 & 83 \\\\ Colombia & 3 & 8 & 6 & Greece & 5 & 13 & 33 \\\\ Iran & 19 & 38 & 97 & Italy & 13 & 45 & 102 \\\\ Peopleles Republic of China & 23 & 80 & 241 & Jordan & 2 & 4 & 7 \\\\ Taiwan & 2 & 2 & 7 & Poland & 4 & 5 & 6 \\\\ United States & 28 & 92 & 202 & & & & \\\\ Vietnam & 4 & 8 & 5 & & & & \\\\ \\hline \\multicolumn{6}{c}{**Cluster 5**} & \\multicolumn{6}{c}{**Cluster 6**} \\\\ \\hline Denmark & 9 & 13 & 13 & Austria & 3 & 3 & 6 \\\\ Japan & 8 & 14 & 30 & Netherlands & 9 & 13 & 10 \\\\ Malaysia & 6 & 8 & 13 & South Africa & 3 & 3 & 21 \\\\ Singapore & 6 & 11 & 11 & Switzerland & 12 & 21 & 39 \\\\ South Korea & 5 & 5 & 12 & & & & \\\\ \\hline \\multicolumn{6}{c}{**Cluster 7**} & \\multicolumn{6}{c}{**Cluster 8**} \\\\ \\hline Algeria & 1 & 2 & 12 & India & 6 & 10 & 48 \\\\ France & 7 & 12 & 29 & Israel & 6 & 14 & 20 \\\\ Thailand & 5 & 5 & 5 & & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 10: Detailed cluster information for co-authorship analysis by countries (detailed information for Figure 8).
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Publication** & **Links** & **Citations** & **Publication** & **Links** & **Citations** \\\\ \\hline \\multicolumn{4}{c}{**Cluster 1**} & & **Cluster 2** \\\\ \\hline Ahmadi (2017a) & 3 & 57 & Al-Sulaiman (2011) & 1 & 51 \\\\ Ahmadi (2017b) & 5 & 50 & Al-Sulaiman (2013) & 4 & 70 \\\\ Baghernejad (2011) & 6 & 181 & Boyaghchi (2015a) & 7 & 57 \\\\ Behar (2011) & 3 & 50 & Boyaghchi (2015b) & 10 & 120 \\\\ Behar (2014) & 17 & 74 & Jing (2012) & 5 & 62 \\\\ Dersch (2004) & 9 & 153 & Jradii (2014) & 7 & 129 \\\\ Eck (2003) & 2 & 130 & Karellas (2008) & 1 & 111 \\\\ Franchini (2013) & 5 & 62 & Meng (2010) & 6 & 54 \\\\ Horn (2004) & 7 & 69 & Mohammadi (2017) & 1 & 83 \\\\ Hosseini (2005) & 5 & 68 & Shirazi (2017) & 2 & 60 \\\\ Jamel (2013) & 20 & 111 & Wang (2009a) & 5 & 67 \\\\ Li (2014) & 11 & 50 & Wang (2012) & 9 & 78 \\\\ Montes (2011) & 7 & 161 & Wang (2015a) & 4 & 65 \\\\ Mezammahalleh (2010) & 8 & 76 & Wang (2015b) & 6 & 96 \\\\ Reddy (2012) & 3 & 51 & & & \\\\ Rovira (2013) & 14 & 75 & & & \\\\ Rovira (2016) & 5 & 52 & & & \\\\ Spelling (2012) & 9 & 81 & & & \\\\ \\hline \\multicolumn{4}{c}{**Cluster 3**} & & **Cluster 4** \\\\ \\hline Bai (2015) & 1 & 66 & Al-Attab (2015) & 5 & 61 \\\\ Good (2016) & 2 & 54 & Behar (2013) & 13 & 335 \\\\ Kalogirou (2001) & 2 & 176 & Boerema (2012) & 1 & 98 \\\\ Khalid (2015) & 1 & 76 & Buck (2002) & 6 & 144 \\\\ Kosmadakis (2011) & 4 & 70 & Chacarregui (2011) & 10 & 127 \\\\ Mills (2004) & 9 & 472 & Crespi (2017) & 4 & 124 \\\\ Modi (2017) & 12 & 80 & Dunham (2014) & 7 & 92 \\\\ Rao (2013) & 2 & 58 & Kribus (1998) & 8 & 138 \\\\ Romero Gomez (2014) & 1 & 101 & Lenert (2012) & 2 & 255 \\\\ Wang (2009b) & 6 & 174 & Schmitz (2006) & 1 & 102 \\\\ Zhang (2006a) & 2 & 56 & Schwarzbozel (2006) & 8 & 156 \\\\ Zhang (2006b) & 4 & 130 & Turchi (2013) & 2 & 198 \\\\ Zhang (2012) & 2 & 212 & Zare (2016) & 5 & 51 \\\\ \\hline \\multicolumn{4}{c}{**Cluster 5**} & & **Cluster 6** \\\\ \\hline Al-Sulaiman (2012) & 7 & 100 & Balcombe (2015) & 1 & 50 \\\\ Al-Sulaiman (2014) & 4 & 107 & Freeman (2015) & 5 & 149 \\\\ Dincer (2015) & 1 & 156 & Freeman (2017a) & 4 & 61 \\\\ Kim (2009) & 1 & 63 & Freeman (2017b) & 6 & 88 \\\\ Li (2013) & 5 & 206 & Karellas (2016) & 2 & 102 \\\\ Nafey (2010) & 4 & 109 & Markides (2013) & 3 & 82 \\\\ Palenzuela (2011) & 1 & 65 & Martinez (2017) & 6 & 51 \\\\ Tchanche (2010) & 5 & 116 & Qiu (2011) & 2 & 192 \\\\ Wang (2011) & 4 & 68 & Qiu (2012) & 2 & 91 \\\\ You (2002) & 2 & 52 & Quoilin (2011) & 6 & 214 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Detailed cluster information for the most cited publications (detailed information for Figure 10).
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline
**Source** & **Links** & **Total Link Strength** & **Citations** & **Source** & **Links** & **Total Link Strength** & **Citations** \\\\ \\hline & \\multicolumn{3}{c}{**Cluster 1**} & \\multicolumn{3}{c}{**Cluster 2**} \\\\ \\hline Applied Energy & 31 & 540 & 114 & Energy Policy & 10 & 19 &
## References
* (1) Martinez, D.M.; Ebenhack, B.W. Understanding the role of energy consumption in human development through the use of saturation phenomena. _Energy Policy_**2008**, _36_, 1430-1435. [CrossRef]
* (2) Jorgenson, A.K.; Alekseyko, A.; Giedraitis, V. Energy consumption, human well-being and economic development in central and eastern European nations: A cautionary tale of sustainability. _Energy Policy_**2014**, 66, 419-427. [CrossRef]
* (3) Oberthur, S.; Ott, H.E. _The Kyoto Protocol: International Climate Policy for the 21st Century_; Springer-Verlag: Berlin, Germany, 1999.
* (4) Rogeli, J.; den Elzen, M.; Hohne, N.; Fransen, T.; Fekete, H.; Winkler, H.; Schaeffer, R.; Sha, F.; Riahi, K.; Meinshausen, M. Paris Agreement climate proposals need a boost to keep warming well below 2 \\({}^{\\circ}\\)c. _Nature_**2016**, _534_, 631-639. [CrossRef]
* (5) UN. COP25 Climate Change Conference. 2019. Available online: [https://unfccc.int/cop25](https://unfccc.int/cop25) (accessed on 26 August 2020).
* (6) UN. About the Sustainable Development Goals--United Nations Sustainable Development. 2020. Available online: [https://www.un.org/sustainableddevelopment/sustainable-development-goals/](https://www.un.org/sustainableddevelopment/sustainable-development-goals/) (accessed on 26 August 2020).
* (7) European Commission. _COM(2014) 15 Final: A Policy Framework for Climate and Energy in the Period from 2020 to 2030_; European Environment Agency (EEA): Brussels, Belgium, 2014; pp. 1-18.
* (8) Li, Q.; Wang, J.; Zhang, Y.; Fan, Y.; Bao, G.; Wang, X. Multi-Period Generation Expansion Planning for Sustainable Power Systems to Maximize the Utilization of Renewable Energy Sources. _Sustainability_**2020**, _12_, 1083. [CrossRef]
* (9) IRENA. Electricity Storage and Renewables: Costs and Markets to 2030. 2017. Available online: [https://www.irena.org/publications/2017/Oct/Electricity-storage-and-renewables-costs-and-markets](https://www.irena.org/publications/2017/Oct/Electricity-storage-and-renewables-costs-and-markets) (accessed on 30 July 2020).
* (10) Lund, H.; Salgi, G. The role of compressed air energy storage (CAES) in future sustainable energy systems. _Energy Convers. Manag._**2009**, _50_, 1172-1179. [CrossRef]
* (11) Cabeza, L.F. (Ed.) _Advances in Thermal Energy Storage Systems: Methods and Applications_; Woodhead Publishing Series in Energy: Cambridge, UK, 2015.
* (12) Kraemer, S. Make Carnot Batteries with Molten Salt Thermal Energy Storage in ex-Coal Plants. SolarPACES. 16 April 2019. Available online: [https://www.solarpaces.org/make-carnot-batteries-with-molten-salt-thermal-energy-storage-from-ex-coal-plants/](https://www.solarpaces.org/make-carnot-batteries-with-molten-salt-thermal-energy-storage-from-ex-coal-plants/) (accessed on 26 August 2020).
* (13) Zhang, H.L.; Baeyens, J.; Degreve, J.; Caceres, G. Concentrated solar power plants: Review and design methodology. _Renew. Sustain. Energy Rev._**2013**, _22_, 466-481. [CrossRef]
* (14) Islam, M.T.; Huda, N.; Abdullah, A.B.; Saidur, R. A comprehensive review of state-of-the-art concentrating solar power (CSP) technologies: Current status and research trends. _Renew. Sustain. Energy Rev._**2018**, _91_, 987-1018. [CrossRef]
* (15) Powell, K.M.; Rashid, K.; Ellingwood, K.; Tuttle, J.; Iverson, B.D. Hybrid concentrated solar thermal power systems: A review. _Renew. Sustain. Energy Rev._**2017**, _80_, 215-237. [CrossRef]
* (16) Zhu, Y.; Zhai, R.; Yang, Y.; Reyes-Belmonte, M.A. Techno-Economic Analysis of Solar Tower Aided Coal-Fired Power Generation System. _Energies_**2017**, _10_, 1392. [CrossRef]
* (17) Aguilar-Jimenez, J.A.; Velazquez, N.; Acuna, A.; Cota, R.; Gonzalez, E.; Gonzalez, L.; Lopez, R.; Islas, S. Techno-economic analysis of a hybrid PV-CSP system with thermal energy storage applied to isolated microgrids. _Sol. Energy_**2018**, _174_, 55-65. [CrossRef]
* (18) Ju, X.; Xu, C.; Hu, Y.; Han, X.; Wei, G.; Du, X. A review on the development of photovoltaic/concentrated solar power (PV-CSP) hybrid systems. _Sol. Energy Mater. Sol. Cells_**2017**, _161_, 305-327. [CrossRef]
* (19) Singer, C.; Buck, R.; Pitz-paal, R.; Muller-steinhagen, H. Assessment of Solar Power Tower Driven Ultrasupercritical Steam Cycles Applying Tubular Central Receivers With Varied Heat. _J. Sol. Energy Eng._**2016**, _132_, 1-12. [CrossRef]
* (20) Turchi, C.S.; Ma, Z.; Neises, T.W.; Wagner, M.J. Thermodynamic Study of Advanced Supercritical Carbon Dioxide Power Cycles for Concentrating Solar Power Systems. _J. Sol. Energy Eng._**2013**, _135_, 041007. [CrossRef]
* (21) Binotti, M.; Astolfi, M.; Campanari, S.; Manzolini, G.; Silva, P. Preliminary assessment of sCO\\({}_{2}\\) cycles for power generation in CSP solar tower plants. _Appl. Energy_**2017**, _204_, 1007-1017. [CrossRef]
* (22) Neises, T.; Turchi, C. A Comparison of Supercritical Carbon Dioxide Power Cycle Configurations with an Emphasis on CSP Applications. _Energy Procedia_**2014**, _49_, 1187-1196. [CrossRef]
* (23) Vignarooban, K.; Xu, X.; Arvay, A.; Hsu, K.; Kannan, A.M. Heat transfer fluids for concentrating solar power systems--A review. _Appl. Energy_**2015**, _146_, 383-396. [CrossRef]
* (24) Liu, M.; Tay, N.H.S.; Bell, S.; Belusko, M.; Jacob, R.; Will, G.; Saman, W.; Bruno, F. Review on concentrating solar power plants and new developments in high temperature thermal energy storage technologies. _Renew. Sustain. Energy Rev._**2016**, _53_, 1411-1432. [CrossRef]
* (25) Ho, C.K.; Iverson, B.D. Review of high-temperature central receiver designs for concentrating solar power. _Renew. Sustain. Energy Rev._**2014**, _29_, 835-846. [CrossRef]
* (26) Ho, C.K. A review of high-temperature particle receivers for concentrating solar power. _Appl. Therm. Eng._**2016**, _109_, 958-969. [CrossRef]
* (27) Stein, W.H.; Buck, R. Advanced power cycles for concentrated solar power. _Sol. Energy_**2017**. [CrossRef]
* (28) Giuliano, S.; Buck, R.; Eguiguren, S. Analysis of Solar-Thermal Power Plants With Thermal Energy Storage and Solar-Hybrid Operation Strategy. _J. Sol. Energy Eng._**2011**. [CrossRef]
* (29) Reyes-Belmonte, M.A.; Sebastian, A.; Gonzalez-Aguilar, J.; Romero, M. Performance comparison of different thermodynamic cycles for an innovative central receiver solar power plant. _AIP Conf. Proc._**2017**, _1850_, 160024. [CrossRef]
* (30) Wang, X.; Wang, J.; Zhao, P.; Dai, Y. Thermodynamic Comparison and Optimization of Supercritical CO\\({}_{2}\\) Brayton Cycles with a Bottoming Transcritical CO\\({}_{2}\\) Cycle. _J. Energy Eng._**2016**, _142_, 04015028. [CrossRef]
* (31) Yu, A.; Su, W.; Lin, X.; Zhou, N. Recent trends of supercritical CO\\({}_{2}\\) Brayton cycle: Bibliometric analysis and research review. _Nucl. Eng. Technol._**2020**. [CrossRef]
* (32) Crespi, F.; Gavagnin, G.; Sanchez, D.; Martinez, G.S. Supercritical carbon dioxide cycles for power generation: A review. _Appl. Energy_**2017**, _195_, 152-183. [CrossRef]
* (33) Turchi, C.S.; Ma, Z.; Dyreby, J. Supercritical carbon dioxide power cycle configurations for use in concentrating solar power systems. In Proceedings of the ASME Turbo Expo, Copenhagen, Denmark, 11-15 June 2012; Volume 5, pp. 967-973. [CrossRef]
* Reyes-Belmonte, M.A.; Sebastian, A.; Romero, M.; Gonzalez-Aguilar, J. Optimization of a recompression supercritical carbon dioxide cycle for an innovative central receiver solar power plant. _Energy_**2016**. [CrossRef]
* Allam et al. (2017) Allam, R.; Martin, S.; Forrest, B.; Fetvedt, J.; Lu, X.; Freed, D.; Brown, G.W., Jr.; Sasaki, T.; Itoh, M.; Manning, J. Demonstration of the Allam Cycle: An Update on the Development Status of a High Efficiency Supercritical Carbon Dioxide Power Process Employing Full Carbon Capture. _Energy Procedia_**2017**, _114_, 5948-5966. [CrossRef]
* Poullikkas (2005) Poullikkas, A. An overview of current and future sustainable gas turbine technologies. _Renew. Sustain. Energy Rev._**2005**, \\(9\\), 409-443. [CrossRef]
* Smith (2017) Smith, R.W. Steam turbine cycles and cycle design optimization: Combined cycle power plants. In _Advances in Steam Turbines for Modern Power Plants_; Woodhead Publishing: Duxford, UK, 2017; pp. 57-92.
* Wasserman and Faust (2012) Wasserman, S.; Faust, K. Social Network Analysis in the Social and Behavioral Sciences. In _Social Network Analysis_; Cambridge University Press: Cambridge, UK, 2012; pp. 3-27. [CrossRef]
* Alcayde et al. (2018) Alcayde, A.; Montoya, F.G.; Banos, R.; Perea-Moreno, A.J.; Manzano-Agugliaro, F. Analysis of research topics and scientific collaborations in renewable energy using community detection. _Sustainability_**2018**, _10_, 4510. [CrossRef]
* Uribe-Toril et al. (2019) Uribe-Toril, J.; Ruiz-Real, J.L.; Milan-Garcia, J.; Valenciano, J.D.P. Energy, economy, and environment: Aw worldwide research update. _Energies_**2019**, _12_, 1120. [CrossRef]
* Hernandez-Escobedo et al. (2018) Hernandez-Escobedo, Q.; Perea-Moreno, A.J.; Manzano-Agugliaro, F. Wind energy research in Mexico. _Renew. Energy_**2018**, _123_, 719-729. [CrossRef]
* Wang et al. (2017) Wang, L.; Wei, Y.M.; Brown, M.A. Global transition to low-carbon electricity: A bibliometric analysis. _Appl. Energy_**2017**, _205_, 57-68. [CrossRef]
* Saikia et al. (2020) Saikia, K.; Valles, M.; Fabregat, A.; Saez, R.; Boer, D. A bibliometric analysis of trends in solar cooling technology. _Sol. Energy_**2020**, _199_, 100-114. [CrossRef]
* Imran et al. (2018) Imran, M.; Haglind, F.; Asim, M.; Alvi, J.Z. Recent research trends in organic Rankine cycle technology: A bibliometric approach. _Renew. Sustain. Energy Rev._**2018**, _81_, 552-562. [CrossRef]
* Yu et al. (2016) Yu, H.; Wei, Y.M.; Tang, B.J.; Mi, Z.; Pan, S.Y. Assessment on the research trend of low-carbon energy technology investment: A bibliometric analysis. _Appl. Energy_**2016**, _184_, 960-970. [CrossRef]
* van Eck and Waltman (2010) van Eck, N.J.; Waltman, L. Software survey: VOSviewer, a computer program for bibliometric mapping. _Scientometrics_**2010**, _84_, 523-538. [CrossRef]
* International Renewable Energy Agency (2019) International Renewable Energy Agency. _Renewable Power Generation Costs in 2019_; International Renewable Energy Agency: Abu Dhabi, United Arab Emirates, 2020.
* Centre for Science and Technology Studies--Leiden University (2020) Centre for Science and Technology Studies--Leiden University. VOSviewer--Visualizing Scientific Landscapes. 2020. Available online: [https://www.vosviewer.com/](https://www.vosviewer.com/) (accessed on 27 August 2020).
* Mills (2004) Mills, D. Advances in solar thermal electricity technology. _Sol. Energy_**2004**, _76_, 19-31. [CrossRef]
* Behar et al. (2013) Behar, O.; Khellaf, A.; Mohammedi, K. A review of studies on central receiver solar thermal power plants. _Renew. Sustain. Energy Rev._**2013**, _23_. [CrossRef]
* Schuster et al. (2009) Schuster, A.; Karellas, S.; Kakaras, E.; Spliethoff, H. Energetic and economic investigation of Organic Rankine Cycle applications. _Appl. Therm. Eng._**2009**, _29_, 1809-1817. [CrossRef]
* Lenert and Wang (2012) Lenert, A.; Wang, E.N. Optimization of nanofluid volumetric receivers for solar thermal energy conversion. _Sol. Energy_**2012**, _86_, 253-265. [CrossRef]
* Quoilin et al. (2011) Quoilin, S.; Orosz, M.; Hemond, H.; Lemort, V. Performance and design optimization of a low-cost solar organic Rankine cycle for remote power generation. _Sol. Energy_**2011**, _85_, 955-966. [CrossRef]
* Iverson, B.D.; Conboy, T.M.; Pasch, J.J.; Kruizenga, A.M. Supercritical CO\\({}_{2}\\) Brayton cycles for solar-thermal energy. _Appl. Energy_**2013**, _111_, 957-970. [CrossRef]
# A Study on Typhoon Center Localization Based on an Improved Spatio-Temporally Consistent Scale-Invariant Feature and Brightness Temperature Perturbations
Chaoyu Yan 1, Jie Guang 1, Zhengqiang Li 1, Gerrit de Leeuw 1 and Zhetting Chen 5

1 State Environmental Protection Key Laboratory of Satellite Remote Sensing & Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China; [email protected] (C.Y.); [email protected] (Z.L.); [email protected] (G.d.L.)

5 School of Information Engineering, Kunming University, Kunming 650214, China; [email protected]
**Keywords:** typhoon; location; SIFT; brightness temperature perturbations
## 1 Introduction
The tropical northwest Pacific is a typhoon-prone area. According to the statistics of the China Meteorological Administration (CMA), more than 20 typhoons have formed each year in the northwest Pacific Ocean over the past decade. Tropical cyclones are classified into the following six categories ([https://tcdata.tyhoon.org.cn/data/doc/TC_std.pdf](https://tcdata.tyhoon.org.cn/data/doc/TC_std.pdf), accessed on 9 May 2006): Tropical Depression (TD), Tropical Storm (TS), Severe Tropical Storm (STS), Typhoon (TY), Severe Typhoon (ST), and Super Typhoon (STY).
Typhoon monitoring refers to the use of various observing platforms to record the physical phenomena, processes, and changes in meteorological elements within a typhoon; the main monitoring means include aircraft, ground-based, radar, and satellite monitoring [3]. Currently, a crucial method for typhoon monitoring is the use of meteorological satellites. Meteorological satellites have become indispensable sources of meteorological data globally, providing essential information for scientific research and everyday applications, and they play a critical role in typhoon monitoring [4]. Detecting the typhoon position from satellite remote sensing helps to quickly determine when typhoons form and where they are located, which allows governments to take adequate preventive measures before a disaster arrives and supports research on global climate change.
Current methods of typhoon identification and localization using satellite imagery are mainly classified into manual monitoring, mathematical morphological methods, and artificial intelligence methods. The initial typhoon positioning and some practical operations in China and abroad use the manual positioning method, which mainly relies on the experience of the staff and some characteristics of the typhoon. The results from the manual method are subjective and the positioning results vary greatly from staff to staff. Currently, most tropical cyclone warning centers in the world utilize the Dvorak method [5; 6; 7] for typhoon localization.
The initial studies on the mathematical morphology method of typhoon localization mainly used radar data [8; 9; 10]. Liu, et al. [11] introduced a method for detecting the center of typhoons through the intersection of the Zero Radial Velocity Line (ZRVL) obtained from a dual-Doppler radar system. This method is applicable to regions where data from Automatic Weather Stations (AWSs) are unavailable. Typhoon localization based on radar data has higher localization accuracy in the case of better radar echo quality as well as typhoon structure, but not in the case of weaker typhoon intensity.
By employing the spiral analysis method, which involves extracting spiral lines from satellite cloud imagery that reflect cloud features, and subsequently employing an algorithm for fitting, the polar coordinate origin of the spiral line is determined as the center of the typhoon [12; 13; 14]. When using a spiral analysis to locate typhoons, the results depend on the characteristics of the typhoon itself, and the thresholds set by the algorithm are not suitable for most cases and need to be adjusted for each situation.
Typhoon center localization is also possible using the method of a wind field analysis [15; 16; 17]. Zhang, et al. [18] proposed a multi-channel satellite cloud image fusion method based on a shear wave transform, and the results showed that the fused images obtained by the proposed algorithm improved the typhoon center localization accuracy and outperformed the comprehensive performance of similar image fusion algorithms. Liu and Wang [19] used infrared sequence images from the FY-2 meteorological satellite to perform pyramidal decomposition and construct a cloud-guided wind field based on a weighted median filtered optical flow model as a way to calculate the typhoon center using a density matrix. The wind field analysis method is applicable to the weaker incipient and extinction periods of typhoons. When the typhoon intensity is high, the wind field environment inside the typhoon is complex, and the accuracy of using the wind field to locate the typhoon center position will be greatly reduced.
In several studies, typhoon edge contour features or image grayscale thresholds have been used to automatically extract typhoon cloud systems and locate their centers [20; 21; 22]. Jaiswal and Kishtawal [23] extracted thermal cyclone center spiral features from geostationary infrared satellite (Meteosat-5) image data to fit the location of the typhoon center by setting certain threshold values for typhoon center localization. These authors calculated the flux convergence point of the brightness temperature (BT) gradient vector and averaged the accumulated fractional values in the density matrix, and the typhoon center location was the highest scoring location. Zhang, et al. [24] proposed an efficient algorithm for typhoon center localization using fractal features and gradients of infrared satellite cloud images. Liu, Wang, Liao, and Fang [3] proposed a typhoon localization algorithm based on the spatio-temporally consistent (STC) Scale-Invariant Feature Transform (SIFT) method to filter the feature points after typhoon matching, and on this basis, the localization of the current typhoon center coordinates was completed. Xie, et al. [25] analyzed the spatial distribution characteristics of brightness temperature perturbation (BTP) in the center regions of typhoons during different time periods. They introduced a typhoon localization method based on brightness temperature data from FY-4A AGRI, known as the Bright-Temperature Perturbation (BTP) typhoon localization method. This algorithm effectively captures the brightness temperature perturbation features at the typhoon center, resulting in high accuracy in typhoon center localization. The BTP typhoon localization method is based on the brightness temperature perturbation factor, which effectively characterizes the state of typhoons during different time periods and in various spatial contexts. It demonstrates distinct advantages in localizing strong typhoons. On the other hand, the STC SIFT typhoon localization algorithm relies on the distribution of characteristic points in two successive images and the relative position of the typhoon center to these points. The STC SIFT method provides good localization accuracy, particularly when dealing with typhoons of weaker intensity. In summary, the BTP and the STC SIFT typhoon localization algorithms are the more commonly used and more accurate algorithms in the mathematical morphological method of typhoon localization.
In recent years, emerging artificial intelligence technologies have also been applied to typhoon identification and location. Permyakov, et al. [26] proposed methods for the estimation of typhoon eyewall characteristics (the center location, the radius and the width, and radii of inner and outer boundaries) based on World Wide Lightning Location Network (WWLLN) data. Magee, et al. [27] used multivariate Poisson regression and considering up to five modes of ocean-atmospheric variability and teleconnection patterns that influence TC behavior, thousands of possible predictor model combinations were compared using an automated variable selection procedure. Kang and Park [28] studied the precipitable water vapor (PWV) variation in a typhoon in 2018 based on the Global Navigation Satellite System (GNSS) inversion of PWV to predict the typhoon path. Some scholars [29; 30] carried out work on a typhoon vortex identification model based on deep image target detection, an intelligent typhoon intensity determination model based on image classification and retrieval, and a typhoon fast and enhanced identification model, and constructed a typhoon intelligent monitoring and forecasting system. Although artificial intelligence technology can be more convenient to achieve fully automated typhoon identification and positioning, the accuracy of the current positioning still needs to be improved.
The mainstream typhoon center localization method is the mathematical morphology method, with the typhoon localization algorithm based on the STC SIFT [3] and the BTP algorithm [25], which are most effective in locating the typhoon center. In this paper, the BTP and STC SIFT typhoon localization methods are improved and applied to Fengyun 4A (FY-4A) Advanced Geosynchronous Radiation Imager (AGRI) data to automatically perform typhoon localization in the northwest Pacific region. FY-4A AGRI L1 data and the FY-4A AGRI cloud top height (CTH) product are used for projection conversion and cloud system identification. Parallax correction is added to make the longitude and latitude of typhoon positioning closer to the surface longitude and latitude. The research area and data used are described in Section 2 and the methodology is explained in Section 3. The use of the improved methods is demonstrated in Section 4 by application to FY-4A AGRI level 1 data collected over the study area on 6 November 2019, at 02:00 Beijing time, for Typhoon Halong (HL) 1923. In Section 5, the two algorithms are applied to the full time series of six other typhoons with different intensities, observed with different time samplings (1-6 h). The results are analyzed based on comparison with optimal path data for the same typhoons provided by CMA. Section 6 quantifies the errors in typhoon localization using the two methods and provides a more in-depth analysis and discussion. Conclusions are presented in Section 7.
## 2 Data and Methods
### 2.1 Research Area
The study reported in this paper was conducted over the northwest Pacific Ocean in the area 100\\({}^{\\circ}\\)E-170\\({}^{\\circ}\\)E, 5\\({}^{\\circ}\\)N-45\\({}^{\\circ}\\)N (Figure 1). The northwest Pacific warm pool serves as a significant genesis area for typhoons worldwide, with tropical cyclones forming throughout the year. On average, about 35 tropical cyclones develop annually in this region, and approximately 80% of them eventually develop into typhoons. Furthermore, roughly a quarter of these typhoons annually impact the coastal areas of China. Approximately 70% of the annual typhoon activity is concentrated in the months July to October, with August and September being the peak period for typhoon formation.
### 2.2 Research Data
#### 2.2.1 FY-4A Advanced Geostationary Orbit Radiation Imager (AGRI) Data
Fengyun-4A (FY-4A) is the first of a series of seven second-generation Geostationary Earth Orbit (GEO) quantitative remote sensing meteorological satellites owned and operated by CMA ([https://www.eoportal.org/satellite-missions/fy-4](https://www.eoportal.org/satellite-missions/fy-4), last access: 23 March 2024), replacing Fengyun-2. FY-4A was launched on 11 December 2016. FY-4A carries the Advanced Geosynchronous Radiation Imager (AGRI), which covers a wavelength range from 0.45 to 13.8 \\(\\upmu\\)m with a spatial resolution of 4 km. Level 1 (L1) full-disk data are observed every hour, with the observation starting 15 min past the whole hour, and two additional (densified) observations are made every 3 h from 0:00 to 21:00 (UTC), so 40 full-disk observations can be obtained every day [31]. FY-4A AGRI data are stored in HDF5 format, including grayscale data in 14 channels, calibration data for each band, the observation time and the start/stop position of each line, etc. FY-4A data have been publicly accessible since the summer of 2018 from the Fengyun Satellite Data Center ([https://satellite.nsmc.org.cn/portalsite/default.aspx](https://satellite.nsmc.org.cn/portalsite/default.aspx), last access: 23 March 2024) of the National Satellite Meteorological Center (NSMC) in China. In this paper, channel 12 thermal infrared data (wavelength: 10.3–11.3 \\(\\upmu\\)m) from the FY-4A AGRI sensor are selected to derive the cloud brightness temperature (BT) as described in Section 3.2.

Figure 1: The yellow rectangular box represents the study area.
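For illustration, a minimal sketch of reading the channel-12 counts from an L1 full-disk HDF5 file and converting them to brightness temperature with the per-channel calibration lookup table is given below. The dataset names ("NOMChannel12", "CALChannel12") are assumptions about the file layout and should be checked against the actual product; this is not the operational processing code.

```python
import h5py
import numpy as np

def read_channel12_bt(path):
    """Read AGRI channel-12 digital counts and map them to brightness temperature (K)."""
    with h5py.File(path, "r") as f:
        dn = f["NOMChannel12"][:]     # digital counts (assumed dataset name)
        cal = f["CALChannel12"][:]    # calibration lookup table: count -> BT in K (assumed name)
    bt = np.full(dn.shape, np.nan, dtype=np.float32)
    valid = dn < cal.size             # counts outside the LUT range are fill/invalid values
    bt[valid] = cal[dn[valid]]
    return bt
```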
#### 2.2.2 Cloud Top Height Data (CTH)
The cloud top height product (CTH) retrieved from AGRI L1 data provides the cloud top height for each AGRI pixel. This product is retrieved from observations in the CO\\({}_{2}\\) absorption channel (13.3 \\(\\upmu\\)m) and two infrared channels (11.2 \\(\\upmu\\)m and 12.4 \\(\\upmu\\)m). A radiative transfer model and a one-dimensional variational method are used to obtain the cloud top temperature (CTT), from which cloud top height is calculated by combining information on the atmospheric temperature field from a numerical weather prediction model [32]. The CTH product is an important parameter for making parallax corrections to FY-4A AGRI data (Section 3.3).
#### 2.2.3 Optimal Path Data
The optimal path dataset used in this paper was obtained from CMA ([https://tcdata.tyhoon.org.cn](https://tcdata.tyhoon.org.cn), last access: 28 March 2024) [33; 34]. This website is a real-time typhoon path release system that can accurately provide the latest and most comprehensive real-time typhoon information, including information on the center location, central pressure, maximum wind speed, moving direction, and forecasted path, as well as integrated precipitation forecasts. The system also integrates precipitation forecasts, satellite cloud maps, and weather conditions. In this paper, we mainly use the optimal path data provided by CMA for comparison with the results obtained from the methods developed as described below.
Seven typhoons were selected for the current study: Typhoon Halong (HL) 1923 in the year 2019; Typhoon Conson (CS) 2113 and Typhoon Kompasu (KPS) 2118 in the year 2021; Typhoon Malakas (MLK) 2201, Typhoon Songda (SD) 2205, and Typhoon Hinnamor (HNM) 2211 in the year 2022; and Typhoon Haikui (HK) 2311 in the year 2023. The characteristics of these seven typhoons and the time sampling with which they were observed during different stages of development are presented in Tables 1 and 2. CS and KPS were observed with a long time sampling of 6 h, MLK mostly with a 3 h sampling, and HL, SD, HNM, and HK with a short time sampling of mostly 1 h.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline
**Typhoon Name** & **Time** & **Amount of Data** & **Observation Time Resolution (h)** & **Typhoon Intensity** & **Average Wind Speed (m/s)** & **Average Travel Speed (km/h)** \\\\ \\hline HL & 2019.11.01–2019.11.10 & 199 & 1 & TD–STY & 35.3 & 19.4 \\\\ CS & 2021.09.05–2021.09.13 & 34 & 6 & TD–STS & 22.29 & 12.4 \\\\ KPS & 2021.10.08–2021.10.14 & 27 & 6 & TD–TY & 22.42 & 15.48 \\\\ MLK & 2022.04.08–2022.04.14 & 68 & 3 & TS–ST & 27.8 & 32.23 \\\\ SD & 2022.07.28–2022.08.02 & 115 & 1 & TD–TS & 16.4 & 22.89 \\\\ HNM & 2022.08.28–2022.09.06 & 203 & 1 & TS–STY & 40.88 & 28.01 \\\\ HK & 2023.08.28–2023.09.06 & 187 & 1 & TD–STY & 28.36 & 17.07 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Characteristics of the seven typhoons selected for the study.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline
**Typhoon Name** & **TD** & **TS** & **STS** & **TY** & **ST** & **STY** \\\\ \\hline HL & 60 & 18 & 12 & 35 & 26 & 48 \\\\ CS & 10 & 4 & 20 & & & \\\\ KPS & 6 & 13 & 7 & 1 & & \\\\ MLK & 7 & 17 & 19 & 11 & 10 & 4 \\\\ SD & 45 & 70 & & & & \\\\ HNM & & 27 & 18 & 36 & 41 & 81 \\\\ HK & 27 & 24 & 82 & 37 & 11 & 6 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Amount of data observed at different stages of seven selected typhoons.
## 3 Methodology
Figure 2 shows the flow chart for the typhoon automatic center positioning process described in this study, with the following main steps:
1. AGRI L1-level product projection conversion pre-processing;
2. Cloud system identification in the study area images;
3. The parallax correction of the longitude and latitude of the target cloud system;
4. Typhoon localization algorithm based on space-time consistent SIFT;
5. Brightness temperature perturbation typhoon localization algorithm.
Each step is discussed below (Sections 3.1-3.5) and illustrated in Figures 3-7 of Section 4 where the typhoon automatic center positioning algorithm is applied. In brief, steps 1-3 are the preparation of the images for the identification and localization of the cloud system in geographic coordinates with reference to the Earth system. Steps 4 and 5 are the two actual typhoon center positioning methods by the application of different techniques to cropped images where only the typhoon cloud system is considered to determine the typhoon center location and the displacement is calculated from two consecutive images, referred to as the current and the previous images.
### 3.1 AGRI L1-Level Product Projection Conversion Pre-Processing
Because the study described in this paper uses typhoon positioning in longitude and latitude coordinates, the row and column numbers of the image are converted to the corresponding longitude and latitude. The position of each FY-4A pixel is computed using the nominal projection of the geostationary orbit defined by the CGMS LRIT/HRIT global specification. The geographic coordinates are calculated with reference to the WGS84 reference ellipsoid [25]. Using this method, the geolocated image corresponding to the study area is obtained.
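As an illustration, a minimal sketch of this row/column-to-latitude/longitude conversion is given below, following the standard CGMS geostationary inverse projection. The grid constants (COFF, LOFF, CFAC, LFAC), the sub-satellite longitude of 104.7°E, and the assumption that lines are numbered from north to south are taken from publicly documented FY-4A conventions but should be verified against the L1 file metadata; this is a sketch, not the operational NSMC code.

```python
import numpy as np

EA, EB, H = 6378.137, 6356.7523, 42164.0   # WGS84 semi-axes and geostationary geocentric radius (km)
SUB_LON = 104.7                            # FY-4A sub-satellite longitude (deg E)
COFF = LOFF = 1373.5                       # 4 km full-disk grid offsets (assumed from the user guide)
CFAC = LFAC = 10233137                     # 4 km full-disk grid scaling factors (assumed)

def rowcol_to_latlon(line, col):
    """Convert AGRI full-disk line/column indices to geographic latitude/longitude (deg)."""
    # Column/line indices -> scanning angles (rad); lines are assumed to count from north.
    x = np.deg2rad((col - COFF) / (2.0 ** -16 * CFAC))
    y = np.deg2rad((line - LOFF) / (2.0 ** -16 * LFAC))
    cosx, cosy, siny = np.cos(x), np.cos(y), np.sin(y)
    k = (EA / EB) ** 2                     # ellipsoid flattening factor
    sd = np.sqrt((H * cosx * cosy) ** 2 - (cosy ** 2 + k * siny ** 2) * (H ** 2 - EA ** 2))
    sn = (H * cosx * cosy - sd) / (cosy ** 2 + k * siny ** 2)
    s1 = H - sn * cosx * cosy
    s2 = sn * np.sin(x) * cosy
    s3 = -sn * siny
    sxy = np.sqrt(s1 ** 2 + s2 ** 2)
    lon = np.rad2deg(np.arctan2(s2, s1)) + SUB_LON
    lat = np.rad2deg(np.arctan(k * s3 / sxy))
    return lat, lon
```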
Figure 2: Typhoon automatic center positioning process.

### 3.2 Image Cloud System Identification

In this study, the cloud system characteristics are derived from the patterns of the cloud brightness temperature (BT) available from AGRI observations at wavelengths in the thermal infrared part of the spectrum, in channel 12 (Section 2.2.1). To convert the measured radiances to BT, the channel 12 images of the AGRI L1 full-disk data are first cropped using the longitude and latitude of the study area. Then, the original AGRI channel 12 data are converted to BT data using the channel 12 calibration data, and the BT data are assigned to different classes using the Fisher criterion [35], which seeks small intra-class and large inter-class variance, to automatically find the thresholds. Typhoon cloud systems generally cover a large area, so erosion and dilation can remove smaller cloud systems and isolated clouds to reduce interference. After further removing cloud systems with an area of less than 1500 pixels, the target cloud system is localized by searching for the cloud system whose center of mass is nearest to the last localized typhoon center. The target cloud system image is then cropped, but its longitude and latitude correspond to those of the cloud top. Therefore, the method described in Section 3.3 is required to correct the longitude and latitude from the cloud top to the surface.
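A simplified sketch of this cloud-system identification step is shown below. It uses OpenCV, with Otsu's method standing in for the Fisher-criterion threshold (the two are equivalent formulations of the same between-/within-class variance criterion); the 1500-pixel area threshold and the nearest-centroid selection follow the description above, while the function names, the kernel size, and the image scaling are illustrative assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

def find_target_cloud_system(bt, prev_center_rc, min_area=1500):
    """Return a mask of the cloud system whose centroid is closest to prev_center_rc (row, col)."""
    # Scale BT to 8-bit; cold (low-BT) cloud tops become the foreground after inverse thresholding.
    bt8 = cv2.normalize(bt.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, cloud = cv2.threshold(bt8, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Erosion followed by dilation (morphological opening) removes small isolated clouds.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cloud = cv2.morphologyEx(cloud, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cloud)
    best, best_dist = None, np.inf
    for i in range(1, n):                               # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        cx, cy = centroids[i]                           # centroid as (col, row)
        d = np.hypot(cy - prev_center_rc[0], cx - prev_center_rc[1])
        if d < best_dist:
            best, best_dist = i, d
    return (labels == best) if best is not None else None
```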
### 3.3 Parallax Correction for the Longitude and Latitude of the Target Cloud System
The satellite viewing geometry and the curvature of the Earth cause a field-of-view (parallax) bias, so that the longitude and latitude of a cloud derived from the satellite observation deviate from the longitude and latitude of the surface location directly below the cloud. The actual longitude and latitude of the cloud were calculated following the method described by Di et al. [32], who used the CTH data to correct the parallax of the projected transformed image, as described below.
In Figure 3, S is the location of the satellite, A is the coordinate of the corresponding sub-satellite point (\\(\\lambda_{\\rm S}\\), 0), O is the center of mass of the Earth, T is the actual position of the cloud, X is the position of the cloud projected on the surface (the cloud height is h), and P is the position of the cloud observed by the satellite (\\(\\lambda_{\\rm P}\\), \\(\\psi_{\\rm P}\\)). Let the radius of the Earth be r, the height of the geostationary satellite above the surface be H, the geocentric distance R = H + r, the distance from the satellite S to the observed cloud position P be L, and the parallax angle \\(\\angle\\)POX = \\(\\alpha\\). Also, let \\(\\angle\\)OSP = \\(\\theta\\) and \\(\\angle\\)POA = \\(\\gamma\\).

First, calculate \\(\\gamma\\):

\\[\\gamma=\\arccos(\\cos(\\psi_{\\rm P})\\times\\cos(\\lambda_{\\rm S}-\\lambda_{\\rm P})) \\tag{1}\\]

Then, calculate the parallax angle \\(\\alpha\\):

\\[\\alpha=\\arcsin\\left(\\frac{\\rm R\\times\\sin(\\gamma)}{\\rm L}\\right)-\\arcsin\\left(\\frac{\\rm R\\times r\\times\\sin(\\gamma)}{\\rm L\\times(h+r)}\\right) \\tag{2}\\]

\\[\\rm where\\ L=\\sqrt{r^{2}+R^{2}-2\\times r\\times R\\times\\cos(\\gamma)} \\tag{3}\\]
Figure 3: Parallax correction geometry relationship model [32].
Finally, the actual latitude of the cloud can be obtained as
\\[\\psi_{X}=\\arcsin\\biggl{(}\\frac{sin(\\psi_{P})\\times sin(\\gamma-\\alpha)}{sin(\\gamma)} \\biggr{)} \\tag{4}\\]
The actual longitude of the cloud is obtained as
\\[\\lambda_{X}=\\lambda_{S}\\pm\\arccos\\biggl{(}\\frac{\\cos(\\gamma-\\alpha)}{\\cos(\\psi_{ X})}\\biggr{)} \\tag{5}\\]
In the above equation, the negative sign is taken when the observed longitude of the cloud is less than that of the sub-satellite point, and the positive sign is taken when the opposite is true.
This method was demonstrated to be effective in correcting the longitude and latitude of the observed cloud top to the corresponding actual longitude and latitude with reference to the sub-satellite Earth surface, in the experiments of Di et al. [32].
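A direct transcription of Equations (1)–(5) into a small Python function is given below as a sketch; the Earth radius, geostationary height and degree/radian handling are illustrative choices, and the per-pixel cloud height h is assumed to come from the CTH product in kilometers. The function is written for scalar inputs away from the sub-satellite point (where the correction is negligible anyway).

```python
import numpy as np

R_EARTH = 6371.0    # mean Earth radius r (km), illustrative value
H_SAT = 35786.0     # geostationary height H above the surface (km)

def parallax_correct(lat_p, lon_p, h, lon_s):
    """Shift the observed cloud coordinates (lat_p, lon_p) to the surface point below the cloud."""
    psi_p, lam_p, lam_s = map(np.deg2rad, (lat_p, lon_p, lon_s))
    r, R = R_EARTH, R_EARTH + H_SAT
    gamma = np.arccos(np.cos(psi_p) * np.cos(lam_s - lam_p))                    # Eq. (1)
    L = np.sqrt(r**2 + R**2 - 2.0 * r * R * np.cos(gamma))                      # Eq. (3)
    alpha = np.arcsin(R * np.sin(gamma) / L) \
            - np.arcsin(R * r * np.sin(gamma) / (L * (h + r)))                  # Eq. (2)
    psi_x = np.arcsin(np.sin(psi_p) * np.sin(gamma - alpha) / np.sin(gamma))    # Eq. (4)
    dlam = np.arccos(np.cos(gamma - alpha) / np.cos(psi_x))                     # Eq. (5)
    # Sign rule of Eq. (5): negative when the observed longitude is west of the sub-satellite point.
    lam_x = lam_s - dlam if lon_p < lon_s else lam_s + dlam
    return np.rad2deg(psi_x), np.rad2deg(lam_x)
```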
### 3.4 Spatio-Temporally Consistent (STC) SIFT Typhoon Localization Algorithm
The STC SIFT typhoon localization algorithm primarily relies on the relationship between characteristic points in the current typhoon image and those in the previous typhoon image. This relationship is used to determine the current typhoon's center coordinates and is applied to the cropped image, which only retains the typhoon cloud system rather than the whole study area. The SIFT feature extraction algorithm was first proposed by Lowe [36] and was improved by some scholars [3; 37; 38]. In the current study, the typhoon localization algorithm based on STC SIFT [3] is used as the starting point for further improvement. The algorithm by Liu, Wang, Liao, and Fang [3] uses the longitude and latitude coordinates of the previous typhoon center along with its corresponding satellite image as historical information for localization. Firstly, based on the longitude and latitude information of the previous typhoon center, they extracted the corresponding image from the current satellite image. The size of the image is related to the time sampling between the previous and current time sampling. The longer the interval, the larger the potential range of typhoon movement, and thus the larger the area set. Then, they matched the typhoon cloud system in this area with the typhoon cloud system in the previous image to determine the current typhoon center position. In the algorithm improved in this study, we extract the corresponding image from the current satellite data using the longitude and latitude of the center of the target cloud system obtained in Section 3.2. The size of the two consecutive images is the same. The algorithm primarily consists of four steps: feature point extraction, spatio-temporal consistency feature selection, regular angular distribution feature selection, and typhoon center localization.
The speed of typhoon movement varies, with an average speed ranging from 20 to 30 km per hour. Additionally, the typhoon center does not move along a straight line at a constant speed. Sometimes, new centers may form near the original center, replacing it, resulting in potentially faster movement within a short period. Therefore, we set the upper limit for the variation in the typhoon center position to 60 km/h. The characteristic points around the typhoon not only undergo translational motion with the typhoon center but also exhibit their own rotational motion. The angular velocity of this rotation varies depending on the distance from the typhoon center; points closer to the center rotate at a higher angular velocity. To describe the range of speed variation in these characteristic points, a small increment is added to the typhoon's movement speed. The maximum selected variation in the speed of characteristic points is 80 km/h.
Spatio-temporal consistency feature selection is mainly based on the time sampling between the two images and the calculation of the displacement of corresponding feature points during that time. The maximum rate of change between the calculated feature points is 80 km/h, and the spatial resolution of the image is 4 km, so the maximum displacement of the typhoon cloud system feature point movement radius \\(\\mathrm{R}_{\\mathrm{f}}\\) during the time sampling T between the two images is given by
\\[\\mathrm{R}_{f}=20\\;\\mathrm{Pixel}/\\mathrm{Hour}\\times\\mathrm{T}, \\tag{6}\\]
The matching results of feature points may be unevenly distributed in space. The application of regular angular distribution feature selection can alleviate such situations. Rotationally uniform distribution filtering takes the center of previous positioning as the coordinate origin, divides the 360 degrees around a circle into 36 equal parts of 10 degrees each, and keeps only 1 feature point with the best match in each of these 36 sectors.
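The sketch below illustrates how the SIFT matching and the two filtering stages described above could be combined, assuming the two co-registered 320 × 320 BT images have already been scaled to 8-bit arrays. The 20 pixel/h displacement limit of Equation (6) and the 10° sectors follow the text, while the ratio-test threshold and all names are illustrative assumptions rather than values prescribed by the paper.

```python
import cv2
import numpy as np

def matched_points(img_prev, img_curr, prev_center, dt_hours):
    """Return filtered (pt_prev, pt_curr, match_distance) pairs; points are (x, y) pixel coords."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < 0.75 * n.distance:               # Lowe-style ratio test (assumed threshold)
            pairs.append((np.array(kp1[m.queryIdx].pt),
                          np.array(kp2[m.trainIdx].pt), m.distance))
    # Spatio-temporal consistency: displacement limited to 20 pixel/h, Eq. (6).
    r_f = 20.0 * dt_hours
    pairs = [p for p in pairs if np.linalg.norm(p[1] - p[0]) <= r_f]
    # Regular angular distribution: keep the best match in each 10-degree sector
    # around the previously located typhoon center.
    best = {}
    for p_prev, p_curr, dist in pairs:
        dx, dy = p_prev - np.asarray(prev_center, dtype=float)
        sector = int((np.degrees(np.arctan2(dy, dx)) % 360.0) // 10)
        if sector not in best or dist < best[sector][2]:
            best[sector] = (p_prev, p_curr, dist)
    return list(best.values())
```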
The current and previous images are pre-processed separately, including cloud system identification, parallax correction, and cropping of the target cloud system. The current and previous images are centered at the geometric centers of the target cloud system in each of the respective images, and both images are cropped to 320 \(\times\) 320 pixels. The feature points in the two images are extracted and matched using the SIFT algorithm. The matched feature points are filtered using the STC and rotation uniform distribution filtering algorithms. The upper limit of the variation range of the typhoon center position is 60 km/h by default, so the maximum typhoon center movement \(\mathrm{R}_{\mathrm{c}}\) during the time interval T between the two observations is
\\[\\mathrm{R}_{\\mathrm{c}}=15\\;\\mathrm{Pixel}/\\mathrm{Hour}\\times\\mathrm{T} \\tag{7}\\]
The typhoon center in the current image is determined as the point whose distances to the current feature points best match the distances between the previous typhoon center and the corresponding feature points. This point is determined by minimizing the distance and angle deviations given by
\[\alpha=\arg\min_{\alpha\in\Omega}\sum_{i=1}^{N}\left|\ \|x_{i}-\alpha\|_{F}-\|x_{i}^{*}-\beta\|_{F}\ \right| \tag{8}\]
The minimum angular deviation is calculated using
\[\alpha=\arg\min_{\alpha\in\Omega}\sum_{i=1}^{N}\left\langle\frac{x_{i}-\alpha}{\|x_{i}-\alpha\|_{F}},\ \frac{x_{i}^{*}-\beta}{\|x_{i}^{*}-\beta\|_{F}}\right\rangle, \tag{9}\]
where \(x_{i}\) is the feature point in the current typhoon cloud map, \(x_{i}^{*}\) is the matching feature point of \(x_{i}\) in the previous typhoon cloud map, \(\alpha\) is the center position of the current typhoon, \(\beta\) is the center position of the previous typhoon, \(N\) is the number of all matching feature points, \(\langle\cdot,\cdot\rangle\) denotes the angle between the two unit vectors, \(\|A\|_{F}=\left(\sum_{j}{a_{ij}}^{2}\right)^{\frac{1}{2}}\), \(\Omega=\left(\|\alpha-\beta\|_{F}<R_{c}\right)\), and \(R_{c}\) is the radius of typhoon center movement. Formulas (8) and (9) search for the point \(\alpha\) within the radius \(R_{c}\) centered at \(\beta\) that satisfies the two equations above. After finding the locations of the minimum distance and angle deviations, the gradients of the two objective functions with respect to the surrounding points are calculated. The solution with the larger gradient is selected as the final result, and the typhoon center in the current image is the location where the corresponding deviation is minimal.
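A minimal sketch of this search is given below: candidate centers \(\alpha\) on the pixel grid within radius \(R_{c}\) (Eq. (7), in pixels) of the previous center \(\beta\) are scored with the distance-deviation objective of Eq. (8) and the angle-deviation objective of Eq. (9), and the two resulting minima are returned; the final selection between them by gradient comparison is not shown and would follow the procedure described above.

```python
# Sketch of the candidate-center search of Eqs. (8) and (9). curr_xy[i] and
# prev_xy[i] are a matched feature-point pair, beta is the previous center in
# pixel coordinates, and r_c is the search radius in pixels.
import numpy as np

def locate_center_candidates(curr_xy, prev_xy, beta, r_c):
    d_prev = np.linalg.norm(prev_xy - beta, axis=1)        # ||x_i* - beta||
    u_prev = (prev_xy - beta) / d_prev[:, None]            # unit vectors from beta

    best_dist = (None, np.inf)   # minimizer of Eq. (8)
    best_ang = (None, np.inf)    # minimizer of Eq. (9)
    for dx in range(-int(r_c), int(r_c) + 1):
        for dy in range(-int(r_c), int(r_c) + 1):
            if dx * dx + dy * dy > r_c * r_c:
                continue
            alpha = np.array([beta[0] + dx, beta[1] + dy], dtype=float)
            d_curr = np.linalg.norm(curr_xy - alpha, axis=1)
            u_curr = (curr_xy - alpha) / d_curr[:, None]
            dist_dev = np.sum(np.abs(d_curr - d_prev))                     # Eq. (8)
            cosang = np.clip(np.sum(u_curr * u_prev, axis=1), -1.0, 1.0)
            ang_dev = np.sum(np.arccos(cosang))                            # Eq. (9)
            if dist_dev < best_dist[1]:
                best_dist = (alpha, dist_dev)
            if ang_dev < best_ang[1]:
                best_ang = (alpha, ang_dev)
    return best_dist[0], best_ang[0]
```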
### Brightness Temperature Perturbation (BTP) Typhoon Location Method
The brightness temperature perturbation (BTP) typhoon location method is inspired by the wind stress perturbation theory. Chelton et al. [39] used the QuikSCAT satellite radar scatterometer to measure near-surface wind speed and direction over the global ocean at a 25 km resolution, showing that the divergence and curl of the wind stress exhibit persistent small-scale features. Maloney and Chelton [40] analyzed observations of wind stress from the NASA SeaWinds scatterometer and the advanced microwave scanning radiometer and found that wind stress increased linearly with increasing sea surface temperature. Minobe et al. [41] studied the effect of sea surface temperature fronts on precipitation using an atmospheric model and noted that typhoon clouds form with very low cloud top temperatures. The study by Wei et al. [42] showed that there is a significant positive correlation between the divergence and curl of the wind stress and the perturbation of sea surface temperature. Correspondingly, the wind field disturbances caused by mesoscale sea surface temperature perturbations over the ocean can also lead to changes in the marine environment. Changes in the wind field can result in alterations in surface momentum and heat flux, thereby impacting the oceanic environment.
Brightness temperature perturbation (BTP) refers to the variation or deviation in brightness temperature (BT) from its average value or baseline state at a specific observation point or region. BTP is commonly used to analyze dynamic processes in the atmosphere or ocean, such as the impact of weather systems and ocean temperature changes on climate and meteorological conditions. The BTP typhoon location method used in the current study is an improvement on the BTP typhoon localization method of Xie et al. [25]. Xie et al. [25] used a polynomial interpolation method for short-term typhoon forecast data provided by the Typhoon and Ocean Meteorological Forecast Center of CMA as an initial guess for the location of the current typhoon center. BT gradients within a window around the estimated location were then used to provide the actual typhoon center location. In the current study, the BTP typhoon localization method primarily utilizes the typhoon's BT images to calculate perturbation values in the central region, rather than over the entire typhoon. This is because the algorithm uses the geometric center of the cloud system as the center, with a radius of 40 km and a step size of 8 km as the search range, to search for the maximum BTP value near the typhoon center. The algorithm centers the image at the geometric center of the target cloud system and intercepts an image whose size depends on the time interval between the current and previous images; the shorter the interval, the smaller the cropped image. Parallax correction is applied to the longitude and latitude determined by the typhoon positioning to obtain the surface-referenced longitude and latitude of the cloud system, and the images are cropped to retain only the target cloud system.
For the application of the BTP typhoon localization method, the original AGRI channel 12 data need to be converted to BT data as described in Section 3.2.
The gradients in the zonal and meridional directions of the BT images are calculated separately as follows:
\\[G_{x}(x,y) =\\frac{\\left(H(x+1,y)-H(x-1,y)\\right)}{2} \\tag{10}\\] \\[G_{y}(x,y) =\\frac{\\left(H(x,y+1)-H(x,y-1)\\right)}{2} \\tag{11}\\]
where \\(x\\) and \\(y\\) are the coordinates of the corresponding image element and \\(H(x,y)\\) is the BT of the corresponding image element, \\(G_{x}(x,y)\\) is the BT gradient in the longitude direction of the corresponding image element, and \\(G_{y}(x,y)\\) is the BT gradient in the latitude direction of the corresponding image element. In comparison with the gradient calculation method proposed by Xie et al. [25], the points \\(x+1\\), \\(y\\) and \\(x-1\\), \\(y\\) are used to calculate the gradient with Equation (10), which means moving 2 steps in \\(x\\) (and the same for \\(y\\) in Equation (11)). This is performed because typhoons exhibit high BTP values not only in their central regions but also in other areas, which can have a certain impact on the localization results. Utilizing this method for gradient calculation can to some extent suppress interference from non-central regions and improve localization accuracy.
The BT gradient \\(G_{BT}\\) of the typhoon is obtained as follows:
\[G_{BT}=\sqrt{\left(G_{x}(x,y)\right)^{2}+\left(G_{y}(x,y)\right)^{2}} \tag{12}\]

The divergence of the BT gradient is used to represent the intensity magnitude of the vector field emanation at individual points in space:
\[div\ G_{BT}=\frac{\delta G_{x}(x,y)}{\delta x}+\frac{\delta G_{y}(x,y)}{\delta y} \tag{13}\]
where \\(div\\ G_{BT}\\) is the divergence of the BT gradient. The curl of the BT gradient is used to express the degree of rotation caused by the vector field near a point of the micro-element in the region:
\\[curl\\ G_{BT}=\\left(\\frac{\\delta G_{y}(x,y)}{\\delta x}-\\frac{\\delta G_{x}(x,y) }{\\delta y}\\right)\\cdot\\overset{\\rightarrow}{k} \\tag{14}\\]
where \(curl\ G_{BT}\) is the curl of the BT gradient, and \(\overset{\rightarrow}{k}\) is the curl direction. The BTP value \(P\) of the typhoon cloud system is calculated from the divergence and curl:
\\[P=\\sqrt{\\left(div\\ G_{BT}\\right)^{2}+\\left(curl\\ G_{BT}\\right)^{2}} \\tag{15}\\]
The BTP maximum is first searched for within a 40 km radius around the geometric center of the cloud system, with a search step of 8 km. Then, using the location of this maximum as the center, the BTP minimum is searched for within an 8 km by 8 km area, and this location is identified as the typhoon center.
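A minimal sketch of Eqs. (10)-(15) and of this two-stage search is given below, assuming a BT image with 4 km pixels and the geometric center of the target cloud system given in pixel (row, column) coordinates; the simple central differences used for the divergence and curl, the wrap-around handling of image edges, and the assumption that the center lies well inside the cropped image are simplifications of this sketch.

```python
# Sketch of the BTP field (Eqs. (10)-(15)) and the coarse-to-fine center search.
import numpy as np

def btp_field(bt):
    gx = 0.5 * (np.roll(bt, -1, axis=1) - np.roll(bt, 1, axis=1))   # Eq. (10)
    gy = 0.5 * (np.roll(bt, -1, axis=0) - np.roll(bt, 1, axis=0))   # Eq. (11)
    # Divergence (Eq. 13) and curl (Eq. 14) of the gradient field, again with
    # central differences; Eq. (12), the gradient magnitude, is not needed here.
    div = 0.5 * (np.roll(gx, -1, axis=1) - np.roll(gx, 1, axis=1)) + \
          0.5 * (np.roll(gy, -1, axis=0) - np.roll(gy, 1, axis=0))
    curl = 0.5 * (np.roll(gy, -1, axis=1) - np.roll(gy, 1, axis=1)) - \
           0.5 * (np.roll(gx, -1, axis=0) - np.roll(gx, 1, axis=0))
    return np.sqrt(div ** 2 + curl ** 2)                            # Eq. (15)

def locate_center_btp(bt, geo_center_rc, pixel_km=4.0):
    p = btp_field(bt)
    r0, c0 = geo_center_rc
    radius, step = int(40 / pixel_km), int(8 / pixel_km)            # 10 px, 2 px
    # Coarse stage: BTP maximum within 40 km of the geometric center.
    best_val, best_rc = -np.inf, (r0, c0)
    for dr in range(-radius, radius + 1, step):
        for dc in range(-radius, radius + 1, step):
            if dr * dr + dc * dc <= radius * radius and p[r0 + dr, c0 + dc] > best_val:
                best_val, best_rc = p[r0 + dr, c0 + dc], (r0 + dr, c0 + dc)
    # Fine stage: BTP minimum inside an 8 km x 8 km box around the coarse maximum.
    r1, c1 = best_rc
    win = p[r1 - step:r1 + step + 1, c1 - step:c1 + step + 1]
    dr, dc = np.unravel_index(np.argmin(win), win.shape)
    return r1 - step + dr, c1 - step + dc
```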
## 4 Results
### Typhoon Cloud System Identification and Correction of Parallax
The algorithm developed in this study was tested by application to FY-4A AGRI level 1 data over the study area (Section 2.1) collected for typhoon HL (Table 1), on 6 November 2019, at 02:00 Beijing time, as an example. Figure 4a shows the initial data corresponding to the study area. A map showing the spatial distribution of the BT over the study area, obtained after image pre-processing as described in Section 3, is presented in Figure 4b. Normally, the cloud top temperature is much lower than that of the sea and land surface. Thus, the BT map shows the locations of the cloud systems in the study area as dark blue areas with low CTT. To accurately extract the dark blue regions in the BT spatial distribution map, the BT data were converted to binary data by using the Fisher criterion, which minimizes the intra-class variance while maximizing the inter-class variance, to automatically determine the threshold. The result, i.e., the binary map in BT space, is presented in Figure 4c. To identify candidate typhoon cloud systems, erosion and dilation were applied and small cloud systems were removed. The results in Figure 4d show that the remaining cloud systems cover large areas and can be considered as candidate typhoon cloud systems. After calculating the center of mass of each candidate cloud system, the center of mass of the cloud system closest to the typhoon center in the previous observation (the cloud system at about 20\({}^{\circ}\)N, 150\({}^{\circ}\)E) was selected as the target typhoon cloud system. The result in Figure 5a shows that the typhoon cloud system has an overall lower CTT, while the CTT in the eye of the typhoon is substantially higher. This temperature distribution is in good agreement with the properties of a typhoon system with an eye. FY-4A cloud top height products were used for parallax correction, i.e., to match the longitude and latitude of each pixel to the surface longitude and latitude. The result after the application of the parallax correction to the data in Figure 5a is presented in Figure 5b. The results before and after parallax correction in the typhoon eye area are shown in Figure 5c,d. It can be seen that the latitude and longitude of the typhoon eye area were significantly adjusted.
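A minimal sketch of this identification step is given below; Otsu's method is used here as a stand-in for the Fisher-criterion thresholding, and the specific library routines, iteration counts, and minimum-size threshold are illustrative assumptions rather than values taken from this study.

```python
# Sketch of cloud-system identification: automatic BT thresholding, erosion and
# dilation, removal of small components, and selection of the candidate whose
# centroid is closest to the previous typhoon center (row, column coordinates).
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def find_target_cloud_system(bt, prev_center_rc, min_pixels=2000):
    thr = threshold_otsu(bt)
    mask = bt < thr                                    # cold (low BT) pixels = cloud
    mask = ndimage.binary_erosion(mask, iterations=2)
    mask = ndimage.binary_dilation(mask, iterations=2)

    labels, n = ndimage.label(mask)
    best_label, best_dist = None, np.inf
    for lab in range(1, n + 1):
        component = labels == lab
        if component.sum() < min_pixels:               # discard small cloud systems
            continue
        centroid = ndimage.center_of_mass(component)
        d = np.hypot(centroid[0] - prev_center_rc[0], centroid[1] - prev_center_rc[1])
        if d < best_dist:
            best_label, best_dist = lab, d
    if best_label is None:
        return None
    return labels == best_label                        # mask of the target cloud system
```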
Figure 4: The application of the typhoon detection algorithm to identify the typhoon cloud system over the northwest Pacific using FY-4A AGRI level 1 data on 6 November 2019, at 02:00 Beijing time (typhoon HL). (**a**) The initial data after projection conversion pre-processing, (**b**) the spatial distribution of the BT over the study area (unit: K), (**c**) the binarized image of the BT spatial distribution, and (**d**) further processing that shows the locations of candidate target cloud systems.

Figure 5: The BT distribution of the typhoon cloud system using FY-4A AGRI level 1 data on 6 November 2019, at 02:00 Beijing time: the BT distribution of the target cloud system before (**a**) and after (**b**) parallax correction. (**c**) and (**d**) show details of the typhoon eye area before and after parallax correction, respectively.

### Results from the Brightness Temperature Perturbation Typhoon Localization Algorithm

To accurately determine the typhoon center, the operations described in Section 3.5 were executed using the BT data presented in Figure 5b. First, the longitudinal and latitudinal BT gradients were calculated. The results in Figure 6a,b show that the typhoon eye feature is enhanced in the center of the eye and that clear BT maxima and minima occur between the typhoon cloud area and the eye area. These minima and maxima reflect that the sea surface temperature is higher than the CTT, which can only be observed in the cloud-free typhoon eye area. Therefore, there is a large BT gradient across the junction of the typhoon eye area and the cloud area, and BT maxima and minima occur.
Using the gradients, the BT divergence and curl can be calculated and the results are presented in Figure 6c,d, respectively. Figure 6c,d show the clear structures in the BT divergence and curl in the typhoon eye. A negative BT divergence indicates convergence, which in turn indicates that the weather situation is favorable for further enhancement and development in convective weather such as cyclones. The BT curl reaches a maximum in the eye area, indicating strong convergence, and is smaller in the cloud area.
Using the BT divergence and curl, the BTP can be calculated and the BTP spatial distribution over the typhoon cloud area is presented in Figure 6e. The results well describe the BTP characteristics of the typhoon center, showing that the perturbation in the eye area is substantially larger than that in the cloud area. In the cloud area, only the cloud top BT can be observed, for which the BT difference is very small and thus the BTP value is also small.
Using the BTP data, the location of the typhoon center is determined at 20.25\({}^{\circ}\)N, 150.42\({}^{\circ}\)E. This location compares favorably with that for the typhoon center provided by CMA at 20.3\({}^{\circ}\)N, 150.5\({}^{\circ}\)E. These locations are indicated in Figure 6e and the distance between them is 9.43 km, about two pixels.

Figure 6: Results from the BTP typhoon localization algorithm applied to FY-4A AGRI level 1 data over the northwest Pacific on 6 November 2019, at 02:00 Beijing time (typhoon HL): (**a**) the longitudinal BT gradient near the typhoon eye area (unit: km); (**b**) latitudinal BT gradient near the typhoon eye area; (**c**) spatial distribution of BT divergence (unit: N-m\({}^{-3}\times 10^{7}\)); (**d**) spatial distribution of the BT curl (unit: N-m\({}^{-3}\times 10^{7}\)); (**e**) spatial distribution of BTP. The location of the typhoon as determined from the BTP distribution is indicated with a red + and the location of the typhoon center provided by CMA is indicated with a blue star.
### STC SIFT Typhoon Localization
The typhoon localization method using STC SIFT features (see Section 3.4) is illustrated in Figure 7. Previous images are shown to the left of Figure 7a,b, and current images are shown at the right. Feature points in the current and previous images are extracted as shown in Figure 7a. The feature points in both images are matched, and filtered using the STC and rotation uniform distribution filtering algorithms. The STC filtering removes feature points with large errors in the matching results, and the rotation uniform distribution makes the feature points evenly distributed across the image, as shown in Figure 7b. The matching lines between the filtered feature points are approximately parallel to each other. The minimum distance deviation and minimum angle deviation are calculated using the filtered feature points to obtain the typhoon center location as described in Section 3.4. The results are presented in Figure 7c. The optimum typhoon center location is 20.43\\({}^{\\circ}\\)N, 150.43\\({}^{\\circ}\\)E. The location of the typhoon center provided by CMA is 20.3\\({}^{\\circ}\\)N, 150.5\\({}^{\\circ}\\)E, and the difference is 14.76 km, about three pixels.
Figure 7: Typhoon center localization with the STC SIFT feature method using FY-4A AGRI level 1 data on 6 November 2019, at 02:00 Beijing time (typhoon HL): (**a**) feature point distributions in the extracted historical image (**left**) and the current image (**right**); (**b**) results of matching the remaining feature points after STC filtering and rotation uniform distribution filtering; (**c**) the comparison of the final positioning result (red +) with the typhoon center location provided by CMA (blue star).
## 5 Analysis
The application of two typhoon center localization methods, BTP and STC, has been illustrated by using data from typhoon HL observed by FY-4A AGRI on 6 November 2019, at 02:00 Beijing time. The results were evaluated by comparison with the best path data provided by CMA. The BTP and STC methods were also applied to six other typhoons, with characteristics listed in Table 1 and the quantity of observational data under different typhoon intensities listed in Table 2. The positioning accuracy of typhoons is related to the typhoon observation time sampling, the intensity of the typhoon itself, and the typhoon positioning method used. The average accuracies of the localization of the centers of these seven typhoons using the BTP and STC methods, for different observation time resolution periods and different typhoon intensities, are presented in Table 3 and discussed below. The optimal paths were determined by the application of the two typhoon positioning methods to four typhoons (SD, HMN, HL, and HK), which were observed with a time resolution of 1h. The comparison of the results with the best paths provided by CMA is presented in Figure 8. The data in Figure 8 show that both methods provide relatively good positioning accuracy. Additionally, Table 4 presents the typhoon localization accuracy for the blue-outlined area (before developing into typhoon intensity, TY) in Figure 8.
A long time sampling interval (such as 6 h) is too long to accurately observe changes in the nature and position of the typhoon. For lower observation time resolution, the error in the results becomes larger.
With the development in remote sensing technology, the temporal resolution of satellite observations has been improved, and for the FY-4A satellite, it is 1 h. Figure 8 shows a comparison of the optimal paths determined using the BTP and STC methods for strong typhoons (HL, 199 validation results; HNM, 203 validation results; and HK, 187 validation results) and a weaker typhoon (SD, 115 validation results), with the best path provided by CMA, all with an observation time resolution of 1 h. For the stronger typhoon (HNM), the best result is obtained using the BTP typhoon localization method, with an average accuracy of 18.74 km (Figure 8c). For the weaker typhoon (SD), the localization using the STC SIFT method provides the best result, with an average accuracy of 26.43 km (Figure 8b). The algorithm presented in the current study has mainly been developed for observations with a time resolution of 1 h, so the methods are applicable for today's observation conditions.

Figure 8: The comparison of the optimal paths of SD, HNM, HL, and HK determined using two typhoon localization methods, with the best path provided by CMA. The left column shows results from the BTP typhoon localization method and the right column shows results from the STC SIFT feature typhoon localization method for typhoons SD (**a,b**), HNM (**c,d**), HL (**e,f**), and HK (**g,h**). The average accuracy, from comparison with the CMA path, is indicated in each figure. The region outlined by blue lines delineates the portion of the typhoon characterized by lower intensity (before developing into typhoon intensity, TY). Table 4 presents the typhoon localization accuracy for the blue-outlined area.
### Analysis of Impact of Different Typhoon Intensities
The positioning accuracies for typhoons of different intensities are presented in Table 3. The accuracy of the center localization results for the typhoons CS and KPS, with long time sampling (6 h), is relatively low. However, the overall typhoon positioning accuracy for CS is better than that for KPS. This is attributed to the continuous changes in the position and form of the typhoon over time, with these changes becoming more pronounced for stronger typhoons. During the three typhoon intensity stages TD, TS, and STS (see Introduction), both KPS and CS were mostly observed with a 6 h time sampling. However, the moving speed and intensity of KPS were higher than those of CS in the same stage. Consequently, in the long time sampling, the positioning accuracy decreases with the enhancement in typhoon intensity.
The time sampling of the typhoon MLK observations was 3 h, and the overall intensity of the typhoon was strong. Despite the intensification of the typhoon, there was not a significant improvement in the positioning accuracy. This is attributed to parameter settings under the 3 h time sampling, occasionally causing anomalous results due to abrupt changes in the typhoon's speed, consequently reducing the overall accuracy of the center localization of typhoon MLK. The data presented in Table 3 during the TS, STS, and TY intensity periods illustrate this scenario, with most data points indicating relatively high typhoon velocities, approximately exceeding 60 km/h. In typhoon MLK's positioning, the overall accuracy of STC SIFT typhoon localization was lower than that of BTP. During periods of lower intensity for typhoon MLK (weaker than typhoon level), the positioning accuracies for both methods were similar. However, during higher-intensity periods (typhoon level or stronger) for typhoon MLK, the accuracy of STC SIFT typhoon localization did not substantially improve while the BTP typhoon localization provided better accuracies. This decrease in accuracy during higher-intensity periods (typhoon level or stronger) for STC SIFT typhoon localization is due to the rapid changes in the typhoon's inherent characteristics during these phases. Monitored at a 3 h time sampling, the considerable variations in feature points extracted from consecutive images resulted in reduced positioning accuracy. In contrast, the BTP typhoon localization method, relying on the inherent nature of the typhoon for positioning, experienced less impact from these rapid changes.
For observations with short time sampling, such as for typhoons SD and HNM, the data in Table 3 show that the positioning accuracies are much better than those for typhoons observed in long time sampling, which confirms that the overall accuracy of the typhoon center positioning improves with the shortening of the time sampling of typhoon observation. However, the positioning accuracy of both methods does not increase together with the intensification of typhoon strength. For weaker typhoons (taking SD as an example), as the typhoon intensity increases, the positioning accuracy decreases for both methods. This is attributed to this phase being characterized by the formation or dissipation of the typhoon, where its movement speed and inherent changes accelerate with the increase in typhoon intensity, leading to decreased positioning accuracy. In the case of stronger typhoons (using HMN as an example), as the typhoon intensity increases, the accuracy of BTP typhoon localization improves, while the accuracy of STC SIFT typhoon localization does not change much. This is primarily because at a 1 h time sampling, as the typhoon intensity strengthens, the BTP typhoon localization algorithm becomes more adept at capturing the post-change characteristics of the typhoon, while the STC SIFT typhoon localization algorithm, with this time resolution, extracts minimal changes in feature points.
### Analysis of Impact of Different Algorithms
The comparison of the results from the STC SIFT and BTP typhoon localization methods in Table 3 shows that, for long time sampling, the STC SIFT method provides better accuracy than the BTP method. The reason is that the STC SIFT method utilizes the whole typhoon area while the BTP method only uses the (smaller) area near the eye of the typhoon. As a result, the stability of typhoon localization with the STC SIFT is better than that of the BTP method, which is prone to missing important typhoon features when they are outside the relatively small study area or located at the edge. This phenomenon is more significant for high typhoon intensity. Taking typhoon CS as an example, the accuracy of the BTP typhoon localization method is significantly lower when the typhoon intensity is highest. Consequently, the influence of the long time sampling of 6h on the properties and localization of the typhoon is larger than that of the typhoon intensity. This leads to the conclusion that applying the BTP typhoon localization algorithm to small areas under low observation time resolution introduces significant uncertainty.
For short time sampling, the performance of the BTP typhoon localization is better than that of the STC SIFT method for typhoons with higher intensity (typhoon level or higher), and its accuracy increases with typhoon intensity, whereas the performance of typhoon localization with STC SIFT features is better for typhoons with weaker intensity (weaker than typhoon level). This is illustrated with the typhoon center localization results, using both the STC SIFT and the BTP methods, for the weaker typhoon SD, for observations with 1 h time sampling (Figure 8a,b). The localization results of the two methods for weak typhoons are quite similar. However, in Figure 8a and Table 4, the BTP typhoon localization algorithm does not perform well in the early stages of typhoon formation (the regions in the blue rectangles). Additionally, these results display significant latitude discrepancies when compared to the optimal typhoon path provided by CMA during this period. The results from the typhoon localization algorithm with the STC SIFT method in Figure 8b compare much better with the CMA path in the early stages of typhoon formation (the regions in the blue rectangles) because the STC SIFT method uses features. More feature points can be extracted as the time sampling is shorter and the movement and variation in the typhoon are smaller, which improves the localization accuracy. For the typhoons HNM (Figure 8c,d), HL (Figure 8e,f), and HK (Figure 8g,h), during their inception or dissipation stages (in the blue rectangles), the accuracy of the STC SIFT typhoon localization method consistently surpasses that of the BTP typhoon localization method, as indicated by the data in Table 4. This further substantiates the advantage of the STC SIFT typhoon localization method during periods of relatively weaker typhoons (weaker than typhoon level).
The performances of the BTP and STC SIFT methods for locating the typhoon center for a stronger typhoon are illustrated with the results from typhoon HNM, for observations with 1 h time sampling (Figure 8c,d). The results from the typhoon localization algorithm using the BTP method in Figure 8c compare very well with the optimal typhoon path of CMA. Also, the typhoon localization results using the STC SIFT method in Figure 8d show an overall satisfactory performance but with larger deviations from the CMA path than the BTP results. The high intensity of typhoon HNM, which moves fast and changes direction, results in the extraction of a smaller number of feature points and thus lower localization accuracy. The BTP relies mainly on the spatial distribution of perturbations in the BT and the stronger the typhoon, the more significant the perturbation distribution, resulting in the better performance of the BTP method in this case. While the BTP typhoon localization algorithm demonstrates superiority in locating stronger typhoons (typhoon level or stronger), the localization results may not be accurate, as observed in the case of typhoon HK (Figure 8g) during the middle stages of typhoon development, displaying a degree of scatter. This may stem from the focus of the BTP method on the typhoon's eye region, which may move outside the target window due to sudden increases in the typhoon's movement speed.
Furthermore, in terms of algorithm efficiency, the BTP method, targeting the typhoon's central region, is evidently more advantageous for localized studies. However, concerning algorithm stability, the STC SIFT typhoon localization, studying the entire typhoon, presents a wider research scope, reducing the likelihood of significant errors compared to BTP localization, thereby enhancing the algorithm's stability.
## 6 Discussion
This study quantified the impact of the time sampling between images on the accuracy of the two typhoon localization methods. Experiments were conducted using images sampled at different time intervals, and the errors of the two typhoon localization methods with time samplings of 1, 3, and 6 h were calculated relative to the results from the China Meteorological Administration (CMA). Since there were fewer data points for the 3 h and 6 h sampling, data from the 1 h sampling were used to supplement the calculations for these longer intervals (Table 5). For the STC SIFT method, as the time sampling increased, the median and mean errors exhibited linear growth, indicating that the longer the sampling interval, the greater the localization error. For the BTP method, the localization error also increased rapidly as the time sampling grew. When comparing the results from both methods to the CMA results, the interquartile range was smaller for the 1 h sampling, with lower uncertainty (the lower quartile error was below four pixels, and the upper quartile error was below eight pixels). However, beyond the 1 h sampling, the interquartile range for both methods increased significantly, with the lower quartile exceeding 10 pixels in the 6 h sampling.
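The error statistics summarized in Table 5 can be reproduced from per-fix center estimates as sketched below; the haversine great-circle distance and the Earth radius used here are standard choices rather than details specified in this study.

```python
# Sketch of the error statistics: great-circle (haversine) distance between each
# estimated center and the corresponding CMA best-track center, followed by the
# mean, median, and quartiles reported in Table 5.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2.0 * r_earth * np.arcsin(np.sqrt(a))

def error_summary(est_latlon, cma_latlon):
    est, cma = np.asarray(est_latlon), np.asarray(cma_latlon)   # (N, 2) arrays of (lat, lon)
    err = haversine_km(est[:, 0], est[:, 1], cma[:, 0], cma[:, 1])
    return {"mean": err.mean(), "median": np.median(err),
            "lower_quartile": np.percentile(err, 25),
            "upper_quartile": np.percentile(err, 75)}
```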
The study also quantified the impact of typhoon intensity on the accuracy of the two typhoon localization methods using images with a 1 h sampling (a total of 704 typhoon data points, including 130 for TD, 138 for TS, 112 for STS, 108 for TY, 81 for ST, and 135 for STY). Experiments were conducted for typhoons of different intensities. As shown in Figure 9, the orange line inside each box represents the median error, while the green dashed line indicates the mean error. The top and bottom edges of each box denote the interquartile range, and the whiskers represent the maximum and minimum errors. For STC SIFT typhoon localization (Figure 9a), the overall accuracy did not show significant changes with increasing typhoon intensity, with the interquartile error range remaining between four and eight pixels. It is worth noting that as the typhoon intensity increased, the mean error for STC SIFT tended to be higher than the median error, indicating a greater number of higher error values with increased typhoon intensity. For BTP typhoon localization (Figure 9b), the error decreased significantly with increasing typhoon intensity. When the intensity reached or exceeded the typhoon level (TY), the interquartile range of errors was between three and six pixels. Especially at the super typhoon level (STY), the interquartile range was reduced to 2-3 pixels. Compared to BTP, STC SIFT showed an advantage in terms of fewer large errors when the typhoon intensity was below TY, providing more stable localization results. When the intensity reached or exceeded TY, BTP demonstrated significantly better localization accuracy than STC SIFT.
\begin{table}
Table 5: Error analysis of two typhoon positioning methods under different time samplings (unit: km).
\begin{tabular}{c c c c c c} \hline \hline & **Data Quantity** & **Mean** & **Median** & **Lower Quartile** & **Upper Quartile** \\ \hline STC\_1h & 704 & 25.43 & 24.82 & 16.84 & 32.96 \\ \hline BTP\_1h & 704 & 22.69 & 19.16 & 11.23 & 28.16 \\ \hline STC\_3h & 302 & 56.41 & 52.11 & 32.07 & 70.84 \\ \hline BTP\_3h & 302 & 53.27 & 44.71 & 23.53 & 77.07 \\ \hline STC\_6h & 178 & 82.09 & 74.52 & 50.95 & 99.92 \\ \hline BTP\_6h & 178 & 93.33 & 90.99 & 63.04 & 117.2 \\ \hline \hline \end{tabular}
\end{table}
Additionally, the study discussed the results before and after parallax correction. Figure 9c illustrates the BTP typhoon localization results without parallax correction. There was little difference between the results with and without parallax correction when the typhoon intensity was below TY. However, when the intensity reached or exceeded TY, the parallax correction improved the localization accuracy considerably. This improvement is mainly because the parallax-corrected images allowed the typhoon eye (the high-value area in BTP) to better correspond to the typhoon's best track data provided by CMA, thus enhancing localization accuracy.
In this study, two typhoon localization algorithms have been improved to achieve automated monitoring of the northwest Pacific region using the method of searching for the target cloud system and determining the geometric center of the target cloud system as the center of the intercepted image. With these improvements, the algorithms are suitable for operational applications. However, the typhoon localization accuracy of the automated search algorithm exhibits a certain degree of error in comparison to the optimal path data provided by CMA, where the overall error is mostly smaller than 20 km provided that observations are available with a 1h time resolution. However, the average error is larger than 20 km because of the influence of some extreme values. The automated algorithm proposed in this paper has slightly lower accuracy compared to that proposed by Xie et al. [25]. This is because Xie et al. [25]'s algorithm determines the center of the cropped image by referencing short-term typhoon forecast data, which performed well in experiments specific to typhoon images but lacks generality. In contrast, the automated typhoon localization in this study does not rely on short-term typhoon forecast data. It automatically identifies the initial position of the cloud system, making it applicable to most scenarios. However, instances of abrupt changes in typhoon characteristics and movement speed may lead to a certain degree of error. Nonetheless, the improved algorithm in this study enables automated monitoring of typhoons in the northwest Pacific region, better catering to operational needs, albeit with a slight reduction in localization accuracy compared to the pre-improved typhoon positioning methods.
Figure 9: The error analysis of different typhoon localization methods under various typhoon intensities. Panel (**a**) illustrates the error analysis for STC SIFT at different typhoon intensities; panel (**b**) shows the error analysis for BTP typhoon localization under varying intensities; and panel (**c**) depicts the error analysis for BTP localization without parallax correction across different typhoon intensities. The orange line represents the median error, while the green dashed line indicates the mean error. The top and bottom edges of each box correspond to the upper and lower quartiles of the error distribution, and the whiskers denote the maximum and minimum error values.
## 7 Conclusions
In this study, two typhoon center location methods (STC SIFT and BTP) were improved and the various steps were illustrated by application to FY-4A AGRI L1 BT data and the AGRI cloud top height (CTH) product for one observation of the typhoon HL. Projection conversion, cloud system identification, and parallax correction were illustrated together with the application of the BTP and STC algorithms. Next, these algorithms were applied to the full time series for six other typhoons and the typhoon center location accuracy and the typhoon paths were compared with optimal path data provided by CMA, leading to the following conclusions.
1. Within the time sampling range discussed in this study, as the time sampling interval shortens (with a minimum of 1 h), the localization errors of both methods compared to the results provided by CMA will significantly decrease;
2. When typhoons are monitored with long time sampling, the STC SIFT typhoon localization method is more stable and the accuracy of the results is better than that from the BTP localization method; however, when typhoons are monitored with higher temporal resolution (shorter time sampling), the BTP localization method has an advantage over the STC SIFT typhoon localization method, especially for well-developed typhoons with high intensity (typhoon level or stronger);
3. When a typhoon is monitored with short time sampling, the accuracy of typhoon location using the BTP method increases with the increase in typhoon intensity; in contrast, the STC SIFT typhoon localization method is more accurate during the typhoon development phase when the typhoon intensity is still weak (weaker than typhoon level) than in the developed stage with strong typhoon intensity (typhoon level or stronger);
4. Because the BTP localization method focuses on the local typhoon eye area, while the STC SIFT typhoon localization method uses the whole typhoon, the BTP method is faster in terms of algorithm processing time, but the stability of accuracy is not as good as that of the STC SIFT typhoon localization method.
The analysis of the two typhoon center location methods shows that they both have merit, but their performances vary for different situations and different typhoon intensities when either STC or BTP provides more accurate results. The results show that automated monitoring of typhoons in the northwest Pacific can be achieved when observations are available with an observation time resolution of 1h. The methods described in this paper can serve the national meteorological industry to monitor typhoons, which is beneficial to national pre-disaster prevention work and global meteorological research. However, before operational application, the following problems need to be considered:
1. When typhoon observations are available with a time resolution of 1h, the parameters set by each of the two methods for automated typhoon positioning are fixed. When encountering sudden changes in typhoon wind speed and shift speed, two problems may be encountered: a. the typhoon center may move outside the reduced study area where BTP localization is applied, and b. the typhoon characteristics change significantly and thus the number of available feature points extracted by the STC SIFT method decreases. Hence, these changes in typhoon wind speed and shift speed may lead to large errors in the accuracy of the center localization by both methods.
2. The BTP localization method depends on the BT property of the typhoon. If there is no clear typhoon eye, the calculation of the location of the typhoon center can be affected by the surrounding BTP and thus reduce the positioning accuracy.
3. The matching results of feature points may appear unevenly distributed in space and, although the use of rotation uniform distribution feature screening has a mitigating effect, it may not be sufficient to completely solve the problem.
In future research, the inclusion of parameters such as wind speed and central pressure can be considered to improve the two methods of typhoon center location.
**Author Contributions:** Conceptualization, Z.L. and C.Y.; methodology, C.Y.; software, C.Y. and J.G.; validation, C.Y. and J.G.; formal analysis, J.G.; investigation, C.Y.; resources, Z.L.; data curation, J.G. and G.d.L.; writing--original draft preparation, C.Y.; writing--review and editing, C.Y. and G.d.L.; visualization, C.Y.; supervision, J.G. and G.d.L.; project administration, Z.L.; funding acquisition, Z.C. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the Foreign Technical Cooperation and Scientific Research Program, grant no. E3KZ0301, and the National Natural Science Foundation of China, grant no. 41925019, and this paper is supported by the Li Zhengqiang Expert Workstation of Yunnan Province, grant no. 202205AF150031. The participation of Gerrit de Leeuw was supported by the Chinese Academy of Sciences President's International Fellowship Initiative, grant no. 2025PVA0014.
**Data Availability Statement:** FY4-AGRI data used in this paper were downloaded from [http://satellite.nsmc.org.cn/portalsite/default.aspx](http://satellite.nsmc.org.cn/portalsite/default.aspx) (last access: 23 March 2024). Typhoon optimal path data used in this paper were made available by the China Meteorological Administration (CMA) ([https://tcdata.typhoon.org.cn](https://tcdata.typhoon.org.cn), last access: 23 March 2024).
**Acknowledgments:** The authors express their gratitude to the China Meteorological Administration (CMA) for providing the optimal path dataset and the National Satellite Meteorological Center for supplying FY-4A AGRI L1 and cloud top height (CTH) data.
**Conflicts of Interest:** The authors declare no conflict of interest.
## References
* (1) Uson, M.A.M. Natural disasters and land grabs: The politics of their intersection in the Philippines following super typhoon Haiyan. _Can. J. Dev. Stud./Rev. Can. D'etudes Dev._**2017**, _38_, 414-430. [CrossRef]
* (2) Zhou, J.; Xiang, J.; Huang, S. Classification and Prediction of Typhoon Levels by Satellite Cloud Pictures through GC-LSTM Deep Learning Model. _Sensors_**2020**, _20_, 5132. [CrossRef] [PubMed]
* (3) Liu, N.; Wang, X.; Liao, M.; Fang, X. _Efficient Tropical Cyclone Center Location Based on Adaptive Image Edge Growing Approaches;_ Institute of Automation, Chinese Academy of Sciences: Beijing, China, 2014; Volume 51, pp. 1493-1500.
* (4) Ruttgers, M.; Lee, S.; Jeon, S.; You, D. Prediction of a typhoon track using a generative adversarial network and satellite images. _Sci. Rep._**2019**, \\(9\\), 6057. [CrossRef] [PubMed]
* (5) Dvorak, V.F. Tropical Cyclone Intensity Analysis and Forecasting from Satellite Imagery. _Mon. Weather. Rev._**1975**, _103_, 420-430. [CrossRef]
* (6) Olander, T.L.; Velden, C.S. The Advanced Dvorak Technique: Continued Development of an Objective Scheme to Estimate Tropical Cyclone Intensity Using Geostationary Infrared Satellite Imagery. _Weather. Forecast._**2007**, _22_, 287-298. [CrossRef]
* (7) Velden, C.S.; Olander, T.L.; Zehr, R.M. Development of an Objective Scheme to Estimate Tropical Cyclone Intensity from Digital Geostationary Satellite Infrared Imagery. _Weather. Forecast._**1998**, _13_, 172-186. [CrossRef]
* (8) Senn, H.V.; Hiser, H.W. On the Origin of Hurricane Spiral Rain Bands. _J. Atmos. Sci._**1959**, _16_, 419-426. [CrossRef]
* (9) Griffin, J.S.; Burpee, R.W.; Marks, F.D.; Franklin, J.L. Real-Time Airborne Analysis of Aircraft Data Supporting Operational Hurricane Forecasting. _Weather. Forecast._**1992**, \\(7\\), 480-490. [CrossRef]
* (10) Wong, K.Y.; Yip, C.L.; Li, P.W. A novel algorithm for automatic tropical cyclone eye fix using Doppler radar data. _Meteorol. Appl._**2007**, _14_, 49-59. [CrossRef]
* (11) Liu, H.; Wang, H.; Qi, L.; Zhang, J. Typhoon Positioning Method Using Dual-Radar Zero Radial Velocity Lines and Preliminary Test. _Trop. Cyclone Res. Rev._**2017**, \\(6\\), 26-33. [CrossRef]
* (12) Yurchak, B.S. Description of cloud-rain bands in a tropical cyclone by a hyperbolic-logarithmic spiral. _Russ. Meteorol. Hydrol._**2007**, _32_, 8-18. [CrossRef]
* (13) Zhang, Q.P.; Lai, L.L.; Sun, W.C. Intelligent Location of Tropical Cyclone Center. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18-21 August 2005; pp. 423-428.
* (14) Wimmers, A.J.; Velden, C.S. Objectively Determining the Rotational Center of Tropical Cyclones in Passive Microwave Satellite Imagery. _J. Appl. Meteorol. Climatol._**2010**, _49_, 2013-2034. [CrossRef]
* (15) Rao, B.M.; Kishtwal, C.M.; Pal, P.K.; Narayanan, M.S. ERS-1 surface wind observations over a cyclone system in the Bay of Bengal during November 1992. _Int. J. Remote Sens._**1995**, _16_, 351-357. [CrossRef]
* (16) Hasler, A.F.; Palaniappan, K.; Kambhammetu, C.; Black, P.; Uhlhorn, E.; Chesters, D. High-Resolution Wind Fields within the Inner Core and Eye of a Mature Tropical Cyclone from GOES 1-min Images. _Bull. Am. Meteorol. Soc._**1998**, _79_, 2483-2496. [CrossRef]
* (17) Tuttle, J.; Gall, R. A Single-Radar Technique for Estimating the Winds in Tropical Cyclones. _Bull. Am. Meteorol. Soc._**1999**, _80_, 653-668. [CrossRef]
* (18) Zhang, C.; Chen, Y.; Ma, L. Multi-channel Satellite Cloud Image Fusion in the Shearlet Transform Domain and Its Influence on Typhoon Center Location. In Proceedings of the Image and Graphics: 9th International Conference, ICIG 2017, Shanghai, China, 13-15 September 2017; Revised Selected Papers, Part II 9, 2017; pp. 440-451.
* (19) Liu, J.; Wang, X. Typhoon center location method based on FY-2 remote sensing data. _Bull. Surv. Mapp._**2020**, \(6\), 49-52. [CrossRef]
* (20) Velden, C.S.; Olander, T.L. Bispectral satellite technique for delineating intense convection- Applications to tropical cyclones. In Proceedings of the Conference on Satellite Meteorology and Oceanography, 9th, Paris, France, 25-29 May 1998; pp. 458-461.
* (21) Liu, Z.; Qiu, H.; Wu, B.; Shen, G.G. Automatic center location of non-eyed typhoon in satellite cloud image. In Proceedings of the Image Processing: Algorithms and Systems II, Santa Clara, CA, USA, 21-23 January 2003; pp. 429-436.
* (22) Pao, T.-L.; Yeh, J.-H.; Liu, M.-Y.; Hsu, Y.-C. Locating the typhoon center from the IR satellite cloud images. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8-11 October 2006; pp. 484-488.
* (23) Jaiswal, N.; Kishtawal, C.M. Automatic Determination of Center of Tropical Cyclone in Satellite-Generated IR Images. _IEEE Geosci. Remote Sens. Lett._**2011**, \(8\), 460-463. [CrossRef]
* (24) Zhang, C.; Chen, Y.; Lu, J. Typhoon center location algorithm based on fractal feature and gradient of infrared satellite cloud image. In Proceedings of the International Symposium on Optoelectronic Technology and Application 2014: Optical Remote Sensing Technology and Applications, Beijing, China, 13-15 May 2014; pp. 84-89.
* (25) Xie, T.; Chen, J.; Yan, J. A New Objective Typhoon Location Algorithm Considering a Perturbation Factor Based on FY-4A Brightness Temperature Data. _J. Atmos. Ocean. Technol._**2022**, _39_, 2023-2038. [CrossRef]
* (26) Permyakov, M.; Kleshcheva, T.; Potalova, E.; Holzwerth, R.H. Characteristics of typhoon eyewalls according to World Wide Lightning Location Network data. _Mon. Weather. Rev._**2019**, _147_, 4027-4043. [CrossRef]
* (27) Magee, A.D.; Kiem, A.S.; Chan, J.C. A new approach for location-specific seasonal outlooks of typhoon and super typhoon frequency across the Western North Pacific region. _Sci. Rep._**2021**, _11_, 19439. [CrossRef]
* (28) Kang, J.; Park, J. Use of GNSS-Derived PWV for Predicting the Path of Typhoon: Case Studies of Soulik and Kongrey in 2018. _J. Surv. Eng._**2021**, _147_, 04021018. [CrossRef]
* (29) Wang, E.K.; Wang, F.; Kumari, S.; Yeh, J.-H.; Chen, C.-M. Intelligent monitor for typhoon in IoT system of smart city. _J. Supercomput._**2021**, _77_, 3024-3043. [CrossRef]
* (30) Zhou, G.; Fang, X.; Qian, Q.; Lv, X.; Cao, J.; Jiang, Y. Application of artificial intelligence technology in typhoon monitoring and forecasting. _Front. Earth Sci._**2022**, _10_, 974497. [CrossRef]
* (31) Geng, X.; Min, J.; Yang, C.; Wang, Y.; Xu, D. Analysis of FY-4A AGRI Radiance Data Bias Characteristics and a Correction Experiment. _Chin. J. Atmos. Sci._**2020**, _44_, 679-694.
* (32) Di, D.I.; Ronglian, Z.; Ruize, L.A.I. Parallax shift effect correction and analysis based on Fengyun-4A advanced imager. _Acta Meteorol. Sin._**2022**, _80_, 632-642. [CrossRef]
* (33) Ying, M.; Zhang, W.; Yu, H.; Lu, X.; Feng, J.; Fan, Y.; Zhu, Y.; Chen, D. An overview of the China Meteorological Administration tropical cyclone database. _J. Atmos. Ocean. Technol._**2014**, _31_, 287-301. [CrossRef]
* (34) Lu, X.; Yu, H.; Ying, M.; Zhao, B.; Zhang, S.; Lin, L.; Bai, L.; Wan, R. Western North Pacific tropical cyclone database created by the China Meteorological Administration. _Adv. Atmos. Sci._**2021**, _38_, 690-699. [CrossRef]
* (35) Loog, M.; Duin, R.P.; Haeb-Umbach, R. Multiclass linear dimension reduction by weighted pairwise Fisher criteria. _IEEE Trans. Pattern Anal. Mach. Intell._**2001**, _23_, 762-766. [CrossRef]
* (36) Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20-27 September 1999; Volume 1152, pp. 1150-1157.
* (37) Yan, K.; Sukthankar, R. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 27 June-2 July 2004; p. II.
* (38) Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). _Comput. Vis. Image Underst._**2008**, _110_, 346-359. [CrossRef]
* (39) Chelton, D.; Schlax, M.; Freilich, M.; Milliff, R. Satellite Measurements Reveal Persistent Small-Scale Features in Ocean Winds. _Science_**2004**, _303_, 978-983. [CrossRef]
* (40) Maloney, E.D.; Chelton, D.B. An Assessment of the Sea Surface Temperature Influence on Surface Wind Stress in Numerical Weather Prediction and Climate Models. _J. Clim._**2006**, _19_, 2743-2762. [CrossRef]
* (41) Minobe, S.; Kuwano-Yoshida, A.; Komori, N.; Xie, S.-P.; Small, R.J. Influence of the Gulf Stream on the troposphere. _Nature_**2008**, _452_, 206-209. [CrossRef] [PubMed]
* (42) Wei, Y.; Zhang, R.-H.; Wang, H. Mesoscale wind stress-SST coupling in the Kuroshio extension and its effect on the ocean. _J. Oceanogr._**2017**, _73_, 785-798. [CrossRef]
**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. | Extreme weather events like typhoons have become more frequent due to global climate change. Current typhoon monitoring methods include manual monitoring, mathematical morphological methods, and artificial intelligence. Manual monitoring is accurate but labor-intensive, while AI offers convenience but requires accuracy improvements. Mathematical morphology methods, such as brightness temperature perturbation (BTP) and a spatio-temporally consistent (STC) Scale-Invariant Feature Transform (SIFT), remain mainstream for typhoon positioning. This paper enhances BTP and STC SIFT methods for application to Fengyun 4A (FF-4A) Advanced Geosynchronous Radiation Imager (AGRI) L1 data, incorporating parallax correction for more accurate surface longitude and latitude positioning. The applicability of these methods for different typhoon intensities and monitoring time resolutions is analyzed. Automated monitoring with one-hour observation intervals in the northwest Pacific region demonstrates high positioning accuracy, reaching 25 km or better when compared to best path data from the China Meteorological Administration (CMA). For 1 h remote sensing observations, BTP is more accurate for typhoons at or above typhoon intensity, while STC SIFT is more accurate for weaker typhoons. In the current era of a high temporal resolution of typhoon monitoring using geostationary satellites, the method presented in this paper can serve the national meteorological industry for typhoon monitoring, which is beneficial to national pre-disaster prevention work as well as global meteorological research. | Write a summary of the passage below. | 300 |
# Polarized Adding Method of Discrete Ordinate Approximation for Ultraviolet-Visible and Near-Infrared Radiative Transfer
Kun Wu
Feng Zhang
[email protected]
Wenwen Li
Fengzi Bao
Yi-ning Shi
Key Laboratory of Meteorological Disaster, Ministry of Education (KLME)/ Joint International Research Laboratory of Climate and Environment Change (ILCEC)/ Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD) / Institute for Climate and Application Research (ICAR), Nanjing University of Information Science and Technology, Nanjing 210044, China Department of Atmospheric and Oceanic Sciences and Institutes of Atmospheric Sciences, Fudan University, Shanghai 200433, China CMA Earth System Modeling and Prediction Centre, China Meteorological Administration, Beijing 100081, China
## 1 Introduction
In recent years, interest in the development of vector radiative transfer methods and instruments with polarization capabilities for aircraft, balloons, satellites, and ground-based platforms has increased rapidly. Radiation scattered by air molecules (Rayleigh scattering), aerosols, and cloud particles is polarized and shows different polarization characteristics based on the scattering event [1; 2]. The degree of linear and circular polarization produced by clouds and aerosol particles is more sensitive to the shape, size, and refractive index of the poly-dispersed small scattering particles. Polarization can provide additional information on atmospheric optical phenomena and their constituents [3], including the properties of atmospheric components and vertical distributions. Numerous studies [4; 5; 6; 7; 8; 9] have shown that approximately 10% of the errors introduced in atmospheric radiative transfer simulations and remote sensing are caused by ignoring the atmospheric polarization, particularly at short wavelengths [10; 5; 8; 11] owing to more scattering compared to long wavelengths [12]. Therefore, a scalar radiative transfer model cannot sufficiently describe the nature of radiation processes [13].
Various vector radiative transfer schemes have been developed, which commonly use the adding-doubling method [14; 15; 16; 17; 18; 19], discrete ordinates method [20; 21; 22; 23; 24], Monte-Carlo solutions method [25; 26; 27; 28], successive orders of scattering method [29; 30; 1], Invariant Imbedding Method [31], and spherical harmonic method [32]. Each method has its own advantages and disadvantages. For instance, the adding-doubling method deals with multiple scattering using the doubling method [17]. Though the adding method can simplify the layer-to-layer connection, it is time-consuming to compute an atmospheric layer with a thick optical depth, such as a sky with thick stratus. This is caused by the increase in the number of thin sub-layers (a number \\(2^{N}\\) of identical thin layers), each of which is characterized by single scattering. The problem can be overcome using the discrete ordinate method (DOM). DOM solves the vector transfer equation using eigenvectors and eigenvalues [33; 22]. It can calculate the interior variation of the reflection and transmission function within a layer; therefore, it is accurate and efficient for thick scattering media [21], such as aerosols and clouds.
In modeling scalar solar radiation, an existing method [34] combines the advantages of the DOM and the adding method, using DOM for the single-layer radiative transfer solution and the adding method for connecting the inhomogeneous multiple layers in a plane-parallel scattering atmosphere. In this study, this approach is extended to vector radiative transfer to optimize the accuracy and computational efficiency. The remainder of this study is organized as follows. Section 2 describes the formulation and algorithm of the proposed method. In Section 3, the calculation accuracy and efficiency of the proposed method are evaluated for different scattering atmospheres. Finally, the conclusions and discussion are presented in Section 4.
## 2 Solution by discrete ordinates method for a single-layer
### Solar radiation
The polarized radiative transfer equation with only a solar source in a plane-parallel medium is given by [10; 35; 36]
\\[\\begin{split}\\mu\\frac{d\\mathbf{L}(\\tau,\\mu,\\varphi)}{d\\tau}=& \\mathbf{L}(\\tau,\\mu,\\varphi)-\\frac{\\omega}{4\\pi}\\int_{0}^{2\\pi} \\int_{-1}^{1}\\mathbf{Z}(\\mu,\\varphi,\\mu^{\\prime},\\varphi^{\\prime})\\mathbf{L}( \\tau,\\mu^{\\prime},\\varphi^{\\prime})d\\mu^{\\prime}d\\varphi^{\\prime}\\\\ &-\\frac{\\omega}{4\\pi}\\mathbf{Z}(\\mu,\\varphi,\\mu_{0},\\varphi_{0}) \\mathbf{F_{0}}(\\mu_{0},\\varphi_{0})e^{-\\tau/\\mu_{0}},\\end{split} \\tag{1}\\]
where \(\mathbf{L}=[I,Q,U,V]^{\mathbb{T}}\) represents the Stokes vector, and the superscript \(\mathbb{T}\) indicates a matrix transpose. \(\tau\) indicates the optical depth, \(\omega\) signifies the single scattering albedo, \(\varphi\) indicates the azimuthal angle, and \(\mu\) denotes the zenith angle cosine. \(\mathbf{Z}\) is a 4\(\times\)4 scattering matrix, defined as \(\mathbf{Z}(\mu,\varphi,\mu^{\prime},\varphi^{\prime})=\mathbf{C}(\pi-i_{2})\mathbf{P}(\mu,\varphi,\mu^{\prime},\varphi^{\prime})\mathbf{C}(-i_{1})\), where \(\mathbf{C}\) represents the rotational matrices with \(i_{1}\) and \(i_{2}\) defining the rotation angles, and \(\mathbf{P}\) represents the scattering phase matrix, each element of which can be expanded using the Legendre series. \(\mathbf{F_{0}}\) represents the incident solar Stokes vector at the top of the atmosphere (TOA).
The radiative transfer equation can be split into 2M equations by expanding the phase matrix and \\(\\mathbf{L}\\) into Fourier cosine and sine series. Applying the discrete ordinate method and dropping the superscript \\(m\\), Eq.(1) can be written in matrix form as
\\[\\frac{d}{d\\tau}\\left[\\begin{array}{c}\\mathbf{L}^{+}(\\tau)\\\\ \\mathbf{L}^{-}(\\tau)\\end{array}\\right]=\\left[\\begin{array}{cc}\\mathbf{X_{11 }}&\\mathbf{X_{12}}\\\\ \\mathbf{X_{21}}&\\mathbf{X_{22}}\\end{array}\\right]\\left[\\begin{array}{c} \\mathbf{L}^{+}\\\\ \\mathbf{L}^{-}\\end{array}\\right]+\\left[\\begin{array}{c}\\mathbf{E}^{+}\\\\ \\mathbf{E}^{-}\\end{array}\\right]e^{-\\tau/\\mu_{0}}, \\tag{2}\\]
where \\(\\mathbf{L}^{\\pm}(\\tau)=[I(\\tau,\\pm\\mu_{1}),\\cdots,I(\\tau,\\pm\\mu_{N}),Q(\\tau,\\pm\\mu_{1}),\\cdots,Q(\\tau,\\pm\\mu_{N}),\\)
\\(U(\\tau,\\pm\\mu_{1}),\\cdots,U(\\tau,\\pm\\mu_{N}),V(\\tau,\\pm\\mu_{1}),\\cdots,V(\\tau,\\pm\\mu_{N})]^{\\mathbb{T}}\\). The details are given in Appendix A.
This can be solved using the eigenvalue method [23] as follows:
\\[\\left[\\begin{array}{c}\\mathbf{L}^{+}(\\tau)\\\\ \\mathbf{L}^{-}(\\tau)\\end{array}\\right]=\\mathbf{G}\\circ e^{\\mathbf{K}\\tau}\\left[\\begin{array}{c}\\mathbf{C_{1}}\\\\ \\mathbf{C_{2}}\\end{array}\\right]-\\mu_{0}\\left[\\begin{array}{c}\\mathbf{E}^{+}\\\\ \\mathbf{E}^{-}\\end{array}\\right]e^{-\\tau/\\mu_{0}} \\tag{3}\\]
where \\(\\mathbf{G}\\) is composed of the eigenvectors of \\(\\mathbf{X}\\), and \\(\\mathbf{K}\\) is composed of the eigenvalues of \\(\\mathbf{X}\\). \\([\\mathbf{C_{1}},\\mathbf{C_{2}}]^{\\mathbb{T}}\\) can be obtained from the boundary conditions, that is, \\(\\mathbf{L}^{-}(\\mathbf{0})=\\mathbf{0}\\) and \\(\\mathbf{L}^{+}(\\mathbf{\\tau_{1}})=\\mathbf{0}\\). By solving Eq.(3) with these boundary conditions, the reflection and transmission functions of a single layer with optical depth \\(\\tau_{1}\\) are obtained and can be expressed as
\\[\\begin{split}&\\mathbf{R}^{I}=\\left[R^{I\\gets I}(0,\\mu_{1},\\mu_{ 0}),\\cdots,R^{I\\gets I}(0,\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{R}^{Q}=\\left[R^{Q\\gets I}(0,\\mu_{1},\\mu_{0}), \\cdots,R^{Q\\gets I}(0,\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{R}^{U}=\\left[R^{U\\gets I}(0,\\mu_{1},\\mu_{0}), \\cdots,R^{U\\gets I}(0,\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{R}^{V}=\\left[R^{V\\gets I}(0,\\mu_{1},\\mu_{0}), \\cdots,R^{V\\gets I}(0,\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{T}^{I}=\\left[T^{I\\gets I}(\\tau_{1},\\mu_{1},\\mu_{0}), \\cdots,T^{I\\gets I}(\\tau_{1},\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{T}^{Q}=\\left[T^{Q\\gets I}(\\tau_{1},\\mu_{1},\\mu_{0}), \\cdots,T^{Q\\gets I}(\\tau_{1},\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{T}^{U}=\\left[T^{U\\gets I}(\\tau_{1},\\mu_{1},\\mu_{0}), \\cdots,T^{U\\gets I}(\\tau_{1},\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}},\\\\ &\\mathbf{T}^{V}=\\left[T^{V\\gets I}(\\tau_{1},\\mu_{1},\\mu_{0}), \\cdots,T^{V\\gets I}(\\tau_{1},\\mu_{N},\\mu_{0})\\right]^{\\mathbb{T}}.\\end{split} \\tag{4}\\]
In a scattering medium, the \\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-components are coupled during transfer. The change in the \\(I\\)-component, for instance, arises not only from the \\(I\\)-component itself but also from the other three components. The superscript '\\(Q\\gets I\\)' denotes the reflection or transmission from the \\(I\\)-component into the \\(Q\\)-component. The other parameters are defined in a similar manner.
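To make the single-layer step concrete, the following is a minimal numerical sketch of how Eqs.(2)-(3) can be solved with an eigenvalue decomposition and the stated boundary conditions. It assumes the system matrix \\(\\mathbf{X}\\) and the source vector \\(\\mathbf{E}=[\\mathbf{E}^{+};\\mathbf{E}^{-}]\\) of Eq.(2) have already been assembled, with the state vector ordered as \\([\\mathbf{L}^{+};\\mathbf{L}^{-}]\\); the function name and interface are illustrative only and not part of the POLDDA code.

```python
import numpy as np

def solve_single_layer(X, E, mu0, tau1):
    """Sketch of the single-layer solution of Eq.(2):
        dL/dtau = X L + E exp(-tau/mu0),
    with the diffuse boundary conditions L^-(0) = 0 and L^+(tau1) = 0.
    Returns a callable giving L(tau) = [L^+; L^-] inside the layer."""
    n = X.shape[0]
    half = n // 2

    # homogeneous part: eigen-decomposition X = G K G^{-1}
    K, G = np.linalg.eig(X)

    # particular solution L_p exp(-tau/mu0): (X + I/mu0) L_p = -E
    Lp = np.linalg.solve(X + np.eye(n) / mu0, -E)

    # enforce the boundary conditions on the integration constants C
    top = G[half:, :]                        # L^- rows at tau = 0
    bottom = G[:half, :] * np.exp(K * tau1)  # L^+ rows at tau = tau1
    A = np.vstack([top, bottom])
    b = -np.concatenate([Lp[half:], Lp[:half] * np.exp(-tau1 / mu0)])
    C = np.linalg.solve(A, b)

    def L(tau):
        # in practice the +/- symmetry of X is exploited and growing
        # exponentials are rescaled for numerical stability
        return np.real((G * np.exp(K * tau)) @ C + Lp * np.exp(-tau / mu0))

    return L
```

The reflection and transmission vectors of Eq.(4) then follow from evaluating the returned field at \\(\\tau=0\\) and \\(\\tau=\\tau_{1}\\) for each solar angle \\(\\mu_{0}\\).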
### Diffuse radiation
The diffuse equation of solar-polarized radiative transfer can be given as
\\[\\mu\\frac{d\\mathbf{L}(\\tau,\\mu,\\varphi)}{d\\tau}=\\mathbf{L}(\\tau,\\mu,\\varphi)- \\frac{\\omega}{4\\pi}\\int_{0}^{2\\pi}\\int_{-1}^{1}\\mathbf{Z}(\\mu,\\varphi,\\mu^{ \\prime},\\varphi^{\\prime})\\mathbf{L}(\\tau,\\mu^{\\prime},\\varphi^{\\prime})d\\mu^ {\\prime}d\\varphi^{\\prime}. \\tag{5}\\]
This solution is also required for the adding method. The details of the solutions of the diffuse radiation for the \\(I\\) and \\(Q\\) components are almost the same as those for the polarized thermal infrared radiative transfer described in [37]. The \\(U\\) and \\(V\\) components are obtained in a similar manner. Thus, the reflection and transmission matrices of diffuse radiation are given as
\\[\\overline{\\mathbf{R}}=\\left[\\begin{array}{cccc}\\overline{\\mathcal{R}}^{I\\gets I }&\\overline{\\mathcal{R}}^{I\\gets Q}&\\overline{\\mathcal{R}}^{I\\gets U }&\\overline{\\mathcal{R}}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}^{Q\\gets I}&\\overline{\\mathcal{R}}^{Q\\gets Q }&\\overline{\\mathcal{R}}^{Q\\gets U}&\\overline{\\mathcal{R}}^{Q\\gets V }\\\\ \\overline{\\mathcal{R}}^{U\\gets I}&\\overline{\\mathcal{R}}^{U\\gets Q }&\\overline{\\mathcal{R}}^{U\\gets U}&\\overline{\\mathcal{R}}^{U\\gets V }\\\\ \\overline{\\mathcal{R}}^{V\\gets I}&\\overline{\\mathcal{R}}^{V\\gets Q }&\\overline{\\mathcal{R}}^{V\\gets U}&\\overline{\\mathcal{R}}^{V\\gets V }\\end{array}\\right]_{4N\\times 4N} \\tag{6a}\\] \\[\\overline{\\mathbf{T}}=\\left[\\begin{array}{cccc}\\overline{\\mathcal{T}}^{I \\gets I}&\\overline{\\mathcal{T}}^{I\\gets Q}&\\overline{\\mathcal{T}}^{I \\gets U}&\\overline{\\mathcal{T}}^{I\\gets V}\\\\ \\overline{\\mathcal{T}}^{Q\\gets I}&\\overline{\\mathcal{T}}^{Q\\gets Q }&\\overline{\\mathcal{T}}^{Q\\gets U}&\\overline{\\mathcal{T}}^{Q\\gets V }\\\\ \\overline{\\mathcal{T}}^{U\\gets I}&\\overline{\\mathcal{T}}^{U\\gets Q }&\\overline{\\mathcal{T}}^{U\\gets U}&\\overline{\\mathcal{T}}^{U\\gets V }\\\\ \\overline{\\mathcal{T}}^{V\\gets I}&\\overline{\\mathcal{T}}^{V\\gets Q }&\\overline{\\mathcal{T}}^{V\\gets U}&\\overline{\\mathcal{T}}^{V\\gets V }\\end{array}\\right]_{4N\\times 4N}. \\tag{6b}\\]
where \\(\\overline{\\mathcal{R}}^{I\\gets I}=\\left[\\begin{array}{cccc}(1+\\delta_{0,m} )a_{1}R^{I\\gets I}(\\mu_{1},\\mu_{1})\\mu_{1}&\\cdots&(1+\\delta_{0,m})a_{N}R^{ I\\gets I}(\\mu_{1},\\mu_{N})\\mu_{N}\\\\ \\cdots&\\cdots&\\cdots\\\\ (1+\\delta_{0,m})a_{1}R^{I\\gets I}(\\mu_{N},\\mu_{1})\\mu_{1}&\\cdots&(1+\\delta _{0,m})a_{N}R^{I\\gets I}(\\mu_{N},\\mu_{N})\\mu_{N}\\end{array}\\right]_{N\\times N}\\),
\\[\\overline{\\mathcal{T}}^{I\\gets I}=\\left[\\begin{array}{cccc}(1+\\delta_{0, m})a_{1}\\tilde{T}^{I\\gets I}(\\mu_{1},\\mu_{1})\\mu_{1}&\\cdots&(1+\\delta_{0,m})a_{N} \\tilde{T}^{I\\gets I}(\\mu_{1},\\mu_{N})\\mu_{N}\\\\ \\cdots&\\cdots&\\cdots\\\\ (1+\\delta_{0,m})a_{1}\\tilde{T}^{I\\gets I}(\\mu_{N},\\mu_{1})\\mu_{1}&\\cdots&( 1+\\delta_{0,m})a_{N}\\tilde{T}^{I\\gets I}(\\mu_{N},\\mu_{N})\\mu_{N}\\end{array} \\right]_{N\\times N},\\]
and other matrices on the right hand side of Eq.(6) are defined similarly.
## 3 Four invariance principles and adding method for inhomogeneous multi-layer connection
In this section, the four invariance principles [10] are extended to the polarized solar radiative transfer process and applied to the new model. A schematic diagram is shown in Figure 1 (a)\\(\\sim\\)(d). More specifically, we consider a combination of two inhomogeneous layers with optical depths \\(\\tau_{1}\\) (first layer) and \\(\\tau_{2}\\) (second layer). \\(R_{i}^{a\\gets b}(\\mu,\\mu^{\\prime})\\) and \\(T_{i}^{a\\gets b}(\\mu,\\mu^{\\prime})\\) (\\(a=I,Q,U,V\\) and \\(b=I,Q,U,V\\)) indicate the reflection and transmission functions of the \\(i\\)-th layer. The superscript \\({}^{*}\\) denotes that the radiation originates from below. From Figure 1, the four invariance principles can be expressed as follows:
(a) The reflected matrices \\(\\mathbf{A}\\) (\\(A^{I}(\\mu)\\) / \\(A^{Q}(\\mu)\\) / \\(A^{U}(\\mu)\\) / \\(A^{V}(\\mu)\\)) of the Stokes vector at level 2 originate from two parts: the reflection of the direct solar beam by the second layer \\({\\bf R}_{2}(0,\\mu)\\) (\\(R_{2}^{I}(0,\\mu)\\) / \\(R_{2}^{Q}(0,\\mu)\\) / \\(R_{2}^{U}(0,\\mu)\\) / \\(R_{2}^{V}(0,\\mu)\\)) and the coupled reflection of the downward components \\({\\bf D}\\) (\\(D^{I}(\\mu)\\) / \\(D^{Q}(\\mu)\\) / \\(D^{U}(\\mu)\\) / \\(D^{V}(\\mu)\\)) through the second layer;
(b) The transmitted matrices \\({\\bf D}\\) of the Stokes vector at level 2 originate from two parts: direct transmission by the first layer \\({\\bf T}_{1}(\\tau_{1},-\\mu)\\) (\\(T_{1}^{I}(\\tau_{1},-\\mu)\\) / \\(T_{1}^{Q}(\\tau_{1},-\\mu)\\) / \\(T_{1}^{U}(\\tau_{1},-\\mu)\\) / \\(T_{1}^{V}(\\tau_{1},-\\mu)\\)) and the coupled reflection of the upward components \\({\\bf A}\\) through the first layer.
(c) The reflected matrices \\({\\bf R}_{1,2}(0,\\mu)\\) (\\(R_{1,2}^{I}(0,\\mu)\\) / \\(R_{1,2}^{Q}(0,\\mu)\\) / \\(R_{1,2}^{U}(0,\\mu)\\) / \\(R_{1,2}^{V}(0,\\mu)\\)) of the Stokes vector at the top of the two-layer are composed of two parts: the reflection by the first layer \\({\\bf R}_{1}(0,\\mu)\\) (\\(R_{1}^{I}(0,\\mu)\\) / \\(R_{1}^{Q}(0,\\mu)\\) / \\(R_{1}^{U}(0,\\mu)\\) / \\(R_{1}^{V}(0,\\mu)\\)), and the total transmission including the direct beam transmissions of \\(\\exp(-\\frac{\\tau_{1}}{\\mu})\\) and \\({\\bf T}_{1}^{*}(\\mu,\\mu^{\\prime})\\) of the upward components \\({\\bf A}\\) through the first layer.
(d) The transmitted matrices \\({\\bf T}_{1,2}(\\tau_{1}+\\tau_{2},-\\mu)\\) (\\(T_{1,2}^{I}(\\tau_{1}+\\tau_{2},-\\mu)\\) / \\(T_{1,2}^{Q}(\\tau_{1}+\\tau_{2},-\\mu)\\) / \\(T_{1,2}^{U}(\\tau_{1}+\\tau_{2},-\\mu)\\) / \\(T_{1,2}^{V}(\\tau_{1}+\\tau_{2},-\\mu)\\)) of the Stokes vector at the bottom of the two-layer are composed of two parts: transmission of the direct solar beam \\(\\exp(-\\frac{\\tau_{1}}{\\mu_{0}})\\) through the second layer and the total coupled transmission, including \\(\\exp(-\\frac{\\tau_{2}}{\\mu})\\) and \\({\\bf T}_{2}(\\mu,\\mu^{\\prime})\\) of the downward components \\({\\bf D}\\) through the second layer.
Based on the above (a\\(\\sim\\)d) statements, we can write the four invariance principles in vector solar radiative transfer as
\\[{\\bf A}={\\bf R}_{2}e^{-\\frac{\\tau_{1}}{\\mu_{0}}}+2\\int_{0}^{1}\\overline{{\\bf R}}_{2}{\\bf D}\\mu^{\\prime}d\\mu^{\\prime}, \\tag{7a}\\] \\[{\\bf D}={\\bf T}_{1}+2\\int_{0}^{1}\\overline{{\\bf R}}_{1}^{*}{\\bf A}\\mu^{\\prime}d\\mu^{\\prime}, \\tag{7b}\\] \\[{\\bf R}_{1,2}={\\bf R}_{1}+{\\bf A}e^{-\\frac{\\tau_{1}}{\\mu}}+2\\int_{0}^{1}\\overline{{\\bf T}}_{1}^{*}{\\bf A}\\mu^{\\prime}d\\mu^{\\prime}, \\tag{7c}\\] \\[{\\bf T}_{1,2}={\\bf T}_{2}e^{-\\frac{\\tau_{1}}{\\mu_{0}}}+{\\bf D}e^{-\\frac{\\tau_{2}}{\\mu}}+2\\int_{0}^{1}\\overline{{\\bf T}}_{2}{\\bf D}\\mu^{\\prime}d\\mu^{\\prime}. \\tag{7d}\\]
In Eq.(7), \\({\\bf A}=[{\\bf A}^{I},{\\bf A}^{Q},{\\bf A}^{U},{\\bf A}^{V}]^{\\mathbb{T}}\\), \\({\\bf D}=[{\\bf D}^{I},{\\bf D}^{Q},{\\bf D}^{U},{\\bf D}^{V}]^{\\mathbb{T}}\\),
\\({\\bf R}_{1,2}=[{\\bf R}_{1,2}^{I},{\\bf R}_{1,2}^{Q},{\\bf R}_{1,2}^{U},{\\bf R}_{1,2}^{V}]^{\\mathbb{T}}\\), \\({\\bf T}_{1,2}=[{\\bf T}_{1,2}^{I},{\\bf T}_{1,2}^{Q},{\\bf T}_{1,2}^{U},{\\bf T}_{1,2}^{V}]^{\\mathbb{T}}\\).
The other matrices in Eq.(7) are detailed in Appendix B.
\\(N\\)-node Gaussian integration was used to evaluate the angular integrals; Eq.(7) can thus be written in discrete matrix form, denoted Eq.(8).
From (8a-b), we obtain
\\[\\left[\\begin{array}{c}\\mathbf{A}^{I}\\\\ \\mathbf{A}^{Q}\\\\ \\mathbf{A}^{U}\\\\ \\mathbf{A}^{V}\\end{array}\\right]=\\overline{\\mathbf{X}}_{2}^{1}\\mathbf{Y}_{2}^{1}, \\tag{9a}\\] \\[\\left[\\begin{array}{c}\\mathbf{D}^{I}\\\\ \\mathbf{D}^{Q}\\\\ \\mathbf{D}^{U}\\\\ \\mathbf{D}^{V}\\end{array}\\right]=\\left[\\begin{array}{cc}\\mathbf{T}_{1}^{I} \\\\ \\mathbf{T}_{1}^{Q}\\\\ \\mathbf{T}_{1}^{U}\\\\ \\mathbf{T}_{1}^{V}\\end{array}\\right]+\\left[\\begin{array}{cc}\\overline{ \\mathcal{R}}_{1}^{*,I\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,I\\gets Q }&\\overline{\\mathcal{R}}_{1}^{*,I\\gets U}&\\overline{\\mathcal{R}}_{1}^{*, I\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,Q\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,Q \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,Q\\gets U}&\\overline{\\mathcal{R }}_{1}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,U\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,U \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,U\\gets U}&\\overline{\\mathcal{R }}_{1}^{*,U\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,V\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,V \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,V\\gets U}&\\overline{\\mathcal{R }}_{1}^{*,V\\gets V}\\end{array}\\right]\\overline{\\mathbf{X}}_{2}^{1}\\mathbf{Y}_{ 2}^{1} \\tag{9b}\\]
where \\(\\overline{\\mathbf{X}}_{i}^{j}\\) and \\(\\mathbf{Y}_{i}^{j}\\) are given in Appendix B.
Substituting (9) into (8c-d), the direct reflection and transmission can be expressed as follows:
\\[\\left[\\begin{array}{c}\\mathbf{R}_{1,2}^{I}\\\\ \\mathbf{R}_{1,2}^{Q}\\\\ \\mathbf{R}_{1,2}^{U}\\\\ \\mathbf{R}_{1,2}^{V}\\end{array}\\right]=\\left[\\begin{array}{c}\\mathbf{R}_{1} ^{I}\\\\ \\mathbf{R}_{1}^{Q}\\\\ \\mathbf{R}_{1}^{U}\\\\ \\mathbf{R}_{1}^{V}\\end{array}\\right]+\\left[\\begin{array}{cc}\\overline{ \\mathcal{T}}_{1}^{*,I\\gets I}&\\overline{\\mathcal{T}}_{1}^{*,I\\gets Q }&\\overline{\\mathcal{T}}_{1}^{*,I\\gets U}&\\overline{\\mathcal{T}}_{1}^{*, I\\gets V}\\\\ \\overline{\\mathcal{T}}_{1}^{*,Q\\gets I}&\\overline{\\mathcal{T}}_{1}^{*,Q \\gets Q}&\\overline{\\mathcal{T}}_{1}^{*,Q\\gets U}&\\overline{\\mathcal{T}}_ {1}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{T}}_{1}^{*,U\\gets I}&\\overline{\\mathcal{T}}_{1}^{*,U \\gets Q}&\\overline{\\mathcal{T}}_{1}^{*,U\\gets U}&\\overline{\\mathcal{T}}_ {1}^{*,U\\gets V}\\\\ \\overline{\\mathcal{T}}_{1}^{*,V\\gets I}&\\overline{\\mathcal{T}}_{1}^{*,V \\gets Q}&\\overline{\\mathcal{T}}_{1}^{*,V\\gets U}&\\overline{\\mathcal{T}}_ {1}^{*,V\\gets V}\\end{array}\\right]\\overline{\\mathbf{X}}_{2}^{1}\\mathbf{Y}_{ 2}^{1}, \\tag{10a}\\] \\[\\left[\\begin{array}{c}\\mathbf{T}_{1,2}^{I}\\\\ \\mathbf{T}_{1,2}^{Q}\\\\ \\mathbf{T}_{1,2}^{U}\\\\ \\mathbf{T}_{1,2}^{V}\\end{array}\\right]=\\left[\\begin{array}{c}\\mathbf{T}_{2} ^{I}\\\\ \\mathbf{T}_{2}^{Q}\\\\ \\mathbf{T}_{2}^{U}\\\\ \\mathbf{T}_{2}^{V}\\end{array}\\right]e^{-\\frac{\\tau_{1}}{\\rho_{0}}}+\\left[ \\begin{array}{cc}\\overline{\\mathcal{T}}_{2}^{I\\gets I}&\\overline{ \\mathcal{T}}_{2}^{I\\gets Q}&\\overline{\\mathcal{T}}_{2}^{I\\gets U}& \\overline{\\mathcal{T}}_{2}^{I\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{Q\\gets I}&\\overline{\\mathcal{T}}_{2}^{Q \\gets Q}&\\overline{\\mathcal{T}}_{2}^{Q\\gets U}&\\overline{\\mathcal{T}}_ {2}^{Q\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{U\\gets I}&\\overline{\\mathcal{T}}_{2}^{U \\gets Q}&\\overline{\\mathcal{T}}_{2}^{U\\gets U}&\\overline{\\mathcal{T}}_ {2}^{U\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{V\\gets I}&\\overline{\\mathcal{T}}_{2}^{V \\gets Q}&\\overline{\\mathcal{T}}_{2}^{V\\gets U}&\\overline{\\mathcal{T}}_ {2}^{V\\gets V}\\end{array}\\right]\\left[\\begin{array}{c}\\mathbf{T}_{1}^{I}\\\\ \\mathbf{T}_{1}^{Q}\\\\ \\mathbf{T}_{1}^{U}\\\\ \\mathbf{T}_{1}^{V}\\end{array}\\right]+\\] \\[\\left[\\begin{array}{cc}\\overline{\\mathcal{T}}_{2}^{I\\gets I }&\\overline{\\mathcal{T}}_{2}^{I\\gets Q}&\\overline{\\mathcal{T}}_{2}^{I \\gets U}&\\overline{\\mathcal{T}}_{2}^{I\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{Q\\gets I}&\\overline{\\mathcal{T}}_{2}^{Q \\gets U}&\\overline{\\mathcal{T}}_{2}^{Q\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{U\\gets I}&\\overline{\\mathcal{T}}_{2}^{U \\gets Q}&\\overline{\\mathcal{T}}_{2}^{U\\gets V}\\\\ \\overline{\\mathcal{T}}_{2}^{V\\gets I}&\\overline{\\mathcal{T}}_{2}^{V \\gets Q}&\\overline{\\mathcal{T}}_{2}^{V\\gets V}\\end{array}\\right]\\left[ \\begin{array}{c}\\overline{\\mathcal{R}}_{1}^{*,I\\gets I}&\\overline{ \\mathcal{R}}_{1}^{*,I\\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,I\\gets U}& \\overline{\\mathcal{R}}_{1}^{*,I\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,Q\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,Q \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,Q\\gets U}&\\overline{\\mathcal{R}}_ {1}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,U\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,U \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,U\\gets U}&\\overline{\\mathcal{R}}_ {1}^{*,U\\gets V}\\\\ \\overline{\\mathcal{R}}_{1}^{*,V\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,V \\gets 
Q}&\\overline{\\mathcal{R}}_{1}^{*,V\\gets U}&\\overline{\\mathcal{R}}_{1}^{*,V\\gets V}\\end{array}\\right]\\overline{\\mathbf{X}}_{2}^{1}\\mathbf{Y}_{2}^{1} \\tag{10b}\\]

By applying the invariance principles to diffuse radiation, we obtain
\\[\\overline{\\mathbf{R}}_{1,2}=\\overline{\\mathbf{R}}_{1}+\\overline{\\mathbf{T}}_{1}^{*}\\,\\overline{\\mathbf{X}}_{2}^{1}\\,\\overline{\\mathbf{R}}_{2}\\,\\overline{\\mathbf{T}}_{1}, \\tag{11a}\\]
\\[\\overline{\\mathbf{T}}_{1,2}=\\overline{\\mathbf{T}}_{2}\\left(\\overline{\\mathbf{T}}_{1}+\\overline{\\mathbf{R}}_{1}^{*}\\,\\overline{\\mathbf{X}}_{2}^{1}\\,\\overline{\\mathbf{R}}_{2}\\,\\overline{\\mathbf{T}}_{1}\\right), \\tag{11b}\\]
where \\(\\overline{\\mathbf{R}}_{1,2}\\) and \\(\\overline{\\mathbf{T}}_{1,2}\\) are the \\(4N\\times 4N\\) diffuse reflection and transmission matrices of the combined layer, with the same block structure (\\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-component blocks) as in Eq.(6), and \\(\\overline{\\mathbf{R}}_{i}\\), \\(\\overline{\\mathbf{T}}_{i}\\), \\(\\overline{\\mathbf{R}}_{i}^{*}\\), \\(\\overline{\\mathbf{T}}_{i}^{*}\\) are the corresponding single-layer matrices. For a light beam incident from below, \\(\\mathbf{R}^{*}_{1,2}(\\mu_{0})\\), \\(\\mathbf{T}^{*}_{1,2}(\\mu_{0})\\), \\(\\overline{\\mathcal{R}}^{*}_{1,2}\\), and \\(\\overline{\\mathcal{T}}^{*}_{1,2}\\) can be obtained in a similar manner.
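As an illustration of how Eqs.(7)-(11) are evaluated numerically, the following is a minimal sketch of the two-layer adding step. It assumes the quadrature weights and the direct-beam exponentials along the viewing directions have already been folded into the diffuse matrices (as in Eq.(6) and statements (c)-(d)); the function `add_two_layers` and its argument layout are illustrative assumptions, not the actual POLDDA interface.

```python
import numpy as np

def add_two_layers(R1, T1, Rb1, Tb1, Rb1s, Tb1s,
                   R2, T2, Rb2, Tb2, tau1, mu0):
    """Combine two inhomogeneous layers following Eqs.(7)-(11).

    R1, T1, R2, T2 : (4N,) direct-beam reflection/transmission vectors
    Rb*, Tb*       : (4N, 4N) diffuse reflection/transmission matrices;
                     a trailing 's' marks illumination from below (starred).
    """
    n = Rb2.shape[0]
    E = np.eye(n)

    # X_2^1 = (E - Rbar_2 Rbar_1*)^{-1},  Y_2^1 = R_2 e^{-tau1/mu0} + Rbar_2 T_1
    X21 = np.linalg.inv(E - Rb2 @ Rb1s)
    Y21 = R2 * np.exp(-tau1 / mu0) + Rb2 @ T1

    A = X21 @ Y21            # upward field at level 2, Eq.(9a)
    D = T1 + Rb1s @ A        # downward field at level 2, Eq.(9b)

    # direct-beam quantities of the combined layer, Eq.(10)
    R12 = R1 + Tb1s @ A
    T12 = T2 * np.exp(-tau1 / mu0) + Tb2 @ D

    # diffuse quantities of the combined layer, Eq.(11)
    Rb12 = Rb1 + Tb1s @ X21 @ Rb2 @ Tb1
    Tb12 = Tb2 @ (Tb1 + Rb1s @ X21 @ Rb2 @ Tb1)

    return R12, T12, Rb12, Tb12, A, D
```

Repeating this step layer by layer gives the downward- and upward-path recursions of Eqs.(12)-(13) below.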
In the downward path calculation, this adding step is applied successively to multiple layers (from layer 1 to layer \\(k\\)).
\\[\\left[\\begin{array}{cc}\\mathbf{T}^{I}_{1,k}\\\\ \\mathbf{T}^{Q}_{1,k}\\\\ \\mathbf{T}^{U}_{1,k}\\\\ \\mathbf{T}^{V}_{1,k}\\end{array}\\right]=\\left[\\begin{array}{cc}\\mathbf{T}^{I} _{k}\\\\ \\mathbf{T}^{Q}_{k}\\\\ \\mathbf{T}^{U}_{k}\\\\ \\mathbf{T}^{V}_{k}\\end{array}\\right]e^{-\\frac{\\tau_{1,k-1}}{\\mu_{0}}}+\\left[ \\begin{array}{cc}\\overline{\\mathcal{T}}^{I\\gets I}_{k}&\\overline{ \\mathcal{T}}^{I\\gets Q}_{k}&\\overline{\\mathcal{T}}^{I\\gets U}_{k}& \\overline{\\mathcal{T}}^{I\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{Q\\gets I}_{k}&\\overline{\\mathcal{T}}^{Q\\gets Q }_{k}&\\overline{\\mathcal{T}}^{Q\\gets U}_{k}&\\overline{\\mathcal{T}}^{Q \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{U\\gets I}_{k}&\\overline{\\mathcal{T}}^{U\\gets Q }_{k}&\\overline{\\mathcal{T}}^{U\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\end{array}\\right]\\left[\\begin{array}{cc}\\mathbf{T}^{I}_{1,k-1} \\\\ \\mathbf{T}^{Q}_{1,k-1}\\\\ \\mathbf{T}^{U}_{1,k-1}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\end{array}\\right]\\left[\\begin{array}{cc}\\mathbf{T}^{V}_{1,k- 1}\\\\ \\mathbf{T}^{Q}_{1,k-1}\\\\ \\mathbf{T}^{V}_{1,k-1}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}\\end{array}\\right]+ \\tag{12a}\\] \\[\\left[\\begin{array}{cc}\\overline{\\mathcal{T}}^{I\\gets I}_{k}& \\overline{\\mathcal{T}}^{I\\gets Q}_{k}&\\overline{\\mathcal{T}}^{I\\gets U }_{k}&\\overline{\\mathcal{T}}^{I\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{Q\\gets I}_{k}&\\overline{\\mathcal{T}}^{Q\\gets Q }_{k}&\\overline{\\mathcal{T}}^{Q\\gets U}_{k}&\\overline{\\mathcal{T}}^{Q \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\end{array}\\right]=\\left[\\begin{array}{cc}\\overline{\\mathcal{R} }^{*,I\\gets I}_{k}&\\overline{\\mathcal{T}}^{*,I\\gets Q}_{k}&\\overline{ \\mathcal{R}}^{*,I\\gets U}_{k}&\\overline{\\mathcal{R}}^{*,I\\gets V}_{k}\\\\ \\overline{\\mathcal{R}}^{*,Q\\gets I}_{k}&\\overline{\\mathcal{R}}^{*,Q \\gets Q}_{k}&\\overline{\\mathcal{R}}^{*,Q\\gets U}_{k}&\\overline{\\mathcal{ R}}^{*,Q\\gets V}_{k}\\\\ \\overline{\\mathcal{R}}^{*,U\\gets I}_{k}&\\overline{\\mathcal{R}}^{*,U \\gets Q}_{k}&\\overline{\\mathcal{R}}^{*,U\\gets U}_{k}&\\overline{\\mathcal{ R}}^{*,U\\gets V}_{k}\\\\ \\overline{\\mathcal{R}}^{*,V\\gets I}_{k}&\\overline{\\mathcal{R}}^{*,V \\gets Q}_{k}&\\overline{\\mathcal{R}}^{*,V\\gets Q}_{k}&\\overline{\\mathcal{ R}}^{*,V\\gets U}_{k}&\\overline{\\mathcal{R}}^{*,V\\gets V}_{k}\\end{array}\\right]\\] \\[+\\left[\\begin{array}{cc}\\overline{\\mathcal{T}}^{I\\gets I}_{k}& \\overline{\\mathcal{T}}^{I\\gets Q}_{k}&\\overline{\\mathcal{T}}^{I\\gets U }_{k}&\\overline{\\mathcal{T}}^{I\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{Q\\gets I}_{k}&\\overline{\\mathcal{T}}^{Q\\gets Q }_{k}&\\overline{\\mathcal{T}}^{Q\\gets U}_{k}&\\overline{\\mathcal{T}}^{Q \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{U\\gets I}_{k}&\\overline{\\mathcal{T}}^{U\\gets Q }_{k}&\\overline{\\mathcal{T}}^{U\\gets U}_{k}&\\overline{\\mathcal{T}}^{U \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q 
}_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\end{array}\\right]\\overline{\\mathbf{X}}^{*1,k-1}_{k}\\left[ \\begin{array}{cc}\\overline{\\mathcal{R}}^{*,I\\gets I}_{1,k-1}&\\overline{ \\mathcal{R}}^{*,I\\gets Q}_{1,k-1}&\\overline{\\mathcal{R}}^{*,I\\gets U}_{1,k- 1}&\\overline{\\mathcal{R}}^{*,I\\gets V}_{1,k-1}\\\\ \\overline{\\mathcal{R}}^{*,Q\\gets I}_{1,k-1}&\\overline{\\mathcal{R}}^{*,Q \\gets Q}_{1,k-1}&\\overline{\\mathcal{R}}^{*,Q\\gets U}_{1,k-1}&\\overline{ \\mathcal{R}}^{*,Q\\gets V}_{1,k-1}\\\\ \\overline{\\mathcal{T}}^{*,U\\gets I}_{k}&\\overline{\\mathcal{T}}^{*,U \\gets Q}_{k}&\\overline{\\mathcal{T}}^{U\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{V\\gets I}_{k}&\\overline{\\mathcal{T}}^{V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{V\\gets U}_{k}&\\overline{\\mathcal{T}}^{V \\gets V}_{k}\\end{array}\\right]\\] \\[\\times\\left[\\begin{array}{cc}\\overline{\\mathcal{T}}^{*,I \\gets I}_{k}&\\overline{\\mathcal{T}}^{*,I\\gets Q}_{k}&\\overline{ \\mathcal{T}}^{*,I\\gets U}_{k}&\\overline{\\mathcal{T}}^{*,I\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{*,Q\\gets I}_{k}&\\overline{\\mathcal{T}}^{*,Q \\gets Q}_{k}&\\overline{\\mathcal{T}}^{*,Q\\gets U}_{k}&\\overline{\\mathcal{ T}}^{*,Q\\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{*,U\\gets I}_{k}&\\overline{\\mathcal{T}}^{*,U\\gets Q }_{k}&\\overline{\\mathcal{T}}^{*,U\\gets U}_{k}&\\overline{\\mathcal{T}}^{*,U \\gets V}_{k}\\\\ \\overline{\\mathcal{T}}^{*,V\\gets I}_{k}&\\overline{\\mathcal{T}}^{*,V\\gets Q }_{k}&\\overline{\\mathcal{T}}^{*,V\\gets U}_{k}&\\overline{\\mathcal{T}}^{*,V \\gets V}_{k}\\end{array}\\right] \\tag{12b}\\]
where \\(\\overline{\\mathbf{X}}^{*j}_{k}\\) is provided in Appendix B.
Similarly, in the upward path calculation, the layers are combined successively from the surface up to layer \\(k+1\\):
\\[\\left[\\begin{array}{c}\\mathbf{R}_{k,N}^{I}\\\\ \\mathbf{R}_{k,N}^{Q}\\\\ \\mathbf{R}_{k,N}^{U}\\\\ \\mathbf{R}_{k,N}^{V}\\end{array}\\right]=\\left[\\begin{array}{c}\\mathbf{R}_{k}^{ I}\\\\ \\mathbf{R}_{k}^{Q}\\\\ \\mathbf{R}_{k}^{U}\\\\ \\mathbf{R}_{k}^{V}\\end{array}\\right]+\\left[\\begin{array}{cccc}\\overline{ \\mathcal{T}}_{k}^{*,I\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,I\\gets Q}& \\overline{\\mathcal{T}}_{k}^{*,I\\gets U}&\\overline{\\mathcal{T}}_{k}^{*,I \\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,Q\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,Q \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,Q\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,U\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,U \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,U\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,U\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,V\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,V \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,V\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,V\\gets V}\\end{array}\\right]\\mathbf{\\overline{X}}_{k}^{k+1,N}\\mathbf{Y }_{k}^{k+1,N} \\tag{13a}\\] \\[\\left[\\begin{array}{cccc}\\overline{\\mathcal{R}}_{k,N}^{I \\gets I}&\\overline{\\mathcal{R}}_{k,N}^{I\\gets Q}&\\overline{\\mathcal{R} }_{k,N}^{I\\gets U}&\\overline{\\mathcal{R}}_{k,N}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}_{k,N}^{Q\\gets I}&\\overline{\\mathcal{R}}_{k,N}^{Q \\gets Q}&\\overline{\\mathcal{R}}_{k,N}^{Q\\gets U}&\\overline{\\mathcal{R} }_{k,N}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{k,N}^{U\\gets I}&\\overline{\\mathcal{R}}_{k,N}^{U \\gets Q}&\\overline{\\mathcal{R}}_{k,N}^{U\\gets U}&\\overline{\\mathcal{R} }_{k,N}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{k,N}^{V\\gets I}&\\overline{\\mathcal{R}}_{k,N}^{V \\gets Q}&\\overline{\\mathcal{R}}_{k,N}^{V\\gets V}\\\\ \\overline{\\mathcal{R}}_{k,N}^{V\\gets I}&\\overline{\\mathcal{R}}_{k,N}^{V \\gets Q}&\\overline{\\mathcal{R}}_{k,N}^{V\\gets V}\\end{array}\\right]= \\left[\\begin{array}{cccc}\\overline{\\mathcal{R}}_{k}^{I\\gets I}& \\overline{\\mathcal{R}}_{k}^{I\\gets Q}&\\overline{\\mathcal{R}}_{k}^{I \\gets U}&\\overline{\\mathcal{R}}_{k}^{I\\gets U}\\\\ \\overline{\\mathcal{R}}_{k}^{Q\\gets I}&\\overline{\\mathcal{R}}_{k}^{Q \\gets Q}&\\overline{\\mathcal{R}}_{k}^{Q\\gets U}&\\overline{\\mathcal{R} }_{k}^{Q\\gets U}&\\overline{\\mathcal{R}}_{k}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{k}^{U\\gets I}&\\overline{\\mathcal{R}}_{k}^{U \\gets Q}&\\overline{\\mathcal{R}}_{k}^{U\\gets U}&\\overline{\\mathcal{R} }_{k}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{k}^{V\\gets I}&\\overline{\\mathcal{R}}_{k}^{V \\gets Q}&\\overline{\\mathcal{R}}_{k}^{V\\gets U}&\\overline{\\mathcal{R} }_{k}^{V\\gets V}\\end{array}\\right]\\] \\[+\\left[\\begin{array}{cccc}\\overline{\\mathcal{T}}_{k}^{*,I \\gets I}&\\overline{\\mathcal{T}}_{k}^{*,I\\gets Q}&\\overline{\\mathcal{T }}_{k}^{*,I\\gets U}&\\overline{\\mathcal{T}}_{k}^{*,I\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,Q\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,Q \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,Q\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,U\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,U \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,U\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,U\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{*,V\\gets I}&\\overline{\\mathcal{T}}_{k}^{*,V \\gets Q}&\\overline{\\mathcal{T}}_{k}^{*,V\\gets U}&\\overline{\\mathcal{T} }_{k}^{*,V\\gets V}\\end{array}\\right]\\mathbf{\\overline{X}}_{k}^{k+1,N}\\left[ 
\\begin{array}{cccc}\\overline{\\mathcal{R}}_{k+1,N}^{I\\gets I}&\\overline{ \\mathcal{R}}_{k+1,N}^{I\\gets Q}&\\overline{\\mathcal{R}}_{k+1,N}^{I\\gets U }&\\overline{\\mathcal{R}}_{k+1,N}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}_{k+1,N}^{Q\\gets I}&\\overline{\\mathcal{R}}_{k+1,N}^{Q \\gets Q}&\\overline{\\mathcal{R}}_{k+1,N}^{Q\\gets U}&\\overline{\\mathcal{R }}_{k+1,N}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{k+1,N}^{U\\gets I}&\\overline{\\mathcal{R}}_{k+1,N}^{U \\gets Q}&\\overline{\\mathcal{R}}_{k+1,N}^{U\\gets U}&\\overline{\\mathcal{R }}_{k+1,N}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{k+1,N}^{V\\gets I}&\\overline{\\mathcal{R}}_{k+1,N}^{V \\gets Q}&\\overline{\\mathcal{R}}_{k+1,N}^{V\\gets U}&\\overline{\\mathcal{R }}_{k+1,N}^{V\\gets V}\\end{array}\\right]\\] \\[\\times\\left[\\begin{array}{cccc}\\overline{\\mathcal{T}}_{k}^{I \\gets I}&\\overline{\\mathcal{T}}_{k}^{I\\gets Q}&\\overline{\\mathcal{T}}_{k}^{I \\gets U}&\\overline{\\mathcal{T}}_{k}^{I\\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{Q\\gets I}&\\overline{\\mathcal{T}}_{k}^{Q \\gets Q}&\\overline{\\mathcal{T}}_{k}^{Q\\gets U}&\\overline{\\mathcal{T}}_{k}^{Q \\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{U\\gets I}&\\overline{\\mathcal{T}}_{k}^{U \\gets Q}&\\overline{\\mathcal{T}}_{k}^{U\\gets U}&\\overline{\\mathcal{T}}_{k}^{U \\gets V}\\\\ \\overline{\\mathcal{T}}_{k}^{V\\gets I}&\\overline{\\mathcal{T}}_{k}^{V\\gets Q}& \\overline{\\mathcal{T}}_{k}^{V\\gets U}&\\overline{\\mathcal{T}}_{k}^{V\\gets V} \\end{array}\\right] \\tag{13b}\\]
where \\(\\mathbf{R}_{N}=\\left[\\begin{array}{c}\\mathbf{R}_{N}^{I}\\\\ \\mathbf{R}_{N}^{Q}\\\\ \\mathbf{R}_{N}^{U}\\\\ \\mathbf{R}_{N}^{V}\\end{array}\\right]\\) is the surface albedo and \\(\\overline{\\mathcal{R}}_{N}=\\left[\\begin{array}{cccc}\\overline{\\mathcal{R}}_{N}^{I \\gets I}&\\overline{\\mathcal{R}}_{N}^{I\\gets Q}&\\overline{\\mathcal{R}}_{N}^{I \\gets U}&\\overline{\\mathcal{R}}_{N}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}_{N}^{Q\\gets I}&\\overline{\\mathcal{R}}_{N}^{Q\\gets Q}& \\overline{\\mathcal{R}}_{N}^{Q\\gets U}&\\overline{\\mathcal{R}}_{N}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{N}^{U\\gets I}&\\overline{\\mathcal{R}}_{N}^{U\\gets Q}& \\overline{\\mathcal{R}}_{N}^{U\\gets U}&\\overline{\\mathcal{R}}_{N}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{N}^{V\\gets I}&\\overline{\\mathcal{R}}_{N}^{V\\gets Q}& \\overline{\\mathcal{R}}_{N}^{V\\gets U}&\\overline{\\mathcal{R}}_{N}^{V\\gets V} \\end{array}\\right]\\)
is the reflection property of the surface.
Thus, the internal Stokes parameters at level \\(k+1\\) can be determined as follows:
\\[\\left[\\begin{array}{c}\\mathbf{A}_{k+1}^{I}\\\\ \\mathbf{A}_{k+1}^{Q}\\\\ \\mathbf{A}_{k+1}^{U}\\\\ \\mathbf{A}_{k+1}^{V}\\\\ \\mathbf{A}_{k+1}^{V}\\end{array}\\right]=\\overline{\\mathbf{X}}_{1,k}^{k+1,N} \\mathbf{Y}_{1,k}^{k+1,N}, \\tag{14a}\\] \\[\\left[\\begin{array}{c}\\mathbf{D}_{k+1}^{I}\\\\ \\mathbf{D}_{k+1}^{Q}\\\\ \\mathbf{D}_{k+1}^{U}\\\\ \\mathbf{D}_{k+1}^{V}\\end{array}\\right]=\\left[\\begin{array}{c}\\mathbf{T}_{1,k }^{I}\\\\ \\mathbf{T}_{1,k}^{Q}\\\\ \\mathbf{T}_{1,k}^{U}\\\\ \\mathbf{T}_{1,k}^{V}\\end{array}\\right]+\\left[\\begin{array}{ccc}\\overline{ \\mathcal{R}}_{1,k}^{*,I\\gets I}&\\overline{\\mathcal{R}}_{1,k}^{*,I\\gets Q }&\\overline{\\mathcal{R}}_{1,k}^{*,I\\gets U}&\\overline{\\mathcal{R}}_{1,k}^ {*,I\\gets V}\\\\ \\overline{\\mathcal{R}}_{1,k}^{*,Q\\gets I}&\\overline{\\mathcal{R}}_{1,k}^ {*,Q\\gets Q}&\\overline{\\mathcal{R}}_{1,k}^{*,Q\\gets U}&\\overline{ \\mathcal{R}}_{1,k}^{*,Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{1,k}^{*,U\\gets I}&\\overline{\\mathcal{R}}_{1,k}^ {*,U\\gets Q}&\\overline{\\mathcal{R}}_{1,k}^{*,U\\gets U}&\\overline{ \\mathcal{R}}_{1,k}^{*,U\\gets V}\\\\ \\overline{\\mathcal{R}}_{1,k}^{*,V\\gets I}&\\overline{\\mathcal{R}}_{1,k}^ {*,V\\gets Q}&\\overline{\\mathcal{R}}_{1,k}^{*,V\\gets U}&\\overline{ \\mathcal{R}}_{1,k}^{*,V\\gets V}\\\\ \\end{array}\\right]\\overline{\\mathbf{X}}_{1,k}^{k+1,N}\\mathbf{Y}_{1,k}^{k+1,N}. \\tag{14b}\\]
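The overall layer-combination strategy of Eqs.(12)-(14) can be summarized as two sweeps over the layer stack, sketched below in schematic form. The helper `combine` stands for a two-layer adding step such as the `add_two_layers` sketch above; the function name and the data layout are assumptions made here for illustration.

```python
def multilayer_sweep(layers, combine):
    """Schematic POLDDA multi-layer pass.

    layers  : per-layer operator sets (e.g. tuples of R, T, Rbar, Tbar, ...),
              ordered from the top of the atmosphere down to the surface.
    combine : two-layer adding step (upper stack, lower layer) -> combined stack.

    Downward pass: builds the stacks 1..k for every k (Eq. 12).
    Upward pass  : builds the stacks k..N, including the surface, for every k (Eq. 13).
    The internal field at each level then follows from the two partial
    stacks that meet there, as in Eq.(14).
    """
    n = len(layers)
    down = [layers[0]]
    for k in range(1, n):
        down.append(combine(down[-1], layers[k]))
    up = [layers[-1]]
    for k in range(n - 2, -1, -1):
        up.insert(0, combine(layers[k], up[0]))
    return down, up
```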
An adjusted absorption and scattering atmosphere may be considered to incorporate the forward peak contribution into multiple scattering. The fraction of the scattered energy residing in the forward peak \\(f\\) is separated from the phase function. This is the \\(\\delta\\)-M adjustment [38] used in the proposed model.
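For reference, a standard form of the \\(\\delta\\)-M scaling (an assumption here; the paper's exact normalization may differ) is as follows. Writing the phase-function expansion moments as \\(g_{l}\\) (with \\(g_{0}=1\\)) and truncating the expansion at \\(2N\\) terms, the forward-peak fraction is commonly taken as \\(f=g_{2N}\\), and the optical properties are rescaled as
\\[\\tau^{\\prime}=(1-\\omega f)\\,\\tau,\\qquad\\omega^{\\prime}=\\frac{(1-f)\\,\\omega}{1-\\omega f},\\qquad g_{l}^{\\prime}=\\frac{g_{l}-f}{1-f},\\]
after which the truncated problem is solved with the scaled optical depth, single scattering albedo, and moments.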
The intensity at an arbitrary zenith angle for satellite applications can be obtained by replacing \\(\\mu_{i}\\) in Eq.(4) with the cosine of the satellite viewing zenith angle \\(\\mu_{sat}\\). The entire calculation process is the same for \\(I(\\tau,\\pm\\mu_{sat})\\), \\(Q(\\tau,\\pm\\mu_{sat})\\), \\(U(\\tau,\\pm\\mu_{sat})\\), and \\(V(\\tau,\\pm\\mu_{sat})\\). We call this new method the polarized discrete ordinate adding approximation (POLDDA) for vector solar radiative transfer.
In the PolRadtran/RT3 model, a code based on the adding-doubling method, the adding method must always be used together with the doubling method because multiple scattering is not accounted for in the single-layer solution, and the calculation time therefore increases with optical thickness. In POLDDA, however, no doubling process is required, and the adding method can be used directly after the single-layer solution (which already includes multiple scattering). Therefore, in theory, the calculation time does not change with the optical depth of a layer. This point is demonstrated in Section 4.3.
## 4 Comparison Results
In this section, the calculation accuracy and efficiency of POLDDA are evaluated by comparing with the Monte Carlo model (MYSTIC) [39] and PolRadtran/RT3, where the MYSTIC model is considered the benchmark. Test cases for both single-layer and multi-layer atmospheres are included. POLDDA and RT3 were executed with 16 streams (half-sphere) for all Rayleigh cases, and 32 streams/64 streams (half-sphere) for the water cloud case.
### Single-layer test cases
In the first set of cases, a single layer of atmospheric molecules with different depolarization factors and no surface reflection is tested. A further case includes Lambertian surface reflection.
#### 4.1.1 Case 1 - Rayleigh scattering with different depolarization factor
Case 1 (including cases 1-1, 1-2, 1-3) was used to verify the correct treatment of the anisotropy of the molecules through the Rayleigh depolarization factor. It contains one layer of non-absorbing molecules (single scattering albedo \\(\\omega=1\\)) and an optical thickness of 0.5. Surface reflection was not considered (the surface albedo was 0). Stokes vectors were calculated for various depolarization factors and sun positions. The definition of the viewing zenith angle is with respect to the downward normal instead of the upward normal, so viewing zenith angles are 0\\({}^{\\circ}\\) -80\\({}^{\\circ}\\) (down-looking) at the bottom and 100\\({}^{\\circ}\\)-180\\({}^{\\circ}\\) (up-looking) at the top with 5\\({}^{\\circ}\\) increments. The results at viewing zenith angle near 90\\({}^{\\circ}\\) (direction near horizontal) are not shown. The viewing azimuthal angle definition is clockwise and ranged from 0\\({}^{\\circ}\\) to 360\\({}^{\\circ}\\) (5\\({}^{\\circ}\\) increments).
Test case 1-1 had the simplest setup with a zero Rayleigh depolarization factor. The Stokes vectors at the bottom and at the TOA were calculated for a solar zenith angle \\(\\theta_{0}=\\)0\\({}^{\\circ}\\) and a solar azimuth angle \\(\\phi_{0}=\\)65\\({}^{\\circ}\\). The left plots in Figure 2 present the results calculated using POLDDA. The absolute errors and relative differences of POLDDA and RT3 with respect to the MYSTIC model are also shown. The \\(U\\)-component is zero because the sun was located at the zenith.
From Fig. 2, the relative differences of POLDDA are smaller than 0.1% for the \\(I\\)-component, which is almost the same as for RT3 in both the upward and downward directions. The relative biases are mostly below 2% for the \\(Q\\)-component at the top and bottom of the atmosphere for both POLDDA and RT3. At one point, an abnormally large relative bias appears at viewing azimuth angles of 180\\({}^{\\circ}\\) (top) and 0\\({}^{\\circ}\\) (bottom); this is because values of the \\(Q\\)-component near zero are unsuitable as a denominator when calculating relative errors. The absolute-error plots show no correspondingly large error, which confirms this interpretation. When these abnormally large relative differences are removed, the relative-difference plot is consistent with the absolute-difference plot (as shown in Fig. 2). The same phenomenon also appears in subsequent cases, but the outliers are not removed again. The relative root mean square errors (RMSE) for all the test cases are listed in Table 1. We calculated the RMSE for the \\(I\\)-component between MYSTIC and POLDDA or RT3 using RMSE=\\(\\frac{\\sqrt{\\sum_{i=1}^{N}(I_{MYSTIC}^{i}-I_{o}^{i})^{2}}}{\\sqrt{\\sum_{i=1}^{N}(I_{MYSTIC}^{i})^{2}}}\\) (where \\(I_{o}\\) refers to the other models) for the radiation field including all down- and up-looking directions. The RMSEs of POLDDA and RT3 are 0.0177% for the \\(I\\)-component and 0.0226% for the \\(Q\\)-component, which also demonstrates the accuracy of POLDDA in this case.
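A minimal implementation of this relative RMSE metric (function name chosen here for illustration) is:

```python
import numpy as np

def relative_rmse(I_mystic, I_other):
    """Relative RMSE pooled over all down- and up-looking directions,
    with MYSTIC taken as the benchmark."""
    I_mystic = np.asarray(I_mystic, dtype=float).ravel()
    I_other = np.asarray(I_other, dtype=float).ravel()
    return np.sqrt(np.sum((I_mystic - I_other) ** 2)) / np.sqrt(np.sum(I_mystic ** 2))
```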
Test cases 1-2 and 1-3 consider two cases in which the depolarization factor is not zero: it is 0.03 in case 1-2 and 0.1 in case 1-3. The solar position (\\(\\theta_{0}\\) =30\\({}^{\\circ}\\), \\(\\phi_{0}\\)=65\\({}^{\\circ}\\)) is the same in cases 1-2 and 1-3. The \\(U\\)-component is nonzero because the solar zenith angle is not zero. Figures 3 and 4 present the Stokes vector results at the TOA and bottom, respectively. As shown in Fig. 3, the relative bias of the \\(I\\)-component for POLDDA lies between -0.05% and 0.05%. The largest bias of the \\(Q\\)-component reaches about 2%, at viewing azimuth angles of 120\\({}^{\\circ}\\)-150\\({}^{\\circ}\\) and near 170\\({}^{\\circ}\\) and 285\\({}^{\\circ}\\), for both POLDDA and RT3. The \\(U\\)-component calculated by POLDDA and RT3 does not show a significant difference, except near viewing azimuth angles of 0\\({}^{\\circ}\\) and 180\\({}^{\\circ}\\), where the \\(U\\)-component is close to zero. The large biases of the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components yielded by POLDDA at the bottom (Fig. 4) are almost the same as those of RT3. For Case 1-2, the RMSEs of POLDDA are close to those of RT3 (Table 1).
In Case 1-3, although the depolarization factor is larger, the bias of POLDDA (Figures 5 and 6) still differs only slightly from that of RT3. The RMSEs of POLDDA are 0.0166%, 0.025%, and 0.0229% (Table 1) for the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components, respectively, which are similar to those of RT3.
#### 4.1.2 Case 2 - Rayleigh atmosphere with Lambertian surface
Surface reflection was considered in Case 2, with a Lambertian surface albedo of 0.3 and a single non-absorbing layer of optical depth 0.1. The solar position was \\(\\theta_{0}=\\)50\\({}^{\\circ}\\) and \\(\\phi_{0}=\\)0\\({}^{\\circ}\\), and the Rayleigh depolarization factor is the same as that in Case 1-3. The Stokes vectors (\\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-components) were calculated for viewing zenith angles of 0-80\\({}^{\\circ}\\) (100-180\\({}^{\\circ}\\)) at the bottom (top) with 5\\({}^{\\circ}\\) increments, and viewing azimuth angles of 0-180\\({}^{\\circ}\\) with 5\\({}^{\\circ}\\) increments.
Figure 7 shows the results of Case 2. The relative bias of POLDDA is less than 0.05% for the \\(I\\)-component. A relative bias greater than 1% occurs for the \\(Q\\)-component at viewing azimuth angles of approximately 170\\({}^{\\circ}\\) at the top of the layer and approximately 30\\({}^{\\circ}\\) at the bottom. For the \\(U\\)-component, a relative bias of over 5% is observed at viewing azimuth angles of 0\\({}^{\\circ}\\)-10\\({}^{\\circ}\\) at the bottom and 100\\({}^{\\circ}\\)-110\\({}^{\\circ}\\) at the top of the layer. Significant differences were not found between the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components calculated using POLDDA and RT3. The RMSEs of POLDDA and RT3 also remain similar when nonzero Lambertian surface reflection is included.
### Test cases for realistic atmosphere with multi-layer
In this set of cases, the coupling between POLDDA model layers was examined using a multi-layer plane-parallel atmosphere. The adding method is used to handle the multi-layer connections. Unlike RT3, POLDDA requires no doubling process; the adding process alone is applied to the single-layer reflection and transmission functions.
The U.S. standard atmosphere [40] was used for the multi-layer cases. The model atmosphere comprised 30 layers from 0 km to 30 km in altitude with a thickness of 1 km. The surface albedo was set to 0 for Cases 3 and 4 and 0.2 for Case 5. The Rayleigh scattering depolarization factor is 0.03. The sun's position is \\(\\theta_{0}\\) =60\\({}^{\\circ}\\) and \\(\\phi_{0}\\) =0\\({}^{\\circ}\\). For Cases 3 and 4, the radiance field was calculated at the surface (0 km) and TOA (30 km) for viewing zenith angles from 0\\({}^{\\circ}\\) to 85\\({}^{\\circ}\\) (up-looking), and 95\\({}^{\\circ}\\) to 180\\({}^{\\circ}\\) (down-looking) in 5\\({}^{\\circ}\\) increments. The viewing azimuth angle ranged from 0\\({}^{\\circ}\\) to 180\\({}^{\\circ}\\) in 5\\({}^{\\circ}\\) increments. For Case 5, the radiance field was calculated at the surface (0 km) and TOA (30 km) for viewing zenith angles from 0\\({}^{\\circ}\\) to 90\\({}^{\\circ}\\) (up-looking) and 90\\({}^{\\circ}\\) to 180\\({}^{\\circ}\\) (down-looking) at a 90\\({}^{\\circ}\\) viewing azimuth angle.
#### 4.2.1 Case 3 - Only Rayleigh scattering with standard atmosphere
The radiance field was calculated at 450 nm. Only Rayleigh scattering was considered. The profiles of the scattering optical thickness are depicted in Figure 8(a) and can be downloaded from the International Polarized Radiative Transfer (IPRT) website.
Figure 9 shows the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components at the output altitudes for a multi-layer atmosphere. As for the one-layer cases, good agreement between the three models was observed for pure Rayleigh scattering with a zero depolarization factor. In the inhomogeneous multi-layer case, the three models also agree well, except near 0\\({}^{\\circ}\\)-10\\({}^{\\circ}\\) viewing azimuth angles at the bottom and 100\\({}^{\\circ}\\)-110\\({}^{\\circ}\\) at the TOA for the \\(U\\)-component. The bias of POLDDA is less than 0.2% for the \\(I\\)-component and 0.5% for the \\(Q\\)-component. The RMSEs of POLDDA for the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components are only 0.017%, 0.0266%, and 0.0198%, respectively, against MYSTIC (Table 1).
#### 4.2.2 Case 4 - Rayleigh scattering and absorption with standard atmosphere
In this case, it was determined whether the absorption effect was appropriately considered. The radiance field was calculated at a wavelength of 325 nm, where the ozone absorption is strong. The profiles of scattering optical thicknesses and absorption optical thicknesses are illustrated in Figures 8 (b-c), which can also be downloaded from the IPRT website.
Figure 10 shows the \\(I\\)-, \\(Q\\)-, and \\(U\\)-component results at the top and at the surface. Similar to Case 3, there is little difference between the \\(I\\)-, \\(Q\\)-, and \\(U\\)-components calculated by POLDDA and RT3. The difference between the RMSEs of POLDDA and RT3 decreases compared with the values in Case 3, owing to the addition of gas absorption, particularly for the \\(I\\)-component.
#### 4.2.3 Case 5 - Standard atmosphere with cloud layer
In this case, a cloud layer was added between 2 and 3 km. The radiance was calculated at 800 nm. The cloud optical thickness is 5, which is much larger than that of the purely molecular atmosphere. The profiles of the Rayleigh scattering optical thickness with no molecular absorption are shown in Figure 8(d). The single-scattering albedo of the cloud layer is 0.999979, and the cloud phase function was calculated using Mie scattering. POLDDA and RT3 were executed with 32 and 64 streams (half-sphere), respectively.
Figures 11 and 12 show the \\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-component results for RT3 (64 streams), and the relative differences between POLDDA and RT3 at the top and at the surface, respectively. The relative differences between POLDDA and RT3 are less than 0.01%, 0.025%, 0.05%, and 0.05% for the \\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-components at the TOA (Fig. 11), respectively, and are notably close to zero as \\(|cos\\theta|\\) decreases. The corresponding results at the surface are shown in Fig. 12.
### Computational efficiency

The computational efficiency of POLDDA was compared with that of RT3 for increasing optical depth (Fig. 13a). A two-layer atmosphere with only molecular scattering is considered, and the optical depth of each layer ranges from 0.001 to 100. The number of half-streams was 16 for both POLDDA and RT3. Admittedly, the RT3 model has better computational efficiency when the optical depth is less than 0.0001 because it ignores multiple scattering effects when solving the equation for a single-layer atmosphere. However, the run time of RT3 increases with optical depth; the computational time increases by more than four times when the optical depth reaches 100. Unlike RT3, the computing time of POLDDA does not increase with the optical depth of each layer. The run-time advantage of POLDDA becomes apparent once the optical depth exceeds 0.0005. Following this trend, POLDDA saves more computational time than RT3 in aerosol-laden or cloudy atmospheres (optically thicker than 1). The computational efficiencies of POLDDA and RT3 with different numbers of half-streams are also considered (Fig. 13b). The layer optical depth was set to 1. As shown in Fig. 13(b), the computational time increases with the number of streams for both POLDDA and RT3, whereas the computational time for RT3 grows longer than that of POLDDA as the number of streams increases.
## 5 Conclusions and discussion
In this study, the polarized adding method of discrete ordinate approximation (POLDDA) was developed for the ultraviolet-visible and near-infrared spectra. The single-layer polarized radiative transfer equation and the inhomogeneous multi-layer connection are solved using the discrete ordinate method and the adding method, respectively. The accuracy evaluation for a multi-layer standard atmosphere with Rayleigh scattering and a cloud layer shows that POLDDA agrees closely with PolRadtran/RT3. The RMSE values of the Stokes vectors of POLDDA and RT3 against MYSTIC were found to be similar, which further confirms the accuracy of POLDDA. POLDDA also has high computational efficiency compared with RT3, particularly for layers with an optical depth above 0.0005. Moreover, the computation time of POLDDA does not increase with the optical depth of each layer.
## Appendix A
From Eq.(1), we expand the phase matrix and \\(\\mathbf{L}\\) into the Fourier cosine and sine series,
\\[\\left[\\begin{array}{l}I(\\tau,\\mu,\\varphi)\\\\ Q(\\tau,\\mu,\\varphi)\\\\ U(\\tau,\\mu,\\varphi)\\\\ V(\\tau,\\mu,\\varphi)\\end{array}\\right]=\\sum_{m=0}^{2M-1}\\left[\\begin{array}{l}I^{m}(\\tau,\\mu)\\cos m(\\varphi-\\varphi_{0})\\\\ Q^{m}(\\tau,\\mu)\\cos m(\\varphi-\\varphi_{0})\\\\ U^{m}(\\tau,\\mu)\\sin m(\\varphi-\\varphi_{0})\\\\ V^{m}(\\tau,\\mu)\\sin m(\\varphi-\\varphi_{0})\\end{array}\\right]\\] (A.1a) \\[\\mathbf{Z}=\\left[\\begin{array}{cccc}z_{11}&z_{12}&z_{13}&z_{14}\\\\ z_{21}&z_{22}&z_{23}&z_{24}\\\\ z_{31}&z_{32}&z_{33}&z_{34}\\\\ z_{41}&z_{42}&z_{43}&z_{44}\\end{array}\\right]\\] (A.1b)
where \\(z_{xy}(\\mu,\\varphi,\\mu^{\\prime},\\varphi^{\\prime})=\\sum_{m=0}^{2M-1}\\left\\{\\begin{array}{l}z_{xy}^{m}(\\mu,\\mu^{\\prime})\\cos m(\\varphi-\\varphi^{\\prime})\\\\ z_{xy}^{m}(\\mu,\\mu^{\\prime})\\sin m(\\varphi-\\varphi^{\\prime})\\end{array}\\right.\\) Thus, the radiation transfer equation can be split into 2M equations.
The discrete ordinate method is used to solve the equation. To achieve this, a Gaussian quadrature was used to handle the integration in Eq.(1) as
\\[\\int_{-1}^{1}\\mathbf{Z}^{m}(\\mu,\\mu^{\\prime})\\mathbf{L}^{m}(\\tau,\\mu^{\\prime})d\\mu^{\\prime}=\\sum_{j=-N,j\\neq 0}^{N}a_{j}\\mathbf{Z}^{m}(\\mu,\\mu_{j})\\mathbf{L}^{m}(\\tau,\\mu_{j})\\] (A.2)
where \\(N\\) denotes the number of half-streams, and \\(\\mu_{j}=-\\mu_{-j}\\) and \\(a_{j}=a_{-j}\\) (\\(j=1,2,\\ldots,N\\)) denote the quadrature angles and weights, respectively.
Upon substituting Eqs.(A1)-(A2) into Eq.(1) and neglecting the superscript \\(m\\), Eq.(1) can be written as follows:
\\[\\begin{split}\\mu\\frac{d\\mathbf{L}(\\tau,\\mu_{i})}{d\\tau}=&\\mathbf{L}(\\tau,\\mu_{i})-\\frac{\\omega}{4}(1+\\delta_{0,m})\\sum_{j=-N,j\\neq 0}^{N}a_{j}\\mathbf{Z}(\\mu_{i},\\mu_{j})\\mathbf{L}(\\tau,\\mu_{j})\\\\ &-\\frac{\\omega}{4\\pi}\\mathbf{Z}(\\mu_{i},-\\mu_{0})\\mathbf{F_{0}}e^{-\\tau/\\mu_{0}}.\\end{split}\\] (A.3)
where the quadrature angles \\(\\mu_{j}\\) and weights \\(a_{j}\\) are as defined in Eq.(A.2). The resulting coupled equations can be written in matrix form as Eq.(2).
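As a concrete illustration of one possible choice of the quadrature in Eq.(A.2), the following sketch maps Gauss-Legendre nodes and weights to the half range \\((0,1]\\); the actual rule used in POLDDA may differ.

```python
import numpy as np

def half_range_gauss(N):
    """Gauss-Legendre nodes/weights on (0, 1], one possible choice for the
    half-stream angles mu_j and weights a_j of Eq.(A.2)."""
    x, w = np.polynomial.legendre.leggauss(N)  # nodes/weights on [-1, 1]
    mu = 0.5 * (x + 1.0)                       # map to (0, 1)
    a = 0.5 * w
    return mu, a

# full set of quadrature points: mu_{-j} = -mu_j, a_{-j} = a_j
```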
**Appendix B**
In Eq.(7),
\\[\\mathbf{A}^{I}=\\left[\\begin{array}{l}R^{I}(\\tau_{1},\\mu_{1}),\\cdots,R^{I}( \\tau_{1},\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}},\\mathbf{A}^{Q}=\\left[ \\begin{array}{l}R^{Q}(\\tau_{1},\\mu_{1}),\\cdots,R^{Q}(\\tau_{1},\\mu_{N})\\end{array} \\right]_{N\\times 1}^{\\mathbb{T}},\\] \\[\\mathbf{A}^{U}=\\left[\\begin{array}{l}R^{U}(\\tau_{1},\\mu_{1}), \\cdots,R^{U}(\\tau_{1},\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}}, \\mathbf{A}^{V}=\\left[\\begin{array}{l}R^{V}(\\tau_{1},\\mu_{1}),\\cdots,R^{V}( \\tau_{1},\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}},\\] \\[\\mathbf{D}^{I}=\\left[\\begin{array}{l}T^{I}(\\tau_{1},-\\mu_{1}), \\cdots,T^{I}(\\tau_{1},-\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}}, \\mathbf{D}^{Q}=\\left[\\begin{array}{l}T^{Q}(\\tau_{1},-\\mu_{1}),\\cdots,T^{Q}( \\tau_{1},-\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}},\\] \\[\\mathbf{D}^{U}=\\left[\\begin{array}{l}T^{U}(\\tau_{1},-\\mu_{1}), \\cdots,T^{U}(\\tau_{1},-\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}}, \\mathbf{D}^{V}=\\left[\\begin{array}{l}T^{V}(\\tau_{1},-\\mu_{1}),\\cdots,T^{V}( \\tau_{1},-\\mu_{N})\\end{array}\\right]_{N\\times 1}^{\\mathbb{T}}\\]
and others matrix are \\(\\mathbf{R}_{1}=[\\mathbf{R}_{1}^{I},\\mathbf{R}_{1}^{Q},\\mathbf{R}_{1}^{U}, \\mathbf{R}_{1}^{V}]^{\\mathbb{T}}\\), \\(\\mathbf{T}_{1}=[\\mathbf{T}_{1}^{I},\\mathbf{T}_{1}^{Q},\\mathbf{T}_{1}^{U}, \\mathbf{T}_{1}^{V}]^{\\mathbb{T}}\\),
\\(\\mathbf{R}_{2}=[\\mathbf{R}_{2}^{I},\\mathbf{R}_{2}^{Q},\\mathbf{R}_{2}^{U}, \\mathbf{R}_{2}^{V}]^{\\mathbb{T}}\\), \\(\\mathbf{T}_{2}=[\\mathbf{T}_{2}^{I},\\mathbf{T}_{2}^{Q},\\mathbf{T}_{2}^{U}, \\mathbf{T}_{2}^{V}]^{\\mathbb{T}}\\),
\\[\\overline{\\mathbf{R}}_{2}=\\left[\\begin{array}{l}\\overline{\\mathcal{R}}_{2}^ {I\\gets I}&\\overline{\\mathcal{R}}_{2}^{I\\gets Q}&\\overline{ \\mathcal{R}}_{2}^{I\\gets U}&\\overline{\\mathcal{R}}_{2}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}_{2}^{Q\\gets I}&\\overline{\\mathcal{R}}_{2}^{Q \\gets Q}&\\overline{\\mathcal{R}}_{2}^{Q\\gets U}&\\overline{\\mathcal{R}}_ {2}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{2}^{U\\gets I}&\\overline{\\mathcal{R}}_{2}^{U \\gets Q}&\\overline{\\mathcal{R}}_{2}^{U\\gets U}&\\overline{\\mathcal{R}}_ {2}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{2}^{V\\gets I}&\\overline{\\mathcal{R}}_{2}^{V \\gets Q}&\\overline{\\mathcal{R}}_{2}^{V\\gets U}&\\overline{\\mathcal{R}}_ {2}^{V\\gets V}\\end{array}\\right]\\]
\\[\\overline{\\mathbf{R}}_{1}^{*}=\\left[\\begin{array}{l}\\overline{\\mathcal{R}}_ {1}^{*,I\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,I\\gets Q}&\\overline{ \\mathcal{R}}_{1}^{*,I\\gets U}&\\overline{\\mathcal{R}}_{1}^{*,I\\gets V} \\\\ \\overline{\\mathcal{R}}_{1}^{*,Q\\gets I}&\\overline{\\mathcal{R}}_{1}^{*,Q \\gets Q}&\\overline{\\mathcal{R}}_{1}^{*,Q\\gets U}&\\overline{\\mathcal{R}}_ {
In Eq.(9),
\\[\\overline{\\mathbf{X}}_{i}^{j}=\\left(\\mathbf{E}-\\overline{\\mathbf{R}}_{i}\\,\\overline{\\mathbf{R}}_{j}^{*}\\right)^{-1},\\]
where \\(\\overline{\\mathbf{R}}_{i}\\) and \\(\\overline{\\mathbf{R}}_{j}^{*}\\) denote the \\(4N\\times 4N\\) diffuse reflection matrices for illumination from above and from below, respectively, with the block structure given above, and
\\[\\mathbf{Y}_{i}^{j}=\\{\\left[\\begin{array}{c}\\mathbf{R}_{i}^{I}\\\\ \\mathbf{R}_{i}^{Q}\\\\ \\mathbf{R}_{i}^{U}\\\\ \\mathbf{R}_{i}^{V}\\end{array}\\right]e^{-\\frac{\\tau_{j}}{\\mu_{0}}}+\\left[ \\begin{array}{ccc}\\overline{\\mathcal{R}}_{i}^{I\\gets I}&\\overline{ \\mathcal{R}}_{i}^{I\\gets Q}&\\overline{\\mathcal{R}}_{i}^{I\\gets U}& \\overline{\\mathcal{R}}_{i}^{I\\gets V}\\\\ \\overline{\\mathcal{R}}_{i}^{Q\\gets I}&\\overline{\\mathcal{R}}_{i}^{Q \\gets Q}&\\overline{\\mathcal{R}}_{i}^{Q\\gets U}&\\overline{\\mathcal{R} }_{i}^{Q\\gets V}\\\\ \\overline{\\mathcal{R}}_{i}^{U\\gets I}&\\overline{\\mathcal{R}}_{i}^{U \\gets Q}&\\overline{\\mathcal{R}}_{i}^{U\\gets U}&\\overline{\\mathcal{R} }_{i}^{U\\gets V}\\\\ \\overline{\\mathcal{R}}_{i}^{V\\gets I}&\\overline{\\mathcal{R}}_{i}^{V \\gets Q}&\\overline{\\mathcal{R}}_{i}^{V\\gets U}&\\overline{\\mathcal{R} }_{i}^{V\\gets V}\\end{array}\\right]\\left[\\begin{array}{c}\\mathbf{T}_{j}^{I} \\\\ \\mathbf{T}_{j}^{Q}\\\\ \\mathbf{T}_{j}^{U}\\\\ \\mathbf{T}_{j}^{V}\\end{array}\\right]\\}\\]
and \\(\\mathbf{E}\\) is a 4N\\(\\times\\)4N identity matrix.
In Eq.(12),
\\[\\overline{\\mathbf{X}}_{i}^{*j}=\\left\\{\\mathbf{E}-\\overline{\\mathbf{R}}_{i}^{*}\\,\\overline{\\mathbf{R}}_{j}\\right\\}\\]
## Acknowledgments
The code of POLDDA is available from the corresponding author upon reasonable request. This study was supported by the National Natural Science Foundation of China (42222506 and 42105081) and China Postdoctoral Science Foundation (2023M730618).
## References
* (1) M. Duan, Q. Min, D. Lu, A polarized radiative transfer model based on successive order of scattering, Advances in Atmospheric Sciences 27 (2010) 891-900.
* (2) Q. Yin, C. Song, Fundamental definition of two-stream approximation for radiative transfer in scattering atmosphere, IEEE Transactions on Geoscience and Remote Sensing 60 (2022) 2003614.
* (3) J. E. Hansen, L. D. Travis, Light scattering in planetary atmosphere, Space Science Reviews 16 (1974) 527-610.
* (4) M. I. Mishchenko, A. A. Lacis, L. D. Travis, Errors induced by the neglect of polarization in radiance calculations for rayleigh-scattering atmospheres, Journal of Quantitative Spectroscopy and Radiative Transfer 51 (1994) 491-510.
* (5) A. A. Lacis, J. Chowdhary, M. I. Mishchenko, B. Cairns, Modeling errors in diffuse-sky radiation: vector versus scalar treatment, Journal of the Atmospheric Sciences 25 (1998) 135-138.
* (6) L. Oikarinen, Polarization of light in uv-visible limb radiance measurements, Journal of Geophysical Research 106 (2001) 1533-1544.
* (7) R. C. Levy, L. A. Remer, Y. J. Kaufman, Effects of neglecting polarization on the modis aerosol retrieval over land, IEEE Transactions on Geoscience and Remote Sensing 42 (2004) 2576-2583.
* (8) D. M. Stam, J. W. Hovenier, Errors in calculated planetary phase functions and albedos due to neglecting polarization, Astronomy and Astrophysics 444 (2005) 275-286.
* (9) V. Natraj, R. Spurr, H. Boesch, Y. Jiang, Y. Yung, Evaluation of errors in neglecting polarization in the forward modeling of o2 a band measurements from space, with relevance to co2 column retrieval from polarization sensitive instruments, Journal of Quantitative Spectroscopy and Radiative Transfer 103 (2007) 245-259.
* (10) S. Chandrasekhar, Radiative Transfer, Oxford University Press, 1950.
* (11) Y. Gao, M. Z. Duan, X. Y. Huang, Preliminary comparisons of the typical polarized radiative transfer models: precision and efficiency, Journal of Remote Sensing 14 (2010) 839-851.
* (12) Y. Cai, F. Zhang, H. Lin, J. Li, H. Zhang, W. Li, S. Hu, Optimized alternate mapping correlated k-distribution method for atmospheric longwave radiative transfer, Journal of Advances in Modeling Earth Systems 15 (2023) e2022MS003419.
* (13) M. I. Mishchenko, Coauthors, Accurate monitoring of terrestrial aerosols and total solar irradiance: introducing the glory mission, Bulletin of the American Meteorological Society 88 (2007) 677-691.
* (14) J. E. Hansen, Multiple scattering of polarized light in planetary atmospheres. part ii. sunlight reflected by terrestrial water clouds, Journal of the Atmospheric Sciences 28 (1971) 1400-1426.
* (15) G. Plass, G. Kattawar, F. Catchings, Matrix operator theory of radiative transfer. i: Rayleigh scattering, Applied Optics 12 (1973) 314-29.
* (16) J. F. D. Haan, P. B. Bosma, J. W. Hovenier, The adding method for multiple scattering computations of polarized light, Astronomy and Astrophysics 183 (1987) 371-391.
* (17) K. F. Evans, G. L. Stephens, A new polarized atmospheric radiative transfer model, Journal of Quantitative Spectroscopy and Radiative Transfer 46 (1991) 413-423.
* (18) W. G. Bai, P. Zhang, W. J. Zhang, G. Ma, C.Qi, A model for accurately calculating hyper-spectral, middle-shortwave infrared radiative transfer for remote sensing, Science China Earth Sciences 47 (2017) 1483-1492.
* (19) W. G. Bai, P. Zhang, W. J. Zhang, J. Li, G. Ma, C.Qi, H. Liu, Jacobian matrix for near-infrared remote sensing based on vector radiative transfer model, Science China Earth Sciences 63 (2020) 1353-1365.
* (20) K. Stamnes, P. Conklin, A new multi-layer discrete ordinate approach to radiative transfer in vertically inhomogeneous atmospheres, Journal of Quantitative Spectroscopy and Radiative Transfer 31 (1984) 273-282.
* (21) F. Weng, A multi-layer discrete-ordinate method for vector radiative transfer in a vertically-inhomogeneous, emitting and scattering atmosphere-i. theory, Journal of Quantitative Spectroscopy and Radiative Transfer 47 (1992) 19-33.
* (22) F. M. Schulz, K. Stamnes, F. Weng, Vdisort: An improved and generalized discrete ordinate radiative transfer model for polarized (vector) radiative transfer, Journal of Quantitative Spectroscopy and Radiative Transfer 61 (1999) 105-122.
* (23) C. E. Siewert, A discrete-ordinates solution for radiative-transfer models that include polarization effects, Journal of Quantitative Spectroscopy and Radiative Transfer 64 (2000) 227-254.
* (24) Y. Ota, A. Higurashi, T. Nakajima, T. Yokota, Matrix formulations of radiative transfer including the polarization effect in a coupled atmosphere-ocean system, Journal of Quantitative Spectroscopy and Radiative Transfer 111 (2010) 878-94.
* (25) G. W. Kattawar, G. N. Plass, Radiance and polarization of multiple scattered light from haze and clouds, Appl. Opt. 7 (1968) 1519-1527.
* (26) D. Collins, W. Blattner, M. Wells, H. Horak, Backward monte-carlo calculations of polarization characteristics of the radiation emerging from a spherical shell atmosphere, Applied Optics 11 (1972) 2684-2705.
* (27) B. Y. Wu, D. R. Lu, Simulation of twilights after the El Chichón eruption with the Monte Carlo method, Journal of the Atmospheric Sciences 13 (1989) 204-213.
* (28) R. L. Roberti, C. Kummerow, Monte carlo calculations of polarized microwave radiation emerging from cloud structures, J. Geophy. Res. 104 (1999) 2093-2104.
* (29) W. Irvine, Multiple scattering in planetary atmospheres, Icarus 25 (1975) 175-204.
* (30) Q. L. Min, M. Z. Duan, A successive order of scattering model for solving vector radiative transfer in the atmosphere, Journal of Quantitative Spectroscopy and Radiative Transfer 87 (2004) 243-259.
* (31) M. I. Mishchenko, The fast invariant imbedding method for polarized light: computational aspects and numerical results for rayleigh scattering, Journal of Quantitative Spectroscopy and Radiative Transfer 43 (1990) 163-171.
* (32) K. F. Evans, The spherical harmonic discrete ordinate method for three-dimensional atmospheric radiative transfer, Journal of the Atmospheric Sciences 55 (1998) 429-446.
* (33) T. Nakajima, M. Tanaka, Matrix formulations for the transfer of solar radiation in a plane-parallel scattering atmosphere, Journal of Quantitative Spectroscopy and Radiative Transfer 35 (1986) 13-21.
* (34) F. Zhang, Z. Shen, J. Li, X. Zhou, L. Ma, Analytical delta-four-stream doubling-adding method for radiative transfer parameterizations, Journal of the Atmospheric Sciences 70 (2013) 794-808.
* (35) Q. Liu, F. Weng, A microwave polarimetric two-stream radiative transfer model, Journal of the Atmospheric Sciences 59 (2002) 2396-2402.
* (36) K. N. Liou, S. C. Ou, Y. Takano, Q. Liu, A polarized delta-four-stream approximation for infrared and microwave radiative, Journal of the Atmospheric Sciences 62 (2005) 2542-2554.
* (37) W. Li, F. Zhang, F. Bao, K. Wu, J. Li, P. Zhang, W. Han, Polarized discrete ordinate adding approximation for infrared and microwave radiative transfer, Journal of Quantitative Spectroscopy and Radiative Transfer 293 (2022) 108368.
* (38) R. J. Spurr, Vlidort: A linearized pseudo-spherical vector discrete ordinate radiative transfer code for forward model and retrieval studies in multilayer multiple scattering media, Journal of Quantitative Spectroscopy and Radiative Transfer 102 (2006) 316-342.
- phase a, Journal of Quantitative Spectroscopy and Radiative Transfer 164 (2015) 8-36.
* (40) R. A. McClatchey, R. W. Fenn, J. E. A. Selby, F. E. Volz, J. S. Garing, Optical properties of the atmosphere, Air Force Rep. AFCRL-71-0279.
| Case | Stokes component | POLDDA | RT3 |
| --- | --- | --- | --- |
| Case 1-1 | I | 0.01773589% | 0.01773660% |
| | Q | 0.02265512% | 0.02264818% |
| Case 1-2 | I | 0.01649993% | 0.01650125% |
| | Q | 0.02561625% | 0.02561332% |
| | U | 0.02343666% | 0.02343717% |
| Case 1-3 | I | 0.01660634% | 0.01660778% |
| | Q | 0.02499878% | 0.02499902% |
| | U | 0.02290941% | 0.02290969% |
| Case 2 | I | 0.008561691% | 0.008559853% |
| | Q | 0.03571449% | 0.03571167% |
| | U | 0.02874306% | 0.02874456% |
| Case 3 | I | 0.01723639% | 0.01723684% |
| | Q | 0.02661624% | 0.02660797% |
| | U | 0.01979122% | 0.01979095% |
| Case 4 | I | 0.01260397% | 0.01260261% |
| | Q | 0.02472782% | 0.02472788% |
| | U | 0.02089377% | 0.02090551% |

Table 1: Relative root-mean-square errors (in percent) between MYSTIC and POLDDA/RT3 for all cases in this study.
Figure 1: Schematic diagram of the principles of invariance in vector solar radiative transfer.
Figure 2: Case 1-1: Rayleigh scattering, and depol is zero, (left column) the results for the \\(I\\)-component and \\(Q\\)-component at the top and bottom of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (second column to the left) the absolute differences of POLDDA (16S) against MYSTIC, units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (third column to the left) the relative differences of POLDDA (16S) against MYSTIC and (right column) the relative differences of RT3 (16S) against MYSTIC, units: %.
Figure 3: Case 1-2: Rayleigh scattering, and depol is 0.03, (left column) the results for the \\(I\\)-component, \\(Q\\)-component, and \\(U\\)-component at the top of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (middle column) the relative differences of POLDDA (16S) against MYSTIC and (right column) the relative differences of RT3 (16S) against MYSTIC, units: %.
Figure 4: Case 1-2: the same as Fig. 3, except for the bottom.
Figure 5: Case 1-3: Rayleigh scattering, and depol is 0.1, (left column) the results for the \\(I\\)-component, \\(Q\\)-component, and \\(U\\)-component at the top of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (middle column) the relative differences of POLDDA (16S) against MYSTIC and (right column) the relative differences of RT3 (16S) against MYSTIC, units: %.
Figure 6: Case 1-3: the same as Fig. 5, except for the bottom.
Figure 7: Case 2: Rayleigh atmosphere with Lambertian surface, (left column) the results for the \\(I\\)-component, \\(Q\\)-component, and \\(U\\)-component at the top of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (middle column) the relative differences of POLDDA (16S) against MYSTIC, and (right column) the relative differences of RT3 (16S) against MYSTIC, units: %.
Figure 8: Optical depth profiles for multi-layer test cases. The left plot shows the molecular scattering optical depth used in Case 3 (450 nm). The middle and the right plots show the gases absorption optical depth and molecular scattering optical depth in Case 4 (325 nm).
Figure 9: Case 3: Multi-layer atmosphere with only Rayleigh scattering, (left column) the results for the \\(I\\)-component, \\(Q\\)-component, and \\(U\\)-component at the top of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (middle column) the relative differences of POLDDA (16S) against MYSTIC and (right column) the relative differences of RT3 (32S) against MYSTIC, units: %.
Figure 10: Case 4: Multi-layer atmosphere with Rayleigh scattering and molecular absorption, (left column) the results for the \\(I\\)-component, \\(Q\\)-component, and \\(U\\)-component at the top of the layer based on POLDDA (16S), units: \\(Wm^{-2}\\mu m^{-1}sr^{-1}\\); (middle column) the relative differences of POLDDA (16S) against MYSTIC, and (right column) the relative differences of RT3 (16S) against MYSTIC, units: %.
Figure 12: Similar to Fig. 11, but for the results at the surface.
Figure 13: Computational time (in seconds) of POLDDA and RT3 versus each layer, optical depth (left) and versus the number of streams (right). | The polarization characteristics of atmospheric scattering are important and should not be ignored in radiative transfer simulations. In this study, a new vector radiative transfer model called the polarized adding method of discrete ordinate approximation (POLDDA) is proposed for use in remote sensing applications for ultraviolet-visible and near-infrared spectra. The single-layer radiative transfer process and inhomogeneous multi-layer connection are solved using the discrete ordinate method (DOM) and adding methods, respectively. By combining the advantages of DOM and the adding method, the Stokes vector (including the \\(I\\)-, \\(Q\\)-, \\(U\\)-, and \\(V\\)-components) calculated using the new method conforms to the results of PolRadtran/RT3, whether in a Rayleigh scattering atmosphere or the water cloud case. Moreover, the relative root-mean-square error (RMSE) values of the Stokes vector for the test cases between MYSTIC and the new method or RT3 prove the accuracy of the proposed method. Meanwhile, the new method has a higher computational efficiency than RT3, particularly for an atmosphere with a large scattering optical depth. Unlike RT3, the computation time of the proposed method does not increase with the optical depth of each layer.
keywords: vector radiative transfer, ultraviolet-visible and near-infrared, adding method, discrete ordinate approximation
arxiv-format/2301_05206v3.md | # ImMesh: An Immediate LiDAR Localization and Meshing Framework
Jiarong Lin1, Chongjian Yuan1, Yixi Cai, Haotian Li, Yunfan Ren, Yuying Zou, Xiaoping Hong, and Fu Zhang
Manuscript received February 5, 2023; revised July 23, 2023; accepted September 18, 2023. This work is supported by the University Grants Committee of Hong Kong General Research Fund (project number 17206421) and DII Donation. _(Corresponding author: Fu Zhang.)_
1These two authors contribute equally to this work. J. Lin, C. Yuan, Y. Cai and F. Zhang are with the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong SAR, China. {jliarong.lin, ycj1, yixical, haotian1, renyf, zyycici, fuzhang}@connect.hku.hk X. Hong are with the School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen, People's Republic of China. {hongxp}@sustech.edu.cn
## I Introduction
Recently, the wide emergence of 3D applications such as the metaverse [1, 2], VR/AR [3], video games, and physics simulators [4, 5] has enriched human lifestyles and boosted productivity by providing virtual environments that resemble the real world. These applications are built upon triangle meshes that represent the complex geometry of real-world scenes. A triangle mesh is a collection of vertices and triangle facets, which serves as a fundamental tool for object modeling in most existing 3D applications. It can not only significantly simplify the process and boost the speed of rendering [6, 7] and ray-tracing [8], but also play an irreplaceable role in collision detection [9, 10], rigid-body dynamics [11, 12], dense mapping and surveying [13], sensor simulation [14, 15], etc. However, most existing meshes are crafted by skillful 3D modelers with the help of computer-aided design (CAD) software (e.g., SolidWorks [16], Blender [17], etc.), which limits the mass production of large-scene meshes. Hence, developing an efficient meshing method that can reconstruct large scenes in real time has drawn increasing research interest and is a hot topic in the community of 3D reconstruction.
Performing mesh reconstruction in real time is particularly important in practice. Firstly, online mesh reconstruction makes data collection effective by providing a live preview, which gives users an essential reference. Especially for non-expert users, a live preview provides feedback about which parts of the scene have already been reconstructed in good quality and where additional data is needed. Secondly, online mesh reconstruction can output the mesh of the scene immediately after data collection is complete, saving the extra post-processing time of offline mesh reconstruction and boosting the productivity of mass production. Thirdly, it is essential for real-time applications, especially fully autonomous robotic applications: real-time mesh updates provide better maps with denser representation and higher accuracy, enabling the agent to navigate better.
Reconstructing the mesh of large scenes from sensor measurements in real time remains one of the most challenging problems in computer graphics, 3D vision, and robotics, as it requires reconstructing the surfaces of scenes with triangle facets adjacently connected by edges. This challenging problem demands building the geometric structure with very high accuracy, and each triangle facet should be reconstructed on a surface that actually exists in the real world. Besides, a good mesh reconstruction method should also suppress the appearance of holes on the reconstructed surface and avoid reconstructing sliver triangles (i.e., noodle-like triangles with an extremely acute angle). Real-time mesh reconstruction in large scenes is even more challenging, as it further requires the reconstruction to operate efficiently and incrementally.
In this work, we propose a real-time mesh reconstruction framework termed ImMesh to achieve the goal of simultaneous localization and meshing on the fly. ImMesh is a well-engineered system composed of four tightly-coupled modules carefully designed for efficiency and accuracy. Among them, we implement a novel mesh reconstruction method in our meshing module. Specifically, our meshing module first utilizes voxels to partition the 3D space, which allows fast retrieval of voxels that contain points of new scans. Then, the voxel-wise 3D meshing problem is converted into a 2D one by performing dimension reduction for efficient meshing. Finally, the triangle facets are incrementally reconstructed with the voxel-wise mesh pull, commit and push steps. To the best of our knowledge, this is the first work in the literature to reconstruct the triangle mesh of large-scale scenes online with a standard CPU. The main contributions of our work are:
* We propose ImMesh, a novel SLAM framework designed to achieve simultaneous localization and mesh reconstruction using a LiDAR sensor. ImMesh is built upon our previous work VoxelMap [18], and incorporates a novel mesh reconstruction method. This proposed approach can efficiently and incrementally reconstruct the mesh of scenes online, achieving real-time performance in large-scale scenarios on a standard desktop CPU.
* We comprehensively evaluate ImMesh's runtime performance and meshing accuracy on real-world and synthetic data, comparing against existing baselines to assess its effectiveness.
* We additionally demonstrate how real-time meshing can be applied in potential applications by presenting two practical examples: point cloud reinforcement and lossless texture reconstruction (see Fig. 1(b and c)).
* We make ImMesh publicly available on our GitHub: github.com/hku-mars/ImMesh, to share our findings and contribute to the community.
## II Related Works
In this section, we discuss related works on mesh reconstruction from 3D point clouds, which are closely related to this work. Depending on whether the reconstruction process can be performed online, we categorize existing mesh reconstruction methods into two classes: offline methods and online methods.
### _Offline mesh reconstruction_
The offline methods usually require a global map in advance, for example, the complete registered point cloud of the scene. Then, a global mesh reconstruction process is used to build the mesh. In this category, the most notable works include methods based on Poisson surface reconstruction (Poisson-based) and methods based on Delaunay tetrahedralization (i.e., 3D Delaunay triangulation) and graph cut (Delaunay-based).
#### Ii-A1 Poisson surface reconstruction (Poisson-based)
Given a set of 3D points with oriented normals that are sampled on the surface of a 3D model, the basic idea of Poisson surface reconstruction [19, 20] is to cast the problem of mesh reconstruction as an optimization problem, which solves for an approximate indicator function of the inferred solid whose gradient best matches the input normals. Then, the continuous isosurface (i.e., the triangle mesh) is extracted from the indicator function using the method [21, 22], similar to adaptations of the Marching Cubes [23] with octree representations.
Benefiting from this implicit representation, where the mesh is extracted from the indicator function instead of being estimated directly, Poisson surface reconstruction can produce a watertight manifold mesh and is resilient to scanner noise, misalignment, and missing data. Hence, in the communities of graphics and vision, these types of methods [19, 20, 24] have been widely used for reconstructing the mesh from given 3D scanned data.
#### II-A2 Delaunay triangulation and graph cut (Delaunay-based)
In the category of offline mesh reconstruction methods, approaches [25, 26, 27] based on Delaunay tetrahedralization and graph cut have also been widely used for generating the mesh, relying on the reconstructed 3D point cloud and the sensor's poses. The basic idea of this class of methods is first to build a tetrahedral decomposition of 3D space by computing the 3D Delaunay triangulation of the 3D point set. Then, the Delaunay tetrahedra were labeled as two classes (i.e., \"inside\" or \"outside\") with the globally optimal label assignment (i.e., the graph cut). Finally, the triangle mesh can be extracted as the interface between these two classes.
Besides these two classes of methods, there are other offline mesh reconstruction methods, such as the ball-pivoting algorithm [28]. This algorithm works by pivoting a ball of fixed radius around each point in the point cloud and constructing a triangle whenever three balls overlap. [29] involves extracting the curve skeleton using Laplacian-based contraction, and then reconstructing the surface with the skeleton-assisted topology. However, these methods are often not the first choice due to various limitations such as robustness, accuracy, and efficiency when compared to Poisson- and Delaunay-based methods [30].
Unlike these offline mesh reconstruction methods, our proposed work ImMesh can perform online in an incremental manner without the complete point cloud of the scene. Besides, ImMesh also achieves a satisfactory meshing accuracy that is higher than Poisson-based methods and slightly lower than Delaunay-based methods (see our experimental results in Section VIII-C).
### _Online mesh reconstruction_
#### II-B1 Voxel volume-based methods (TSDF-based)
Online mesh reconstruction is dominated by TSDF-based methods, which represent the scene with a volumetric voxel grid. These methods implicitly reconstruct the mesh in a two-step pipeline, which first establishes, for each voxel, the truncated signed distance to the closest surface, and then extracts the continuous triangle mesh from the volume by leveraging the Marching Cubes algorithm [23]. TSDF-based methods were popularized by KinectFusion [31], with many follow-up works focused on scaling this approach to larger scenes [32, 33], adding multi-resolution capability [34, 35], and improving efficiency [36, 37, 38]. Since these methods can be easily parallelized, they can achieve real-time performance with GPU acceleration.
Compared to these methods, our work ImMesh shows several advantages: Firstly, in ImMesh, the triangle mesh is directly reconstructed from the point cloud in one step, while for TSDF-based methods, the mesh is implicitly built in a two-step pipeline (i.e., SDF update followed by a mesh extraction). Secondly, ImMesh can output the mesh at scan rate (i.e., the sensor sampling rate), while the mesh extraction of TSDF-based methods is usually at a lower rate. Thirdly, ImMesh achieves real-time performance by running on a standard CPU, while TSDF-based methods need GPU acceleration for real-time SDF updates. Lastly, TSDF-based methods require adequate observation for calculating the SDF of each voxel w.r.t. the closest surface, which needs the data to be sampled by a high-resolution depth sensor moving at a low speed. On the contrary, our work exploits high-accuracy LiDAR points for meshing and is robust to point data of low density.
#### II-B2 Surfel-based mesh reconstruction
Besides TSDF-based methods, another popular approach is representing the scene with a set of points or surfels (e.g., oriented discs). For example, in works [39, 40, 33], the maps are reconstructed with point-based representations, and their "surfaces" are rendered with point-based rendering approaches that originated from the computer graphics community [41, 42, 43]. Besides, in work [44], a high-quality map is reconstructed with surfel-based representations (i.e., using patches). Such forms of map representation are popularized in works [45, 46, 47, 48]. To reconstruct a dense map, these classes of methods need a large number of points or tiny patches to represent the surfaces of the models, which is an inefficient representation with high usage of system memory and computational resources. In contrast, our work reconstructs the surfaces of models with a triangle mesh, which uses triangle facets of proper size adjacently connected by edges. It is the most efficient solid-model representation and has been widely adopted in most modern 3D software.
Compared with the works reviewed above, our proposed work is in a class by itself, with the following advantages:
* It is an online mesh reconstruction method that reconstructs the triangle mesh in an incremental manner. It can achieve real-time performance in large-scale scenes (e.g., traveling length reaches \\(7.5\\,\\mathrm{km}\\)) by just running on a standard desktop CPU.
* It explicitly reconstructs the triangle mesh by directly taking the registered LiDAR points as meshing vertices, performing the voxel-wise meshing operation as each new LiDAR scan is registered.
* It is delicately designed for the purpose of efficiency and achieves satisfactory meshing precision comparable to existing high-accuracy offline methods.
## III System overview
Fig. 2 depicts the overview of our proposed system (ImMesh), which consists of a map structure and four modules that work jointly to achieve the goal of simultaneous localization and meshing in real-time. As shown in Fig. 2, from left to right are: _receiver_ (in red), _localization_ (in orange), _map structure_ (in green), _meshing_ (in blue) and _broadcaster_ (in purple).
In the remaining sections, we will first introduce our _map structure_ in Section IV, detailing the data structures used by the other modules. Next, we will introduce our _receiver_ and _localization_ modules in Section V. Then, we will present how our _meshing_ module works in Section VI. Finally, in Section VII, we will introduce the _broadcaster_ module, which publishes the localization and meshing results to other applications.
## IV Map structure
As shown by the _map structure_ (in green) in Fig. 2, we designed four data types, including mesh vertices, triangle facets, regions, and voxels, as well as two data structures: a hash table for efficient data lookup and an incremental kd-tree (ikd-tree) for \\(k\\) nearest neighbors (kNN) search and downsampling.
The relationship among these map structures is depicted in Fig. 3, where we partition the 3D space into two types of volumetric grids: regions and voxels. Triangle facets are stored inside the regions containing them and are also indexed in a global hash table of triangle facets, and mesh vertices are stored inside the voxels containing them and are also indexed in a global list of vertices. Additionally, we maintain two hash tables to facilitate the efficient lookup of regions and voxels.
### _Data types: Region, voxel, triangle facet, and mesh vertex_
#### IV-A1 Region \\(\\mathbf{R}\\)
Regions have a much larger size \\(S_{\\mathbf{R}}\\) (e.g., \\(S_{\\mathbf{R}}=10.0\\,\\mathrm{m}\\)) compared to the voxel size \\(S_{\\mathbf{O}}\\) (e.g., \\(S_{\\mathbf{O}}=0.4\\,\\mathrm{m}\\)). They contain the triangle facets whose centers are located inside, allowing the _broadcaster_ to asynchronously copy these triangle facets. Additionally, each region has a status flag \\(f_{\\mathbf{R}}\\) to identify its syncing status, which can be either _Sync-required_ or _Synced_. This status indicates the update flag related to the data synchronization of triangle facets.
#### IV-A2 Voxel \\(\\mathbf{O}\\)
Voxels enable the _meshing_ module to efficiently retrieve all in-voxel mesh vertices for voxel-wise meshing operations. Each voxel \\(\\mathbf{O}_{i}\\) also has a status flag \\(f_{\\mathbf{O}}\\) indicating whether it has new points appended. Specifically, \\(\\mathbf{O}_{i}\\) is marked as _Activated_ if new mesh vertices are registered from the latest LiDAR scan. The _Activated_ flag is reset to _Deactivated_ after the voxel-wise meshing operation has been performed on this voxel.
#### IV-A3 Triangle facet \\(\\mathbf{T}\\)
In our work, triangle facets are stored in regions. A triangle facet describes a small surface that exists in the reconstructed scene. It is maintained online by our _meshing_ module and is asynchronously copied to the _broadcaster_ module for publishing. For a triangle facet \\(\\mathbf{T}\\), it is constituted by the following elements: 1) The sorted indices \\(\\texttt{Pts\\_id}(\\mathbf{T})\\) of three mesh vertices that form this triangle: \\(\\texttt{Pts\\_id}(\\mathbf{T})=\\{i,j,k\\},\\;i<j<k\\). 2) The center \\(\\texttt{Center}(\\mathbf{T})\\) and normal \\(\\texttt{Norm}(\\mathbf{T})\\) (both in the global reference frame) of this facet.
#### IV-A4 Mesh vertex \\(\\mathbf{V}\\)
In ImMesh, mesh vertices are the points that constitute the geometric structure (shape) of the mesh. The \\(i\\)-th vertex \\(\\mathbf{V}_{i}\\) contains the following elements: 1) The unique index (id) of this vertex \\(\\texttt{Id}(\\mathbf{V}_{i})\\) in the global list containing all the vertices in the map. 2) Its 3D position \\(\\texttt{Pos}(\\mathbf{V}_{i})\\in\\mathbb{R}^{3}\\) in the global frame. 3) The list \\(\\texttt{Tri\\_list}(\\mathbf{V}_{i})\\) of triangle facets whose vertices contain \\(\\mathbf{V}_{i}\\).
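To make the above data layout concrete, the following is a minimal C++ sketch of the four data types; the field and type names (and the use of Eigen vectors) are our own illustrative assumptions rather than ImMesh's actual class definitions.

```cpp
// A condensed, assumed sketch of the four data types from Section IV-A.
#include <array>
#include <vector>
#include <Eigen/Core>

enum class SyncFlag  { SyncRequired, Synced };     // region flag f_R
enum class VoxelFlag { Activated, Deactivated };   // voxel flag f_O

struct MeshVertex {                 // V_i
  int                 id;           // index in the global vertex list
  Eigen::Vector3d     pos;          // 3D position in the global frame
  std::vector<int>    tri_list;     // triangle facets that use this vertex
};

struct TriangleFacet {              // T
  std::array<int, 3>  pts_id;       // sorted vertex indices {i, j, k}, i < j < k
  Eigen::Vector3d     center, norm; // facet center and normal (global frame)
};

struct Voxel {                      // O
  VoxelFlag                 flag = VoxelFlag::Deactivated;
  std::vector<MeshVertex*>  vertices;    // in-voxel mesh vertices
};

struct Region {                     // R
  SyncFlag                      flag = SyncFlag::SyncRequired;
  std::vector<TriangleFacet*>   triangles;  // facets whose centers fall inside
};
```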
### _Data structure: Hash tables and Incremental kd-Tree (ikd-Tree)_
In our work, we leverage a global list for accessing mesh vertices by indices. Besides, we employ two data structures (i.e., hash tables and incremental kd-tree (ikd-Tree)) for efficiently managing our four data types. Specifically, we leverage the hash tables for efficient lookup of regions, voxels, and triangle facets, and maintain an ikd-Tree to enable the fast kNN search of mesh vertices.
#### IV-B1 Hash tables
To facilitate efficient lookup of the data types (i.e., regions, voxels, and triangle facets), and to avoid the excessive memory consumption of allocating regular data structures in continuous memory space, we employ a spatial hashing scheme. This scheme allows us to compactly store, access, and update the data structures by mapping them into hash tables using appropriate hash functions, as illustrated in Fig. 3.

Fig. 2: This figure shows the overview of our proposed work ImMesh, which utilizes the raw input sensor data to achieve the goal of simultaneous localization and meshing. It is constituted by four tightly-coupled modules and a map structure; from left (input) to right (output): _receiver_ (in red), _localization_ (in orange), _map structure_ (in green), _meshing_ (in blue) and _broadcaster_ (in purple).
Given a 3D vector \\(\\mathbf{p}=[x,y,z]^{T}\\in\\mathbb{R}^{3}\\), its corresponding hash key \\(\\boldsymbol{\\mathcal{H}}(\\mathbf{p})\\) is calculated via the 3D hash function \\(\\mathtt{Hash}(x,y,z)\\), shown as below:
\\[\\boldsymbol{\\mathcal{H}}(\\mathbf{p})=\\mathtt{Hash}(x,y,z)=\\mathtt{Int\\_Hash}(x_{i},y_{i},z_{i}) \\tag{1}\\]
\\[\\mathtt{Int\\_Hash}(x_{i},y_{i},z_{i})=\\mathtt{Mod}((x_{i}\\cdot p_{1})\\oplus(y_{i}\\cdot p_{2})\\oplus(z_{i}\\cdot p_{3}),\\,n) \\tag{2}\\]
\\[x_{i}=\\mathtt{Round}(x\\ast 100/S),\\quad y_{i}=\\mathtt{Round}(y\\ast 100/S),\\quad z_{i}=\\mathtt{Round}(z\\ast 100/S) \\tag{3}\\]
where \\(x_{i},y_{i},z_{i}\\) are the corresponding integer-rounded coordinates, \\(S\\) is the size of a region (i.e., \\(S_{\\mathbf{R}}\\)) or voxel (i.e., \\(S_{\\mathbf{O}}\\)), \\(\\oplus\\) is the XOR operation, and the function \\(\\mathtt{Mod}(a,b)\\) computes the integer \\(a\\) modulo another integer \\(b\\). \\(p_{1},p_{2},p_{3}\\) are three large prime numbers for reducing the collision probability [33, 49], and \\(n\\) is the hash table size. In our work, we set the values of \\(p_{1},p_{2},p_{3}\\) and \\(n\\) as \\(116101,37199,93911\\) and \\(201326611\\), respectively.
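As a minimal illustration of Eqs. (1)-(3), the following C++ sketch computes the hash key of a 3D point; the function and constant names are hypothetical and only mirror the formulas above.

```cpp
// Hypothetical helper mirroring Eqs. (1)-(3): hash a 3D position into a table key.
#include <cmath>
#include <cstdint>

static constexpr int64_t kP1 = 116101, kP2 = 37199, kP3 = 93911; // large primes p1, p2, p3
static constexpr int64_t kN  = 201326611;                        // hash table size n

// S is the container size in meters (region size S_R or voxel size S_O).
inline int64_t spatial_hash(double x, double y, double z, double S)
{
  const int64_t xi = static_cast<int64_t>(std::llround(x * 100.0 / S));  // Eq. (3)
  const int64_t yi = static_cast<int64_t>(std::llround(y * 100.0 / S));
  const int64_t zi = static_cast<int64_t>(std::llround(z * 100.0 / S));
  const int64_t h  = ((xi * kP1) ^ (yi * kP2) ^ (zi * kP3)) % kN;        // Eqs. (1)-(2)
  return h < 0 ? h + kN : h;  // keep the key non-negative after the modulus
}
```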
In our map structure, we maintain three independent hash tables for regions, voxels, and triangle facets, denoted as \\(\\boldsymbol{\\Xi_{\\mathbf{R}}}\\), \\(\\boldsymbol{\\Xi_{\\mathbf{O}}}\\), and \\(\\boldsymbol{\\Xi_{\\mathbf{T}}}\\), respectively. A region \\(\\mathbf{R}_{i}\\), a voxel \\(\\mathbf{O}_{j}\\), and a triangle facet \\(\\mathbf{T}_{k}\\) are mapped to the hash tables (i.e., \\(\\boldsymbol{\\Xi_{\\mathbf{R}}}\\), \\(\\boldsymbol{\\Xi_{\\mathbf{O}}}\\), and \\(\\boldsymbol{\\Xi_{\\mathbf{T}}}\\)) through the hash keys \\(\\boldsymbol{\\mathcal{H}_{\\mathbf{R}}}(\\mathbf{R}_{i})\\), \\(\\boldsymbol{\\mathcal{H}_{\\mathbf{O}}}(\\mathbf{O}_{j})\\), and \\(\\boldsymbol{\\mathcal{H}_{\\mathbf{T}}}(\\mathbf{T}_{k})\\), which are calculated as below:
\\[\\mathbf{R}\\mapsto\\boldsymbol{\\Xi_{\\mathbf{R}}}:\\boldsymbol{\\mathcal{ H}_{\\mathbf{R}}}(\\mathbf{R}_{i})=\\boldsymbol{\\mathcal{H}}(\\mathbf{p}_{i}),\\ \\ \\ \\mathbf{p}_{i}\\in\\mathbb{R}^{3} \\tag{4}\\] \\[\\mathbf{O}\\mapsto\\boldsymbol{\\Xi_{\\mathbf{O}}}:\\boldsymbol{ \\mathcal{H}_{\\mathbf{O}}}(\\mathbf{O}_{j})=\\boldsymbol{\\mathcal{H}}(\\mathbf{p}_{ j}),\\ \\ \\mathbf{p}_{j}\\in\\mathbb{R}^{3}\\] (5) \\[\\mathbf{T}\\mapsto\\boldsymbol{\\Xi_{\\mathbf{T}}}:\\boldsymbol{ \\mathcal{H}_{\\mathbf{T}}}(\\mathbf{T}_{k})=\\mathtt{Int\\_Hash}(\\mathtt{Pts\\_id}( \\mathbf{T}_{k})) \\tag{6}\\]
where \\(\\mathbf{p}_{i}\\) (and \\(\\mathbf{p}_{j}\\)) can be any point located inside region \\(\\mathbf{R}_{i}\\) (and voxel \\(\\mathbf{O}_{j}\\)). The hash functions \\(\\boldsymbol{\\mathcal{H}_{\\mathbf{R}}}(\\cdot)\\) in (4) and \\(\\boldsymbol{\\mathcal{H}_{\\mathbf{O}}}(\\cdot)\\) in (5) are distinguished by the different container sizes \\(S\\) in (3).
Besides, we use function \\(\\boldsymbol{\\Psi}(\\cdot)\\) to denote the retrieval of \\(\\mathbf{R}_{i}\\), \\(\\mathbf{O}_{j}\\), and \\(\\mathbf{T}_{k}\\) from the hash tables, shown as follows:
\\[\\mathbf{R}\\leftarrow\\boldsymbol{\\Xi_{\\mathbf{R}}}:\\ \\mathbf{R}_{i}= \\boldsymbol{\\Psi}(\\boldsymbol{\\Xi_{\\mathbf{R}}},\\boldsymbol{\\mathcal{H}_{ \\mathbf{R}}}(\\mathbf{R}_{i})) \\tag{7}\\] \\[\\mathbf{O}\\leftarrow\\boldsymbol{\\Xi_{\\mathbf{O}}}:\\ \\mathbf{O}_{j}= \\boldsymbol{\\Psi}(\\boldsymbol{\\Xi_{\\mathbf{O}}},\\boldsymbol{\\mathcal{H}_{ \\mathbf{O}}}(\\mathbf{O}_{j}))\\] (8) \\[\\mathbf{T}\\leftarrow\\boldsymbol{\\Xi_{\\mathbf{T}}}:\\ \\mathbf{T}_{k}= \\boldsymbol{\\Psi}(\\boldsymbol{\\Xi_{\\mathbf{T}}},\\boldsymbol{\\mathcal{H}_{ \\mathbf{T}}}(\\mathbf{T}_{k})) \\tag{9}\\]
Notice that the hash table is unstructured, indicating that neighboring regions (or voxels) are not stored spatially but in different parts of the buckets, as illustrated by two neighboring regions \\(\\mathbf{R}_{i}\\) and \\(\\mathbf{R}_{j}\\) in Fig. 3.
Lastly, for resolving the possible hash collision (i.e., two pieces of data in a hash table share the same hash value), we adopt the technique in [33], using the implementation of unordered_map container [50] in C++ standard library (std) [51].
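A possible organization of the three lookup tables with std::unordered_map is sketched below; Region, Voxel, and Triangle are minimal stand-ins for the actual ImMesh data types, and spatial_hash refers to the hash-function sketch above.

```cpp
// Sketch only: three hash tables Ξ_R, Ξ_O, Ξ_T keyed by the spatial/index hash.
#include <unordered_map>
#include <memory>
#include <cstdint>

struct Region {}; struct Voxel {}; struct Triangle {};         // minimal stand-ins
int64_t spatial_hash(double x, double y, double z, double S);  // from the sketch above

std::unordered_map<int64_t, std::shared_ptr<Region>>   region_table;   // Ξ_R
std::unordered_map<int64_t, std::shared_ptr<Voxel>>    voxel_table;    // Ξ_O
std::unordered_map<int64_t, std::shared_ptr<Triangle>> triangle_table; // Ξ_T

// Retrieval Ψ(Ξ_O, H_O(p)): look up the voxel (of size S_O) containing point p.
std::shared_ptr<Voxel> find_voxel(double x, double y, double z, double S_O)
{
  auto it = voxel_table.find(spatial_hash(x, y, z, S_O));
  return it == voxel_table.end() ? nullptr : it->second;
}
```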
#### IV-B2 Incremental kd-Tree (ikd-Tree)
We maintain an incremental kd-tree to enable the fast kNN search of mesh vertices. The ikd-Tree was proposed in our previous works [52, 53]; it is an efficient dynamic space-partitioning data structure for fast kNN search. Unlike existing static kd-trees (e.g., the kd-trees implemented in PCL [54] and FLANN [55]) that require rebuilding the entire tree at each update, the ikd-Tree achieves lower computation time by updating the tree with newly arriving points in an incremental manner. In ImMesh, we use the ikd-Tree for: 1) ensuring that the distance between any two mesh vertices remains larger than the minimum value \\(\\xi\\), thereby maintaining the triangle mesh at a proper resolution; 2) enabling the vertex dilation operation in our voxel-wise meshing operation to erode the gaps between neighboring voxels.
## V Receiver and localization
The _receiver_ module is designed for processing and packaging the input sensor data. As shown in the red box of Fig. 2, our _receiver_ module receives the stream of LiDAR data from a live sensor or offline recorded files, and processes the data into a unified format (i.e., customized point cloud data) that makes ImMesh compatible with LiDARs of different manufacturers, scanning mechanisms (i.e., mechanical spinning, solid-state) and point cloud densities (e.g., 64-, 32-, 16-line, etc.). Besides, if an IMU source is available, our _receiver_ module will also package the IMU measurements within a LiDAR frame by referring to the sampling time.

Fig. 3: In ImMesh, we partition the 3D space into two types of volumetric grids: regions and voxels. Triangle facets are stored inside the regions, and mesh vertices are stored inside the voxels. Additionally, we maintain three hash tables to facilitate efficient lookup of these data types.
The _localization_ module utilizes the input data stream from the _receiver_ module and reuses the voxels to estimate the 6-DoF sensor pose in real time by registering the points to the planes contained in voxels. Our _localization_ module is built upon our previous work VoxelMap [18], which represents the environment with probabilistic planes and estimates the pose with an iterated Kalman filter.
### _Voxel map construction_
Our _localization_ module represents the environment with probabilistic planes, which account for both LiDAR measurement noise and sensor pose estimation errors, and constructs the voxel-volumetric map in a coarse-to-fine adaptive-resolution manner. Since the main focus of this work is on meshing, we only discuss those processes in the _localization_ module that are closely related to our _meshing_ module. For the detailed modeling and analysis of LiDAR measurement noise and sensor estimation errors, we refer readers to our previous work VoxelMap [18].
For each LiDAR point, we first compensate the in-frame motion distortion with an IMU backward propagation introduced in [52]. Denoting \\({}^{L}\\mathbf{p}_{i}\\) the \\(i\\)-th LiDAR point after motion compensation, it is registered to the world frame as \\({}^{W}\\mathbf{p}_{i}\\) with the estimated sensor pose \\(\\left({}^{W}_{L}\\mathbf{R},{}^{W}_{L}\\mathbf{t}\\right)\\in SE(3)\\):
\\[{}^{W}\\mathbf{p}_{i}={}^{W}_{L}\\mathbf{R}{}^{L}\\mathbf{p}_{i}+{}^{W}_{L} \\mathbf{t} \\tag{10}\\]
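As a small, self-contained illustration of Eq. (10), a registered point could be computed as follows; the Eigen types and function name are our own choices for the sketch.

```cpp
// Transform a motion-compensated LiDAR point ^L p into the world frame via Eq. (10).
#include <Eigen/Core>

inline Eigen::Vector3d register_point(const Eigen::Matrix3d& R_WL,   // ^W_L R
                                      const Eigen::Vector3d& t_WL,   // ^W_L t
                                      const Eigen::Vector3d& p_L)    // ^L p_i
{
  return R_WL * p_L + t_WL;                                          // ^W p_i
}
```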
The registered LiDAR point \\({}^{W}\\mathbf{p}_{i}\\) is stored inside the voxels. Given all points \\({}^{W}\\mathbf{p}_{i}\\)\\((i=1, ,N)\\) inside a voxel \\(\\mathbf{O}\\), the points covariance matrix \\(\\mathbf{A}\\) is
\\[\\bar{\\mathbf{p}}=\\frac{1}{N}\\sum_{i=1}^{N}{}^{W}\\mathbf{p}_{i},\\quad\\mathbf{ A}=\\frac{1}{N}\\sum_{i=1}^{N}\\left({}^{W}\\mathbf{p}_{i}-\\bar{\\mathbf{p}} \\right)\\left({}^{W}\\mathbf{p}_{i}-\\bar{\\mathbf{p}}\\right)^{T} \\tag{11}\\]
where the symmetric matrix \\(\\mathbf{A}\\) depicts the distribution of all points. We perform the eigenvalue decomposition of matrix \\(\\mathbf{A}\\):
\\[\\mathbf{A}\\mathbf{U}=\\mathbf{U}\\begin{bmatrix}\\lambda_{1}&&\\\\ &\\lambda_{2}&\\\\ &&\\lambda_{3}\\end{bmatrix},\\quad\\mathbf{U}=\\begin{bmatrix}\\mathbf{u}_{1}&\\mathbf{u}_{2}&\\mathbf{u}_{3}\\end{bmatrix},\\ \\ \\lambda_{1}\\geq\\lambda_{2}\\geq\\lambda_{3} \\tag{12}\\]
where \\(\\lambda_{1},\\lambda_{2},\\lambda_{3}\\) are the eigenvalues and \\(\\mathbf{u}_{1},\\mathbf{u}_{2},\\mathbf{u}_{3}\\) are the correspondent eigenvectors. In our _meshing_ module, we use these calculated eigenvectors of voxel \\(\\mathbf{O}\\) for performing the dimension reduction through projection, as we will discuss in Section VI-D.
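The per-voxel statistics in Eqs. (11)-(12) can be computed as in the sketch below; the use of Eigen's SelfAdjointEigenSolver and all identifiers are our own illustrative choices (the solver returns eigenvalues in increasing order, so we reorder them to match \\(\\lambda_{1}\\geq\\lambda_{2}\\geq\\lambda_{3}\\)).

```cpp
// Fit the plane statistics of a voxel: mean, covariance, and eigen-decomposition.
#include <Eigen/Dense>
#include <vector>

struct VoxelPlane {
  Eigen::Vector3d mean;          // \bar{p}
  Eigen::Vector3d u1, u2, u3;    // eigenvectors with λ1 >= λ2 >= λ3
  Eigen::Vector3d eigenvalues;   // (λ1, λ2, λ3)
};

VoxelPlane fit_voxel_plane(const std::vector<Eigen::Vector3d>& pts)  // assumes pts non-empty
{
  VoxelPlane out;
  out.mean.setZero();
  for (const auto& p : pts) out.mean += p;
  out.mean /= static_cast<double>(pts.size());

  Eigen::Matrix3d A = Eigen::Matrix3d::Zero();                       // Eq. (11)
  for (const auto& p : pts) A += (p - out.mean) * (p - out.mean).transpose();
  A /= static_cast<double>(pts.size());

  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(A);              // Eq. (12)
  out.u3 = es.eigenvectors().col(0);   // smallest eigenvalue: plane normal direction
  out.u2 = es.eigenvectors().col(1);
  out.u1 = es.eigenvectors().col(2);
  out.eigenvalues = es.eigenvalues().reverse();                      // λ1 >= λ2 >= λ3
  return out;
}
```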
In our localization module, voxel \\(\\mathbf{O}\\) might be subdivided into smaller sub-voxels to construct possible planar features at finer resolutions for robust localization in unstructured environments. Then, the sensor pose \\(\\left({}^{W}_{L}\\mathbf{R},{}^{W}_{L}\\mathbf{t}\\right)\\) is estimated by minimizing the point-to-plane residual. While this paper primarily focuses on our mesh reconstruction method, we refer readers to our previous work [18] for more details on the implementation of our _localization_ module, including the voxel subdivision and state estimation.
### _Point cloud registration_
With the estimated sensor pose \\(\\left({}^{W}_{L}\\mathbf{R},{}^{W}_{L}\\mathbf{t}\\right)\\), we perform the point cloud registration that transforms each measurement point \\({}^{L}\\mathbf{p}_{i}\\) from the LiDAR frame to the global frame (i.e., the first LiDAR frame) with (10). The registered point cloud is then used for: 1) publishing to other applications with our _broadcaster_; 2) updating the voxel map (detailed in [18]); 3) appending to the _map structure_, where the points serve as mesh vertices shaping the geometric structure of our online reconstructed triangle mesh.
If a newly registered point does not lie in an existing voxel \\(\\mathbf{O}\\) (or region \\(\\mathbf{R}\\)), a new voxel (or region) will be created and added to the hash table \\(\\mathbf{\\Xi_{O}}\\) (or \\(\\mathbf{\\Xi_{R}}\\)). Subsequently, the newly registered point will be included in the newly constructed voxel.
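The create-on-miss behavior described above could look like the following sketch, reusing the (assumed) spatial_hash and table layout from the Section IV-B1 sketch; this is illustrative, not ImMesh's actual code.

```cpp
// Allocate a voxel on first access: operator[] inserts an empty slot on a miss.
#include <unordered_map>
#include <memory>
#include <cstdint>

struct Voxel {};  // minimal stand-in for the ImMesh voxel type
int64_t spatial_hash(double x, double y, double z, double S);  // see Section IV-B1 sketch

std::unordered_map<int64_t, std::shared_ptr<Voxel>> voxel_table;  // Ξ_O

std::shared_ptr<Voxel> get_or_create_voxel(double x, double y, double z, double S_O)
{
  auto& slot = voxel_table[spatial_hash(x, y, z, S_O)];
  if (!slot) slot = std::make_shared<Voxel>();  // create the voxel if it did not exist
  return slot;
}
```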
#### V-B1 _Append of mesh vertices_
The registered LiDAR points are also used for forming the mesh vertices in the _map structure_. Specifically, we first apply a voxel-grid filter to downsample the newly registered LiDAR point cloud. Then, to avoid the appearance of tiny triangles in the reconstructed mesh, we leverage the ikd-Tree to keep a minimum distance \\(\\xi\\) between any two mesh vertices. That is, for each registered LiDAR point \\({}^{W}\\mathbf{p}_{i}\\) in the global frame, we search for the nearest mesh vertex in the _map structure_ with the ikd-Tree. If the Euclidean distance between this point and the searched vertex is smaller than \\(\\xi\\), we discard this point. Otherwise, this point is used for: 1) constructing a new mesh vertex \\(\\mathbf{V}_{i}\\), where \\(i\\) is the unique index indicating that \\(\\mathbf{V}_{i}\\) is the \\(i\\)-th appended vertex; 2) adding the vertex \\(\\mathbf{V}_{i}\\) to the ikd-Tree; 3) pushing back \\(\\mathbf{V}_{i}\\) to the vertex array of the voxel \\(\\mathbf{O}_{j}\\) in which \\(\\mathbf{V}_{i}\\) lies. Afterward, the status flag \\(f_{\\mathbf{O}_{j}}\\) of \\(\\mathbf{O}_{j}\\) is set to _Activated_ to notify the _meshing_ module to perform the voxel-wise meshing operation.
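A self-contained sketch of this vertex-appending rule is given below. ImMesh uses the ikd-Tree for the nearest-neighbor test; here a brute-force search is used purely for illustration, and all identifiers are our own.

```cpp
// Append a registered point as a mesh vertex only if it is at least ξ away
// from every existing vertex (ImMesh queries the ikd-Tree instead of looping).
#include <Eigen/Core>
#include <vector>

struct MeshVertex { int id; Eigen::Vector3d pos; };
std::vector<MeshVertex> global_vertices;   // global vertex list

bool try_append_vertex(const Eigen::Vector3d& p_world, double xi /* min spacing ξ */)
{
  for (const auto& v : global_vertices)
    if ((v.pos - p_world).norm() < xi) return false;   // too close: discard the point
  global_vertices.push_back({static_cast<int>(global_vertices.size()), p_world});
  // The new vertex is also pushed into its containing voxel, whose flag f_O is
  // then set to Activated so that the meshing module revisits that voxel.
  return true;
}
```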
## VI Meshing
In ImMesh, our meshing module takes the registered LiDAR scan for incrementally reconstructing the triangle mesh on the fly. We explicitly reconstruct the triangle mesh by directly utilizing 3D registered LiDAR points as mesh vertices enabled by two facts of LiDAR sensors: 1) The points sampled by LiDAR and registered via the LiDAR odometry and mapping [18] have very high positional accuracy. Hence, they can accurately shape the geometric structure of the mesh. 2) A LiDAR measurement point naturally lies on the surface of the detected object, with two other points in the same plane that can form a triangle facet to represent its underlying surface.
### _Goals and requirements_
With the accurate mesh vertices appended from the point cloud registration in Section V-B, the problem of online mesh reconstruction is converted into another goal: seeking a proper way to reconstruct the triangle facets in real time from a growing 3D point set. This new problem has barely been researched to date. Given a growing set of 3D points, our _meshing_ module is designed to incrementally reconstruct the triangle facets considering the following four requirements.

Firstly, precision is our primary consideration. For each reconstructed triangle facet representing the surface of the scene, we require it to lie on an existing plane.
Secondly, the reconstructed mesh should be hole-less. In the dense reconstruction of the surface triangle mesh, the appearance of holes is unacceptable since they lead to the wrong rendering results, where surfaces behind a real object are rendered.
Thirdly, the reconstruction of the triangle mesh should avoid constructing sliver triangles. A sliver triangle (i.e., a noodle-like triangle), as defined in the computer graphics community [56], is a thin triangle whose area is nearly zero, which is an undesired property in computer graphics. For example, such noodle-like triangles cause errors in numerical analysis performed on them [57]. Besides, they cause trouble in rendering pipelines (e.g., rasterization, texturing, and anti-aliasing [6, 7, 58]), leading to a loss of accuracy when calculating (e.g., depth testing, interpolation, etc.) the pixel values distributed near the sharp angle [7, 59, 60].
Lastly, the triangle mesh reconstruction should be computationally efficient to meet the requirements of real-time applications. The time consumed by each meshing process should not exceed the sampling interval between two consecutive LiDAR frames.
### _Challenges and approaches_
To achieve our goals of dense incremental meshing with the four requirements listed above, our system is proposed based on a deep analysis of the challenges. The challenges and corresponding scientific approaches are briefed below:
The first challenge is that the global map continuously grows with the newly registered LiDAR points, with each LiDAR scan update only affecting parts of the scene. Hence, an incremental mesh reconstruction method should be able to process only those parts of the scene with new points. In our work, we incrementally perform the mesh reconstruction with a mechanism similar to _git_ [61]. For each incremental mesh update, we first retrieve the data of the voxels with newly appended mesh vertices via the _pull_ step (detailed in Section VI-E1). Then, an efficient voxel-wise meshing algorithm is executed to reconstruct the mesh with these data. The incremental modifications of the newly reconstructed results w.r.t. the pulled results are calculated in our _commit_ step (detailed in Section VI-E2). Finally, these incremental modifications are merged into the global map via our _push_ step (detailed in Section VI-E3).
Given a set of 3D vertices, the second challenge is how to correctly and efficiently reconstruct the triangle facets representing the surfaces of the scene. Since it is hard to directly reconstruct a mesh from these vertices in 3D space, our work performs the meshing operation in 2D. Specifically, for vertices located in a voxel \\(\\mathbf{O}\\), we first project them onto a proper plane (i.e., the estimated plane given by the _localization_ module). The mesh of these 2D points is constructed using a 2D meshing algorithm and is then recovered back to 3D (detailed in Section VI-D2).
### _Voxel-wise vertex retrieval_
#### VI-C1 Retrieval of in-voxel vertices
To reconstruct the triangle mesh incrementally, the first step is to retrieve the vertices that need to mesh with the newly added points. ImMesh uses voxels for dividing the 3D space, and uses the flag \\(f_{\\mathbf{O}}\\) of each voxel \\(\\mathbf{O}\\) for identifying whether \\(\\mathbf{O}\\) has newly appended mesh vertices (i.e., _activated_ voxel).
Take an _activated_ voxel \\(\\mathbf{O}_{i}\\) as an example. We perform a voxel-wise meshing operation to reconstruct the triangle facets with all in-voxel vertices. For all vertices inside the voxel \\(\\mathbf{O}_{i}\\), we denote them \\(\\boldsymbol{\\mathcal{V}}_{i}^{\\text{In}}=\\{\\mathbf{V}_{j_{1}},\\mathbf{V}_{j_ {2}}, ,\\mathbf{V}_{j_{m}}\\}\\).
#### VI-C2 Vertex dilation
In practice, if we perform the meshing operation with only the in-voxel mesh vertices, gaps between neighboring voxels will appear due to the absence of triangle facets across voxels, as shown in Fig. 4(b). Motivated by the morphological operations (e.g., dilation and erosion) in digital image processing [62], we perform a 3D point cloud dilation that adds neighboring points of \\(\\boldsymbol{\\mathcal{V}}_{i}^{\\text{In}}\\) to erode the gaps between voxels, as shown in Fig. 4(a).
For vertex \\(\\mathbf{V}_{i_{j}}\\in\\boldsymbol{\\mathcal{V}}_{i}^{\\text{In}}\\), we perform the radius-search operation by leveraging the ikd-Tree [53] for searching the nearest vertices of \\(\\mathbf{V}_{i_{j}}\\) with their Euclidean distance smaller than a given value \\(d_{r}\\) (usually set as \\(1/4\\) of the size of a voxel). Using \\(\\boldsymbol{\\mathcal{\\tilde{V}}}_{i_{j}}\\) to denote the searched neighbor vertices of \\(\\mathbf{V}_{i_{j}}\\), we have:
\\[\\forall\\mathbf{V}\\in\\boldsymbol{\\mathcal{\\tilde{V}}}_{i_{j}},\\quad\\left|| \\texttt{Pos}(\\mathbf{V})-\\texttt{Pos}(\\mathbf{V}_{i_{j}})\\right||\\leq d_{r}. \\tag{13}\\]
We enumerate each \\(\\mathbf{V}_{i_{j}}\\in\\boldsymbol{\\mathcal{V}}_{i}^{\\text{In}}\\) and union the corresponding \\(\\boldsymbol{\\mathcal{\\tilde{V}}}_{i_{j}}\\) into \\(\\boldsymbol{\\mathcal{V}}_{i}\\) (excluding duplicated vertices), which is the set of dilated vertices. The full algorithm of our voxel-wise vertex retrieval is shown in Algorithm 1.
Fig. 4: The comparisons of mesh reconstruction with (a) and without (b) the vertex dilation.
### _Dimension reduction through projection_
With the mesh vertices \\(\\mathbf{\\mathcal{V}}_{i}\\) retrieved from Algorithm 1, we introduce the voxel-wise mesh reconstruction.
```
Input  : The activated voxel O_i
Output : The retrieved vertex set V_i
Start  : Copy the list of all in-voxel vertices to V_i^In;  V_i = V_i^In
1  foreach V_{i_j} ∈ V_i^In do
2      Ṽ_{i_j} = RadiusSearch(V_{i_j}, d_r)
3      foreach V ∈ Ṽ_{i_j} do
4          if V ∉ V_i then
5              V_i = V_i ∪ {V}
Return : The retrieved vertex set V_i after dilation
```
**Algorithm 1** Voxel-wise vertex retrieval of \\(\\mathbf{O}_{i}\\)
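For readers who prefer code, a brute-force C++ rendering of Algorithm 1 is sketched below; ImMesh performs the radius search with the ikd-Tree, and all identifiers here are our own.

```cpp
// A self-contained sketch of Algorithm 1: voxel-wise vertex retrieval with dilation.
#include <Eigen/Core>
#include <set>
#include <vector>

struct MeshVertex { int id; Eigen::Vector3d pos; };

std::vector<const MeshVertex*> retrieve_vertices_with_dilation(
    const std::vector<const MeshVertex*>& in_voxel,       // V_i^In
    const std::vector<MeshVertex>&        all_vertices,   // stand-in for the ikd-Tree
    double                                d_r)            // dilation radius (≈ S_O / 4)
{
  std::vector<const MeshVertex*> retrieved = in_voxel;    // V_i = V_i^In
  std::set<int> seen;
  for (const auto* v : in_voxel) seen.insert(v->id);

  for (const auto* v : in_voxel) {                        // foreach V_{i_j} in V_i^In
    for (const auto& cand : all_vertices) {               // RadiusSearch(V_{i_j}, d_r)
      if ((cand.pos - v->pos).norm() > d_r) continue;
      if (seen.insert(cand.id).second)                    // V not yet in V_i
        retrieved.push_back(&cand);                       // V_i = V_i ∪ {V}
    }
  }
  return retrieved;                                       // dilated vertex set V_i
}
```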
#### VI-D1 Projection of 3D vertices on a 2D plane
Since it is hard to directly mesh in real time with \\(\\boldsymbol{\\mathcal{V}}_{i}\\), which is distributed in 3D space, we simplify the 3D meshing problem into a 2D one by projecting \\(\\boldsymbol{\\mathcal{V}}_{i}\\) onto a suitable plane. This dimension reduction by projection is inspired by two key observations: 1) Every LiDAR point can be viewed as lying on a small local surface around it. Hence, the vertices \\(\\boldsymbol{\\mathcal{V}}_{i}\\) retrieved from Algorithm 1, which are distributed in a small area (i.e., inside a voxel \\(\\mathbf{O}_{i}\\)), tend to form a planar-like point cluster. 2) For such planar-like point clusters, we can approximately mesh them in a 2D view on the surface they lie on. To preserve the 3D space spanned by \\(\\boldsymbol{\\mathcal{V}}_{i}\\) to the best extent, the plane \\((\\mathbf{n},\\mathbf{q})\\) suitable for projection should be formed by the two principal components of \\(\\boldsymbol{\\mathcal{V}}_{i}\\), which is essentially the plane fitted from \\(\\boldsymbol{\\mathcal{V}}_{i}\\) and has already been calculated in our _localization_ module in Section V-A. The normal \\(\\mathbf{n}\\) of the plane is the eigenvector \\(\\mathbf{u}_{3}\\) that corresponds to the minimum eigenvalue \\(\\lambda_{3}\\) in (12), the eigendecomposition of the point covariance matrix \\(\\mathbf{A}\\) in voxel \\(\\mathbf{O}_{i}\\), and \\(\\mathbf{q}\\) is the center of the points inside \\(\\mathbf{O}_{i}\\).
For each vertex \\(\\mathbf{\\mathcal{V}}_{i_{j}}\\in\\mathbf{\\mathcal{V}}_{i}\\), we project it to plane \\((\\mathbf{n},\\mathbf{q})\\). The resultant 2D point \\(\\mathbf{p}_{i_{j}}\\) is calculated as:
\\[\\mathbf{p}_{i_{j}}=\\left[\\phi,\\rho\\right]^{T}\\in\\mathbb{R}^{2} \\tag{14}\\] \\[\\phi=\\left(\\texttt{Pos}(\\mathbf{\\mathcal{V}}_{i_{j}})-\\mathbf{q} \\right)^{T}\\mathbf{u}_{1},\\ \\ \\rho=\\left(\\texttt{Pos}(\\mathbf{\\mathcal{V}}_{i_{j}})-\\mathbf{q} \\right)^{T}\\mathbf{u}_{2} \\tag{15}\\]
where \\(\\mathbf{u}_{1},\\mathbf{u}_{2}\\) are the other two eigenvectors in (12). We use \\(\\mathbf{\\mathcal{P}}_{i}=\\left\\{\\mathbf{p}_{i_{1}},\\mathbf{p}_{i_{2}},\\ldots,\\mathbf{p}_{i_{m}}\\right\\}\\) to denote the 2D point set after projection onto the plane.
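A compact sketch of the projection in Eqs. (14)-(15) is given below; the function name and the use of Eigen are illustrative assumptions.

```cpp
// Project the retrieved 3D vertices onto the plane spanned by (u1, u2) through q.
#include <Eigen/Core>
#include <vector>

std::vector<Eigen::Vector2d> project_to_plane(const std::vector<Eigen::Vector3d>& verts,
                                               const Eigen::Vector3d& q,
                                               const Eigen::Vector3d& u1,
                                               const Eigen::Vector3d& u2)
{
  std::vector<Eigen::Vector2d> pts_2d;
  pts_2d.reserve(verts.size());
  for (const auto& v : verts)
    pts_2d.emplace_back((v - q).dot(u1),   // φ, Eq. (15)
                        (v - q).dot(u2));  // ρ, Eq. (15)
  return pts_2d;                            // p = [φ, ρ]^T, Eq. (14)
}
```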
#### VI-D2 Two-dimensional Delaunay triangulation
After the projection, the 3D meshing problem is reduced to a 2D one, which can be solved by 2D Delaunay triangulation.
As introduced in [63, 64], a Delaunay triangulation \\(\\texttt{Del}(\\mathbf{\\mathcal{P}})\\) for a 2D point set \\(\\mathbf{\\mathcal{P}}=\\left\\{\\mathbf{p}_{1},\\mathbf{p}_{2},\\ldots,\\mathbf{p}_{m}\\right\\}\\) is a triangulation such that no point in \\(\\mathbf{\\mathcal{P}}\\) is inside the circumcircle of any triangle. Using \\(\\mathbf{\\mathcal{T}}=\\texttt{Del}(\\mathbf{\\mathcal{P}})\\) to denote the triangle facets after triangulation, \\(\\mathbf{\\mathcal{T}}\\) has the following properties: 1) Any two facets are either disjoint or share a lower-dimensional face (i.e., an edge or a point). 2) The set of facets in \\(\\mathbf{\\mathcal{T}}\\) is connected under the adjacency relation. 3) The domain \\(\\mathbf{P}_{\\mathbf{\\mathcal{T}}}\\), which is the union of facets in \\(\\mathbf{\\mathcal{T}}\\), has no singularity1. With these three useful properties, 2D Delaunay triangulation has been widely applied for reconstructing dense facets from a given 2D point set (e.g., [65]).
Footnote 1: The union \\(\\mathbf{\\mathcal{U}}_{\\mathbf{\\mathcal{T}}}\\) of all simplices in \\(\\mathbf{\\mathcal{T}}\\) is called the domain of \\(\\mathbf{\\mathcal{T}}\\). A point in the domain of \\(\\mathbf{\\mathcal{T}}\\) is said to be singular if its surrounding in \\(\\mathbf{P}_{\\mathbf{\\mathcal{T}}}\\) is neither a topological ball nor a topological disc (see [https://doc.cgal.org/latest/Triangulation_2/index.html](https://doc.cgal.org/latest/Triangulation_2/index.html) of [63] for details).
Considering our requirements in Section VI-A, we chose Delaunay triangulation to reconstruct the mesh for the following remarkable properties. Firstly, it is a 2D triangulation that leaves no holes inside the convex hull of \\(\\mathbf{\\mathcal{P}}\\), which satisfies our first requirement. Secondly, it naturally avoids sliver triangles by maximizing the minimum angles of the triangles in the triangulation, which meets our second requirement. Finally, it is fast enough for real-time use: its complexity for \\(n\\) points is \\(\\mathbf{\\mathcal{O}}(n\\log(n))\\) in 2D (versus \\(\\mathbf{\\mathcal{O}}(n^{2})\\) in 3D) [66].
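The voxel-wise 2D triangulation itself can be delegated to an off-the-shelf implementation. The sketch below, which assumes the projected points together with their 3D vertex indices as input, uses CGAL's 2D Delaunay triangulation (a library choice made purely for illustration, not necessarily the implementation used in ImMesh) to obtain the triangle facets as index triplets:

```cpp
// Minimal sketch: 2D Delaunay triangulation of the projected points; each 2D
// point carries the index of its corresponding 3D vertex as "info".
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <array>
#include <utility>
#include <vector>

using K   = CGAL::Exact_predicates_inexact_constructions_kernel;
using Vb  = CGAL::Triangulation_vertex_base_with_info_2<std::size_t, K>;
using Tds = CGAL::Triangulation_data_structure_2<Vb>;
using DT  = CGAL::Delaunay_triangulation_2<K, Tds>;

// p2d[j] = (phi, rho) of the j-th retrieved vertex; returns index triplets.
std::vector<std::array<std::size_t, 3>>
delaunay_facets(const std::vector<std::pair<double, double>>& p2d) {
  std::vector<std::pair<DT::Point, std::size_t>> pts;
  pts.reserve(p2d.size());
  for (std::size_t j = 0; j < p2d.size(); ++j)
    pts.emplace_back(DT::Point(p2d[j].first, p2d[j].second), j);

  DT dt;
  dt.insert(pts.begin(), pts.end());   // expected O(n log n) in 2D

  std::vector<std::array<std::size_t, 3>> facets;
  for (auto f = dt.finite_faces_begin(); f != dt.finite_faces_end(); ++f)
    facets.push_back({f->vertex(0)->info(), f->vertex(1)->info(),
                      f->vertex(2)->info()});
  return facets;
}
```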
Denote the triangle facets after the Delaunay triangulation of \\(\\mathbf{\\mathcal{P}}_{i}\\) (from Section VI-D1) as \\(\\mathbf{\\mathcal{T}}_{i}=\\texttt{Del}(\\mathbf{\\mathcal{P}}_{i})=\\{\\mathbf{T}_{i_{1}},\\mathbf{T}_{i_{2}},\\ldots,\\mathbf{T}_{i_{n}}\\}\\). For each triangle facet \\(\\mathbf{T}_{i_{j}}\\in\\mathbf{\\mathcal{T}}_{i}\\), we retrieve the indices of its three vertices with \\(\\{\\alpha,\\beta,\\gamma\\}=\\texttt{Pts\\_id}(\\mathbf{T}_{i_{j}})\\), indicating that this triangle is formed by the 2D points \\(\\{\\mathbf{p}_{i_{\\alpha}},\\mathbf{p}_{i_{\\beta}},\\mathbf{p}_{i_{\\gamma}}\\}\\). Returning to 3D space, we form the triangle facet \\(\\mathbf{T}_{i_{j}}\\) with vertices \\(\\{\\mathbf{V}_{i_{\\alpha}},\\mathbf{V}_{i_{\\beta}},\\mathbf{V}_{i_{\\gamma}}\\}\\), as shown in Fig. 5. Then, the center \\(\\texttt{Center}(\\mathbf{T}_{i_{j}})\\) and norm \\(\\texttt{Norm}(\\mathbf{T}_{i_{j}})\\) of \\(\\mathbf{T}_{i_{j}}\\) are calculated as below:
\\[\\texttt{Center}(\\mathbf{T}_{i_{j}})=\\left(\\texttt{Pos}(\\mathbf{V}_{i_{\\alpha}})+\\texttt{Pos}(\\mathbf{V}_{i_{\\beta}})+\\texttt{Pos}(\\mathbf{V}_{i_{\\gamma}})\\right)/3 \\tag{16}\\]
\\[\\texttt{Norm}(\\mathbf{T}_{i_{j}})=\\mathbf{n}/\\left\\|\\mathbf{n}\\right\\| \\tag{17}\\]
\\[\\mathbf{n}=\\left(\\texttt{Pos}(\\mathbf{V}_{i_{\\alpha}})-\\texttt{Pos}(\\mathbf{V}_{i_{\\beta}})\\right)\\times\\left(\\texttt{Pos}(\\mathbf{V}_{i_{\\gamma}})-\\texttt{Pos}(\\mathbf{V}_{i_{\\beta}})\\right) \\tag{18}\\]
Additionally, to ensure a consistent face orientation, which is needed to distinguish front from back faces in computer graphics applications such as face culling, lighting, and shading, we adjust the normal of \\(\\mathbf{T}_{i_{j}}\\) so that it always faces towards the current LiDAR position:
\\[\\texttt{If}:\\ \\left({}^{W}_{L}\\mathbf{t}-\\texttt{Center}(\\mathbf{T}_{i_{j}})\\right)^{T}\\texttt{Norm}(\\mathbf{T}_{i_{j}})<0 \\tag{19}\\]
\\[\\texttt{Then}:\\ \\texttt{Norm}(\\mathbf{T}_{i_{j}})=-\\texttt{Norm}(\\mathbf{T}_{i_{j}}) \\tag{20}\\]
where \\({}^{W}_{L}\\mathbf{t}\\) is the LiDAR position of the current scan, which is estimated in our _localization_ module. Furthermore, if the normal is flipped in (20), we change the indices of \\(\\mathbf{T}_{i_{j}}\\) from \\(\\{\\alpha,\\beta,\\gamma\\}\\) to \\(\\{\\beta,\\alpha,\\gamma\\}\\) when publishing this facet in our _broadcaster_, which is necessary to ensure the correct normal orientation in certain rendering engines (e.g., in [67]).
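A minimal sketch of this facet post-processing of (16)-(20), with all type and variable names assumed, is given below; it computes the facet center and unit normal and flips the normal (and the vertex winding) when the facet faces away from the current LiDAR position:

```cpp
// Minimal sketch: facet center/normal computation and sensor-facing orientation.
#include <Eigen/Dense>
#include <array>
#include <utility>

struct Facet {
  std::array<int, 3> idx;   // indices {alpha, beta, gamma} into the vertex set
  Eigen::Vector3d center;
  Eigen::Vector3d normal;
};

void orient_facet(Facet& f, const Eigen::Vector3d& Va, const Eigen::Vector3d& Vb,
                  const Eigen::Vector3d& Vc, const Eigen::Vector3d& t_WL) {
  f.center = (Va + Vb + Vc) / 3.0;                       // Eq. (16)
  f.normal = ((Va - Vb).cross(Vc - Vb)).normalized();    // Eqs. (17)-(18)
  if ((t_WL - f.center).dot(f.normal) < 0.0) {           // facing away, Eq. (19)
    f.normal = -f.normal;                                // Eq. (20)
    std::swap(f.idx[0], f.idx[1]);  // {alpha,beta,gamma} -> {beta,alpha,gamma}
  }
}
```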
Fig. 5: In ImMesh, we reduce the 3D meshing problem to a 2D one by projecting the 3D vertices onto their principal plane.
### _Voxel-wise meshing with pull, commit, and push_
With the triangle facets \\(\\mathbf{\\mathcal{T}}_{i}\\) newly constructed by the voxel-wise meshing operation, we incrementally merge \\(\\mathbf{\\mathcal{T}}_{i}\\) into the existing triangle facets of the voxel currently saved in the _map structure_. This update is designed with a mechanism similar to _git_ [61] (a version control software) that includes _pull_, _commit_, and _push_ steps.
#### VI-E1 Pull
The pull operation retrieves the existing triangle facets \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{pull}}\\) in the \\(i\\)-th \\(L2\\) voxel. Given the vertices \\(\\mathbf{\\mathcal{V}}_{i}\\) in the voxel, which are obtained from Algorithm 1, we retrieve the triangle facets \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{pull}}\\) from the _map structure_ as shown in Algorithm 2.
```
Input : The retrieved vertex set V_i from Algorithm 1
Output: Existing triangle facets in the voxel T_i^pull
Start : T_i^pull = {null}
1 foreach V_j ∈ V_i do
2     Get the triangles having vertex V_j: T_{V_j} = Tri_List(V_j)
3     foreach T_k ∈ T_{V_j} do
4         Get all vertices of T_k: {alpha, beta, gamma} = Pts_id(T_k)
5         if (V_alpha ∈ V_i) and (V_beta ∈ V_i) and (V_gamma ∈ V_i) then
6             T_i^pull = T_i^pull ∪ T_k
Return: T_i^pull
```
**Algorithm 2** Voxel-wise mesh pull.
#### VI-E2 Commit
In this step, we incrementally merge the newly reconstructed triangle facets \\(\\mathbf{\\mathcal{T}}_{i}\\) (from Section VI-D2) into the existing facets \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{pull}}\\) (from Algorithm 2). The incremental updates are summarized into an array of mesh facets to be added, \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{add}}\\), and an array of mesh facets to be erased, \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{erase}}\\). The detailed processes of this commit step are shown in Algorithm 3.
```
Input : The pulled triangle facets T_i^pull from Algorithm 2
        The reconstructed triangle facets T_i
Output: The triangle facets to be added  T_i^add
        The triangle facets to be erased T_i^erase
Start : T_i^add = {null}, T_i^erase = {null}
1 foreach T_j ∈ T_i do
2     if T_j ∉ T_i^pull then
3         T_i^add = T_i^add ∪ T_j
4 foreach T_j ∈ T_i^pull do
5     if T_j ∉ T_i then
6         T_i^erase = T_i^erase ∪ T_j
```
**Algorithm 3** Voxel-wise mesh commit.
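Conceptually, the commit step reduces to two set differences between the freshly triangulated facets and the pulled facets. A minimal sketch (with a hypothetical facet key type introduced only for this illustration) is shown below:

```cpp
// Minimal sketch: the commit step of Algorithm 3 as two set differences.
#include <algorithm>
#include <array>
#include <iterator>
#include <set>
#include <vector>

// A facet is identified here by its sorted vertex-index triplet (illustrative only).
using FacetKey = std::array<int, 3>;

void commit(const std::set<FacetKey>& pulled,    // T_i^pull from the map structure
            const std::set<FacetKey>& rebuilt,   // T_i from the new triangulation
            std::vector<FacetKey>& to_add, std::vector<FacetKey>& to_erase) {
  // Facets produced by this scan that the voxel does not contain yet.
  std::set_difference(rebuilt.begin(), rebuilt.end(), pulled.begin(), pulled.end(),
                      std::back_inserter(to_add));
  // Facets stored in the voxel that the new triangulation no longer supports.
  std::set_difference(pulled.begin(), pulled.end(), rebuilt.begin(), rebuilt.end(),
                      std::back_inserter(to_erase));
}
```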
#### VI-E3 Push
With the incremental modifications \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{erase}}\\) and \\(\\mathbf{\\mathcal{T}}_{i}^{\\texttt{add}}\\) from the previous _commit_ step, we perform the addition and erasure of triangle facets in the _push_ step by: 1) constructing (or deleting) the triangle facet structures (as defined in Section IV-A3); 2) adding (or removing) the pointers to these facet structures in the other data structures (i.e., mesh vertices and regions). The detailed processes of the _push_ step are shown in Algorithm 4.
### _Parallelism_
To further improve the real-time performance, we parallelize our algorithms to better utilize the computation power of a multi-core CPU. ImMesh has two major levels of parallelism:
The first is between the _localization_ module and the _meshing_ module. Except for the point cloud registration in the _localization_ module, which operates on the mesh vertices just like the meshing operation, the remaining processes of the _localization_ module are parallelized with the _meshing_ module. More specifically, once our meshing processes start, the _localization_ module is allowed to process newly incoming LiDAR scans to estimate the LiDAR pose. However, the subsequent point cloud registration step is only executed after the current meshing process has finished.
The second is among the voxel-wise meshing operations of the _activated_ voxels. The voxel-wise meshing operations of different voxels are independent; thus, no conflicting operations exist on the same set of data, as sketched below.
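A minimal sketch of this second level of parallelism, assuming a hypothetical per-voxel meshing routine, is given below; since every task only touches its own voxel, a plain parallel loop over the activated voxels is sufficient:

```cpp
// Minimal sketch: per-voxel meshing distributed over CPU cores with the C++17
// parallel algorithms; no locking is needed because voxels do not share data.
#include <algorithm>
#include <execution>
#include <functional>
#include <vector>

struct Voxel { /* per-voxel vertex list, facet list, status flag, ... */ };

// mesh_one_voxel performs retrieve -> project -> triangulate -> pull/commit/push
// for a single voxel; it must only touch that voxel's data to stay race-free.
void mesh_activated_voxels(std::vector<Voxel*>& activated,
                           const std::function<void(Voxel&)>& mesh_one_voxel) {
  std::for_each(std::execution::par, activated.begin(), activated.end(),
                [&](Voxel* v) { mesh_one_voxel(*v); });
}
```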
### _The full meshing algorithm_
To sum up, our full meshing processes are shown in Algorithm 5.
```
Input : The triangle facets to be erased T_i^erase
        The triangle facets to be added  T_i^add
1  Function Add_triangle(T_j):
2      Get vertex indices {alpha, beta, gamma} = Pts_id(T_j)
3      Find the region R_k with Center(T_j) via (7)
4      Set the status flag f_{R_k} of region R_k to Sync-required
5      Add T_j to region R_k and to the triangle lists of vertices V_alpha, V_beta, V_gamma
6  Function Erase_triangle(T_j):
7      Get vertex indices {alpha, beta, gamma} = Pts_id(T_j)
8      Remove T_j from the triangle lists of vertices V_alpha, V_beta, V_gamma
9      Find the region R_k with Center(T_j) via (7)
10     Set the status flag f_{R_k} of region R_k to Sync-required
11     Remove T_j from region R_k and delete triangle T_j from memory
12 foreach T_j ∈ T_i^add do
13     Add_triangle(T_j)
14 foreach T_j ∈ T_i^erase do
15     Erase_triangle(T_j)
```
**Algorithm 4** Voxel-wise mesh push.
## VII Broadcaster
In ImMesh, the _broadcaster_ module publishes our state estimation results (i.e., odometry) and mapping results (i.e., newly registered point cloud and triangle mesh) to other applications. Additionally, if a depth image is needed, the _broadcaster_ module will rasterize the triangle meshes into a depth image.
### _Broadcast of triangle facets_
Since the triangle facets are stored in regions in an unstructured way, they cannot be broadcast directly. To resolve this problem, our _broadcaster_ module maintains a background thread that asynchronously copies the triangle facets from each _sync-required_ region (set as _sync-required_ after its triangle facets are updated in Algorithm 4) to a structured array for broadcasting. These _sync-required_ regions are then marked as _synced_ after the copying. Finally, the _broadcaster_ module publishes the newest triangle facets to other applications.
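A minimal sketch of this background synchronization, with simplified region and facet types that only illustrate the idea (they are not the actual ImMesh data structures), is given below:

```cpp
// Minimal sketch: background thread copying Sync-required regions into a
// structured per-region buffer that the broadcaster can publish.
#include <atomic>
#include <chrono>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

struct Triangle { int v[3]; };                 // vertex indices of one facet

struct Region {
  std::mutex mtx;
  bool sync_required = false;                  // set by the meshing push step
  std::vector<Triangle> facets;
};

void sync_loop(std::vector<Region>& regions,
               std::vector<std::vector<Triangle>>& publish_buf,  // one slot per region
               std::atomic<bool>& running) {
  while (running.load()) {
    for (std::size_t i = 0; i < regions.size(); ++i) {
      std::lock_guard<std::mutex> lock(regions[i].mtx);
      if (!regions[i].sync_required) continue;
      publish_buf[i] = regions[i].facets;      // structured copy used for broadcasting
      regions[i].sync_required = false;        // region is now Synced
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
}
```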
### _Rasterization of depth image_
Some robotic applications, such as autonomous navigation [68] and exploration [69] tasks, require dense, accurate depth images for obstacle avoidance. To meet the requirements of these scenarios, the _broadcaster_ module utilizes the triangle facets from Section VII-A to rasterize a depth image at any customized resolution and FoV, based on the fast rasterization of _OpenGL_ [58].
Besides depth image rasterization, the mesh obtained by our _meshing_ module can reinforce the raw LiDAR point cloud measurements by increasing the resolution and enlarging the FoV. In detail, with the projection matrix and estimated pose used for rasterizing the depth image, 3D points are obtained by unprojecting each pixel of the depth image. The unprojected 3D points have a higher resolution and a larger FoV than the raw LiDAR measurement scan (see our Application-1 in Section VIII-D).
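A minimal sketch of this unprojection, assuming a simple pinhole model with intrinsics (fx, fy, cx, cy) for the virtual depth camera, is given below:

```cpp
// Minimal sketch: unproject every valid pixel of a rasterized depth image back
// to a 3D point in the sensor frame.
#include <Eigen/Dense>
#include <vector>

std::vector<Eigen::Vector3d> unproject_depth(const std::vector<float>& depth,
                                             int width, int height, double fx,
                                             double fy, double cx, double cy) {
  std::vector<Eigen::Vector3d> points;
  points.reserve(depth.size());
  for (int v = 0; v < height; ++v) {
    for (int u = 0; u < width; ++u) {
      const double z = depth[v * width + u];
      if (z <= 0.0) continue;   // no mesh facet was hit at this pixel
      points.emplace_back((u - cx) / fx * z, (v - cy) / fy * z, z);
    }
  }
  return points;
}
```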
```
Input : The set of voxels O = {O_1, O_2, ..., O_m} activated in Section V-B
Start : The triangle facets to be added T^add = {null}, and to be erased T^erase = {null} in this update
1  foreach O_i ∈ O do
2      Retrieve vertices V_i with Algorithm 1
3      Reconstruct the triangle facets T_i with V_i (Section VI-D2)
4      Perform the voxel-wise mesh pull (Algorithm 2) to get T_i^pull        // Mesh pull
5      Perform the voxel-wise mesh commit (Algorithm 3) to get the facets
       to be added T_i^add and erased T_i^erase                              // Mesh commit
6      T^add = T^add ∪ T_i^add,   T^erase = T^erase ∪ T_i^erase
   /* == Mesh push start == */
7  foreach T_j ∈ T^add do
8      Add_triangle(T_j)        // In Algorithm 4
9  foreach T_j ∈ T^erase do
10     Erase_triangle(T_j)      // In Algorithm 4
   /* == Mesh push end == */
11 foreach O_i ∈ O do
12     Reset the status flag f_{O_i} of O_i to deactivated
```
**Algorithm 5** The full meshing process for each update of a LiDAR scan.
## VIII Experiments and results
In this section, we conduct experiments to evaluate our meshing ability, focusing on the runtime performance and the accuracy of the reconstructed triangle mesh.
### _Experiment-1: ImMesh for immediate mesh reconstruction_
In this experiment, we verify the overall performance of ImMesh toward real-time simultaneous localization and meshing with live video demonstrations. As shown in Fig. 6(b), we record the entire process of our data collection on the campus of the University of Hong Kong (HKU), deploying ImMesh for simultaneously estimating the sensor pose and reconstructing the triangle mesh on the fly. The _accompanying video [70] (starting at 00:09)_ demonstrating this experiment is available on YouTube.
#### VIII-A1 Experiment setup
Our handheld device for data collection is shown in Fig. 6(a), which includes a mini-computer (equipped with an _Intel i9-10900_ CPU and 64 GB RAM), a _Livox Avia_ 3D LiDAR (FoV: \\(70.4^{\\circ}\\times 77.2^{\\circ}\\)), and an RGB camera for previewing. In the experiment video, three time-aligned views from different sources are presented, including: 1) a screen-recorded view that shows the estimated pose and the online reconstructed triangle mesh of ImMesh; 2) a camera preview that records the video stream of the front-facing camera; 3) a third-person view that records the whole process of this experiment.
#### VIII-A2 Result and analysis
As presented in the video, benefiting from the accurate uncertainty models of the LiDAR points and planes, which account for both the LiDAR measurement noise and the sensor pose estimation errors in our _localization_ module, ImMesh is able to provide highly accurate \\(6\\) DoF pose estimation in real-time. Without any additional processing (i.e., loop detection), both trials close the loop by themselves after traveling \\(957\\,\\mathrm{m}\\) and \\(391\\,\\mathrm{m}\\), respectively. In addition, with the efficient architecture design and careful engineering implementation of our _meshing_ module, the triangle mesh of the surrounding environment is incrementally reconstructed on the fly. The live preview of real-time meshing informs users whether the data collection is sufficient for every part of the scene. This important function reduces the need for revisits and facilitates the collection process. Immediately after the data collection, the dense, accurate triangle mesh of the scene is available for analysis. For this reason, our system is named **Im**mediate **Mesh**ing (ImMesh).
### _Experiment-2: Extensive evaluation of ImMesh on public datasets with various types of LiDAR in different scenes_
With all the modules carefully designed for efficiency, both the _localization_ and _meshing_ modules easily achieve real-time performance on a standard multi-core CPU. In this
Fig. 6: (a) shows our handheld device for data collection and online mesh reconstruction. (b) shows a snapshot of our _accompanying video [70] (starting at 00:09)_ of Experiment-1, with three time-aligned views of different sources including a screen-recorded view (in red), a camera preview (in yellow), and a third-person view (in blue).
experiment, we evaluate the average time consumption on four public datasets with the computation platform listed in Section VIII-A1.
The four datasets we chose are the Kitti dataset [71], the NCLT dataset [72], the NTU VIRAL dataset [73], and the R\\({}^{3}\\)LIVE dataset [74]. They are collected in different scenarios ranging from structured urban buildings to cluttered field environments (see TABLE II), using various types of LiDARs, including mechanical spinning LiDARs with different numbers of channels and solid-state LiDARs with a small FoV (see TABLE I).
#### VIII-B1 Experiment setup
ImMesh is robust to its parameter values and requires only a few user-adjustable parameters to achieve good results without extensive parameter tuning. We
\\begin{table}
\\begin{tabular}{c c c c c c c c c} \\hline \\hline
**Sequence** & **Traveling length (m)** & **Duration (s)** & **LiDAR frames** & **Meshing mean/Std (ms)** & **Localization mean/Std (ms)** & **Number of vertices (M)** & **Number of facets (M)** & **Scenarios** \\\\ \\hline
Kitti\\_00 & 3,724.2 & 456 & 4,541 & 32.1 / 12.0 & 49.0 / 11.7 & 3.33 & 7.70 & Urban city \\\\
Kitti\\_01 & 2,453.2 & 146 & 1,101 & 34.5 / 10.5 & 51.1 / 18.5 & 2.03 & 4.05 & Highway \\\\
Kitti\\_02 & 5,058.9 & 509 & 4,661 & 33.5 / 7.0 & 36.2 / 9.5 & 4.39 & 10.03 & Residential \\\\
Kitti\\_03 & 560.9 & 88 & 801 & 28.0 / 7.1 & 49.0 / 12.2 & 0.73 & 1.55 & Countryside; Road \\\\
Kitti\\_04 & 393.6 & 27 & 271 & 30.1 / 9.4 & 24.4 / 12.9 & 0.41 & 0.85 & Urban city; Road \\\\
Kitti\\_05 & 2,205.6 & 303 & 2,761 & 29.6 / 8.2 & 38.7 / 11.5 & 2.17 & 4.95 & Residential \\\\
Kitti\\_06 & 1,232.9 & 123 & 1,101 & 23.1 / 5.6 & 56.9 / 9.7 & 0.89 & 1.89 & Urban city \\\\
Kitti\\_07 & 2,453.2 & 114 & 1,101 & 20.7 / 7.4 & 31.3 / 8.6 & 0.76 & 1.71 & Urban city \\\\
Kitti\\_08 & 3,222.8 & 441 & 4,071 & 32.4 / 7.8 & 45.7 / 17.7 & 3.56 & 7.94 & Urban city \\\\
Kitti\\_09 & 1,705.1 & 171 & 1,591 & 34.5 / 7.5 & 43.1 / 19.2 & 1.83 & 4.12 & Countryside; Road \\\\
Kitti\\_10 & 919.5 & 132 & 1,201 & 23.4 / 6.9 & 30.9 / 11.9 & 0.94 & 2.10 & Residential \\\\ \\hline
NCLT 2012-01-15 & 7,499.8 & 6739 & 66,889 & 26.3 / 14.1 & 21.3 / 9.8 & 9.66 & 26.61 & Campus; Indoor \\\\
NCLT 2012-04-29 & 318.1 & 2598 & 25,819 & 25.4 / 13.9 & 19.1 / 5.4 & 4.82 & 13.43 & Campus \\\\
NCLT 2012-06-15 & 4,085.9 & 3310 & 32,954 & 24.5 / 14.4 & 22.3 / 7.7 & 6.36 & 17.47 & Campus \\\\
NCLT 2013-01-10 & 1,132.3 & 1024 & 10,212 & 20.2 / 12.5 & 19.3 / 6.5 & 2.02 & 5.50 & Campus \\\\
NCLT 2013-04-05 & 4,523.6 & 4167 & 41,651 & 20.6 / 13.8 & 26.8 / 11.7 & 9.58 & 23.98 & Campus \\\\ \\hline
NTU VIRAL ee\\_01 & 265.3 & 398 & 3,987 & 11.2 / 6.7 & 14.5 / 3.4 & 0.60 & 1.38 & Aerial; Outdoor \\\\
NTU VIRAL nya\\_01 & 200.6 & 396 & 3,949 & 9.4 / 5.3 & 10.2 / 1.7 & 0.54 & 1.24 & Aerial; Indoor \\\\
NTU VIRAL rt\\_01 & 449.6 & 482 & 4,615 & 12.1 / 8.5 & 10.9 / 2.6 & 0.72 & 2.03 & Aerial; Outdoor \\\\
NTU VIRAL obj\\_01 & 222.1 & 354 & 3,542 & 11.4 / 8.0 & 17.2 / 3.2 & 0.47 & 1.15 & Aerial; Outdoor \\\\
NTU VIRAL mt\\_01 & 319.4 & 583 & 5,795 & 6.3 / 3.7 & 8.8 / 1.2 & 0.16 & 0.41 & Aerial; Indoor \\\\ \\hline
R\\({}^{3}\\)LIVE hku\\_campus\\_00 & 190.6 & 202 & 2,022 & 12.0 / 7.3 & 11.5 / 3.2 & 0.58 & 1.24 & Campus \\\\
R\\({}^{3}\\)LIVE hku\\_campus\\_01 & 374.6 & 304 & 3,043 & 20.4 / 12.6 & 17.2 / 6.9 & 1.32 & 2.86 & Campus \\\\
R\\({}^{3}\\)LIVE hku\\_campus\\_02 & 354.3 & 323 & 32.6 & 13.5 / 6.4 & 11.9 / 2.8 & 0.87 & 1.91 & Campus \\\\
R\\({}^{3}\\)LIVE hku\\_campus\\_03 & 181.2 & 173 & 1,737 & 12.2 / 5.7 & 11.3 / 2.9 & 0.55 & 1.13 & Campus \\\\
R\\({}^{3}\\)LIVE hku\\_main\\_building & 1,036.9 & 1170 & 11,703 & 16.9 / 14.3 & 12.5 / 8.0 & 3.03 & 6.80 & Indoor; Outdoor \\\\
R\\({}^{3}\\)LIVE hku\\_park\\_00 & 247.3 & 228 & 2,285 & 30.1 / 15.9 & 12.6 / 3.7 & 0.92 & 2.38 & Cluttered field \\\\
R\\({}^{3}\\)LIVE hku\\_park\\_01 & 401.8 & 351 & 3,520 & 31.5 / 12.2 & 12.6 / 3.9 & 1.67 & 3.96 & Cluttered field \\\\
R\\({}^{3}\\)LIVE hkust\\_campus\\_00 & 1,317.2 & 1073 & 10,732 & 26.0 / 12.8 & 18.0 / 7.6 & 4.92 & 11.25 & Campus \\\\
R\\({}^{3}\\)LIVE hkust\\_campus\\_01 & 1,524.3 & 1162 & 11,629 & 27.1 / 13.9 & 16.8 / 6.7 & 5.35 & 12.64 & Campus \\\\
R\\({}^{3}\\)LIVE hkust\\_campus\\_02 & 2,112.2 & 1618 & 4,787 & 26.7 / 14.5 & 20.3 / 6.1 & 1.99 & 4.65 & Campus \\\\
R\\({}^{3}\\)LIVE hkust\\_campus\\_03 & 503.8 & 478 & 16,181 & 33.6 / 13.3 & 21.0 / 5.3 & 7.67 & 18.25 & Campus \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE II: This table shows the detailed information (e.g., length, duration, scenarios) of each testing sequence, the time consumption of ImMesh in processing a LiDAR scan, and the number of vertices and facets of each reconstructed mesh in Experiment-2. Our _accompanying video [70] (starting at 05:21)_ that visualizes the online mesh reconstruction process with sequence Kitti_00 is available on YouTube.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
 & **Minimum point distance \\(\\xi\\) (m)** & **\\(S_{\\mathbf{R}}\\) (m)** & **\\(S_{\\mathbf{O}}\\) (m)** \\\\ \\hline
**Mechanical LiDAR** & 0.15 & 15.0 & 0.60 \\\\
**Solid-state LiDAR** & 0.10 & 10.0 & 0.40 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE III: Two ImMesh configurations for the two types of LiDARs (i.e., mechanical and solid-state LiDAR).
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Dataset** & **Kitti** & **NCLT** & **NTU VIRAL** & **R\\({}^{3}\\)LIVE** \\\\ \\hline
LiDAR & Velodyne HDL-64E & Velodyne HDL-32E & Ouster OS1-16 & \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: The LiDAR sensors used in the four evaluated datasets.
benchmark ImMesh on the four datasets with only two sets of configurations. The two configurations are required to adapt to the two classes of LiDARs (i.e., mechanical and solid-state LiDAR), as shown in TABLE III. Since the 3D points sampled by a solid-state LiDAR are distributed within a small sensor FoV, the accumulated point cloud of a solid-state LiDAR usually has a higher density. Therefore, we set the minimum point distance and voxel size for solid-state LiDAR \\(1.5\\) times smaller than those for mechanical LiDAR, as shown in TABLE III. We maintained the same configuration for the other settings except for some necessary adjustments to match the hardware setup.
#### VIII-B2 Result and analysis
TABLE II shows the detailed information (e.g., length, duration, scene) of each sequence, the average time consumption of our _localization_ and _meshing_ modules in processing a LiDAR scan, and the number of vertices and facets of each reconstructed mesh. From TABLE II, it can be seen that the average processing times of both the _localization_ and _meshing_ modules are closely related to the density of the input LiDAR scan. In detail, a LiDAR with more channels has a much higher point sampling rate (see TABLE I), which causes more data to be processed in each update of a LiDAR frame (e.g., more points in a voxel and more voxels activated in each frame). Besides, the processing time varies among different scenarios within the same dataset. The sequences sampled on a highway or in a field environment (e.g., Kitti_01, Kitti_09) usually have a longer LiDAR sampling range, leading to more points per frame to be processed. Thanks to the efficient data structures (e.g., ikd-Tree, hash tables) and the parallelism strategy, which allow us to perform the state estimation and incremental mesh reconstruction simultaneously, the time consumption on large-scale datasets is bounded by an acceptable value (\\(\\leqslant 35\\,\\mathrm{ms}\\) for meshing, \\(\\leqslant 49\\,\\mathrm{ms}\\) for localization).
The average and maximum time consumption of ImMesh on the four datasets are shown in TABLE IV, reflecting that our system satisfies the real-time requirement even with different types of LiDARs and scenarios. Notice that the LiDAR frame rate is \\(10\\,\\mathrm{Hz}\\) for all datasets, and our _meshing_
Fig. 7: The screenshots of our _Microsoft AirSim_ simulator used for generating synthetic data. (a, c, and e) show the βUrban cityβ environments, while (b, d, f, and g) depict the βCluttered fieldβ environments. The yellow frustums in (a\\(\\sim\\)d) represent the poses of LiDAR sensor used to capture synthetic data. These frustums are set as invisible during data generation, as shown in (e-g). The images within the green and blue boxes in (a and d) respectively show the produced depth and RGB images.
and _localization_ modules run in parallel (see Section VI-F).
### _Experiment-3: Quantitative evaluation of ImMesh_
In this experiment, we use both real-world and synthetic data to conduct the quantitative evaluations of ImMesh, by comparing it against existing reconstruction methods.
#### VIII-C1 Preparation of large-scale, real-world data
We conducted a quantitative evaluation using large-scale real-world LiDAR data collected from the _Complex Urban Dataset_[75]. This dataset provides a high-quality set of ground truth LiDAR poses and ground truth point clouds, which enables a comprehensive assessment of our proposed method and existing baselines. The detailed traveling length and the number of LiDAR frames of tested sequences are shown in TABLE V.
#### VIII-C2 Preparation of synthetic data
To further evaluate the performance of all the methods under diverse scenarios with varying levels of clutter, we generated synthetic data using the _Microsoft AirSim_ simulator [4]. The screenshots of
Fig. 8: The qualitative comparison of ground truth and four evaluated methods, which are tested with the depth images resolution of \\(640\\times 480\\). The facets colored in red represent surfaces that have been incorrectly reconstructed, with \\(80\\%\\) of their sampling points not lying on the ground truth surface (i.e., the distances between these points and the nearest ground truth surface are larger than \\(5\\,\\mathrm{cm}\\)).
Fig. 9: The qualitative comparisons of four methods that evaluated with depth images of different resolutions. The facets colored in red represent surfaces that have been incorrectly reconstructed, with \\(80\\%\\) of their sampling points not lying on the ground truth surface.
our simulated scenarios are presented in Fig. 7, where we prepared two typical environments: "Urban city" (Fig. 7(a, c, and e)) and "Cluttered field" (Fig. 7(b, d, f, and g)), both of which have dimensions of \\(20\\,\\mathrm{m}\\times 10\\,\\mathrm{m}\\times 8\\,\\mathrm{m}\\). The "Urban city" environment consists of structured objects, such as buildings, towers, and water tanks, providing a realistic representation of an urban setting. On the other hand, the "Cluttered field" environment incorporates a diverse range of plants, including trees, flowers, grasses, and other vegetation, creating a more complex and cluttered scenario.
To simulate point clouds collected by a real LiDAR, we unproject the 3D points from the depth image. The depth images are obtained by querying the _AirSim_'s API, specifically the images shown within the green box in Fig. 7(a and b). The depth image has a field of view (FoV) of \\(120^{\\circ}\\times 80^{\\circ}\\). We manually positioned the poses, represented by the yellow frustums in Fig. 7(a \\(\\sim\\) d), to ensure that the generated point cloud covers most of the surfaces in the scene. Additionally, we simulate LiDAR data with different point cloud densities by generating data using three different sets of depth image resolutions: \\(640\\times 480\\), \\(320\\times 240\\), and \\(160\\times 120\\), as shown in TABLE VI.
#### VIII-C3 Experiment setup
In this experiment, we performed a comprehensive evaluation of the meshing ability of our work and existing mesh reconstruction baselines, which include a TSDF-based method implemented in the _Point Cloud Library (PCL)_ [54] with GPU acceleration, a Delaunay triangulation and graph-cut based method implemented in _OpenMVS_ [76], and the official implementation of Poisson surface reconstruction [19, 20].
We conducted the evaluation of these methods on a desktop PC equipped with an _Intel i7-9700K_ CPU, 64 GB RAM, and an _Nvidia 2080 Ti_ GPU with 12 GB of graphics memory. We fed the online reconstruction methods, _ImMesh_ and the TSDF-based method (_TSDF_), with LiDAR points frame by frame. To mitigate the impact of pose estimation errors on the meshing results, we disabled the pose estimation module and provided the ground truth poses to the online mesh reconstruction methods _ImMesh_ and _TSDF_. For the offline mesh reconstruction methods, namely Delaunay triangulation (_Del_) and Poisson surface reconstruction (_Poi_), we fed them the accumulated point cloud from all frames. Additionally, to address the issue of uneven point cloud density, which can result in errors when calculating normals for _Poi_, and to prevent _Del_ from reconstructing small facets that could bias the accuracy calculations, we apply a voxel grid filter with a leaf size of \\(1.0\\,\\mathrm{cm}\\times 1.0\\,\\mathrm{cm}\\times 1.0\\,\\mathrm{cm}\\) to downsample the accumulated point cloud before providing it as input to both
\\begin{table}
\\begin{tabular}{c c c c c c|c c c c c c} \\hline \\hline
**Sequence** & **Traveling length (km)** & **Number of LiDAR frames** & **Method** & **Cost time (h:m:s) \\(\\downarrow\\)** & **Max-Min angle (\\(^{\\circ}\\)) \\(\\downarrow\\)** & **C2SE \\(\\downarrow\\)** & **Completeness (m) \\(\\downarrow\\)** & **Accuracy (m) \\(\\downarrow\\)** & **Recall (\\%) \\(\\uparrow\\)** & **Precision (\\%) \\(\\uparrow\\)** & **F-score \\(\\uparrow\\)** \\\\ \\hline
Urban01 & 11.72 & 13,846 & Poi & 09:58:39 & 60.1014 & 0.9760 & 0.0632 & 0.0724 & 0.8554 & 0.7563 & 0.8028 \\\\
 & & & ImMesh (ours) & -- & -- & -- & **0.636044** & **0.65064** & **0.9477** & **0.8260** & **0.8827** \\\\ \\hline
Urban02 & 4.20 & 8,961 & Poi & 05:49:36 & 59.9965 & 0.9739 & 0.0792 & 0.0822 & 0.8818 & 0.7261 & 0.7964 \\\\
 & & & ImMesh (ours) & **00:03:01** & **57.3564** & **0.8605** & **0.80392** & **0.0556** & **0.9623** & **0.8398** & **0.8968** \\\\ \\hline
Urban03 & 3.06 & 9,091 & Poi & 04:55:04 & 60.1614 & 0.9770 & 0.0070 & 0.0059 & 0.8871 & 0.7754 & 0.8275 \\\\
 & & & ImMesh (ours) & **00:03:07** & **57.4131** & **0.8628** & **0.0398** & **0.0564** & **0.9597** & **0.8359** & **0.8935** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE V: The quantitative evaluation results with real-world data from the _Complex Urban Dataset_. The \\(\\uparrow\\) denotes larger is better while \\(\\downarrow\\) indicates lower is better.
\\begin{table}
\\begin{tabular}{c c c c|c c|c c c c c} \\hline \\hline
**Method** & **Scenario** & **Resolution** & **Cost time (min:sec) \\(\\downarrow\\)** & **Max-Min angle (\\(^{\\circ}\\)) \\(\\downarrow\\)** & **C2SE \\(\\downarrow\\)** & **Completeness (m) \\(\\downarrow\\)** & **Accuracy (m) \\(\\downarrow\\)** & **Recall (\\%) \\(\\uparrow\\)** & **Precision (\\%) \\(\\uparrow\\)** & **F-score \\(\\uparrow\\)** \\\\ \\hline
Del & Urban city & 640 x 480 & 17:51 & **48.5148** & **0.7825** & **0.0883** & 0.0341 & -- & **0.7976** & **0.7976** \\\\
ImMesh (ours) & Urban city & 640 x 480 & 00:31 & 48.0909 & 0.7843 & 0.1002 & **0.0265** & 0.7290 & **0.8825** & 0.7859 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE VI: The quantitative evaluation results with synthetic data. The \\(\\uparrow\\) denotes larger is better while \\(\\downarrow\\) indicates lower is better.
_Poi_ and _Del_.
Due to the limited graphics memory (12 GB for the _Nvidia 2080 Ti_), we set the _TSDF_ cell size to \\(0.2\\,\\mathrm{m}\\) such that _TSDF_ can utilize the GPU acceleration while preserving satisfactory precision in the mesh reconstruction. For our ImMesh, the parameter configuration for solid-state LiDAR is used, as shown in TABLE III. For _Poi_, we set the octree level to \\(12\\) and removed large hulls by deleting facets with one of their edges longer than \\(15.0\\,\\mathrm{cm}\\). For the other configurations of all methods, we used their defaults. It is noted that, other than _TSDF_ using a GPU for acceleration, the remaining methods, _Del_, _Poi_, and ours, use the CPU only. We compare the efficiency of the four methods by evaluating their time consumption in reconstructing the mesh. For the online methods (i.e., _TSDF_ and ours), we accumulate the processing time of all frames, while for the offline methods (i.e., _Poi_ and _Del_), we count the total time of processing the offline data. The results of their time consumption are listed in TABLE V and TABLE VI.
#### VIII-C4 Evaluation of fairness
In this experiment, we employ the triangle fairness criteria to evaluate the quality of reconstructed triangle facets. This evaluation involves analyzing the average error of the maximum and minimum interior angles of the triangles (as utilized in work [77]), which we refer to as the _Max-Min angle_ in TABLE V and TABLE VI. Additionally, we consider the average ratio of the circumradius to the shortest edge length (referred to as _C2SE_ in Tables) as used in works [78, 79]. A lower value for both the _Max-Min angle_ and _C2SE_ indicates higher mesh quality, as it signifies that the triangle facets are closer to being equilateral.
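As an illustration, the following sketch computes the two per-triangle fairness quantities; the exact averaging used for the Max-Min angle metric is our reading of the definition above and may differ in detail from the original implementations:

```cpp
// Minimal sketch: per-triangle fairness measures (angle deviation and C2SE).
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

struct Fairness {
  double max_min_angle_err_deg;  // average deviation of the max/min angle from 60 deg
  double c2se;                   // circumradius-to-shortest-edge ratio
};

Fairness triangle_fairness(const Eigen::Vector3d& A, const Eigen::Vector3d& B,
                           const Eigen::Vector3d& C) {
  constexpr double kPi = 3.14159265358979323846;
  const double a = (B - C).norm(), b = (A - C).norm(), c = (A - B).norm();
  // Interior angles from the law of cosines (clamped for numerical safety).
  const double alpha = std::acos(std::clamp((b * b + c * c - a * a) / (2 * b * c), -1.0, 1.0));
  const double beta  = std::acos(std::clamp((a * a + c * c - b * b) / (2 * a * c), -1.0, 1.0));
  const double gamma = kPi - alpha - beta;
  const double deg = 180.0 / kPi;
  const double max_deg = std::max({alpha, beta, gamma}) * deg;
  const double min_deg = std::min({alpha, beta, gamma}) * deg;
  // Circumradius R = abc / (4 * area); the triangle area comes from a cross product.
  const double area = 0.5 * ((B - A).cross(C - A)).norm();
  const double circumradius = (a * b * c) / (4.0 * area);
  return {0.5 * ((max_deg - 60.0) + (60.0 - min_deg)),
          circumradius / std::min({a, b, c})};
}
```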
In the evaluation with large-scale, real-world data, the results for _Del_ and _TSDF_ methods were not available due to specific limitations: 1) For _Del_, we encountered difficulties when running it with the _Complex Urban Dataset_. Despite multiple attempts, the _Del_ method either crashed midway or failed to produce any result after running for over three days. 2) As for _TSDF_, allocating the voxels requires a massive amount of graphics memory. This exceeds the capabilities of our hardware platforms, particularly for sequences in Table V with a traveling length of over 3 kilometers.
As indicated by the fairness metrics listed in TABLE V and VI, we can conclude that leveraging Delaunay triangulation eliminates the formation of sliver triangles. The _Del_ method demonstrates the best results in this regard. Following that is _ImMesh_, which utilizes Delaunay triangulation for meshing the point set after dimension reduction through projection. On the other hand, the meshes reconstructed by the _Poi_ and _TSDF_ methods, which employ the marching cubes algorithm, exhibit inferior results. This is due to the inherent limitation of the marching cubes algorithm [23], which generates sliver triangles when a facet is positioned closely and nearly parallel to the edges of the cube.
#### VIII-C5 Evaluation of correctness
For the quantitative evaluation of the methods' correctness in reconstructing the mesh, we utilized 3D geometry metrics as employed in works NeuralRecon [80] and Atlas [81]. These metrics encompass the following measurements: _accuracy_, _completeness_, _precision_, _recall_, and _F-score_. The calculations for these metrics are as follows:
\\[\\begin{split}\\textit{Accuracy}:&\\quad\\texttt{mean}_{\\mathbf{p}\\in\\mathbf{\\mathcal{P}}}\\big(\\texttt{min}_{\\mathbf{p}^{*}\\in\\mathbf{\\mathcal{P}}^{*}}\\big\\|\\mathbf{p}-\\mathbf{p}^{*}\\big\\|\\big)\\\\ \\textit{Completeness}:&\\quad\\texttt{mean}_{\\mathbf{p}^{*}\\in\\mathbf{\\mathcal{P}}^{*}}\\big(\\texttt{min}_{\\mathbf{p}\\in\\mathbf{\\mathcal{P}}}\\big\\|\\mathbf{p}-\\mathbf{p}^{*}\\big\\|\\big)\\\\ \\textit{Precision}:&\\quad\\texttt{mean}_{\\mathbf{p}\\in\\mathbf{\\mathcal{P}}}\\big(\\texttt{min}_{\\mathbf{p}^{*}\\in\\mathbf{\\mathcal{P}}^{*}}\\big\\|\\mathbf{p}-\\mathbf{p}^{*}\\big\\|<0.05\\big)\\\\ \\textit{Recall}:&\\quad\\texttt{mean}_{\\mathbf{p}^{*}\\in\\mathbf{\\mathcal{P}}^{*}}\\big(\\texttt{min}_{\\mathbf{p}\\in\\mathbf{\\mathcal{P}}}\\big\\|\\mathbf{p}-\\mathbf{p}^{*}\\big\\|<0.05\\big)\\\\ \\textit{F-score}:&\\quad\\frac{2\\times\\textit{Precision}\\times\\textit{Recall}}{\\textit{Precision}+\\textit{Recall}}\\end{split}\\]
Fig. 10: The first row of images shows the comparisons between a raw LiDAR frame (colored in white) and our reinforced points (colored in magenta) under different sets of rasterizing FoV. The second and third rows of images show the comparisons of raw and reinforced points after projection on the current sensor frame. For more detailed visualizations of this process, please refer to our _accompanying video [70] (starting at 08:19)_ on YouTube.
where \\(\\mathbf{\\mathcal{P}}\\) refers to the point cloud obtained by uniformly sampling the reconstructed mesh generated by the method under evaluation. This point cloud is sampled at a spatial resolution of \\(0.01\\,\\mathrm{m}\\). On the other hand, \\(\\mathbf{\\mathcal{P}}^{\\ast}\\) represents the downsampled ground truth point cloud. It is also downsampled at a spatial resolution of \\(0.01\\,\\mathrm{m}\\).
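A minimal sketch of how these metrics can be computed from the two sampled point clouds is given below; a brute-force nearest-neighbour search is used here only for clarity (a k-d tree would be used in practice), and both clouds are assumed non-empty:

```cpp
// Minimal sketch: accuracy, completeness, precision, recall, F-score between
// the mesh-sampled cloud P and the ground-truth cloud P_gt.
#include <Eigen/Dense>
#include <algorithm>
#include <limits>
#include <vector>

// Distance from p to its nearest neighbour in `cloud` (brute force).
static double nearest_dist(const Eigen::Vector3d& p,
                           const std::vector<Eigen::Vector3d>& cloud) {
  double best = std::numeric_limits<double>::max();
  for (const auto& q : cloud) best = std::min(best, (p - q).norm());
  return best;
}

struct MeshMetrics { double accuracy, completeness, precision, recall, fscore; };

MeshMetrics evaluate(const std::vector<Eigen::Vector3d>& P,      // sampled mesh
                     const std::vector<Eigen::Vector3d>& P_gt,   // ground truth
                     double tau = 0.05) {                        // 5 cm threshold
  MeshMetrics m{0, 0, 0, 0, 0};
  for (const auto& p : P) {
    const double d = nearest_dist(p, P_gt);
    m.accuracy += d;
    m.precision += (d < tau) ? 1.0 : 0.0;
  }
  for (const auto& p : P_gt) {
    const double d = nearest_dist(p, P);
    m.completeness += d;
    m.recall += (d < tau) ? 1.0 : 0.0;
  }
  m.accuracy /= P.size();
  m.precision /= P.size();
  m.completeness /= P_gt.size();
  m.recall /= P_gt.size();
  m.fscore = (m.precision + m.recall > 0)
                 ? 2.0 * m.precision * m.recall / (m.precision + m.recall)
                 : 0.0;
  return m;
}
```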
The quantitative evaluation results for metrics such as _accuracy_, _completeness_, _precision_, _recall_, and _F-score_ are provided in TABLE VI. We can observe that _Del_ achieves the highest overall correctness in constructing the mesh of the scene. Following that, _ImMesh_ demonstrates slightly lower precision. Then, _Poi_ exhibits even lower mesh correctness, and _TSDF_ shows the lowest correctness among all methods.
The qualitative comparison results of the four benchmarked methods evaluated with synthetic data are presented in Fig. 8 and Fig. 9. In these figures, the red facets represent incorrectly reconstructed surfaces with \\(80\\%\\) of their sampling points not lying on a ground truth surface (i.e., the distances between these points and the nearest ground truth surface are larger than \\(5\\,\\mathrm{cm}\\)). Among the evaluated methods, _Del_ and _ImMesh_ exhibit comparable results in reconstructing the mesh of scenes well. In contrast, _Poi_ exhibits lower mesh correctness due to the presence of unwanted facets at the sharp edges of the models, as indicated in the fourth column of Fig. 8. The _TSDF_ method shows the lowest results with the appearance of holes on the reconstructed surface, as observed in the roofs of buildings and the leaves of trees shown in the fifth column of Fig. 8.
When reconstructing complex and small objects in the scene, such as the flower in the "Cluttered field" environment, depicted in the RGB image in the bottom-left corner of Fig. 7(f) with the corresponding mesh models displayed in the fifth row of Fig. 8, _Del_, _TSDF_, and _ImMesh_ fail to recover the surface details well. This limitation arises from different factors for each method: _Del_ requires a large number of camera-to-point correspondences to extract intricate surface details, which may pose challenges when dealing with complex and tiny objects; _TSDF_ and _ImMesh_ are constrained by the fixed voxel size and cannot reconstruct the details of surfaces smaller than a voxel. It is worth mentioning that _Poi_ can recover the details of the flower's petals well. This is achieved through the use of a scalable resolution based on an octree structure, which allows _Poi_ to adapt its resolution for reconstructing small and intricate surfaces.
In addition, as observed in Fig. 9 and with the metrics listed in Table VI, we can see that as the point cloud becomes sparser (due to lower resolution depth images), the correctness of the reconstruction methods decreases accordingly. However, both _Del_ and _ImMesh_ demonstrate stronger robustness in resiliently handling the drop in point cloud density. On the other hand, the meshes reconstructed by _Poi_ and _TSDF_ exhibit discontinuities and contain more holes and gaps when compared to the results of _Del_ and _ImMesh_.
Lastly, in the evaluation with real-world data from _Complex Urban Dataset_[75], we discovered that the mesh reconstructed by _Poi_ also exhibits unwanted facets appearing at the edges of objects such as buildings and trees. These undesirable facets, as indicated by the red facets in Fig. 8 and Fig. 9 for _Poi_, have a negative impact on the overall correctness of the reconstruction. As a result, _Poi_ performs inferiorly across all evaluated correctness metrics when compared to _ImMesh_, as shown in TABLE V.
#### VIII-C6 Evaluation of runtime performance
According to the _cost time_ listed in TABLE V, it is clear that _ImMesh_ demonstrates a significant advantage in terms of runtime performance when evaluated with large-scale sequences. The execution time of _ImMesh_ is only \\(0.93\\%\\sim 1.06\\%\\) of that of _Poi_.
TABLE VI displays the average time consumption of the four benchmarked methods when evaluated with synthetic data. The online methods, _ImMesh_ and _TSDF_, exhibit similar runtime performance. In contrast, the offline methods (_Del_ and _Poi_) consume significantly more time, ranging from \\(5\\) to \\(40\\) times longer than the online methods (_TSDF_ and _ImMesh_). Notably, _TSDF_ achieves comparable runtime performance to our method with the assistance of an _Nvidia 2080 Ti_ GPU, highlighting the high computational efficiency of our _ImMesh_ framework compared to the other three methods.
#### VIII-C7 Summary
Based on the results and analysis regarding runtime performance, fairness, and correctness, we have reached the following conclusions for Experiment-3: 1) For offline applications, which only care about quality and neglect time consumption, _Del_ is the best choice, and our _ImMesh_ is the second best one. 2) For real-time applications, our work _ImMesh_ is the best choice. Even though _TSDF_ with GPU acceleration can run in real-time, its meshing correctness is much lower than _ImMesh_.
### _Application-1: LiDAR point cloud reinforcement_
Benefiting from ImMesh's real-time ability to reconstruct the triangle mesh on the fly, depth images can be rasterized online from the reconstructed facets in the current sensor frame. By unprojecting the 3D points from the depth image, point clouds with a regular pattern, a wider FoV, and a denser distribution than the original input LiDAR scan can be retrieved. We term this process LiDAR point cloud reinforcement.
In this experiment, we demonstrate the LiDAR point cloud reinforcement with a solid-state LiDAR _Livox Avia_ with a FoV of \\(70.4^{\\circ}\\times 77.2^{\\circ}\\). The comparisons between the original points of a LiDAR frame (colored in white) and the points after our reinforcement (colored in magenta) with different sets of rasterization FoV are shown in Fig. 10. As shown by the white points in the first row of Fig. 10, the input LiDAR scan is sparse with an irregular scanning pattern. After the reinforcement, the resultant 3D points colored in magenta are distributed in a regular pattern, with a higher density and a wider FoV (as the rasterization FoV is larger than the LiDAR's). To better understand their differences, we present the comparisons of the depth images after projection, as shown in the second and third rows of Fig. 10.
### _Application-2: Rapid, lossless texture reconstruction_
In this application, we show how ImMesh can be applied to lossless texture reconstruction for rapid field surveying. As shown in Fig. 11(b1\\(\\sim\\)b3), we mounted a _Livox Avia_ LiDAR and a _Hikvision CA-050-11UC_ global-shutter RGB camera on a _DJI M300_ drone platform.
We collected the data in a mountain field by taking off from Zone-A (see Fig. 11(a)) and flying along an "s"-shaped trajectory with a traveling distance of \\(975\\,\\mathrm{m}\\). We leveraged ImMesh for reconstructing the mesh from the collected LiDAR data and used R\\({}^{3}\\)LIVE++ [82] for estimating the camera poses (shown as the yellow frustums in Fig. 11(a, c1 and c2)). We textured each facet of the reconstructed mesh with the RGB image captured by the nearest camera frame, using the camera pose estimated by R\\({}^{3}\\)LIVE++. Benefiting from the high efficiency of ImMesh and R\\({}^{3}\\)LIVE++, the total time for reconstructing the RGB-textured mesh from this sequence of duration \\(325\\,\\mathrm{s}\\) was only \\(686\\,\\mathrm{s}\\), with \\(328\\,\\mathrm{s}\\) for ImMesh, \\(330\\,\\mathrm{s}\\) for R\\({}^{3}\\)LIVE++, and \\(28\\,\\mathrm{s}\\) for texturing. Fig. 11(a) shows a bird's-eye view of our mesh after texturing, with close-up views of the textured mesh in Zone-A, B, and C shown in Fig. 11(e1, e2, and e3), respectively. In Fig. 11(c1 and c2), we show the altitude of this map by coloring the facets by their height w.r.t. the take-off point (i.e., the ground plane in Zone-A).
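A minimal sketch of this per-facet texturing, with assumed pose and intrinsic structures, is given below; it picks the camera frame whose optical center is closest to the facet center and projects the three vertices into that image to obtain per-vertex texture coordinates:

```cpp
// Minimal sketch: texture a facet with the nearest camera frame (pinhole model).
#include <Eigen/Dense>
#include <array>
#include <limits>
#include <vector>

struct CamFrame {
  Eigen::Matrix3d R;        // world-to-camera rotation
  Eigen::Vector3d t;        // world-to-camera translation
  double fx, fy, cx, cy;    // pinhole intrinsics
};

// Returns the chosen frame index (or -1 if none) and fills the per-vertex UVs.
int texture_facet(const std::vector<CamFrame>& frames,
                  const std::array<Eigen::Vector3d, 3>& verts,
                  std::array<Eigen::Vector2d, 3>& uvs) {
  if (frames.empty()) return -1;
  const Eigen::Vector3d center = (verts[0] + verts[1] + verts[2]) / 3.0;
  int best = 0;
  double best_d = std::numeric_limits<double>::max();
  for (int i = 0; i < static_cast<int>(frames.size()); ++i) {
    const Eigen::Vector3d cam_center = -frames[i].R.transpose() * frames[i].t;
    const double d = (cam_center - center).norm();
    if (d < best_d) { best_d = d; best = i; }
  }
  const CamFrame& f = frames[best];
  for (int k = 0; k < 3; ++k) {
    const Eigen::Vector3d pc = f.R * verts[k] + f.t;   // vertex in the camera frame
    uvs[k] = Eigen::Vector2d(f.fx * pc.x() / pc.z() + f.cx,
                             f.fy * pc.y() / pc.z() + f.cy);
  }
  return best;
}
```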
As shown by the close-up views in the bottom three rows of Fig. 11, the mesh reconstructed by our ImMesh (d1\\(\\sim\\)d3) after texturing (e1\\(\\sim\\)e3) successfully preserves the map textures compared with the RGB-colored point cloud reconstructed by R\\({}^{3}\\)LIVE++ (f1\\(\\sim\\)f3). Due to the limited point cloud density, the RGB-colored point cloud of R\\({}^{3}\\)LIVE++ is unable to reconstruct the scene losslessly. Compared to existing counterparts (e.g., 3D reconstruction from photogrammetry [13, 26]) that reconstruct a scene from captured images (and RTK measurements), our system shows significant advantages:
Fig. 11: (b1\\(\\sim\\)b3) show our UAV platform for data collection. (a) shows the bird's-eye view of our lossless texture reconstruction result. (c1 and c2) show the altitude of this map by coloring the facets by their height w.r.t. the take-off point (i.e., the ground plane in Zone-A). The qualitative comparisons of the mapping results in Zone-A, B, and C of ImMesh, ImMesh after texturing, and R\\({}^{3}\\)LIVE++ are shown in (d\\(\\sim\\)f). To see the detailed reconstruction process of the scene, please refer to our _accompanying video [70] (starting at 10:22)_ on YouTube.
1) It is a reliable solution that does not require GPS measurements. 2) It is a rapid reconstruction method that costs only 2\\(\\sim\\)3 times the data sampling time for reconstructing a scene. 3) It preserves a highly accurate geometric structure reconstructed from the LiDAR's measurements. The _accompanying video [70] (starting at 10:22)_ that records the full process of this lossless texture reconstruction is available on YouTube, and an additional trial is shown in our Supplementary Material [83].
Notice that in Fig. 11, the presence of isolated mesh facets is a result of missing scanning data, while the blurry texture artifacts are caused by the large viewing angles of the facets and textured images; both can be addressed through a proper data collection process.
## IX Conclusions and future work
### _Conclusions_
In this work, we proposed a novel meshing framework termed ImMesh for achieving the goal of simultaneous localization and meshing in real-time. The real-time incremental meshing nature of our system, even in large-scale scenes, makes it one of a kind. The _localization_ module in ImMesh represents the surrounding environment in a probabilistic representation, estimating the sensor pose in real-time by leveraging an iterated Kalman filter to maximize the posterior probability. The _meshing_ module directly utilizes the spatially-downsampled registered LiDAR points as mesh vertices and reconstructs the triangle facets in a novel incremental manner in real-time. To be detailed, our _meshing_ module first retrieves all voxels that contain newly appended vertices. Then, the voxel-wise 3D meshing problem is converted into a 2D one by performing dimension reduction for efficient meshing. Finally, the triangle facets are incrementally reconstructed with _pull_, _commit_, and _push_ steps.
Our system is evaluated by real experiments. First, we verified the overall performance by presenting live video demonstrations of how the mesh is immediately reconstructed in the process of data collection. Then we extensively tested ImMesh with four public datasets collected by four different LiDAR sensors in various scenes, which confirmed the real-time ability of our system. Lastly, we benchmarked the meshing performance of ImMesh in Experiment-3 by comparing it against existing meshing baselines. The results show that ImMesh achieves high meshing accuracy while keeping the best runtime performance among all methods.
Applications of our system were demonstrated. We first showed how ImMesh can be applied for LiDAR point cloud reinforcement, which generates reinforced points in a regular pattern with a higher density and a wider FoV than raw LiDAR scans. In Application-2, we combined our works ImMesh and R\\({}^{3}\\)LIVE++ to achieve the goal of lossless texture reconstruction of scenes. Finally, we make our code publicly available on our GitHub: github.com/hku-mars/ImMesh.
### _Limitations and future works_
One major limitation of our work is its lack of scalability in spatial resolution. Specifically, when dealing with large planar surfaces, ImMesh tends to inefficiently reconstruct the mesh with numerous small facets due to the fixed vertex density. Conversely, for tiny objects smaller than the size of a voxel, ImMesh struggles to accurately reconstruct their surfaces, as mentioned in our quantitative evaluation results in Section VIII-C5. To address this limitation, our future work will focus on developing an adaptive resolution meshing strategy.
The second limitation is that our system does not currently implement any loop correction mechanism, so it may gradually drift due to accumulated localization errors, which potentially leads to inconsistent reconstruction results when places are revisited. In our future work, we plan to address this limitation by integrating our recent works [84, 85] on loop detection based on LiDAR point clouds. This loop detection mechanism will allow us to detect loops online and apply loop corrections to reduce drift and improve the consistency of the reconstructed results.
Furthermore, we have noticed a number of works that appeared recently in the literature which utilize the reconstructed mesh to improve the localization accuracy of both visual-SLAM (e.g., [86]) and LiDAR-SLAM systems (e.g., [87, 88, 89]). Motivated by these works, our future work will improve our localization accuracy by utilizing our online reconstructed mesh.
Lastly, when realizing the goal of lossless texture reconstruction of scenes, we combined ImMesh and R\\({}^{3}\\)LIVE at the system level, as presented in our Application-2 (in Section VIII-E). Our future work will couple ImMesh with R\\({}^{3}\\)LIVE more tightly to improve the overall efficiency.
## X Acknowledgements
The authors would like to thank DJI Co., Ltd2 for providing devices and research funds.
Footnote 2: [https://www.dji.com](https://www.dji.com)
## References
* [1] S. Mystakidis, \"Metaverse,\" _Encyclopedia_, vol. 2, no. 1, pp. 486-497, 2022.
* [2] Y. Wang, Z. Su, N. Zhang, R. Xing, D. Liu, T. H. Luan, and X. Shen, \"A survey on metaverse: Fundamentals, security, and privacy,\" _IEEE Communications Surveys & Tutorials_, 2022.
* [3] P. Cipresso, I. A. C. Giglioli, M. A. Raya, and G. Riva, \"The past, present, and future of virtual and augmented reality research: a network and cluster analysis of the literature,\" _Frontiers in psychology_, p. 2086, 2018.
* [4] S. Shah, D. Dey, C. Lovett, and A. Kapoor, \"Airsim: High-fidelity visual and physical simulation for autonomous vehicles,\" in _Field and Service Robotics: Results of the 11th International Conference_. Springer, 2018, pp. 621-635.
* [5] Y. Song, S. Naji, E. Kaufmann, A. Loquercio, and D. Scaramuzza, \"Flightmare: A flexible quadrotor simulator,\" in _Conference on Robot Learning_. PMLR, 2021, pp. 1147-1157.
* [6] S. Laine and T. Karras, \"High-performance software rasterization on gpus,\" in _Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics_, 2011, pp. 79-88.
* [7] T. Akenine-Moller, E. Haines, and N. Hoffman, _Real-time rendering_. AK Peters/crc Press, 2019.
* [8] J. Arvo, _Graphics Gems II_. Elsevier, 2013.
* [9] P. Jimenez, F. Thomas, and C. Torras, \"3d collision detection: a survey,\" _Computers & Graphics_, vol. 25, no. 2, pp. 269-285, 2001.
* [10] C. Ericson, _Real-time collision detection_. Crc Press, 2004.
* [11] R. Featherstone, _Rigid body dynamics algorithms_. Springer, 2014.
* [12] D. Baraff, "An introduction to physically based modeling: rigid body simulation i--unconstrained rigid body dynamics," _SIGGRAPH course notes_, vol. 82, 1997.
* [13] J. L. Schonberger and J.-M. Frahm, \"Structure-from-motion revisited,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 4104-4113.
* [14] F. Kong, X. Liu, R. Tang, J. Lin, Y. Ren, Y. Cai, F. Zhu, N. Chen, and F. Zhang, "Marsim: A light-weight point-realistic simulator for lidar-based uavs," _arXiv preprint arXiv:2211.10716_, 2022.
* [15] W. Wang, D. Zhu, X. Wang, Y. Hu, Y. Qiu, C. Wang, Y. Hu, A. Kapoor, and S. Scherer, "Tartanair: A dataset to push the limits of visual slam," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 4909-4916.
* [16] D. S. SolidWorks, \"Solidworks(r),\" _Version Solidworks_, vol. 1, 2005.
* [17] B. O. Community, \"Blender--a 3d modelling and rendering package,\" _Blender Foundation_, 2018.
* [18] C. Yuan, W. Xu, X. Liu, X. Hong, and F. Zhang, \"Efficient and probabilistic adaptive voxel mapping for accurate online lidar odometry,\" _IEEE Robotics and Automation Letters_, vol. 7, no. 3, pp. 8518-8525, 2022.
* [19] M. Kazhdan, M. Boltho, and H. Hoppe, \"Poisson surface reconstruction,\" in _Proceedings of the fourth Eurographics symposium on Geometry processing_, vol. 7, 2006.
* [20] M. Kazhdan and H. Hoppe, \"Screened poisson surface reconstruction,\" _ACM Transactions on Graphics (ToG)_, vol. 32, no. 3, pp. 1-13, 2013.
* [21] J. Wilhelms and A. Van Gelder, \"Octrees for faster isosurface generation,\" _ACM Transactions on Graphics (TOG)_, vol. 11, no. 3, pp. 201-227, 1992.
* [22] R. Shekhar, E. Fayyad, R. Yagel, and J. F. Cornhill, \"Octree-based decimation of marching cubes surfaces,\" in _Proceedings of Seventh Annual IEEE Visualization'96_. IEEE, 1996, pp. 335-342.
* [23] W. E. Lorensen and H. E. Cline, \"Marching cubes: A high resolution 3d surface construction algorithm,\" _ACM siggraph computer graphics_, vol. 21, no. 4, pp. 163-169, 1987.
* [24] M. Kazhdan, M. Chuang, S. Rusinkiewicz, and H. Hoppe, \"Poisson surface reconstruction with envelope constraints,\" in _Computer graphics forum_, vol. 39, no. 5. Wiley Online Library, 2020, pp. 173-182.
* [25] P. Labatut, J.-P. Pons, and R. Keriven, \"Efficient multi-view reconstruction of large-scale scenes using interest points, delaunay triangulation and graph cuts,\" in _2007 IEEE 11th international conference on computer vision_. IEEE, 2007, pp. 1-8.
* [26] V. Livinov and M. Lhuillier, \"Incremental solid modeling from sparse and omnidirectional structure-from-motion data,\" in _British Machine Vision Conference_, 2013.
* [27] M. Jancosek and T. Pajdla, \"Exploiting visibility information in surface reconstruction to preserve weakly supported surfaces,\" _International scholarly research notices_, vol. 2014, 2014.
* [28] F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin, \"The ball-pivoting algorithm for surface reconstruction,\" _IEEE transactions on visualization and computer graphics_, vol. 5, no. 4, pp. 349-359, 1999.
* [29] J. Cao, A. Tagliasacchi, M. Olson, H. Zhang, and Z. Su, \"Point cloud skeletons via laplacian based contraction,\" in _2010 Shape Modeling International Conference_. IEEE, 2010, pp. 187-197.
* [30] R. Wang, J. Petchambaran, and D. Chen, \"Lidar point clouds to 3-d urban models : a review,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 11, no. 2, pp. 606-627, 2018.
* [31] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon, \"Kinectfusion: Real-time dense surface mapping and tracking,\" in _2011 10th IEEE international symposium on mixed and augmented reality_. IEEE, 2011, pp. 127-136.
* [32] J. Chen, D. Bautembach, and S. Izadi, \"Scalable real-time volumetric surface reconstruction,\" _ACM Transactions on Graphics (ToG)_, vol. 32, no. 4, pp. 1-16, 2013.
* [33] M. Nielner, M. Zollhofer, S. Izadi, and M. Stamminger, \"Real-time 3d reconstruction at scale using voxel hashing,\" _ACM Transactions on Graphics (ToG)_, vol. 32, no. 6, pp. 1-11, 2013.
* [34] O. Kahler, V. Priscaariu, J. Valentin, and D. Murray, \"Hierarchical voxel block hashing for efficient integration of depth images,\" _IEEE Robotics and Automation Letters_, vol. 1, no. 1, pp. 192-197, 2015.
* [35] E. Vespa, N. Nikolov, M. Grimm, L. Nardi, P. H. J. Kelly, and S. Leutenegger, \"Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping,\" _IEEE Robotics and Automation Letters_, vol. 3, no. 2, pp. 1144-1151, Apr. 2018.
* [36] O. Kahler, V. A. Priscaariu, C. Y. Ren, X. Sun, P. Torr, and D. Murray, \"Very high frame rate volumetric integration of depth images on mobile devices,\" _IEEE transactions on visualization and computer graphics_, vol. 21, no. 11, pp. 1241-1250, 2015.
* [37] M. Klingensmith, I. Dryanovski, S. S. Srinivasa, and J. Xiao, \"Chisel: Real time large scale 3d reconstruction onboard a mobile device using spatially hashed signed distance fields.\" in _Robotics: science and systems_, vol. 4, no. 1. Citeseer, 2015.
* [38] H. Oleynikova, Z. Taylor, M. Fehr, R. Siegwart, and J. Nieto, \"Voxblox: Incremental 3d euclidean signed distance fields for on-board many planning,\" in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 1366-1373.
* [39] D. Lefloch, M. Kluge, H. Sarbolandi, T. Weyrich, and A. Kolb, \"Comprehensive use of curvature for robust and accurate online surface reconstruction,\" _IEEE transactions on pattern analysis and machine intelligence_, vol. 39, no. 12, pp. 2349-2365, 2017.
* [40] D. Lefloch, T. Weyrich, and A. Kolb, \"Anisotropic point-based fusion,\" in _2015 18th International Conference on Information Fusion (Fusion)_. IEEE, 2015, pp. 2121-2128.
* [41] T. Weise, T. Wismer, B. Leibe, and L. Van Gool, \"In-hand scanning with online loop closure,\" in _2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops_. IEEE, 2009, pp. 1630-1637.
* [42] S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, \"Real-time 3d model acquisition,\" _ACM Transactions on Graphics (TOG)_, vol. 21, no. 3, pp. 438-446, 2002.
* [43] M. Habbecke and L. Kobbelt, \"A surface-growing approach to multi-view stereo reconstruction,\" in _2007 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2007, pp. 1-8.
* [44] T. Bodemmueller, \"Streaming surface reconstruction from real time 3d measurements,\" Ph.D. dissertation, Technische Universitat Munchen, 2009.
* [45] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison, \"Elasticfusion: Dense slam without a pose graph.\" Robotics: Science and Systems, 2015.
* [46] T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison, and S. Leutenegger, \"Elasticfusion: Real-time dense slam and light source estimation,\" _The International Journal of Robotics Research_, vol. 35, no. 14, pp. 1697-1716, 2016.
* [47] W. Gao and R. Tedrake, \"Surfelwarp: Efficient non-volumetric single view dynamic reconstruction,\" _arXiv preprint arXiv:1904.13073_, 2019.
* [48] T. Schops, T. Sattler, and M. Pollefeys, \"Surfelleming: Online surf-based mesh reconstruction,\" _IEEE transactions on pattern analysis and machine intelligence_, vol. 42, no. 10, pp. 2494-2507, 2019.
* [49] M. Teschner, B. Heidelberger, M. Muller, D. Pomerantes, and M. H. Gross, \"Optimized spatial hashing for collision detection of deformable objects,\" in _Vmv_, vol. 3, 2003, pp. 47-54.
* [50] C++ std::unordered_map: [https://cplusplus.com/reference/unordered_map/unordered_map/unordered_map/unordered_map](https://cplusplus.com/reference/unordered_map/unordered_map/unordered_map/unordered_map)
* C++_, Sep. 1998.
* [52] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, \"Fast-lio2: Fast direct lidar-inertial odometry,\" _IEEE Transactions on Robotics_, 2022.
* [53] Y. Cai, W. Xu, and F. Zhang, \"kdd-tree: An incremental kd tree for robotic applications,\" _arXiv preprint arXiv:2102.10808_, 2021.
* [54] R. B. Rusu and S. Cousins, \"3d is here: Point cloud library (pcl),\" in _2011 IEEE international conference on robotics and automation_. IEEE, 2011, pp. 1-4.
* [55] M. Muja and D. G. Lowe, \"Fast approximate nearest neighbors with automatic algorithm configuration.\" _VISAPP (1)_, vol. 2, no. 331-340, p. 2, 2009.
* [56] R. Stevens, _Computer Graphics Dictionary_, ser. ADVANCES IN COMPUTFEER GRAPHICS AND GAME DEVELOPMENT SERIES. Charles River Media, 2002. [Online]. Available: [https://books.google.com/hko/books?id=XqJL6MiP0C](https://books.google.com/hko/books?id=XqJL6MiP0C)
* [57] W. Kahan, \"Miscalculating area and angles of a needle-like triangle,\" _University of California, Berkeley_, vol. 94720, 1776.
* [58] M. Woo, J. Neider, T. Davis, and D. Shreiner, _OpenGL programming guide: the official guide to learning OpenGL_. Addison-Wesley Longman Publishing Co., Inc., 1999.
* [59] F. Evans, S. Sichena, and A. Varshney, \"Optimizing triangle strips for fast * [62] K. R. Castleman, _Digital image processing_. Prentice Hall Press, 1996.
* [63] A. Fabri and S. Pion, \"Cgal: The computational geometry algorithms library,\" in _Proceedings of the 17th ACM SIGSPATIAL international conference on advances in geographic information systems_, 2009, pp. 538-539.
* [64] C. D. Toth, J. O'Rourke, and J. E. Goodman, _Handbook of discrete and computational geometry_. CRC press, 2017.
* [65] A. Rosinol, M. Abate, Y. Chang, and L. Carlone, \"Kimera: an open-source library for real-time metric-semantic localization and mapping,\" in _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 1689-1696.
* [66] D. Attali, J.-D. Boissonnat, and A. Lieutier, \"Complexity of the delaunay triangulation of points on surfaces the smooth case,\" in _Proceedings of the nineteenth annual symposium on Computational Geometry_, 2003, pp. 201-210.
* [67] \"Face culling in open!.\" [Online]. Available: [https://www.khronos.org/open/wiki/Face_Culling](https://www.khronos.org/open/wiki/Face_Culling)
* [68] B. Zhou, J. Pan, F. Gao, and S. Shen, \"Raptor: Robust and perception-aware trajectory replanning for quadrotor fast flight,\" _IEEE Transactions on Robotics_, vol. 37, no. 6, pp. 1992-2009, 2021.
* [69] B. Zhou, Y. Zhang, X. Chen, and S. Shen, \"Fuel: Fast uav exploration using incremental frontier structure and hierarchical planning,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 779-786, 2021.
* [70] J. Lin, C. Yuan, Y. Cai, H. Li, Y. Ren, Y. Zou, X. Hong, and F. Zhang, \"Accomapying video for immesh,\" 2023. [Online]. Available: [https://youtu.be/pz172Mwz428](https://youtu.be/pz172Mwz428)
* [71] A. Geiger, P. Lenz, and R. Urtasun, \"Are we ready for autonomous driving? the kitti vision benchmark suite,\" in _2012 IEEE conference on computer vision and pattern recognition_. IEEE, 2012, pp. 3354-3361.
* [72] N. Carlevaris-Bianco, A. K. Ushshani, and R. M. Eustice, \"University of michigan north campus long-term vision and lidar dataset,\" _The International Journal of Robotics Research_, vol. 35, no. 9, pp. 1023-1035, 2016.
* [73] T.-M. Nguyen, S. Yuan, M. Cao, Y. Lyu, T. H. Nguyen, and L. Xie, \"Nu viral: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint,\" _The International Journal of Robotics Research_, vol. 41, no. 3, pp. 270-280, 2022.
* [74] J. Lin and F. Zhang, \"R3live: A robust, real-time, rgb-colored, lidar-inertial-visual tightly-coupled state estimation and mapping package,\" in _2022 International Conference on Robotics and Automation (ICRA)_. IEEE, 2022, pp. 10 672-10 678.
* [75] J. Jeong, Y. Cho, Y.-S. Shin, H. Roh, and A. Kim, \"Complex urban dataset with multi-level sensors from highly diverse urban environments,\" _The International Journal of Robotics Research_, vol. 38, no. 6, pp. 642-657, 2019.
* [76] D. Cernea, \"OpenMVS: Multi-view stereo reconstruction library,\" 2020. [Online]. Available: [https://cdcseaceave.github.io/openMVS](https://cdcseaceave.github.io/openMVS)
* [77] C. L. Lawson, \"Software for c1 surface interpolation,\" in _Mathematical software_. Elsevier, 1977, pp. 161-194.
* [78] J. R. Shewchuk, _Delaunay refinement mesh generation_. Carnegie Mellon University, 1997.
* [79] X.-Y. Li, \"Generating well-shaped d-dimensional delaunay meshes,\" _Theoretical Computer Science_, vol. 296, no. 1, pp. 145-165, 2003.
* [80] J. Sun, Y. Xie, L. Chen, X. Zhou, and H. Bao, \"Neuralrecon: Real-time coherent 3d reconstruction from monocular video,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 15 598-15 607.
* [81] Z. Murez, T. Van As, J. Bartolozzi, A. Sinha, V. Badrinarayanan, and A. Rabinovich, \"Atlas: End-to-end 3d scene reconstruction from posed images,\" in _Computers Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VII 16_. Springer, 2020, pp. 414-431.
* [82] J. Lin and F. Zhang, \"R3live+: A robust, real-time, radiance reconstruction package with a tightly-coupled lidar-inertial-visual state estimator,\" _arXiv preprint arXiv:2209.03666_, 2022.
* [83] J. Lin, C. Yuan, Y. Cai, H. Li, Y. Ren, Y. Zou, X. Hong, and F. Zhang, \"Supplementary material for immesh,\" 2023. [Online]. Available: [https://github.com/hku-mars/ImMesh/blob/main/supply/Supplementary_material.pdf](https://github.com/hku-mars/ImMesh/blob/main/supply/Supplementary_material.pdf)
* [84] C. Yuan, J. Lin, Z. Zou, X. Hong, and F. Zhang, \"Std: Stable triangle descriptor for 3d place recognition,\" _arXiv preprint arXiv:2209.12435_, 2022.
* [85] J. Lin and F. Zhang, \"A fast, complete, point cloud based loop closure for lidar odometry and mapping,\" _arXiv preprint arXiv:1909.11811_, 2019.
* [86] V. Panek, Z. Kukelova, and T. Sattler, \"Meshloc: Mesh-based visual localization,\" in _European Conference on Computer Vision_. Springer, 2022, pp. 589-609.
* [87] I. Vizzo, X. Chen, N. Chebrolu, J. Behley, and C. Stachniss, \"Poisson surface reconstruction for lidar odometry and mapping,\" in _2021 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2021, pp. 5624-5630.
* [88] M. Dreher, H. Blum, R. Siegwart, and A. Gawel, \"Global localization in meshes,\" in _ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction_, vol. 38. IARARC Publications, 2021, pp. 747-754.
* [89] M. Oelsch, M. Karimi, and E. Steinbach, \"R-loam: Improving lidar odometry and mapping with point-to-mesh features of a known 3d reference object,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 2068-2075, 2021.
In this paper, we propose a novel LiDAR(-inertial) odometry and mapping framework to achieve the goal of simultaneous localization and meshing in real-time. This proposed framework termed ImMesh comprises four tightly-coupled modules: receiver, localization, meshing, and broadcaster. The localization module first utilizes the preprocessed sensor data from the receiver, estimates the sensor pose online by registering LiDAR scans to maps, and dynamically grows the map. Then, our meshing module takes the registered LiDAR scan for incrementally reconstructing the triangle mesh on the fly. Finally, the real-time odometry, map, and mesh are published via our broadcaster. The primary contribution of this work is the meshing module, which represents a scene by an efficient voxel structure, performs fast finding of voxels observed by new scans, and incrementally reconstructs triangle facets in each voxel. This voxel-wise meshing operation is delicately designed for the purpose of efficiency; it first performs a dimension reduction by projecting 3D points to a 2D local plane contained in the voxel, and then executes the meshing operation with pull, commit and push steps for incremental reconstruction of triangle facets. To the best of our knowledge, this is the first work in literature that can reconstruct online the triangle mesh of large-scale scenes, just relying on a standard CPU without GPU acceleration. To share our findings and make contributions to the community, we make our code publicly available on our GitHub: github.com/hku-mars/ImMesh.
Mapping, 3D reconstruction, SLAM | Summarize the following text. | 318 |
arxiv-format/0604158v1.md | # Rainfall Advection using Velocimetry by Multiresolution Viscous Alignment 1
Footnote 1: This material is supported in part by NSF ITR 0121182 and DDDAS 0540259.
Sai Ravela
Earth, Atmospheric and Planetary Sciences
Virat Chatdaarong
Civil and Environmental Engineering
Massachusetts Institute of Technology
[email protected]
April 10, 2006
## 1 Introduction
Environmental data assimilation is the methodology for combining imperfect model predictions with uncertain data in a way that acknowledges their respective uncertainties. The proper framework for state estimation includes sequential [15], ensemble-based [14] and variational [20, 5] methods.
The difficulties created by improperly represented error are particularly apparent in mesoscale meteorological phenomena such as thunderstorms, squall-lines, hurricanes, precipitation, and fronts. We are particularly interested in rainfall data-assimilation, where rainfall measurements from satellite data, radar data, or in-situ measurements are used to condition a rainfall model. Such conditional simulations are valuable both for producing estimates at the current time (nowcasting), as well as for short-term forecasting.
Countless models have been developed to simulate the rainfall process. In general, there are two types of models that can deal with spatial and temporal characteristics of rainfall. The first category is the meteorological model or the quantitative precipitation forecasting model. It involves a large, complex set of differential equations seeking to represent complete physical processes controlling rainfall and other weather related variables. Examples of these models include the fifth-generation Mesoscale Model (MM5) [3, 4, 16], the step-mountain Eta coordinate model [1, 2, 13], and the Regional Atmospheric Modeling System (RAMS) [7, 12]. The second type is the spatiotemporal stochastic rainfall model. It aims to summarize the spatial and temporal characteristics of rainfall by a small set of parameters [6, 18, 11, 8, 22, 25]. This type of model usually simulates the birth and decay of rain-cells and evolves them through space and time using simple physical descriptions. Despite significant differences among these rainfall models, the concept of propagating rainfall through space and time is relatively similar.
The major ingredient required to advect rainfall is a velocity field. Large spatial-scale (synoptic) winds are inappropriate for this purpose for a variety of reasons. Ironically, synoptic observations can be sparse to be used directly and although synoptic-scale wind analyses produced from them (and models) do produce dense spatial estimates, such estimates often do not contain variability at the meso-scales of interest. The motion of mesoscale convective activity is a natural source for velocimetry. Indeed, there exist products that deduce \"winds\" by estimating the motion of temperature, vapor and other fields evolving in time [9, 10].
In this paper, we present an algorithm for velocimetry from observed motion from satellite observations such as GOES, AMSU, TRMM, or radar data such as NOWRAD. This algorithm follows from a Bayesian formulation of the motion estimation problem, where a dense displacement field is estimated from two images of cloud-top temperature of rain-cells separated in time. Ordinarily, the motion estimation problem is ill-posed, because the displacement field has far too many degrees of freedom than the motion. Therefore, some form of regularization becomes necessary and by imposing smoothness and non-divergence as desirable properties of the estimated displacement vector field solutions can be obtained.
This approach provides marked improvement over other methods in conventional use. In contrast to correlation based approaches used for deriving velocity from GOES imagery, the displacement fields are dense, quality control is implicit, and higher-order and small-scale deformations can be easily handled. In contrast with optic-flow algorithms [21, 17], we can produce solutions at large separations of mesoscale features between large time-steps or where the deformation is rapidly evolving.
After formulating the motion estimation problem and providing a solution, we extend the algorithm using a multi-resolution procedure. The primary advantage of a multi-resolution approach is to produce displacement fields quickly. The secondary advantage is to structure the estimation homotopically; coarse or low-frequency information is used first to produce velocity estimates over which deformation adjustments from finer-scale structures is superposed. The result is a powerful algorithm for velocimetry by alignment. As such, it is useful in a variety of situations including, for example, (a) estimating winds, (b) estimating transport of tracers, (c) Particle Image Velocimetry, (d) Advecting Rainfall models etc.
## 2 Related Work
There are two dominant approaches to computing flow from observations directly. The first is correlation-based and the second is based on optic flow.
In correlation based approaches [19], a region of interest (or patch) is identified in the first image and correlated within a search window in the second image. The location of the best match is then used to compute a displacement vector. When the input image or field is tiled, possibly overlapping, and regions of interest are extracted from each tile location, the result is velocimetry at regular intervals and is most commonly used for Particle Image Velocimetry (PIV). In certain instances it is useful to define interest-points or salient features around which to extract regions of interest. In particular, if the field has many areas with negligible spatial variability, then matches are undefined. As a quality control measure then, matching is restricted only to those regions of interest that have interesting variability, or interest points.
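For concreteness, a minimal block-matching sketch of this correlation procedure is given below (Python/NumPy). The tile size, search radius, and the normalized cross-correlation score are illustrative choices for the example, not values taken from any operational product.

```python
import numpy as np

def correlation_velocimetry(X, Y, tile=16, search=8):
    """One displacement vector per tile, chosen by maximizing the
    normalized cross-correlation of the tile within a search window."""
    H, W = X.shape
    vectors = []
    for i0 in range(0, H - tile + 1, tile):
        for j0 in range(0, W - tile + 1, tile):
            patch = X[i0:i0 + tile, j0:j0 + tile]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            best, best_dv = -np.inf, (0, 0)
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    i1, j1 = i0 + di, j0 + dj
                    if i1 < 0 or j1 < 0 or i1 + tile > H or j1 + tile > W:
                        continue
                    cand = Y[i1:i1 + tile, j1:j1 + tile]
                    c = (cand - cand.mean()) / (cand.std() + 1e-8)
                    score = np.mean(p * c)      # normalized cross-correlation
                    if score > best:
                        best, best_dv = score, (di, dj)
            vectors.append((i0, j0, best_dv[0], best_dv[1]))
    return vectors
```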
There are several disadvantages to correlation-based approaches. First, by construction it is assumed that the entire ROI purely translates from one image to the other. This is not always the case, but is a reasonable approximation when the right length scale can be found. However, when higher-order deformations (shears for example) are present, correlation based approaches cannot be expected to work well. Second, correlation based approaches assume that a unique match can be found in a way that is substantially better than correlation elsewhere. This is only true if the features are well-defined and identified. Third, there is no implicit consistency across regions of interest in correlation-based flow. Neighboring regions of interest can and often do match at wildly different and inconsistent locations. This calls for a significant overhead in terms of quality control. Fourth, it is not clear how the search window size (that is the area over which a region of interest is matched in the subsequent frame) is determined. This window size varies both in space (as the velocity varies spatially) and time (as velocity varies with time). A larger search window portends a larger probability to miss the real target, and a smaller search window can lead to false negatives or false positives. Finally, where interest points are used as a preprocessing step to correlation, the velocity field produced is necessarily sparse, and therefore, leaves hanging the question of how to produce dense flow fields. Our proposed algorithm handles all these issues in a simple and direct way.
More closely related to the proposed approach is optic flow [21, 17]. This method arises from what is known as the brightness constraint equation, which is a statement of conservation of brightness (intensity) mass, expressed by the continuity equation evaluated at each pixel or grid node of \\(X\\).
\\[\\frac{\\partial X}{\\partial t}+\\mathbf{q}\\cdot\
abla X=0 \\tag{1}\\]
Here \\(X\\) is the brightness or intensity scalar field and \\(\\mathbf{q}\\) a displacement vector-field. Solutions to the optic flow equation can be formulated using the well-known method by [21], which can be stated as a solution to the following system of equations:
\\[(\
abla X)(\
abla X)^{T}\\mathbf{q}=-(\
abla X)\\frac{\\partial X}{\\partial t} \\tag{2}\\]
The right-hand side is completely determined from a pair of images and the coefficient or stiffness matrix on the left-hand side is the second-derivative of the auto correlation matrix, also known as the windowed second-moment matrix, or Harris interest operator, which is sensitive to \"corners\" in an image. This formulation arises directly from a quadratic formulation, which can in turn be synthesized from a Bayesian formulation under a Gaussian assumption. Thus, we can write that we seek to minimize
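As an illustration, the normal equations in (2), accumulated over a small window around each pixel, can be solved as follows. This is only a schematic Lucas-Kanade-style sketch; the window size, the finite-difference gradients, and the conditioning threshold are assumptions made for the example.

```python
import numpy as np

def optic_flow_normal_equations(X, Y, win=7, min_eig=1e-4):
    """Solve (grad X)(grad X)^T q = -(grad X) dX/dt over a local window."""
    gy, gx = np.gradient(X)          # spatial derivatives (rows, cols)
    gt = Y - X                       # temporal difference between the frames
    H, W = X.shape
    q = np.zeros((H, W, 2))          # q[..., 0] = x (col), q[..., 1] = y (row)
    r = win // 2
    for i in range(r, H - r):
        for j in range(r, W - r):
            Ix = gx[i - r:i + r + 1, j - r:j + r + 1].ravel()
            Iy = gy[i - r:i + r + 1, j - r:j + r + 1].ravel()
            It = gt[i - r:i + r + 1, j - r:j + r + 1].ravel()
            G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                          [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
            if np.linalg.eigvalsh(G).min() < min_eig:
                continue             # no well-conditioned "corner" at this pixel
            b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
            q[i, j] = np.linalg.solve(G, b)
    return q
```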
\\[J(\\mathbf{q})=\\left|\\left|X(\\mathbf{r}-\\mathbf{q})-Y\\right|\\right| \\tag{3}\\]Then solve this problem via the Euler-Lagrange equation:
\\[\\frac{\\partial J(\\mathbf{q})}{\\partial\\mathbf{q}} = \
abla X|_{\\mathbf{r}-\\mathbf{q}}(X(\\mathbf{r}-\\mathbf{q})-Y)=0 \\tag{4}\\]
The solution is obtained by _linearizing_ (4), that is,
\\[\
abla X|_{\\mathbf{r}-\\mathbf{q}}(X(\\mathbf{r})-\
abla X\\cdot \\mathbf{q}-Y) = 0\\] \\[\
abla X(\
abla X)^{T}\\mathbf{q} = -\
abla X(Y-(X(\\mathbf{r})) \\tag{5}\\]
There are several disadvantages to this algorithm. First, much like correlation with feature detection, equation 5 is evaluated at pixels where the second-moment matrix is full-rank, which corresponds to locations where features are present. There is no clear way of propagating information obtained at sparse locations to locations where direct computation of displacement is not possible due to poor conditioning of the second-moment matrix. For the same reason, it cannot handle tangential flows. The brightness constraint equation can only represent flows along brightness streamlines. When tangential motion is present, detected motion at the extreme ends of a moving curve cannot be propagated easily into the interior. Our method provides some degree of spatial smoothness common in geophysical fluid transport, and uses regularization constraints to propagate flow information to nodes where feature strengths are weak.
Second, the linearization implicit in (5) precludes large displacements; structures must be closely overlapping in successive images, which can also be seen from the continuity equation (1). Therefore, this method is very useful for densely sampled motion, such as ego-motion resulting from a moving, jittering camera, but is not as useful for sparsely sampled flow arising from structures moving in a scene. In the latter case, to ameliorate the effects of large expected displacement, multi-resolution approaches have been proposed. Even so, much like determining the size of the search window in correlation, determining the number of resolutions is an ad-hoc procedure. Our method can handle large displacements and we also propose a multi-resolution approach, but the primary motivation there is improved computational speed.
## 3 Velocimetry by Field Alignment
The main approach consists of solving a nonlinear quadratic estimation problem for a field of displacements. Solutions to this problem are obtained by regularizing an ill-posed inverse problem. The material presented in this section is derived directly from work by Ravela [24], and Ravela et al. [23]. Here we reformulate their original formulation to allow only position adjustments.
To make this framework more explicit it is useful to introduce some notation. Let \\(X=X(\\mathbf{r})=\\{X[\\underline{r}_{1}^{T}]\\ldots X[\\underline{r}_{m}^{T}]\\}\\) be the first image, written as a vector, defined over a spatially discretized computational grid \\(\\Omega\\), and \\(\\mathbf{r}^{\\mathbf{T}}=\\{\\underline{r}_{i}=(x_{i},y_{i})^{T},i\\in\\Omega\\}\\) be the position indices. Let \\(\\mathbf{q}\\) be a _vector_ of displacements, that is \\(\\mathbf{q^{T}}=\\{\\underline{q}_{i}=(\\Delta x_{i},\\Delta y_{i})^{T},i\\in\\Omega\\}\\). Then the notation \\(X(\\mathbf{r}-\\mathbf{q})\\) represents _displacement_ of \\(X\\) by \\(\\mathbf{q}\\). The displacement field \\(\\mathbf{q}\\) is real-valued, so \\(X(\\mathbf{r}-\\mathbf{q})\\) must be evaluated by interpolation if necessary. It is important to understand that this displacement field represents a warping of the underlying grid, whose effect is to move structures in the image around, see Figure 1.
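Since \(\mathbf{q}\) is real-valued, evaluating \(X(\mathbf{r}-\mathbf{q})\) amounts to resampling the field at displaced grid locations. A small helper of the following form could be used; the cubic spline order mirrors the bi-cubic interpolation mentioned later, but the function itself is only an illustrative sketch, not the implementation used in this work.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def displace(X, q, order=3):
    """Evaluate X(r - q) on the original grid.
    X: (H, W) field; q: (H, W, 2) displacement stored as (row, col) components."""
    H, W = X.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([rows - q[..., 0], cols - q[..., 1]])
    return map_coordinates(X, coords, order=order, mode="nearest")
```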
Figure 1: A graphical illustration of field alignment. State vector on a discretized grid is moved by deforming its grid (\(\mathbf{r}\)) by a displacement (\(\mathbf{q}\)).
In a probabilistic sense, we may suppose that finding \(\mathbf{q}\) that has the maximum a posteriori probability in the distribution \(P(\mathbf{q}|\mathcal{X},\mathcal{Y})\) is appropriate. Without loss of generality, \(\mathcal{X}\) is a random variable corresponding to the image or field at a given time and \(\mathcal{Y}\) is a random variable for a field at a future time. Using Bayes rule we obtain \(P(Q=\mathbf{q}|\mathcal{X}=X,\mathcal{Y}=Y)\propto P(\mathcal{Y}=Y,\mathcal{X}=X|\mathbf{q})P(\mathbf{q})\). If we make a Gaussian assumption of the component densities, we can write:
\\[P(X,Y|{\\bf q})=\\frac{1}{(2\\pi)^{\\frac{n}{2}}\\left|R\\right|^{\\frac{1}{2}}}e^{- \\frac{1}{2}(Y-X({\\bf r}-{\\bf q}))^{T}R^{-1}(Y-X({\\bf r}-{\\bf q}))} \\tag{6}\\]
This equation says that the observations separated in time can be related using a Gaussian model to the displaced state \(X(\mathbf{r}-\mathbf{q})\), where \(X(\mathbf{r})\) is defined on the original grid, and \(\mathbf{q}\) is a displacement field. We use the linear observation model here, and therefore, \(Y=HX(\mathbf{r}-\mathbf{q})+\eta\), \(\eta\sim N(0,R)\). We should emphasize here that the observation vector is fixed. Its elements are always defined from the original grid. In fully observed fields, H is an identity matrix, and for many applications R, reflecting the noise in the field, can also be modeled as an identity matrix.
\\[P({\\bf q})=\\frac{1}{C}e^{-L({\\bf q})} \\tag{7}\\]
This equation specifies a _displacement prior_. This prior is constructed from an energy function \\(L({\\bf q})\\) which expresses constraints on the displacement field. The proposed method for constructing \\(L\\) is drawn from the nature of the expected displacement field. Displacements can be represented as smooth flow fields in many fluid flows and smoothness naturally leads to a Tikhonov type formulation [26] and, in particular, \\(L(\\mathbf{q})\\) is designed as a gradient and a divergence penalty term. These constraints, expressed in quadratic form are:
\\[L(\\mathbf{q})=\\frac{w_{1}}{2}\\sum_{j\\in\\Omega}\\mathbf{tr}\\{[\
abla\\underline{q}_{ j}][\
abla\\underline{q}_{j}]^{T}\\}+\\frac{w_{2}}{2}\\sum_{j\\in\\Omega}[\
abla \\cdot\\underline{q}_{j}]^{2} \\tag{8}\\]
In Equation 8, \\(\\mathbf{q}_{j}\\) refers to the \\(j^{th}\\) grid index and \\(\\mathbf{tr}\\) is the trace. Equation 8 is a _weak constraint_, weighted by the corresponding weights \\(w_{1}\\) and \\(w_{2}\\). Note that the constant C can be defined to make Equation 7 a proper probability density. In particular, define \\(Z(\\mathbf{q})=e^{-L(\\mathbf{q})}\\) and define \\(C=\\int\\limits_{\\mathbf{q}}Z(\\mathbf{q})d\\mathbf{q}\\). This integral exists and converges.
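A direct discretization of (8) on a regular grid with unit spacing might read as follows; this sketch is only meant to make the two penalty terms concrete.

```python
import numpy as np

def smoothness_energy(q, w1, w2):
    """L(q) with gradient and divergence penalties; q has shape (H, W, 2)."""
    d_qr = np.gradient(q[..., 0])     # [d/drow, d/dcol] of the row displacement
    d_qc = np.gradient(q[..., 1])     # [d/drow, d/dcol] of the col displacement
    grad_term = sum(np.sum(g ** 2) for g in d_qr + d_qc)   # tr{grad q grad q^T}
    div = d_qr[0] + d_qc[1]           # divergence of the displacement field
    return 0.5 * w1 * grad_term + 0.5 * w2 * np.sum(div ** 2)
```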
With these definitions of probabilities, we are in a position to construct an objective by evaluating the log probability. We propose a solution using Euler-Lagrange equations. Defining \(\mathbf{p}=\mathbf{r}-\mathbf{q}\), these can be written as:
\\[\\frac{\\partial J}{\\partial\\mathbf{q}} = \
abla X|_{\\mathbf{p}}H^{T}R^{-1}\\left(H\\ X\\left(\\mathbf{p} \\right)-Y\\right)+\\frac{\\partial L}{\\partial\\mathbf{q}}=0 \\tag{9}\\]
Using the regularization constraints, equation (9) at a node \(i\) now becomes:
\[w_{1}\nabla^{2}\underline{q}_{i}+w_{2}\nabla(\nabla\cdot\underline{q}_{i})+\left[\nabla X^{fT}|_{\mathbf{p}}H^{T}R^{-1}\left(H\left[X^{f}\left(\mathbf{p}\right)\right]-Y\right)\right]_{i}=0 \tag{10}\]
Equation 10 is the field alignment formulation. It introduces a forcing based on the residual between the model- and observation-fields. The constraints on the displacement field allow the forcing to propagate to a consistent solution. Equation 10 is also non-linear, and is solved iteratively, as a Poisson equation. During each iteration \(\mathbf{q}\) is computed by holding the forcing term constant. The estimate of displacement at each iteration is then used to deform a copy of the original forecast model-field using bi-cubic interpolation for the next iteration. The process is repeated until a small displacement residual is obtained, the misfit with observations does not improve, or an iteration limit is reached. Upon convergence, we have an aligned image \(X(\mathbf{\hat{p}})\), and a displacement field \(\mathbf{\hat{q}}=\sum\limits_{d=1}^{D}q^{(d)}\), for individual displacements \(q^{(d)}\) at iterations \(d=1\ldots D\).
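A schematic version of this iteration is sketched below. It reuses the `displace` helper given earlier, takes \(H=R=I\), omits the divergence penalty, and replaces the Poisson solve by a few explicit relaxation sweeps with the forcing held fixed; the step sizes and iteration counts are illustrative, and the fields are assumed to be normalized so that a fixed step size is sensible.

```python
import numpy as np
from scipy.ndimage import laplace

def field_align(X, Y, w1=1.0, n_outer=50, n_inner=30, tau=0.2):
    """Iteratively estimate q so that X(r - q) matches Y (with H = R = I)."""
    q_total = np.zeros(X.shape + (2,))
    for _ in range(n_outer):
        Xw = displace(X, q_total)                 # current aligned copy of X
        gr, gc = np.gradient(Xw)                  # approximates grad X at p = r - q
        resid = Xw - Y
        force = np.stack([gr * resid, gc * resid], axis=-1)   # forcing, held fixed
        dq = np.zeros_like(q_total)
        for _ in range(n_inner):                  # relax w1*Lap(dq) + force = 0
            lap = np.stack([laplace(dq[..., 0]), laplace(dq[..., 1])], axis=-1)
            dq += tau * (w1 * lap + force)
        q_total += dq
        if np.max(np.abs(dq)) < 1e-3:             # small displacement residual
            break
    return q_total
```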
### Multi-resolution Alignment and Velocimetry
The convergence of solution to the alignment equation is super-linearly dependent on the expected displacement between the two fields. Therefore, it is desirable to solve it in a coarse-to-fine manner, which serves two principal advantages. The first, as the following construction will show, is to substantially speed-up the time to alignment because decimated (or coarse-resolution) representations of a pair of fields has smaller expected displacement than a pair at finer resolution.
Second, decimation or resolution reduction also implies that finer structure or higher spatial frequencies will be attenuated. This smoothness in the coarsened-field intensities directly translates to smoothness in flow-fields using ( 9). Thus, a coarse-to-fine method for alignment can incrementally add velocity contributions from higher-frequencies, that is it incrementally incorporates higher-order variability in the displacement field. Many of the advantages of a multi-resolution approach have been previously explored in the context of visual motion estimation, including the famous pyramid algorithm and architecture for matching and flow and our implementation borrows from this central idea.
Figure 2: The multi-resolution algorithm is shown for two-levels and requires five steps, labeled (1) through (5). See text for explanation.
The multi-resolution algorithm is depicted in Figure 2 for two levels. The input images \(X\) and \(Y\) are decimated to generate coarse resolution images \(X_{1}\) and \(Y_{1}\) respectively (step 1). Let us suppose that this scaling is by a factor of \(0<s<1\) (most commonly \(s=0.5\)). Displacement is computed for this level first, and let us call this \(\mathbf{\hat{q}_{1}}\) (step 2). This displacement field is downscaled by a factor of \(s\), using simple (bicubic) interpolation, to produce a prior estimate of displacement at level 0, written \(\mathbf{\hat{q}_{10}}=s^{-1}\mathbf{\hat{q}_{1}}(s^{-1}\mathbf{r})\) (step 3). The source image at level 0, that is \(X_{0}=X\), is displaced by \(\mathbf{\hat{q}_{10}}\) (step 4) and thus \(X(\mathbf{r}-\mathbf{\hat{q}_{10}})\) is aligned with \(Y_{0}\) to produce a displacement estimate \(\mathbf{\hat{q}_{0}}\) (step 5). The total displacement relating source image \(X\) with target field \(Y\) is simply \(\mathbf{\hat{q}_{0}}+\mathbf{\hat{q}_{10}}\). Multiple levels of resolution can be implemented from this framework recursively.
## 4 Example
Figure 3: CIMSS Winds derived from GOES data at 2006-04-06-06Z (left) and pressure (right). The velocity vectors are sparse and contain significant divergence.
The performance of this algorithm is illustrated in a velocimetry computation. To compare, we use CIMSS wind-data satellite data [10], depicted in Figure 3, and Figure 4 obtained from CIMSS analysis on 2006-06-04 at 06Z and 09Z respectively. CIMSS wind-data is shown over the US great plains, and was obtained from the 'sounder'. The red dots indicate the original location of the data. The left subplot shows wind speed (in degree/hr). The right ones show pressure, and the location of raw measurements in red.
Figure 4: CIMSS Winds derived from GOES data at 2006-04-06-09Z (left) and pressure (right). The velocity vectors are sparse and contain significant divergence.
It can be seen in the maps shown in Figure 3 and Figure 4 that the current method to produce winds generates sparse vectors and, further, has substantial divergence. Whilst this can be thought of as accurately representing turbulence, in reality these vectors are more likely the result of weak quality control. The primary methodology used here is to identify features in an image, extract regions of interest around them and search for them in subsequent frames. This, by definition, produces sparse velocity estimates (features are sparse), leaving unanswered how to systematically incorporate appropriate spatial interpolation functions for the velocity. Since regions of interest are essentially treated as being statistically independent, mismatches can produce widely varying displacement vectors. Such mis-matches can easily occur in correlation based approaches when the features are not distinguishing or substantial deformations occur from one time to another in a region of interest. A more detailed discussion is presented in Section 2.
In contrast, our method produces dense flow fields, and quality control is implicit from regularization constraints. Figure 5(a,b) shows a pair of NOWRAD images at 2006-06-01-0800Z and 2006-06-01-0900Z respectively, and the computed flow field in Figure 5(c). Similarly, Figure 5(d,e,f) show the GOES images and velocity from the same time frame over the deep convective rainfall region in the Great Plains example. The velocities are in good agreement with CIMSS derived winds where magnitudes are concerned, but the flow-fields are smooth and visual confirmation of the alignment provides convincing evidence that they are correct.
## 5 Conclusions
Our method is a Bayesian perspective of the velocimetry problem. It has several distinct advantages: (a) It is useful for a wide range of observation modalities. (b) Our approach does not require features to be identified for computing velocity. This is a significant advantage because features cannot often be clearly delineated, and are by definition sparse. (c) Our approach implicitly uses quality control in terms of smoothness, and produces dense flow-fields. (d) our approach can be integrated easily with current operational implementations, thereby making this effort more likely to have a real impact. Finally, it should be noted that the regularization constraint in field alignment is a weak constraint and the weights determine how strongly the constraints influence the flow field. The constraint in \\(L\\) is modeled as such because we expect the fluid flow to be smooth. From a regularization point of view, there can be other choices [27] as well. The proposed method can be used for a variety of velocimetry applications including PIV, velocity from tracer-transport, and velocity from GOES and other satellite data, and an application of this is to advect rain-cells produced by a rainfall model, with realistic wind-forcing.
## References
* [1] T. L. Black. The new NMC mesoscale eta model: Description and forecast examples. _Weather and Forecasting_, 9(2):265-278, 1994.
* [2] T. L. Black, D. Deaven, and G. DiMego. The step-mountain eta coordinate model: 80 km early version and objective verifications. _NWS/NOAA Tech. Procedures Bull._, 412:31, 1993.
* [3] F. Chen and J. Dudhia. Coupling an advanced land surface-hydrology model with the penn state-ncar mm5 modeling system. part i: Model implementation and sensitivity. _Monthly Weather Review_, 129(4):569-585, 2001.
* [4] F. Chen and J. Dudhia. Coupling an advanced land surface-hydrology model with the penn state-ncar mm5 modeling system. part ii: Preliminary model validation. _Monthly Weather Review_, 129(4):587-604, 2001.
* [5] P. Courtier. Variational methods. _J. Meteor. Soc. Japan_, 75, 1997.
* [6] P. Cowpertwait. Further developments of the neyman-scott clustered point process for modeling rainfall. _Water Resource Research_, 27(7), 1991.
* [7] A. Orlandi et al. Rainfall assimilation in rams by means of the Kuo parameterisation inversion: Method and preliminary results. _Journal of Hydrology_, 288(1-2):20-35, 2004.
* [8] C. Onof et al. Rainfall modelling using poisson-cluster processes: A review of developments. _Stochastic Environmental Research and Risk Assessment_, 2000.
* [9] C. S. Velden et al. Upper-tropospheric winds derived from geostationary satellite water vapor observations. _Bulletin of the American Meteorological Society_, 78(2):173-195, 1997.
* [10] C. Velden et al. Recent innovations in deriving tropospheric winds from meteorological satellites. _Bulletin of the American Meteorological Society_, 86(2):205-223, 2005.
* [11] H. Moradkhani et al. Dual state-parameter estimation of hydrological models using ensemble kalman filter. _Advances in Water Resources_, 28(2):135-147, 2005.
* [12] R. A. Pielke et al. A comprehensive meteorological modeling system rams. _Meteorology and Atmospheric Physics_, 49(1-4):69-91, 1992.
* [13] R. Rogers et al. Changes to the operational \"early\" eta analysis forecast system at the national centers for environmental prediction. _Weather and Forecasting_, 11(3):391-413, 1996.
* [14] G. Evensen. The ensemble kalman filter: Theoretical formulation and practical implementation. _Ocean Dynamics_, 53:342-367, 2003.
* [15] A. Gelb. _Applied Optimal Estimation_. MIT Press, 1974.
* [16] G. Grell, J. Dudhia, and D.R. Stauffer. A description of the fifth generation penn state/ncar mesoscale model (mm5). Technical Report TN-398+IA, NCAR, 1993.
* [17] D. J. Heeger. Optical flow from spatiotemporal filters. _International Journal of Computer Vision_, pages 279-302, 1988.
* [18] M. N. Khaliq and C. Cunnane. Modelling point rainfall occurrences with the modified Bartlett-Lewis rectangular pulses model. _Journal of Hydrology_, 180(1):109-138, 1996.
* [19] D. T. Lawton. Processing translational motion sequences. _Computer Vision, Graphics and Image Processing_, 22:116-144, 1983.
* [20] A. C. Lorenc. Analysis methods for numerical weather prediction. _Q. J. R. Meteorol. Soc._, 112:1177-1194, 1986.
* [21] H.-H Nagel. Displacement vectors derived from second order intensity variations in image sequences. _Computer Vision, Graphics and Image Processing_, 21:85-117, 1983.
* [22] T. M. Over and V. K. Gupta. A space-time theory of mesoscale rainfall using random cascades. _Journal of Geophysical Research_, 101(D21):319-332, 1996.
* [23] S. Ravela. Amplitude-position formulation of data assimilation. In _ICCS 2006, Lecture Notes in Computer Science_, number 3993 in Part III, pages 497-505, 2006.
* [24] S. Ravela, K. Emanuel, and D. McLaughlin. Data assimilation by field alignment. _Physica D, to appear_, 2006.
* [25] I. Rodriguez-Iturbe, D.R. Cox, and V. Isham. A point process model for rainfall: Further developments. _Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences_, 417(1853):283-298, 1988.
* [26] A.N. Tikhonov and V. Y. Arsenin. _Solutions of Ill-Posed Problems_. Wiley, New York, 1977.
* [27] G. Wahba and J. Wendelberger. Some new mathematical methods for variational objective analysis using splines and cross-validation. _Monthly Weather Review_, 108, 1980.
Figure 5: Deriving velocimetry information from satellite observations, Nexrad (top), GOES (bottom). See text for more information. | An algorithm to estimate motion from satellite imagery is presented. Dense displacement fields are computed from time-separated images of of significant convective activity using a Bayesian formulation of the motion estimation problem. Ordinarily this motion estimation problem is ill-posed; there are far too many degrees of freedom than necessary to represent the motion. Therefore, some form of regularization becomes necessary and by imposing smoothness and non-divergence as desirable properties of the estimated displacement vector field, excellent solutions are obtained. Our approach provides a marked improvement over other methods in conventionaluse. In contrast to correlation based approaches, the displacement fields produced by our method are dense, spatial consistency of the displacement vector field is implicit, and higher-order and small-scale deformations can be easily handled. In contrast with optic-flow algorithms, we can produce solutions at large separations of mesoscale features between large time-steps or where the deformation is rapidly evolving. | Give a concise overview of the text below. | 183 |
arxiv-format/1604_08182v2.md | Unsupervised Classification in Hyperspectral Imagery with Nonlocal Total Variation and Primal-Dual Hybrid Gradient Algorithm
Wei Zhu, Victoria Chayes, Alexandre Tiard, Stephanie Sanchez, Devin Dahlberg,
Andrea L. Bertozzi, Stanley Osher, Dominique Zosso, and Da Kuang
This work was supported by NSF grants DMS-1118971, DMS-1045536, DMS-1417674, and ONR grant N00014-16-1-2119. Wei Zhu, Da Kuang, Andrea L. Bertozzi, Stanley Osher, Dominique Zosso, and Stephanie Sanchez are with the Department of Mathematics at University of California, Los Angeles. Email: {weizhu731, dakuang, bertozzi, sjo, zosso}@math.ucla.edu, [email protected]. Victoria Chayes is with Bard College. Email: [email protected]. Alexandre Tiard is with ENSE3, Grenoble Institute of Technology. Email: [email protected]. Devin Dahlberg is with University of California, San Diego. Email: [email protected].
## I Introduction
Hyperspectral imagery (HSI) is an important domain in the field of remote sensing with numerous applications in agriculture, environmental science, mineralogy, and surveillance [1]. Hyperspectral sensors capture information of intensity of reflection at different wavelengths, from the infrared to ultraviolet. They take measurements 10-30nm apart, and up to 200 layers for a single image. Each pixel has a unique spectral signature, which can be used to differentiate objects that cannot be distinguished based on visible spectra, for example: invisible gas plumes, oil or chemical spills over water, or healthy from unhealthy crops.
The majority of HSI classification methods are either _unmixing_ methods or _clustering_ methods. Unmixing methods extract the information of the constitutive materials (the _endmembers_) and the abundance map [2, 3, 4, 5]. Clustering methods do not extract endmembers; instead, they return the spectral signatures of the centroids of the clusters. Each centroid is the mean of the signatures of all the pixels in a cluster. However, when it is assumed that most of the pixels are dominated mostly by one endmember, i.e. in the absence of partial volume effects [6], which is usually the case for high-resolution HSI, these two types of methods are expected to give similar results [5]. The proposed nonlocal total variation (NLTV) method for HSI classification in this paper is a clustering method.
Much work has been carried out in the literature in both the unmixing and the clustering categories. HSI unmixing models can be characterized as linear or nonlinear. In a linear unmixing model (LUM), each pixel is approximated by a linear combination of the endmembers. When the linear coefficients are constrained to be nonnegative, it is equivalent to nonnegative matrix factorization (NMF), and good unsupervised classification results have been achieved in [3, 4, 5] using either NMF or hierarchical rank-2 NMF (H2NMF). Despite the simplicity of LUM, the assumption of a linear mixture of materials has been shown to be physically inaccurate in certain situations [7]. Researchers are starting to expand aggressively into the much more complicated nonlinear unmixing realm [8], where nonlinear effects such as atmospheric scattering are explicitly modeled. However, most of the work that has been done for nonlinear unmixing so far is supervised in the sense that prior knowledge of the endmember signatures is required [2]. Discriminative machine learning methods such as support vector machine (SVM) [9, 10, 11] and relevance vector machine (RVM) [12, 13, 14] based approaches have also been applied to hyperspectral images, but they are also supervised methods since a training set is needed to learn the classifiers.
On the contrary, graph-based clustering methods implicitly model the nonlinear mixture of the endmembers. This type of method is built upon a weight matrix that encodes the similarity between the pixels, which is typically a sparse matrix constructed using the distances between the spectral signatures. Graph-cut problems for graph segmentation have been well-studied in the literature [15, 16, 17, 18]. In 2012, Bertozzi and Flenner proposed a diffuse interface model on graphs with applications to classification of high dimensional data [19]. This idea has been combined with the Merriman-Bence-Osher (MBO) scheme [20] and applied to multi-class graph segmentation [21, 22] and HSI classification [23, 24]. The method in [19] minimizes a graph version of the Ginzburg-Landau (GL) functional, which consists of the Dirichlet energy of the labeling function and a double-well potential, and uses Nystrom extension to speed up the calculation of the eigenvectors for inverting the graph Laplacian. This graph-based method performed well compared to other algorithms in the detection of chemical plumes in hyperspectral video sequences [23, 24]. However, the GL functional is non-convex due to its double-well term, which may cause the algorithm to get stuck in local minima. This issue can be circumvented by running the algorithm multiple times with different initial conditions and hand-picking the best result.
The two methods proposed in this paper are unsupervised graph-based clustering techniques. Instead of minimizing the GL functional, which has been proved to converge to the total variation (TV) semi-norm, this work proposes to minimize the NLTV semi-norm of the labeling functions \(\|\nabla_{w}u_{l}\|_{L^{1}}\) directly. A detailed explanation of the nonlocal operator \(\nabla_{w}\) and the labeling function \(u_{l}\) will be provided in Section II and Section III. The \(L^{1}\) regularized convex optimization problem is solved by the primal-dual hybrid gradient (PDHG) algorithm, which avoids the need to invert the graph Laplacian. We also introduce the novel idea of the quadratic model and a stable simplex clustering technique, which ensures that anomalies converge to their own clusters and makes random endmember initialization possible in the proposed algorithm. The direct usage of the NLTV semi-norm makes the proposed clustering methods more accurate than other methods when evaluated quantitatively on HSI with ground-truth labels, and the quadratic model with stable simplex clustering is a completely new addition to the field of HSI classification.
This paper is organized as follows: in Section II background is provided on total variation and nonlocal operators. Two NLTV models (linear and quadratic) and a stable simplex clustering method are presented in Section III. Section IV provides a detailed explanation on the application of the PDHG algorithm to solving the convex optimization problems in the linear and quadratic models. Section V presents the numerical results and a sensitivity analysis on the key model parameters. Section VI presents the conclusions.
## II Total Variation and Nonlocal Operators
Total variation (TV) method was introduced by Rudin et al in 1992 [25] and has been applied to various image processing tasks [26]. Its advantage is that one can preserve the edges in the image when minimizing \(\|\nabla u\|_{L^{1}}\) (TV semi-norm). The total variation model is:
\\[\\min_{u}E(u)=\\|\
abla u\\|_{L^{1}}+\\lambda S(u).\\]
The parameter \\(\\lambda\\) can be adjusted to give higher priority to the TV-regularizing term, or the data fidelity term \\(S(u)\\).
Despite its huge success in image processing, the total variation method is still a local method. More specifically, the gradient of a pixel is calculated using its immediate adjacent pixels. It is known that local image processing techniques fail to produce satisfactory results when the image has repetitive structures, or intrinsically related objects in the image are not spatially connected. To address this problem, Buades et al proposed a nonlocal means method based on patch distances for image denoising [27]. Gilboa and Osher [28] later formalized a systematic framework for nonlocal image processing. Nonlocal image processing produces much better results because theoretically any pixel in the image can interact with any other, which better preserves texture and fine details.
In HSI classification, clusters can have elements that are not spatially connected. Thus it is necessary to develop a nonlocal method of gradient calculation. We provide a review of nonlocal operators in the rest of this section. Note that the model is continuous, and the weights are not necessarily symmetric [29].
Let \\(\\Omega\\) be a region in \\(\\mathbb{R}^{n}\\), and \\(u:\\Omega\\rightarrow\\mathbb{R}\\) be a real function. In the model for HSI classification, \\(\\Omega\\) is the domain of the pixels, and \\(u:\\Omega\\rightarrow[0,1]\\) is the labeling function of a cluster. The larger the value of \\(u(x)\\), the more likely that pixel \\(x\\) would be classified in that cluster. The nonlocal derivative is:
\\[\\frac{\\partial u}{\\partial y}(x):=\\frac{u(y)-u(x)}{d(x,y)},\\quad\\text{for all }x,y\\in\\Omega,\\]
where \\(d\\) is a positive distance between \\(x\\) and \\(y\\). In the context of hyperspectral images, \\(d(x,y)\\) provides a way to measure the similarity between pixels \\(x\\) and \\(y\\). Smaller \\(d(x,y)\\) implies more resemblance between these two pixels. The nonlocal weight is defined as \\(w(x,y)=d^{-2}(x,y)\\).
The nonlocal gradient \\(\
abla_{w}u\\) for \\(u\\in L^{2}(\\Omega)\\) can be defined as the collection of all partial derivatives, which is a function from \\(\\Omega\\) to \\(L^{2}(\\Omega)\\), i.e. \\(\
abla_{w}u\\in L^{2}(\\Omega,L^{2}(\\Omega))\\):
\\[\
abla_{w}u(x)(y)=\\frac{\\partial u}{\\partial y}(x)=\\sqrt{w(x,y)}(u(y)-u(x)).\\]The standard \\(L^{2}\\) inner products on Hilbert spaces \\(L^{2}(\\Omega)\\) and \\(L^{2}(\\Omega,L^{2}(\\Omega))\\) are used in the definition. More specifically, for \\(u_{1},u_{2}\\in L^{2}(\\Omega)\\) and \\(v_{1},v_{2}\\in L^{2}(\\Omega,L^{2}(\\Omega))\\),
\\[\\langle u_{1},u_{2}\\rangle :=\\int_{\\Omega}u_{1}(x)u_{2}(x)dx,\\] \\[\\langle v_{1},v_{2}\\rangle :=\\int_{\\Omega}\\int_{\\Omega}v_{1}(x)(y)v_{2}(x)(y)dydx.\\]
The nonlocal divergence \\(\\text{div}_{w}\\) is defined as the negative adjoint of the nonlocal gradient:
\\[\\text{div}_{w}v(x):=\\int_{\\Omega}\\sqrt{w(x,y)}v(x)(y)-\\sqrt{w(y,x)}v(y)(x)dy.\\]
At last, a standard \\(L^{1}\\) and \\(L^{\\infty}\\) norm is defined on the space \\(L^{2}(\\Omega,L^{2}(\\Omega))\\):
\\[\\|v\\|_{L^{1}}:=\\int_{\\Omega}\\|v(x)\\|_{L^{2}}dx=\\int_{\\Omega}\\left| \\int_{\\Omega}|v(x)(y)|^{2}\\,dy\\right|^{\\frac{1}{2}}dx,\\] \\[\\|v\\|_{L^{\\infty}}:=\\sup_{x}\\|v(x)\\|_{L^{2}}.\\]
## III Two NLTV Models for Unsupervised HSI classification
In this section, two NLTV models are explained for unsupervised classification of HSI. The linear model runs faster in each iteration, but it requires a more accurate centroid initialization. The quadratic model runs slower in each iteration, but it is more robust with respect to the centroid initialization. Moreover, the quadratic model converges faster if the initialization is not ideal.
### _Linear Model_
We extend the idea from [30] to formulate a linear model for classification on HSI. The linear model seeks to minimize:
\[E_{1}(u)=\|\nabla_{w}u\|_{L^{1}}+\langle u,f\rangle=\sum_{l=1}^{k}\|\nabla_{w}u_{l}\|_{L^{1}}+\sum_{l=1}^{k}\int u_{l}(x)f_{l}(x)dx, \tag{1}\]
where \\(u=(u_{1},u_{2},\\ldots,u_{k}):\\Omega\\rightarrow\\mathbb{R}^{k}\\) is the labeling function, \\(k\\) is the number of clusters, \\(\\mathbb{K}^{k}=\\{(x_{1},x_{2},\\ldots,x_{k})|\\sum_{i=1}^{k}x_{i}=1,x_{i}\\geq 0\\}\\) is the unit simplex in \\(\\mathbb{R}^{k}\\), and \\(\
abla_{w}u=\\left(\
abla_{w}u_{1},\\ldots,\
abla_{w}u_{k}\\right)\\) such that \\(\\|\
abla_{w}u\\|_{L^{1}}=\\sum_{l=1}^{k}\\|\
abla_{w}u_{l}\\|_{L^{1}}\\). \\(f_{l}(x)\\) is the error function defined as \\(f_{l}(x)=\\frac{\\lambda}{2}\\left|g(x)-c_{l}\\right|_{\\mu}^{2}\\), where \\(g(x)\\) and \\(c_{l}\\) are the spectral signatures of pixel \\(x\\) and the \\(l\\)-th centroid, which is initially either picked randomly from the HSI or generated by any fast unsupervised centroid extraction algorithm (e.g. H2NMF, K-means.) The distance in the definition of \\(f_{l}(x)\\) is a linear combination of cosine distance and Euclidean distance:
\\[\\left|g(x)-c_{l}\\right|_{\\mu}=1-\\frac{\\langle g(x),c_{l}\\rangle}{\\|g(x)\\|_{2} \\|c_{l}\\|_{2}}+\\mu\\|g(x)-c_{l}\\|_{2},\\quad\\mu\\geq 0.\\]
In HSI processing, the cosine distance is generally used because it is more robust to atmospheric interference and topographical features [31]. The reason why the Euclidean distance is also used is that sometimes different classes have very similar spectral angles, but vastly different spectral amplitudes (e.g. \"dirt\" and \"road\" in the Urban dataset, which is illustrated in Section V.) This is called the linear model since the power of the labeling function \\(u_{l}\\) in (1) is one.
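A small vectorized helper for this combined distance is sketched below; the small constant guarding against division by zero is an implementation detail, not part of the model.

```python
import numpy as np

def centroid_distance(g, c, mu):
    """|g - c|_mu: one minus cosine similarity plus mu times Euclidean distance.
    g: (r, d) array of pixel spectra, c: (d,) centroid spectrum."""
    cos = g @ c / (np.linalg.norm(g, axis=1) * np.linalg.norm(c) + 1e-12)
    euc = np.linalg.norm(g - c, axis=1)
    return 1.0 - cos + mu * euc
```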
The intuition of the model is as follows: In order to minimize the fidelity term \(\sum_{l=1}^{k}\int u_{l}(x)f_{l}(x)dx\), a small \(u_{l}(x)\) is required if \(f_{l}(x)\) is large, while no such requirement is needed if \(f_{l}(x)\) is relatively small. This combined with the fact that \((u_{1}(x),\ldots,u_{k}(x))\) lies on a unit simplex implies that \(u_{l}(x)\) would be the largest term if pixel \(x\) is mostly similar to the \(l\)-th centroid \(c_{l}\). Meanwhile, the NLTV regularizing term \(\sum_{l=1}^{k}\|\nabla_{w}u_{l}\|_{L^{1}}\) ensures that pixels similar to each other tend to have analogous values of \(u\). Therefore a classification of pixel \(x\) can be obtained by choosing the index \(l\) that has the largest value \(u_{l}(x)\).
Now we discuss how to discretize (1) for numerical implementation.
#### Ii-A1 Weight Matrix
Following the idea from [28], the patch distance is defined as:
\\[d_{\\sigma}(x,y)=\\int_{\\Omega}G_{\\sigma}(t)\\left|g(x+t)-g(y+t)\\right|^{2}dt,\\]
where \\(G_{\\sigma}\\) is a Gaussian of standard deviation \\(\\sigma\\). To build a sparse weight matrix, we take a patch \\(P_{i}\\) around every pixel \\(i\\), and truncate the weight matrix by constructing a \\(k\\)-d tree [32] and searching the \\(m\\) nearest neighbors of \\(P_{i}\\). \\(k\\)-d tree is a space-partitioning data structrue that can significantly reduce the time cost of nearest neighbor search [33]. We employ a randomized and approximate version of this algorithm [34] implemented in the open source VLFeat package 1. The weight is binarized by setting all nonzero entries to one. In the experiments, patches of size \\(3\\times 3\\) are used, and \\(m\\) is set to 10. Note that unlike RGB image processing, the patch size for HSI does not have to be very large. The reason is that while low dimensional RGB images require spatial context to identify pixels, high dimensional hyperspectral images already encode enough information for each pixel in the spectral dimension. Of course, a larger patch size that is consistent with the spatial resolution of the HSI will still be preferable when significant noise is present.
Footnote 1: [http://www.vlfeat.org](http://www.vlfeat.org)
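As an illustration of this construction, the sketch below builds the sparse binary weight matrix using scikit-learn's k-d-tree-based nearest neighbor search instead of the VLFeat routine used by the authors; the function and parameter names are hypothetical, and the Gaussian patch weighting \\(G_{\\sigma}\\) is omitted so that all patch pixels are weighted equally.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

def build_weight_matrix(img, patch=3, m=10):
    """Sparse binary nonlocal weight matrix from patch nearest neighbors.

    img : (H, W, d) hyperspectral cube. Returns an (r, r) CSR matrix, r = H*W.
    """
    H, W, d = img.shape
    pad = patch // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    # Stack the patch around every pixel into one feature vector per pixel.
    feats = np.stack(
        [padded[i:i + H, j:j + W, :] for i in range(patch) for j in range(patch)],
        axis=-1,
    ).reshape(H * W, d * patch * patch)
    nn = NearestNeighbors(n_neighbors=m + 1, algorithm="kd_tree").fit(feats)
    _, idx = nn.kneighbors(feats)            # the first neighbor is the pixel itself
    rows = np.repeat(np.arange(H * W), m)
    cols = idx[:, 1:].ravel()
    data = np.ones(rows.size)
    return csr_matrix((data, (rows, cols)), shape=(H * W, H * W))
```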
#### III-A2 The Labeling Function and the Nonlocal Operators
The labeling function, \\(u=(u_{1},u_{2},\\ldots,u_{k})\\), is discretized as a matrix of size \\(r\\times k\\), where \\(r\\) is the number of pixels in the hyperspectral image, and \\((u_{l})_{j}\\) is the \\(l\\)-th labeling function at the \\(j\\)-th pixel; \\((\\nabla_{w}u_{l})_{i,j}=\\sqrt{w_{i,j}}((u_{l})_{j}-(u_{l})_{i})\\) is the nonlocal gradient of \\(u_{l}\\); \\((\\text{div}_{w}v)_{i}=\\sum_{j}\\sqrt{w_{i,j}}v_{i,j}-\\sqrt{w_{j,i}}v_{j,i}\\) is the divergence of \\(v\\) at the \\(i\\)-th pixel; and the discrete \\(L^{1}\\) and \\(L^{\\infty}\\) norms of \\(\\nabla_{w}u_{l}\\) are defined as \\(\\|\\nabla_{w}u_{l}\\|_{L^{1}}=\\sum_{i}\\left(\\sum_{j}(\\nabla_{w}u_{l})_{i,j}^{2}\\right)^{\\frac{1}{2}}\\) and \\(\\|\\nabla_{w}u_{l}\\|_{L^{\\infty}}=\\max_{i}\\left(\\sum_{j}(\\nabla_{w}u_{l})_{i,j}^{2}\\right)^{\\frac{1}{2}}\\).
The next issue to address is how to minimize (1) efficiently. The convexity of the energy functional \\(E_{1}\\) allows us to consider using convex optimization methods. First-order primal-dual algorithms have been successfully used in image processing with \\(L^{1}\\) type regularizers [35, 36, 30, 37]. We use the primal-dual hybrid gradient (PDHG) algorithm. The main advantage is that no matrix inversion is involved in the iterations, as opposed to general graph Laplacian methods. The most expensive part of the computation comes from sparse matrix multiplications, which are still inexpensive due to the fact that only \\(m=10\\) nonzero elements are kept in each row of the nonlocal weight matrix.
We then address centroid updates and stopping criteria for the linear model. The concept of centroid updates is not uncommon; in fact, the standard K-means algorithm consists of two steps: first, it assigns each point to the cluster whose mean yields the least within-cluster sum of squares, then it re-calculates the centroids as the means of the newly formed clusters, and it terminates when the assignments no longer change [38]. Especially for data-based methods, re-calculating the centroids is essential for making the algorithm less sensitive to initial conditions and more likely to find the "true" clusters.
After solving (1) using the PDHG algorithm, the output \\(u\\) will be thresholded to \\(u_{hard}\\). More specifically, for every \\(i\\in\\{1,2,\\ldots,r\\}\\), the largest element among \\(((u_{1})_{i},(u_{2})_{i},\\cdots,(u_{k})_{i})\\) is set to 1, while the others are set to 0, and we claim the \\(i\\)-th pixel belongs to that particular cluster. Then the \\(l\\)-th centroid is updated by taking the mean of all the pixels in that cluster. The process is repeated until the difference between two consecutive \\(u_{hard}\\) drops below a certain threshold. The pseudocode for the proposed linear model on HSI is listed in Algorithm 1.
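Since Algorithm 1 itself is only referenced here, the sketch below outlines the outer loop just described (PDHG solve, hard thresholding, centroid update) in schematic Python. The routine `solve_pdhg` stands for any solver that minimizes (1) for fixed centroids and is an assumption of this sketch, as is the simplified error matrix; neither is part of the original pseudocode.

```python
import numpy as np

def linear_nltv_clustering(G, C0, solve_pdhg, lam=1.0, max_outer=50, tol=1e-3):
    """Outer loop of the linear model: solve (1) for fixed centroids, threshold, update.

    G : (r, d) pixel signatures;  C0 : (k, d) initial centroids.
    solve_pdhg(F) : any routine returning an (r, k) labeling u for the error matrix F.
    """
    C = C0.astype(float).copy()
    k = C.shape[0]
    u_hard_prev = None
    labels = None
    for _ in range(max_outer):
        # Error matrix f_l(x); the full mixed cosine/Euclidean distance can be
        # substituted here, a plain squared Euclidean distance keeps the sketch short.
        F = 0.5 * lam * ((G[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        u = solve_pdhg(F)
        labels = u.argmax(axis=1)                 # hard thresholding of u
        u_hard = np.eye(k)[labels]
        for l in range(k):                        # centroid = mean of its cluster
            if np.any(labels == l):
                C[l] = G[labels == l].mean(axis=0)
        if u_hard_prev is not None and np.abs(u_hard - u_hard_prev).mean() < tol:
            break
        u_hard_prev = u_hard
    return labels, C
```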
Before ending the discussion of the proposed linear model, we point out its connection to the piecewise constant Mumford-Shah model for multi-class graph segmentation [39]. Assume that the domain \\(\\Omega\\) of the HSI is segmented by a contour \\(\\Phi\\) into \\(k\\) disjoint regions, \\(\\Omega=\\cup_{l=1}^{k}\\Omega_{l}\\). The piecewise constant Mumford-Shah energy is defined as:
\\[E_{MS}(\\Phi,\\{c_{l}\\}_{l=1}^{k})=\\left|\\Phi\\right|+\\lambda\\sum_{l=1}^{k}\\int_{\\Omega_{l}}\\left|g(x)-c_{l}\\right|^{2}dx, \\tag{2}\\]
where \\(|\\Phi|\\) is the length of the contour. To illustrate the connection between (1) and (2), consider the "local" version of (1), which essentially replaces the NLTV regularizer \\(\\|\\nabla_{w}u_{l}\\|_{L^{1}}\\) with its local counterpart:
\\[E_{1}^{\\text{loc}}(u)=\\sum_{l=1}^{k}\\|\\nabla u_{l}\\|_{L^{1}}+\\sum_{l=1}^{k}\\int u_{l}(x)f_{l}(x)dx. \\tag{3}\\]
Assume that the labeling function \\(u_{l}\\) is the characteristic function of \\(\\Omega_{l}\\). Then \\(\\int u_{l}(x)f_{l}(x)dx\\) is equal to \\(\\int_{\\Omega_{l}}|g(x)-c_{l}|^{2}\\,dx\\) up to a multiplicative constant. Moreover, the total variation of a characteristic function of a region equals the length of its boundary, and hence \\(|\\Phi|=\\sum_{l=1}^{k}\\|\\nabla u_{l}\\|_{L^{1}}\\). So the linear model (1) can be viewed as a nonlocal convex-relaxed version of the Mumford-Shah model. We also note that the linear energy (1) has been studied in [23]. But in that work, the authors used a graph-based MBO method to minimize (1) instead of the PDHG algorithm, and the difference in numerical performance can be seen in Section V.
### _Quadratic Model_
#### III-B1 Intuition
The aforementioned linear model performs very well when the centroids are initialized by accurate centroid extraction algorithms. As shown in Section V, the linear model can significantly boost the accuracy of other algorithms if the centroid extraction is reasonable, without sacrificing speed. However, if the centroids are not extracted accurately, or if random initialization is used, the segmentation results are no longer reliable, and the algorithm takes far more iterations to converge to a stable classification.
To reduce the number of centroid updates and merge similar clusters automatically and simultaneously, the following quadratic model is proposed:
\\[E_{2}(u)=\\sum_{l=1}^{k}\\|\\nabla_{w}u_{l}\\|_{L^{1}}+\\sum_{l=1}^{k}\\int u_{l}^{2}(x)f_{l}(x)dx. \\tag{4}\\]
As before, \\(u=(u_{1},u_{2},\\ldots,u_{k}):\\Omega\\rightarrow\\mathbb{K}^{k}\\) is the labeling function, \\(k\\) is the number of clusters, \\(\\mathbb{K}^{k}\\) is the unit simplex in \\(\\mathbb{R}^{k}\\), and \\(f_{l}(x)\\) is the error function.
Note that the only difference between (1) and (4) is that the power of the labeling function \\(u_{l}\\) here is two. The intuition for this is as follows:
Consider for simplicity a hyperspectral image with a ground truth of only two clusters, \\(A_{1}\\) and \\(A_{2}\\). Suppose the randomized initial centroids are chosen such that \\(c_{1}\\approx c_{2}\\in A_{1}\\); or, that the two random initial pixels are of very similar spectral signatures and belong to the same ground truth cluster.
Let \\(x\\) be a pixel from \\(A_{2}\\). Then \\(0\\ll|g(x)-c_{1}|^{2}\\approx|g(x)-c_{2}|^{2}\\). When (1) is applied, the fidelity term \\(\\langle u,f\\rangle\\) does not change when \\(u(x)\\) moves on the simplex in \\(\\mathbb{R}^{2}\\), and thus pixels of \\(A_{2}\\) will be scattered randomly on the simplex. After thresholding, an approximately equal number of pixels from cluster \\(A_{2}\\) will belong to clusters \\(C_{1}\\) and \\(C_{2}\\), so the new centroids \\(\\tilde{c}_{1}\\) and \\(\\tilde{c}_{2}\\) that are the means of the spectral signatures of the current clusters will once again be approximately equal.
This situation changes dramatically when (4) is minimized:
* Observe that the fidelity term in \\(E_{2}\\) is minimized for a pixel \\(x\\in A_{2}\\) when \\(u_{1}(x)\\approx u_{2}(x)\\approx\\frac{1}{2}\\). Therefore, the pixels of cluster \\(A_{2}\\) will be \"pushed\" toward the center of the simplex once \\(E_{2}\\) is minimized.
* With a stable simplex clustering method (explained in Section III-B2), the clusters are divided such that all of these pixels in the center belong to either \\(C_{1}\\) or \\(C_{2}\\); without loss of generality suppose they belong to \\(C_{2}\\). Then the updated centroid \\(\\tilde{c}_{1}\\) is essentially \\(c_{1}\\), while the updated centroid \\(\\tilde{c}_{2}\\) is a linear combination of the spectral signatures of members belonging to \\(A_{1}\\) and \\(A_{2}\\), and thus quite different from the original \\(c_{2}\\).
* After minimizing the energy \\(E_{2}\\) again, pixels from \\(A_{1}\\) will be clustered in \\(C_{1}\\), and pixels from \\(A_{2}\\) will be pushed to \\(C_{2}\\). Therefore, the clustering will be finished in just two steps in theory. See Fig. 1 for a graphical illustration.
Fig. 1: The first figure shows the "pushing" mechanism of the quadratic model. The horizontal line represents the unit simplex in \\(\\mathbb{R}^{2}\\). Signatures from cluster \\(A_{1}\\) are colored blue, and signatures from cluster \\(A_{2}\\) are colored brown. The vertical dashed bar is generated by a stable simplex clustering method, and it thresholds the points on the simplex into two categories. The second figure shows the stable simplex clustering. Every grid point \\(\\delta\\) on the simplex generates a simplex clustering. We want to choose a \\(\\delta\\) such that there are very few data points falling into the "Y-shaped region".
The quadratic model not only reduces the number of iterations needed to find the \"true\" clustering because of its capability of anomaly detection, but it allows for random initialization as well, making it a more robust technique.
#### III-B2 Stable Simplex Clustering
As mentioned above, the quadratic model pushes anomalies into the middle of the unit simplex. Therefore it would be ill-conceived to simply classify the pixels based on the largest component of the labeling function \\(u(x)=(u_{1}(x),u_{2}(x),\\ldots,u_{k}(x))\\). Instead, a stable simplex clustering method has to be used.
The concept behind the stable simplex clustering is to choose a division that puts all the data points in the "middle" of the unit simplex into a single cluster. Fig. 1 demonstrates this in the simple two-cluster case; also refer to Section III-B1 for an explanation of the "pushing" process. The idea to accomplish this goal is inspired by [5]. We first create a grid on a \\(k\\)-dimensional simplex, where \\(k\\) is the number of clusters, and each grid point \\(\\delta\\) generates a simplex clustering. Then a \\(\\delta\\) is chosen to minimize the energy \\(g(\\delta)\\):
\\[g(\\delta)=-\\log(\\prod_{l=1}^{k}F_{l}(\\delta))+\\eta\\exp(G(\\delta)),\\]
where \\(F_{l}(\\delta)\\) is the percentage of data points in cluster \\(l\\), and \\(G(\\delta)\\) is the percentage of data points on the edges near the division, i.e., the "Y-shaped region" in Figure 1. The first term in \\(g(\\delta)\\) rewards keeping the clusters of approximately the same size, ensuring that no cluster becomes vanishingly small. The second term rewards sparsity of points in the intermediate region. The constant \\(\\eta\\) is chosen to be large enough that stability has a bigger weight in the energy.
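For concreteness, the sketch below evaluates \\(g(\\delta)\\) on a grid of candidate thresholds for a given labeling matrix. The assignment rule (argmax of \\(u-\\delta\\)) and the margin used to define the boundary region are assumptions of this illustration, since the precise construction follows [5] and is not spelled out here.

```python
import itertools
import numpy as np

def stable_simplex_clustering(u, eta=10.0, step=0.1, margin=0.05):
    """Pick a threshold delta on the simplex minimizing g(delta) for a labeling u (r, k)."""
    r, k = u.shape
    grid = [d for d in itertools.product(np.arange(0.0, 1.0 + 1e-9, step), repeat=k)
            if abs(sum(d) - 1.0) < 1e-9]                  # grid points on the simplex
    best, best_labels = np.inf, None
    for delta in grid:
        scores = u - np.asarray(delta)                    # assumed assignment rule
        labels = scores.argmax(axis=1)
        F = np.array([(labels == l).mean() for l in range(k)])
        top2 = np.sort(scores, axis=1)[:, -2:]            # two largest scores per pixel
        G = (top2[:, 1] - top2[:, 0] < margin).mean()     # fraction near a decision boundary
        if np.all(F > 0):
            g = -np.log(np.prod(F)) + eta * np.exp(G)     # the energy g(delta) above
            if g < best:
                best, best_labels = g, labels
    return best_labels
```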
Algorithm 2 shows the quadratic model using stable simplex clustering. Fig. 2 demonstrates how this detected the chemical plumes in a frame with background centroids pre-calculated and random initialization for the final centroid. Notice that no plume is detected in the first iteration. But by the twelfth iteration, the gas plume is nearly perfectly segmented.
Fig. 3: Linear vs Quadratic Model on the Urban dataset with the same centroid initialization. To produce essentially identical results, the Linear model (first row) took 50 iterations of centroid updates, and the Quadratic model (second row) took just 4 iterations.
Fig. 2: Quadratic model and stable simplex clustering on the plume dataset. The chemical plume (brown) is perfectly detected in 12 iterations.
Finally, we present the comparison between the results of the linear model and the quadratic model on the Urban dataset with identical random pixel initialization in Figure 3. The linear model took about 50 iterations to converge, and the quadratic model only took 4 iterations.
```
1: Initialization: Choose \\(\\tau,\\sigma>0\\), \\(\\theta\\in[0,1]\\), \\((x^{0},y^{0})\\in X\\times Y\\), and set \\(\\bar{x}^{0}=x^{0}\\)
2:while not converge do
3:\\(y^{n+1}=(I+\\sigma\\partial F^{*})^{-1}(y^{n}+\\sigma K\\bar{x}^{n})\\)
4:\\(x^{n+1}=(I+\\tau\\partial G)^{-1}(x^{n}-\\tau K^{*}y^{n+1})\\)
5:\\(\\bar{x}^{n+1}=x^{n+1}+\\theta(x^{n+1}-x^{n})\\)
6:\\(n=n+1\\)
7:endwhile
```
**Algorithm 3** Primal-Dual Hybrid Gradient (PDHG) Algorithm
## IV Primal-Dual Hybrid Gradient Algorithm
In this section, a detailed explanation is provided on the application of the PDHG algorithm [30, 35, 36, 37] to minimizing \\(E_{1}\\) (1) and \\(E_{2}\\) (4) in the previous section. A review of the algorithm is provided in a more general setting to contextualize the extension to nonlocal model for hyperspectral imagery.
### _A Review of PDHG Algorithm_
Consider the following convex optimization problem:
\\[\\min_{x\\in X}\\{F(Kx)+G(x)\\}, \\tag{5}\\]
where \\(X\\) and \\(Y\\) are finite-dimensional real vector spaces, \\(F\\) and \\(G\\) are proper convex lower semi-continuous functions \\(F:Y\\rightarrow[0,\\infty]\\), \\(G:X\\rightarrow[0,\\infty]\\), and \\(K:X\\to Y\\) is a continuous linear operator with the operator norm \\(\\|K\\|=\\sup\\{\\|Kx\\|:x\\in X,\\|x\\|\\leq 1\\}\\). The primal-dual formulation of (5) is the saddle-point problem:
\\[\\min_{x\\in X}\\max_{y\\in Y}\\{(Kx,y)-F^{*}(y)+G(x)\\}, \\tag{6}\\]
where \\(F^{*}\\) is the convex conjugate of \\(F\\) defined as \\(F^{*}(y)=\\sup_{x}\\left\\langle x,y\\right\\rangle-F(x)\\)
The saddle-point problem (6) is then solved using the iterations of Algorithm 3 from [30].
In Algorithm 3, \\((I+\\lambda\\partial f)^{-1}(x)\\) is the proximal operator of \\(f\\), which is defined as:
\\[(I+\\lambda\\partial f)^{-1}(x)=\\text{prox}_{\\lambda f}(x)=\\arg\\min_{y}f(y)+ \\frac{1}{2\\lambda}\\|y-x\\|_{2}^{2}.\\]
It has been shown in [30] that \\(O(1/N)\\) (where \\(N\\) is the number of iterations) convergence can be achieved as long as \\(\\sigma,\\tau\\) satisfy \\(\\sigma\\tau\\|K\\|^{2}\\leq 1\\).
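As a reference point, the sketch below spells out the generic PDHG iteration of Algorithm 3 in Python, with the two proximal operators passed in as callables; it is a schematic restatement of the pseudocode, not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

def pdhg(K, Kt, prox_Fstar, prox_G, x0, y0, tau, sigma, theta=1.0, n_iter=500):
    """Generic primal-dual hybrid gradient iteration (cf. Algorithm 3).

    K, Kt      : callables applying the linear operator and its adjoint.
    prox_Fstar : callable y -> (I + sigma dF*)^{-1}(y).
    prox_G     : callable x -> (I + tau dG)^{-1}(x).
    Step sizes should satisfy sigma * tau * ||K||^2 <= 1 for O(1/N) convergence.
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_Fstar(y + sigma * K(x_bar))          # dual update
        x_new = prox_G(x - tau * Kt(y))               # primal update
        x_bar = x_new + theta * (x_new - x)           # over-relaxation
        x = x_new
    return x
```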
### _Primal-Dual Iterations to Minimize \\(E_{1}\\) and \\(E_{2}\\)_
Recall from Section III that the discretized linear and quadratic energies \\(E_{1}\\) and \\(E_{2}\\) are:
\\[E_{1}(u)=\\sum_{l=1}^{k}\\|\\nabla_{w}u_{l}\\|_{L^{1}}+\\sum_{l=1}^{k}\\sum_{i=1}^{r}(u_{l})_{i}(f_{l})_{i}=\\|\\nabla_{w}u\\|_{L^{1}}+\\langle u,f\\rangle,\\]
\\[E_{2}(u)=\\sum_{l=1}^{k}\\|\\nabla_{w}u_{l}\\|_{L^{1}}+\\sum_{l=1}^{k}\\sum_{i=1}^{r}(u_{l})_{i}^{2}(f_{l})_{i}=\\|\\nabla_{w}u\\|_{L^{1}}+\\langle u,f\\odot u\\rangle,\\]
where \\(u=(u_{1},u_{2},\\ldots,u_{k})\\) is a nonnegative matrix of size \\(r\\times k\\), with each row of \\(u\\) summing to one, and \\(f\\odot u\\) denotes the pointwise product between the two matrices \\(f\\) and \\(u\\). After adding an indicator function \\(\\delta_{U}\\), minimizing \\(E_{1}\\) and \\(E_{2}\\) is equivalent to solving (7) and (8):
\\[\\underset{u}{\\text{min}}\\,\\|\\nabla_{w}u\\|_{L^{1}}+\\langle u,f\\rangle+\\delta_{U}(u), \\tag{7}\\]
\\[\\underset{u}{\\text{min}}\\,\\|\\nabla_{w}u\\|_{L^{1}}+\\langle u,f\\odot u\\rangle+\\delta_{U}(u), \\tag{8}\\]
where \\(U=\\{u=(u_{1},u_{2},\\ldots,u_{k})\\in\\mathbb{R}^{r\\times k}:\\sum_{l=1}^{k}(u_{l} )_{i}=1,\\forall i=1,\\ldots,r,(u_{l})_{i}\\geq 0\\}\\), and \\(\\delta_{U}\\) is the indicator function on \\(U\\). More specifically:
\\[\\delta_{U}(u)=\\begin{cases}0&\\text{if }u\\in U,\\\\ \\infty&\\text{otherwise}.\\end{cases} \\tag{9}\\]
By comparing (7), (8) and (5), we can set \\(K_{1}=K_{2}=\\nabla_{w}\\), \\(F_{1}(q)=F_{2}(q)=\\|q\\|_{L^{1}}\\), \\(G_{1}(u)=\\langle u,f\\rangle+\\delta_{U}(u)\\), and \\(G_{2}(u)=\\langle u,f\\odot u\\rangle+\\delta_{U}(u)\\). The convex conjugate of \\(F_{1}\\) (and \\(F_{2}\\)) is \\(F_{1}^{*}(p)=F_{2}^{*}(p)=\\delta_{P}(p)\\), where the set \\(P=\\{p\\in\\mathbb{R}^{(r\\times r)\\times k}:\\|p\\|_{\\infty}\\leq 1\\}\\).
Next, we derive the closed forms of the proximal operators \\((I+\\sigma\\partial F_{1,2}^{*})^{-1}\\) and \\((I+\\tau\\partial G_{1,2})^{-1}\\) so that Algorithm 3 can be implemented efficiently to minimize \\(E_{1}\\) and \\(E_{2}\\).
\\[(I+\\sigma\\partial F_{1,2}^{*})^{-1}(\\tilde{p})=(I+\\sigma\\partial \\delta_{P})^{-1}(\\tilde{p})\\] \\[=\\arg\\min_{p}\\delta_{P}(p)+\\frac{1}{2\\sigma}\\|p-\\tilde{p}\\|_{2}^{ 2}=\\text{proj}_{P}(\\tilde{p}), \\tag{10}\\]
where \\(\\text{proj}_{P}(\\tilde{p})\\) is the projection of \\(\\tilde{p}\\) onto the closed convex set \\(P\\).
\\[(I+\\tau\\partial G_{1})^{-1}(\\tilde{u})=\\arg\\min_{u}\\langle u,f \\rangle+\\delta_{U}(u)+\\frac{1}{2\\tau}\\|u-\\tilde{u}\\|_{2}^{2}\\] \\[=\\arg\\min_{u\\in U}\\|u-\\tilde{u}+\\tau f\\|_{2}^{2}=\\text{proj}_{U}( \\tilde{u}-\\tau f). \\tag{11}\\] \\[(I+\\tau\\partial G_{2})^{-1}(\\tilde{u})=\\arg\\min_{u}\\left\\langle u,\\frac{\\tau}{2}\\mathcal{A}u\\right\\rangle+\\tau\\delta_{U}(u)+\\frac{1}{2}\\|u- \\tilde{u}\\|_{2}^{2}\\] \\[=\\arg\\min_{u\\in U}\\frac{1}{2}\\left\\langle u,(I+\\tau\\mathcal{A})u \\right\\rangle-\\langle u,\\tilde{u}\\rangle+\\frac{1}{2}\\left\\langle\\tilde{u},(I+ \\tau\\mathcal{A})^{-1}\\tilde{u}\\right\\rangle\\] \\[=\\arg\\min_{u\\in U}\\frac{1}{2}\\|(I+\\tau\\mathcal{A})^{\\frac{1}{2}}u- (I+\\tau\\mathcal{A})^{-\\frac{1}{2}}\\tilde{u}\\|_{2}^{2}, \\tag{12}\\]
where \\(\\mathcal{A}:\\mathbb{R}^{r\\times k}\\rightarrow\\mathbb{R}^{r\\times k}\\) is a linear operator defined as \\(\\frac{1}{2}\\mathcal{A}u=f\\odot u\\). Therefore \\(\\mathcal{A}\\) is a positive semidefinite diagonal matrix of size \\(rk\\times rk\\). It is worth mentioning that the matrix \\((I+\\tau\\mathcal{A})\\) is diagonal and positive definite, and hence it is trivial to compute its inverse and square root. Problem (12) can be solved as a preconditioned projection onto the unit simplex \\(\\mathbb{K}^{k}\\), and the solution will be explained in Section IV-C.
Combining (10,11,12) and Algorithm 3, we have the primal-dual iterations for minimizing \\(E_{1}\\) (Algorithm 4) and \\(E_{2}\\) (Algorithm 5).
```
1:while not converge do
2:\\(p^{n+1}=\\text{proj}_{P}(p^{n}+\\sigma\\nabla_{w}\\tilde{u}^{n})\\)
3: Update \\(u^{n+1}\\) as in (12), where \\(\\tilde{u}=u^{n}+\\tau\\text{div}_{w}p^{n+1}\\)
4:\\(\\tilde{u}^{n+1}=u^{n+1}+\\theta(u^{n+1}-u^{n})\\)
5:\\(n=n+1\\)
6:endwhile
```
**Algorithm 5** Primal-Dual Iterations for the Quadratic Model
Before explaining how to solve (12), we specify the two orthogonal projections \\(\\text{proj}_{P}\\) and \\(\\text{proj}_{U}\\) in Algorithm 4: Let \\(\\tilde{p}=\\text{proj}_{P}(p)\\), where \\(p=(p_{l})_{l=1}^{k}\\in\\mathbb{R}^{(r\\times r)\\times k}\\). Then for every \\(i\\in\\{1,2,\\ldots,r\\}\\) and every \\(l\\in\\{1,2,\\ldots,k\\}\\), the \\(i\\)-th row of \\(\\tilde{p}_{l}\\) is the projection of the \\(i\\)-th row of \\(p_{l}\\) onto the unit ball in \\(\\mathbb{R}^{r}\\). Similarly, if \\(\\tilde{u}=\\text{proj}_{U}(u)\\), then for every \\(i\\in\\{1,2,\\ldots,r\\}\\), \\(((\\tilde{u}_{1})_{i},(\\tilde{u}_{2})_{i},\\ldots,(\\tilde{u}_{k})_{i})\\) is the projection of \\(((u_{1})_{i},(u_{2})_{i},\\ldots,(u_{k})_{i})\\) onto the unit simplex \\(\\mathbb{K}^{k}\\) in \\(\\mathbb{R}^{k}\\).
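The dual projection is elementwise and cheap; a minimal NumPy sketch of \\(\\text{proj}_{P}\\) (row-wise projection onto unit Euclidean balls, applied to each dual block, here stored densely for clarity) might look as follows.

```python
import numpy as np

def proj_P(p):
    """Project each row of each dual block p_l onto the unit ball in R^r.

    p : (k, r, r) array holding the k dual variables p_l as dense blocks
        (in practice these are sparse, with the same pattern as the weight matrix).
    """
    norms = np.linalg.norm(p, axis=2, keepdims=True)   # row norms, shape (k, r, 1)
    return p / np.maximum(norms, 1.0)                  # shrink only rows with norm > 1
```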
### _Preconditioned Projection onto the Unit Simplex_
This section is dedicated to solving (12). It is easy to see that the rows of \\(u\\) in (12) are decoupled, and the only problem that needs to be solved is:
\\[\\min_{u\\in\\mathbb{R}^{k}}\\delta_{\\mathbb{K}^{k}}(u)+\\frac{1}{2}\\|Au-y\\|^{2}, \\tag{13}\\]
where \\(A=\\text{diag}(a_{1},a_{2},\\ldots,a_{k})\\) is a positive definite diagonal matrix of size \\(k\\times k\\), \\(\\mathbb{K}^{k}\\) is the unit simplex in \\(\\mathbb{R}^{k}\\), and \\(y\\in\\mathbb{R}^{k}\\) is a given vector.
**Theorem 1**: _The solution \\(u=(u_{1},u_{2},\\ldots,u_{k})\\) of (13) is:_
\\[u_{i}=\\max\\left(\\frac{a_{i}y_{i}-\\lambda}{a_{i}^{2}},0\\right), \\tag{14}\\]
_where \\(\\lambda\\) is the unique number satisfying:_
\\[\\sum_{i=1}^{k}\\max\\left(\\frac{a_{i}y_{i}-\\lambda}{a_{i}^{2}},0\\right)=1 \\tag{15}\\]
The proof of Theorem 1 is shown in the Appendix. The most computationally expensive part of solving (15) is sorting the sequence \\(\\left(a_{i}y_{i}\\right)_{1\\leq i\\leq k}\\) of length \\(k\\), which is trivial since \\(k\\), the number of clusters, is typically a small number.
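A direct way to realize Theorem 1 numerically is to solve (15) for \\(\\lambda\\) by bisection, since the left-hand side is continuous and nonincreasing in \\(\\lambda\\). The sketch below is one such implementation under that observation; ordinary simplex projection, i.e., \\(\\text{proj}_{U}\\), is recovered by taking all \\(a_{i}=1\\).

```python
import numpy as np

def preconditioned_simplex_projection(a, y, n_bisect=100):
    """Solve min_{u in simplex} 0.5 * ||A u - y||^2 with A = diag(a), a_i > 0 (Theorem 1)."""
    b = a * y
    lo = b.min() - (a ** 2).max()      # guarantees h(lo) >= 1
    hi = b.max()                       # h(hi) = 0
    for _ in range(n_bisect):
        lam = 0.5 * (lo + hi)
        h = np.maximum((b - lam) / a ** 2, 0.0).sum()   # left-hand side of (15)
        if h > 1.0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.maximum((b - lam) / a ** 2, 0.0)          # u_i from (14)
```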
## V Numerical Results
### _Comparison Methods and Experimental Setup_
All experiments were run on a Linux machine with an Intel Core i5 at 3.3 GHz and 2GB of DDR3 RAM. The following unsupervised algorithms have been tested:
1. **(Spherical) K-means**: Built in MatLab Code.
2. **NMF**: Non-negative Matrix Factorization [40].
3. **H2NMF**: Hierarchical Rank-2 Non-negative Matrix Factorization [5].
4. **MBO**: Graph Merriman-Bence-Osher scheme [23, 24]. The code is run 10 times on each dataset, and the best result is chosen.
5. **NLTV2**: Nonlocal Total Variation, quadratic model with random pixel initialization.
6. **NLTV1(H2NMF/K-means)**: Nonlocal Total Variation, linear model with endmembers/centroids extracted from H2NMF/K-means.
Every algorithm can be initialized via the same procedure as that in \"K-means++\" [41], and the name \"Algorithm++\" is used if the algorithm is initialized in such a way. For example, \"NLTV2++\" means nonlocal total variation, quadratic model with \"K-means++\" initialization procedure.
The algorithms are compared on the following datasets:
1. **Synthetic Dataset:** This dataset2 contains five endmembers and \\(162\\) spectral bands. The 40,000 abundance vectors were generated as a sum of Gaussian fields. The dataset was generated using a Generalized Bilinear Mixing Model (GBM): Footnote 2: Available at [http://www.math.ucla.edu/~weizhu731/](http://www.math.ucla.edu/~weizhu731/)
Footnote 3: Available at [http://www.ehu.eu/csevinco/index_php?title=Hyperspectral_Remote_Sensing_Scenes](http://www.ehu.eu/csevinco/index_php?title=Hyperspectral_Remote_Sensing_Scenes)
\\[y=\\sum_{i=1}^{p}a_{i}e_{i}+\\sum_{i=1}^{p-1}\\sum_{j=i+1}^{p}\\gamma_{ij}a_{i}a_ {j}e_{i}\\odot e_{j}+n,\\]
where \\(\\gamma_{ij}\\) are chosen uniformly and randomly in the interval \\([0,1]\\), \\(n\\) is the Gaussian noise, with an SNR of 30 dB, and \\(a_{i}\\) satisfies: \\(a_{i}\\geq 0\\), and \\(\\sum_{i=1}^{p}a_{i}=1\\).
2. **Salinas-A Dataset:** The Salinas-A scene4 is a small subscene of the Salinas image, which was acquired by the AVIRIS sensor over Salinas Valley. It contains \\(86\\times 83\\) pixels and \\(204\\) bands. The ground truth includes six classes: broccoli, corn, and four types of lettuce. Footnote 4: Available at [http://www.agc.amry.mil/](http://www.agc.amry.mil/).
3. **Urban Dataset:** The Urban dataset5 is from HYperspectral Digital Imagery Collection Experiment (HYDICE), which has \\(307\\times 307\\) pixels and contains \\(162\\) clean spectral bands. This dataset only has six classes of material: road, dirt, house, metal, tree, and grass. Footnote 5: Available at [http://www.ehu.eu/csevinco/index_php?title=Hyperspectral_Remote_Sensing_Scenes](http://www.ehu.eu/csevinco/index_php?title=Hyperspectral_Remote_Sensing_Scenes)
4. **San Diego Airport Dataset:** The San Diego Airport (SDA) dataset5 is provided by the HYDICE sensor. It comprises \\(400\\times 400\\) pixels and contains \\(158\\) clean spectral bands. There are seven types of material: trees, grass, three types of road surfaces, and two types of rooftops [5]. The RGB image with cluster labels is shown in Fig. 7. Footnote 5: Available at [http://www.math.ucla.edu/~weizhu731/](http://www.math.ucla.edu/~weizhu731/)
5. **Chemical Plume Dataset:** The chemical plume dataset6 consists of frames taken from a hyperspectral video of the release of chemical plumes provided by the John Hopkins University Applied Physics Laboratory. The image has \\(128\\times 320\\) pixels, with \\(129\\) clean spectral bands. There was no ground truth provided for this data, so a segmentation of four classes is assumed: chemical plume, sky, foreground, and mountain. A fifth cluster is added so that the noise pixels would not interfere with the segmentation [23]. Footnote 6: Available at [http://www.math.ucla.edu/~weizhu731/](http://www.math.ucla.edu/~weizhu731/)
6. **Pavia University Dataset:** The Pavia University dataset is collected by the ROSIS sensor. It contains \\(103\\) clean spectral bands and \\(610\\times 340\\) pixels, and comprises \\(9\\) classes of material.
7. **Indian Pines Dataset:** The Indian Pines dataset was acquired by AVIRIS sensor and consists of \\(145\\times 145\\) pixels, with 200 clean spectral bands. The available ground truth is labeled into 16 classes.
8. **Kennedy Space Center Dataset:** This dataset was gathered by the NASA AVIRIS sensor over the Kennedy Space Center, Florida. A subscene of the western shore of the center is used in the numerical experiment. \\(12\\) classes of different materials are reported in the datacube of size \\(512\\times 365\\times 176\\).
K-means and NMF are non-parametric, and the parameter setups of H2NMF and the MBO scheme are described in [5] and [23, 24]. The key parameters \\(\\lambda\\) and \\(\\mu\\) in the NLTV models are determined in the following way: (1) \\(\\lambda\\) is chosen such that the data fidelity term is around \\(10\\) times larger than the NLTV regularizing term \\(\\|\\nabla_{w}u\\|_{L^{1}}\\); (2) \\(\\mu\\) is chosen such that the Euclidean distances between different endmembers are roughly \\(10\\) times smaller than the cosine distances.
Table I displays the parameters chosen for the numerical experiments. The large variance of the parameter scales results from the variety of image sizes and scales. A sensitivity analysis over the parameters is presented in Section V-G.
### _Synthetic Dataset and Salinas-A Dataset_
All the algorithms are first tested on the synthetic dataset. The classification results are shown in Table II and Fig. 4. Both NLTV algorithms have better overall accuracy than all of the other methods, although they took a longer time to converge. The quadratic model classified the image almost perfectly.
The visual classification results and overall accuracies of the Salinas-A dataset are shown in Fig. 5 and Table II. Both NLTV methods performed at higher accuracy compared to other methods. The linear model improved the result of K-means by incorporating spatial information of the dataset, and the quadratic model only took 4 iterations to converge.
### _Urban Dataset_
The Urban dataset comes with a hand-labeled "ground truth" that permits a numerical analysis of accuracy. As this "ground truth" was hand-corrected, it does not necessarily represent the most accurate segmentation of the image; however, it provides a basis for quantitative comparison.
After running all of the compared algorithms with six clusters, we noticed that they all split "grass" into two different clusters (one of them corresponding to a mixture of grass and dirt), while treating "road" and "metal" as the same. To obtain a reliable overall accuracy of the classification results, the two "grass" clusters are combined for every algorithm, hence obtaining classification results for 5 clusters: "grass", "dirt", "road+metal", "roof", and "tree".
The overall classification accuracies and run-times are displayed in Table III. As can be seen, the proposed NLTV algorithms performed consistently better with comparable run-time. It is easier to see visually in Fig. 6 that the NLTV algorithm performed best of the five algorithms tested; specifically, the NLTV algorithm alone distinguished all of the dirt beneath the parking lot and the intricacies of the road around the parking lot. The total variation regularizer also gives the segmented image smoother and more distinct edges, allowing easier human identification of the clusters.
Fig. 4: Clustering results for the synthetic dataset generated by 5 endmembers. The first image on the left is the ground truth, and the remaining six images are the clustering results of the corresponding algorithms.
Fig. 5: Clustering results for the Salinas-A dataset. The first image on the left is the ground truth, and the remaining six images are the clustering results of the corresponding algorithms.
### _San Diego Airport Dataset_
The classification results and computational run-times are shown in Fig. 7 and Table IV. No ground truth classification is available for this HSI, but after examining the spectral signatures of various pixels in the scene, we managed to pinpoint some errors that were common for each algorithm. We will not go into detail about the NMF and H2NMF algorithms, which clearly do not perform well on this dataset. K-means obtained some decent results, but split the rooftops of the four buildings on the bottom right of the image into two distinct clusters, and failed to separate two different road types (clusters 5 and 6). The MBO scheme failed on two counts: it did not properly segment two different road surfaces (clusters 6 and 7), and did not account for the different rooftop types (clusters 3 and 4). The linear NLTV model with H2NMF initialization is significantly more accurate than H2NMF and MBO. It successfully picked out two different types of roof (clusters 3 and 4) and two different types of road (clusters 6 and 7), although the other type of road (cluster 5) is mixed with one type of roof (cluster 3). The best result was obtained by using the NLTV quadratic model with random initialization, with the only problem being that tree and grass (clusters 1 and 2) are mixed together. However, the mixing of grass and tree is actually the case for all the other algorithms. This means that the NLTV quadratic model alone was able to identify six of the seven clusters correctly.
Fig. 6: Clustering results for the Urban dataset. Five clusters including rooftops, grass, trees, dirt, and "road+metal" are generated by the algorithms.
Fig. 7: Clustering results for the San Diego Airport dataset. The first image on the left is the RGB image, and the remaining six images are the clustering results of the corresponding algorithms.
### _Chemical Plume Dataset_
Analyzing images of chemical plumes is more difficult because of the diffusive nature of the plume. All the algorithms are run on the image before denoising, and the results are shown in Figure 8. The unmixing methods such as NMF and H2NMF do not perform satisfactorily on this dataset. MBO++, K-means++, and NLTV2++ can all properly identify the chemical plume. Note that NLTV with H2NMF as centroid initialization outperforms H2NMF as a classification method. We have to point out that the NLTV quadratic model is not so robust with respect to the centroid initialization on this dataset, even with a "K-means++" type procedure. But this is also the case for all the other tested algorithms. The MBO scheme, which was specifically designed for this dataset [23], does seem to have the highest robustness among all the algorithms.
### _Pavia University, Indian Pines, and Kennedy Space Center Dataset_
The Pavia University (9 clusters), Indian Pines (16 clusters), and Kennedy Space Center (12 clusters) datasets are frequently used to test supervised classification algorithms. To save space, we only report the numerical overall accuracies in Table V. As can be seen, all the competing unsupervised algorithms performed poorly on these three datasets. Different clusters were merged and the same clusters were split in various fashions by all the algorithms, which rendered the numerical accuracies unreliable.
The computational run-times for these three datasets are listed in Table IV. Unfortunately, as the number of clusters increases, the computational complexity of the quadratic model grows exponentially. The reason is that the number of grid points (\\(\\delta\\) in Fig. 1) on the unit simplex grows exponentially as the dimension of the simplex increases. Therefore, when the number of clusters is large enough (greater than 10), the stable simplex clustering becomes the most time-consuming part of the quadratic model. On these three datasets, we sacrificed the accuracy of the quadratic model by creating a coarser mesh on the unit simplex.
The reason why NLTV, as well as all the other competing unsupervised algorithms, performed poorly on these three datasets is two-fold. First, when the number of classes is too large in an HSI covering a large geographic location, the variation of spectral signatures within the same class cannot be neglected when compared to the difference between the constitutive materials, especially when the endmembers themselves are similar. As a result, the unsupervised algorithms tend to split a ground-truth cluster with large variation in spectral signatures and merge clusters with similar centroids or endmembers. Second, there might exist more distinct materials in the image than reported in the ground truth. Therefore the algorithms might detect those unreported materials, because no labeling has been used in these unsupervised algorithms. Thus we can conclude that NLTV, as well as the other unsupervised methods reported in this paper, is not suitable for such images at the current stage. Modifying the NLTV algorithm to work for such datasets would be a direction of future work.
Fig. 8: Clustering results for the Chemical Plume dataset.
### _Sensitivity Analysis over Key Model Parameters_
Finally, a sensitivity analysis is provided over the parameters \\(\\lambda\\) and \\(\\mu\\) in the NLTV models. As mentioned in Section V-A, \\(\\lambda\\) and \\(\\mu\\) are chosen to balance the scale of the regularizing and fidelity terms or the cosine and Euclidean distances. Fig. 9 displays the robustness of the NLTV algorithm on the Synthetic, Urban, and Salinas-A datasets with respect to \\(\\lambda\\) and \\(\\mu\\) varied over two orders of magnitude. Centroid initialization remains identical as \\(\\lambda\\) and \\(\\mu\\) are changing. It is clear that the NLTV algorithm is fairly robust with respect to \\(\\lambda\\) on all three datasets. The algorithm is also relatively robust with respect to \\(\\mu\\) on the Synthetic and Salinas-A datasets. As for the Urban dataset, a significant decay in accuracy can be observed as \\(\\mu\\) increases. This phenomenon is due to the fact that a larger \\(\\mu\\) causes the Euclidean distance to be the dominant one, which is not ideal in the presence of atmospheric interference in the Urban dataset. A smaller \\(\\mu\\) also leads to lower accuracy on the Urban dataset, which results from the similarity of the "road" and "dirt" clusters measured in cosine distance. Overall, a reasonable robustness with respect to the key parameters \\(\\lambda\\) and \\(\\mu\\) can be concluded from these three tests.
Similar robustness can be observed on other datasets except for the Chemical Plume. Fig. 10 shows the sensitivity of the result with respect to \\(\\mu\\). All the centroids are initialized using H2NMF, and vastly different results occurred as \\(\\mu\\) changes. This could be due to the presence of significant noise.
## VI Conclusion
In this paper we present the framework for a nonlocal total variation method for unsupervised HSI classification, which is solved with the primal-dual hybrid gradient algorithm. A linear and a quadratic version of this model are developed; the linear version updates more quickly and can refine results produced by a centroid extraction algorithm, and the quadratic model with the stable simplex clustering method provides a robust means of classifying HSI with randomized pixel initialization.
Fig. 9: This figure shows the robustness of the NLTV algorithm with respect to \\(\\lambda\\) and \\(\\mu\\). Centroid initialization remains identical as \\(\\lambda\\) and \\(\\mu\\) are changing. \\(\\lambda_{0}\\) and \\(\\mu_{0}\\) are the optimal values specified in Section V-A. The overall accuracies of the Synthetic, Urban, and Salinas-A datasets are displayed.
Fig. 10: The sensitivity of the NLTV algorithm with respect to \\(\\mu\\) in the plume dataset. All the tests used the same centroid initialization (H2NMF).
The algorithm is tested on a synthetic dataset and seven real-world datasets, with promising results. The proposed NLTV algorithm consistently performed with the highest accuracy on synthetic and urbanized datasets such as Urban, Salinas-A, and the San Diego Airport, both producing smoother results with easier visual identification of the segmentation and distinguishing classes of material that other algorithms failed to differentiate. The NLTV algorithm also performed well in anomaly detection scenarios like the Chemical Plume dataset; with proper initialization, it performed on par with the Merriman-Bence-Osher scheme developed specifically for this dataset. However, NLTV, as well as the other unsupervised algorithms, failed to achieve satisfactory results on datasets with a relatively large number of clusters. The run-times of the NLTV algorithms are generally comparable to the other methods, and the consistently higher accuracy on different types of datasets suggests that this technique is a more robust and precise means of classifying hyperspectral images with a moderate number of clusters.
## Appendix
Proof of Theorem 1: Problem (13) is equivalent to:
\\[\\min_{\\sum_{i=1}^{k}u_{i}=1}\\delta_{\\mathbb{R}_{+}^{k}(u)}+\\frac{1}{2}\\|Au-y \\|_{2}^{2}, \\tag{16}\\]
where \\(\\mathbb{R}_{+}^{k}=\\{u\\in\\mathbb{R}^{k}:u_{i}\\geq 0\\}\\) is the nonnegative orthant of \\(\\mathbb{R}^{k}\\). The Lagrangian of (16) is:
\\[\\mathcal{L}(u,\\lambda)=\\sum_{i=1}^{k}\\left(\\frac{1}{2}\\left|a_{i}u_{i}-y_{i} \\right|^{2}+\\delta_{\\mathbb{R}_{+}}(u_{i})+\\lambda u_{i}\\right)-\\lambda.\\]
If \\(u^{*}\\) is a solution of (16), the KKT conditions [43] imply that there exists a \\(\\lambda\\) such that:
\\[u^{*}=\\arg\\min_{u}\\mathcal{L}(u,\\lambda)=\\arg\\min_{u_{i}\\geq 0}\\sum_{i=1}^{k} \\frac{1}{2}a_{i}^{2}\\left(u_{i}+\\frac{\\lambda-a_{i}y_{i}}{a_{i}^{2}}\\right)^{2}.\\]
Therefore \\(u_{i}^{*}=\\max\\left(\\frac{a_{i}y_{i}-\\lambda}{a_{i}^{2}},0\\right)\\). Meanwhile, the primal feasibility requires:
\\[\\sum_{i=1}^{k}u_{i}^{*}=\\sum_{i=1}^{k}\\max\\left(\\frac{a_{i}y_{i}-\\lambda}{a_{i }^{2}},0\\right)=1.\\]
And this proves Theorem 1.
## Acknowledgment
The authors would like to thank Zhaoyi Meng and Justin Sunu, for providing and helping with the MBO code.
## References
* [1] C.-I. Chang, _Hyperspectral imaging: techniques for spectral detection and classification_. Springer Science & Business Media, 2003, vol. 1.
* [2] J. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, \"Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,\" _Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of_, vol. 5, no. 2, pp. 354-379, 2012.
* [3] N. Gillis and S. Vavasis, \"Fast and robust recursive algorithmsfor separable nonnegative matrix factorization,\" _Pattern Analysis and Machine Intelligence, IEEE Transactions on_, vol. 36, no. 4, pp. 698-714, 2014.
* [4] S. Jia and Y. Qian, \"Constrained nonnegative matrix factorization for hyperspectral unmixing,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 47, no. 1, pp. 161-173, 2009.
* [5] N. Gillis, D. Kuang, and H. Park, \"Hierarchical clustering of hyperspectral images using rank-two nonnegative matrix factorization,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 53, no. 4, pp. 2066-2078, 2015.
* [6] M. Soret, S. L. Bacharach, and I. Buvat, \"Partial-volume effect in pet tumor imaging,\" _Journal of Nuclear Medicine_, vol. 48, no. 6, pp. 932-945, 2007.
* [7] N. Dobigeon, J.-Y. Tourneret, C. Richard, J. Bermudez, S. McLaughlin, and A. Hero, \"Nonlinear unmixing of hyperspectral images: Models and algorithms,\" _Signal Processing Magazine, IEEE_, vol. 31, no. 1, pp. 82-94, 2014.
* [8] R. Heylen, M. Parente, and P. Gader, \"A review of nonlinear hyperspectral unmixing methods,\" _Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of_, vol. 7, no. 6, pp. 1844-1868, 2014.
* [9] F. Melgani and L. Bruzzone, \"Classification of hyperspectral remote sensing images with support vector machines,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 24, no. 8, pp. 1778-1790, 2004.
* [10] G. Camps-Valls and L. Bruzzone, \"Kernel-based methods for hyperspectral image classification,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 43, no. 6, pp. 1351-1362, 2005.
* [11] M. Fauvel, J. Benediktsson, J. Chanussot, and J. Sveinsson, \"Spectral and spatial classification of hyperspectral data using svms and morphological profiles,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 46, no. 11, pp. 3804-3814, 2008.
* [12] B. Demir and S. Erturk, \"Hyperspectral image classification using relevance vector machines,\" _Geoscience and Remote Sensing Letters, IEEE_, vol. 4, no. 4, pp. 586-590, 2007.
* [13] G. M. Foody, \"Rvmbased multiclass classification of remotely sensed data,\" _International Journal of Remote Sensing_, vol. 29, no. 6, pp. 1817-1823, 2008.
* [14] F. Mianji and Y. Zhang, \"Robust hyperspectral classification using relevance vector machine,\" _Geoscience and Remote Sensing, IEEE Transactions on_, vol. 49, no. 6, pp. 2100-2112, 2011.
* [15] J. Shi and J. Malik, \"Normalized cuts and image segmentation,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 22, no. 8, pp. 888-905, Aug 2000.
* [16] M. Stoer and F. Wagner, \"A simple min-cut algorithm,\" _J. ACM_, vol. 44, no. 4, pp. 585-591, Jul. 1997. [Online]. Available: [http://doi.acm.org/10.1145/263867.263872](http://doi.acm.org/10.1145/263867.263872)
* [17] A. Szlam and X. Bresson, \"A Total Variation-based Graph Clustering Algorithm for Cheeger Ratio Cuts,\" Tech. Rep. UCLA CAM Report 09-68, 2009.
* [18] X. Bresson and A. D. Szlam, \"Total variation, cheeger cuts,\" in _Proceedings of the 27th International Conference on Machine Learning (ICML-10)_, 2010, pp. 1039-1046.
* [19] A. L. Bertozzi and A. Flenner, "Diffuse interface models on graphs for classification of high dimensional data," _Multiscale Modeling & Simulation_, vol. 10, no. 3, pp. 1090-1118, 2012.
* 363, 1994.
* [21] C. Garcia-Cardona, E. Merkurjev, A. Bertozzi, A. Flenner, and A. Percus, \"Multiclass data segmentation using diffuse interface methods on graphs,\" _Pattern Analysis and Machine Intelligence, IEEE Transactions on_, vol. 36, no. 8, pp. 1600-1613, 2014.
* 34, 2014.
* [23] H. Hu, J. Sunu, and A. L. Bertozzi, _Energy Minimization Methods in Computer Vision and Pattern Recognition: 10th International Conference, EMMCVPR 2015, Hong Kong, China, January 13-16, 2015. Proceedings_. Cham: Springer International Publishing, 2015, ch. Multi-class Graph Mumford-Shah Model for Plume Detection Using the MBO scheme, pp. 209-222.
* [24] E. Merkurjev, J. Sunu, and A. Bertozzi, \"Graph MBO method for multiclass segmentation of hyperspectral stand-off detection video,\" in _Image Processing (ICIP), 2014 IEEE International Conference on_, 2014, pp. 689-693.
* [25] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," _Physica D: Nonlinear Phenomena_, vol. 60, no. 1-4, pp. 259-268, 1992.
* [26] T. Chan, S. Esedoglu, F. Park, and A. Yip, \"Recent developments in total variation image restoration,\" _Mathematical Models of Computer Vision_, vol. 17, 2005.
* [27] A. Buades, B. Coll, and J. M. Morel, \"A review of image denoising algorithms, with a new one,\" _Multiscale Modeling & Simulation_, vol. 4, no. 2, pp. 490-530, 2005.
* [28] G. Gilboa and S. Osher, \"Nonlocal operators with applications to image processing,\" _Multiscale Modeling & Simulation_, vol. 7, no. 3, pp. 1005-1028, 2009.
* [29] D. Zosso, G. Tran, and S. J. Osher, \"Non-local retinex--a unifying framework and beyond,\" _SIAM Journal on Imaging Sciences_, vol. 8, no. 2, pp. 787-826, 2015.
* [30] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," _Journal of Mathematical Imaging and Vision_, vol. 40, no. 1, pp. 120-145, 2010.
* [31] J. Zhang, W. Zhu, L. Wang, and N. Jiang, \"Evaluation of similarity measure methods for hyperspectral remote sensing data,\" in _Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International_, 2012, pp. 4138-4141.
* [32] J. H. Friedman, J. L. Bentley, and R. A. Finkel, \"An algorithm for finding best matches in logarithmic expected time,\" _ACM Trans. Math. Softw._, vol. 3, no. 3, pp. 209-226, Sep. 1977.
* [33] R. A. Brown, \"Building a balanced kd tree in o (kn log n) time,\" _arXiv preprint arXiv:1410.5420_, 2014.
* [34] M. Muja and D. G. Lowe, \"Fast approximate nearest neighbors with automatic algorithm configuration.\" _VISAPP (1)_, vol. 2, pp. 331-340, 2009.
* [35] M. Zhu and T. Chan, \"An efficient primal-dual hybrid gradient algorithm for total variation image restoration,\" Tech. Rep. UCLA CAM Report 08-34, 2008.
* [36] M. Zhu, S. J. Wright, and T. F. Chan, "Duality-based algorithms for total-variation-regularized image restoration," _Computational Optimization and Applications_, vol. 47, no. 3, pp. 377-400, 2008.
* [37] E. Esser, X. Zhang, and T. F. Chan, \"A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science,\" _SIAM Journal on Imaging Sciences_, vol. 3, no. 4, pp. 1015-1046, 2010.
* [38] D. MacKay, \"An example inference task: Clustering,\" in _Information Theory, Inference and Learning Algorithms_. Cambridge University Press, 2003, ch. 20, pp. 284-292.
* [39] D. Mumford and J. Shah, \"Optimal approximations by piecewise smooth functions and associated variational problems,\" _Communications on pure and applied mathematics_, vol. 42, no. 5, pp. 577-685, 1989.
* [40] J. Kim and H. Park, \"Fast nonnegative matrix factorization: An active-set-like method and comparisons,\" _SIAM Journal on Scientific Computing_, vol. 33, no. 6, pp. 3261-3281, 2011.
* [41] D. Arthur and S. Vassilvitskii, \"K-means++: The advantages of careful seeding,\" in _Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms_, ser. SODA '07. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2007, pp. 1027-1035. [Online]. Available: [http://dl.acm.org/citation.cfm?id=1283383.1283494](http://dl.acm.org/citation.cfm?id=1283383.1283494)
* 118, 2014.
* [43] S. Boyd and L. Vandenberghe, _Convex optimization_. Cambridge university press, 2004.
# The Impact of Top Management's Support on the Collaboration of Green Supply Chain Participants and Environmental Performance
Jungeun Lee 1 and Hye-Young Joo 2
1 Department of Business Administration, The Catholic University of Korea, Gyeonggi-do 14662, Korea; [email protected]
2 College of Business and Economics, Chung-Ang University, Seoul 06974, Korea; [email protected]
Received: 23 September 2020; Accepted: 28 October 2020; Published: 31 October 2020
## 1 Introduction
Over the years, companies have done business with little concern for the environmental impacts that business activities can bring, such as the waste and destruction of resources [1]. As awareness of such externalities, which are unintentionally caused by corporate activities, has grown, the scope of corporate responsibility stipulated by stakeholders has widened, and corporate social responsibility is now seen as a source of sustainable competitive advantage for companies [2].
Although social responsibility activities can be carried out at various levels [3], the supply chain management system, in which many stakeholders participate, is also one of the essential elements required for a company to gain a competitive edge. Sony's failure to manage its supply chain in 2001 illustrates why environmental management of the supply chain is important: cadmium, an environmentally harmful substance, was detected in the company's game machine parts, a total of 1.3 million game machines that had already been produced had to be recalled, and the huge losses were borne by the company [4]. Thus, overlooking environmental factors in supply chain management can be a significant risk to an enterprise. In particular, the negative effects of a company's products on the environment, that is, the externalities not intended by the enterprise, occur mainly in the procurement of raw materials and in the manufacturing and use of parts. Thus, cooperation in the green supply chain becomes more important than anything else [5,6].
Today, companies are faced with new challenges that must address the environmental and social issues that surround the enterprise in order to reduce the risks in the supply chain and continuously improve supply chain performance [7]. In this context, green supply chain management, a concept that incorporates environmental and social issues into traditional supply chain management, is attracting attention as a major way to create a sound and sustainable competitive advantage for the enterprise. Green supply chain management means a set of activities necessary to build a competitive supply chain, namely, adding the social and environmental themes, such as the use of harmful substances and a poor working environment in the manufacturing process, to the existing supply chain management concept, thereby minimizing the risks that may occur in the value chain and improving the performance of the supply chain [8,9].
This green supply chain management is considered to be one of the essential elements for the sustainable development and survival of companies, and more and more companies are striving to build green supply chains. In this respect, academia has recently focused on green supply chain management, and research on it has been actively conducted [10]. For example, researchers have examined supply chain collaboration measures [11], studied the variables constituting green supply chains [12] and the factors promoting the introduction of green supply chains [13,14], and studied the correlation between green supply chain management and firm performance [10,15,16], expanding the scope of research around these topics.
The role of top management is crucial in the study of green supply chain management and performance [17,18]. This is because top management has the greatest influence on the organization, making major decisions in the organization [17,18,19], and their decisions affect the structure and performance of the organization [20].
Top management is the most important group in determining the strategy and direction of a company and in predicting its performance. In particular, the influence of top management is one of the factors that should be considered in decisions and outcomes in the production/operations area, one of the key functions of an enterprise. However, rather than focusing on the impact of top management alone, scholars have tended to examine broad internal management factors that merely include top executives. In that case, it is hard to isolate the significance and impact of top management, because the variable acts as a general proxy for internal management, and any observed effect cannot be attributed uniquely to top management.
Therefore, this study focuses on the impact of top management's supportive attitude toward green supply chain management, a psychological characteristic of top executives, on environmental performance. Specifically, a high level of top management support for green supply chain management (SCM) is expected to promote green SCM implementation and thus achieve high environmental performance. In addition, the key to successfully leading green SCM lies in working with supply chain partners. In general, collaboration among companies in a supply chain is a relationship in which participants set common goals, share information and resources [21], and share not only the rewards but also the corresponding responsibilities [22,23].
Soosay et al. [24] suggested that collaboration in the supply chain is a capability that results in continuous innovation of the enterprise. In green SCM, collaboration among these participating companies has been a key issue for explaining the sustainable competitiveness of these companies [25,26,27]. It is interesting to note that there is relatively little research on the relationship between the top management's support, which is the most crucial variable explaining the competitiveness of green SCM, and the level of cooperation with participating companies in green SCM.
In the meantime, studies related to green SCM have not examined the attitudes of top management independently; instead, these attitudes are generally subsumed under the comprehensive concept of "internal environmental management" [12]. However, "internal environmental management" or "environmental management orientation" is a concept that integrates the attitude of the top management with the programs or systems of the enterprise. In that case, it becomes more difficult to identify an independent influence of top management on green SCM, because each company's internal factors have a mixed effect on the outcome variables.
A study by Burki et al. (2014) deals with the role of top management and environmental collaboration with participating companies from a perspective similar to ours. However, compared with that work, (1) it is difficult to know the situation of the suppliers because their target companies are confined to customers; (2) this study controls for financial performance, industry type, the period since green SCM was introduced, and the firm's position in the green supply chain, which their study did not control for; and (3) while this study deals with environmental performance as a dependent variable, their research approached performance from an innovation point of view, such as process innovation and managerial innovation. Therefore, this study is different from previous studies. In addition, Chu et al. (2019) analyzed the impact of top management on green SCM and corporate performance, but the path they examine runs from top management toward overall green SCM and social capital. Kumar and Paraskevas (2019) also deal with the characteristics of the top management team to explain environmental strategy, focusing on observable characteristics such as age, gender, and experience, whereas this study focuses on an unobservable characteristic of top management, namely their attitudes.
To sum up, the aforementioned literature has rarely explained the role of environmental collaboration, which is one of the most necessary requirements for increasing environmental contribution (environmental performance). In addition, prior studies have limitations in showing how top management directly promotes collaboration among companies. With regard to this, the main purpose of this study is to identify the impact of environmental collaboration on environmental performance and to analyze the direct role of top management in increasing such environmental collaboration. In particular, by analyzing the mediating effects of environmental collaboration not only with suppliers but also with customers in the supply chain, we seek to clarify, from a more integrated perspective, the mechanism through which the support of top management leads to environmental performance. Put simply, this paper attempts to answer three fundamental research questions:
**Research question 1.**_How does the support of top management affect environmental collaboration among companies in the green supply chain?_
**Research question 2.**_Is environmental performance significantly increased by environmental collaboration between companies in the green supply chain?_
**Research question 3.**_Does environmental collaboration with customers and their suppliers play a mediating role in the green supply chain?_
Using a 2014 sample of 301 Korean manufacturers, we found the following empirical results. First, top management's support for the green supply chain increases environmental collaboration with supply chain participants. Second, top management's support for the green supply chain also increases corporate environmental performance. Lastly, environmental collaboration with supply chain partners significantly mediates the relationship between top management support and environmental performance.
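To make the mediation claim in the third result concrete, the indirect-effect logic can be written as a simple path formulation; the symbols below (TMS for top management support, EC for environmental collaboration, EP for environmental performance) and the linear specification are illustrative assumptions for exposition and do not reproduce the exact model reported in the methodology section.
\\[EC=\\alpha_{0}+\\alpha_{1}\\,TMS+\\varepsilon_{1},\\qquad EP=\\beta_{0}+\\beta_{1}\\,TMS+\\beta_{2}\\,EC+\\varepsilon_{2}\\]
Mediation is supported when the indirect effect \\(\\alpha_{1}\\beta_{2}\\) is significantly different from zero, meaning that part of the total effect of top management support on environmental performance is transmitted through environmental collaboration.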
Our paper makes several theoretical and practical contributions. Theoretically, this study extends the green SCM literature by focusing on the effect of a corporation's top management team. Our empirical results indicate that top management's characteristics and attitudes can affect the processes and outcomes of firms, supporting the claims of the existing upper echelon theory. Moreover, we separate out the independent effect of top management support on green SCM, which has so far been considered comprehensively as a component of the company's internal environmental management. In addition, this study proposes a mechanism that connects the support of a corporate top management team to its environmental performance, considering various control variables that could affect the dependent variables. Practically, our study provides fresh insight for companies on the role of top managers in promoting the implementation of functional-level strategies. Our results indicate that top management's active supporting attitude is crucial for the smooth implementation of the company's functional strategy regarding green SCM.
The rest of this paper is organized as follows. The next section provides the theoretical background to the work. This is followed by a discussion of the direct effect of top management's support on the environmental collaboration with the supply chain participants and the environmental performance. This study then considers the mediating effect of the environmental collaboration with the supply chain participants. Next, the study's methodology is explained, and the analytical results presented. The final section provides a discussion of the results and draws conclusions.
## 2 Theoretical Background
### Top Management's Support and Environmental Collaboration with Participants
The top management of a firm is regarded as its top decision maker and most influential actor [31]. Therefore, the attitudes and support levels of top executives have a significant impact on the attitude and participation of organizational members or partners in accepting practices, management activities, and systems [32]. In addition, without the active support or willingness of top management toward a company's specific activities, it is almost impossible for the organizational members who carry out those activities, or for the firm's partners, to provide a high level of collaboration in their execution.
Thus, the willingness and active support of top executives for the implementation of specific activities play an important role in setting the overall direction for internal members and external partners and in the formation of a value chain. They also play an essential role in promoting collaboration between enterprises [33]. In other words, in order to discuss the impact of eco-friendly collaboration with supply chain participants on environmental performance within the green supply chain, consideration should be given to the degree of support and attitude of the internal top management. Top management's support for specific activities of an organization plays an important role in overcoming the resistance to change that may exist within and outside the company by promoting information sharing and communication among organizations [34]. However, compared to traditional supply chain management, implementing green supply chain management involves more effort and expense. In addition, since all changes are accompanied by resistance, some organizational members or supply chain participants may resist or adopt a wait-and-see attitude toward these changes, internally or externally, irrespective of the environmental or historical need for environmentally friendly supply chain management [35].
Under these circumstances, if the top executives of an enterprise show positive support and a willingness to change, and this is recognized by the internal and external members of the organization, the members will actively share relevant information and participate in the process of justifying and accepting the change themselves, which increases their degree of collaboration with the change [36]. Green supply chain management collaboration is defined as the degree of interaction between manufacturers developing and implementing pollution prevention technologies and their key suppliers and key customers [11]. Therefore, we hypothesize the following:
**Hypothesis 1.**_The degree of top management's support in the green supply chain has a positive impact on the level of environmental collaboration with supply chain participants._
**Hypothesis 2.**_The degree of top management's support in the green supply chain has a positive impact on the level of environmental collaboration with supply chain suppliers._
**Hypothesis 3.**_The degree of top management's support in the green supply chain has a positive impact on the level of environmental collaboration with supply chain customers._
### Top Management's Support and Environmental Performance
According to the upper echelon theory, a company or organization grows and evolves to reflect the characteristics of its top management [31]. In particular, CEOs have authority over corporate management on behalf of shareholders and are also responsible for their performance. In addition, top executives, including top management, are the main decision makers in an enterprise, and the decision they make determines the performance of the enterprise and the maintenance or abolition of the enterprise [37; 38]. The effects of top management's characteristics and behavior on corporate behavior and performance--corporate financial performance, innovation performance, and social performance--have been steadily studied by scholars who have paid attention to the importance of top management. The upper echelon theory argues that psychological characteristics, such as aggressiveness and stability, as well as demographic characteristics, such as the age and gender of the CEO and top management, can have a significant impact on the selection of strategic activities and on corporate performance [31]. Zhao et al. [39] analyzed the relationship between CEOs' psychological factors and firm performance, and found that CEOs' sincerity, openness, emotional stability, and extroversion have a positive effect on corporate performance. Papadakis and Barwise [40] found that top management's support and aggressiveness in corporate activities is an important explanatory factor for improved management performance. In addition, Abatecola et al. [41] conclude that the emotional stability, extroversion, authenticity, and strategic aggressiveness of top management are highly correlated with corporate performance.
Based on the results of studies that have focused on the psychological characteristics of top executives, it can be assumed that the support of a company's top management plays a crucial role in achieving its original purpose and goals. Saraph et al. [42] argue that one of the key success factors for Total Quality Management (TQM) activity is the role of the CEO. In addition, Sila [43] found that the degree of commitment of top management, which indicates how strongly a CEO supports and devotes resources to a specific activity, can lead to an improvement in performance through the successful execution of the company's TQM. From this perspective, it can be seen that the top management's active attitude and support are needed to achieve good environmental performance by implementing a green supply chain.
When companies want to implement green supply chain management, they need additional equipment and systems, which involve substantial resources, costs, and effort. This requires companies to modify their existing practices and establish other business activities. Top management's support is essential for well-established activities in the green supply chain. In fact, the more interest senior management shows in a company's specific activities and tasks, the better those activities tend to be carried out [44]. When the top executives of a company show interest in and support for specific activities, they allocate sufficient resources to those activities when formulating budgets, show active interest in the planning and execution of production departments, and support their activities.
Green supply chain management is a relatively recent concept for long-term survival, sustainability, and competitiveness of an organization. Here, active support from top management leads to smooth implementation of green SCM and, as a result, the company will achieve improved environmental performance and sustainability. With that in mind, we postulate the following:
**Hypothesis 4**: _Top management's support for the green supply chain has a positive (+) relationship with corporate environmental performance._
### Mediating Role of Environmental Collaboration with Participants
In order to smoothly implement a green supply chain, not only the company's own efforts but also the efforts of the suppliers included in the enterprise's value chain are necessary [45]. Since no participant can exist alone in the value chain of an enterprise, it is most important to establish a network of collaboration with participating companies to achieve a green supply chain. From this point of view, we expect that the level of green collaboration with supply chain participants can serve as a bridge between the level of top management's support for green supply chain management and environmental performance.
Green supply chain management can be described as the plans and activities through which a company collaborates with its supply chain participants, coordinating and cooperating on environmental problems to enhance the company's environmental performance [46; 47].
The coordination and cooperation between supply chain participants meant here is an extended concept that includes both the upstream and downstream of the supply chain as well as the closed loop of the supply chain [48; 49]. In other words, environmental collaboration with the supply chain participants means that suppliers located upstream of the supply chain and customer companies located downstream jointly plan and implement environmental management and solve environmental problems together [50]. It is important for companies to engage directly with their suppliers and customers to jointly establish plans to improve and develop the green supply chain and to find solutions to its problems [49].
Such collaboration along the supply chain provides suppliers and customers with the information necessary for eco-friendly management, develops a mutual understanding of environmental management, and supports joint technology development, thereby achieving a green supply chain [11; 46].
A high degree of collaboration with green supply chain participants can help suppliers and customers develop their capacity to respond to environmental issues by acting as a motivator to become more mindful of, and to improve, environmental performance [45]. Achieving a high level of collaboration with suppliers and customers within a green supply chain affects the dissemination and integration of knowledge among the partners, thereby providing valuable knowledge on eco-friendly management to all green supply chain participants, helping to shape a proactive environmental management orientation, and improving environmental management capabilities [46]. Furthermore, through this process, the increased environmental response capacity of the supply chain participants as a whole has a positive impact on enhancing the company's eco-efficiency performance [46]. Green et al. [51] found that environmental cooperation diffuses within the green supply chain, resulting in higher environmental performance. Taken together, the active support and attitude of top management toward the green supply chain can help motivate the value chain participants and the internal members of the organization to promote environmental collaboration both internally and externally, making it possible to successfully implement green supply chain management.
Collaboration among all members in the green supply chain and efforts to implement a green supply chain will enhance a company's eco-efficiency by strengthening its environmental management capabilities. In light of the above discussions, we hypothesize the following:
**Hypothesis 5**.: _The degree of environmental collaboration among supply chain participants will mediate the relationship between top management's support for green supply chains and environmental performance._
**Hypothesis 6**.: _The degree of environmental collaboration with suppliers will mediate the relationship between top management's support for green supply chains and environmental performance._
**Hypothesis 7**.: _The degree of environmental collaboration with customers will mediate the relationship between top management's support for green supply chains and environmental performance._
To sum up, this paper proposes to investigate a research model (see Figure 1).
## 3 Methods
### Sample Selection
We collected data from companies in manufacturing industries in South Korea. The Korean Business Directory 2013, issued by the Korea Chamber of Commerce and Industry (KCCI), was used to identify Korean manufacturing companies, and a total of 27,086 manufacturers were identified. Among them, 500 manufacturers operating mainly in Seoul were selected as the research sample due to accessibility and ease of the survey. To obtain participants, we explained the purpose of this study and the survey via email and phone to the 500 manufacturers. Then, the questionnaire was sent to the 500 companies by e-mail. With the assistance of the companies, the questionnaires were answered by the executives in charge of environmental management or green supply chain management within each company. In this way, we were able to secure reliability and accuracy in the questionnaire responses.
### Data Collection
The survey was conducted in 2014. During this period, a total of 500 questionnaires were distributed, of which 324 copies were collected, a return rate of 64.8%. Data obtained from 301 questionnaires were used for the statistical analysis, with 23 questionnaires excluded as unsuitable because the same response value was repeated continuously or many values were missing. In addition, given the simplicity of the model, it was expected that a meaningful hypothesis test could be performed with the remaining sample of 301 responses.
The industry composition of the sample was 15.7% medical and pharmaceuticals, 15.0% electronics and communication equipment, and 11.7% chemicals and plastics. In terms of the number of employees, the 500-800 band was the largest group with 21.3%, followed by 100-299 with 15.3%, 50-99 with 18.7%, and 10-49 with 15.7%. Regarding the year of establishment, companies founded 30-50 years ago were the most common (45.0%), followed by 10-29 years (36.0%). The positions of the respondents were, in order, manager (34.3%), assistant manager (30.7%), and deputy manager (15.0%). SPSS 21.0 was used as the statistical package to test the research hypotheses.
Figure 1: The research model.
### Measures
#### 3.3.1 Independent Variable
Support from top management is defined as \"the attitude of top management to support green supply chain management\" [12]. In order to measure the top management support for green SCM, we used Zhu et al.'s [12] items that include \"our top management team supports green SCM\" and \"our top management team commits green SCM\". A 7-point Likert scale was used, ranging from 1 (\"strongly disagree\") to 7 (\"strongly agree\").
#### 3.3.2 Mediating Variables
The level of environmental collaboration with supply chain participants, defined as the cooperation with suppliers and customers in order to achieve environmental goals in the green SCM, was measured using Vachon and Klassen's [11] scale. On that scale, the supply chain participants featured two facets, (1) suppliers and (2) customers. We adopted a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).
In particular, we used four items to assess the level of environmental collaboration with suppliers from Vachon and Klassen's [11] questionnaire. The items include \"we are developing a mutual understanding of responsibilities regarding environmental performance with our suppliers\", \"we are working together to reduce environmental impact of our activities with our suppliers\", \"we are conducting joint planning to anticipate and resolve environmental-related problems with our suppliers\", and \"we are making joint decisions about ways to reduce overall environmental impact of our products with our suppliers\". The level of environmental collaboration with customers was captured by using four items from Vachon and Klassen's [11] questionnaire. The items include \"we are developing a mutual understanding of responsibilities regarding environmental performance with our customers\", \"we are working together to reduce environmental impact of our activities with our customers\", \"we are conducting joint planning to anticipate and resolve environmental-related problems with our customers\", and \"we are making joint decisions about ways to reduce overall environmental impact of our products with our customers.\"
#### 3.3.3 Dependent Variable
Environmental performance is defined as \"the degree of improvement of environmental pollution, reduction of environmental risks and amelioration of environmental conditions in the firms\" [16]. As a dependent variable, environmental performance was measured using constructs developed by Zhu and Sarkis [16]. The items include \"Decrease in fines for environmental accidents in the last 3 years\", \"Improvement in the enterprise's environmental situation in the last 3 years\", and \"Reduction of water/solid wastes in the last 3 years\". These measurement items were operationalized on a seven-point Likert scale from 1 (\"not at all\") to 7 (\"highly significant\").
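For concreteness, the sketch below shows how the multi-item Likert constructs described above could be aggregated into scale scores before analysis. The file name and item column names (e.g., `tms1`, `cs1`) are hypothetical placeholders, not identifiers from the original study.

```python
# Sketch: averaging 7-point Likert items into construct scores (names are hypothetical).
import pandas as pd

df = pd.read_csv("survey.csv")  # one row per respondent

constructs = {
    "top_mgmt_support": ["tms1", "tms2"],              # Zhu et al.'s two items
    "collab_suppliers": ["cs1", "cs2", "cs3", "cs4"],  # Vachon and Klassen's four items
    "collab_customers": ["cc1", "cc2", "cc3", "cc4"],  # Vachon and Klassen's four items
    "env_performance":  ["ep1", "ep2", "ep3"],         # Zhu and Sarkis's three items
}

# Each construct score is the mean of its items on the 1-7 scale.
for name, items in constructs.items():
    df[name] = df[items].mean(axis=1)

print(df[list(constructs)].describe())
```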
#### 3.3.4 Control Variables
Based on a careful review of previous empirical studies on green SCM and environmental performance, we identified several control variables designed to rule out potential confounding factors that could affect environmental performance. Five control variables were added to the research model.
In previous studies on green SCM, financial performance has often been treated as a dependent variable [12], but financial performance could also positively affect environmental performance [52]. When a company has sufficient slack resources due to high performance, it is likely to invest those slack resources in environmentally friendly activities. Thus, we controlled for the financial performance of the company. In addition, the level of green SCM implementation could vary depending on the industry to which a company belongs; for industries directly affected by overseas regulations, the implementation level may be greater. We therefore controlled for industry-specific effects by including industry dummy variables. Compared to a company that has just introduced green SCM, it is obvious that the longer a company has operated green SCM, the more opportunities it has had to correct problems in the supply chain, to cooperate with the supply chain participants, and to enhance environmental performance; also, the position of the company in the supply chain is likely to have different effects on collaboration and environmental performance among the participating companies. For these reasons, a company's introduction time of green SCM and its position in the supply chain were included as control variables. In general, larger firms have more available and superior resources, which has a significant impact on their performance [53]. Environmental collaboration and performance among supply chain participants in green SCM, which could affect corporate-level performance, could also be influenced by corporate size. Therefore, company size was controlled for in our research model.
The company size is measured by number of employees. The number of employees is a nominal variable, having 5 categories: (1) less than 50 employees; (2) 50-100 employees; (3) 100-300 employees; (4) 300-500 employees; and (5) more than 500 employees. To put the nominal variable into our research model, we generated 5 dummy variables regarding the number of employees, coded as 1 (observation that belongs to a certain category) or 0 (otherwise). The type of industry and the introduction time of green SCM are also measured as a nominal variable, so that these variables are converted into dummy variables and put into our regression model. In terms of the industry variable, there are 10 industry categories: (1) chemicals and plastics; (2) pulp, printing, and furniture; (3) steel and assembled metals; (4) machinery and shipbuilding; (5) automobile and transportation equipment; (6) electronics and communication equipment; (7) medical and pharmaceuticals; (8) textile and clothing; (9) electrics and construction; and (10) other industries. Regarding the introduction time of green SCM, there are 5 categories: (1) less than a year; (2) less than two years; (3) less than 5 years; (4) less than 10 years; and (5) more than 10 years.
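A minimal sketch of this dummy-coding step for the nominal controls is given below; the file name and column names are hypothetical. Note that the sketch drops one category per variable as the reference group, which is the usual way to avoid perfect collinearity with the intercept, whereas the text above reports generating a dummy for every category.

```python
# Sketch: dummy-coding the nominal control variables (column names are hypothetical).
import pandas as pd

df = pd.read_csv("survey.csv")

# Nominal controls: employee band, industry, green SCM introduction time, SC position.
nominal_controls = ["employees", "industry", "gscm_intro_time", "sc_position"]

# drop_first=True keeps k-1 dummies per variable (the omitted category is the reference).
dummies = pd.get_dummies(df[nominal_controls].astype("category"),
                         prefix=nominal_controls, drop_first=True)

# Controls entered into the regression models together with financial performance.
X_controls = pd.concat([df[["financial_performance"]], dummies], axis=1)
print(X_controls.head())
```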
### Method Bias Test
Harman's single factor test [54] was used to assess the likelihood of common method bias in this study. If common method bias were present in our research model, all independent and dependent variables would be likely to converge into a single factor, or a dominant factor would be likely to explain most of the total variance when an exploratory factor analysis is performed. This was not the case in our study: a single-factor model of the unrotated solution explained only 25.78 percent of the variance, and the variables did not converge onto a single factor. Therefore, we conclude that common method bias is not a serious concern in our model.
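The following sketch illustrates how Harman's single-factor test could be run in Python with the third-party `factor_analyzer` package; the item column names are hypothetical, and the 25.78% figure reported above comes from the original SPSS analysis.

```python
# Sketch of Harman's single-factor test: fit one unrotated factor to all items
# and check the share of variance it explains.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = ["tms1", "tms2", "cs1", "cs2", "cs3", "cs4",
         "cc1", "cc2", "cc3", "cc4", "ep1", "ep2", "ep3"]  # hypothetical item names
X = pd.read_csv("survey.csv")[items].dropna()

fa = FactorAnalyzer(n_factors=1, rotation=None)  # single unrotated factor
fa.fit(X)
_, proportion, _ = fa.get_factor_variance()
print(f"Variance explained by the first factor: {proportion[0]:.2%}")
# Common method bias is a concern when this single factor accounts for the
# majority of the variance; the study reports 25.78%, well below that level.
```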
## 4 Results
### Reliability and Validity Tests
To confirm validity, an exploratory factor analysis (principal component analysis) was conducted with Varimax rotation and Kaiser normalization. All variables were confirmed to be valid, since their factor loadings exceeded 0.4 and the loadings of the sub-variable items were close to or greater than 0.5, confirming construct validity. The results of the factor analysis are shown in Table 1. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.729, and the chi-square value for Bartlett's test of sphericity was 943.356 (\(p<0.001\)), confirming the adequacy of the data for factor analysis.
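As an illustration, the validity checks reported above (Bartlett's test, the KMO measure, and a varimax-rotated four-factor solution) could be reproduced along the following lines with the `factor_analyzer` package; the item names are hypothetical, and four factors are requested because four constructs are expected.

```python
# Sketch: sampling adequacy tests and a varimax-rotated four-factor solution.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

items = ["tms1", "tms2", "cs1", "cs2", "cs3", "cs4",
         "cc1", "cc2", "cc3", "cc4", "ep1", "ep2", "ep3"]  # hypothetical item names
X = pd.read_csv("survey.csv")[items].dropna()

chi2, p_value = calculate_bartlett_sphericity(X)
_, kmo_total = calculate_kmo(X)
print(f"Bartlett chi2 = {chi2:.3f} (p = {p_value:.4f}), KMO = {kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(X)
loadings = pd.DataFrame(fa.loadings_, index=X.columns,
                        columns=[f"Factor{i + 1}" for i in range(4)])
print(loadings.round(3))  # each item should load highest (> ~0.4-0.5) on its own factor
```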
Cronbach's alpha was used to test the reliability of all measures. The Cronbach's alpha values ranged between 0.6 and 0.8 (see Table 1). Some of these values are slightly below the standard value of 0.7 suggested by Nunnally and Bernstein [55], but this threshold is an empirical convention rather than an absolute one. Moreover, considering that the number of items per construct and the difficulty of understanding the items can affect the results, our values are still acceptable [56]. In such cases, the CR (Composite Reliability) values can be used alternatively [57], and all CR values were found to be over 0.7. Therefore, we conclude that the reliability of the measures is adequate (see Table 1).
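For reference, Cronbach's alpha and composite reliability (CR) can be computed directly from the item responses and the standardized loadings; the sketch below uses the supplier-collaboration loadings reported in Table 1, and with those values the CR formula reproduces the 0.770 shown there. The file name and item column names are hypothetical.

```python
# Sketch: Cronbach's alpha from raw items and composite reliability from loadings.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

df = pd.read_csv("survey.csv")
suppliers = df[["cs1", "cs2", "cs3", "cs4"]].dropna()   # hypothetical item columns
print("alpha (suppliers):", round(cronbach_alpha(suppliers), 3))

# Loadings for the supplier-collaboration factor taken from Table 1.
print("CR (suppliers):", round(composite_reliability([0.759, 0.750, 0.616, 0.565]), 3))
```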
### Correlation Analysis
Table 2 provides the descriptive statistics and the correlation matrix for all variables used in the study. The mean of perceived environmental performance, measured on the 7-point Likert scale, was 4.27, and the mean level of top management support was 4.44. The levels of environmental collaboration with suppliers and customers were 4.76 and 3.87, respectively. Since some significant correlations were found between the variables, we calculated the variance inflation factor (VIF) to detect multicollinearity. The computed VIF values of all variables were less than 2, substantially below the cutoff threshold of 10. Therefore, we conclude that multicollinearity is not a serious issue in the research models.
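A sketch of this VIF check is shown below using `statsmodels`; the predictor column names are hypothetical, and the intercept added by `add_constant` is excluded from the reported values.

```python
# Sketch: variance inflation factors for the regression predictors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey.csv")
predictors = df[["top_mgmt_support", "collab_suppliers",
                 "collab_customers", "financial_performance"]].dropna()
X = sm.add_constant(predictors)

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=predictors.columns, name="VIF")
print(vif.round(2))  # values well below 10 (here below 2) indicate no serious multicollinearity
```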
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{**Construct**} & \\multirow{2}{*}{**Items**} & \\multicolumn{4}{c}{**Varimax-Rotated Loadings**} \\\\ \\cline{3-6} & & **Factor1** & **Factor2** & **Factor3** & **Factor4** \\\\ \\hline \\multirow{6}{*}{Environmental collaboration with suppliers} & Developing a mutual understanding of responsibilities regarding environmental performance. & **0.759** & 0.088 & 0.010 & 0.113 \\\\ & Conducting joint planning to anticipate and resolve environmental-related problems. & **0.750** & 0.032 & 0.090 & \\(-\\)0.109 \\\\ & Achieving environmental goals collectively. & **0.616** & 0.142 & 0.040 & 0.287 \\\\ & Working to reduce the environmental impact of our activities. & **0.565** & 0.028 & 0.263 & 0.250 \\\\ \\hline \\multirow{6}{*}{Environmental collaboration with customers} & Developing a mutual understanding of responsibilities regarding environmental performance. & 0.033 & **0.694** & 0.158 & 0.271 \\\\ & Achieving environmental goals collectively. & 0.359 & **0.654** & 0.041 & 0.086 \\\\ & Conducting joint planning to anticipate and resolve environmental-related problems. & 0.346 & **0.613** & 0.106 & \\(-\\)0.011 \\\\ & Working to reduce the environmental impact of our activities. & \\(-\\)0.155 & **0.591** & \\(-\\)0.147 & \\(-\\)0.143 \\\\ \\hline Top management & Top managementβs green SCM commitment. & 0.121 & 0.011 & **0.831** & 0.132 \\\\ & Top managementβs green SCM support. & 0.004 & 0.299 & **0.761** & 0.002 \\\\ \\hline \\multirow{6}{*}{Environmental performance} & Decrease of fine for environmental accidents in the last 3 years. & \\(-\\)0.010 & 0.030 & 0.153 & **0.805** \\\\ & Improve of enterpriseβs environmental situation in the last 3 years. & 0.021 & 0.070 & \\(-\\)0.016 & **0.759** \\\\ & Reduction of water/solid wastes in the last 3 years. & 0.316 & 0.268 & 0.032 & **0.489** \\\\ \\hline \\multirow{6}{*}{Environmental performance} & Eigenvalue & 2.355 & 2.278 & 1.628 & 1.572 \\\\ & \\% of variance & 16.821 & 16.270 & 11.628 & 11.228 \\\\ \\cline{1-1} & Cumulative \\% & 16.821 & 33.092 & 44.720 & 55.948 \\\\ \\cline{1-1} & Cronbachβs \\(\\alpha\\) & 0.695 & 0.636 & 0.616 & 0.611 \\\\ \\cline{1-1} & Composite Reliability & 0.770 & 0.734 & 0.776 & 0.733 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Exploratory factor analysis results.
Table 2: Descriptive statistics and correlation matrix for all variables (No., Variable, N, Mean, SD, and pairwise correlations 1-15).
### Hypotheses Testing
To test the hypotheses, we employed hierarchical multiple linear regression analyses. We tested the mediation hypotheses following the method proposed by Baron and Kenny (1986), which is composed of three regression models. The first model regresses the level of environmental collaboration with suppliers (Model 2 in Table 3) and customers (Model 2 in Table 4) on top management support, corresponding to Hypotheses 1-3. The second equation (Model 2 in Table 5) estimates corporate environmental performance in terms of top management support and the controls, corresponding to Hypothesis 4. The last equation (Model 3 in Table 5) explains corporate environmental performance in terms of the mediators (the level of environmental collaboration with suppliers and customers) and the other independent variables (the level of top management support and the control variables), corresponding to Hypotheses 5-7. In order to establish a mediation effect, three conditions must hold: (1) top management support must affect environmental collaboration with suppliers and customers (Model 2 in Tables 3 and 4); (2) top management support must affect environmental performance (Model 2 in Table 5); and (3) environmental collaboration with suppliers and customers must affect environmental performance (Model 3 in Table 5). Full mediation holds if the coefficient of top management support, initially significant in Model 2 of Table 5, becomes non-significant when environmental collaboration with suppliers and customers is included (Model 3 in Table 5); partial mediation holds if the coefficient remains significant but decreases in magnitude.
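A minimal sketch of these three Baron and Kenny regression steps is given below using ordinary least squares; the variable names are hypothetical and, for brevity, the industry, size, introduction-time, position, and financial-performance controls included in the actual hierarchical models are omitted.

```python
# Sketch of the Baron and Kenny (1986) mediation steps with OLS (controls omitted).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")

# Step 1: predictor -> mediators (environmental collaboration).
m_sup = smf.ols("collab_suppliers ~ top_mgmt_support", data=df).fit()
m_cus = smf.ols("collab_customers ~ top_mgmt_support", data=df).fit()

# Step 2: predictor -> outcome (environmental performance).
m_direct = smf.ols("env_performance ~ top_mgmt_support", data=df).fit()

# Step 3: predictor + mediators -> outcome.
m_full = smf.ols("env_performance ~ top_mgmt_support + collab_suppliers + collab_customers",
                 data=df).fit()

print("Step 2 coefficient:", round(m_direct.params["top_mgmt_support"], 3))
print("Step 3 coefficient:", round(m_full.params["top_mgmt_support"], 3))
# Full mediation: the step-3 coefficient becomes non-significant.
# Partial mediation (as reported here): it stays significant but shrinks.
```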
Regarding the first condition, as shown in Tables 3 and 4, the results of Model 2 suggest that top management support toward green SCM has a positive effect on environmental collaboration with suppliers (\(\beta=0.34\), \(p\leq 0.01\)) and customers (\(\beta=0.20\), \(p\leq 0.01\)) in the supply chain, supporting Hypotheses 2 and 3. This result is empirical evidence that top management support for green SCM
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline & **Model 1** & **Model 2** \\\\
**Variables** & **The Level of Environmental** & **The Level of Environmental** \\\\ & **Collaboration with Suppliers** & **Collaboration with Suppliers** \\\\ \\hline Top management support toward green SCM & & 0.275 **(0.339)** \\\\ Perceived corporate financial performance & 0.034 (0.034) & 0.035 (0.035) \\\\ Industry 2 & 0.493 (0.177) & 0.446 (0.160) \\\\ Industry 3 & 0.404 (0.126) & 0.313 (0.098) \\\\ Industry 4 & 0.468 (0.156) & 0.459 (0.153) \\\\ Industry 5 & 0.579 (0.106) & 0.403 (0.074) \\\\ Industry 6 & 0.551 **(0.216)** & 0.513 **(0.201)** \\\\ Industry 7 & \\(-\\)0.146 (\\(-\\)0.028) & \\(-\\)0.004 (\\(-\\)0.001) \\\\ Industry 8 & 0.405 (0.074) & 0.144 (0.027) \\\\ Industry 9 & 0.684 (0.120) & 0.751 **(0.131)** \\\\ Industry 10 & 0.42 (0.188) & 0.393 (0.176) \\\\ Introduction time of green SCM 1 (\\(<\\)1) & \\(-\\)0.031 (\\(-\\)0.015) & \\(-\\)0.128 (\\(-\\)0.060) \\\\ Introduction time of green SCM 2 (\\(<\\)2) & 0.306 (0.122) & 0.293 (0.117) \\\\ Introduction time of green SCM 3 (\\(<\\)5) & \\(-\\)0.095 (\\(-\\)0.039) & \\(-\\)0.076 (\\(-\\)0.031) \\\\ Introduction time of green SCM 4 (\\(<\\)10) & 0.06 (0.017) & 0.12 (0.034) \\\\ Corporate position in the supply chain 2 & \\(-\\)0.103 (\\(-\\)0.038) & \\(-\\)0.133 (\\(-\\)0.049) \\\\ Corporate position in the supply chain 3 & 0.312 (0.114) & 0.246 (0.090) \\\\ Corporate position in the supply chain 4 & \\(-\\)0.014 (\\(-\\)0.007) & \\(-\\)0.032 (\\(-\\)0.016) \\\\ Corporate position in the supply chain 5 & \\(-\\)0.094 (\\(-\\)0.039) & \\(-\\)0.095 (\\(-\\)0.040) \\\\ Employee 2 (50-100) & 0.3 (0.143) & 0.16 (0.076) \\\\ Employee 3 (100β300) & 0.294 (0.118) & 0.065 (0.026) \\\\ Employee 4 (300β500) & 0.167 (0.066) & 0.053 (0.021) \\\\ Employee 5 (more than 500) & 0.183 (0.073) & \\(-\\)0.025 (\\(-\\)0.010) \\\\ \\hline Observations & 301 & 301 \\\\ Rβsquared & 0.079 & 0.181 \\\\ F & 1.086 & 2.666 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Impact of top management support toward green SCM on the level of environmental collaboration between suppliers: regression models.
increases the level of collaboration among the supply chain participants, one of the essential factors of successful green SCM. Model 2 in Table 5 shows that top management support toward green SCM positively and significantly affects corporate environmental performance (\(\beta=0.28\), \(p\leq 0.01\)), satisfying the second condition of the mediation effect and supporting Hypothesis 4. Lastly, Model 3 in Table 5 allows the effect of top management support on corporate environmental performance to be mediated by environmental collaboration with suppliers and customers, testing the final condition for mediation. Even after controlling for collaboration with suppliers and customers, the impact of top management support on environmental performance remains positive and significant (\(\beta=0.14\), \(p\leq 0.05\)), but the regression coefficient is smaller than that found for Hypothesis 4 (\(\beta=0.28\), \(p\leq 0.01\)). Therefore, the relationship between top management support and corporate environmental performance is partially mediated by environmental collaboration with the supply chain participants, supporting Hypotheses 6 and 7.
To sum up, with the support of a corporation's top management team, which plays a key role in green SCM, the level of environmental collaboration between the participants in green SCM increases significantly, and environmental performance consequently also increases.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline & **Model 1** & **Model 2** \\\\
**Variables** & **The Level of Environmental** & **The Level of Environmental** \\\\ & **Collaboration with Customers** & **Collaboration with Customers** \\\\ \\hline Top management support toward green SCM & & 0.155 ** (0.199) \\\\ Perceived corporate financial performance & 0.08 (0.084) & 0.08 (0.085) \\\\ Industry 2 & 0.317 (0.119) & 0.29 (0.109) \\\\ Industry 3 & 0.047 (0.015) & \\(-\\)0.004 (\\(-\\)0.001) \\\\ Industry 4 & 0.278 (0.097) & 0.273 (0.095) \\\\ Industry 5 & 0.219 (0.042) & 0.12 (0.023) \\\\ Industry 6 & \\(-\\)0.024 (\\(-\\)0.010) & \\(-\\)0.045 (\\(-\\)0.019) \\\\ Industry 7 & 0.129 (0.026) & 0.209 (0.042) \\\\ Industry 8 & \\(-\\)0.152 (\\(-\\)0.029) & \\(-\\)0.298 (\\(-\\)0.057) \\\\ Industry 9 & \\(-\\)0.321 (\\(-\\)0.059) & \\(-\\)0.283 (\\(-\\)0.052) \\\\ Industry 10 & 0.207 (0.097) & 0.191 (0.089) \\\\ Introduction time of green SCM 1 (\\(<\\)1) & \\(-\\)0.06 (\\(-\\)0.029) & \\(-\\)0.114 (\\(-\\)0.056) \\\\ Introduction time of green SCM 2 (\\(<\\)2) & \\(-\\)0.086 (\\(-\\)0.036) & \\(-\\)0.094 (\\(-\\)0.039) \\\\ Introduction time of green SCM 3 (\\(<\\)5) & 0.053 (0.022) & 0.063 (0.027) \\\\ Introduction time of green SCM 4 (\\(<\\)10) & 0.33 (0.098) & 0.364 (0.108) \\\\ Corporate position in the supply chain 2 & \\(-\\)0.077 (\\(-\\)0.030) & \\(-\\)0.094 (\\(-\\)0.036) \\\\ Corporate position in the supply chain 3 & \\(-\\)0.04 (\\(-\\)0.015) & \\(-\\)0.077 (\\(-\\)0.029) \\\\ Corporate position in the supply chain 4 & \\(-\\)0.138 (\\(-\\)0.071) & \\(-\\)0.148 (\\(-\\)0.076) \\\\ Corporate position in the supply chain 5 & \\(-\\)0.331 (\\(-\\)0.145) & \\(-\\)0.331 (\\(-\\)0.146) \\\\ Employee 2 (50β100) & 0.311 (0.155) & 0.232 (0.116) \\\\ Employee 3 (100β300) & 0.277 (0.117) & 0.149 (0.062) \\\\ Employee 4 (300β500) & 0.234 (0.096) & 0.17 (0.070) \\\\ Employee 5 (more than 500) & 0.424 * (0.176) & 0.308 (0.128) \\\\ \\hline Observations & 301 & 301 \\\\ R-squared & 0.069 & 0.104 \\\\ F & 0.933 & 1.397 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Impact of the top management support toward green SCM on the level of environmental collaboration with customers: regression models.
## 5 Discussion and Conclusions
This study examined how the support of the corporate top management team in green SCM increases the environmental collaboration with the suppliers and customers in their supply chain, and how the environmental performance is eventually improved through this relationship. In addition, this study analyzed the mediating effect of environmental collaboration with the supply chain participants between the top management support and the corporate environmental performance, reflecting the fact that the success or failure of green SCM depends on the level of collaboration between the participants in a green supply chain. Accordingly, this study closely analyzed the effect of the top management's support [18; 20; 31], through environmental collaboration with the participating supply chain companies [46; 50], on environmental performance [16]. The main findings based on the analysis results of this study are as follows.
First, this study broadens the horizon of green SCM research by focusing on the effect of a corporation's top management team in discussing the performance of green SCM. Specifically, we employed upper echelon theory [31; 38; 39] to explain the significant impact of top management team, the group that has the greatest influence in a company, on collaboration among the green SCM participants, and subsequently on environmental performance. In this study, we distinguish the roles of a company's top management team and highlight the separate independent effects they could have on the green SCM, which has so far been considered comprehensively as a component of the company's internal environmental management [12; 15]. Therefore, it has theoretical contributions in that it clearly analyzes the role of a top management team under green SCM, which has been relatively lacking in previous discussions on the topic. In addition, the scalability of this study could be even greater in that
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Variables** & **Model 1** & **Model 2** & **Model 3** \\ \hline
Observations & 301 & 301 & 301 \\
R-squared & 0.127 & 0.194 & 0.316 \\
F & 1.839 & 2.904 & 5.083 \\ \hline \hline
\end{tabular}
* Normalized beta coefficients in brackets. ** \(p<0.01\), * \(p<0.05\). (Individual coefficient rows are not recoverable from the source.)
\end{table}
Table 5: Mediation analysis regression results.
it analyzes the influence paths together, i.e., the environmental collaboration among both suppliers and customers, as suggested by Burki et al. [28].
Secondly, this study finds a mechanism that connects the support of a corporate top management team to its environmental performance. In other words, the top executives' active support for green SCM has a direct positive impact on the environmental performance of the company itself and leads to active collaboration of suppliers and customers in the supply chain, thereby enhancing perceived environmental performance. This result supports the existing discussion on the importance of collaboration among supply chain participants [11; 46; 51], which has been emphasized continuously in green SCM. This also provides an opportunity to identify the nature of the organic organization, in which active support from the corporate executives leads to active cooperation from the participating firms outside the company and eventually returns to improved performance.
Thirdly, in inferring the relationship between the support of top management team and the cooperation among green SCM participants and environmental performance, this study clarifies the theoretical structure by controlling various external variables that may affect the dependent variable, such as financial performance, industry type, the introduction time of green SCM, corporate position within green SCM, and company size. For instance, firms with a higher financial performance have more slack resources, which can lead to sufficient investment in environmental performance. In this study, we tried to prevent this issue from affecting our research model by controlling for financial performance.
Fourthly, this study has practical implications that shed light on the role of top managers in promoting the implementation of corporate strategies and activities. No matter how good and appropriate a strategy formulated from an analysis of the internal and external environment is, if members fail to implement it, the company will eventually fail to change and achieve the desired outcomes. This paper shows that the top management's active supporting attitude is important for the smooth implementation of the company's functional strategy, in this case green SCM. Implementing green SCM in response to changes in the external environment gives companies an opportunity to increase their sustainability. In fact, the most essential factor for becoming a company with high environmental performance through managing the green supply chain is the continuous support of top management. The study empirically confirms that if corporate top executives continue to manage the green supply chain with a strong will, this results in a high level of collaboration among the participants within the supply chain, which in turn increases the environmental performance of the company.
This research contributes to theory and provides various theoretical implications by clearly analyzing the mechanism linking top management's support toward green SCM, environmental collaboration among participants of a supply chain, and environmental performance, which has been relatively neglected in the green SCM literature. However, this study has some limitations, and we hope that future research can overcome these issues.
First, in this study, the level of top management's supporting attitude toward green SCM was regarded as the most important psychological characteristic of top executives affecting the environmental performance and collaboration of the supply chain participants. However, a top management team also has demographic as well as other psychological characteristics, such as risk aversion and neuroticism. Therefore, future research that reflects these various characteristics of top management would generate further contributions.
Secondly, consideration should be given to a wider national or cultural context, since the characteristics of a company's top executives may vary depending on culture or country, and further studies in this regard would be valuable. If future research on countries with a collective culture similar to Korea's yields the same results, it would further enhance the validity of the findings from this study. Conversely, it would be an interesting follow-up study to extend the research to countries with more individualistic cultures to identify the effects of different cultural contexts.
Thirdly, this study is based on a survey conducted in 2014, so the application context of green SCM may have changed since then. Moreover, the recent spread of the COVID-19 pandemic could also affect the application of green SCM. These issues could not be reflected in this paper, but future research that addresses them could offer additional implications and contributions.
Fourthly, there could be different or similar incentives to cooperate in a green supply chain. According to Tacheva et al. [58], there are two major incentives for cooperation with partners, driven from a shareholder's perspective and a stakeholder's perspective. Although the incentives for cooperation among green SCM participants were not considered in this paper, future research that adds this incentive variable is likely to yield more meaningful results.
**Author Contributions:** Conceptualization, methodology, and formal analysis, J.L.; data curation and validation, H.-Y.J.; writing--original draft preparation, writing--review and editing, and funding acquisition, J.L. and H.-Y.J. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research received no external funding.
**Conflicts of Interest:** The authors declare no conflict of interest.
## References
* (1) Carroll, A.B.; Buchholtz, A.K. _Business and Society: Ethics, Sustainability, and Stakeholder Management_, 10th ed.; Cengage Learning: Boston, MA, USA, 2016; ISBN 978-1-305-95982-8.
* (2) Maignan, I.; Ferrell, O. Corporate social responsibility and marketing: An integrative framework. _J. Acad. Mark. Sci._**2004**, _32_, 3-19. [CrossRef]
* (3) Shin, S.H. Disclosures and asset returns. _Econometrica_**2003**, _71_, 105-133. [CrossRef]
* (4) Gunther, M. The green machine. _Fortune_**2006**, _154_, 42-57. [PubMed]
* (5) Albino, V.; Dangelico, R.M.; Pontrandolfo, P. Do inter-organizational collaborations enhance a firm's environmental performance? A study of the largest US companies. _J. Clean. Prod._**2012**, _37_, 304-315. [CrossRef]
* (6) Lee, S.Y.; Cheong, I.M. Sustainable supply chain initiatives in the Korean automotive industry. In _Handbook of Sustainability Management_; World Scientific: New Jersey, NJ, USA, 2012; pp. 609-624. [CrossRef]
* (7) Lee, S.Y.; Lee, W.H. The effects of sustainable supply chain management on relational social capital and supplier sustainability performance: An integrative model of the fair, green, and responsible supply chain. _Korean Manag. Rev._**2014**, _43_, 275-302.
* (8) Hassini, E.; Surti, C.; Searcy, C. A literature review and a case study of sustainable supply chains with a focus on metrics. _Int. J. Prod. Econ._**2012**, _140_, 69-82. [CrossRef]
* (9) Seuring, S.; Muller, M. From a literature review to a conceptual framework for sustainable supply chain management. _J. Clean. Prod._**2008**, _16_, 1699-1710. [CrossRef]
* (10) Corbett, C.J.; Klassen, R.D. Extending the horizons: Environmental excellence as key to improving operations. _Manuf. Sero. Oper. Manag._**2006**, \\(8\\), 5-22. [CrossRef]
* (11) Vachon, S.; Klassen, R.D. Green project partnership in the supply chain: The case of the package printing industry. _J. Clean. Prod._**2006**, _14_, 661-671. [CrossRef]
* (12) Zhu, Q.; Sarkis, J.; Lai, K.H. Confirmation of a measurement model for green supply chain management practices implementation. _Int. J. Prod. Econ._**2008**, _111_, 261-273. [CrossRef]
* (13) Lee, S.Y. Drivers for the participation of small and medium-sized suppliers in green supply chain initiatives. _Supply Chain Manag. Int. J._**2008**, _13_, 185-198. [CrossRef]
* (14) Wu, G.C.; Ding, J.H.; Chen, P.S. The effects of GSCM drivers and institutional pressures on GSCM practices in Taiwan's textile and apparel industry. _Int. J. Prod. Econ._**2012**, _135_, 618-636. [CrossRef]
* (15) Choi, S.B.; Min, H.; Joo, H.Y.; Choi, H.B. Assessing the impact of green supply chain practices on firm performance in the Korean manufacturing industry. _J. Logist. Res. Appl._**2016**, _20_, 129-145. [CrossRef]
* (16) Zhu, Q.; Sarkis, J. Relationships between operational practices and performance among early adopters of green supply chain management practices in Chinese manufacturing enterprises. _J. Oper. Manag._**2004**, _22_, 265-289. [CrossRef]
* (17) Mintzberg, H. _The Nature of Managerial Work_; Harper & Row: New York, NY, USA, 1973.
* (18) Quinn, J.B. _Strategies for Change: Logical Incrementalism_; Irwin Professional Publishing: Burr Ridge, IL, USA, 1980.
* (19) Andrews, K.R. The concept of corporate strategy. In _Resources, Firms, and Strategies: A Reader in the Resource-Based Perspective_; Oxford University Press: Oxford, UK, 1997; ISBN 978-0-19-878180-6.
* (20) Mackey, A. The effect of CEOs on firm performance. _Strat. Manag. J._**2008**, _29_, 1357-1367. [CrossRef]
* (21) Barratt, M.; Oliveira, A. Exploring the experiences of collaborative planning initiatives. _Int. J. Phys. Distrib. Logist. Manag._**2001**, _31_, 266-289. [CrossRef]
* (22) Phillips, N.; Lawrence, T.B.; Hardy, C. Inter-organizational collaboration and the dynamics of institutional fields. _J. Manag. Stud._**2000**, _37_, 23-43. [CrossRef]
* (23) Spekman, R.E.; Kamauff, J.W.; Myhr, N. An empirical investigation into supply chain management: A perspective on partnerships. _Supply Chain Manag._**1998**, \\(3\\), 53-67. [CrossRef]
* (24) Soosay, C.A.; Hyland, P.W.; Ferrer, M. Supply chain collaboration: Capabilities for continuous innovation. _Supply Chain Manag._**2008**, _13_, 160-169. [CrossRef]
* (25) Diabat, A.; Govindan, K. An analysis of the drivers affecting the implementation of green supply chain management. _Resour. Conserv. Recyl._**2011**, _55_, 659-667. [CrossRef]
* (26) Green, K.W.; Zelbst, P.J.; Bhadauria, V.S.; Meacham, J. Do environmental collaboration and monitoring enhance organizational performance? _Ind. Manag. Data Syst._**2012**, _112_, 186-205. [CrossRef]
* (27) Vachon, S. Green supply chain practices and the selection of environmental technologies. _Int. J. Prod. Res._**2007**, _45_, 4357-4379. [CrossRef]
* (28) Burki, U.; Ersoy, P.; Najam, U. Top management, green innovations, and the mediating effect of customer cooperation in green supply chains. _Sustainability_**2019**, _11_, 1031. [CrossRef]
* (29) Chu, S.H.; Yang, H.; Lee, M.; Park, S. The impact of institutional pressures on green supply chain management and firm performance: Top management roles and social capital. _Sustainability_**2017**, \\(9\\), 764. [CrossRef]
* (30) Kumar, A.; Paraskevas, J.P. A proactive environmental strategy: Analyzing the effect of SCM experience, age, and female representation in TMTs. _J. Supply Chain Manag._**2018**, _54_, 20-41. [CrossRef]
* (31) Hambrick, D.C.; Mason, P.A. Upper echelons: The organization as a reflection of its top managers. _Acad. Manag. Rev._**1984**, \\(9\\), 193-206. [CrossRef]
* (32) Cha, S.H.; Yang, D.H. Influence of organizational and HR department characteristics on human resource outsourcing. _Korean J. Manag._**2008**, _16_, 159-190.
* (33) Rai, A.; Borah, S.A.; Ramaprasad, A. Critical success factors for strategic alliances in the information technology industry: An empirical study. _Decis. Sci._**1996**, _27_, 141-155. [CrossRef]
* (34) Teo, T.S.; Tan, M.; Buk, W.K. A contingency model of Internet adoption in Singapore. _Int. J. Electron. Commer._**1997**, \\(2\\), 95-118. [CrossRef]
* (35) Coch, L.; French, J.R., Jr. Overcoming resistance to change. _Hum. Relat._**1948**, \\(1\\), 512-532. [CrossRef]
* (36) Gioia, D.A.; Chittipeddi, K. Sensemaking and sensegiving in strategic change initiation. _Strateg. Manag. J._**1991**, _12_, 433-448. [CrossRef]
* (37) Boone, C.; De Brabander, B.; Van Witteloostuijn, A. CEO locus of control and small firm performance: An integrative framework and empirical test. _J. Manag. Stud._**1996**, _33_, 667-700. [CrossRef]
* (38) Takeuchi, R.; Lepak, D.P.; Wang, H.; Takeuchi, K. An empirical examination of the mechanisms mediating between high-performance work systems and the performance of Japanese organizations. _J. Appl. Psychol._**2007**, _92_, 1069. [CrossRef]
* (39) Zhao, H.; Seibert, S.E.; Lumpkin, G.T. The relationship of personality to entrepreneurial intentions and performance: A meta-analytic review. _J. Manag._**2010**, _36_, 381-404. [CrossRef]
* (40) Papadakis, V.M.; Barwise, P. How much do CEOs and top managers matter in strategic decision-making? _Br. J. Manag._**2002**, _13_, 83-95. [CrossRef]
* (41) Abatecola, G.; Mandarelli, G.; Poggesi, S. The personality factor: How top management teams make decisions. A literature review. _J. Manag. Gov._**2013**, _17_, 1073-1100. [CrossRef]
* (42) Saraph, J.V.; Benson, P.G.; Schroeder, R.G. An instrument for measuring the critical factors of quality management. _Decis. Sci._**1989**, _20_, 810-829. [CrossRef]
* (43) Sila, I. Examining the effects of contextual factors on TQM and performance through the lens of organizational theories: An empirical study. _J. Oper. Manag._**2007**, _25_, 83-109. [CrossRef]
* (44) Pfeffer, J.; Jeffrey, P. _The Human Equation: Building Profits by Putting People First_; Harvard Business Press: Boston, MA, USA, 1998; ISBN 0-87584-841-9.
* (45) Lee, S.; Lee, K. A study on the relationships between social capital accumulation, green supply chain management and supplier operational performance: A path analysis. _J. Korean Prod. Oper. Manag. Soc._**2013**, _24_, 239-269.
* (46) Bowen, F.E.; Cousins, P.D.; Lamming, R.C.; Farukt, A.C. The role of supply management capabilities in green supply. _Prod. Oper. Manag._**2001**, _10_, 174-189. [CrossRef]
* (47) Handfield, R.; Sroufe, R.; Walton, S. Integrating environmental management and supply chain strategies. _Bus. Strategy Environ._**2005**, _14_, 1-19. [CrossRef]
* (48) Linton, J.D.; Klassen, R.; Jayaraman, V. Sustainable supply chains: An introduction. _J. Oper. Manag._**2007**, _25_, 1075-1082. [CrossRef]
* (49) Sarkis, J. A strategic decision framework for green supply chain management. _J. Clean. Prod._**2003**, _11_, 397-409. [CrossRef]
* (50) Vachon, S.; Klassen, R.D. Environmental management and manufacturing performance: The role of collaboration in the supply chain. _Int. J. Prod. Econ._**2008**, _111_, 299-315. [CrossRef]
* (51) Green, K.; Morton, B.; New, S. Greening organizations: Purchasing, consumption, and innovation. _Organ. Environ._**2000**, _13_, 206-225. [CrossRef]
* (52) Waddock, S.A.; Graves, S.B. The corporate social performance-financial performance link. _Strateg. Manag. J._**1997**, _18_, 303-319. [CrossRef]
* (53) Cho, J.; Lee, J. Internationalization and performance of Korean SMEs: The moderating role of competitive strategy. _Asian Bus. Manag._**2018**, _17_, 140-166. [CrossRef]
* (54) Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. _J. Appl. Psychol._**2003**, _88_, 879. [CrossRef]
* (55) Nunnally, J.C.; Bernstein, I.H. _Psychometric Theory_, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994; ISBN 9780070478497.
* (56) Helms, L.S. _Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-Analytic Method_; ERIC Document Reproduction Service: San Antonio, TX, USA, 1999.
* (57) Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. _Multivariate Data Analysis_, 6th ed.; Pearson Education: Upper Saddle River, NJ, USA, 2006.
* (58) Tacheva, Z.; Simpson, N.; Ivanov, A. Examining the role of top management in corporate sustainability: Does supply chain position matter? _Sustainability_**2020**, _12_, 7518. [CrossRef]
**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Abstract:** The purpose of this study is to determine whether the support of top management significantly improves the level of environmental collaboration with participating companies upstream and downstream of the green supply chain and the impact on environmental performance. The results of the empirical analysis of 301 companies that are establishing a green supply chain are as follows. First, top management's support positively affects the level of collaboration with suppliers and customers in the green supply chain. Secondly, support from top management has a direct impact on the company's environmental performance. Thirdly, the environmental collaboration of participating companies partially plays a mediation role between the support of top management and the environmental performance. This study has significance in that it analyzes the theoretical mechanism of top management's support for environmental collaboration with participating companies, leading to environmental performance, and draws implications.
**Keywords:** green supply chain; collaboration of supply chain participants; environmental performance
# Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review
Ying Li, Lingfei Ma, Zilong Zhong, Fei Liu, Dongpu Cao, Jonathan Li, and Michael A. Chapman
Y. Li, L. Ma, and J. Li are with the Department of Geography and Environmental Management, University of Waterloo, 200 University Avenue West, Waterloo, N2L 3G1, Canada (e-mail: [email protected], [email protected]). Z. Zhong is with the School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China, 510006 (e-mail: [email protected]). F. Liu is with Xilinx Technology Beijing Limited, Beijing, China, 100083 (e-mail: [email protected]). D. Cao is with the Waterloo Cognitive Autonomous Driving Lab, University of Waterloo, N2L 3G1, Canada (e-mail: [email protected]). J. Li is with the Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada (e-mail: [email protected]). M. A. Chapman is with the Department of Civil Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada (e-mail: [email protected]).
## I Introduction
Accurate environment perception and precise localization are crucial requirements for reliable navigation, informed decision making, and safe driving of autonomous vehicles (AVs) in complex dynamic environments [1, 2]. These two tasks require acquiring and processing highly accurate and information-rich data of real-world environments [3]. To obtain such data, multiple sensors such as LiDAR and digital cameras [4] are mounted on AVs or mapping vehicles to collect and extract target context. Traditionally, image data captured by digital cameras, featuring a 2D appearance-based representation, low cost, and high efficiency, have been the most commonly used data in perception tasks [5]. However, image data lack 3D geo-referenced information [6]. Thus, the dense, geo-referenced, and accurate 3D point cloud data collected by LiDAR are exploited. Besides, LiDAR is not sensitive to variations in lighting conditions and can work day and night, even with glare and shadows [7].
The applications of LiDAR point clouds for AVs can be described in two aspects: (1) real-time environment perception and processing for scene understanding and object detection [8]; and (2) high-definition (HD) map and urban model generation and construction for reliable localization and referencing [2]. These applications share similar tasks, which can be roughly divided into three types: 3D point cloud segmentation, 3D object detection and localization, and 3D object classification and recognition. These applications have led to an increasing and urgent requirement for automatic analysis of 3D point clouds [9] for AVs.
Driven by the breakthroughs brought by deep learning (DL) techniques and the accessibility of 3D point cloud, the 3D DL frameworks have been investigated based on the extension of 2D DL architectures to 3D data with a notable string of empirical successes. These frameworks can be applied to several tasks specifically for AVs such as: segmentation and scene understanding [10, 11, 12], object detection [13, 14], and classification [15, 16, 10]. Thus, we provide a systematic survey in this paper, which focuses explicitly on framing the LiDAR point clouds in segmentation, detection, and classification tasks for autonomous driving using DL techniques.
Several related surveys based on DL have been published in recent years. The basic and comprehensive knowledge of DL is described in detail in [17, 18]. These surveys normally focus on reviewing DL applications to visual data [19, 20] and remote sensing imagery [21, 22]. Some are targeted at more specific tasks such as object detection [23, 24], semantic segmentation [25], and recognition [26]. Although DL on 3D data has been surveyed in [27, 28, 29], these 3D data are mainly 3D CAD models [30]. In [1], challenges, datasets, and methods in computer vision for AVs are reviewed. However, DL applications to LiDAR point cloud data have not been comprehensively reviewed and analyzed. We summarize these surveys related to DL in Fig. 1.

Fig. 1: Existing review papers related to DL and their applications to different tasks. Our paper is the first to survey the application of LiDAR point clouds to segmentation, detection, and classification tasks for autonomous driving using DL techniques.
Several surveys have also been published for LiDAR point clouds. In [31, 32, 33, 34], 3D road object segmentation, detection, and classification from mobile LiDAR point clouds are introduced, but they focus on general methods rather than DL models. In [35], comprehensive 3D descriptors are analyzed. In [36, 37], approaches to 3D object detection for autonomous driving are summarized. However, DL models applied to these tasks have not been comprehensively analyzed. Thus, the goal of this paper is to provide a systematic review of DL using LiDAR point clouds in the field of autonomous driving for the specific tasks of segmentation, detection/localization, and classification.
The main contributions of our work can be summarized as:
* An in-depth and organized survey of the milestone 3D deep models and a comprehensive survey of DL methods aimed at tasks such as segmentation, object detection/localization, and classification/recognition in AVs, their origins, and their contributions.
* A comprehensive survey of existing LiDAR datasets that can be exploited in training DL models for AVs.
* A detailed introduction for quantitative evaluation metrics and performance comparison for segmentation, detection, and classification.
* A list of the remaining challenges and future research directions that help to advance the development of DL in the field of autonomous driving.
The remainder of this paper is organized as follows: Tasks in autonomous driving and the challenges of DL using LiDAR point cloud data are introduced in Section II. A summary of existing LiDAR point cloud datasets and evaluation metrics is given in Section III. Then the milestone 3D deep models with four data representations of LiDAR point clouds are described in Section IV. The DL applications in segmentation, object detection/localization, and classification/recognition for AVs based on LiDAR point clouds are reviewed and discussed in Section V. Section VI proposes a list of the remaining challenges for future research. We finally conclude the paper in Section VII.
## II Tasks and Challenges
### _Tasks_
In the perception module of autonomous vehicles, semantic segmentation, object detection, object localization, and classification/recognition constitute the foundation for reliable navigation and accurate decision making [38]. These tasks are described as follows:
* **3D point cloud semantic segmentation**: Point cloud segmentation is the process of clustering the input data into several homogeneous regions, where points in the same region have identical attributes [39]. Each input point is predicted with a semantic label, such as ground, tree, or building. The task can be stated as: given a set of 3D points \\(X=\\{x_{1},x_{2},\\cdots,x_{n}\\}\\) with \\(x_{i}\\in R^{3}\\) and a candidate label set \\(Y=\\{y_{1},y_{2},\\cdots,y_{k}\\}\\), assign each input point \\(x_{i}\\) one of the \\(k\\) semantic labels [40]. Segmentation results can further support object detection and classification, as shown in Fig. 2(a).
* **3D object detection/localization**: Given arbitrary point cloud data, the goal of 3D object detection is to detect and locate instances of predefined categories (e.g., cars, pedestrians, and cyclists, as shown in Fig. 2(b)) and return their geometric 3D location, orientation, and semantic instance label [41]. Such information can be represented coarsely using a 3D bounding box that tightly bounds the detected object [13, 42]. This box is commonly represented as \\((x,y,z,h,w,l,\\theta,c)\\), where \\((x,y,z)\\) denotes the object (bounding box) center position, \\((h,w,l)\\) denotes the bounding box height, width, and length, and \\(\\theta\\) is the object orientation. The orientation refers to the rigid transformation that aligns the detected object to its instance in the scene, i.e., translations along the x, y, and z directions as well as a rotation about each of these three axes [43, 44]. \\(c\\) represents the semantic label of this bounding box (object). A minimal sketch of this box parameterization is given after this list.
* **3D object classification/recognition**: Given several groups of point clouds, the objective of classification/recognition is to determine the category (e.g., mug, table, or car, as shown in Fig. 2(c)) to which the grouped points belong. The problem of 3D object classification can be defined as: given a set of 3D points \\(X=\\{x_{1},x_{2},\\cdots,x_{n}\\}\\) with \\(x_{i}\\in R^{3}\\) and a candidate label set \\(Y=\\{y_{1},y_{2},\\cdots,y_{k}\\}\\), assign the whole point set \\(X\\) one of the \\(k\\) labels [45].
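As a concrete illustration of the \\((x,y,z,h,w,l,\\theta,c)\\) box parameterization above, the following minimal Python sketch converts a box into its eight corners in the sensor frame. The function name and the corner-ordering convention are assumptions made for this example, not taken from any specific detection codebase.

```python
import numpy as np

def box_to_corners(x, y, z, h, w, l, theta):
    """Convert a 3D box (center, size, yaw) into its 8 corner points.

    Assumed convention: l along the object's heading, w across it, h vertical,
    and theta is the yaw angle about the z-axis.
    """
    # Corner offsets in the object frame, centered at the box center.
    x_c = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * (l / 2.0)
    y_c = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * (w / 2.0)
    z_c = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * (h / 2.0)
    corners = np.stack([x_c, y_c, z_c], axis=0)            # (3, 8)

    # Rotate about z by theta, then translate to the box center.
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return (rot @ corners).T + np.array([x, y, z])          # (8, 3)

# Example: a car-sized box rotated 30 degrees around the vertical axis.
corners = box_to_corners(10.0, 2.0, -1.0, h=1.5, w=1.8, l=4.2, theta=np.pi / 6)
print(corners.shape)  # (8, 3)
```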
### _Challenges and Problems_
In order to segment, detect, and classify general objects using DL for AVs with robust and discriminative performance, several challenges and problems must be addressed, as shown in Fig. 2. The variation of sensing conditions and unconstrained environments results in challenges on the data side, while the irregular data format and the requirements for both accuracy and efficiency pose problems that DL models need to solve.
#### II-B1 Challenges on LiDAR point clouds
Changes in sensing conditions and unconstrained environments have dramatic impacts on object appearance. In particular, objects captured in different scenes or at different instances exhibit a wide set of variations. Even for the same scene, the scanning times, locations, weather conditions, sensor types, sensing distances, and backgrounds all bring about intra-class differences. All these conditions produce significant variations for both intra- and inter-class objects in LiDAR point cloud data:
* **Diversified point density and reflective intensity**. Due to the scanning mode of LiDAR, point density and intensity vary greatly across objects. The distribution of these two characteristics highly depends on the distance between the objects and the LiDAR sensor [46, 47, 48]. Besides, the capability of the LiDAR sensor, the scanning time constraints, and the required resolution also affect their distribution and intensity.
* **Noise**. All sensors are noisy. There are a few types of noise, including point perturbations and outliers [49]. This means that a point has some probability of lying within a sphere of a certain radius around the place it was sampled (perturbation), or it may appear at a random position in space (outlier) [50].
* **Incompleteness**. Point cloud data obtained by LiDAR are commonly incomplete [51]. This mainly results from occlusion between objects [50], cluttered backgrounds in urban scenes [46, 49], and unsatisfactory material surface reflectivity. Such problems are severe when capturing moving objects in real time, which leads to large gaping holes and severe under-sampling.
* **Confusing categories**. In natural environments, objects with similar shapes or reflectance interfere with object detection and classification. For example, some man-made objects such as commercial billboards have shapes and reflectance similar to traffic signs.
#### II-B2 Problems for 3D DL models
The irregular data format and the accuracy and efficiency requirements of these tasks bring new challenges for DL models. A discriminative and general-purpose 3D DL model should solve the following problems when its framework is designed and constructed:
* **Permutation and orientation invariance**. Compared with 2D grid pixels, LiDAR point clouds are sets of points with irregular order and no specific orientation [52]. For the same group of \\(N\\) points, the network output should be invariant to all \\(N!\\) permutations of the input order (a small numerical illustration is given after this list). Besides, the orientation of point sets is missing, which poses a great challenge for object pattern recognition [53].
* **Rigid transformation challenge**. There exist various rigid transformations among point sets, such as 3D rotations and 3D translations. These transformations should not affect the performance of networks [52, 12].
* **Big data challenge**. LiDAR collects millions to billions of points in different urban or rural environments and natural scenes [49]. For example, in the KITTI dataset [54], each frame captured by the 3D Velodyne laser scanner contains about 100k points. The smallest collected scene has 114 frames, amounting to more than 10 million points. Such amounts of data bring difficulties for data storage.
* **Accuracy challenge**. Accurate perception of road objects is crucial for AVs. However, the variation of both intra- and inter-class objects and the quality of the data pose challenges for accuracy. For example, objects in the same category comprise a set of different instances with various materials, shapes, and sizes. Besides, the model should be robust to unevenly distributed, sparse, and missing data.
* **Efficiency challenge**. Compared with 2D images, processing a large number of points incurs high computational complexity and time cost. Besides, the computing devices on AVs have limited computational capability and storage space [55]. Thus, an efficient and scalable deep network model is critical.
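The small numerical sketch below illustrates the permutation-invariance requirement from the list above: a shared per-point transformation followed by a symmetric aggregation (max) yields the same set-level feature regardless of point ordering, whereas an order-dependent aggregation such as concatenation would not. The weights are random placeholders, not part of any published model.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(128, 3))       # an unordered set of 128 points
weights = rng.normal(size=(3, 64))       # a shared per-point linear map (placeholder)

def set_feature(pts):
    per_point = np.maximum(pts @ weights, 0.0)   # same "MLP" applied to every point
    return per_point.max(axis=0)                 # symmetric aggregation (max pooling)

permuted = points[rng.permutation(len(points))]
print(np.allclose(set_feature(points), set_feature(permuted)))  # True: order does not matter
```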
## III Datasets and Evaluation Metrics
### _Datasets_
Datasets pave the way towards the rapid development of 3D data application and exploitation using DL networks. Reliable datasets serve two roles: providing a comparison for competing algorithms, and pushing the field towards more complex and challenging tasks [23]. With the increasing application of LiDAR in multiple fields, such as autonomous driving, remote sensing, and photogrammetry, there is a rise of large-scale datasets with millions of points or more. These datasets accelerate crucial breakthroughs and unprecedented performance in point cloud segmentation, 3D object detection, and classification. Apart from mobile LiDAR data, some discriminative datasets [56] acquired by terrestrial laser scanning (TLS) with static LiDAR are also employed because they provide high-quality point cloud data.
As shown in Table I, we classify the existing datasets related to our topic into three types: segmentation-based, detection-based, and classification-based datasets. Besides, a long-term autonomy dataset is also summarized.
* Segmentation-based datasets
**Semantic3D**[56]. Semantic3D is the largest existing LiDAR dataset for outdoor scene segmentation tasks, with more than 4 billion points and a covered area of around 110,000\\(m^{2}\\). This dataset is labeled with 8 classes and split into training and test sets of nearly equal size. The data are acquired by a static LiDAR with high measurement resolution and long measurement distance. The challenges of this dataset mainly stem from the massive point clouds, unevenly distributed point density, and severe occlusions. To accommodate algorithms with high computation cost, a reduced-8 dataset is introduced for training and testing, which shares the same training data but has fewer test data compared with Semantic3D.

Fig. 2: Tasks and challenges related to DL-based applications on 3D point clouds: (a) point cloud segmentation [10], (b) 3D object detection [41], (c) 3D object classification [10], (d) challenges on LiDAR point clouds, (e) problems for DL models.
**Oakland 3-D Point Cloud Dataset**[57]. This dataset was acquired earlier than the dataset above. A mobile platform equipped with LiDAR was used to scan the urban environment and generated around 1.3 million points, of which 100,000 points are split into a validation set. The whole dataset is labeled with 5 classes: wire, vegetation, ground, pole/tree-trunk, and facade. This dataset is small and thus suitable for lightweight networks. Besides, it can be used to test and tune network architectures without much training time before final training on other datasets.
**IQmulus & TerraMobilita Contest**[58]. This dataset is also acquired by a mobile LiDAR system in the urban environment in Paris. There are more than 300 million points in this dataset, which covered 10km street. The data is split into 10 separate zones and labeled with more than 20 fine classes. However, this dataset also has severe occlusion.
**Paris-Lille-3D**[59]. Compared with Semantic3D [56], Paris-Lille-3D contains fewer points (140 million) and a smaller covered area (55,000\\(m^{2}\\)). The main difference of this dataset is that its data are acquired by a mobile LiDAR system in two cities: Paris and Lille. Thus, the points in this dataset are sparse and have comparatively low measurement resolution compared with Semantic3D [56], but the data are more similar to the LiDAR data acquired by AVs. The whole dataset is fully annotated with 50 classes unequally distributed over three scenes: Lille1, Lille2, and Paris. For simplicity, these 50 classes are combined into 10 coarse classes for the challenge.
* Detection-based datasets
**KITTI Object Detection / Bird's Eye View Benchmark**[60]. Different from the above LiDAR datasets, which are specific to the segmentation task, the KITTI dataset is acquired from an autonomous driving platform and records six hours of driving using digital cameras, LiDAR, and a GPS/IMU inertial navigation system. Thus, apart from the LiDAR data, the corresponding imagery data are also provided. Both the Object Detection and Bird's Eye View benchmarks contain 7481 training images and 7518 test images as well as the corresponding point clouds. Due to the moving scanning mode, the LiDAR data in this benchmark are highly sparse. Thus, only three object classes are labeled with bounding boxes: cars, pedestrians, and cyclists.
* Classification-based datasets
**Sydney Urban Objects Dataset**[61]. This dataset contains a set of general urban road objects scanned with a LiDAR in the CBD of Sydney, Australia. There are 588 labeled objects classified into 14 categories, such as vehicles, pedestrians, signs, and trees. The whole dataset is split into four folds for training and testing. Similar to other LiDAR datasets, the collected objects are sparse and have incomplete shapes. Although it is small and not ideal for the classification task, it is the most commonly used benchmark because the labeling process is tedious and limits dataset size.
**ModelNet**[30]. This dataset is the existing largest 3D benchmark for 3D object recognition. Different from Sydney Urban Objects Dataset [61], which contains road objects collected by LiDAR sensors, this dataset is composed of general objects in CAD models with evenly distributed point density and complete shape. There are approximately 130K labeled models in a total of 660 categories (e.g., car, chair, clock). The most commonly used benchmarks are ModelNet40 that contains 40 general objects and ModelNet10 with 10 general objects. The milestone 3D deep architectures are commonly trained and tested on these two datasets due to the affordable computation burden and time.
**Long-Term Autonomy**: To address the challenges of long-term autonomy, a novel dataset for autonomous driving has been presented by Maddern et al. [64]. They collected images, LiDAR, and GPS data while traversing 1,000 km in central Oxford, UK, over one year. This allowed them to capture different scene appearances under various illumination, weather, and seasonal conditions, with dynamic objects and construction. Such long-term datasets allow for in-depth investigation of problems that hinder the realization of autonomous vehicles, such as localization at different times of the year.
### _Evaluation Metrics_
To evaluate the performance of the proposed methods, several metrics, as summarized in Table II, are used for the three tasks: segmentation, detection, and classification. The details of these metrics are given as follows.
For the segmentation task, the most commonly used evaluation metrics are the per-class Intersection over Union (IoU), the mean IoU (\\(\\overline{IoU}\\)), and the overall accuracy (OA) [62]. IoU quantifies the percent overlap between the target mask and the prediction output [56].
For detection and classification tasks, the results are commonly analyzed region-wise. Precision, recall, \\(F_{1}\\)-score and Matthews correlation coefficient (MCC) [65] are commonly used to evaluate the performance. The precision represents the ratio of correctly detected objects in the whole detection results, while the recall means the percentage of the correctly detected objects in the ground truth, the \\(F_{1}\\)-score conveys the balance between the precision and the recall, the MCC is the combined ratio of detected and undetected objects and non-objects.
For 3D object localization and detection task, the most frequently used metrics are: Average Precision (\\(AP_{3D}\\)) [66], and Average Orientation Similarity (AOS) [36]. The average precision is used to evaluate the localization and detection performance by calculating the averaged valid bounding box overlaps, which exceed predefined values. For orientation estimation, the orientation similarities with different thresholded valid bounding box overlaps are averaged to report the performance.
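The following framework-free sketch illustrates how the per-class metrics above are computed from a confusion matrix; the class count, array contents, and function names are assumptions made for this example rather than values from any benchmark.

```python
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)
    return cm  # rows: ground truth, cols: prediction

def segmentation_metrics(cm):
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-9)          # per-class IoU
    return iou, iou.mean(), tp.sum() / cm.sum()        # IoU_i, mean IoU, overall accuracy

def detection_metrics(tp, fp, fn):
    precision = tp / max(tp + fp, 1e-9)
    recall = tp / max(tp + fn, 1e-9)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

# Toy segmentation example with 3 classes (e.g., ground, vegetation, car).
gt   = np.array([0, 0, 1, 1, 2, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 2, 0])
iou, miou, oa = segmentation_metrics(confusion_matrix(gt, pred, 3))
print(iou, miou, oa)
print(detection_metrics(tp=8, fp=2, fn=1))   # toy detection counts
```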
## IV General 3D Deep Learning Frameworks
In this section, we review the milestone DL frameworks on 3D data. These frameworks are pioneers in solving the problems defined in Section II. Besides, their stable and efficient performance makes them suitable as backbone frameworks for detection, segmentation, and classification tasks. Although 3D data acquired by LiDAR are often in the form of point clouds, how to represent point clouds and which DL models to use for detection, segmentation, and classification remain open problems [41]. Most existing 3D DL models process point clouds mainly in the form of voxel grids [30, 67, 68, 69], raw point clouds [10, 70, 71, 12], graphs [72, 73, 74, 75], and 2D images [76, 77, 78, 15]. We analyze the frameworks, attributes, and problems of these models in detail.
### _Voxel-based models_
Conventionally, CNNs are mainly applied to data with regular structures, such as the 2D pixel array [79]. Thus, in order to apply CNNs to unordered 3D point cloud data, such data are divided into regular grids with a certain size to describe the distribution of data in 3D space. Typically, the size of the grid is related to the resolution of data [80]. The advantage of voxel-based representation is that it can encode the 3D shape and viewpoint information by classifying the occupied voxels into several types such as visible, occluded, or self-occluded. Besides, 3D convolution (Conv) and pooling operations can be directly applied in voxel grids [69].
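As an illustration of the grid conversion described above, the sketch below voxelizes an unordered point set into a binary occupancy grid; the voxel size, grid extent, and function name are arbitrary assumptions for this example.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid_min=(-10, -10, -3), grid_dims=(100, 100, 30)):
    """Convert an (N, 3) point set into a binary occupancy grid."""
    grid = np.zeros(grid_dims, dtype=np.uint8)
    idx = np.floor((points - np.asarray(grid_min)) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_dims)), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # mark occupied voxels
    return grid

points = np.random.default_rng(0).uniform(-5, 5, size=(2048, 3))
occupancy = voxelize(points)
print(occupancy.shape, int(occupancy.sum()), "occupied voxels")
```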
**3D ShapeNet**[30], proposed by Wu et al. and shown in Fig. 3, is the pioneer in exploiting 3D volumetric data using a convolutional deep belief network. The probability distribution of binary variables is used to represent the geometric shape of a 3D voxel grid. These distributions are input to the network, which is mainly composed of three Conv layers. The network is initially pre-trained in a layer-wise fashion and then trained with a generative fine-tuning procedure. The input and Conv layers are modeled based on Contrastive Divergence, while the output layer is trained based on Fast-Persistent Contrastive Divergence. After training, a single depth map is produced for the input test data and then transformed to represent the voxel grid. ShapeNet achieves notable results on low-resolution voxels. However, the computation cost increases cubically with input data size or resolution, which limits the model's performance on large-scale or dense point cloud data. Besides, multi-scale and multi-view information in the data is not fully exploited, which hinders the output performance.
**VoxNet**[67] is proposed by Maturana et al. to conduct 3D object recognition using 3D convolution filters on a volumetric data representation, as shown in Fig. 3. Occupancy grids represented by a 3D lattice of random variables are employed to describe the state of the environment, and a probabilistic estimate of the occupancy of these grids is maintained as prior knowledge. Three different occupancy grid models (binary occupancy grid, density grid, and hit grid) are evaluated to select the best one. The network framework is mainly composed of Conv, pooling, and fully connected (FC) layers. Both ShapeNet [30] and VoxNet employ rotation augmentation for training. Compared with ShapeNet [30], VoxNet has a smaller architecture with fewer than 1 million parameters. However, many occupancy grids contain no useful information and only increase the computation cost.
**3D-GAN**[68] combines the merits of the generative adversarial network (GAN) [81] and volumetric convolutional networks [67] to learn features of 3D objects. The network is composed of a generator and a discriminator, as shown in Fig. 3. The adversarial discriminator classifies objects into synthesized and real categories, because the generative-adversarial criterion has an advantage in capturing the structural variation between two 3D objects. The generative-adversarial loss also helps to avoid possible criterion-dependent over-fitting. The generator attempts to confuse the discriminator. Both the generator and the discriminator consist of five volumetric fully convolutional layers. This network provides a powerful 3D shape descriptor with unsupervised training for 3D object recognition. However, the data density affects the discriminator's ability to capture the finest features; consequently, this method is best suited to evenly distributed point cloud data.
In conclusion, there are some limitations of this general volumetric 3D data representation:
* Firstly, not all voxels are useful, because the representation contains both occupied and non-occupied parts of the scanned environment. Thus, much of the high storage demand of this representation is spent on uninformative voxels [69].
* Secondly, the size of the grid is hard to set, which affects the scale of input data and may disrupt the spatial relationship between points.
* Thirdly, computational and memory requirements grow cubically with the resolution [69]. Thus, existing voxel-based models are kept at low 3D resolutions, and the most commonly used size is \\(30^{3}\\) voxels per grid [69].
A more advanced voxel-based data representation is the octree-based grid [69, 82], which uses adaptive cell sizes to divide the 3D point cloud into cubes. It is a hierarchical data structure that recursively decomposes the root voxels into multiple leaf voxels.

Fig. 3: Deep architectures of 3D ShapeNet [30], VoxNet [67], 3D-GAN [68].

Fig. 4: PointNet [10] and PointNet++ [12] architectures.
**OctNet**[69], proposed by Riegler et al., exploits the sparsity of the input data. Motivated by the observation that object boundaries have the highest probability of producing the maximum responses across all feature maps generated by the network at different layers, they partition the 3D space hierarchically into a set of unbalanced octrees [83] based on the density of the input data. Specifically, octree nodes that contain points are split recursively within their domains, ending at the finest resolution of the tree; thus, the size of the leaf nodes varies. For each leaf node, the features that activate its comprised voxels are pooled and stored. Convolution filters are then applied to these trees. In [82], the deep model is constructed by learning the structure of the octree and the occupancy value represented by each grid cell. This octree-based data representation largely reduces the computation and memory requirements of DL architectures and achieves better performance on high-resolution 3D data compared with voxel-based models. However, the disadvantage of octree data is similar to that of voxels: both fail to exploit the geometric features of 3D objects, especially the intrinsic characteristics of patterns and surfaces [29].
### _Point clouds based models_
Different from volumetric 3D data representation, point cloud data can preserve the 3D geospatial information and internal local structure. Besides, the voxel-based models that scan the space with fixed strides are constrained by the local receptive fields. But for point clouds, the input data and the metric decide the range of receptive fields, which has high efficiency and accuracy.
**PointNet**[10], a pioneer in consuming 3D point clouds directly with deep models, learns the spatial feature of each point independently via MLP layers and then accumulates the features by max-pooling. The point cloud data are input directly to PointNet, which predicts per-point or per-object labels; its framework is illustrated in Fig. 4. In PointNet, spatial transform networks and a symmetric function are designed to improve invariance to permutation. The spatial feature of each input point is learned through the network, and the learned features are then aggregated across the whole point cloud. PointNet has achieved outstanding performance in 3D object classification and segmentation tasks. However, the individual point features are grouped and pooled by max-pooling, which fails to preserve the local structure. As a result, PointNet is not robust to fine-grained patterns and complex scenes.
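The numpy sketch below captures only the core PointNet idea described above, a shared per-point MLP followed by symmetric max pooling into a global feature; the layer widths and random weights are illustrative assumptions, and the spatial transformer networks of the full model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(x, dims):
    """Apply the same MLP to every point independently: (N, C_in) -> (N, C_out)."""
    for c_in, c_out in zip(dims[:-1], dims[1:]):
        w = rng.normal(scale=np.sqrt(2.0 / c_in), size=(c_in, c_out))
        x = np.maximum(x @ w, 0.0)          # linear layer + ReLU, shared across all points
    return x

def pointnet_global_feature(points):
    per_point = shared_mlp(points, dims=[3, 64, 128, 1024])   # per-point features
    return per_point.max(axis=0)                               # symmetric max pooling

points = rng.normal(size=(1024, 3))
print(pointnet_global_feature(points).shape)   # (1024,) global shape descriptor
```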
**PointNet++** was proposed later by Qi et al. [12] to compensate for the lack of local feature extraction in PointNet. With raw unordered point clouds as input, the points are initially divided into overlapping local regions using the Euclidean distance metric. These partitions are defined as neighborhood balls in the metric space, labeled with a centroid location and scale. To sample points evenly over the whole point set, the farthest point sampling (FPS) algorithm is applied. Local features are extracted from small neighborhoods around the selected points using K-nearest-neighbor (KNN) or query-ball searching. These neighborhoods are gathered into larger clusters and used to extract high-level features via the PointNet [10] network. The sampling and grouping modules are repeated until local and global features of all points are learned, as shown in Fig. 4. This network, which outperforms PointNet [10] in classification and segmentation tasks, extracts local features for points at different scales. However, features from the local neighborhood points in different sampling layers are learned in an isolated fashion. Besides, the PointNet-style max-pooling used for high-level feature extraction in PointNet++ fails to preserve the spatial information between the local neighborhood points.
**Kd-networks**[70] uses a kd-tree to impose an order on the input points, which differs from PointNet [10] and PointNet++ [12], both of which use a symmetric function to solve the permutation problem. Klokov et al. recursively split point clouds of fixed size \\(N=2^{D}\\) into subsets in a top-down fashion, splitting along the coordinate axis with the maximum range of point coordinates, to construct a kd-tree. As shown in Fig. 5, this kd-tree ends at a fixed depth. Within this balanced tree structure, the vectorial representation of each node, which represents a subdivision along a certain axis, is computed using the kd-network. These representations are then exploited to train a linear classifier. This network performs better than PointNet [10] and PointNet++ [12] in small-object classification. However, it is not robust to rotations and noise, since these variations can change the tree structure. Besides, it lacks overlapping receptive fields, which reduces the spatial correlation between leaf nodes.
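A minimal sketch of the kd-tree construction described above: the point set is split recursively along the axis with the largest coordinate range until each leaf holds a single point, so the fixed depth D follows from N = 2^D. The simple median split and dictionary-based node representation are assumptions made for illustration.

```python
import numpy as np

def build_kdtree(points):
    """Recursively split points along the axis of maximum range (top-down)."""
    if len(points) == 1:
        return {"leaf": points[0]}
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))  # widest axis
    order = np.argsort(points[:, axis])
    half = len(points) // 2
    return {
        "axis": axis,
        "left": build_kdtree(points[order[:half]]),
        "right": build_kdtree(points[order[half:]]),
    }

pts = np.random.default_rng(0).normal(size=(8, 3))   # N = 2^3 points -> depth-3 tree
tree = build_kdtree(pts)
print(tree["axis"], list(tree["left"].keys()))
```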
**PointCNN**, proposed by Li et al. [71], solves the input point permutation and transformation problems with an \\(\\chi\\)-Conv operation, as shown in Fig. 5. They proposed the \\(\\chi\\)-transformation, which is learned from the input points by weighting the input point features and permuting the points into a latent and potentially canonical order. Traditional convolution operators are then applied to the learned \\(\\chi\\)-transformed features. These spatially-local correlation features in each local range are aggregated to construct a hierarchical CNN architecture. However, this model still does not exploit the correlations between different geometric features and their discriminative contribution to the results, which limits its performance.

Fig. 5: Kd-tree structure in Kd-networks [70] and \\(\\chi\\)-Conv in PointCNN [71].
Point cloud based deep models are mostly focused on solving the permutation problem. They treat points independently at local scales to maintain permutation invariance; this independence, however, neglects the geometric relationships among points and their neighbors, a fundamental limitation that leads to missing local features.
### _Graph-based models_
Graphs are a type of non-Euclidean data structure that can be used to represent point cloud data: the nodes correspond to the input points and the edges represent the relationships between neighboring points. Graph neural networks propagate the node states until equilibrium in an iterative manner [75]. With the advancement of CNNs, an increasing number of graph convolutional networks have been applied to 3D data. These graph CNNs define convolutions directly on the graph in the spectral or non-spectral (spatial) domain, operating on groups of spatially close neighbors [84]. The advantage of graph-based models is that the geometric relationships among points and their neighbors are exploited, so more spatially-local correlation features are extracted from the grouped edge relationships at each node. However, there are two challenges in constructing graph-based deep models:
* Firstly, defining an operator that is suitable for dynamically sized neighborhoods and maintaining the weight sharing scheme of CNNs [75].
* Secondly, exploiting the spatial and geometric relationships among each node's neighbors.
**SyncSpecCNN**[72] exploits the spectral eigen-decomposition of the graph Laplacian to generate convolution filters applied to point clouds. Yi et al. constructed SyncSpecCNN based on two considerations: the first is coefficient sharing and multi-scale graph analysis; the second is information sharing across related but different graphs. They addressed these by constructing the convolution operation in the spectral domain: the signal of the point set in the Euclidean domain is defined by metrics on the graph nodes, and convolution in the Euclidean domain corresponds to scaling the signal based on the eigenvalues. Such an operation is linear and only applicable to graph weights generated from eigenvectors of the graph Laplacian. Although SyncSpecCNN achieved excellent performance in 3D shape part segmentation, it has several limitations:
* Basis-dependent. The learned spectral filters coefficients are not suitable for another domain with a different basis.
* Computationally expensive. The spectral filtering is calculated based on the whole input data, which requires high computation capability.
* Missing local edge features. The local graph neighborhood contains useful and distinctive local structural information, which is not exploited.
**Edge-conditioned convolution** (ECC) [73] considers edge information when constructing convolution filters on the graph signal in the spatial domain. The Conv filter weights are conditioned on the edge labels in a vertex neighborhood. Besides, to solve the basis-dependence problem, the convolution operator is dynamically generalized to arbitrary graphs with varying size and connectivity. The whole network follows the common feedforward structure with interlaced convolutions and pooling, followed by global pooling and FC layers. Thus, features from local neighborhoods are extracted continually through these stacked layers, which increases the receptive field. Although the edge labels are fixed for a specific graph, the learned interpretation networks may vary across layers. ECC learns the dynamic pattern of local neighborhoods, which is scalable and effective. However, the computation cost remains high, and it is not applicable to large-scale graphs with continuous edge labels.
**DGCNN**[74] also constructs a local neighborhood graph to extract local geometric features and applies a Conv-like operation, named EdgeConv and shown in Fig. 6, on the edges connecting neighboring pairs of points. Different from ECC [73], EdgeConv dynamically rebuilds the graph from the output of each layer. Thus, DGCNN can learn how to extract local geometric structures and group point clouds. The model takes \\(n\\) points as input and, in each EdgeConv layer, finds the K nearest neighbors of each point to calculate the edge features between the point and its neighbors. Similar to the PointNet [10] architecture, the features convolved in the last EdgeConv layer are aggregated globally to construct a global feature, while all the EdgeConv outputs are treated as local features. Local and global features are concatenated to generate the result scores. This model extracts distinctive edge features from point neighborhoods, which can be applied to different point cloud related tasks. However, the fixed size of the edge features limits the model's performance on point clouds of different scales and resolutions.
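The sketch below illustrates an EdgeConv-style layer in the spirit of the description above: for every point, its k nearest neighbors are found, an edge feature concatenates the center feature with the neighbor offset, a shared linear layer is applied, and the result is max-pooled over the neighborhood. The layer width, weights, and brute-force neighbor search are illustrative assumptions, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn(points, k):
    """Return indices of the k nearest neighbors of each point (brute force)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]          # skip the point itself

def edge_conv(features, points, k=8, c_out=64):
    idx = knn(points, k)                                # (N, k)
    neighbors = features[idx]                           # (N, k, C)
    center = features[:, None, :].repeat(k, axis=1)     # (N, k, C)
    edges = np.concatenate([center, neighbors - center], axis=-1)   # (N, k, 2C)
    w = rng.normal(scale=0.1, size=(edges.shape[-1], c_out))
    return np.maximum(edges @ w, 0.0).max(axis=1)       # shared layer + max over neighbors

points = rng.normal(size=(256, 3))
out = edge_conv(points, points, k=8)                    # first layer: features are the xyz coords
print(out.shape)                                        # (256, 64)
```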
ECC [73] and DGCNN [74] propose general convolutions on graph nodes and their edge information, which are isotropic with respect to the input features. However, not all input features contribute equally to a node. Thus, attention mechanisms are introduced to deal with variable-sized inputs and to focus on the most relevant parts of a node's neighbors when making decisions [75].
**Graph Attention Networks** (GAT) [75]. The core insight behind GAT is to calculate the hidden representation of each node in the graph by assigning different attentional weights to different neighbors, following a self-attention strategy. With a set of node features as input, a shared linear transformation, parametrized by a weight matrix, is applied to each node. Then a shared attentional mechanism (self-attention, shown in Fig. 6) is applied to the nodes to compute attention coefficients. These coefficients indicate the importance of the corresponding node's neighbor features and are further normalized to make them comparable across different nodes. The local features are combined according to the attentional weights to form the output features of each node. To improve the stability of the self-attention mechanism, multi-head attention is employed to conduct k independent attention schemes, which are then concatenated to form the final output features of each node. This attention architecture is efficient and can extract fine-grained representations for each graph node by assigning different weights to the neighbors. However, the local spatial relationships between neighbors are not considered in calculating the attentional weights. To further improve its performance, Wang et al. [85] proposed graph attention convolution (GAC) to generate attentional weights by considering different neighboring points and feature channels.
### _View-based models_
The last type of MLS data representation is 2D views obtained by projecting 3D point clouds from different directions. With the projected 2D views, traditional well-established convolutional neural networks (CNNs) and networks pre-trained on image datasets, such as AlexNet [86], VGG [87], GoogLeNet [88], and ResNet [89], can be exploited. Compared with voxel-based models, these methods can improve performance on different 3D tasks by taking multiple views of the object or scene of interest and then fusing or voting over the outputs for the final prediction. Compared with the above three 3D data representations, view-based models can achieve near-optimal results, as shown in Table III. Su et al. [90] showed experimentally that multi-view methods have the best generalization ability, even without pre-trained models, compared with point cloud and voxel representation models. The advantages of view-based models over 3D models can be summarized as:
* Efficiency. Compared with 3D data representations such as point clouds or voxel grids, dropping one dimension greatly reduces the computation cost while allowing increased resolution [76].
* Exploiting established 2D deep architectures and datasets. The well-developed 2D DL architectures can better exploit the local and global information from projected 2D view images [91]. Besides, existing 2D image databases (such as ImageNet [92]) can be used to train 2D DL architectures.
**Multi-View CNN** (MVCNN) [76] is the pioneer in exploiting 2D DL models to learn 3D representations. Multiple views of a 3D object are aggregated without a specific order using a view pooling layer. Two different CNN models are proposed and tested. The first takes as input 12 views rendered by placing 12 virtual cameras at equal distances around the object, while the second takes 80 views rendered in the same way. These views are first processed separately and then fused through a max-pooling operation to extract the most representative feature among all views for the whole 3D shape. This network is effective and efficient compared with volumetric data representations. However, the max-pooling operation only considers the most important views and discards information from the other views, which fails to preserve comprehensive visual information.
**MVCNN-MultiRes** was proposed by Qi et al. [15] to improve multi-view CNNs. Different from traditional view rendering methods, the 3D shape is projected to 2D via a convolution operation with an anisotropic probing kernel applied to the 3D volume. Multi-orientation pooling is combined to improve the ability to capture 3D structure. Then MVCNN [76] is applied to classify the 2D projections. Compared with MVCNN [76], multi-resolution 3D filtering is introduced to capture multi-scale information: sphere rendering is performed at different volume resolutions to achieve view invariance and improve robustness to potential noise and irregularities. This model achieves better results in the 3D object classification task compared with MVCNN [76].
**3DMV**[77] combines geometry and imagery data as input to train a joint 3D deep architecture. Feature maps are first extracted from the imagery data and then mapped onto the 3D features extracted from the volumetric grid via a differentiable back-projection layer. Because there is redundant information among multiple views, a multi-view pooling approach is applied to extract useful information from these views. This network achieved remarkable results in 3D object classification. However, compared with models using a single source of data, such as LiDAR points or RGB images alone, the computation cost of this method is higher.
**RotationNet**[78] is proposed following the assumption that when an object is observed from a partial set of its full multi-view images, the observation direction should be recognized to correctly infer the object's category. Thus, the multi-view images of an object are input to RotationNet, which outputs its pose and category. The most representative characteristic of RotationNet is that it treats the viewpoints from which the training images are observed as latent variables. Unsupervised learning of object poses is then conducted on an unaligned object dataset, which eliminates the process of pose normalization used to reduce noise and individual variations in shape. The whole network is constructed as a differentiable MLP with softmax as the final layer. The outputs are the viewpoint category probabilities, which correspond to the predefined discrete viewpoints for each input image. These likelihoods are optimized with respect to the selected object pose.

Fig. 6: EdgeConv in DGCNN [74] and attention mechanism in GAT [75].
However, there are some limitations of 2D view-based models:
* The first is that the projection from 3D space to 2D views can lose some geometrically-related spatial information.
* The second is the redundant information among multiple views.
### _3D Data Processing and Augmentation_
Due to the massive amount of data and the tedious labeling process, there exist limited reliable 3D datasets. To better exploit the architecture of deep networks and improve the model generalization ability, data augmentation is commonly conducted. Augmentation can be applied to both data space and feature space, while the most common augmentation is conducted in the first space. This type of augmentation can not only enrich the variations of data but also can generate new samples by conducting transformations to the existing 3D data. There are several types of transformations, such as translation, rotation, and scaling. Several requirements for data augmentation are summarised as:
* There must exist similar features between the original and augmented data, such as shape;
* There must exist different features between original and augmented data such as orientation.
Based on existing methods, classical data augmentation operations for point clouds can be summarized as follows (a minimal sketch is given after this list):
* Mirror \\(x\\) and \\(y\\) axis with predefined probability [59, 93]
* Rotation around z-axis with certain times and angles[13, 59, 93, 94]
* Random (uniform) height or position jittering in certain range [67, 93, 95]
* Random scale with certain ratio [13, 59]
* Random occlusions or randomly down-sampling points within predefined ratio [59]
* Random artefacts or randomly down-sampling points within predefined ratio [59]
* Randomly adding noise, following certain distribution, to the points' coordinates and local features [45, 59, 96].
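The sketch below applies several of the classical augmentations listed above (z-axis rotation, random scaling, per-point jittering, and random down-sampling). The probability, ranges, and noise scale are illustrative assumptions rather than values from the cited works.

```python
import numpy as np

def augment(points, rng):
    """points: (N, 3) array in the sensor frame. Returns an augmented copy."""
    # Random rotation around the z-axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = points @ rot.T

    # Random global scaling and per-point jittering (Gaussian noise).
    pts *= rng.uniform(0.95, 1.05)
    pts += rng.normal(scale=0.01, size=pts.shape)

    # Random down-sampling to mimic occlusions / missing points.
    keep = rng.random(len(pts)) > 0.1
    return pts[keep]

rng = np.random.default_rng(0)
augmented = augment(rng.normal(size=(2048, 3)), rng)
print(augmented.shape)
```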
## V Deep Learning in LiDAR Point Cloud for AVs
The applications of LiDAR point clouds for AVs can be grouped into three types: 3D point cloud segmentation, 3D object detection and localization, and 3D object classification and recognition. The targets of these tasks vary; for example, scene segmentation focuses on per-point label prediction, while detection and classification concentrate on labeling integrated point sets. But they all need to exploit input point feature representations before feature embedding and network construction.
We first make a survey of input point cloud feature representations applied in DL architectures for all these three tasks, such as local density and curvature. These features are representations of a specific 3D point or position in 3D space, which describe the geometrical structures and features based on the extracted information around the point. These features can be grouped into two types: one is derived directly from the sensors such as coordinate and intensity, we term them as direct point feature representations; the second is extracted from the information provided by each points neighbors, we term them as geo-local point feature representations.
#### V-1 Direct input point feature representations
The direct input point feature representations are mainly provided by laser scanners, which include the \\(x\\), \\(y\\), and \\(z\\) coordinates, and other characteristics (e.g., intensity, angle, and number of returns). Two most frequently used features applied in DL are selected:
* **XYZ coordinate**. The most direct point feature representation is the \\(XYZ\\) coordinate provided by the sensors, which means the position of a point in the real world coordinate.
* **Intensity**. The intensity represents the reflectance characteristics of the material surface, which is one common measurement of laser scanners [97]. Different objects have different reflectance and thus produce different intensities in point clouds. For example, traffic signs have a higher intensity than vegetation.
#### V-2 Geo-local point feature representations
Local input point feature embeds the spatial relationship of points and their neighborhoods, which plays a significant role in point cloud segmentation [12], object detection [42], and classification [74]. Besides, the searched local region can be exploited by some operations such as CNNs [98]. Two most representative and widely-used neighborhood searching methods are k-nearest neighbors (KNN) [12, 96, 99] and spherical neighborhood [100].
The geo-local feature representations are usually generated from the region found by the above two neighborhood searching algorithms. They are composed of the eigenvalues (e.g., \\(\\eta_{0}\\), \\(\\eta_{1}\\) and \\(\\eta_{2}\\) with \\(\\eta_{0}>\\eta_{1}>\\eta_{2}\\)) or eigenvectors (e.g., \\(\\overline{v_{0}}\\), \\(\\overline{v_{1}}\\), and \\(\\overline{v_{2}}\\)) obtained by decomposing the covariance matrix defined over the searched region. We list the five most commonly used 3D local feature descriptors applied in DL (a minimal sketch of their computation is given after this list):
* **Local density**. The local density is typically determined by the quantity of points in a selected area [101]. Typically, the point density decreases when the distance of objects to the LiDAR sensor increases. In voxel-based models, the local density of points is related to the setting of voxel sizes [102].
* **Local normal**. It infers the direction of the surface normal at a certain point. The equation for normal extraction can be found in [65]. In [103], the eigenvector \\(\\overline{v_{2}}\\) of the smallest eigenvalue \\(\\eta_{2}\\) of \\(C_{i}\\) is selected as the normal vector for each point. However, in [10], the eigenvectors of \\(\\eta_{0}\\), \\(\\eta_{1}\\) and \\(\\eta_{2}\\) are all chosen as normal vectors of point \\(p_{i}\\).
* **Local curvature**. The local curvature is defined as the rate at which the unit tangent vector changes direction. Similar to the local normal calculation in [65], the surface curvature change in [103] can be estimated from the eigenvalues of the eigen-decomposition: \\(curvature=\\eta_{2}/(\\eta_{0}+\\eta_{1}+\\eta_{2})\\)
* **Local linearity**. It is a local geometric characteristic of each point indicating how linear its local geometry is [104]: \\(linearity=\\left(\\eta_{0}-\\eta_{1}\\right)/\\eta_{0}\\).
* **Local planarity**. It describes the flatness of a given point's neighborhood; for example, ground points have higher planarity than tree points [104]: \\(planarity=\\left(\\eta_{1}-\\eta_{2}\\right)/\\eta_{0}\\).
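The sketch below computes the eigenvalue-based descriptors above for a single point from its k-nearest neighborhood via an eigen-decomposition of the neighborhood covariance (with \\(\\eta_{0}\\geq\\eta_{1}\\geq\\eta_{2}\\)). The neighborhood size and the synthetic test data are assumptions made for illustration.

```python
import numpy as np

def local_descriptors(points, query_idx, k=16):
    """Eigenvalue-based features of one point's k-nearest neighborhood."""
    d2 = ((points - points[query_idx]) ** 2).sum(axis=1)
    neigh = points[np.argsort(d2)[:k]]                   # k nearest neighbors (incl. the point)
    cov = np.cov(neigh.T)                                # 3x3 covariance of the neighborhood
    eigval, eigvec = np.linalg.eigh(cov)                 # ascending eigenvalues
    e0, e1, e2 = np.maximum(eigval[::-1], 1e-12)         # reorder so that eta0 >= eta1 >= eta2
    return {
        "normal": eigvec[:, 0],                          # eigenvector of the smallest eigenvalue
        "curvature": e2 / (e0 + e1 + e2),
        "linearity": (e0 - e1) / e0,
        "planarity": (e1 - e2) / e0,
    }

pts = np.random.default_rng(0).normal(size=(500, 3)) * np.array([5.0, 5.0, 0.05])  # nearly planar
print(local_descriptors(pts, query_idx=0))
```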
### _LiDAR point cloud semantic segmentation_
The goal of semantic segmentation is to label each point as belonging to a specific semantic class. For AV segmentation tasks, these classes could be street, building, car, pedestrian, tree, or traffic light. When applying DL to point cloud segmentation, classification of small features is required [38]. However, LiDAR 3D point clouds are usually acquired at large scale, and they are irregularly shaped with changeable spatial contents. In reviewing papers from the past five years in this area, we group them into three schemes according to the type of data representation: point cloud based, voxel-based, and multi-view based models. There is limited research focusing on graph-based models, so we combine the graph-based and point cloud based models to illustrate their paradigms. Each type of model is represented by a compelling deep architecture, as shown in Fig. 7.
#### V-A1 Point cloud based networks
Point cloud based networks are mainly composed of two parts: feature embedding and network construction. For discriminative feature representation, both local and global features have been demonstrated to be crucial to the success of CNNs [12]. However, in order to apply conventional CNNs, the permutation and orientation problems of unordered and unoriented points require a discriminative feature embedding network. Besides, lightweight, effective, and efficient deep network construction is another key module that affects segmentation performance.
Local features are commonly extracted from point neighborhoods [104]; the most frequently used ones are the local normal and curvature [10, 12]. To improve the receptive field, PointNet [10] has proved to be a compelling architecture for extracting semantic features from unordered point sets. Thus, in [12, 105, 108, 109], a simplified PointNet is exploited to abstract local features from sampled point sets into high-level representations. Landrieu et al. [105] proposed the superpoint graph (SPG) to represent large 3D point clouds as a set of interconnected simple shapes coined superpoints; PointNet is then operated on these superpoints to embed features.
To solve the permutation problem and extract local features, Huang et al. [40] proposed a novel slice pooling layer that extracts local context from the input point features and outputs an ordered sequence of aggregated features. To this end, the input points are first grouped into slices, and a global representation for each slice is then generated by concatenating the point features within the slice. The advantage of this slice pooling layer is its low computation cost compared with point-based local features; however, the slice size is sensitive to the data density. In [110], bilateral Conv layers (BCL) are applied to perform convolutions on the occupied parts of a lattice for hierarchical and spatially-aware feature learning. BCL first maps the input points onto a sparse lattice, applies convolutional operations on the sparse lattice, and then smoothly interpolates the filtered signal back to the original input points.
To reduce the computation cost, an encoding-decoding framework is adopted in [108]. Features extracted at the same scale of abstraction are combined and then upsampled by 3D deconvolutions to generate the desired output sampling density, which is finally interpolated by latent nearest-neighbor interpolation to output per-point labels. However, the down-sampling and up-sampling operations hardly preserve edge information and thus cannot extract fine-grained features. In [40], RNNs are applied to model dependencies in the ordered global representation derived from slice pooling. Similar to sequence data, each slice is viewed as one timestamp, and its interaction with other slices also follows the timestamps in the RNN units. This enables the model to capture dependencies between slices.
Zhang et al. [65] proposed ReLu-NN, a four-layer MLP architecture, to learn embedded point features. However, for objects without discriminative features, such as shrubs or trees, the local spatial relationships are not fully exploited. To better leverage the rich spatial information of objects, Wang et al. constructed a lightweight and effective deep neural network with spatial pooling (DNNSP) [111] to learn point features. They clustered the input data into groups and then applied distance minimum spanning tree-based pooling to extract the spatial information among the points in the clustered point sets; finally, an MLP is used for classification with these features. In order to address multiple tasks, such as instance segmentation and object detection, with a simple architecture, Wang et al. [109] proposed the similarity group proposal network (SGPN). With the local and global point features extracted by PointNet, the feature extraction network generates a matrix that is then divided into three subsets, each passing through a single PointNet layer, and the resulting outputs are used to produce a similarity matrix, a confidence map, and a semantic segmentation map.
#### V-A2 Voxel-based networks
In voxel-based networks, the point clouds are first voxelized into grids and then learn features from these grids. The deep network is finally constructed to map these features into segmentation masks.
Wang et al. [106] proposed a multi-scale voxelization method to extract objects' spatial information at different scales and form a comprehensive description. At each scale, a neighboring cube with a selected edge length is constructed for a given point [112]. The cube is then divided into grid voxels of different sizes as a patch; the smaller the size, the finer the scale. Point density and occupancy are selected to represent each voxel. The advantage of this kind of voxelization is that it can accommodate objects of different sizes without losing their spatial information. In [113], the class probabilities for each voxel are predicted using a 3D-FCNN and then transferred back to the raw 3D points via trilinear interpolation. In [106], after the multi-scale voxelization of the point clouds, features at different scales and spatial resolutions are learned by a set of CNNs with shared weights and finally fused for the final prediction.
In the voxel-based point cloud segmentation task, there are two ways to label each point: (1) using the voxel label derived from the argmax of the predicted probabilities; (2) further globally optimizing the class labels of the point cloud based on spatial consistency. The first method is simple, but the result is provided at the voxel level and is inevitably influenced by noise. The second is more accurate but more complex, requiring additional computation. Because the inherent invariance of CNNs to spatial transformations affects segmentation accuracy [25], a Conditional Random Field (CRF) [106, 113, 114] is commonly adopted as a post-processing stage to recover fine-grained details from volumetric representations. CRFs have the advantage of combining low-level information, such as the interactions between points, to perform multi-class inference for per-point labeling tasks, which compensates for the fine local details that CNNs fail to capture.
#### V-A3 Multiview-based networks
As for multi-view based models, view rendering and deep architecture construction are two key modules for segmentation task. The first one is used to generate structural and well-organized 2D grids that can exploit existing CNN-based deep architectures. The second one is proposed to construct the most suitable and generative models for different data.
In order to extract local and global features simultaneously, some hand-designed feature descriptors are employed for representative information extraction. In [65, 111], the spin image descriptor is employed to represent point-based local features, which contains the global description of objects from partial views and clutters of local shape description. In [107], point splatting was applied to generate view images by projecting the points with a spread function into the image plane. The point is first projected into image coordinates of a virtual camera. For each projected point, its corresponding depth value and feature vectors such as normal are stored.
Once the points are projected into multi-view 2D images, some discriminative 2D deep networks can be exploited, such as VGG16 [87], AlexNet [86], GoogLeNet [88], and ResNet [89]. In [25], these deep networks have been detailed analyzed in 2D semantic segmentation. Among these methods, VGG16 [87], composed of 16 layers, is the most frequently used. Its main advantage is the use of stacked Conv layers with small receptive fields, which produces a lightweight network with limited parameters and increasing nonlinearity [25, 107, 115].
#### Iv-A4 Evaluation on Point cloud segmentation
Because the high volume of point clouds poses a great challenge to computation capability, we choose the models tested on the Reduced-8 Semantic3D dataset to compare their performance, as shown in Table IV. Reduced-8 shares the same training data as semantic-8 but uses only a small part of the test data, which also suits algorithms with high computation cost. The metrics used to compare these models are \(IoU_{i}\), \(\overline{IoU}\), and OA. The computation efficiency of these algorithms is not reported or compared because of differences in computation capacity, selected training data, and model architecture.
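For reference, the metrics \(IoU_{i}\), \(\overline{IoU}\) and OA used in Table IV can be computed from a point-wise confusion matrix as in the short sketch below; the class count and label arrays are placeholders.

```python
import numpy as np

def segmentation_scores(pred, gt, num_classes):
    """Per-class IoU_i, mean IoU and overall accuracy (OA) over all points."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)                         # rows: ground truth
    tp = np.diag(conf).astype(float)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp       # TP + FP + FN
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)
    return iou, iou.mean(), tp.sum() / conf.sum()
```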
### _3D objects detection (localization)_
The detection (and localization) of 3D objects in LiDAR point clouds can be summarized as bounding box prediction and objectness prediction [14]. In this paper, we mainly survey the LiDAR-only paradigm, which takes advantage of accurately geo-referenced information. Overall, there are two ways to represent the data in this paradigm: one detects and locates 3D objects directly from point clouds [118]; the other first converts the 3D points into regular grids, such as voxel grids, bird's eye view images, or front views, and then utilizes 2D detector architectures to extract objects from the images, with the 2D detection results finally back-projected into 3D space for the final 3D object location estimate [50]. Fig. 8 shows representative network frameworks for these data representations.
#### Iv-B1 3D objects detection (localization) from point clouds
The challenges of 3D object detection from sparse and large-scale point clouds are summarized as follows:
* The detected objects only occupy a very limited amount of the whole input data.
* The 3D object centroid can be far from any surface point thus hard to regress accurately in one step [42].
* The absence of points at 3D object centers. As LiDAR sensors only capture the surfaces of objects, 3D object centers are likely to lie in empty space, far away from any measured point.
Thus, a common procedure of 3D object detection and localization from large-scale point clouds is composed of
the following processes: first, the whole scene is roughly segmented and the coarse locations of the objects of interest are proposed; second, a feature is extracted for each proposed region; finally, the localization and object class are predicted by a bounding-box prediction network [118, 119].
Fig. 7: DL architectures for LiDAR point cloud segmentation with three different data representations: point cloud based networks represented by SPG [105], voxel-based networks represented by MSNet [106], and view-based networks represented by DePr3SS [107].
In [119], PointNet++ [12] is applied to generate per-point features over the whole input point cloud. Different from [118], each point is viewed as an effective proposal, which preserves the localization information. Localization and detection are then predicted from the extracted point-based proposal features together with local neighborhood context captured by an increasing receptive field and the input point features. This network preserves more accurate localization information but has a higher computation cost because it operates directly on point sets.
In [118], a 3D CNN with three Conv layers and multiple FC layers is applied to learn discriminative and robust object features. An intelligent eye window (EW) algorithm is then applied to the scene: the labels of the points belonging to the EW are predicted using the pre-trained 3D CNN, the evaluation result is fed to a deep Q-network (DQN) that adjusts the size and position of the EW, and the new EW is evaluated again by the 3D CNN and DQN until the EW contains only one object. Different from the traditional bounding box around a region of interest (RoI), the EW can reshape its size and change its window center automatically, which is suitable for objects of different scales. Once the position of the object is located, the object in the input window is classified with the learned features. In [118], the object features are extracted with 3D CNN models and then fed into a residual RNN [120] for category labeling.
Qi et al. [42] proposed VoteNet, a 3D object detection deep network based on Hough voting. The raw point cloud is input to PointNet++ [12] to learn point features. Based on these features, a group of seed points is sampled, and each seed generates votes from its neighboring features. The votes are then gathered to cluster the object centers and generate bounding box proposals for the final decision. Compared with the above two architectures, VoteNet is robust to sparse and large-scale point clouds and can localize object centers with high accuracy.
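As a rough illustration of the voting stage only, the toy sketch below assumes the per-seed centre offsets have already been produced by some learned regressor (in VoteNet, an MLP on PointNet++ features) and groups the resulting votes around sampled cluster centres; the radius and cluster count are arbitrary assumptions, and the proposal network is omitted.

```python
import numpy as np

def hough_votes_to_proposals(seed_xyz, seed_offsets, radius=0.3, num_clusters=16):
    """Each seed casts a vote for its object centre; votes are then grouped
    around cluster centres chosen by farthest point sampling."""
    votes = seed_xyz + seed_offsets                        # one vote per seed
    centres = [votes[0]]
    for _ in range(1, num_clusters):
        d = np.min(np.linalg.norm(votes[:, None] - np.array(centres)[None], axis=2), axis=1)
        centres.append(votes[np.argmax(d)])
    centres = np.array(centres)
    groups = [np.where(np.linalg.norm(votes - c, axis=1) < radius)[0] for c in centres]
    return centres, groups                                  # inputs to a proposal head
```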
#### Iv-B2 3D objects detection (localization) from regular voxel grid
Fig. 8: DL architectures for 3D object detection/localization with three different data representations: point cloud based networks represented by VoteNet [42], voxel-based networks represented by VoxelNet [13], and view-based networks represented by ComplexYOLO [116].
To better exploit CNNs, some approaches voxelize the 3D space into a voxel grid, which is represented by a scalar value such as occupancy, or by vector data extracted from the voxels [8]. In [121, 122], the 3D space is first discretized into grids of fixed size and each occupied cell is converted into a fixed-dimensional feature vector; non-occupied cells, containing no points, are represented by zero feature vectors. A binary occupancy indicator, the mean and variance of the reflectance, and three shape factors are used to form the feature vector. For simplicity, in [14] the voxelized grid is represented by a 4D array of length, width, height, and channels, and a binary value in one channel indicates whether points are observed in the corresponding cell. Zhou et al. [13] voxelized the 3D point cloud along the \(XYZ\) coordinates with a predefined spacing and grouped the points in each cell. A voxel feature encoding (VFE) layer is then proposed to achieve inter-point interaction within a voxel by combining per-point features and locally aggregated neighbor features. Stacking multi-scale VFE layers enables this architecture to learn discriminative features from local shape information.
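A minimal numpy sketch of one such VFE layer is shown below. It is a simplification: the weights are random placeholders rather than learned parameters, and the point augmentation keeps only the offset to the voxel centroid (VoxelNet also uses further augmentations and batch normalization).

```python
import numpy as np

def vfe_layer(voxel_points, w):
    """One voxel feature encoding step: augment each point with its offset to
    the voxel centroid, apply a shared fully-connected layer with ReLU, max-pool
    over the voxel, and concatenate the pooled feature back to every point."""
    centroid = voxel_points[:, :3].mean(axis=0)
    augmented = np.hstack([voxel_points, voxel_points[:, :3] - centroid])
    pointwise = np.maximum(augmented @ w, 0.0)             # shared FC + ReLU
    locally_aggregated = pointwise.max(axis=0)             # voxel-wise max pool
    return np.hstack([pointwise,
                      np.repeat(locally_aggregated[None], len(pointwise), axis=0)])

# usage: points of one voxel with (x, y, z, intensity) features
pts = np.random.rand(35, 4)
w = np.random.randn(7, 16) * 0.1                           # 4 + 3 offset dims -> 16
out = vfe_layer(pts, w)                                     # (35, 32) per-point features
```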
A voting scheme is adopted in [121, 122] to perform sparse convolution on the voxelized grid. Each occupied cell, weighted by the convolution kernels together with its surrounding cells in the receptive field, accumulates votes from its neighbors by flipping the CNN kernel along each dimension, and finally outputs voting scores for potential objects of interest. Building on that voting scheme, Engelcke et al. [122] used a ReLU non-linearity to produce a novel sparse 3D representation of these grids; this process is iterated and stacked as in conventional CNNs and finally outputs the prediction scores for each proposal. However, the voting scheme incurs a high computational cost. Thus, a modified region proposal network (RPN) is employed in [13] for object detection to reduce computation. This RPN is composed of three blocks of Conv layers, which downsample, filter, and upsample the input feature map, producing a probability score map and a regression map for object detection and localization.
#### Iv-B3 3D objects detection (localization) from 2D views
Some approaches project LiDAR point clouds into 2D views. These approaches are mainly composed of two steps: projection of the 3D points, followed by object detection on the projected images. There are several view generation methods for projecting 3D points into 2D images: BEV images [43, 116, 123, 124], front view images [123], spherical projections [50], and cylindrical projections [9].
Different from [50], in [43, 116, 123, 124] the point cloud is split into grids of fixed size and converted to a bird's eye view (BEV) image whose three channels encode height, intensity, and density information. Considering efficiency and performance, only the maximum height, the maximum intensity, and the normalized density within each grid cell are converted into a single bird's-eye-view RGB map [116]. In [125], only the maximum, median, and minimum height values are selected as the channels of the BEV image, so that conventional 2D RGB deep models can be exploited without modification. Dewan et al. [16] selected the range, intensity, and height values for the three channels. In [8], the feature representation of each BEV pixel is composed of occupancy and reflectance values.
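The sketch below illustrates this three-channel BEV encoding (maximum height, maximum intensity, normalized density); the ranges, cell size, and density normalizer are illustrative assumptions rather than the exact values used in the cited works.

```python
import numpy as np

def bev_maps(points, x_range=(0, 70.4), y_range=(-40, 40), cell=0.1, norm=64.0):
    """points: (N, 4) array of (x, y, z, intensity); returns a (3, H, W) BEV image."""
    W = int((x_range[1] - x_range[0]) / cell)
    H = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, H, W), dtype=np.float32)
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    x, y, z, r = points[keep].T
    col = ((x - x_range[0]) / cell).astype(int)
    row = ((y - y_range[0]) / cell).astype(int)
    np.maximum.at(bev[0], (row, col), z)                   # max height per cell
    np.maximum.at(bev[1], (row, col), r)                   # max intensity per cell
    np.add.at(bev[2], (row, col), 1.0)                     # point count per cell
    bev[2] = np.minimum(1.0, np.log(bev[2] + 1) / np.log(norm))  # normalized density
    return bev
```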
However, due to the sparsity of point clouds, projecting them onto the 2D image plane produces a sparse 2D point map. Chen et al. [123] therefore added a front view representation to compensate for the missing information in BEV images; the point cloud is projected onto a cylindrical surface to produce dense front view images. In order to keep 3D spatial information during projection, points can also be projected at multiple viewing angles evenly sampled on a sphere [50]. Pang et al. [50] first discretized the 3D points into cells of fixed size and then sampled the scene to generate multi-view images that form positive and negative training samples. The benefit of this kind of dataset generation is that the spatial relationships and features of the scene can be better exploited. However, the model is not robust to new scenes and cannot learn new features beyond the constructed dataset.
As for 2D object detectors, there exist numerous compelling deep models, such as VGG-16 [87] and Faster R-CNN [126]. A comprehensive survey of 2D detectors for object detection is provided in [23].
#### Vi-B4 Evaluation on 3D objects localization and detection
In order to compare deep models for 3D object localization and detection, the KITTI bird's eye view benchmark and the KITTI 3D object detection benchmark [60] are selected. As reported in [60], all non- and weakly-occluded (\(<20\%\)) objects that are neither truncated nor smaller than 40 px in height are evaluated, and truncated or occluded objects are not counted as false positives. Only bounding box overlaps of at least \(50\%\) for pedestrians and cyclists, and \(70\%\) for cars, are considered for the detection, localization, and orientation estimation measurements. In addition, the benchmark classifies the difficulty of the tasks into three levels: easy, moderate, and hard.
Both accuracy and execution time are compared when evaluating these algorithms, because real-time detection and localization are crucial for AVs [127]. For the localization task, the KITTI bird's eye view benchmark is chosen, and the comparison results are shown in Table V. 3D detection is evaluated on the KITTI 3D object detection benchmark; Table V shows the runtime and the average precision (\(AP_{3D}\)) on the validation set. For each bounding box, only predictions whose 3D IoU exceeds 0.25 are considered valid localization/detection boxes [127].
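For completeness, the average precision reported in Table V can be computed from scored detections as sketched below, assuming each detection has already been matched to ground truth at the required IoU threshold (50%/70% in 2D, 0.25 in 3D as stated above); the 11-point interpolation is one common protocol and is used here only for illustration.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """AP from detection confidence scores and TP/FP flags at a fixed IoU threshold."""
    order = np.argsort(-np.asarray(scores))
    flags = np.asarray(is_true_positive, dtype=float)[order]
    tp = np.cumsum(flags)
    fp = np.cumsum(1.0 - flags)
    recall = tp / max(num_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # 11-point interpolated AP (PASCAL/KITTI-style)
    return np.mean([precision[recall >= r].max() if np.any(recall >= r) else 0.0
                    for r in np.linspace(0, 1, 11)])
```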
### _3D object classification_
Semantic object classification/recognition is crucial for the safe and reliable driving of AVs in unstructured and uncontrolled real-world environments [67]. Existing 3D object classification methods mainly focus on CAD data (e.g., ModelNet40 [30]) or RGBD data (e.g., NYUv2 [128]). However, these data have uniform point distributions, complete shapes, and limited noise, occlusion, and background clutter, and thus pose limited challenges for 3D classification compared with LiDAR point clouds [12, 10]. The compelling deep architectures applied to CAD data have been analyzed in terms of four types of data representations in Section III. In this part, we mainly focus on deep models for the classification task on LiDAR data.
#### Vi-C1 Volumetric architectures
The voxelization of point clouds depends on the spatial resolution, orientation, and origin of the data [67]. A voxelization that provides enough recognizable information without increasing the computation cost is crucial for DL models. Thus, for LiDAR data, voxels with a spatial resolution such as \((0.1m)^{3}\) are adopted in [67] to voxelize the input points. For each voxel, a binary occupancy grid, a density grid, and a hit grid are calculated to estimate its occupancy. Input, Conv, pooling, and FC layers are combined to construct the CNN. Such an architecture can exploit the spatial structure of the data and extract global features via pooling; however, the FC layers incur a high computation cost and lose the spatial information between voxels. The network in [130], based on VoxNet [67], takes a 3D voxel grid as input and contains two Conv layers with 3D filters followed by two FC layers. Different from other category-level classification tasks, the task is treated as a multi-task problem, where orientation estimation and class label prediction are processed in parallel.
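A simplified sketch of the three input grids is given below. It is not the exact VoxNet formulation: the real hit grid distinguishes hit, free, and unknown space via ray tracing from the sensor origin, which is omitted here, and the resolution and grid shape are illustrative assumptions.

```python
import numpy as np

def voxnet_grids(points, origin, res=0.1, shape=(32, 32, 32)):
    """Binary occupancy grid, normalized density grid, and a (simplified) hit grid."""
    idx = np.floor((points[:, :3] - origin) / res).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    idx = idx[keep]
    counts = np.zeros(shape, dtype=np.float32)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    occupancy = (counts > 0).astype(np.float32)            # binary occupancy grid
    density = counts / max(counts.max(), 1.0)              # density grid
    hit = occupancy.copy()                                  # hit grid (hits only, no ray casting)
    return occupancy, density, hit
```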
For simplicity and efficiency, Zhi et al. [93, 131] adopted the binary grid of [67] to reduce the computation cost; however, they only consider the voxels inside the surface, ignoring the difference between unknown and free space. Normal vectors, which carry local position and orientation information, have been demonstrated to be stronger than binary grids in [132]. Similar to [130], classification is treated as two tasks: predicting the object class label and predicting its orientation. To extract local and global features, the first task contains two sub-tasks: the first predicts the object label from the whole input shape, while the second predicts the object label from part of the shape. The orientation prediction is introduced to exploit an orientation augmentation scheme. The whole network is composed of three 3D Conv layers and two 3D max-pooling layers, which makes it lightweight and demonstrably robust to occlusion and clutter.
#### Vi-C2 Multi-view architectures
The merit of view-based methods is their ability to exploit both local and global spatial relationships among points. Luo et al. [45] designed three feature descriptors to extract local and global features from point clouds: the first captures the horizontal geometric structure, the second extracts vertical information, and the last provides complete spatial information. To better leverage multi-view data representations, You et al. [91] integrated the merits of point cloud and multi-view data and achieved better results than MVCNN [76] in 3D classification. In addition, the high-level features extracted from the view representations by MVCNN [76] are embedded with an attention fusion scheme to compensate for the local features extracted from the point cloud representation. Such attention-aware features have proved effective in representing the discriminative information of 3D data.
However, the view generation process varies between objects, because the special attributes of an object can contribute to computation savings and accuracy improvements. For example, in road marking extraction, the elevation, derived mainly from the \(Z\) coordinate, contributes little to the algorithm, since the road surface is essentially a 2D structure. As a result, Wen et al. [47] directly projected the 3D point cloud onto a horizontal plane and gridded it as a 2D image. Luo et al. [45] fed the three view descriptors separately into JointNet to capture low-level features; the network then learns high-level features through convolutional operations on the input features and finally fuses the prediction scores. The whole framework is composed of five Conv layers, a spatial pyramid pooling (SPP) layer [133], two FC layers, and a reshape layer, and the output results are fused through Conv layers and multi-view pooling layers. The well-designed view descriptors help the network achieve compelling results in object classification tasks.
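The horizontal projection used for road-marking extraction can be sketched as below: points are dropped onto the XY plane and gridded into a 2D intensity image. The cell size and the aggregation rule are illustrative assumptions, not the settings of [47].

```python
import numpy as np

def intensity_image_from_points(points, cell=0.04, agg="max"):
    """points: (N, 4) array of (x, y, z, intensity); returns a 2D intensity image."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    col, row = ((xy - mins) / cell).astype(int).T
    img = np.zeros((row.max() + 1, col.max() + 1), dtype=np.float32)
    if agg == "max":
        np.maximum.at(img, (row, col), points[:, 3])
    else:                                                   # mean intensity per cell
        cnt = np.zeros_like(img)
        np.add.at(img, (row, col), points[:, 3])
        np.add.at(cnt, (row, col), 1.0)
        img = img / np.maximum(cnt, 1.0)
    return img
```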
Another representative architecture among 2D deep models is the encoder-decoder, since its down-sampling and up-sampling compress the information among pixels and extract the most representative features. In [47], Wen et al. proposed a modified U-net model to classify road markings: the point cloud is first mapped into intensity images, and a hierarchical U-net module then classifies road markings through multi-scale clustering via CNNs. Because such down-sampling and up-sampling struggle to preserve fine-grained patterns, a GAN is adopted to restore small road markings, broken lane lines, and missing markings by exploiting expert context knowledge. This architecture combines the efficiency of U-net with the completion ability of the GAN to classify road markings with high efficiency and accuracy.
#### V-B3 Evaluation on 3D objects classification
There are few published LiDAR point cloud benchmarks specific to the 3D object classification task. Thus, the Sydney Urban Objects dataset is selected, because the performance of several state-of-the-art methods is available on it. The \(F_{1}\) score is used to evaluate these published algorithms [45], as shown in Table VI.
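The per-class \(F_{1}\) score behind such comparisons can be computed as below; note that published results on this benchmark may report a (class-weighted) average of these values.

```python
import numpy as np

def f1_scores(pred, gt, num_classes):
    """Per-class F1 = harmonic mean of precision and recall, from a confusion matrix."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)
    tp = np.diag(conf).astype(float)
    precision = tp / np.maximum(conf.sum(axis=0), 1)
    recall = tp / np.maximum(conf.sum(axis=1), 1)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-9)
```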
## VI Research Challenges and Opportunities
DL architectures developed over the recent five years using LiDAR point clouds have achieved significant success in the field of autonomous driving for the 3D segmentation, detection, and classification tasks. However, there still exists a large gap between cutting-edge results and human-level performance. Although much work remains to be done, we summarize the remaining challenges specific to data, deep architectures, and tasks as follows:
#### Vi-1 **Multi-source Data Fusion**
To compensate for the lack of 2D semantic and texture information and the incompleteness of 3D points, imagery, LiDAR point clouds, and radar data can be fused to provide accurate, geo-referenced, and information-rich cues for AVs' navigation and decision making [134]. In addition, data acquired by low-end LiDAR (e.g., Velodyne HDL-16E) and high-end LiDAR (e.g., Velodyne HDL-64E) sensors can also be fused. However, several challenges remain: first, the sparsity of point clouds causes inconsistent and missing data when fusing multi-source data; second, existing DL-based data fusion schemes are processed in separate pipelines rather than in an end-to-end manner [119, 135, 41].
#### Vi-2 **Robust Data Representation**
The unstructured and unordered data format [12, 10] poses a great challenge for robust 3D DL applications. Although several effective data representations exist, such as voxels [67], point clouds [12, 10], graphs [74, 129], 2D views [78], and novel 3D representations [136, 137, 138], no robust and memory-efficient 3D data representation has yet been agreed upon. For example, although voxels solve the ordering problem, their computation cost increases cubically with the voxel resolution [67, 30]. As for point clouds and graphs, the required permutation invariance and the available computation capability limit the number of points that can be processed, which inevitably constrains the performance of the deep models [10, 74].
#### Vi-B3 **Effective and More Efficient Deep Frameworks**
Due to the limited memory and computation facilities of the platforms embedded in AVs, effective and efficient DL architectures are crucial for the wide deployment of automated AV systems. Although there have been significant improvements in 3D DL models, such as PointNet [10], PointNet++ [12], PointCNN [71], DGCNN [74], RotationNet [78] and other work [52, 139, 140, 141], only a limited number of models can achieve real-time segmentation, detection, and classification. Research should focus on designing lightweight and compact architectures.
#### Vi-B4 **Context Knowledge Extraction**
Due to the sparsity of point clouds and the incompleteness of scanned objects, detailed context information about objects is not fully exploited. For example, the semantic content of traffic signs provides crucial cues for AV navigation, but existing deep models cannot extract such information completely from point clouds. Multi-scale feature fusion approaches [142, 143, 144] have demonstrated significant improvements in context information extraction, and GANs [47] can be utilized to improve the completeness of 3D point clouds. However, these frameworks do not solve the sparsity and incompleteness problems for context information extraction in an end-to-end trainable way.
#### Vi-B5 **Multi-task Learning**
The approaches related to LiDAR point clouds for AVs span several tasks, such as scene segmentation, object detection (e.g., cars, pedestrians, traffic lights) and classification (e.g., road markings, traffic signs). All of their results are commonly fused and reported to a decision system for final control [1]. However, few DL architectures combine these multiple LiDAR point cloud tasks [15, 130]. Thus, the information shared among them is not fully exploited to obtain models that generalize better with less computation.
#### Vi-B6 **Weakly Supervised/Unsupervised Learning**
The existing state-of-the-art deep models are commonly trained in a supervised manner using data labeled with 3D object bounding boxes or per-point segmentation masks [8, 74, 119]. However, fully supervised models have some limitations. The first is the limited availability of high-quality, large-scale datasets and benchmarks covering general objects. The second is that the generalization capability of fully supervised models is not robust to unseen or untrained objects. Weakly supervised [145] and unsupervised learning [146, 147] should be developed to increase model generalization and to mitigate the data scarcity problem.
## VII Conclusion
In this paper, we have provided a systematic review of the state-of-the-art DL architectures using LiDAR point clouds in the field of autonomous driving for the specific tasks of segmentation, detection, and classification. Milestone 3D deep models and 3D DL applications for these three tasks have been summarized and evaluated, with a comparison of their merits and demerits. Research challenges and opportunities were listed to advance the potential development of DL in the field of autonomous driving.
## Acknowledgment
The authors would like to thank Professors José Marcato Junior and Wesley Nunes Gonçalves for their careful proofreading. We would also like to thank the anonymous reviewers for their insightful comments and suggestions.
## References
* [1] J. Janai, F. Guney, A. Behl, and A. Geiger, \"Computer vision for autonomous vehicles: Problems, datasets and state-of-the-art,\" _arXiv:1704.05519_, 2017.
* [2] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt _et al._, \"Towards fully autonomous driving: Systems and algorithms,\" in _IEEE Intell. Vehicles Symp._, 2011, pp. 163-168.
* [3] J. Van Brummelen, M. O'Brien, D. Gruyer, and H. Najjaran, \"Autonomous vehicle perception: The technology of today and tomorrow,\" _Transp. Res. Part C Emerg. Technol._, vol. 89, pp. 384-406, 2018.
* [4] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang, \"The apolloscape dataset for autonomous driving,\" in _Proc. IEEE CVPR Workshops_, 2018, pp. 954-960.
* [5] R. P. D. Vrvacoam, M. Bertozzi, P. Cerri, F. N. Martins, and R. F. Vassallo, \"Self-localization based on visual lane marking maps: An accurate low-cost approach for autonomous driving,\" _IEEE Trans. Intell. Transp. Syst._, vol. 19, no. 2, pp. 582-597, 2018.
* [6] F. Remondino, \"Heritage recording and 3d modeling with photogrammetry and 3d scanning,\" _Remote Sens._, vol. 3, no. 6, pp. 1104-1138, 2011.
* [7] B. Wu, A. Wan, X. Yue, and K. Keutzer, \"Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud,\" in _IEEE ICRA_, 2018, pp. 1887-1893.
* [8] B. Yang, W. Luo, and R. Urtasun, \"Pixor: Real-time 3d object detection from point clouds,\" in _Proc. IEEE CVPR_, 2018, pp. 7652-7660.
* [9] B. Li, T. Zhang, and T. Xia, \"Vehicle detection from 3d lidar using fully convolutional network,\" _arXiv:1608.07916_, 2016.
* [10] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, \"Pointnet: Deep learning on point sets for 3d classification and segmentation,\" in _Proc. IEEE CVPR_, 2017, pp. 652-660.
* [11] A. Boulch, B. Le Saux, and N. Audebert, \"Unstructured point cloud semantic labeling using deep segmentation networks.\" in _3DOR_, 2017.
* [12] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, \"Pointnet++: Deep hierarchical feature learning on point sets in a metric space,\" in _Adv Neural Inf Process Syst_, 2017, pp. 5099-5108.
* [13] Y. Zhou and O. Tuzel, \"Voxelnet: End-to-end learning for point cloud based 3d object detection,\" in _Proc. IEEE CVPR_, 2018, pp. 4490-4499.
* [14] B. Li, \"3d fully convolutional network for vehicle detection in point cloud,\" in _IEEE/RSJ IROS_, 2017, pp. 1513-1518.
* [15] C. R. Qi, H. Su, M. Niessner, A. Dai, M. Yan, and L. J. Guibas, \"Volumetric and multi-view cnns for object classification on 3d data,\" in _Proc. IEEE CVPR_, 2016, pp. 5648-5656.
* [16] A. Dewan, G. L. Oliveira, and W. Burgard, \"Deep semantic classification for 3d lidar data,\" in _IEEE/RSJ IROS_, 2017, pp. 3544-3549.
* [17] Y. LeCun, Y. Bengio, and G. Hinton, \"Deep learning,\" _Nature_, vol. 521, no. 7553, p. 436, 2015.
* [18] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, \"Efficient processing of deep neural networks: A tutorial and survey,\" _Proc. IEEE_, vol. 105, no. 12, pp. 2295-2329, 2017.
* [19] Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, \"Deep learning for visual understanding: A review,\" _Neurocomput_, vol. 187, pp. 27-48, 2016.
* [20] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, \"Deep learning for computer vision: A brief review,\" _Comput Intell Neurosci._, vol. 2018, pp. 1-13, 2018.
* [21] L. Zhang, L. Zhang, and B. Du, \"Deep learning for remote sensing data: A technical tutorial on the state of the art,\" _IEEE Geosci. Remote Sens. Mag._, vol. 4, no. 2, pp. 22-40, 2016.
* [22] X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, and F. Fraundorfer, \"Deep learning in remote sensing: A comprehensive review and list of resources,\" _IEEE Geosci. Remote Sens. Mag._, vol. 5, no. 4, pp. 8-36, 2017.
* [23] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikainen, \"Deep learning for generic object detection: A survey,\" _arXiv:1809.02165_, 2018.
* [24] Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, \"Object detection with deep learning: A review,\" _IEEE Trans Neural Netw Learn Syst._, 2019.
* [25] A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, and J. Garcia-Rodriguez, \"A review on deep learning techniques applied to semantic segmentation,\" _arXiv:1704.06857_, 2017.
* [26] W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, and F. E. Alsaadi, \"A survey of deep neural network architectures and their applications,\" _Neurocomput_, vol. 234, pp. 11-26, 2017.
* [27] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, \"Geometric deep learning: going beyond euclidean data,\" _IEEE Signal Process Mag._, vol. 34, no. 4, pp. 18-42, 2017.
* [28] A. Ioannloo, E. Chatzlariari, S. Nikolopoulos, and I. Kompatsiaris, \"Deep learning advances in computer vision with 3d data: A survey,\" _ACM CSUR_, vol. 50, no. 2, p. 20, 2017.
* [29] E. Ahmed, A. Saint, A. E. R. Shabayek, K. Cherenkova, R. Das, G. Gusev, D. Aouda, and B. Otterstein, \"Deep learning advances on different 3d data representations: A survey,\" _arXiv:1808.01462_, 2018.
* [30] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, \"3d shapenets: A deep representation for volumetric shapes,\" in _Proc. IEEE CVPR_, 2015, pp. 1912-1920.
* [31] L. Ma, Y. Li, J. Li, C. Wang, R. Wang, and M. Chapman, \"Mobile laser scanned point-clouds for road object detection and extraction: A review,\" _Remote Sens._, vol. 10, no. 10, p. 1531, 2018.
* [32] H. Guan, J. Li, S. Cao, and Y. Yu, \"Use of mobile lidar in road information inventory: A review,\" _Int J Image Data Fusion_, vol. 7, no. 3, pp. 219-242, 2016.
* [33] E. Che, J. Jung, and M. J. Olsen, \"Object recognition, segmentation, and classification of mobile laser scanning point clouds: A state of the art reviews,\" _Sensors_, vol. 19, no. 4, p. 810, 2019.
* [34] R. Wang, J. Peethambaran, and D. Chen, \"Lidar point clouds to 3-d urban models : a review,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 2, pp. 606-627, 2018.
* [35] X.-F. Hana, J. S. Jin, J. Xie, M.-J. Wang, and W. Jiang, \"A comprehensive review of 3d point cloud descriptors,\" _arXiv:1802.02297_, 2018.
* [36] E. Arnold, O. Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, and A. Mourakitis, \"A survey on 3d object detection methods for autonomous driving applications,\" _IEEE Trans. Intell. Transp. Syst_, 2019.
* [37] W. Liu, J. Sun, W. Li, T. Hu, and P. Wang, \"Deep learning on point clouds and its application: A survey,\" _Sens._, vol. 19, no. 19, p. 4188, 2019.
* [38] M. Treml, J. Arjona-Medina, T. Unterthiner, R. Durgesh, F. Friedmann, P. Schuberth, A. Mayr, M. Heusel, M. Hofmarcher, M. Widrich _et al._, \"Speeding up semantic segmentation for autonomous driving,\" in _MILTS, NIPS Workshop_, vol. 1, 2016, p. 5.
* [39] A. Nguyen and B. Le, \"3d point cloud segmentation: A survey,\" in _RAM_, 2013, pp. 225-230.
* [40] Q. Huang, W. Wang, and U. Neumann, \"Recurrent slice networks for 3d segmentation of point clouds,\" in _Proc. IEEE CVPR_, 2018, pp. 2626-2635.
* [41] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, \"Frustum pointnets for 3d object detection from rgb-d data,\" in _Proc. IEEE CVPR_, 2018, pp. 918-927.
* [42] C. R. Qi, O. Litany, K. He, and L. J. Guibas, \"Deep hough voting for 3d object detection in point clouds,\" _arXiv:1904.09664_, 2019.
* [43] J. Beltran, C. Guindel, F. M. Moreno, D. Cruzado, F. Garcia, and A. De La Escalera, \"Birdnet: a 3d object detection framework from lidar information,\" in _ITSC_, 2018, pp. 3517-3523.
* [44] A. Kundu, Y. Li, and J. M. Rehg, \"3d can: Instance-level 3d object reconstruction via render-and-compare,\" in _Proc. IEEE CVPR_, 2018, pp. 3559-3568.
* [45] Z. Luo, J. Li, Z. Xiao, Z. G. Mou, X. Cai, and C. Wang, \"Learning high-level features by fusing multi-view representation of mls point clouds for 3d object recognition in road environments,\" _ISPRS J. Photogramm. Remote Sens._, vol. 150, pp. 44-58, 2019.
* [46] Z. Wang, L. Zhang, T. Fang, P. T. Mathiopoulos, X. Tong, H. Qu, Z. Xiao, F. Li, and D. Chen, \"A multiscale and hierarchical feature extraction method for terrestrial laser scanning point cloud classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 5, pp. 2409-2425, 2015.
* [47] C. Wen, X. Sun, J. Li, C. Wang, Y. Guo, and A. Habib, \"A deep learning framework for road marking extraction, classification and completion from mobile laser scanning point clouds,\" _ISPRS J. Photogramm. Remote Sens._, vol. 147, pp. 178-192, 2019.
* [48] T. Hackel, J. D. Wegner, and K. Schindler, \"Joint classification and contour extraction of large 3d point clouds,\" _ISPRS J. Photogramm. Remote Sens._, vol. 130, pp. 231-245, 2017.
* [49] B. Kumar, G. Pandey, B. Lohani, and S. C. Misra, \"A multi-faceted cmn architecture for automatic classification of mobile lidar data and an algorithm to reproduce point cloud samples for enhanced training,\" _ISPRS J. Photogramm. Remote Sens._, vol. 147, pp. 80-89, 2019.
* [50] G. Pang and U. Neumann, \"3d point cloud object detection with multi-view convolutional neural network,\" in _IEEE ICPR_, 2016, pp. 585-590.
* [51] A. Tagliascchi, H. Zhang, and D. Cohen, \"Curve skeleton extraction from incomplete point cloud,\" in _ACM Trans. Graph_, vol. 28, no. 3, 2009, p. 71.
* [52] Y. Liu, B. Fan, S. Xiang, and C. Pan, \"Relation-shape convolutional neural network for point cloud analysis,\" _arXiv:1904.07601_, 2019.
* [53] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-Or, \"Consolidation of unorganized point clouds for surface reconstruction,\" _ACM Trans. Graph_, vol. 28, no. 5, p. 176, 2009.
* [54] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \"Vision meets robotics: The kitti dataset,\" _Int. J. Rob Res_, vol. 32, no. 11, pp. 1231-1237, 2013.
* [55] K. Jo, J. Kim, D. Kim, C. Jang, and M. Sunwoo, \"Development of autonomous car-part ii: A case study on the implementation of an autonomous driving system based on distributed architecture,\" _IEEE Trans. Ind. Electron._, vol. 62, no. 8, pp. 5119-5132, 2015.
* [56] T. Hackel, N. Savinov, L. Ladicky, J. D. Wegner, K. Schindler, and M. Pollefeys, \"Semantic3d. net: A new large-scale point cloud classification benchmark,\" _arXiv:1704.03834_, 2017.
* [57] D. Munoz, J. A. Bagnell, N. Vandepel, and M. Hebert, \"Contextual classification with functional max-margin markov networks,\" in _Proc. IEEE CVPR_, 2009, pp. 975-982.
* [58] B. Vallet, M. Bredif, A. Serna, B. Marcotegui, and N. Paparodiitis, \"Terramobilita/iqmulus urban point cloud analysis benchmark,\" _Comput. Graph_, vol. 49, pp. 126-133, 2015.
* [59] X. Roynard, J.-E. Deschaud, and F. Goulette, \"Classification of point cloud scenes with multiscale voxel deep network,\" _arXiv:1804.03583_, 2018.
* [60] A. Geiger, P. Lenz, and R. Urtasun, \"Are we ready for autonomous driving? the kitti vision benchmark suite,\" in _Proc. IEEE CVPR_, 2012, pp. 3354-3361.
* [61] M. De Deuge, A. Quadros, C. Hung, and B. Douillard, \"Unsupervised feature learning for classification of outdoor 3d scans,\" in _ACRA_, vol. 2, 2013, p. 1.
* [62] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, \"The pascal visual object classes challenge: A retrospective,\" _Int. J. Comput. Vision_, vol. 111, no. 1, pp. 98-136, 2015.
* [63] L. Yan, Z. Li, H. Liu, J. Tan, S. Zhao, and C. Chen, \"Detection and classification of pole-like road objects from mobile lidar data in motorway environment,\" _Opt Laser Technol._, vol. 97, pp. 272-283, 2017.
* [64] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, \"1 year, 1000 km: The oxford robotcar dataset,\" _Int J Rob Res_, vol. 36, no. 1, pp. 3-15, 2017.
* [65] L. Zhang, Z. Li, A. Li, and F. Liu, \"Large-scale urban point cloud labeling and reconstruction,\" _ISPRS J. Photogramm. Remote Sens._, vol. 138, pp. 86-100, 2018.
* [66] X. Chen, K. Kundu, Y. Zhu, H. Ma, S. Fidler, and R. Urtasun, \"3d object proposals using stereo imagery for accurate object class detection,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 40, no. 5, pp. 1259-1272, 2018.
* [67] D. Maturana and S. Scherer, \"Voxnet: A 3d convolutional neural network for real-time object recognition,\" in _IEEE/RSJ IROS_, 2015, pp. 922-928.
* [68] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, \"Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling,\" in _Adv Neural Inf Process Syst_, 2016, pp. 82-90.
* [76] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, \"Multi-view convolutional neural networks for 3d shape recognition,\" in _Proc. IEEE ICCV_, 2015, pp. 945-953.
* [77] A. Dai and M. Niessner, \"3dmv: Joint 3d-multi-view prediction for 3d semantic scene segmentation,\" in _ECCV_, 2018, pp. 452-468.
* [78] A. Kanezaki, Y. Matsushita, and Y. Nishida, \"Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints,\" in _Proc. IEEE CVPR_, 2018, pp. 5010-5019.
* [79] J. Long, E. Shelhamer, and T. Darrell, \"Fully convolutional networks for semantic segmentation,\" in _Proc. IEEE CVPR_, 2015, pp. 3431-3440.
* [80] G. Vosselman, B. G. Gorte, G. Sithole, and T. Rabbani, \"Recognising structure in laser scanner point clouds,\" _Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci._, vol. 46, no. 8, pp. 33-38, 2004.
* [81] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" in _Adv. Neural Inf. Process Syst._, 2014, pp. 2672-2680.
* [82] M. Tatarchenko, A. Dosovitskiy, and T. Brox, \"Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs,\" in _Proc. IEEE ICCV_, 2017, pp. 2088-2096.
* [83] A. Miller, V. Jain, and J. L. Mundy, \"Real-time rendering and dynamic updating of 3-d volumetric data,\" in _Proc. GPGPU_, 2011, p. 8.
* [84] C. Wang, B. Samari, and K. Siddiqi, \"Local spectral graph convolution for point set feature learning,\" in _ECCV_, 2018, pp. 52-66.
* [85] L. Wang, Y. Huang, Y. Hou, S. Zhang, and J. Shan, \"Graph attention convolution for point cloud semantic segmentation,\" in _Proc. IEEE CVPR_, 2019, pp. 10 296-10 305.
* [86] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" in _Adv Neural Inf Process Syst._, 2012, pp. 1097-1105.
* [87] K. Simonyan and A. Zisserman, \"Very deep convolutional networks for large-scale image recognition,\" _arXiv:1409.1556_, 2014.
* [88] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, \"Going deeper with convolutions,\" in _Proc. IEEE CVPR_, 2015, pp. 1-9.
* [89] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proc. IEEE CVPR_, 2016, pp. 770-778.
* [90] J.-C. Su, M. Gadelha, R. Wang, and S. Maji, \"A deeper look at 3d shape classifiers,\" in _ECCV_, 2018, pp. 0-0.
* [91] H. You, Y. Feng, R. Ji, and Y. Gao, \"Pvnet: A joint convolutional network of point cloud and multi-view for 3d shape recognition,\" in _2018 ACM Multimedia Conference on Multimedia Conference_, 2018, pp. 1310-1318.
* [92] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein _et al._, \"Imagenet large scale visual recognition challenge,\" _Int. J. Comput. Vision_, vol. 115, no. 3, pp. 211-225, 2015.
* [93] S. Zhi, Y. Liu, X. Li, and Y. Guo, \"Toward real-time 3d object recognition: a lightweight volumetric cnn framework using multitask learning,\" _Comput Graph_, vol. 71, pp. 199-207, 2018.
* [94] ----, \"Lightnet: A lightweight 3d convolutional neural network for real-time 3d object recognition.\" in _3DOR_, 2017.
* [95] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Niessner, \"Scancomplete: Large-scale scene completion and semantic segmentation for 3d scans,\" in _Proc. IEEE CVPR_, 2018, pp. 4578-4587.
* [96] J. Li, B. M. Chen, and G. Hee Lee, \"So-net: Self-organizing network for point cloud analysis,\" in _Proc. IEEE CVPR_, 2018, pp. 9397-9406.
* [97] P. Huang, M. Cheng, Y. Chen, H. Luo, C. Wang, and J. Li, \"Traffic sign occlusion detection using mobile laser scanning point clouds,\" _IEEE Trans. Intell. Transg. Syst._, vol. 18, no. 9, pp. 2364-2376, 2017.
* [98] H. Lei, N. Akhtar, and A. Mian, \"Spherical convolutional neural network for 3d point clouds,\" _arXiv:1805.07872_, 2018.
* [99] F. Engelmann, T. Kontoigiann, J. Schulst, and B. Leibe, \"Know what your neighbors do: 3d semantic segmentation of point clouds,\" in _ECCV_, 2018, pp. 0-0.
* [100] M. Weinmann, B. Jutzi, S. Hinz, and C. Mallet, \"Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers,\" _ISPRS J. Photogramm. Remote Sens._, vol. 105, pp. 286-304, 2015.
* [101] E. Che and M. J. Olsen, \"Fast ground filtering for tls data via scanline density analysis,\" _ISPRS J. Photogramm. Remote Sens._, vol. 129, pp. 226-240, 2017.
* [102] A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, \"Octree-based region growing for point cloud segmentation,\" _ISPRS J. Photogramm. Remote Sens._, vol. 104, pp. 88-100, 2015.
* [103] R. B. Rusu and S. Cousins, \"Point cloud library (pcl),\" in _2011 IEEE ICRA_, 2011, pp. 1-4.
* [104] H. Thomas, F. Goulette, J.-E. Deschaud, and B. Marcotegui, \"Semantic classification of 3d point clouds with multiscale spherical neighborhoods,\" in _3DV_, 2018, pp. 390-398.
* [105] L. Landrieu and M. Simonovsky, \"Large-scale point cloud semantic segmentation with superpoint graphs,\" in _Proc. IEEE CVPR_, 2018, pp. 4558-4567.
* [106] L. Wang, Y. Huang, J. Shan, and L. He, \"Msnet: Multi-scale convolutional network for point cloud classification,\" _Remote Sens._, vol. 10, no. 4, p. 612, 2018.
* [107] F. J. Lawin, M. Danelljan, P. Tosteberg, G. Bhat, F. S. Khan, and M. Felsberg, \"Deep projective 3d semantic segmentation,\" in _CAIP_, 2017, pp. 95-107.
* [108] D. Rethage, J. Wald, J. Sturm, N. Navab, and F. Tombari, \"Fully-convolutional point networks for large-scale point clouds,\" in _ECCV_, 2018, pp. 596-611.
* [109] W. Wang, R. Yu, Q. Huang, and U. Neumann, \"Sgpn: Similarity group proposal network for 3d point cloud instance segmentation,\" in _Proc. IEEE CVPR_, 2018, pp. 2569-2578.
* [110] H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M.-H. Yang, and J. Kautz, \"Splatnet: Sparse lattice networks for point cloud processing,\" in _Proc. IEEE CVPR_, 2018, pp. 2530-2539.
* [111] Z. Wang, L. Zhang, L. Zhang, R. Li, Y. Zheng, and Z. Zhu, \"A deep neural network with spatial pooling (dnnsp) for 3-d point cloud classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 8, pp. 4594-4604, 2018.
* [112] J. Huang and S. You, \"Point cloud labeling using 3d convolutional neural network,\" in _ICPR_, 2016, pp. 2670-2675.
* [113] L. Tchapmi, C. Choy, I. Armeni, J. Gwak, and S. Savarese, \"Segcloud: Semantic segmentation of 3d point clouds,\" in _3DV_, 2017, pp. 537-547.
* [114] J. Lafferty, A. McCallum, and F. C. Pereira, \"Conditional random fields: Probabilistic models for segmenting and labeling sequence data,\" 2001.
* [115] R. Zhang, G. Li, M. Li, and L. Wang, \"Fusion of images and point clouds for the semantic segmentation of large-scale 3d scenes based on deep learning,\" _ISPRS J. Photogramm. Remote Sens._, vol. 143, pp. 85-96, 2018.
* [116] M. Simon, S. Milz, K. Amende, and H.-M. Gross, \"Complex-yolo: an euler-region-proposal for real-time 3d object detection on point clouds,\" in _ECCV_, 2018, pp. 0-0.
* [117] A. Liaw, M. Wiener _et al._, \"Classification and regression by randomforest,\" _R news_, vol. 2, no. 3, pp. 18-22, 2002.
* [118] L. Zhang and L. Zhang, \"Deep learning-based classification and reconstruction of residential scenes from large-scale point clouds,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 4, pp. 1887-1897, 2018.
* [119] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia, \"Ipod: Intensive point-based object detector for point cloud,\" _arXiv:1812.05276_, 2018.
* [120] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, \"Learning transferable architectures for scalable image recognition,\" in _Proc. IEEE CVPR_, 2018, pp. 8697-8710.
* [121] D. Z. Wang and I. Posner, \"Voting for voting in online point cloud object detection,\" in _RSS_, vol. 1, no. 3, 2015.
* [122] M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner, \"Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks,\" in _IEEE ICRA_, 2017, pp. 1355-1361.
* [123] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, \"Multi-view 3d object detection network for autonomous driving,\" in _Proc. IEEE CVPR_, 2017, pp. 1907-1915.
* [130] N. Sedaghat, M. Zolfaghari, E. Amiri, and T. Brox, \"Orientation-boosted voxel nets for 3d object recognition,\" _arXiv:1604.03351_, 2016.
* [131] C. Ma, Y. Guo, Y. Lei, and W. An, \"Binary volumetric convolutional neural networks for 3-d object recognition,\" _IEEE Trans. Instrum. Meas._, no. 99, pp. 1-11, 2018.
* [132] C. Wang, M. Cheng, F. Sohel, M. Bennamoun, and J. Li, \"Normalnet: A voxel-based cnn for 3d object classification and retrieval,\" _Neuroncomput_, vol. 323, pp. 139-147, 2019.
* [133] K. He, X. Zhang, S. Ren, and J. Sun, \"Spatial pyramid pooling in deep convolutional networks for visual recognition,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 37, no. 9, pp. 1904-1916, 2015.
* [134] M. Liang, B. Yang, S. Wang, and R. Urtasun, \"Deep continuous fusion for multi-sensor 3d object detection,\" in _ECCV_, 2018, pp. 641-656.
* [135] D. Xu, D. Anguelov, and A. Jain, \"Pointfusion: Deep sensor fusion for 3d bounding box estimation,\" in _Proc. IEEE CVPR_, June 2018.
* [136] T. He, H. Huang, L. Yi, Y. Zhou, and S. Soatto, \"Geonet: Deep geodesic networks for point cloud analysis,\" _arXiv:1901.00680_, 2019.
* [137] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger, \"Occupancy networks: Learning 3d reconstruction in function space,\" _arXiv:1812.03828_, 2018.
* [138] T. Le and Y. Duan, \"PointGrid: A Deep Network for 3D Shape Understanding,\" _Proc. IEEE CVPR_, June 2018.
* [139] J. Li, Y. Bi, and G. H. Lee, \"Discrete rotation equivariance for point cloud recognition,\" _arXiv:1904.00319_, 2019.
* [140] D. Worrall and G. Brostow, \"Cubenet: Equivariance to 3d rotation and translation,\" in _ECCV_, 2018, pp. 567-584.
* [141] K. Fujiwara, I. Sato, M. Ambai, Y. Yoshida, and Y. Sakakura, \"Canonical and compact point cloud representation for shape classification,\" _arXiv:1809.04820_, 2018.
* [142] Z. Dong, B. Yang, F. Liang, R. Huang, and S. Scherer, \"Hierarchical registration of unordered tls point clouds based on binary shape context descriptor,\" _ISPRS J. Photogramm. Remote Sens._, vol. 144, pp. 61-79, 2018.
* [143] H. Deng, T. Birdal, and S. Ilic, \"Ppfnet: Global context aware local features for robust 3d point matching,\" in _Proc. IEEE CVPR_, 2018, pp. 195-205.
* [144] S. Xie, S. Liu, Z. Chen, and Z. Tu, \"Attentional shapecontextnet for point cloud recognition,\" in _Proc. IEEE CVPR_, 2018, pp. 4606-4615.
* [145] Z. J. Yew and G. H. Lee, \"3dfeat-net: Weakly supervised local 3d features for point cloud registration,\" in _ECCV_, 2018, pp. 630-646.
* [146] J. Sauder and B. Sievers, \"Context prediction for unsupervised deep learning on point clouds,\" _arXiv:1901.08396_, 2019.
* [147] M. Shoef, S. Fogel, and D. Cohen-Or, \"Pointwise: An unsupervised point-wise feature learning network,\" _arXiv:1901.04544_, 2019.
\\begin{tabular}{c c} & Ying Li received the M.Sc. degree in remote sensing from Wuhan University, China, in 2017. She is currently working toward the Ph.D. degree with the Mobile Sensing and Geodata Science Laboratory, Department of Geography and Environmental Management, University of Waterloo, ON, Canada. Her research interests include autonomous driving, mobile laser scanning, intelligent processing of point clouds, geometric and semantic modeling, and augmented reality. \\\\ \\end{tabular} \\begin{tabular}{c c} & Lingfei Ma (S18) received the B.Sc. and M.Sc. degrees in geomatics engineering from the University of Waterloo, Waterloo, ON, Canada, in 2015 and 2017, respectively. He is currently working toward the Ph.D. degree in photogrammetry and remote sensing with the Mobile Sensing and Geodata Science Laboratory, Department of Geography and Environmental Management, University of Waterloo, ON, Canada. \\\\ \\end{tabular} \\begin{tabular}{c c} & Zilong Zhong (S15) Zilong Zhong received the Ph.D. in systems design engineering, specialized in machine learning and intelligence, from the University of Waterloo, Canada, in 2019. He is a postdoctoral fellow with the School of Data and Computer Science, Sun Yat-Sen University, China. His research interests include computer vision, deep learning, graph models and their applications involving large-scale image analysis. \\\\ \\end{tabular} \\begin{tabular}{c c} & Fei Liu received the B.Eng. degree from Yanshan University, China, in 2011. Since then, she has been working in the sectors of vehicular electronics and control, artificial intelligence, deep learning, FPGA, advanced driver assistance systems, and automated driving. She is currently working at Xilinx Technology Beijing Limited, Beijing, China, focusing on development of automated driving technologies and data centers. \\\\ \\end{tabular} \\begin{tabular}{c c} & Dongpu Cao o (M08) received the Ph.D. degree from Concordia University, Canada, in 2008. He is the Canada Research Chair in Driver Cognition and Automated Driving, and currently an Associate Professor and Director of Waterloo Cognitive Autonomous Driving (CogDrive) Lab at University of Waterloo, Canada. His current research focuses on driver cognition, automated driving and cognitive autonomous driving. He has contributed more than 200 publications, 2 books and 1 patent. He received the SAE Arch T. Colwell Merit Award in 2012, and three Best Paper Awards from the ASME and IEEE conferences. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jonathan Li (00' M- 11' SM) received the Ph.D. degree in geomatics engineering from the University of Cape Town, South Africa. He is currently a Professor with the Departments of Geography and Environmental Management and Systems Design Engineering, University of Waterloo, Canada. He has coauthored more than 420 publications, more than 200 of which were published in refereed journals, including the IEEE Transactions on Geoscience and Remote Sensing, IEEE Transactions on Intelligent Transportation Systems, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, ISPRS Journal of Photogrammetry and Remote Sensing, Remote Sensing of Environment, as well as leading artificial intelligence and remote sensing conferences including CVPR, AAAI, IJCAI, IGARSS and ISPRS. His research interests include information extraction from LiDAR point clouds and from earth observation images. \\\\ \\end{tabular}
\begin{tabular}{c c} & Michael A. Chapman received the Ph.D. degree in photogrammetry from Laval University, Canada. He was a Professor with the Department of Geomatics Engineering, University of Calgary, Canada, for 18 years. Currently, he is a Professor of geomatics engineering with the Department of Civil Engineering, Ryerson University, Canada. He has authored or coauthored over 200 technical articles. His research interests include algorithms and processing methodologies for airborne sensors using GNSS/IMU, geometric processing of digital imagery in industrial environments, terrestrial imaging systems for transportation infrastructure mapping, and algorithms and processing strategies for biometrology applications. \\ \end{tabular}
Ayse Unsal
Univ Lyon, INSA Lyon, Inria, CITI, France, [email protected]
Raymond Knopp
Communications Systems Department, Eurecom, France, [email protected]
Neri Merhav
The Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion, Israel, [email protected]
## I Introduction
Before addressing the problem of joint modulation-estimation for the Gaussian MAC, let us refer first to the more fundamental single-user modulation-estimation problem. In this setting, a single continuous-valued random parameter \(U\) is encoded (modulated) into an \(N\)-dimensional power-limited vector \(\mathbf{x}(U)\) and transmitted over an additive-white Gaussian noise (AWGN) channel [1, 2, 3] as shown in Fig. 1(a). The corresponding \(N\)-dimensional channel output vector is given by \(\mathbf{y}=\mathbf{x}(U)+\mathbf{z}\), where \(\mathbf{z}\) is a Gaussian noise vector with independent and identically distributed (i.i.d.) components, which are independent also of \(U\). The channel output vector \(\mathbf{y}\) is used by the receiver to estimate \(U\) by an estimator \(\hat{U}(\mathbf{y})\). The goal is to derive a lower bound to the MSE, \(\mathbf{E}(U-\hat{U}(\mathbf{y}))^{2}\), that applies to every modulator \(\mathbf{x}(\cdot)\), that is subjected to a given power constraint, and to every estimator \(\hat{U}(\mathbf{y})\) [3, Chapter 8]. More recently in [4], this class of transmission problems was given the name _parameter modulation-estimation_, which, we believe, will likely become an important mathematical framework to analyze various remote sensing problems that may arise in fifth generation wireless networks. The purpose of this work is to extend the described problem, as well as its analysis and results, to the model of the discrete-time two-user Gaussian MAC, where two independent parameters, denoted by \(U_{1}\) and \(U_{2}\), are conveyed from two separate transmitters and jointly estimated at the receiver. This model is shown in Fig. 1(b). The aim is to derive outer bounds on the region of best achievable MSE's associated with any modulators (subjected to power constraints) and estimators of these parameters. It should be noted that in the context of the MAC model considered here, there exists an interesting trade-off that is not seen in the single-user case described in the first paragraph above. A better modulator for one of the users is good, of course, for the estimation of the corresponding parameter at the receiver side, because it amounts to high sensitivity of the likelihood function to this parameter. However, at the same time, and for the very same reason, it comes at the expense of the estimation performance of the other user (for which the parameter of the first user is a nuisance parameter). Indeed, such a trade-off is manifested in the boundary curves of the achievable regions that we obtain, which are always monotonically non-increasing functions, namely, smaller MSE values in one parameter impose higher lower bounds on the MSE values of the other. This paper builds on relationships between modulation and coding and between estimation and detection.
The remote-sensing application is one where the random-variables \\(U_{i}\\) are measured by a communicating device equipped with some form of analog sensor. The resulting measurements are conveyed to the network via the uplink of a wireless communication system. In the near future such devices will use conventional cellular access, albeit with specially-tailored waveforms, to feed data centers with physical information observed in so-called _smart cities_ or remote areas. These applications will often impose extremely low-periodicity sporadic transmission coupled with long lifetime batteries or solar cells in order to remain embedded in nature with little or no maintenance for long periods of time. In addition, the problem addressed here is also related to more general ranging estimation problems where the random parameters are induced by the channel. As an example, consider a satellite or cellular positioning system where the \\(U_{i}\\) represent two time-delays which, when estimated at the receiver, are used to estimate the position of the receiver. The framework considered here can therefore be extended to analyze the fundamental performance limits in such systems.
Fig. 1: System Models
### _Related Work_
The majority of work dealing with this class of problems considers transmission on a continuous-time channel using finite-energy waveforms without bandwidth constraints. In [1], Goblick provided a lower bound of the exponential order of \(\exp\left(-2\mathcal{E}/N_{0}\right)\), where \(\mathcal{E}\) is the energy used to convey \(U\) and \(N_{0}/2\) is the two-sided power spectral density of the channel noise process. Goblick also provided several examples of parameter modulation-estimation schemes, one of which turns out to achieve the best asymptotic performance, namely, MSE of the exponential order of \(\exp\left(-\mathcal{E}/3N_{0}\right)\). This is a simple digital scheme, which is based on first uniformly quantizing the parameter into one out of \(M\) points and then transmitting the index of the quantized parameter to the receiver, using an \(M\)-ary orthogonal modulation scheme. Another modulation strategy, which considered this problem in continuous time, was given in [3, pp. 623], where the parameter is reflected in the delay of a purely analog signaling pulse sent across the channel, namely, pulse position modulation (PPM). When the pulse bandwidth is unlimited, this system achieves the same exponential behaviour as Goblick's scheme. This scheme also provided a link to the classical ranging problem, where the objective is to estimate the random delay of an incoming waveform corrupted by Gaussian noise [5]. In [2], Wyner and Ziv showed that Goblick's lower bound could be improved to the order of \(\exp\left(-\mathcal{E}/2N_{0}\right)\). Cohn [6] and Burnashev [7, 8, 9] further improved the multiplicative factor at the MSE exponent, progressively from 1/2.889 to 1/2.896, then 1/2.970, and finally to 1/3.000, thus closing the gap to Goblick's practical scheme. In particular, despite the significance of the presented results, unfortunately, [6] is not well known, as it has never been published and hence is not easily accessible to the general public. In a nutshell, in [6] Cohn presented lower bounds on the average MSE in estimating the message of a single user using a geometric approach for simplex signal sets as well as the general case. The main contribution of [4] was the characterization of the parameter modulation-estimation problem for infinite-dimensional transmission over the continuous-time AWGN channel. A recent example of a similar scenario to the present paper can be found in [10, 11], where lower bounds on the MSE region are provided for the transmission of two correlated analog source samples with and without causal feedback on the discrete-time AWGN MAC without a constraint on the signal dimensionality. The main difference between the current paper and [10, 11] is the analysis technique: [10, 11] use an information-theoretic approach to obtain lower bounds.
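The digital quantize-and-modulate scheme recalled above can be illustrated with the small Monte-Carlo sketch below. It assumes the standard correlator-output model for \(M\)-ary orthogonal signalling over AWGN (the branch carrying the signal receives an extra mean of \(\sqrt{\mathcal{E}}\), all branches see i.i.d. noise of variance \(N_{0}/2\)), and the rule for choosing \(M\) is only a heuristic inspired by the \(\exp\left(-\mathcal{E}/3N_{0}\right)\) behaviour, not an optimized design.

```python
import numpy as np

def goblick_scheme_mse(energy, n0=1.0, num_levels=None, trials=20000, rng=None):
    """Monte-Carlo MSE of: uniform quantization of U ~ Uniform[0,1) to M levels,
    M-ary orthogonal signalling of energy E, ML (largest-correlation) detection."""
    rng = rng or np.random.default_rng(0)
    m = num_levels or max(2, int(np.exp(energy / (3.0 * n0))))   # heuristic choice of M
    u = rng.random(trials)
    idx = np.minimum((u * m).astype(int), m - 1)                 # quantizer index
    # correlator outputs: i.i.d. Gaussians, plus sqrt(E) on the transmitted branch
    corr = rng.normal(0.0, np.sqrt(n0 / 2.0), size=(trials, m))
    corr[np.arange(trials), idx] += np.sqrt(energy)
    detected = corr.argmax(axis=1)                               # ML detection
    u_hat = (detected + 0.5) / m                                 # reconstruction point
    return np.mean((u - u_hat) ** 2)

# usage (moderate energy keeps M small enough for the dense simulation above)
print(goblick_scheme_mse(energy=12.0))
```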
### _Contributions_
This paper studies the problem of jointly modulating and estimating two independent continuous-valued random variables encoded into an \(N\)-dimensional vector and transmitted over an AWGN channel to be estimated at the receiver end. The performance criterion is chosen as the MSE, which is characterized in two different ways as follows. Firstly, we derive outer bounds on the achievable region of pairs \((\mathrm{MSE}_{1},\mathrm{MSE}_{2})\), where \(\mathrm{MSE}_{1}\) and \(\mathrm{MSE}_{2}\) are the MSE's associated with the two parameters, using a generalization of Shannon's zero-rate lower bound [12] to the two-user discrete-time MAC, which allows us to characterize the MSE region in terms of the signal-to-noise ratios. We present outer bounds to the achievable region in the plane of the MSE's: the MSE associated with one of the users is bounded by a function that depends on the MSE associated with the other user. Thus, we obtain a trade-off between the two MSE's, parameterized by an auxiliary variable.
In addition, we investigate the exponential behaviour of \((\mathrm{MSE}_{1},\mathrm{MSE}_{2})\) by characterizing an outer bound on the region of achievable pairs of MSE exponents for any joint parameter modulation-estimation scheme. To this end, we adapt the multiple-access results of [13] to the discrete-time AWGN channel. In order to find the tightest characterization, we also use the bounds on the reliability function of the Gaussian channel proposed in [12, 14]. Coupled with the results of [15], this provides the means to make use of single-user error exponents for the characterization of multiuser channels.
### _Outline_
In Section II, we describe the system model and formalize the problem. In Section III, we begin with the single-user case and present a lower bound on the MSE itself and an upper bound on its MSE exponent, as a preparatory step to be used later in the MAC model. Section IV is focused on the generalization of the parameter modulation-estimation problem to a two-user Gaussian MAC in two subsections. In Subsections IV-A and IV-B, respectively, we present new lower bounds on the MSE's and upper bounds on the MSE exponents. The proposed bounds are numerically compared in Section V. Finally, in Section VI, we draw conclusions from our results.
## II Problem Formulation and Signal Models
### _Single-user setting_
We consider lower bounds on the MSE of modulation-estimation schemes for a random parameter \\(U\\), that is uniformly distributed over the interval \\([0,1)\\). 1 The parameter \\(U\\) is conveyed by a modulator, which maps \\(U\\) into a channel input vector \\(\\mathbf{x}(U)\\) that is transmitted over an \\(N\\)-dimensional memoryless AWGN channel, which is assumed to be phase-synchronous. In general, we have the following signal model
Footnote 1: The results presented in this paper can be quite easily adapted to other source distributions.
\\[\\mathbf{y}=\\mathbf{x}(U)+\\mathbf{z} \\tag{1}\\]
where \\(\\mathbf{x}(U)\\) is constrained in energy as
\\[\\|\\mathbf{x}(U)\\|^{2}\\leq N\\mathcal{S}=\\mathcal{E}, \\tag{2}\\]
\\(\\mathcal{S}\\) and \\(\\mathcal{E}\\) being the power and energy limitations, respectively, and the noise covariance matrix is given by
\[\mathbf{E}\,\mathbf{z}\mathbf{z}^{T}=\sigma^{2}\mathbf{I}_{N}. \tag{3}\]
Here the superscript \\(T\\) denotes the transposition of a vector and \\(\\mathbf{I}_{N}\\) is the \\(N\\times N\\) identity matrix. At the receiver, we consider an estimator \\(\\hat{U}(\\mathbf{y})\\) with corresponding \\(\\mathrm{MSE}_{\\mathrm{s}}=\\mathbf{E}[U-\\hat{U}(\\mathbf{y})]^{2}\\). Let us also define the _asymptotic MSE exponent_ as
\\[\\epsilon_{\\mathrm{s}}\\stackrel{{\\triangle}}{{=}}-\\liminf_{N \\rightarrow\\infty}\\frac{1}{N}\\log\\mathbf{E}[\\hat{U}(\\mathbf{y})-U]^{2}. \\tag{4}\\]
### _Two-user setting_
For this setting, we generalize the model of eq. (1) to a model that includes two independent random variables, \\(U_{1}\\) and \\(U_{2}\\), both uniformly distributed over \\([0,1)\\). These two parameters are separately conveyed by the modulators of two different users, which generate the channel input vectors \\(\\mathbf{x}_{1}(U_{1})\\) and \\(\\mathbf{x}_{2}(U_{2})\\) over an \\(N\\)-dimensional real-valued AWGN MAC obeying the following signal model
\\[\\mathbf{y}=\\mathbf{x}_{1}(U_{1})+\\mathbf{x}_{2}(U_{2})+\\mathbf{z}. \\tag{5}\\]
The modulators are constrained in energy as
\\[\\|\\mathbf{x}_{j}(U_{j})\\|^{2}\\leq N\\mathcal{S}_{j}=\\mathcal{E}_{j},\\;\\forall U _{j},\\;\\text{for}\\;\\;j=1,2 \\tag{6}\\]
and the noise covariance matrix is as before. As in the single-user case of Subsection II-A, at the receiver, we consider estimators \(\hat{U}_{j}(\mathbf{y})\) with MSE's, \(\mathrm{MSE}_{j}=\mathbf{E}[U_{j}-\hat{U}_{j}(\mathbf{y})]^{2}\), \(j=1,2\). As mentioned earlier, in Section IV, we derive outer bounds to the region of achievable MSE pairs \((\mathrm{MSE}_{1},\mathrm{MSE}_{2})\), which apply to arbitrary modulators and estimators subject to the aforementioned power limitations, \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\). The first characterization is for a given finite \(N\) and provides a direct characterization of \((\mathrm{MSE}_{1},\mathrm{MSE}_{2})\), whereas the second characterization is asymptotic and characterizes the region in terms of the exponents \((\epsilon_{1},\epsilon_{2})\), where
\\[\\epsilon_{j}\\stackrel{{\\triangle}}{{=}}-\\liminf_{N\\to\\infty}\\frac{ 1}{N}\\log\\mathbf{E}[\\hat{U}_{j}(\\mathbf{y})-U_{j}]^{2},\\text{ for }j=1,2. \\tag{7}\\]
## III Single-User Channel
In this section, we first recall the single-user approach from [4] and improve the lower bound on the MSE for any parameter-modulator scheme. Additionally, we present a new bound on the MSE exponent of a single-user channel.
### _An improved lower bound_
It is shown in [4, eq. (21)] that for the single-user problem, the probability that the absolute estimation error \\(|\\hat{U}(\\mathbf{y})-U|\\) would exceed \\(\\Delta/2\\), for a given \\(\\Delta>0\\), is lower bounded as follows
\\[\\Pr\\{|\\hat{U}(\\mathbf{y})-U|>\\Delta/2\\}\\geq L_{B}(\\Delta) \\tag{8}\\]
where \\(L_{B}(\\Delta)\\) designates a lower bound to be specified later. To derive such a bound, one considers the following hypothesis testing problem with \\(M\\) equiprobable hypotheses,
\\[\\mathcal{H}_{i}:\\mathbf{y}=\\mathbf{x}(u+i\\Delta)+\\mathbf{z}, \\tag{9}\\]
for \\(i\\in\\{1,\\cdots,M\\}\\) where \\(u\\) is considered a parameter taking values in \\([0,1-(M-1)\\Delta)\\). The lower bound \\(L_{B}(\\Delta)\\) is derived by combining the Ziv-Zakai approach with any lower bound on the average probability of error of an arbitrary code at a given rate. Specifically, let \\(\\hat{i}\\) denote the maximum likelihood (ML) estimate of \\(i\\), and let \\(P_{\\iota}(u,\\Delta)=\\Pr\\left(\\hat{i}\
eq i|u\\right)\\) denote the corresponding conditional probability of error, which is upper bounded as follows:
\\[\\int_{0}^{1-(M-1)\\Delta}du\\cdot P_{\\iota}(u,\\Delta) \\leq\\frac{1}{M}\\sum_{i=0}^{M-1}\\int_{0}^{1-(M-1)\\Delta}\\mathrm{d} u\\cdot\\Pr\\left\\{|\\hat{U}(\\mathbf{y})-U|>\\Delta/2\\Big{|}U=u+i\\Delta\\right\\}\\] \\[=\\frac{1}{M}\\sum_{i=0}^{M-1}\\int_{i\\Delta}^{1-(M-1)\\Delta+i \\Delta}\\mathrm{d}u\\cdot\\Pr\\left\\{|\\hat{U}(\\mathbf{y})-U|>\\frac{\\Delta}{2} \\Big{|}U=u\\right\\}\\] \\[\\stackrel{{(a)}}{{=}}\\frac{1}{M}\\sum_{i=0}^{M-1}\\Pr \\left\\{|\\hat{U}(\\mathbf{y})-U|>\\Delta/2,i\\Delta\\leq U\\leq 1-(M-1)\\Delta+i\\Delta\\right\\}\\] \\[\\leq\\frac{1}{M}\\Pr\\left\\{|\\hat{U}(\\mathbf{y})-U|>\\Delta/2\\right\\} \\tag{10}\\]
We note that (10) is valid for all \\(M\\) and \\(\\Delta\\) such that \\((M-1)\\Delta<1\\). If we add the condition that \\(M\\Delta>1\\), which amounts to \\(1/\\Delta<M<1+(1/\\Delta)\\) or equivalently \\(M=\\lceil 1/\\Delta\\rceil\\), the intervals in step (a) become disjoint. This yields
\\[\\frac{1}{\\lceil 1/\\Delta\\rceil}\\sum_{i=0}^{\\lceil 1/\\Delta\\rceil-1}\\Pr \\left\\{|\\hat{U}-U|>\\frac{\\Delta}{2},i\\Delta\\leq U\\leq 1-(\\lceil 1/\\Delta \\rceil-1)\\Delta+i\\Delta\\right\\}\\leq\\frac{1}{\\lceil 1/\\Delta\\rceil}\\Pr\\left\\{|\\hat{U}( \\mathbf{y})-U|>\\frac{\\Delta}{2}\\right\\} \\tag{11}\\]
Bounding the left hand side (l.h.s.) of (10) using any zero-rate bound for \\(M\\)-ary signals, \\(P_{\\mathrm{ZR}}\\left(\\mathcal{E},\\left\\lceil\\frac{1}{\\Delta}\\right\\rceil\\right)\\) yields the bound
\\[\\lceil 1/\\Delta\\rceil\\left(1+\\Delta-\\lceil 1/\\Delta\\rceil\\Delta\\right)\\cdot P _{\\mathrm{ZR}}\\left(\\mathcal{E},\\lceil 1/\\Delta\\rceil\\right)\\leq\\Pr\\left\\{|\\hat{U}( \\mathbf{y})-U|>\\Delta/2\\right\\} \\tag{12}\\]
which is \\(M\\) times larger than the original result given by [4, eq. (21)]. The lower bound \\(L_{B}(\\Delta)\\) corresponds to the l.h.s. of (12). The right hand side of the last inequality is related to the MSE according to
\\[\\int_{0}^{1}d\\Delta\\cdot\\Delta\\cdot\\Pr\\{|\\hat{U}(\\mathbf{y})-U|> \\Delta/2\\}\\] \\[\\stackrel{{(a)}}{{\\leq}}4\\int_{0}^{1}d\\delta\\cdot \\delta\\cdot\\Pr\\{|\\hat{U}(\\mathbf{y})-U|>\\delta\\}\\stackrel{{(b)}}{{= }}2\\mathbf{E}[\\hat{U}(\\mathbf{y})-U]^{2} \\tag{13}\\]where in (a), we changed the integration variable to \\(\\delta=\\Delta/2\\) and the integration interval was extended to \\([0,1)\\), whereas in (b), the following identity was used
\\[\\mathbf{E}[\\hat{U}(\\mathbf{y})-U]^{2}=2\\int_{0}^{1}d\\Delta\\cdot\\Delta\\cdot \\Pr\\{|\\hat{U}(\\mathbf{y})-U|>\\Delta\\}. \\tag{14}\\]
Combining (12) with (13), the improved single-user lower bound is given by
\\[\\mathrm{MSE_{s}} \\geq\\frac{1}{2}\\int_{0}^{1}d\\Delta\\left\\lceil 1/\\Delta\\right\\rceil \\Delta\\left(1+\\Delta-\\Delta\\left\\lceil 1/\\Delta\\right\\rceil\\right)P_{\\mathrm{ZR}} \\left(\\mathcal{E},\\left\\lceil 1/\\Delta\\right\\rceil\\right)\\] \\[=\\frac{1}{2}\\sum_{i=2}^{\\infty}\\int_{1/i}^{1/(i-1)}d\\Delta \\cdot\\left(\\Delta i+\\Delta^{2}i-\\Delta^{2}i^{2}\\right)\\cdot P_{\\mathrm{ZR}} \\left(\\mathcal{E},i\\right)\\] \\[=\\frac{1}{2}\\sum_{i=2}^{\\infty}\\frac{3i-2}{6i^{2}(i-1)^{2}}P_{ \\mathrm{ZR}}\\left(\\mathcal{E},i\\right). \\tag{15}\\]
In what follows we consider two zero-rate bounds.
#### Iii-A1 Shannon zero-rate bound [12]
In [12, eq. (81)] we have the general zero-rate lower bound
\\[P_{\\mathrm{ZR}}^{\\mathrm{Shannon}}\\left(\\mathcal{E},M\\right)\\triangleq\\frac{ 1}{M}\\sum_{m=2}^{M}Q\\left(\\sqrt{\\frac{m}{m-1}\\left(\\frac{\\mathcal{E}}{2\\sigma ^{2}}\\right)}\\right) \\tag{16}\\]
which is valid for all \\(N\\) and can be used in conjunction with (15) to bound the MSE for a point-to-point AWGN channel.
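As an illustration of how (16) and the series (15) can be evaluated in practice, the following Python sketch computes the Shannon-based single-user MSE lower bound. It is purely illustrative and not part of the analysis; the truncation depth and the energy values are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm


def p_zr_shannon(energy, m, sigma2=1.0):
    # Shannon's zero-rate lower bound (16) for an M-ary signal set
    ms = np.arange(2, m + 1)
    return np.sum(norm.sf(np.sqrt(ms / (ms - 1) * energy / (2 * sigma2)))) / m


def mse_lower_bound(energy, sigma2=1.0, i_max=200):
    # Truncated evaluation of the series (15); the weights decay like 1/i^3,
    # so a few hundred terms are enough for the energies used here.
    total = 0.0
    for i in range(2, i_max + 1):
        weight = (3 * i - 2) / (6 * i**2 * (i - 1) ** 2)
        total += weight * p_zr_shannon(energy, i, sigma2)
    return 0.5 * total


if __name__ == "__main__":
    for snr_db in (0, 5, 10):
        energy = 10 ** (snr_db / 10)  # energy E, with sigma^2 = 1
        print(snr_db, mse_lower_bound(energy))
```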
#### Iii-A2 A new zero-rate lower bound
Using the Polyanskiy _et al._ converse [16, Theorem 41] for the AWGN channel which provides a lower bound on the average error probability for any \\(M\\)-ary signal set in \\(N\\)-dimensions, we propose a new lower bound on the error-probability for \\(N\\rightarrow\\infty\\) under the finite-energy constraint in (2) given as
\\[P_{\\mathrm{ZR}}^{\\mathrm{P}}\\left(\\mathcal{E},M\\right)\\triangleq Q\\left(\\frac {\\sqrt{\\mathcal{E}}}{\\sigma}(1+\\mu)-Q^{-1}\\left(\\frac{1}{M}\\right)\\right) \\tag{17}\\]
for any arbitrarily small \(\mu>0\). The derivation of \(P_{\mathrm{ZR}}^{\mathrm{P}}\left(\mathcal{E},M\right)\) can be found in detail in Appendix VII-A. The expression in (17) is potentially tighter than (16) for low signal energies, since it increases to 1 with \(M\) for a fixed energy, as is the case for any real signal set. It is clearly looser asymptotically, since the energy exponent for fixed \(M\) is \(\mathcal{E}/2\sigma^{2}\) and not \(\mathcal{E}/4\sigma^{2}\). We show a comparison of (17) and (16) with the error probability of the simplex signal set for \(M=256\) in Figure 2; the simplex is widely believed to be the optimal signal set for \(M\)-ary equal-energy signals. We see that (17) is much closer to the simplex error probability for low signal energies (error probabilities below \(10^{-2}\)) and crosses the Shannon bound at an error probability around \(10^{-10}\).
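The comparison of Figure 2 can be reproduced qualitatively with the short sketch below. It is only an illustration; the value of \(\mu\) and the energy grid are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm


def p_zr_shannon(energy, m, sigma2=1.0):
    # Shannon's zero-rate bound (16)
    ms = np.arange(2, m + 1)
    return np.sum(norm.sf(np.sqrt(ms / (ms - 1) * energy / (2 * sigma2)))) / m


def p_zr_polyanskiy(energy, m, sigma=1.0, mu=1e-6):
    # New zero-rate bound (17): Q( sqrt(E)/sigma * (1 + mu) - Q^{-1}(1/M) )
    return norm.sf(np.sqrt(energy) / sigma * (1.0 + mu) - norm.isf(1.0 / m))


if __name__ == "__main__":
    for energy in (1.0, 4.0, 16.0, 64.0):
        print(energy,
              p_zr_polyanskiy(energy, 256),
              p_zr_shannon(energy, 256))
```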
### _Upper bound on the MSE exponent_
In this subsection, we introduce a new bound on the MSE exponent \\(\\epsilon_{\\mathrm{s}}\\) defined by (4) that makes use of any upper bound on the error exponent in a single user AWGN channel.
**Theorem 1**.: _For an arbitrary \(N\)-dimensional modulator \(\mathbf{x}(U)\) subject to the power constraint given by (2), for transmission over the AWGN channel defined in (1), and for \(R\geq 0\), the MSE exponent \(\epsilon_{\mathrm{s}}\) as defined in (4) is bounded by_
\\[\\epsilon_{\\mathrm{s}}\\leq\\min_{R}[2R+E_{\\mathrm{s}}(R)], \\tag{18}\\]
_where \\(E_{\\mathrm{s}}(R)\\) is any upper bound on the error exponent function of the single user Gaussian channel._
Proof.: Let us select \\(\\Delta=e^{-RN}\\) where \\(R\\geq 0\\) is a parameter (to be chosen later) in the general form of the bound
\\[\\mathbf{E}[\\hat{U}(\\mathbf{y})-U]^{2}\\geq 2\\int_{0}^{1}d\\Delta\\cdot\\Delta \\cdot L_{B}(\\Delta), \\tag{19}\\]
where \\(L_{B}(\\Delta)\\) is the l.h.s. of (12). Changing the integration variable on the right-hand side (r.h.s.) of (19) to \\(R\\), we obtain
\\[\\mathbf{E}[\\hat{U}(\\mathbf{y})-U]^{2}\\geq\\frac{N}{2}\\int_{0}^{\\infty}dR\\cdot e ^{-2RN}\\cdot L_{B}(e^{-RN}) \\tag{20}\\]The r.h.s. of (20) is bounded by an expression of the exponential order of \\(\\exp\\{-N\\min_{R}[2R+E_{\\text{\\tiny u}}(R)]\\}=\\mathrm{e}^{-NF}\\) where \\(F\\stackrel{{\\triangle}}{{=}}\\min_{R}[2R+E_{\\text{\\tiny u}}(R)]\\). Finally, by taking the logarithms of both sides of (20), dividing by \\(-N\\), and passing to the limit \\(N\\to\\infty\\), the proof of Theorem 1 is completed.
As for an upper bound on the error exponent, \\(E_{\\text{\\tiny u}}(R)\\), of the Gaussian channel, there are many options in the literature, such as Shannon's sphere-packing bound on the reliability function of the Gaussian channel [12], or a more recent bound by Ashikhmin _et al._[14], or others such as [17] and [18]. In this paper, we will use the results of [12] and [14] in our numerical evaluations due to their lower computational complexity relative to the others.
#### Iii-B1 Sphere-packing bound
For rates confined to \\([0,\\mathcal{C})\\), where \\(\\mathcal{C}=(1/2)\\log(1+A)\\) is the Gaussian channel capacity, \\(A=\\mathcal{S}/\\sigma^{2}\\) being the signal-to-noise ratio (SNR), Shannon's sphere-packing bound \\(E_{\\text{\\tiny u}}(\\psi(R),A)\\) is an upper bound on the reliability function of the Gaussian channel \\(E(R,A)\\)[12] where \\(\\psi(R)=\\arcsin(e^{-R})\\). The sphere-packing bound is given by
\\[E_{\\text{\\tiny u}}(\\psi(R),A) =\\frac{A}{2}-\\frac{A(1-e^{-2R})}{4}+\\frac{\\sqrt{A(1-e^{-2R})(A(1- e^{-2R})+4)}}{4}+R\\] \\[+\\log 2-\\log\\left(\\sqrt{A(1-e^{-2R})}+\\sqrt{A(1-e^{-2R})+4}\\right) \\tag{21}\\]
The only positive and real minimizer of \\(E_{\\text{\\tiny u}}(\\psi(R),A)+2R\\) where \\(E_{\\text{\\tiny u}}(\\psi(R),A)\\) is given by (21) is obtained as
\\[R_{\\min}=\\frac{1}{2}\\log\\left\\{\\frac{A+\\sqrt{A^{2}-2A+9}+3}{6}\\right\\}. \\tag{22}\\]
#### Iii-B2 Upper Bound by Ashikhmin _et al._
As for the second alternative to be used for \(E_{\mathrm{u}}(R)\), we have a more recent result by Ashikhmin _et al._ [14, Theorem 1], which states that \(E(R,A)\leq E_{\mathrm{ABL}}(R,A)\), with \(E_{\mathrm{ABL}}(R,A)\) defined as
\[E_{\mathrm{ABL}}(R,A)=\min_{0\leq\rho\leq\rho_{kl}}\max_{w,d}\left[\min\left(Ad^{2}/8,Aw^{2}/8-L_{\mathrm{ABL}}(w,d,\rho)\right)\right] \tag{23}\]
where \\(0\\leq d\\leq d_{\\max}\\) and \\(d\\leq w\\leq w_{\\max}\\) with
\\[d_{\\max}=\\frac{\\sqrt{2}(\\sqrt{1+\\rho_{kl}}-\\sqrt{\\rho_{kl}})}{\\sqrt{1+2\\rho_{ kl}}}\\]
and
\\[w_{\\max}=\\frac{\\sqrt{2}(\\sqrt{1+\\rho}-\\sqrt{\\rho})}{\\sqrt{1+2\\rho}},\\]
Fig. 2: Comparison of Zero-Rate Bounds with the error-probability of a Simplex (\\(M=256\\))respectively. \\(\\rho_{kl}\\) is the root of the equality
\\[R-(1+\\rho)H(\\rho/(1+\\rho))=0.\\]
Here \\(H(x)\\) denotes the binary entropy function. Lastly, for the inner minimization function of the bound \\(E_{\\text{\\tiny{AHL}}}(R,A)\\), \\(L_{\\text{\\tiny{AHL}}}(w,d,\\rho)\\) is given by
\\[L_{\\text{\\tiny{AHL}}}(w,d,\\rho)=\\min\\left\\{\\frac{Ad^{2}w^{2}}{8(4w^{2}-d^{2})}, F_{\\text{\\tiny{AHL}}}(1-w^{2}/2,\\rho)\\right\\} \\tag{24}\\]
with
\\[F_{\\text{\\tiny{AHL}}}(x,\\rho)=R-(1+\\rho)H(\\rho/(1+\\rho))+\\log((x +\\sqrt{(1+2\\rho)^{2}x^{2}-4\\rho(1+\\rho)})/2)\\\\ -(1+2\\rho)\\log\\left(\\frac{(1+2\\rho)x+\\sqrt{(1+2\\rho)^{2}x^{2}-4 \\rho(1+\\rho)}}{2(1+\\rho)}\\right). \\tag{25}\\]
In Section IV, the relation of these bounds to the two-user setting is analyzed, and in Section V, their performances are numerically compared. It is worth mentioning that, unlike for Shannon's results, the rate that minimizes \(E_{\mathrm{ABL}}(R,A)\) cannot be derived analytically.
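Since the minimizing rate is not available in closed form, the numerical evaluations may rely on a brute-force search. The sketch below is one possible, deliberately unoptimized, transcription of (23)-(25) into Python; the grid sizes and the test values of \(R\) and \(A\) are arbitrary assumptions, and the code is an illustration rather than the procedure actually used for the figures.

```python
import numpy as np
from scipy.optimize import brentq


def rho_rate_term(rho):
    # (1 + rho) * H(rho / (1 + rho)) in nats, in closed form
    return rho * np.log((1.0 + rho) / rho) + np.log(1.0 + rho)


def e_abl(rate, snr, n_grid=40):
    # Brute-force grid evaluation of (23)-(25); assumes a moderate rate so that
    # the root rho_kl of R - (1+rho)H(rho/(1+rho)) = 0 lies in the bracket below.
    rho_kl = brentq(lambda r: rate - rho_rate_term(r), 1e-9, 1e3)
    d_max = np.sqrt(2.0) * (np.sqrt(1 + rho_kl) - np.sqrt(rho_kl)) / np.sqrt(1 + 2 * rho_kl)
    best_outer = np.inf
    for rho in np.linspace(1e-6, rho_kl, n_grid):
        w_max = np.sqrt(2.0) * (np.sqrt(1 + rho) - np.sqrt(rho)) / np.sqrt(1 + 2 * rho)
        inner_best = -np.inf
        for d in np.linspace(1e-6, d_max, n_grid):
            for w in np.linspace(d, w_max, n_grid):
                x = 1.0 - w * w / 2.0
                # Clamp tiny negative values caused by the grid discretization
                s = np.sqrt(max((1 + 2 * rho) ** 2 * x * x - 4 * rho * (1 + rho), 0.0))
                f_abl = (rate - rho_rate_term(rho) + np.log((x + s) / 2.0)
                         - (1 + 2 * rho) * np.log(((1 + 2 * rho) * x + s) / (2 * (1 + rho))))
                l_abl = min(snr * d * d * w * w / (8.0 * (4 * w * w - d * d)), f_abl)
                inner_best = max(inner_best, min(snr * d * d / 8.0,
                                                 snr * w * w / 8.0 - l_abl))
        best_outer = min(best_outer, inner_best)
    return best_outer


if __name__ == "__main__":
    print(e_abl(rate=0.2, snr=10 ** 0.5))
```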
## IV Multiple-Access Channel
In order to derive outer bounds for the two-user modulation-estimation problem, we consider the following auxiliary hypothesis testing problem, in analogy to the technique used for the single-user case:
\\[\\mathcal{H}_{i_{1},i_{2}}:\\mathbf{y}=\\mathbf{x}_{1}(u_{1}+i_{1}\\Delta_{1})+ \\mathbf{x}_{2}(u_{2}+i_{2}\\Delta_{2})+\\mathbf{z}, \\tag{26}\\]
for \\(i_{1}\\in\\{1,\\cdots,M_{1}\\}\\) and \\(i_{2}\\in\\{1,\\cdots,M_{2}\\}\\), where \\(u_{1}\\in[0,1-(M_{1}-1)\\Delta_{1})\\), \\(u_{2}\\in[0,1-(M_{2}-1)\\Delta_{2})\\). Both \\(u_{1}\\) and \\(u_{2}\\) are known to the receiver. As in the single-user case, we will derive two types of results. The first corresponds to fixed values of \\(M_{1}\\) and \\(M_{2}\\) (and \\(\\Delta_{1}\\), \\(\\Delta_{2}\\)), which will yield non-asymptotic results on the MSE's themselves. The second type of results refers to the asymptotic regime of large \\(N\\), where \\(M_{1}\\) and \\(M_{2}\\) are allowed to grow exponentially with \\(N\\), at arbitrary rates to be optimized, and our asymptotic results concern the asymptotic exponential rates of the two MSE's.
### _Outer bounds on the region of achievable MSE pairs_
We denote the conditional probability of error as a function of \\((u_{1},u_{2})\\) by
\[P_{e}(u_{1},u_{2},\Delta_{1},\Delta_{2})=\Pr\left\{(\hat{i}_{1},\hat{i}_{2})\neq(i_{1},i_{2})|u_{1},u_{2}\right\} \tag{27}\]
where the overall error probability is decomposed in (28) into three types of error events, namely an error in the first index only, an error in the second index only, and an error in both indices. In analogy with (8), the sum of the two estimation-error probabilities, \(\Pr\{|\hat{U}_{1}(\mathbf{y})-U_{1}|>\Delta_{1}/2\}+\Pr\{|\hat{U}_{2}(\mathbf{y})-U_{2}|>\Delta_{2}/2\}\), is lower bounded in (29) by a quantity \(L_{B}(\Delta_{1},\Delta_{2})\), to be specified later. Using considerations similar to those of the derivation in (10), one obtains
\\[\\int_{0}^{1-(M_{1}-1)\\Delta_{1}}du_{1}p(u_{1})\\int_{0}^{1-(M_{2}-1) \\Delta_{2}}du_{2}p(u_{2})P_{e}(u_{1},u_{2},\\Delta_{1},\\Delta_{2})\\leq\\\\ \\frac{\\left(\\Pr\\left\\{\\left|\\hat{U}_{1}(\\mathbf{y})-U_{1}\\right|> \\Delta_{1}/2\\right\\}+\\Pr\\left\\{\\left|\\hat{U}_{2}(\\mathbf{y})-U_{2}\\right|> \\Delta_{2}/2\\right\\}\\right)}{\\left\\lceil 1/\\Delta_{1}\\right\\rceil\\left\\lceil 1/ \\Delta_{2}\\right\\rceil}. \\tag{30}\\]
The l.h.s. of (30) is obtained by introducing the condition of \\(M_{j}\\Delta_{j}>1\\), which is equivalent to \\(M_{j}=\\left\\lceil 1/\\Delta_{j}\\right\\rceil\\), for \\(j=1,2\\). We note that (30) is valid for all \\(M_{j}\\) and \\(\\Delta_{j}\\) such that \\((M_{j}-1)\\Delta_{j}<1\\). A detailed derivation of (30) can be found in Appendix VII-C. Combining (29) and (30) with (35), we finally have
\\[L_{B}(\\Delta_{1},\\Delta_{2}) \\stackrel{{\\triangle}}{{=}}\\left\\lceil 1/\\Delta_{1} \\right\\rceil\\left\\lceil 1/\\Delta_{2}\\right\\rceil\\left(1+\\Delta_{1}-\\left\\lceil 1/ \\Delta_{1}\\right\\rceil\\Delta_{1}\\right)\\left(1+\\Delta_{2}-\\left\\lceil 1/ \\Delta_{2}\\right\\rceil\\Delta_{2}\\right)\\] \\[P_{ZR}\\left(\\mathcal{E}_{1},\\mathcal{E}_{2},\\left\\lceil 1/\\Delta_{1} \\right\\rceil,\\left\\lceil 1/\\Delta_{2}\\right\\rceil\\right). \\tag{31}\\]
#### Iv-A1 Shannon's zero-rate bound adapted to the MAC
Shannon's bound is based on first upper bounding the average squared Euclidean distance between all pairs of modulated signals and this should be carried out for each of the three terms of eq. (28). In the first term in (28) there are \\(M_{1}(M_{1}-1)/2\\) possible signal pairs, and so, the average squared Euclidean distance between all such pairs is upper bounded by
\\[D_{1}^{2}(u_{1},u_{2})\\leq\\frac{2M_{1}\\mathcal{E}_{1}}{(M_{1}-1)} \\tag{32}\\]
Similarly, for the second term of (28),
\\[D_{2}^{2}(u_{1},u_{2})\\leq\\frac{2M_{2}\\mathcal{E}_{2}}{(M_{2}-1)} \\tag{33}\\]
with \\(M_{2}(M_{2}-1)/2\\) signal pairs of user 2. For the third term, there are \\(M_{1}M_{2}(M_{1}-1)(M_{2}-1)\\) possible pairs that differ in both indices, so that
\\[D_{12}^{2}(u_{1},u_{2})\\leq\\frac{2M_{1}\\mathcal{E}_{1}}{(M_{1}-1)}+\\frac{2M_{ 2}\\mathcal{E}_{2}}{(M_{2}-1)} \\tag{34}\\]
The reader is referred to Appendix VII-B for a detailed derivation of eqs. (33)-(34). By progressively removing points at the average distance as in [12, eq. (81)], we obtain the overall bound as follows.
\[P_{e}(u_{1},u_{2},\Delta_{1},\Delta_{2}) \geq P_{ZR}^{\mathrm{Shannon}}(\mathcal{E}_{1},\mathcal{E}_{2},M_{1},M_{2})\] \[=\frac{1}{M_{1}}\sum_{m=2}^{M_{1}}Q\left(\sqrt{\frac{m}{m-1}\frac{\mathcal{E}_{1}}{2\sigma^{2}}}\right)+\frac{1}{M_{2}}\sum_{m=2}^{M_{2}}Q\left(\sqrt{\frac{m}{m-1}\frac{\mathcal{E}_{2}}{2\sigma^{2}}}\right)\] \[+\frac{1}{M_{1}M_{2}}\sum_{m_{1}=2}^{M_{1}}\sum_{m_{2}=2}^{M_{2}}Q\left(\sqrt{\frac{m_{1}}{m_{1}-1}\frac{\mathcal{E}_{1}}{2\sigma^{2}}+\frac{m_{2}}{m_{2}-1}\frac{\mathcal{E}_{2}}{2\sigma^{2}}}\right) \tag{35}\]
#### Iv-A2 An alternative zero-rate bound
In the proof of Theorem 4 from [15], the authors showed that the overall error probability (28) of a two-user Gaussian MAC with codebooks \\(\\mathcal{C}_{1}\\) and \\(\\mathcal{C}_{2}\\) is lower bounded by the error probability of the single-user code \\(\\mathcal{C}_{1}+\\mathcal{C}_{2}\\) under an average power constraint. In our case, the resulting lower bound using an average power constraint is still valid since a peak energy/power constraint can only increase the error probability. Note that in our case the sum codebook has energy \\(\\mathcal{E}_{1}+\\mathcal{E}_{2}\\) and cardinality \\(M_{1}M_{2}\\). The error probability of the sum codebook can then be lower bounded by (17) using \\(\\mathcal{E}_{1}+\\mathcal{E}_{2}\\) and \\(M_{1}M_{2}\\) for \\(\\mathcal{E}\\) and \\(M\\). Including the single-user lower bounds for each user, the overall bound on the zero rate error probability is the maximum of three functions as
\\[P_{ZR}^{\\mathrm{P}}(\\mathcal{E}_{1},\\mathcal{E}_{2},M_{1},M_{2})=\\max\\left\\{P_ {\\mathrm{ZR}}^{\\mathrm{P}}\\left(\\mathcal{E}_{1},M_{1}\\right),P_{\\mathrm{ZR}}^ {\\mathrm{P}}\\left(\\mathcal{E}_{2},M_{2}\\right),P_{\\mathrm{ZR}}^{\\mathrm{P}} \\left(\\mathcal{E}_{1}+\\mathcal{E}_{2},M_{1}M_{2}\\right)\\right\\} \\tag{36}\\]
where \\(P_{\\mathrm{ZR}}^{\\mathrm{P}}\\left(\\mathcal{E},M\\right)\\) is given by (17).
In the next theorem, we state the first main result for the two-user setting.
**Theorem 2**.: _For arbitrary modulators \\({\\bf x}_{j}(U_{j}),\\ j=1,2\\), transmitting subject to power limitations, \\({\\cal S}_{1}\\) and \\({\\cal S}_{2}\\), respectively, over the two-user Gaussian MAC (5), the following inequalities hold_
\\[{\\rm MSE}_{1} \\geq \\max\\left({\\rm MSE}_{s,1},\\max_{0<\\theta\\leq 1}\\left(C_{1}( \\theta)/2-\\frac{{\\rm MSE}_{2}}{\\theta^{2}}\\right),\\max_{0<\\theta\\leq 1}\\theta^{2} \\left(C_{2}(\\theta)/2-{\\rm MSE}_{2}\\right)\\right), \\tag{37}\\] \\[{\\rm MSE}_{2} \\geq \\max\\left({\\rm MSE}_{s,2},\\max_{0<\\theta\\leq 1}\\left(C_{2}( \\theta)/2-\\frac{{\\rm MSE}_{1}}{\\theta^{2}}\\right),\\max_{0<\\theta\\leq 1} \\theta^{2}\\left(C_{1}(\\theta)/2-{\\rm MSE}_{1}\\right)\\right), \\tag{38}\\]
_where \\({\\rm MSE}_{s,j}\\) denotes the lower bound on the MSE in estimating the parameter \\(U_{j}\\), \\(j=1,2\\), in the single-user case (or equivalently, when the other parameter is known), given by (15), with_
\\[C_{1}(\\theta) = \\int_{0}^{1}d\\Delta\\cdot\\Delta\\cdot L_{B}(\\Delta,\\theta\\Delta)\\] \\[C_{2}(\\theta) = \\int_{0}^{1}d\\Delta\\cdot\\Delta\\cdot L_{B}(\\theta\\Delta,\\Delta)\\]
_and \\(L_{B}(.,.)\\) is given by (31)._
Proof:: Let \\(\\theta\\) be an arbitrary parameter, taking on values in \\([0,1]\\), and for a given \\(\\Delta\\), set \\(\\Delta_{1}=\\Delta\\) and \\(\\Delta_{2}=\\theta\\Delta\\). Now, by integrating both sides of (29) w.r.t. \\(\\Delta\\) we have
\\[\\int_{0}^{1}\\!{\\rm d}\\Delta\\cdot\\Delta\\left({\\rm Pr}\\{|\\hat{U}_{1}({\\bf y})-U_ {1}|>\\Delta/2\\}+{\\rm Pr}\\{|\\hat{U}_{2}({\\bf y})-U_{2}|>\\theta\\Delta/2\\}\\right) \\geq C_{1}(\\theta). \\tag{39}\\]
For the derivation of \\(C_{1}(\\theta)\\), the reader is referred to Appendix VII-D. Now, the first term on the l.h.s. is upper bounded by \\(2{\\bf E}[\\hat{U}_{1}({\\bf y})-U_{1}]^{2}\\). As for the second term, similarly, we get
\\[\\int_{0}^{1}d\\Delta\\cdot\\Delta\\cdot{\\rm Pr}\\{|\\hat{U}_{2}({\\bf y})-U_{2}|> \\theta\\Delta/2\\}\\leq\\frac{2}{\\theta^{2}}\\cdot{\\bf E}[\\hat{U}_{2}({\\bf y})-U_{ 2}]^{2}.\\]
Combining this with (39), we readily obtain
\\[{\\rm MSE}_{1}+\\frac{{\\rm MSE}_{2}}{\\theta^{2}}\\geq\\frac{C_{1}(\\theta)}{2} \\tag{40}\\]
or equivalently,
\\[{\\rm MSE}_{1}\\geq\\frac{C_{1}(\\theta)}{2}-\\frac{{\\rm MSE}_{2}}{\\theta^{2}}. \\tag{41}\\]
Since this inequality holds true for every \\(\\theta\\in[0,1]\\), the tightest bound of this form is obtained by maximizing the r.h.s. over \\(\\theta\\) in this interval, which yields
\\[{\\rm MSE}_{1}\\geq\\max_{0\\leq\\theta\\leq 1}\\left[\\frac{C_{1}(\\theta)}{2}-\\frac{ {\\rm MSE}_{2}}{\\theta^{2}}\\right]. \\tag{42}\\]
We also observe that the single-user bound \({\rm MSE}_{1}\geq{\rm MSE}_{s,1}\) trivially holds, since it is equivalent to a "genie-aided" scenario, where user no. 1 is fully informed of the exact value of \(U_{2}\).
The counterpart of (40), still in terms of \(C_{1}(\theta)\), can also be given for user 2 as
\\[\\theta^{2}{\\rm MSE}_{1}+{\\rm MSE}_{2}\\geq\\theta^{2}\\frac{C_{1}(\\theta)}{2}. \\tag{43}\\]
By the same token, eq. (43) implies that
\\[{\\rm MSE}_{2}\\geq\\max_{0\\leq\\theta\\leq 1}\\theta^{2}\\left[\\frac{C_{1}(\\theta)}{2}- {\\rm MSE}_{1}\\right]. \\tag{44}\\]
To obtain the remaining bounds, interchange the roles of the two users, which amounts to the use of \\(C_{2}(\\theta)\\). This completes the proof of Theorem 2.
In Section V we present numerical evaluation results of (37)-(38) for different values of \\(\\theta\\) and SNR.
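The bounds (37)-(38) can be traced numerically. The following Python sketch is purely illustrative: it evaluates \(C_{1}(\theta)\) by numerically integrating \(\Delta\cdot L_{B}(\Delta,\theta\Delta)\) according to (31), using the zero-rate bound (36) inside \(L_{B}\) as an arbitrary choice ((35) could be substituted), and prints the right-hand side of the trade-off (40). The integration grid and the energy values are likewise arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm


def p_zr_p(e, m, sigma=1.0, mu=1e-6):
    # Single-user zero-rate bound (17)
    return norm.sf(np.sqrt(e) / sigma * (1 + mu) - norm.isf(1.0 / m))


def p_zr_mac(e1, e2, m1, m2):
    # Zero-rate bound (36) used inside L_B in this sketch
    return max(p_zr_p(e1, m1), p_zr_p(e2, m2), p_zr_p(e1 + e2, m1 * m2))


def l_b(d1, d2, e1, e2):
    # The lower bound (31)
    m1, m2 = int(np.ceil(1.0 / d1)), int(np.ceil(1.0 / d2))
    return (m1 * m2 * (1 + d1 - m1 * d1) * (1 + d2 - m2 * d2)
            * p_zr_mac(e1, e2, m1, m2))


def c1(theta, e1, e2, n_grid=2000):
    # C_1(theta) = int_0^1 Delta * L_B(Delta, theta * Delta) dDelta (midpoint rule)
    deltas = (np.arange(n_grid) + 0.5) / n_grid
    return float(np.mean([d * l_b(d, theta * d, e1, e2) for d in deltas]))


if __name__ == "__main__":
    e1 = e2 = 10.0
    for theta in (0.25, 0.5, 1.0):
        # Trade-off (40): MSE_1 + MSE_2 / theta^2 >= C_1(theta) / 2
        print(theta, c1(theta, e1, e2) / 2.0)
```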
### _Upper Bounds on the MSE exponents_
In this subsection, we modify the bounds presented in Theorem 2 in order to obtain upper bounds of the achievable region of the MSE exponents defined as in (7). The core idea is to pass from the zero-rate bound of the previous subsection, where \\(M_{1}\\) and \\(M_{2}\\) were fixed (independent of \\(N\\)), to positive rate bounds, where \\(M_{1}=e^{NR_{1}}\\) and \\(M_{2}=e^{NR_{2}}\\), \\(R_{1}\\) and \\(R_{2}\\) being subjected to optimization. Our main result, in this subsection, is asserted in the following theorem.
**Theorem 3**.: _For arbitrary \\(N\\)-dimensional parameter modulators \\(\\mathbf{x}_{j}(U_{j}),\\;j=1,2\\) transmitting subject to power constraints given by (6) across the two-user Gaussian MAC (5), the MSE exponents are bounded by_
\\[\\epsilon_{1} \\leq \\min\\left\\{\\epsilon_{\\mathrm{s},1},\\inf_{\\alpha:\\;F(\\alpha)+2 \\alpha\\geq\\epsilon_{2}}F(\\alpha),\\inf_{\\alpha:\\;G(\\alpha)\\geq\\epsilon_{2}}G( \\alpha)+2\\alpha\\right\\} \\tag{45}\\] \\[\\epsilon_{2} \\leq \\min\\left\\{\\epsilon_{\\mathrm{s},2},\\inf_{\\alpha:\\;G(\\alpha)+2 \\alpha\\geq\\epsilon_{1}}G(\\alpha),\\inf_{\\alpha:\\;F(\\alpha)\\geq\\epsilon_{1}}F( \\alpha)+2\\alpha\\right\\} \\tag{46}\\]
_where_
\\[F(\\alpha) \\stackrel{{\\triangle}}{{=}} \\min_{R}[E_{u}(R,R+\\alpha)+2R]\\},\\] \\[G(\\alpha) \\stackrel{{\\triangle}}{{=}} \\min_{R}[E_{u}(R+\\alpha,R)+2R]\\}\\]
_and \\(E_{u}(R_{1},R_{2})\\) and \\(\\epsilon_{\\mathrm{s},j}\\) denote any upper bound on the reliability function of the two-user Gaussian MAC and the single-user bound on the MSE exponent in estimating the parameter \\(U_{j}\\), \\(j=1,2\\), given by Theorem 1, respectively._
Proof.: Substituting \\(\\Delta=e^{-RN}\\) and \\(\\theta=e^{-\\alpha N}\\) into (40) and changing the integration variable on the r.h.s. of (40) to \\(R\\), we obtain
\\[\\mathrm{MSE}_{1}+e^{2\\alpha N}\\mathrm{MSE}_{2}\\geq\\frac{N}{2}\\int_{0}^{\\infty }dR\\cdot e^{-2RN}\\cdot L_{B}(e^{-RN},e^{-(R+\\alpha)N}) \\tag{47}\\]
By the Laplace integration method [19], the r.h.s. of (47) is of the exponential order of \(\exp\{-N\min_{R}[E_{u}(R,R+\alpha)+2R]\}=\exp\{-NF(\alpha)\}\). The l.h.s. is of the exponential order of \(\exp\{-N\min\{\epsilon_{1},\epsilon_{2}-2\alpha\}\}\). Thus, we obtain
\\[\\min\\{\\epsilon_{1},\\epsilon_{2}-2\\alpha\\}\\leq F(\\alpha)\\qquad\\forall\\alpha\\geq 0. \\tag{48}\\]
In other words, for every \\(\\alpha\\geq 0\\), there exists \\(\\lambda\\in[0,1]\\) such that \\(\\lambda\\epsilon_{1}+(1-\\lambda)(\\epsilon_{2}-2\\alpha)\\leq F(\\alpha)\\) or equivalently:
\\[\\epsilon_{1}\\leq\\inf_{\\alpha\\geq 0}\\sup_{0\\leq\\lambda\\leq 1}\\frac{F(\\alpha)+(1- \\lambda)(\\epsilon_{2}-2\\alpha)}{\\lambda}=\\inf_{\\alpha:\\;F(\\alpha)+2\\alpha \\geq\\epsilon_{2}}F(\\alpha). \\tag{49}\\]
Substituting \\(\\Delta=e^{-RN}\\) and \\(\\theta=e^{-\\alpha N}\\) into (43) and changing the integration variable on the r.h.s. to \\(R\\), we get \\(\\max\\{\\epsilon_{1}-2\\alpha,\\epsilon_{2}\\}\\leq G(\\alpha)\\;,\\forall\\alpha\\geq 0\\) that yields the following bound on \\(\\epsilon_{1}\\) as
\\[\\epsilon_{1}\\leq\\inf_{\\alpha\\geq 0}\\sup_{0\\leq\\lambda\\leq 1}\\left(\\frac{G( \\alpha)+(1-\\lambda)\\epsilon_{2}}{\\lambda}+2\\alpha\\right)=\\inf_{\\alpha:\\;G( \\alpha)\\geq\\epsilon_{2}}G(\\alpha)+2\\alpha. \\tag{50}\\]
The overall bound on \(\epsilon_{1}\) is the minimum of the three bounds given by (49), (50) and the bound on the single-user MSE exponent given by (18). The bound on \(\epsilon_{2}\) is obtained in the very same manner.
For the purpose of numerical evaluation, we will study three different alternatives for \(E_{u}(R_{1},R_{2})\) to be used in bounding the MSE exponents (45)-(46), assuming equal power at both transmitters, i.e., \(\mathcal{S}_{1}=\mathcal{S}_{2}=\mathcal{S}\). Clearly, with equal powers the two exponent functions coincide, \(F(\alpha)=G(\alpha)\).
#### Iv-B1 Divergence bound
\\(E_{u}(R_{1},R_{2})\\) is chosen as the sphere-packing bound of [13], taking the auxiliary channel \\(W\\) to be a Gaussian MAC with noise variance \\(\\sigma_{w}^{2}\\). For inputs of powers as defined by (6), the rate region of the auxiliary Gaussian MAC \\(W\\) is given by
\[R_{j} \leq \frac{1}{2}\log\left(1+\frac{\mathcal{S}}{\sigma_{w}^{2}}\right) \tag{51}\] \[R_{1}+R_{2} \leq \frac{1}{2}\log\left(1+\frac{2\mathcal{S}}{\sigma_{w}^{2}}\right), \tag{52}\]
which implies that for \(W\) to exclude \((R_{1},R_{2})\) from the achievable region,
\\[\\sigma_{w}^{2}\\geq\\min\\left\\{\\frac{\\mathcal{S}}{e^{2R_{1}}-1},\\frac{\\mathcal{S}}{ e^{2R_{2}}-1},\\frac{2\\mathcal{S}}{e^{2(R_{1}+R_{2})}-1}\\right\\}\\stackrel{{ \\triangle}}{{=}}\\sigma_{0}^{2}(R_{1},R_{2}), \\tag{53}\\]
and it is assumed that \(\sigma_{0}^{2}(R_{1},R_{2})>\sigma^{2}\). Thus,
\\[E_{\\pi}(R_{1},R_{2}) = \\frac{1}{2}\\left[\\frac{\\sigma_{0}^{2}(R_{1},R_{2})}{\\sigma^{2}}- \\ln\\left(\\frac{\\sigma_{0}^{2}(R_{1},R_{2})}{\\sigma^{2}}\\right)-1\\right] \\tag{54}\\] \\[= \\min\\{D(R_{1},\\mathcal{S}),D(R_{2},\\mathcal{S}),D(R_{1}+R_{2},2 \\mathcal{S})\\},\\]
where the divergence function is defined using [13, eq. (5.27)] as
\\[D(R,\\mathcal{S})\\stackrel{{\\triangle}}{{=}}\\frac{1}{2}\\left[ \\frac{\\mathcal{S}}{\\sigma^{2}(e^{2R}-1)}-\\ln\\left(\\frac{\\mathcal{S}}{\\sigma^{ 2}(e^{2R}-1)}\\right)-1\\right]. \\tag{55}\\]
The derivation of the upper bound \(E_{\mathrm{sp}}(R_{1},R_{2})\) for the Gaussian MAC can be found in Appendix VII-E. We first need to calculate
\\[F(\\alpha) = \\inf_{R>0}\\left\\{2R+E_{\\pi}(R,R+\\alpha)\\right\\} \\tag{56}\\] \\[= \\inf_{R>0}\\left\\{2R+\\frac{1}{2}\\left[\\frac{\\sigma_{0}^{2}(R,R+ \\alpha)}{\\sigma^{2}}-\\ln\\left(\\frac{\\sigma_{0}^{2}(R,R+\\alpha)}{\\sigma^{2}} \\right)-1\\right]\\right\\}\\] \\[= \\min\\{F_{1},F_{2}(\\alpha),F_{12}(\\alpha)\\},\\]
with
\\[F_{1} = \\inf_{R\\geq 0}[2R+D(R,\\mathcal{S})] \\tag{57}\\] \\[F_{2}(\\alpha) = \\inf_{R\\geq 0}[2R+D(R+\\alpha,\\mathcal{S})]\\] (58) \\[F_{12}(\\alpha) = \\inf_{R\\geq 0}[2R+D(2R+\\alpha,2\\mathcal{S})]. \\tag{59}\\]
The channel rates that minimize the three exponents \\(F_{1}\\), \\(F_{2}(\\alpha)\\) and \\(F_{12}(\\alpha)\\) given by (57)-(59) are denoted respectively by \\(R_{1}^{*}\\), \\(R_{2}^{*}\\) and \\(R_{12}^{*}\\) that are derived and given in detail in Appendix VII-F. Using these rate functions we can reformulate the minimum functions \\(F_{1}^{*}\\), \\(F_{2}^{*}(\\alpha)\\) and \\(F_{12}^{*}(\\alpha)\\) as functions of \\(R_{1}^{*}\\), \\(R_{2}^{*}\\) and \\(R_{12}^{*}\\), respectively. Considering the constraint in (45), we choose the \\(\\alpha\\) satisfying
\\[\\epsilon_{2}\\leq\\min\\{F_{1}^{*},F_{2}^{*}(\\alpha),F_{12}^{*}(\\alpha)\\}+2\\alpha. \\tag{60}\\]
The constraint \\(\\epsilon_{2}\\leq F_{1}^{*}+2\\alpha\\) yields
\\[\\alpha\\leq\\frac{F_{1}^{*}-\\epsilon_{2}}{2}\\stackrel{{\\triangle}}{{= }}\\alpha_{1}(\\epsilon_{2}). \\tag{61}\\]
The constraint \(\epsilon_{2}\leq F_{2}^{*}(\alpha)+2\alpha\) gives no requirement concerning \(\alpha\); it is simply the single-user bound for user 2. For the two-user component \(\epsilon_{2}\leq F_{12}^{*}(\alpha)+2\alpha\) we get
\\[\\alpha\\leq\\frac{1}{2}(F_{12}^{*}(\\alpha)-\\epsilon_{2})\\stackrel{{ \\triangle}}{{=}}\\alpha_{2}(\\epsilon_{2}). \\tag{62}\\]
Thus, the constraint becomes
\\[\\alpha\\leq\\alpha^{*}(\\epsilon_{2})\\stackrel{{\\triangle}}{{=}} \\max\\{\\alpha_{1}(\\epsilon_{2}),\\alpha_{2}(\\epsilon_{2})\\} \\tag{63}\\]
resulting in the overall bound
\\[\\epsilon_{1} \\leq F[\\alpha^{*}(\\epsilon_{2})] \\tag{64}\\] \\[= \\min\\{F_{1},F_{2}[\\alpha^{*}(\\epsilon_{2})],F_{12}[\\alpha^{*}( \\epsilon_{2})]\\}.\\]
The roles of the users should be interchanged to obtain the upper bound for \\(\\epsilon_{2}\\) as a function of \\(\\epsilon_{1}\\). The overall upper bound on the achievable region of the MSE exponents is the intersection of the two. Note that the upper bound on the MSE exponent in a point-to-point channel that is derived from (20) in the previous part is equivalent to (57).
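To make this procedure concrete, the following Python sketch evaluates the divergence function (55), the exponents (57)-(59) and the resulting bound (64) on \(\epsilon_{1}\) as a function of \(\epsilon_{2}\). It is an illustration only: the closed-form minimizing rates of Appendix VII-F are replaced by a numerical minimization, and the power, the search intervals, and the test values of \(\epsilon_{2}\) are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq


def divergence(rate, power, sigma2=1.0):
    # The divergence function D(R, S) of (55)
    t = power / (sigma2 * (np.exp(2.0 * rate) - 1.0))
    return 0.5 * (t - np.log(t) - 1.0)


def _min_over_rate(objective, r_max=20.0):
    # Numerical stand-in for the minimizing rates R_1^*, R_2^*, R_12^*
    return minimize_scalar(objective, bounds=(1e-8, r_max), method="bounded").fun


def f1_star(power):
    return _min_over_rate(lambda r: 2.0 * r + divergence(r, power))                       # (57)


def f2_star(alpha, power):
    return _min_over_rate(lambda r: 2.0 * r + divergence(r + alpha, power))               # (58)


def f12_star(alpha, power):
    return _min_over_rate(lambda r: 2.0 * r + divergence(2.0 * r + alpha, 2.0 * power))   # (59)


def eps1_bound(eps2, power):
    # Bound (64): eps_1 <= min{F_1, F_2(alpha*), F_12(alpha*)} with alpha* from (61)-(63)
    alpha1 = max((f1_star(power) - eps2) / 2.0, 0.0)                                      # (61)
    g = lambda a: 2.0 * a - (f12_star(a, power) - eps2)                                   # (62) as a root equation
    alpha2 = brentq(g, 0.0, 50.0) if g(0.0) < 0.0 else 0.0
    alpha = max(alpha1, alpha2)                                                           # (63)
    return min(f1_star(power), f2_star(alpha, power), f12_star(alpha, power))


if __name__ == "__main__":
    power = 10.0  # equal power S for both users, sigma^2 = 1
    for eps2 in (0.25, 0.5, 1.0):
        print(eps2, eps1_bound(eps2, power))
```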
#### Iv-B2 Shannon's sphere-packing bound
As a second alternative to the divergence bound by Nazari, we adapt Shannon's sphere-packing bound, studied in Section III-B1, to the two-user setting. Before defining the exponents using (21), we remind the reader of the error exponent region for a MAC, introduced in [15, Theorem 4]. The authors of [15] show that for the Gaussian MAC with equal signal powers, denoted by \(\mathcal{S}\), an outer bound on the error exponent region is dictated by three inequalities. The first two error exponents \(E_{j},\ j=1,2\), are bounded from above by \(E_{su}(R_{j},\mathcal{S}/\sigma^{2})\) and correspond to the two single-user error events, and the third exponent \(E_{su}(R_{1}+R_{2},2\mathcal{S}/\sigma^{2})\) corresponds to the joint error event. In all inequalities, \(E_{su}(R)\) represents any upper bound on the reliability function of the single-user AWGN channel. Let us denote the three exponents which make use of (21) in the minimization by \(F_{1,\text{a}}\), \(F_{2,\text{a}}(\alpha)\) and, for the two-user component, by \(F_{12,\text{a}}(\alpha)\).
Using the results of [15], the single-user components are functions of the minimizing rate given by (22):
\[F_{1,\text{a}}^{*} =2R_{\min}+E_{\text{\tiny u}}(\psi(R_{\min}),A) \tag{65}\] \[F_{2,\text{a}}^{*}(\alpha) =F_{1,\text{a}}^{*}-2\alpha \tag{66}\]
\\(F_{12,\\text{a}}(\\alpha)\\) has to be optimized numerically since it does not lend itself to closed form analysis. Using (21) the third exponent as the two-user component is
\\[F_{12,\\text{a}}(\\alpha)=\\min_{R^{\\prime}\\geq\\frac{\\pi}{2}}[2R^{\\prime}-\\alpha +E_{\\text{\\tiny{$\\mathfrak{g}$}}}(\\psi(2R^{\\prime}),2A)] \\tag{67}\\]
where \\(R^{\\prime}=R+\\alpha/2\\) and \\(A=\\mathcal{S}/\\sigma^{2}\\). Similarly the two-user component \\(F_{12,\\text{a}}(\\alpha)\\) with the minimum rate is denoted by \\(F_{12,\\text{a}}^{*}(\\alpha)\\). The derivation of the bounds on the error exponents follow through in the same way as shown in the previous case that makes use of the divergence bound by simply replacing the three exponents in (64) by \\(F_{1,\\text{a}}^{*}\\), \\(F_{2,\\text{a}}^{*}\\) and \\(F_{12,\\text{a}}^{*}\\).
#### Iv-B3 The upper bound by Ashikhmin _et al._
As for the third alternative for \(E_{u}(R_{1},R_{2})\), we have a more recent result by Ashikhmin _et al._ [14, Theorem 1], which is a tighter bound on the reliability function \(E(R,A)\) with SNR \(A\), and we denote it by \(E_{\mathrm{ABL}}(R,A)\). Note that \(E_{\mathrm{ABL}}(R,A)\) coincides with (21) above a certain rate. It is, in fact, a convex combination of (21) with a tighter low-rate bound which, unlike (21), coincides with the zero-rate exponent. We were not able to characterize the MSE exponents analytically for the Ashikhmin _et al._ upper bound on the reliability function. Similar to Shannon's sphere-packing bound, we denote the three error exponents by \(F_{1,\mathrm{ABL}}\), \(F_{2,\mathrm{ABL}}(\alpha)\) and \(F_{12,\mathrm{ABL}}(\alpha)\), which are evaluated as
\[\begin{split}& F_{1,\mathrm{ABL}}=\min_{R\geq 0}[2R+E_{\mathrm{ABL}}(R,A)]\\ & F_{2,\mathrm{ABL}}(\alpha)=F_{1,\mathrm{ABL}}-2\alpha\\ & F_{12,\mathrm{ABL}}(\alpha)=\min_{R\geq 0}[2R+E_{\mathrm{ABL}}(2R+\alpha,2A)]\end{split} \tag{68}\]
where \\(R^{\\prime}=R+\\alpha/2\\). The optimal values are replaced in (49) to determine the MSE exponents. It should be mentioned that the MSE exponent region in this case may coincide for some choice of SNR with the region based on (21) since the two error exponents coincide for some rates. In Section V the three bounds on the MSE exponents in a two-user MAC are numerically evaluated and their performances are compared as a function of various values of SNR.
## V Numerical Results
In Figure 3, we first present a numerical evaluation of the bounds for the single-user problem treated in Section III, together with several bounds proposed in the literature for the same problem and one achievable scheme. Following the order of the curves in the legend, \(M\)-ary Scalar Quantization and \(M\)-ary Simplex refers to the exact MSE of a uniform scalar quantizer with \(\log_{2}M\) bits that is mapped to a regular \(M\)-ary simplex. Note that this combination has an exponential behaviour of \(O(e^{-\mathcal{E}/6})\), which is higher than that of all the lower bounds. We also show the rate-distortion lower bound from Goblick [1], \(D=\frac{1}{2\pi e}e^{-\mathcal{E}}\), for the sake of comparison. The four remaining lower bounds make use of the results from [4] and the work reported here. The two new lower bounds correspond to (17) and (16) combined with (15). The previously best lower bound corresponds to (16) combined with the lower bound through the use of [4, eq. 13]. We also show a conjectured bound which results from the combination of (15) with the exact error probability of a regular \(M\)-ary simplex. The validity of this bound depends on the validity of the Weak Simplex Conjecture. It is interesting to note that the bound obtained through the use of (17) with (15) comes very close to the conjectured bound, even for moderate signal energies.
In Figure 4, we present a numerical evaluation of (37) for different values of \(\theta\). The signal-to-noise ratio (SNR), chosen equal for both transmitters, is \(\mathcal{E}/\sigma^{2}\). The _wall_ and _floor_, i.e., the vertical and horizontal parts of the black curves closest to the axes, correspond to the single-user bounds \(\mathrm{MSE}_{s,j}\). The red and blue curves represent all possible bounds for \(\theta\in[0,1)\). The convex hulls are depicted by the solid and dotted black curves, using the two-user adaptation of the classical Shannon zero-rate bound given by (35) and the lower bound given by (36).
In Figure 5, the three bounds on the MSE exponents are numerically evaluated for different values of SNR, which is chosen equal for both transmitters. Clearly, the divergence bound is the weakest one for all values of SNR, whereas the outer bound evaluated using the reliability function bound by Ashikhmin _et al._, labeled as ABL in the legend, is the tightest. It seems to coincide with the bound using (21) for high SNR levels in the portion not dominated by the single-user error event. It is worth mentioning that the difference in performance between the divergence bound and the reliability-function-based bounds is most significant for low SNR levels.
## VI Conclusion
New lower bounds on any linear combination of the MSE's are derived for two-user separate modulation and joint parameter estimation on a discrete-time Gaussian MAC without bandwidth constraints. To this end, we used zero-rate lower bounds on the error probability of Gaussian channels by Shannon and by Polyanskiy _et al._ Numerical results showed that the multi-user adaptation of the zero-rate lower bound by Polyanskiy _et al._ provides a tighter overall lower bound on the MSE pairs than the classical Shannon bound. Additionally, we introduced upper bounds on the MSE exponents that can make use of any bound on the error exponent of a single-user AWGN channel. The obtained results are numerically evaluated for three different bounds on the reliability function of the Gaussian channel. It is shown that applying the reliability function bound by Ashikhmin _et al._ [14] to the MAC provides a significantly tighter characterization than Shannon's sphere-packing bound [12] and the divergence bound [13].
Fig. 4: Numerical evaluation of (37) for different values of SNR and all possible values of \\(\\theta\\) where the dotted and solid boundaries represent the bounds using (35) and (36), respectively.
## VII Appendix
### _The derivation of the new zero-rate lower bound_
[16, Theorem 41] provides a lower bound on the average error probability for the AWGN channel as a function of the statistics of two random variables \\(H_{N}\\) and \\(G_{N}\\). Specifically, \\(H_{N}\\) is defined as [16, eq. 205]
\[H_{N}=C+\frac{\log_{2}e}{2}\frac{(2^{2C/N}-1)}{2^{2C/N}}\sum_{i=1}^{N}\left(1-Z_{i}^{2}+\frac{2\sigma}{\sqrt{\mathcal{S}}}Z_{i}\right), \tag{69}\]
where \\(C=\\frac{N}{2}\\log_{2}\\left(1+\\frac{\\mathcal{S}}{\\sigma^{2}}\\right)\\) and \\(Z_{i}\\) are all i.i.d. \\(\\mathcal{N}(0,1)\\). In order to simplify this for the finite-energy case, consider the random variables \\(Q_{0}=\\frac{1}{\\sqrt{N}}\\sum_{i=1}^{N}Z_{i}\\) so that \\(Q_{0}\\sim\\mathcal{N}(0,1)\\) and \\(Q_{1,N}=\\frac{1}{N}\\sum_{i=1}^{N}Z_{i}^{2}\\), so that \\(\\mathrm{Var}(Q_{1,N})=\\frac{2}{N}\\). The first condition for the Polyanskiy _et al._ converse is that
\\[\\Pr\\left(H_{N}\\geq\\gamma_{n}\\right)=1-\\epsilon(\\mathcal{E},M,N) \\tag{70}\\]
where \\(\\epsilon(\\mathcal{E},M,N)\\) is the average probability of error. Expressing the right-hand tail of the c.d.f. of \\(H_{N}\\) in terms of \\(Q_{0}\\) and \\(Q_{1}\\) yields
\\[\\Pr\\left(H_{N}\\geq\\gamma_{N}\\right)=\\Pr\\left(C+\\frac{N(2^{2C/N}-1)\\log_{2}e}{ 2^{2C/N+1}}\\left(1-Q_{1,N}\\right)+\\frac{N(2^{2C/N}-1)\\log_{2}e}{2^{2C/N}}Q_{0} \\geq\\gamma_{n}\\right) \\tag{71}\\]
and rearranging (71) in terms of \\(Q_{0}\\) provides
\\[\\Pr\\left(H_{N}\\geq\\gamma_{N}\\right)=\\Pr\\left(Q_{0}\\geq\\frac{(\\gamma_{N}-C)}{ \\log_{2}e}\\frac{2^{2C/N}}{\\sqrt{\\mathcal{E}}/\\sigma}+\\frac{\\sqrt{\\mathcal{E}} }{2\\sigma}\\left(1-Q_{1,N}\\right)\\right) \\tag{72}\\]
Now, \\(1-Q_{1,N}\\) converges to 0 with \\(N\\), so we have the following bound on (72) which is tight for large \\(N\\) and some \\(\\mu_{N}>0\\)
\\[\\Pr\\left(H_{N}\\geq\\gamma_{N}\\right)\\leq\\Pr(Q_{1,N}\\leq 1+\\mu_{N})\\Pr\\left(Q_{0 }\\geq\\frac{(\\gamma_{N}-C)}{\\log_{2}e}\\frac{2^{2C/N}}{\\sqrt{\\mathcal{E}}/ \\sigma}-\\mu_{N}\\frac{\\sqrt{\\mathcal{E}}}{2\\sigma}\\right)+\\Pr(Q_{1,N}>1+\\mu_{N})\\]
Fig. 5: Numerical evaluation of the upper bounds on the error exponents for different values of SNR.
\\[\\leq\\Pr\\left(Q_{0}\\geq\\frac{(\\gamma_{N}-C)}{\\log_{2}e}\\frac{2^{2C/N}}{ \\sqrt{\\mathcal{E}}/\\sigma}-\\mu_{N}\\frac{\\sqrt{\\mathcal{E}}}{2\\sigma}\\right)+ \\delta_{N}\\] \\[=Q\\left(\\frac{(\\gamma_{N}-C)}{\\log_{2}e}\\frac{2^{2C/N}}{\\sqrt{ \\mathcal{E}}/\\sigma}-\\mu_{N}\\frac{\\sqrt{\\mathcal{E}}}{2\\sigma}\\right)+\\delta_ {N} \\tag{73}\\]
where \\(\\delta_{N}=\\Pr(Q_{1,N}>1+\\mu_{N})=1-\\frac{1}{\\Gamma\\left(\\frac{\\mathcal{E}}{ 2}\\right)}\\gamma\\left(\\frac{N}{2},\\frac{N(1+\\mu_{N})}{2}\\right)\\leq\\left(1+ \\mu_{N}\\right)e^{-\\frac{N\\mu_{N}}{2}}\\)[20, p.1325,Lemma 1]. Combining (73) with (70) yields
\\[\\frac{(\\gamma_{N}-C)}{\\log_{2}e}\\frac{2^{2C/N}}{\\sqrt{\\mathcal{E}}/\\sigma}-\\mu _{N}\\frac{\\sqrt{\\mathcal{E}}}{2\\sigma}\\leq Q^{-1}\\left(1-\\epsilon(\\mathcal{E},M,N)-\\delta_{N}\\right) \\tag{74}\\]
Turning now to the \\(G_{N}\\), from [16, eq. 204] we have
\\[G_{N} =C-\\frac{(2^{2C/N}-1)\\log_{2}e}{2}\\sum_{i=1}^{N}\\left(1+Z_{i}^{2} -2\\sqrt{1+\\frac{\\sigma^{2}}{\\mathcal{S}}}Z_{i}\\right)\\] \\[=C-\\frac{\\mathcal{E}\\log_{2}e}{2\\sigma^{2}}\\left(1+Q_{1,N}\\right) +\\frac{\\log_{2}e}{\\sigma}2^{C/N}\\sqrt{\\mathcal{E}}Q_{0} \\tag{75}\\]
Rearranging \\(\\Pr\\left(G_{N}\\geq\\gamma_{N}\\right)\\) in terms of \\(Q_{0}\\) yields
\\[\\Pr\\left(G_{N}\\geq\\gamma_{N}\\right) =\\Pr\\left(Q_{0}\\geq\\frac{(\\gamma_{N}-C)}{\\log_{2}e\\sqrt{( \\mathcal{E}/\\sigma)}2^{C/N}}+\\frac{\\sqrt{\\mathcal{E}}}{2^{C/N}}\\frac{1+Q_{1,N }}{2\\sigma}\\right)\\] \\[\\geq(1-\\delta_{N})\\Pr\\left(Q_{0}\\geq\\frac{(\\gamma_{N}-C)}{\\log_{ 2}e\\sqrt{(\\mathcal{E}/\\sigma)}2^{C/N}}+\\frac{\\sqrt{\\mathcal{E}}}{2^{C/N}} \\frac{1+Q_{1,N}}{2\\sigma}\\right)\\] \\[\\overset{\\text{(a)}}{\\geq}(1-\\delta_{N})\\Pr\\left(Q_{0}\\geq\\frac{ Q^{-1}\\left(1-\\epsilon(\\mathcal{E},M,N)-\\delta_{N}\\right)}{2^{3C/N}}+\\mu_{N} \\frac{\\sqrt{\\mathcal{E}}}{\\sigma 2^{3C/N+1}}+\\frac{\\sqrt{\\mathcal{E}}}{2^{C/N}} \\left(\\sigma^{-1}+\\frac{\\mu_{N}}{2\\sigma}\\right)\\right)\\] \\[=(1-\\delta_{N})Q\\left(\\frac{Q^{-1}\\left(1-\\epsilon(\\mathcal{E},M, N)-\\delta_{N}\\right)}{2^{3C/N}}+\\mu_{N}\\frac{\\sqrt{\\mathcal{E}}}{\\sigma 2^{3C/N+1}}+\\frac{\\sqrt{ \\mathcal{E}}}{2^{C/N}}\\left(\\sigma^{-1}+\\frac{\\mu_{N}}{2\\sigma}\\right)\\right) \\tag{76}\\]
where step (a) is obtained using (74). Polyanskiy's bound in [16, eq.208] on the signal-set cardinality becomes
\\[M\\leq\\frac{1}{\\Pr(G_{N}\\geq\\gamma_{N})}\\leq\\left[(1-\\delta_{N})Q\\left(\\frac{ Q^{-1}\\left(1-\\epsilon(\\mathcal{E},M,N)-\\delta_{N}\\right)}{2^{3C/N}}+\\mu_{N} \\frac{\\sqrt{\\mathcal{E}}}{\\sigma 2^{3C/N+1}}+\\frac{\\sqrt{\\mathcal{E}}}{2^{C/N}} \\left(\\sigma^{-1}+\\frac{\\mu_{N}}{2\\sigma}\\right)\\right)\\right]^{-1} \\tag{77}\\]
which when rearranged for the error probability becomes
\\[\\epsilon(\\mathcal{E},M,N)\\geq Q\\left(\\frac{\\sqrt{\\mathcal{E}}}{\\sigma}\\left( \\left(1+\\frac{\\mathcal{E}}{N\\sigma^{2}}\\right)\\left(1+\\frac{\\mu_{N}}{2}\\right) +\\frac{\\mu_{N}}{2}\\right)-\\left(1+\\frac{\\mathcal{E}}{N\\sigma^{2}}\\right)^{3/2} Q^{-1}\\left(\\frac{1}{M(1-\\delta_{N})}\\right)\\right)-\\delta_{N} \\tag{78}\\]
Now, \\(\\lim_{N\\to\\infty}\\delta_{N}=0\\), so the limiting expression becomes
\\[\\lim_{N\\to\\infty}\\epsilon(\\mathcal{E},M,N)\\geq Q\\left(\\frac{\\sqrt{\\mathcal{E }}}{\\sigma}(1+\\mu)-Q^{-1}\\left(\\frac{1}{M}\\right)\\right) \\tag{79}\\]
for any arbitrarily small \\(\\mu>0\\). The obtained bound is given by (17) in Section III-A2.
### _The average squared Euclidean distance derivation for a two-user MAC_
The average squared Euclidean distance for the pairs represented by the first term in (28) is given by
\[D_{2}^{2}(u_{1},u_{2})=\frac{1}{M_{2}(M_{2}-1)}\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}^{\prime}=1}^{M_{2}}\sum_{n=1}^{N}\left|x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n}\right|^{2}\] \[=\frac{2}{M_{2}(M_{2}-1)}\left[M_{2}\sum_{i^{\prime}=1}^{M_{2}}\left\|\mathbf{x}_{2,i^{\prime}}\right\|^{2}-\sum_{n=1}^{N}\left|\sum_{i^{\prime}=1}^{M_{2}}x_{2,i^{\prime},n}\right|^{2}\right]\] \[\leq\frac{2}{(M_{2}-1)}\sum_{i^{\prime}=1}^{M_{2}}\left\|\mathbf{x}_{2,i^{\prime}}\right\|^{2}\] \[\leq\frac{2M_{2}}{(M_{2}-1)}\mathcal{E}_{2} \tag{80}\]
Note that the derivation given above applies to \(D_{1}^{2}(u_{1},u_{2})\) with \(M_{1}\) as well. For the third term we have
\[D_{12}^{2}(u_{1},u_{2})=\frac{1}{M_{1}M_{2}(M_{1}-1)(M_{2}-1)}\sum_{i_{1}=1}^{M_{1}}\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}\neq i_{1}}\sum_{i_{2}^{\prime}\neq i_{1}^{\prime}}\sum_{n=1}^{N}\left|(x_{1,i_{1},n}-x_{1,i_{2},n})+(x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n})\right|^{2}\] \[=\frac{1}{M_{1}(M_{1}-1)}\sum_{i_{1}=1}^{M_{1}}\sum_{i_{2}\neq i_{1}}\sum_{n=1}^{N}|x_{1,i_{1},n}-x_{1,i_{2},n}|^{2}+\frac{1}{M_{2}(M_{2}-1)}\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}^{\prime}\neq i_{1}^{\prime}}\sum_{n=1}^{N}|x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n}|^{2}\] \[\qquad+\frac{2}{M_{1}M_{2}(M_{1}-1)(M_{2}-1)}\sum_{i_{1}=1}^{M_{1}}\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}\neq i_{1}}\sum_{i_{2}^{\prime}\neq i_{1}^{\prime}}\sum_{n=1}^{N}\mathrm{Re}\left((x_{1,i_{1},n}-x_{1,i_{2},n})(x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n})^{*}\right)\] \[=\frac{1}{M_{1}(M_{1}-1)}\sum_{i_{1}=1}^{M_{1}}\sum_{i_{2}=1}^{M_{1}}\sum_{n=1}^{N}|x_{1,i_{1},n}-x_{1,i_{2},n}|^{2}+\frac{1}{M_{2}(M_{2}-1)}\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}^{\prime}=1}^{M_{2}}\sum_{n=1}^{N}|x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n}|^{2}\] \[\qquad+\frac{2}{M_{1}M_{2}(M_{1}-1)(M_{2}-1)}\mathrm{Re}\left(\underbrace{\sum_{i_{1}=1}^{M_{1}}\sum_{i_{2}=1}^{M_{1}}\sum_{n=1}^{N}(x_{1,i_{1},n}-x_{1,i_{2},n})}_{0}\underbrace{\sum_{i_{1}^{\prime}=1}^{M_{2}}\sum_{i_{2}^{\prime}=1}^{M_{2}}\sum_{n=1}^{N}(x_{2,i_{1}^{\prime},n}-x_{2,i_{2}^{\prime},n})^{*}}_{0}\right)\] \[=\frac{2}{M_{1}(M_{1}-1)}\left[M_{1}\sum_{i=1}^{M_{1}}\|\mathbf{x}_{1,i}\|^{2}-\sum_{n=1}^{N}\left|\sum_{i=1}^{M_{1}}x_{1,i,n}\right|^{2}\right]+\frac{2}{M_{2}(M_{2}-1)}\left[M_{2}\sum_{i^{\prime}=1}^{M_{2}}\|\mathbf{x}_{2,i^{\prime}}\|^{2}-\sum_{n=1}^{N}\left|\sum_{i^{\prime}=1}^{M_{2}}x_{2,i^{\prime},n}\right|^{2}\right]\] \[\leq\frac{2M_{1}}{(M_{1}-1)}\mathcal{E}_{1}+\frac{2M_{2}}{(M_{2}-1)}\mathcal{E}_{2} \tag{81}\]
### _Bounding the error probability in a MAC_
Here we apply to the two-user MAC the modification of the single-user derivation that resulted in the improved lower bound (15). The upper bound on the overall error probability given by (30) is derived as follows:
\\[P_{\\text{e}}=\\int_{0}^{1-(M_{1}-1)\\Delta_{1}}du_{1}p(u_{1})\\int_{ 0}^{1-(M_{2}-1)\\Delta_{2}}du_{2}p(u_{2})P_{\\text{e}}(u_{1},u_{2})\\] \\[\\leq\\frac{1}{M_{1}M_{2}}\\sum_{i=1}^{M_{1}}\\sum_{i^{\\prime}=1}^{M_ {2}}\\int_{0}^{1-(M_{1}-1)\\Delta_{1}}du_{1}\\int_{0}^{1-(M_{2}-1)\\Delta_{2}}du_{2 }\\Pr\\left\\{|U_{1}-\\hat
We set the following relationships as \\(M_{1}=\\lceil 1/\\Delta_{1}\\rceil\\) and \\(M_{2}=\\lceil 1/\\Delta_{2}\\rceil\\) so that the lower bound \\(L_{B}(\\Delta_{1},\\Delta_{2})\\) becomes
\\[\\frac{1}{\\lceil 1/\\Delta_{1}\\rceil\\lceil 1/\\Delta_{2}\\rceil} \\sum_{i=0}^{\\lceil 1/\\Delta_{1}\\rceil-1}\\sum_{i^{\\prime}=0}^{1/\\Delta_{2} \\rceil-1}\\] \\[\\left[\\Pr\\left\\{\\left|U_{1}-\\hat{U}_{1}(\\mathbf{y})\\right|>\\frac{ \\Delta_{1}}{2}|i\\Delta_{1}\\leq U_{1}\\leq 1-(M_{1}-1)\\Delta_{1}+i\\Delta_{1},i^{ \\prime}\\Delta_{2}\\leq U_{2}\\leq 1-(M_{2}-1)\\Delta_{2}+i^{\\prime}\\Delta_{2}\\right\\}\\right.\\] \\[+\\Pr\\left\\{\\left|U_{2}-\\hat{U}_{2}(\\mathbf{y})\\right|>\\frac{ \\Delta_{2}}{2}|i\\Delta_{1}\\leq U_{1}\\leq 1-(M_{1}-1)\\Delta_{1}+i\\Delta_{1},i^{ \\prime}\\Delta_{2}\\leq U_{2}\\leq 1-(M_{2}-1)\\Delta_{2}+i^{\\prime}\\Delta_{2} \\right\\}\\right]\\] \\[=\\frac{1}{\\lceil 1/\\Delta_{1}\\rceil\\lceil 1/\\Delta_{2}\\rceil} \\left[\\Pr\\left\\{\\left|\\hat{U}_{1}(\\mathbf{y})-U_{1}\\right|>\\Delta_{1}/2 \\right\\}+\\Pr\\left\\{\\left|\\hat{U}_{2}(\\mathbf{y})-U_{2}\\right|>\\Delta_{2}/2 \\right\\}\\right]\\] \\[\\geq\\left(1+\\Delta_{1}-\\left\\lceil\\frac{1}{\\Delta_{1}}\\right\\rceil \\Delta_{1}\\right)\\left(1+\\Delta_{2}-\\left\\lceil\\frac{1}{\\Delta_{2}}\\right\\rceil \\Delta_{2}\\right)P_{ZR}\\left(\\mathcal{E}_{1},\\mathcal{E}_{2},\\left\\lceil\\frac{1 }{\\Delta_{1}}\\right\\rceil,\\left\\lceil\\frac{1}{\\Delta_{2}}\\right\\rceil\\right) \\tag{83}\\]
### _Derivation of \\(C_{1}(\\theta)\\)_
As in Theorem 2, setting \\(\\Delta_{2}=\\theta\\Delta\\) and \\(\\Delta_{1}=\\Delta\\) in (83) yields \\(C_{1}(\\theta)\\) as follows.
\\[\\int_{0}^{1}d\\Delta\\Delta\\left(\\left\\lceil\\frac{1}{\\Delta}\\right \\rceil+\\Delta\\left\\lceil\\frac{1}{\\Delta}\\right\\rceil-\\left\\lceil\\frac{1}{ \\Delta}\\right\\rceil^{2}\\Delta\\right)\\left(\\left\\lceil\\frac{1}{\\theta\\Delta} \\right\\rceil+\\theta\\Delta\\left\\lceil\\frac{1}{\\theta\\Delta}\\right\\rceil-\\left \\lceil\\frac{1}{\\theta\\Delta}\\right\\rceil^{2}\\theta\\Delta\\right)P_{ZR}\\left( \\mathcal{E}_{1},\\mathcal{E}_{2},\\left\\lceil\\frac{1}{\\Delta}\\right\\rceil, \\left\\lceil\\frac{1}{\\theta\\Delta}\\right\\rceil\\right)\\] \\[=\\sum_{i=1+\\lceil\\frac{1}{\\theta}\\rceil}^{\\infty}\\mathcal{I}\\left( \\left[i\\theta\\right]=\\lceil(i-1)\\theta\\rceil\\right)\\int_{\\frac{1}{\\theta \\Delta}}^{\\frac{1}{\\theta\\Delta}-1}d\\Delta\\cdot\\Delta\\left(\\left[i\\theta \\right]+\\Delta\\left\\lceil i\\theta\\right\\rceil-\\left\\lceil i\\theta\\right\\rceil^ {2}\\Delta\\right)\\left(i+\\theta\\Delta i-i^{2}\\theta\\Delta\\right)\\] \\[+\\sum_{i=1+\\lceil\\frac{1}{\\theta}\\rceil}^{\\infty}\\mathcal{I}\\left( \\left[i\\theta\\right]\
eq\\lceil(i-1)\\theta\\rceil\\right)\\left(\\int_{\\frac{1}{ \\theta\\Delta(i-1)}}^{\\frac{1}{\\theta(i-1)}}d\\Delta\\cdot\\Delta\\left(\\left[(i-1) \\theta\\right]+\\Delta\\left\\lceil(i-1)\\theta\\right]-\\left\\lceil(i-1)\\theta \\right\\rceil^{2}\\Delta\\right)\\left(i+\\theta\\Delta i-i^{2}\\theta\\Delta\\right)\\] \\[+\\int_{\\frac{1}{\\theta\\Delta(i-1)}}^{1\\frac{1}{\\theta(i-1)}}d \\Delta\\cdot\\Delta\\left(\\left[i\\theta\\right]+\\Delta\\left\\lceil i\\theta\\right]- \\left\\lceil i\\theta\\right\\rceil^{2}\\Delta\\right)\\left(i+\\theta\\Delta i-i^{2} \\theta\\Delta\\right)\\right)P_{ZR}\\left(\\mathcal{E}_{1},\\mathcal{E}_{2},\\left \\lceil i\\theta\\right\\rceil,i)\\] \\[+\\int_{1/(\\theta\\left\\lceil\\frac{1}{\\theta}\\right\\rceil)}^{1}d \\Delta\\cdot 2\\Delta\\cdot(1-\\Delta)\\left(\\left[\\frac{1}{\\theta}\\right]+\\theta\\Delta\\left\\lfloor \\frac{1}{\\theta}\\right\\rceil-\\left\\lceil\\frac{1}{\\theta}\\right\\rceil^{2} \\theta\\Delta\\right)P_{ZR}\\left(\\mathcal{E}_{1},\\mathcal{E}_{2},2,\\left\\lceil \\frac{1}{\\theta}\\right\\rceil\\right)\\] \\[\\stackrel{{(a)}}{{=}}\\sum_{i=1+\\lceil\\frac{1}{ \\theta}\\rceil}^{\\infty}\\left\\{\\mathcal{I}\\left(c(i)\\!=\\!c(i-1)\\right)\\left( \\frac{c(i)(2i-1)}{2i\\theta^{2}(i-1)^{2}}+\\frac{(3i^{2}-3i+1)c(i)(\\theta(1-i)+1 -c(i))}{3\\theta^{3}i^{2}(i-1)^{3}}\\right)\\right.\\] \\[+\\mathcal{I}\\left(c(i)\\!=\\!c(i-1)\\right)\\frac{(c(i)-1)c(i)(2i-1)(2 i^{2}-2i+1)}{4\\theta^{3}(i-1)^{3}i^{3}}\\] \\[+\\mathcal{I}\\left(c(i)\
eq c(i-1)\\right)\\frac{i\\cdot c(i-1)}{2} \\left(\\frac{1}{\\theta^{2}(i-1)^{2}}-\\frac{1}{c(i-1)^{2}}\\right)\\] \\[+\\mathcal{I}\\left(c(i)\
eq c(i-1)\\right)\\frac{i\\cdot c(i)\\cdot (1-c(i-1)+\\theta(1-i))}{3}\\left(\\frac{1}{\\theta^{3}(i-1)^{3}}-\\frac{1}{c(i-1) ^{3}}\\right)\\] \\[+\\mathcal{I}\\left(c(i)\
eq c(i-1)\\right)\\frac{i\\cdot c(i-1)\\cdot \\theta(1-i)(1-c(i-1))}{4}\\left(\\frac{1}{\\theta^{4}(i-1)^{4}}-\\frac{1}{c(i-1) ^{4}}\\right)\\] \\[+\\mathcal{I}\\left(c(i)\
eq c(i-1)\\right)\\left[\\frac{i\\cdot c(i)}{ 2}\\left(\\frac{1}{c(i-1)^{2}}-\\frac{1}{\\theta^{2}i^{2}}\\right)+\\frac{i\\cdot c(i) \\cdot(1-c(i)+\\theta(1-i))}{3}\\left(\\frac{1}{c(i-1)^{3}}-\\frac{1}{\\theta^{3}i^{ 3}}\\right)\\right]\\] \\[+\\left\\{\\left(\\left\\lceil 1/\\theta\\right\\rceil-\\frac{1}{\\theta^{2} \\left\\lceil 1/\\theta\\right\\rceil}\\right)+\\frac{2\\left(\\theta-\\theta\\left\\lceil 1/\\theta \\right\\rceil-1\\right)}{3}\\left(\\left\\lceil 1/\\theta\\right\\rceil-\\frac{1}{\\theta^{3}\\left\\lceil 1/\\theta \\right\\rceil^{2}}\\right)+\\left(\\left\\lceil 1/\\theta\\right\\rceil-\\frac{1}{\\theta^{4}\\left\\lceil 1/\\theta \\right\\rceil^{3}}\\right)\\frac{\\theta(\\left\\left\\lceil 1/\\theta\\right\\rceil-1)}{2}\\right\\}\\] \\[P_{ZR}\\left(\\mathcal{E}_{1},\\mathcal{E}_{2},2,\\left\\lceil 1/\\theta \\right\\rceil\\right)\\] \\[=C_{1}(\\theta) \\tag{84}\\]
In order to simplify the presentation, in step (a), we used the following change of variables \\(c(i)=\\lceil i\\theta\\rceil\\) and \\(c(i-1)=\\lceil(i-1)\\theta\\rceil\\). Combining both sides of the inequality results in the lower bound given by (40) in Section IV-A. By analogy, \\(C_{2}(\\theta)\\) can be obtained the same way by swapping the roles of the two users.
### _Divergence Bound: Upper Bounding \(E_{sp}(R_{1},R_{2})\) for the Gaussian MAC_
Consider the Gaussian MAC defined in (5). For convenience, let us consider the subclass \\(\\mathcal{W}\\) of additive Gaussian MAC's \\(Y\\sim\\mathcal{N}(x_{1}+x_{2},\\sigma_{w}^{2})\\). First, let us calculate the maximum conditional mutual informations, \\(I(X_{1};Y|X_{2})\\) and \\(I(X_{2};Y|X_{1})\\).
\\[I(X_{1};Y|X_{2}) = I(X_{1};X_{1}+X_{2}+N|X_{2}) \\tag{85}\\] \\[= h(X_{1}+X_{2}+N|X_{2})-h(X_{1}+X_{2}+N|X_{1},X_{2})\\] \\[= h(X_{1}+N|X_{2})-h(N)\\] \\[\\leq \\int_{-\\infty}^{\\infty}\\mathrm{d}x\\cdot p_{2}(x)\\cdot h(X_{1}+N| X_{2}=x)-\\frac{1}{2}\\log(2\\pi e\\sigma_{w}^{2})\\] \\[\\leq \\int_{-\\infty}^{\\infty}\\mathrm{d}x\\cdot p_{2}(x)\\cdot\\frac{1}{2 }\\ln[2\\pi e\\mbox{Var}\\{X_{1}+N|X_{2}=x\\}]-\\frac{1}{2}\\log(2\\pi e\\sigma_{w}^{2})\\] \\[\\leq \\frac{1}{2}\\ln[2\\pi e\\cdot\\mbox{EVar}\\{X_{1}+N|X_{2}\\}]-\\frac{1} {2}\\log(2\\pi e\\sigma_{w}^{2})\\] \\[= \\frac{1}{2}\\ln[2\\pi e\\cdot\\mbox{mmse}\\{X_{1}+N|X_{2}\\}]-\\frac{1} {2}\\log(2\\pi e\\sigma_{w}^{2})\\] \\[\\leq \\frac{1}{2}\\ln[2\\pi e\\cdot\\mbox{E}\\{(X_{1}+N)^{2}\\}]-\\frac{1}{2} \\log(2\\pi e\\sigma_{w}^{2})\\] \\[\\leq \\frac{1}{2}\\log\\left(1+\\frac{\\mathcal{S}}{\\sigma_{w}^{2}}\\right).\\]
Similarly, \\(I(X_{2};Y|X_{1})\\leq\\frac{1}{2}\\log(1+\\mathcal{S}/\\sigma_{w}^{2})\\). Both upper bounds are achieved at the same time if \\(X_{1}\\) and \\(X_{2}\\) are independent, zero-mean, Gaussian random variables with variances \\(\\mathcal{S}_{1}=\\mathcal{S}_{2}=S\\). Thus, the conditions \\(R_{1}\\geq I(X_{1};Y|X_{2})\\) and \\(R_{2}\\geq I(X_{2};Y|X_{1})\\), are equivalent to the condition
\\[\\sigma_{w}^{2}\\geq\\max\\left\\{\\frac{\\mathcal{S}}{e^{2R_{1}}-1},\\frac{\\mathcal{S }}{e^{2R_{2}}-1}\\right\\}\\stackrel{{\\triangle}}{{=}}\\sigma_{0}^{2 }(R_{1},R_{2}), \\tag{86}\\]
where \\(\\sigma_{0}^{2}(R_{1},R_{2})\\) is assumed larger than \\(\\sigma^{2}\\) since \\((R_{1},R_{2})\\) are assumed in the achievable region of the real underlying channel \\(P\\). Now,
\\[\\mathcal{D}(\\mathcal{N}(x_{1}+x_{2},\\sigma_{w}^{2})\\|\\mathcal{N}(x_{1}+x_{2}, \\sigma^{2}))=\\frac{1}{2}\\left[\\frac{\\sigma_{w}^{2}}{\\sigma^{2}}-\\ln\\left( \\frac{\\sigma_{w}^{2}}{\\sigma^{2}}\\right)-1\\right], \\tag{87}\\]
whose minimum under the constraint (86) is
\\[\\mathcal{D}(\\mathcal{N}(x_{1}+x_{2},\\sigma_{0}^{2}(R_{1},R_{2}))\\|\\mathcal{N} (x_{1}+x_{2},\\sigma^{2}))=\\frac{1}{2}\\left[\\frac{\\sigma_{0}^{2}(R_{1},R_{2})} {\\sigma^{2}}-\\ln\\left(\\frac{\\sigma_{0}^{2}(R_{1},R_{2})}{\\sigma^{2}}\\right)-1 \\right]. \\tag{88}\\]
Since this is independent of \\((x_{1},x_{2})\\), the outer maximization over \\(Q\\) degenerates, and the end result is
\\[E_{sp}(R_{1},R_{2})\\leq\\frac{1}{2}\\left[\\frac{\\sigma_{0}^{2}(R_{1},R_{2})}{ \\sigma^{2}}-\\ln\\left(\\frac{\\sigma_{0}^{2}(R_{1},R_{2})}{\\sigma^{2}}\\right)-1 \\right]\\stackrel{{\\triangle}}{{=}}E_{sp}(R_{1},R_{2}) \\tag{89}\\]
### _Minimization of the error exponents for the divergence bound_
The minimization of the first exponent \\(F_{1}\\) given by (57) can be written explicitly as
\\[F_{1}=\\min_{R\\geq 0}2R+\\frac{1}{2}\\left\\{\\frac{\\mathcal{S}}{e^{2R}-1}-\\ln\\frac{ \\mathcal{S}}{e^{2R}-1}-1\\right\\} \\tag{90}\\]
Taking the first derivative of the function above with respect to \(R\) and equating it to zero as follows
\\[\\frac{d}{dR}F_{1}(R)=2+\\frac{1}{2}\\left\\{\\frac{-2\\mathcal{S}e^{2R}}{(e^{2R}-1 )^{2}}+\\frac{2e^{2R}}{e^{2R}-1}\\right\\}=0\\]
yields \\(3x^{2}-(\\mathcal{S}+5)x+2=0,\\mbox{with}\\quad x=e^{2R}\\). The rate value that minimizes the first error exponent is obtained as
\\[R_{1}^{*}=\\frac{1}{2}(\\log(\\mathcal{S}+5+\\sqrt{(\\mathcal{S})^{2}+10\\mathcal{S} +1})-\\log(6)).\\]
* [18] Y. Ben-Haim and S. Litsyn, \"Improved lower bounds on the reliability function of the Gaussian channel,\" _IEEE Transactions on Information Theory_, vol. 54, pp. 5-12, January 2008.
* [19] P. S. Laplace, "Mémoire sur la probabilité des causes par les événements," _Mémoire de Mathématique et de Physique_, pp. 621-656, 1774.
* [20] B. Laurent and P. Massart, "Adaptive estimation of a quadratic functional by model selection," _Annals of Statistics_, vol. 28, pp. 1302-1338, 2000. | This paper focuses on the problem of separately modulating and jointly estimating two independent continuous-valued parameters sent over a Gaussian multiple-access channel (MAC) under the mean square error (MSE) criterion. To this end, we first improve an existing lower bound on the MSE that is obtained using the parameter modulation-estimation techniques for the single-user additive white Gaussian noise (AWGN) channel. As for the main contribution of this work, this improved modulation-estimation analysis is generalized to the model of the two-user Gaussian MAC, which will likely become an important mathematical framework for the analysis of remote sensing problems in wireless networks. We present outer bounds to the achievable region in the plane of the MSE's of the two user parameters, which provides a trade-off between the MSE's, in addition to the upper bounds on the achievable region of the MSE exponents, namely, the exponential decay rates of these MSE's in the asymptotic regime of long blocks.
Parameter modulation-estimation, multiple-access channel, error exponents, MSE | Condense the content of the following passage. | 208 |
isprs/20793242_61a3_411d_abf5_6b1803d124d2.md | Effects on Streamflow Caused by Reforestation and Deforestation in a Brazilian Southeast Basin: Evaluation by Multicriteria Analysis and Swat Model
## 1 Introduction
In Brazil, reforestation has been a plausible and low-cost solution to reduce surface runoff and control erosion. The areas defined as priorities are usually those with high slopes, soils which are easily eroded, intense and frequent rain events, watercourse banks and headwaters, in addition to areas with little or no vegetation cover (Nossack et al., 2014; Pinheiro, 2015; Santos, 2013; Sartori, Zimback, 2011).
The multicriteria analysis is commonly used for the definition of priority zones for reforestation (Nossack et al., 2014; Pinheiro, 2015; Santos, 2013; Sartori, Zimback, 2011), enabling the choice of variables and attribution of weights and values, giving priority to different options, and facilitating the decision making by presenting alternatives according to the proposed objective (Francisco et al., 2008). It is a tool used for the resolution of multiple problems. The analysis is based on the representation of a complex problem, structuring it hierarchically to prioritize factors in the analysis of several alternatives. It provides a hierarchical structure, facilitates the pairwise decomposition, reduces inconsistencies and generates priority vectors, in addition to reducing the subjectivity of the choice (Calijuri et al., 2002; Feizizadeh et al., 2014).
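The core computation behind the AHP step (pairwise comparison matrix, priority vector, consistency check) can be sketched as follows; the criteria and pairwise judgments below are illustrative only and are not the values adopted in this study:

```python
import numpy as np

criteria = ["erodibility", "slope", "erosivity"]
# Saaty-style reciprocal pairwise comparison matrix on the 1-9 scale (illustrative).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()            # priority vector (criterion weights)

lambda_max = eigvals.real[k]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)     # consistency index
cr = ci / 0.58                      # random index for n = 3; CR < 0.1 is acceptable
print(dict(zip(criteria, weights.round(3))), "CR =", round(cr, 3))
```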
Other useful tools to assess the effects of reforestation on the attenuation of surface runoff are hydrological models. They evaluate the quantitative rainfall and flow rate data, simulating the hydrological response of the watershed to rain events. They also aid in environmental planning, since they provide important information for the management of soil and the maintenance of the quality of the water resources (Lelis, Calijuri, 2010). One of these models is the _Soil and Water Assessment Tool_ (SWAT), which was developed to predict the effects of different scenarios of land use/land cover on the water quality, on the production of sediments and on the load of pollutants in the agricultural watershed (Srinivasan, Arnold, 1994).
Input data for simulating flow using SWAT are climate, hypsometry, pedology and land cover. The water management techniques focus on the latter, since it is easily modified by human action. By identifying areas with greater surface runoff potential, it is possible to manage them, and reforestation is one way to mitigate such phenomenon. Therefore, from the multicriteria analysis, alternative land cover scenarios can be designed, simulating the reforestation of priority areas; and the hydrological modeling would enable the evaluation of the effectiveness of forest recovery on the attenuation of surface runoff, as well as on the mitigation of problems such as floods and erosion.
The main advantage of using these models is the possibility of studying several scenarios in a quick manner, which significantly reduces research costs, especially in extensive and complex areas such as watersheds (Machado et al., 2003).
Thus the objective of this study was to assess the flow behavior of the Velhas River in different scenarios of reforestation and deforestation, using multicriteria analysis and hydrological modeling.
## 2 Material and Methods
### Study Area
The Velhas River Watershed (VRW) (Figure 1) is located in the state of Minas Gerais (MG), Southeast Brazil. The Velhas River is the largest tributary of the Sao Francisco River, with a drainage area of 27,851 km\({}^{2}\), corresponding to 4.4% of the drainage area of the Sao Francisco River Watershed (SFRW), according to the Velhas River Watershed Committee (CBH Rio das Velhas, 2014). The Velhas contributes nearly 11.2% (an average of 320.5 m\({}^{3}\)/s) of the Sao Francisco river flow, being the third largest contributor (DAMG, 2009; Pereira et al., 2007).
The Velhas River presents an important contribution to the flow of the Sao Francisco, which is, in turn, responsible for 70% of the water resources of Brazil's Northeast. This region has nearly 1/3 of the Brazilian population, but water availability is only 3% of that of the country, with great irregularity in the distribution of such resources. Thus, the protection and conservation of the tributaries of the Sao Francisco River, such as the Velhas River, is important for the maintenance of the main river that crosses the Northeast (Brasil, 2006; Castro, 2009).
### Preparation of reforestation and deforestation scenarios
The reforestation scenarios were designed from preliminary mappings carried out using data available through governmental and research institutes, in addition to methods available in the literature and specific geoprocessing software (Tables 1 and 2).
The design of the scenarios was carried out using the _Multicriteria Evaluation_ (MCE) module of the software _Idrisi Selva_ 17.0, through the _Analytic Hierarchy Process_. The suitability scales of the factors used in the analysis were standardized using a fuzzy function, in which the factors receive suitability values ranging from zero (lowest suitability) to 255 (greatest suitability) (Eastman, 2012; Feizizadeh et al., 2014).
Constraints were used in some classes of the land cover mapping (Table 3). The classes selected as constraints were: Rock Outcrop, Water, Urban Area and Arboreal Vegetation; the first three classes consist of areas which are difficult for growing trees and the latter is already occupied by forests. The choice of the covers to be replaced comes from the idea that the less vegetated and more exposed the soils, the higher the chance of occurrence of surface runoff with higher intensity, which may result in intensive erosive processes (El Swaify et al., 1982; Labriere et al., 2015).
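The weighted overlay underlying the MCE can be sketched as follows, assuming factors standardized to the 0-255 suitability range, AHP-derived weights, and a Boolean constraint mask; all arrays and parameter values below are illustrative and are not those used in the study:

```python
import numpy as np

def fuzzy_linear(x, lo, hi, increasing=True):
    # Linear fuzzy membership rescaled to the 0-255 suitability range.
    s = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return 255.0 * (s if increasing else 1.0 - s)

slope = np.random.uniform(0, 60, (100, 100))         # degrees (synthetic raster)
dist_river = np.random.uniform(0, 2000, (100, 100))  # metres (synthetic raster)

factors = np.stack([
    fuzzy_linear(slope, 0, 45, increasing=True),         # steeper -> more suitable
    fuzzy_linear(dist_river, 0, 500, increasing=False),  # closer to rivers -> more suitable
])
weights = np.array([0.6, 0.4])      # e.g., taken from the AHP priority vector

constraint = np.ones((100, 100))    # 0 = water/urban/rock/forest cells, 1 = allowed cells
suitability = constraint * np.tensordot(weights, factors, axes=1)
print(suitability.min(), suitability.max())
```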
| Criteria | Definition |
| --- | --- |
| Erodibility | Soil erosion ratio |
| Slope | Surface inclination with respect to the horizontal |
| Erosivity | Potential capacity of water to cause erosion |
| Distance to arboreal vegetation | Areas close to arboreal vegetation are considered more suitable for forest recovery |
| Distance to hydrography | Areas close to hydrography are considered suitable for forest recovery (riparian forest) |
| Distance to roads | Revegetation of areas closer to roads |
| Distance to urban areas | Forest recovery distant from urban areas, since forest areas are more easily affected by urban sites |

Table 1: Definition of the criteria used to design the reforestation scenarios
Figure 1: Location of the Velhas River Watershed
### Hydrological modeling
For the hydrological modeling of the VRW, secondary data available from research and governmental institutions were used (Table 4).
This stage consisted of data entry and flow simulation using the software _ArcSWAT_. The software _SWAT-CUP_ was used to carry out the sensitivity analysis of the parameters used to calibrate and validate the model, in addition to the analysis of the modeling uncertainty. The land cover, pedology and slope maps, in addition to climate data from the VRW, were entered into _ArcSWAT_ to simulate the flows at the location of the fluviometric station used in the research. The objective of this stage was to compare simulated and observed data, and thus enable the calibration and validation of the model in _SWAT-CUP_. The hydrological simulation was carried out using the 2016 land cover map, as well as flow and rainfall data from the period between 1986 and 2015, with a 10-year warm-up period.
The calibration process was carried out for the period between 1996 and 2010, and the validation from 2011 to 2015. The same interval of values of the parameters used to calibrate the model was used for validation. A total of 6 iterations with 200 simulations were carried out for calibration and only 1 iteration with 200 simulations was carried out for validation, using the _SUFI-2_ method.
The results of calibration and validation for the flow simulation of the Velhas River were evaluated by statistical analysis, in order to verify whether the flow behavior presented by the model is compatible with the data obtained in the field. For that, three statistical criteria were used: the Coefficient of Determination (R\({}^{2}\)), the Nash-Sutcliffe Efficiency (NSE) and the Percent Bias (PBIAS) (Moriasi et al., 2007). In general, the model can be considered satisfactory when NSE \(>\) 0.4, R\({}^{2}\)\(>\) 0.5 and PBIAS within \(\pm\)25% for simulations related to flow (Welde, Gebremariam, 2017).
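The three goodness-of-fit statistics can be computed as in the following sketch, assuming aligned observed and simulated monthly streamflow series (the values below are illustrative only):

```python
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

def pbias(obs, sim):
    # Percent bias; positive values indicate underestimation by the model.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [210.0, 180.0, 95.0, 60.0, 55.0, 120.0]   # illustrative m^3/s
sim = [190.0, 170.0, 105.0, 70.0, 50.0, 140.0]
print(nse(obs, sim), r2(obs, sim), pbias(obs, sim))
```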
### Simulation of flow for the designed scenarios
After the preparation of the scenarios and elaboration of the hydrological modeling for the VRW, flow simulations were carried out, in which the parameters used to simulate the flows of the scenarios had the same values found during the model calibration and validation. This process enabled the verification of the flow behavior, in the case of reforestation or deforestation in the watershed.
In order to show the influence of reforestation and deforestation on the flows of the Velhas River, a more recent period of rainfall and flow data was selected, with continuous series, i.e., without gaps throughout monitoring, and that comprised both the rainy and the dry seasons. The period chosen was from February 2012 to June 2013 (rainy season of 2011/2012 - rainy season 1, dry season of 2012 - dry season 1, rainy season of 2012/2013 - rainy season 2, and the beginning of the dry season in 2013 - dry season 2).
## 3 Results
### Reforestation and deforestation Scenarios
The mapping resulting from the multicriteria analysis showed that the suitability for reforestation in the VRW ranged between 30 and 244. The continuous scale was divided into equal parts, resulting in five suitability classes (Figure 2).
The class with the greatest representativeness was the High Suitability Class, with 6,983.4 km\({}^{2}\) (37%), followed by the Medium Suitability Class, with 4,956.3 km\({}^{2}\) (26%) and the Low Suitability Class, with 4,398.2 km\({}^{2}\) (23%). The least representative classes are those of the extremes, the Very Low Suitability Class, with 111.3 km\({}^{2}\) (\(<\)1%), and the Very High Suitability Class, with 2,570.1 km\({}^{2}\) (14%).
The most indicated locations for reforestation (Very High Suitability) are in the South, Southeast and East of the watershed. In these areas, there is still remaining arboreal vegetation, facilitating the expansion of forest areas. In these locations, there are also steep hillsides (slopes over 45\({}^{\circ}\)) occupied by pasture or bare soil. These hillsides present easily
| Information | Data used | Method used |
| --- | --- | --- |
| Land cover | Landsat 8 OLI images, with a spatial resolution of 30 meters, acquired in 2016 | Classification following Brasileiro et al. (2016) |
| Soil types | Shapefiles containing the soil mapping of the State of Minas Gerais, clipped to the VRW boundary | Clip tool of the software ArcGIS 10.3 |
| Digital Elevation Model | Shuttle Radar Topography Mission (SRTM) images | Extract by Mask tool of the software ArcGIS 10.3 |
| Rainfall | Daily rainfall data between 1986 and 2015, from 21 rainfall stations | Data organization |
| Climate | Daily temperature, relative humidity, solar radiation, wind speed and dew point data, between 1986 and 2013 | Data organization |
| Flow | Daily flow data between 1986 and 2015, from a fluviometric station | Data organization |

Table 4: Data and methods used to prepare and organize the preliminary data for the hydrological modeling of the VRW
Figure 2: Suitability map for the reforestation of the VRWerodible soils (litholic soils) and are located where rainfall indices are more intense. The river banks and hillsides with high slopes (but not as steep as the ones previously mentioned) were considered suitable for reforestation (High Suitability), because they are on Cambios (soils with a shallow B horizon and susceptible to erosion) and are occupied by pasture.
Areas occupied by agriculture were considered less suitable (Low and Very Low Suitability), since they consist of flatter terrain with soils less prone to erosion (Red and Red-Yellow Latosols and the Red-Yellow Argisols). Also, in these areas, in the West, Northwest and North of the watershed, the rainfall indices are the lowest.
From this mapping, two reforestation and one deforestation scenarios were created. The first two were designed as follows: i) one replaces the current land cover by arboreal vegetation in the areas considered of Very High Suitability, i.e., a reforestation of 9.2% of the watershed area (Scenario I); ii) the other one was designed by replacing the current land cover by arboreal vegetation in the areas considered of Very High and High Suitability, i.e., a reforestation of 34.3% of the watershed area (Scenario II). The deforestation scenario was prepared by replacing all arboreal vegetation by underbrush (Scenario III) (Figures 3 and 4).
The changes from the current scenario to Scenario I were more significant in the underbrush, with a loss of 2,280 km\\({}^{2}\\) (29%); in agricultural areas, with a decrease of 203 km\\({}^{2}\\) (2.7%); and in bare soils, with a reduction of 87 km\\({}^{2}\\) (7%). There was an increase of 2,570 km\\({}^{2}\\) (45%) of arboreal vegetation.
From the current scenario to Scenario II, the most significant reductions were in the underbrush, of 6,801 km\\({}^{2}\\) (213%); in agricultural areas, of 2,243 km\\({}^{2}\\) (40%); and in bare soils, with 508 km\\({}^{2}\\) (63%). There was an increase of 9,552 km\\({}^{2}\\) (75%) of the arboreal vegetation.
From the current scenario to Scenario III, there was a replacement of 3,008 km\\({}^{2}\\) of arboreal vegetation by underbrush, representing a deforestation of 100% of the forests of the watershed.
### Hydrological Modeling
The statistical tests for calibration resulted in values of 0.75 for NSE, 0.76 for R\({}^{2}\) and +5.4% for PBIAS. For validation, the values were 0.54 for NSE, 0.65 for R\({}^{2}\) and -14.4% for PBIAS. These results show that the efficiency of the model was satisfactory in both the calibration and validation processes.
The hydrographs related to calibration and validation (Figure 5) showed that during calibration, the model had a good fit between observed and simulated monthly average streamflow, satisfactorily representing the rainy and dry seasons, even though it underestimated the flows for the rainiest months.
With respect to validation, there were higher inconsistencies when comparing observed and simulated data, with significant differences, especially overestimating flows in the months of higher average flows. Overall, the modeling had satisfactory results when simulating the flows of the Velhas River, according to the statistical tests.
Figure 4: a and b) Reforestation Scenarios (Scenarios I and II, respectively); c) Deforestation Scenario (Scenario III)
Figure 5: Hydrographs of calibration (A) and validation (B)
Figure 3: Current Scenario of Land Cover and Land Use
### Simulation of flows for the designed scenarios
The flow simulation for Scenario I showed that a reforestation of 9.2% of the watershed would result in a reduction of 33.6 m\({}^{3}\)/s (17.6%) in the average flow for the entire period assessed. The maximum flows would reduce by 18.6% and 16% in the rainy seasons 1 and 2, respectively, and the minimum flows would reduce by 26.6% and 22.6% in the dry seasons 1 and 2, respectively (Figure 6A).
The flow simulation for Scenario II showed that a reforestation of 34.3% of the watershed would reduce the average flow by 36.7 m\({}^{3}\)/s (19.2%) for the entire period assessed. The maximum flows would reduce by 19.8% and 14.3% in the rainy seasons 1 and 2, respectively, and the minimum flows would reduce by 26.8% and 24.3% in the dry seasons 1 and 2, respectively (Figure 6B).
The flow simulation for Scenario III showed that a deforestation of 100% of the remaining forests in the watershed would increase the average flow by 51.9 m\({}^{3}\)/s (27.1%). The maximum flows would increase by 9.4% and 51.4% in the rainy seasons 1 and 2, respectively, and the minimum flows would increase by 14.4% in the dry season 1 and decrease by 9.3% in the dry season 2 (Figure 6C).
## 4 Discussions
The results of the simulations of monthly average streamflow for Scenarios I and II indicate that a reforestation of 9.2% (2,570 km\({}^{2}\)) or 34.3% (9,552 km\({}^{2}\)) will result in a decrease of 33.6 m\({}^{3}\)/s in the first case and 36.7 m\({}^{3}\)/s in the second, in the case of similar rainfall in the assessed period. From one scenario to another, there is a difference of 3.1 m\({}^{3}\)/s, which represents a 1.6% change. In the dry and rainy seasons, these changes were also not significant, with the difference no higher than 2% between the two scenarios. Although the simulation of the two scenarios suggests changes in the behavior of the monthly average flows with respect to the current scenario, there is no significant variation between Scenarios I and II.
The literature that addresses the effects of reforestation on the water yield and flows shows that an increase in forest areas causes a reduction in such factors (Andreassian, 2004; Brown et al., 2005). Over a hundred results were compiled from the quoted studies showing that, despite the flow reductions that take place with reforestation, the hydrological responses of the watersheds are highly variable and, most of the time, unpredictable. Most of these studies were carried out in watersheds smaller than 1,000 km\({}^{2}\). From the compilation of 162 studies on watersheds with more than 1,000 km\({}^{2}\), it was possible to see that the increase in forest areas also decreased the water yield (Li et al., 2017).
Also, the greater the changes in the forest areas, due to deforestation or reforestation, the higher the variations in water yield in large watersheds. However, the authors of these studies highlighted that despite the existing relationship between changes in forest cover and variations in water yield, there are other variables that interfere in this production, such as climate, soils, size and shape of the watershed, altimetry, and slopes, among other characteristics (Andreassian, 2004; Brown et al., 2005; Li et al., 2017).
With respect to Scenario III (100% of deforestation) and its comparison to the Current Scenario, there would be an increase of 51.87 m\({}^{3}\)/s in the monthly average flow of the assessed period. This difference would result in a flow of 124.81 m\({}^{3}\)/s in one of the rainy seasons. The results related to deforestation were compatible with those found in the literature, where the suppression of forests causes the increase in water yield, mostly due to the increase in runoff which directly contributes to river flows, in small or large watersheds (Andreassian, 2004; Brown et al., 2005; Li et al., 2017). Scenario III shows how important it is to protect and maintain the remaining arboreal vegetation, contributing to actions aimed at regulating the flows of the Velhas River, mitigating problems related to floods.
## 5 Conclusions
Multicriteria analysis is an important methodology that aids in the management of the territory. This study allowed the identification of priority areas for reforestation in the VRW;
The hydrological modeling was a method to complement the multicriteria analysis, verifying the behavior of the flows of the Velhas River from the scenarios created, helping in the identification of those that are more advantageous for the management of water resources;
According to the hydrological simulations of the scenarios, the flows of the Velhas River do not follow a linear trend, where the larger the reforested area, the smaller the surface runoff. The multicriteria analysis and the hydrological modeling indicate that the physical characteristics of the watershed significantly influence the flow behavior, and that reforestation and deforestation should be carried out with caution. Thus, the results show that the combination of the multicriteria analysis and hydrological modeling provides assistance to planning and management with respect to modifications of forest areas, contributing to the policy related to the subject.
Figure 6: Flow behavior for Scenarios I (A), II (B) and III (C) compared to the flows of the Current Scenario
## References
* Andreassian (2004) Andreassian, V., 2004. Water and forests: from historical controversy to scientific debate. **J. Hydrol.** 291, 1-27.
* Brasileiro et al. (2016) Brasileiro, F.G., Oliveira, C.M.M., Rodrigues, R.A., Delgado, R.C., 2016. Orbital image classification for Maximum Likelihood Method in Quixeramobim, Ceara, Brazil. Revista Geografica Academica 10, 81-92 (In Portuguese).
* Brasil (2006) Brasil, 2006. Notebook of the San Francisco Hydrographic Region. Ministerio doi Meio Ambiente, Secretaria de Recursos Hidricos, Brasilia, DF, Brazil (In Portuguese).
* Brown et al. (2005) Brown, A.E., Zhang, L., McMahon, T.A., Western, A.W., Vertessy, R.A., 2005. A review of paired catchment studies for determining changes in water yield resulting from alterations in vegetation. **J. Hydrol.** 310, 28-61.
* Calijuri et al. (2002) Calijuri, M.L., Melo, A.L.O., Lorentz, J.F., 2002. Identification of areas for the implantation of sanitary landffills using strategic decision analysis. Informatica Publica 4, 231-250 (in Portuguese).
* Castro (2009) Castro, C.N., 2009. Sao Francisco River Transposition. Repositorio do Conhecimento do IPEA. (in Portuguese) ([http://repositorio.ipea.gov.br/handle/11058/5477](http://repositorio.ipea.gov.br/handle/11058/5477)).
* C.B.H. Rio das Velthas (2014) C.B.H. Rio das Velthas, 2014. The Velhas Watersed (in Portuguese) ([http://cblwelhas.org.br/a-bacia-hidrografica-do-rio-das-vehlhas/](http://cblwelhas.org.br/a-bacia-hidrografica-do-rio-das-vehlhas/)).
* Digital Atlas of Minas Gerais (DAMG) (2009) Digital Atlas of Minas Gerais (DAMG), 2009. Contribution of the main tributaries of the Velhas basin, third ed. (in Portuguese).
* Durase and Mello (2016) Durase, M.F., Mello, C.R., 2016. Spatial distribution of the potential and current soil erosion for the Sapucai River Basin, MG, Brazil. Eng. Sanit. Ambient. 21, 677-685 (In Portuguese).
* Eastman (2012) Eastman, J.R., 2012. IDRISI Selva Manual, Version 17, Clark University, Worcester, Massachusetts, EUA.
* El-Swaify et al. (1982) El-Swaify, S.A., Dangler, E.W., Armstrong, C.L., 1982. Soil erosion by water in the tropics, first ed. University of Hawaii, Hawaii, USA.
* Feizizadeh et al. (2014) Feizizadeh, B., Roodposhti, M.S., Jankowski, P., Blaschke, T., 2014. A GIS-based extended fuzzy multi-criteria evaluation for landslide susceptibility mapping. Comput. and Geosci. 73, 208-221.
* Francisco et al. (2008) Francisco, C.E.S., Coelho, R.M., Torres, R.B., Adami, S.F., 2008. Watersed selection for environmental rehabilitation using multiciteria analysis. Ciencia Florestal. 18, 1-13 (in Portuguese).
* Labriere et al. (2015) Labriere, N., Locatelli, B., Laumonier, Y., Freycon, V., Bernoux, M., 2015. Soil erosion in the humid tropics: a systematic quantitative review. Agric. Ecosyst. Environ. 203,127-139.
* Lelis and Calijuri (2010) Lelis, T.A., Calijuri, M.L.A., 2010. Hydrosedimentological modeling of watershed in southeast Brazil, using SWAT. Ambia. 5, 158-174.
* Li et al. (2017) Li, Q., Wei, X., Zhang, M., Wang, Y., 2017. Forest cover change and water yield in large forested watersheds: a global synthetic assessment. Ecohydrology. 10:e1838.
* Moriasi et al. (2007) Moriasi, D.N., Arnold, J.G., Van Liew, M.W., Bingen, R.L., Harmel, R.D., Veith, T.L., 2007. Model evaluation guidelines for systematic quantifications of accuracy in watershed simulations. Transactions of the ASABE. 50, 885-900.
* Nearing et al. (2017) Nearing, M.A., Yin, S., Borrell, P., Polyakov, V.O., 2017. Rainfall erosivity: an historical review. Catena. 157: 357-362.
* Nossack et al. (2014) Nossack, F.A., Zimback, C.R.L., Da Silva, R.F.B., Sartori, A.C., 2014. Application of multicriteria analysis to define priority areas for forest recovery. Irriga. 19, 612-625 (in Portuguese).
* Pereira et al. (2007) Pereira, S.B., Pruski, F.F., Da Silva, D.D., Ramos, M.M., 2007. Study of the hydrological behavior of Sao Francisco river and its main tributaries. Rev. Bras. Eng. Agric. Ambient. 11, 615-622 (in Portuguese).
* Pinheiro (2015) Pinheiro, J.A.C., 2015. Analysis of the landscape and priority areas for forest restoration in watershed from the Minas Gerais area. Doctoral dissertation. Federal University of Vicyosa. Vicyosa, MG, Brazil (in Portuguese).
* Santos (2013) Santos, J.B., 2013. Geotechnology and hydrologic modeling in the limits of priority areas in recomposition forest in sub-basin of Ribeira Layeses, Boutcau-SP. Master thesis. Paulista State University, Boutcau, SP, Brazil (in Portuguese).
* Sartori and Zimback (2011) Sartori, A.A.C., Zimback, C.R.L., 2011. Forest recomposition aiming the conservation of water resources in river Pardo watershed, SP. Energia na Agricultura. 26, 1-15 (in Portuguese).
* Serio et al. (2008) Serio, J., Costa, C.A.G., Teixeira, A.S., Ortega, E., 2008. Application of USLE and SIG in the characterization of three small watersheds in Brazil. Rev. Acad. Agraf. Ambient. 6, 213-221 (In Portuguese).
* Silva et al. (2009) Silva, R.M., Paiva, F.M.L., Santos, C.A.G., 2009. Evaluation of soil erodibility and soil loss in Capia Basin based on Geographical Information System and Remote Sensing. Rev. Bras. Geograf. Fis. 2, 26-40 (In Portuguese).
* Srinivasan and Arnold (1994) Srinivasan, R., Arnold, J.G., 1994. Integration of a basin-scale water quality model with GIS. J. Am. Water Resour. Assoc. 30, 453-462.
* Welde and Gebremariam (2017) Welde, K., Gebremariam, B., 2017. Effect of land use land cover dynamics on hydrological response of watershed: case study of Tekeze Dam watershed, northern Ethiopia. J. Soil Water Conserv. 5, 1-16. | Deforestation is a global concern due to its problematic consequences, which intensify natural phenomena such as floods. In the past century, Brazil lost large areas of forests due to agricultural and livestock development, mining activities, the construction of hydroelectric plants and the expansion of the urban-industrial sector. The recovery of forests is an alternative to mitigate impacts caused by floods, although it is not known for sure the extent of such mitigation. The objective of this study was to assess the flow behavior of the Velhas River, Southeast Brazil, under different reforestation and deforestation scenarios, through multicriteria analysis and hydrological modeling. The combination of these methods allowed for interesting results, showing that a reforestation in 34.3% of the watershed area would have effects on river flow behavior similar to a reforestation of 9.2%. A scenario of deforestation in 100% of the forest area showed that there would be an increase in flow peaks in the rainiest months.
Footnote: Corresponding authors: [email protected] and [email protected]
2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGRS 2020), 22-26 March 2020, Santiago, Chile | Condense the content of the following passage. | 261 |
arxiv-format/2305_02034v4.md | # SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment Anything Model
Di Wang\\({}^{1}\\), Jing Zhang\\({}^{2}\\), Bo Du\\({}^{1}\\), Minqiang Xu\\({}^{3}\\), Lin Liu\\({}^{3}\\), Dacheng Tao\\({}^{2}\\), Liangpei Zhang\\({}^{4}\\)
\\({}^{1}\\)School of Computer Science, National Engineering Research Center for Multimedia Software,
Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network
Communication Engineering, Wuhan University, China
\\({}^{2}\\)School of Computer Science, Faculty of Engineering, The University of Sydney, Australia
\\({}^{3}\\)National Engineering Research Center of Speech and Language Information Processing, China
\\({}^{4}\\)State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing,
Wuhan University, China
{d_wang,dubo,zlp62}@whu.edu.cn; [email protected];
{mqxu7,linliu}@iflytek.com; [email protected]
This work was partially done during Di Wang's internship at iFlytek. Corresponding author.
## 1 Introduction
The advancement of earth observation technologies has led to the generation of abundant remote sensing images (RSI). These images retain valuable information about the spatial distribution and condition of extensive ground surfaces and geospatial objects, and can be conveniently accessed in real-time. Consequently, remote sensing data has garnered the interest of various disciplines, including agricultural monitoring, urban planning, and environmental protection. In particular, the identification of surface targets has been a fundamental task in these fields for several years.
To our knowledge, a significant number of RSIs remain unlabeled. Unlike natural images that can be easily comprehended by the human eye, interpreting RSI taken from an aerial perspective typically demands specialized expertise from practitioners. Furthermore, RSI objects are often distributed sparsely, and the images frequently contain small targets, making the labeling process less efficient. Therefore, the annotation of RSI has traditionally required substantial labor and time costs. Among various RS tasks, the classification task requires only a single category for the entire scene, and the detection task involves the additional step of bounding box annotation, while segmentation is particularly challenging since it necessitates pixel-level annotations to accurately delineate object boundaries.
Do we have to spend a significant amount of time annotating RSIs? The answer is probably no. Recently, the segment anything model (SAM) [17], which excels in object segmentation, has gained popularity as a new research focus in the field of computer vision. SAM accurately captures object locations and contours (_i.e._, in the form of masks), enabling it to distinguish various objects in the foreground and background. Furthermore, SAM possesses an impressive zero-shot segmentation ability, exhibiting high performance even when applied to specialized scenarios such as cell images photographed by microscopes [8] and medical images [26], despite being trained on a vast dataset of natural images. In the RS field, [31] first tests the performance of SAM on six public datasets. [16] additionally introduces a domain decoder to improve the performance of SAM on the planetary geological mapping task. Beyond default prompts, [29, 45] consider utilizing texts as the prompt by adopting Grounding DINO [20] to obtain boxes that can be employed by SAM. Then, [29] realizes one-shot segmentation with the help of PerSAM [47], while [45] applies the heatmap obtained from CLIP [30] to further optimize segmentation results. Different from the above methods with manual prompts, [3] designs a prompter to adaptively generate prompts for improving the performance of SAM in instance segmentation. In addition, SAM is also used in producing rotated bounding boxes4, which is significant for RS oriented object detection.
Footnote 1: [https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-vaihingen.aspx](https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-vaihingen.aspx)
Footnote 2: [https://www.gaofen-challenge.com/challenge](https://www.gaofen-challenge.com/challenge)
Footnote 3: [https://segment-anything.com/demo](https://segment-anything.com/demo)
Footnote 4: [https://github.com/Li-Qingyun/sam-mrostate](https://github.com/Li-Qingyun/sam-mrostate)
We have also found it performs well in recognizing diverse targets in RSI, even when the images are obtained using sensors that perceive different bands, such as infrared and microwave, or with varying resolutions, such as airborne or satellite imagery, as illustrated in Figure 1. Although we acknowledge that SAM may not have fully detected all regions, we believe that it has significant potential to improve the efficiency of annotating RSIs since it delivers promising segmentations on
Figure 1: Some examples of SAM segmentation results on RSIs: (a) RGB aerial image obtained from the iSAID dataset [41]. (b) Airborne aerial image composed of near-infrared, red, and green bands. This image is from the ISPRS Vaihingen dataset1. (c) RGB satellite image observed by GF-2 sensors. This image is from the GID dataset [34]. (d) Hisea-1 SAR image from the Marine Farms Segmentation track of the 5th Gaofen Challenge2. These segmentation results are generated by the SAM demo website3.
recognized areas. Therefore, in this study, we aim to utilize SAM to efficiently construct a large-scale RS segmentation dataset by obtaining pixel-level annotations for RSIs. Ground objects in RSI possess definite category properties, which are essential for real RS recognition tasks. However, the segmentation maps produced by SAM lack such information, rendering them unsuitable for labeling RSIs. To address this issue, we notice the annotations in existing RS object detection datasets, which include category and bounding box information. With the aid of SAM, we can leverage such detection annotations to obtain pixel-level semantic labels and efficiently construct large-scale segmentation datasets. The obtained dataset is called **S**egment **A**nything **M**odel annotated **R**emote Sensing **S**egmentation dataset (SAMRS). SAMRS inherits the characteristics of existing RS object detection datasets that have more samples and categories compared with existing high-resolution RS segmentation datasets.
Since we efficiently obtain numerous segmentation label maps, it is natural to consider using the obtained dataset for pre-training. Existing models pretrained by classification tasks may not be very suitable for downstream tasks, e.g., segmentation, because of the task-level discrepancy [36], while the emergence of SAMRS is expected to address this issue. To this end, we train classical deep learning models on the SAMRS, and finetune the trained model on typical RS segmentation datasets to explore the feasibility of segmentation pre-training. The main contributions of this study can be summarized as follows: **(1)** We develop a SAM-based pipeline for efficiently generating RS segmentation annotations. **(2)** We obtain a large-scale RS segmentation dataset named SAMRS using existing RS object detection annotations, whose capacity is far beyond existing high-resolution RS segmentation datasets. **(3)** We conduct preliminary segmentation pre-training experiments on SAMRS. The results highlight the importance of conducting segmentation pre-training using large-scale RS segmentation data, such as SAMRS, for mitigating task discrepancy and dealing with limited training data. We hope this research could significantly enhance the annotation efficiency of RSIs, thereby unlocking the full potential of RS models, especially in the context of segmentation tasks.
## 2 Implementation
### Segment Anything Model
To perform segmentation, additional prompts are needed to guide SAM to locate the object of interest, in addition to the input image. SAM supports various prompts, such as points, boxes, and masks, which can be input into the model either alone or in combination. It is important to note that when using point prompts, it is necessary to indicate whether the points are foregrounds or backgrounds. In this study, we use detection annotations from existing datasets to obtain all kinds of prompts since they contain both location and category information.
### Datasets
In this study, we employ SAM on four public RS object detection datasets, namely HRSC2016 [22], DOTA-V2.0 [10], DIOR [18], and FAIR1M-2.0 [33]. DOTA, DIOR, and FAIR1M are three large-scale datasets [33]. HRSC2016 is primarily designed for ship detection and comprises only one category. In comparison to the other three datasets, it has the smallest data volume. Additionally, in the testing set, 124 images possess bounding box annotations and pixel-level labels simultaneously, making it highly suitable for evaluating the accuracy of SAM annotations. Therefore, we conduct an ablation study on the testing set consisting of the aforementioned 124 images to determine the optimal configuration for SAM. Following this, we generate segmentation labels for the remaining datasets. To obtain a segmentation dataset with more images or
Figure 2: The differences between segmentation labels and mask prompts. (a) Pixel-level annotated map from the original dataset. (b) Pixel-level annotations along with horizontal and rotated box ground truths. (c) Mask prompts derived from horizontal boxes. (d) Mask prompts derived from rotated boxes. The ship instances are marked with different colors by following (a).
categories, we opt for the latest versions of DOTA and FAIR1M. Based on the available annotations, we only transform the training and validation sets of DOTA-V2.0 and FAIR1M-2.0, while for DIOR, all data has been utilized. Here, according to the licenses, DOTA, DIOR, and FAIR1M can be used for academic purposes.
### Prompt Settings
As RSIs are captured from an overhead perspective, the objects in them can have arbitrary orientations, unlike natural image objects that are typically oriented upward due to gravity. Hence, in addition to the usual horizontal bounding boxes (H-Box), we also consider oriented bounding boxes or rotated bounding boxes (R-Box) as box prompts. However, SAM does not directly support R-Box prompts. To address this issue, we use the minimum circumscribed horizontal rectangle of the R-Box, which is denoted as \"RH-Box\". It is also worth noting that the instances in the HRSC2016 testing set contain both H-Box and R-Box ground truth annotations.
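Deriving the RH-Box from an R-Box reduces to taking the axis-aligned extent of its four vertices; a minimal sketch, assuming DOTA-style corner annotations, is shown below (the coordinates are placeholders):

```python
import numpy as np

def rbox_to_rhbox(corners):
    # corners: (4, 2) array of the rotated box vertices (x, y).
    # The minimum circumscribed horizontal rectangle is the axis-aligned extent.
    xs, ys = corners[:, 0], corners[:, 1]
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # XYXY format

corners = np.array([[120.0, 40.0], [180.0, 70.0], [160.0, 110.0], [100.0, 80.0]])
print(rbox_to_rhbox(corners))  # [100. 40. 180. 110.]
```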
In the case of the point prompt, due to the intricate shapes of various RS objects, such as airplanes, we have taken a cautious approach and only consider the center point as the foreground. We did not include background points in our study, as accurately defining them in an automated way can be challenging without additional contextual information. Regarding the mask prompt, we define the region enclosed by corresponding boxes as the mask prompt. Figure 2 illustrates the differences between the adopted mask prompts and ground truth segmentation labels. In SAM, the mask is a single-channel score matrix where positive values denote the active area where the target is located, whereas negative values represent irrelevant areas. In our experiments, we assign the values in these two types of areas as 1,000 and -1,000, respectively.
In summary, we have obtained six basic prompts, namely center point (CP), H-Box, RH-Box, and their corresponding masks, _i.e._, H-Box-M, R-Box-M, and RH-Box-M, as illustrated in Figure 3.
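For reference, querying SAM with an H-Box prompt or a center-point prompt through the publicly released segment_anything interface can be sketched as follows; the checkpoint path, image, and box coordinates are placeholders, not values from our experiments:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load SAM and set the image (H x W x 3, uint8, RGB); the image here is a dummy array.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder image
predictor.set_image(image)

# H-Box prompt in XYXY pixel coordinates; a single mask is requested.
h_box = np.array([150, 80, 310, 240])
masks, scores, _ = predictor.predict(box=h_box, multimask_output=False)

# CP prompt: a single foreground point (label 1) at the box center.
cp = np.array([[(150 + 310) / 2, (80 + 240) / 2]])
masks_cp, _, _ = predictor.predict(point_coords=cp, point_labels=np.array([1]),
                                   multimask_output=False)
```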
### Ablation Study
In addition to the above basic prompts, we also investigate various combinations of prompts in this study. To conduct a comprehensive analysis, we compute two types of mean intersection over union (mIOU) metrics: mIOU\\({}_{I}\\) and mIOU\\({}_{P}\\), which measure the similarity between the predicted segmentation mask and the ground truth label. The former is the average value of the IoU calculated on a per-instance basis, while the latter measures the pixel-level accuracy. Given the \\(i\\)th instance with intersection set \\(I_{i}\\) and union set \\(U_{i}\\), and the number of instances \\(N\\), we have:
\\[\\text{mIOU}_{I}=\\frac{1}{N}\\sum_{i=1}^{N}\\frac{I_{i}}{U_{i}}\\quad\\text{mIOU}_{ P}=\\frac{\\sum_{i=1}^{N}I_{i}}{\\sum_{i=1}^{N}U_{i}}. \\tag{1}\\]
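Both metrics can be computed from per-instance intersection and union pixel counts, as in the following sketch (the counts are illustrative):

```python
import numpy as np

def miou_instance_and_pixel(inters, unions):
    # inters[i], unions[i]: intersection/union pixel counts of the i-th instance (Eq. 1).
    inters, unions = np.asarray(inters, float), np.asarray(unions, float)
    miou_i = np.mean(inters / unions)        # instance-averaged IoU
    miou_p = inters.sum() / unions.sum()     # pixel-level IoU
    return miou_i, miou_p

print(miou_instance_and_pixel([50, 900], [100, 1000]))  # (0.7, ~0.864)
```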
Table 1 presents the evaluation results of utilizing different prompts. The point prompt delivers the worst performance and negatively affects the accuracy of any prompt combinations. This could be attributed to the insufficient amount of foreground points, which cannot guide the model effectively. The mask prompt performs better than the point prompt, but it still cannot generate high-quality segmentation annotations. The highest accuracy achieved by a mask prompt is approximately 60%, which is still much lower than the optimal prompts. Furthermore, the mask prompt has a negative impact on the performance of box prompts. When solely adopting the H-Box prompt, we obtain the highest accuracy compared to the point and mask prompts. For the case of utilizing R-Box annotations, the RH-Box prompt also achieves satisfactory performance. From this experiment, we conclude that: _if an RS object detection dataset only has R-Box annotations, then the RH-Box prompt
Figure 3: The adopted basic prompts. (a) CP. (b) H-Box. (c) RH-Box. (d) H-Box-M. (e) R-Box-M. (f) RH-Box-M. The dashed line is used for the convenience of visualization.
should be used; otherwise, the H-Box prompt should be adopted._ This consideration is applied in our later dataset transformations.
### Dataset Transformation
For the FAIR1M-2.0 dataset, since it only contains R-Box annotations, we use the corresponding RH-Box as the prompt. For DOTA-V2.0 and DIOR, we directly adopt the H-Box prompt. Prior to transformation, we follow the common practice to crop images in the DOTA and FAIR1M datasets to 1,024 \(\times\) 1,024 and 600 \(\times\) 600, respectively, while images in DIOR are maintained at the size of 800 \(\times\) 800. The resulting datasets are named SOTA (_i.e._, DOTA \(\rightarrow\) SOTA), SIOR (_i.e._, DIOR \(\rightarrow\) SIOR), and FAST (_i.e._, Fine-grAined object recognItion in high-Resolution remote sensing imagery \(\rightarrow\) Fine-grAined Segmentation for high-resolution remote sensing imagery), respectively. These datasets constitute a comprehensive and large-scale remote sensing segmentation database, _i.e._, **SAMRS**.
## 3 SAMRS
### Basic Information
The obtained segmentation labels are stored in *.png files. Pixel values are aligned with the object classes of source object detection datasets. The areas that have not been covered by the generated masks will be in a pixel value of 255. We present the comparison of our SAMRS dataset with existing high-resolution RS segmentation datasets in Table 2 from different aspects. With the available high-resolution RSI object detection datasets, we can efficiently annotate 105,090 images containing 1,668,241 instances based on SAM and the identified prompt settings (Sec. 2.4), which is more than ten times the capacity of existing datasets. Additionally, SAMRS inherits the categories
\\begin{table}
\\begin{tabular}{l c c c c|c c} \\hline \\hline CP & H-Box & H-Box-M & R-Box-M & RH-Box & RH-Box-M & mIOU\\({}_{I}\\) & mIOU\\({}_{P}\\) \\\\ \\hline \\multicolumn{2}{l}{_Point_} & & & & & 16.14 & 2.72 \\\\ \\hline \\multicolumn{2}{l}{_P_} & & & & & **89.97** & **79.40** \\\\ \\hline \\multicolumn{2}{l}{_H-Box_} & & & & & 40.54 & 36.71 \\\\ \\multicolumn{2}{l}{} & & & & & 86.67 & 77.35 \\\\ \\multicolumn{2}{l}{} & & & & & 74.21 & 62.25 \\\\ \\multicolumn{2}{l}{} & & & & & 24.54 & 5.41 \\\\ \\multicolumn{2}{l}{} & & & & & 59.71 & 49.30 \\\\ \\multicolumn{2}{l}{} & & & & & & **65.54** & **59.78** \\\\ \\multicolumn{2}{l}{} & & & & & 26.49 & 4.97 \\\\ \\hline \\multicolumn{2}{l}{_RH-Box_} & & & & & **88.85** & **76.42** \\\\ \\multicolumn{2}{l}{} & & & & & & 34.63 & 31.81 \\\\ \\multicolumn{2}{l}{} & & & & & 83.55 & 72.67 \\\\ \\multicolumn{2}{l}{} & & & & & & 66.23 & 52.75 \\\\ \\multicolumn{2}{l}{} & & & & & & 23.71 & 5.10 \\\\ \\multicolumn{2}{l}{} & & & & & & 49.24 & 39.03 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Results of using different prompts on the HRSC2016 testing set consisting of 124 images.
\\begin{table}
\\begin{tabular}{l c c c c c c} \\hline \\hline Dataset & \\#Images & \\#Category & \\#Channels & Resolution (m) & Image size & Instance & Fine-grained \\\\ \\hline ISPRS Vahlingen 1 & 33 & 6 & IR,R,G & 0.09 & 2,494 \\(\\times\\) 2,064 & \\\\ ISPRS Potsdam 2 & 38 & 6 & IR,RGB & 0.05 & 6,000 \\(\\times\\) 6,000 & \\\\ Zurich Summer [35] & 20 & 8 & NIR,RGB & 0.62 & 1,000 \\(\\times\\) 1,150 & \\\\ Zerberges [27] & 7 & 8 & RGB & 0.05 & 10,000 \\(\\times\\) 1,000 & \\\\ DeepGlobe Land Cover [6] & 1,146 & 7 & RGB & 0.5 & 2,448 \\(\\times\\) 2,448 & \\\\ UAMQ [24] & 420 & 8 & RGB & - & 4,096 \\(\\times\\) 2,160 \\(\\times\\) 9,840 \\(\\times\\) 2,160 & \\\\ GID [34] & 150 & 15 & NIR,RGB & 1 or 4 & 4,080 \\(\\times\\) 5,000 \\(\\times\\) 7,200 & \\\\ Landcovera [2] & 41 & 3 & RGB & 0.25 or 0.5 & 9,000 \\(\\times\\) 9,500 \\(\\times\\) 4,200 \\(\\times\\) 4,700 & \\\\ IsIdD [41] & 2,806 & 15 & RGB & - & 800 \\(\\times\\) 800 \\(\\times\\) 4,000 \\(\\times\\) 13,000 & β \\\\ LoveDA [38] & 5,987 & 7 & RGB & 0.3 & 1,024 \\(\\times\\) 1,024 & \\\\ \\hline \\multicolumn{2}{l}{**SAMRS**} & & & & & \\\\ \\hline
**SOTA** & 17,480 & 18 & RGB & - & 1,024 \\(\\times\\) 1,024 & β \\\\
**SIOR** & 23,463 1 & 20 & RGB & - & 800 \\(\\times\\) 800 & β \\\\
**FAST** & 64,147 & 37 & RGB & - & 600 \\(\\times\\) 600 & β \\\\ \\hline \\hline \\end{tabular}
* To avoid data snopeing, only the 11725 images corresponding to the original DIOR trainval dataset are used in subsequent pre-trainings.
\\end{table}
Table 2: Comparisons of different high-resolution RS segmentation datasets.
of the original detection datasets, which makes them more diverse than other high-resolution RS segmentation collections. It is worth noting that RS object datasets usually have more diverse categories than RS segmentation datasets due to the difficulty of tagging pixels in RSIs, and thus our SAMRS reduces this gap.
Specifically, the resulting FAST dataset is a large-scale fine-grained RS segmentation dataset that targets diverse vehicles and grounds, while SOTA and SIOR are segmentation datasets containing common object categories. For this reason, we did not unify their categories. In addition to the massive pixel-level semantic mask annotations, SAMRS includes instance mask and bounding box annotations. This means that _it can be used to perform semantic segmentation, instance segmentation, and object detection, either individually or in combination._ This feature sets SAMRS apart from the iSAID dataset, which was independently annotated from scratch on DOTA-V1.0 [42] images.
### Statistics and Analysis
To gain a deeper understanding of the characteristics of the SAMRS dataset, we conduct a thorough analysis of their capacity per category, including pixel and instance numbers. The results are presented
Figure 4: Some visual examples from the three subsets of our SAMRS dataset. For the definition of classes, please refer to the supplementary material.
Figure 5: Statistics of the number of pixels and instances per category in SAMRS. The histograms for the subsets SOTA, SIOR, and FAST are shown in the first, second, and third columns, respectively. The first row presents histograms on a per-pixel basis, while the second row presents histograms on a per-instance basis. A list of category abbreviations is provided in the supplementary material.
in Figure 5. In this analysis, we only count instances that have valid masks. The figure indicates that SIOR has more balanced categories compared to SOTA and FAST. In the instance-level statistics, we observe a large number of vehicle annotations, particularly on small ships and cars, as they are common in the real world and frequently appear in RSIs. This could also be the goal of initially developing these detection datasets. For instance, DOTA-V2.0 focuses on small targets, while FAIR1M mainly aims to accurately distinguish between different types of vehicles. Furthermore, it is observed that some categories have a high number of pixels but a low number of instances, which is likely due to their large size. For instance, the _expressway-service-area_ in SIOR and the _football-field_ in FAST demonstrate this pattern.
In addition, we investigate the distribution of mask sizes in SAMRS, as shown in Figure 6. The results indicate that, in general, there are more instances with smaller sizes in all subsets. However, some differences exist between the subsets. Specifically, FAST has more small objects than the other two sets. Nevertheless, SOTA appears to have a higher number of extremely small targets (_i.e._, <100 pixels), since its source dataset DOTA-V2.0 is designed for small object detection. On the other hand, SIOR has a more smooth distribution of mask sizes compared to SOTA and FAST.
### Visualization
In Figure 4, we visualize some segmentation annotations from the three subsets in our SAMRS dataset. As can be seen, SOTA exhibits a greater number of instances for tiny cars, whereas FAST provides a more fine-grained annotation of existing categories in SOTA such as car, ship, and plane. SIOR on the other hand, offers annotations for more diverse ground objects, such as _dam_. Hence, our SAMRS dataset encompasses a wide range of categories with varying sizes and distributions, thereby presenting a new challenge for RS semantic segmentation.
## 4 Experiment
### Pre-training
#### 4.1.1 Data and Model Settings
To investigate the influence of segmentation pre-training (SEP) using SAMRS, we adopt multiple segmentation frameworks, including typical encoder-decoder networks and the recently emerged end-to-end structure. Among encoder-decoder networks, we utilize the classical UNet [32] and the commonly-used UperNet [43]. Different from the original U-Net, which has five blocks in the decoder, to be compatible with typical hierarchical pyramid backbones that output four levels of features, we replace the last decoder block with a single 2\(\times\) bilinear upsampling layer, followed by a segmentation head that contains a 1\(\times\)1 convolution, a 2\(\times\) bilinear upsampling, and a ReLU activation function. For UperNet, the segmentation head only employs a 1\(\times\)1 convolution. To comprehensively explore SEP, in addition to traditional convolutional networks such as ResNet [14], diverse backbones are used, including hierarchical vision transformers (Swin [21], ViTAEv2 [46] and InternImage [40]) and non-hierarchical networks (ViT [12], ViT-Adapter [4] and ViT-RVSA [37]). For the end-to-end structure, we choose the recent Mask2Former [5]. SAMRS is split into two parts, one for pre-training and another for validation; see the supplementary material for more details. For data preprocessing, we employ common data augmentation techniques,
Figure 6: Statistics of the mask sizes in the subsets of SAMRS. (a) SOTA. (b) SIOR. (c) FAST.
including random scaling, random horizontal and vertical flipping, random rotation by 90 degrees, and altering pixel values through random color jitter and gamma transformation. Moreover, to ensure a fair comparison with prior studies [7; 36], we randomly cropped the input images to a size of 224 \\(\\times\\) 224.
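To make the decoder modification concrete, the following is a minimal PyTorch-style sketch of the replaced last block and the segmentation head described above; the module names, channel width, and class count are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class SegHead(nn.Module):
    """Segmentation head described above: a 1x1 convolution, a 2x bilinear
    upsampling, and a ReLU activation."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.upsample(self.classifier(x)))

# The last UNet decoder block is replaced by a single 2x bilinear upsampling,
# so a decoder feature map at 1/4 resolution reaches full resolution after the head.
last_block = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
head = SegHead(in_channels=64, num_classes=18)   # channel width / class count are placeholders

feat = torch.randn(2, 64, 56, 56)                # decoder output for a 224 x 224 crop
logits = head(last_block(feat))                  # -> shape (2, 18, 224, 224)
```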
#### 4.1.2 Training Settings
Intuitively, a well-initialized backbone network generates discriminative features at the beginning of training, thereby facilitating the optimization of the decoder component. In addition, SEP is expected to mitigate the gaps between pre-training and downstream tasks. To this end, before the segmentation pre-training phase, the selected model's backbone network is initialized with pretrained weights. In our experiments, to fully evaluate SEP, besides basic supervised pre-training on ImageNet (IMP) [7], we also utilize RSP [36] on the MillionAID dataset [23]. Unsupervised pre-training weights are also involved, including BEiT [1] and MAE [13]. Here, the MAE pre-training is conducted on MillionAID [37], while BEiT is pretrained on ImageNet.
To accommodate the multiple segmentation sets within SAMRS, each having a different number of categories, we employ a multi-head pre-training strategy. This approach uses a separate segmentation head for each dataset; the only distinction lies in the output channel count of the 1 \(\times\) 1 convolution, which corresponds to the number of categories. During batch-based training, diverse mini-batches are sampled from these sets to form a collective large batch, which is then fed into the network. Given the volume disparities among the various sets in SAMRS, proportional sampling is employed to obtain the mini-batches. Assuming a large batch of size \(B\) consists of \(M\) mini-batches with sizes \(B_{1},B_{2},\cdots,B_{M}\), it follows that \(B=B_{1}+B_{2}+\cdots+B_{M}\). Each mini-batch corresponds to its respective segmentation head, resulting in a training loss \(\mathcal{L}_{i}\) for the \(i\)th mini-batch. The total loss is computed as \(\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{2}+\cdots+\mathcal{L}_{M}\). Assuming the sizes of the \(M\) sets in SAMRS are \(L_{1},L_{2},\cdots,L_{M}\), we can express this relationship as \(B_{i}=\frac{L_{i}}{\sum_{j=1}^{M}L_{j}}B,\ i=1,2,\ldots,M\). Here, \(M\) can easily be extended to include more RS detection datasets; in this study, \(M=3\). Figure 7 illustrates the pre-training pipeline. Each model is first pre-trained for 80k iterations with \(B=96\) and then used for fine-tuning. All experiments are implemented in PyTorch on NVIDIA GeForce RTX 3090 GPUs.
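The multi-head strategy with proportional sampling can be sketched as follows. This is a simplified PyTorch-style illustration under our own assumptions (a stand-in shared network, a cross-entropy loss, and illustrative subset sizes; the category counts follow the abbreviation lists in the appendix), not the actual training code.

```python
import torch
import torch.nn as nn

# Illustrative subset sizes (images) and per-subset category counts.
set_sizes   = {"SOTA": 17_480, "SIOR": 23_463, "FAST": 64_147}
num_classes = {"SOTA": 18, "SIOR": 20, "FAST": 37}
B = 96
total = sum(set_sizes.values())
# B_i = L_i / (sum_j L_j) * B  (rounding may need a small adjustment so the
# mini-batch sizes sum exactly to B)
mini_batch_sizes = {k: max(1, round(B * v / total)) for k, v in set_sizes.items()}

# One shared network and one 1x1-conv segmentation head per subset; the heads
# differ only in their number of output channels.
shared_net = nn.Conv2d(3, 64, kernel_size=3, padding=1)   # stand-in for the real encoder-decoder
heads = nn.ModuleDict({k: nn.Conv2d(64, c, kernel_size=1) for k, c in num_classes.items()})
criterion = nn.CrossEntropyLoss(ignore_index=255)

def training_step(mini_batches):
    """mini_batches maps a subset name to an (images, masks) pair of size B_i."""
    total_loss = 0.0
    for name, (images, masks) in mini_batches.items():
        features = shared_net(images)          # shared forward pass
        logits = heads[name](features)         # subset-specific segmentation head
        total_loss = total_loss + criterion(logits, masks)
    return total_loss                          # L = L_1 + L_2 + ... + L_M
```

Summing the per-head losses lets one optimizer step update the shared network with gradients from all three subsets at once, while each head only ever sees its own label space.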
### Fine-tuning
#### 4.2.1 Comparison to Various Pre-training Strategies
In the RS community, ISPRS Potsdam and iSAID are commonly-used, finely annotated datasets for evaluating segmentation methods [25; 28; 36; 37; 44], and we use them to assess the pre-trained models, as shown in Tables 3-4. It can be seen that, without good initialization, the performance of SEP alone is comparable to IMP but inferior to IMP+SEP. On the Potsdam scene, for traditional encoder-decoder networks, SEP improves both convolutional and vision transformer networks, especially UperNet and backbones with hierarchical features (including ResNet). As a result, ViTAEv2-S is greatly boosted and outperforms existing advanced methods in overall accuracy. We also observe that SEP is useful when combined with different pre-training strategies. Even when using initialization weights generated by pre-training on SAMRS itself, SEP can still improve the accuracy,
Figure 7: The pipeline of segmentation pre-training on SAMRS. Different colors represent the data stream of various sets. The yellow parts will be used in fine-tuning.
excluding the effect of the data volume used for training. We notice that SEP plays a negative role in the end-to-end structure; this may be because the objects in SAMRS are too small, which is unfavorable for the region-based Mask2Former. In addition, Mask2Former, which obtains high accuracies on natural images, does not perform as well as UNet and UperNet on RSIs. These results indicate that more refined parameter tuning of Mask2Former is needed in later research. On the iSAID dataset, the performance of SEP on simple convolutional networks that rely on local perception is unstable, because iSAID and DOTA share the same images but with different annotations, which may confuse the model. Benefiting from SEP, vision transformer networks are further enhanced and surpass previous methods. From these results we can see that SEP is able to mitigate the influence of task-level disparities, specifically the gaps between upstream pre-training tasks and downstream segmentation tasks.
### Fine-tuning with Small-size Training Samples
The difficulty of annotating pixel-level masks limits the scale of existing remote sensing segmentation datasets, ultimately constraining the performance of trained models due to insufficient training samples. To investigate the effectiveness of SEP under limited training samples, we conducted experiments in which models are fine-tuned using small fractions (1%, 3%, and 5%) of the ISPRS Potsdam and iSAID training sets, as outlined in Table 5 and Table 6. The integration of SEP with SAMRS, which provides a valuable segmentation prior, yields superior results compared to the IMP and RSP counterparts. This advantage is particularly evident on the ISPRS Potsdam scene when available samples are scarce. Conversely, the results for the iSAID dataset exhibit an opposite trend due to the inherent challenges of this dataset, where both IMP and RSP yield extremely low overall accuracies. Nevertheless, the adoption of SEP significantly improves model performance. These findings highlight the importance of conducting segmentation
\\begin{table}
\\begin{tabular}{l|l|c|c c c c|c c} \\hline \\hline \\multirow{2}{*}{Method} & \\multirow{2}{*}{Pretrain} & \\multirow{2}{*}{Backbone} & \\multicolumn{4}{c|}{FT score per category} & \\multirow{2}{*}{OA} & \\multirow{2}{*}{mF1} \\\\ \\cline{2-2} \\cline{6-9} & & Imper. surf. & Building & & Low veg. & & & & Car \\\\ \\hline \\multicolumn{10}{l}{_Comparison method_} \\\\ \\hline ST-UNet [15] & \\multirow{2}{*}{β} & ResNet-50 & 79.19 & 86.63 & 67.89 & 66.37 & 79.77 & β & 86.13 \\\\ ResNets-4.67\\(\\times\\)2 [9] & & β & 93.50 & 97.20 & 88.20 & 89.20 & 96.40 & 91.50 & 92.90 \\\\ LANet [11] & IMP & ResNet-50 & 93.05 & 97.19 & 87.30 & 88.04 & 94.19 & 90.84 & 91.95 \\\\ DCFAM [39] & IMP & Swin-S & 94.19 & 97.57 & 88.57 & 89.62 & 96.31 & 92.00 & 93.25* \\\\ \\hline \\multicolumn{10}{l}{_Convolution network_} \\\\ \\hline UNet & SEP & ResNet-50 & 90.62 & 94.75 & 85.12 & 83.91 & 96.51 & 89.70 & 90.18 \\\\ UNet & IMP & ResNet-50 & 90.78 & 94.78 & 85.23 & 84.76 & 96.81 & 89.94 & 90.47 \\\\ UNet & IMP+SEP & ResNet-50 & 91.36 & 94.92 & 85.39 & 85.24 & 97.17 & **90.29** & **90.82** \\\\ \\hline UperNet & SEP & ResNet-50 & 91.02 & 94.82 & 84.28 & 83.97 & 96.95 & 89.70 & 90.21 \\\\ UperNet & IMP & ResNet-50 & 90.70 & 94.44 & 84.68 & 83.94 & 96.58 & 89.95 & 90.07 \\\\ UperNet & IMP+SEP & ResNet-50 & 91.38 & 95.26 & 85.14 & 84.88 & 97.16 & **90.27** & **90.76** \\\\ \\hline \\multicolumn{10}{l}{_Hierarchical vision transformer_} \\\\ \\hline UperNet & IMP & Swin-T & 93.09 & 96.74 & 86.99 & 86.45 & 91.12 & 91.44 & 90.88 \\\\ UperNet & IMP+SEP & Swin-T & 93.06 & 96.65 & 87.07 & 86.74 & 97.64 & **91.88** & **92.23** \\\\ \\hline UperNet & IMP & ViLAEv2-S & 92.54 & 96.54 & 86.11 & 86.13 & 91.31 & 91.00 & 90.52 \\\\ UperNet & IMP+SEP & ViLAEv2-S & 93.45 & 96.99 & 87.65 & 87.00 & 97.67 & **92.25*** & **92.55** \\\\ \\hline UperNet & IMP & Intermediate-T & 93.27 & 96.80 & 87.41 & 86.62 & 91.79 & 91.65 & 91.18 \\\\ UperNet & IMP+SEP & InternalImage-T & 93.30 & 96.91 & 87.24 & 86.80 & 97.81 & **92.08** & **92.41** \\\\ \\hline \\multicolumn{10}{l}{_Plain vision transformer_} \\\\ \\hline UperNet & IMP & ViT-B & 93.09 & 96.83 & 86.93 & 86.61 & 90.93 & 91.47 & 90.88 \\\\ UperNet & IMP+SEP & ViT-B & 92.96 & 96.52 & 86.62 & 86.01 & 97.57 & **91.60** & **91.94** \\\\ \\hline UperNet & IMP & ViT-Adapter-B & 93.16 & 96.77 & 87.09 & 86.71 & 91.20 & 91.53 & 90.98 \\\\ UperNet & IMP+SEP & ViT-Adapter-B & 93.20 & 96.75 & 87.06 & 86.52 & 97.68 & **91.91** & **92.24** \\\\ \\hline \\multicolumn{10}{l}{_Pre-training strategy_} \\\\ \\hline UNet & RSP & ResNet-50 & 91.49 & 95.42 & 85.70 & 85.18 & 97.05 & 90.49 & 90.57 \\\\ UNet & RSP+SEP & ResNet-50 & 92.00 & 95.44 & 85.76 & 85.33 & 97.38 & **90.72** & **91.18** \\\\ \\hline UperNet & RSP & ResNet-50 & 91.08 & 94.64 & 85.57 & 85.38 & 86.97 & 90.18 & 90.73 \\\\ UperNet & RSP+SEP & ResNet-50 & 91.73 & 95.52 & 85.44 & 85.35 & 97.24 & **90.59** & **91.06** \\\\ \\hline UperNet & BEiT & ViT-B & 88.70 & 92.29 & 81.48 & 78.54 & 96.36 & 86.80 & 87.49 \\\\ UperNet & BEiT+SEP & ViT-B & 89.95 & 93.33 & 82.96 & 80.91 & 96.67 & **88.20** & **88.76** \\\\ \\hline UperNet & MAE \\(\\uparrow\\) & ViT-B + RVSA & 92.67 & 96.38 & 86.43 & 85.89 & 90.46 & 90.97 & 90.37 \\\\ UperNet & MAE+SEP & ViT-B + RVSA & 92.69 & 96.33 & 86.28 & 85.60 & 95.76 & **91.33** & **91.69** \\\\ \\hline UperNet & SAMRS-MAE \\(\\ddagger\\) & ViT-B + RVSA & 92.46 & 96.10 & 86.18 & 85.59 & 90.35 & 90.71 & 90.13 \\\\ UperNet & SAMRS-MAE-SEP & ViT-B + RVSA & 92.34 & 95.88 & 86.06 & 85.32 & 97.54 & **91.01** & **91.43** \\\\ 
\\hline \\multicolumn{10}{l}{_End-to-end transformer_} \\\\ \\hline Mask2Former & IMP & ResNet-50 & 88.40 & 92.93 & 83.05 & 83.98 & 86.00 & **87.54** & **86.87** \\\\ Mask2Former & IMP+SEP & ResNet-50 & 72.41 & 78.98 & 63.14 & 61.62 & 73.16 & 70.14 & 69.86 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Segmentation results of different methods on the ISPRS Potsdam dataset. \(\ddagger\): MAE pre-training on the SAMRS training set. "*" denotes the best score among all methods.
pre-training using large-scale RS segmentation data, such as SAMRS, prior to training with limited data. Notably, the developed pipeline enables the rapid construction of such a dataset at a low labeling cost, making it a promising approach.
### Limitations and Discussion
Previous classical datasets, such as HRSC2016 [22] and COCO [19], simultaneously contain bounding box and pixel-level mask annotations, proving the feasibility of the coexistence of segmentation and detection labels. Therefore, it is reasonable to construct the SAMRS dataset by transforming existing RS object detection datasets. However, despite successfully establishing the SAMRS dataset, which exceeds existing high-resolution RS segmentation datasets in size by more than tenfold, its volume remains smaller than large-scale classification datasets such as ImageNet [7] and MillionAID [23] that are commonly employed for pre-training. Our current investigation focuses exclusively on pre-training small-scale basic models (about 100M parameters), and we intend to incorporate larger models in the future. Additionally, it is worth exploring the impact of pre-training on SAMRS for tasks such as instance segmentation and object detection.
## 5 Conclusion
This study presents an effective way to create a large-scale remote sensing (RS) segmentation dataset by harnessing the capabilities of the Segment Anything Model (SAM) and existing object detection datasets. Given the unique challenges associated with RS data labeling, we investigate the performance of various prompts to identify the optimal settings for SAM. By leveraging these optimal settings, we generate extensive mask annotations for RS images, thereby creating a large-scale segmentation dataset named SAMRS. Remarkably, SAMRS surpasses all previously available high-resolution RS segmentation datasets in terms of volume. Furthermore, our statistical analysis reveals that SAMRS encompasses a diverse array of categories exhibiting varying sizes and distributions. SAMRS can be utilized for semantic segmentation, instance segmentation, and object detection, either independently or in combination. Specifically, we present a preliminary investigation and demonstrate the value of segmentation pre-training on SAMRS for RS segmentation tasks, especially in scenarios with limited training samples.
\\begin{table}
\\begin{tabular}{l|l|c|c c} \\hline \\hline \\multirow{2}{*}{Method} & \\multirow{2}{*}{Pretrain} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c|}{mIoU} \\\\ \\cline{3-5} & & 5\\% & 3\\% & 1\\% \\\\ \\hline UNet & IMP & ResNet-50 & 5.33 & 5.19 & 1.33 \\\\ UNet & IMP+SEP & ResNet-50 & 18.74 & 12.98 & 7.04 \\\\ \\(\\Delta\\) & +13.41 & +7.79 & +5.71 \\\\ \\hline UNet & RSP & ResNet-50 & 4.68 & 3.67 & 2.50 \\\\ UNet & RSP+SEP & ResNet-50 & 22.47 & 13.88 & 8.84 \\\\ \\(\\Delta\\) & +17.79 & +10.21 & +6.34 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Segmentation results of different pre-training methods on the iSAID dataset.
[A truncated results table followed here in the source (per-category mIoU of different pre-training methods with 5%, 3%, and 1% training-data columns); its contents could not be recovered.]
## Acknowledgments
We acknowledge the authors of SAM for releasing codes and models, and the authors of DOTA, DIOR, and FAIR1M for providing their datasets. This work was supported in part by the National Natural Science Foundation of China under Grant 62225113 and in part by the National Key Research and Development Program of China under Grant 2022YFB3903405.
## References
* [1] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In _ICLR_, 2022.
* [2] Adrian Boguszewski, Dominik Batorski, Natalia Ziemba-Jankowska, Tomasz Dziedzic, and Anna Zambrzycka. Landcover. ai: Dataset for automatic mapping of buildings, woodlands, water and roads from aerial imagery. In _CVPR_, pages 1102-1110, 2021.
* [3] Keyan Chen, Chenyang Liu, Hao Chen, Haotian Zhang, Wenyuan Li, Zhengxia Zou, and Zhenwei Shi. Rsprompter: Learning to prompt for remote sensing instance segmentation based on visual foundation model. _arXiv preprint arXiv:2306.16269_, 2023.
* [4] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In _ICLR_, 2023.
* [5] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In _CVPR_, pages 1290-1299, 2022.
* [6] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. Deepglobe 2018: A challenge to parse the earth through satellite images. In _CVPRW_, pages 172-181, 2018.
* [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_, pages 248-255, 2009.
* [8] Ruining Deng, Can Cui, Quan Liu, Tianyuan Yao, Lucas W Remedios, Shunxing Bao, Bennett A Landman, Lee E Wheless, Lori A Coburn, Keith T Wilson, et al. Segment anything model (sam) for digital pathology: Assess zero-shot segmentation on whole slide imaging. _arXiv preprint arXiv:2304.04155_, 2023.
* [9] Foivos I Diakogiannis, Francois Waldner, Peter Caccetta, and Chen Wu. Resunet-a: A deep learning framework for semantic segmentation of remotely sensed data. _ISPRS Journal of Photogrammetry and Remote Sensing_, 162:94-114, 2020.
* [10] Jian Ding, Nan Xue, Gui-Song Xia, Xiang Bai, Wen Yang, Michael Yang, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Object detection in aerial images: A large-scale benchmark and challenges. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, pages 1-1, 2021.
* [11] Lei Ding, Hao Tang, and Lorenzo Bruzzone. Lanet: Local attention embedding to improve the semantic segmentation of remote sensing images. _IEEE Transactions on Geoscience and Remote Sensing_, 59(1):426-435, 2020.
* [12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _ICLR_, 2021.
* [13] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In _CVPR_, pages 16000-16009, June 2022.
* [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _CVPR_, pages 770-778, 2016.
* [15] Xin He, Yong Zhou, Jiaqi Zhao, Di Zhang, Rui Yao, and Yong Xue. Swin transformer embedding unet for remote sensing image semantic segmentation. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-15, 2022.
* [16] Sahib Julka and Michael Granitzer. Knowledge distillation with segment anything (sam) model for planetary geological mapping. _arXiv preprint arXiv:2305.07586_, 2023.
* [17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In _ICCV_, pages 4015-4026, October 2023.
* [18] Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. _ISPRS journal of photogrammetry and remote sensing_, 159:296-307, 2020.
* [19] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _ECCV_, pages 740-755, 2014.
* [20] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. _arXiv preprint arXiv:2303.05499_, 2023.
* [21] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In _ICCV_, pages 10012-10022, 2021.
* [22] Zikun Liu, Liu Yuan, Lubin Weng, and Yiping Yang. A high resolution optical satellite image dataset for ship recognition and some new baselines. In _ICPRAM_, pages 324-331, 2017.
* [23] Yang Long, Gui-Song Xia, Shengyang Li, Wen Yang, Michael Ying Yang, Xiao Xiang Zhu, Liangpei Zhang, and Deren Li. On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 14:4205-4230, 2021.
* [24] Ye Lyu, George Vosselman, Gui-Song Xia, Alper Yilmaz, and Michael Ying Yang. Uavid: A semantic segmentation dataset for uav imagery. _ISPRS journal of photogrammetry and remote sensing_, 165:108-119, 2020.
* [25] Ailong Ma, Junjue Wang, Yanfei Zhong, and Zhuo Zheng. Factseg: Foreground activation-driven small object semantic segmentation in large-scale remote sensing imagery. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-16, 2022.
* [26] Jun Ma and Bo Wang. Segment anything in medical images. _arXiv preprint arXiv:2304.12306_, 2023.
* [27] Diego Marcos, Michele Volpi, Benjamin Kellenberger, and Devis Tuia. Land cover mapping at very high resolution with rotation equivariant cnns: Towards small yet accurate models. _ISPRS journal of photogrammetry and remote sensing_, 145:96-107, 2018.
* [28] Ruigang Niu, Xian Sun, Yu Tian, Wenhui Diao, Kaiqiang Chen, and Kun Fu. Hybrid multiple attention network for semantic segmentation in aerial images. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-18, 2022.
* [29] Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, Wesley Nunes Goncalves, Ana Paula Marques Ramos, Jonathan Li, and Jose Marcato Junior. The segment anything model (sam) for remote sensing applications: From zero to one shot. _arXiv preprint arXiv:2306.16623_, 2023.
* [30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _ICML_, pages 8748-8763. PMLR, 2021.
* [31] Simiao Ren, Francesco Luzi, Saad Lahrichi, Kaleb Kassaw, Leslie M Collins, Kyle Bradbury, and Jordan M Malof. Segment anything, from space? _arXiv preprint arXiv:2304.13000_, 2023.
* [32] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _MICCAI_, pages 234-241, 2015.
* [33] Xian Sun, Peijin Wang, Zhiyuan Yan, Feng Xu, Ruiping Wang, Wenhui Diao, Jin Chen, Jihao Li, Yingchao Feng, Tao Xu, Martin Weinmann, Stefan Hinz, Cheng Wang, and Kun Fu. Fair1m: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 184:116-130, 2022.
* [34] Xin-Yi Tong, Gui-Song Xia, Qikai Lu, Huanfeng Shen, Shengyang Li, Shucheng You, and Liangpei Zhang. Land-cover classification with high-resolution remote sensing images using transferable deep models. _Remote Sensing of Environment_, 237:111322, 2020.
* [35] Michele Volpi and Vittorio Ferrari. Semantic segmentation of urban scenes by learning local class interactions. In _CVPRW_, pages 1-9, 2015.
* [36] Di Wang, Jing Zhang, Bo Du, Gui-Song Xia, and Dacheng Tao. An empirical study of remote sensing pretraining. _IEEE Transactions on Geoscience and Remote Sensing_, 61:1-20, 2023.
* [37] Di Wang, Qiming Zhang, Yufei Xu, Jing Zhang, Bo Du, Dacheng Tao, and Liangpei Zhang. Advancing plain vision transformer toward remote sensing foundation model. _IEEE Transactions on Geoscience and Remote Sensing_, 61:1-15, 2023.
* [38] Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong. Loveda: A remote sensing land-cover dataset for domain adaptive semantic segmentation. In _NeurIPS Track on Datasets and Benchmarks_, volume 1, 2021.
* [39] Libo Wang, Rui Li, Chenxi Duan, Ce Zhang, Xiaoliang Meng, and Shenghui Fang. A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images. _IEEE Geoscience and Remote Sensing Letters_, 19:1-5, 2022.
* [40] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In _CVPR_, pages 14408-14419, 2023.
* [41] Syed Waqas Zamir, Aditya Arora, Akshita Gupta, Salman Khan, Guolei Sun, Fahad Shahbaz Khan, Fan Zhu, Ling Shao, Gui-Song Xia, and Xiang Bai. isaid: A large-scale dataset for instance segmentation in aerial images. In _CVPRW_, pages 28-37, 2019.
* [42] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In _CVPR_, June 2018.
* [43] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In _ECCV_, pages 418-434, 2018.
* [44] Rongtao Xu, Changwei Wang, Jiguang Zhang, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang. Rssformer: Foreground saliency enhancement for remote sensing land-cover segmentation. _IEEE Transactions on Image Processing_, 32:1052-1064, 2023.
* [45] Jielu Zhang, Zhongliang Zhou, Gengchen Mai, Lan Mu, Mengxuan Hu, and Sheng Li. Text2seg: Remote sensing image semantic segmentation via text-guided visual foundation models. _arXiv preprint arXiv:2304.10597_, 2023.
* [46] Qiming Zhang, Yufei Xu, Jing Zhang, and Dacheng Tao. Vitaev2: Vision transformer advanced by exploring inductive bias for image recognition and beyond. _International Journal of Computer Vision_, 131(5):1141-1162, 2023.
* [47] Renrui Zhang, Zhengkai Jiang, Ziyu Guo, Shilin Yan, Junting Pan, Hao Dong, Peng Gao, and Hongsheng Li. Personalize segment anything model with one shot. _arXiv preprint arXiv:2305.03048_, 2023.
* [48] Xianwei Zheng, Linxi Huan, Gui-Song Xia, and Jianya Gong. Parsing very high resolution urban scene images by learning deep convnets with edge-aware loss. _ISPRS Journal of Photogrammetry and Remote Sensing_, 170:15-28, 2020.
## Appendix
### Category Abbreviations
For the SOTA dataset, we present the list of all category abbreviations as follows. _LV: large vehicle, SP: swimming pool, HC: helicopter, BR: bridge, PL: plane, SH: ship, SBF: soccer ball field, BC: basketball court, GTF: ground track field, SV: small vehicle, BD: baseball diamond, TC: tennis court, RA: roundabout, ST: storage tank, HA: harbor, CC: container crane, AP: airport, HP: helipad._
For the SIOR dataset, we present the list of all category abbreviations as follows. _APL: airplane, APO: airport, BF: baseball field, BC: basketball court, BR: bridge, CH: chimney, ESA: expressway service area, ETS: expressway toll station, DA: dam, GF: golf field, GTF: ground track field, HA: harbor, OP: overpass, SH: ship, STD: stadium, STT: storage tank, TC: tennis court, TS: train station, VH: vehicle, WD: windmill._
For the FAST dataset, we present the list of all category abbreviations as follows. _A2: A220, A3: A321, A4: A330, A5: A350, ARJ: ARJ21, BF: baseball field, BC: basketball court, B3: boeing737, B4: boeing747, B7: boeing777, B8: boeing787, BR: bridge, BU: bus, C9: C919, CT: cargo truck, DCS: dry cargo ship, DT: dump truck, ES: engineering ship, EV: excavator, FB: fishing boat, FF: football field, IN: intersection, LCS: liquid cargo ship, MB: motorboat, OA: other airplane, OS: other ship, OV: other vehicle, PS: passenger ship, RA: roundabout, SC: small car, TC: tennis court, TRT: tractor, TRL: trailer, TUT: truck tractor, TB: tugboat, VA: van, WS: warship._
### Experiment Settings
We present the experiment settings of pre-training and fine-tuning in Tables 7-8.
### SAMRS Training and Validation sets
For the experiments based on the SAMRS dataset (see Table 2 in the main text), 95% of the samples in each subset are used for pre-training; together they constitute the SAMRS training set. The remaining samples form the SAMRS validation set. The SAMRS training and validation sets contain 88,685 and 4,667 images, respectively. The samples transformed from the DIOR testing set [18] are not used in any experiment.
### SAMRS-MAE
We conduct MAE pre-training [13] on the SAMRS training set. To improve the pre-training performance, we further crop the images to 384 \(\times\) 384 with a stride of 300, obtaining 609,707 images.
### Evaluations on the SAMRS Validation Set
Table 9 lists the evaluation results of the pre-trained models on the SAMRS validation sets. For convenience, we uniformly use images of size 512 \(\times\) 512, obtained through center cropping of the SAMRS validation samples, for evaluation. Here, mIoU is adopted as the metric. It can be seen that all scores on the FAST validation set are relatively low, indicating the challenging nature of the proposed dataset. By comparing Table 9 to the fine-tuning results (Tables 3-4 in the main text), we find that model performance on validation and fine-tuning shows similar trends. For example, the performance of adopting SEP alone is not as good as with a good initialization. As can
\\begin{table}
\begin{tabular}{l|c} \hline \hline Config & value \\ \hline Optimizer & AdamW \\ Momentum & (0.9, 0.999) \\ Batchsize & 96 | 8 \\ Iterations & 80000 \\ Scheduler & cosine decay \\ \hline \hline \end{tabular}
\\end{table}
Table 7: Basic settings in experiments. "|" separates the pre-training and fine-tuning values.
\\begin{table}
\begin{tabular}{l|c c c} \hline \hline Backbone & ILR & WD & MLR \\ \hline ResNet [14] & 1e-3 & 5e-2 | 1e-4 & 5e-6 \\ Swin [21]/ViTAEv2 [46] & 6e-5 & 1e-2 & 0 \\ InternImage [40] & 6e-5 & 5e-2 & 0 \\ ViT [12]/ViT-RVSA [37] & 6e-5 & 5e-2 & 0 \\ ViT-Adapter [4] & 6e-5 & 1e-2 & 0 \\ \hline \hline \end{tabular}
\\end{table}
Table 8: Detailed settings of different models. "|" separates the pre-training and fine-tuning values. ILR: Initial learning rate. WD: Weight decay. MLR: Minimum learning rate.
be seen, compared to UNet, UperNet achieves higher accuracies, indicating that its representations are more expressive. Therefore, UperNet can obtain better performance on the challenging iSAID dataset. In addition, it can be observed that vision transformer networks still surpass convolutional networks, especially the hierarchical structures. We notice that InternImage performs poorly with 512 \(\times\) 512 images. Therefore, we resize the input image to 224 \(\times\) 224, and the accuracy is recovered. Note that the setting of 224 \(\times\) 224 is adopted in pre-training. These results indicate that InternImage may be more dependent on the input size used in pre-training. We have not presented the results of Mask2Former [5] because the validation accuracies on the three subsets are close to 0, implying serious overfitting in pre-training, since the loss was continuously decreasing. Compared to UNet and UperNet, Mask2Former is a newly-proposed framework with many hyperparameters that need to be carefully tuned. Further investigation of hyperparameter settings is required to evaluate its impact on remote sensing images. Nevertheless, we believe the SAMRS validation set can help adjust the settings of SEP in future explorations.
### Dataset of Fine-tuning
We fine-tune the pre-trained model on two commonly used RS segmentation datasets, ISPRS Potsdam 1 and iSAID [41]. Before using them, we conduct a series of pre-processing steps. The details are as follows.
Footnote 1: [https://www.isprrs.org/education/benchmarks/UrbanSemLab/2d-sem-label-potsdam.aspx](https://www.isprrs.org/education/benchmarks/UrbanSemLab/2d-sem-label-potsdam.aspx)
**ISPRS Potsdam**: This is the most classical high-resolution RS segmentation dataset. It has 38 large images with an average size of 6,000 \\(\\times\\) 6,000, where the training and testing sets separately include 24 and 14 images. It contains 6 categories: impervious surface, building, low vegetation, tree, car, and clutter. In experiments, we crop the image into 512 \\(\\times\\) 512 with a stride of 320, obtaining 8,664 and 5,054 training and testing images. We only use RGB channels. Following most literature in the RS field [11; 36; 48], we ignore the clutter category in training and testing.
**iSAID**: This is a challenging dataset. It provides 15 foreground categories and 1 background category, and contains 2,806 high-resolution images ranging from 800 \(\times\) 800 to 4,000 \(\times\) 13,000 pixels. The training, validation, and test sets have 1,411, 458, and 937 large images, respectively. In this paper, we use the validation set for evaluation since the testing set cannot be acquired. In experiments, we crop the images into 896 \(\times\) 896 patches with a stride of 512, increasing the sizes of the training and validation sets to 33,620 and 11,533. In addition, only the foreground categories are considered.
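The stride-based cropping used here (512\(\times\)512 with a stride of 320 for Potsdam, 896\(\times\)896 with a stride of 512 for iSAID, and similarly 384\(\times\)384 with a stride of 300 for SAMRS-MAE) can be implemented as a simple sliding window. The following NumPy sketch illustrates one possible implementation; the border handling is our own assumption, not the original preprocessing script.

```python
import numpy as np

def sliding_window_crops(image: np.ndarray, crop: int, stride: int):
    """Cut an (H, W, C) image into crop x crop patches with the given stride.
    For images at least as large as the crop, windows that would cross the
    right/bottom border are shifted back so they stay inside the image."""
    h, w = image.shape[:2]
    ys = list(range(0, max(h - crop, 0) + 1, stride))
    xs = list(range(0, max(w - crop, 0) + 1, stride))
    if h > crop and ys[-1] != h - crop:
        ys.append(h - crop)
    if w > crop and xs[-1] != w - crop:
        xs.append(w - crop)
    return [image[y:y + crop, x:x + crop] for y in ys for x in xs]

# e.g. one 6000 x 6000 Potsdam tile -> 512 x 512 patches with stride 320
tile = np.zeros((6000, 6000, 3), dtype=np.uint8)
patches = sliding_window_crops(tile, crop=512, stride=320)
print(len(patches))   # 19 x 19 = 361 patches per tile (361 x 24 tiles = 8,664 training crops)
```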
### Visualization
We present more samples from the different subsets in Figures 8-10.
\\begin{table}
\\begin{tabular}{l l l|c c c c} \\hline Method & Pretrain & Backbone & SOTA & SIOR & FAST & Average \\\\ \\hline UNet [32] & SEP & ResNet-50 & 55.88 & 71.44 & 35.35 & 54.23 \\\\ UNet & IMP [7]+SEP & ResNet-50 & 58.62 & 80.49 & 37.20 & 58.77 \\\\ UNet & RSP [36]+SEP & ResNet-50 & 62.43 & 85.37 & 38.41 & 62.07 \\\\ \\hline UperNet [43] & SEP & ResNet-50 & 64.79 & 82.74 & 39.32 & 62.28 \\\\ UperNet & IMP+SEP & ResNet-50 & 73.59 & 88.59 & 47.10 & 69.76 \\\\ UperNet & RSP+SEP & ResNet-50 & 76.03 & 90.66 & 47.62 & 71.44 \\\\ \\hline UperNet & IMP+SEP & Swin-T & 81.53 & 94.64 & 57.91 & 78.03 \\\\ UperNet & IMP+SEP & ViTAEV2-S & 81.13 & 94.06 & 57.90 & 77.70 \\\\ UperNet & IMP+SEP & InterImage-T & 58.07 & 72.04 & 29.33 & 53.15 \\\\ UperNet & IMP+SEP & InterImage-T \\(\\dagger\\) & 78.06 & 92.51 & 45.70 & 72.09 \\\\ \\hline UperNet & IMP+SEP & ViT-B & 79.37 & 92.76 & 53.05 & 75.06 \\\\ UperNet & IMP+SEP & ViT-Adapter-B & 80.41 & 93.76 & 51.13 & 75.10 \\\\ UperNet & BEIT [1] +SEP & ViT-B & 74.10 & 85.81 & 42.04 & 67.31 \\\\ UperNet & MAE [13] +SEP \\(\\dagger\\dagger\\) & ViT-B + RVSA & 79.00 & 92.09 & 53.46 & 74.85 \\\\ UperNet & SAMRS-MAE+SEP \\(\\dagger\\dagger\\) & ViT-B + RVSA & 77.64 & 91.87 & 54.78 & 74.67 \\\\ \\hline \\end{tabular}
\\end{table}
Table 9: The mIoUs of different models on the SAMRS validation set. \(\dagger\): The input image is resized to 224 \(\times\) 224. \(\dagger\dagger\): MAE pre-training on the MillionAID dataset. \(\dagger\dagger\): MAE pre-training on the SAMRS training set.
Figure 8: Visual examples from the SOTA subset of our SAMRS dataset.
Figure 9: Visual examples from the SIOR subset of our SAMRS dataset.
Figure 10: Visual examples from the FAST subset of our SAMRS dataset. | The success of the Segment Anything Model (SAM) demonstrates the significance of data-centric machine learning. However, due to the difficulties and high costs associated with annotating Remote Sensing (RS) images, a large amount of valuable RS data remains unlabeled, particularly at the pixel level. In this study, we leverage SAM and existing RS object detection datasets to develop an efficient pipeline for generating a large-scale RS segmentation dataset, dubbed SAMRS. SAMRS totally possesses 105,090 images and 1,668,241 instances, surpassing existing high-resolution RS segmentation datasets in size by several orders of magnitude. It provides object category, location, and instance information that can be used for semantic segmentation, instance segmentation, and object detection, either individually or in combination. We also provide a comprehensive analysis of SAMRS from various aspects. Moreover, preliminary experiments highlight the importance of conducting segmentation pre-training with SAMRS to address task discrepancies and alleviate the limitations posed by limited training data during fine-tuning. The code and dataset will be available at SAMRS. | Write a summary of the passage below. | 211 |
Mashhoor Refai
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
Rashid Abu-Dawwas
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
Unsal Tekir
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
Suat Koc
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
Roa'a Awawdeh
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
Eda Yildiz
Department of Mathematics, Faculty of Science, University of Tokyo, Tokyo 113-8582, Japan
## 1. Introduction
Throughout this article, \(G\) will be a group with identity \(e\) and \(R\) will be a commutative ring having a nonzero unity \(1\). Then \(R\) is called a \(G\)_-graded ring_ if \(R=\bigoplus_{g\in G}R_{g}\) with \(R_{g}R_{h}\subseteq R_{gh}\) for all \(g,h\in G\), where \(R_{g}\) is an additive subgroup of \(R\) for all \(g\in G\). The elements of \(R_{g}\) are called _homogeneous of degree_ \(g\). If \(a\in R\), then \(a\) can be written uniquely as a finite sum \(\sum_{g\in G}a_{g}\), where \(a_{g}\) is the component of \(a\) in \(R_{g}\). Note that \(R_{e}\) is a subring of \(R\) and \(1\in R_{e}\). The set of all homogeneous elements of \(R\) is denoted by \(h(R)=\bigcup_{g\in G}R_{g}\). Let \(P\) be an ideal of a graded ring \(R\). Then \(P\) is called a _graded ideal_ if \(P=\bigoplus_{g\in G}(P\cap R_{g}),\) or equivalently, \(a=\sum_{g\in G}a_{g}\in P\) implies that \(a_{g}\in P\) for all \(g\in G\). It is not necessary that every ideal of a graded ring is a graded ideal. For instance, let \(R=k[X]\) where \(k\) is a field. Then \(R\) is a \(\mathbb{Z}\)-graded ring where \(R_{n}=0\) if \(n<0,\ R_{0}=k\) and \(R_{n}=kX^{n}\) if \(n>0.\) Then \(I=(X+1)\) is not a graded ideal since \(1+X\in I\) but \(1\notin I\). We will denote the set of all graded ideals of \(R\) by \(GI(R)\). For more details and terminology, see [8, 12].
For many years, various classes of graded ideals have been established, such as graded prime, graded primary, and graded absorbing ideals. All of them play an important role in characterizing graded rings. The concept of graded prime ideals and its generalizations have an important place in graded commutative algebra since they are used in recognizing the structure of graded rings. Recall that a proper graded ideal \(I\) of \(R\) is said to be a _graded prime ideal_ if whenever \(a,b\in h(R)\) such that \(ab\in I\), then either \(a\in I\) or \(b\in I\) ([14]). The significance of graded prime ideals led many researchers to work on graded prime ideals and their generalizations; see for example [1, 7, 13]. In [6], Atani introduced the notion of graded weakly prime ideals, which is a generalization of graded prime ideals. A proper graded ideal \(I\) of \(R\) is said to be a _graded weakly prime ideal_ of \(R\) if whenever \(a,b\in h(R)\) such that \(0\neq ab\in I\), then \(a\in I\) or \(b\in I\). It is obvious that every graded prime ideal is graded weakly prime, but the converse is not true in general. For instance, consider the \(\mathbb{Z}\)-graded ring \(R=\mathbb{Z}_{4}[X]\) and the ideal \(I=(0).\) Then \(I\) is clearly a graded weakly prime ideal. However, \(I\) is not a graded prime ideal since \(\overline{2}\cdot\overline{2}X=\overline{0}\) but \(\overline{2}\) and \(\overline{2}X\notin I\). Later, Al-Zoubi, Abu-Dawwas and Ceken in [5] introduced the notion of graded \(2\)-absorbing ideals. A nonzero proper graded ideal \(I\) of \(R\) is called a _graded \(2\)-absorbing ideal_ if \(abc\in I\) implies \(ab\in I\) or \(ac\in I\) or \(bc\in I\) for each \(a,b,c\in h(R)\). Note that every graded prime ideal is also a graded \(2\)-absorbing ideal. After this, graded \(2\)-absorbing versions of graded ideals and many generalizations of graded \(2\)-absorbing ideals attracted considerable attention from researchers in [2, 16, 18]. In [9], the authors defined the notion of graded almost prime ideals. A proper graded ideal \(I\) of \(R\) is said to be _graded almost prime_ if whenever \(a,b\in h(R)\) are such that \(ab\in I-I^{2}\), then either \(a\in I\) or \(b\in I\). Also, in [5], the authors defined and studied graded _weakly \(2\)-absorbing ideals_, which are a generalization of graded weakly prime ideals. A proper graded ideal \(I\) of \(R\) is called a _graded weakly \(2\)-absorbing ideal_ if \(0\neq abc\in I\) implies \(ab\in I\) or \(ac\in I\) or \(bc\in I\) for each \(a,b,c\in h(R)\). In [4], Alshehry and Abu-Dawwas defined a new class of graded prime ideals. A proper graded ideal \(I\) of \(R\) is called a _graded \(\phi\)-prime ideal_ if whenever \(ab\in I-\phi(I)\) for some \(a,b\in h(R)\), then either \(a\in I\) or \(b\in I\), where \(\phi:GI(R)\to GI(R)\cup\{\emptyset\}\) is a function. They proved that graded prime ideals and graded \(\phi\)-prime ideals share several similar properties.
Recently, in [3], the notion of graded \(1\)-absorbing prime ideals has been introduced and studied. This class of graded ideals is a generalization of graded prime ideals. A proper graded ideal \(I\) of \(R\) is called a _graded \(1\)-absorbing prime ideal_ if whenever \(abc\in I\) for some nonunits \(a,b,c\in h(R)\), then either \(ab\in I\) or \(c\in I\). Note that every graded prime ideal is graded \(1\)-absorbing prime and every graded \(1\)-absorbing prime ideal is a graded \(2\)-absorbing ideal; the converses are not true. More recently, in [17], the notion of graded weakly \(1\)-absorbing prime ideals, which is a generalization of graded \(1\)-absorbing prime ideals, has been introduced and investigated. A proper graded ideal \(I\) of \(R\) is called a _graded weakly \(1\)-absorbing prime ideal_ if whenever \(0\neq abc\in I\) for some nonunits \(a,b,c\in h(R)\), then either \(ab\in I\) or \(c\in I\).
In this article, we act in accordance with [19] to define and study graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals as a new class of graded ideals which is a generalization of graded \\(1\\)-absorbing prime ideals. A proper graded ideal \\(I\\) of \\(R\\) is called a _graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal_ of \\(R\\) if whenever \\(a,b,c\\in h(R)\\) are nonunits such that \\(abc\\in I-\\phi(I)\\), then \\(ab\\in I\\) or \\(c\\in I\\). Among several results, an example of a graded weakly \\(1\\)-absorbing prime ideal that is not graded \\(1\\)-absorbing prime has been given (Example 2.6). Also, an example of a graded weakly \\(1\\)-absorbing prime ideal that is not graded weakly prime has been introduced (Example 2.8). In Theorem 2.13, we give a characterization on graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals. We introduce the concept of \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideals. A graded ideal \\(I\\) of \\(R\\) with \\(I_{g}\
eq R_{g}\\) is said to be a \\(g\\)-\\(\\phi\\)-\\(1\\)-_absorbing prime ideal of_\\(R\\) if whenever \\(a,b,c\\in R_{g}\\) such that \\(abc\\in I\\), then either \\(ab\\in I\\) or \\(c\\in I\\). In Theorem 2.16, we give a characterization of \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideals. We show that if \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\), then \\(I/\\phi(I)\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R/\\phi(I)\\) (Theorem 2.25). On the other hand, we prove that if \\(I/\\phi(I)\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R/\\phi(I)\\) and \\(U(R/\\phi(I))=\\{a+\\phi(I):a\\in U(R)\\}\\), then \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) (Theorem 2.27). In Theorem 2.28, we study graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals over multiplicative sets. In Theorems 2.30, 2.31 and 2.32,we study graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals over cartesian products of graded rings. Finally, we introduce and study the concept of graded von Neumann regular rings. A graded ring \\(R\\) is said to be a _graded von Neumann regular ring_ if for each \\(a\\in R_{g}\\), there exists \\(x\\in R_{g^{-1}}\\) such that \\(a=a^{2}x\\)[12]. In particular, we prove that if \\(R\\) is a graded von Neumann regular ring and \\(x\\in h(R)\\), then \\(Rx\\) is a graded almost \\(1\\)-absorbing prime ideal of \\(R\\) (Theorem 3.8).
## 2. Graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals
In this section, we introduce and study the concept of graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals.
**Definition 2.1**.: _Let \\(R\\) be a graded ring and \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\) be a function. A proper graded ideal \\(I\\) of \\(R\\) is called a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) if whenever \\(a,b,c\\in h(R)\\) are nonunits such that \\(abc\\in I-\\phi(I)\\), then \\(ab\\in I\\) or \\(c\\in I\\)._
**Remark 2.2**.: _The following notations are used for the rest of the article, they are types of graded \\(1\\)-absorbing prime ideals corresponding to \\(\\phi_{\\alpha}\\)._
1. \\(\\phi_{\\emptyset}(I)=\\emptyset\\) _(graded_ \\(1\\)_-absorbing prime ideal)_
2. \\(\\phi_{0}(I)=\\{0\\}\\) _(graded weakly_ \\(1\\)_-absorbing prime ideal)_
3. \\(\\phi_{1}(I)=I\\) _(any graded ideal)_
4. \\(\\phi_{2}(I)=I^{2}\\) _(graded almost_ \\(1\\)_-absorbing prime ideal)_
5. \\(\\phi_{n}(I)=I^{n}\\) _(graded_ \\(n\\)_-almost_ \\(1\\)_-absorbing prime ideal)_
6. \\(\\phi_{\\omega}(I)=\\bigcap_{n=1}^{\\infty}I^{n}\\) _(graded_ \\(\\omega\\)_-_\\(1\\)_-absorbing prime ideal)_
**Remark 2.3**.: _(1) Since \\(I-\\phi(I)=I-(I\\bigcap\\phi(I))\\) for any graded ideal \\(I\\), without loss of generality, throughout this article, we suppose that \\(\\phi(I)\\subseteq I\\)._
_(2) For functions \\(\\phi,\\psi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\), we write \\(\\phi\\leq\\psi\\) if \\(\\phi(I)\\subseteq\\psi(I)\\) for all \\(I\\in GI(R)\\). Obviously, therefore, we have the next order:_
**Remark 2.4**.: \\(\\phi_{\\emptyset}\\leq\\phi_{0}\\leq\\phi_{\\omega}\\leq\\cdots\\leq\\phi_{n+1}\\leq\\phi_ {n}\\leq\\cdots\\leq\\phi_{2}\\leq\\phi_{1}\\)_._
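For completeness, we note that the chain in Remark 2.4 follows directly from the definitions: for every graded ideal \(I\) and every \(n\geq 1\) we have \(I^{n+1}=I\cdot I^{n}\subseteq I^{n}\), so \(\phi_{n+1}(I)\subseteq\phi_{n}(I)\); moreover, \(\phi_{\omega}(I)=\bigcap_{k=1}^{\infty}I^{k}\subseteq I^{n}=\phi_{n}(I)\) for every \(n\geq 1\), \(\phi_{0}(I)=\{0\}\subseteq\phi_{\omega}(I)\), and \(\phi_{\emptyset}(I)=\emptyset\subseteq\phi_{0}(I)\).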
**Proposition 2.5**.: _Let \\(R\\) be a graded ring, \\(\\phi,\\psi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\) be two functions with \\(\\phi\\leq\\psi\\) and \\(I\\) be a proper graded ideal of \\(R\\)._
1. _If_ \\(I\\) _is a graded_ \\(\\phi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_, then_ \\(I\\) _is a graded_ \\(\\psi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
2. \\(I\\) _is a graded_ \\(1\\)_-absorbing prime ideal of_ \\(R\\Rightarrow I\\) _is a graded_ \\(\\psi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\Rightarrow I\\) _is a graded_ \\(\\psi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\) _is a graded_ \\(n\\)_-almost_ \\(1\\)_-absorbing prime ideal of_ \\(R\\) _for each_ \\(n\\geq 2\\Rightarrow I\\) _is a graded almost_ \\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
3. \\(I\\) _is a graded_ \\(n\\)_-almost_ \\(1\\)_-absorbing prime ideal of_ \\(R\\) _for each_ \\(n\\geq 2\\) _if and only if_ \\(I\\) _is a graded_ \\(\\omega\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
4. _Every graded_ \\(\\phi\\)_-prime ideal of_ \\(R\\) _is a graded_ \\(\\phi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
Proof.\\((1):\\) It is clear.
\((2):\) It follows from (1) and \(\phi_{\emptyset}\leq\phi_{0}\leq\phi_{\omega}\leq\cdots\leq\phi_{n+1}\leq\phi_{n}\leq\cdots\leq\phi_{2}\leq\phi_{1}\) in Remark 2.4.
\((3):\) By (2), if \(I\) is a graded \(\omega\)-\(1\)-absorbing prime ideal of \(R\), then \(I\) is a graded \(n\)-almost \(1\)-absorbing prime ideal of \(R\) for each \(n\geq 2\). Conversely, assume that \(I\) is a graded \(n\)-almost \(1\)-absorbing prime ideal of \(R\) for each \(n\geq 2\). Let \(abc\in I-\bigcap_{n=1}^{\infty}I^{n}\) for some nonunits \(a,b,c\in h(R)\). Then there exists \(r\geq 2\) such that \(abc\notin I^{r}\). Since \(I\) is a graded \(r\)-almost \(1\)-absorbing prime ideal of \(R\) and \(abc\in I-I^{r}\), we have either \(ab\in I\) or \(c\in I\).
\\((4):\\) It is obvious.
The next example introduces a graded weakly \\(1\\)-absorbing prime ideal that is not a graded \\(1\\)-absorbing prime.
**Example 2.6**.: _Consider \(R=\mathbb{Z}_{pq^{2}}[i]\), where \(p\), \(q\) are two distinct primes, and \(G=\mathbb{Z}_{2}\). Then \(R\) is \(G\)-graded by \(R_{0}=\mathbb{Z}_{pq^{2}}\) and \(R_{1}=i\mathbb{Z}_{pq^{2}}\). As \(\overline{q}^{2}\in R_{0}\), \(I=\langle\overline{q}^{2}\rangle\) is a graded ideal of \(R\). Since \(\overline{p},\overline{q}\in R_{0}\subseteq h(R)\) are nonunits with \(\overline{p}\,\overline{q}\,\overline{q}\in I\) while \(\overline{p}\,\overline{q}\notin I\) and \(\overline{q}\notin I\), \(I\) is not a graded \(1\)-absorbing prime ideal of \(R\). On the other hand, we prove that \(I\) is a graded weakly \(1\)-absorbing prime ideal of \(R\). Let \(\overline{0}\neq\overline{abc}\in I\) for some nonunits \(\overline{a},\overline{b},\overline{c}\in h(R)\). Then \(q^{2}\) divides \(abc\) but \(pq^{2}\) does not divide \(abc\)._
_Case (1): \\(\\overline{a},\\overline{b},\\overline{c}\\in R_{0}\\)._
_Since \\(\\overline{a},\\overline{b},\\overline{c}\\) are nonunits, \\(p\\) or \\(q\\) must divide \\(a\\), \\(b\\) and \\(c\\). If \\(p\\) divides \\(a,b\\) or \\(c\\), then \\(pq^{2}\\) divides \\(abc\\) which is a contradiction. So, \\(q^{2}\\) divides \\(ab\\) and so \\(\\overline{ab}\\in I\\). Therefore, \\(I\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R\\)._
_Case (2): \\(\\overline{a},\\overline{b}\\in R_{0},\\overline{c}\\in R_{1}\\)._
_In this case, \\(\\overline{c}=i\\overline{\\alpha}\\) for some \\(\\overline{\\alpha}\\in R_{0}\\). As \\(\\overline{c}\\) is nonunit, \\(\\overline{\\alpha}\\) is nonunit with \\(abc=iab\\alpha\\) and \\(pq^{2}\\) does not divide \\(ab\\alpha\\). Since \\(q^{2}\\) divides \\(abc\\), \\(iab\\alpha=q^{2}(x+iy)\\) for some \\(x,y\\in R_{0}\\), and then \\(\\underline{ab\\alpha}=q^{2}y\\) which implies that \\(q^{2}\\) divides \\(ab\\alpha\\). Similarly as in case (1), we have that \\(\\overline{ab}\\in I\\). Therefore, \\(I\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R\\)._
_Case (3): \\(\\overline{a}\\in R_{0}\\), \\(\\overline{b},\\overline{c}\\in R_{1}\\)._
_In this case, \\(\\overline{b}=i\\overline{\\alpha}\\) and \\(\\overline{c}=i\\overline{\\beta}\\) for some \\(\\overline{\\alpha},\\overline{\\beta}\\in R_{0}\\). As \\(\\overline{b}\\) and \\(\\overline{c}\\) are nonunits, \\(\\overline{\\alpha}\\) and \\(\\overline{\\beta}\\) are nonunits with \\(abc=-a\\alpha\\beta\\) and \\(pq^{2}\\) does not divide \\(a\\alpha\\beta\\). Since \\(q^{2}\\) divides \\(abc\\), \\(-a\\alpha\\beta=q^{2}(x+iy)\\) for some \\(x,y\\in R_{0}\\), and then \\(-a\\alpha\\beta=q^{2}x\\) which implies that \\(q^{2}\\) divides \\(a\\alpha\\beta\\). Similarly as in case (1), we have that \\(\\overline{a\\alpha}\\in I\\) and then \\(\\overline{ab}\\in I\\). Therefore, \\(I\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R\\)._
_Case (4): \\(\\overline{a},\\overline{b},\\overline{c}\\in R_{1}\\)._
_In this case, \\(\\overline{a}=i\\overline{\\alpha}\\), \\(\\overline{b}=i\\overline{\\beta}\\) and \\(\\overline{c}=i\\overline{\\gamma}\\) for some \\(\\overline{\\alpha},\\overline{\\beta},\\overline{\\gamma}\\in R_{0}\\). As \\(\\overline{a}\\), \\(\\overline{b}\\) and \\(\\overline{c}\\) are nonunits, \\(\\overline{\\alpha}\\), \\(\\overline{\\beta}\\) and \\(\\overline{\\gamma}\\) are nonunits with \\(abc=-i\\alpha\\beta\\gamma\\) and \\(pq^{2}\\) does not divide \\(\\alpha\\beta\\gamma\\). Since \\(q^{2}\\) divides \\(abc\\), \\(-i\\alpha\\beta\\gamma=q^{2}(x+iy)\\) for some \\(x,y\\in R_{0}\\), and then \\(-\\alpha\\beta\\gamma=q^{2}y\\) which implies that \\(q^{2}\\) divides \\(\\alpha\\beta\\gamma\\). Similarly as in case (1), we have that \\(\\overline{\\alpha\\beta}\\in I\\) and then \\(\\overline{ab}\\in I\\). Therefore, \\(I\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R\\)._
_Since the other cases are similar to one of the above cases, \\(I\\) is a graded weakly 1-absorbing prime ideal of \\(R\\)._
The next example introduces a graded \\(\\omega\\)-\\(1\\)-absorbing prime ideal that is not graded weakly \\(1\\)-absorbing prime.
**Example 2.7**.: _Consider \(R=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) with the trivial graduation of \(R\) by any group \(G\), that is, \(R_{e}=R\) and \(R_{g}=\{0\}\) for \(g\in G-\{e\}\). Now, \(I=\mathbb{Z}_{2}\times\{\overline{0}\}\times\{\overline{0}\}\times\{\overline{0}\}\) is a graded ideal of \(R\) that satisfies \(I^{2}=I\), and then \(I^{n}=I\) for all \(n\geq 2\); hence \(I\) is a graded \(\omega\)-\(1\)-absorbing prime ideal of \(R\). On the other hand, \(I\) is not a graded weakly \(1\)-absorbing prime ideal of \(R\) since \(a=(\overline{1},\overline{1},\overline{1},\overline{0})\), \(b=(\overline{1},\overline{1},\overline{0},\overline{1})\) and \(c=(\overline{1},\overline{0},\overline{1},\overline{1})\in h(R)\) are nonunits with \(0\neq abc\in I\) while \(ab,c\notin I\)._
The next example introduces a graded weakly \\(1\\)-absorbing prime ideal that is not graded weakly prime.
**Example 2.8**.: _Consider \(R=\mathbb{Z}_{pq^{2}}[i]\), where \(p\), \(q\) are two distinct primes, and \(G=\mathbb{Z}_{2}\). Then \(R\) is \(G\)-graded by \(R_{0}=\mathbb{Z}_{pq^{2}}\) and \(R_{1}=i\mathbb{Z}_{pq^{2}}\). By Example 2.6, \(I=\langle\overline{q}^{2}\rangle\) is a graded weakly \(1\)-absorbing prime ideal of \(R\). On the other hand, \(I\) is not a graded weakly prime ideal of \(R\) since \(\overline{q}\in h(R)\) with \(\overline{0}\neq\overline{q}\,\overline{q}\in I\) while \(\overline{q}\notin I\)._
A graded ring \\(R\\) is said to be graded local if it has a unique graded maximal ideal \\(\\mathfrak{m}\\), and it is denoted by \\((R,\\mathfrak{m})\\).
**Proposition 2.9**.: _Let \\((R,\\mathfrak{m})\\) be a graded local ring and \\(I\\) be a proper graded ideal of \\(R\\). If \\(\\mathfrak{m}^{2}\\subseteq I\\), then \\(I\\) is a graded \\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: Let \\(abc\\in I\\) for some nonunits \\(a,b,c\\in h(R)\\). Then \\(a,b,c\\in\\mathfrak{m}\\), which implies that \\(ab\\in\\mathfrak{m}^{2}\\subseteq I\\). Therefore, \\(I\\) is a graded \\(1\\)-absorbing prime ideal of \\(R\\).
**Corollary 2.10**.: _Let \\((R,\\mathfrak{m})\\) be a graded local ring. Then \\(\\mathfrak{m}^{2}\\) is a graded \\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: By [[3], Lemma 1], \\(\\mathfrak{m}^{2}\\) is a proper graded ideal of \\(R\\), and then \\(\\mathfrak{m}^{2}\\) is a graded \\(1\\)-absorbing prime ideal of \\(R\\) by Proposition 2.9.
**Proposition 2.11**.: _Let \\((R,\\mathfrak{m})\\) be a graded local ring and \\(I\\) be a proper graded ideal of \\(R\\). If \\(\\mathfrak{m}^{3}\\subseteq\\phi(I)\\), then \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: Suppose that \\(I\\) is not a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\). Then there exist nonunit elements \\(a,b,c\\in h(R)\\) such that \\(abc\\in I-\\phi(I)\\) but \\(ab\
otin I\\) and \\(c\
otin I\\). Since \\(a,b,c\\) are nonunits, they are elements of \\(\\mathfrak{m}\\), and then \\(abc\\in\\mathfrak{m}^{3}\\subseteq\\phi(I)\\), which is a contradiction. Hence, \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\).
**Corollary 2.12**.: _Let \((R,\mathfrak{m})\) be a graded local ring and \(\phi(I)\neq\emptyset\) for every ideal \(I\) of \(R\). If \(\mathfrak{m}^{3}=\{0\}\), then every proper graded ideal of \(R\) is graded \(\phi\)-\(1\)-absorbing prime._
Proof.: Apply Proposition 2.11.
**Theorem 2.13**.: _Let \\(R\\) be a graded ring and \\(I\\) be a proper graded ideal of \\(R\\). Consider the following conditions._
1. \\(I\\) _is a graded_ \\(\\phi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
2. _For all nonunits_ \(a,b\in h(R)\) _with_ \(ab\notin I\)_,_ \((I:ab)=I\cup(\phi(I):ab)\)_._
3. _For all nonunits_ \(a,b\in h(R)\) _with_ \(ab\notin I\)_, either_ \((I:ab)=I\) _or_ \((I:ab)=(\phi(I):ab)\)_._
4. _For all nonunits_ \(a,b\in h(R)\) _and every proper graded ideal_ \(L\) _of_ \(R\) _such that_ \(abL\subseteq I\) _and_ \(abL\nsubseteq\phi(I)\)_, either_ \(ab\in I\) _or_ \(L\subseteq I\)_._
5. _For every nonunit_ \(a\in h(R)\) _and all proper graded ideals_ \(K,L\) _of_ \(R\) _such that_ \(aKL\subseteq I\) _and_ \(aKL\nsubseteq\phi(I)\)_, either_ \(aK\subseteq I\) _or_ \(L\subseteq I\)_._
6. _For all proper graded ideals_ \(J,K,L\) _of_ \(R\) _such that_ \(JKL\subseteq I\) _and_ \(JKL\nsubseteq\phi(I)\)_, either_ \(JK\subseteq I\) _or_ \(L\subseteq I\)_._
_Then, \\((6)\\Rightarrow(5)\\Rightarrow(4)\\Rightarrow(3)\\Rightarrow(2)\\Rightarrow(1).\\)_
Proof.: \\((6)\\Rightarrow(5):\\) Suppose that \\(aKL\\subseteq I\\) and \\(aKL\
subseteq\\phi(I)\\) for some nonunit \\(a\\in h(R)\\) and proper graded ideals \\(K,L\\) of \\(R.\\) Then \\(J=Ra\\) is a graded ideal since \\(a\\in h(R)\\), and also \\(JKL\\subseteq I\\) and \\(JKL\
subseteq\\phi(I).\\) Then by \\((6)\\), we have \\(aK\\subseteq JK\\subseteq I\\) or \\(L\\subseteq I\\) which completes the proof.
\\((5)\\Rightarrow(4):\\) Let \\(abL\\subseteq I\\) and \\(abL\
subseteq\\phi(I)\\) for some nonunits \\(a,b\\in h(R)\\) and proper graded ideal \\(L\\) of \\(R.\\) Now, put \\(K=Rb.\\) Then \\(K\\) is a graded ideal such that\\(aKL\\subseteq I\\) and \\(aKL\
subseteq\\phi(I)\\). Then by (5), we have that \\(ab\\in aK\\subseteq I\\) or \\(L\\subseteq I\\) which is needed.
\\((4)\\Rightarrow(3):\\) Let \\(a,b\\in h(R)\\) nonunits such that \\(ab\
otin I\\). Then \\((I:ab)\\) is a proper graded ideal of \\(R\\). We have two cases. **Case 1:** let \\(ab(I:ab)\\subseteq\\phi(I)\\). Then \\((I:ab)\\subseteq\\left(\\phi(I):ab\\right).\\) As the reverse inclusion always holds, we have the equality \\((I:ab)=\\left(\\phi(I):ab\\right).\\)**Case 2:** let \\(ab(I:ab)\
subseteq\\phi(I)\\). Since \\(ab(I:ab)\\subseteq I\\), by (4), we get \\((I:ab)\\subseteq I\\). As \\(I\\subseteq(I:ab)\\) always holds, we have \\(I=(I:ab)\\). Therefore, \\((I:ab)=I\\) or \\((I:ab)=(\\phi(I):ab)\\).
\\((3)\\Rightarrow(2):\\) It is clear.
\\((2)\\Rightarrow(1):\\) Let \\(abc\\in I-\\phi(I)\\) for some nonunits \\(a,b,c\\in h(R)\\). Assume that \\(ab\
otin I\\). Then we have \\(c\\in(I:ab)-\\left(\\phi(I):ab\\right).\\) By (2), we conclude that \\(c\\in I\\) which completes the proof.
In the previous Theorem, the implication \\((1)\\Rightarrow(6)\\) is not true in general. See the following example.
**Example 2.14**.: _Consider the ring \\(R=\\mathbb{Z}_{50}[X]\\). Then \\(R=\\bigoplus\\limits_{n\\in\\mathbb{Z}}R_{n}\\) is a \\(\\mathbb{Z}\\)-graded ring, where \\(R_{n}=\\{\\overline{0}\\}\\) if \\(n<0,\\ R_{0}=\\mathbb{Z}_{50}\\) and also \\(R_{n}=\\mathbb{Z}_{50}X^{n}\\) if \\(n>0\\). Then the set of all nonunit homogeneous elements is \\(nu(h(R))=\\{\\overline{2k},\\overline{5k},\\overline{a}X^{n}:k,a\\in\\mathbb{Z}\\) and \\(n\\geq 1\\}\\). Now, consider the graded ideal \\(I=(X,\\overline{25})\\) of \\(R\\). Set \\(\\phi(I)=\\{\\overline{0}\\}\\). Now, we will show that \\(I\\) is a graded \\(\\phi\\)-1-absorbing prime ideal of \\(R\\). To see this, choose nonunit homogeneous elements \\(r,s,t\\in nu(h(R))\\) such that \\(rst\\in I-\\phi(I)\\). We have two cases. **Case 1:** If at least one of the \\(r,s,t\\) is of the form \\(\\overline{a}X^{n},\\) then we have \\(rs\\in I\\) or \\(t\\in I\\) since \\(X\\in I\\). **Case 2:** Assume that \\(r,s,t\\in\\{\\overline{2k},\\overline{5k}:k\\in\\mathbb{Z}\\}\\). Then we can write \\(r=\\overline{m},s=\\overline{n},t=\\overline{k}\\) for some \\(m,n,k\\in\\mathbb{Z}\\). Since \\(rst\\in I-\\phi(I),\\) we have \\(25|mnk\\) and \\(2\
mid mnk\\). Thus, \\(2\\) does not divide \\(m,n\\) and \\(k\\). Which implies that \\(25|mn\\) and so \\(rs\\in I\\). Therefore, \\(I\\) is a graded \\(\\phi\\)-1-absorbing prime ideal of \\(R\\). Now, we will show that \\(I\\) does not satisfy \\((2)\\) in Theorem 2.13. Now, take \\(a=\\overline{2}\\) and \\(b=\\overline{5}\\). Then note that \\(ab=\\overline{10}\
otin I\\). Also, it is easy to see that \\(\\overline{5},X\\in(I:ab)\\). Then we have \\(\\overline{5}+X\\in(I:ab)\\). On the other hand, note that \\(\\overline{5}+X\
otin(\\phi(I):ab)\\cup I\\). This shows that \\((\\phi(I):ab)\\cup I\\subsetneq(I:ab)\\). Thus, \\(I\\) does not satisfy (2), and so it does not satisfy all axioms \\((2)-(6)\\) in Theorem 2.13._
**Definition 2.15**.: _Let \\(R\\) be a \\(G\\)-graded ring and \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\) be a function. Suppose that \\(g\\in G\\) and \\(I\\) is graded ideal of \\(R\\) with \\(I_{g}\
eq R_{g}\\). Then \\(I\\) is called a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) if whenever \\(a,b,c\\in R_{g}\\) are nonunits such that \\(abc\\in I-\\phi(I)\\), then \\(ab\\in I\\) or \\(c\\in I\\)._
**Theorem 2.16**.: _Let \\(R\\) be a \\(G\\)-graded ring, \\(g\\in G\\) and \\(I\\) be a graded ideal of \\(R\\) with \\(I_{g}\
eq R_{g}\\). Then the following statements are equivalent._
1. \\(I\\) _is a_ \\(g\\)_-_\\(\\phi\\)_-_\\(1\\)_-absorbing prime ideal of_ \\(R\\)_._
2. _For each nonunits_ \\(a,b\\in R_{g}\\) _with_ \\(ab\
otin I\\)_,_ \\((I:_{R_{g}}ab)\\subseteq I\\cup(\\phi(I):_{R_{g}}ab)\\)_._
3. _For each nonunits_ \\(a,b\\in R_{g}\\) _with_ \\(ab\
otin I\\)_, either_ \\((I:_{R_{g}}ab)\\subseteq I\\) _or_ \\((I:_{R_{g}}ab)=(\\phi(I):_{R_{g}}ab)\\)_._
4. _For each nonunits_ \\(a,b\\in R_{g}\\) _and graded ideal_ \\(J\\) _of_ \\(R\\) _such that_ \\(J_{g}\
eq R_{g}\\)_,_ \\(abJ_{g}\\subseteq I\\) _but_ \\(abJ_{g}\
subseteq\\phi(I)\\)_, either_ \\(ab\\in I\\) _or_ \\(J_{g}\\subseteq I\\)_._
5. _For each nonunit_ \\(a\\in h(R)\\) _and graded ideals_ \\(J,K\\) _of_ \\(R\\) _such that_ \\(J_{g}\
eq R_{g}\\)_,_ \\(K_{g}\
eq R_{g}\\)_,_ \\(aJ_{g}K_{g}\\subseteq I\\) _but_ \\(aJ_{g}K_{g}\
subseteq\\phi(I)\\)_, either_ \\(aJ_{g}\\subseteq I\\) _or_ \\(K_{g}\\subseteq I\\)_._
6. _For each graded ideals_ \\(J,K,L\\) _of_ \\(R\\) _such that_ \\(J_{g}\
eq R_{g}\\)_,_ \\(K_{g}\
eq R_{g}\\)_,_ \\(L_{g}\
eq R_{g}\\)_,_ \\(J_{g}K_{g}L_{g}\\subseteq I\\) _but_ \\(J_{g}K_{g}L_{g}\
subseteq\\phi(I)\\)_, either_ \\(J_{g}K_{g}\\subseteq I\\) _or_ \\(L_{g}\\subseteq I\\)Proof.: \\((1)\\Rightarrow(2):\\) Let \\(a,b\\in R_{g}\\) be nonunits with \\(ab\
otin I.\\) Take \\(x\\in(I:_{R_{g}}ab).\\) Then we have \\(x\\in R_{g}\\) and \\(abx\\in I.\\) Since \\(ab\
otin I,\\ x\\) is nonunit. As \\(I\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R,\\) we conclude that \\(x\\in I\\) or \\(abx\\in\\phi(I).\\) Which implies that \\(x\\in I\\cup(\\phi(I):_{R_{g}}ab).\\) Thus, \\((I:_{R_{g}}ab)\\subseteq I\\cup(\\phi(I):_{R_{g}}ab).\\)
\\((2)\\Rightarrow(3):\\) Assume that \\((I:_{R_{g}}ab)\\subseteq I\\cup(\\phi(I):_{R_{g}}ab).\\) Then by [11], \\((I:_{R_{g}}ab)\\subseteq I\\) or \\((I:_{R_{g}}ab)\\subseteq(\\phi(I):_{R_{g}}ab).\\) In the first case, there is nothing to prove. Assume that \\((I:_{R_{g}}ab)\\subseteq(\\phi(I):_{R_{g}}ab).\\) Since the reverse inclusion always holds, we have the equality \\((I:_{R_{g}}ab)=(\\phi(I):_{R_{g}}ab).\\)
\\((3)\\Rightarrow(4):\\) Suppose that \\(abJ_{g}\\subseteq I\\) but \\(abJ_{g}\
subseteq\\phi(I)\\) for some nonunits \\(a,b\\in R_{g}\\) and graded ideal \\(J\\) of \\(R\\) with \\(J_{g}\
eq R_{g}.\\) If \\(ab\\in I,\\) then there is nothing to prove. So assume that \\(ab\
otin I.\\) Since \\(J_{g}\\subseteq(I:_{R_{g}}ab)\\) and \\(J_{g}\
subseteq(\\phi(I):_{R_{g}}ab),\\) by \\((3),\\ J_{g}\\subseteq(I:_{R_{g}}ab)\\subseteq I\\) which completes the proof.
\\((4)\\Rightarrow(5):\\) Suppose that \\(aJ_{g}K_{g}\\subseteq I\\) and \\(aJ_{g}K_{g}\\subseteq\\phi(I)\\). Assume that \\(aJ_{g}\
subseteq I\\) and \\(K_{g}\
subseteq I\\). Then there exists \\(x\\in J_{g}\\) such that \\(ax\
otin I\\). Also, since \\(aJ_{g}K_{g}\
subseteq\\phi(I)\\), there exists \\(y\\in J_{g}\\) such that \\(ayK_{g}\
subseteq\\phi(I)\\). Now, assume that \\(axK_{g}\
subseteq\\phi(I)\\). Since \\(a,x\\) are nonunits and \\(axK_{g}\\subseteq I\\), we have either \\(ax\\in I\\) or \\(K_{g}\\subseteq I\\), a contradiction. So, we get \\(axK_{g}\\subseteq\\phi(I)\\). Also, we have \\(a(x+y)K_{g}\\subseteq I\\) and \\(a(x+y)K_{g}\
subseteq\\phi(I)\\), which implies that \\(a(x+y)\\in I\\). Since \\(ayK_{g}\\subseteq I\\), \\(ayK_{g}\
subseteq\\phi(I)\\) and \\(K_{g}\
subseteq I\\), we get \\(ay\\in I\\). Thus, we obtain \\(ax\\in I\\) giving a contradiction.
\\((5)\\Rightarrow(6):\\) Suppose that \\(J_{g}K_{g}L_{g}\\subseteq I\\) but \\(J_{g}K_{g}L_{g}\
subseteq\\phi(I)\\) for some graded ideals \\(J,K\\) and \\(L\\) of \\(R\\) with \\(J_{g}\
eq R_{g}\\), \\(K_{g}\
eq R_{g}\\) and \\(L_{g}\
eq R_{g}\\). Assume that \\(J_{g}K_{g}\
subseteq I\\) and \\(L_{g}\
subseteq I\\). Then there exists \\(b\\in J_{g}\\) such that \\(bK_{g}\
subseteq I\\). Also, since \\(J_{g}K_{g}L_{g}\
subseteq\\phi(I)\\), \\(aK_{g}L_{g}\
subseteq\\phi(I)\\) for some \\(a\\in J_{g}\\). Then we get \\(aK_{g}\\subseteq I\\) since \\(aK_{g}L_{g}\\subseteq I\\) and \\(aK_{g}L_{g}\
subseteq\\phi(I)\\). Suppose that \\(bK_{g}L_{g}\
subseteq\\phi(I)\\). By \\((5)\\), this gives \\(bK_{g}\\subseteq I\\) or \\(L_{g}\\subseteq I\\), which is a contradiction. So, \\(bK_{g}L_{g}\\subseteq\\phi(I)\\). As \\((a+b)K_{g}L_{g}\\subseteq I\\) and \\((a+b)K_{g}L_{g}\
subseteq\\phi(I)\\), we have \\((a+b)K_{g}\\subseteq I\\). This implies \\(bK_{g}\\subseteq I\\), a contradiction.
\\((6)\\Rightarrow(1):\\) Let \\(abc\\in I-\\phi(I)\\) for some nonunits \\(a,b,c\\in R_{g}\\). Then \\((Ra)_{g}(Rb)_{g}(Rc)_{g}\\subseteq I\\) and \\((Ra)_{g}(Rb)_{g}(Rc)_{g}\
subseteq\\phi(I)\\). Hence, \\((Ra)_{g}(Rb)_{g}\\subseteq I\\) or \\((Rc)_{g}\\subseteq I\\) showing that \\(ab\\in I\\) or \\(c\\in I\\), as desired.
**Definition 2.17**.: _Let \\(I\\) be a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) and \\(a,b,c\\in R_{g}\\) be nonunits. Then \\((a,b,c)\\) is called a \\(g\\)-\\(\\phi\\)-\\(1\\)-triple zero of \\(I\\) if \\(abc\\in\\phi(I)\\), \\(ab\
otin I\\) and \\(c\
otin I\\)._
**Theorem 2.18**.: _Suppose that \\(I\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) and \\((a,b,c)\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-triple zero of \\(I\\). Then \\(abI_{g}\\subseteq\\phi(I)\\)._
Proof.: Now, \\(abc\\in\\phi(I),\\)\\(ab\
otin I\\) and \\(c\
otin I\\). Suppose that \\(abI_{g}\
subseteq\\phi(I)\\). Then there exists \\(x\\in I_{g}\\) such that \\(abx\
otin\\phi(I)\\). So, \\(ab(c+x)\\in I-\\phi(I)\\). If \\(c+x\\) is unit, then \\(ab\\in I\\), a contradiction. Now, assume that \\(c+x\\) is nonunit and so we get \\(ab\\in I\\) or \\(c\\in I\\), a contradiction. Thus, we have \\(abI_{g}\\subseteq\\phi(I)\\).
**Theorem 2.19**.: _Suppose that \\(I\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) and \\((a,b,c)\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-triple zero of \\(I\\). If \\(ac,bc\
otin I\\), then \\(acI_{g}\\subseteq\\phi(I)\\), \\(bcI_{g}\\subseteq\\phi(I)\\), \\(aI_{g}^{2}\\subseteq\\phi(I)\\), \\(bI_{g}^{2}\\subseteq\\phi(I)\\) and \\(cI_{g}^{2}\\subseteq\\phi(I)\\)._
Proof.: Suppose that \\(acI_{g}\
subseteq\\phi(I)\\). Then there exists \\(x\\in I_{g}\\) such that \\(acx\
otin\\phi(I)\\). This implies that \\(a(b+x)c\\in I-\\phi(I)\\). If \\(b+x\\) is unit, then \\(ac\\in I\\) which is a contradiction. Thus \\(b+x\\) is nonunit. Since \\(I\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal, we conclude either \\(a(b+x)\\in I\\) or \\(c\\in I\\), which implies that \\(ab\\in I\\) or \\(c\\in I\\), a contradiction. Thus, \\(acI_{g}\\subseteq\\phi(I)\\). By using similar argument, we have \\(bcI_{g}\\subseteq\\phi(I)\\)Now, we will show that \\(aI_{g}^{2}\\subseteq\\phi(I)\\). Suppose not. Then there exist \\(x,y\\in I_{g}\\) such that \\(axy\
otin\\phi(I)\\). It implies that \\(a(b+x)(c+y)\\in I-\\phi(I)\\). If \\((b+x)\\) is unit, then \\(a(c+y)\\in I\\) which gives \\(ac\\in I\\), a contradiction. Similarly, \\((c+y)\\) is nonunit. Then either \\(a(b+x)\\in I\\) or \\(c+y\\in I\\) implying that \\(ab\\in I\\) or \\(c\\in I\\). Thus, we have \\(aI_{g}^{2}\\subseteq\\phi(I)\\). Similarly, we get \\(bI_{g}^{2}\\subseteq\\phi(I)\\) and \\(cI_{g}^{2}\\subseteq\\phi(I)\\).
**Theorem 2.20**.: _Suppose that \\(I\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) and \\((a,b,c)\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-triple zero of \\(I\\). If \\(ac,bc\
otin I\\), then \\(I_{g}^{3}\\subseteq\\phi(I)\\)._
Proof.: Suppose that \\(I_{g}^{3}\
subseteq\\phi(I)\\). Then there exist \\(x,y,z\\in I_{g}\\) such that \\(xyz\
otin\\phi(I)\\), and then \\((a+x)(b+y)(c+z)\\in I-\\phi(I)\\). If \\(a+x\\) is unit, then we obtain that \\((b+y)(c+z)=bc+bz+cy+yz\\in I\\) and so \\(bc\\in I\\), which is a contradiction. Similarly, we can show that \\(b+y\\) and \\(c+z\\) are nonunits. Then we get \\((a+x)(b+y)\\in I\\) or \\(c+z\\in I\\). This gives \\(ab\\in I\\) or \\(c\\in I\\), a contradiction. Hence, \\(I_{g}^{3}\\subseteq\\phi(I)\\).
**Theorem 2.21**.: _Let \\(R\\) be a \\(G\\)-graded ring, \\(g\\in G\\) and \\(x\\in R_{g}\\) be nonunit. Suppose that \\((0:x)\\subseteq Rx\\). Then \\(Rx\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) with \\(\\phi\\leq\\phi_{2}\\) if and only if \\(Rx\\) is a \\(g\\)-\\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: Suppose that \\(Rx\\) is a \\(g\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) with \\(\\phi\\leq\\phi_{2}\\). Then it is also a \\(g\\)-\\(\\phi_{2}\\)-\\(1\\)-absorbing prime ideal of \\(R\\) by the sense of Proposition 2.5. Let \\(abc\\in Rx\\) for some nonunits \\(a,b,c\\in R_{g}\\). If \\(abc\
otin(Rx)^{2}\\), then \\(ab\\in Rx\\) or \\(c\\in Rx\\). Suppose that \\(abc\\in\\left(Rx\\right)^{2}\\). We have \\(ab(c+x)\\in Rx\\). If \\(c+x\\) is unit, we are done. Hence, we can assume that \\(c+x\\) is nonunit. Assume that \\(ab(c+x)\
otin\\left(Rx\\right)^{2}\\). Then we get either \\(ab\\in Rx\\) or \\(c+x\\in Rx\\) implying \\(ab\\in Rx\\) or \\(c\\in Rx\\). Now, assume that \\(ab(c+x)\\in\\left(Rx\\right)^{2}\\). This gives \\(xab\\in\\left(Rx\\right)^{2}\\) and so there exists \\(t\\in R\\) such that \\(xab=x^{2}t\\). Thus we have \\(ab-xt\\in(0:x)\\subseteq Rx\\). Therefore, \\(ab\\in Rx\\), as needed. The converse is clear.
**Remark 2.22**.: _Note that the condition \\((0:x)\\subseteq Rx\\) in Theorem 2.21 trivially holds for every regular element \\(x\\)._
**Theorem 2.23**.: _Let \\(R\\) be a graded ring and \\(I\\) be a graded ideal of \\(R\\) with \\(I_{e}\
eq R_{e}\\). Suppose that \\(R_{e}\\) is not local ring and \\((\\phi(I):_{R_{e}}a)\\) is not maximal ideal of \\(R_{e}\\) for each \\(a\\in I_{e}\\). Then \\(I\\) is an \\(e\\)-\\(\\phi\\)-prime ideal of \\(R\\) if and only if \\(I\\) is an \\(e\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: Suppose that \\(I\\) is an \\(e\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\). Let \\(a,b\\in R_{e}\\) such that \\(ab\\in I-\\phi(I)\\). If \\(a\\) or \\(b\\) is unit, then \\(a\\in I\\) or \\(b\\in I\\), as needed. Suppose that \\(a,b\\) are nonunits. Since \\(ab\
otin\\phi(I)\\), \\((\\phi(I):_{R_{e}}ab)\\) is proper. Let \\(\\mathfrak{m}\\) be a maximal ideal of \\(R_{e}\\) containing \\((\\phi(I):_{R_{e}}ab)\\). Since \\(R_{e}\\) is not local ring, there exists another maximal ideal \\(\\mathfrak{q}\\) of \\(R_{e}\\). Now, choose \\(c\\in\\mathfrak{q}-\\mathfrak{m}\\). Then \\(c\
otin(\\phi(I):_{R_{e}}ab)\\), and so we have \\((ca)b\\in I-\\phi(I)\\). Since \\(I\\) is an \\(e\\)-\\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\), we get either \\(ca\\in I\\) or \\(b\\in I\\). If \\(b\\in I\\), then we are done. Suppose that \\(ca\\in I\\). Then as \\(c\
otin\\mathfrak{m}\\), there exists \\(x\\in R_{e}\\) such that \\(1+xc\\in\\mathfrak{m}\\). Note that \\(1+xc\\) is nonunit. If \\(1+xc\
otin(\\phi(I):_{R_{e}}ab)\\), then we have \\((1+xc)ab\\in I-\\phi(I)\\) implying \\((1+xc)a\\in I\\) and so \\(a\\in I\\) since \\(ca\\in I\\). Assume that \\(1+xc\\in(\\phi(I):_{R_{e}}ab)\\), that is, \\(ab(1+xc)\\in\\phi(I)\\). Choose \\(y\\in\\mathfrak{m}-(\\phi(I):_{R_{e}}ab)\\). Then we have \\((1+xc+y)ab\\in I-\\phi(I)\\). On the other hand, since \\(1+xc+y\\in\\mathfrak{m}\\), \\(1+xc+y\\) is nonunit. This implies that \\((1+xc+y)a\\in I\\). Also, since \\(yab\\in I-\\phi(I)\\), we get \\(ya\\in I\\). Then we have \\(a=(1+xc+y)a-x(ca)-ya\\in I\\). Therefore, \\(I\\) is an \\(e\\)-\\(\\phi\\)-prime ideal of \\(R\\). The converse follows from Proposition 2.5.
Let \\(R\\) be a \\(G\\)-graded ring and \\(J\\) be a graded ideal of \\(R\\). Then \\(R/J\\) is a \\(G\\)-graded ring by \\((R/J)_{g}=(R_{g}+J)/J\\) for all \\(g\\in G\\). Moreover, we have the following:
Proposition 2.24 ([15], Lemma 3.2): _Let \\(R\\) be a graded ring, \\(J\\) be a graded ideal of \\(R\\) and \\(I\\) be an ideal of \\(R\\) such that \\(J\\subseteq I\\). Then \\(I\\) is a graded ideal of \\(R\\) if and only if \\(I/J\\) is a graded ideal of \\(R/J\\)._
For any graded ideal \\(J\\) of \\(R\\) define a function \\(\\phi_{J}:GI(R/J)\\to GI(R/J)\\cup\\{\\emptyset\\}\\) by \\(\\phi_{J}(I/J)=(\\phi(I)+J)/J\\) where \\(J\\subseteq I\\) and \\(\\phi_{J}(I/J)=\\emptyset\\) if \\(\\phi(I)=\\emptyset\\). Also, note that \\(\\phi_{J}(I/J)\\subseteq I/J.\\)
Theorem 2.25.: _Let \\(I\\) be a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\). Then \\(I/\\phi(I)\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R/\\phi(I)\\)._
Proof.: Let \\(0+\\phi(I)\
eq(a+\\phi(I))(b+\\phi(I))(c+\\phi(I))=abc+\\phi(I)\\in I/\\phi(I)\\) for some nonunits \\(a+\\phi(I),b+\\phi(I),c+\\phi(I)\\in R/\\phi(I)\\). Then \\(a,b,c\\) are nonunits in \\(R\\) and \\(abc\\in I-\\phi(I)\\). Since \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\), \\(ab\\in I\\) or \\(c\\in I\\), and then we get \\((a+\\phi(I))(b+\\phi(I))=ab+\\phi(I)\\in I/\\phi(I)\\) or \\(c+\\phi(I)\\in I/\\phi(I)\\). Hence, \\(I/\\phi(I)\\) is a graded weakly \\(1\\)-absorbing prime ideal of \\(R/\\phi(I)\\).
Similarly, one can prove the following:
Theorem 2.26.: _Let \\(I\\), \\(J\\) be two graded ideals of \\(R\\) with \\(J\\subseteq I\\) and \\(I\\) be a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\). Then \\(I/J\\) is a graded \\(\\phi_{J}\\)-\\(1\\)-absorbing prime ideal of \\(R/J\\)._
Theorem 2.27.: _Let \\(I/\\phi(I)\\) be a graded weakly \\(1\\)-absorbing prime ideal of \\(R/\\phi(I)\\) and \\(U(R/\\phi(I))=\\{a+\\phi(I):a\\in U(R)\\}\\). Then \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: Let \\(a,b,c\\in h(R)\\) be nonunits such that \\(abc\\in I-\\phi(I)\\). Then we have \\(0+\\phi(I)\
eq(a+\\phi(I))(b+\\phi(I))(c+\\phi(I))=abc+\\phi(I)\\in I/\\phi(I)\\). Since \\(U(R/\\phi(I))=\\{a+\\phi(I):a\\in U(R)\\}\\), \\(a+\\phi(I),b+\\phi(I),c+\\phi(I)\\) are nonunits in \\(R/\\phi(I)\\). Since \\(I/\\phi(I)\\) is a graded weakly \\(1\\)-absorbing prime ideal, we have either \\((a+\\phi(I))(b+\\phi(I))=ab+\\phi(I)\\in I/\\phi(I)\\) or \\(c+\\phi(I)\\in I/\\phi(I)\\), which implies \\(ab\\in I\\) or \\(c\\in I\\). Therefore, \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\).
Let \\(R\\) be a \\(G\\)-graded ring and \\(S\\subseteq h(R)\\) be a multiplicative set. Then \\(S^{-1}R\\) is a \\(G\\)-graded ring with \\((S^{-1}R)_{g}=\\{\\frac{a}{s}:a\\in R_{h},s\\in S\\cap R_{h{g}^{-1}}\\}\\) for all \\(g\\in G\\). If \\(I\\) is a graded ideal of \\(R\\), then \\(S^{-1}I\\) is a graded ideal of \\(S^{-1}R\\). Consider the function \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\). Define \\(\\phi_{S}:GI(S^{-1}R)\\to GI(S^{-1}R)\\cup\\{\\emptyset\\}\\) by \\(\\phi_{S}(S^{-1}I)=S^{-1}\\phi(I)\\) and \\(\\phi_{S}(S^{-1}I)=\\emptyset\\) if \\(\\phi(I)=\\emptyset\\). It is easy to see that \\(\\phi_{S}(S^{-1}I)\\subseteq S^{-1}I\\).
Theorem 2.28.: _Let \\(R\\) be a graded ring and \\(S\\subseteq h(R)\\) be a multiplicative set. If \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) with \\(I\\cap S=\\emptyset\\), then \\(S^{-1}I\\) is a graded \\(\\phi_{S}\\)-\\(1\\)-absorbing prime ideal of \\(S^{-1}R\\)._
Proof.: Let \\(\\frac{a}{s}\\frac{b}{t}\\frac{c}{u}\\in S^{-1}I-\\phi_{S}(S^{-1}I)\\) for some nonunits in \\(h(S^{-1}R)\\). Then there exists \\(v\\in S\\) such that \\(vabc\\in I\\). If \\(vabc\\in\\phi(I)\\), then we have \\(\\frac{a}{s}\\frac{b}{t}\\frac{c}{u}=\\frac{vabc}{vstu}\\in S^{-1}\\phi(I)=\\phi_{S}( S^{-1}I)\\) which is a contradiction. So we get \\(vabc\\in I-\\phi(I)\\). Since \\(va,b,c\\) are nonunits in \\(R\\) and \\(I\\) is a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal, we get \\(vab\\in I\\) or \\(c\\in I\\). This implies \\(\\frac{a}{s}\\frac{b}{t}=\\frac{vab}{vst}\\in S^{-1}I\\) or \\(\\frac{c}{u}\\in S^{-1}I\\). Hence, \\(S^{-1}I\\) is a graded \\(\\phi_{S}\\)-\\(1\\)-absorbing prime ideal of \\(S^{-1}R\\).
Let \\(R\\) and \\(T\\) be two \\(G\\)-graded rings. Then \\(R\\times T\\) is a \\(G\\)-graded ring by \\((R\\times T)_{g}=R_{g}\\times T_{g}\\) for all \\(g\\in G\\). Moreover, we have the following:
**Proposition 2.29**.: _([15], Lemma 3.12) Let \\(R\\) and \\(T\\) be two graded rings. Then \\(L=I\\times J\\) is a graded ideal of \\(R\\times T\\) if and only if \\(I\\) is a graded ideal of \\(R\\) and \\(J\\) is a graded ideal of \\(T\\)._
Let \\(R\\) and \\(T\\) be two graded rings, \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\), \\(\\psi:GI(T)\\to GI(T)\\cup\\{\\emptyset\\}\\) be two functions. Suppose that \\(\\theta:GI(R\\times T)\\to GI(R\\times T)\\cup\\{\\emptyset\\}\\) is a function defined by \\(\\theta(I\\times J)=\\phi(I)\\times\\psi(J)\\) for each graded ideals \\(I,J\\) of \\(R,T\\) respectively. Then \\(\\theta\\) is denoted by \\(\\theta=\\phi\\times\\psi\\).
**Theorem 2.30**.: _Let \\(R\\) and \\(T\\) be two graded rings, \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\), \\(\\psi:GI(T)\\to GI(T)\\cup\\{\\emptyset\\}\\) be two functions. Suppose that \\(\\theta=\\phi\\times\\psi\\). If \\(L=I\\times J\\) is a graded \\(\\theta\\)-\\(1\\)-absorbing prime ideal of \\(R\\times T\\), then \\(I\\) is a graded \\(\\phi\\)-prime ideal of \\(R\\) and \\(J\\) is a graded \\(\\psi\\)-prime ideal of \\(T\\)._
Proof.: Let \\(a,b\\in h(R)\\) such that \\(ab\\in I-\\phi(I)\\). Then we have \\((a,0)(1,0)(b,0)=(ab,0)\\in L-\\theta(L)\\) for some nonunits \\((a,0),(1,0),(b,0)\\in h(R\\times T)\\). Since \\(L\\) is a graded \\(\\theta\\)-\\(1\\)-absorbing prime ideal of \\(R\\times T\\), we get either \\((a,0)(1,0)=(a,0)\\in L\\) or \\((b,0)\\in L\\) implying that \\(a\\in I\\) or \\(b\\in I\\). Therefore, \\(I\\) is a graded \\(\\phi\\)-prime ideal of \\(R\\). Similarly, \\(J\\) is a graded \\(\\psi\\)-prime ideal of \\(T\\).
**Theorem 2.31**.: _Let \\(R\\) and \\(T\\) be two graded rings, \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\), \\(\\psi:GI(T)\\to GI(T)\\cup\\{\\emptyset\\}\\) be two functions. Suppose that \\(\\theta=\\phi\\times\\psi\\). If \\(L=I\\times J\\) is a graded \\(\\theta\\)-\\(1\\)-absorbing prime ideal of \\(R\\times T\\) and \\(\\theta(L_{e})\
eq L_{e}\\), then \\(I=R\\) or \\(J=T\\)._
Proof.: Since \\(\\theta(L_{e})\
eq L_{e}\\), either \\(\\phi(I_{e})\
eq I_{e}\\) or \\(\\psi(J_{e})\
eq J_{e}\\). Suppose that \\(\\phi(I_{e})\
eq I_{e}\\). Then there exists \\(a\\in I_{e}-\\phi(I_{e})\\) that is \\(a\\in I-\\phi(I)\\). This implies that \\((1,0)(1,0)(a,1)=(a,0)\\in L-\\theta(L)\\). Then we have either \\(1\\in I\\) or \\(1\\in J\\), that is \\(I=R\\) or \\(J=T\\). Similarly, if \\(\\psi(J_{e})\
eq J_{e}\\), we have either \\(I=R\\) or \\(J=T\\).
**Theorem 2.32**.: _Let \\(R\\) and \\(T\\) be two graded rings, \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\), \\(\\psi:GI(T)\\to GI(T)\\cup\\{\\emptyset\\}\\) be two functions. Suppose that \\(\\theta=\\phi\\times\\psi\\). Suppose that \\(L=I\\times J\\) is a graded \\(\\theta\\)-\\(1\\)-absorbing prime ideal of \\(R\\times T\\) and \\(\\theta(L_{e})\
eq L_{e}.\\) If \\(\\phi(R_{e})\
eq R_{e}\\) is not a unique maximal ideal of \\(R_{e}\\) and \\(\\psi(T_{e})\
eq T_{e}\\) is not a unique maximal ideal of \\(T_{e}\\), then either \\(L=R\\times J\\) and \\(J_{e}\\) is a prime ideal of \\(T_{e}\\) or \\(L=I\\times T\\) and \\(I_{e}\\) is a prime ideal of \\(T_{e}\\)_
Proof.: By Theorem 2.31, we know that \\(I=R\\) or \\(J=T.\\) Without loss of generality, we may assume that \\(I=R.\\) Let \\(xy\\in J_{e}\\) for some elemts \\(x,y\\in T_{e}\\). If \\(x\\) or \\(y\\) is unit, we are done. So assume that \\(x,y\\) are nonunits in \\(T_{e}\\). Since \\(\\phi(R_{e})\
eq R_{e}\\) is not a unique maximal ideal of \\(R_{e}\\), there exists a nonunit element \\(a\\in R_{e}-\\phi(R_{e})\\). Then we have \\((a,1)(1,x)(1,y)=(a,xy)\\in L-\\theta(L)\\). Since \\(I\\) is a graded \\(\\theta\\)-\\(1\\)-absorbing prime ideal of \\(R\\times T\\), we have either \\((a,1)(1,x)=(a,x)\\in L\\) or \\((1,y)\\in L\\) implying \\(x\\in J\\) or \\(y\\in J\\) that is either \\(x\\in T_{e}\\cap J=J_{e}\\) or \\(y\\in T_{e}\\cap J=J_{e}\\). Therefore, \\(J_{e}\\) is a prime ideal of \\(T_{e}\\).
## 3. Graded von Neumann regular Rings
In this section, we introduce and study the concept of graded von Neumann regular rings. We prove that if \\(R\\) is a graded von Neumann regular ring and \\(x\\in h(R)\\), then \\(Rx\\) is a graded almost \\(1\\)-absorbing prime ideal of \\(R\\) (Theorem 3.8).
**Definition 3.1**.: _Let \\(R\\) be a \\(G\\)-graded ring. Then \\(R\\) is said to be a graded von Neumann regular ring if for each \\(a\\in R_{g}\\) (\\(g\\in G\\)), there exists \\(x\\in R_{g^{-1}}\\) such that \\(a=a^{2}x\\)._A graded commutative ring \\(R\\) with unity is said to be a graded field if every nonzero homogeneous element of \\(R\\) is unit [15]. Clearly, every field is a graded field, however, the converse is not true in general, see ([15], Example 3.6).
**Lemma 3.2**.: _Let \\(R\\) be a graded ring. If \\(r\\in R_{g}\\) is a unit, then \\(r^{-1}\\in R_{g^{-1}}\\)._
Proof.: By ([12], Proposition 1.1.1), \\(r^{-1}\\in h(R)\\), which means that \\(r^{-1}\\in R_{h}\\) for some \\(h\\in G\\). Now, \\(rr^{-1}=1\\in R_{e}\\) and \\(rr^{-1}\\in R_{g}R_{h}\\subseteq R_{gh}\\). So, \\(0\
eq rr^{-1}\\in R_{e}\\cap R_{gh}\\), which implies that \\(gh=e\\), that is \\(h=g^{-1}\\). Hence, \\(r^{-1}\\in R_{g^{-1}}\\).
**Example 3.3**.: _Every graded field is a graded von Neumann regular ring. To see this, let \\(R\\) be a graded field and \\(a\\in R_{g}\\). If \\(a=0\\), then \\(x=0\\in R_{g^{-1}}\\) satisfies \\(a=a^{2}x\\). If \\(a\
eq 0\\), then \\(a\\) is unit, and then by Lemma 3.2, \\(x=a^{-1}\\in R_{g^{-1}}\\) with \\(a=a^{2}x\\). Hence, \\(R\\) is a graded von Neumann regular ring._
**Lemma 3.4**.: _If \\(R\\) is a graded ring, then \\(R_{e}\\) contains all homogeneous idempotent elements of \\(R\\)._
Proof.: Let \\(x\\in h(R)\\) be an idempotent element. Then \\(x\\in R_{g}\\) for some \\(g\\in G\\) and \\(x^{2}=x\\). If \\(x=0\\), then \\(x\\in R_{e}\\) and we are done. Suppose that \\(x\
eq 0\\). Since \\(x^{2}=x\\cdot x\\in R_{g}R_{g}\\subseteq R_{g^{2}}\\), \\(0\
eq x\\in R_{g}\\cap R_{g^{2}}\\), and then \\(g^{2}=g\\) which implies that \\(g=e\\), and hence \\(x\\in R_{e}\\).
**Proposition 3.5**.: _Let \\(R\\) be a graded ring. If \\(R\\) is a Boolean ring, then \\(R\\) is trivially graded._
Proof.: It is enough to prove that \\(R_{g}=\\{0\\}\\) for all \\(g\
eq e\\). Let \\(g\\in G-\\{e\\}\\) and \\(x\\in R_{g}\\). Since \\(R\\) is Boolean, \\(x\\) is an idempotent, and then \\(x\\in R_{e}\\) by Lemma 3.4. So, \\(x\\in R_{g}\\cap R_{e}\\) which implies the either \\(x=0\\) or \\(g=e\\). Since \\(g\
eq e\\), \\(x=0\\), and hence \\(R\\) is trivially graded.
**Example 3.6**.: _Every Boolean graded ring is a graded von Neumann regular ring. To see this, let \\(R\\) be a Boolean graded ring. Then by Proposition 3.5, \\(R\\) is trivially graded. Assume that \\(a\\in R_{g}\\). If \\(g\
eq e\\), then \\(a=0\\) and then \\(x=0\\in R_{g^{-1}}\\) with \\(a=a^{2}x\\). If \\(g=e\\), then \\(a\\) is an idempotent, and then \\(x=a\\in R_{e}=R_{g^{-1}}\\) with \\(a^{2}x=ax=a\\cdot a=a^{2}=a\\). Hence, \\(R\\) is a graded von Neumann regular ring._
**Lemma 3.7**.: _Let \\(R\\) be a graded von Neumann regular ring and \\(x\\in h(R)\\). Then \\(Rx=Ra\\) for some idempotent element \\(a\\in R_{e}\\)._
Proof.: Since \\(x\\in h(R)\\), \\(x\\in R_{g}\\) for some \\(g\\in G\\), and then there exists \\(y\\in R_{g^{-1}}\\) such that \\(x=x^{2}y\\) as \\(R\\) is graded von Neumann regular. Choose \\(a=xy\\), then \\(a=xy\\in R_{g}R_{g^{-1}}\\subseteq R_{e}\\), and \\(a^{2}=(xy)\\cdot(xy)=(x^{2}y)y=xy=a\\), which means that \\(a\\) is an idempotent. Now, \\(a=xy=yx\\in Rx\\), so \\(Ra\\subseteq Rx\\). On the other hand, \\(x=x^{2}y=x(xy)=xa\\in Ra\\), so \\(Rx\\subseteq Ra\\). Hence, \\(Rx=Ra\\).
**Theorem 3.8**.: _Let \\(R\\) be a graded von Neumann regular ring and \\(x\\in h(R)\\). Then \\(Rx\\) is a graded almost \\(1\\)-absorbing prime ideal of \\(R\\)._
Proof.: By [[3], Lemma 1], \\(I=Rx\\) is a graded ideal of \\(R\\). By Lemma 3.7, \\(I=Rx=Ra\\) for some idempotent \\(a\\in R_{e}\\), and then \\(I^{2}=I\\) which implies that \\(I=Rx\\) is a graded almost \\(1\\)-absorbing prime ideal of \\(R\\).
**Proposition 3.9**.: _Let \\(R\\) be a graded von Neumann regular ring and \\(x\\in h(R)\\). Then there exists an idempotent graded ideal \\(J\\) of \\(R\\) such that \\(R=Rx+J\\) and \\(Rx\\cap J=\\{0\\}\\)._Proof.: By Lemma 3.7, \\(Rx=Ra\\) for some an idempotent \\(a\\in R_{e}\\). Choose \\(J=R(1-a)\\), then as \\(1-a\\in R_{e}\\subseteq h(R)\\), \\(J\\) is a graded ideal of \\(R\\) by [[3], Lemma 1]. Also, \\((1-a)^{2}=1-2a+a^{2}=1-2a+a=1-a\\) which means that \\(1-a\\) is an idempotent, and so \\(J\\) is an idempotent ideal. Let \\(r\\in R\\). Then \\(r=ra+r(1-a)\\in Ra+R(1-a)=Rx+J\\), and hence \\(R=Rx+J\\). Assume that \\(y\\in Rx\\cap J=Ra\\cap J\\). Then \\(y=\\alpha a\\) and \\(y=\\beta(1-a)\\) for some \\(\\alpha,\\beta\\in R\\). Now, \\(ya=\\alpha a^{2}=\\alpha a=y\\), and \\(ya=\\beta(1-a)a=\\beta a-\\beta a^{2}=\\beta a-\\beta a=0\\). So, \\(y=0\\), and hence \\(Rx\\cap J=\\{0\\}\\).
**Corollary 3.10**.: _If \\(R\\) is a graded von Neumann regular ring, then \\(R\\) is a direct sum of two idempotent graded ideals of \\(R\\)._
Proof.: Apply Proposition 3.9 and Lemma 3.7.
## References
* [1] R. Abu-Dawwas, Graded semiprime and graded weakly semiprime ideals, Italian Journal of Pure and Applied Mathematics, 36 (2016), 535-542.
* [2] R. Abu-Dawwas, M. Bataineh and H. Shashan, Graded generalized 2-absorbing submodules, Beitrage zur Algebra und Geometrie/Contributions to Algebra and Geometry, (2020), DOI 10.1007/s13366-020-00544-1.
* [3] R. Abu-Dawwas, E. Yildiz, U. Tekir and S. Koc, On graded 1-absorbing prime ideals, Sao Paulo Journal of Mathematical Sciences, (2021), [https://doi.org/10.1007/s40863-021-00218-3](https://doi.org/10.1007/s40863-021-00218-3).
* [4] A. S. Alshehry and R. Abu-Dawwas, On graded \\(\\phi\\)-prime submodules, arXiv:2102.04155, submitted.
* [5] K. Al-Zoubi, R. Abu-Dawwas and S. Ceken, On graded 2-absorbing and graded weakly 2-absorbing ideals, Haceteppe Journal of Mathematics and Statistics, 48 (2019), 724-731.
* [6] S. E. Atani, On graded weakly prime ideals, Turkish Journal of Mathematics, 30 (2006), 351-358.
* [7] M. Bataineh and R. Abu-Dawwas, On graded 2-prime ideals, Mathematics, (2021), [https://doi.org/10.3390/math9050493](https://doi.org/10.3390/math9050493).
* [8] R. Hazrat, Graded rings and graded Grothendieck groups, Cambridge University press, 2016.
* [9] A. Jaber, M. Bataineh and H. Khashan, Almost graded prime ideals, Journal of Mathematics and Statistics, 4 (4) (2008), 231-235.
* [10] S. Koc, U. Tekir and E. Yildiz, On weakly 1-absorbing prime ideals, Ricerche di Matematica (2021).
* [11] N. H. McCoy, A note on finite unions of ideals and subgroups. Proceedings of the American Mathematical Society, 8(4) (1957), 633-637.
* [12] C. Nastasescu and F. Oystaeyen, Methods of graded rings, Lecture Notes in Mathematics, 1836, Springer-Verlag, Berlin, 2004.
* [13] M. Refai and K. Al-Zoubi, On graded primary ideals, Turkish Journal of Mathematics, 28 (2004), 217-229.
* [14] M. Refai, M. Hailat and S. Obiedat, Graded radicals and graded prime spectra, Far East Journal of Mathematical Sciences, (2000), 59-73.
* [15] H. Saber, T. Alraqad and R. Abu-Dawwas, On graded \\(s\\)-prime submodules, Aims Mathematics, 6 (2020), 2510-2524.
* [16] F. Sohelinia and A. Y. Darani, On graded 2-absorbing and graded weakly 2-absorbing primary ideals, Kyungpook Mathematical Journal, 57 (2017), 559-580.
* [17] U. Tekir, S. Koc, R. Abu-Dawwas and E. Yildiz, Graded weakly 1-absorbing prime ideals, submitted.
* [18] R. N. Uregen, U. Tekir, K. P. Shum and S. Koc, On graded 2-absorbing quasi primary ideals, Southeast Asian Bulletin of Mathematics, 43 (4) (2019), 601-613.
* [19] E. Yildiz, U. Tekir and S. Koc, On \\(\\phi\\)-1-absorbing prime ideals, Beitrage zur Algebra und Geometrie/Contributions to Algebra and Geometry, (2021), [https://doi.org/10.1007/s13366-020-00557-w](https://doi.org/10.1007/s13366-020-00557-w). | Let \\(G\\) be a group, \\(R\\) be a \\(G\\)-graded commutative ring with nonzero unity and \\(GI(R)\\) be the set of all graded ideals of \\(R\\). Suppose that \\(\\phi:GI(R)\\to GI(R)\\cup\\{\\emptyset\\}\\) is a function. In this article, we introduce and study the concept of graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals. A proper graded ideal \\(I\\) of \\(R\\) is called a graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal of \\(R\\) if whenever \\(a,b,c\\) are homogeneous nonunit elements of \\(R\\) such that \\(abc\\in I-\\phi(I)\\), then \\(ab\\in I\\) or \\(c\\in I\\). Several properties of graded \\(\\phi\\)-\\(1\\)-absorbing prime ideals have been examined.
Key words and phrases:Graded \\(\\phi\\)-prime ideal; graded \\(1\\)-absorbing prime ideal; graded \\(\\phi\\)-\\(1\\)-absorbing prime ideal 2010 Mathematics Subject Classification: Primary 13A02; Secondary 16W50 | Condense the content of the following passage. | 291 |
Michael Shao
Hanying Zhou
Slava G. Turyshev
Chengxing Zhai
Navtej Saini
Russell Trahan
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-0899, USA
## 1 Introduction
This paper is an extension of our previous effort to study the capabilities of microsatellites (MicroSats) for detection of near-Earth objects (NEOs), which led to the conclusion that a constellation of 6 MicroSats is capable of detecting \\(\\sim\\)90% of NEOs with a diameter of 140 m in \\(\\sim\\)3 years (Shao et al., 2017; Zhai et al., 2018).
Here we describe a more capable constellation of MicroSats, and evaluate not just the "detection" statistics but also the statistics for "cataloging" the newly found NEOs. When a new NEO is detected, a single detection provides no information about its orbit. If the initial detection is not followed up with subsequent detections in the next few days, the object will be lost, i.e., when it is re-discovered at a later time, one will not be able to link the two observations and confirm that they relate to the same object. A cataloged NEO is one that is observed at least 3 times over a period of \\(\\sim\\)3 weeks. This set of three measurements results in a crude orbit such that a second cataloged observation several decades later can be linked to the first one.
We begin by briefly describing the technique of synthetic tracking (Shao et al., 2017). Synthetic tracking improves the signal-to-noise ratio (SNR) of an observation of a moving object by one or more orders of magnitude, enabling very small telescopes to have sensitivity equal to that of a much larger telescope. Such a dramatic increase in sensitivity makes it possible to consider deploying a constellation of small telescopes that is not only much less expensive but also of much higher performance than a single large ground- or space-based facility.
We then describe the simulation we performed using a constellation of 8 microsatellites relying on \\(\\sim\\)20 cm telescopes with large field of view (FOV) focal planes that can catalog 90% of 140 m NEOs in \\(\\sim\\) 3 years of observation. While this simulation is interesting by itself, providing very valuable results, we also perform a range of simulations to understand why a constellation of small telescopes so vastly outperforms a single large telescope. These additional simulations provide a quantitative measure of the effect that we call saturation.
The simplest way to understand saturation is to think of a single large telescope in orbit that scans the sky with a sensitivity down to some faint limiting magnitude, say 23 mag, and can cover 4\\(\\pi\\) steradians in 1 week. At 23 mag, a 140 m NEO (at opposition) can be detected at a distance of up to 0.8 AU from the Earth. On average, these objects will move close enough to be detected and stay detectable for \\(\\sim\\)3 months until they are no longer brighter than 23 mag. What would be gained by placing a 2-nd such telescope into operation? The answer is close to zero, because the 1-st telescope can scan the entire sky in 7 days, while the objects are detectable for an average of 120 days. Thus, the 1-st telescope is already \\(120/7\\approx 17\\) times beyond saturation; adding a 2-nd telescope at the same location will result in close to zero additional NEOs detected. At what distance would the 2-nd telescope have to be from the 1-st one to "avoid" saturation? The answer is, very roughly, 0.8 AU, which is the distance at which we can detect a 140 m NEO.
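As a quick back-of-the-envelope check of these numbers (using the 0.8 AU detection range quoted above and the typical 10 km/s Earth-relative velocity used later in Section 4; \\(t_{\\rm vis}\\) and \\(t_{\\rm survey}\\) are simply our shorthand for the visibility window and the full-sky survey period):

\\[ t_{\\rm vis}\\sim\\frac{0.8\\ {\\rm AU}}{10\\ {\\rm km/s}}\\approx 140\\ {\\rm days}\\ (\\sim 3\\ {\\rm to}\\ 4\\ {\\rm months}),\\qquad \\frac{t_{\\rm vis}}{t_{\\rm survey}}\\approx\\frac{120\\ {\\rm days}}{7\\ {\\rm days}}\\approx 17. \\]

Any survey that revisits the whole sky much faster than \\(t_{\\rm vis}\\) is already deep into saturation, which is why a co-located 2-nd telescope adds almost nothing.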
## 2 Synthetic Tracking
Detecting NEOs is different from detecting stationary objects. Traditionally, NEO search telescopes take several \\(\\sim\\)45 sec CCD exposures over a time span of \\(\\sim\\)1 hr. Objects that appear to move linearly in time over that time span are potential NEOs. The key is that the moving object must be detected in each of the \\(\\sim\\)4 images taken. Because NEOs move, a long exposure results in a "streaked" image. The streaking spreads the photons over a larger number of pixels, making them harder to detect above the background sky noise. Synthetic tracking avoids this loss of SNR by combining multiple short exposures and then adding the image stack using a shift/add algorithm.
Synthetic tracking is possible because modern CMOS focal plane sensors can be read out with very low read noise (below the photon noise from the sky background) at relatively fast frame rates compared to CCD sensors. Most large-format CCD sensors used in large mosaic focal planes need over 10 sec to read out and also need a mechanical shutter to prevent image smearing while the image is being read out.
One apparent drawback to shift and add is that, before we detect the object, we do not know in advance what its velocity is, or how far and in which direction we should shift subsequent images. Synthetic tracking does a brute-force search, performing the shift/add for \\(\\sim\\)10,000 trial velocities. This is now possible with a relatively low-cost PCI-e board with GPUs that have a peak processing speed of \\(\\sim\\)5 Tflop.
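To make the brute-force search concrete, here is a minimal sketch of the core shift/add loop (Python/NumPy; this code, including the function name, frame sizes and velocity grid, is purely illustrative and is not from the original paper): each trial velocity defines a set of per-frame pixel shifts, the frames are co-added along that track, and the trial whose stacked image has the highest peak marks a candidate moving object.

```python
import numpy as np

def synthetic_track(frames, dt, trial_velocities):
    """Brute-force shift/add over a grid of trial sky-plane velocities.

    frames:            (n_frames, ny, nx) stack of short exposures
    dt:                time between frames, in seconds
    trial_velocities:  iterable of (vx, vy) in pixels per second
    Returns the best-fit velocity, the peak of its stacked image, and that stack.
    """
    best_vel, best_peak, best_stack = None, -np.inf, None
    for vx, vy in trial_velocities:
        stack = np.zeros_like(frames[0], dtype=float)
        for k in range(frames.shape[0]):
            # Shift frame k back along the assumed motion so a mover with this
            # velocity lands on the same pixels in every frame.
            dx = int(round(vx * k * dt))
            dy = int(round(vy * k * dt))
            stack += np.roll(np.roll(frames[k], -dy, axis=0), -dx, axis=1)
        peak = stack.max()
        if peak > best_peak:
            best_vel, best_peak, best_stack = (vx, vy), peak, stack
    return best_vel, best_peak, best_stack

# Illustrative use: 50 frames of 10 s each, ~600 trial velocities.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 9.0, size=(50, 64, 64))   # noise-only frames for the demo
grid = np.linspace(-0.3, 0.3, 25)                   # pixels per second
vels = [(vx, vy) for vx in grid for vy in grid]
best_vel, best_peak, _ = synthetic_track(frames, dt=10.0, trial_velocities=vels)
```

A real pipeline would use sub-pixel shifts, matched filtering against the PSF and a GPU implementation of the velocity grid, but the structure of the search is the same.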
The use of synthetic tracking allows us to use very long integration times to make up for the smaller aperture of the telescope. An example of a small telescope that would be used for the search is a modified Schmidt telescope with a 28 cm aperture at f/2.22 and a sensor with 3.8 \\(\\upmu\\)m pixels. A backside-illuminated commercial CMOS sensor with 80% QE and \\(<\\) 2 e\\({}^{-}\\) read noise at 2 Hz will soon be available in a very large \\(\\sim\\)140 Mpix format. This would provide a FOV \\(>\\) 16 deg\\({}^{2}\\) and a limiting magnitude (in space) of \\(\\sim\\)22 mag, assuming image quality better than 1.75 arcsec FWHM.
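The benefit of stacking many short frames can be illustrated with a simple photon-counting estimate (this is our own simplified model rather than the calculation behind Table 1; it ignores PSF sampling and matched-filter losses, so it only roughly reproduces the quoted SNR):

```python
import math

# Per-frame values similar to those listed in Table 1 (illustrative only).
signal_per_frame = 12.0     # e- from the NEO in one 10 s frame
noise_var_per_frame = 82.6  # e- variance per frame from sky background + read noise
n_frames = 50               # 500 s of total integration at 10 s per frame

# Along the correct velocity track the signal adds linearly while the noise
# variance adds linearly, so the SNR grows as the square root of n_frames.
total_signal = n_frames * signal_per_frame
total_noise = math.sqrt(n_frames * (noise_var_per_frame + signal_per_frame))
print(f"stacked SNR ~ {total_signal / total_noise:.1f}")  # ~9 here; Table 1 quotes 7
```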
## 3 Simulation of a Constellation of 8 MicroSats
Table 1 shows what is possible with the latest available hardware. Our simulations assumed a slightly less capable system, with a 10 sqdeg FOV and a 22 mag limit in 600 seconds.
\\begin{table}
\\begin{tabular}{l c c l c c} \\hline
**Input parameters** & **Values** & **Units** & **Derived param.** & **Values** & **Units** \\\\ \\hline
NEO diameter & 140 & m & Apparent mag & 22.01 & mag \\\\
Distance & 0.615 & AU & Flux detected & 1.2 & e\\({}^{-}\\)/sec \\\\
Transverse veloc. & 10 & km/sec & Noise/frame & 82.58 & variance e\\({}^{-}\\) \\\\
Phase angle & 0 & degrees & Signal/frame & 12.01 & e\\({}^{-}\\) \\\\
Telescope diam. & 279 & mm & & & \\\\
Total QE & 0.56 & & Total SNR & 7 & in 500 sec \\\\
Pixel size & 1.27 & arcsec & & & \\\\
Read noise & 1.7 & e\\({}^{-}\\) & FOV & 17.79 & sqdeg \\\\
Frame time & 10 & sec & & & \\\\
Total integ. time & 500 & sec & & & \\\\
Sky background & 22 & mag/arcsec\\({}^{2}\\) & & & \\\\ \\hline
\\end{tabular}
\\end{table}
Table 1: Instrument input and derived parameters.
Figure 1: Shift-and-add concept illustrated: because of the motion of the NEA, photons are deposited on different pixels of a CCD, but in the synthetic image (with shifted/added frames) the asteroid smear is removed.
The simulation starts with a synthetic population of NEOs from Granvik et al. (2016). We performed simulations of the entire NEO population as well as the "impactor" NEO population. By impactors we mean NEOs whose orbits come within 0.01 AU of the Earth's orbit, i.e., with a MOID (minimum orbit intersection distance) of \\(<\\) 0.01 AU. We placed the 8 telescopes evenly spaced in a \\(\\sim\\)1 AU orbit around the Sun. Each satellite then systematically scanned the sky in an orange-peel pattern. The simulation proceeded in steps of 600 seconds. At each step we calculated the positions of the synthetic NEOs in the solar system as well as those of the 8 satellites. If a NEO was within the FOV of a telescope, we calculated the apparent magnitude of the NEO as seen from that telescope. For observations at non-zero phase angle, we used the standard HG scattering function with G = 0.15, which falls off significantly more rapidly than Lambertian scattering at large phase angles. If the apparent magnitude was brighter than the limiting magnitude of the telescope/camera, it was recorded as a detection. The simulation was run continuously in 600 sec steps, typically for \\(\\sim\\)6 years.
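A minimal sketch of the per-step detection test just described is shown below (Python/NumPy; the function and argument names are our own, and the magnitude model is the standard two-parameter (H, G) phase relation that the text refers to as the HG scattering function):

```python
import numpy as np

def apparent_magnitude(H, G, r_au, delta_au, phase_angle_rad):
    """Standard two-parameter (H, G) magnitude relation."""
    tan_half = np.tan(0.5 * phase_angle_rad)
    phi1 = np.exp(-3.33 * tan_half**0.63)
    phi2 = np.exp(-1.87 * tan_half**1.22)
    return (H + 5.0 * np.log10(r_au * delta_au)
            - 2.5 * np.log10((1.0 - G) * phi1 + G * phi2))

def check_detection(neo_pos, sat_pos, sun_pos, H, boresight, half_fov_rad,
                    mag_limit, G=0.15):
    """One 600 s step: is the NEO inside the FOV and brighter than the limit?

    Positions are 3-vectors in AU; boresight is a unit vector.
    """
    to_neo = neo_pos - sat_pos
    delta = np.linalg.norm(to_neo)               # observer-NEO distance
    r = np.linalg.norm(neo_pos - sun_pos)        # Sun-NEO distance
    # Phase angle: Sun-NEO-observer angle, evaluated at the NEO.
    u_sun = (sun_pos - neo_pos) / r
    u_obs = -to_neo / delta
    phase = np.arccos(np.clip(np.dot(u_sun, u_obs), -1.0, 1.0))
    # Field-of-view test against the telescope boresight.
    off_axis = np.arccos(np.clip(np.dot(to_neo / delta, boresight), -1.0, 1.0))
    mag = apparent_magnitude(H, G, r, delta, phase)
    return (off_axis < half_fov_rad) and (mag < mag_limit), mag
```

In the full simulation this test is applied to every synthetic NEO, for every satellite, at every 600 s step, and the resulting detections are then accumulated into the single, linked and orbit-quality statistics described below.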
We varied a number of parameters in an attempt to optimize the number of NEOs detected. Because of the HG scattering function, NEOs at a fixed distance are much dimmer at 90\\({}^{\\circ}\\) phase angle than at 0\\({}^{\\circ}\\) phase angle. Our search pattern therefore had a variable Sun-exclusion angle: we skipped over the parts of the sky where the angle between the telescope pointing and the Sun was less than the Sun-exclusion angle. We also allowed the telescopes' orbit around the Sun to be varied, and we adjusted the sky background (zodiacal light) to reflect the fact that the sky gets brighter as the orbit radius drops below 1 AU.
Different constellations of MicroSats were investigated, and we report only a small subset of the results. We collected statistics on three different types of detections. The first is a simple, or single, detection. The second statistic is what we called a linked/catalog detection: the NEO is detected at least 3 times over a \\(\\sim\\)21 day interval, or the detections allow an orbit solution that is as good as 3 observations evenly spaced over 21 days. The third statistic is equivalent to a reasonable orbit detection, where the same NEO is detected a minimum of 6 times over an orbital arc of at least 120\\({}^{\\circ}\\) around the Sun.
Case 1 was 8 satellites uniformly spaced around the Sun. An H = 22 mag NEO is \\(\\approx\\)140 m in diameter; H = 23 mag corresponds to 88 m and H = 24 to 56 m. We see that in this case the completion rate for single detections was extremely high, 98% after 8 years. However, the cataloged or linked completion rate was much lower, only 75%. We did not collect statistics for the 3-rd category for Case 1. Case 2 also had 8 satellites, but in 4 pairs. The pairs were spaced about 200,000 km apart so that we could get a parallax measurement of the distance to the NEO. This improved our linked detection statistic from 75% to 80% for 140 m NEO impactors. Case 3 had a total of 12 satellites in 6 pairs. With 12 satellites we find that \\(\\sim\\)93% of 140 m NEOs would be cataloged in 6 years, and 48% would have reasonably good orbits.
We should explain the difference between the three categories, single detection, linked detection and orbit detection, in a bit more detail. If a NEO is only detected once, we know its brightness, position and velocity at one point in time; we do not know its orbit. If that NEO is detected again 10 years later by a subsequent mission or facility, we cannot link those two observations together. In some sense, an isolated detection of a NEO results in that object subsequently being lost. The concept of a "linked" or cataloged observation is that there is now a sequence of measurements from which we can derive some orbital information, enough that a similar "cataloged" observation set a few decades later can be linked to the original set of observations.
Note that a cataloged observation is not sufficient to "predict" where in the sky the NEO will be a few decades later. With \\(\\sim\\)3-4 observations within \\(\\sim\\)20 days, the derived orbital parameters are not very precise. In particular, the semi-major axis of the orbit may be in error by, say, 1%. A 1% error in the semi-major axis means the orbital period has an error of \\(\\sim\\)1.5%. A few decades later, say 10 orbital periods later, the uncertainty in the orbital phase, i.e., where the NEO is along its orbit, is 15% of the circumference of the orbit.
Table 2: Single, linked/cataloged and orbit-quality detection statistics for Case 1 (8 single satellites), Case 2 (8 satellites in 4 pairs) and Case 3 (12 satellites in 6 pairs).
This would be an arc that extends over 1 AU. As seen from Earth, that 1 AU arc could extend \\(\\sim\\)60-120\\({}^{\\circ}\\) across the sky.
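The along-track (phase) error quoted above follows directly from Kepler's third law; in our notation, with \\(a\\) the semi-major axis and \\(P\\) the period,

\\[ \\frac{\\Delta P}{P}=\\frac{3}{2}\\,\\frac{\\Delta a}{a}\\approx 1.5\\%,\\qquad \\Delta\\phi\\approx n\\,\\frac{\\Delta P}{P}\\approx 10\\times 1.5\\%=15\\% \\]

of the orbit circumference after \\(n=10\\) orbital periods, i.e., the \\(\\sim\\)1 AU arc described above.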
Of the 6 orbital parameters, the orbital phase degrades with time, but many of the other orbital parameters do not. The pole of the orbit and the semi-major axis, for example, do not degrade with time. Two cataloged observation sets can be linked when the orbital parameters that do not degrade with time match. A cataloged observation set cannot be linked to a single future observation, but it can be linked to a future "set" of observations that constitutes a 2-nd cataloged observation.
When the NEO is observed over a short arc, the orbital parameters are not well determined. In general the accuracy improves as the square of the arc length. As the length of the arc increases beyond \\(\\sim\\)90\\({}^{\\circ}\\) of motion around the Sun, the accuracy of the orbital parameters improves linearly with the arc length, and if the NEO is observed across multiple orbits, the improvement in accuracy grows as the square root of the number of observations. Our third statistic, a 120\\({}^{\\circ}\\) orbital arc, represents observations of a NEO by more than 1 satellite in the constellation, in the regime where orbital accuracy is increasing linearly with arc length. Very roughly speaking, this would allow a single observation of that NEO to be linked to the 120\\({}^{\\circ}\\) orbit within a few years of the initial set of observations.
NEOCam and LSST have published simulation results that can be used for comparison: after 6 years, for 140 m NEOs, NEOCam reaches 82% single detections and 72% linked/cataloged (Mainzer et al., 2015). For LSST, 60% are cataloged/linked after 10 yrs of observation (Chesley and Veres, 2017).
## 4 Saturation
For telescopes with a small FOV and rather low sensitivity, doubling the number of telescopes will double the discovery rate of NEOs and halve the time needed to find 90% of them. If the sensitivity of the telescope were to increase by 1 mag, from say 20 mag to 21 mag, the distance at which a 140 m NEO could be detected at 0 phase angle would increase from 0.30 AU to 0.43 AU. This increase in range would increase the volume of the search space by almost a factor of 3, and one might think this would increase the discovery rate by a factor of 3. However, as sensitivity and FOV increase, we see an effect we call saturation: the reduction in the time needed to detect 90% of the NEOs is much smaller than one would expect given the larger FOV or the higher sensitivity.
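The quoted 0.30 AU and 0.43 AU ranges, and the factor-of-3 volume gain, can be checked with a simple opposition-geometry estimate (our own back-of-the-envelope calculation, taking the heliocentric distance as \\(r\\approx 1+\\Delta\\) for a geocentric distance \\(\\Delta\\) in AU and neglecting the phase correction at 0\\({}^{\\circ}\\) phase angle): an \\(H=22\\) mag (140 m) object has apparent magnitude

\\[ m\\approx H+5\\log_{10}\\big(\\Delta\\,(1+\\Delta)\\big), \\]

so a 20 mag limit gives \\(\\Delta(1+\\Delta)=10^{-0.4}\\), i.e., \\(\\Delta\\approx 0.30\\) AU, a 21 mag limit gives \\(\\Delta(1+\\Delta)=10^{-0.2}\\), i.e., \\(\\Delta\\approx 0.43\\) AU, and the searched volume grows by \\((0.43/0.30)^{3}\\approx 3\\).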
There are several ways to understand this. If 140 m NEOs are detectable out to 0.4 AU from Earth, and if on average their velocity relative to Earth is 10 km/s, then they will typically be detectable for \\(\\sim\\)0.3 AU / 10 km/s, or \\(\\sim\\)50 days. If the telescope/camera can survey the night-time sky (\\(\\sim\\)20,000 sqdeg) in 20 days, then building another telescope and doubling the sky coverage would not double the rate of detection of new NEOs, because the 2-nd telescope would just detect the NEOs already found by the 1-st. Most NEOs have highly eccentric orbits and come into the inner solar system (inside the orbit of Mars) for only a small portion of their orbits. If the Earth is on the other side of the Sun when that happens, they will not be detected, regardless of the sensitivity of the telescope. It would be useful to quantify the saturation effect, to find out how far into saturation the next generation of NEO search facilities would be.
Figure 2: Left: LSST detection of H = 22 mag NEOs versus time (Chesley and Veres, 2017). Right: a single telescope in space configured to have similar performance to LSST. This is subsequently scaled to examine saturation effects.
Towards that goal, we constructed a model of a space-based telescope operating in the visible band with synthetic tracking whose discovery rate is roughly that of LSST. The discovery rate of LSST was described in detail in a paper by Chesley and Veres (2017). The percentage of NEOs larger than 140 m detected at least once is shown in the left panel of Figure 2. Alongside is a space-based telescope that has roughly the same performance: a system with a limiting magnitude of 23.3 mag that can scan 9 sqdeg in 360 sec. Note that because this telescope is in space it is able to observe 24 hrs a day, and the Moon and weather are not limiting factors. The red line in the LSST plot is the single-detection curve that should be compared to the surrogate in the right-hand plot.
We then perturbed the "baseline" system, first by doubling its sky coverage (in sqdeg/hr), equivalent to building two of these telescopes/cameras, and then by reducing the sky coverage by 50%. The number of unique NEOs brighter than H = 22 mag detected versus time is plotted in the two panels of Figure 3.
We see that adding a 2-nd telescope only increased the percentage of NEOs detected from \\(\\sim\\)83% to 84.8%. Similarly, halving the sky coverage only decreased it to 81.4%. Next, we considered increasing the sensitivity by letting the size and cost of the telescope increase by a factor of two. Many cost models for telescopes have the cost grow as \\(\\sim\\)diameter\\({}^{2.5}\\). Using this, a telescope with twice the cost would have a 1.3X larger diameter and its sensitivity would be 0.28 mag better. We then also simulated putting a 2-nd such telescope in solar orbit on the other side of the Sun. These two cases are shown in the two plots in Figure 4. Last of all, the effects of saturation are summarized in Table 3.
Figure 4: Left: the percentage of NEOs detected vs time for a telescope \\(\\sim\\)1.3 times larger in diameter (0.28 mag fainter limiting magnitude), which roughly doubles the cost (cost scales as \\(\\sim\\)D\\({}^{2.5}\\)). Right: the same telescope together with a 2-nd such telescope in solar orbit on the other side of the Sun.
Figure 3: Left: % of H = 22 mag NEOs detected versus time for two telescopes. Right: % of H = 22 mag NEOs detected for a telescope with half the sky coverage (in sqdeg/hr).
In Table 3, the column "% 10 yr" is the percentage of NEOs with diameter \\(>140\\) m detected one or more times in 10 years, and the column "years @ 80%" is the number of years it takes to detect 80% of the NEOs. From this simple example, it seems there might be a small advantage to building a larger telescope at twice the cost rather than building two telescopes. However, that is based on the D\\({}^{2.5}\\) cost model; when building several telescopes, the 2-nd often costs less than the 1-st. On the other hand, irrespective of the cost, the gain from adding a 2-nd telescope is quite small if it is close to the 1-st, whereas when the 2-nd telescope is placed on the other side of the Sun the gain is quite a bit larger. Single detection of a NEO is not the appropriate metric, cataloged detection is the correct metric, but the purpose of this numerical experiment was to quantify the effect of saturation.
## 5 Summary
Synthetic tracking enables a small telescope to achieve sensitivity for moving objects similar to that of a much larger telescope. The reduction in cost makes it possible to consider a constellation of MicroSats distributed around the Sun, which avoids the saturation effect that would otherwise make it virtually impossible to catalog 90% of 140 m NEOs in less than 10 years.
This work was performed in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration, and was supported in part by a grant from the B612 Foundation. (c) 2018 California Institute of Technology. U.S. Government sponsorship acknowledged.
## References
* [1] Chesley, S.R., Veres, P., \"Projected Near-Earth Object Discovery Performance of the Large Synoptic Survey Telescope,\" JPL Publication 16-11 (April 2017), arXiv:1705.06209 [astro-ph.EP]
* [2] Granvik, M., A. Morbidelli, R. Jedicke, B. Bolin, W.F. Bottke, E. Beshore, D. Vokrouhlicky, M. Delbo, and P. Michel, \"Super-catastrophic Disruption of Asteroids at Small Perihelion Distances.\" Nature, 530: 303-306 (2016).
* [3] Mainzer, A., T. Grav, J. Bauer, T. Conrow, R. M. Cutri, J. Dailey, J. Fowler, J. Giorgini, T. Jarrett, J. Masiero, T. Spahr, T. Statler, E. L. Wright, (2015) \"Survey Simulations of a New Near-Earth Asteroid Detection System\", The Astronomical Journal, 149(5), 172 (2015)
* [4] Shao, M., Turyshev, S.G., Spangelo, S., Werne, T.A., and Zhai, C., \"A constellation of SmallSats with synthetic tracking cameras to search for 90% of potentially hazardous near-Earth objects,\" A&A 603, A126, (2017), arXiv:1503.07944 [astro-ph.IM].
* [5] Zhai, C., M. Shao, S. Lai, P. Boerner, J. Dyer, E. Lu, H. Reitsema, and M. Buie, \"Technical Note: Asteroid Detection Demonstration from SkySat-3 B612 Data using Synthetic Tracking,\" JPL-Publication 18-1, (2018), arXiv:1805.01102 [astro-ph.IM]
\\begin{table}
\\begin{tabular}{|l r r|} \\hline & & years \\\\
**Saturation Effect** & \\% 10 yr & @ 80\\% \\\\ \\hline Single large telescope & 83.8\\% & 7.8 \\\\ Two in Earth orbit & 84.8\\% & 7.5 \\\\
1/2 in Earth orbit & 81.4\\% & 9 \\\\
1.3X dia larger 0.28 mag & 86.0\\% & 6.5 \\\\ Two in solar orbit & 95.0\\% & 2.7 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Summary of saturation effects | Large or even medium sized asteroids impacting the Earth can cause damage on a global scale. Existing and planned concepts for finding near-Earth objects (NEOs) with diameter of 140 m or larger would take \\(\\sim\\)15-20 years of observation to find \\(\\sim\\)90% of them. This includes both ground and space based projects. For smaller NEOs (\\(\\sim\\)50-70 m in diameter), the time scale is many decades. The reason it takes so long to detect these objects is because most of the NEOs have highly elliptical orbits that bring them into the inner solar system once per orbit. If these objects cross the Earth's orbit when the Earth is on the other side of the Sun, they will not be detected by facilities on or around the Earth. A constellation of MicroSats in orbit around the Sun can dramatically reduce the time needed to find 90% of NEOs \\(\\sim\\)100-140 m in diameter.
near-Earth objects (NEO), synthetic tracking, simulations | Provide a brief summary of the text. | 218 |
Anthony J. GOW,1 Debra MEESE1,2
1US Army Cold Regions Research and Engineering Laboratory, 72 Lyme Road, Hanover, New Hampshire 03755-1290, USA
2Climate Change Institute, University of Maine, 303 Bryand Global Sciences Center, Orono, Maine 04469-5790, USA
2
## Introduction
The principal objective of the Siple Dome (West Antarctica) drilling project was to obtain a high-resolution ice core to bedrock located 1004 m below the surface of the dome. The dome is situated at 81.65\\({}^{\\circ}\\) S, 148.81\\({}^{\\circ}\\) W on an east-west-trending ridge between Kamb and Bindschaller Ice Streams (former Ice Streams C and D) (Fig. 1), and reaches a maximum elevation of about 620 m a.s.l. Bedrock is located about 400 m below sea level directly beneath the dome. Analysis of shallow firm cores obtained prior to deep drilling by the Polar Ice Coring Office (PICO) indicates a current accumulation rate of 10-14 cm of ice per year. Ten-meter firm temperature measurements indicate a mean annual surface temperature of \\(-\\)25\\({}^{\\circ}\\)C. A major reason for choosing Siple Dome as a drilling site was its coastal location where the ice would be thick enough to yield a core with a climatic record of at least 100 kyr. However, accurate interpretation of the geochemical records needed to evaluate paleoclimatic history of the Siple Dome core requires very careful evaluation of the physical and structural properties of the ice itself. For example, Gow and Williamson (1976), Danggaard and others (1982) and Robin (1983) have demonstrated that significant blurring of the paleoclimatic record can result from discontinuities and inhomogeneous flow in the deeper parts of ice sheets. At the Greenland Ice Sheet Project 2 (GISP2) and Greenland Ice Project (GRIP) sites, basal sections of both ice cores display evidence of significant stratigraphic distortion including inclined layering (up to 30\\({}^{\\circ}\\)), boudiange and overturned folds (Gow and others, 1993, 1997; Taylor and others, 1993; Alley and others, 1995). However, based upon the essentially flat topography extending outwards for several kilometers beneath Siple Dome, significant stratigraphic disturbance in the basal ice is not expected.
A principal objective of this paper is to document the major physical and structural properties of the Siple Dome ice core. The studies included determination of the ice-core relaxation process based on repeat measurements of ice-core densities; this process does not appear to follow the same course of change observed in the relaxation behavior of other deep cores from Antarctica and Greenland. Additional studies included evaluation of the forces driving crystal growth processes and _c_-axis fabric patterns as a function of depth in the Siple Dome ice core. These two property profiles reflect not only the dynamic situation of Siple Dome ice but also possible interaction between Siple Dome and the Kamb and Bindschadler Ice Streams that flank it. Debris entrainment mechanisms based on consideration of the nature, disposition and concentration of debris in the basal ice, and variations in total gas content of the debris-bearing ice, were also examined. Tephra deposition, indicative of active volcanism in Antarctica, was especially abundant between 700 and 800 m, where the abrupt development of a strong horizontal shear-type fabric appears strongly linked to the widespread occurrence of dust-sized tephra particles embedded in the ice. Results of tephra studies, including detailed discussions of the timing and pattern of tephra deposition and the potential impact of the eruption and fallout of volcanic ash and dust on the climate and rheological characteristics of the ice at Siple Dome, are reported elsewhere (Gow and Meese, 2007). Results of earlier research conducted on hot-water cores from the summit and flank of Siple Dome in 1997/98 were presented by Gow and Engelhardt (2000).

Figure 1: Location map of Siple Dome and environs (modified from Alley and Whillans, 1991).
## Physical properties
### Drilling procedures/mechanical condition of freshly drilled core
Drilling of the PICO core at Siple Dome during 1997-99 was conducted with a cable-suspended electromechanical drill. The drill was essentially the same as that used to drill to bedrock beneath the summit of the Greenland ice sheet during GISP2. Butyl acetate was added to the hole to counteract the overburden pressure and prevent closure of the drillhole. Stresses in the ice, relieved when the superincumbent load of the overlying ice is removed by drilling, resulted in major changes in the mechanical condition of the core. Drilled cores, 13.2 cm in diameter, were in good to excellent condition through the top 350 m, with many of the cores, averaging 2.5 m per run, being delivered to the surface in one unbroken piece. By 400 m, substantially unbroken core was retrieved from about 75% of the drill runs, but the remaining 25% had undergone widespread fracturing. This fracturing increased in severity below 400 m and constituted the region known as the brittle zone, which persisted to the ice-bedrock interface 1004 m below the surface.
### Melt layers
Melt layers were identified sporadically throughout the Siple Dome core and testify to extended periods when surface snow was subjected to temperatures above 0\\({}^{\\circ}\\)C. At least 20 melt layers were observed in the upper 500 m. These included three melt events that were sufficiently severe to create lenses of ice 5-10 mm thick. Individual melt layers ranged in thickness from 1-2 mm to a maximum of 6 mm. At least eight melt layers were identified between 500 and 800 m depth that also ranged in thickness from 1 to 2 mm. However, the quality of core from below 800 m was so degraded by fracturing as to preclude any positive identification of melt events in this part of the Siple Dome ice core. For a discussion of the significance of the melt layers see Das (2003) and Das and Alley (2005).
### Inclined layers
The presence of inclined layers or other signs of englacial disturbance at Siple Dome was investigated during light-table examination of the core. Slight tilting of layers associated with annual layering was first observed between 432.60 and 432.70 m and continued with depth at intervals to 550.71-550.76 m. Below this, short sequences of core exhibiting inclined layering with reversed dips began to make their appearance at 559.51-559.60 m. These tilted layers with reversed dips continued at intervals to nearly 800 m and were intermixed with inclined layers tilted at angles that occasionally exceeded 10\\({}^{\\circ}\\), indicating the possibility of large-scale folds. Such folding indicates that a continuous uninterrupted climate signal does not exist below these depths. Other signs of disturbance such as small folds or boudinage were not observed in the basal 200 m of ice at Siple Dome; however, the very badly fractured condition of these basal cores greatly lessened any chance of observing disturbed structure if it existed.
### Brittle core processing and the relaxation process
Fracturing is a particular feature of ice in the brittle zone of all deep cores from Antarctica and Greenland. A major factor contributing to the intrinsic brittleness of ice cores in this zone is release of the confinement pressure, which causes so-called relaxation stresses to exceed the intrinsic tensile strength of the cored ice. At Siple Dome this condition is reached at 300-400 m depth, which was why, at the end of the season's drilling, all core from below 394 m was retained in a storage trench to relax for 1 year before being shipped to the US National Ice Core Laboratory (NICL) in Denver, Colorado, in 2000 for general processing. Core from above 394 m that was not brittle was returned to the United States and processed at NICL during June and July 1999. On-site processing for this project was limited to measurements of density and thin-section preparation on freshly drilled cores. However, even these studies had to be curtailed at around 700 m because of the increasingly poor quality of the core and the intense fracturing that occurred in the very brittle ice when attempting to prepare samples on the bandsaw (personal communication from Fitzpatrick, 1999).
Cores recovered from even moderate depths at Siple Dome are subject to relaxation, a process first documented in detail by Gow (1971) in regard to deep drill cores retrieved at Byrd Station, West Antarctica, in 1967/68. Relaxation manifests itself in a number of ways that include microcracking and fracturing, bubble decompression and dilation, and frequent splitting along basal planes of crystals intersecting pressurized air bubbles. An additional mechanism is exsolution of air previously converted, in deeper parts of the ice dome, to gas hydrates, which dissociate to form discrete bubble-like inclusions in the ice as it relaxes.
### Density measurements
Relaxation entails a decrease in density, leading to a volume expansion of the ice that can be monitored by repeat density measurements. Densities of core pieces, 10-11 cm long, were measured by hydrostatic weighing in reagent-grade isooctane (2,2,4-trimethylpentane). This technique (Butkovich, 1953) measures densities to \\(\\pm 0.0003\\) Mg m\\({}^{-3}\\) and has been used extensively on polar ice cores by Langway (1958a), Gow (1968, 1971) and Gow and others (1997). Initial measurements were made on freshly drilled core within a few hours of its retrieval from the drillhole. These measurements were performed at approximately 20 m intervals beginning at 60 m, just below the firn/ice transition, and extending to 700 m before encountering very brittle core. A plot of density measurements from ice deeper than 100 m is presented in Figure 2. The inset diagram includes a density profile based on all data from the snow surface down.
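For readers who wish to reproduce the hydrostatic-weighing calculation, the short sketch below applies Archimedes' principle to a sample weighed in air and then submerged in isooctane; the numerical values are illustrative only and are not measurements from this study.

```python
# Hydrostatic weighing: sample density from its mass in air and its apparent
# mass while submerged in a liquid of known density (Archimedes' principle).

def sample_density(mass_air_g, mass_in_liquid_g, rho_liquid_mg_m3):
    """Return density in Mg m^-3 (numerically equal to g cm^-3)."""
    displaced_mass_g = mass_air_g - mass_in_liquid_g        # mass of liquid displaced
    sample_volume_cm3 = displaced_mass_g / rho_liquid_mg_m3  # volume of the sample
    return mass_air_g / sample_volume_cm3

# Illustrative numbers: ~100 cm^3 of ice weighed in isooctane (assumed 0.692 Mg m^-3).
print(sample_density(91.2, 22.3, 0.692))   # ~0.916 Mg m^-3
```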
The firn-ice transition, corresponding to pore close-off, occurs at around 54-55 m at a density of 0.830 Mg m\\({}^{-3}\\). Thereafter, density increases progressively with depth, reaching a maximum temperature-corrected density of 0.9160 Mg m\\({}^{-3}\\) at 700 m. Densification occurs primarily in response to increasing overburden pressure, causing compression of entrapped air bubbles originally sealed off at the firn-ice transition. Thin-section measurements of bubble diameter, in conjunction with density-derived porosities, indicate that equalization of air-bubble pressure with ice load occurs around 300 m at an overburden pressure of 2.6-2.7 MPa. When applied to measurements of bubble concentrations in Siple Dome ice, the same technique yielded values of 180-210 bubbles per cubic centimeter in cores from 60 to 600 m. However, there is also evidence from thin-section observations that bubbles begin to decrease in concentration at about 600 m, at which depth other bubble-like inclusions appear. These inclusions are attributed to the dissociation of gas hydrates originally formed by dissolution of pressurized atmospheric gases through bubble walls into the ice. Bubbly ice occurs without break to the bed at 1004 m. Changes in air-bubble characteristics with increasing depth are shown in Figure 3.
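As a rough consistency check on the depth at which bubble pressure equalizes with the ice load, the overburden pressure can be approximated from an assumed mean column density; the density and depths below are illustrative, not the measured profile.

```python
# Approximate overburden pressure P = rho_mean * g * z, neglecting the detailed
# firn/ice density profile (rho_mean is an assumed mean column density).
g = 9.81          # m s^-2
rho_mean = 900.0  # kg m^-3 (assumption)

for depth_m in (300.0, 700.0, 1004.0):
    p_mpa = rho_mean * g * depth_m / 1.0e6
    print(f"{depth_m:6.0f} m  ->  {p_mpa:.2f} MPa")
# ~2.6 MPa at 300 m, consistent with the 2.6-2.7 MPa quoted above.
```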
### Relaxation characteristics
The relaxation characteristics of Siple Dome ice cores are presented in Figure 4. This dataset is based on the differences in density between measured samples of freshly drilled core and the same samples measured 5 years later. As noted earlier, measurements on freshly cored ice at the drill site were terminated at 702 m because of the very fragile and brittle nature of the core. Density measurements on samples of this deeper ice were not made until 5 years later. Relaxation characteristics of the deeper ice were obtained by comparing the densities of samples from 702-1004 m, suitably corrected for measured in situ temperatures, with those obtained by extrapolation of the depth-density profile below 702 m in Figure 2.
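The relaxation values plotted in Figure 4 amount to the fractional volume change implied by the measured drop in density; a minimal sketch of that conversion, using invented density pairs rather than data from the core, is given below.

```python
# For a fixed mass of ice, V is proportional to 1/rho, so the volume expansion
# accompanying a density decrease is dV/V0 = rho_initial / rho_relaxed - 1.

def volume_expansion_percent(rho_initial, rho_relaxed):
    return (rho_initial / rho_relaxed - 1.0) * 100.0

# Invented example values (Mg m^-3):
print(volume_expansion_percent(0.9160, 0.9135))   # ~0.27 % expansion
```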
Except for ice from 700-800 m, the Siple Dome core has undergone minimal relaxation in the 5 years since it was drilled in 1997-99. By contrast, much more rapid relaxation, accompanied by a significant decrease in brittleness, was observed at the other drilling sites in Antarctica and Greenland where monitoring of the relaxation process was conducted, for example at Byrd Station (Gow, 1971) and at GISP2 (Gow and others, 1997). This behavior is demonstrated in Figure 4. Relaxation levels measured on Siple Dome cores 5 years after they were drilled were achieved at Byrd after only 16 months; this was followed by further significant relaxation at Byrd at the end of 27 months. Furthermore, cores from Byrd had become sufficiently 'ductile' within 6-9 months after drilling that they could be cut with a bandsaw without fracturing. However, the Siple Dome core has remained brittle and prone to fracturing during processing more than 5 years after it was drilled.
Ice from 700-800 m has undergone appreciably more relaxation than ice above or below this zone. This ice is much finer-grained, and also features a very strong vertical _c_-axis fabric in which the component crystals have been subject to extensive cleavage cracking (a form of lattice-controlled microcracking) along the basal glide planes of the crystals. Microcracking of this kind is a very effective way to relieve stress in crystals exhibiting a strong _c_-axis vertical orientation; the extensive nature of this microcracking, which is readily observed in hand-held cores, also accounts for the enhanced level of relaxation in the zone 700-800 m.
The generally poor condition of much of the Siple Dome core at the time it was drilled has been attributed largely to defects in the drilling technique (e.g. binding in the core barrel). However, the fact that the core has remained brittle long after it was drilled suggests that factors other than defective drilling techniques have contributed to its continued brittleness. One such factor may be unrelieved residual stresses related to the intimate contact of Siple Dome with Kamb and Bindschadler Ice Streams located on either side of the dome. Such intimate contact, including shear-margin migration, likely led to complex ice dynamics at Siple Dome and the subsequent continued brittleness of the core.
Relaxation characteristics of the Siple Dome core depart radically from those observed in cores obtained by hot-water drilling during the 1997/98 field season at three sites in close proximity to Siple Dome (Gow and Engelhardt, 2000). One of these sites was drilled in the immediate vicinity of the PICO drillhole, another was located on the true summit of Siple Dome, about 400 m from the PICO drill site, and a third was situated on the flank about 8-9 km from the summit. When first removed from the core barrel, freshly drilled hot-water cores exhibited minimal fracturing. The cores were then retained at each of the drilling sites for several days while exposed to elevated surface air temperatures (one or two degrees below 0\\({}^{\\circ}\\)C), before being transferred to a storage trench where temperatures hovered around \\(-17^{\\circ}\\)C. During the time cores were retained at the surface, they underwent a rapid relaxation that manifested itself in a number of ways, including microcracking in the shallower cores (down to about 400 m) and widespread fracturing beginning at around 500 m and continuing to the bottom of the ice dome. This mechanical conditioning of the ice at all levels appeared to have substantially stabilized prior to moving the cores to the storage trench.

Figure 2: Plot of density measurements vs depth in cores from Siple Dome. Densities measured on freshly drilled core from just below the firn-ice transition to 702 m are indicated by solid circles. Below 702 m the ice was too brittle to process on site. These densities, indicated by filled squares, were based in part on measurements of slightly relaxed ice samples in conjunction with extrapolation of the density profile from above 702 m. All densities were corrected for in situ temperature measured in the drillhole by Engelhardt (2004). Inset diagram includes a complete density profile together with ice overburden and temperature profiles at Siple Dome.
The rapidity with which relaxation occurred in these thermally drilled cores at Siple Dome is attributed to thermal effects associated with hot-water drilling but was also accelerated by the thermal conditioning of cores exposed to elevated surface air temperatures for several days. Following this short-term stabilization of the relaxation process, cores could be readily processed without further fracturing. Volume expansion of the ice associated with this relaxation of the hot-water cores ranged from 0.3% at around 300 m to nearly 3% in the deepest cores which, as indicated in Table 1 (from Gow and Engelhardt, 2000), is nearly an order of magnitude greater than observed in the electromechanically drilled Siple Dome core (Fig. 4). Furthermore, comparison of the crystalline textures and _c_-axis fabrics of both the hot-water cores and the mechanically drilled core has not revealed any significant differences between them. In light of these observations, it would have been interesting to determine whether exposing pieces of the Siple Dome core to elevated temperatures for several days to a month would have accelerated the rate of relaxation sufficiently to reduce brittleness to a level that allowed processing of the ice without undue fracturing.
## Crystalline Textures
### Thin-section preparation
Crystal structure (texture and fabric) studies of the Siple Dome core were performed on thin sections prepared from 5-10 mm thick samples cut horizontally, that is, perpendicular to the vertical axis of the core. These thick-section samples were affixed to glass slides with cyanoacrylate glue and then sliced to 1 mm or less on a microtome. Subsequent examination of thin sections was carried out following techniques described in Rigsby (1955, 1960), Langway (1958b) and Gow (1970). Thin sections were then photographed between crossed polarizers to assist in identifying individual crystals and to aid in recording _c_-axis measurements with the Rigsby stage. All measurements were made at \\(-10^{\\circ}\\)C, and the thin sections themselves were stored at \\(-30^{\\circ}\\)C to minimize both sublimation and possible changes in crystalline structure.

Figure 3: Air-bubble change with increasing depth of burial in Siple Dome ice cores. Note the rapid transition from largely tubular bubbles just below pore close-off at 59 m to substantially spherical bubbles between 104 and 140 m. Bubble sections from 205 to 507 m were obtained from samples that had undergone minimal relaxation. Bubbles persisted to the bed at Siple Dome, accompanied by odd-shaped inclusions, attributed to gas hydrate dissociation, in slightly relaxed ice core samples from 605 and 988 m. Smallest scale subdivisions measure 1 mm.
### Crystal size measurements
Crystal sizes were determined from measurements of the number of crystals in given areas of thin-section photographs ranging in size from 72 to \\(80\\,\\mathrm{cm}^{2}\\). A small correction was applied to crystal counts to compensate for those crystals only partly contained within the area boundaries. This correction is minimal for the large number of crystals measured in fine-grained ice. However, it becomes increasingly difficult to apply this correction to coarse-grained ice when the number of crystals decreases to less than ten and crystals become increasingly interlocked. The number of crystals counted ranged from nearly 500 at \\(100\\,\\mathrm{m}\\) depth to approximately 50 at \\(361\\,\\mathrm{m}\\). Further, significant increases in crystal size between \\(361\\,\\mathrm{and}\\,686\\,\\mathrm{m}\\) depth reduced grain-size counts to between 20 and 10. This was followed by a section of core, \\(700\\)-\\(800\\,\\mathrm{m}\\), that underwent a dramatically abrupt decrease in grain size leading to a substantial increase in the number of crystals counted (84-193). Just as abruptly, the size of crystals between 800 and \\(1000\\,\\mathrm{m}\\) depth increased rapidly and the number of grains counted within the given area of a section decreased from around 10 to less than 2.
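The grain-size calculation itself is simple; the sketch below converts a crystal count over a known section area to a mean cross-sectional area, treating boundary-intersected crystals as half grains. That half-count rule is an assumed form of the small correction mentioned above, not the exact correction used in this study.

```python
# Mean crystal cross-sectional area from a grain count over a known section area.
# Crystals cut by the section boundary are counted as 1/2 (assumed correction).

def mean_crystal_area_cm2(section_area_cm2, n_interior, n_boundary):
    effective_count = n_interior + 0.5 * n_boundary
    return section_area_cm2 / effective_count

# Illustrative numbers: an 80 cm^2 section containing 460 whole and 70 boundary grains.
print(mean_crystal_area_cm2(80.0, 460, 70))   # ~0.16 cm^2 per crystal
```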
### Crystal size variation
Variations of crystal size with depth at Siple Dome are presented in Figure 5a and b (adapted from Gow and Engelhardt, 2000) and include grain-size data from a hot-water core drilled at the summit site in 1997-98 and results of measurements made on the Siple Dome (PICO) core drilled in 1997-99. Also included for comparison are the crystal size data obtained by Gow and Williamson (1976) on the Byrd Station core (Fig. 5a).
or buckling of the basal glide planes of the crystals as they rotated in response to increasing uniaxial vertical compression.
Ice below 261 m was marked by sharp increases in the size of crystals that exceeded 1.0 cm\\({}^{2}\\) by 341 m and 2.0 cm\\({}^{2}\\) by 382 m, followed by near-constant values of around 2.5 cm\\({}^{2}\\) between 400 and 445 m. Crystal sizes further increased to between 3.50 and 5.0 cm\\({}^{2}\\) by 540 m, followed by major changes in grain size to between 6 and nearly 8 cm\\({}^{2}\\) over the interval 560-586 m. Sutured crystal textures persisted from 221 m, when first encountered in Siple Dome ice, to 686 m. All thin sections from this depth range contained crystals that were intersected two or more times in the same section. This multiple cutting of the same crystal in a single thin section yields textures akin to a three-dimensional jigsaw puzzle that renders grain-size measurements in this kind of ice virtually meaningless, except to indicate trends in crystal growth.
As indicated in Figure 5, the grain-size changes observed in the Siple Dome core, down to at least 700 m depth, track closely those measured on the Summit (hot-water) core. This indicates that hot-water drilling per se has not affected crystalline textures in Siple Dome ice. An abrupt 30- to 15-fold decrease in grain size, beginning at 703 m (0.23 cm\\({}^{2}\\)) and persisting to at least 790 m (0.47 cm\\({}^{2}\\)) in the Siple Dome core, was not observed in the Summit (hot-water) core because core was not collected at these depths.
The zone of fine-grained ice in the Siple Dome core occurred coincidentally with sustained deposition of volcanic ash particles, which may have contributed initially to the much reduced grain size by anchoring the crystal boundaries. Expectations of finding volcanic ash in the Siple Dome core were based on the documented existence of widespread deposition of tephra in the Byrd core (Gow and Williamson, 1971). Given its relative proximity to Byrd Station, it was anticipated that similar deposits of volcanic ash and dust would be found in ice of comparable age at Siple Dome. Using the age-depth model of Nereson and others (1996) for Siple Dome, Gow and Engelhardt (2000) accurately predicted that tephra coeval with that at Byrd should be concentrated in ice at 700-800 m. While the abrupt reduction in grain size in ice from 700 to 800 m is likely due to tephra particles initially inhibiting grain growth in the upper levels of ice at Siple Dome, the widespread occurrence of undulose extinction in conjunction with the fragmented appearance of many of the crystals suggests that incorporation of tephra particles has ultimately led to significant changes in the rheological properties of the fine-grained ice.
By 820 m the crystalline texture of Siple Dome ice had transformed from very fine-grained, 0.47 cm\\({}^{2}\\) at 790 m, to very coarse-grained, 7.39 cm\\({}^{2}\\) at 820 m. This latter grain size is very similar to that measured between 560 and 686 m, just prior to the sudden transition to fine-grained ice first encountered at 703 m. Average grain sizes, as best as they could be measured, increased progressively from 9.66 cm\\({}^{2}\\) at 848 m to greater than 50 cm\\({}^{2}\\) at 992 m, within 12 m of the ice-rock interface. Interlocked textures predominate, including repeated sectioning of individual crystals as first observed in the upper layers of the ice dome at around 221 m. Crystals in ice from below 900 m often exceeded the dimensions of the 13 cm diameter core. These included zones of very large crystals exhibiting etched grain boundaries on the outer surface of a core, including several growing side by side with individual cross-sections measuring at least 130 cm\\({}^{2}\\).
## Crystalline Fabrics
### Fabric patterns in polar glacier ice
Ice under stress deforms, and it is the interplay of stress and deformation that determines the nature of _c_-axis fabrics in glacier ice. It is generally agreed that rotation of crystallographic \\(c\\) axes by glide along the basal planes of crystals is the predominant mechanism by which preferred orientations evolve in glacier ice. This is tantamount to Rigsby's (1960) statement 'that proper orientation of the crystal in order to glide on the basal plane is very important in the flow of polar glacier ice'. This process applies equally well to rotations of \\(c\\) axes either into a glide fabric (a planar distribution of \\(c\\) axes) or into a broad clustering of \\(c\\) axes about a given axis. The nature of the stress in glaciers, including valley glaciers, ice sheets and ice domes, controls the nature of the fabric pattern. It is the cumulative strain to which the glacier ice is subjected, however, that determines the strength of the _c_-axis fabric. The formation of strong glide fabrics in which the \\(c\\) axes are uniformly distributed within a great circle (generally vertical in ice sheets) is attributed to ice undergoing deformation in uniaxial longitudinal extension or tension in which the tensile axis is located normal to the girdle (Fujita and others, 1987; Alley, 1988; Lipenkov and others, 1989; Wang and others, 2003). The formation of a broad single-maximum fabric is usually attributed to ice subjected to deformation under uniaxial vertical compression (Azuma and Higashi, 1985; Alley, 1988; Gow and others, 1997; Thorsteinsson and others, 1997) or, in the case of a very tight vertical clustering of \\(c\\) axes, to deformation governed by strong horizontal shear (Gow and Williamson, 1976; Gow and others, 1997). According to Wang and others (2002), an elongated single maximum can form when deformation is dominated by pure shear.

Figure 5: (a) Mean crystal size (cross-sectional area) vs depth at Siple Dome. Data are plotted on a logarithmic scale to accommodate the very large range of crystal sizes measured. Comparisons with datasets from the Summit hot-water core drilled in close proximity to the PICO-drilled Siple Dome core and from the Byrd core are included. The largest crystal at any particular depth in the Summit core is indicated by an arrow at the end of the solid line. The termination of a dotted line in the deepest part of the PICO core designates the largest crystal measured, but with its grain boundaries only partially contained within the thin section. (b) Expanded plot of crystal size data (linear scale) for the top 250 m at Siple Dome.
Rigsby (1955) was perhaps the first to relate the strength of the \\(c\\)-axis fabric of glacier ice to the level or intensity of deformation to which it is subjected. This situation applied in particular to very strong single-maximum fabrics that Rigsby observed forming perpendicular to foliation planes (planes of inferred shearing) in Greenland glacier ice. Additionally, Rigsby (1955) appears to have been among the first to recognize that recrystallization of glacier ice under high temperatures or melting conditions can lead to large increases in the size of crystals and associated significant fabric changes. This has since been amply demonstrated in Antarctic and Greenland ice subject to elevated temperatures in basal and near-basal sections of these ice sheets (Gow and Williamson, 1976; Gow and others, 1997; Thorsteinsson and others, 1997).
Apart from potential complications to the paleoclimate record, acknowledgement of the existence of widespread crystal anisotropy in polar ice sheets is critical to our understanding of the mechanics of ice flow and is especially pertinent to the whole topic of ice-sheet modeling including evaluation of the assumptions upon which much of the modeling is based.
### c-axis fabric measurements: techniques
As indicated above, all \\(c\\)-axis fabric measurements were made with the manually operated four-axis Rigsby stage. A total of 45 horizontal fabric sections from representative depths in the Siple Dome core were analyzed on the Rigsby stage, which in recent years has been supplanted by a variety of computerized ice-fabric analyzers (e.g. Azuma and others, 1999; Wang and Azuma, 1999; Wang and others, 2002, 2003; Wilen and others, 2003). These instruments can measure \\(c\\)-axis orientations of very large numbers of crystals much more rapidly than could ever be accomplished on a Rigsby stage. Additionally, automated fabric analyzers can simultaneously yield comprehensive datasets on the size, shape and nearest-neighbor relationships of the crystals. The \\(c\\)-axis fabric patterns obtained with either type of instrument have been found to be essentially identical provided the number of crystals analyzed is statistically significant (Wilen, 2000). Statistically, the number should depend on the strength of the fabric, with a greater number of crystals needing to be measured in a randomly oriented fabric than in one exhibiting either a girdle or an axial pattern of preferred _c_-axis orientation. It is generally agreed that around 200 _c_-axis measurements should suffice for any individual sample of ice exhibiting a random fabric or small deviations from randomness. Many fewer than 200 measurements should normally suffice for ice displaying strong _c_-axis fabrics.

Figure 6: Thin sections photographed between crossed polarizers, and their corresponding point-scatter \\(c\\)-axis fabrics from representative depths in the Siple Dome ice core. Smallest subdivisions in the scales in the top right corner of each thin-section photograph measure 1 mm.
The total number of \\(c\\) axes measured in horizontal sections of the Siple Dome ice core exceeded 3000. The bulk of these measurements were confined to thin sections from the topmost 800 m of the ice core. The bottom 200 m is dominated by exceedingly large crystals, which limits the number of crystals in a single section, usually to too few to yield statistically significant results other than to indicate fabric trends. The large crystals, severe fractures and enduring brittleness in the deepest ice at Siple Dome hindered or even prevented the maintenance of core continuity needed for multiple sectioning of azimuthally oriented samples. Only at two depths below 900 m was it possible to prepare two vertical sections with the same relative orientation. In these sections, measured _c_-axis orientations were rotated into the horizontal plane to ensure conformity with _c_-axis measurements made on the horizontal thin sections.
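For readers unfamiliar with how an individual c-axis orientation becomes a point on the diagrams of Figure 6, the sketch below projects an axis, given by its azimuth and plunge, onto a lower-hemisphere equal-area (Schmidt) net of unit radius. It is a generic construction, not the data-reduction code used for these measurements.

```python
import math

def schmidt_point(azimuth_deg, plunge_deg):
    """Lower-hemisphere equal-area (Schmidt net) projection of a linear axis.

    azimuth_deg -- azimuth, degrees clockwise from the diagram's reference direction
    plunge_deg  -- plunge below horizontal, degrees (90 = vertical axis, plots at the centre)
    Returns (x, y) on a net whose primitive circle has radius 1.
    """
    theta = math.radians(90.0 - plunge_deg)        # angle from the vertical
    r = math.sqrt(2.0) * math.sin(theta / 2.0)     # equal-area radial distance (1 at theta = 90 deg)
    az = math.radians(azimuth_deg)
    return r * math.sin(az), r * math.cos(az)

# A steeply plunging axis (plunge 80 deg) plots close to the centre of the net:
print(schmidt_point(azimuth_deg=120.0, plunge_deg=80.0))   # r ~ 0.12
```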
### c-axis fabric profile at Siple Dome
Thin-section photographs and associated _c_-axis fabrics of the Siple Dome core are presented in Figure 6. All fabric diagrams except two are based on equal-area (Schmidt-net) projections of \\(c\\) axes measured in horizontal thin sections. The exceptions are two diagrams from 916 and 948 m, where paired vertical sections were used and the _c_-axis measurements subsequently rotated into the horizontal plane. The center of each fabric diagram coincides in all cases with the vertical axis of the core. As clearly demonstrated in Figure 6, changes in the texture of the ice at Siple Dome were accompanied by significant changes in aggregate _c_-axis orientation. At 60 m depth, ice from directly below the firn-ice transition at Siple Dome featured a random fabric, which was still in evidence at 80 m. However, by 99 m, signs of a broad vertical clustering of \\(c\\) axes begin to appear and this trend is maintained and somewhat strengthened to at least 261 m depth. Such broad clustering of the \\(c\\) axes, coupled with significant changes in the shapes and sizes of crystals that are clearly discernible in the thin-section photographs in Figure 6, is consistent with a deformational process dominated by rotation of the \\(c\\) axes towards the axis of vertical compression. As suggested earlier, the diminished grain size observed at 261 m together with significantly increased clustering of the \\(c\\) axes and widespread undulose extinction of the crystals might indicate the onset of enhanced deformation associated with rotation and bending of basal glide planes of crystals under uniaxial vertical compression.
Some indications of a change in _c_-axis orientation to a girdle-like fabric are evident in sections from 261 to 321 m. This trend towards a vertical girdle orientation appears firmly established by 360 m and generally persisted to at least 686 m, though in most sections the girdle was somewhat incomplete, and in a number of cases is replaced by a redistribution of \\(c\\) axes into three or four localized maxima, indicative of migration recrystallization. Between 686 and 703 m a very abrupt change in _c_-axis fabric that persisted to around 800 m depth occurred coincidentally with sustained deposition of volcanic ash (Gow and Meese, [http://nsidc.org/data/nsidc-0128.html](http://nsidc.org/data/nsidc-0128.html)). This fabric, characterized by a very tight clustering of the \\(c\\) axes about the vertical, is attributed to deformation now dominated by strong horizontal shear. Virtually identical texture (fine-grained) and single pole fabrics, also attributed to horizontal shearing, have been observed at Byrd (Gow and Williamson, 1976) and at GISP2 (Gow and others, 1997). At Byrd, the very strong single pole fabric also occurs coincidentally with the onset and near termination of tephra deposition between 1300 and 1800 m (Gow and Williamson, 1971). At Siple Dome, the onset of very tight vertical clustering of \\(c\\) axes between 686 and 703 m and its continuation to 790 m also coincides with a sudden increase in p-wave velocity to values exceeding 4000 m s\\({}^{-1}\\) measured downhole (personal communication from G. Lamorey, 2000).
The very abrupt transition or flip-flop between two highly contrasted _c_-axis fabric patterns at around 700 m at Siple Dome not only reflects an abrupt change in the nature of the stress-strain field, it also signals a major change in the rheological properties of the ice. No such abrupt change in texture or fabric has been observed in cores from other ice dome sites, for example, Dome F (Azuma and others, 1999) or Dome C (Wang and others, 2003) in East Antarctica where the ice is approximately three times thicker than at Siple Dome. The potential importance of widespread incorporation of volcanic particles on the rheological properties of ice from 700-800 m at Siple Dome is discussed in greater detail by Gow and Meese (2007).
A very abrupt change back to coarse-grained ice occurs between 790 and 803 m, as is evident in the thin-section photographs in Figure 6. The onset of rapid crystal enlargement at 803 m occurs at \\(-8\\) to \\(-9^{\\circ}\\)C and is attributed to a form of dynamic recrystallization, also termed migration recrystallization by Duval and Castelnau (1995) and De La Chapelle and others (1998). Migration recrystallization involves both the nucleation and growth of strain-free grains at the expense of plastically deformed grains of the same material. Excess energy stored in the grains provides the driving force for this kind of recrystallization.
This extensive recrystallization to very coarse-grained ice, especially below 900 m at Siple Dome, was also accompanied by a marked change in fabric to a dispersed or multi-maximum distribution of \\(c\\) axes about the vertical. Age, accumulated strain and elevated temperatures rather than any drastic change in stress level are considered major factors driving this growth of very large crystals. The rapid increase in the size of crystals in ice as it approaches the bed seriously impacts the number of \\(c\\) axes that can be measured in a single or even two or three oriented thin sections. In ice from 848 m depth, the number of crystal \\(c\\) axes measured was sufficient to reveal the true multi-maximum nature of the fabric. This fabric pattern is also typical of coarse-grained ice that has undergone migration recrystallization at elevated temperatures in basal layers at Byrd Station (Gow and Williamson, 1976), GISP2 (Gow and others, 1997) and possibly at GRIP (Thorsteinsson and others, 1997). In ice deeper than 848 m, with fewer than seven crystals in a single thin section, the statistical significance of the _c_-axis measurements cannot be established. The badly fractured condition of cores from the deeper ice at Siple Dome greatly reduces the chances of preparing additional sections with the same relative orientation over an extended length of core. However, at two depths (916 and 948 m) in the basal ice at Siple Dome, the core remained sufficiently intact to allow preparation of two vertical sections at each depth while still retaining the same relative orientation. Ice at both depths clearly reveals the multi-maximum fabric typical of coarse-grained dynamically recrystallized ice. The onset of migration recrystallization may have occurred somewhat above 803 m. The marked increase in grain size beginning at 602 m may signal the early onset of migration recrystallization at Siple Dome, where the in situ ice temperatures had risen to \\(-13^{\\circ}\\)C, the temperature at which migration recrystallization is initiated in deep ice at Byrd, GISP2 and, less certainly, at GRIP. This implies that deformation immediately prior to 602 m had reached a level sufficient to initiate nucleation and subsequent recrystallization to at least 686 m depth. Though ice between 602 and 686 m generally retains a vertical girdle fabric, conversion to a multi-maximum fabric or ring-like small-circle distribution of \\(c\\) axes about the vertical appears to have occurred at 640 m depth in both our and DiPrinzio and others' (2005) fabric profiles. Whether or not migration recrystallization is occurring between 602 and 686 m is still an open question. According to DiPrinzio and others (2005), evidence of dynamic (migration) recrystallization, involving the nucleation and growth of new grains in place of grains that existed prior to recrystallization, is observed between 200 and 685 m at Siple Dome. Englacial temperatures at Siple Dome increase from \\(-23^{\\circ}\\)C at 200 m to \\(-13^{\\circ}\\)C at 600 m (Engelhardt, 2004). Migration recrystallization is observed to occur in the basal parts of polar ice sheets only when temperatures of \\(-13^{\\circ}\\)C and warmer have been attained. If, as DiPrinzio and others (2005) assert, migration recrystallization is evident by 200 m at Siple Dome, then such recrystallization must have occurred at appreciably colder temperatures than those observed in cores at Byrd and GISP2.
### Comparison studies of texture and fabrics in the Siple Dome core
A comparison of results obtained by DiPrinzio and others (2005) with those presented here is revealing in that virtually identical profiles were observed with respect to fabrics, but much less so in regard to the textural characteristics. As noted earlier, the DiPrinzio and others (2005) fabric and grain-size data were based on vertical thin-section analyses. The fabric data were subsequently rotated into the horizontal plane, thus allowing direct comparison with our datasets obtained almost entirely from measurements in horizontal thin sections. Though both profiles of grain-size change show similar trends, actual grain sizes measured by DiPrinzio and others (2005), when recalculated in terms of crystal cross-sectional areas, are found to be two to three times smaller than those measured here. Some of this discrepancy may be due to the different analytical techniques used to determine grain size. However, most of the observed differences in each of the grain-size profiles appear related to textural anisotropy intrinsic to vertical and horizontal sectioning. Variable layer structure, including the presence of fine-grained layers, is more likely to occur in vertical sections than in horizontal sections; this would favor the intersection, generally, of larger crystals in horizontal sections.
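The recalculation referred to above amounts to converting a reported mean grain diameter into the area of a circle of the same diameter; a minimal sketch of that conversion (assuming the published sizes are equivalent diameters) follows.

```python
import math

# Convert a mean grain size reported as an equivalent diameter (cm) into a
# cross-sectional area (cm^2) for comparison with the areas quoted in this paper.
def equivalent_area_cm2(mean_diameter_cm):
    return math.pi * mean_diameter_cm ** 2 / 4.0

print(equivalent_area_cm2(1.2))   # ~1.13 cm^2 (illustrative value)
```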
The overall nature of changes in the two fabric/texture profiles can be summarized as follows:
The fabric of ice between 60 and 100 m depth departs little from random, although definite signs of a non-random fabric are becoming apparent by 100 m.
The transition to a broad vertical \\(c\\)-axis maximum is first evident at 99-100 m, an indication that directed stress has now begun to act on ice at Siple Dome. DiPrinzio and others (2005) estimate a total strain of about 20% at 200 m; the broad vertical maximum persists to at least 240 m, a fabric pattern that is consistent with uniaxial vertical compression.
A transition to a broadly dispersed girdle-like fabric occurs between 261 and 279 m. \\(c\\) axes tend to become less dispersed within the vertical plane of the girdle as the depth increases, a condition that persists to 686 m. However, two modifications of the girdle-like fabric appear at intervals over the depth range 261-686 m. The first involves a breakout of the girdle into several dispersed \\(c\\)-axis maxima, none of which coincide with the vertical axis of the core. The distribution of the maxima resembles the fabrics of dynamically recrystallized ice observed at other locations in Antarctica and Greenland. These modified fabrics are best exemplified in thin sections at 560 and 640 m in DiPrinzio and others (2005) and at 524 and 640 m in our studies.
A second modification involves a preferential clustering of \\(c\\) axes about the vertical. Such strong clustering, indicative of shearing, is superimposed on the pre-existing girdle. Examples of the two intermixed fabrics were observed at 261, 341, 360 and 482 m in our study and at 339, 482 and 523 m in the DiPrinzio and others (2005) fabric profile. The vertical clustering of \\(c\\) axes is invariably linked to layers of fine-grained ice, whereas ice exhibiting girdle-like fabrics is much coarser-grained. DiPrinzio and others (2005) attribute the formation of coarse-grained ice to recrystallization while simultaneously asserting that the layers of fine-grained ice have remained unrecrystallized.
The formation of strongly defined, great-circle girdles is generally ascribed to englacial deformation dominated by uniaxial longitudinal extension as exemplified by the fabrics and textures of ice at Vostok, Antarctica, (Lipenkov and others, 1989) and at NorthGRIP, Greenland (Wang and others, 2002). At Siple Dome, however, where the core was drilled just a few hundred meters from the dome's center, deformation in the top 60% of the ice is more likely to be dominated by uniaxial vertical compression than uniaxial longitudinal extension. However, an apparent mixing of two fabrics involving girdle-like features on the one hand and an increased vertical clustering of \\(c\\) axes, indicative of shear in a plane normal to the \\(c\\)-axis cluster on the other hand, raises the question of whether two contrasted deformation states, represented by the girdle and point cluster fabrics respectively, can coexist in Siple Dome ice. Interestingly, Wang and others (2003) have also observed, in deeper ice at NorthGRIP, an increased vertical clustering of the \\(c\\) axes superimposed upon the girdle, a situation very similar to the fabric pattern observed here and by DiPrinzio and others (2005). Wang and others (2003) attribute this mixed fabric pattern to the combined effects of vertical compression and horizontal tension. However, the fabric pattern observed above may simply be a variation of an elongated c-axis maximum, unrelated to the existence of two contrasted deformation states. Support for this view is indicated in a study by Thorsteinsson (2002) who, on the basis of fabric development modeling, has shown that fabrics in which vertical clustering is superimposed on a vertical girdle are readily duplicated in the ice subjected to pure shear stress.
Somewhere between 686 and 703 m an abrupt change is observed in both fabric profiles, in which the dominantly girdle-like fabric above 686 m transitions into a very tight clustering of crystallographic \\(c\\) axes about the vertical. This change to a shear-type fabric at 703 m persists to at least 790 m and is accompanied by an order-of-magnitude reduction in grain size in both our and the DiPrinzio and others (2005) datasets.
A dramatic change back to very coarse-grained ice between 790 and 804 m occurs coincidentally with a significant change of fabric in which \\(c\\) axes are now dispersed into several maxima, none of which is centered on the vertical. The temperature of the ice at 804 m is about \\(-9^{\\circ}\\)C and the fabric is typical of ice that has undergone migration recrystallization. As indicated earlier, such recrystallization is normally observed at temperatures warmer than \\(-13^{\\circ}\\)C in near-basal ice, for example, at Byrd Station (Gow and Williamson, 1976) and at GISP2 (Gow and others, 1997), and less certainly at GRIP (Thorsteinsson and others, 1997). However, below 848 m in both fabric profiles at Siple Dome there are too few crystals to obtain statistically significant fabrics, except to indicate continuation of the multi-maxima pattern of ice subjected to migration recrystallization.
## Sediment Incorporation and Gas Content of Basal Ice
Lithic debris was first encountered at 1001.82 m depth, approximately 2 m above the glacier bed at Siple Dome. Entrained debris consisted mainly of widely dispersed particles in the silt-sand range. However, a singularly large tabular grain measuring \\(2.6\\,\\mathrm{cm}\\times 1.0\\,\\mathrm{cm}\\) and \\(0.5\\,\\mathrm{cm}\\) thick, oriented in the horizontal plane, was observed near the top of the transition between bubbly glacier ice and the debris-bearing basal ice (Fig. 7). Examples of the dispersed nature of sediment incorporation in Siple Dome basal ice are given in Figure 8.
The basal ice at Siple Dome contained none of the coarser-grained debris such as cobbles and boulders that occurred sporadically in the basal ice at Byrd, located about \\(500\\,\\mathrm{km}\\) upstream of Siple Dome. Furthermore, the debris entrained in ice at Byrd was strongly stratified throughout and consisted predominantly of clay, sand and pebble-sized particles in addition to the aforementioned cobbles and boulders (Gow and others, 1979). Most of the particles identified as pebbles in the Byrd core actually consisted of sedimentary aggregates composed of clay, silt and sand that disintegrated upon melting. Similarly, it was found that many of the sand-sized particles entrained in the basal ice at Siple Dome were also composed of frozen aggregates of silt and clay.
Results of debris concentration measurements of eight samples from the basal ice at Siple Dome, together with measurements of their total gas content, are presented in Table 2. Additionally, analyses of total gas content were performed on five bubbly glacial ice samples from directly above the transition with the debris-bearing basal ice. Total gas content was measured using a technique described by Langway (1958a) in which gas evolved during the complete melting, under kerosene, of accurately weighed ice samples is collected in a burette. Sediment released when ice samples were melted was oven-dried and weighed to determine debris concentrations.
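The reduction of these measurements to the quantities listed in Table 2 is straightforward; the sketch below corrects a burette reading to STP with the ideal-gas law and expresses the dried sediment as a weight percentage of the melted sample. The laboratory temperature, pressure and sample values shown are invented for illustration.

```python
# Convert a measured gas volume to mL per 100 g of ice at STP (0 C, 101.325 kPa),
# and express oven-dried sediment as weight percent of the melted ice sample.
# All numerical inputs are illustrative, not values from this study.

def gas_content_stp(v_measured_ml, t_lab_c, p_lab_kpa, ice_mass_g):
    v_stp = v_measured_ml * (273.15 / (273.15 + t_lab_c)) * (p_lab_kpa / 101.325)
    return v_stp / ice_mass_g * 100.0           # mL (100 g)^-1 at STP

def debris_weight_percent(dry_sediment_g, ice_mass_g):
    return dry_sediment_g / ice_mass_g * 100.0

print(gas_content_stp(22.0, 20.0, 99.0, 200.0))    # ~10 mL (100 g)^-1
print(debris_weight_percent(0.42, 200.0))          # 0.21 wt%
```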
Debris concentration in the basal ice at Siple Dome ranged from 0.01% to 1.08% by weight. These sediment loads are between one and two orders of magnitude smaller than the 12-15% by weight of debris entrained in basal ice cores at Byrd (Gow and others, 1979); they clearly reflect major differences in the textural characteristics and debris concentrations in basal ice at these two locations. An examination of Table 2 suggests a moderate level of correlation between increased debris concentration and decreased total gas content.

Figure 7: Lithic fragment, oriented in the horizontal plane, located near the top of debris-bearing ice 2 m above bedrock at Siple Dome. Smallest scale subdivisions measure 1 mm.

Figure 8: Nature of debris entrainment in ice in cores from the bottommost ice at Siple Dome. Characteristic features included layers of dispersed debris between layers of debris-free ice (a, b) and a sustained sequence of debris-bearing ice (c). Smallest scale subdivisions measure 1 mm.
According to Engelhardt (2004), the ice at Siple Dome is frozen to its bed at a temperature of -2.54\\({}^{\\circ}\\)C, about 1.9\\({}^{\\circ}\\)C lower than the estimated pressure-melting point. With the basal ice temperature so close to pressure melting, the question becomes: how and when was debris incorporated into the ice? In the particular case of debris entrapment in basal ice at Byrd Station, Gow and others (1979) concluded that this occurred simultaneously with a process of 'freeze-on' of glacially derived meltwater at the bed. A critical observation was the absence of gas in the basal ice at Byrd, which Gow and others (1979) have suggested may well constitute the single most diagnostic test for discriminating between debris entrained in the melt-refreeze process and debris incorporated by purely mechanical means (e.g. shearing). At Byrd in 1968, water was encountered after the drill had penetrated the bed. This clearly demonstrated that at that time melting of the basal ice was occurring at the bed.
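The quoted offset from the pressure-melting point can be checked with a simple estimate; the sketch below combines an assumed mean column density with a Clausius-Clapeyron slope for pure ice (about 0.074 K MPa^-1), neither of which is a value given in the text.

```python
# Estimate the pressure-melting point at the bed from the ice overburden.
rho_mean = 900.0    # kg m^-3, assumed mean column density
g = 9.81            # m s^-2
thickness = 1004.0  # m, ice thickness at Siple Dome
cc_slope = 0.0742   # K MPa^-1, pressure-melting slope for pure ice (assumption)

p_bed_mpa = rho_mean * g * thickness / 1.0e6
t_melt_c = -cc_slope * p_bed_mpa
print(f"P_bed ~ {p_bed_mpa:.2f} MPa, melting point ~ {t_melt_c:.2f} C")
# ~ -0.66 C, so a basal temperature of -2.54 C lies roughly 1.9 C below melting.
```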
A case for meltwater refreezing at the bed at Siple Dome certainly applies to the two bottommost samples of ice tested for total gas content, which is likely to be much less than indicated in Table 2. In these two samples, maximum dissolution of gas in the meltwater was assumed. However, only occasional bubbles were observed entering the burette during melting of these two samples, indicating that minimal dissolution of gas in the meltwater had actually occurred. Except for the sample from 1002.09-1002.13 m, which is bubble-free and exhibited a low total gas content, all the remaining debris-bearing samples were obtained from moderately bubbly ice that also yielded moderate to elevated gas concentrations when melted. Enhanced gas concentrations listed in Table 2 are still appreciably lower than those of the air-rich glacial ice located above the transition to debris-bearing basal ice. Total gas contents of up to 4 mL per 100 g of ice can occur when water is frozen rapidly. If the saturation level for gas dissolved in the water is exceeded, bubbles begin to nucleate at the freezing interface and become entrained in the ice as freezing proceeds. Such a process could explain the origin of the widespread occurrence of bubbles in the ice with moderate gas concentrations. Downward diffusion of gas from the overlying air-rich glacial ice could also have contributed to moderate gas concentrations in basal ice at Siple Dome. While advocating freeze-on as the dominant mechanism of debris incorporation in basal ice at Camp Century, Greenland, Herron and Langway (1979) have suggested that downward diffusion of air from bubble-rich glacial ice has also led to moderate gas concentrations similar to those observed in basal ice at Siple Dome.
As indicated above, the ice at Siple Dome is still frozen to its bed. While evidence to account for the incorporation of debris in basal ice at Siple Dome is best explained in terms of a freeze-on process, we have no way of determining just when ice at the bed attained the pressure-melting point or for how long this condition persisted before freeze-on of meltwater occurred to allow 2 m of debris-bearing ice to be accreted.
## Conclusions
The quality of the ice cores drilled at Siple Dome varied from good to excellent in the top 350 m. However, fracturing of the core increased in severity between 400 m and the bottom of the ice dome at 1004 m depth. This interval of core constitutes the brittle zone in which bubbly ice persists without break to the ice-rock interface. Melt layers were observed sporadically throughout the core as were inclined layers. Decreasing concentrations of air bubbles below 600 m are attributed to gas hydrate formation. Cores from the brittle zone at Siple Dome, unlike other deep cores from Antarctica and Greenland, have undergone minimal relaxation and have remained brittle and prone to fracturing during processing more than 5 years after they were drilled. This behavior is attributed to the existence of unrelieved residual stresses possibly related to the intimate contact of Siple Dome with Kamb and Bindschadler Ice Streams located on either side of the dome.
Structurally the Siple Dome ice core is characterized by extensive recrystallization, including a progressive increase in the size of crystals in the upper levels of the ice. This crystal growth was accompanied by development of a fabric favoring a broad clustering of crystallographic \\(c\\) axes about the vertical, consistent with the rotation of \\(c\\) axes toward the axis of vertical compression. Beginning at about 261 m, a change in _c_-axis orientation toward a vertical girdle-like fabric began to appear and by 360 m had become fully established; its formation is attributed to the ice undergoing uniaxial longitudinal extension. This girdle-like fabric generally persisted to about 686 m, though in most thin sections the girdle was somewhat incomplete and in a number of cases was replaced by three or four discrete maxima, possibly indicative of migration recrystallization. Between 686 and 703 m a very abrupt change in _c_-axis fabric, that persisted to around 800 m depth, occurred coincidentally with sustained deposition of volcanic ash. The fabric, characterized by a very tight clustering of the \\(c\\) axes about the vertical axis of the core, and accompanied by an order-of-magnitude reduction in the size of the crystals, is attributed to deformation dominated by strong horizontal shear. It is speculated that this formation of a shear fabric between 700 and 800 m is linked to enhanced concentrations of silt-sized volcanic particles affecting the rheological properties of the ice. In ice below 800 m and within 2 m of the bed, the _c_-axis fabric had converted entirely to a multi-maximum orientation, composed of very large crystals with dimensions often exceeding the diameter of the core.
\\begin{table}
\\begin{tabular}{c c c} \\hline
Depth range & Entrained debris & Gas content \\\\
m & wt\\% & mL (100 g)\\({}^{-1}\\) at STP* \\\\ \\hline
995.80-995.85 & β & 11.56 \\\\
997.53-997.58 & β & 11.38 \\\\
998.11-998.16 & β & 10.61 \\\\
1000.06-1000.11 & β & 11.46 \\\\
1000.64-1000.69 & β & 10.23 \\\\
1001.85-1001.90 & 0.19 & 8.18 \\\\
1002.09-1002.13 & 1.08 & 2.83 \\\\
1002.55-1002.60 & 0.71 & 4.97 \\\\
1002.77-1002.82 & 0.10 & 7.28 \\\\
1003.03-1003.08 & 0.16 & 6.75 \\\\
1003.68-1003.73 & 1.06 & 2.12 \\\\
1003.73-1003.78 & 0.38 & 1.46 \\\\ \\hline
\\end{tabular}

*Standard temperature and pressure.
\\end{table}
Table 2: Siple Dome basal ice samples

The bottom 2 m of ice at Siple Dome contained widely dispersed sediment, principally in the silt-sand particle size range. Its occurrence is attributed to past accretion by freeze-on of basal meltwater. Currently the basal ice is frozen to the bed at -2.54\\({}^{\\circ}\\)C.
## Acknowledgements
This research was funded by the Office of Polar Programs, US National Science Foundation, under grant OPP-0126212. Additional financial support was provided by the US Army Cold Regions Research and Engineering Laboratory. We thank B. Elder of the US Army Engineer Research and Development Center-Cold Regions Research and Engineering Laboratory (ERDC-CRREL) and the curatorial staff of the US National Ice Core Laboratory for their help in processing the Siple Dome ice core. Logistical support was provided by Antarctic Support Associates/Raytheon Polar Services Company and the New York Air National Guard. The authors also thank an anonymous reviewer for some very insightful comments.
## References
* Alley, R.B. 1988. Fabrics in polar ice sheets: development and prediction. _Science_, **240**(4851), 493-495.
* Alley, R.B. and I.M. Whillans. 1991. Changes in the West Antarctic ice sheet. _Science_, **254**(5034), 959-963.
* Alley, R.B., A.J. Gow, S.J. Johnsen, J. Kipfstuhl, D.A. Meese and T. Thorsteinsson. 1995. Comparison of deep ice cores. _Nature_, **373**(6513), 393-394.
* Azuma, N. and A. Higashi. 1985. Formation processes of ice fabric pattern in ice sheets. _Ann. Glaciol._, **6**, 130-134.
* Azuma, N. and others. 1999. Textures and fabrics in the Dome F (Antarctica) ice core. _Ann. Glaciol._, **29**, 163-168.
* Butkovich, T.R. 1953. Density of single crystals of ice from a temperate glacier. _SIPRE Res. Pap._ 7.
* Dansgaard, W. and 6 others. 1982. A new Greenland deep ice core. _Science_, **218**(4579), 1273-1277.
* Das, S.B. 2003. West Antarctic ice sheet surface melting and Holocene climate variability. (PhD thesis, The Pennsylvania State University.)
* Das, S.B. and R.B. Alley. 2005. Characterization and formation of melt layers in polar snow: observations and experiments from West Antarctica. _J. Glaciol._, **51**(173), 307-313.
* De La Chapelle, S., O. Castelnau, V. Lipenkov and P. Duval. 1998. Dynamic recrystallization and texture development in ice as revealed by the study of deep ice cores in Antarctica and Greenland. _J. Geophys. Res._, **103**(B3), 5091-5105.
* DiPrinzio, C.L., L.A. Wilen, R.B. Alley, J.J. Fitzpatrick, M.K. Spencer and A.J. Gow. 2005. Fabric and texture at Siple Dome, Antarctica. _J. Glaciol._, **51**(173), 281-290.
* Duval, P. and O. Castelnau. 1995. Dynamic recrystallization of ice in polar ice sheets. _J. Phys. IV_ [Paris], **5**, 197-205.
* Engelhardt, H. 2004. Ice temperature and high geothermal flux at Siple Dome, West Antarctica, from borehole measurements. _J. Glaciol._, **50**(169), 251-256.
* Fujita, S., N. Nakawo and S. Mae. 1987. Orientation of the 700 m Mizuho core and its strain history. _Proc. NIPR Symp. Polar Meteorol. Glaciol._, **1**, 122-131.
* Gow, A.J. 1968. Deep core studies of the accumulation and densification of snow at Byrd Station and Little America V, Antarctica. _CRREL Res. Rep._ 197.
* Gow, A.J. 1970. Deep core studies of the crystal structure and fabrics of Antarctic glacier ice. _CRREL Res. Rep._ 282.
* Gow, A.J. 1971. Relaxation of ice in deep drill cores from Antarctica. _J. Geophys. Res._, **76**(11), 2533-2541.
* Gow, A.J. and H. Engelhardt. 2000. Preliminary analysis of ice cores from Siple Dome, West Antarctica. _In_ Hondoh, T., ed. _Physics of ice core records._ Sapporo, Hokkaido University Press, 63-82.
* Gow, A.J. and D.A. Meese. 2007. The distribution and timing of tephra deposition at Siple Dome, Antarctica: possible climatic and rheologic implications. _J. Glaciol._, **53**(183), 585-596.
* Gow, A.J. and T. Williamson. 1971. Volcanic ash in the Antarctic ice sheet and its possible climatic implications. _Earth Planet. Sci. Lett._, **13**(1), 210-218.
* Gow, A.J. and T. Williamson. 1976. Rheological implications of the internal structure and crystal fabrics of the West Antarctic ice sheet as revealed by deep core drilling at Byrd Station. _Geol. Soc. Am. Bull._, **87**(12), 1665-1677.
* Gow, A.J., S. Epstein and W. Sheehy. 1979. On the origin of stratified debris in ice cores from the bottom of the Antarctic ice sheet. _J. Glaciol._, **23**(89), 185-192.
* Gow, A.J., D.A. Meese and R.B. Alley. 1993. Discontinuities including possible distortion of the environmental record in cores of deep basal ice from central Greenland. _Eos_, **74**(3), 84.
* Gow, A.J. and others. 1997. Physical and structural properties of the Greenland Ice Sheet Project 2 ice cores: a review. _J. Geophys. Res._, **102**(C12), 26,559-26,575.
* Herron, S. and C.C. Langway, Jr. 1979. The debris-laden ice at the bottom of the Greenland ice sheet. _J. Glaciol._, **23**(89), 193-207.
* _Physics of the Movement of Ice_, 336-349.
* Langway (1958b) Langway, C.C., Jr. 1958b. Ice fabrics and the universal stage. _SPIRE Tech. Rep._ 62.
* Lipenkov et al. (1989) Lipenkov, V.A., N.I. Barkov, P. Duval and P. Pimenta. 1989. Crystalline texture of the 2083 m ice core at Vostok Station, Antarctica. _J. Clacid._, **35**(121), 392-398.
* Nereson et al. (1996) Nereson, N.A., E.D. Waddington, C.F. Raymond and H.P. Jacobson. 1996. Predict age-depth scales for Siple Dome and inland WAIS ice cores in West Antarctica. _Geophys. Res. Lett._, **23**(22), 3163-3166.
* Paterson (1994) Paterson, W.S.B. 1994. _The physics of glaciers. Third edition_. Oxford, etc., Elsevier.
* Rigby (1955) Rigby, G.P. 1955. Study of ice fabrics, Thule area, Greenland. _SPIRE Res. Rep._, **2**, 1-6.
* Rigby (1960) Rigby, G.P. 1960. Crystal orientation in glacier and experimentally deformed ice. _J. Glacid._, **3**(27), 589-606.
* Robin (1983) Robin, G. deQ. 1983. _The climatic record in polar ice sheets_. Cambridge, Cambridge University Press.
* Taylor and others (1993) Taylor, K.C. and _9 others_. 1993. Electrical conductivity measurements from the GSP2 and GRIP Greenland ice cores. _Nature_, **366**(6455), 549-552.
* Throstinson (2002) Throstinson, T. 2002. Fabric development with nearest-neighbor interaction and dynamic recrystallization. _J. Geophys. Res._, **107**(B1), 2014. (10.1019/2001180024.4)
* Throstinson et al. (1997) Throstinson, T., J. Kipfstuhl and H. Miller. 1997. Textures and fabrics in the GRIP ice core. _J. Geophys. Res._, **102**(C12), 2658-2659.
* Wang and Azuma (1999) Wang, Y. and N. Azuma. 1999. A new automatic ice-fabric analyzer which uses image-analysis techniques. _Am. J._, **29**, 155-162.
* Wang et al. (2002) Wang, Y., T. Throstinson, J. Kipfstuhl, H. Miller, D. Dahl-Jensen and H. Shoji. 2002. A vertical glide fabric in the NorthGRIP deep ice core, North Greenland. _Ann. Glacid._, **35**, 515-520.
* Wang et al. (2003) Wang, Y., S. Kipfstuhl, N. Azuma, T. Throstinsoninson and H. Miller. 2003. Ice fabrics study in the upper 1500 m of the Dome C (East Antarctica) deep ice core. _Ann. Glacid._, **37**, 97-104.
* Welen (2000) Welen, L.A. 2000. A new technique for ice-fabric analysis. _J. Glacid._, **46**(152), 129-139.
* Wilen et al. (2003) Wilen, L.A., C.L. DiPrinio, R.B. Alley and N. Azuma. 2003. Development, principles, and applications of automated ice fabric analyzers. _Microsc. Res. Tech._, **62**(1), 2-18. | The quality of the ice core from Siple Dome, West Antarctica, varied widely, with significant fracturing below 400 m. Bubbly ice persisted to the ice-rock interface at 1004 m and constituted the brittle zone. The core has undergone minimal relaxation and has remained brittle and prone to fracturing more than 5 years after it was drilled. This behavior is attributed to unrelieved stresses from Kamb and Bindschaller ice Streams (former Ice Streams C and D) flanking the dome. Melt layers were identified sporadically throughout the core, as were inclined layers tilted at angles that occasionally exceeded 10\\({}^{\\circ}\\). Structurally, the ice was characterized by extensive recrystallization including grain-size changes from 0.074 cm\\({}^{2}\\) at 59 m to \\(>\\)50 cm\\({}^{2}\\) at 992 m, and major transitions in _c_-axis fabrics. Unusual fabrics included vertical _c_-axis clusters superimposed on vertical girdles that may reflect vertical compression acting in conjunction with horizontal tension. The sudden appearance of a shear-type fabric at 700-800 m appears closely linked to the occurrence of abundant tephra particles embedded in the ice. The occurrence of dispersed sediment in the bottom 2 m is attributed to freeze-on of basal meltwater.
1 | Give a concise overview of the text below. | 285 |
arxiv-format/2103_04069v2.md | # Adaptive Lidar Scan Frame Integration:
Tracking Known MAVs in 3D Point Clouds
Li Qingqing1, Yu Xianjia1, Jorge Pena Queralta1, Tomi Westerlund1
1Turku Intelligent Embedded and Robotic Systems (TIERS) Lab, University of Turku, Finland.
Emails: 1{qingqli, xianjia.yu, jopequ, toewe}@utu.fi
## I Introduction
Micro-aerial vehicles (MAVs) have seen an increasing adoption across a variety of application domains in recent years [1]. Multiple works have been devoted to the navigation of MAVs in GNSS-denied environments [2], and state estimation in both single [3] and multi-MAV systems [4]. In this paper, we are particularly interested in tracking and state estimation from an external system, for those applications where MAVs are deployed together with or from unmanned ground vehicles (UGVs) [5, 6].
From the perspective of deployment within multi-robot systems, being able to track MAVs from UGVs enables miniaturization and higher degrees of flexibility, lowering the need for high-accuracy onboard localization. A recent and significant example of multi-robot system deployment in GNSS-denied environments is the DARPA Subterranean challenge [7, 8]. Reports from participating teams indicate that localization and collaborative sensing were among the key challenges, with MAVs being deployed from UGVs dynamically during the challenge. Since MAVs often rely on visual-inertial odometry (VIO) for self and relative state estimation [9], relying on external lidar-based tracking can also extend the operability to low-visibility or other domains where VIO has inherent limitations [10, 11].
Tracking and detecting MAVs has been a topic of interest for researchers in recent years. First, owing to the increasing need to identify and detect foreign objects or drones in areas with controlled airspace such as airports [12, 13]. Second, to optimize the utilization of MAVs as flexible mobile sensing platforms [14]. This paper focuses on the latter use. Compared to the existing literature, which relies mainly on vision-based techniques [15], we provide a lidar-based solution that can be utilized more independently of the environmental conditions. Until recently, most 3D lidars provided relatively sparse point clouds in terms of object recognition [16], with limited vertical resolution in inexpensive devices. However, solid-state lidars have recently emerged as state-of-the-art in terms of long-range scanners featuring high-density point clouds [17]. The
Fig. 1: Conceptual illustration of the field-of-view coverage with different integration times on a Livox Horizon lidar (top) and its application to tracking MAVs (bottom).
main caveat is the limited field of view (FoV) in most of these devices [18], but solutions include utilizing multiple lidars or correspondingly adjusting the position and orientation of the robot base where the lidar is installed.
We are particularly interested in the problem of tracking a MAV that is deployed from a ground robot. We thus assume that the initial position of the MAV after take-off is known. We also assume that its shape and size are known a priori. We develop methods targeting solid-state lidars owing to the higher density of the resulting point cloud even with more limited FoV. Moreover, in these lidars, the concept of a frame or scan frequency changes considerably. While in rotating 3D lidars a frame is naturally related to a single revolution, solid-state lidars with non-repetitive scan patterns can output point clouds at adjustable frequencies with varying FoV coverage, as illustrated in Fig. 1 (a). This opens the door to new lidar perception methods that exploit the possibility of adaptively adjusting the frame integration time to better sense the objects of interest. To the best of our knowledge, this approach has not been previously studied. We apply the proposed adaptive lidar scan integration methods to the problem of a UGV tracking a MAV for external state estimation, as conceptualized in Fig. 1 (b). While our focus is on MAVs, the proposed methods can also be easily adapted to detect foreign objects or intruder MAVs more accurately. We first put our focus on single and known MAV detection, but present generic methods that can be extended to multi-MAV tracking as long as FoV limitations are accounted for.
The main contribution of this paper is twofold. We first introduce a novel adaptive lidar scan integration method enabling more accurate and reliable object recognition and tracking from 3D point clouds, specifically applied to MAV detection. In addition, we then define a multi-modal tracking system that relies on processing point clouds resulting from different integration times for higher accuracy and persistent tracking, while validating the trajectories using a priori known information about the MAV dimensions.
The remainder of this paper is organized as follows. In Section II, we review the state-of-the-art in MAV detection, lidar-based object detection and tracking, and a handful of existing works on vision-, radar- and lidar-based MAV detection and tracking. Section III then formulates the adaptive scanning method and how it applies to a MAV detection and tracking problem. Section IV reports on our methodology, and Section V describes experimental results with different settings. Finally, we conclude this work and outline future research directions in Section VI.
## II Related Works
This section reviews the literature in the areas of detection and tracking of MAVs. Owing to the scarcity of works devoted to lidar-based MAV tracking, we have focused on: (i) the state-of-the-art in MAV detection, mostly vision-based; (ii) lidar-based detection and tracking of small objects; and (iii) detection of MAVs based on lidar or radar point cloud data.
### _Vision-based MAV Detection_
Most of the work to date in tracking small objects and MAVs has been related to vision-based approaches [19, 20, 15]. Vision-based approaches can be classified into those that rely on passive or active visual markers, and those that detect and track objects in general, e.g., with traditional computer vision or deep learning. In the former category, [20] provides an example of tracking based on passive artificial visual markers, which can be used to calculate the relative 3D position of the MAV from a camera. In a different direction aimed at MAV-to-MAV detection, Walter et al. presented UVDAR, an ultra-violet (UV) solution for relative localization in multi-MAV systems [21].
Regarding the latter category, the development of deep convolutional neural networks (CNNs) in recent years has facilitated the adoption within the domain of object detection and tracking. Arguably, a significant portion of the state-of-the-art in tracking is based on deep learning methods [22]. These methods often offer significantly higher degrees of accuracy and robustness. For instance, Vrba et al. have presented a marker-less system for relative localization of MAVs [15], which can be applied to detecting foreign or intruder MAVs.
The potential of depth cameras for detecting MAVs has also been showcased in the literature. For instance, deep learning models processing depth maps have been applied to tracking a MAV and aiding it in navigating and avoiding obstacles [23].
While depth cameras can provide accurate location and size measurements, and vision sensors, in general, are capable of robust tracking and relative positioning, our focus in this paper is on lidars owing to their flexibility in terms of environmental conditions, and because of their significantly higher range and accuracy when compared to depth cameras.
### _Lidar-based object tracking_
More in line with the research presented in this paper is point-cloud-based tracking. While this generally refers to lidar point clouds, some of the work in the literature is also devoted to point clouds generated by stereo or depth cameras, or radars. In general terms, traditional approaches to tracking in point cloud data rely mostly on distance-based clustering [24].
Nonetheless, significant work has been carried out in the area of deep learning voxel-based methods for segmentation and detection of objects in 3D point clouds. For instance, VoxelNet [25] implements a voxel feature extractor (VFE) on point clouds to characterize object points. Other networks have been proposed that directly process point sets, such as PointNet [26] and PointNet++ [27], to fully exploit the inherent information in the point cloud data for object tracking. Building on these, works such as [28] have proposed end-to-end, point-to-box networks for 3D object tracking.
When considering small objects, the specific literature is more scarce. In [16], Razlaw et al. focus on detecting people in sparse point clouds from multi-channel rotating 3D lidars. Compared with this approach, we focus on exploiting the adaptive frame integration capabilities of solid-state lidars to optimize the point cloud density and do not necessarily assume sparsity.
### _Lidar and radar-based MAV detection and tracking_
When the focus is more on detection rather than on accurate localization or tracking of the detected MAV, radar has been proven a robust solution [29]. Lidars, in any case, have been identified as having big potential for MAV detection and tracking [30].
In summary, while object detection and tracking in point clouds is a relatively mature field, we have found a gap in the literature in terms of optimizing the way these point clouds are generated. In particular, we see most of the current work being focused on processing point clouds, while our objective is to study how we can enhance the performance of a given tracking algorithm by improving the quality of the point cloud data it is fed with. Our focus here is on actively adapting lidar-based perception for detecting and tracking a flying MAV, where the density and size of the point cloud are optimized based on, e.g., the MAV's distance to the lidar sensor or its speed.
## III Problem Definition
We consider the problem of tracking a MAV from a ground robot. The ultimate objective is, e.g., to improve the collaboration between the robots and the ability of the MAV to navigate in complex environments aided by the UGV. The rest of this paper delves into the definition, design, and implementation of methods for tracking a single MAV. Nonetheless, these can be extended to multiple MAVs. The main limitation when tracking multiple units is the FoV of the lidar sensors onboard the ground vehicle, and therefore assumptions have to be made about the spatial distribution of the MAVs (always within the FoV of the ground robot). Alternatively, more lidar scanners can be installed to increase the FoV.
### _Rationale_
The majority of 3D laser scanners available to date are multi-channel, rotating lidars. While devices with 64 or 128 vertical channels can provide high angular resolution in both horizontal and vertical dimensions, these high-end devices are not the most common. Moreover, the scanning pattern is in general repetitive, which is beneficial from a geometric perspective in terms of data processing but does not enable higher FoV coverage with longer exposure if the position of the sensor is fixed. New solid-state lidars featuring non-repetitive scan patterns, albeit having a more limited FoV, can provide denser point clouds and often feature longer detection ranges. In particular, we are interested in the possibility of dynamically adjusting the FoV coverage and the density of the point cloud to be processed for detection and tracking. Among the benefits of these new lidars and of adaptive scanning rates is also higher resilience against one of the challenges in lidar-based perception: motion-induced distortion [31]. In general, the literature targeting tracking of MAVs using lidar scanners is scarce, and existing methods in point cloud object detection and tracking consider mainly static frames. We aim to define more optimal settings for generating point clouds based on the state (speed and distance to the sensor) of the MAV being tracked.
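To make the notion of an adjustable frame integration time concrete, the following sketch (our own illustration, not code from the paper) accumulates timestamped lidar points into frames of configurable duration; the point stream is synthetic and the durations are chosen only to mirror the 1-100 Hz range discussed later.

```
import numpy as np

def integrate_frame(points, timestamps, t_end, integration_time):
    """Return the points whose timestamp falls inside the window
    [t_end - integration_time, t_end]. Longer windows yield denser frames
    (and, with non-repetitive scan patterns, wider FoV coverage), at the
    cost of more motion distortion for moving targets."""
    mask = (timestamps >= t_end - integration_time) & (timestamps <= t_end)
    return points[mask]

# Synthetic stream: 100k timestamped points spread over one second.
rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, size=(100_000, 3))
ts = np.sort(rng.uniform(0.0, 1.0, size=100_000))

for T in (0.01, 0.1, 1.0):  # 100 Hz, 10 Hz and 1 Hz frames
    frame = integrate_frame(pts, ts, t_end=1.0, integration_time=T)
    print(f"integration time {T:5.2f} s -> {len(frame)} points")
```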
### _System Overview_
We propose three simultaneous tracking modalities, with three processes analyzing point cloud frames resulting from integration times that span several orders of magnitude. A general view of the multi-modal tracking processes is shown in Fig. 2. In more detail, the three modalities are described below:
1. Adaptive high-frequency tracking. In this first process, sparse point clouds are integrated at frequencies up to \\(100\\,\\mathrm{Hz}\\). The MAV is only trackable through a reduced number of points, but we are able to estimate its position and speed with high accuracy. In this process, the MAV is not necessarily recognizable in all processed frames.
2. Adaptive medium-frequency tracking. The second process operates at frequencies within the range of typical lidar scanners (i.e., 5 to \(20\,\mathrm{Hz}\)). The frequency within that same range is dynamically adjusted to optimize the density of the point cloud. At these frequencies, the extracted point cloud representing the MAV is distorted by motion, and thus the localization and speed estimation accuracy is lower. However, this process enables more robust and persistent tracking as the MAV can be recognized in most if not all frames.
3. Low-frequency trajectory and object validation. The third and last process that runs in parallel to the previous two performs long-term tracking and validates the reconstructed trajectory of the MAV based on predefined dimensional constraints. An illustration of such trajectory reconstruction is shown in Fig. 3.
Fig. 2: Overview of the proposed methods, where tracking is simultaneously performed at three different scan frequencies. Within each of these three threads, the scan frame integration is adjusted based on the distance to the target MAV and its speed.
Fig. 3: Integration trajectory recovery example
### _Formulation_
Let \\(\\mathcal{P}_{k}(I_{r}^{k})\\) be the point cloud generated by the lidar with an integration time \\(I_{r}^{k}\\), and let \\(\\textbf{s}_{UGV}^{k}\\)={ \\(\\textbf{q}_{UGV}^{k}\\), \\(\\textbf{q}_{UGV}^{k}\\)} be the position and speed defining the state of the UGV at time \\(k\\). We also denote by \\(\\textbf{s}_{MAV}^{k}\\)={\\(\\textbf{p}_{MAV}^{k}\\),\\(\\textbf{p}_{MAV}^{k}\\)} the position and speed of the MAV. We use discrete steps represented by \\(k\\) owing to the discrete nature of the set of consecutive point clouds. The output of the main tracking algorithm is to extract from \\(\\mathcal{P}_{k}(I_{r}^{k})\\) the set of points representing the MAV, which we denote by \\(\\mathcal{P}_{MAV}^{k}\\), and to adjust the integration time for the next point cloud, \\(I_{HF}^{k},I_{MF}^{k}\\).
### _Adaptive scan integration_
Since we assume that the state of the MAV \((\textbf{p}_{MAV}^{k-1},\,\dot{\textbf{p}}_{MAV}^{k-1})\) is initially known, the point cloud processing proceeds as follows. First, we perform ground removal based on the known position of the UGV and the last-known altitude of the MAV. We then proceed with finding the nearest neighbor points to a predicted MAV position. This step is repeated for both the high- and medium-frequency scans, the former providing a more accurate position estimation while the latter is more persistent in time. Finally, these two estimations are combined, and the results are utilized to adjust the integration rates based on the point cloud density expected for the given distance and speed. The UGV is also controlled to maintain the MAV within the FoV of its lidar. This process is outlined in Algorithm 1.
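A minimal sketch of this per-frame extraction and integration-rate update is given below. It is our own illustration of the process referred to as Algorithm 1, assuming a crude height-threshold ground removal, a fixed nearest-neighbour search radius, and an illustrative target point count; none of these values are taken from the paper.

```
import numpy as np
from scipy.spatial import cKDTree

def extract_mav(cloud, p_pred, ground_z=0.3, search_radius=0.5):
    """Remove near-ground points, then keep the neighbours of the predicted
    MAV position p_pred within a fixed search radius."""
    above = cloud[cloud[:, 2] > ground_z]  # crude ground removal
    if len(above) == 0:
        return np.empty((0, 3))
    idx = cKDTree(above).query_ball_point(p_pred, r=search_radius)
    return above[idx]

def update_state_and_rate(mav_points, p_prev, dt, rate, n_target=20,
                          rate_bounds=(5.0, 100.0)):
    """Update the MAV position/speed estimate and adapt the scan rate so
    that the next frame is expected to contain roughly n_target MAV points
    (fewer points than desired -> lower rate, i.e. longer integration)."""
    if len(mav_points) == 0:
        return p_prev, np.zeros(3), max(rate_bounds[0], rate / 2.0)
    p_new = mav_points.mean(axis=0)
    v_new = (p_new - p_prev) / dt
    rate = rate * len(mav_points) / n_target
    return p_new, v_new, float(np.clip(rate, *rate_bounds))

# Toy example: a synthetic scene with a small cluster around the true MAV.
rng = np.random.default_rng(1)
scene = rng.uniform(-20, 20, size=(5000, 3))
scene[:, 2] = np.abs(scene[:, 2])
mav_true = np.array([4.0, 2.0, 1.5])
scene = np.vstack([scene, mav_true + 0.05 * rng.standard_normal((15, 3))])

mav_pts = extract_mav(scene, p_pred=mav_true + 0.1)
p, v, new_rate = update_state_and_rate(mav_pts, p_prev=mav_true, dt=0.1, rate=10.0)
print(len(mav_pts), p.round(2), new_rate)
```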
```
Input: Low-frequency integration rate: \(I_{LF}^{k-1}\)
       3D lidar point cloud: \(\mathcal{P}_{k}\left(I_{LF}^{k-1}\right)\)
       MAV state history: \((\textbf{p}_{MAV},\,\dot{\textbf{p}}_{MAV})\)
Output: Trajectory validation (bool)
while new \(\mathcal{P}_{k}\left(I_{LF}^{k-1}\right)\) do
    // Generate cubic splines with position and speed constraints
    \(\{B_{i}\}\leftarrow\{\textbf{p}_{MAV},\,\dot{\textbf{p}}_{MAV}\}\);
    // Estimate expected point cloud from known density
    // at given distance and speed
    \(\hat{\mathcal{P}}_{k}\leftarrow\{\{B_{i}\},\,\textbf{p}_{MAV},\,\dot{\textbf{p}}_{MAV}\}\);
    // Calculate IoU
    \(IoU = calc\_IoU\left(\mathcal{P}_{k}\left(I_{LF}^{k-1}\right),\,\hat{\mathcal{P}}_{k}\right)\);
    if \(IoU > th\) then
        return True
    else
        return False
```
**Algorithm 2** Trajectory validation
#### III-D1 Trajectory validation
The main purpose of the low-frequency scan stream is to validate the extracted MAV's trajectory. While the tracking with adaptive scan integration only takes into account the MAV size roughly in terms of distance within which nearest neighbors are looked for, the extracted point cloud is not validated against its known dimensions. This is done when enough points are accumulated into a reconstructed trajectory. As exposed in Algorithm 2, we first perform a cubic spline interpolation based on the history of estimated positions and speeds. To calculate the parameters of the cubic spline, we utilize constraints on the first derivative based on the speed, rather than forcing the first and second derivative to be continuous. Indeed, the acceleration of the MAV can suddenly change. Based on predetermined values of point cloud density as a function of the MAV's distance to the lidar and its speed, we then produce an expected point cloud. We validate the original point cloud given a threshold for the IoU measure with the generated estimate.
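The sketch below illustrates this validation step under stated assumptions: a cubic Hermite spline enforces the speed constraints on the first derivative, the expected point cloud is approximated by densely sampling the spline (rather than from the measured density model used in the paper), and the IoU is computed over occupied voxels with illustrative voxel size and threshold.

```
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def voxel_iou(cloud_a, cloud_b, voxel=0.2):
    """IoU between two point clouds measured on their occupied voxel sets."""
    occupied = lambda c: {tuple(v) for v in np.floor(c / voxel).astype(int)}
    a, b = occupied(cloud_a), occupied(cloud_b)
    return len(a & b) / max(len(a | b), 1)

def validate_trajectory(times, positions, speeds, observed_cloud,
                        n_samples=100, voxel=0.2, th=0.3):
    """Fit a cubic spline constrained by the estimated positions and speeds
    (first-derivative constraints), sample it to build the expected cloud,
    and accept the trajectory if the voxel IoU with the observations
    exceeds the threshold th."""
    spline = CubicHermiteSpline(times, positions, speeds, axis=0)
    expected = spline(np.linspace(times[0], times[-1], n_samples))
    return voxel_iou(observed_cloud, expected, voxel) > th

# Toy example: noisy observations along a straight-line trajectory.
t = np.linspace(0.0, 2.0, 5)
pos = np.stack([2.0 * t, 0.5 * t, np.full_like(t, 1.5)], axis=1)
vel = np.tile([2.0, 0.5, 0.0], (len(t), 1))
t_obs = np.linspace(0.0, 2.0, 60)
obs = np.stack([2.0 * t_obs, 0.5 * t_obs, np.full_like(t_obs, 1.5)], axis=1)
obs += 0.03 * np.random.default_rng(2).standard_normal(obs.shape)
print(validate_trajectory(t, pos, vel, obs))
```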
## IV Methodology
### _Experimental platforms_
The experimental platforms consist of a single ground robot and a commercially available Ryze Tello MAV. The ground robot is an EAI Dashgo platform equipped with a Livox Horizon lidar (\(81.7^{\circ}\times 25.1^{\circ}\) FoV). The lidar is able to output scanned point clouds at up to 100 Hz, featuring a non-repetitive pattern. A pair of ultra-wideband (UWB) transceivers is used to obtain a single range between the robot and the MAV at frequencies ranging from 10 Hz to 100 Hz. The UWB ranging is only used to aid the manual validation of the extracted trajectory in places where there was no external positioning system. In the future, it could be incorporated as part of the tracking algorithm as well, as is becoming increasingly adopted in multi-robot systems [32, 33].
### _Software_
The system has been implemented using ROS Melodic under Ubuntu 18.04. The algorithms are running in the main computer onboard the ground robot. The computer runs the robot's driver1, the Tello MAV driver2, the Livox lidar driver3, and our open-source MAV tracking package4. The latter is a multi-threaded node able to process the different point clouds in real time. The point cloud library (PCL) [34] is utilized to extract the position of the MAV from the lidar's point cloud.
Footnote 1: [https://github.com/TIERS/dashog-d1-ros](https://github.com/TIERS/dashog-d1-ros)
Footnote 2: [https://github.com/TIERS/fello-driver-ros](https://github.com/TIERS/fello-driver-ros)
Footnote 3: [https://github.com/Livox-SDK/livox_ros_driver](https://github.com/Livox-SDK/livox_ros_driver)
Footnote 4: [https://github.com/TIERS/adaptive-lidar-tracking](https://github.com/TIERS/adaptive-lidar-tracking)
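A minimal rospy sketch of one of the point cloud processing threads is shown below. The `/livox/lidar` topic name and the buffering logic are assumptions on our part; the released package is multi-threaded and relies on PCL rather than this simplified Python pipeline.

```
import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

packet_buffer = []  # (timestamp, Nx3 array) tuples awaiting frame integration

def cloud_callback(msg):
    """Convert an incoming PointCloud2 into a numpy array and buffer it;
    frame integration would concatenate all packets newer than the
    currently selected integration window."""
    pts = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                        skip_nans=True)), dtype=np.float32)
    packet_buffer.append((msg.header.stamp.to_sec(), pts))

if __name__ == "__main__":
    rospy.init_node("adaptive_mav_tracker")
    # Topic name assumed from the Livox ROS driver defaults.
    rospy.Subscriber("/livox/lidar", PointCloud2, cloud_callback, queue_size=10)
    rospy.spin()
```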
### _Metrics_
Owing to the lack of an accurate external positioning system such as a motion capture system, our focus is instead on measuring the performance of the tracking at different scan integration rates and manually validating the overall trajectory. The experimental flights are carried out in large indoor halls with multiple columns and objects, as shown in Fig. 3. Another set of experiments is carried out in a small flying area where an external UWB positioning system was available and used to fly the MAV over a predefined trajectory. A characterization of the accuracy of such a system can be found in [35].
## V Experimental Results
In this section, we report on the experimental results, which consist mainly of flights in two different indoor environments under different conditions.
### _Adaptive scan integration_
The first objective of our experiments was to assess the tracking performance at different scan frequencies in order to better model the adaptiveness of our algorithm. In order to adapt the scanning frequency to optimize the tracking performance, key parameters are the point cloud density at different distances and the reliability of the detections at different speeds.
The point cloud density for different scanning frequencies as a function of the distance between the lidar and the MAV is shown in Fig. 5. This measure refers only to the density of the points representing the MAV and not the overall density including the rest of the scene. The darker lines represent the average point cloud density, while the band with higher transparency represents the values within the standard deviation. The size of the Tello MAV is about 500 cubic centimeters. Based on our experiments, reliable tracking at high speeds can be achieved with at least 4 points, while we require at least 20 points at medium scanning frequency. This, however, only applies in free space. As can be seen in Fig. 7, significant noise appears in the point cloud between the MAV and walls in the environment when flying nearby. We discuss further this issue at the end of this section.
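These measurements can be turned into a simple selection rule for the integration time. The sketch below assumes a hypothetical inverse-square model of the MAV return rate as a function of distance; the actual relation is the measured one in Fig. 5, and the minimum point counts (4 and 20) are the only values taken from our experiments.

```
import numpy as np

def points_per_second(distance_m, k=4000.0):
    """Toy empirical model: MAV return rate decays roughly with the square
    of the distance (the real curves are measured, see Fig. 5)."""
    return k / max(distance_m, 1.0) ** 2

def choose_integration_time(distance_m, min_points, t_min=0.01, t_max=1.0):
    """Shortest integration time expected to yield min_points returns on
    the MAV, clamped to the supported frame durations."""
    t = min_points / points_per_second(distance_m)
    return float(np.clip(t, t_min, t_max))

for d in (2, 5, 10, 17):
    t_hf = choose_integration_time(d, min_points=4)   # high-frequency thread
    t_mf = choose_integration_time(d, min_points=20)  # medium-frequency thread
    print(f"{d:>2} m -> HF {t_hf:.3f} s, MF {t_mf:.3f} s")
```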
In terms of the tracking performance based on the speed, we plot in Fig. 6 the distance between consecutive detections at different scanning frequencies. The results in this particular figure cannot be directly utilized to model the adaptive nature of our tracking algorithm. Nonetheless, they can be leveraged to better understand the speed limits beyond which a given scanning frequency no longer provides the distance between detections that would be expected from the MAV speed and the scan frequency.
The results included in Fig. 5 and Fig. 6 have been obtained flying the MAV in a long, straight corridor with a length of about 35 m. The MAV was flying mostly in straight lines and the speed was estimated using both visual odometry and the position history extracted from the lidar data in a partially manual manner.
### _Qualitative trajectory validation_
In order to validate the performance of the tracking algorithm and better understand the limitations of our tracking approach at different scanning frequencies, we compare two different types of trajectories. Owing to the lack of a system to obtain ground truth (e.g., a motion capture system), we provide qualitative analysis for one of the trajectories and compare it with a UWB positioning system in the other one.
First, we test the tracking algorithm on a trajectory where the MAV flies in a large open area at distances from 2 m to over 17 m from the lidar scanner and at variable speeds. In this scenario, the analysis is mostly qualitative, with the trajectories shown in Fig. 8. However, the UWB ranging data and the lidar data have both been manually confirmed, so the maximum positioning error along the track is at worst around 20 cm. Qualitatively, the main result from this experiment is the ability of the tracking algorithm to keep track of the MAV over changes in speed, direction, and at longer distances. The figure only shows frequencies equal to or above \(5\,\mathrm{Hz}\) because at lower scanning frequencies the speed estimation was highly inaccurate during the early stages of the flight. We can see that only at the highest frequency are we able to track the MAV along the complete trajectory, although the trajectory itself is noisier. The higher level of error when estimating the MAV position is due to a lower number of points being detected, which can correspond to different parts of the MAV in consecutive scans. The last subplot shows the overall estimated trajectory, where our algorithm has combined the different scanning frequencies to obtain the smoothness of the medium frequencies and the performance of the higher frequencies. The trajectory also employs the cubic spline interpolation from the validation algorithm.
Fig. 4: Ground robot and MAV utilized in the experiments.
Second, we perform a continuous flight with a predefined circular trajectory in a small flying arena where the UWB positioning system is available. The results for this flight are shown in Fig. 9. The leftmost plot shows the reference position. However, it is worth noting that the accuracy of the lidar, around \(2\,\mathrm{cm}\) for distances smaller than \(20\,\mathrm{m}\), is higher than the average accuracy of 10 to \(15\,\mathrm{cm}\) of the UWB positioning system. Therefore, the trajectory serves merely as a reference and only a qualitative discussion is possible with these results. In any case, owing to the continuous change in the speed of the MAV, which is a priori unknown to the tracking algorithm, again only at frequencies equal to or over \(5\,\mathrm{Hz}\) are we able to track the MAV. Nonetheless, at \(5\,\mathrm{Hz}\) the tracking stops before the fourth revolution is completed, and persistent tracking is only possible when higher frequencies are taken into account.
### _Discussion_
We have shown in this section qualitative results on the performance of the adaptive tracking algorithm and of the same approach applied only at specific fixed scanning frequencies. From both sets of experiments, the main conclusion is that the adaptive approach is able to accommodate a wider variety of scenarios. We have been able to combine the flexibility of high-speed tracking with the robustness of medium frequencies, avoiding the frequent errors of the former and the lower tracking capacity of the latter in more challenging conditions.
Fig. 5: Density of the point cloud representing the MAV based on the distance to the lidar scanner and the scanning frequency.
Fig. 6: Distance between consecutive MAV detections based on its speed and the lidar's scanning frequency.
Fig. 7: Accumulated point cloud for the circular trajectory.
One key limitation when tracking MAVs, as visualized in the circular trajectory experiments, is the low density of the point cloud and the inability to tell the difference between the MAV's points and lidar noise. This is also due to the low reflectivity of the MAV, and there is thus potential for mitigation with more reflective surfaces that could aid in separating the sparse MAV point cloud from the lidar noise originating from nearby objects. As we can see in Fig. 7, the point cloud is very sparse in some areas near the rear wall, making it impossible to reconstruct a robust trajectory there, as there are multiple options available that would meet the dynamic and dimensional constraints of the MAV.
## VI Conclusion
We have presented a set of methods for detecting and tracking MAVs that are deployed from ground robots, assuming that the initial position is known. The focus has been on the introduction of a novel adaptive lidar scan integration method that enables more accurate MAV localization with high-frequency scans, robust and persistent tracking with longer frame integration times, and trajectory validation with low-frequency analysis. Experimental results from different settings confirm the better suitability of the different integration times for different scenarios or MAV behaviour, with our adaptive tracking being able to consistently track a MAV in places where a constant lidar scan frequency cannot. Finally, with an additional method to validate the trajectory based on the known shape and size of the MAV, we are able to confirm that the object being tracked meets the dimensional constraints.
In future works, we will explore the integration of lidar-based tracking into the navigation of the MAV, and the integration of onboard state estimation at the MAV into the tracking algorithm.
## Acknowledgment
This research work is supported by the Academy of Finland's AutoSOS project (Grant No. 328755) and Finnish Foundation for Technology Promotion.
## References
* [1]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [2]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [3]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [4]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [5]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [6]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [7]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [8]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [9]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [10]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [11]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [12]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [13]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [14]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [15]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [16]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [17]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [18]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [19]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [20]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [21]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [22]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [23]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [24]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [25]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [26]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [27]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [28]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [29]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [30]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [31]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [32]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [33]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [34]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [35]M. Blosch, S. Weiss, and S. Scherer (2010) A multi-sensor fusion mav state estimation from long-range stereo, imu, gps and barometric sensors. Sensors17 (1), pp. 11. Cited by: SSI.
* [36]M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart (2010) Vision based mav navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA), pp. 21-28. Cited by: SSI.
* [37]T. Rouek, M. Pecka, P. Cizek, T. Petficek, J. Bayer, V. Salansky, D. Heft, M. Petrik, T. Baca, V. Spurny, et al. (2019) Darpa subterranean challenge: multi-robotic exploration of underground environments. In International Conference on Modelling and Simulation for Autonomous Systems, pp. 274-290. Cited by: SSI.
* [38]T. Nguyen, K. Mohta, C. J. Taylor, and V. Kumar (2020) Vision-based multi-mav localization with anonymous relative measurements using coupled probabilistic data association filter. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3349-3355.
Fig. 8: Estimated trajectories at different frequencies and with adaptive approach (top five plots), and trajectory estimated from our algorithm (bottom plot).
* [10] J. Pena Queralta, L. Qingqing, F. Schiano, and T. Westerlund, \"Vio-uwb-based collaborative localization and dense scene reconstruction within heterogeneous multi-robot systems,\" _arXiv preprint arXiv:2011.00830_, 2020.
* [11] L. Qingqing _et al._, \"Offloading Monocular Visual Odometry with Edge Computing: Optimizing Image Compression Ratios in Multi-Robot Systems,\" in _The 5th ISCC_. IEEE, 2019.
* [12] I. Guvenc, F. Koohfar, S. Singh, M. L. Sichitiu, and D. Matolak, \"Detection, tracking, and interdiction for amateur drones,\" _IEEE Communications Magazine_, vol. 56, no. 4, pp. 75-81, 2018.
* [13] S. Hengy, M. Laurenzis, S. Schertzer, A. Hommes, F. Kloeppel, A. Shoykhetbrot, T. Geibig, W. Johannes, O. Rassy, and F. Christmacher, \"Multimodal uav detection: study of various intrusion scenarios,\" in _Electronic-Optical Remote Sensing XI_, vol. 10434. International Society for Optics and Photonics, 2017, p. 104340P.
* [14] J. Pena Queralta, J. Raitoharju, T. N. Gia, N. Passalis, and T. Westerlund, \"AutoSOS: Towards multi-uav systems supporting maritime search and rescue with lightweight ai and edge computing,\" _arXiv preprint arXiv:2005.03409_, 2020.
* [15] M. Vrba and M. Saska, \"Marker-less micro aerial vehicle detection and localization using convolutional neural networks,\" _IEEE Robotics and Automation Letters (RA-L)_, vol. 5, no. 2, pp. 2459-2466, 2020.
* [16] J. Razlaw, J. Quenzel, and S. Behnke, \"Detection and tracking of small objects in sparse 3d laser range data,\" in _IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 2967-2973.
* [17] K. Li, M. Li, and U. D. Hamebeck, \"Towards high-performance solid-state-lidar-inertial odometry and mapping,\" _arXiv preprint arXiv:2010.13150_, 2020.
* [18] J. Lin and F. Zhang, \"Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV,\" in _IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 3126-3131.
* [19] M. Mueller, N. Smith, and B. Ghanem, \"A benchmark and simulator for uav tracking,\" in _European conference on computer vision_. Springer, 2016, pp. 445-461.
* [20] P. H. Nguyen, K. W. Kim, Y. W. Lee, and K. R. Park, \"Remote marker-based tracking for uav landing using visible-light camera sensor,\" _Sensors_, vol. 17, no. 9, p. 1987, 2017.
* [21] V. Walter, N. Staub, A. Franchi, and M. Saska, \"Uvdar system for visual relative localization with application to leader-follower formations of multirotor uavs,\" _IEEE Robotics and Automation Letters (RA-L)_, vol. 4, no. 3, pp. 2637-2644, 2019.
* [22] P. Li, D. Wang, L. Wang, and H. Lu, \"Deep visual tracking: Review and experimental comparison,\" _Pattern Recognition_, vol. 76, pp. 323-338, 2018.
* [23] A. Carrio, S. Vemprala, A. Ripoll, S. Saripalli, and P. Campoy, \"Drone detection using depth maps,\" in _IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2018, pp. 1034-1037.
* [24] A. Rangesh and M. M. Trivedi, \"No blind spots: Full-surround multi-object tracking for autonomous vehicles using cameras and lidars,\" _IEEE Transactions on Intelligent Vehicles_, vol. 4, no. 4, pp. 588-599, 2019.
* [25] Y. Zhou and O. Tuzel, \"Voxelnet: End-to-end learning for point cloud based 3d object detection,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018, pp. 4490-4499.
* [26] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, \"Pointnet: Deep learning on point sets for 3d classification and segmentation,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 652-660.
* [27] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, \"Pointnet++: Deep hierarchical feature learning on point sets in a metric space,\" _arXiv preprint arXiv:1706.02413_, 2017.
* [28] H. Qi, C. Feng, Z. Cao, F. Zhao, and Y. Xiao, \"P2D: Point-to-box network for 3d object tracking in point clouds,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 6329-6338.
* [29] G. Fang, J. Yi, X. Wan, Y. Liu, and H. Ke, \"Experimental research of multistatic passive radar with a single antenna for drone detection,\" _IEEE Access_, vol. 6, pp. 33 542-33 551, 2018.
* [30] M. Hammer, M. Hebel, B. Borgmann, M. Laurenzis, and M. Arens, \"Potential of lidar sensors for the detection of uavs,\" in _Laser Radar Technology and Applications XXIII_, vol. 10636. International Society for Optics and Photonics, 2018, p. 1063605.
* [31] F. Neuhaus, T. Koss, R. Kohnen, and D. Paulus, \"MC2SLAM: Real-time inertial lidar odometry using two-scan motion compensation,\" in _German Conference on Pattern Recognition_. Springer, 2018, pp. 60-72.
* [32] W. Shule, C. M. Almansa, J. Pena Queralta, Z. Zou, and T. Westerlund, \"Uwb-based localization for multi-uav systems and collaborative heterogeneous multi-robot systems: a survey,\" _arXiv preprint arXiv:2004.08174_, 2020.
* [33] C. M. Almansa, W. Shule, J. Pena Queralta, and T. Westerlund, \"Autocalibration of a mobile uwb localization system for ad-hoc multi-robot deployments in gnss-denied environments,\" _arXiv preprint arXiv:2004.06762_, 2020.
* [34] R. B. Rusu and S. Cousins, \"3d is here: Point cloud library (pcl),\" in _IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2011, pp. 1-4.
* [35] J. Pena Queralta, C. M. Almansa, F. Schiano, D. Floreano, and T. Westerlund, \"Uwb-based system for uav localization in gnss-denied environments: Characterization and dataset,\" _arXiv preprint arXiv:2003.04380_, 2020.
Fig. 9: Reference trajectory (UWB) and estimated positions at different fixed frequencies. | Micro-aerial vehicles (MAVs) are becoming ubiquitous across multiple industries and application domains. Lightweight MAVs with only an onboard flight controller and a minimal sensor suite (e.g., IMU, vision, and vertical ranging sensors) have potential as mobile and easily deployable sensing platforms. When deployed from a ground robot, a key parameter is a relative localization between the ground robot and the MAV. This paper proposes a novel method for tracking MAVs in lidar point clouds. In lidar point clouds, we consider the speed and distance of the MAV to actively adapt the lidar's frame integration time and, in essence, the density and size of the point cloud to be processed. We show that this method enables more persistent and robust tracking when the speed of the MAV or its distance to the tracking sensor changes. In addition, we propose a multi-modal tracking method that relies on high-frequency scans for accurate state estimation, lower-frequency scans for robust and persistent tracking, and sub-Hz processing for trajectory and object identification. These three integration and processing modalities allow for an overall accurate and robust MAV tracking while ensuring the object being tracked meets shape and size constraints.
Micro-aerial vehicles, MAV, UAV, UGV, detection, tracking, lidar detection, lidar tracking, adaptive scanning. | Give a concise overview of the text below. | 260 |
arxiv-format/2403_11614v4.md | # CRS-Diff: Controllable Remote Sensing Image Generation with Diffusion Model
Datao Tang, Xiangyong Cao, Xingsong Hou, Zhongyuan Jiang, Junmin Liu, Deyu Meng
This work was supported in part by the National Key Research and Development Program of China under Grant 2021ZD0112902 and in part by the China NSFC Projects under Contract 62272375 and Contract 12226004. (_Corresponding author: Xiangyong Cao._) Datao Tang and Xiangyong Cao are with the School of Computer Science and Technology and the Ministry of Education Key Lab for Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an 710049, China (Email: [email protected]). Xingsong Hou is with the School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China. Zhongyuan Jiang is with the School of Cyber Engineering, Xidian University, Xi'an, Shaanxi 710049, China. Junmin Liu and Deyu Meng are with the School of Mathematics and Statistics and the Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China, and also with Pazhou Laboratory (Huangpu), Guangzhou, Guangdong 510555, China.
## I Introduction
Diffusion models [1, 2] are a class of probabilistic generative models that turn noise into a representative data sample. Recently, image generation based on diffusion models [3, 4, 5, 6, 7, 8, 9, 10] has emerged as a hot research topic since the generated images exhibit high quality, e.g., generating realistic images [3, 4, 5, 6], transforming art styles [11, 12], image super-resolution [13, 14, 15], video generation [16], etc. However, most existing diffusion models focus primarily on general image generation, with insufficient exploration of generating specific types of images, such as remote sensing (RS) images.
As shown in Fig. 1 (a), the RS imagery differs significantly from the traditional RGB imagery in several ways, particularly in terms of resolution, coverage area, and information richness. The resolution of remotely sensed images is often very high to capture subtle surface features, unlike the standard resolution of traditional RGB images. In addition, RS images cover wide geographic areas and various environments, such as urban, rural, forest, and marine environments, providing extensive geographic information. In contrast, traditional RGB images usually capture only local areas, offering far less information richness than RS images. Therefore, the high resolution of RS images requires that generative models possess higher accuracy and detail capture capabilities, handle larger scale data, and maintain geographic information consistency during generation. Moreover, the rich information in RS images requires generative models to integrate multidimensional data, as relying solely on simple textual control is often insufficient. As shown in the upper part of Fig. 1 (b), textual descriptions, while providing some contextual information, are often inadequate for handling complex geographic and atmospheric data, making it difficult to accurately control the quality and content of the generated images. When generating scene details and regular buildings in the images, although the text-guided images exhibit largely similar features, they often contain distorted line segments and incomprehensible details that contradict the physical world. Therefore, we believe that more fine-grained conditional control is necessary to generate RS images. As shown in the lower part of Fig. 1 (b), incorporating additional conditions (e.g., image sketch) into the image generation process enables the creation of more realistic images. Establishing this correspondence between conditions and images can expand the application scenarios of the generative model. Thus, additional control conditions need to be explored for RS image generation.
Fig. 1: (a) Comparison between natural image and remote sensing (RS) image. The image content is the Capital Museum of China, sourced from Google Maps and Google Street View, respectively. As can be seen, RS imagery differs significantly from traditional RGB imagery in resolution, coverage area, and information richness. (b) Comparison of the generation results between the two control modes. The upper image is the generation result guided solely by text, while the lower image is the result guided by both text and sketch. As can be seen, the single text control condition fails to generate accurate image content while "text + sketch" conditions can succeed.
Currently, the research in RS image generation mainly includes GAN-based [22, 23, 24, 25, 26] and diffusion model-based approaches [17, 19, 21, 27]. For example, Reed et al. developed StackGAN [24], which employs stacked generators to produce clear RS images with a size of 256 \(\times\) 256. However, GAN-based methods are unstable in the training process. In contrast, diffusion models exhibit superior generative ability and a relatively stable training process. As shown in Tab. I, there have been several controlled RS image generation models [17, 18, 19, 20, 21, 28, 29]. For example, RSDiff [17] proposes a novel cascade architecture for RS text-to-image generation using a diffusion model. SatDM [18] emphasizes the crucial role of semantic layouts in generating RS images and can produce RS images guided by semantic masks. Yuan et al. [19] notably generate high-quality RS images guided by semantic masks and introduce a lightweight diffusion model, obtained through a customized distillation process, to achieve fast convergence, addressing the inherent issue of prolonged training times in diffusion models. Recently, Yu et al. [20] proposed a guided self-cascading generation framework employing a novel noise sampling strategy, capable of generating images with diverse geographic resolutions across any region for downstream tasks. DiffusionSat [21] incorporates associated metadata such as geolocation as conditioning information to generate the RS image. However, these models lack control over the image detail level, still using text as the primary control condition and neglecting to incorporate image-related features as control signals. A single text-guided generated image can easily suffer from partial distortion (as shown in Fig. 1 (b)), making it difficult to adapt to the high information density of RS images, rendering the generated images of limited use to downstream tasks.
Fig. 2: Visualisation results of our proposed CRS-Diff. (a) Single text condition generation: the RS images are generated based only on text. (b) Single image condition generation: the RS images are generated based on the image condition. (c) Multi-condition image generation: the RS images are generated under the control of multiple conditions.
To address these issues, in this paper, we propose CRS-Diff, i.e. a controllable remote sensing generation model. Specifically, a base diffusion model is first trained tailored for the RS domain based on the Stable Diffusion (SD) model [6], which is capable of converting high-precision textual descriptions into RS images as shown in Fig. 2 (a). Based on this, we integrated ControlNet [30] to include two additional control signals in the diffusion model for the controlled generation of RS images. These two control signals adjust global and local condition information of the image, including six additional image control conditions (semantic segmentation mask, roadmap, sketch, etc.) and textual conditions (prompt, content image, and metadata encoding) as shown in Fig. 2 (b). The optional combination of multiple conditions is controlled to ensure that the resulting RS images are visually realistic and accurately reflect specific geographic and temporal information. For the text condition, we concatenate directly with the original text encoding through an additional encoding step, leveraging the model's natural control mechanism. For the image condition, we explore multiscale feature fusion to coordinate different control conditions and efficiently implement the bootstrapping of generative process noise maps, making our method flexible enough to combine any conditions for image generation, as shown in Fig. 2 (c).
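The image-conditioning path follows the general ControlNet recipe of injecting condition features into the denoising network through zero-initialized convolutions. The sketch below is a schematic PyTorch illustration of that idea with made-up layer sizes and names; it is not the CRS-Diff architecture itself.

```
import torch
import torch.nn as nn

class ZeroConv(nn.Conv2d):
    """1x1 convolution initialized to zero, so that at the start of training
    the control branch contributes nothing and the pretrained diffusion
    backbone is left untouched (the usual ControlNet trick)."""
    def __init__(self, channels):
        super().__init__(channels, channels, kernel_size=1)
        nn.init.zeros_(self.weight)
        nn.init.zeros_(self.bias)

class ImageConditionBranch(nn.Module):
    """Encodes a stacked image condition (e.g. segmentation mask, road map,
    sketch) into multi-scale features to be added to the denoiser features."""
    def __init__(self, cond_channels=3, dims=(64, 128, 256)):
        super().__init__()
        stages, c_in = [], cond_channels
        for c_out in dims:
            stages.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.SiLU()))
            c_in = c_out
        self.stages = nn.ModuleList(stages)
        self.zero_convs = nn.ModuleList([ZeroConv(c) for c in dims])

    def forward(self, cond):
        feats, x = [], cond
        for stage, zconv in zip(self.stages, self.zero_convs):
            x = stage(x)
            feats.append(zconv(x))  # residuals injected at matching scales
        return feats

branch = ImageConditionBranch()
cond = torch.randn(1, 3, 256, 256)  # e.g. a sketch rendered as an image
print([f.shape for f in branch(cond)])
```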
In summary, the contributions of our work are threefold:
* We propose a new controllable RS generative model with diffusion models (CRS-Diff), which is a framework specifically designed for RS image generation. Different from previous RS generative models, our CRS-Diff can simultaneously support more types of controllable conditions, i.e., text, metadata and image.
* To effectively integrate multiple control information, we introduce a new conditional control mechanism to achieve multi-scale feature fusion to enhance the guiding effect of control conditions, thus broadening the image generation space. As far as we know, our CRS-Diff is the first multiple-condition controllable RS generative model, which is capable of generating high-quality RS images that meet specific requirements under the guidance of composite conditions.
* Experimental results have demonstrated the superiority of our proposed CRS-Diff in generating RS imagery that adheres to specific conditions and surpasses previous RS image generation methods both quantitatively and qualitatively. Additionally, our CRS-Diff can serve as a data engine that generates high-quality training data for downstream tasks, e.g., road extraction.
The rest of this paper is organized as follows. Section II provides a brief overview of related work. Section III details the technical aspects of the CRS-Diff implementation. Section IV describes the experimental design, presents the experimental results, and offers specific analyses. Finally, Section V presents the conclusions of the paper.
## II Related work
### _Text-to-Image Generation_
Text-to-image generation, which produces high-definition images corresponding to given textual descriptions, has attracted significant attention in the multimodal field. Early research was primarily focused on GANs [31, 32, 33], with text-conditional GANs emerging as pioneering end-to-end differentiable architectures from the character to the pixel level. For example, Reed et al. [32] introduced a generative adversarial network capable of generating 128\(\times\)128-pixel images. Additionally, Zhang et al. [33] developed StackGAN, employing stacked generators to produce clear 256\(\times\)256-pixel images. However, these models face two main challenges: training instability and limited generalization to open-domain scenes.
In addition to GAN-based methods, recent studies have shifted toward autoregressive models for text-to-image generation, using web-scale image-text pairs, such as DALL-E [5]. These models demonstrate robust generative capabilities, particularly in zero-shot settings for open-domain scenes, starkly contrasting the small-scale data focus of GAN-based approaches. OpenAI's DALL-E, leveraging large transformer models and extensive training data, effectively maps language concepts to the pixel level, generating high-quality 256\\(\\times\\)256 images. Furthermore, Yong et al. employed modern Hopfield layers [34, 35] for hierarchical prototype learning [36] in text and image embeddings, aiming to extract the most representative prototypes and implement a coarse-to-fine learning strategy. These prototypes are then used to encapsulate more complex semantics in text-to-image tasks, enhancing the realism of generated RS images.
Diffusion models [1] are generative models that synthesize new images by gradually transforming Gaussian noise into a target image. This process contains two main steps: the forward diffusion process and the reverse generation process. Compared to autoregressive models, diffusion models excel at generating more realistic images through gradual denoising. Numerous studies have since focused on enhancing diffusion models: DALL-E 2 [5] strengthens textual guidance through integration with the CLIP model, while GLIDE [3] explores diverse guidance methodologies. Stable Diffusion (SD) [6] instead performs the diffusion process in a learned latent space, which lowers the computational cost and improves the generation results.
### _Controlled diffusion models_
Controlled diffusion models (CDMs) [37, 38, 39, 40, 21] for text-to-image (T2I) generation aim to let users precisely dictate the content of generated images. Although traditional T2I models can generate images from text descriptions, users often have limited control over the final output. Controlled diffusion models enable users to specify additional generative details, including style, color, and object positioning, through enhanced controllable parameters or mechanisms. For example, ControlNet [30], GLIGEN [40] and T2I-Adapter [38] incorporate additional control networks or control signals on top of the SD weights, enabling integrated control of multiple conditions at reduced training cost. Composer trains a large diffusion model from scratch through a new generative paradigm that allows flexible construction of generative conditions, improving controllability and achieving better results.
In particular, ControlNet [30] can be used in conjunction with diffusion models such as Denoising Diffusion Probabilistic Models (DDPM) to augment the controllability and diversity of the generated images by introducing additional control signals or conditions, such as textual descriptions and image attribute labels. Building on this idea, Zhao et al. [39] proposed Uni-ControlNet, which supports various additional control signals or combinations of conditions. By fine-tuning adapters while keeping the original SD model unchanged, Uni-ControlNet significantly reduces the training cost, requiring only two additional adapters for effective outcomes.
In the remote sensing field, many RS generative models have been proposed. For example, Espinosa et al. [28] proposed a pre-trained diffusion model conditioned on cartographic data to generate realistic satellite images. RSDiff [17] introduces a new architecture consisting of two cascaded diffusion models for RS text-to-image generation. Yuan et al. [19] introduced a lightweight diffusion model obtained through a customized distillation process, which enhances the quality of image generation via a multi-frequency extraction module and achieves fast convergence by resizing the image at different stages of the diffusion process. SatSynth [29] can simultaneously generate images and corresponding masks for satellite image segmentation, which can then be applied to data augmentation. DiffusionSat [21] has demonstrated the capability to generate high-resolution satellite data utilizing numerical metadata and textual captions. In contrast, we design and introduce an additional control network with a dedicated training strategy, enabling our model to achieve composite controlled generation under various conditions.
## III Method
We introduce the diffusion model into the field of remote sensing image generation, aiming to enhance generic image generation capabilities for producing more realistic RS images. We then introduce an optimized multi-conditional control mechanism that leverages text, image, and other multidimensional information to guide precise image generation and yield high-quality RS images. The construction of the model consists of two steps: first, text-image pairs are used to train the weights of the generative diffusion model for RS images, building upon the traditional SD framework; then, the combination of multiple conditions (image conditions and text conditions) is implemented through a conditional control network. Below, we detail the two-stage training process of CRS-Diff and illustrate how the training data are decomposed into, and recombined from, individual conditions.
### _Text-to-Image generation_
Initially, we employ Stable Diffusion (SD) [6] for text-to-image generation. This process uses a frozen variational autoencoder (VAE) encoder and decoder to convert each image \(x\in\mathbb{R}^{C\times H\times W}\) into its corresponding latent variable \(z\). Rather than directly learning the conditional data distribution of the original image given the text condition, \(p(x|\tau)\), the model learns the distribution of the mapped latent features, \(p(z|\tau)\). The text description corresponding to an image is encoded by a CLIP model [41], which then guides image generation during the denoising process via a cross-attention mechanism [42]. The training of the diffusion model thus updates the U-Net operating in this latent space: by predicting the noise added in the forward process and removing it in the reverse process, the model learns the data distribution in the latent space. The training objective is defined as follows:
\\[\\min_{\\theta}\\mathcal{L}(\\theta)=\\mathbb{E}_{z,\\epsilon,t}\\left[\\|\\epsilon- \\epsilon_{\\theta}(z_{t},t,c)\\|_{2}^{2}\\right], \\tag{1}\\]
where \\(\\theta\\) is the parameters of the model being optimized, \\(\\epsilon\\) is the noise added in the forward process, \\(\\epsilon_{\\theta}(z_{t},t,c)\\) denotes the prediction of the noise given the noisy data \\(z_{t}\\), time \\(t\\), and condition \\(c\\).
Additionally, a Classifier-free Guidance (CFG) [43] mechanism is introduced:
\\[\\hat{\\epsilon}_{\\theta}(z_{t},c)=\\omega\\cdot\\epsilon_{\\theta}(z_{t},c)+(1- \\omega)\\cdot\\epsilon_{\\theta}(z_{t}), \\tag{2}\\]
where \\(z_{t}=a_{t}z_{0}+\\sigma_{t}\\) and \\(\\omega\\) denote the bootstrap weight. \\(\\hat{\\epsilon_{\\theta}}(z_{t},c)\\) is the output of the CFG mechanism. It is a weighted sum of the class-conditional output \\(\\epsilon_{\\theta}(z_{t},c)\\) and the unconditional output \\(\\epsilon_{\\theta}(z_{t})\\).
Textual information serves as the sole guiding factor in this process. Simultaneously, we employ the pre-trained CLIP ViT-L-14 model [6], fine-tuned on the RSICD RS image dataset, to amplify the effect of textual guidance. Under the original CFG framework, conditional noise prediction \\(\\epsilon_{\\theta}(z_{t},c)\\) is solely dependent on the model's processing of a given condition (e.g., text description). With the introduction of the CLIP bootstrap, this conditional prediction becomes further influenced by the similarity loss between the image and text as computed by the CLIP model. This implies that the calculation of \\(\\epsilon_{\\theta}(z_{t},c)\\) incorporates considerations for better aligning the resulting image with the textual description.
Our text-to-image generation methodology first encodes an image \(x\in\mathbb{R}^{C\times H\times W}\) with the frozen VAE, yielding a latent representation \(z=E(x)\in\mathbb{R}^{C^{\prime}\times H^{\prime}\times W^{\prime}}\). Gaussian noise is then added to the latent features to produce a noisy latent representation \(z_{t}=\alpha_{t}z+\sigma_{t}\varepsilon\), where \(\varepsilon\) is the Gaussian noise component and \(\alpha_{t}\) and \(\sigma_{t}\) are coefficients modulating the noise intensity. The text caption \(\tau\) is encoded by a CLIP model \(T_{\theta}(\tau)\), generating the text embedding \(\tau^{\prime}\), and denoising is performed by the model \(\epsilon_{\theta}(z_{t};\tau^{\prime},c)\) conditioned on this embedding. Finally, the denoised latent representation is up-sampled to the original image resolution by the VAE decoder, completing the image generation process. For sampling we use the Denoising Diffusion Implicit Models (DDIM) algorithm, which speeds up the sampling process of diffusion models.
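As an illustration of the sampling stage, the following sketch shows one deterministic DDIM update (eta = 0); the variable names and the schedule tensor are ours and do not correspond to a specific released implementation.

```python
import torch

@torch.no_grad()
def ddim_step(eps_model, z_t, t, t_prev, cond, alphas_cumprod):
    """One deterministic DDIM update from timestep t to t_prev (eta = 0)."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t_prev] if t_prev >= 0 else torch.tensor(1.0)
    eps = eps_model(z_t, torch.tensor([t]), cond)
    z0_pred = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
    return a_prev.sqrt() * z0_pred + (1 - a_prev).sqrt() * eps
```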
Additionally, the weights of the encoder \(E\), decoder \(D\), CLIP text encoder \(T_{\theta}\), and denoising model \(\epsilon_{\theta}\) are inherited from SD version 1.5. During training, only the denoising model \(\epsilon_{\theta}\) is updated, which accelerates convergence and exploits the rich semantic information in the SD model.
### _Image Decoupling_
To address the issue of insufficient training data, the original image data are decoupled into the corresponding feature condition data using supplementary network structures. This yields a large-scale combined-condition dataset and also serves as the interface through which conditions are supplied during inference. In total, nine conditions, listed below, are introduced for formal model training (a minimal sketch of this decoupling step is given after the list of conditions).
_Caption_: Under conditions where caption data is available, such as the RSICD dataset [44], we directly utilize the corresponding captions of remote sensing images. For other datasets, such as fMoW dataset [45], we leverage the category information of remote sensing images, to synthesize captions.
_HED (Holistically-nested Edge Detection)_: A pre-trained deep neural network [46] is employed to predict edges and object boundaries directly from the original image, thereby capturing high-level object boundary information and low-level details.
_MLSD (Multiscale Line Segment Detection)_: We use a pre-trained transformer-based model [47] to detect straight line segments in remote sensing images.
_Depthmap_: We use a pre-trained depth estimation model [48] to extract the Depthmap of the image, which approximates the layout of the image and aids in enhancing the model's understanding of the remote sensing image's semantics.
_Sketch_: Edge detection models [49] are applied to extract sketches from an image, focusing on the local details while conveying limited semantics.
_Road Map_: In certain remote sensing images, greater emphasis is placed on road information, leading to the introduction of a pre-trained Separable Graph Convolutional Network (SGCN) [50] aimed at road extraction to yield single-channel road data.
Fig. 4: The raw text is encoded using a CLIP text encoder fine-tuned on RS images. The content image is initially encoded using the CLIP image encoder, and the resulting encoding is then converted into four additional text tokens by a Feed-Forward Network (FFN). The metadata are first mapped into fixed intervals and then converted into the same number of tokens by an embedding layer. Finally, these processed encodings are concatenated, replacing the original text-encoded input.
Fig. 3: The overall architecture of our proposed CRS-Diff model. CRS-Diff is mainly based on Stable Diffusion (SD), in which the diffusion process is performed in latent space. The training of CRS-Diff contains two stages. In the first stage, we train the backbone U-Net of SD on text-image pairs. The resulting diffusion network is frozen (blue area) during the second training stage, and its encoder and intermediate blocks are copied into ControlNet to adapt to conditional inputs. In the second stage, we stack the conditional images as inputs and extract conditional features using a feature extractor. These features are gradually injected into the encoder of ControlNet (orange area) through a Feature Fusion (FF) module. Here, we use a convolutional network to reshape the obtained feature vectors to the current noise dimension and then integrate them with the noise output of the current block of the ControlNet encoder through Attention Feature Fusion (AFF), achieving multi-scale conditional injection.
_Segmentation Mask_: We employ UNetFormer [51], a network specializing in remote sensing images, to extract semantic information and produce masks segmented into eight categories.
_Content_: For content, the given image is considered as control information. Utilizing the Image Encoder in the pre-trained CLIP ViT-L-14 [41] model, the image is transformed into feature encoding, obtaining a global embedding. This approach offers more relevant guiding conditions than text descriptions alone.
_Metadata_: In processing the RS image, it is crucial to incorporate additional metadata, such as temporal (year, month, day) and spatial (ground sampling distance, latitude, longitude, cloud cover) information. This information is first quantified and categorized, serving as input for the traditional diffusion model's category guidance [6], through the labeling of these categories. Additionally, these metadata are transformed into sequence tokens, which are incorporated into the text encoding as weak text control conditions.
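The decoupling step can be organized as a dictionary of annotator callables, as sketched below; the placeholder functions merely stand in for the pre-trained HED, MLSD, depth, sketch, road-extraction and segmentation networks listed above and are not their actual APIs.

```python
from typing import Callable, Dict
import numpy as np

def build_condition_maps(image: np.ndarray,
                         annotators: Dict[str, Callable[[np.ndarray], np.ndarray]]
                         ) -> Dict[str, np.ndarray]:
    """Decouple one RS image into per-condition control maps.

    `annotators` maps a condition name (e.g. 'hed', 'mlsd', 'depth', 'sketch',
    'road', 'seg') to a callable that turns the RGB image into the condition map.
    """
    return {name: fn(image) for name, fn in annotators.items()}

# Example with trivial stand-in annotators (the real ones are pre-trained networks).
if __name__ == "__main__":
    img = np.zeros((512, 512, 3), dtype=np.uint8)
    dummy = {"hed": lambda x: x.mean(axis=-1), "depth": lambda x: x[..., 0]}
    maps = build_condition_maps(img, dummy)
    print({k: v.shape for k, v in maps.items()})
```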
### _Multi-conditional fusion_
Based on the backbone structure outlined in the previous section, additional conditional control modules are added to reconstruct the original image from the decoupled image representation conditions. This process trains the model's multi-conditional generation capability. The known conditions are categorized into three types: image conditions, text conditions, and metadata. For each type of condition, we explore the corresponding condition injection methods and construct a feature extraction network that meets the requirements. The obtained condition features are then integrated with the ControlNet control strategy [30] through feature fusion to achieve composite control of arbitrary conditions, as illustrated in Fig. 3 and Fig. 4.
_Text Conditional Fusion_: We aim to establish a joint conditional guidance mechanism that includes captions, content, and metadata, as illustrated in Fig. 4. The text description, as the main guiding condition, is represented by the word embedding \(y_{\text{k}}\) obtained from a CLIP text encoder fine-tuned on RS data. Different types of metadata are first mapped to values between 0 and 1 based on their value ranges, denoted as \(\mathbf{m}=[m_{1},m_{2},\ldots,m_{n}]\), where \(m_{i}\) denotes the \(i\)-th type of metadata. Subsequently, these normalized metadata values are encoded into vectors of uniform length using separate Multi-Layer Perceptron (MLP) layers. These vectors are concatenated to form the metadata embedding \(y_{\text{m}}\) as
\\[y_{\\text{m}}=[\\text{MLP}_{1}(m_{1});\\text{MLP}_{2}(m_{2});\\ldots;\\text{MLP}_ {n}(m_{n})], \\tag{3}\\]
where \\(\\text{MLP}_{i}\\) denotes the MLP used for the \\(i\\)-th metadata.
Additionally, we introduce an image encoder to encode the content image and then use a Feed-Forward Network (FFN) to connect the feature vectors with the prompt encoding symbols, represented as:
\\[y^{\\prime}=\\text{Concat}\\left(y_{\\text{k}},w_{\\text{c}}\\cdot\\text{FFN}(y_{ \\text{c}}),w_{\\text{m}}\\cdot y_{\\text{m}}\\right), \\tag{4}\\]
where \\(w_{c}\\) and \\(w_{m}\\) are the weights applied to the outputs of \\(y_{\\text{c}}\\) and the \\(y_{\\text{m}}\\), respectively. This network subsequently integrates these encodings with the caption's word embedding, replacing the original input tokens as the Key and Value in the cross-attention layer.
_Image Conditional Fusion:_ All decoupled image conditions, including Sketch, Segmentation mask, Depthmap, HED, Road map, and MLSD, are utilized as local control information. As shown in Fig. 3, this part of CRS-Diff is based on ControlNet: the SD weights are first frozen, and the structure and weights of the SD encoder and intermediate blocks are replicated. Notably, the feature extractor, consisting of stacked convolutional neural networks, encodes the information of the multiple condition maps into a single feature map. We then fuse these latent features with the denoising latent variables through a feature fusion network. Consistent with Uni-ControlNet [39], feature injection is performed in the four downsampling ResNet blocks within the U-Net structure of the diffusion model.
Specifically, given a set of image conditions \(\{c_{i}\}_{i=1}^{n}\), the processing of the image conditional information can be represented as a function \(\mathcal{F}\) that maps the set of conditions to a feature space compatible with the input noise dimensions. This mapping is formally defined as \(\mathcal{F}:\{c_{i}\}\rightarrow\mathbb{R}^{c\times d\times d}\), where \(d\) represents the spatial dimensionality of the latent feature map. The transformation leverages a series of convolutional and pooling layers to effectively capture the spatial hierarchies and semantic features of the control information, ensuring that the generated features are representative of the underlying conditions. On this basis, we resample the obtained feature map to the same dimensions as the current latent variable and use Attention Feature Fusion (AFF) [52] to better fuse the noise and feature maps. This process replaces the inputs of the original residual block of the U-Net, allowing us to achieve multi-conditional information injection, represented as:
\\[\\mathbf{z}^{\\prime}=\\text{AFF}(\\mathbf{z},\\text{Resample}(\\mathcal{F}(\\{c_{i} \\}),\\text{dim}(\\mathbf{z}))), \\tag{5}\\]
where \\(\\mathbf{z}\\) is the current latent variable, \\(\\{c_{i}\\}\\) is the set of image conditions, \\(\\mathcal{F}(\\{c_{i}\\})\\) extracts the feature map from the conditions, and \\(\\text{Resample}(\\cdot)\\) adjusts the feature map dimensions to match \\(\\mathbf{z}\\), \\(\\text{AFF}(\\cdot)\\) fuses the resampled feature map with \\(\\mathbf{z}\\), \\(\\mathbf{z}^{\\prime}\\) is the fused latent variable. This formula replaces the inputs of the original residual block of the U-Net, allowing us to achieve multi-conditional information injection.
### _Training Strategy_
Our CRS-Diff employs a two-stage training strategy to train the native Stable Diffusion (SD) architecture and the ControlNet architecture within its framework, respectively. Initial training is conducted using SD 1.5 weights on a text-to-image RS dataset, aiming to develop a high-precision text-to-image diffusion model that serves as the backbone of the ControlNet structure. This foundation enables the model to accurately guide the denoising process, leveraging a combination of multiple conditions through joint training. For both text and image control conditions, individual conditions are omitted with an independent probability of 0.5, and all conditions are jointly omitted with a probability of 0.1, in accordance with Classifier-free Guidance. The dropout probabilities of certain conditions are adjusted during experiments based on performance. Each condition element is treated as a distinct guiding condition, with single or multiple conditions being omitted with certain probabilities, enabling the model to learn a broader array of condition combinations.
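The condition-dropout rule described above can be sketched as follows; the function and dictionary layout are illustrative and not part of the released code.

```python
import random

def drop_conditions(conditions: dict, p_single: float = 0.5, p_all: float = 0.1) -> dict:
    """Randomly drop conditions during training, in the spirit of classifier-free guidance.

    With probability `p_all` every condition is dropped; otherwise each condition is
    independently dropped with probability `p_single` (None marks a dropped condition).
    """
    if random.random() < p_all:
        return {name: None for name in conditions}
    return {name: (None if random.random() < p_single else value)
            for name, value in conditions.items()}
```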
## IV Experiments
### _Datasets_
During the training stage, our CRS-Diff used the following datasets:
* RSICD dataset [44]: This dataset is designed specifically for image captioning in remote sensing imagery, and it contains 10,921 aerial remote sensing images accompanied by captions in natural language, sized at \\(224\\times 224\\).
* fMoW dataset [45]: This dataset is a large-scale remote sensing image dataset; fMoW includes the spatio-temporal and category information for each image. Notably, only the RGB images, sized at \(224\times 224\), are used, with relevant metadata extracted from a total of 110,000 images for training the multi-conditional control model.
* Million-AID dataset [53]: This dataset is a benchmark dataset for remote sensing scene classification, and contains millions of instances, featuring 51 scene categories with 2,000 to 45,000 images per category.
### _Implementation details_
In the initial backbone model training phase, the following steps are taken: the U-Net is fine-tuned on the RSICD dataset for 10 epochs using the AdamW optimizer with a learning rate of \(1\times 10^{-5}\). The input images are resized to \(512\times 512\), and the model has approximately 0.9 billion parameters. For sampling, DDIM is utilized, with the number of time steps set to 100 and the classifier-free guidance scale set to 7.5.
During the conditional control phase, original images from the fMoW and Million-AID datasets are organized into 200,000 text-image pairs. These images are then decoupled into multiple conditional representations through the annotator networks. This process involves randomly combining single or multiple conditions, including road maps, MLSD, content, the extracted raw captions and metadata, and fine-tuning the conditional control network over 5 epochs. The AdamW optimizer with a learning rate of \(1\times 10^{-4}\) is used throughout this phase, and both the input image and the local condition maps are resized to \(512\times 512\). Experiments are conducted on NVIDIA GeForce RTX 4090 and NVIDIA A100 GPUs, with a batch size of 8.
### _Evaluation metrics_
For text-to-image generation tasks, we utilize four metrics to assess the effectiveness of image generation under the single text condition, namely the Inception Score (IS) [56], the Frechet Inception Distance (FID) [57], the CLIP Score [41], and the Overall Accuracy (OA) [55] for zero-shot classification. We evaluate the performance of our proposed method and the baselines under identical settings.
\\[IS=\\exp\\left(\\mathbb{E}_{\\mathbf{x}\\sim p_{g}}\\left[D_{\\text{KL}}(p(y|\\mathbf{ x})||p(y))\\right]\\right), \\tag{6}\\]
where \\(p(y|\\mathbf{x})\\) is the conditional label distribution given an image \\(\\mathbf{x}\\) and \\(p(y)\\) is the marginal label distribution. \\(D_{\\text{KL}}\\) denotes the Kullback-Leibler divergence, and \\(p_{g}\\) represents the distribution of generated images.
\\[FID=||\\mu_{r}-\\mu_{g}||^{2}+\\text{Tr}(\\Sigma_{r}+\\Sigma_{g}-2(\\Sigma_{r}\\Sigma _{g})^{\\frac{1}{2}}), \\tag{7}\\]
where \\(\\mu_{r}\\) and \\(\\Sigma_{r}\\) are the mean and covariance of the real images' features, and \\(\\mu_{g}\\) and \\(\\Sigma_{g}\\) are the mean and covariance of the generated images' features.
Notably, we utilize the zero-shot classification Overall Accuracy (OA) to evaluate the ability of the generative model to generalize across unseen categories. The computation of the OA for zero-shot classification is summarized in the following steps:
1. Train a classifier (e.g., ResNet [58]) by utilizing the images produced by the generative model as training data.
2. Test the accuracy of this classifier in categorizing real images, which were not seen during the training phase. The formula is: \[OA=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}(y_{i}=\hat{y}_{i}),\] (8)
where \\(y_{i}\\) is the true label, \\(\\hat{y}_{i}\\) is the predicted label, \\(\\mathbf{1}\\) denotes the indicator function, and \\(n\\) is the number of samples in the test set.
For conditional image generation tasks, we utilized three metrics (i.e., SSIM, mIoU, and CLIP score) to evaluate the performance of conditional image generation. We evaluated our proposed method and the baseline methods under identical settings. The calculation formulas for the SSIM and mIoU metrics are as follows:
\\[SSIM(x,y)=\\frac{(2\\mu_{x}\\mu_{y}+c_{1})(2\\sigma_{xy}+c_{2})}{(\\mu_{x}^{2}+\\mu_ {y}^{2}+c_{1})(\\sigma_{x}^{2}+\\sigma_{y}^{2}+c_{2})}, \\tag{9}\\]
where \\(\\mu_{x}\\) is the mean of image \\(x\\), \\(\\mu_{y}\\) is the mean of image \\(y\\), \\(\\sigma_{x}^{2}\\) is the variance of image \\(x\\), \\(\\sigma_{y}^{2}\\) is the variance of image \\(y\\), \\(\\sigma_{xy}\\) is the covariance of images \\(x\\) and \\(y\\), and \\(c_{1}\\) and \\(c_{2}\\) are constants to stabilize the division.
\\[mIoU=\\frac{1}{n}\\sum_{i=1}^{n}\\frac{|P_{i}\\cap G_{i}|}{|P_{i}\\cup G_{i}|}, \\tag{10}\\]
where \\(n\\) is the number of classes, \\(P_{i}\\) is the predicted region for class \\(i\\), and \\(G_{i}\\) is the ground truth region for class \\(i\\).
### _Comparison and Analysis_
#### Iv-D1 **Text-to-Image generation**
We trained CRS-Diff on the RSICD dataset using solely text as the initial input and compared it with recent state-of-the-art (SOTA) methods.
_Qualitative analysis._ In Fig. 5, generated images using various methods, such as CRS-Diff and Txt2Img-MHN (including VQVAE and VQGAN) [55], are illustrated. Notably, our proposed CRS-Diff demonstrates the capability to generate clearer and more realistic images compared with other methods. For instance, when confronted with complex textual descriptions, exemplified by the phrase on the left side of the fifth line: "a square lawn and a half round lawn consist the square which is surrounded by the forest", the model proficiently deciphers the semantic content. Moreover, the model adeptly identifies the semantic information correlated with the textual descriptions, accurately reflecting this in the results. It also excels at synthesizing more appropriate images in response to shape descriptions like "square" and "long." Furthermore, CRS-Diff distinctly grasps the concept of quantity, a feature intuitively evident in its handling of numerical descriptors like "some", "many", or "two" (as seen on the left side of the first line). Besides, our model can accurately simulate real lighting conditions and coordinate elements such as color and texture.
_Quantitative analysis._ We compared CRS-Diff with AttnGAN [22], DAE-GAN [23], StrucGAN [24], DF-GAN [25], Lafite [26], DALL-E [54], Txt2Img-MHN (including VQVAE and VQGAN) [55], RSDiff [17] and SD (fine-tuned from SD 1.5) [6]. The generated results are quantitatively analysed on the RSICD test set, employing four evaluation metrics: zero-shot classification OA, Inception Score, CLIP score, and FID score, as delineated in Tab. II. Our method surpassed the baselines in three metrics and achieved second place in the Inception Score. Intriguingly, the performance in the CLIP score is not as high as anticipated, possibly due to the specificities of the CLIP model employed in our evaluation. Nevertheless, the proposed CRS-Diff demonstrates excellent performance in controllability and generation quality, meeting the demands of practical applications like urban planning and laying a
Fig. 5: Visual comparison of different text-to-image generation methods based on text descriptions on the RSICD test set.
foundation for future research on controllable generation.
#### Iv-D2 **Single-condition image generation**
Beyond the text condition, CRS-Diff supports additional conditions that guide the model towards generating more refined images.
_Qualitative analysis._ Fig. 6 shows the visual comparison results of generated RS images from a single metadata condition. To mitigate the risk of inaccurate results due to conflicts between the content condition and the semantics of the textual description, we added textual guidance to all conditions except the content condition. The content condition provides richer semantic information. Metadata control proved more challenging, so we chose salient attributes such as month and cloud cover to offer more granular control.
Fig. 7 shows the visual comparison results of generated RS images from a single image condition. For HED and Sketch, intuitive image control is achieved by restricting the boundary and contour information of the generated image. Features such as segmentation masks and road maps provide richer semantic information, which can be efficiently interpreted by CRS-Diff to influence the generated outputs. Moreover, CRS-Diff generates images with clear texture details and coherent scene relationships, enabling better comprehension even in areas not covered by the feature conditions.
_Quantitative analysis._ We compare CRS-Diff with ControlNet and Uni-ControlNet for quantitative evaluation on the RSICD test set at a resolution of \(512\times 512\). We randomly select one caption per image from the test set as the textual guidance, obtaining 1,000 generated images for the quality evaluation. We employ the image decoupling method mentioned earlier to obtain the control conditions used to construct the conditional data. Single-condition generation (in addition to metadata) is used for the quantitative evaluation. For Uni-ControlNet and ControlNet (Multi-ControlNet), the same dataset is used to train the conditional generation capabilities for the seven conditions. For HED, MLSD, Sketch, and Depthmap, we compute the SSIM between the generated images and those corresponding to the decoupled conditions. For the semantic segmentation mask and road map, we compute the mIoU. For the Content condition, treated as text tokens, we compute the CLIP Score using the CLIP model fine-tuned on remote sensing images. The specific results are shown in Tab. III. Our method achieves the best results on four metrics. Additionally, we calculate the FID of the generated images, with specific results shown in Tab. IV. The experimental results demonstrate that CRS-Diff has excellent generative capabilities and quantitatively superior performance under most conditions compared to existing methods.
in generative capacity, controllability, and realism, successfully completing the synthesis of the target image. Meanwhile, the generated images demonstrate sufficient diversity.
### _Ablation Analysis_
We explore improvements in the structure of the multi-conditional control network and the method of injecting control information. We perform ablation experiments on CRS-
Fig. 7: Visual comparison results of generated RS images from single image condition. All the conditions are used together with the textual descriptions.
Diff and its variants, analyzing the impact of replacing the backbone model (ReB) and the feature fusion approach (FF) on the generation quality and control effectiveness, respectively. We constructed a baseline based on the underlying Multi-ControlNet and executed the alteration approach sequentially, reporting the evaluation metrics of the different models, as shown in Tab. V. The pre-training of the backbone model and the incorporation of the feature fusion approach significantly improve the generation effect, enabling the model to generate RS images with higher information density and realize the fusion of various control information types, further enhancing the model's control generation capability.
Tab. VI presents a detailed ablation study to evaluate the impact of different versions of the CLIP model and the corresponding training strategy on the model generation capability in the single-text condition. We chose two versions of the CLIP text encoder for encoding the text condition in the text-to-image generation process, i.e., ViT-B-32 and ViT-L-14. We compared the effects of various parameter sizes and the impact of specific fine-tuning of the encoder on model performance. The results show that models with a greater number of parameters exhibit superior generation capabilities. Additionally,
Fig. 8: Visual comparison results of generated RS images under multiple condition control. Except for the content condition, all these conditions are used together with the textual descriptions.
fine-tuning on RS images proves to be an effective method for enhancing performance.
We have conducted additional experiments to validate the positive impact of the additional image control condition on the quality of the generated images. We use a text encoder and an image encoder to process the text condition, the content image condition, and their combination, and evaluate the quality of the images generated under each setting separately. The experimental results are shown in Tab. VII. As can be seen, the additional image condition information is beneficial to the quality of the generated images, as reflected in the FID and CLIP Score metrics. However, this image condition reduces the diversity of the generated images, as reflected in the IS metric.
Tab. VIII describes the impact of different versions of the CLIP image encoder on model generation capability guided by the content image condition. We compared the results of image generation using four different CLIP image encoders and found that encoders based on the ViT architecture consistently achieved the best results in terms of FID and CLIP scores, which are standard metrics for evaluating image quality. The ViT-based encoders demonstrated a significant performance advantage over those based on the ResNet architecture. This suggests that the CLIP ViT-L-14 model currently in use possesses superior feature extraction capabilities.
### _Application for downstream road detection task_
For the conditional generation phase, we posit that the generated image should have a sufficiently high correlation with the control labels used as conditions. We aim for the generated image to encapsulate as much information from the condition image as possible and to provide training data support for downstream tasks. Consequently, we conduct experiments on the generated images to ascertain the relevant performance of CRS-Diff. We integrate synthetic data into the training set of SGCN [50] for the road extraction task and assess it on the official test set. Tab. IX shows the performance comparison of the SGCN method under different settings of the training dataset. As can be seen, the synthetic training dataset achieves almost the same performance as the real training dataset, which means that our CRS-Diff can simulate real images. By adding the synthetic dataset to the real dataset, the detection performance is further significantly improved, indicating that the RS images generated conditioned on roads can promote the downstream road detection task. Besides, Fig. 9 visually compares the road extraction results under the three training settings. The red boxes highlight areas with relatively large differences, indicating that the model trained with the augmented dataset (Real + Synthetic) performs better in terms of continuity and completeness when dealing with complex road networks. Overall, the combination of real and synthetic data results in a more robust and generalized model, capable of handling more diverse and complex road structures. This blend of data sources not only increases the amount of training data but also introduces a wider range of scenarios, resulting in improved performance on the road detection task.
## V Conclusion
In this paper, we propose a new controllable RS generative model with diffusion models (CRS-Diff). Developed from the diffusion model framework, CRS-Diff enables high-quality RS image generation. By integrating an optimized multi-conditional control mechanism, CRS-Diff can effectively synthesize multidimensional information, including text, metadata and images, guiding precise image generation and yielding highly accurate and controllable remote sensing images. This significantly broadens the control spectrum of the generation model, enhancing its adaptability to more complex application scenarios. Additionally, a comprehensive evaluation of existing multi-conditional generation models confirms CRS-Diff's superior ability to generate remote sensing images under various conditions and its high controllability. This makes it suitable for a wide spectrum of use cases and enhances the performance of downstream tasks.
Fig. 9: Visualisation comparison of SGCN model for road detection task under different settings of training datasets.
* [44] X. Lu, B. Wang, X. Zheng, and X. Li, \"Exploring models and data for remote sensing image caption generation,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 4, pp. 2183-2195, 2017.
* [45] G. Christie, N. Fendley, J. Wilson, and R. Mukherjee, \"Functional map of the world,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 6172-6180.
* [46] S. Xie and Z. Tu, \"Holistically-nested edge detection,\" in _Proceedings of the IEEE International Conference on Computer Vision_, 2015, pp. 1395-1403.
* [47] Y. Xu, W. Xu, D. Cheung, and Z. Tu, \"Line segment detection using transformers without edges,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 4257-4266.
* [48] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, \"Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 44, no. 3, pp. 1623-1637, 2020.
* [49] E. Simo-Serra, S. Iizuka, K. Sasaki, and H. Ishikawa, \"Learning to simplify: fully convolutional networks for rough sketch cleanup,\" _ACM Transactions on Graphics_, vol. 35, no. 4, pp. 1-11, 2016.
* [50] G. Zhou, W. Chen, Q. Gui, X. Li, and L. Wang, \"Split depth-wise separable graph-convolution network for road extraction in complex environments from high-resolution remote-sensing images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-15, 2021.
* [51] J. Wang, Z. Zheng, A. Ma, X. Lu, and Y. Zhong, \"Loveda: A remote sensing land-cover dataset for domain adaptive semantic segmentation,\" _arXiv preprint arXiv:2110.08733_, 2021.
* [52] Y. Dai, F. Gieseke, S. Ohmcke, Y. Wu, and K. Barnard, \"Attentional feature fusion,\" in _Proceedings of the IEEE/CVF winter conference on applications of computer vision_, 2021, pp. 3560-3569.
* [53] Y. Long, G.-S. Xia, S. Li, W. Yang, M. Y. Yang, X. X. Zhu, L. Zhang, and D. Li, \"On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 14, pp. 4205-4230, 2021.
* [54] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever, \"Zero-shot text-to-image generation,\" in _International Conference on Machine Learning_. PMLR, 2021, pp. 8821-8831.
* [55] Y. Xu, W. Yu, P. Ghamisi, M. Kopp, and S. Hochreiter, "Txt2Img-MHN: Remote sensing image generation from text using modern Hopfield networks," _IEEE Transactions on Image Processing_, 2023.
* [56] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, \"Improved techniques for training gans,\" _Advances in Neural Information Processing Systems_, vol. 29, 2016.
* [57] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, \"Gans trained by a two time-scale update rule converge to a local nash equilibrium,\" _Advances in Neural Information Processing Systems_, vol. 30, 2017.
* [58] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2016, pp. 770-778.
\\begin{tabular}{c c} & Dato Tang received the B.E. degrees from Xi'an Jiaotong University, Xi'an, China, in 2023. He is currently a postgraduate with the School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China. His research interests include image processing and remote-sensing image generation. \\\\ \\end{tabular} \\begin{tabular}{c c} & Xiangyong Cao (Member, IEEE) received the B.Sc. and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 2012 and 2018, respectively. From 2016 to 2017, he was a Visiting Scholar with Columbia University, New York, NY, USA. He is an Associate Professor with the School of Computer Science and Technology, Xi'an Jiaotong University. His research interests include statistical modeling and image processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Xingsong Hou (Member, IEEE) received the Ph.D. degree from Xi'an Jiaotong University, China, in 2005. From October 2010 to October 2011, he was a Visiting Scholar at Columbia University, New York, NY, USA. He is currently a Professor with the School of Information and Communications Engineering, Xi'an Jiaotong University. He is also with the Key Laboratory for Intelligent Networks and Network Security, Ministry of Education. His research interests include video/image coding, wavelet analysis, sparse representation, compressive sensing, and radar signal processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Zhongyuan Jiang received both B.S. and Ph.D. degrees from Beijing Jiaotong University in 2009 and 2013 respectively. Currently, he is a professor of School of Cyber Engineering, Xidian University, China. His research interests include privacy preserving, social computing, urban computing, and network functions virtualization. \\\\ \\end{tabular} \\begin{tabular}{c c} & Junmin Liu received the Ph.D. degree in Mathematics from Xi'an Jiaotong University, Xi'an, China, in 2013. From 2011 to 2012, he has served as a Research Assistant with the Department of Geography and Resource Management, The Chinese University of Hong Kong, Hong Kong, China. From 2014 to 2017, he worked as a Visiting Scholar at the University of Maryland, College Park, USA. Currently, he is a full Professor with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. His main research interests include data mining, image processing, deep learning, and so on. He has published over 60+ research papers in international conferences and journals. \\\\ \\end{tabular}
\begin{tabular}{c c} & Deyu Meng (Member, IEEE) received the B.Sc., M.Sc., and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 2001, 2004, and 2008, respectively. From 2012 to 2014, he took his two-year sabbatical leave at Carnegie Mellon University, Pittsburgh, PA, USA. He is a Professor with the School of Mathematics and Statistics, Xi'an Jiaotong University, and an Adjunct Professor with the Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau, China. His research interests include model-based deep learning, variational networks, and meta learning. \\ \end{tabular}
# A Short Note on the Potential of Utilization of Spectral AERONET-Derived Depolarization Ratios for Aerosol Classification
Il-Sung Zo
1Research Institute for Radiation-Satellite, Gangneung-Wonju National University, Gangneung 25457, Korea; [email protected]
Sung-Kyun Shin
2School of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield AL10 9AB, UK
## 1 Introduction
Atmospheric aerosols influence Earth's energy budget by scattering and absorbing radiation (direct effects) and altering cloud processes (indirect effects) [1]. The impact of atmospheric aerosols on climate is quantified in terms of the aerosol radiative forcing. Different atmospheric aerosols or atmospheric aerosol mixtures lead to different aerosol radiative forcing [1]. To accurately quantify the impact of aerosol radiative forcing on regional and global climate, atmospheric aerosols need to be properly classified [1; 2; 3]. Accurate classification of atmospheric aerosol types would markedly improve the accuracy of aerosol radiative forcing in numerical models and is thus of high importance to climate modelling [4; 5].
Atmospheric aerosols are difficult to characterize or classify both spatially and temporally due to their life cycle and geographically diverse sources. Moreover, variations in aerosol properties, including composition, shape and size, further compound the problem [5; 6]. Remote sensing techniques are useful for quantifying the characteristics of atmospheric aerosols. To date, observations from integrated remote sensing techniques, including sun/sky radiometer, light detection and ranging (LIDAR) and satellite techniques have been used to classify aerosol types worldwide [7; 8; 9; 10; 11; 12; 13; 14].
Various aerosol parameters, including optical and microphysical properties, have been used to distinguish aerosol types. For instance, the spectral dependence of aerosol optical depth (AOD) with respect to wavelength (i.e., Angstrom exponent, A) is commonly used in aerosol remote sensing to infer particle size. Higher values of A (>1) typically represent the accumulation mode of particles from sources such as fresh biomass burning, while low values of A (close to or less than 0) generally represent coarse particles such as dust [5; 10; 15; 16]. When using the absorption properties of atmospheric aerosols, single scattering albedo (\\(\\omega\\)) can be used to infer the aerosol types. For example, aerosols with high values of \\(\\omega\\) (>0.95) absorb little light, whereas those with low values of \\(\\omega\\) (<0.88) absorb more light [17]. Furthermore, the derivative of the \\(\\omega\\) or the spectral difference of \\(\\omega\\) may provide information about the particle type with respect to size and growth. For example, dust particles exhibit strong light absorption at short wavelengths (e.g., 440 nm) and lower light absorption as the wavelength increases [18]. Fine-mode or hygroscopic particles have neutral \\(\\omega\\) spectral dependence and lower light absorption properties. Black carbon (BC) particles have the strongest light absorption properties at near-infrared wavelengths [19; 20]. Spectral lidar ratios have also been implemented to determine the aerosol types in various applications [15].
The linear particle depolarization ratio (\(\delta_{\rm p}\)) can be used to identify the shape of particles. High \(\delta_{\rm p}\) values (0.30-0.35) indicate non-spherical particles such as dust, while low \(\delta_{\rm p}\) values indicate the presence of spherical particles such as biomass smoke or anthropogenic particles [21; 22; 23]. Recent studies have suggested utilizing the wavelength dependence of \(\delta_{\rm p}\) to determine the dominant aerosol type from lidar measurements at triple wavelengths (355, 532 and 1064 nm) [24; 25; 21]. The spectral dependence of \(\delta_{\rm p}\) for dust particles differs according to the dust origin and age. For some dust plumes, values of \(\delta_{\rm p}\) increase with wavelength, whereas for others \(\delta_{\rm p}\) peaks (0.30) at 532 nm, with lower values found at the other wavelengths (0.25 at 355 nm and 0.23 at 1064 nm) [21; 24; 25]. However, \(\delta_{\rm p}\) measurements with lidar observations at triple wavelengths are rare, limiting the availability of long-term data to accurately describe the spectral dependence of \(\delta_{\rm p}\). Moreover, the spectral dependence of \(\delta_{\rm p}\) with respect to the aerosol type (e.g., urban/industrial, smoke aerosols) has not been ascertained to date.
The AErosol RObotic NETwork (AERONET) is an automatic sun-tracking sun/sky radiometer observation network which could be a good alternative method to obtain spectral \(\delta_{\rm p}\). AERONET collects and stores data obtained by sun/sky radiometers at more than 800 observation sites worldwide. Data obtained by individual radiometers are sent to the NASA Goddard Space Flight Center (GSFC) to retrieve aerosol optical/microphysical properties using the AERONET algorithm [26; 27]. Globally distributed observations of spectrally resolved optical/microphysical properties of atmospheric aerosols, such as A, the complex refractive index (n \(\pm\) ik), \(\omega\) and the size distribution, are downloadable from the AERONET database ([http://aeronet.gsfc.nasa.gov/](http://aeronet.gsfc.nasa.gov/)). Currently, AERONET version 3 includes spectral lidar ratios and \(\delta_{\rm p}\) (440, 675, 870 and 1020 nm) as standard inversion output.
In this study, we report on values of \\(\\delta_{\\rm p}\\) obtained from AERONET observation sites selected as representative of an aerosol type (dust, mixed dust, smoke and urban) based on previous literature. We discuss the spectral dependency of \\(\\delta_{\\rm p}\\) with respect to the dominant aerosol type and evaluate the spectral \\(\\delta_{\\rm p}\\) and size relationship according to aerosol type. Section 2 describes the methods used in this study, Section 3 includes results and discussion and a summary of findings is presented in Section 4.
## 2 Methodology
### Theoretical Background
Polarization lidars can measure the \\(\\delta_{\\rm p}\\) value from the particle backscatter coefficient (\\(\\beta_{\\lambda}^{\\rm P}\\)) at depolarization channels as indicated in Equation (1):
\\[\\delta_{\\lambda}=\\frac{\\beta_{\\lambda}^{\\rm P,\\perp}}{\\beta_{\\lambda}^{\\rm P, \\parallel}} \\tag{1}\\]In this case, measurement of the return signal in the plane of polarization perpendicular to that of the emitted polarized laser light and careful calibration of the measurement of the lidar receiver are required [28].
As mentioned above, the AERONET sun/sky radiometer measures direct solar radiation and sky radiation. The measured data are automatically analyzed using the AERONET inversion algorithm [29]. The retrieved aerosol products are available from the AERONET database ([http://aeronet.gsfc.nasa.gov/](http://aeronet.gsfc.nasa.gov/)). The kernel look-up tables introduced by [29] allow us to infer \(\delta_{\mathrm{P}}\) from the AERONET inversion products. In addition, [30] reported that the AERONET-retrieved \(\delta_{\mathrm{P}}\) values show a high correlation with \(\delta_{\mathrm{P}}\) obtained from lidar measurements. The currently released version 3 of the AERONET retrieval added spectral \(\delta_{\mathrm{P}}\) to the list of standard inversion products, along with the complex refractive index, particle size distribution and single scattering albedo (\(\omega\)). For each observation, the elements F\({}_{11}\)(\(\lambda\)) and F\({}_{22}\)(\(\lambda\)) of the Mueller scattering matrix [31] are computed from the particle size distribution and the refractive index that have been inferred from the AERONET inversion product. The element F\({}_{11}\)(\(\lambda\)) is proportional to the flux of scattered light in the case of unpolarized incident light, while F\({}_{22}\)(\(\lambda\)) strongly depends on the angular and spectral distribution of the radiative intensity [31] as measured with the AERONET sun/sky radiometer [29]. From the elements F\({}_{11}\)(\(\lambda\)) and F\({}_{22}\)(\(\lambda\)) at the scattering angle of 180\({}^{\circ}\), \(\delta^{\mathrm{P}}\) can be computed as:
\\[\\delta^{\\mathrm{P}}_{\\lambda}=\\frac{1-\\mathrm{F}_{22}(\\lambda,180^{\\circ})/ \\mathrm{F}_{11}(\\lambda,180^{\\circ})}{1+\\mathrm{F}_{22}(\\lambda,180^{\\circ})/ \\mathrm{F}_{11}(\\lambda,180^{\\circ})}. \\tag{2}\\]
### AERONET Data Collection
AERONET Version 3 Level 2.0 aerosol inversion products were used in this study. The AERONET is a global network of ground-based CIMEL sun/sky radiometers (CIMEL, Paris, France) that directly measure solar and sky radiation. Observations are sent to the GSFC (Goddard Space Flight Center, Greenbelt, MD, U.S.A) for aerosol retrieval using the AERONET inversion algorithm [29]. AERONET inversion products include columnar comprehensive aerosol optical/microphysical properties (AOD, volume size distribution, complex index of refraction and single scattering albedo at 440, 675, 870 and 1020 nm) [26]. The recently released Version 3 of AERONET inversion products include spectral lidar ratios and particle linear depolarization ratios at 440, 675, 870 and 1020 nm.
In this study, fourteen AERONET sites were selected for analysis based on the availability of an extensive data record and their geographic distribution among representative aerosol source regions (Figure 1). The observation sites were selected as representative source regions for dust, smoke and urban/industrial mixed aerosol particles: (1) Dust, (2) Smoke and (3) Urban/industrial mixed on the basis of previous studies (Table 1). Other aerosol types, such as maritime aerosols, are not considered in this study due to their low aerosol loading conditions that are insufficient to meet the threshold for AERONET inversion retrieval [29; 32]. The AERONET provides level 2.0 inversion product only for observations with an AOD > 0.4 at 440 nm [29].
Figure 1: Map of AERONET observation sites based on the dominant particle type. (red: dust, green: smoke, blue: urban/industrial mixed).
### Data Filtering
Filtering of the available AERONET version 3 level 2.0 inversion products for the selected sites was necessary to ensure that the obtained values of \(\delta_{\mathrm{P}}\) were representative of each aerosol type. Figure 2 shows scatter plots of the Γ
ngstrΓΆm exponent (Γ
) versus the fine-mode fraction (FMF) for the AERONET sites that reflect the source regions dominated by dust, smoke and urban/industrial mixed particles. Γ
 is a widely used parameter describing the wavelength dependence of AOD and provides basic information on the aerosol size distribution [8]. High values of Γ
 (>2) typically indicate fine particles related to the combustion of fresh biomass, whereas lower values (close to or less than zero) indicate the presence of coarse particles such as dust [10; 33]. We employed the values of Γ
 obtained from the wavelength dependence of AOD at 440 nm and 870 nm (Γ
440-870) for data filtering. Dust and anthropogenic particles can also be distinguished by their particle size distribution [8]. For example, dust particles predominantly have a radius >1 \(\mu\)m (i.e., a lower FMF), whereas particles from combustion have a high FMF [29]. Figure 2 therefore illustrates the need to filter observations for each aerosol type, even at sites considered representative of a single aerosol source. The majority of observations at the dust sites show lower Γ
 and lower FMF (e.g., FMF < 0.4 and Γ
 < 0.4), values directly related to the contribution of dust particles. Particles that exhibit features of anthropogenic/biomass pollution or of a dusty mixture (higher Γ
 and higher FMF) are also found in the distribution at the dust sites, which suggests that the desert sites are markedly affected by a considerable contribution from anthropogenic pollution or smoke particles [34]. Likewise, the observations at the urban/industrial/mixed sites show a broad distribution across all possible values (FMF: 0.15-0.99, Γ
: 0-1.97); these sites are influenced by dust particles as well as by dust mixed with pollution and by pure pollution particles [10]. In addition, the higher values of Γ
 and FMF reflect the fact that smoke particles are frequently emitted at the smoke sites. Although higher values of Γ
 and FMF dominate at the smoke sites, filtering is still required.
To filter data that were not representative of the source aerosol, the FMF and Γ
 values were employed. For dust sites, we only consider observations with an Γ
440-870 value <0.4 and an FMF of the total AOD <0.4 [8; 35]. At smoke sites, two sets of filters are applied to eliminate non-smoke cases from the data, since other types of aerosol were likely transported periodically to these sites. This required Γ
 to be >1.4 to remove potential cases dominated by other aerosol types, such as mineral dust or mixed conditions, with the exception of the boreal sites (i.e., Bonanza Creek, Moscow, Tomsk and Yakutsk, for which Γ
 > 1.0 was required) [9]. The data are also restricted to the main burning season for smoke sites where this is well defined by a previous study [9]. For urban/industrial/mixed sites, we used the FMF to distinguish dusty mixed particles from pollution particles (mixed: 0.4 < FMF < 0.6, pollution: FMF > 0.6) [35]. In addition, we classified the pollution particles into non-absorbing (NA), moderately-absorbing (MA) and high-absorbing (HA) particles according to their \(\omega\) values, following the aerosol classification algorithm based on FMF and \(\omega\) suggested by [35] (NA: \(\omega\) > 0.95, MA: 0.95 > \(\omega\) > 0.85, HA: \(\omega\) < 0.85).
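The thresholds above can be summarized in a small amount of code. The sketch below only illustrates the filtering logic described in this section; the function name, the site set and the class labels are ours and are not part of any AERONET toolkit.

```python
BOREAL_SMOKE_SITES = {"Bonanza_Creek", "Moscow", "Tomsk", "Yakutsk"}

def classify_observation(site_type, site, angstrom_440_870, fmf, ssa):
    """Return the aerosol class of one inversion record, or None if it is filtered out."""
    if site_type == "dust":
        # Dust: coarse particles with weak wavelength dependence of AOD.
        return "dust" if (angstrom_440_870 < 0.4 and fmf < 0.4) else None
    if site_type == "smoke":
        # Smoke: fine particles; a relaxed threshold is used for the boreal sites.
        threshold = 1.0 if site in BOREAL_SMOKE_SITES else 1.4
        return "smoke" if angstrom_440_870 > threshold else None
    if site_type == "urban_industrial_mixed":
        if 0.4 < fmf < 0.6:
            return "mixed"                      # polluted dust
        if fmf > 0.6:                           # pollution, split by absorption
            if ssa > 0.95:
                return "pollution_NA"
            if ssa > 0.85:
                return "pollution_MA"
            return "pollution_HA"
    return None

# Example: a fine-mode, weakly absorbing case at an urban site is labelled NA pollution.
print(classify_observation("urban_industrial_mixed", "Beijing", 1.3, 0.94, 0.97))
```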
The number of cases for which the available AERONET Level 2.0 inversion products allow a retrieval of optical/microphysical properties was initially 7899, 1873 and 5753 for the dust, smoke and urban/industrial mixed sites, respectively. After filtering for representative aerosol cases, these numbers decreased to 5894, 1646 and 5627 for the dust, smoke and urban/industrial mixed aerosol sites, respectively. The corresponding values and the range of years for each site are summarized in Table 1.
\begin{table}
\begin{tabular}{l c c c c c c} \hline
**Site** & **Latitude [\({}^{\circ}\)]** & **Longitude [\({}^{\circ}\)]** & **Elevation [m]** & **Range of Year** & **Number of Retrievals (Filtered)** & **References** \\ \hline
**Dust** & & & & & & \\
Banizoumbou & 13.55 & 2.67 & 274 & 1995β1997, 1999β2011 & 3120 (2331) & \\
Capo Verde & 16.73 & \(-\)22.94 & 60 & 1994β1995, 1999β2004 & 597 (510) & [5; 36; 37; 38] \\
DMN\_Maine\_Soroa & 13.22 & 12.02 & 350 & 2005β2009 & 498 (365) & \\
IER\_Cinzana & 13.28 & \(-\)5.93 & 285 & 2004β2016 & 3684 (2688) & \\ \hline
**Smoke** & & & & & & \\
Moscow & 55.71 & 37.52 & 192 & 2001β2015 & 360 (206) & \\
Tomsk & 56.48 & 85.05 & 174 & 2003β2004, 2006β2010 & 98 (91) & \\
Alta Floresta & \(-\)9.87 & \(-\)56.10 & 277 & 1993β1995, 1999β2015 & 817 (779) & [5; 20; 27; 39] \\
Bonanza\_Creek & 64.74 & \(-\)148.32 & 353 & 1997β2000, 2002, 2004, & 190 (181) & \\
Tomsk\_22 & 56.42 & 84.07 & 80 & 2011β2016 & 159 (145) & \\
Yakutsk & 61.66 & 129.37 & 118.5 & 2004, 2006, 2008β2009, 2011β2017 & 249 (244) & \\ \hline
**Urban/Industrial/Mixed** & & & & & & \\
Gwangju\_GIST & 35.23 & 126.84 & 52 & 2004β2005, 2007β2017 & 1061 (1054) & \\
Beijing & 39.98 & 116.38 & 92 & 2001β2011 & 2541 (2450) & [7; 11; 40; 41] \\
Beijing\_CAMS & 39.93 & 116.32 & 106 & 2012β2017 & 1364 (1346) & \\
Seoul\_SNU & 37.46 & 126.95 & 116 & 2002β2003, 2012β2013, & 787 (777) & \\ \hline
\end{tabular}
\end{table}
Table 1: Geographical and retrieval information for the AERONET sites of each aerosol type, along with corresponding literature citations. The number of cases for the urban/industrial mixed aerosol excludes the cases classified as dust particles.
## 3 Results and Discussion
### Characteristics of Aerosol Types at Selected Sites
To confirm whether each aerosol site reflects the well-known characteristics of its aerosol type, we investigated the optical and microphysical properties of each aerosol type. In addition, investigating the characteristics of the aerosol types at the urban/industrial sites, which were classified as mixed, NA, MA and HA, should provide more detailed insight for the classification of the \(\delta_{\mathrm{P}}\) value according to aerosol type. Figure 3 shows the spectral \(\omega\) values at 440, 675, 870 and 1020 nm along with the particle size distributions obtained from the AERONET database for the sites of each aerosol type. The corresponding values are listed in Table 2.
Figure 3: Mean values of spectral (**a**) single scattering albedo and (**b**) particle size distribution for dust (black), smoke (red), mixed (green), non-absorbing pollution (blue), moderately-absorbing (cyan) and high-absorbing (magenta).
Figure 2: Scatter plot of fine-mode fraction (FMF) versus Γ
ngstrΓΆm exponent for the pair of wavelengths 440 and 870 nm (Γ
440-870) for (**a**) dust sites, (**b**) smoke sites and (**c**) urban/industrial/mixed sites. Thresholds of Γ
440-870 < 0.4 and FMF < 0.4 are used to determine the dust particles, Γ
440-870 > 1.0 is used to distinguish the smoke particles, 0.4 < FMF < 0.6 is used to distinguish the mixed particles and FMF > 0.6 is used to distinguish the pollution particles at the respective sites of each aerosol type.
As can be seen from Figure 3, the values of \(\omega\) at each representative aerosol source region differ significantly. The values of \(\omega\) for the dust sites were similar to those reported for dust in previous studies (0.96 at 550 nm and 0.94-0.98 at 440, 675, 870 and 1020 nm) [42; 43]. The \(\omega\) values for smoke have been previously reported as 0.90-0.95, 0.90-0.96, 0.90-0.97 and 0.89-0.97 at 440, 675, 870 and 1020 nm, respectively [9], which are similar to the \(\omega\) values measured at the smoke sites investigated herein. In addition, the \(\omega\) at 1020 nm of water-soluble aerosols (e.g., sulfate) is close to unity [44], while that of black carbon (BC) was previously reported to be close to 0 (0.07 at 1020 nm) [45]. The values of \(\omega\) for aerosol mixtures (e.g., BC mixed with sulfate) vary depending on the mixing ratio (e.g., 0.91 at 1020 nm for a 0.5 BC/sulfate mixing ratio with internal mixing) [46]. The obtained values of \(\omega\) for the NA, MA and HA particles could therefore be considered to result from mixing of these aerosol particles. We note that NA particles can be regarded as light-scattering aerosols such as sulfate-dominant particles, whereas HA particles are BC-dominant particles originating from urban/industrial regions. The value of \(\omega\) for HA is significantly lower than that of the smoke particles. BC that is coated with non-absorbing material absorbs more strongly than the same amount of uncoated BC [47]. We therefore believe that BC particles in urban/industrial regions are more likely to exist in a state mixed with other aerosols than the BC measured at the smoke sites. We initially considered that smoke particles and HA particles would have similar \(\omega\) values, as both consist mainly of light-absorbing particles such as BC. However, the values of \(\omega\) for the smoke particles are significantly higher than the corresponding values for the HA particles, which was considered to be the result of mixing with other aerosols. It should also be noted that spherical BC particles exhibit absorption similar to that of BC aggregates, in addition to having double the scattering capacity [48]. Based on our results, the smoke particles were considered to be BC or organic carbon (OC) particles, which could originate from biomass burning, whereas the HA particles were likely to be BC particles mixed with other aerosols.
The aerosol types at each site are more clearly distinguished according to their spectral \(\omega\) behavior. In this context, dust particles, which are aggregates of varying combinations of clay, iron oxide and quartz, exhibit strong light absorption in the short wavelength region and lower absorption at visible and near-infrared wavelengths [17]. In addition, fine-mode particles and hygroscopic aerosol particles such as sulfate display an almost neutral spectral dependence of \(\omega\) and high light-scattering properties [19]. BC exhibits the strongest light absorption in the near-infrared wavelength region, while brown carbon and OC exhibit stronger light absorption at ultraviolet and visible wavelengths [20]. Furthermore, the variation of the spectral \(\omega\) values for the mixed particles in this study is similar to the wavelength dependence of the dust particles. However, the values of \(\omega\) for the mixed particles are lower than the corresponding values for the dust particles at visible and near-infrared wavelengths; it is considered that the \(\omega\) of the dust particles is altered by mixing with pollutant particles. As dust and anthropogenic particles tend to be transported from different sources and typically exist as a mixture over East Asia [10; 40], the particles classified as mixed aerosol at the urban/industrial/mixed sites should be regarded as polluted dust after filtering out the dust and pollution particles. Moreover, the values of \(\omega\) for smoke particles tend to decrease as wavelength
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
 & \multicolumn{6}{c}{**Aerosol Type**} \\ \hline
**Properties** & **Dust** & **Smoke** & **Mixed** & **Pollution (NA)** & **Pollution (MA)** & **Pollution (HA)** \\ \hline
Γ
\({}_{440-870}\) & 0.18 \(\pm\) 0.09 & 1.75 \(\pm\) 0.22 & 0.57 \(\pm\) 0.10 & 1.31 \(\pm\) 0.22 & 1.25 \(\pm\) 0.21 & 1.25 \(\pm\) 0.17 \\ \hline
\(\omega_{440}\) & 0.90 \(\pm\) 0.03 & 0.94 \(\pm\) 0.03 & 0.90 \(\pm\) 0.03 & 0.97 \(\pm\) 0.01 & 0.92 \(\pm\) 0.03 & 0.83 \(\pm\) 0.02 \\
\(\omega_{675}\) & 0.97 \(\pm\) 0.02 & 0.93 \(\pm\) 0.04 & 0.95 \(\pm\) 0.02 & 0.97 \(\pm\) 0.01 & 0.93 \(\pm\) 0.03 & 0.87 \(\pm\) 0.03 \\
\(\omega_{870}\) & 0.98 \(\pm\) 0.02 & 0.92 \(\pm\) 0.05 & 0.95 \(\pm\) 0.02 & 0.97 \(\pm\) 0.02 & 0.92 \(\pm\) 0.03 & 0.86 \(\pm\) 0.03 \\
\(\omega_{1020}\) & 0.98 \(\pm\) 0.02 & 0.91 \(\pm\) 0.05 & 0.95 \(\pm\) 0.02 & 0.96 \(\pm\) 0.02 & 0.92 \(\pm\) 0.03 & 0.85 \(\pm\) 0.04 \\ \hline
FMF & 0.29 \(\pm\) 0.06 & 0.96 \(\pm\) 0.04 & 0.52 \(\pm\) 0.05 & 0.94 \(\pm\) 0.04 & 0.87 \(\pm\) 0.08 & 0.84 \(\pm\) 0.07 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Mean values of the Γ
ngstrΓΆm exponent for the 440β870 nm wavelength pair (Γ
440-870), single scattering albedo (\(\omega\)) and fine-mode fraction (FMF) from the AERONET Version 3 Level 2 inversion product at each representative aerosol source region.
increases. Although it is thought that the aerosol types classified as smoke may also include BC, it is important to note that variations in the BC concentration and in other aerosols such as brown carbon or OC could produce an ambiguous wavelength dependence of \(\omega\). We also note that the spectral \(\omega\) for NA tends to be nearly neutral, but with a higher value of \(\omega\). It is thought that the aerosol classified as NA mostly consists of fine-mode and hygroscopic aerosol particles (e.g., sulfate). The \(\omega\) values for the HA particles decrease upon increasing the wavelength, which is similar to the wavelength dependence of BC; however, in this case, the strongest light absorption was found at 440 nm.
The particle size distribution obtained from each aerosol source region also varies according to the aerosol type. Dust and mixed particles mostly consist of coarse-mode particles, while fine-mode particles predominate at the smoke sites. These results lead to lower values of Γ
440-870 and FMF for the dust (0.18 \(\pm\) 0.09 and 0.29 \(\pm\) 0.06, respectively) and mixed particles (defined as polluted dust, 0.57 \(\pm\) 0.10 and 0.52 \(\pm\) 0.05, respectively). The higher values of Γ
 and FMF at the smoke sites were found to be 1.75 \(\pm\) 0.22 and 0.96 \(\pm\) 0.04, respectively, as shown in Table 2. Lower values of Γ
 (<0.4) reflect the characteristics of dust particles, while higher values of Γ
 (smoke: 1.2-1.95; urban pollution: 0.75-1.2) were reported for smoke or urban pollutants in previous studies [9; 38; 39]. In addition, it should be noted that the NA particles consist mainly of fine-mode particles with a high FMF value (0.94 \(\pm\) 0.04), whereas the HA particles have a larger coarse-mode contribution to the total particle size distribution than the smoke particles. The Γ
440-870 and FMF values were found to be 1.25 \(\pm\) 0.17 and 0.84 \(\pm\) 0.07 for the HA particles at the urban/industrial/mixed sites, respectively.
### Statistics of the Depolarization Ratio
Figure 4 shows the frequency distribution of the AERONET-derived \(\delta_{\mathrm{P}}\) values with respect to the aerosol type, and the corresponding values are summarized in Table 3. As indicated, the mean values of \(\delta_{\mathrm{P}}\) differ markedly with respect to the selected observation sites. More specifically, for the dust sites, the average \(\delta_{\mathrm{P}}\) values were 0.22 \(\pm\) 0.04, 0.27 \(\pm\) 0.03, 0.29 \(\pm\) 0.03 and 0.30 \(\pm\) 0.03 at 440, 675, 870 and 1020 nm, respectively. A maximum value of 0.37 was obtained at 1020 nm at the desert sites. In addition, the mean value of \(\delta_{\mathrm{P}}\) for polluted dust particles was slightly lower than for pure dust particles, with mean values of 0.13 \(\pm\) 0.03, 0.17 \(\pm\) 0.03, 0.19 \(\pm\) 0.03 and 0.21 \(\pm\) 0.04 at 440, 675, 870 and 1020 nm being obtained, respectively, for polluted dust particles at the urban/industrial/mixed sites. These \(\delta_{\mathrm{P}}\) values are similar to those reported from several field campaigns. For example, the Saharan Mineral Dust experiment 2006 (SAMUM 2006) reported a maximum \(\delta_{\mathrm{P}}\) value of \(\sim\)0.33 at 532 nm for Saharan dust particles, while dust particles observed during SAMUM 2008, which were transported from Arabia (mainly Syria) across Cyprus, gave values of 0.27-0.35 at 532 nm [49; 50]. Smaller values of \(\delta_{\mathrm{P}}\) indicate a mixture of dust with weakly depolarizing particles such as biomass burning smoke or particles of anthropogenic origin [10]. Furthermore, a previous report [13] found \(\delta_{\mathrm{P}}\) values of 0.13-0.20 at 532 nm for polluted dust particles using the High Spectral Resolution Lidar (HSRL) technique, while the \(\delta_{\mathrm{P}}\) values for polluted dust particles were found to be 0.10-0.20, that is, less than the corresponding values for pure dust particles due to mixing with particles of anthropogenic origin [51]. Likewise, the values of \(\delta_{\mathrm{P}}\) for polluted dust in this contribution are distinguishable from the \(\delta_{\mathrm{P}}\) values of pure dust.
In contrast, smoke particles and urban/industrial pollutants typically consist of spherical particles with lower \(\delta_{\mathrm{P}}\) values. More specifically, the mean values of \(\delta_{\mathrm{P}}\) were found to be 0.006 \(\pm\) 0.015, 0.012 \(\pm\) 0.012, 0.01 \(\pm\) 0.01 and 0.005 \(\pm\) 0.014 at 440, 675, 870 and 1020 nm, respectively, for the smoke sites. In terms of the literature values, it was reported [52] that compact spherical tropospheric smoke particles lead to almost no depolarization at 355, 532 and 1020 nm (<0.03), and so the obtained values of \(\delta_{\mathrm{P}}\) at the smoke sites appear to indicate the presence of spherical smoke particles. In addition, a value of 0.05 at 532 nm was previously used to separate smoke particles from a dust-smoke mixed plume [34]. Furthermore, the values of \(\delta_{\mathrm{P}}\) for coated soot aggregates were found to be 0.24, 0.09 and 0.02 at 355, 532 and 1064 nm, respectively [24], while values of 0.22, 0.18 and 0.04 at 355, 532 and 1064 nm were reported for dry, irregularly shaped soot particles in the stratosphere [52]. Again, these considerations suggest that the smoke particles discussed herein are likely to be spherical in nature. Moreover, the mean values obtained for the pollution particles ranged from 0.03 to 0.06 regardless of the classification type, while a previous study [23] found that the \(\delta_{\mathrm{P}}\) value of urban aerosol particles was 0.03-0.07 at 532 nm. Values for individual aerosol types considered as urban pollution were also reported [53], namely 0.04 \(\pm\) 0.003 at 532 nm for ammonium sulfate crystals and 0.01 \(\pm\) 0.001 for liquid droplets in the submicrometer range.
Figure 4: Frequency distribution of the spectral depolarization ratio for dust, smoke, mixed, non-absorbing (NA), moderate-absorbing (MA) and high-absorbing (HA) pollution particles at 440 (blue), 675 (green), 870 (magenta) and 1020 (red) nm wavelengths, respectively. Statistics for the histograms are provided in Table 3.
### Spectral Dependency of Depolarization Ratio
The determination of aerosol particle types, and in particular of pollutant particles, appears to be limited when only the absolute values of \(\delta_{\mathrm{P}}\) are used, because the \(\delta_{\mathrm{P}}\) thresholds that distinguish certain types of aerosol particles overlap. Thus, to investigate whether the spectral dependence of \(\delta_{\mathrm{P}}\) could provide additional information regarding the aerosol type, we analyzed the wavelength dependence of \(\delta_{\mathrm{P}}\) according to the representative aerosol source, as shown in Figure 5. We found that the wavelength dependence of \(\delta_{\mathrm{P}}\) differed significantly with respect to the selected observation sites and aerosol types. For dust and polluted dust particles, the maximum of the \(\delta_{\mathrm{P}}\) distribution decreased as the wavelength decreased. We also note that the spectral \(\delta_{\mathrm{P}}\) behavior of dust and polluted particles has been investigated previously with lidar observations [24; 25]. The AERONET-derived \(\delta_{\mathrm{P}}\) values for the dust and polluted dust particles in our study were found to peak at 0.30 and 0.21 at 1020 nm and then decrease steadily to 0.22 and 0.13 at 440 nm, respectively. Similarly, for local North American
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & \multicolumn{4}{c}{**Particle Linear Depolarization Ratio**} \\ \cline{2-5}
 & **440 nm** & **675 nm** & **870 nm** & **1020 nm** \\ \hline
**Dust** & 0.224 \(\pm\) 0.035 & 0.269 \(\pm\) 0.027 & 0.291 \(\pm\) 0.028 & 0.303 \(\pm\) 0.030 \\ \hline
Banizoumbou & 0.225 \(\pm\) 0.036 & 0.268 \(\pm\) 0.028 & 0.29 \(\pm\) 0.029 & 0.302 \(\pm\) 0.031 \\
Capo Verde & 0.249 \(\pm\) 0.030 & 0.283 \(\pm\) 0.024 & 0.300 \(\pm\) 0.025 & 0.309 \(\pm\) 0.026 \\
DMN\_Maine\_Soroa & 0.225 \(\pm\) 0.030 & 0.265 \(\pm\) 0.023 & 0.283 \(\pm\) 0.024 & 0.294 \(\pm\) 0.026 \\
IER\_Cinzana & 0.218 \(\pm\) 0.033 & 0.268 \(\pm\) 0.026 & 0.292 \(\pm\) 0.027 & 0.305 \(\pm\) 0.029 \\ \hline
**Smoke** & 0.006 \(\pm\) 0.015 & 0.012 \(\pm\) 0.012 & 0.01 \(\pm\) 0.01 & 0.005 \(\pm\) 0.014 \\ \hline
Moscow & 0.011 \(\pm\) 0.025 & 0.018 \(\pm\) 0.023 & 0.018 \(\pm\) 0.025 & 0.012 \(\pm\) 0.027 \\
Tomsk & 0.017 \(\pm\) 0.033 & 0.022 \(\pm\) 0.029 & 0.024 \(\pm\) 0.031 & 0.018 \(\pm\) 0.035 \\
Alta Floresta & 0.003 \(\pm\) 0.007 & 0.010 \(\pm\) 0.005 & 0.011 \(\pm\) 0.005 & 0.004 \(\pm\) 0.005 \\
Bonanza\_Creek & 0.006 \(\pm\) 0.013 & 0.012 \(\pm\) 0.010 & 0.012 \(\pm\) 0.009 & 0.005 \(\pm\) 0.008 \\
Tomsk\_22 & 0.004 \(\pm\) 0.005 & 0.010 \(\pm\) 0.004 & 0.010 \(\pm\) 0.003 & 0.003 \(\pm\) 0.003 \\
Yakutsk & 0.004 \(\pm\) 0.013 & 0.010 \(\pm\) 0.008 & 0.010 \(\pm\) 0.007 & 0.003 \(\pm\) 0.006 \\ \hline
\multicolumn{5}{l}{**Urban/Industrial/Mixed**} \\ \hline
Mixed (Polluted dust) & 0.129 \(\pm\) 0.029 & 0.171 \(\pm\) 0.030 & 0.193 \(\pm\) 0.034 & 0.208 \(\pm\) 0.037 \\
Gwangju\_GIST & 0.132 \(\pm\) 0.028 & 0.178 \(\pm\) 0.031 & 0.202 \(\pm\) 0.036 & 0.217 \(\pm\) 0.039 \\
Beijing & 0.129 \(\pm\) 0.031 & 0.167 \(\pm\) 0.030 & 0.188 \(\pm\) & \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Mean values of the spectral particle linear depolarization ratio at 440, 675, 870 and 1020 nm for each aerosol type and observation site.
dust, it was previously reported [24] that \(\delta_{\mathrm{P}}\) reached a peak value of 0.38 at 1064 nm and gave lower values of 0.37 and 0.24 at 532 and 355 nm, respectively. In addition, a similar pattern was reported [54] for the spectral variation of \(\delta_{\mathrm{P}}\) in the desert region, with a peak at 1064 nm and the lowest value at 440 nm (Saharan dust: 0.24 at 440 nm, 0.31 at 1020 nm). Furthermore, the first peak in the spectral depolarization ratio was found to shift to larger wavelengths as the particle size increases [54]. These spectral dependences of \(\delta_{\mathrm{P}}\) are considered a feature of dust particles, and so the obtained values of \(\delta_{\mathrm{P}}\) indicate that the dust is composed of very large particles; as a result, peak values occur at the longest wavelength and decrease toward shorter wavelengths.
In contrast, several lidar observations show that the spectral dependence of \\(\\delta_{\\rm P}\\) differs according to the origin and age of the observed dust plume. For example, a peak of 0.28 at 532 nm and lower values of 0.25 at 355 nm and 0.23 at 1064 nm were reported for aged Saharan dust [25], while a similar pattern was observed in the spectral variation of \\(\\delta_{\\rm P}\\) for aged and transported dust with a clear maximum of 0.30 at 532 nm and smaller \\(\\delta_{\\rm P}\\) values of 0.27 and 0.25 at 1064 and 355 nm, respectively [24]. It is possible that the large dust particles settled during transport for the aged and transported dust cases. However, the polluted dust particles in this contribution still contain significant amounts of large dust particles (i.e., coarse-mode particles, as shown in Figure 3). Accordingly, the wavelength dependency of \\(\\delta_{\\rm P}\\) for the polluted particles is similar to the dependency of \\(\\delta_{\\rm P}\\) for the dust particles.
The \\(\\delta_{\\rm P}\\) values for the smoke particles examined herein peaked as 0.012 at 675 nm with lower values of 0.006 at 440 nm and 0.005 at 1020 nm being observed. In a previous report [24], a peak value of 0.24 was obtained at 355 nm and this decreased to 0.09 and 0.02 at 532 and 1064 nm, respectively, for smoke particles considered to be coated soot aggregates. It was therefore concluded that the larger \\(\\delta_{\\rm P}\\) value at 355 nm compared to those observed at longer wavelengths may indicate a smaller size for the non-spherical particles than for dust particles. We note that the smoke particles examined herein are likely influenced by spherical BC or organic matter that should be distinguished as BC/other aerosol mixtures.
Figure 5: Spectral variation of the particle linear depolarization ratios for dust, smoke, mixed, non-absorbing (NA), moderate-absorbing (MA) and high-absorbing (HA) pollution particles, respectively.
The values of \\(\\delta_{\\mathrm{p}}\\) for the NA particles peaked as 0.05 at 440 nm and decreased steadily to 0.03 at 1020 nm, whereas the MA particles exhibited a nearly neutral \\(\\delta_{\\mathrm{p}}\\) spectral dependence. In terms of the HA particles, the wavelength dependency appeared rather complex. More specifically, the values of \\(\\delta_{\\mathrm{p}}\\) for the HA particles peaked at 675 nm with lower values at 870 and 1020 nm in the majority of cases, thereby suggesting that the wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) may provide more detailed information regarding the aerosol classification. In addition, the value of \\(\\delta_{\\mathrm{p}}\\) for the NA particles, which is considered to be a sulfate, decreased upon increasing the wavelength, with the maximum value at 440 nm being attributed to the fact that the NA particles mostly consist of fine-mode particles. Furthermore, the neutral wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) for the MA particles could be explained by the broad particle size distribution, as various types of aerosols and aerosol mixtures could be determined as MA particles. Although HA particles could be considered to be BC aggregates, an explanation of the wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) for the HA pollution particles is challenging. Indeed, we have insufficient information on whether BC aggregates consisted of internal and/or external mixtures. Moreover, the comparison of the wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) for BC aggregates is limited and we expect that BC aggregates originating from difference sources (i.e., biomass burning, anthropogenic pollution) may influence the wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) for BC/other aerosol mixtures.
## 4 Summary and Conclusions
We herein presented the spectral linear particle depolarization ratios (\(\delta_{\mathrm{P}}\)) obtained from Aerosol Robotic NETwork (AERONET) sun/sky radiometer observations with respect to the aerosol type. The recently released AERONET version 3 level 2.0 inversion product was employed to investigate the optical and microphysical properties of aerosols, including the spectral \(\delta_{\mathrm{P}}\) values. AERONET observation sites considered to be representative of aerosol source regions for dust, smoke and urban/industrial mixed aerosols were selected to investigate \(\delta_{\mathrm{P}}\) according to the aerosol type. To obtain data representative of each aerosol condition, the observation data were filtered using the Γ
ngstrΓΆm exponent (Γ
), fine-mode fraction (FMF) and single scattering albedo (\(\omega\)). Moreover, polluted dust and the non-absorbing (NA), moderately-absorbing (MA) and high-absorbing (HA) pollution particles were classified according to their light-absorbing properties.
The AERONET-derived \\(\\delta_{\\mathrm{p}}\\) values were generally within the range of independent lidar observations for each aerosol type although they are provided at different wavelengths. We found that the spectral variation of the \\(\\delta_{\\mathrm{p}}\\) value differed markedly according to the aerosol type. More specifically, dust and polluted dust particles gave peak \\(\\delta_{\\mathrm{p}}\\) values at 1020 nm, which decreased upon decreasing the wavelength. We believe that the larger dust particles and the growth of particles due to the mixing of dust with other aerosols were responsible for this observed wavelength dependency for dust and polluted dust, respectively. Furthermore, the wavelength dependency of \\(\\delta_{\\mathrm{p}}\\) for smoke particles was attributed to the fact that the smoke particles were considered to be coated soot aggregates. The smoke sites selected in this study appeared to be mainly influenced by spherical black carbon and organic carbon. Moreover, the spectral \\(\\delta_{\\mathrm{p}}\\) values for NA particles decreased upon increasing the wavelength, whereas a neutral wavelength dependency was found in the case of the MA particles.
We also found that the depolarization ratio is a useful aerosol parameter for aerosol classification. More specifically, the \\(\\delta_{\\mathrm{p}}\\) values presented in this study could be used as reliable reference values to identify the contribution of each aerosol type in aerosol particle mixtures in future studies. Interest regarding the utilization of \\(\\delta_{\\mathrm{p}}\\) obtained at triple wavelengths to infer the size of particles and the aerosol types has recently increased in the lidar community. Consequently, the spectral wavelength dependency of the \\(\\delta_{\\mathrm{p}}\\) value requires a detailed discussion. Unfortunately, lidar observations with \\(\\delta_{\\mathrm{p}}\\) values obtained at triple wavelengths and investigations into the spectral \\(\\delta_{\\mathrm{p}}\\) values based on the aerosol type are rare due to limitations in the available lidar observations (i.e., a limited number of lidar stations and a limited specification of lidar systems). The AERONET sun/sky radiometer may therefore be a suitable alternative to obtaining details regarding the spectral wavelength dependency of \\(\\delta_{\\mathrm{p}}\\). We therefore expect that our findings could provide additional insight into the aerosol classification/separation in remote sensing research. Ongoing research is focusing on collecting sufficient observations of depolarization ratios at multiple wavelengths. Eventually, we hope to establish a reasonable interpretation of the wavelength dependency of \\(\\delta_{\\mathrm{P}}\\) for aerosol classification.
**Author Contributions:** I-S.Z. and S.-K.S. had the idea for this study. I-S.Z. and S.-K.S. performed the data analysis and prepared the figures and tables. I.-S.Z. and S.-K.S. contributed to the discussion of the findings and the preparation of the manuscript.
**Funding:** This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number NRF-2017R1D1A3B03034467) and the International Research & Development Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (grant number 2018K1A3A7A08089712).
**Acknowledgments:** We thank the principal investigators and their staff for establishing and maintaining the AERONET sites used in this investigation.
**Conflicts of Interest:** The authors declare no conflict of interest.
## References
* _Stocker et al. (2013)_ Stocker, T.; Qin, D.; Plattner, G.; Tignor, M.; Allen, S.; Boschung, J.; Nauels, A.; Xia, Y.; Bex, V.; Midgley, P. _IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change_; Cambridge University Press: Cambridge, UK, 2013.
* Kaskaoutis and Kambezidis (2008) Kaskaoutis, D.; Kambezidis, H.D. The choice of the most appropriate aerosol model in a radiative transfer code. _Sol. Energy_**2008**, _82_, 1198-1208. [CrossRef]
* Kaskaoutis et al. (2007) Kaskaoutis, D.; Kosmopoulos, P.; Kambezidis, H.D.; Nastos, P.T. Aerosol climatology and discrimination of different types over Athens, Greece, based on MODIS data. _Atmos. Environ._**2007**, _41_, 7315-7329. [CrossRef]
* _Satheesh and Srinivasan (2006)_ Satheesh, S.; Srinivasan, J. A method to estimate aerosol radiative forcing from spectral optical depths. _J. Atmos. Sci._**2006**, _63_, 1082-1092. [CrossRef]
* _Giles et al. (2012)_ Giles, D.M.; Holben, B.N.; Eck, T.F.; Sinyuk, A.; Smirnov, A.; Slutsker, I.; Dickerson, R.; Thompson, A.; Schafer, J. An analysis of AERONET aerosol absorption properties and classifications representative of aerosol source regions. _J. Geophys. Res. Atmos._**2012**, _117_, D17203. [CrossRef]
* _Kahn et al. (2010)_ Kahn, R.A.; Gaitley, B.J.; Garay, M.J.; Diner, D.J.; Eck, T.F.; Smirnov, A.; Holben, B.N. Multiangle Imaging SpectroRadiometer global aerosol product assessment by comparison with the Aerosol Robotic Network. _J. Geophys. Res. Atmos._**2010**, _115_, D23209. [CrossRef]
* Eck et al. (2005) Eck, T.; Holben, B.; Dubovik, O.; Smirnov, A.; Goloub, P.; Chen, H.; Chatenet, B.; Gomes, L.; Zhang, X.Y.; Tsay, S.C. Columnar aerosol optical properties at AERONET sites in central eastern Asia and aerosol transport to the tropical mid-Pacific. _J. Geophys. Res. Atmos._**2005**, _110_, D06202. [CrossRef]
* _Schuster et al. (2006)_ Schuster, G.L.; Dubovik, O.; Holben, B.N. Angstrom exponent and bimodal aerosol size distributions. _J. Geophys. Res. Atmos._**2006**, _111_, D07207. [CrossRef]
* _Sayer et al. (2014)_ Sayer, A.; Hsu, N.; Eck, T.; Smirnov, A.; Holben, B. AERONET-based models of smoke-dominated aerosol near source regions and transported over oceans, and implications for satellite retrievals of aerosol optical depth. _Atmos. Chem. Phys._**2014**, _14_, 11493-11523. [CrossRef]
* _Shin et al. (2015)_ Shin, S.K.; Muller, D.; Lee, C.; Lee, K.H.; Shin, D.; Kim, Y.J.; Noh, Y.M. Vertical variation of optical properties of mixed Asian dust/pollution plumes according to pathway of air mass transport over East Asia. _Atmos. Chem. Phys._**2015**, _15_, 6707-6720. [CrossRef]
* _Kim et al. (2016)_ Kim, M.; Kim, J.; Jeong, U.; Kim, W.; Hong, H.; Holben, B.; Eck, T.F.; Lim, J.H.; Song, C.K.; Lee, S. Aerosol optical properties derived from the DRAGON-NE Asia campaign, and implications for a single-channel algorithm to retrieve aerosol optical depth in spring from Meteorological Imager (MI) on-board the Communication, Ocean, and Meteorological Satellite (COMS). _Atmos. Chem. Phys._**2016**, _16_, 1789-1808. [CrossRef]
* _Kaskaoutis et al. (2010)_ Kaskaoutis, D.G.; Sifakis, N.; Retalis, A.; Kambezidis, H.D. Aerosol monitoring over Athens using satellite and ground-based measurements. _Adv. Meteorol._**2010**, _2010_, 147910. [CrossRef]
* _Burton et al. (2012)_ Burton, S.; Ferrare, R.; Hostetler, C.; Hair, J.; Rogers, R.; Obland, M.; Butler, C.; Cook, A.; Harper, D.; Froyd, K. Aerosol classification using airborne High Spectral Resolution Lidar measurements-methodology and examples. _Atmos. Meas. Technol._**2012**, \(5\), 73-98. [CrossRef]
* Gross et al. (2013) Gross, S.; Esselborn, M.; Weinzierl, B.; Wirth, M.; Fix, A.; Petzold, A. Aerosol classification by airborne high spectral resolution lidar observations. _Atmos. Chem. Phys._**2013**, _13_, 2487-2505. [CrossRef]
* Omar et al. (2005) Omar, A.H.; Won, J.G.; Winker, D.M.; Yoon, S.C.; Dubovik, O.; McCormick, M.P. Development of global aerosol models using cluster analysis of Aerosol Robotic Network (AERONET) measurements. _J. Geophys. Res. Atmos._**2005**, _110_, D10S14. [CrossRef]
* Chen et al. (2009) Chen, W.-N.; Chen, Y.-W.; Chou, C.C.; Chang, S.-Y.; Lin, P.-H.; Chen, J.-P. Columnar optical properties of tropospheric aerosol by combined lidar and sunphotometer measurements at Taipei, Taiwan. _Atmos. Environ._**2009**, _43_, 2700-2708. [CrossRef]
* Abel et al. (2003) Abel, S.J.; Haywood, J.M.; Highwood, E.J.; Li, J.; Buseck, P.R. Evolution of biomass burning aerosol properties from an agricultural fire in southern Africa. _Geophys. Res. Lett._**2003**, _30_, 1785. [CrossRef]
* Sokolik and Toon (1999) Sokolik, I.N.; Toon, O.B. Incorporation of mineralogical composition into models of the radiative properties of mineral aerosol from UV to IR wavelengths. _J. Geophys. Res. Atmos._**1999**, _104_, 9423-9444. [CrossRef]
* Dubovik et al. (2002) Dubovik, O.; Holben, B.; Eck, T.F.; Smirnov, A.; Kaufman, Y.J.; King, M.D.; Tanre, D.; Slutsker, I. Variability of absorption and optical properties of key aerosol types observed in worldwide locations. _J. Atmos. Sci._**2002**, _59_, 590-608. [CrossRef]
* Eck et al. (2009) Eck, T.; Holben, B.; Reid, J.; Sinyuk, A.; Hyer, E.; O'Neill, N.; Shaw, G.; Vande Castle, J.; Chapin, F.; Dubovik, O. Optical properties of boreal region biomass burning aerosols in central Alaska and seasonal variation of aerosol optical depth at an Arctic coastal site. _J. Geophys. Res. Atmos._**2009**, _114_, D11201. [CrossRef]
* Freudenthaler et al. (2006) Freudenthaler, V.; Esselborn, M.; Wiegner, M.; Heese, B.; Tesche, M.; Ansmann, A.; Muller, D.; Althausen, D.; Wirth, M.; Fix, A. Depolarization ratio profiling at several wavelengths in pure Saharan dust during SAMUM 2006. _Tellus B_**2009**, _61_, 165-179. [CrossRef]
* Tesche et al. (2009) Tesche, M.; Ansmann, A.; Mueller, D.; Althausen, D.; Mattis, I.; Heese, B.; Freudenthaler, V.; Wiegner, M.; Esselborn, M.; Pisani, G. Vertical profiling of Saharan dust with Raman lidars and airborne HSRL in southern Morocco during SAMUM. _Tellus B_**2009**, _61_, 144-164. [CrossRef]
* Burton et al. (2013) Burton, S.; Ferrare, R.; Vaughan, M.; Omar, A.; Rogers, R.; Hostetler, C.; Hair, J. Aerosol classification from airborne HSRL and comparisons with the CALIPSO vertical feature mask. _Atmos. Meas. Technol._**2013**, \(6\), 1397-1412. [CrossRef]
* Burton et al. (2015) Burton, S.; Hair, J.; Kahnert, M.; Ferrare, R.; Hostetler, C.; Cook, A.; Harper, D.; Berkoff, T.; Seaman, S.; Collins, J. Observations of the spectral dependence of linear particle depolarization ratio of aerosols using NASA Langley airborne High Spectral Resolution Lidar. _Atmos. Chem. Phys._**2015**, _15_, 13453-13473. [CrossRef]
* Haarig et al. (2017) Haarig, M.; Ansmann, A.; Althausen, D.; Klepel, A.; Gross, S.; Freudenthaler, V.; Toledano, C.; Mamouri, R.-E.; Farrell, D.A.; Prescod, D.A. Triple-wavelength depolarization-ratio profiling of Saharan dust over Barbados during SALTRACE in 2013 and 2014. _Atmos. Chem. Phys._**2017**, _17_, 10767. [CrossRef]
* Holben et al. (1998) Holben, B.N.; Eck, T.F.; Slutsker, I.; Tanre, D.; Buis, J.; Setzer, A.; Vermote, E.; Reagan, J.; Kaufman, Y.; Nakajima, T. AERONET--A federated instrument network and data archive for aerosol characterization. _Remote Sens. Environ._**1998**, _66_, 1-16. [CrossRef]
* Kambezidis et al. (2008) Kambezidis, H.D.; Kaskaoutis, D. Aerosol climatology over four AERONET sites: An overview. _Atmos. Environ._**2008**, _42_, 1892-1906. [CrossRef]
* Mattis et al. (2009) Mattis, I.; Tesche, M.; Grein, M.; Freudenthaler, V.; Muller, D. Systematic error of lidar profiles caused by a polarization-dependent receiver transmission: Quantification and error correction scheme. _Appl. Opt._**2009**, _48_, 2742-2751. [CrossRef]
* Dubovik et al. (2006) Dubovik, O.; Sinyuk, A.; Lapyonok, T.; Holben, B.N.; Mishchenko, M.; Yang, P.; Eck, T.F.; Volten, H.; Munoz, O.; Veihelmann, B. Application of spheroid models to account for aerosol particle nonsphericity in remote sensing of desert dust. _J. Geophys. Res. Atmos._**2006**, _111_, D11208. [CrossRef]
* Noh et al. (2017) Noh, Y.; Muller, D.; Lee, K.; Kim, K.; Shimizu, A.; Sano, I.; Park, C.B. Depolarization ratios retrieved by AERONET sun-sky radiometer data and comparison to depolarization ratios measured with lidar. _Atmos. Chem. Phys._**2017**, _17_, 6271-6290. [CrossRef]
* Bohren and Huffman (1983) Bohren, C.; Huffman, D. _Absorption and Scattering of Light by Small Particles_; Wiley: Weinheim, Germany, 1983. [CrossRef]
* Sayer et al. (2012) Sayer, A.M.; Smirnov, A.; Hsu, N.C.; Holben, B.N. A pure marine aerosol model, for use in remote sensing applications. _J. Geophys. Res. Atmos._**2012**, _117_, D05213. [CrossRef]
* Russell et al. (2010) Russell, P.B.; Bergstrom, R.W.; Shinozuka, Y.; Clarke, A.D.; DeCarlo, P.F.; Jimenez, J.L.; Livingston, J.M.; Redemann, J.; Dubovik, O.; Strawa, A. Absorption angstrom exponent in AERONET and related data as an indicator of aerosol composition. _Atmos. Chem. Phys._**2010**, _11_, 1155-1169. [CrossRef]
* Tesche et al. (2011) Tesche, M.; Muller, D.; Gross, S.; Ansmann, A.; Althausen, D.; Freudenthaler, V.; Weinzierl, B.; Veira, A.; Petzold, A. Optical and microphysical properties of smoke over Cape Verde inferred from multiwavelength lidar measurement. _Tellus B_**2011**, _63_, 677-694. [CrossRef]
* Lee et al. (2010) Lee, J.; Kim, J.; Song, C.H.; Kim, S.B.; Chun, Y.; Sohn, B.J.; Holben, B.N. Characteristics of aerosol types from AERONET sunphotometer measurements. _Atmos. Environ._**2010**, _44_, 3110-3117. [CrossRef]
* Tanre et al. (2001) Tanre, D.; Kaufman, Y.; Holben, B.; Chatenet, B.; Karnieli, A.; Lavenu, F.; Blarel, L.; Dubovik, O.; Remer, L.; Smirnov, A. Climatology of dust aerosol size distribution and optical properties derived from remotely sensed data in the solar spectrum. _J. Geophys. Res. Atmos._**2001**, _106_, 18205-18217. [CrossRef]
* Reid et al. (2003) Reid, J.S.; Kinney, J.E.; Westphal, D.L.; Holben, B.N.; Welton, E.J.; Tsay, S.C.; Eleuterio, D.P.; Campbell, J.R.; Christopher, S.A.; Colarco, P. Analysis of measurements of Saharan dust by airborne and ground-based remote sensing methods during the Puerto Rico Dust Experiment (PRIDE). _J. Geophys. Res. Atmos._**2003**, _108_, D19. [CrossRef]
* Schuster et al. (2012) Schuster, G.L.; Vaughan, M.; MacDonnell, D.; Su, W.; Winker, D.; Dubovik, O.; Lapyonok, T.; Trepte, C. Comparison of CALIPSO aerosol optical depth retrievals to AERONET measurements, and a climatology for the lidar ratio of dust. _Atmos. Chem. Phys._**2012**, _12_, 7431. [CrossRef]
* Verma et al. (2015) Verma, S.; Prakash, D.; Ricaud, P.; Payra, S.; Attie, J.-L.; Soni, M. A new classification of aerosol sources and types as measured over Jaipur, India. _Aerosol Air Qual. Res._**2015**, _15_, 985-993. [CrossRef]
* Khatri et al. (2014) Khatri, P.; Takamura, T.; Shimizu, A.; Sugimoto, N. Observation of low single scattering albedo of aerosols in the downwind of the East Asian desert and urban areas during the inflow of dust aerosols. _J. Geophys. Res. Atmos._**2014**, _119_, 787-802. [CrossRef]
* Ou et al. (2017) Ou, Y.; Zhao, W.; Wang, J.; Zhao, W.; Zhang, B. Characteristics of Aerosol Types in Beijing and the Associations with Air Pollution from 2004 to 2015. _Remote Sens._**2017**, \\(9\\), 898. [CrossRef]
* Mikami et al. (2006) Mikami, M.; Shi, G.; Uno, I.; Yabuki, S.; Iwasaka, Y.; Yasui, M.; Aoki, T.; Tanaka, T.; Kurosaki, Y.; Masuda, K. Aeolian dust experiment on climate impact: An overview of Japan-China joint project ADEC. _Glob. Planet. Chang._**2006**, _52_, 142-172. [CrossRef]
* Yu et al. (2006) Yu, X.; Cheng, T.; Chen, J.; Liu, Y. A comparison of dust properties between China continent and Korea, Japan in East Asia. _Atmos. Environ._**2006**, _40_, 5787-5797. [CrossRef]
* Hess et al. (1998) Hess, M.; Koepke, P.; Schult, I. Optical properties of aerosols and clouds: The software package OPAC. _Bull. Am. Meteorol. Soc._**1998**, _79_, t831-t844. [CrossRef]
* Haywood and Ramaswamy (1998) Haywood, J.M.; Ramaswamy, V. Global sensitivity studies of the direct radiative forcing due to anthropogenic sulfate and black carbon aerosols. _J. Geophys. Res._**1998**, _103_, 6043-6058. [CrossRef]
* Wang and Martin (1998) Wang, J.; Martin, S.T. Satellite characterization of urban aerosols: Importance of including hygroscopicity and mixing state in the retrieval algorithms. _J. Geophys. Res._**1998**, _112_, D17203. [CrossRef]
* Fierce et al. (2016) Fierce, L.; Bond, T.C.; Bauer, S.E.; Mena, F.; Riemer, N. Black carbon absorption at the global scale is affected by particle-scale diversity in composition. _Nat. Commun._**2016**, \\(7\\), 12361. [CrossRef] [PubMed]
* Chung et al. (2012) Chung, C.E.; Lee, K.; Muller, D. Effect of internal mixture on black carbon radiative forcing. _Tellus B_**2012**, _64_, 10925. [CrossRef]
* Ansmann et al. (2011) Ansmann, A.; Petzold, A.; Kandler, K.; Tegen, I.; Wendisch, M.; Mueller, D.; Weinzierl, B.; Mueller, T.; Heintzenberg, J. Saharan Mineral Dust Experiments SAMUM-1 and SAMUM-2: What have we learned? _Tellus B_**2011**, _63_, 403-429. [CrossRef]
* Mamouri and Ansmann (2014) Mamouri, R.-E.; Ansmann, A. Fine and coarse dust separation with polarization lidar. _Atmos. Meas. Technol._**2014**, \\(7\\), 3717-3735. [CrossRef]
* Shimizu et al. (2004) Shimizu, A.; Sugimoto, N.; Matsui, I.; Arao, K.; Uno, I.; Murayama, T.; Kagawa, N.; Aoki, K.; Uchiyama, A.; Yamazaki, A. Continuous observations of Asian dust and other aerosols by polarization lidars in China and Japan during ACE-Asia. _J. Geophys. Res. Atmos._**2004**, _109_, D19517. [CrossRef]
* Haarig et al. (2018) Haarig, M.; Ansmann, A.; Baars, H.; Jimenez, C.; Veselovskii, I.; Engelmann, R.; Althausen, D. Depolarization and lidar ratios at 355, 532, and 1064 nm and microphysical properties of aged tropospheric and stratospheric Canadian wildfire smoke. _Atmos. Chem. Phys._**2018**, _18_, 11847-11861. [CrossRef]
* Sakai et al. (2010) Sakai, T.; Nagai, T.; Zaizen, Y.; Mano, Y. Backscattering linear depolarization ratio measurements of mineral, sea-salt, and ammonium sulfate particles simulated in a laboratory chamber. _Appl. Opt._**2010**, _49_, 4441-4449. [CrossRef] [PubMed]
* Shin et al. (2018) Shin, S.-K.; Tesche, M.; Kim, K.; Kezoudi, M.; Tatarov, B.; Muller, D.; Noh, Y. On the spectral depolarisation and lidar ratio of mineral dust provided in the AERONET version 3 inversion product. _Atmos. Chem. Phys._**2018**, _18_, 12735-12746. [CrossRef]
# Deep Learning on SAR Imagery: Transfer Learning Versus Randomly Initialized Weights
## 1 Introduction
In recent years, various architectures of deep learning have been developed for Synthetic Aperture Radar (SAR) imagery in application domains spanning environmental monitoring and change detection. One such case is sea ice mapping. SAR is the primary data source for mapping sea ice, as multiple C-band SAR sensors including Sentinel-1 and RADARSAT-2 have polar coverage, and can acquire images regardless of cloud cover or light conditions. Sea ice undergoes constant and rapid changes due to the combined influence of wind, temperature, and ocean currents. Hence, frequent mapping of sea ice is essential to ensure maritime safety. Currently, sea ice mapping is primarily performed by national ice centers of countries having interests in the Arctic and Antarctic regions, as automated mapping of sea ice using SAR imagery still remains a challenge, especially during the melt season, when surface melt masks the underlying ice surface, resulting in mistaking ice for open water.
Deploying deep learning on SAR imagery is challenging for several other reasons as well, including (a) the systematic TOPSAR noise (banding and scalloping) in the Extra Wide (EW) mode (which is the sole mode of acquisition over open oceans and polar regions), (b) ambiguous volume scattering patterns of sea ice types with different thickness, (c) similar backscatter patterns of smooth dark young ice and calm water, making the discrimination of water and ice challenging. Researchers have been experimenting to establish optimal configuration and training strategies for deep learning models that best tackle these challenges. One important area, less explored systematically, is fine-tuning image segmentation models on SAR imagery using models pre-trained on natural RGB imagery [1, 2]. Given the inherent differences of SAR and optical imagery, as well as differences of remote sensing and generic/natural (fashion, and animal) targets, it is unclear what impact starting with pre-trained weights would have on the results of segmentation.
In this paper, we analyze the performance of deep learning-based image segmentation models on SAR imagery using two different training strategies: one using transfer learning, fine-tuning pre-trained ImageNet weights on SAR imagery, and the other strategy using randomly initialized weights. We use a publicly available benchmark dataset for this purpose, and test the model performance on held-out test scenes, one during the melt season and one during the freeze up season. We analyze the results using both performance metrics, as well as visual inspection of classification error for each set up.
## 2 Data and Model
We use the Extreme Earth v2 dataset [3], which includes high-resolution ice charts over the East Coast of Greenland aligned with twelve Sentinel-1 images acquired in EW mode, each with a spatial footprint of 400 x 400 km. The twelve images were acquired roughly one month apart throughout 2018. The polygon labels are interpretations by expert sea ice analysts using SAR as the primary source, in conjunction with other data sources and domain knowledge of the region.
We use the labels to train semantic segmentation models for the separation of ice and water. We hold out image and label pairs acquired in January and July (two out of twelve) for testing the performance of the model, with January representing the freeze up season conditions, and July for the melt season, which as mentioned above, is more challenging for deep learning models.
For validation during training using non-overlapping images, we clip half of each of the February, June, August, and December images and assign them to the validation (i.e., development) set. The training samples are generated by extracting 100 randomly placed patches of 80 x 80 km, equivalent to 1000 x 1000 pixels at an 80 x 80 m pixel size. Since our models are fully convolutional, we generate output for test and validation images in a single pass over the entire scene. Our model architecture uses the first three blocks of ResNet18 [4] as the encoder and a decoder based on the Atrous Spatial Pyramid Pooling (ASPP) module [5], resulting in a total of 4 M trainable parameters.
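A hedged PyTorch sketch of this architecture is shown below. It reflects our reading of the description rather than the authors' implementation (available in the linked repository); details such as the atrous rates, the upsampling step, and the exact torchvision version are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18
from torchvision.models.segmentation.deeplabv3 import ASPP

class IceWaterSegmenter(nn.Module):
    def __init__(self, num_classes: int = 2, encoder_weights=None):
        super().__init__()
        backbone = resnet18(weights=encoder_weights)
        # Encoder: ResNet18 stem plus its first three residual stages (layer1-layer3).
        self.encoder = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        # Decoder: ASPP on the 256-channel feature map, followed by a 1x1 classifier.
        self.decoder = nn.Sequential(
            ASPP(in_channels=256, atrous_rates=[6, 12, 18]),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, x):
        # x: (B, 3, H, W) - the three input channels described in the next paragraph.
        logits = self.decoder(self.encoder(x))
        # Fully convolutional: upsample logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)
```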
The model takes as input the horizontal emit, horizontal receive (HH) and horizontal emit, vertical receive (HV) polarization values of the SAR image, together with the incidence angle from the Sentinel-1 EW mode; rasterized ice and water polygons from the Extreme Earth dataset serve as labels. The raster labels are binary, with one class representing water and the other representing ice. We train the models with a batch size of 32 and the Adam optimizer [6] with a learning rate starting at 1e-5. We decrease the learning rate by a factor of 10 when the validation loss does not decrease within five epochs, down to a minimum of 1e-8. The models stop training when the validation loss does not decrease within 20 epochs, and we save the model weights with the smallest validation loss for testing. These hyperparameters are kept the same for all models trained.
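The schedule above maps onto standard PyTorch components as sketched below. This is a minimal illustration under our assumptions; the per-epoch training and validation routine (`run_one_epoch`) is a hypothetical helper left undefined here.

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = IceWaterSegmenter()  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.1, patience=5, min_lr=1e-8)

best_val_loss, epochs_without_improvement = float("inf"), 0
for epoch in range(1000):
    val_loss = run_one_epoch(model, optimizer)  # hypothetical helper: one train + validation pass
    scheduler.step(val_loss)                    # cut the learning rate by 10x on plateaus
    if val_loss < best_val_loss:
        best_val_loss, epochs_without_improvement = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")   # keep the best checkpoint
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= 20:               # early stopping criterion
            break
```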
## 3 Experiments
We perform two sets of experiments, with three runs for each to average the performance metrics over the stochastic nature of gradient descent optimization. First, we initialize the entire model with random weights using PyTorch's [7] default parameters, hereinafter \"randomly initialized models\". Second, we initialize the decoder with random weights, but the encoder is initialized with ImageNet [8] weights. The weights of the encoder and decoder are updated during training. We call these \"pre-trained\" models.
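In code, the difference between the two setups reduces to how the encoder backbone is initialized. The snippet below sketches this with torchvision; the `ResNet18_Weights` enum assumes a recent torchvision version, and the decoder is randomly initialized in both cases.

```python
from torchvision.models import ResNet18_Weights

# Setup 1: "randomly initialized" - every layer starts from PyTorch's default init.
random_model = IceWaterSegmenter(encoder_weights=None)

# Setup 2: "pre-trained" - the encoder starts from ImageNet weights and is fine-tuned.
pretrained_model = IceWaterSegmenter(encoder_weights=ResNet18_Weights.IMAGENET1K_V1)
```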
## 4 Results
Table 1 shows the resulting metrics for the experiments, averaged across three runs for each setup. Our results show that, on average, pre-trained models have better performance metrics than randomly initialized models on the melt season test scene (i.e., July). Specifically, weighted F1 increases by 0.06 to 0.98 and weighted IOU increases by 0.11 to 0.95, which is a considerable improvement.
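The reported quantities are standard per-pixel classification metrics. A hedged sketch of how they can be computed from flattened prediction and label rasters with scikit-learn is shown below; the toy arrays are illustrative only and are not taken from the experiments.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, jaccard_score

# Toy flattened per-pixel arrays; 0 = water, 1 = sea ice.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1])

weighted_f1 = f1_score(y_true, y_pred, average="weighted")
micro_iou = jaccard_score(y_true, y_pred, average="micro")
macro_iou = jaccard_score(y_true, y_pred, average="macro")
weighted_iou = jaccard_score(y_true, y_pred, average="weighted")
# Row-normalized confusion matrix: each row gives the fractions of actual pixels per predicted class.
cm = confusion_matrix(y_true, y_pred, normalize="true")
```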
As for the July scene, there are noteworthy observations (Fig. 1): pre-trained models are more robust and classify ice under banding noise, and better classify water under windy conditions. Randomly initialized models are thrown off by ruffled water as well as banding noise over areas of low backscatter such as dark (younger) first year ice.
Figure 1: (a) SAR image acquired in July from the Extreme Earth V2 dataset (b) randomly initialized model misclassification error in purple for the same image, (c) pre-trained model classification error map. Fine-tuning a pre-trained model has led to much better results during the melt season.
Looking closer at the confusion matrix for the July test scene, we observe that there are major improvements in identifying sea ice when fine-tuning a pre-trained model. When using randomly initialized weights, 15% of actual sea ice pixels are mistakenly classified as water, which can lead to potentially risky outcomes for generating navigational ice charts.
While using pre-trained models has a clear advantage on the melt-season test scene, the results on the January test scene are not as conclusive and, in fact, show a potentially opposite effect on performance. Metrics are similar on the January (freeze up) test scene for both models, with F1 around 0.97 and IOU approximately 0.95, and a 0.01 decrease in both metrics when fine-tuning pre-trained models compared to randomly initialized weights. Figure 3 shows misclassification errors for the January test scene, with both models having roughly similar results: the model with randomly initialized weights is slightly more successful in classifying sea ice along the ice edge (Fig. 3 and Fig. 4); however, the model with pre-trained weights shows slightly better results for classifying water.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline
 & & Average F1 & Micro avg IoU & Macro avg IoU & Weighted IoU \\ \hline
January test scene & Randomly initialized & 0.98 & 0.96 & 0.96 & 0.96 \\ \hline
January test scene & Pre-trained & 0.97 & 0.95 & 0.95 & 0.95 \\ \hline
July test scene & Randomly initialized & 0.92 & 0.85 & 0.85 & 0.85 \\ \hline
\end{tabular}
\end{table} TABLE I: Performance metrics comparison for the two setups, averaged over three training runs to minimize stochasticity.
Fig. 3: (a) SAR image acquired in January from the Extreme Earth V2 dataset, (b) pre-trained model classification error map, (c) randomly initialized model misclassification error in purple for the same image. Both models perform roughly similarly overall on both classes; however, the pre-trained model performs slightly better in identifying pixels of the sea ice class.
Fig. 2: Confusion matrix for the July (melt conditions) test scene for the model with randomly initialized weights (left) and for the model fine-tuned from pre-trained weights (right), which shows much better performance on the sea ice class. In the legend, class 0 represents water, and class 1 is sea ice.
It is worth noting that pre-trained models also tended to train faster, unsurprisingly: the pre-trained models stopped training after 28, 29, and 39 epochs in the three experiments, compared with 32, 36, and 45 epochs for the models with randomly initialized weights.
## 5 Conclusion and Future Work
In this study, we compared the performance of fine-tuning deep-learning-based segmentation models pre-trained on natural images against models trained from randomly initialized weights for the purpose of sea ice mapping. Our results highlight the potential of fine-tuning models originally pre-trained on generic images for use with SAR imagery in mapping sea ice, leading to better performance and usually fewer epochs to converge. The results show a clear improvement for samples collected during the melt season, when sea ice mapping is commonly more challenging due to similarities in the signal of open water, melt ponds, and, generally, surface melt. However, the results for samples collected during the freeze-up season are not conclusive, with only a slight advantage for the models initialized with random weights for classifying sea ice, and a slight advantage for pre-trained models for classifying water.
Future research on larger datasets is needed to further explore the effects of pre-trained weights on model output. Additionally, tasks such as sea ice type classification, concentration estimation, and floe size estimation require similar analyses. Research into using different model sizes (layers) and the specific pre-trained weights (coming from different generic datasets) can also help pave the way for more efficient model design and implementation with fewer training samples for sea ice mapping, and remote sensing with SAR in general.
## 6 Acknowledgement
This material is based upon work supported by the National Science Foundation under Grant No. 2026962. We thank the Extreme Earth project and MET Norway for making the ExtremeEarth dataset available to the sea ice community. The code used for this research is available at [https://github.com/geohai/sea-ice-segment](https://github.com/geohai/sea-ice-segment).
## References
* [1] M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio, "Transfusion: Understanding transfer learning for medical imaging," in Advances in Neural Information Processing Systems, 2019.
* [2] S. Khaleghian, H. Ullah, T. Kræmer, N. Hughes, T. Eltoft, and A. Marinoni, "Sea ice classification of SAR imagery based on convolution neural networks," Remote Sensing, vol. 13, no. 9, 2021.
* [3] A. Everett et al., "ExtremeEarth: the polar use case," in Phi-Week, 2020.
* [4] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
* [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "Rethinking atrous convolution for semantic image segmentation," IEEE Trans. Pattern Anal. Mach. Intell.
* [6] 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. International Conference on Learning Representations (ICLR).
* [7] B. Rozemberczki et al., "PyTorch Geometric Temporal: Spatiotemporal signal processing with neural machine learning models," in Proceedings of the International Conference on Information and Knowledge Management, 2021.
* [8] O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision, 2015, doi: 10.1007/s11263-015-0816-y.
Figure 4: Confusion matrix for the January (freeze up conditions) test scene, for the model with randomly initialized weights (left) and the model fine-tuned with pre-trained weights (right).

Deploying deep learning on Synthetic Aperture Radar (SAR) data is becoming more common for mapping purposes. One such case is sea ice, which is highly dynamic and rapidly changes as a result of the combined effect of wind, temperature, and ocean currents. Therefore, frequent mapping of sea ice is necessary to ensure safe marine navigation. However, there is a general shortage of expert-labeled data to train deep learning algorithms. Fine-tuning a pre-trained model on SAR imagery is a potential solution. In this paper, we compare the performance of deep learning models trained from scratch using randomly initialized weights against pre-trained models that we fine-tune for this purpose. Our results show that pre-trained models lead to better results, especially on test samples from the melt season.
Morteza Karimzadeh and Rafael Pires de Lima, Department of Geography, University of Colorado Boulder
SAR, Transfer Learning, Sea Ice, Deep Learning, Segmentation
# Simultaneous Location of Rail Vehicles and Mapping of Environment with Multiple LiDARs
Yusheng Wang, Weiwei Song, Yidong Lou, Fei Huang, Zhiyong Tu and Shimin Zhang
## I Introduction
Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployment of LiDAR-inertial systems in handheld devices [1], legged robots [2], unmanned ground and aerial vehicles (UAV and UGV) [3, 4], autonomous cars [5], and boats [6]. However, LiDAR-inertial based simultaneous localization and mapping (SLAM) is still an open problem for rail vehicle applications.
With the framework of estimating train state and mapping the surrounding in the meantime, SLAM is a promising approach towards rail vehicle localization and railroad environment monitoring problems. However, a number of difficulties affect the application of SLAM on rail vehicles:
_Large velocities_: Existing SLAM frameworks are usually evaluated with slow motion platforms. For example, the maximum speed of the UAV [4] and the Clearpath Jackal [7] is below 10 km/h. On the contrary, the average speed of the slowest rail vehicle is already beyond 80 km/h. The great speed not only causes motion blur but also introduces extra challenges for the frame-to-frame registration of small-FoV LiDARs [8, 9].
_No-revisited districts_: The accumulated drift of most SLAM algorithms can be corrected with detected loops at the backend. However, there are no revisited districts for the railroad.

_Long-duration repetitive scenes_: The safety regulations require a clearance gauge for the railroad, where only the rail tracks and track-side infrastructures are observable, making most of the railroad repetitive in structure and prone to degeneracy.
_Large mapping coverage required_: In practice, many railroad contingencies are the consequence of insufficient environment awareness and prevention. For example, some power failures are caused by short circuits from blown-away greenhouse plastic film. Therefore, a good knowledge of both the track bed and the long-range environment is required, leading to a multi-LiDAR setup on rail vehicles, which results in extra consideration of precise extrinsic calibration.
To tackle these challenges, in this letter, we present a multi-LiDAR based localization and mapping system for rail vehicles, with the system setup and some real-time mapping result shown in Fig. 1. This system receives measurements from multiple LiDARs, an IMU, rail vehicle odometer, and a global
Fig. 1: A): The hardware setup of seven LiDARs on the rail vehicle (5 Livox Horizon, \\(81.7^{\\circ}\\times 25.1^{\\circ}\\) FoV, 2 Livox Tele-15, \\(14.5^{\\circ}\\times 16.2^{\\circ}\\) FoV). B): The mapping result of the proposed system coded by LiDAR IDs. C): An example of the real-time mapping result from our proposed framework, the color is coded by PCV. Both the map of railroad and the environment (up to 500 m from the rail vehicle) can be well acquired with the cooperation of seven LiDARs.
navigation satellite system (GNSS) receiver. All the point clouds are first denoised and made distortion-free utilizing IMU/odometer information. According to the various installations and FoVs of the LiDARs, different LiDAR odometry methods are employed. In addition, we leverage the typical geometric patterns on railroads to further refine the pose estimation. The local maps are then registered to the global frame with absolute positioning data and compass heading. In summary, our contributions are:
1. We propose a framework that tightly fuses LiDAR, IMU, rail vehicle wheel odometer, and GNSS through sliding window based factor graph formulation.
2. We employ different scan matching methods according to the geometric placement of different LiDARs. Besides, we introduce an online extrinsic estimation and updating scheme for long-duration tasks.
3. We leverage the geometric structure of the environment for state estimation, where plane constraints from extracted rail tracks and a height information descriptor are employed to prevent degeneracy.
## II Related Work
### _Train Positioning and Railway Mapping Solutions_
The existing train positioning strategy is mainly dependent on trackside infrastructures like track circuits [10] and Balises [11]. Since the accuracy of these systems is determined by the operation interval, they are neither accurate nor efficient for intelligent rail transportation systems. Considering their large capital investment and low efficiency, many researchers seek to complement the system limitations with onboard sensors. The satellite-based methods utilize GNSS for train positioning, and the accuracy can be further improved with integrated track odometry [12], wheel odometers [13], and IMUs [14]. However, these methods only provide train state information without awareness of the environment.
The current railroad environment monitoring is still a human-intensive task, and a professional technician needs to accompany the train driver every time to manually check for infrastructure defects. Although visual approaches [15, 16] have been largely investigated, they are inaccurate for range measuring and sensitive to illumination conditions. In many of the previous works [17, 18, 19], laser scanners have been included in the mobile mapping system (MMS) for railroad monitoring tasks. As a direct geo-referencing approach, the MMS requires high-precision GNSS/IMU determination and survey-grade laser scanners. Although these solutions can achieve highly accurate 3D maps, they are costly for large deployment and less efficient for real-time perception.
The potential of SLAM for rail vehicle localization and mapping has not been well investigated. One of the early works, RailSLAM, jointly estimated the train state and validated the correctness of an initial track map based on general Bayesian theory [20]. The performance of visual-inertial odometry on rail vehicles has been extensively evaluated in [21], indicating that visual-inertial odometry is not reliable for railroad applications. LiDAR-inertial SLAM, however, is still an open problem for railway applications.
### _Multi-LiDAR Based SLAM_
According to the data association scheme, multiple-LiDAR integration can be classified into centralized and decentralized approaches. A centralized multi-LiDAR method is presented in [5]; this approach runs onboard with several desirable features, including tightly-coupled multi-LiDAR motion estimation, online extrinsic calibration with convergence identification, and uncertainty-aware mapping. However, as a LiDAR-only SLAM, this approach inevitably suffers from long-duration navigation drifts. A decentralized framework based on the extended Kalman filter (EKF) is proposed in [22]. This method distributes the intensive computation among dedicated LiDARs and treats each LiDAR input as an independent module for pose estimation. Although acceptable accuracy can be reached, this system is only simulated on a high-performance computer; the communication delay and message loss in real cases are not taken into consideration.

Six LiDARs are integrated into state estimation and map construction in our previous work [23], achieving accurate results on complex urban roads. This paper seeks to achieve real-time, low-drift, and robust odometry and mapping for large-scale railroad environments with a multi-LiDAR-integrated LiDAR-inertial SLAM.
## III System Overview
As shown in Fig. 2, the proposed system receives measurements from seven LiDARs, an IMU, wheel odometers, and a GNSS receiver, and outputs 10 Hz odometry as well as 1 Hz mapping. In addition, the multiple-LiDAR placement is illustrated in Fig. 3. Before diving into the details of the methodology, we first define the notations used throughout this article.

We denote \((\cdot)^{W}\), \((\cdot)^{B}\), \((\cdot)^{L}\), and \((\cdot)^{O}\) as the world, body, LiDAR, and odometer frames. In addition, we define \((\cdot)^{B}_{W}\) as the transform from the world frame to the IMU frame. We use both the rotation matrix \(\mathbf{R}\) and the quaternion \(\mathbf{q}\) to represent rotation. Besides, we denote \(\otimes\) as the multiplication between two quaternions, and \(\hat{(\cdot)}\) as the estimate of a certain quantity.
### _State Definition_
We split the full state vector \\(\\boldsymbol{\\chi}\\) into three groups, with:
\\[\\boldsymbol{\\chi} =\\big{[}\\mathbf{X}_{s},\\ \\ \\boldsymbol{\\chi}_{w},\\ \\ \\ \\boldsymbol{\\chi}_{e}\\big{]} \\tag{1}\\] \\[=\\big{[}\\mathbf{x}_{1}, ,\\mathbf{x}_{l},\\mathbf{x}_{l+1}, , \\mathbf{x}_{N+1},\\mathbf{x}_{2}^{p}, ,\\mathbf{x}_{l_{p}}^{p}\\big{]}\\] \\[\\mathbf{x}_{i} =\\big{[}\\mathbf{p}_{i},\\mathbf{v}_{i},\\mathbf{q}_{i},\\mathbf{b}_{ a},\\mathbf{b}_{g},\\mathbf{c}_{i}\\big{]},i\\in[1,N+1]\\] \\[\\mathbf{x}_{i_{i}}^{c} =\\big{[}\\mathbf{p}_{l_{i}^{c}}^{p},\\mathbf{q}_{l_{i}}^{p}\\big{]},i\\in[2,6]\\]where \\(\\mathbf{\\chi}_{s}=[\\mathbf{x}_{1}, ,\\mathbf{x}_{t}]\\) are considered as the solid states, with accurate extrinsic. \\(\\mathbf{x}_{i}\\) is the state of the primary LiDAR (LiDAR _1_), with \\(\\mathbf{p}\\in\\mathbb{R}^{3}\\), \\(\\mathbf{v}\\in\\mathbb{R}^{3}\\), and \\(\\mathbf{q}\\in\\text{SO}(3)\\) denoting the position, linear velocity, and orientation vector. \\(\\mathbf{b}_{a}\\) and \\(\\mathbf{b}_{g}\\) are the usual IMU gyroscope and accelerometer biases. And \\(\\mathbf{c}\\) is the scale factor of the odometer. As the rail vehicles work for long hours, the unavoidable abrasion and deformation will shift the original parameters. We hereby introduce the states with variations, \\(\\mathbf{\\chi}_{v}=[\\mathbf{x}_{i+1}, ,\\mathbf{x}_{N+1}]\\) and the dedicated LiDAR extrinsic \\(\\mathbf{\\chi}_{e}=[\\mathbf{x}_{i_{2}}^{p}, ,\\mathbf{x}_{i_{p}}^{p}]\\) for online refinement, and \\(\\mathbf{x}_{i_{t}}^{p}\\) represents the extrinsics from the auxiliary LiDAR \\(i\\) to the primary LiDAR.
### _Maximum-a-Posterior Problem_
We seek to estimate the trajectory and map the surroundings of a rail vehicle with multi-sensor measurements, in which the state estimation procedure can be formulated as a maximum-a-posterior (MAP) problem. Given the measurements \(\mathcal{Z}_{k}\) and the history of states \(\boldsymbol{\chi}_{k}\), the MAP problem can be formulated as:

\[\boldsymbol{\chi}_{k}^{*}=\operatorname*{argmax}_{\boldsymbol{\chi}_{k}}p\left(\boldsymbol{\chi}_{k}\,|\,\mathcal{Z}_{k}\right)\propto p(\boldsymbol{\chi}_{0})\,p\left(\mathcal{Z}_{k}\,|\,\boldsymbol{\chi}_{k}\right) \tag{2}\]
If the measurements are conditionally independent, then (2) can be solved through least squares minimization:
\\[\\mathbf{\\chi}^{*}=\\operatorname*{argmin}_{\\mathbf{\\chi}_{k}}\\sum\\sum_{i=1}^{k}\\|\\mathbf{r }_{i}\\|^{2} \\tag{3}\\]
where \\(\\mathbf{r}_{i}\\) is the residual of the error between the predicted and measured value.
### _Optimization_
Since the extrinsics are configured with a total station and a 3D laser scanner before each experiment, we assume the LiDAR extrinsics are accurate enough at the beginning, and the optimal solid states \\(\\mathbf{\\chi}_{s}\\) are obtained through minimizing:
\\[\\mathcal{F}_{\\mathcal{M}}(\\mathbf{\\chi})=\\mathcal{F}_{\\mathcal{M}}( \\mathbf{\\chi}_{s})\\] \\[=\\min_{\\mathbf{\\chi}_{s}}\\{\\big{\\|}\\mathbf{r}_{p}\\big{\\|}^{2}+\\sum_{l=1}^ {N_{K}}\\big{\\|}\\mathbf{r}_{\\gamma_{l}}\\big{\\|}^{2}+\\sum_{i=1}^{N_{\\mathcal{E}_{k} }}\\mathbf{r}_{\\ell_{l}}\\] \\[\\quad\\quad+\\sum_{l=1}^{N_{\\mathcal{E}_{k}}}\\mathbf{r}_{\\mathcal{R}_{ l}}+\\sum_{l=1}^{N_{\\mathcal{P}_{k}}}\\big{\\|}\\mathbf{r}_{\\mathcal{P}_{l}}\\big{\\|}^{2}+ \\sum_{l=1}^{N_{\\mathcal{E}_{k}}}\\big{\\|}\\mathbf{r}_{\\mathcal{G}_{l}}\\big{\\|}^{2}\\} \\tag{4}\\]
where \\(\\mathbf{r}_{p}\\) is the prior factor marginalized by Schur-complement [24], \\(\\mathbf{r}_{\\gamma_{l}}\\) is the residual of IMU/odometer preintegration result. \\(\\mathbf{r}_{\\ell_{l}}\\), \\(\\mathbf{r}_{\\mathcal{R}_{l}}\\) and \\(\\mathbf{r}_{\\mathcal{P}_{l}}\\) defines the residual of feature-based and GICP-based scan registration, as well as the residual of ground constraints. The residual of global positioning system is \\(\\mathbf{r}_{\\mathcal{G}_{l}}\\).
Once the rail vehicle has run for a long time, the online extrinsic calibration is triggered, and we exploit map-registration-based measurements to correct the extrinsics:
\\[\\mathcal{F}_{\\mathcal{M}}(\\mathbf{\\chi})=\\mathcal{F}_{\\mathcal{M}}(\\mathbf{\\chi}_{v} )+\\mathcal{F}_{\\mathcal{M}}(\\mathbf{\\chi}_{e}) \\tag{5}\\]
## IV Methodology
### _Calibration_
Considering the highly restricted FoV and non-overlapping of our multi-LiDAR system setup, we utilize a total station and a 3D laser scanner to achieve the preliminary LiDAR extrinsics through EPnP [25].
Unlike many data gathering vehicles, which only work for one or two hours each time, the minimum operation time for a maintenance rail vehicle is five hours from our experience, with the longest one continuously running for three days, covering thousands of kilometers. Since metal abrasion and seasonal deformation are unavoidable for long-duration tasks, we refine the extrinsics under two criteria.
_Stop at certain stations_: The rail vehicles need to stop at certain stations to wait for the dispatching orders from the automatic control system (ATC). Since railway stations contain many column-like pillars and man-made structures, we
Fig. 3: Visual illustration of the placement of seven LiDARs (left), including 2 front-view Livox Horizon (_1_, _2_), 2 side-view Livox Horizon (_3_, _4_), 2 side-view Livox Tele-15 (_5_, _6_), and 1 up-view Livox Horizon (_7_). The red-green-blue color indicates relative \\(x\\)-\\(y\\)-\\(z\\) axis. A single scan coded by LiDAR IDs (right).
Fig. 2: Block diagram illustrating the full pipeline. The seven input LiDARs are synchronized, denoised, made distortion-free, and downsampled in the preprocessing. The two front-view LiDARs (_1_ and _2_) perform feature extraction and scan registration for LiDAR odometry; besides, the rail track plane constraints as well as the height descriptor constraints are extracted. The side-view LiDARs (_3_ - _6_), with small FoV, only employ generalized iterative closest point (GICP) for registration. The online calibration is triggered when the vehicle travels for a relatively long time. All the constraints are jointly optimized with the constructed factor graph. The up-view LiDAR (_7_) only employs the simultaneous odometry output for mapping and calibration.
employ the edge-based camera-LiDAR calibration algorithm [26] to refine the parameters between multiple LiDARs and the panoramic camera.
_Long duration_: Once the rail vehicle runs for a long time without stopping, the online extrinsic calibration is triggered, and we leverage (5) for refinement.
### _IMU/Odometer Preintegration Factor_
The raw accelerometer, gyroscope, and train wheel odometer measurements, \(\hat{\mathbf{a}}\), \(\hat{\boldsymbol{\omega}}\), and \(\hat{\boldsymbol{\psi}}^{O}\), are given by:

\[\hat{\mathbf{a}}_{k}=\mathbf{a}_{k}+\mathbf{R}_{W}^{B_{k}}\mathbf{g}^{W}+\mathbf{b}_{a_{k}}+\boldsymbol{\eta}_{a}\]
\[\hat{\boldsymbol{\omega}}_{k}=\boldsymbol{\omega}_{k}+\mathbf{b}_{g_{k}}+\boldsymbol{\eta}_{\omega}\]
\[\mathbf{c}^{O_{k}}\hat{\boldsymbol{\psi}}^{O}=\mathbf{v}^{O}+\boldsymbol{\eta}_{o} \tag{6}\]
where \\(\\mathbf{g}^{W}=[0,0,g]^{T}\\) is the gravity vector in the world frame. \\(\\mathbf{\\eta}_{a}\\), \\(\\mathbf{\\eta}_{\\omega}\\), and \\(\\mathbf{\\eta}_{o}\\) are the zero-mean white Gaussian noise.
Given two consecutive frames \\(k\\) and \\(k+1\\), the position, velocity, and orientation states can be propagated by the IMU/odometer measurements with:
\[\mathbf{p}_{B_{k+1}}^{W}=\mathbf{p}_{B_{k}}^{W}+\mathbf{v}_{B_{k}}^{W}\Delta t_{k}+\iint_{t\in[t_{k},t_{k+1}]}\big(\mathbf{R}_{t}^{W}(\hat{\mathbf{a}}_{t}-\mathbf{b}_{a_{t}}-\boldsymbol{\eta}_{a})-\mathbf{g}^{W}\big)\,dt^{2}\]
\[\mathbf{v}_{B_{k+1}}^{W}=\mathbf{v}_{B_{k}}^{W}+\int_{t_{k}}^{t_{k+1}}\big(\mathbf{R}_{t}^{W}(\hat{\mathbf{a}}_{t}-\mathbf{b}_{a_{t}}-\boldsymbol{\eta}_{a})-\mathbf{g}^{W}\big)\,dt\]
\[\mathbf{q}_{B_{k+1}}^{W}=\mathbf{q}_{B_{k}}^{W}\otimes\int_{t_{k}}^{t_{k+1}}\frac{1}{2}\boldsymbol{\Omega}(\hat{\boldsymbol{\omega}}_{t}-\mathbf{b}_{g_{t}}-\boldsymbol{\eta}_{\omega})\,\mathbf{q}_{t}^{B_{k}}\,dt \tag{7}\]
where
\\[\\mathbf{\\Omega}(\\mathbf{\\omega})=\\begin{bmatrix}-[\\mathbf{\\omega}]_{X}&\\mathbf{ \\omega}\\\\ -\\mathbf{\\omega}^{T}&0\\end{bmatrix},[\\mathbf{\\omega}]_{X}=\\begin{bmatrix}0&- \\mathbf{\\omega}_{x}&\\mathbf{\\omega}_{y}\\\\ \\mathbf{\\omega}_{x}&0&-\\mathbf{\\omega}_{x}\\\\ -\\mathbf{\\omega}_{y}&\\mathbf{\\omega}_{x}&0\\end{bmatrix} \\tag{8}\\]
Based thereupon and the preintegration form in [24], we can formulate the IMU and odometer increment between \\(k\\) and \\(k+1\\) as:
\\[\\mathbf{\\alpha}_{B_{k+1}}^{B_{k}}=\\iint\\limits_{t=k}^{k+1}\\mathbf{R}_{B_{k}}^{B_ {k}}(\\mathbf{\\hat{a}}_{t}-\\mathbf{b}_{a_{t}}-\\mathbf{\\eta}_{a})dt^{2}\\]
\\[\\mathbf{\\beta}_{B_{k+1}}^{B_{k}}=\\int\\limits_{t=k}^{k+1}\\mathbf{R}_{B_{k}}^{B_{k}} (\\mathbf{\\hat{a}}_{t}-\\mathbf{b}_{a_{t}}-\\mathbf{\\eta}_{a})dt\\]
\\[\\mathbf{\\gamma}_{B_{k+1}}^{B_{k}}=\\int\\limits_{t=k}^{k+1}\\frac{1}{2}\\mathbf{\\Omega }(\\mathbf{\\hat{\\omega}}_{t}-\\mathbf{b}_{a_{t}}-\\mathbf{\\eta}_{a})\\mathbf{\\gamma}_{ B_{t}}^{B_{k}}dt\\]
\\[\\mathbf{\\alpha}_{\\alpha_{k+1}}^{Ok}=\\int\\limits_{t=k}^{k+1}\\mathbf{R}_{O_{t}}^{B_ {k}}\\big{(}\\mathbf{c}^{Ok}\\mathbf{\\psi}^{0}-\\mathbf{\\eta}_{s^{0}}\\big{)}dt \\tag{9}\\]
Using the calibration parameter \\(\\mathbf{p}_{O_{k+1}}^{B_{k+1}}\\) between the odometer and the IMU measured by a total station, we can also transform \\(\\mathbf{\\alpha}_{O_{k+1}}^{Ok}\\) into IMU frame \\(\\mathbf{\\phi}_{B_{k+1}}^{B_{k}}\\) with:
\\[\\mathbf{\\phi}_{B_{k+1}}^{B_{k}}=\\int\\limits_{t=k}^{k+1}\\mathbf{R}_{B_{k}}^{B_{k}} \\mathbf{R}_{O_{t}}^{B_{t}}\\big{(}\\mathbf{c}^{Ok}\\mathbf{\\psi}^{0}-\\mathbf{\\eta}_{s^ {0}}\\big{)}dt \\tag{10}\\]
Finally, the residual of the preintegrated IMU/odometer data, \(\big[\delta\boldsymbol{\alpha}_{B_{k+1}}^{B_{k}},\,\delta\boldsymbol{\beta}_{B_{k+1}}^{B_{k}},\,\delta\boldsymbol{\theta}_{B_{k+1}}^{B_{k}},\,\delta\mathbf{b}_{a},\,\delta\mathbf{b}_{g},\,\delta\boldsymbol{\phi}_{B_{k+1}}^{B_{k}},\,\delta\mathbf{c}^{O}\big]^{T}\), is given as:
\\[\\mathbf{r}_{\\gamma}(\\widehat{\\mathbf{Z}}_{B_{k+1}}^{B_{k}}\\mathbf{\\chi})=\\]
\\[\\begin{bmatrix}\\mathbf{R}_{W}^{B_{k}}\\Big{(}\\mathbf{p}_{B_{k+1}}^{W}-\\mathbf{p }_{B_{k}}^{W}+\\frac{1}{2}\\mathbf{g}^{W}\\Delta t_{k}^{2}-\\mathbf{v}_{B_{k}}^{W} \\Delta t_{k}\\Big{)}-\\widehat{\\mathbf{\\alpha}}_{B_{k+1}}^{B_{k}}\\\\ \\mathbf{R}_{W}^{B_{k}}(\\mathbf{v}_{B_{k+1}}^{W}+\\mathbf{g}^{W}\\Delta t_{k}-\\mathbf{v }_{B_{k}}^{W})-\\widehat{\\mathbf{\\beta}}_{B_{k+1}}^{B_{k}}\\\\ 2\\left[\\left(\\mathbf{q}_{B_{k}}^{W}\\right)^{-1}\\mathbf{\\otimes}\\left(\\mathbf{q}_ {B_{k+1}}^{W}\\right)\\mathbf{\\otimes}\\left(\\widehat{\\mathbf{\\gamma}}_{B_{k+1}}^{B_{k }}\\right)^{-1}\\right]_{2,4}\\\\ \\mathbf{b}_{a_{k+1}}-\\mathbf{b}_{a_{k}}\\\\ \\mathbf{b}_{g_{k+1}}-\\mathbf{b}_{g_{k}}\\\\ \\mathbf{R}_{W}^{B_{k}}\\left(\\mathbf{p}_{B_{k+1}}^{W}-\\mathbf{p}_{B_{k}}^{W}+ \\mathbf{R}_{B_{k+1}}^{W}\\mathbf{p}_{O_{k+1}}^{B_{k+1}}\\right)-\\widehat{\\mathbf{ \\phi}}_{B_{k+1}}^{B_{k}}\\\\ \\mathbf{c}^{Ok+1}-\\mathbf{c}^{Ok}\\end{bmatrix} \\tag{11}\\]
where \\([\\cdot]_{2\\cdot 4}\\) is used to take out the last four elements from a quaternion.
### _LiDAR Odometry Factors_
Since the range measuring error in the axial direction is large at short distances, we first remove the points that are too close to the LiDAR. Then we apply the IMU/odometer increment model to correct the LiDAR point motion distortion with linear interpolation.
As shown in Fig. 3, we select LiDAR \(1\) as the primary unit \(\mathcal{L}^{1}\), and the others are regarded as secondary LiDARs \(\mathcal{L}^{i},i\in[2,7]\). The scan period of the primary LiDAR is approximately 0.1 s; thus for any \(k\), the time span between \(t_{k}\) and \(t_{k+1}\) is also 0.1 s. However, the secondary LiDARs are unlikely to have identical timestamps with the primary unit due to unpredictable time delays and information loss. As they are sent to different threads for feature extraction, we then merge all the feature points whose start time falls in \([t_{k},t_{k+1})\) to obtain the fused scan \(\mathcal{F}_{k}\). The combined points inherit the timestamp of the primary LiDAR and stretch over the time span \([t_{k},t_{k}^{*})\).
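A simplified sketch of this timestamp-based merging is given below; the data layout is an illustrative assumption.

```python
# Merge secondary-LiDAR feature points whose start times fall in the primary
# LiDAR's interval [t_k, t_{k+1}); the fused scan inherits the primary timestamp t_k.
def merge_features(t_k, t_k1, feature_queues):
    """feature_queues: {lidar_id: list of (start_time, points)} for each secondary LiDAR."""
    fused = []
    for lidar_id, queue in feature_queues.items():
        for start_time, points in queue:
            if t_k <= start_time < t_k1:
                fused.append((lidar_id, start_time, points))
    return t_k, fused
```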
Fig. 4: Illustration of the synchronization and feature extraction within multiple LiDAR scans.

The IMU measurements in the interval \([t_{k-1},t^{\prime}_{k})\) are used for state propagation, where the samples in \([t_{k-1},t_{k})\) are utilized for preintegration and the other subsets are utilized for motion compensation.
For LiDAR \\(1\\) and \\(2\\), we follow the work of [27] to extract two sets of feature points from denoised and distortion-free point cloud. The edge features \\(\\epsilon\\) are selected with high curvature and the planar features \\(\\rho\\) are with low curvature. Then we take the fused feature points to perform scan registration with the edge and planar patch correspondence computed through point-to-line and point-to-plane distances, \\(\\mathbf{d}_{z\\epsilon z}\\) and \\(\\mathbf{d}_{\\rho 2\\rho}\\). Then the LiDAR odometry residual at k-th frame can be formulated by:
\\[\\mathbf{r}_{\\mathcal{L}_{k}}=\\sum_{l=1}^{N_{\\mathcal{L}}}\\omega_{l} \\mathbf{r}_{\\mathcal{L}_{l}}\\] \\[\\mathbf{r}_{\\mathcal{L}_{l}}=\\sum_{j=1}^{N_{\\mathcal{L}}}(\\mathbf{d}_{z \\epsilon z_{j}})^{2}+\\sum_{j=1}^{N_{\\rho}}(\\mathbf{d}_{z\\epsilon z_{j}})^{2} \\tag{12}\\]
where \\(N_{\\mathcal{L}}\\) denotes the number of LiDARs. \\(\\omega_{l}\\) is the weighting factor for multiple LiDAR measurements evaluation, and can be expressed as:
\\[\\omega_{l}=\\omega_{l}^{l}\\omega_{l}^{D}\\] \\[\\omega_{l}^{I}=1-(\\frac{\\left\\lVert\\mathbf{p}_{h_{k}}^{b_{k+1}} \\right\\rVert-\\left\\lVert\\mathbf{p}_{h_{k}}^{b_{k+1}}\\right\\rVert}{\\left\\lVert \\mathbf{p}_{h_{k}}^{b_{k+1}}\\right\\rVert})^{2}\\] \\[\\omega_{l}^{D}=\\frac{\\lambda_{l}}{\\lambda_{emp}} \\tag{13}\\]
where \\(\\omega_{l}^{I}\\) is the inertial weighting factor, \\(\\mathbf{p}_{h_{k}}^{B_{k+1}}\\), \\(\\mathbf{p}_{h_{k}}^{L_{k+1}}\\) are the IMU/odometer preintegration and the LiDAR odometry pose divergence between two consecutive keyframes. \\(\\omega_{l}^{D}\\) denotes the degeneracy-aware weight factor, with the degeneracy factor calculated following [28]. The empirical threshold \\(\\lambda_{emp}\\) is get from feature-rich railway stations.
Since the horizontal FoV of the four side-view LiDARs is highly restricted (25.1\({}^{\circ}\) for the Horizon, 14.5\({}^{\circ}\) for the Tele-15), the feature-based scan matching is prone to failure for high-speed rail vehicles. We hereby leverage the GICP-based factor graph optimization [29] to obtain \(\mathbf{r}_{\mathcal{R}_{k}}\).
We notice that the LiDAR-only odometry with a LiDAR of limited FoV is over-sensitive to the vibrations caused by the joints of rail tracks and the rail track turnouts, where errors may appear in the pitch direction. Besides, the two rail tracks are not of the same height at turnings, and the LiDAR-only odometry will keep this roll divergence even on the following straight railways. As illustrated in [30], the planar features from the segmented ground can effectively constrain the roll and pitch rotation. However, the angle-based ground extraction is not robust for railways, as small height variations will be ignored by the segmentation, which will generate large vertical divergence for large-scale mapping tasks.
We hereby employ the rail track plane to provide ground constraints. We first detect the track bed area using the LiDAR sensor mounting height and angle, as illustrated in Fig. 5 (a). With the assumption that the LiDAR is centered between the two rail tracks, we can set two candidate areas around the left and right rail tracks and search for the points with local maximum height over the track bed. Two straight lines can then be fitted using the random sample consensus (RANSAC) [31] method. Finally, we exploit the idea of region growing [32] for further refinement, with the result shown in Fig. 5 (b).
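A minimal sketch of the RANSAC line-fitting step is given below; the candidate points and thresholds are placeholders.

```python
# Fit one rail track as a 2-D line with RANSAC from candidate local-maximum-height points.
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        rel = points - points[i]
        # Perpendicular distance of every point to the candidate line (2-D cross product).
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

candidates = np.column_stack([np.linspace(0.0, 10.0, 50), 0.02 * np.ones(50)])  # toy points
mask = ransac_line(candidates)
```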
We are now able to define a plane with the two sets of rail track points using RANSAC. And the ground plane \\(\\mathbf{m}\\) can be parameterized by the normal direction vector \\(\\mathbf{n}_{p}\\) and a distance scalar \\(d_{p}\\), \\(\\mathbf{m}=[\\mathbf{n}_{p}^{T},d_{p}]^{T}\\). Then the ground plane measurement residual can be expressed as:
\\[\\mathbf{r}_{\\mathcal{P}_{k}}=\\mathbf{m}_{k+1}-\\mathbf{T}_{k+1}^{L_{k}}\\mathbf{m}_{k} \\tag{14}\\]
### _Optimization with Online Calibration_
Inspired by [5] and [22], we treat the online calibration as a submap registration problem. When long-duration operation (normally 4 hours) triggers the online calibration, the system collects the submaps constructed under the respective LiDAR coordinates and performs the ICP algorithm to align the various coordinate frames. When the online calibration of LiDAR _1_ \(\sim\) _6_ is finished, LiDAR _7_ will utilize the optimized pose for submap construction and follow-up calibration.
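A simplified sketch of this submap-registration step is shown below using Open3D's ICP; the library choice and parameters are assumptions for illustration, not the onboard implementation.

```python
# Refine an auxiliary-to-primary extrinsic by aligning the two submaps, starting
# from the current extrinsic guess T_init (4x4 homogeneous matrix).
import open3d as o3d

def refine_extrinsic(aux_submap, primary_submap, T_init, max_corr_dist=0.5):
    result = o3d.pipelines.registration.registration_icp(
        aux_submap, primary_submap, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # updated extrinsic estimate
```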
### _GNSS Factor_
The accumulated drifts of the system can be corrected using GNSS measurements. The GNSS factor is added when the estimated position covariance is larger than the reported GNSS covariance, as in [6]. However, we find that the reported GNSS covariance is sometimes not trustworthy and may yield blurred or inconsistent mapping results. We hereby model the GNSS measurements \(\mathbf{p}_{k}^{W}\) with additive noise, and the global position residual can be defined as:
Fig. 5: (a) Illustration of the track bed area detection and candidate rail track points searching. (b) The extracted planar points (green), edge points (red), and rail tracks (yellow) from the LiDAR \\(1\\).
\\[\\mathbf{r}_{\\xi_{k}}=\\mathbf{R}_{W}^{B_{k}}(\\mathbf{p}^{W_{k}}-\\mathbf{p}_ {W}^{B}-\\ \\mathbf{p}_{B_{k}}^{W}\\] \\[+\\frac{1}{2}\\mathbf{g}^{W}\\Delta t_{k}^{2}-\\ \\mathbf{v}_{B_{k}}^{W} \\Delta t_{k})-\\widehat{\\mathbf{a}}_{B_{k+1}}^{B_{k}} \\tag{15}\\]
where \\(\\mathbf{p}_{W}^{B}\\) is the transformation from the receiver antenna to the IMU, which can be obtained from installation configuration. Note that we only consider the single point positioning (SPP) result as input due to the inconsistent 4G communication quality for long railroads.
### _Map Management_
The accurate scan-to-map registration of LOAM relies on the convergence of the nonlinear optimization over sufficiently many iterations. However, we find that the scan-to-map registration sometimes does not converge due to insufficient correspondences caused by large velocities, which destroys the whole mapping result. To cope with this problem, we propose a submap-based two-stage map-to-map registration, which first creates submaps based on local optimization and then utilizes the GNSS measurements for error correction and map registration. Once the number of iterations reaches a threshold, we introduce the GNSS positions as the initial guess for ICP registration between the current frame and the currently accumulated submap. In addition, we also leverage the GNSS information for submap-to-submap registration using the normal distribution transform (NDT) [33]. In practice, 10 keyframes are maintained in each submap, which can reduce the mapping blur caused by frequent correction.
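The submap bookkeeping and GNSS-initialized registration described above can be sketched as follows; Open3D's point-to-plane ICP stands in for the NDT registration here, purely for illustration.

```python
# Accumulate 10 keyframes per submap, then register consecutive submaps using the
# GNSS-derived pose as the initial guess.
import open3d as o3d

KEYFRAMES_PER_SUBMAP = 10

class SubmapManager:
    def __init__(self):
        self.current = o3d.geometry.PointCloud()
        self.count = 0
        self.finished = []

    def add_keyframe(self, cloud):
        self.current += cloud
        self.count += 1
        if self.count == KEYFRAMES_PER_SUBMAP:
            self.finished.append(self.current)
            self.current = o3d.geometry.PointCloud()
            self.count = 0

def register_submaps(source, target, T_gnss_guess, max_corr_dist=1.0):
    source.estimate_normals()
    target.estimate_normals()
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_gnss_guess,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```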
## V Experiment
The setup of the seven LiDARs is shown in Fig. 1 and Fig. 3, and the overall system is shown in Fig. 6. All the LiDARs are connected via a Livox Hub 1. Besides, we employ a Ladybug5+2 panoramic camera for calibration and LiDAR-camera based object detection. Additionally, the system also fuses an integrated navigation unit, Femtomes MiniII-D-INS3, and rail vehicle wheel odometers. All the sensors are hardware-synchronized with a u-blox EVK-M8T GNSS timing evaluation kit using GPS pulse per second (GPS-PPS).
Footnote 1: [https://www.livoxtech.com/hub](https://www.livoxtech.com/hub)
Footnote 2: [https://www.flir.com/products/ladybug5plus/](https://www.flir.com/products/ladybug5plus/)
Footnote 3: [http://www.femtomes.com/en/Minill.php?name=Minill](http://www.femtomes.com/en/Minill.php?name=Minill)
We employ a personalized onboard computer, with i9-10980HK CPU (2.4 GHz, octa-core), 64GB RAM, for real-time processing. All our algorithms are implemented in C++ and executed in Ubuntu Linux using the ROS [34].
We conduct a series of experiments on different railroads, and we employ two datasets for explanation here, as listed in TABLE I. The ground truth is provided by the post-processed result of an MPSTNAV M39 GNSS/INS integrated navigation system4 (with RTK corrections sent from Qianxun SI).
Footnote 4: [http://www.whmpsst.com/en/imgproduct.php?aid=29](http://www.whmpsst.com/en/imgproduct.php?aid=29)
### _Result of Online Extrinsic Calibration_
The maintenance rail vehicle leaves the station at 6:57 AM and works until 3:32 PM, with _FY-Back1_ covering a portion of the return data. As shown in Fig. 7 A, the mapping result is blurred without extrinsic refinement. After an online extrinsic calibration lasting around 43 s, both the rotation and translation converge to stable values. The clear and well-matched power towers (joint mapping of LiDAR \(1\), \(2\), \(4\), and _6_) in Fig. 7 B show the remarkable ability of our algorithm to refine the extrinsics.
### _Result of State Estimation_
We now present a series of evaluations to quantitatively analyze our proposed framework. We employ two novel Livox-LiDAR-based graph SLAM systems, Lili-om [3] and Lio-Livox5, for comparison. Both of them directly take the calibrated and merged point clouds as input. Besides, the odometer and global constraints are also manually added for a fair comparison. TABLE II summarizes the root mean square error (RMSE) metrics. It is seen that Lio-Livox has a worse performance than SPP due to wrongly detected plane constraints, which generate large deviations in the vertical direction. Besides, neither algorithm can achieve real-time performance on an i9-11900K, 128GB RAM desktop.
Footnote 5: [https://github.com/Livox-SDK/LIO-Livox](https://github.com/Livox-SDK/LIO-Livox)
### _Result of Multi-LiDAR Mapping_
We show that our proposed method is accurate enough to build large-scale maps of railroad environments. The real-time mapping is shown in Fig. 8 and Fig. 9. We can see that the point
Fig. 6: Our hardware setup. The white, yellow, red, blue, and green dashed rectangle indicates the Ladybug5+, Minill-D-INS and M39, localization antenna, Livox Hub, and GPS-PPS synchronization units, respectively.
cloud data from different LiDARs is aligned well together and the consistency is kept locally. The well-matched result with the satellite image indicates that our proposed method is of high precision globally. In addition, we leverage the refined camera-LiDAR extrinsic to plot the colored mapping result in Fig. 10.
### _Runtime Analysis_
The average runtime for processing each scan in different scenarios is shown in TABLE III, demonstrating that the proposed system is capable of real-time operation in all conditions.
Fig. 8: A): The mapping result of _FY-Back1_ aligned with the satellite map, and the color is coded by height variations. B) and C) indicates two examples of potential risks detected by the panoramic camera, with B) a prefabricated houses 110 m away from the railroad central line and C) a greenhouse plastic film 162 m away from the railroad central line. D) and E) denotes the corresponding point clouds of B) and C).
Fig. 10: Real-time colored mapping result using the panoramic camera and multiple LiDARs, since the distortion of fisheye camera is large with increased distance, we only take the points within 50 m for mapping.
Fig. 7: Visual illustration of the online calibration of multiple LiDARs. A) presents the blurred mapping due to eight hours of continuous running without online refinement, where the power towers are "stretched" and the upper and lower halves do not coincide with each other. B) \(\sim\) D) present the mapping of a power tower, two maintenance rail vehicles, and a station after online extrinsic refinement. All the colors are coded by intensity variations.
Fig. 9: The mapping result of _HQ-to2_ coded by height variations. Note that the outlier points (with large z value) are caused by the direct sunlight. A) β D) is the detailed inspection of the area marked in dashed circle, with A) in the urban area, B) denotes a small station, C) presents a village path, and D) is a nearby park.
## VI Conclusion
In this paper, we proposed an accurate and robust localization and mapping framework for rail vehicles. Our system integrates measurements from multiple LiDARs, an IMU, a train odometer, and GNSS in a tightly-coupled manner. Besides, we leverage geometric structure constraints to cope with the rotational divergence due to limited FoV. The proposed method has been extensively validated on large-scale railways, achieving decimeter-scale accuracy in most scenarios.
Future work will integrate the panoramic visual information into pose estimation as well as map construction, and verify the system robustness in presence of GNSS failure.
## Acknowledgment
We would like to thank our colleagues from the Hefei power supply section, China Railway, for their kind support.
## References
* [1] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, "FAST-LIO2: Fast Direct LiDAR-inertial Odometry," arXiv preprint arXiv:2107.06829, 2021.
Precise and real-time rail vehicle localization as well as railway environment monitoring is crucial for railroad safety. In this letter, we propose a multi-LiDAR based simultaneous localization and mapping (SLAM) system for railway applications. Our approach starts with measurement preprocessing to denoise and synchronize multiple LiDAR inputs. Different frame-to-frame registration methods are used according to the LiDAR placement. In addition, we leverage the plane constraints from extracted rail tracks to improve the system accuracy. The local map is further aligned with the global map utilizing absolute position measurements. Considering the unavoidable metal abrasion and screw loosening, online extrinsic refinement is awakened for long-duration operation. The proposed method is extensively verified on datasets gathered over 3000 km. The results demonstrate that the proposed system achieves accurate and robust localization together with effective mapping for large-scale environments. Our system has already been applied to a freight traffic railroad for monitoring tasks.
SLAM, multi-LiDAR, rail vehicle.
# A compendium of data sources for data science, machine learning, and artificial intelligence
Paul Bilokon
Imperial College London
South Kensington Campus
Exhibition Road
London SW7 4UB
[email protected]
Oleksandr (Alex) Bilokon
Thalesians Marine Ltd
3rd Floor, 120 Baker Street
London W1U 6TU
[email protected]
Saeed Amen
Turnleaf Analytics Ltd
59 Kensington Court
London W8 5DG
[email protected]
## 1 Introduction
Bearing in mind the recent advances in data science, machine learning, and artificial intelligence, such as the emergence of large language models [4], the availability of high-quality data is crucial for data scientists, machine learning, and artificial intelligence experts of all levels of seniority. Now that high-quality tools are available, access to data is a crucial enabling factor for research.
While data sources are application-specific, and it is impossible to produce an exhaustive list of such data sources, it seems that a comprehensive, rather than complete, list would still benefit the scientific and business communities. We intend to update this list as new data sources become available.
Many of the data sources listed here are what is known as _alternative data_: data gathered outside of traditional sources such as company filings and broker research notes. It is not our goal to provide in this document instructions for making sense and extracting value from such data -- the reader may wish to consult [3].
However, for the benefit of the reader, we will provide a few quick tips for working with these datasets. The technology of choice for dealing with data is the lingua franca of data science -- Python -- and its libraries, such as pandas, NumPy, and Matplotlib [19].
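For example, a first look at a tabular dataset typically takes only a few lines; the file and column names below are placeholders.

```python
# Load, summarise, and plot a dataset with pandas and Matplotlib.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv", parse_dates=["date"])
print(df.describe())

df.set_index("date")["value"].plot(title="A first look at the data")
plt.show()
```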
More advanced tools, such as boosting [27], deep learning [9], and ChatGPT [4] can then be applied for high-quality data analysis, machine learning, and artificial intelligence. The process of artificial intelligence-assisted data analysis is quite involved, and usually requires practical expertise and an appropriate educational background, usually at the MSc or PhD level or equivalent.
The dataset may consist of historical data, which is static, or real-time data, which keeps arriving in real time. It may be relatively small in size or may constitute _big data_[22], whose use may require special tools, such as specialised big data / high-frequency data databases (e.g. kdb+q [22]). If you intend to build a real-time production system utilising this data, you may need to build it in a programming language other than Python, such as C++ [28].
The datasets may be commercial or noncommercial/free of charge. In each case, before using a dataset, make sure that you carefully examine the terms and conditions, such as licensing. Some vendors will provide _application programming interfaces (APIs)_[13], which will enable you to easily access their product offering. You may have to use a general-purpose _protocol_, such as REST [5] or WebSocket [6], or a specialised one, such as the Financial Information Exchange (FIX) protocol [7], to access the dataset. Usually the relevant information is contained in the product documentation.
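As an illustration, a REST request typically looks like the following sketch; the URL, parameters, and authentication scheme are placeholders rather than any specific vendor's API.

```python
# Minimal REST API call with the requests library.
import requests

response = requests.get(
    "https://api.example.com/v1/data",
    params={"symbol": "ABC", "start": "2023-01-01"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
response.raise_for_status()
records = response.json()
```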
In some cases no API is provided, and the data may have to be extracted from websites (using libraries such as Selenium [26] or Beautiful Soup [25]), images or PDF files (using libraries such as Tesseract OCR [15]). Before applying such _scraping_[21] make sure that you have read the vendor's terms and conditions and confirm that the terms and conditions do indeed allow such usage. In each case it is generally a good idea to consult a legal professional before onboarding a dataset, especially a commercial dataset.
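Where scraping is permitted, a minimal example with requests and Beautiful Soup might look as follows; the URL and HTML structure are placeholders.

```python
# Extract table rows from an HTML page with Beautiful Soup.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.example.com/table-page", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.find_all("tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)
```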
Sometimes you (or your organisation) may be your own best source of data -- in which case make sure that you log it carefully using an appropriate logging framework, database, or observability stack (Elastic1, Grafana2, Splunk3, etc.).
Footnote 1: [https://www.elastic.co/](https://www.elastic.co/)
Footnote 2: [https://grafana.com/](https://grafana.com/)
Footnote 3: [https://www.splunk.com/](https://www.splunk.com/)
Footnote 4: [https://www.alteryx.com/](https://www.alteryx.com/)
Footnote 5: [https://powerbi.microsoft.com/](https://powerbi.microsoft.com/)
Footnote 6: [https://www.tableau.com/](https://www.tableau.com/)
You may wish to further automate your data science work using one or several of the business intelligence stacks (Alteryx4, Microsoft Power BI5, Tableau Software 6, etc.). Such tools can also make the data science, machine learning, and, in principle, artificial intelligence analysis more accessible to less technical users.
Footnote 5: [https://www.splunk.com/](https://www.splunk.com/)
When selecting datasets for inclusion, we have of necessity been biased. We have therefore included first and foremost those datasets that have been used by us, our students, or other collaborators in our academic and/or commercial work. We do not guarantee the reliability of those datasets and the information is provided \"as is\" without any explicit or implicit warranty. The reader is reminded to check the relevant terms and conditions before using any dataset, including those mentioned here.
If a particular dataset is missing from this compendium and you would like to see it included in the following editions, please let us know. If there are mistakes and/or omissions, e.g. missing citations and/or URLs, please accept our apologies -- this is not intentional -- please also do let us know (ideally mentioning the suggested BibTeX, where appropriate).
## 2 General resources
Before we proceed to consider specific datasets, we will briefly mention some general-purpose tools that can help with dataset and machine learning technique search. It is never a good idea to reinvent the wheel, unless your intention is to replicate and validate existing results. Before starting your analysis, make sure that it hasn't already been done by someone else and does not appear in the literature. Therefore it's useful to perform a search for relevant academic papers and white papers before you commence your work. These are also good sources of up-to-date machine learning techniques.
1. _Google Dataset Search_ is a search engine from Google that helps researchers locate online data that is freely available for use. The company launched the service on 5 September, 2018, and stated that the product was targeted at scientists and data journalists. The service was out of beta as of 23 January, 2020. URL: [https://datasetsearch.research.google.com/](https://datasetsearch.research.google.com/)
2. _Google Scholar_ provides a simple way to broadly search for scholarly literature. It is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.
URL: [https://scholar.google.com/](https://scholar.google.com/)
3. _arXiv_ is an open-access repository of electronic preprints and postprints (known as e-prints) approved for posting after moderation, but not peer reviewed. While such repositories provide early access to research, users should be aware that preprints have not been peer reviewed, and may not be of the same quality as peer reviewed papers in high-quality academic journals. Use at your own risk. URL: [https://arxiv.org/](https://arxiv.org/)
4. _medRxiv_ is similar to arXiv but it distributes preprints specifically in health sciences. URL: [https://www.medrxiv.org/](https://www.medrxiv.org/)
5. _Papers With Code_ -- a free and open resource with machine learning papers, code, datasets, methods, and evaluation tables. URL: [https://paperswithcode.com/](https://paperswithcode.com/)
6. _GitHub_ is a web-based version control and collaboration platform for software developers. It contains many software projects, many of which are public/open source, and can be searched. URL: [https://github.com/](https://github.com/)
7. _GitLab_ is a web-based version control and collaboration platform for software developers, which also aims to be a comprehensive DevOps platform being delivered as a single application. It contains many software projects, many of which are public/open source, and can be searched. URL: [https://gitlab.com/](https://gitlab.com/)
8. _Kaggle_ is the world's largest data science community with powerful tools and resources to help you achieve your data science goals. It features competitions, datasets, and tutorials. URL: [https://www.kaggle.com/](https://www.kaggle.com/)
9. The _Conference on Neural Information Processing Systems (NeurIPS)_ is a premier conference on machine learning and artificial intelligence. The conference was founded in 1987 and is now a multi-track interdisciplinary annual meeting that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Along with the conference is a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas. URL: [https://nips.cc/](https://nips.cc/)
10. The _International Conference on Machine Learning (ICML)_ is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs. URL: [https://icml.cc/](https://icml.cc/)
11. The _International Conference on Learning Representations (ICLR)_ is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics. Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs. URL: [https://iclr.cc/](https://iclr.cc/)
12. Wikipedia's _List of datasets for machine-learning research_ lists the datasets that are applied for machine learning research and have been cited in peer-reviewed academic journals. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less intuitively, the availability of high-quality training datasets. High-quality labelled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labelled, high-quality datasets for unsupervised learning can also be difficult and costly to produce. If you are developing machine learning algorithms, make sure that you compare them against these established benchmarks. URL: [https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research](https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research)
13. _Common Crawl_ (a 501(c)(3) non-profit founded in 2007) maintains a free, open repository of web crawl data. They make wholesale extraction, transformation, and analysis of open web data accessible to researchers. The resulting corpus contains petabytes of data regularly collected since 2008. You may use Amazon's cloud platform to run analysis jobs directly against this data, or you can download it, whole or in part. This corpus has been used to train large language models, including those underlying ChatGPT. URL: [https://commoncrawl.org/](https://commoncrawl.org/)
14. The _Webis-Dataset-Reviews-21_[17] corpus comprises the curated list of 13,372 NLP-related datasets and their 539,411 mentions extracted from all publications available in ACL Anthology corpus. URL: [https://webis.de/data/webis-dataset-reviews-21.html](https://webis.de/data/webis-dataset-reviews-21.html)
15. The _Rapid API Hub_ enables the data scientist to discover and connect to thousands of APIs within sports, finance, data, entertainment, travel, location, science, food, transportation, music, business, visual recognition, tools, text analysis, weather, gaming, SMS, events, health and fitness, and payments. URL: [https://rapidapi.com/hub](https://rapidapi.com/hub)
## 3 Datasets
In this section we will list some of the datasets that we (or some of our collaborators) find interesting, classifying them by area of application. Our goal is not to repeat the list of standard benchmark datasets (which you can find in Wikipedia, see \"Wikipedia's List of datasets for machine-learning research\" above). Our goal is to list those datasets that can lead to application-specific insights and, hopefully, academic and industrial breakthroughs.
### General
1. _U.S. Government's open data (DATA.GOV)_ contains around 236,476 datasets in different fields such as agriculture, climate, education, finance, health, etc. It also has a search box that helps you find the data you are looking for. The datasets are public in nature, and users can download them in different formats. The catalogue is maintained in a GitHub repository. DATA.GOV is a dataset aggregator and the home of the U.S. Government's open data. URL: [https://data.gov/](https://data.gov/)
2. _The United Kingdom Find Open Data_ -- find data published by central government, local authorities, and public bodies to help you build products and services. The search function covers business and economy, crime and justice, defence, education, environment, government, government spending, health, mapping, society, towns and cities, transport, digital service performance, government reference data. URL: [https://www.data.gov.uk/](https://www.data.gov.uk/)
3. _The United Kingdom statistical data sets_ include 1,037 data sets such as unclaimed estates list, fishing quota allocations for England and the UK, quota use statistics, quarterly traffic estimates (TRA25), live tables on planning application statistics, historical and discontinued planning live tables, Marine Management Organisation effort statistics, non-association independent schools inspections and outcomes: management information, port and domestic waterborne freight statistics: data tables (PORT), and more. URL: [https://www.gov.uk/government/statistical-data-sets](https://www.gov.uk/government/statistical-data-sets)
4. _The UK Data Service_ is the UK's largest collection of economic, population, and social research data for teaching, learning, and public benefit. The website offers a selection of links to open data platforms, portals, and hubs. The list includes European sources, non-European sources, and non-government sources. URL: [https://ukdataservice.ac.uk/](https://ukdataservice.ac.uk/)
5. _Open Government Data (OGD)_ Platform India is a single point of access to datasets in open formats published by Ministries and Departments. The source consists of datasets on real-life of all shapes and sizes along with their APIs and visualisations. The datasets are available for public use. URL: [https://data.gov.in/](https://data.gov.in/)
6. _Wharton Research Data Services (WRDS)_ provide access to data from numerous data vendors. WRDS data is compiled from independent sources that specialise in specific historical data. Some sources include Capital IQ, NYSE, CRSP, and Refinitiv (formerly Thomson Reuters), and more specialised sources such as Markit, FactSet, Hedge Fund Research, Inc., Eventus, and GSIOnline with more added regularly to meet the needs of the clients. The datasets include financial statements, audit and regulatory filings, banks, segments/industry data, compensation, intellectual property, stock prices, analyst estimates, intraday trades and quotes, indices and factors, bonds and fixed income, private equity/venture capital, mutual fund / hedge fund / ETF returns, derivatives / options, REITs, currency exchange rates, ownership, mergers and acquisitions, ESG: Environmental, Social, Governance data, economics, marketing, news (including RavenPack News Analytics and SnP Capital IQ Key Developments), and healthcare. URL: [https://wrds-www.wharton.upenn.edu/](https://wrds-www.wharton.upenn.edu/)
7. Microsoft along with the external research community launched a repository in July 2018 known as _Microsoft Research Open Data_. It consists of curated datasets that were used in the published research studies. In addition, datasets are present in different fields such as computer science, biology, healthcare, mathematics, etc. The repository offers a wide variety of formats for downloading datasets. URL: [https://www.microsoft.com/en-us/research/project/microsoft-research-open-data/](https://www.microsoft.com/en-us/research/project/microsoft-research-open-data/)
8. _Socrata OpenData_ is a portal that contains multiple datasets. This broad range of information makes it more attractive and useful among data scientists and other researchers. You can look for the data in the tabular form in the browser or can use some built-in visualisation tools. URL: [https://dev.socrata.com/data/](https://dev.socrata.com/data/)
9. _Kaggle_ offers more than 250,233 datasets across different subjects, many of them open. At the time of writing, the ten most popular datasets on Kaggle are:
* _Credit Card Fraud Detection_[23] -- This dataset helps companies and teams recognise fraudulent credit card transactions. The dataset contains transactions made by European credit cardholders in September 2013. The dataset presents details of 284,807 transactions, including 492 frauds, that happened over two days. URL: [https://www.kaggle.com/mlg-ulb/creditcardfraud](https://www.kaggle.com/mlg-ulb/creditcardfraud)
* _European Soccer Database_ -- The dataset contains 25,000+ matches, 10,000+ players, 11 European countries with their lead championship, seasons 2008 to 2016, players and teams' attributes sourced from EA Sports' FIFA video game series, including weekly updates, team line up with squad formation (X, Y coordinates), betting odds from up to 10 providers, detailed match events (goal types, corner, possession, fouls, etc.) for 10,000+ matches. For non-commercial use only. URL: [https://www.kaggle.com/hugomathien/soccer](https://www.kaggle.com/hugomathien/soccer)
* _Avocado Prices_ -- The dataset shows the historical data on avocado prices and sales volume in multiple US markets. The information has been generated from the Hass Avocado Board website. It represents weekly 2018 retail scan data for national retail volume (units and price), along with region, type (conventional or organic), and volume sold. The dataset can be applied to other fruits and vegetables across geographies. Contributed by the Hass Avocado Board. URL: [https://www.kaggle.com/neuromusic/avocado-prices](https://www.kaggle.com/neuromusic/avocado-prices)
* _Open Food Facts_ -- This is a free, open, collaborative database of food products worldwide, with ingredients, allergens, nutrition facts, and all the tidbits of information found on product labels. The database is a part of Google's Summer of Code 2018. 5,000+ contributors have added 600K+ products from 150 countries using an app or their camera to scan barcodes and upload pictures of products and their labels. URL: [https://www.kaggle.com/openfoodfacts/world-food-facts](https://www.kaggle.com/openfoodfacts/world-food-facts)
* _IBM HR Analytics Employee Attrition and Performance_ -- Created by IBM data scientists, this fictional dataset is used to predict attrition in an organisation. It uncovers various factors that lead to employee attrition and explores correlations such as \"a breakdown of distance from home by job role and attrition,\" or \"comparison of average monthly income by education and attrition.\" URL: [https://www.kaggle.com/pavansubhash/ibm-hr-analytics-attrition-dataset](https://www.kaggle.com/pavansubhash/ibm-hr-analytics-attrition-dataset)
* _Red Wine Quality_[2] -- Red wine quality is a clean and straightforward practice dataset for regression or classification modelling. The two datasets available are related to red and white variants of the Portuguese 'Vinho Verde' wine. The information in this dataset includes fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, and others. The dataset is also available on the UCI machine learning repository. URL: [https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009](https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009)
* _Medical Cost Personal Datasets_[18] -- This dataset is used for forecasting insurance via regression modelling. The dataset includes age, sex, body mass index, children (dependents), smoker, region and charges (individual medical costs billed by health insurance). The dataset is also available on GitHub. URL: [https://www.kaggle.com/mirichoi0218/insurance](https://www.kaggle.com/mirichoi0218/insurance)
* _Machine Learning and Data Science Survey_ -- Kaggle conducted an industry-wide survey in 2017 to establish a comprehensive overview of the data science and machine learning landscape. The survey received over 16K responses gathering information around data science, machine learning innovation, how to become data scientists, and more. The kernels used in the report are also available on Kaggle. URL: [https://www.kaggle.com/kaggle/kaggle-survey-2017](https://www.kaggle.com/kaggle/kaggle-survey-2017)
* _Titanic_ -- The Titanic dataset consists of original data from the Titanic competition and is ideal for binary logistic regression. The dataset contains information about the passenger's id, age, sex, fare, etc. The Titanic competition involves users creating a machine learning model that predicts which passengers survived the Titanic shipwreck. URL: [https://www.kaggle.com/heptapod/titanic](https://www.kaggle.com/heptapod/titanic)
* _Annotated Corpus for Named Entity Recognition_ -- This dataset is extracted from the Groningen Meaning Bank (GMB) corpus, tagged, annotated, and built specifically to train the classifier to predict labelled entities such as name, location, etc. It gives you a broad view of feature engineering and helps solve business problems like picking entities from electronic medical records, etc. URL: [https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus](https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus)
Further datasets can be found at the following URL: [https://www.kaggle.com/datasets](https://www.kaggle.com/datasets)
10. _UC Irvine Machine Learning Repository_ maintains 644 datasets as a service to the machine learning community. Here you can donate and find datasets used by millions of people all around the world. URL: [https://archive.ics.uci.edu/](https://archive.ics.uci.edu/)
11. _Academic Torrents_ is a less mainstream yet powerful repository for sharing data. It was created to make academic datasets and research papers available via BitTorrent, with the main focus on sharing the datasets behind research papers. URL: [https://academictorrents.com/](https://academictorrents.com/)
12. _Reddit_ is a popular social news site, but it also acts as a discussion board for sharing datasets. Such discussion boards are called _subreddits_; the relevant one is _r/datasets_, a place to share, find, and discuss datasets. However, the quality of the datasets may vary because they are submitted by different users. URL: [https://www.reddit.com/r/datasets/](https://www.reddit.com/r/datasets/)
13. _Awesome Public Datasets_ is a repository on GitHub of high quality topic-centric public data sources. They are collected and tidied from blogs, answers, and user responses. Almost all of these are free. URL: [https://github.com/awesomedata/awesome-public-datasets](https://github.com/awesomedata/awesome-public-datasets)
14. _Data is Plural_ is a weekly newsletter of useful/curious datasets. URL: [https://www.data-is-plural.com/](https://www.data-is-plural.com/)
15. _Data World_ is an open data repository containing data contributed by thousands of users and organisations all around the world. It contains hard to find data. For example, it contains 3,667 free health datasets. URL: [https://data.world/](https://data.world/)
16. _Library of Congress_ offers datasets as potential sources for data science or machine learning projects. Time series are available for most economic, business, census, and demographic statistics. For additional sources of datasets, see the Business Reference Services guide on Data sets (_BeOnline_). The Library of Congress makes these two datasets freely available to researchers and analysts: (1) By the People Data Sets -- transaction data was created from completed By the People campaigns and is available in bulk as zipped .csv files; (2) Web Archive Datasets -- the Library of Congress Web Archives provides derivative datasets for users to download, re-use, and explore. URL: [https://guides.loc.gov/datasets/repositories](https://guides.loc.gov/datasets/repositories)
17. _Datarade.ai_ offer an interface for finding, comparing and accessing data products from 500+ premium data providers across the globe. URL: [https://datarade.ai/](https://datarade.ai/)
### Finance and economics
1. _Bloomberg Terminal_, which the developers describe as "a global icon of progress", has "revolutionized an industry by bringing transparency to financial markets. More than four decades on, it remains at the cutting edge of innovation and information delivery -- with fast access to news, data, unique insight and trading tools helping leading decision makers turn knowledge into action." The Terminal provides coverage of markets, industries, companies, and securities across all asset classes. Bloomberg Terminal is known for its "unparalleled coverage." Access to the Bloomberg Terminal is available on a commercial basis. URL: [https://www.bloomberg.com/professional/solution/bloomberg-terminal/](https://www.bloomberg.com/professional/solution/bloomberg-terminal/)
2. _Bloomberg Professional Services_ offer several APIs for accessing Bloomberg data. For more information refer to the BLPAPI Developer's Guide -- a tutorial for developing applications with BLPAPI in C++, Java, and .NET. There are also API Windows, API Linux, API macOS, Schema Downloader, and API Python packages (a short Python sketch using this API appears at the end of this section). URL: [https://www.bloomberg.com/professional/support/api-library/](https://www.bloomberg.com/professional/support/api-library/)
3. _Bloomberg Server API (SAPI)_ delivers a powerful complement to the Bloomberg Terminal. It allows the user to consume Bloomberg's real-time market, historical, and key reference data, as well as calculation engine capabilities when using proprietary and third-party applications. URL: [https://www.bloomberg.com/professional/product/server-api/](https://www.bloomberg.com/professional/product/server-api/)
4. _Refinitiv Eikon_ is the financial analysis desktop and mobile solution for access to leading data and content, Reuters news, market data, and liquidity pools. Access to Refinitiv Eikon is available on a commercial basis. URL: [https://www.refinitiv.com/en/products/eikon-trading-software](https://www.refinitiv.com/en/products/eikon-trading-software)
5. Refinitiv offers numerous _Refinitiv APIs_ for accessing data programmatically, such as App Studio -- Web SDK, CIAM, Cash RFQ FIX API, and more. URL: [https://developers.refinitiv.com/en/api-catalog](https://developers.refinitiv.com/en/api-catalog)
6. The curators claim that "The world's most powerful data lives on _Quandl_." It is the premier source for financial, economic, and alternative datasets, serving investment professionals. Quandl's platform is used by over 400,000 people, including analysts from the world's top hedge funds, asset managers, and investment banks. Quandl is subdivided into Core Financial Data -- market data from hundreds of sources via API, or directly into Python, R, Excel and many other tools (a short Python sketch appears at the end of this section); and Alternative Data for institutional clients only: "We bring undiscovered data from non-traditional publishers to investors seeking unique, predictive insights." URL: [https://demo.quandl.com/](https://demo.quandl.com/)
7. _Wharton Research Data Services (WRDS)_ provide access to data from numerous data vendors. WRDS data is compiled from independent sources that specialise in specific historical data. Some sources include Capital IQ, NYSE, CRSP, and Refinitiv (formerly Thomson Reuters), and more specialised sources such as Markit, FactSet, Hedge Fund Research, Inc., Eventus, and GSIOnline with more added regularly to meet the needs of the clients. The datasets include financial statements, audit and regulatory filings, banks, segments/industry data, compensation, intellectual property, stock prices, analyst estimates, intraday trades and quotes, indices and factors, bonds and fixed income, private equity/venture capital, mutual fund / hedge fund / ETF returns, derivatives / options, REITs, currency exchange rates, ownership, mergers and acquisitions, ESG: Environmental, Social, Governance data, economics, marketing, news (including RavenPack News Analytics and SnP Capital IQ Key Developments), and healthcare. URL: [https://wrds-www.wharton.upenn.edu/](https://wrds-www.wharton.upenn.edu/)
8. _Trading Economics_ provides its members with access to millions of economics indicators for 196 countries and historical/delayed/live quotes for exchange rates, stocks, indexes, bonds, and commodity prices. URL: [https://tradingeconomics.com/analytics/features.aspx?source=footer](https://tradingeconomics.com/analytics/features.aspx?source=footer)
9. _BMLL Technologies_ is a leading, independent provider of Level 3 historical data and analytics. The company claims that their Level 3 data is "the cleanest order book data available anywhere in the capital markets." BMLL Level 3 data captures every order sent to the market, and is fully harmonised across venues and asset classes. BMLL's 6+ years of data and analytics span global equities, ETFs and futures from 75+ trading venues, and are used by banks and brokers, asset managers, global exchange groups, and hedge funds. URL: [https://www.bmlltech.com/](https://www.bmlltech.com/)
10. _Databento_ provides real-time and historical market and reference data, sourced directly from colocation sites. At present the database covers equities, equity options, futures, and options on futures. The APIs provided by Databento (WebSocket, Raw, HTTP, C++, Python) offer both historical and live data. URL: [https://databento.com/](https://databento.com/)
11. _FirstRate Data_ is a leading provider of high-resolution intraday stock market, crypto, futures, and FX data. They source their historical stock data directly from major exchanges and fully adjust the data for both splits and dividends. Futures and ETF datasets are also sourced from co-located servers in major exchanges. All datasets are rigorously tested for accuracy. The historical intraday data solutions are research-ready and are offered in 1-minute, 5-minute, 30-minute, 1-hour, and 1-day intraday stock data as well as intraday futures, ETFs, and FX data going back 15 years, and tick data going back 10 years. URL: [https://firstratedata.com/](https://firstratedata.com/)
12. _Turnleaf Analytics_ use machine learning, alternative data, and new technologies to create economic forecasts of inflation and analyse financial markets, in order to provide their clients the clarity needed when navigating the complex financial landscape. URL: [https://turnleafanalytics.com/](https://turnleafanalytics.com/)
13. _FINRA_ provides real-time and historic data for most _TRACE-eligible securities_ (including US corporate bonds) to members and any others that choose to subscribe for a fee. The data feeds include real-time data, end-of-day data, terminals and snapshot data. URL: [https://www.finra.org/filing-reporting/trace/data](https://www.finra.org/filing-reporting/trace/data)
14. _Neptune_ deliver targeted, high quality data directly from corporate bond dealers into core workflow tools. Real-time, structured and standardised connectivity means reduction of \"noise\" and inaccurate data. The use of FIX allows for ease of connectivity via API, OMS/EMS or via Neptune's GUI. High quality, pre-trade bond data is available via one-connection, from the leading sell-side market makers in fixed income. URL: [https://neptunefi.com/](https://neptunefi.com/)
15. _Bonds.com_ provide market and static data for over 20,000 bonds. The data consists of real-time tick-by-tick (top of book or full depth, 30-100mm+ updates per day), intra-day intervals, end of day, and historical. URL: [https://bonds.com/data/](https://bonds.com/data/)
16. _Dukascopy Swiss Banking Group_ provide the _Historical Data Feed_, which includes historical price data for a variety of financial instruments (e.g. FX, commodities, and indices): [https://www.dukascopy.com/swiss/english/marketwatch/historical/](https://www.dukascopy.com/swiss/english/marketwatch/historical/)
17. _Investing.com_ offer live FX option volatility surfaces for G10 and EM currency pairs. URL: [https://www.investing.com/currencies/forex-options](https://www.investing.com/currencies/forex-options)
18. _Google Finance_ provides free real-time quotes, international exchanges, up-to-date financial news and analytics. URL: [https://www.google.com/finance/?hl=en](https://www.google.com/finance/?hl=en)
19. _Yahoo Finance_ provides free stock quotes, up-to-date news, portfolio management resources, international market data, social interaction, and mortgage data (a short sketch of pulling historical prices appears at the end of this section). URL: [https://finance.yahoo.com/](https://finance.yahoo.com/)
20. _Credit Card Fraud Detection_[23] -- This dataset helps companies and teams recognise fraudulent credit card transactions. The dataset contains transactions made by European credit cardholders in September 2013. The dataset presents details of 284,807 transactions, including 492 frauds, that happened over two days. URL: [https://www.kaggle.com/mlg-ulb/creditcardfraud](https://www.kaggle.com/mlg-ulb/creditcardfraud)
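As an illustrative sketch for item 2 (the Bloomberg APIs), the snippet below issues a single reference-data request with the blpapi Python package; it assumes a Bloomberg Terminal running locally on the default port 8194, and the security and field are examples only -- the BLPAPI Developer's Guide remains the authoritative reference.

```python
import blpapi  # Bloomberg's official API package for Python

# Connect to the locally running Terminal (Desktop API, default port 8194).
options = blpapi.SessionOptions()
options.setServerHost("localhost")
options.setServerPort(8194)

session = blpapi.Session(options)
if not session.start() or not session.openService("//blp/refdata"):
    raise RuntimeError("could not connect to the Bloomberg API")

# Request the last price of a single security.
service = session.getService("//blp/refdata")
request = service.createRequest("ReferenceDataRequest")
request.getElement("securities").appendValue("IBM US Equity")
request.getElement("fields").appendValue("PX_LAST")
session.sendRequest(request)

done = False
while not done:
    event = session.nextEvent(500)   # wait up to 500 ms for the next event
    for message in event:
        print(message)
    done = event.eventType() == blpapi.Event.RESPONSE
session.stop()
```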
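As an illustrative sketch for item 6 (Quandl), the package below pulls one time series straight into a pandas DataFrame; the dataset code and API key are placeholders, and Quandl content is nowadays also distributed under the Nasdaq Data Link brand.

```python
import quandl  # pip install quandl

# Placeholder key and dataset code: substitute values from your own account.
quandl.ApiConfig.api_key = "YOUR_API_KEY"
gdp = quandl.get("FRED/GDP", start_date="2010-01-01", end_date="2020-12-31")
print(gdp.tail())  # a pandas DataFrame indexed by date
```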
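As an illustrative sketch for item 19 (Yahoo Finance), the community-maintained yfinance package is widely used for quick experiments; it is a third-party tool rather than an official Yahoo product, so check Yahoo's terms of use before relying on it, and treat the ticker and dates below as examples only.

```python
import yfinance as yf  # pip install yfinance; third-party, not a Yahoo product

# Download daily bars for one ticker over an illustrative date range.
prices = yf.download("AAPL", start="2020-01-01", end="2020-12-31")
print(prices[["Open", "High", "Low", "Close", "Volume"]].head())
```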
### Legal (laws and regulations)
1. _The United States Patent and Trademark Office (USPTO)_ Patent Public Search tool is a web-based patent search application that has replaced the internal legacy search tools PubEast and PubWest and external legacy search tools PatFT and AppFT. URL: [https://www.uspto.gov/patents/search](https://www.uspto.gov/patents/search)
2. _The United States Patent and Trademark Office (USPTO)_ new trademark search system will soon replace the existing Trademark Electronic Search System (TESS). URL: [https://www.uspto.gov/trademarks/search](https://www.uspto.gov/trademarks/search)
3. _GOV.UK Search-for-a-patent_ searches for published patent applications and registered patents using the Intellectual Property Office's patent information and document service (Ipsum) and patent publication service. URL: [https://www.gov.uk/search-for-patent](https://www.gov.uk/search-for-patent)
4. _GOV.UK Search-for-a-trade-mark_ can be used to search for a UK trade mark by trade mark number, owner, keyword, phrase, or image. URL: [https://www.gov.uk/search-for-trademark](https://www.gov.uk/search-for-trademark)
5. For trade marks in Jersey, search the _Jersey trade mark register_. URL: [http://www.jgreffe-online.gov.je/trademarksdb/searchform.asp](http://www.jgreffe-online.gov.je/trademarksdb/searchform.asp)
6. For trade marks in Guernsey, search the _Guernsey trade mark register_. URL: [http://ipo.guernseyregistry.com/article/107508/View-the-Trade-Mark-Register](http://ipo.guernseyregistry.com/article/107508/View-the-Trade-Mark-Register)
7. _Espacenet_, the patent search service of the European Patent Office, provides free access to over 140 million patent documents. URL: [https://worldwide.espacenet.com/](https://worldwide.espacenet.com/)
8. The _Deutsches Patent- und Markenamt (DPMA)_ offers access to patents, trademarks, and designs via DPMAregister, DEPATISnet, DPMAdirektWeb, and DPMAdirektPro. URL: [https://www.dpma.de/](https://www.dpma.de/)
9. The _China National Intellectual Property Administration (CNIPA)_ offers patent and trademark search. URL: [https://english.cnipa.gov.cn/](https://english.cnipa.gov.cn/)
10. The _Indian Patent Advanced Search System of Intellectual Property India_ offers patent search. URL: [https://ipresearch.ipindia.gov.in/publicsearch](https://ipresearch.ipindia.gov.in/publicsearch)
11. Using _World Intellectual Property Organization (WIPO) PATENTSCOPE_ you can search 113 million patent documents including 4.7 million published international patent applications (PCT). URL: [https://patentscope.wipo.int/search/en/search.jsf](https://patentscope.wipo.int/search/en/search.jsf)
12. _Google Patents_ searches and displays the full text of patents from around the world. URL: [https://patents.google.com/](https://patents.google.com/)
### Life sciences
1. The World Health Organization (WHO) _International Clinical Trials Registry Platform (ICTRP)_ is \"a voluntary platform to link clinical trials registers in order to ensure a single point of access and the unambiguous identification of trials with a view to enhancing access to information by patients, families, patient groups and others.\" The platform has a searchable portal: International Clinical Trials Registry Platform (ICTRP) search portal. URL: [http://apps.who.int/trialsearch/](http://apps.who.int/trialsearch/)
2. _ClinicalTrials.gov_ is a registry and results database of privately and publicly funded clinical studies conducted around the world. The resource is provided by the U.S. National Library of Medicine. Each study record includes a summary of the study protocol. Some study records include a summary of the results in a tabular format. Studies can be searched by status, condition or disease, country, or by other terms. The database is continually updated; it currently lists over 300,000 research studies located in all 50 states in the United States of America and over 200 other countries around the world. URL: [https://clinicaltrials.gov/](https://clinicaltrials.gov/)
3. The _ISRCTN registry_ is a primary clinical trial registry. It is recognised by the World Health Organisation (WHO) and the International Committee of Medical Journal Editors (ICMJE) and accepts all clinical research studies (whether proposed, ongoing or completed). Each study record includes a plain English summary as well as details of the study protocol. All study records can be searched using an advanced search function. Currently the registry lists over 18,000 studies. URL: [https://www.isrctn.com/](https://www.isrctn.com/)
4. The _EU Clinical Trials Register_ contains information on interventional clinical trials on medicines conducted in the European Union (EU), or the European Economic Area (EEA) which started after 1 May 2004. It also includes some clinical trials conducted outside the EU/EEA or some older trials that meet certain criteria. Study records indicate the trial protocol and provide results where available. Records can be searched using an advanced search function. Currently the registry displays over 34,000 clinical trials. URL: [https://euclinicaltrials.eu/](https://euclinicaltrials.eu/)
5. The _Pan African Clinical Trials Registry (PACTR)_ is a regional register of clinical trials conducted in Africa. It is an open-access platform where clinical trials can be registered free of charge, providing an electronic database of planned trials and trials currently in progress. URL: [https://pactr.samrc.ac.za/](https://pactr.samrc.ac.za/)
6. _PubChem_[16] is the world's largest collection of freely accessible chemical information. Users can search chemicals by name, molecular formula, structure, and other identifiers. The database contains chemical and physical properties, biological activities, safety and toxicity information, patents, literature citations, and more.
PubChem contains 116 million compounds, 308 million substances, and 934 data sources. Data can also be retrieved programmatically via the PUG REST interface (see the example sketch at the end of this section). URL: [https://pubchem.ncbi.nlm.nih.gov/](https://pubchem.ncbi.nlm.nih.gov/)
7. _ChEMBL_ or _ChEMBLdb_[20] is a manually curated chemical database of bioactive molecules with drug-like properties. It is maintained by the European Bioinformatics Institute, of the European Molecular Biology Laboratory, based at the Wellcome Trust Genome Campus, Hinxton, UK. The database brings together chemical, biological, and genomic data to aid the translation of genomic information into effective new drugs. At the time of writing, ChEMBL includes 15,398 targets, 2,399,743 distinct compounds, 20,334,684 activities, 88,630 publications, and 215 deposited datasets. URL: [https://www.ebi.ac.uk/chembl/](https://www.ebi.ac.uk/chembl/)
8. _Chemical Entities of Biological Interest_[11], also known as _ChEBI_, is a chemical database and ontology of molecular entities focused on 'small' chemical compounds that is part of the Open Biomedical Ontologies effort at the European Bioinformatics Institute (EBI). In order to create ChEBI, data from a number of sources were incorporated and subjected to merging procedures to eliminate redundancy. Four of the main sources from which the data are drawn are:
   * _IntEnz_ -- the Integrated relational Enzyme database of the EBI. IntEnz is the master copy of the Enzyme Nomenclature, the recommendations of the NC-IUBMB on the Nomenclature and Classification of Enzyme Catalysed Reactions.
   * _KEGG COMPOUND_ -- one part of the Kyoto Encyclopedia of Genes and Genomes LIGAND database, COMPOUND is a collection of biochemical compound structures.
   * _PDBeChem_ -- the service providing web access to the Chemical Component Dictionary of the wwPDB as this is loaded into the PDB database at the EBI.
   * _ChEMBL_ -- a database of bioactive compounds, their quantitative properties and bioactivities, abstracted from the primary scientific literature. It is part of the ChEMBL resources at the EBI.
URL: [https://www.ebi.ac.uk/chebi/](https://www.ebi.ac.uk/chebi/)
9. _DrugBank_[29] is the most comprehensive, up-to-date, and accurate drug database on the market. It is available for commercial and academic research and is also accessible via a clinical API. DrugBank offers customisable drug search options, drug-drug interaction checker, allergy and cross-sensitivities information, and US drug labels. DrugBank Online is offered to the public as a free-to-access resource. Use and re-distribution of the content of DrugBank Online or the DrugBank Data, in whole or in part, for any purpose requires a license. Academic users may apply for a free license for certain use cases and all other users require a paid license. DrugBank boasts 26,500+ citations in scientific publications, 13 of 20 top pharma companies are customers, and more than 1.5 billion USD has been spent on research utilising DrugBank. URL: [https://www.drugbank.com/](https://www.drugbank.com/)
10. _MoleculeNet_[30] is a benchmark specially designed for testing machine learning methods on molecular properties. To facilitate the development of molecular machine learning methods, MoleculeNet curates a number of dataset collections and provides a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open source DeepChem package (MIT license). MoleculeNet is built upon multiple public databases, and the full collection currently includes over 700,000 compounds tested on a range of different properties. The maintainers test the performance of various machine learning models with different featurizations on the datasets, with all results reported as AUC-ROC, AUC-PRC, RMSE, and MAE scores (see the example sketch at the end of this section). URL: [https://moleculenet.org/](https://moleculenet.org/)
11. _AlphaFold Protein Structure Database_[14] provides open access to over 200 million protein structure predictions to accelerate scientific research. AlphaFold is an AI system developed by DeepMind that predicts a protein's 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment. DeepMind and EMBL's European Bioinformatics Institute (EMBL-EBI) have partnered to create AlphaFold DB to make these predictions freely available to the scientific community. The latest database provides broad coverage of UniProt (the standard repository of protein sequences and annotations). They provide individual downloads for the human proteome and for the proteomes of 47 other key organisms important in research and global health. They also provide a download for the manually curated subset of UniProt (Swiss-Prot). URL: [https://alphafold.ebi.ac.uk/](https://alphafold.ebi.ac.uk/)
12. _RCSB Protein Data Bank (RCSB PDB)_[1] enables breakthroughs in science and education by providing access and tools for exploration, visualisation, and analysis of (1) experimentally-determined 3D structures from the Protein Data Bank (PDB) archive; (2) Computed Structure Models (CSM) from AlphaFold DB and Model Archive. These data can be explored in the context of external annotations providing a structural view of biology. URL: [https://www.rcsb.org/](https://www.rcsb.org/)
13. _KEGG COMPOUND_[10] is one of the four original databases, together with KEGG PATHWAY, KEGG GENES and KEGG ENZYME, introduced at the start of the KEGG project in 1995. It is a collection of small molecules, biopolymers, and other chemical substances that are relevant to biological systems. Each entry is identified by the C number, such as C00047 for L-lysine, and contains chemical structure and associated information, as well as various links to other KEGG databases and outside databases. Some COMPOUND entries are also represented as GLYCAN and DRUG entries with the \"Same as\" links. While GLYCAN entries are represented as tree structures with monosaccharide codes, COMPOUND entries for peptides and polyketides, such as C11996 for methylromycin, are represented as sequences using the abbreviation codes for the monomeric units of amino acids and carboxylic acids. URL: [https://www.genome.jp/kegg/compound/](https://www.genome.jp/kegg/compound/)
14. _ZINC20_[12] is a free database of commercially-available compounds for virtual screening. ZINC contains over 230 million purchasable compounds in ready-to-dock, 3D formats. ZINC also contains over 750 million purchasable compounds that can be searched for analogs in under a minute. ZINC is provided by the Irwin and Shoichet Laboratories in the Department of Pharmaceutical Chemistry at the University of California, San Francisco (UCSF). The developers thank NIGMS for financial support (GM71896). URL: [https://zinc20.docking.org/](https://zinc20.docking.org/)
15. The _Tox21_[24] dataset comprises 12,060 training samples and 647 test samples that represent chemical compounds. There are 801 "dense features" that represent chemical descriptors, such as molecular weight, solubility, or surface area, and 272,776 "sparse features" that represent chemical substructures (ECFP10, DFS6, DFS8; stored in Matrix Market Format). Machine learning methods can either use sparse or dense data or combine them. For each sample there are 12 binary labels that represent the outcome (active/inactive) of 12 different toxicological experiments. Note that the label matrix contains many missing values (NAs); a sketch of loading these files appears at the end of this section. URL: [https://tripod.nih.gov/tox21/challenge/](https://tripod.nih.gov/tox21/challenge/)
16. _FooDB_ is the world's largest and most comprehensive resource on food constituents, chemistry, and biology. It provides information on both macronutrients and micronutrients, including many of the constituents that give foods their flavour, colour, taste, texture, and aroma. Each chemical entry in the FooDB contains more than 100 separate data fields covering detailed compositional, biochemical and physiological information (obtained from the literature). This includes data on the compound's nomenclature, its description, information on its structure, chemical class, its physico-chemical data, its food source(s), its colour, its aroma, its taste, its physiological effect, presumptive health effects (from published studies), and concentrations in various foods. Users are able to browse or search FooDB by food source, name, descriptors, function or concentrations. Depending on individual preferences users are able to view the content of FooDB from the Food Browse (listing foods by their chemical composition) or the Compound Browse (listing chemicals by their food sources). URL: [https://www.foodb.ca/](https://www.foodb.ca/)
17. _Open Food Facts_ -- This is a free, open, collaborative database of food products worldwide, with ingredients, allergens, nutrition facts, and all the tidbits of information found on product labels. The database is a part of Google's Summer of Code 2018. 5,000+ contributors have added 600K+ products from 150 countries using an app or their camera to scan barcodes and upload pictures of products and their labels. URL: [https://www.kaggle.com/openfoodfacts/world-food-facts](https://www.kaggle.com/openfoodfacts/world-food-facts)
18. _Scopus_ is Elsevier's abstract and citation database launched in 2004. Scopus is curated by independent subject matter experts. As of March 2023, Scopus includes 27,950 active titles: 26,591 active peer-reviewed journals, 192 trade journals, 1,167 book series, 11.7+ million conference papers from 148,500+ worldwide events, \"articles-in-press\" from 9,100+ journals, 292,000+ stand-alone books, 90.6+ million records (84+ million records post-1969 with references and 6.5+ million records pre-1970 with the oldest record dating back to 1788). There are also 49.2+ million patent records from five patent offices. URL: [https://www.scopus.com/](https://www.scopus.com/)
19. _Medical Cost Personal Datasets_[18] -- This dataset is used for forecasting insurance charges via regression modelling. The dataset includes age, sex, body mass index, children (dependents), smoker, region, and charges (individual medical costs billed by health insurance). The dataset is also available on GitHub. URL: [https://www.kaggle.com/mirichoi0218/insurance](https://www.kaggle.com/mirichoi0218/insurance)
20. _Plants For A Future (PFAF)_ is a database of 8,000+ edible and medicinal plants. It includes hardiness zones, care, hazards, physical characteristics, synonyms, habitats, edible uses, medicinal uses, other uses, cultivation details, propagation, and more. URL: [https://pfaf.org/user/](https://pfaf.org/user/)
21. The _Medicinal Plant Database of the Botanical Survey of India_ covers plants that are employed in different medicinal systems and ethnic medicines. India has a rich tradition of herbal medicines and it has made contributions not only in the form of Ayurveda and Siddha but also in the discovery of modern drugs and pharmacological research. This database provides information on scientific name, family, vernacular name, medicinal uses, location of species and images of herbarium specimen. In the first phase, a total of 1,915 species are listed and about 1,000 will be added in the next phase. URL: [https://bsi.gov.in/page/en/medicinal-plant-database](https://bsi.gov.in/page/en/medicinal-plant-database)
22. _CAB Database of Plant Science_ contains abstracts of internationally published scientific research. URL: [http://www.cabi.org/](http://www.cabi.org/)
23. _Dr Duke's Phytochemical and Ethnobotanical Databases_ is a database of the ethnobotanical uses and chemical activities in plants. URL: [https://phytochem.nal.usda.gov/phytochem/search](https://phytochem.nal.usda.gov/phytochem/search)
24. _Food and Agriculture Organization of the United Nations (FAO)_ is a specialised agency of the United Nations that leads international efforts to defeat hunger. The goal is to achieve food security for all and ensure that people have regular access to enough high quality food to lead active, healthy lives. URL: [http://www.fao.org/home/en/](http://www.fao.org/home/en/)
25. _Harvard University Herbaria and Libraries_. URL: [http://huh.harvard.edu/](http://huh.harvard.edu/)
26. The American Botanical Council's _HerbMed_ and _HerbMedPro_ -- an interactive, electronic herbal database that provides hyperlinked access to the scientific data underlying the use of herbs for health. It is an evidence-based information resource for professionals, researchers, and general public. HerbMedPro is the professional version of HerbMed. This enhanced version provides access to the entire database with continuous updating as the information is being compiled. URL: [https://www.herbalgram.org/resources/herbmedpro/](https://www.herbalgram.org/resources/herbmedpro/)
27. _Plantes medicinales_, the journal of the _Guilde des herboristes_. URL: [http://www.guildedesherboristes.org/](http://www.guildedesherboristes.org/)
28. The _Integrative Medicine Program_ at the MD Anderson Cancer Center engages patients and their families to become active participants in improving their physical, psycho-spiritual, and social health. The ultimate goals are to optimise health, quality of life, and clinical outcomes through personalised evidence-based clinical care, exceptional research and education. URL: [http://www.mdanderson.org/education-and-research/departments-programs-and-labs/programs-centers-institutes/integrative-medicine-program/index.html](http://www.mdanderson.org/education-and-research/departments-programs-and-labs/programs-centers-institutes/integrative-medicine-program/index.html)
29. _NAPRALERT_ -- A relational database of all natural products, including ethnomedical information, pharmacological and biochemical information of extracts of organisms in vitro, in situ, in vivo, in humans (case reports, non-clinical trials) and clinical studies. Similar information is available for secondary metabolites from natural sources. URL: [http://www.napralert.org/](http://www.napralert.org/)
30. The US National Library of Medicine (NLM), on the campus of the National Institutes of Health in Bethesda, Maryland, has been a centre of information innovation since its founding in 1836. The world's largest biomedical library, NLM maintains and makes available a vast print collection and produces electronic information resources on a wide range of topics that are searched billions of times each year by millions of people around the globe. It also supports and conducts research, development, and training in medical informatics and health information technology. URL: [https://www.nlm.nih.gov/](https://www.nlm.nih.gov/)
31. The W3 Tropicos database links over 1.38M scientific names with over 6.85M specimens and over 1.55M digital images. The data includes over 165K references from over 54.9K publications offered as a free service to the world's scientific community. URL: [https://www.tropicos.org/](https://www.tropicos.org/)
32. The _American Botanical Council_ maintains a list of databases and data sources. URL: [https://www.herbalgram.org/resources/related-links-page/databases/](https://www.herbalgram.org/resources/related-links-page/databases/)
33. _Abbott FreeStyle Libre 3_ is the world's smallest, thinnest glucose sensor that noninvasively tracks glucose levels in the body. It provides unsurpassed 14-day accuracy and optimal glucose alarms, but also evolves the portfolio with new features, such as continuous real-time glucose readings automatically delivered to a person's smartphone every minute and a sensor that is easy to apply with a one-piece applicator. URL: [https://www.abbott.com/corpnewsroom/strategy-and-strength/freeStyle-libre-3-worlds-smallest-sensor-is-here.html](https://www.abbott.com/corpnewsroom/strategy-and-strength/freeStyle-libre-3-worlds-smallest-sensor-is-here.html)
34. _mymonX_ combines a smart wearable and an app to produce non-invasive medical grade measurements of blood glucose levels, heart rate, ECG, blood pressure, oxygenation (SPo2), breathing rate (Rr), sleep, activity, and more. URL: [https://mymonx.co/products/mymonx-original-smart-watch](https://mymonx.co/products/mymonx-original-smart-watch)
35. _ZOE_ consists of easily applicable at-home tests that give insight into blood fat, blood sugar, and gut microbiome health. The results are then mapped into ZOE Scores for food (from 0 to 100). The product is backed by several academic publications. URL: [https://zoe.com/how-it-works](https://zoe.com/how-it-works)
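As an illustrative sketch for item 6 (PubChem), the PUG REST interface can be called with plain HTTP; the compound name and the property list below are examples only, and the PUG REST documentation describes the full set of available properties and output formats.

```python
import requests

# Look up a compound by name and request a few properties via PUG REST.
name = "aspirin"
url = (
    "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
    f"{name}/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON"
)
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json()["PropertyTable"]["Properties"][0])
```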
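As an illustrative sketch for item 10 (MoleculeNet), the collections can be loaded through the DeepChem package; the loader arguments, model choice, and hyperparameters below are illustrative only and follow DeepChem's documented MoleculeNet interface.

```python
import numpy as np
import deepchem as dc  # pip install deepchem

# Load Tox21 with 1024-bit ECFP fingerprints and a scaffold-based split.
tasks, (train, valid, test), transformers = dc.molnet.load_tox21(
    featurizer="ECFP", splitter="scaffold")

# A simple multitask classifier over the 12 toxicity assays.
model = dc.models.MultitaskClassifier(n_tasks=len(tasks), n_features=1024)
model.fit(train, nb_epoch=10)

metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(test, [metric], transformers))  # mean AUC-ROC on the test split
```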
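As an illustrative sketch for item 15 (Tox21), the sparse substructure features are distributed in Matrix Market format and the label matrix contains missing values; the file names below are placeholders for whatever the downloaded archive actually contains.

```python
import numpy as np
import pandas as pd
from scipy.io import mmread

# Placeholder file names: adapt to the actual contents of the archive.
sparse_features = mmread("tox21_sparse_train.mtx").tocsr()            # substructure counts
dense_features = pd.read_csv("tox21_dense_train.csv").values          # 801 descriptors
labels = pd.read_csv("tox21_labels_train.csv").values.astype(float)   # 12 assays, with NAs

# Mask out missing outcomes before computing any per-assay loss or metric.
observed = ~np.isnan(labels)
print(dense_features.shape, sparse_features.shape)
print("fraction of observed labels:", observed.mean())
```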
### News sentiment and social media
1. _RavenPack_ offer news analytics, regulatory filings, earnings dates, training data, job analytics, transcripts, and insider transactions datasets. URL: [https://www.ravenpack.com/](https://www.ravenpack.com/)
2. _Bloomberg Professional Services_ supply news and social sentiment data. URL: [https://www.bloomberg.com/professional/sentiment-analysis-white-papers/](https://www.bloomberg.com/professional/sentiment-analysis-white-papers/)
3. _Reuters/Refinitiv News Sentiment_ can be computed with the Eikon Data APIs (see the example sketch at the end of this section). URL: [https://developers.refinitiv.com/en/article-catalog/article/introduction-news-sentiment-analysis-eikon-data-apis-python-example](https://developers.refinitiv.com/en/article-catalog/article/introduction-news-sentiment-analysis-eikon-data-apis-python-example)
4. _InfoTrie FinSent_ stands for Financial News and Sentiment Screener. It provides real-time analysis for over 100,000 stocks, topics, companies, people, and other assets with up to 15 years of history. URL: [https://infotrie.com/finsent-stock-and-sentiment-screener/](https://infotrie.com/finsent-stock-and-sentiment-screener/)
5. The _Twitter API_ enables programmatic access to Twitter (a minimal sketch of the v2 recent-search endpoint appears at the end of this section). URL: [https://developer.twitter.com/en/docs/twitter-api](https://developer.twitter.com/en/docs/twitter-api)
6. _LinkedIn_ offers several APIs. URL: [https://developer.linkedin.com/product-catalog](https://developer.linkedin.com/product-catalog)
7. _Meta (Facebook)_ offer several APIs, platforms, products, and SDKs. URL: [https://developers.facebook.com/docs/](https://developers.facebook.com/docs/)
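As an illustrative sketch for item 3 (Reuters/Refinitiv news via the Eikon Data APIs), the eikon package can pull headlines once a local Eikon or Workspace session is running and an application key has been generated; the query string is an example only, and the sentiment scoring itself is left to a model of your choice.

```python
import eikon as ek  # Refinitiv's Eikon Data API package

ek.set_app_key("YOUR_APP_KEY")  # generated with the Eikon App Key Generator

# Pull recent English-language headlines for one instrument.
headlines = ek.get_news_headlines("R:IBM.N AND Language:LEN", count=20)
print(headlines[["versionCreated", "text"]].head())

# Each storyId in the result can be passed to ek.get_news_story() and scored
# with a sentiment model of your choice (e.g. VADER or a transformer model).
```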
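As an illustrative sketch for item 5 (the Twitter API), the v2 recent-search endpoint can be called with plain HTTP; a developer account and bearer token are required, and the available history and rate limits depend on your product tier.

```python
import requests

# Placeholder bearer token: obtain one from the Twitter developer portal.
url = "https://api.twitter.com/2/tweets/search/recent"
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}
params = {"query": "inflation lang:en -is:retweet", "max_results": 10}

response = requests.get(url, headers=headers, params=params, timeout=30)
response.raise_for_status()
for tweet in response.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
```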
### Retail and ecommerce
1. The _E-Commerce Sales Data_ on Kaggle is a comprehensive dataset with sales data across channels and financial information. Data includes SKUs, design numbers, stock levels, product categories, product sizes, product colours, the amount paid, rate per piece, date of sale, gross amounts, and more. URL: [https://www.kaggle.com/datasets/thedevastator/unlock-profits-with-e-commerce-sales-data](https://www.kaggle.com/datasets/thedevastator/unlock-profits-with-e-commerce-sales-data)
2. The _Electronic Product Pricing_ dataset on Kaggle offers 10 fields of pricing information for 7,000 electronic products. URL: [https://www.kaggle.com/datasets/retailrocket/ecommerce-dataset](https://www.kaggle.com/datasets/retailrocket/ecommerce-dataset)
3. _Datos_ offer a structured data feed with information on click events and funnel actions for the most popular U.S. online stores to make it easy to track and analyse purchase funnel activity on any given retail site. This feed is enriched with retailer metadata, such as item price, product name, category, etc. for an easy to understand taxonomy of what is happening on any given retailer. Currently the dataset includes U.S. data for Amazon, Walmart, Target, and Etsy, URL categorisation by event type (search, product view, purchase, etc.), breakdown by country and platform type -- desktop and mobile, and hundreds of millions of events per month. The dataset is available on a commercial basis. URL: [https://datos.live/online-retail-feed/](https://datos.live/online-retail-feed/)
4. _Avocado Prices_ -- The dataset shows the historical data on avocado prices and sales volume in multiple US markets. The information has been generated from the Hass Avocado Board website. It represents weekly 2018 retail scan data for national retail volume (units and price), along with region, type (conventional or organic), and volume sold. The dataset can be applied to other fruits and vegetables across geographies. Contributed by the Hass Avocado Board. URL: [https://www.kaggle.com/neuromusic/avocado-prices](https://www.kaggle.com/neuromusic/avocado-prices)
5. _Red Wine Quality_[2] -- Red wine quality is a clean and straightforward practice dataset for regression or classification modelling. The two datasets available are related to red and white variants of the Portuguese 'Vinho Verde' wine. The information in this dataset includes fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, and others. The dataset is also available on the UCI machine learning repository. URL: [https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009](https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009)
6. _RetailNext_ offers accurate foot traffic measurement (Traffic 2.0), video security, occupancy, and shopper journey insights. Some of the products are built on RetailNext's Aurora, the next-generation sensor for physical location analytics. URL: [https://retailnext.net/](https://retailnext.net/)
7. Rather than being a dataset vendor, _Datarade_ offer \"the easy way to find, compare, and access data products from 500+ premium data providers across the globe.\" URL: [https://datarade.ai/](https://datarade.ai/)
8. The _Amazon Selling Partner API (SP-API)_ is a REST-based API that helps Amazon selling partners programmatically access their data on orders, shipments, payments, and much more. Applications using the SP-API can increase selling efficiency, reduce labour requirements, and improve response time to customers, helping selling partners grow their businesses. The Selling Partner API can be used by both Selling Partners and Vendors, and is designed to improve efficiency and aid in accelerating growth. URL: [https://developer.amazonservices.com/](https://developer.amazonservices.com/)
A number of interesting retail and ecommerce-related datasets can be found on Kaggle.
### Satellite imagery
1. _EarthScope Consortium_ operates the National Science Foundation's Geodetic Facility for the Advancement of Geoscience (GAGE) and Seismological Facility for the Advancement of Geoscience (SAGE). The Synthetic Aperture Radar (SAR) data available from the GAGE Facility includes satellite-transmitted and received radar scans of the Earth's surface. SAR data, analysed using Interferometric SAR (InSAR) techniques, can be used to model millimeter-to-centimeter scale deformation of the Earth's surface over regions tens to hundreds of kilometers across. These displacement fields are essential guides for studies of tectonics, earthquake focal mechanisms, volcano behaviour, hydrology, and public safety related to Earth hazards. The primary tool for discovering and accessing SAR data from the GAGE Facility is the Seamless SAR Archive (SSARA). This tool is available as a command line utility (API) and web-based interface (GUI) that allows for the search and download of data from WInSAR's collection as well as data from the Alaska Satellite Facility (ASF) and some PI specific collections. URL: [https://www.unavco.org/geodetic-imaging/sar-data/](https://www.unavco.org/geodetic-imaging/sar-data/)
2. _ArcGIS Living Atlas of the World_ is the foremost collection of geographic information from around the globe. It includes maps, apps, and data layers. It includes Landsat Level-2 archive -- 41 years of scientific earth observation imagery. The Living Atlas Landsat Level-2 image service provides seamless access to a unique historical record of the Earth. ArcGIS Living Atlas of the World includes authoritative live feeds and other content that helps learn more about active hurricanes, cyclones, and typhoons. Since its initial release in 2022, the Wildfire Aware application in Living Atlas has been helping improve awareness and understanding of wildfires throughout the United States. ArcGIS Living Atlas also includes authoritative content that helps users learn more about current sea, temperature, and coral bleaching. URL: [https://livingatlas.arcgis.com/en/home/](https://livingatlas.arcgis.com/en/home/)
3. _Maxar_ have decades of experience manufacturing communication and Earth observation satellites -- with more than 285 Maxar-built satellites and 2,750 cumulative years on orbit. Maxar's Earth observation constellation offers the most comprehensive suite of commercial satellite imagery; they offer diversity in resolution, currency, spectral bands, and accuracy. URL: [https://www.maxar.com/products/](https://www.maxar.com/products/)
4. The _ICEYE_ radar satellite constellation delivers radar imaging that makes it possible to see the surface of the Earth through clouds and even in total darkness. Governments and businesses can now look at their locations of interest 24/7. The constellation provides new images of the same location every hour. Tracking all changes that happen even within individual days is finally possible. URL: [https://www.iceye.com/](https://www.iceye.com/)
### Shipping and logistics
1. _Pole Star API_ offers a customisable data feed that can be used to access vessel registration data for screening and tracking; retrieving screening results and vessel details; retrieving detailed watchlist checks on vessels, companies, and associated countries; retrieving detailed ship movement history; retrieving the PurpleTRAC PDF Screening Report; and monitoring tracked vessels to get their positions and other events. URL: [https://developers.polestar-production.com/getting-started](https://developers.polestar-production.com/getting-started)
2. _Spire Maritime_ offer _Enhanced Satellite Automatic Identification System (AIS)_, a solution that offers vessel tracking in highly contested areas (busy shipping lanes or congested ports) where signal collision makes the vessel signal detection harder for other AIS collection methods. Such areas include the Arabian Gulf, Bab al-Mandab Strait, Cape of Good Hope, Gulf of Mexico, Mediterranean Sea, North and Baltic Seas, South China Sea, Strait of Gibraltar, and Suez Canal. Enhanced Satellite AIS provides a high frequency of position updates and global AIS coverage that helps customers track the vessels with enhanced detection. The product comes with a live API feed and historical data. URL: [https://spire.com/maritime/solutions/enhanced-satellite-ais/](https://spire.com/maritime/solutions/enhanced-satellite-ais/)
3. _Spire Maritime's Port Events_ enable effective supply chain monitoring, port operations, and feeding data science models. Port Events is based on the most comprehensive AIS coverage available, including Satellite AIS, Terrestrial AIS, and Enhanced Satellite AIS. URL: [https://spire.com/maritime/solutions/port-events/](https://spire.com/maritime/solutions/port-events/)
4. _Starboard Maritime Intelligence_ helps nations tackle complex maritime challenges, ranging from risk assessing arriving vessels to detecting illegal fishing and uncovering non-reporting dark vessels. By combining global Automatic Identification System (AIS) data, multiple layers of satellite data, scientific models, and other information or intelligence, Starboard enables teams to effectively analyse and investigate vessels and areas -- all on a secure and intuitive platform. URL: [https://starboard.nz/](https://starboard.nz/)
5. _Critchlow Geospatial_ offer next-generation satellite imagery (including vast imagery archives) for commercial use integrated with GIS software and authoritative location-based data. URL: [https://www.critchlow.co.nz/](https://www.critchlow.co.nz/)
6. _Alen Space_ provides an end-to-end solution to orbit clients' Automatic Identification System (AIS) services. They help companies interested in providing maritime security services, law enforcement, Search and Rescue (SAR), maritime surveillance, environmental solutions, and fleet management services for commercial users (shipping companies and ship owners). SAT-AIS is a solution that overcomes terrestrial coverage limitations with the potential to provide AIS services for any given area on Earth. The VDES solution (VHF Data Exchange System) with small satellites takes advantage of the new VDE-SAT functionality, which will allow bidirectional communications and a unified standard in maritime communications. The VDES satellite service (VDE-SAT) is prepared to transform the sector and to meet the needs of the maritime industry. VDES will allow real-time control of all data flows between vessels, authorities, and service providers around the world. Vessels that are not broadcasting their identification, position, and course with AIS transponders can be detected with SIGINT capabilities that rely on CubeSats. Those dark vessels are still communicating with push-to-talk (PTT) radio systems or satellite phones, or they are navigating with S-band or X-band radio systems. Those signals can be identified from space. URL: [https://alen.space/small-satellites-for-ais-services/](https://alen.space/small-satellites-for-ais-services/)
7. _ICEYE_ also offer a SAR satellite data solution that can be used to surveil maritime activity in any area of interest -- day or night, in any weather -- and that can be used to react quickly to potential security threats or illegal activities. Such data can be used to detect dark vessels and oil trafficking from space. URL: [https://www.iceye.com/sar-data-applications/maritime-domain-awareness](https://www.iceye.com/sar-data-applications/maritime-domain-awareness)
8. _Unseenlabs_, a commercial startup from Rennes in northwestern France, provides civilian and military actors in the maritime sector with the ability to locate ships by detecting and characterising their passive electromagnetic signature. They claim to be "able to track any ship, anywhere, anytime, where other systems cannot." The company describes itself as the world leader in radio frequency data and solutions for maritime domain awareness. URL: [https://unseenlabs.space/](https://unseenlabs.space/)
9. _SEA.AI_ detects floating objects early, using thermal and optical cameras to catch even objects that escape conventional systems such as Radar or AIS: unsignalled crafts or other floating obstacles, e.g., containers, tree trunks, buoys, inflatables, kayaks, persons overboard, etc. SEA.AI offer SEA.AI Offshore (high-tech safety and convenience for blue water sailors), SEA.AI Sentry (for commercial and government use as well as for use on motoryachts), and SEA.AI Competition (for ocean racing and performance yachts with rotating mast). The SEA.AI system processes input from low-light and thermal cameras, using the latest machine vision technology, best-in-class deep learning capabilities, and a proprietary database of millions of annotated marine objects. URL: [https://sea.ai/](https://sea.ai/)
10. The _Visiwise container tracking tool_ collects data from multiple sources such as shipping lines, AIS data, ports and terminals, and railways, and then refines and standardises it to provide a unified experience for tracking cargoes and bringing visibility. URL: [https://www.visiwise.co/tracking/container/](https://www.visiwise.co/tracking/container/)
11. _MYTRACKINGDEVICES GPS and IoT platform_ provides solutions for shipment tracking and monitoring, asset tracking and recovery, vehicle tracking, and personal tracking. The solution consists of small tracking devices that transmit their location from anywhere in the globe by connecting to cellular data networks. The low-powered devices are capable of up to 12 months of battery life and even longer for tracking containerised cargo. The current precision locating technology combines multiple sensors using GPS, WiFi, and Cell-ID. URL: [https://mytrackingdevices.com/gps-container-tracking/](https://mytrackingdevices.com/gps-container-tracking/)
12. _Titanic_ -- The Titanic dataset consists of original data from the Titanic competition and is ideal for binary logistic regression. The dataset contains information about the passenger's id, age, sex, fare, etc. The Titanic competition involves users creating a machine learning model that predicts which passengers survived the Titanic shipwreck. URL: [https://www.kaggle.com/heptapod/titanic](https://www.kaggle.com/heptapod/titanic)
### Sports
1. _European Soccer Database_ -- The dataset contains 25,000+ matches, 10,000+ players, 11 European countries with their lead championship, seasons 2008 to 2016, players and teams' attributes sourced from EA Sports' FIFA video game series, including weekly updates, team line up with squad formation (X, Y coordinates), betting odds from up to 10 providers, detailed match events (goal types, corner, possession, fouls, etc.) for 10,000+ matches. For non-commercial use only. URL: [https://www.kaggle.com/hugomathien/soccer](https://www.kaggle.com/hugomathien/soccer)
2. _Football Analytics_ -- This dataset contains European football team stats. Only teams of the Premier League, Ligue 1, Bundesliga, Serie A, and La Liga are listed. The auxiliary datasets contain 2021-2022 Football Player Stats and 2021-2022 Football Team Stats. URL: [https://www.kaggle.com/datasets/vivovinco/football-analytics](https://www.kaggle.com/datasets/vivovinco/football-analytics)
3. _Football DataSet_ -- 96,000+ matches with a detailed minute-by-minute history of each game, including player names, goals, yellow/red cards, penalties, VAR decisions, missed penalties, etc. Season 2021-2022 is included. 18 European leagues from 10 countries with their lead championship. URL: [https://www.kaggle.com/datasets/bastekforever/complete-football-data-89000-matches-18-leagues](https://www.kaggle.com/datasets/bastekforever/complete-football-data-89000-matches-18-leagues)
4. The Football Dataset from the Football Computer Vision project [8]. URL: [https://universe.roboflow.com/football-detect/football-xrbge](https://universe.roboflow.com/football-detect/football-xrbge)
5. RapidAPI lists several football APIs that provide football (soccer) data for developers. URL: [https://rapidapi.com/collection/football-soccer-apis](https://rapidapi.com/collection/football-soccer-apis)
## 4 Conclusion
In this, necessarily incomplete, compendium we have provided some citations and/or links to datasets that we, and/or our collaborators, consider interesting and useful. We hope to update this list in future editions.
## References
* [1] H. M. Berman. The protein data bank. _Nucleic Acids Research_, 28(1):235-242, jan 2000.
* [2] P. Cortez, Antonio Luiz Cerdeira, Fernando Almeida, Telmo Matos, and Jose Reis. Modeling wine preferences by data mining from physicochemical properties. _Decis. Support Syst._, 47:547-553, 2009.
* [3] Alexander Denev and Saeed Amen. _The Book of Alternative Data: A Guide for Investors, Traders and Risk Managers_. Wiley, 2020.
* [4] Jim Euchner. Generative AI. _Research-Technology Management_, 66(3):71-74, apr 2023.
* [5] Otavio Ferreira. _Semantic Web Services: A RESTful Approach_. IADIS, 2009.
* [6] I. Fette and A. Melnikov. RFC 6455: The WebSocket protocol. Technical report, Internet Engineering Task Force (IETF), 2011.
* [7] FIX Protocol Ltd. FIX latest online specification (as of EP276). Technical report, FIX Protocol Ltd, 2023.
* [8] Football Detect. Football dataset. [https://universe.roboflow.com/football-detect/football-xrbge](https://universe.roboflow.com/football-detect/football-xrbge), Feb 2023. Visited on 2023-09-05.
* [9] Ian J. Goodfellow, Yoshua Bengio, and Aaron Courville. _Deep Learning_. MIT Press, Cambridge, MA, USA, 2016. [http://www.deeplearningbook.org](http://www.deeplearningbook.org).
* [10] Kosuke Hashimoto, Akiyasu C. Yoshizawa, Shujiro Okuda, Keiichi Kuma, Susumu Goto, and Minoru Kanehisa. The repertoire of desaturases and elongases reveals fatty acid variations in 56 eukaryotic genomes. _Journal of Lipid Research_, 49(1):183-191, jan 2008.
* [11] Janna Hastings, Gareth Owen, Adriano Dekker, Marcus Ennis, Namrata Kale, Venkatesh Muthukrishnan, Steve Turner, Neil Swainston, Pedro Mendes, and Christoph Steinbeck. ChEBI in 2016: Improved services and an expanding collection of metabolites. _Nucleic Acids Research_, 44(D1):D1214-D1219, oct 2015.
* [12] John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. ZINC: A free tool to discover chemistry for biology. _Journal of Chemical Information and Modeling_, 52(7):1757-1768, jun 2012.
* [13] Daniel Jacobson, Greg Brail, and Dan Woods. _APIs: A Strategy Guide_. O'Reilly, 2011.
* [14] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. _Nature_, 596(7873):583-589, jul 2021.
* [15] Anthony Kay. Tesseract: An open-source optical character recognition engine. _Linux J._, 2007(159):2, jul 2007.
* [16] Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, Leonid Zaslavsky, Jian Zhang, and Evan E Bolton. PubChem 2023 update. _Nucleic Acids Research_, 51(D1):D1373-D1380, oct 2022.
* [17] Nikolay Kolyada, Martin Potthast, and Benno Stein. Webis-dataset-reviews-21, 2021.
* [18] Brett Lantz. _Machine Learning with R_. Packt, 3 edition, 2019.
* [19] Wes McKinney. _Python for Data Analysis: Data Wrangling with pandas, NumPy and Jupyter_. O'Reilly, 2 edition, 2022.
* [20] David Mendez, Anna Gaulton, A Patricia Bento, Jon Chambers, Marleen De Veij, Eloy Felix, Maria Paula Magarinos, Juan F Mosquera, Prudence Muowo, Michal Nowotka, Maria Gordillo-Maranon, Fiona Hunter, Laura Junco, Grace Mugumbate, Milagros Rodriguez-Lopez, Francis Atkinson, Nicolas Bosc, Chris J Radoux, Aldo Segura-Cabrera, Anne Hersey, and Andrew R Leach. ChEMBL: towards direct deposition of bioassay data. _Nucleic Acids Research_, 47(D1):D930-D940, nov 2018.
* [21] Ryan Mitchell. _Web Scraping with Python_. O'Reilly, 2 edition, 2018.
* [22] Jan Novotny, Paul Bilokon, Aris Galiotos, and Frederic Deleze. _Machine Learning and Big Data with kdb+/q_. Wiley, 2019.
* [23] Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson, and Gianluca Bontempi. Calibrating probability with undersampling for unbalanced classification. In _2015 IEEE Symposium Series on Computational Intelligence_. IEEE, dec 2015.
* [24] Ann M. Richard, Ruili Huang, Suramya Waidyanatha, Paul Shinn, Bradley J. Collins, Inthirany Thillianadarajah, Christopher M. Grulke, Antony J. Williams, Ryan R. Lougee, Richard S. Judson, Keith A. Houck, Mahmoud Shobair, Chihae Yang, James F. Rathman, Adam Ysgar, Suzanne C. Fitzpatrick, Anton Simeonov, Russell S. Thomas, Kevin M. Crofton, Richard S. Paules, John R. Bucher, Christopher P. Austin, Robert J. Kavlock, and Raymond R. Tice. The tox21 10k compound library: Collaborative chemistry advancing toxicology. _Chemical Research in Toxicology_, 34(2):189-216, nov 2020.
* [25] Leonard Richardson. Beautiful soup documentation. _April_, 2007.
* [26] Sagar Shivaji Salunke. _Selenium Webdriver in Python: Learn with Examples_. CreateSpace Independent Publishing Platform, North Charleston, SC, USA, 1st edition, 2014.
* [27] Robert E. Schapire and Yoav Freund. _Boosting: Foundations and Algorithms_. The MIT Press, 2012.
* [28] Bjarne Stroustrup. _A Tour of C++_. Addison-Wesley Professional, 3 edition, 2022.
* [29] David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, Nazanin Assempour, Ithayavani Iynkaran, Yifeng Liu, Adam Maciejewski, Nicola Gale, Alex Wilson, Lucy Chin, Ryan Cummings, Diana Le, Allison Pon, Craig Knox, and Michael Wilson. DrugBank 5.0: a major update to the DrugBank database for 2018. _Nucleic Acids Research_, 46(D1):D1074-D1082, nov 2017.
* [30] Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. Moleculenet: A benchmark for molecular machine learning. March 2017. | Recent advances in data science, machine learning, and artificial intelligence, such as the emergence of large language models, are leading to an increasing demand for data that can be processed by such models. While data sources are application-specific, and it is impossible to produce an exhaustive list of such data sources, it seems that a comprehensive, rather than complete, list would still benefit data scientists and machine learning experts of all levels of seniority. The goal of this publication is to provide just such an (inevitably incomplete) list -- or compendium -- of data sources across multiple areas of applications, including finance and economics, legal (laws and regulations), life sciences (medicine and drug discovery), news sentiment and social media, retail and ecommerce, satellite imagery, and shipping and logistics, and sports.
_Keywords:_ artificial intelligence (AI), machine learning (ML), data science, datasets, data, alternative data
Minh Duc Nguyen
e-mail: [email protected]\\({}^{1}\\)Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow, Russia
## 1 Introduction
The ultimate goal of research in space weather is creating complex operational magnetosphere-ionosphere models which allow predicting the radiation risk for space satellites in different orbits, and also estimating the occurrence risk of technological disasters due to magnetic storms and charged particle precipitations. Currently, solutions to the problem involve collecting data from all available satellites and ground stations, each of which measures a limited number of parameters of space weather, and generating forecasts based on the result of its analysis. Such task is challenging because data formats, storage methods, datasets and values of measured parameters differ between satellites. To achieve this goal an automated system called SDDS has been created at SINP MSU. SDDS automatically connects to different data storage servers of various satellites, downloads real-time data (both decoded txt-files and binary telemetry) whenever available, decodes binary telemetry, processes decoded data and stores it in a unified database.
Satellite data collected by SDDS is used as the primary source of input data for space weather operational models created at SINP MSU. Since the correctness and preciseness of space weather models depend much on the input data, it is critical to be sure that data is correctly processed. It is also important to detect errors in any step of the data processing cycle as soon as possible so that they could be fixed quickly and their consequences could be prevented in the future. Solution to the problem is monitoring all activities of all components involved in data processing and sending alertsto responsible people (satellite developers, system administrators, workers on duty, etc.) so they could take measures in time. This paper is organized as follows: in the second section, we consider several existing solutions to the problem compared to our approach; in section 3, we give a more detailed view of the overall architecture of the Live Monitor subsystem. A brief description of the backend library is considered in section 4. Also in this section, we explain how events are classified and how alerts are sent via different mechanisms. Section 5 is dedicated to event representation on the web interface. In section 6, we give a brief description of using Live Monitor's API to create a customized monitoring service. In conclusion, we give a short resume of our completed work and describe our vision of the future perspective.
## 2 Related works
Since SDDS is working on Linux, the first solution came to mind is using an existing open source monitoring solution such as Zabbix [1], Nagios [2], or MMonit [3] to accomplish the task. But the detailed functional analysis showed that these systems were mainly designed for monitoring IT infrastructure (servers, routers, switches, etc.), system and network services. Zabbix and Nagios support application monitoring but this function is commercial. To monitor a custom service (or application) both Zabbix and Nagios assume that one must write a wrapper which runs a number of tests to check the service and produces standardized output data interpretable by the interface of the solution. The server component of Zabbix or Nagios then call this wrapper directly or via a client agent on a regular basis to check the service. Such solutions are not suitable for SDDS because of several reasons. Processing data of different satellites involves various components, the components can change dynamically, and working states of each component also differ. Writing a wrapper for each satellite would lead to a big amount of source code to be maintained. Additional checks on a regular basis for too many services would affect the overall performance of the operating system. Thus, a lightweight event-driven monitoring mechanism would be better.
A better solution is requiring all components of SDDS to inform about their current states in such a way that states could be treated uniformly. Each event of each state can be logged as a record to a journal file. When the log record is produced, we can use a message broadcasting server to either send the record to a web interface to show it to an operator or deliver the record via email or a messenger service directly to him. We have developed a library in three major programming languages Python, PHP, and JavaScript solely for this purpose.
While this approach can be applied perfectly to under development programs and internal components of SDDS being maintained by us, the same is not true for external programs that are used by SDDS to extract scientific data from the raw binary data received from satellites. It is a real challenge because most of these programs were written using different tools, both open source and commercial, and programming languages. The explanation of this fact is that data formats of different satellites differ and physicists use the tools they know best to achieve their goals. Changing these programs is not reasonable, mainly because either many of them were written such a long time ago that no one knows exactly what was implemented inside, or they are such complicated, so changes can lead to unexpected behaviors, resulting in erroneous data. To overcome this difficulty, we have decided to run these programs inside a wrapper which uses the developed library. Our final solution became the Live Monitor subsystem that is used to monitor all activities of SDDS in real-time. We have also developed a RESTful API so that physicists, whose space weather models are working based on the data supplied by SDDS, can use it to create their own monitoring scheme and deliver customized alerts to their customer.
## 3 Live Monitor's architecture
Live monitor subsystem consists of the following components: a backend logging library, RabbitMQ message broker, a RESTful API backend, a frontend UI library. An illustration of the architecture is shown below in figure 1.
The backend logging library is a customization of the popular Python Logging module. Every component of the SDDS system uses the library to inform about its states while running. External programs are executed within a wrapper of the SDDS global controller. When an external program crashes or returns an error code, the wrapper informs about it using the library. Besides the standard behavior which is writing short text messages in different log levels to a log file, the library sends these text messages to the RabbitMQ message broker [4] via a TCP socket. Messages are formatted using the STOMP protocol [5]. RabbitMQ in turn broadcast received text messages to all active web clients. When an error occurs during a certain state of data processing, the library informs operators about the error by sending a corresponding error description directly to the operators via the Telegram messenger service and/or email. Notification features can be turned on and off by editing proper configuration files.
Each data processing cycle is divided into stages. For example, a processing cycle of data from Meteor-M2 satellites consists of the following stages: connecting to data sources, downloading new raw data files, extracting scientific binary data files from raw ones, processing binary data files, adding processed data to the database, moving both raw and binary files to the local storage. Processing stages of different satellites differ. Stages to be displayed on the web interfaces of all components are customizable. One just needs to define a list of all states in the component configuration file; the library automatically does everything else. Since a satellite has a number of instruments on board, the whole processing stage consists of sub-stages each of which belongs to each instrument. An example is shown below in figure 2.
Figure 1: Live Monitorβs architecture
## 4 Live Monitor's backend library
The primary goal of Live Monitor's backend library is logging all events from all components of SDDS as text messages to log files. If an SDDS's component needs to be monitored in real-time, one can change the proper property in the component configuration file, and all logging messages will be passed to RabbitMQ message broker and then delivered to all web clients. To monitor external programs used for decoding scientific data from raw one, SDDS execute them in a wrapper. The wrapper logs all start and stop execution points and also error codes and output of these programs using the logging library.
The logging library supports four levels of logging messages: debug, info, warning, and error. Error messages are logged when an error occurs during the data processing cycle which could lead to incorrectly processed data or cause a component failure. Warning messages are logged for minor errors that do not affect the data correctness and normal functioning. Info messages are just normal text descriptions of events during the data processing cycle. Debug messages include diagnostic information that is helpful in failure investigation. If the Live Monitor property is enabled in the configuration file of a satellite, error messages will be passed directly to operators on duty using the Telegram messenger service.
Another responsibility of Live Monitor's backend is to control what should be shown on the web interface during a data processing cycle of a satellite. When a satellite is created, the satellite operator defines in the configuration what stages the data processing cycle consists of and what instruments are working on the satellite. A RESTful API has been developed to deliver this kind of information to the frontend library flexibly and to allow remote work with Live Monitor subsystem.
RabbitMQ was used as the message broker server to deliver short messages to the frontend web client, primarily, because of its excellent documentation and community support. Also, RabbitMQ provides libraries for all major programming languages, which is a big advantage. In our case, the type of message distribution is \"publish-subscribe\" with no guaranteed delivery. In this scenario, RabbitMQ demonstrates an impressive performance and stability according to the performance bench
Figure 2: Live Monitorβs architecturemark in the study [6]. Also, RabbitMQ requires less effort in configuration settings and implementation, and it's more reasonable compared with Apache ActiveMQ Apollo [7] or ZeroMQ [8].
## 5 Live Monitor's frontend library
The main goal of the frontend library is to control how a data processing cycle of a satellite should be shown on the web interface. When a user opens the Live Monitor's page of a satellite, the frontend library sends a request to the backend according to the RESTful API to retrieve necessary information of what should be shown. The answer from the backend is a JSON object that consists of a number of stages and a number of instruments decoders involved in the data decoding stage. After that, the frontend library establishes a WebSocket connection with the RabbitMQ message broker. When a text message of a stage arrives, the frontend library parses its content and changes the visual appearance of the stage. Below is an example of the common \"connecting-to-data-sources\" stage of each satellite.
We use WebSocket protocol [9] to deliver messages from RabbitMQ broker to subscribed web clients because of its performance. The big advantage of using WebSocket instead of AJAX long polling requests [10] is that we can push notifications to clients when an event occurs through a bi-directional socket. It significantly reduces the workload of the server. In the case of using AJAX long polling requests, each web client would have to poll the broker regularly for new events. This approach leads to significant performance overheads, especially when we have a large number of clients, due to the unnecessary check data packets sending back and forth between the broker and the clients.
We use Simple (or Streaming) Text Oriented Messaging Protocol (STOMP) as the communication language between the broker and the clients. STOMP provides an interoperable wire format so that STOMP clients can communicate with any STOMP message broker to provide easy and widespread messaging interoperability among many languages, platforms, and brokers. It suits our needs best. STOMP is a very simple and easy to be implemented protocol, coming from the HTTP school of design. It is very easy to write a client to get yourself connected. For example, one can use Telnet to
log into any STOMP broker and interact with it. We have managed to write three STOMP clients (in Python, PHP, and JavaScript) in just a couple of hours and to integrate them with the RabbitMQ broker.
Figure 3: Different states of the "connecting-to-data-source" stage of a satellite
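For instance, a Python STOMP client of this kind can be written with stomp.py in a few lines; the destination and credentials below are placeholders matching the publisher sketch given earlier.

```python
import time
import stomp

class EventListener(stomp.ConnectionListener):
    def on_message(self, frame):  # stomp.py >= 8 signature
        # frame.body carries the short text message describing a stage event
        print(frame.headers.get("level"), frame.body)

conn = stomp.Connection(host_and_ports=[("localhost", 61613)])
conn.set_listener("", EventListener())
conn.connect("guest", "guest", wait=True)
conn.subscribe(destination="/topic/sdds.events", id="1", ack="auto")
time.sleep(60)  # keep the client alive to receive broadcasts
conn.disconnect()
```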
## 6 Customized monitoring service based on Live Monitor
At the moment of writing, Live Monitor is used mostly to monitor data processing of satellites. However, it is also possible for one to create his own monitoring service based on Live Monitor's API.
First of all, one must be a registered user of our SDDS system to be authorized to use Live Monitor's API. To create a customized monitoring service, a user sends a POST request with a JSON body which consists of a service name, all its stages, and the component names involved in each stage (a sketch of such a request is given after the list below). When such a request is received, Live Monitor's backend checks the request body for correctness. If the request is correct, the backend creates a RabbitMQ channel dedicated only to delivering messages of the requested service, together with a configuration file. As the answer, the user receives a JSON object which contains a URL to the page of the requested service and a login/password pair to be used in the user's program to deliver messages to the RabbitMQ message broker. To control the behaviour of the service, one sends the proper requests to Live Monitor's backend. Currently, the following operations are supported by Live Monitor:
* create/delete a customized monitoring service;
* switch a monitoring service on/off;
* switch Telegram message delivery for a service on/off.
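A sketch of the service-creation request described above is given below; the endpoint, field names, and authentication details are assumptions, since only the general shape of the JSON body (service name, stages, component names) and of the answer (page URL plus RabbitMQ credentials) is specified.

```python
import requests

service = {
    "name": "space_weather_forecast",
    "stages": [
        {"name": "fetch_input", "components": ["sdds_reader"]},
        {"name": "run_model", "components": ["forecast_engine"]},
        {"name": "publish_results", "components": ["report_builder"]},
    ],
}

# Hypothetical endpoint; SDDS user authentication is omitted here.
resp = requests.post("https://sdds.example.org/api/livemonitor/services",
                     json=service, timeout=10)
answer = resp.json()
print(answer["url"], answer["login"], answer["password"])
```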
## 7 Conclusion
SDDS is the primary satellite data processing system being used at SINP MSU. The Live Monitor subsystem of SDDS system has been deployed from the first day of operation. During 12 months of operation Live Monitor helped us react to any occurred event in data processing in time. Since all stages of data processing are shown in details, Live Monitor allows us to identify and localize the scope of a problem immediately and hence prevent or fix them quickly. In future, we plan to support more operations to control the behavior of monitoring services, such as manipulating a stage name or a component name of a stage, controlling the monitoring behavior of each stage and component of a service independently, and so on.
## Acknowledgments
We would like to thank Dr. Vladimir Kalegaev for helpful discussions on satellite data processing and clear problem statements. Also, we would like to thank Dr. Alexander Kryukov for his support in all aspects. This project is supported by RSF grant #16-17-00098.
## References
* [1] Zabbix SIA, _Zabbix Documentation_ ([https://www.zabbix.com/documentation/3.0/manual](https://www.zabbix.com/documentation/3.0/manual), Version 3.0)
* [2] Ethan Galstad, Nagios Core Development Team, and Community Contributors, _Nagios Core Documentation_ ([https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/index.html](https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/index.html), October 4th, 2016)
* [3] Tildeslash Ltd, _M/Monit User Manual_ ([https://mmonit.com/documentation/mmonit_manual.pdf](https://mmonit.com/documentation/mmonit_manual.pdf), Version 3.7.1)
* [4] Pivotal Software, Inc. _RabbitMQ Server Documentation_ ([https://www.rabbitmq.com/admin-guide.html](https://www.rabbitmq.com/admin-guide.html), Version 3.6.9)
* [5]_STOMP Protocol Specification_ ([https://stomp.github.io/stomp-specification-1.2.html](https://stomp.github.io/stomp-specification-1.2.html), Version 1.2)
* [6] Hiram Chirino, _Stomp Benchmark_ ([https://github.com/chirino/stomp-benchmark](https://github.com/chirino/stomp-benchmark))
* [7] The Apache Software Foundation, _ActiveMQ Apollo User Manual_ ([https://activemq.apache.org/apollo/documentation/user-manual.html](https://activemq.apache.org/apollo/documentation/user-manual.html), Version 1.7.1)
* [8] Pieter Hintjens, _ZeroMQ - The Guide_ ([http://zguide.zeromq.org/page:all](http://zguide.zeromq.org/page:all), Version 2.2)
* [9] Internet Engineering Task Force, _The WebSocket Protocol_ ([https://tools.ietf.org/html/rfc6455](https://tools.ietf.org/html/rfc6455), December 2011)
* [10] J. J. Garrett, _Ajax: A New Approach to Web Applications_, ([https://web.archive.org/web/20080702075113/http://www.adaptivepath.com/ideas/essays/archives/000385.php](https://web.archive.org/web/20080702075113/http://www.adaptivepath.com/ideas/essays/archives/000385.php), February 18th, 2005) | This work describes Live Monitor, the monitoring subsystem of SDDS - an automated system for space experiment data processing, storage, and distribution created at SINP MSU. Live Monitor allows operators and developers of satellite data centers to identify errors occurred in data processing quickly and to prevent further consequences caused by the errors. All activities of the whole data processing cycle are illustrated via a web interface in real-time. Notification messages are delivered to responsible people via emails and Telegram messenger service. The flexible monitoring mechanism implemented in Live Monitor allows us to dynamically change and control events being shown on the web interface on our demands. Physicists, whose space weather analysis models are functioning upon satellite data provided by SDDS, can use the developed RESTful API to monitor their own events and deliver customized notification messages by their needs. | Condense the content of the following passage. | 160 |
Lukasz Tulczyjew, Michal Kawulok,, Nicolas Longepe, Bertrand Le Saux,, and Jakub Nalepa,
LT, MK and JN are with Silesian University of Technology (SUT), Gliwice, Poland (e-mail: [email protected]) and with KP Labs, Gliwice, Poland. NL and BLS are with d-lab, European Space Agency, Frascati, Italy.This work was funded by the European Space Agency (the GENESIS project), supported by the ESA Q-lab ([https://pliab.phi.nics.int/](https://pliab.phi.nics.int/)), and by the SUT grant for maintaining and developing research potential.
## I Introduction
Hyperspectral imaging (HSI) allows for retrieving data of high spectral dimensionality, commonly at a cost of lower spatial resolution, which means that a single hyperspectral pixel presents a mixture of signatures from many endmembers, or pure signature of a given material. This is especially visible for the satellite missions, where the spatial resolution may reach tens of meters for such spaceborne-acquired hyperspectral data. The process of estimating the individual endmembers along with their fractional abundances is known as _hyperspectral unmixing_ (HU). Initial approaches toward HU, including the linear mixing model (LMM) [1], were underpinned with the assumption that each pixel is a linear combination of the endmembers' abundances. Although this assumption may not hold due to the variations in illumination, atmospheric conditions, and spectral variability, resulting in low accuracy of the linear models, the linear mixture model is a good approximation for remote sensing applications, in which hyperspectral pixels commonly contain large and homogeneous regions of coherent materials [2]. There are machine learning models, including support vector regression (SVR) [3], as well as artificial neural networks [4], which rely on the training data for HU, but they can also be enriched with certain priors based on physical modeling [5]. We have been observing an unprecedented success of deep learning in various fields of science and industry, with HU not being an exception here. The process of estimating the fractional abundances can be effectively learned from the data using convolutional neural networks (CNNs) [6] which benefit from automated representation learning and can capture features that would be difficult or impossible to extract using hand-crafted feature extractors. Also, the deep image prior was recently exploited for the unmixing task [2]--here, the authors additionally addressed the problem of the endmember estimation, which is another important challenge in HU, especially relevant to large-scale Earth observation scenarios.
There are two important research directions in the intensively explored field of HU using deep learning [7]: (_i_) developing new deep architectures and (_ii_) attempts to overcome the problem of limited ground-truth data. The latter can be addressed with semi- [8], weakly-supervised [9], and unsupervised learning [10]. In [11], a deep convolutional autoencoder (DCAE) was proposed for a supervised unmixing scenario, but DCAEs are now commonly exploited in unsupervised approaches [12]. Recently, Jin et al. trained an unsupervised DCAE in an adversarial manner to increase its robustness against noise [13]. In [9], a weakly-supervised autoencoder (WS-AE) was introduced which requires a small portion of the labeled set. These architectures extract either spectral or spectral-spatial features from an input HSI cube, with the latter being reported to obtain better performance [11].
However, it was not attempted to combine the spatial and spectral features later in the processing chain. Such late fusion approaches--implemented in a form of multi-branch networks--were developed for other tasks related with HSI analysis [14], but they were not reported for HU so far. They were utilized in hyperspectral classification [15], and such techniques encompass, among others, attention multi-branch
Fig. 1: An outline of our multi-branch architecture. Three branches extract spectral (1D), spatial (2D) and spectral-spatial (3D) features from an input HSI, and they are fused to estimate the abundances. For the 1D branch, we show a relative change in the dimensionality.
CNNs benefiting from the adaptive region search [16], multi-branch-multi-scale residual fusion networks [17], and multi-branch networks based on weight sharing [18].
We address this research gap by introducing a new multi-branch convolutional architecture for the hyperspectral unmixing which benefits from the early and late fusion of spectral and spatial features to deliver accurate fractional abundances (Section II). In a high-level flowchart in Fig. 1, we highlight the operation of the proposed deep learning model trained in a supervised way, utilizing the ground-truth abundances (hence, we do _not_ tackle another important problem of the endmember determination [2]). The 1D and 2D branches extract spectral and spatial features, respectively, and the 3D branch realizes an early fusion of the features extracted from these two domains. Afterwards, the features extracted with these branches are combined within the late fusion. In contrast to the networks based on the spatial-spectral features (early fusion), we ensure that the valuable features will be extracted from both spatial and spectral domains. Our thorough experiments (Section III) indicate that the suggested CNN outperforms other state-of-the-art techniques over several widely-used benchmarks and leads to obtaining high-quality unmixing (consistently outperforming other techniques in both root mean square error and the root mean square abundance angle distance). We performed the ablation analysis to understand (_i_) the impact of introducing the spectral, spatial, and spectral-spatial branches into the model, (_ii_) the sensitivity of our techniques to noise, and (_iii_) their performance for various sizes of the training sets. Finally, we make our implementation, together with the detailed CNN architectural diagrams available at [https://gitlab.com/jnalepa/mbhu](https://gitlab.com/jnalepa/mbhu) to ensure full reproducibility.
## II Multi-Branch CNNs for HU
In our **M**ulti-**B**ranch CNN (**MB**), we exploit spectral, spatial, and spectral-spatial features to improve the quality of HU (Fig. 1). The architecture embodies parallel feature extraction branches, where each branch can be portrayed as a separate module encapsulating several convolutional layers--the input constitutes a 3D hyperspectral patch of size \\(p\\times p\\times\\lambda\\).
### _Multi-Branch Feature Extraction_
The first block applies three 1D convolutional operations as well as three max pooling layers. The spectral extent of the filters is equal to 9, 7, and 5 for each layer, respectively, whilst the max pooling window encompasses two activation units. Since the input sample consists of three axes, the spatial dimensions are concatenated. Consequently, the input into the 1D convolutional layers is two-dimensional, and the number of pixels is treated as the number of input channels. Therefore, the input tensor is reshaped as: \\(p\\times p\\times\\lambda\\to p^{2}\\times\\lambda\\), where \\(p\\times p\\) is the input patch's size. As a result, the convolving kernel extracts the spectral representation of features by sliding along the spectral (band) dimension. In the 2D convolutional branch, we capture the _spatial_ features with five convolutional kernels of the \\(2\\times 2\\times\\lambda\\) size, spanning the entire spectral dimension of the input patch of size \\(p\\times p\\). In the last 3D convolutional branch, we use three blocks, each one consisting of two layers with the \\(2\\times 2\\times 9\\) and \\(2\\times 2\\times 5\\) kernels, to extract _spectral-spatial_ features. The output of every branch is concatenated and serves as the input to the fully-connected regression part that estimates the abundance fractions for each endmember. It incorporates 512, 64, and \\(c\\) units in the consecutive layers, where \\(c\\) denotes the number of endmembers in a scene. We utilize the Rectified Linear Unit (ReLU) activations.
### _Extensions of the Architecture_
The baseline multi-branch CNN that was discussed in the previous section has been extended with the following modifications which will be experimentally analyzed in Section III:
1) **Dimensionality reduction (MB-DR)**. As the number of features extracted in the 3D branch can be of orders of magnitude larger than those extracted in other branches (e.g., in MB, we would have ca. 500, 200, and more than \\(4.6\\cdot 10^{4}\\) features extracted in the 1D, 2D, and 3D branches for an input patch of size \\(3\\times 3\\times 162\\)), we introduce dimensionality reduction in the 3D branch. We exploit two additional 3D convolutional layers without padding, with the same kernel dimensionalities and larger strides, followed by a flattening layer before the concatenation to reduce the number of features resulting from the 3D branch, and to keep it comparable to other branches.
2) **Residual connections**. Residual connections can significantly accelerate the process of training of deeper neural networks through improving the gradient flow and mitigating the vanishing and exploding gradient problem [19]. In the **MB-Res** model, we include the residual connections in the 3D branch which help us propagate the original hyperspectral information within the network. We include the dimensionality reduction mechanism from MB-DR--the skip connections bypass the extraction part of this branch and the original data characteristics are fused with the spectral-spatial features.
3) **Sequential training strategy**. Here, MB-Res is trained in two steps. First, we train each branch separately to decouple them from other ones. Afterwards, we can either fine tune the entire architecture once it is combined into a multi-branch network, or train the regression part only. In the former case, the strategy may be understood as pre-training the feature extractor (MB-PT), whereas the latter strategy corresponds to the transfer learning-like scheme (MB-TL). Therefore, the main difference between MB-PT and MB-TL is the \"depth\" of the parameters' update--in MB-PT, we fine tune all trainable parameters, whereas in MB-TL, only a small subset of them.
Although the approaches introduced in this section are model-agnostic and can be incorporated into other multi-branch CNNs, we analyze the models in the following order, with the networks expanding their predecessors: MB\\(\\rightarrow\\)MB-DR\\(\\rightarrow\\)MB-Res\\(\\rightarrow\\)MB-PT and MB-TL. It will help track the impact of specific components on the pipeline's performance.
## III Experimental results
The objective of our experiments is three-fold--(_i_) to understand the impact of utilizing parallel branches in the multi-branch architecture trained from the training sets of various sizes, (_ii_) to confront our multi-branch CNNs with other algorithms, and (_iii_) to verify their robustness against noise.
Our models were coded in Python 3.6 with Tensorflow 1.12. All models were trained using ADAM with the learning rate of \\(10^{-3}\\). The maximum number of epochs was 100, with the early stopping of 15 epochs without an improvement in the loss calculated over a randomly sampled \\(10\\%\\) of all training pixels, with a batch size equal to 256. The mean square error (MSE) between the estimated and ground-truth fractional abundances is used as the loss function in the training process.
We confront our multi-branch CNN with LMM [1], SVR with one regression model fitted per target for the multi-target unmixing [3], and recent approaches such as the cube-based variant of CNN (CB-CNN) that extracts the spectral-spatial features [6], the WS-AE architecture [9], and the UnDIP architecture [2]. Apart from LMM, each algorithm operates on a 3D patch of size \\(3\\times 3\\times\\lambda\\), and all of the methods were trained in a supervised way, utilizing the ground-truth abundances (therefore, all of them perform the estimation of endmember fractional abundances, and they do _not_ automatically determine the set of all endmembers in the scene). In the ablation study, we analyze all combinations of the 1D, 2D, and 3D branches in MB--the MB variants which include a single branch are MB(1D), MB(2D), and MB(3D), whereas those with two parallel branches encompass MB(1D+2D), MB(1D+3D), and MB(2D+3D), together with the impact of the training set size on their capabilities. Additionally, we generated the contaminated test sets with the white zero-mean Gaussian noise added to the original test data (the signal-to-noise ratio of 20, 30, 40, and 50 dB), to verify the robustness of the models against noise. As the unmixing quality metrics, we capture the root mean square error (RMSE), and the root mean square abundance angle distance (rmsAAD) [11].
We focus on three benchmarks: Samson (Sa, \\(95\\times 95\\), 156 bands), Urban (Ur, \\(207\\times 307\\), 162 bands) and Jasper Ridge (JR, \\(100\\times 100\\), 198 bands) [20, 21]. The Sa set incorporates three endmembers: #1 Soil, #2 Tree and #3 Water, for Ur we have six endmembers: #1 Asphal, #2 Grass, #3 Tree, #4 Roof, #5 Metal, and #6 Dirt, whereas for JR, there are four endmembers: #1 Road, #2 Water, #3 Soil, and #4 Tree. We perform 30-fold Monte Carlo cross-validation, and sample 30 test sets that do _not change_ with the change of the training set (\\(\\mathbf{T}\\)) size--we always report the results obtained for the unseen test sets (\\(\\Psi\\)), whose size is kept constant as suggested in [22], and equals 3025, 47249, and 2500 for Sa, Ur, and JR. For Sa, Ur and JR, we have \\(6\\cdot 10^{3}\\), \\(47\\cdot 10^{3}\\) and \\(7.5\\cdot 10^{3}\\) training pixels in total. To verify the impact of the training set sizes, we use the subsets of the full \\(\\mathbf{T}\\)'s: \\(\\{1,6,13,33,66\\}\\%\\), therefore sample the reduced sets of \\(\\{60,360,7.8\\cdot 10^{2},1.98\\cdot 10^{3},3.96\\cdot 10^{3}\\}\\), \\(\\{470,2.8\\cdot 10^{3},6.1\\cdot 10^{3},15.5\\cdot 10^{3},31\\cdot 10^{3}\\}\\) and \\(\\{75,500,10^{3},2.5\\cdot 10^{3},5\\cdot 10^{3}\\}\\) pixels for Sa, Ur and JR.
The results obtained for all datasets, sizes of \\(\\mathbf{T}\\)'s and algorithms (Figs. 2-3; for RMSE elaborated for Sa, which is consistent with Ur, and the rmsAAD plots see the supplementary material) indicate that using all branches in MB leads to statistically significantly better unmixing in the majority (72/108) of cases, as confirmed by the Wilcoxon tests (Table I for RMSE, \\(p<0.05\\)). Here, we report the results obtained for confronting MB with its variants that utilize a subset of all branches (the Wilcoxon tests for all algorithms are at [https://gitlab.com/jnalepa/mbhu](https://gitlab.com/jnalepa/mbhu)). The experiments show that using the 1D and 3D branches brings the largest benefits, and further fusing them in the multi-branch processing leads to the improved unmixing. Fusing the 2D and 3D branches does not significantly enhance the capabilities of the variant utilizing the 3D branch only--both MB(3D) and MB(2D+3D) led to statistically identical results as MB in 5/18 cases. It can be attributed to the fact that both branches capture spectral-spatial characteristics, hence they may lead to similar features. On the other hand, notable improvements are observed once spectral and spectral-spatial branches are fused. The least visible differ
\\begin{table}
\\begin{tabular}{l c c c c c c c} \\hline \\hline Compound with \\(\\downarrow\\) & **100\\%** & **66\\%** & **33\\%** & **13\\%** & **6\\%** & **1\\%** & Total \\\\ \\hline MB(1D) & **0/3** & **0/3** & **0/3** & 1/3 & 2/3 & 2/3 & 5/18 \\\\ MB(2D) & **0/3** & **0/3** & **0/3** & **0/3** & 1/3 & 1/3 & 3/5 & 5/18 \\\\ MB(1D+2D) & 1/3 & **0/3** & 2/3 & 2/3 & 2/3 & 7/18 \\\\ MB(1D+3D) & 3/3 & 1/3 & 1/3 & 2/3 & 3/3 & 3/3 & 13/18 \\\\ MB(2D+3D) & 1/3 & **0/3** & **0/3** & **0/3** & 2/3 & 2/3 & 5/18 \\\\ \\hline Total\\(\\rightarrow\\) & 5/18 & 1/18 & 1/18 & 6/18 & 10/18 & 13/18 & 36/108 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: The results of the two-tailed Wilcoxon tests (\\(p<0.05\\))βwe present the number of cases (for each training set size, out of 3 HU sets) in which the confronted variants lead to obtaining statistically the same results as those by MB. We boldface the entries, in which MB obtained the statistically significantly better results for all sets.
ences are manifested for the smallest training sets--it indicates that the performance of CNNs is becoming saturated for such small samples, and cannot be further improved without capturing or synthesizing more training examples [23].
The MB CNNs consistently outperform other techniques in all sets and training set sizes, also when averaged across all sizes of \\(\\mathbf{T}\\) (Table II), with MB and MB-DR being the Top-2 methods for RMSE and rmsAAD, according to the ranking tests (Table III). It indicates that capturing spectral and spectral-spatial features through automated representation learning in parallel branches, and effectively fusing them leads to more precise fractional abundance estimation. For virtually all techniques, increasing \\(\\mathbf{T}\\)'s significantly improves the HU quality over \\(\\Psi\\)'s which remained unchanged during the experimentation. The results also indicate that decoupling the feature extraction branches from each other in MB-TL adversely affects the generalization of the model, as it gave the worst performance in all cases when compared to other MB CNNs. The detailed results aggregated for all separate executions across all scenarios gathered in Table II confirm that the multi-branch architectures outperform other methods investigated in this study. Finally, the visualizations of the abundance maps included in the supplementary material confirm the unmixing abilities of the investigated techniques, and their quality strictly corresponds to the quantitative metrics (RMSE and rmsAAD).
To investigate the robustness of the models against noise, we render MB, together with the three best-performing algorithms from the literature in Fig. 4 (all 30 \\(\\Psi\\)'s were independently
TABLE II: RMSE and rmsAAD obtained by all investigated models, aggregated across all training set sizes.
Here, although UnDIP presents the best robustness against additive white Gaussian noise, both UnDIP and MB led to statistically the same results for the uncontaminated \\(\\Psi\\)'s and those with a signal-to-noise ratio of 50 dB (\\(p<0.01\\); for all algorithms, see the supplementary material). Additionally, in Table IV, we quantify the influence of different input patch sizes on the RMSE obtained for JR using MB (executed for five test sets). The results show that analyzing too large pixel neighborhoods leads to less stable models--for the \\(5\\times 5\\), \\(7\\times 7\\) and \\(9\\times 9\\) patches, RMSE's standard deviation amounted to 0.220, 0.209, and 0.215, whereas for \\(3\\times 3\\) it was 0.002. This may have been caused by the large pixel size (GSD), which makes larger neighborhoods capture different materials and thus negatively impacts the unmixing of the central pixel.
In Fig. 5, we can appreciate that the training time of the MB CNN and its variants, averaged across all independent executions for Urban (the largest set), is shorter than or comparable to that of other techniques. Similarly, the average inference time (across all training set sizes) over all samples in the test sets amounted to 3.6-4.1 s for MB (it was 0.2-3.8 s for the variants utilizing a subset of all branches)--up to \\(57\\times\\) faster than SVR (the average inference time for LMM, CB-CNN, and WS-AE was 12.3 s, 2.1 s, and 24.9 s). This shows that the multi-branch processing can be executed in a short time.
## IV Conclusions and Future Work
Hyperspectral unmixing remains one of the most challenging tasks in HSI analysis. Although deep learning has been blooming in the field, new deep architectures emerge at a steady pace to improve the quality of HU. We introduced a multi-branch CNN for HU that benefits from automated representation learning and efficient fusion of spectral, spatial, and spectral-spatial features during the unmixing process. The experiments showed that our technique outperforms other classical and deep learning methods, and indicated the benefits of utilizing parallel branches that capture spectral, spatial and spectral-spatial features which are later fused in the network.
Although our experiments involved an extensive multi-fold analysis, the training/test samples were drawn from the very same HSI (i.e., they capture the same Earth peculiarities). Thus, acquiring spatially de-correlated HSIs for HU validation would help us better understand the generalization of such methods. It would be interesting to enhance the multi-branch models to perform endmember determination, as it is a pivotal step in real-life Earth observation use cases, and to deploy them for non-linear mixing models, e.g., in microscopic scenarios [2]. Finally, to reduce the amount of data to be transferred from a satellite equipped with a hyperspectral imager, and to accelerate the response time through the in-orbit analysis of raw HSIs, we are working on deploying the MB CNNs onboard KP Labs' Intuition-1 hyperspectral mission.
## References
* [1] D. Heinz, C. Chang, and M. Althouse, "Fully constrained least-squares based linear unmixing," in Proc. IEEE IGARSS, 1999, pp. 1401-1403.
* [2] B. Rasti, B. Koirala, P. Scheunders, and P. Ghamisi, "UnDIP: Hyperspectral unmixing using Deep Image Prior," IEEE TGRS, vol. 60, pp. 1-15, 2022.
* [3] B. Koirala and P. Scheunders, "A semi-supervised method for nonlinear hyperspectral unmixing," in Proc. IEEE IGARSS, 2019, pp. 361-364.
**A Multibranch Convolutional Neural Network for Hyperspectral Unmixing (Supplementary Material)**
Lukasz Tulczyjew, Michal Kawulok, Nicolas Longepe, Bertrand Le Saux, Jakub Nalepa
[email protected]
This supplementary material collects the detailed experimental results obtained using the investigated hyperspectral unmixing techniques (Section 1). To access the diagrams for each of the implemented networks, the code to reproduce the experiments, and other detailed results of the experimental study (Wilcoxon tests, training and test times for each training set size and benchmark set, and the detailed metric values), see [https://gitlab.com/jnalepa/mbhu](https://gitlab.com/jnalepa/mbhu).
## 1 Detailed Experimental Results
In this section, we gather the detailed experimental results that are discussed in the main body of the letter. The following results are included in the supplementary material:
* Overall RMSE over Samson reported for different training sizes and all investigated algorithms: Figure 1.
* Overall rmsAAD over the Samson, Urban, and Jasper Ridge datasets reported for different training sizes and all investigated algorithms: Figures 2-4.
* The ranking (RMSE and rmsAAD) obtained for the Samson, Urban, and Jasper Ridge datasets by all investigated HU algorithms: Tables 1-3.
* The abundance maps obtained by applying different unmixing techniques from the literature, and by different variants of the proposed multi-branch architecture (trained over the entire training set) over the Jasper Ridge dataset: Figures 5-7.
* The impact of the white zero-mean Gaussian noise added to the original data (with the signal-to-noise ratio of 20, 30, 40, and 50 dB) on the performance (quantified as RMSE) of all investigated unmixing algorithms over the Jasper Ridge dataset: Figure 8.
* The impact of the white zero-mean Gaussian noise added to the original data (with the signal-to-noise ratio of 20, 30, 40, and 50 dB) on the performance (quantified as mean, median, and standard deviation of RMSE) of all investigated unmixing algorithms over the Jasper Ridge dataset: Table 4.
* The impact of the patch size on RMSE (on Jasper Ridge), quantified as \\(\\Delta_{\\text{RMSE}}=\\text{RMSE}^{k\\times k}-\\text{RMSE}^{3\\times 3}\\) (\\(k=\\{5,7,9\\}\\)) obtained using MB-DR, MB-Res, MB-TL, and MB-PT, trained from the training sets of various sizes: Tables 5-8.
Figure 3: Overall rmsAAD over the Urban dataset reported for different training sizes: \\(\\blacksquare\\) LMM, \\(\\blacksquare\\) SVR, \\(\\blacksquare\\) CB-CNN, \\(\\blacksquare\\) WS-AE, \\(\\blacksquare\\) UnDIP, \\(\\blacksquare\\) MB(1D), \\(\\blacksquare\\) MB(2D), \\(\\blacksquare\\) MB(3D), \\(\\blacksquare\\) MB(1D+2D), MB(1D+3D), \\(\\blacksquare\\) MB(2D+3D), \\(\\blacksquare\\) MB, \\(\\square\\) MB-DR, \\(\\blacksquare\\) MB-Res, \\(\\blacksquare\\) MB-TL, \\(\\blacksquare\\) MB-PT. For some methods, we indicate the exact value of rmsAAD above the arrow to maintain readability of the plot (those values are outside the current rmsAAD range on the Y axis).
Figure 2: Overall rmsAAD over the Samson dataset reported for different training sizes: \\(\\blacksquare\\) LMM, \\(\\blacksquare\\) SVR, \\(\\blacksquare\\) CB-CNN, \\(\\blacksquare\\) WS-AE, \\(\\blacksquare\\) UnDIP, \\(\\blacksquare\\) MB(1D), \\(\\blacksquare\\) MB(2D), \\(\\blacksquare\\) MB(3D), \\(\\blacksquare\\) MB(1D+2D), \\(\\blacksquare\\) MB(1D+3D), \\(\\blacksquare\\) MB(2D+3D), \\(\\blacksquare\\) MB, \\(\\blacksquare\\) MB-DR, \\(\\blacksquare\\) MB-Res, \\(\\blacksquare\\) MB-TL, \\(\\blacksquare\\) MB-PT. For some methods, we indicate the exact value of rmsAAD above the arrow to maintain readability of the plot (those values are outside the current rmsAAD range on the Y axis).
\\begin{table}
\\begin{tabular}{r|r r r r r r r r r r r r r r} \\hline & \\multicolumn{7}{c}{**RMSE**} & \\multicolumn{7}{c}{**rmsAAD**} \\\\ \\hline
**Train. size \\(\\rightarrow\\)** & **100\\%** & **66\\%** & **33\\%** & **13\\%** & **6\\%** & **1\\%** & **Mean** & **100\\%** & **66\\%** & **33\\%** & **13\\%** & **6\\%** & **1\\%** & **Mean** \\\\ \\hline LMM & 16.000 & 16.000 & 16.000 & 16.000 & 16.000 & 15.000 & 15.833 & 16.000 & 16.000 & 16.000 & 16.000 & 16.000 & 15.000 & 15.833 \\\\ SVR & 15.000 & 15.000 & 15.000 & 15.000 & 14.000 & 12.000 & 14.333 & 15.000 & 15.000 & 15.000 & 14.000 & 12.000 & 14.333 \\\\ CB-CNN & 12.000 & 12.000 & 11.000 & 9.000 & 10.000 & 9.000 & 10.500 & 12.000 & 11.000 & 11.000 & 9.000 & 10.000 & 9.000 & 10.333 \\\\ WS-AE & 14.000 & 14.000 & 14.000 & 14.000 & 12.000 & 10.000 & 13.000 & 14.000 & 14.000 & 14.000 & 13.000 & 12.000 & 10.000 & 12.833 \\\\ UnDIP & 13.000 & 13.000 & 13.000 & 11.000 & 11.000 & 14.000 & 12.500 & 13.000 & 13.000 & 11.000 & 11.000 & 11.000 & 14.000 & 12.500 \\\\ \\hline MB(1D) & 6.000 & 7.000 & 6.500 & 4.000 & 6.000 & 5.000 & 5.750 & 8.500 & 8.500 & 8.500 & 4.500 & 4.000 & 4.000 & 6.333 \\\\ MB(2D) & 11.000 & 11.000 & 12.000 & 13.000 & 13.000 & 11.000 & 11.833 & 11.000 & 12.000 & 12.000 & 14.000 & 13.000 & 11.000 & 12.167 \\\\ MB(3D) & 10.000 & 7.000 & 9.500 & 12.000 & **1.000** & 6.500 & 7.667 & 8.500 & 7.000 & 8.500 & 12.000 & **1.000** & 7.500 & 7.417 \\\\ \\hline MB(1D+2D) & 6.000 & 7.000 & 6.500 & 4.000 & 6.000 & 2.500 & 5.333 & 10.000 & 8.500 & 6.500 & 4.500 & 5.000 & **1.500** & 6.000 \\\\ MB(1D+3D) & 6.000 & 7.000 & **1.500** & 4.000 & 6.000 & 2.500 & 4.500 & 3.500 & 4.500 & 2.500 & 3.000 & 6.000 & 3.000 & 3.750 \\\\ MB(2D+3D) & 6.000 & 7.000 & 9.500 & 8.000 & 8.000 & 6.500 & 7.500 & 5.000 & 4.500 & 6.500 & 8.000 & 9.000 & 7.500 & 6.750 \\\\ \\hline
**MB** & 6.000 & 3.000 & 3.000 & 6.000 & 4.000 & **1.000** & 3.833 & 3.500 & 3.000 & 2.500 & 6.000 & 7.000 & **1.500** & 3.917 \\\\
**MB-DR** & 6.000 & 10.000 & 5.000 & 2.000 & 3.000 & 4.000 & 5.000 & 6.000 & 10.000 & 5.000 & 2.000 & 2.000 & 5.000 & 5.000 \\\\
**MB-Res** & **1.000** & 2.000 & 4.000 & 7.000 & 9.000 & 8.000 & 5.167 & **1.000** & 2.000 & 4.000 & 7.000 & 8.000 & 6.000 & 4.667 \\\\
**MB-TL** & 6.000 & 4.000 & 8.000 & 10.000 & 15.000 & 16.000 & 9.833 & 7.000 & 6.000 & 10.000 & 15.000 & 16.000 & 10.667 \\\\
**MB-PT** & 2.000 & **1.000** & **1.500** & **1.000** & 2.000 & 13.000 & **3.417** & 2.000 & **1.000** & **1.000** & **1.000** & 3.000 & 13.000 & **3.500** \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: The ranking (RMSE and rmsAAD) obtained for the Urban dataset by all HU algorithms. The best ranking for each training set size is boldfaced, whereas the second best is underlined.
\\begin{table}
\\begin{tabular}{r|r r r r r r r r r r r r r r} \\hline & \\multicolumn{7}{c}{**RMSE**} & \\multicolumn{7}{c}{**rmsAAD**} \\\\ \\hline
**Train. size \\(\\rightarrow\\)** & **100\\%** & **66\\%** & **33\\%** & **13\\%** & **6\\%** & **1\\%** & **Mean** & **100\\%** & **66\\%** & **33\\%** & **13\\%** & **6\\%** & **1\\%** & **Mean** \\\\ \\hline LMM & 16.000 & 15.000 & 14.000 & 13.000 & 13.000 & 2.000 & 12.167 & 16.000 & 15.000 & 14.000 & \\(\\cdots\\) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
\\begin{table}
\\begin{tabular}{r r r r r r r r r r r r r r r r r} \\hline \\hline & \\multicolumn{4}{c}{**Original**} & \\multicolumn{4}{c}{**50 dB**} & \\multicolumn{4}{c}{**40 dB**} & \\multicolumn{4}{c}{**30 dB**} & \\multicolumn{4}{c}{**20 dB**} \\\\ \\cline{2-19} \\multicolumn{1}{c}{**Models**} & \\multicolumn{1}{c}{**Mean**} & \\multicolumn{1}{c}{\\(\\sigma\\)} & \\multicolumn{1}{c}{**Med.**} & \\multicolumn{1}{c}{**Mean**} & \\multicolumn{1}{c}{\\(\\sigma\\)} & \\multicolumn{1}{c}{**Med.**} & \\multicolumn{1}{c}{**Mean**} & \\multicolumn{1}{c}{\\(\\sigma\\)} & \\multicolumn{1}{c}{**Med.**} & \\multicolumn{1}{c}{**Mean**} & \\multicolumn{1}{c}{\\(\\sigma\\)} & \\multicolumn{1}{c}{**Med.**} & \\multicolumn{1}{c}{**Mean**} & \\multicolumn{1}{c}{\\(\\sigma\\)} & \\multicolumn{1}{c}{**Med.**} \\\\ \\hline LMM & 0.078 & 0.001 & 0.078 & 0.078 & 0.001 & 0.078 & 0.001 & 0.078 & 0.001 & 0.078 & 0.001 & 0.078 & 0.078 & 0.001 & 0.078 \\\\ SVR & 0.051 & 0.001 & 0.051 & 0.051 & 0.001 & 0.051 & 0.051 & 0.001 & 0.051 & 0.051 & 0.001 & 0.051 & 0.051 & 0.054 & 0.002 & 0.054 \\\\ CB-CNN & 0.021 & 0.002 & 0.020 & 0.021 & 0.002 & 0.020 & 0.020 & 0.021 & 0.002 & 0.020 & 0.021 & 0.002 & 0.021 & 0.026 & 0.004 & 0.025 \\\\ WS-AE & 0.027 & 0.000 & 0.026 & 0.027 & 0.000 & 0.026 & 0.027 & 0.000 & 0.026 & 0.027 & 0.000 & 0.027 & 0.031 & 0.001 & 0.030 \\\\ UniDP & 0.028 & 0.004 & 0.027 & 0.028 & 0.004 & 0.027 & 0.028 & 0.004 & 0.027 & 0.028 & 0.004 & 0.027 & 0.028 & 0.004 & 0.027 & 0.030 & 0.004 & 0.029 \\\\ MB(1D) & 0.015 & 0.001 & 0.015 & 0.011 & 0.015 & 0.001 & 0.015 & 0.001 & 0.015 & 0.017 & 0.001 & 0.016 & 0.027 & 0.004 & 0.026 \\\\ MB(2D) & 0.018 & 0.001 & 0.018 & 0.018 & 0.001 & 0.018 & 0.018 & 0.001 & 0.018 & 0.019 & 0.001 & 0.019 & 0.027 & 0.001 & 0.027 \\\\ MB(3D) & 0.016 & 0.001 & 0.016 & 0.016 & 0.001 & 0.016 & 0.016 & 0.001 & 0.016 & 0.017 & 0.001 & 0.016 & 0.021 & 0.001 & 0.021 \\\\ MB(1D+2D) & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.016 & 0.001 & 0.016 & 0.028 & 0.005 & 0.026 \\\\ MB(1D+3D) & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.001 & 0.014 & 0.015 & 0.001 & 0.015 & 0.024 & 0.004 & 0.023 \\\\ MB(2D+3D) & 0.015 & 0.001 & 0.015 & 0.015 & 0.001 & 0.015 & 0.015 & 0.001 & 0.015 & 0.016 & 0.001 & 0.016 & 0.021 & 0.001 & 0.021 \\\\ MB & 0.014 & 0.001 & 0.013 & 0.014 & 0.001 & 0.013 & 0.014 & 0.001 & 0.014 & 0.015 & 0.001 & 0.015 & 0.024 & 0.005 & 0.022 \\\\ MB-DR & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.016 & 0.001 & 0.016 & 0.027 & 0.004 & 0.026 \\\\ MB-Res & 0.014 & 0.001 & 0.014 & 0.014 & 0.001 & 0.014 & 0.015 & 0.001 & 0.015 & 0.016 & 0.001 & 0.016 & 0.023 & 0.001 & 0.023 \\\\ MB-TL & 0.055 & 0.104 & 0.016 & 0.055 & 0.104 & 0.016 & 0.055 & 0.104 & 0.016 & 0.056 & 0.104 & 0.017 & 0.061 & 0.102 & 0.022 \\\\ MB-PT & 0.024 & 0.058 & 0.013 & 0.024 & 0.058 & 0.013 & 0.024 & 0.058 & 0.013 & 0.025 & 0.058 & 0.014 & 0.032 & 0.057 & 0.021 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: The impact of the white zero-mean Gaussian noise added to the original data (with the signal-to-noise ratio of 20, 30, 40, and 50 dB) on the performance (quantified as mean, median, and standard deviation of RMSE) of all investigated unmixing algorithms over Jasper Ridge.
Figure 5: The abundance maps obtained by applying different unmixing techniques from the literature (trained over the entire training set) over Jasper Ridge (the rows correspond to the endmembers: 0βtree, 1βwater, 2βsoil, 3βroad). The _Labels_ column corresponds to the ground truth.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline & RMSE & \\multicolumn{3}{c}{\\(\\Delta_{\\text{RMSE}}\\)} \\\\ \\cline{2-5}
**Train. size** & \\(3\\times 3\\) & \\(5\\times 5\\) & \\(7\\times 7\\) & \\(9\\times 9\\) \\\\ \\hline
100\\% & 0.015 & 0.240 & 0.374 & 0.359 \\\\
66\\% & 0.017 & 0.245 & 0.437 & 0.271 \\\\
33\\% & 0.024 & 0.300 & 0.403 & 0.278 \\\\
13\\% & 0.034 & 0.122 & 0.386 & 0.240 \\\\
6\\% & 0.059 & 0.232 & 0.298 & 0.302 \\\\
1\\% & 0.195 & 0.132 & 0.123 & 0.169 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: The impact of the patch size on RMSE (on Jasper Ridge), quantified as \\(\\Delta_{\\text{RMSE}}=\\text{RMSE}^{k\\times k}-\\text{RMSE}^{3\\times 3}\\) (\\(k=\\{5,7,9\\}\\)) obtained using MB-Res trained from the training sets of various sizes.
Figure 6: The abundance maps obtained by applying different variants of the proposed multi-branch architecture (investigated in the ablation study, and trained over the entire training set) over Jasper Ridge (the rows correspond to the endmembers: 0βtree, 1βwater, 2βsoil, 3βroad). The _Labels_ column corresponds to the ground truth.
Figure 8: The impact of the white zero-mean Gaussian noise added to the original data (with the signal-to-noise ratio of 20, 30, 40, and 50 dB) on the performance (quantified as RMSE) of all investigated unmixing algorithms over the Jasper Ridge dataset. The results of the Friedman's tests with post-hoc Dunn's, verifying if the differences are statistically significant, are reported as: * (\\(p<0.05\\)), ** (\\(p<0.01\\)), *** (\\(p<0.001\\)), and **** (\\(p<0.0001\\)).
Index Terms: Hyperspectral unmixing, deep learning, CNN.
Yulong Cao \\({}^{1}\\) Chaowei Xiao\\({}^{1}\\) Dawei Yang\\({}^{1}\\) Jing Fang \\({}^{2}\\) Ruigang Yang \\({}^{2}\\)
Mingyan Liu\\({}^{1}\\) Bo Li\\({}^{3}\\)
\\({}^{1}\\)University of Michigan, Ann Arbor
\\({}^{2}\\)Baidu Research, Baidu Inc.
\\({}^{3}\\) University of Illinois at Urbana-Champaign
The first three authors contributed equally.
## 1 Introduction
Machine learning models, especially deep neural networks (DNNs), have achieved great success in various domains [5; 6; 9; 17]. Several safety-critical applications such as autonomous vehicles (AVs) have also adopted machine learning models and achieved promising performance. However, recent studies show that machine learning models are vulnerable to adversarial attacks [2; 8; 18; 20; 21; 23]. In these attacks, small perturbations are sufficient to cause various well-trained models to output "adversarial" predictions. In this paper we aim to explore similar vulnerabilities in today's autonomous driving systems.
Such adversarial attacks have been largely explored in the image domain. In addition, to demonstrate that such attacks pose a threat in the real world, some studies propose to generate physical stickers or printable textures that can fool a classifier that recognizes stop signs [1; 7]. However, an autonomous driving system is not merely an image-based classifier. For perception, most autonomous driving detection systems are equipped with LiDAR (Light Detection And Ranging) or RADAR (Radio Detection and Ranging) devices, which are capable of directly probing the surrounding 3D environment with laser beams. This raises the question of whether the texture perturbations in previous work affect LiDAR-scanned point clouds. In addition, the LiDAR-based detection system consists of multiple non-differentiable steps, rather than a single end-to-end network, which largely limits the use of gradient-based end-to-end attacks. These two key obstacles not only invalidate previous image-based approaches, but also raise several new challenges when we want to construct an adversarial object: 1) A LiDAR-based detection system projects a 3D shape to a point cloud using physical LiDAR equipment. The point cloud is then fed into the machine learning detection system. Therefore, how shape perturbations affect the scanned point cloud is not clear. 2) The preprocessing of the LiDAR point clouds is non-differentiable, preventing the naive use of gradient-based optimizers. 3) The perturbation space is limited due to multiple aspects. First, we need to ensure the perturbed object can be reconstructed in the real world. Second, a valid LiDAR scan of an object is a constrained subset of the point cloud, making the perturbation space much smaller compared to perturbing the point cloud without any constraint [19].
In this paper, we propose _LiDAR-Adv_ to address the above issues and generate adversarial objects against real-world LiDAR systems, as shown in Figure 1. We first simulate a differentiable LiDAR renderer to bridge the perturbations from 3D objects to LiDAR scans (or point clouds). Then we formulate 3D feature aggregation with a differentiable proxy function. Finally, we devise different losses to ensure the smoothness of the generated 3D adversarial objects. To demonstrate the flexibility of the proposed approach, we evaluate it under two different attack scenarios: 1) _Hiding Object_: synthesizing an "adversarial object" that will not be detected by the detector. 2) _Changing Label_: synthesizing an "adversarial object" that is recognized as a specified adversarial target by the detector. We also compare _LiDAR-Adv_ with an evolution-based algorithm in the blackbox setting.
To evaluate the real-world impact of _LiDAR-Adv_, we 3D-print the generated adversarial objects and test them on the Baidu Apollo autonomous driving platform, an industry-level system that is not only widely adopted for research purposes but also actively used in industry. We show that with 3D perception and a production-level multi-stage detector, we are able to mislead the autonomous driving system to achieve different adversarial targets.
To summarize, our contributions are as follows: (1) We propose _LiDAR-Adv_, an end-to-end approach to generate physically plausible adversarial objects against LiDAR-based autonomous driving detection systems. To the best of our knowledge, this is the first work to exploit adversarial objects for such systems. (2) We experiment on Apollo, an industry-level autonomous driving platform, to illustrate the effectiveness and robustness of the attacks in practice. We also compare the objects generated by _LiDAR-Adv_ with those of an evolution-based algorithm to show that _LiDAR-Adv_ can produce smoother objects. (3) We conduct physical experiments by 3D-printing the optimized adversarial object and show that it can consistently mislead the LiDAR system equipped in a moving car.
## 2 Related work
**Image-space adversarial attacks** Adversarial examples have been heavily explored in the 2D image domain [3; 8; 13; 14; 21]. Various works [1; 7; 11] have started to study robust physical adversarial examples. Evtimov et al. [7] created printable 2D stickers to attach to a stop sign and cause a detector to predict wrong labels. Following this line, there are also works [12; 22] that optimize 3D shapes to show that even the surface geometry itself can produce adversarial behaviors.
In this work, we exploit the object surfaces to generate adversarial objects, and one fundamental challenge that differentiates our work from the previous ones is that the sensor in a LiDAR-based system directly probes the 3D environment as the input, bypassing the surface textures of the adversarial objects. This means we may only rely on shape geometry to perform any attacks. On the other hand, compared to prior works that have shown success in attacking single models, it is worth noting that the victim model on which we experiment (Apollo) is not merely an end-to-end deep learning model but an industry-level autonomous driving platform that consists of multiple non-differentiable parts.
Figure 1: Overview of _LiDAR-Adv_. The first row shows that a normal box will be detected by the LiDAR-based detection system; while the generated adversarial object with similar size in row 2 cannot be detected.
**Adversarial point clouds** Xiang et al. [19] show, as a proof of concept, that models taking raw 3D point clouds as input [15] can be vulnerable to adversarial point clouds. However, this approach is only evaluated with a single digital model. It is not clear whether the generated point clouds can form plausible 3D shape surfaces, or whether they can be reconstructed through LiDAR scans. In our approach, although the victim model similarly takes point clouds as input, these point clouds have to satisfy extra constraints, e.g., all points have to be intersections of the laser beams and the object surfaces. We address this challenge by proposing a differentiable renderer which simulates the reconstructed laser beams projecting onto object surfaces. As we will show later, when the object moves, the point cloud changes in accordance with the laser hits, and how to enforce robustness against such LiDAR scans is non-trivial.
## 3 LiDAR-based Detection
In this section, we provide the details of the LiDAR-based detection system that are directly related to our proposed adversarial attacks. Refer to the online repository2 for more details.
Footnote 2: [https://github.com/ApolloAuto/apollo/tree/r2.0.0/docs](https://github.com/ApolloAuto/apollo/tree/r2.0.0/docs)
An overview of the system is shown in Fig. 2. First, a LiDAR sensor scans the 3D environment and obtains a raw point cloud of the scene. Next, the point cloud goes through preprocessing, and is fed to a detection model. Finally, post-processing is applied to the detection output to obtain the detection predictions.
**LiDAR.** A LiDAR sensor scans the surrounding environment and generates a point cloud \\(X\\in\\mathbb{R}^{n\\times 4}\\) with 3D coordinates (\\(u^{X},v^{X},w^{X}\\)) and intensity \\(int^{X}\\). First, the sensor fires off an array of laser beams consecutively in the horizontal and vertical directions. It then captures the light intensity reflecting back, and calculates the time that photons have traveled along each beam (Time of Flight). The distance and the coordinates of the surface point along each beam can then be computed. These points form a raw point cloud of the object surfaces in the environment. LiDAR sensors are supposedly robust to object surface textures, as the Time of Flight is not easily affected by texture change. Though they also detect the intensity of reflected light, it is unclear how adversarial algorithms designed for natural lighting in image space can be adapted to invisible laser beams used as light sources. Therefore, image-based adversarial attacks may have limited effects on such LiDAR-based detection systems.
**Preprocessing phase.** The raw point cloud \\(X\\) goes through a preprocessing phase to form a feature map \\(x\\in\\mathbb{R}^{H\\times W\\times 8}\\) (see Sec. B). The raw point cloud \\(X\\) is first transformed and filtered based on a High Definition Map (HDMap) to attain a ROI point cloud \\(X_{roi}\\). This point cloud \\(X_{roi}\\) is then sliced into \\(H\\times W\\) vertical cells at \\(\\left(\\left\\lfloor u^{X_{roi}}\\right\\rfloor,\\left\\lfloor v^{X_{roi}}\\right\\rfloor\\right)\\). This "hard" assignment of points into cells introduces piecewise zero gradients for **counting** and **max** w.r.t. the input. After slicing, the information of the points in each cell is aggregated to generate a feature of size 8 for that cell, including heights, intensity, point counts, etc. (detailed in Sec. B). This \\(H\\times W\\times 8\\) feature map \\(x\\) is then fed into a machine learning model.
In this procedure, many operations (_e.g._ max height, count) introduce zero gradients due to the \"hard\" assignment, so the end-to-end optimization-based attack algorithms are not directly applicable.
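To make the zero-gradient issue concrete, the sketch below mimics this hard cell assignment in plain NumPy (the grid size, cell resolution, and feature layout are illustrative assumptions, not Apollo's exact values): the floor-based cell indices and the per-cell count/max reductions are piecewise constant in the point coordinates, so their gradients w.r.t. the points are zero almost everywhere.

```python
import numpy as np

def hard_bev_features(points, grid_size=512, cell_m=0.25):
    """Non-differentiable BEV aggregation (illustrative sketch, not Apollo's exact code).

    points: (n, 4) array of (u, v, w, intensity) in the LiDAR frame.
    Returns a (grid_size, grid_size, 4) map holding count, max height,
    mean height, and mean intensity per cell (heights assumed non-negative here).
    """
    feat = np.zeros((grid_size, grid_size, 4), dtype=np.float32)
    # Hard assignment: each point falls into exactly one cell.
    cols = np.floor(points[:, 0] / cell_m).astype(int) + grid_size // 2
    rows = np.floor(points[:, 1] / cell_m).astype(int) + grid_size // 2
    valid = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    for r, c, (_, _, w, inten) in zip(rows[valid], cols[valid], points[valid]):
        feat[r, c, 0] += 1.0                   # count
        feat[r, c, 1] = max(feat[r, c, 1], w)  # max height
        feat[r, c, 2] += w                     # height sum (divided below)
        feat[r, c, 3] += inten                 # intensity sum (divided below)
    occupied = feat[:, :, 0] > 0
    feat[occupied, 2] /= feat[occupied, 0]     # mean height
    feat[occupied, 3] /= feat[occupied, 0]     # mean intensity
    return feat
```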
**Machine learning model.** Deep Neural Networks (DNNs) are used to process the \\(H\\times W\\times 8\\) feature map, and then output the metrics for each one of the \\(H\\times W\\) cells. The metrics are listed in Sec. B.
**Post-processing phase.** The post-processing phase aggregates previous outputs from the machine learning model and recognizes the detected objects. The post-processing can be roughly divided into 3 major sequential components: _clustering_, _box building_ and _tracking_. The clustering process composes obstacle candidates
Figure 2: Overview of LiDAR-based detection on AV.
using both the model output metrics and ROI point cloud \\(X_{roi}\\) generated from the preprocessing phase. In the clustering process, cells with higher _objectness_ confidence (greater than 0.5 by default) are used for constructing clusters by building a connected graph using _center offset_. The obstacle candidates are produced by selecting the clusters with two constraints: (1) the average _confidence_ of cells in the cluster needs to be greater than 0.1 (2) the number of points in the ROI point cloud that are assigned to the cluster is greater than 3. The class probabilities of the obstacle candidate are calculated by summing up class probabilities of all cells in the cluster. The box builder then reconstructs the bounding boxes including the height, width, length of the obstacle candidates from the point cloud assigned to the candidate. Finally, the tracker integrates multiple frames of processed results to generate tracked objects as the output of the LiDAR-based detection, together with additional information such as object id, speed etc.
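The candidate-filtering rules above can be summarized in a short sketch; the thresholds are the defaults quoted in the text, while the `Cluster` data structure (per-cell scores plus the number of assigned ROI points) is a hypothetical simplification of Apollo's internal representation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Cluster:
    cell_positiveness: List[float]        # per-cell detection confidence ("positiveness")
    cell_class_probs: List[List[float]]   # per-cell class probabilities
    num_roi_points: int                   # ROI points assigned to this cluster

def select_obstacles(clusters: List[Cluster],
                     conf_thresh: float = 0.1,
                     min_points: int = 3) -> List[Tuple[Cluster, List[float]]]:
    """Keep clusters whose mean confidence and point count pass the default thresholds."""
    obstacles = []
    for c in clusters:
        mean_conf = sum(c.cell_positiveness) / max(len(c.cell_positiveness), 1)
        if mean_conf > conf_thresh and c.num_roi_points > min_points:
            # Class scores of a candidate are the sums over all cells in the cluster.
            class_scores = [sum(p) for p in zip(*c.cell_class_probs)]
            obstacles.append((c, class_scores))
    return obstacles
```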
Note that in this paper, we only consider a single frame for the adversarial attacks as a demonstration of feasibility. For the case of multiple frames, it can be treated as enhancing robustness against object motions, and such robustness against different locations is shown in later experiments (SS 5.4).
## 4 Generating Adversarial Object Against LiDAR-based Detection
In this section, we will formulate the problem first and describe the adversarial goals and challenges. We then describe our whitebox method _LiDAR-Adv_ which aims to tackle the challenges and fulfill diverse adversarial goals. Finally, we propose an evolution-based attack method for blackbox settings.
### Methodology overview
Given a 3D object \\(S\\) in a scene, as stated in the background, after the scene is scanned by a LiDAR sensor, a point cloud \\(X\\) is generated based on \\(S\\), i.e., \\(X=\\mathrm{render}(S,\\mathrm{background})\\). For preprocessing, this point cloud \\(X\\) is sliced and aggregated to generate \\(x\\), an \\(H\\times W\\times 8\\) feature map, and we denote this aggregation process by \\(\\Phi\\): \\(x=\\Phi(X)\\). Then a machine learning model \\(M\\) maps this 2D feature \\(x\\in R^{H\\times W\\times 8}\\) to \\(O=M(x)\\), where \\(O\\in R^{H\\times W\\times 7}\\) (see Sec. B for concrete output meanings). \\(O\\) is then post-processed by a clustering process \\(\\Psi\\) to generate the confidence \\(y_{conf}\\) and label \\(y_{label}\\) of detected obstacles, i.e., \\((y_{conf},y_{label})=\\Psi(O)\\). An adversarial attacker aims to manipulate the object \\(S\\) to achieve the adversarial goals. Here we define two types of adversarial goals: 1) _Hiding object_: hide an existing object \\(S\\) by manipulating \\(S\\); 2) _Changing label_: change the label \\(y\\) of the detected object \\(S\\) to a specified target \\(y^{\\prime}\\).
Achieving the above adversarial goals against LiDAR-based detection is non-trivial due to the following challenges: 1) **Multiple pre/post-processing stages.** Unlike adversarial attacks in the traditional image space against machine learning tasks such as classification and object detection, the LiDAR-based detection here is not a single end-to-end learning model; it consists of a differentiable learning model \\(M\\) and several non-differentiable parts including preprocessing and post-processing. Thus, gradient-based attacks are not directly applicable. 2) **Manipulation constraints.** Instead of directly manipulating the point cloud \\(X\\) as in [19], we manipulate the 3D shape of \\(S\\) given the limitations of LiDAR. The points in \\(X\\) are the intersections of laser beams and object surfaces and cannot move freely, so the perturbations on each point may affect each other. Keeping the shape plausible and smooth adds additional constraints [22]. 3) **Limited manipulation space.** Considering the practical size of the object versus the size of the scene that is processed by LiDAR, the 3D manipulation space is rather small (\\(<2\\%\\) in our experiments), as shown in Fig. 1.
Given the above challenges, we design an end-to-end attacking pipeline. In order to facilitate gradient-based algorithms, we implement an approximate differentiable renderer \\(R\\), which simulates the functionality of LiDAR, to intersect a set of predefined rays with the 3D object surface \\(S\\) consisting of vertices \\(V\\) and faces \\(W\\). The points at the intersections form the raw point cloud \\(X\\). The point cloud is then fed to the preprocessing function \\(\\Phi\\) to generate the feature map \\(x=\\Phi(X)\\). The feature map \\(x\\) is then taken as input by a machine learning model \\(M\\) to obtain the output metrics \\(O=M(x)\\).
The whole process can be symbolized as \\(F(S)=M(\\Phi(R(S)))\\). Note that by making the renderer \\(R\\) differentiable, the whole process \\(F(*)=M(\\Phi(R(*)))\\) is differentiable w.r.t. \\(S\\). In this way, we can manipulate \\(S\\) to generate an adversarial \\(S_{\\mathrm{adv}}\\) via our designed objective function operating on the final output \\(F(S)\\).
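A minimal PyTorch-style sketch of this composition is given below; `render`, `soft_aggregate`, and `model` stand in for \\(R\\), \\(\\Phi\\), and \\(M\\) and are assumed to be differentiable implementations, while the vertex displacement `delta` is the only variable being optimized.

```python
import torch

def attack_step(vertices, faces, delta, render, soft_aggregate, model, loss_fn, optimizer):
    """One gradient step on F(S) = M(Phi(R(S))) w.r.t. the vertex displacements."""
    optimizer.zero_grad()
    perturbed = vertices + delta               # S_adv: move each mesh vertex
    point_cloud = render(perturbed, faces)     # R: differentiable LiDAR simulation
    features = soft_aggregate(point_cloud)     # Phi: differentiable proxy aggregation
    outputs = model(features)                  # M: segmentation model outputs
    loss = loss_fn(outputs, delta)             # adversarial + smoothness terms
    loss.backward()                            # gradients flow all the way back to delta
    optimizer.step()
    return loss.item()

# Usage sketch (all objects here are hypothetical placeholders):
# delta = torch.zeros_like(vertices, requires_grad=True)
# optimizer = torch.optim.Adam([delta], lr=1e-3)
# for _ in range(1000):
#     attack_step(vertices, faces, delta, render, soft_aggregate, model, loss_fn, optimizer)
```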
### Approximate differentiable renderer
**LiDAR simulation.** The renderer \\(R\\) simulates the physics of a LiDAR sensor that probes the objects in the scene by casting laser beams. The renderer takes a mesh \\(S\\) as input and computes the intersections of a set of predefined ray directions with the meshes in the scene to generate the point cloud \\(X\\). After depth testing, the distance along each beam is captured, representing the surface point of a mesh that it first encounters, as if a LiDAR system receives a reflection from an object surface. Knowing the ray directions of the beams, the exact positions of the intersection points can be inferred from the distances, in the form of the point cloud \\(X\\).
**Real background from a road scene.** We render our synthetic object onto a realistically captured point cloud. First, we obtain the 3D scan of a road scene using the LiDAR sensor Riegl VMX-1HA mounted on a car. Then, we obtain the laser beam directions by computing the normalized vectors from the origin (LiDAR) pointing to the scanned points. This fixed set of ray directions is then used for rendering our synthetic objects throughout the paper. Note that we could also set ray directions manually given sensor specifications, but it would be less realistic, because it may not model the noise and fluctuations that occur in a real LiDAR sensor.
**Hybrid rendering of synthetic objects onto a realistic background.** Given the ray directions reconstructed from the background point cloud, a subset will intersect with the object, forming the point cloud for the object of interest. The corresponding background points are then removed since they are occluded by the foreground object. In this way, we obtain a semi-real synthetic point cloud scene: the background points come from the captured real data, while the foreground points are simulated in a physically accurate way based on the collected real data.
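As a concrete illustration of this hybrid rendering, the sketch below derives ray directions from a captured background scan and replaces occluded background points with the foreground hits; `ray_mesh_distance` is a placeholder for a standard ray-triangle intersection routine (e.g., Moller-Trumbore), so the sketch only fixes the depth-testing logic, not the paper's exact implementation.

```python
import numpy as np

def rays_from_background(background_pts):
    """Ray directions: normalized vectors from the LiDAR origin to each scanned point."""
    norms = np.linalg.norm(background_pts, axis=1, keepdims=True)
    return background_pts / np.clip(norms, 1e-8, None)

def hybrid_render(background_pts, ray_mesh_distance):
    """Insert a synthetic object into a real scan via per-ray depth testing.

    ray_mesh_distance(ray) -> distance to the first hit on the object surface
    along `ray`, or np.inf if the ray misses the object.
    """
    rays = rays_from_background(background_pts)
    rendered = background_pts.copy()
    for i, ray in enumerate(rays):
        t_obj = ray_mesh_distance(ray)              # distance to the inserted object, if hit
        t_bg = np.linalg.norm(background_pts[i])    # distance to the real background point
        if t_obj < t_bg:                            # the object occludes the background
            rendered[i] = t_obj * ray
    return rendered
```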
### Differentiable proxy function for feature aggregation
As described in Section 3, the preprocessing of Apollo aggregates the point cloud into hardcoded 2D features, including **count**, **max height**, **mean height**, **intensity** and **non-empty**. These operations are non-differentiable. In order to apply end-to-end optimizers to our synthetic object \\(S\\), we need to flow the gradient through the feature aggregation step with the help of our proxy functions.
Given a point cloud \\(X\\) with coordinates \\((u^{X},v^{X},w^{X})\\), we want to count the number of points falling into the cells of a 3D grid \\(G\\in\\mathbb{R}^{H\\times W\\times P}\\). For a point \\(X_{i}\\) at location \\((u^{X_{i}},v^{X_{i}},w^{X_{i}})\\), we increase the count of 8 cells: the centers of these 8 cells form a cube, and the point \\(X_{i}\\) is inside this cube. Specifically, we increase the counts of these 8 cells using trilinear weights:
\\[G(u_{i},v_{i},w_{i})=\\sum_{p}(1-d(u_{p},u^{X_{i}}))\\cdot(1-d(v_{p},v^{X_{i}}))\\cdot(1-d(w_{p},w^{X_{i}})), \\tag{1}\\]
where \\(p\\in\\mathcal{N}(u^{X_{i}},v^{X_{i}},w^{X_{i}})\\) are the indices of the 8-pixel neighbors at location \\((u^{X_{i}},v^{X_{i}},w^{X_{i}})\\) and \\(d(\\cdot,\\cdot)\\) represents the \\(L_{1}\\) distance. The **count** feature \\(x_{\\text{count}}\\) is the value \\(G_{p}=G(u_{i},v_{i},w_{i})\\) computed for each grid \\(i\\). Note that this feature is no longer an integer and can have non-zero gradients w.r.t. the point coordinates.
We then use this \"soft count\" feature to further compute \"mean height\" and \"max height\" features. For simplicity, we first define a constant height matrix \\(T\\in\\mathcal{R}^{H,W,P}\\), where \\(T(.,.,p)=p,p\\in\\{1 P\\}\\). This matrix stores the height of each cell. Next, we can formulate the **mean height**\\(x_{\\text{mean-height}}\\) and **max height**\\(x_{\\text{max-height}}\\) using soft count \\(G\\):
\\[x_{\\text{mean-height}}=\\frac{\\sum_{p\\in P}G_{p}\\circ T_{p}}{\\sum_{p\\in P}G_{p}+\\epsilon}\\qquad\\text{and}\\qquad x_{\\text{max-height}}=\\max_{p}\\operatorname{sign}\\left(G(.,.,p)\\right)\\circ T(.,.,p) \\tag{2}\\]
where \\(\\epsilon=10^{-7}\\) prevents zero denominators. Note that the \\(\\operatorname{sign}\\) function is non-differentiable, so we approximate its gradient by treating \\(\\operatorname{sign}(G)\\) as \\(G\\) during back-propagation. The feature **intensity** has a similar formulation to **height**, so we omit it here. The feature **non-empty** is formulated as \\(x_{\\text{non-empty}}=\\operatorname{sign}(G)\\).
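A minimal PyTorch sketch of Eqs. (1)-(2) is shown below. It assumes the point coordinates are already expressed in grid units, omits boundary handling and the intensity/non-empty features, and uses a straight-through trick for the sign in the max-height feature; it is an illustration of the proxy aggregation, not the exact implementation.

```python
import torch

def soft_voxel_features(points, H=512, W=512, P=32, eps=1e-7):
    """Differentiable proxies for the count, mean-height and max-height features.

    points: (n, 3) tensor of (u, v, w) coordinates in grid units.
    """
    G = points.new_zeros(H * W * P)
    base = points.floor()            # lower-corner cell of each point
    frac = points - base             # fractional offsets in [0, 1)
    for du in (0, 1):
        for dv in (0, 1):
            for dw in (0, 1):
                # trilinear weight: (1 - L1 distance) along each axis, as in Eq. (1)
                w = ((1 - (frac[:, 0] - du).abs())
                     * (1 - (frac[:, 1] - dv).abs())
                     * (1 - (frac[:, 2] - dw).abs()))
                corner = base + points.new_tensor([du, dv, dw])
                idx = (corner[:, 0] * W * P + corner[:, 1] * P + corner[:, 2]).long()
                G = G.scatter_add(0, idx, w)   # gradients flow back through the weights
    G = G.view(H, W, P)
    T = torch.arange(P, dtype=G.dtype).view(1, 1, P)   # constant height of each slice
    count = G.sum(dim=2)
    mean_height = (G * T).sum(dim=2) / (count + eps)
    # sign(G) in the forward pass, identity gradient in the backward pass
    occupancy = (G > 0).float() + G - G.detach()
    max_height = (occupancy * T).max(dim=2).values
    return count, mean_height, max_height
```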
We denote the above trilinear approximator as \\(\\Phi^{\\prime}\\), in constrast to the original non-differentiable preprocessing step \\(\\Phi\\). A visualization of output of our \\(\\Phi^{\\prime}(X)_{\\text{count}}\\) compared to the original \\(\\Phi(X)_{\\text{count}}\\) is shown in Sec. C. Since our approximation introduces differences in counting, \\(\\Phi^{\\prime}(X)\\) is not strictly equal to \\(\\Phi(X)\\), resulting in different \\(\\operatorname{obj}\\) values of the final model prediction. We observe that this difference will raise new challenges to transfer the adversarial object generated based on \\(\\Phi^{\\prime}\\) to \\(\\Phi\\). To solve this problem, we reduce the difference between \\(\\Phi^{\\prime}\\) and \\(\\Phi\\), by replacing the L1 distance \\(d\\) in Eq. 1 with \\(d(u_{1},u_{2})=0.5+0.5\\cdot\\tanh(\\mu\\cdot(u_{1}-u_{2}-1))\\) where \\(\\mu=20\\). We name this approximation \"tanh approximator\" and denote it \\(\\Phi^{\\prime\\prime}\\). We observe that the input difference between \\(\\Phi^{\\prime\\prime}\\) and the original \\(\\Phi\\) is largely reduced compared to \\(\\Phi^{\\prime}\\), allowing for smaller errors of the model predictions and better transferability. To extend our approximator and further reduce the gap between \\(\\Phi^{\\prime\\prime}\\) and \\(\\Phi\\), we interpolate the distance: \\(d(u_{1},u_{2})=\\alpha\\cdot(0.5+0.5\\cdot\\tanh(5\\mu\\cdot(u_{1}-u_{2}-1)))+(1- \\alpha)\\cdot(u_{1}-\\lfloor u_{2}\\rfloor)\\), where \\(\\alpha\\) is a hyper-parameter balancing the accuracy of approximation and the availability of gradients.
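The three distance surrogates can be compared directly; the sketch below implements the L1 distance used by the trilinear approximator \\(\\Phi^{\\prime}\\), the tanh-based distance of \\(\\Phi^{\\prime\\prime}\\) with \\(\\mu=20\\), and the interpolated variant, where \\(\\alpha\\) trades approximation accuracy against gradient availability (the floor term contributes no gradient w.r.t. \\(u_{2}\\)).

```python
import torch

MU = 20.0

def d_trilinear(u1, u2):
    # plain L1 distance used inside the trilinear approximator Phi'
    return (u1 - u2).abs()

def d_tanh(u1, u2, mu=MU):
    # smooth step used by the tanh approximator Phi''
    return 0.5 + 0.5 * torch.tanh(mu * (u1 - u2 - 1))

def d_interp(u1, u2, alpha, mu=MU):
    # interpolation between a sharper tanh step and the floor-based offset
    sharp = 0.5 + 0.5 * torch.tanh(5 * mu * (u1 - u2 - 1))
    return alpha * sharp + (1 - alpha) * (u1 - torch.floor(u2))
```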
### Objective functions
Our objective is to generate a synthetic adversarial object \\(S^{\\mathrm{adv}}\\) from an original object \\(S\\) by perturbing its vertices, such that the LiDAR-based detection model will make incorrect predictions. We first optimize \\(S^{\\mathrm{adv}}\\) against the semi-real simulator detection model \\(M\\).
\\[\\mathcal{L}(S^{\\mathrm{adv}})=\\mathcal{L}_{\\mathrm{adv}}(S^{\\mathrm{adv}},M)+ \\lambda\\mathcal{L}_{\\mathbf{r}}(S^{\\mathrm{adv}};S) \\tag{3}\\]
The objective function \\(\\mathcal{L}\\) consists of two terms: \\(\\mathcal{L}_{\\mathrm{adv}}\\) is the adversarial loss that achieves the target goals, while \\(\\mathcal{L}_{\\mathbf{r}}\\) is the distance loss that keeps the adversarial 3D object \\(S^{\\mathrm{adv}}\\) realistic. We optimize the objective function by manipulating the vertices. The distance loss is defined as follows:
\\[\\mathcal{L}_{\\mathbf{r}}=\\sum_{\\mathbf{v}_{i}\\in V}\\sum_{q\\in\\mathcal{N}( \\mathbf{v}_{i})}\\lVert\\Delta\\mathbf{v}_{i}-\\Delta\\mathbf{v}_{q}\\rVert_{2}^{2}+\\beta \\sum_{\\mathbf{v}_{i}\\in V}\\lVert\\Delta\\mathbf{v}_{i}\\rVert_{2}^{2}, \\tag{4}\\]
where \\(\\Delta\\mathbf{v}_{i}=\\mathbf{v}_{i}^{\\mathrm{adv}}-\\mathbf{v}_{i}\\) represents the displacement between the adversarial vertex \\(\\mathbf{v}_{i}^{\\mathrm{adv}}\\) and the pristine vertex \\(\\mathbf{v}_{i}\\), and \\(\\beta\\) is a hyperparameter balancing the two terms. The first term [22] is a Laplacian loss preserving the smoothness of the perturbation applied to the adversarial object \\(S^{adv}\\). The second term is an \\(L_{2}\\) distance loss that limits the magnitude of the perturbation.
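A sketch of the combined objective of Eqs. (3)-(4) is given below; `neighbors` is assumed to be a precomputed list of vertex-index pairs from the mesh 1-ring, `adv_loss` stands for either adversarial loss defined next, and \\(\\lambda=0.003\\) follows the experimental setup, while the value of \\(\\beta\\) here is only a placeholder.

```python
import torch

def smoothness_loss(delta, neighbors, beta=0.1):
    """Laplacian + L2 regularizer of Eq. (4) on per-vertex displacements.

    delta:     (V, 3) tensor of displacements v_adv - v.
    neighbors: list of (i, q) vertex-index pairs over the mesh 1-ring.
    beta:      placeholder value; the balance is tuned empirically.
    """
    i, q = zip(*neighbors)
    i = torch.as_tensor(i)
    q = torch.as_tensor(q)
    laplacian = ((delta[i] - delta[q]) ** 2).sum()   # neighboring displacements should agree
    magnitude = (delta ** 2).sum()                   # keep displacements small
    return laplacian + beta * magnitude

def total_loss(adv_loss, delta, neighbors, lam=0.003):
    # Eq. (3): adversarial term plus the weighted smoothness term
    return adv_loss + lam * smoothness_loss(delta, neighbors)
```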
**Objective: hide the inserted adversarial object.** As introduced in the background section, the existence of the object highly depends on the "positiveness" metric. \\(H(*,M,S)\\) denotes a function extracting the metric \\(*\\) from the model \\(M\\) given an object \\(S\\), and \\(\\mathcal{A}\\) is the mask of the target object's bounding box. Our adversarial loss is represented as follows:
\\[\\mathcal{L}_{\\mathrm{adv}}=H(\\mathrm{pos},M,S)*\\mathcal{A} \\tag{5}\\]
**Objective: changing label.** In order to change the predicted label of the object, the attack needs to increase the logits of the target label and decrease the logits of the ground-truth label, while also preserving a high positiveness. Based on this, our adversarial loss is written as
\\[\\mathcal{L}_{\\mathrm{adv}}=\\left(-H(\\mathrm{cls}_{y^{\\prime}},M,S)+H(\\mathrm{cls}_{y},M,S)\\right)\\cdot\\mathcal{A}*H(\\mathrm{pos},M,S) \\tag{6}\\]
In order to ensure that the adversarial behaviors still exist when the settings are slightly different, we create robust adversarial objects that can perform successful attacks within a range of settings, such as different object orientations and different positions relative to the LiDAR sensor. To achieve this goal, we sample a set of physical transformations and optimize the expectation of the loss. Concretely, we create a victim set \\(D\\) by rendering the object \\(S\\) at different positions and orientations. Instead of optimizing an adversarial object by attacking a single position and orientation, we generate a universal adversarial object \\(S\\) that attacks all positions and orientations in the victim set \\(D\\).
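In code, this expectation over poses can be approximated by averaging the loss over a sampled subset of the victim set, as sketched below; the `pose_loss(mesh, faces, position, yaw)` callable is a hypothetical wrapper around rendering, proxy aggregation, the detection model, and the adversarial loss for one pose.

```python
import random
import torch

def robust_attack_loss(vertices, faces, delta, victim_set, pose_loss, n_samples=8):
    """Average the attack loss over sampled positions/orientations of the object.

    victim_set: list of (position, yaw) pairs the object should fool.
    pose_loss:  callable (mesh, faces, position, yaw) -> scalar loss tensor.
    """
    poses = random.sample(victim_set, min(n_samples, len(victim_set)))
    mesh = vertices + delta
    losses = [pose_loss(mesh, faces, position, yaw) for position, yaw in poses]
    return torch.stack(losses).mean()
```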
### Blackbox Attack
In reality, it is possible that the attackers do not have complete access to the internal model parameters, _i.e._ the model is a black box. Therefore, in this subsection, we also develop an evolution-based approach to perform blackbox attack.
In evolution, a set of individuals represents the solutions in the search space, and the fitness score defines how good the individuals are. In our case, the individuals are the mesh vertices of our adversarial object, and the fitness score is \\(-\\mathcal{L}(S^{\\mathrm{adv}})\\). We initialize \\(m\\) sets of mesh vertices using the benign object \\(S\\). In each iteration, a new population of \\(n\\) vertex sets is generated by adding random perturbations, drawn from a Gaussian distribution \\(\\mathcal{N}(0,\\sigma)\\), to each vertex set in the old population. The \\(m\\) vertex sets with the top fitness scores remain for the next iteration, while the others are replaced. We iterate the process until we find a valid solution or reach a maximum number of steps.
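A minimal sketch of this loop is shown below, using the population sizes and noise scale quoted later in the experimental setup (\\(m=5\\), \\(n=500\\), \\(\\sigma=0.1\\)); `fitness` is assumed to return \\(-\\mathcal{L}(S^{\\mathrm{adv}})\\) for a candidate vertex set and only needs query access to the detector, and the optional `success` check is a hypothetical early-stopping criterion.

```python
import numpy as np

def evolve_attack(vertices, fitness, sigma=0.1, n=500, m=5, max_iters=200, success=None):
    """Blackbox evolution attack: keep the m fittest vertex sets each generation.

    vertices: (V, 3) array of the benign mesh vertices.
    fitness:  callable mapping candidate vertices to -L(S_adv); higher is better.
    """
    population = [vertices.copy() for _ in range(m)]
    for _ in range(max_iters):
        # Breed n offspring by perturbing randomly chosen survivors with Gaussian noise.
        offspring = [population[np.random.randint(m)]
                     + np.random.normal(0.0, sigma, size=vertices.shape)
                     for _ in range(n)]
        candidates = population + offspring
        scores = [fitness(c) for c in candidates]
        order = np.argsort(scores)[::-1][:m]       # indices of the m best candidates
        population = [candidates[i] for i in order]
        if success is not None and success(population[0]):
            break
    return population[0]
```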
## 5 Experiments
In this section, we first expose the vulnerability of the LiDAR-based detection system via the evolution-based blackbox algorithm, focusing on the goal of "hiding object" because missed obstacles can cause accidents in real life. We then show the qualitative and quantitative results of _LiDAR-Adv_ under whitebox settings. In addition, we also show that _LiDAR-Adv_ can achieve other adversarial goals such as "changing label". Moreover, point clouds are continuously captured in real life, so attacks in a single static frame may not have much effect in real-world cases. Therefore, in our experiments, we generate a universal robust adversarial object against a victim dataset which consists of different orientations and positions. We 3D-print this universal adversarial object and conduct real-world drive-by experiments to show that it can indeed pose a threat on the road.
### Experimental setup
We conduct the evaluation on the perception module of the Baidu Apollo autonomous driving platform (V2.0). We initialize the adversarial object as a resampled 3D cube-shaped CAD model using MeshLab [4]. For rendering, we implement a fully differentiable LiDAR simulator with predefined laser beam ray directions extracted from a real scene captured by the Velodyne HDL-64E sensor, as stated in § 4.2. It has around 2000 azimuth angles and around 60 elevation angles. We use Adam optimizers [10], and choose \\(\\lambda\\) as \\(0.003\\) in Eq. 3 using binary search. For the evolution-based blackbox algorithm, we choose \\(\\sigma=0.1\\), \\(n=500\\) and \\(m=5\\).
### Vulnerability analysis
Here, we first show the existence of the vulnerability using our evolution-based blackbox attack, with the goal of "hiding object". We generate adversarial objects of different sizes (50 cm and 75 cm in edge length). For each object, we select 45 different position and orientation pairs for evaluation, and the results are shown in Table 1. The results indicate that the LiDAR-based detection system is vulnerable. Visualizations of the adversarial objects are shown in Figure 3(a) and (c).
### _LiDAR-Adv_ with different adversarial goals
After showing the vulnerability of the LiDAR-based detection system, we focus here on whitebox settings to explore what a powerful adversary can do, since "the design of a system should not require secrecy" [16]. Therefore, we evaluate the effectiveness of our whitebox attack _LiDAR-Adv_ with the goal of "hiding object". We also evaluate the feasibility of _LiDAR-Adv_ to achieve another goal of "changing label".
**Hiding object.** We follow the same settings as in the above sections, and Table 1 shows the results. We find that _LiDAR-Adv_ achieves a 71% attack success rate with size 50 cm. The attack success rate is consistently higher than that of the evolution-based blackbox attack. Figure 3 (b) and (c) show visualizations of the adversarial objects. We visually observe that the adversarial objects generated by _LiDAR-Adv_ are smoother than those of the evolution-based method.
**Changing label.** The result shown in Figure 4 indicates that we can successfully change the label of the object. We also experiment with different initial shapes and target labels. More details can be found in Sec. D.
### _LiDAR-Adv_ on generating robust physical adversarial objects
To ensure that the object generated by _LiDAR-Adv_ preserves its adversarial behaviors under various physical conditions, we optimize the object by sampling a set of physical transformations such as possible positions and orientations. We show in Table 2 that the generated robust adversarial object is able to achieve the attack goal of hiding the object with a high success rate. An interesting phenomenon is that the attack performance under some unseen settings is even better than that within the controlled environment. This implies that our adversarial objects are robust enough to generalize to unseen settings.
Furthermore, we evaluate the generated robust adversarial object in the physical world by 3D printing the generated object. We collect the point cloud data using a Velodyne HDL-64E sensor with a real car driving by
\\begin{table}
\\begin{tabular}{l|c|c} \\hline \\hline \\multirow{2}{*}{Attacks} & \\multicolumn{2}{c}{Object size} \\\\ & 50cm & 75cm \\\\ \\hline _LiDAR-Adv_ & 32/45 (71\\%) & 23/45 (51\\%) \\\\ Evolution-based & 28/45 (62\\%) & 16/45 (36\\%) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Attack success rate of _LiDAR-Adv_ and evolution based method under different settings.
Figure 3: Adversarial meshes of different sizes can fool the detectors even with more LiDAR hits. We generate the object with _LiDAR-Adv_ and evolution-based method (Evo.).
and evaluate the collected traces on the LiDAR perception module of Baidu Apollo. As shown in Figure 4(a), we find that the adversarial object is not detected around the target position in any of the 36 frames. In comparison, the box object (in Figure 4(b)) is detected in 12 of 18 frames. The number of total frames is different due to the different vehicle speeds. More details can be found in Sec. D.
## 6 Conclusion
We show that LiDAR-based detection systems for autonomous driving are vulnerable to adversarial attacks. By integrating our proxy differentiable approximator, we are able to generate robust physical adversarial objects. We show that the adversarial objects are able to attack the Baidu Apollo system at different positions with various orientations. We also show that _LiDAR-Adv_ can generate much smoother objects than the evolution-based attack algorithm. Our findings raise great concerns about the security of LiDAR systems in AVs, and we hope this work will shed light on potential defense methods.
\\begin{table}
\\begin{tabular}{l c|c c|c c} \\hline \\multicolumn{2}{c|}{Controlled Setting} & \\multicolumn{4}{c}{Unseen Setting} \\\\ Distance (cm) \\& Orientation (\\({}^{\\circ}\\)) & Attack & \\multicolumn{2}{c|}{Distance (cm)} & \\multicolumn{2}{c}{Orientation (\\({}^{\\circ}\\))} \\\\ & & 0-50 & 50-100 & 0-5 & 0-10 \\\\ \\hline \\(\\left\\{0,\\pm 50\\right\\}\\times\\left\\{0,\\pm 2.5,\\pm 5\\right\\}\\) & 41/45 & 96/100 & 91/100 & 10/10 & 9/10 \\\\ \\(\\left\\{0,\\pm 50\\right\\}\\times\\left\\{0,\\pm 2.5,\\pm 5\\right\\}\\) & 43/45 & 96/100 & 90/100 & 8/10 & 10/10 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Attack success rates of _LiDAR-Adv_ at different positions and orientations under both controlled and unseen settings.
Figure 4: The adversarial mesh generated by _LiDAR-Adv_ is mis-detected as a βPedestrianβ.
Figure 5: Results of physical attack. Our 3D-printed robust adversarial object by _LiDAR-Adv_ is not detected by the LiDAR-based detection system in a moving car. Row 1 shows the point cloud data collected by LiDAR sensor, and Row 2 presents the corresponding images captured by a dash camera.
## References
* [1] A. Athalye and I. Sutskever. Synthesizing robust adversarial examples. _arXiv preprint arXiv:1707.07397_, 2017.
* [2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In _IEEE Symposium on Security and Privacy, 2017_, 2017.
* [3] N. Carlini and D. A. Wagner. Towards evaluating the robustness of neural networks. In _2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017_, pages 39-57, 2017. doi: 10.1109/SP.2017.49. URL [https://doi.org/10.1109/SP.2017.49](https://doi.org/10.1109/SP.2017.49).
* [4] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. Meshlab: an open-source mesh processing tool. In _Eurographics Italian chapter conference_, volume 2008, pages 129-136, 2008.
* [5] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In _Proceedings of the 25th international conference on Machine learning_, pages 160-167. ACM, 2008.
* [6] L. Deng, J. Li, J.-T. Huang, K. Yao, D. Yu, F. Seide, M. L. Seltzer, G. Zweig, X. He, J. D. Williams, et al. Recent advances in deep learning for speech research at microsoft. In _ICASSP_, volume 26, page 64, 2013.
* [7] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song. Robust physical-world attacks on deep learning models. _arXiv preprint arXiv:1707.08945_, 1, 2017.
* [8] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_, 2014.
* [9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [10] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [11] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. _arXiv preprint arXiv:1607.02533_, 2016.
* [12] H.-T. D. Liu, M. Tao, C.-L. Li, D. Nowrouzezahrai, and A. Jacobson. Adversarial geometry and lighting using a differentiable renderer. _CoRR_, abs/1808.02651, 2018.
* [13] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 2574-2582, 2016.
* [14] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In _Security and Privacy (EuroS&P), 2016 IEEE European Symposium on_, pages 372-387. IEEE, 2016.
* [15] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 652-660, 2017.
* [16] C. E. Shannon. Communication theory of secrecy systems. _Bell Labs Technical Journal_, 28(4):656-715, 1949.
* [17] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. _nature_, 529(7587):484, 2016.
* [18] M. Sun, J. Tang, H. Li, B. Li, C. Xiao, Y. Chen, and D. Song. Data poisoning attack against unsupervised node embedding methods. _arXiv preprint arXiv:1810.12881_, 2018.
* [19] C. Xiang, C. R. Qi, and B. Li. Generating 3d adversarial point clouds. _arXiv preprint arXiv:1809.07016_, 2018.
* [20] C. Xiao, R. Deng, B. Li, F. Yu, D. Song, et al. Characterizing adversarial examples based on spatial consistency information for semantic segmentation. In _Proceedings of the (ECCV)_, pages 217-234, 2018.
* [21] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song. Generating adversarial examples with adversarial networks. _arXiv preprint arXiv:1801.02610_, 2018.
* [22] C. Xiao, D. Yang, B. Li, J. Deng, and M. Liu. Meshadv: Adversarial meshes for visual recognition. In _CVPR_, 2018.
* [23] C. Xiao, J.-Y. Zhu, B. Li, W. He, M. Liu, and D. Song. Spatially transformed adversarial examples. _arXiv preprint arXiv:1801.02612_, 2018.
## Appendix A Differentiable Renderer
**LiDAR simulation.** The renderer simulates the physics of a LiDAR sensor that probes the objects in the scene by casting \\(N_{\\text{ray}}\\) laser rays: \\(R=\\{\\mathbf{r}_{i}\\in\\mathbb{R}^{3},\\|\\mathbf{r}_{i}\\|=1,i=1,2,\\cdots,N_{\\text{ray}}\\}\\), with \\(\\mathbf{r}_{i}\\) representing the direction of the \\(i\\)-th ray. Given a shape \\(S\\) with the surface \\(\\partial S\\) as input, the renderer computes the intersections of the rays \\(R\\) with the mesh faces in the scene. For each ray \\(\\mathbf{r}_{i}\\), the intersection coordinate \\(\\mathbf{p}_{i}\\) is computed through depth testing (assuming the center of the rays is at the origin, _i.e._ the reference frame of the LiDAR):
\\[\\begin{split}\\mathbf{p}_{i}=\\operatorname*{arg\\,min}_{\\mathbf{p}} \\{\\|\\mathbf{p}\\|\\mid\\exists t>0,\\mathbf{p}=t\\cdot\\mathbf{r}_{i},\\mathbf{p}\\in \\partial S\\},\\\\ i=1,2,\\cdots,N_{\\text{ray}}\\end{split} \\tag{7}\\]
**Object insertion.** Notice that we have a predefined set of rays \\(R\\). To obtain these rays, one can refer to the specifications of a LiDAR device. In our paper, we directly compute the directions from the captured background point cloud \\(P^{\\prime}\\), so that the rays are close to real-world cases:
\\[\\mathbf{r}_{i}=\\frac{\\mathbf{p}_{i}^{\\prime}}{\\|\\mathbf{p}_{i}^{\\prime}\\|} \\tag{8}\\]
With this, Eq. (7) becomes:
\\[\\begin{split}\\mathbf{p}_{i}=\\operatorname*{arg\\,min}_{\\mathbf{p}} \\{\\|\\mathbf{p}\\|\\mid\\mathbf{p}=\\mathbf{p}_{i}^{\\prime}\\vee\\mathbf{p}=t\\cdot \\mathbf{r}_{i},t>0,\\mathbf{p}\\in\\partial S\\},\\\\ i=1,2,\\cdots,N_{\\text{ray}}\\end{split} \\tag{9}\\]
This means when rays intersect with an object, the corresponding background points blocked by the above-ground parts of the object are removed during depth testing; if the object is below the ground, the intersections leave those corresponding background points intact also due to depth testing. In this way, we obtain a semi-real synthetic point cloud scene: the background points come from the captured real data; the foreground points are physically accurate simulations based on the captured real data.
## Appendix B Background
### LiDAR perception system
The detailed machine learning model input features and output metrics are shown in Table C and Table D, respectively.
\\begin{table}
\\begin{tabular}{l|l} \\hline \\hline
**Feature** & **Description** \\\\ \\hline
**Max height** & Maximum height of points in the cell. \\\\
**Max intensity** & Intensity of the highest point in the cell. \\\\
**Mean height** & Mean height of points in the cell. \\\\
**Mean intensity** & Mean intensity of points in the cell. \\\\
**Count** & Number of points in the cell. \\\\
**Direction** & Angle of the cell's center with respect to the origin. \\
**Distance** & Distance between the cell's center and the origin. \\
**Non-empty** & Binary value indicating whether the cell is empty or occupied. \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table C: Machine learning model input features extracted in the preprocessing phase.
\\begin{table}
\\begin{tabular}{l|l} \\hline \\hline
**Metric** & **Description** \\\\ \\hline
**Center offset** (off) & Offset to predicted center of the cluster the cell belongs to. \\\\
**Objectness** (obj) & The probability of a cell belonging to an obstacle. \\\\
**Positiveness** (pos) & The confidence score of the detection. \\\\
**Object height** (hei) & The predicted object height. \\\\ \\(i\\)**th Class Probability** (\\(\\text{cls}_{i}\\)) & The probability of the cell being from class \\(i\\) (vehicle, pedestrian, etc.). \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table D: Output metrics of the segmentation model.
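To make the preprocessing concrete, the sketch below aggregates the hand-crafted features of Table C for the points falling into a single BEV grid cell. The helper name, the dictionary representation, and the NumPy implementation are illustrative assumptions, not the perception system's actual code.

```python
import numpy as np


def cell_features(points, intensities, cell_center, origin_xy=np.zeros(2)):
    """Aggregate the per-cell input features listed in Table C (illustrative).

    points: (n, 3) LiDAR points inside one grid cell; intensities: (n,);
    cell_center: (2,) ground-plane center of the cell in the LiDAR frame.
    """
    rel = cell_center - origin_xy
    feats = {
        "direction": float(np.arctan2(rel[1], rel[0])),  # angle of cell center w.r.t. origin
        "distance": float(np.linalg.norm(rel)),
        "non_empty": float(len(points) > 0),
        "count": float(len(points)),
        "max_height": 0.0, "max_intensity": 0.0,
        "mean_height": 0.0, "mean_intensity": 0.0,
    }
    if len(points):
        top = int(np.argmax(points[:, 2]))
        feats.update(
            max_height=float(points[:, 2].max()),
            max_intensity=float(intensities[top]),       # intensity of the highest point
            mean_height=float(points[:, 2].mean()),
            mean_intensity=float(intensities.mean()),
        )
    return feats
```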
## Appendix C Generating Adversarial Object Against LiDAR Perception
### Gradient of proxy functions
Figure F visualizes the improvement of our tanh approximator \(\Phi^{\prime\prime}\) over the trilinear approximator \(\Phi^{\prime}\) in terms of the **count** feature and the **objectness** metric. Given an object \(S\), \(\Phi^{\prime}(X)_{\mathbf{a}}\) represents the aggregated feature \(a\) of the point cloud \(X\), and \(M(\Phi^{\prime}(X))_{\mathbf{a}}\) represents the model output with respect to metric \(a\). We observe that the approximator \(\Phi^{\prime}\) introduces errors due to the approximation, which ultimately leads to differences in the model output. The approximation error is largely reduced by using the more accurate approximator \(\Phi^{\prime\prime}\), which in turn reduces the error in the model output, as can be seen in Figure F.
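To illustrate what a tanh-based differentiable proxy for the hard per-cell count can look like, the sketch below replaces each indicator function (point inside cell) with a product of shifted tanh steps, so gradients flow back to the point coordinates. The function name, the sharpness constant, and the exact functional form are illustrative assumptions and may differ from the paper's \(\Phi^{\prime\prime}\).

```python
import torch


def soft_count(points_xy, cell_min, cell_max, sharpness=50.0):
    """Differentiable approximation of the per-cell point count.

    A hard count sums indicators 1[point inside cell], whose gradient is zero
    almost everywhere; the tanh "steps" below give a smooth surrogate whose
    accuracy/smoothness trade-off is controlled by `sharpness`.
    """
    def soft_inside(x, lo, hi):
        # ~1 inside [lo, hi], ~0 outside
        return 0.5 * (torch.tanh(sharpness * (x - lo)) - torch.tanh(sharpness * (x - hi)))

    inside_x = soft_inside(points_xy[:, 0], cell_min[0], cell_max[0])
    inside_y = soft_inside(points_xy[:, 1], cell_min[1], cell_max[1])
    return (inside_x * inside_y).sum()
```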
## Appendix D Additional results
### Changing label
We conduct experiments with 3 pristine meshes (cube, sphere, tetrahedron) and set the target label to each of the 4 labels other than the original label. The results are shown in Table E, showing that our _LiDAR-Adv_ has a high chance of tricking the detector into outputting the target labels, regardless of the pristine mesh it starts from.
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline & Cube & Sphere & Tetrahedron & Cylinder & Overall \\\\ \\hline Attack Success Rate & 75\\% & 100\\% & 75\\% & 50\\% & 75\\% \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table E: The attack success rate of the adversarial objects generated using _LiDAR-Adv_, starting from different types of pristine meshes. The target labels are the other four labels different from the original predictions.
#### D.1.1 _LiDAR-Adv_ on generating robust physical adversarial objects
In this subsection, we present additional results to evaluate the robustness of the generated objects against different positions and viewing angles. This provides insight into the performance of our adversarial object in real-world settings before we 3D-print the object.
**_LiDAR-Adv_ against different angles.** We generate the adversarial objects by attacking 9 angles simultaneously and evaluate the attack success rate across these angles. Our approach achieves a 100% attack success rate (Table F) both on our approximate model and on the Apollo system. This indicates that our designed differentiable proxy functions are accurate enough to transfer the adversarial behavior to Apollo. Figure G shows qualitative results of the adversarial object from different close-up views. We can observe that the adversarial example is smooth and can be easily reconstructed in the real world.
**_LiDAR-Adv_ against different positions.** Similarly, we generate a single robust adversarial object against different positions simultaneously, as shown in Figure H. We select 9 positions and use our algorithm to generate a universal robust adversarial example across these positions. Figure I shows 7 views of the generated object from different angles, compared to the original object. The adversarial example is smooth from all views. This shows that our approach is able to achieve the attack goal while keeping the shape plausible, so we can easily print the object to perform a physical attack. Table G shows the detailed results of our adversarial object at these 9 positions: it successfully attacks the system at all 9 positions.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
 & Angle & -10\({}^{\circ}\) & -5\({}^{\circ}\) & 0\({}^{\circ}\) & 5\({}^{\circ}\) & 10\({}^{\circ}\) \\ \hline
Objectness & Model & – & – & – & – & – \\
(Confid.) & Apollo & – & – & – & – & – \\ \hline \hline
\end{tabular}
\end{table}
Table F: Robust Adversarial Object against different angles. The original confidence is x. Our success rate is 100%. (– represents no object detected)
Figure G: The visualization of the adversarial object at different angles. In the benign frame (a), the system is able to detect the cube. When we replace the cube with our adversarial object, the system fails to detect the object at all three angles. We visualize the mesh along with the point clouds in close-up views in (b), (c) and (d).
### Physical experiments
We 3D-print our robust adversarial object at 1:1 scale and drive a real car mounted with LiDAR and dashcams. The adversarial object is placed on the road, and the car drives by, collecting scanned point clouds and the reference dashcam videos. For comparison, we also place the benign object, which is a box of the same size, at the same location and follow the same protocol when collecting the point clouds.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
Position & Ours & Apollo & Position & Ours & Apollo & Position & Ours & Apollo \\ \hline
(-50, -50) & – & – & (0, -50) & – & – & (50, -50) & – & – \\
(-50, 0) & – & – & (0, 0) & – & – & (50, 0) & – & – \\
(-50, 50) & – & – & (0, 50) & – & – & (50, 50) & – & – \\ \hline \hline
\end{tabular}
\end{table}
Table G: Robust Adversarial Object against different positions. Objectness (Confid.) is reported for our approximate model (Ours) and Apollo. The original object can be detected by Apollo. Our success rate is 100%. (– represents no object detected)
Figure I: The optimized robust adversarial objects from 6 principal views and a particular view, compared with the original pristine object.
Figure J: Our physical experiment setting. We 3D-print the generated adversarial object at 1:1 scale and drive a car mounted with LiDAR and dashcams to collect the scanned point clouds and the reference videos.

Deep neural networks (DNNs) are found to be vulnerable against adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physically adversarial _Stop Sign_" can be synthesized such that autonomous driving cars will misrecognize it as others (e.g., a speed limit sign). However, these image-based adversarial examples cannot easily alter 3D scans such as the LiDAR or radar widely equipped on autonomous vehicles. In this paper, we reveal the potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, _LiDAR-Adv_, to generate real-world adversarial objects that can evade LiDAR-based detection systems under various conditions. We first explore the vulnerabilities of LiDAR using an evolution-based blackbox attack algorithm, and then propose a strong attack strategy using our gradient-based approach _LiDAR-Adv_. We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We 3D-print our adversarial objects and perform physical experiments with LiDAR-equipped cars to illustrate the effectiveness of _LiDAR-Adv_. Please find more visualizations and physical experimental results on this website: [https://sites.google.com/view/lidar-adv](https://sites.google.com/view/lidar-adv).
# LiDAR-based HD Map Localization using Semantic Generalized ICP with Road Marking Detection
Yansong Gong, Xinglian Zhang, Jingyi Feng, Xiao He and Dan Zhang\\({}^{*}\\)
Yansong Gong ([email protected]), Xinglian Zhang, Jingyi Feng, Xiao He and Dan Zhang (corresponding author, [email protected]) are with UISEE Technology (Beijing) Co., Ltd. This version of the manuscript has been accepted by the IEEE/RSJ International Conference on Intelligent Robots and Systems (**IROS 2024**).
## I Introduction
Accurate localization is a prerequisite for autonomous driving. In unsheltered open-air environments, the global positioning system (GPS) is the predominant technology for accurate localization. However, the GPS-provided poses become unstable when satellite signals are obstructed by ceilings or viaducts. Therefore, localization through environmental perception using observation sensors, such as cameras and light detection and ranging (LiDAR) sensors, becomes necessary for autonomous vehicles, especially in GPS-denied environments.
In autonomous vehicle navigation, the detection of road markings stands out as the most widely employed technique for achieving precise and stable environmental perception. Subsequently, the detected road markings can be associated with semantic elements in high-definition (HD) maps to estimate the vehicle's pose. Cameras have been widely used for road marking detection [1, 2, 3, 4], because camera images contain rich texture information of environments. However, cameras are limited by their susceptibility to illumination variations and by distortions in bird's-eye view (BEV) lane representation, rendering them less robust for certain applications [5, 6].
In contrast, LiDAR sensors exhibit reduced sensitivity to varying illumination conditions and provide a precise 3D representation of the environment. Meanwhile, road markings can be extracted from road surfaces using LiDAR point clouds, leveraging their high reflectance from the retro-reflective materials [7, 8]. However, these LiDAR-based methods face challenges in balancing the need for denser point clouds with the essential requirement of real-time performance.
To address the challenges, a real-time LiDAR-based approach is proposed for road marking detection and registration with HD maps, as visualized in Fig. 1 (a). For road marking detection, an adaptive segmentation technique is first employed to efficiently isolate points correlated with road markings. Then, a spatio-temporal probabilistic local map is established by aggregating segmented points from historical scans, resulting in a dense point cloud representation of road markings. Finally, a LiDAR bird's-eye view (LiBEV) image is generated by partitioning the local map into grid cells, and a proficiently trained instance segmentation network (CenterMask [9] is selected in our implementation) is applied to accurately detect 9 different types of road markings, as shown in Fig. 1 (b).
Fig. 1: (a) The HD map localization of our approach is visualized, where the trajectory of vehicle localization is marked in green, and the current pose of the vehicle is represented by a red cube. The blue point cloud represents ground points from a single-frame LiDAR data. These ground points are adaptively segmented to identify highly reflective points. Subsequently, they are aggregated by successive frames of data to form a denser point cloud. Finally, semantic segmentation is applied to obtain a semantic point cloud, which is then registered with the HD map to estimate the vehicle's pose. (b) Road markings extracted using our approach are visualized, encompassing dashed lanes, solid lanes, stop lines, texts, arrows, diamond signs, triangle signs, curbs, and crosswalks.
As for the road marking registration, a semantic generalized iterative closest point (SG-ICP) algorithm is specifically designed to robustly align the detected road markings with the HD map by leveraging both their semantic and geometric attributes. In the proposed SG-ICP registration, the linear types of road markings are modeled as 1-manifolds embedded in the 2D space, making the constraints along the linear direction have minimal influence on the ultimate solution.
The contributions of this paper are summarized as follows.
1. A LiDAR-based road marking detection approach is proposed for online environmental perception, in which point density and real-time performance are balanced by adaptively segmenting high-reflectance points and updating spatio-temporal probabilistic local map. Finally, a LiBEV image is generated, and 9 different types of road markings can be detected accurately using an instance segmentation network on the LiBEV image.
2. A novel road marking registration algorithm is proposed for localization of autonomous vehicles on HD maps, in which linear road markings are represented as 1-manifolds embedded in 2D space. This representation can provide a robust and accurate solution for the registration problem with minimal influence on the under-constrained dimensions. Compared with the widely-used ICP, SG-ICP achieves higher accuracy of localization.
3. Comprehensive experiments are conducted in real-world scenarios, demonstrating real-time performance and localization accuracy of our system. Furthermore, experimental results indicate the approach's adaptability to various types of LiDAR sensors, as well as its robustness under different vehicle speeds and weather conditions.
## II Related work
In urban autonomous driving scenarios, the detection of road markings stands out as a crucial method for environmental perception. The road markings, typically painted on asphalt roads using retro-reflective materials, play a vital role in guiding autonomous vehicles. Leveraging the near-infrared wavelength of laser pulses, road markings exhibit higher reflectance compared to unpainted road surfaces [7]. As a result, the LiDAR sensor's ability to capture intensity measurements becomes instrumental in detecting these road markings [8].
LiDAR-based road marking detection is extensively applied in the generation of HD maps [10, 11, 12, 13, 14, 15]. Since the data for HD map generation is processed offline, consecutive scans are aggregated into a point cloud with a significantly high density of points, capturing detailed information about the surroundings [10]. However, processing such high-density points is time-consuming, rendering existing methods applied in HD map generation impractical for the online environmental perception and localization.
In existing studies focused on real-time perception, the detection of road markings is achieved by thresholding the measured intensities within a single LiDAR scan. A lane markings detection approach was developed by Team AnnieWAY for the DARPA Urban Challenge 2007, which detected the painted lane markings from the single scans by thresholding the points with high-reflectance gradients [16]. Similarly, the approach proposed in [17] detected highly reflective lane markings by employing a polar lane detector grid. In [18], a modified Otsu thresholding technique was employed to segment high-reflectance points obtained from a multilayer LiDAR into distinct categories, such as asphalt and road markings. Due to the sparsity of LiDAR measurements, the single-scan-based approaches face challenges detecting complete road markings, making the detection results susceptible to noise and lack robustness.
The approach proposed in [19] accumulated two consecutive frames of segmented road points, and then applied a fixed intensity threshold to isolate lane marking points. In the subsequent works [20, 21], the approach was extended to detect various types of high-reflectance landmarks, such as road signs and guard rail reflectors, to improve the localization accuracy. However, these multi-scan-based methods utilize a fixed intensity threshold to segment road marking points, which is sensitive to changes in environmental conditions and sensor types.
Recently, the deep learning approaches have been widely-used in the road marking detection tasks. The global feature correlation (LLDN-GFC) was introduced in [22] which leveraged the spatial characteristics of lane lines within the point cloud including sparsity, thinness, and elongation across the entirety of the ground points. This method was further improved in [23], resulting in a substantial reduction in computational cost. Nevertheless, LLDN-GFC focuses solely on extracting lane lines, overlooking other types of road markings. This limitation implies that the extracted lane lines can only provide lateral constraints on the vehicle's poses, potentially contributing to a degeneracy problem during the localization.
## III Methodology
In response to the limitations identified in previous researches, we propose a LiDAR-based road marking detection system for real-time environmental perception. Additionally, a novel road marking registration algorithm is introduced to enhance the localization accuracy of autonomous vehicles with HD maps. The flowchart of the proposed system is illustrated in Fig. 2.
### _LiDAR-based Real-Time Road Marking Detection_
Limited by the sparse distribution of LiDAR points, the stable and robust detection of road markings proves challenging when relying solely on individual frame of data. To overcome this limitation, successive LiDAR scans are aggregated into a local map, generating a denser point cloud that is conducive to effective road marking detection. In consideration of online requirements and high-reflectance road markings, the aggregation process can selectively extract points with higher intensities from the ground plane. This approach ensures the construction of a local map optimized for road marking detection, striking a balance between computational efficiency and information richness.
#### III-A1 High-Reflectance Point Segmentation
This procedure aims to adaptively identify points with high reflectance, which are often correlated with road markings painted using retro-reflective materials. To ensure adaptability across diverse sensors and scenarios, we introduce an adaptive segmentation approach designed to isolate high-reflectance points. This enhancement contributes to a more robust system overall.
For the efficiency of the system, only ground points are considered, which are extracted from the LiDAR scan utilizing the methodology detailed in [24]. This approach segments ground points based on height information and subsequently extracts them by partially fitting the ground plane. Then, a segmentation coefficient \\(\\rho_{k}\\) is introduced to distinguish high-reflectance points from the ground points in the \\(k\\)-th scan. Specifically, points with intensities below \\(\\rho_{k}\\) are excluded from the scan. Notably, the segmentation coefficient \\(\\rho_{k}\\) is not predetermined manually. Instead, it is dynamically estimated and continuously updated using a Kalman filter. The state of the Kalman filter is evolved according to the state-transition model
\\[\\rho_{k}=\\rho_{k-1}+w_{k}, \\tag{1}\\]
where \\(w_{k}\\sim\\mathcal{N}(0,Q_{k})\\) is the process noise. The measurement model is given by
\\[z_{k}=\\rho_{k}+v_{k}, \\tag{2}\\]
where the \\(v_{k}\\sim\\mathcal{N}(0,R_{k})\\) is the measurement noise. In each LiDAR scan, the mean \\(\\mu_{k}\\) and variance \\(\\sigma_{k}\\) of the intensities of the ground points are calculated. The measurement for the innovation computation is then determined as \\(z_{k}=\\mu_{k}+2\\sigma_{k}\\).
This adaptive approach relies on two assumptions. Firstly, it presumes that nearby consecutive roads should possess similar segmentation coefficients owing to the consistency in ground materials. Secondly, it assumes that the majority of LiDAR points lie on the common asphalt surface, while road marking points exhibit statistically higher intensities. These two assumptions are satisfied in most urban road environments, ensuring the effectiveness of the approach. Furthermore, segmenting these high-reflectance points is pivotal for efficiency, as it mitigates the computational load by excluding a large volume of points unrelated to road markings.
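A minimal sketch of this adaptive threshold update is shown below. It implements a scalar Kalman filter for Eqs. (1)-(2), using the initial process and measurement variances (0.1 and 2.0) reported in Section IV as fixed noise parameters; keeping them fixed, as well as the function and variable names, is a simplification for illustration.

```python
import numpy as np


def update_threshold(rho, P, ground_intensities, Q=0.1, R=2.0):
    """One Kalman-filter step for the segmentation coefficient rho_k.

    rho, P: previous threshold estimate and its variance.
    ground_intensities: (n,) intensities of the current scan's ground points.
    """
    mu, sigma = ground_intensities.mean(), ground_intensities.std()
    z = mu + 2.0 * sigma                 # measurement z_k = mu_k + 2*sigma_k

    P_pred = P + Q                       # predict: rho_k = rho_{k-1} + w_k
    K = P_pred / (P_pred + R)            # Kalman gain
    rho_new = rho + K * (z - rho)        # innovation-weighted correction
    P_new = (1.0 - K) * P_pred
    return rho_new, P_new


# Points with intensity above rho_new are kept as road-marking candidates:
# mask = ground_intensities > rho_new
```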
#### III-A2 Probabilistic Local Map Update
A local map is constructed through the aggregation of spatio-temporally successive LiDAR scans using an odometry, incorporating high-reflectance points to generate a dense point cloud for road marking detection. However, with the accumulation of scan data, the volume of information grows substantially, leading to an increasing computational burden.
To achieve real-time performance, a novel approach for probabilistically updating the local map is introduced. This approach employs a probabilistic discarding strategy, wherein each point in the map is selectively removed based on a calculated probability value. The probability assigned to the \\(i\\)-th point in the local map, denoted as \\(p_{i}\\), is computed by
\\[p_{i}=\\frac{1}{1+(\\left|k-k_{i}\\right|/\\eta)^{2}}, \\tag{3}\\]
where \\(k\\) denotes the index of the current frame, and \\(k_{i}\\) represents the frame from which the \\(i\\)-th point originates. \\(\\eta\\) is a manually-set parameter to determine the probability of discarding old points. As \\(\\eta\\) increases, old points are more likely to be retained, resulting in a higher density of points in the probabilistic local map.
As indicated by (3), higher retaining probability values are assigned to points newly observed by the LiDAR sensor. This strategy effectively ensures the spatio-temporal consistency of the local map and alleviates the impact of accumulated errors over time. Moreover, compared with aggregation methods that employ scans within a fixed window, the proposed approach ensures a more seamless transition in the local map data, thereby yielding higher-quality LiBEV images.
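A sketch of this probabilistic update is given below: each stored point is retained with the probability of Eq. (3) (using \(\eta=50\) as in our experiments), and the newly segmented high-reflectance points are appended with the current frame index. The NumPy representation and the helper names are illustrative.

```python
import numpy as np


def update_local_map(map_pts, map_frame_ids, new_pts, k, eta=50.0, rng=np.random):
    """Probabilistic local-map update (sketch of Eq. (3)).

    map_pts: (M, 3) stored points; map_frame_ids: (M,) frame index of each point;
    new_pts: (N, 3) high-reflectance points of frame k.
    """
    age = np.abs(k - map_frame_ids)
    keep_prob = 1.0 / (1.0 + (age / eta) ** 2)        # older points are dropped more often
    keep = rng.random(len(map_pts)) < keep_prob

    map_pts = np.vstack([map_pts[keep], new_pts])
    map_frame_ids = np.concatenate([map_frame_ids[keep],
                                    np.full(len(new_pts), k)])
    return map_pts, map_frame_ids
```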
#### III-A3 LiBEV Image Generation
The generation of the LiBEV image involves dividing the local map into grid cells on the ground plane, where each cell corresponds to a pixel in the LiBEV image. Within each cell, the RGB value of the corresponding pixel is determined by mapping the maximum intensity value among the enclosed points using a color map.
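The rasterization can be sketched as follows: points are binned into ground-plane cells, the maximum intensity per cell is recorded, and a color map converts the result to an RGB LiBEV image. The cell size and the choice of the JET color map (via OpenCV, an assumed dependency) are illustrative, not values specified above.

```python
import numpy as np
import cv2  # assumed dependency, used only for the color map


def make_libev_image(points, intensities, x_range, y_range, cell_size=0.1):
    """Rasterize the probabilistic local map into a LiBEV image (illustrative)."""
    w = int((x_range[1] - x_range[0]) / cell_size)
    h = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.zeros((h, w), dtype=np.float32)

    cols = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    # max-intensity aggregation per cell
    np.maximum.at(grid, (rows[valid], cols[valid]), intensities[valid])

    gray = (255 * grid / max(grid.max(), 1e-6)).astype(np.uint8)
    return cv2.applyColorMap(gray, cv2.COLORMAP_JET)
```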
Our implementation leverages a proficient instance segmentation network, specifically CenterMask [9], to accurately segment semantic road markings from the generated LiBEV images. Subsequently, points located within the grid cells corresponding to the segmented pixels are extracted from the local map. The extraction yields a semantic point cloud wherein each point is labeled with a specific road marking category. Notably, our approach is designed to accommodate the segmentation of up to 9 types of road markings, including dashed lanes, solid lanes, stop lines, texts, arrows, diamond signs, triangle signs, curbs and crosswalks, as shown in Fig. 1 (b). The incorporation of diverse semantic road markings, in contrast to approaches solely focused on lane lines, significantly enhances the robustness of map matching-based pose estimation. In addition, since annotating image semantic segmentation is faster and more convenient than annotating point clouds, the proposed approach converts point clouds into images, which is more conducive to deployment in practical applications.

Fig. 2: The flowchart of the proposed approach.
### _SG-ICP-based Road Marking Registration with HD Map_
After road marking detection, the detected road markings can be associated with their corresponding elements in the HD map sharing the same semantic label. Finally, road marking registration is employed to estimate the pose of the vehicle in 2D space. In this subsection, the SG-ICP algorithm is introduced for robust registration of detected road markings from LiDAR scans with semantic elements in the HD map. In our proposed SG-ICP, detected road markings are divided into three categories, including lines, line segments and others. Solid lanes and curbs exhibit a linear distribution in their point clouds and lack distinct endpoints, and thus they are classified as lines. Dashed lanes, sidewalks, and stop lines also have a linear distribution but possess endpoints, leading to their classification as line segments. Texts, arrows, diamond signs and triangle signs do not have linear point cloud distributions, and thus they are classified as others.
For lines, the lack of endpoints leads to the complete loss of constraints along the linear direction of these markings. For line segments, endpoints can provide constraints along the linear direction. However, due to inaccurate endpoint estimation, registration between endpoints still leads to significant localization errors along the direction of the line segment. Consequently, for linear markings, constraints along their linear direction need to have minimal influence on the pose estimation, mitigating the effect of under-constraint issues in the overall pose estimation process. As for others, their point clouds are not linearly distributed, thus often providing sufficient constraints on the pose estimation. In our algorithm, the registration of the three different categories of markings is organized into a unified representation using the objective function of generalized ICP (GICP).
The GICP algorithm incorporates a probabilistic model into the optimization procedure, as defined by
\\[\\begin{split}\\mathbf{T}^{*}=&\\arg\\min_{\\mathbf{T}}\\bigg{(} \\sum_{i=1}^{n}(\\mathbf{q}_{mi}-\\mathbf{T}\\cdot\\mathbf{q}_{Li})^{T}\\\\ &(\\mathbf{C}_{mi}+\\mathbf{RC}_{Li}\\mathbf{R})^{-1}(\\mathbf{q}_{mi}-\\mathbf{T}\\cdot \\mathbf{q}_{Li})\\bigg{)},\\end{split} \\tag{4}\\]
where \\(\\mathbf{q}_{mi}\\) and \\(\\mathbf{q}_{Li}\\) represent a pair of corresponding points, belonging respectively to the HD map element and the labeled point cloud. Their correspondences are established through the nearest neighbor search strategy in the ICP algorithm. \\(\\mathbf{C}_{mi}\\) and \\(\\mathbf{C}_{Li}\\) represent the covariance matrices of points from map and labeled point cloud, respectively, which are appropriately constructed in our semantic GICP (SG-ICP) to mitigate the influence of under-constrained direction.
In our SG-ICP, the probabilistic model is specifically designed by exploiting the semantic and geometric attributes inherent in semantic road markings. For the points lying on the \\(i\\)-th detected road marking instance, the covariance matrix \\(\\tilde{\\mathbf{C}}_{Li}\\) is estimated by
\\[\\tilde{\\mathbf{C}}_{Li}=\\frac{1}{n_{i}-1}\\sum_{j}^{n_{i}}(\\mathbf{p}_{L(i,j)}-\\tilde{ \\mathbf{p}}_{Li})\\cdot(\\mathbf{p}_{L(i,j)}-\\tilde{\\mathbf{p}_{Li}^{{}^{\\prime}}})^{T}, \\tag{5}\\]
where \\(\\mathbf{p}_{\\mathbf{L(i,j)}}\\) represents the \\(j\\)-th point of the \\(i\\)-th road marking instance, \\(\\tilde{\\mathbf{p}_{Li}^{{}^{\\prime}}}\\) represents the centroid of the points. Then, the singular value decomposition (SVD) is performed on \\(\\mathbf{C}_{Li}\\).
\\[\\tilde{\\mathbf{C}}_{Li}=\\mathbf{U}_{i}\\tilde{\\mathbf{\\Sigma}}_{i}\\mathbf{V}_{i},\\quad\\tilde{ \\mathbf{\\Sigma}}_{i}=\\begin{bmatrix}\\sigma_{1}^{2}&0\\\\ 0&\\sigma_{2}^{2}\\end{bmatrix}, \\tag{6}\\]
\\(\\sigma_{1}\\) and \\(\\sigma_{2}\\) satisfy \\(\\sigma_{1}\\geq\\sigma_{2}\\). Then, a matrix \\(\\mathbf{\\Sigma}_{i}=\\operatorname{diag}(1,\\epsilon)\\) is constructed, with \\(\\epsilon\\) satisfying
\\[\\epsilon=\\begin{cases}1e-6,&\\text{if the marking is classified lines};\\\\ 1e-1,&\\text{if the marking is classified line segments};\\\\ 1,&\\text{if the marking is classified others}.\\end{cases} \\tag{7}\\]
The three categories of road markings have distinct values of \\(\\epsilon\\), representing the different constraints along the line direction. A value of \\(\\epsilon\\) closer to \\(1.0\\) indicates a stronger constraint along the line direction. The final covariance matrix corresponding to the \\(i\\)-th road marking can be calculated by
\\[\\mathbf{C}_{Li}=\\mathbf{U}_{i}\\mathbf{\\Sigma}_{i}\\mathbf{V}_{i}. \\tag{8}\\]
The \\(i\\)-th semantic element in the HD map is represented as \\(\\{\\mathbf{v}_{mi},l_{mi},P_{mi}\\}\\), where \\(\\mathbf{v}_{mi}\\), \\(l_{mi}\\) and \\(P_{mi}=\\{\\mathbf{p}_{m(i,j)},j=1,2,\\cdots,n_{mi}\\}\\) denote the main direction, the semantic label and the point set of the map element, respectively. The rotation that rotates the basis vector \\(\\mathbf{e}_{1}=[1,0]^{T}\\) to the direction \\(\\mathbf{v}_{mi}\\) can be calculated by
\\[\\mathbf{R}_{vi}=\\cos\\theta\\cdot\\mathbf{I}+(1-\\cos\\theta)\\mathbf{r}\\mathbf{r}^{T}+\\sin\\theta \\cdot[\\mathbf{r}]_{\\times}, \\tag{9}\\]
where
\\[\\mathbf{r}=[\\mathbf{e}_{1}]_{\\times}\\cdot\\mathbf{v}_{mi},\\quad\\theta=\\arccos(\\mathbf{e}_{1}^{T }\\mathbf{v}_{mi}). \\tag{10}\\]
The symbol \\([\\mathbf{r}]_{\\times}\\) denote the skew-symmetric matrix associated with the vector \\(\\mathbf{r}\\). The covariance matrix corresponding to the \\(i\\)-th semantic element is calculated by
\\[\\mathbf{C}_{mi}=\\mathbf{R}_{vi}\\mathbf{\\Sigma}_{i}\\mathbf{R}_{vi}. \\tag{11}\\]
Finally, associations can be established between the semantic point cloud and the closest points of the map elements sharing the same semantic label. Meanwhile, their corresponding covariance matrices calculated in (8) and (11) are then substituted into the objective function (4) to initiate the optimization and iteration process. The probabilistic model of SG-ICP characterizes both the semantic and geometric attributes for road marking registration, which improves the accuracy of pose estimation.
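The construction of the marking-side covariance in Eqs. (5)-(8) can be summarized by the sketch below. The epsilon table mirrors Eq. (7) and the category mapping follows the classification above; the function signature, category keys, and NumPy implementation are illustrative assumptions.

```python
import numpy as np

# solid lanes / curbs -> "line"; dashed lanes, stop lines, crosswalks -> "line_segment";
# texts, arrows, diamond and triangle signs -> "other"
EPSILON = {"line": 1e-6, "line_segment": 1e-1, "other": 1.0}


def marking_covariance(points_2d, category):
    """Build the SG-ICP covariance for one detected road marking (Eqs. 5-8).

    points_2d: (n, 2) ground-plane points of the marking (n >= 2).
    The dominant direction of the marking keeps unit variance while the other
    direction is scaled by epsilon, so constraints along the linear direction
    of lines have relatively little influence on the registration.
    """
    centered = points_2d - points_2d.mean(axis=0)
    cov = centered.T @ centered / (len(points_2d) - 1)   # Eq. (5)
    U, _, Vt = np.linalg.svd(cov)                        # Eq. (6)
    sigma = np.diag([1.0, EPSILON[category]])            # Eq. (7)
    return U @ sigma @ Vt                                # Eq. (8)
```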
## IV Experimental Evaluation
In this section, extensive experiments are conducted using data collected from diverse scenarios and vehicular platforms, demonstrating the accuracy and robustness of the proposed approach across different scenes and types of LiDAR sensors.
### _Experimental Setup_
All experiments are conducted on the NVIDIA Jetson AGX Xavier. The acquisition frequency of LiDAR data is set to \(10Hz\). The global localization results of vehicles are recorded using Real-Time Kinematic (RTK) and temporally synchronized with the LiDAR data. These RTK results are used as ground truths. The experimental scenarios and the corresponding HD maps are shown in Fig. 3. _Fangshan1_ and _Fangshan2_ represent two open urban scenarios in Beijing Fangshan, which cover a \(0.30km\times 0.25km\) area and span a length of \(2.0km\), respectively. _Jiashan_ depicts an internal road measuring \(0.20km\) located in a test field in Zhejiang Jiashan. _Airport_ represents an internal road spanning a length of \(4.0km\) located within an airport. For the parameters of our approach, the initial variances of the state-transition model and the measurement model were experimentally set to \(0.1\) and \(2.0\) in the Kalman filter, respectively. The parameter \(\eta\) determining the discarding probability of local map points was set to \(50.0\) empirically.
### _Evaluation on Road Marking Detection_
In this subsection, an experiment is conducted to assess the performance of our road marking detection approach using precision-recall metrics. To ensure a comprehensive evaluation, 80% of the manually annotated LiBEV data is randomly selected for training, while the remaining 20% is reserved for testing. The manual annotations serve as the ground truths against which we evaluate the precision and recall of our approach in detecting road markings. A true positive sample is identified when the Intersection over Union (IoU) between the detected instance and its corresponding annotated instance exceeds 0.5, and both instances share the same semantic label. Conversely, a false positive sample represents a detection result for which no corresponding instance with the same semantic label and an IoU greater than 0.5 could be found in the ground truths. Meanwhile, a false negative sample indicates that an instance present in the ground truth is not successfully detected by our approach.
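A sketch of this matching protocol is given below; it greedily matches detections to unmatched ground-truth instances of the same class with IoU above 0.5 and derives precision, recall and F1 from the resulting counts. The greedy matching order, the helper names, and the mask-based IoU function are illustrative assumptions.

```python
def prf_from_matches(detections, ground_truths, iou_fn, iou_thr=0.5):
    """detections / ground_truths: lists of (mask, label); iou_fn computes mask IoU."""
    matched = set()
    tp = 0
    for det_mask, det_label in detections:
        for j, (gt_mask, gt_label) in enumerate(ground_truths):
            if j in matched or det_label != gt_label:
                continue
            if iou_fn(det_mask, gt_mask) > iou_thr:
                matched.add(j)
                tp += 1
                break
    fp = len(detections) - tp          # unmatched detections
    fn = len(ground_truths) - tp       # missed ground-truth instances
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```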
The precision, recall and F1-score for all types of road markings supported by the approach are presented in TABLE I. The proposed approach successfully detects 9 distinct types of road markings, assigning semantic labels to each point in the LiDAR data, as visually depicted in Fig. 1 (b). The experimental results demonstrate the effectiveness of our approach in successfully detecting common road elements, achieving high precision and recall rates. Notably, certain elements such as curbs and crosswalks exhibit a slight decrease in precision, attributed to their visual similarity to lane markings in LiBEV images. However, the subsequent HD map registration steps effectively mitigate the impact of these false positives on localization. Moreover, the proposed detection approach is highly efficient, meeting real-time perception requirements for vehicles, which is detailed in Section IV-D.
### _Evaluation on Localization_
The SG-ICP algorithm proposed in this paper is assessed based on lateral, longitudinal, and yaw errors. The evaluation encompasses eight experimental sequences, spanning four scenarios and employing seven different LiDAR configurations, demonstrating the flexibility of the proposed approach. The widely-used ICP algorithm is chosen as the baseline for evaluation, and the comparative results are presented in TABLE II. As indicated in the table, the SG-ICP algorithm outperforms the ICP-based approach in most sequences. Notably, SG-ICP has a clear superiority in terms of lateral and yaw accuracy, due to the emphasis placed on the sufficiently constrained direction during the SG-ICP calculation process.
Fig. 4 depicts the visualized trajectories estimated by SG-ICP-based and ICP-based approaches, respectively, in comparison with the ground-truths acquired through RTK. It is worth noting that the substantial localization error of SG-ICP and ICP are marked with purple and red lines, respectively, where estimated distance errors exceed 2.0 m or yaw errors surpass 5.0\\({}^{\\circ}\\). It is evident from Fig. 4 that SG-ICP demonstrates significantly fewer occurrences of substantial localization errors compared to ICP across all sequences.
In conclusion, the proposed approach achieves centimeter-level lateral localization accuracy in a variety of environmental scenarios and with different types of LiDAR sensors. The tested sensors encompass not only traditional mechanical LiDARs like VLP-32C, Hesai-Pandar64, and Hesai-XT16 but also solid-state LiDARs such as HAP. The comprehensive experiments illustrate the robustness and adaptability of the approach across diverse scenarios and sensor types. In addition, it is worth noting, as indicated in TABLE II, that the longitudinal error is slightly larger than the lateral error. In the urban road scenario where autonomous driving occurs, the majority of road markings exhibit a linear shape along the longitudinal direction. Consequently, the stronger influence of lateral constraints, compared to longitudinal constraints, contributes to a more accurate and precise lateral localization outcome. Nevertheless, our approach ensures that the longitudinal error remains below 0.20 m, thereby ensuring its effectiveness in autonomous driving applications.

Fig. 3: The experimental scenarios (top) and their corresponding HD maps (bottom). (a) _Fangshan1_ (b) _Jiashan_ (c) _Fangshan2_ (d) _Airport_.
### _Evaluation on Runtime_
During the experiments conducted on the eight sequences, the runtime for each sub-step of our approach is detailed in TABLE III. The corresponding box-plot depicting the statistical results can be observed in Fig. 5. It is worth noting that the runtime of the detection sub-step is divided into CPU time and GPU time. CPU time refers to the time consumed by the steps processed by the CPU, including high-reflectance point segmentation, probabilistic local map update, and LiBEV image generation. GPU time refers to the inference time of the instance segmentation of the LiBEV image. The registration sub-step is processed only by the CPU. It can be seen that, when utilizing the onboard Xavier processor, the average and maximum runtime of the overall approach consistently remain below 50 ms and 200 ms, respectively, across various scenes and types of LiDAR sensors. Consequently, the efficiency of the proposed system proves sufficient for real-time perception and localization in autonomous vehicle applications.
Moreover, it is worth highlighting that the runtime on the _S1_ sequence is only 8.35 ms longer than that on the _S2_ sequence, despite the fact that the data quantity of _S1_ is twice that of _S2_ (as indicated in the _LiDAR type_ column in TABLE II). This observation demonstrates that the runtime does not exhibit a linear increase with the quantity of the point cloud data, because the substantial reduction in the quantity of the aggregated local map points is achieved through a probabilistic discarding strategy.
### _Evaluation on Robustness_
To demonstrate the robustness of our approach, we evaluate the localization errors at different vehicle speeds, as outlined in TABLE IV. In particular, the vehicle was driven at speeds of 20 km/h, 40 km/h, and 60 km/h in the _Fangshan1_ scenario using 1 Hesai-Pandar64 LiDAR. The obtained results were then compared against the ground-truth provided by RTK. As evident from TABLE IV, there is a slight increase in the localization error with higher driving speeds. This can be attributed to the fact that, as the driving speed increases, the point cloud data captured by the LiDAR sensors is more prone to motion distortions.
\\begin{table}
\\begin{tabular}{c c c c c c c c c} \\hline \\hline Sequence & Scene & LiDAR type & \\multicolumn{2}{c}{Longitudinal error (m)} & \\multicolumn{2}{c}{Lateral error (m)} & \\multicolumn{2}{c}{Yaw error (deg)} \\\\ & & ICP & SG-ICP & ICP & SG-ICP & ICP & SG-ICP \\\\ \\hline S1 & _Fangshan1_ & 2\\#Hesai-Pandar64 & 0.158 & **0.137** & 0.050 & **0.043** & 0.233 & **0.208** \\\\ S2 & _Fangshan1_ & 1\\#Hesai-Pandar64 & 0.165 & **0.160** & 0.051 & **0.041** & 0.386 & **0.346** \\\\ S3 & _Fangshan1_ & 2\\#VLP-32C & 0.167 & **0.130** & 0.127 & **0.043** & 0.302 & **0.188** \\\\ S4 & _Fangshan1_ & 2\\#HAP & 0.164 & **0.139** & 0.080 & **0.058** & 0.299 & **0.218** \\\\ S5 & _Jiashan_ & 2\\#VLP-32C + 1\\#VLP-16 & 0.082 & **0.077** & 0.055 & **0.050** & 0.547 & **0.401** \\\\ S6 & _Jiashan_ & 3\\#HAP & **0.099** & 0.106 & 0.082 & **0.062** & 0.330 & **0.309** \\\\ S7 & _Fangshan2_ & 1\\#Hesai-Pandar64 & **0.124** & 0.125 & 0.091 & **0.040** & 0.277 & **0.184** \\\\ S8 & _Airport_ & 2\\#Hesai-XT16 & 0.129 & **0.128** & 0.116 & **0.050** & 0.447 & **0.230** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE II: Average localization error compared to the ground-truth obtained through RTK.
\\begin{table}
\\begin{tabular}{c c c c c c c c c c} \\hline \\hline & Dashed & Solid & & & & & Diamond & Triangle & & & \\\\ & lane & lane & & & & & sign & sign & & & \\\\ \\hline Precision & 94.05\\% & 86.11\\% & 81.16\\% & 89.19\\% & 91.70\\% & 94.66\\% & 88.23\\% & 64.52\\% & 72.77\\% & 86.31\\% \\\\ Recall & 96.50\\% & 85.48\\% & 75.47\\% & 100.00\\% & 96.57\\% & 97.37\\% & 100.00\\% & 76.11\\% & 71.96\\% & 88.56\\% \\\\ F1-score & 95.27\\% & 85.79\\% & 78.21\\% & 94.29\\% & 94.07\\% & 95.99\\% & 93.75\\% & 69.83\\% & 72.36\\% & 85.20\\% \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: The precision, recall and F1-score for all types of road marking supported by our approach.
\\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Seq. & Detection & Detection & Registration & Total Time \\
 & (CPU) & (GPU) & & Cost \\ \hline
S1 & 28.20ms & 16.47ms & 4.63ms & 49.30ms \\
S2 & 20.34ms & 16.36ms & 4.25ms & 40.95ms \\
S3 & 22.13ms & 16.42ms & 4.43ms & 42.98ms \\
S4 & 19.66ms & 18.21ms & 3.48ms & 41.35ms \\
S5 & 22.20ms & 16.55ms & 3.96ms & 42.71ms \\
S6 & 17.46ms & 19.13ms & 4.73ms & 41.32ms \\
S7 & 18.69ms & 14.74ms & 3.46ms & 36.89ms \\
S8 & 19.98ms & 16.75ms & 2.63ms & 39.36ms \\ \hline \hline
\end{tabular}
\\end{table} TABLE III: The runtime for each sub-step of the proposed approach.
Despite the slight increase in localization error with higher driving speeds, the proposed approach consistently maintains a relatively high level of localization accuracy. This demonstrates the robustness of the approach across varying vehicle speeds. Regarding real-time performance, as indicated in TABLE IV, the overall system runtime is minimally affected by increases in driving speed. This further highlights the robustness of the system in handling variations in vehicle speed.
To illustrate the robustness of our approach under varying weather conditions, experiments were conducted in different settings. As depicted in Fig. 6, the intensity distribution of LiDAR point clouds on dry and wet road surfaces (on sunny and rainy days) typically exhibits significant differences. As a result, rainy weather poses considerable challenges to intensity-based road marking extraction, particularly for methods relying on fixed intensity thresholds. The LiBEV images generated under both sunny and rainy weather conditions are depicted in Fig. 7. It is evident that the proposed adaptive threshold-based approach consistently provides stable and accurate segmentation results, even in the presence of significantly different intensity distributions caused by varying weather conditions. TABLE V presents a comparison of localization errors under both dry and wet ground conditions in the _Fangshan1_ scenario, employing 1 Hesai-Pandar64 LiDAR. Although more noise in LiBEV images causes an increase in localization error when driving on wet ground, it can still ensure average lateral error within 0.10 m and longitudinal error within 0.20 m. These results demonstrate the robustness of our approach in addressing challenging weather conditions.
\\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Speed} & Longitudinal & Lateral & Yaw error & Time cost \\ & error (m) & error (m) & (deg) & (ms) \\ \hline
20 km/h & 0.166 & 0.048 & 0.366 & 42.56 \\\\
40 km/h & 0.124 & 0.067 & 0.620 & 44.40 \\\\
60 km/h & 0.153 & 0.091 & 0.775 & 44.92 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: The localization errors at various driving speeds.
Fig. 4: Comparison between the trajectories estimated by SG-ICP and ICP is conducted using ground-truth trajectories provided by RTK. The substantial localization error of SG-ICP and ICP are marked with purple and red lines, respectively, where estimated distance errors exceed 2.0 m or yaw errors surpass 5.0\\({}^{\\circ}\\).
Fig. 5: A box plot illustrating the time consumption for each sub-step of the proposed approach.
Fig. 6: Histograms illustrating the LiDAR intensity distribution on (a) sunny and (b) rainy days, respectively, in the _Fangshan1_ scenario.
## V Conclusion
In this paper, we introduce a LiDAR-based online environmental perception and localization system with high efficiency and robustness. The proposed road marking detection approach employs a novel adaptive segmentation technique to enhance efficiency, and utilizes a spatio-temporal probabilistic local map to ensure the density of points. For road marking registration, an SG-ICP algorithm is designed, modeling linear road markings as 1-manifolds embedded in 2D space. Our approach minimizes the influence of constraints along the linear direction of markings to address the under-constrained problem, and thus improves the localization accuracy. Extensive experiments conducted in real-world urban environments demonstrate the effectiveness and robustness of the proposed system, showcasing its potential for reliable online environmental perception and localization. However, our approach cannot be applied to roads without road markings on the ground surface, due to the lack of high-reflectance points. In future work, we will explore the effective utilization of above-ground information to improve the robustness of localization.
## References
* [1]M. Bai, G. Mattyus, N. Homayounfar, S. Wang, S. K. Lakshmikanth, and R. Urtasun (2018) Deep multi-sensor lane detection. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol., pp. 3102-3109. Cited by: SSI.
In GPS-denied scenarios, a robust environmental perception and localization system becomes crucial for autonomous driving. In this paper, a LiDAR-based online localization system is developed, incorporating road marking detection and registration on a high-definition (HD) map. Within our system, a road marking detection approach is proposed with real-time performance, in which an adaptive segmentation technique is first introduced to isolate high-reflectance points correlated with road markings, enhancing real-time efficiency. Then, a spatio-temporal probabilistic local map is formed by aggregating historical LiDAR scans, providing a dense point cloud. Finally, a LiDAR bird's-eye view (LiBEV) image is generated, and an instance segmentation network is applied to accurately label the road markings. For road marking registration, a semantic generalized iterative closest point (SG-ICP) algorithm is designed. Linear road markings are modeled as 1-manifolds embedded in 2D space, mitigating the influence of constraints along the linear direction, addressing the under-constrained problem and achieving a higher localization accuracy on HD maps than ICP. Extensive experiments are conducted in real-world scenarios, demonstrating the effectiveness and robustness of our system.
Index Terms: Localization, autonomous vehicles, road markings, LiDAR, HD map.
# PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote Sensing Images
Guangshuai Gao, Qingjie Liu,, Zhenghui Hu, Lu Li, Qi Wen, Yunhong Wang,
## I Introduction
Object counting, which aims to estimate the accurate number of object instances in images or videos, has been attracting remarkable interest in recent years owing to its potential value in traffic monitoring [1], urban planning [2], public safety [3], and crowd behavior understanding [4]. Additionally, object counting has been applied in many practical applications, such as cell microscopy [5], animal counting [6], and remote sensing applications [7, 8, 9, 10].
Recent prevalent object counting methods have been following the pioneering work [5], which estimates the count over a density map. Lately, driven by the powerful feature representation ability of Convolutional Neural Networks (CNNs), a lot of CNN-based density estimation algorithms have been presented. Although remarkable progress has been achieved, there still exist challenges limiting counting performance, such as large scale variation, complex background interference, and non-uniform density distribution, which are much tougher in remote sensing images. Taking Fig. 1 (a) as an example, the scale variation across ship instances is large because of the different types of ships. In Fig. 1 (b), complex background interferences (such as the green plants) can easily fool models into making wrong predictions. Furthermore, in Fig. 1 (c), the spatial distribution is non-uniform, varying from sparse to congested even within the same scene.
Many efforts tackle the scale variation problem by designing multi-column architectures [11, 12, 13] or employing techniques such as dilated convolution [14, 15], Spatial Pyramid Pooling (SPP) [16, 15], Atrous Spatial Pyramid Pooling (ASPP) [17, 18], and Inception blocks [19] to capture multi-scale information [20, 21, 22, 23]. These models relieve the scale variation problem yet still have some limitations. 1) Multi-column architectures or Inception blocks have multiple branches built with different kernel sizes, which introduce a large number of parameters and huge computation burdens [24]. 2) The pooling operation in these models (e.g., SPP) may lead to fine-detail information loss, thus degrading the performance. 3) Hand-crafted dilation rates can hardly match the range of scale variations. To alleviate these issues, motivated by [25], we embed a pyramidal scale module (PSM) into our framework to effectively capture multi-scale information.
To suppress background distractors, visual attention has been successfully applied to the counting task [7, 8, 26] and achieves good performance. However, these attention modules suffer from heavy computation cost and high complexity; for instance, the fashionable Squeeze-and-Excitation network (SENet) [27] and its followers [28, 29] employ multiple fully connected (FC) layers to compute attention weights. Such designs are inefficient and not helpful for capturing the interactive information across channels. Inspired by [30, 31], we introduce an effective and efficient global context module (GCM) to select more suitable scales generated by the PSM.
Fig. 1: Illustrations of large scale variation, complex background interference and non-uniform density distribution. In (a), scales of ships enclosed by the red bounding boxes vary largely. In (b), the objects (i.e., small vehicles) are shaded by the plants. In (c), ships at harbor are unevenly distributed.

Most counting methods convert annotated points into density maps using Gaussian filters and then train CNN models using \(L_{2}\) loss. Consequently, the counting performance highly depends on the quality of the "ground truth" density map. However, such a pixel-independent "ground-truth" density map generation manner may be suboptimal, especially in non-uniformly distributed regions. As an alternative, Ma et al. [32] propose a reliable supervision manner that learns the count expectation from the point annotations, named Bayesian Loss (BL). This effective supervision manner alleviates the problem of non-uniform density distribution. However, there may exist an inconsistency between the training phase (point-to-point loss) and the testing stage (the difference between the overall summation of the estimated density map and the ground truth count). Therefore, apart from the Bayesian loss, we add a counting loss to mitigate this issue.
In summary, the contributions of this work are three-fold:
* A novel **P**yramidal **S**cale and **G**lobal **C**ontext-based framework for dense object counting in remote sensing images, termed PSGCNet, is presented.
* A flexible pyramid scale module is designed to effectively extract multi-scale features of dense scenes. And a lightweight global context module is embedded to make use of the rich interaction information across channels of feature map to guide the model to select more suitable scales.
* Extensive experiments conducted on four remote sensing object counting datasets demonstrate the effectiveness and superiority of the proposed approach, and the extension to four commonly used crowd counting datasets further validate the generalization ability and robustness of our proposed method.
The remainder of this paper is organized as follows. The related work of object counting algorithms is briefly surveyed in Section II. The details of our proposed method are introduced in Section III, following which experimental results and analysis are presented in Section IV. Finally, the conclusion is concluded in Section V.
## II Related Work
### _Object counting in congested scenes_
Early object counting methods are mainly detection-based [33, 34], they first detect the interested object instances and then count the number of the bounding boxes. These methods obtain satisfactory performance in sparse scenarios thanks to the powerful detectors. However, they may fail in highly congested scenes, since the object instances are usually with small sizes and easily confused with background distractors. Another mainline is regression-based methods, which map the high dimension image space to natural numbers [35, 36]. As a highly non-linear regression task, it is very hard to optimize models and the performance is far from satisfactory. [5] rekindles the counting task as a density map generation problem, which estimates the counting number of object instances by integrating all the pixels of the density map. Entering the deep learning era, the performance of object counting has been significantly improved. Many deep neural networks have been designed for tackling the counting task. The performances on several representative benchmark datasets such as ShanghaiTech [11], UCF-QNRF [37], and UCF_CC_50 [38] have reached promising results. For a comprehensive review of the counting task, please refer to [39, 40].
### _Object counting from the remote sensing view_
Capturing from a remote distance, aerial images or videos provide a wider field of view and thus with much more complex scene contents, which brings great challenges to existing counting models. To facilitate research in this field, [41] introduces a drone-based crowd dataset and develops a multi-resolution network for estimating the number of pedestrians in aerial images. LPN [9] takes advantage of the regular spatial layout of cars and proposes a spatial layout proposal network for car counting and localization, simultaneously. LEP [42] proposes to predict the image-level count by dividing the image into a set of divisions. It achieves good performance on several drone-based counting datasets. Li et al. [43] draws inspiration from object detectors and proposes to detect and count cars simultaneously using a unified framework. STNNet [44] takes a step further and performs the density map estimation, localization, and tracking tasks in one network. ASPDNet [8] builds a new benchmark for aerial image counting. It employs recently developed techniques such as dilated convolution, attention, and deformable convolution to achieve a better performance.
### _Alleviating large scale variation_
Scale variation is a great challenge for object counting. Four strategies are widely studied to address this problem: multi-column network architectures, dilated convolution, Spatial Pyramid Pooling (SPP), and Inception module. For example, MCNN [11] is a simple multi-column network, in which, each column is built with different filter kernels. Switch-CNN [45] adopts a frame structure similar to MCNN [11]. The difference is that a specialized classifier is applied to select a suitable column network for inputs. CSRNet [20] takes advantage of dilated convolution to enlarge the receptive fields without increasing computation cost. CAN [22] combines scale-aware and context-aware feature information to boost the performance. SANet [23] captures multi-scale features built on the shoulder of the Inception module [19]. DSNet [46] cascades multiple dense dilated convolution blocks and links them with dense residual connections. ADSCNet [47] adopts adaptive dilated convolution to learn dynamic and continuous dilated rates for each pixel location. MRCNet [41] combines low-level and high-level features with lateral connections to learn contextual and detailed local information in aerial imagery. SACANet [48] utilizes a pyramid contextual module to extract long-range contextual information and enlarge the receptive fields of the objects in drone scenes. ASPDNet [7, 8] integrates a scale pyramid module to capture multi-scale information for counting in remote sensing images.
### _Mitigating cluttered background interferences_
Attention mechanism has been widely used to suppress cluttered backgrounds and highlight foreground regions. For instance, SAANet [49] develops a soft attention mechanism to learn a set of gating masks to aggregate the multi-scale density maps. ADCrowdNet [26] combines visual attention and deformable convolution [50] into a unified framework. HA-CNN [51] designs a hierarchical attention based network to selectively enhance the features at various levels. RANet [52] and ANF [53] incorporate self-attention to capture long-range dependencies of the feature maps. SDANet [54] builds a dense attention network based on shallow features. ASNet [55] learns attention scaling factors and automatically adjusts the density regions by multiplying multiple density attention masks on them. SACANet [48] leverages a scale-adaptive self-attention multi-branch module to address isolated clusters in aerial images. ASPDNet [7, 8] cascades channel attention and spatial attention to relieve the impact of complex cluttered backgrounds in diverse remote sensing scenarios. These methods have gained significant performance, nevertheless, the sophisticated structures of the attention modules incorporated in them introduce a large number of parameters, thus making them suffer from huge computation burdens. Although some lightweight attention modules such as Squeeze-and-Excitation networks (SENet) [27] and convolution block attention module (CBAM) [28] are developed to alleviate this problem, the fully connected (FC) layers still have many parameters. What's more, the channel dimensionality reduction in these models also limits the upper bound of the performance.
Different from the aforementioned methods, our proposed PSGCNet takes advantage of a pyramidal scale module to capture multi-scale features, which can flexibly cover various scales and enlarge the receptive field without increasing any computation cost. Additionally, we devise an effective global context module, essentially a lightweight channel attention operation. It can not only reduce the computation burden of attention modules, but also make the cross-channel interaction more efficient by avoiding dimensionality reduction. Finally, we train our model with a reliable supervision manner on the count expectation at each annotation point.
## III Proposed Method
### _PSGCNet Overview_
The architecture of PSGCNet is illustrated in Fig. 2. It has four key components, including a backbone network as feature extractor, a pyramidal scale module capturing multi-scale information, a global context module suppressing cluttered backgrounds, and a decoder to estimate the final density map.
Specifically, we adopt a truncated VGG19 [56] same to [32] as the backbone network, in which the three fully connected layers and one pooling layer are removed. The output feature map's resolution of the backbone is 1/16 of the original input image. Afterwards, a pyramid scale module is built on top of the feature maps to capture multi-scale information. Then, an effective global context module (GCM) followed is leveraged to restrain the complex backgrounds. Then feature maps are upsampled twice with bilinear interpolation operation. Finally, a decoder is equipped to produce the density map, in which three successive convolutional layers are used, including two 3\\(\\times\\)3 convolution layers with 256 and 128 channels, and one 1\\(\\times\\)1 convolution. To further improve the performance, we optimize the model using a modified Bayesian Loss.
### _Pyramidal Scale Module (PSM)_
Scale variation is a critical problem in remote sensing image understanding. In this paper, we attack this problem by introducing a pyramidal scale module (PSM). PSM deploys two paralleled network paths: a local PyConv path and a global PyConv path. The two paths have a dual-oriented pyramid architecture, enabling richer multi-scale information capturing.
PyConv has a pyramid structure, as shown in Fig. 3. It contains increasing kernel sizes from bottom to top in a pyramidal manner, and decreasing kernel depths (connectivity) with grouped convolution. The double-oriented pyramid operation allows the model to capture richer multi-scale information, from larger receptive fields of kernels with lower connectivity to smaller receptive fields with higher connectivity. This design is efficient, flexible and economical computational cost.
The local PyConv path (the left branch in Fig. 4) has smaller receptive fields, which is responsible for
Fig. 3: The sketch of pyramidal convolution (PyConv).
Fig. 2: The architecture of PSGCNet for object counting in remote sensing images. The parameters of the convolution layers are denoted as βConv-(kernel_size)-(number of filters)β. β\\(\\otimes\\)β indicates the element multiplication operation. We take VGG19 as the backbone to extract features \\(F_{x}\\) from an input image, which further pass through the proposed GCM and PSM modules to generate enhanced features \\(F_{x}^{\\omega}\\). The predicted density map is produced after one upsampling and three convolution layers. We train the whole pipeline with a hybrid loss by combining Bayesian and counting loss. The red and blue rectangles are the background pixels and activations of an object.
applies 1\\(\\times\\)1 convolutions to reduce the channels to 512 and then aggregates four layers with different kernel sizes (i.e., 9\\(\\times\\)9, 7\\(\\times\\)7, 5\\(\\times\\)5, and 3\\(\\times\\)3). Besides, the number of groups (G) enables the kernels to have different connectivity. This is achieved with 1\\(\\times\\)1 convolutions. Note that each convolution block is followed by a batch normalization layer and a ReLU activation layer.
The global PyConv path (the right branch in Fig. 4) is to capture features of large objects in a global perspective. It has a similar structure to the local PyConv block, however, uses an adaptive average pooling operation on the top to reduce the spatial size of the feature maps to 9\\(\\times\\)9 and upsample the feature maps to the same resolution to the input through bilinear interpolation at the bottom.
The features from both local and global PyConv blocks are then concatenated and followed by a standard convolution layer with the size of 3\\(\\times\\)3. Finally, we upsample the feature maps to the original image size. The PSM module is efficient, flexible, and economical in computational cost. It could also boost the robustness of the model to scale variation.
### _Global Context Module (GCM)_
Visual attention has been claimed as a promising solution to overcome the interference of complex backgrounds. These models have achieved improved performance, however, with a cost of higher model complexities and heavier computational burden, since they usually use self-attention [57] or non-local modules [58].
Drawing inspiration from [30] and [31], we propose an efficient and lightweight global context module to model the dependencies across the channels. The global context module designed in our work is depicted in Fig. 5.
Concretely, given an intermediate feature map, denoted as \\(x\\in\\mathbb{R}^{C\\times H\\times W}\\), where \\(C\\), \\(H\\), and \\(W\\) represent the number of channels, height, and width of the feature map, respectively. Let \\(x_{c}\\) be the feature map corresponding to the \\(c\\)-th channel, i.e., \\(x_{c}=\\left[x_{c}^{i,j}\\right]_{H\\times W}\\in\\mathbb{R}^{H\\times W},c\\in\\{1,2,\\cdots,C\\}\\). A global context module is embedded to capture global context information of each channel. The module is formulated as:
\\[s_{c}=\\alpha_{c}\\left\\|x_{c}\\right\\|_{2}=\\alpha_{c}\\left\\{\\left[\\sum_{i=1}^{ H}\\sum_{j=1}^{W}\\left(x_{c}^{i,j}\\right)^{2}\\right]+\\epsilon\\right\\}^{\\frac{1}{2}} \\tag{1}\\]
where \\(\\alpha_{c}\\) denotes the embedding weight, and \\(\\epsilon\\) is a small constant to avoid the deviation at zero points. This global context module is somewhat similar to the global average pooling (GAP) but more robust than it [31].
Generally, to effectively learn cross-channel interactions, typical solutions are SENet [27] or CBAM [28], however, they destroy the correspondence between channels. Here, we adopt an alternation strategy, which first adaptively determines the kernel sizes \\(k\\) (\\(k=3\\) in this paper) and then performs a 1D convolution operation, i.e.,
\\[\\hat{s}_{c}=C1D\\left(s_{c}\\right) \\tag{2}\\]
where \\(C1D\\) means \\(1D\\) convolution.
A subsequent channel normalization is applied, which can be formulated as:
\\[\\tilde{s}_{c}=\\frac{\\sqrt{C}\\hat{s}_{c}}{\\|\\mathbf{s}\\|_{2}}=\\frac{\\sqrt{C} \\hat{s}_{c}}{\\sqrt{\\sum_{c=1}^{C}\\hat{s}_{c}^{2}+\\epsilon}} \\tag{3}\\]
Eventually, the final global context attention map \\(\\tilde{x}_{c}^{att}\\in\\mathbb{R}^{C\\times 1\\times 1}\\) is obtained after a \\(tanh\\) activation layer:
\\[\\tilde{x}_{c}^{att}=\\tanh\\left(w_{c}\\tilde{s}_{c}+\\beta_{c}\\right) \\tag{4}\\]
where \\(w_{c}\\) and \\(\\beta_{c}\\) represent the trainable weight and bias, which are both initialized to 0 in the training stage.
Fig. 4: The detail architecture and parameters of PSM. β\\(\\copyright\\)β indicates the concatenation operation.
Fig. 5: Illustration of the global context module.
### _Bayesian and counting loss function (BCL)_
To optimize models, Euclidean distance (\\(L_{2}\\) loss) between the prediction and the ground truth density maps is widely used. However, the loss is not robust to the occlusion, scale variation, and non-uniform density. Recently, Ma et al. [32] propose a novel supervision manner, named Bayesian Loss to relieve this problem. It constructs a density contribution model from point annotations and then defines the loss as the difference between the count expectation and the ground truth number at each annotated point:
\\[\\mathcal{L}^{\\text{Bayesian}}=\\sum_{n=1}^{N}\\mathcal{F}\\left(1-E\\left[c_{n} \\right]\\right)+\\mathcal{F}\\left(0-E\\left[c_{0}\\right]\\right) \\tag{5}\\]
where \\(N\\) is the total number of labelled objects, \\(E\\left[c_{n}\\right]\\) and \\(E\\left[c_{0}\\right]\\) indicate the expected counts for each instance and the entire background, respectively. The first term denotes that impelling the foreground count at each annotation point equals 1, while the second term means enforcing the background count to be zero. \\(\\mathcal{F}(\\cdot)\\) is a distance function, we adopt \\(\\ell_{1}\\) distance metric as suggested in [32].
Although reliable and effective, there may exist inconsistency between the training phase and the testing stage. Therefore, apart from Bayesian loss, we add a counting loss to mitigate this issue. The counting loss is defined as:
\\[\\mathcal{L}^{\\text{Count}}=\\frac{1}{N}\\sum_{i=1}^{N}\\left\\|F\\left(X_{i}; \\Theta\\right)-Y_{i}\\right\\|_{1} \\tag{6}\\]
where \\(F\\left(X_{i};\\Theta\\right)\\) and \\(Y_{i}\\) represent the count integrated by the estimated density map and ground truth count of the \\(i\\)-th image. \\(\\Theta\\) denotes training parameters and \\(\\|\\cdot\\|_{1}\\) means \\(\\ell_{1}\\)-norm.
Therefore, the overall loss function is the combination of Bayesian loss \\(\\mathcal{L}^{\\text{Bayesian}}\\) and counting loss \\(\\mathcal{L}^{\\text{Count}}\\) :
\\[\\mathcal{L}^{\\text{Overall}}=\\mathcal{L}^{\\text{Bayesian}}+\\lambda\\mathcal{L}^ {\\text{Count}} \\tag{7}\\]
where \\(\\lambda\\) is a tunable positive hyperparameter.
## IV Experimental results
In this section, the datasets, evaluation protocols, and implementation details are first introduced. Then ablation studies and comparisons with state-of-the-art methods are provided to demonstrate the effectiveness and superiority of the proposed approach. Furthermore, some extension experiments to other object counting applications are conducted to validate the generalization ability and robustness of the model.
### _Datasets and evaluation protocols_
**Datasets:** Extensive experiments are conducted on four remote sensing object counting datasets including RSOC [7, 8], CARPK [9], PUCPR+ [9], and Drone-crowd [44] to evaluate the effectiveness and superiority of the proposed approach. Moreover, to validate the generalization ability and robustness of the model, we also conduct experiments on four widely used crowd counting datasets, i.e., ShanghaiTech Part_A and Part_B [11], UCF-QNRF [37], and UCF_CC_50 [38]. The statistics of the datasets is presented in Table I.
\\(\\bullet\\)**RSOC**[7, 8]1 is a remote sensing object counting dataset, which is composed of four categories, including buildings, small vehicles, large vehicles, and ships. The dataset consists of 3,057 images with 286,539 instances in total. In which 2,468 building images, 1,205 and 1,263 are used for training and testing; 280 small vehicle images, 222 images for training and 58 for testing; 172 large vehicle images, 108 for training and 64 for testing; 137 ship images, 97 images for training and 40 images for testing, respectively.
Footnote 1: [https://github.com/gagoungshuai/Counting-from-Sky-A-Large-scale-Dataset-for-Remote-Sensing-Object-Counting-and-A-Benchmark-Method](https://github.com/gagoungshuai/Counting-from-Sky-A-Large-scale-Dataset-for-Remote-Sensing-Object-Counting-and-A-Benchmark-Method)
\\(\\bullet\\)**CAPPK**[9]2 is a large-scale drone-view car counting dataset, which contains 1,448 images with nearly 90k cars in total, of which 989 images for training and the remaining 459 images for testing.
Footnote 2: [https://lafi.github.io/LPN/](https://lafi.github.io/LPN/)
Footnote 3: [https://lafi.github.io/LPN/](https://lafi.github.io/LPN/)
\\(\\bullet\\)**PUCPR+**[9]3 is also a car counting dataset, all the images are captured from the 10th floor of a building. The dataset contains 125 images with approximately 17k cars, of which 100 images are served as training set, and the rest as testing set.
Footnote 3: [https://lafi.github.io/LPN/](https://lafi.github.io/LPN/)
\\(\\bullet\\)**Drone-crowd**[44]4 is a drone-captured dataset for density map estimation, crowd localization and tracking, which is composed of 112 video clips with 33,600 frames in total. The video clips are annotated with over 4.8 million head annotations and several video-level attributes. All the images are captured by drone-mounted cameras in 70 different scenarios across 4 different cities in China (i.e., Tianjin, Guangzhou, Daqing, and Hong Kong). For the counting task in this paper, we split the dataset into training and test set, of which 24,600 images for training and the remaining 9,000 for testing.
Footnote 4: [https://github.com/VisDrone-Dataset](https://github.com/VisDrone-Dataset)
\\(\\bullet\\)**ShanghaiTech**[11]5 includes two parts, i.e., Part_A and Part_B, with a total number of 1,198 images. The images of Part_A are randomly crawled from the Internet, which are across diverse scenes and largely varied densities. Part_A has 482 images, of which 300 are served as training set and the remaining 182 for testing. The images of Part_B are taken from the metropolis in Shanghai, which consists of 400 images for training and 316 for testing.
Footnote 4: [https://lafi.github.io/LPN/](https://lafi.github.io/LPN/)
\\(\\bullet\\)**UCF-QNRF**[37]6 is a recently released large and challenging dataset, which has a wide range of image resolutions, counts, scale variations and diversely density distribution. The images of this dataset are crawled from Flickr, Web Search and Hajj footage, containing 1,535 images with over 125 million point annotations, where 1,201 images are used for training and the remaining 334 images for testing.
\\(\\bullet\\)**UCF_CC_50**[38]7 is composed of 50 images with various resolutions. The dataset is small-scale yet challenging since the average count is up to 1,280. Following [38], five-fold cross-validation is performed to obtain the final test result.
Footnote 6: [https://www.cvcv.net/data/ucf-qnrf/](https://www.cvcv.net/data/ucf-qnrf/)
**Evaluation protocol:** Two most widely used evaluation metrics, i.e., Mean Average Error (MAE) and Root MeanSquared Error (RMSE), are employed to evaluate the performance of the proposed method. The two metrics are defined as follows:
\\[MAE=\\frac{1}{K}\\sum_{i=1}^{K}\\left|\\hat{C}_{i}-C_{i}\\right| \\tag{8}\\]
\\[RMSE=\\sqrt{\\frac{1}{K}\\sum_{i=1}^{K}\\left|\\hat{C}_{i}-C_{i}\\right|^{2}} \\tag{9}\\]
where \\(K\\) is the number of test images, \\(\\hat{C}_{i}\\) denotes the predicted count and \\(C_{i}\\) indicates the ground truth count for the \\(i\\)-th image, respectively.
### _Implementation details_
We implement our proposed PSGCNet in PyTorch and train it in an end-to-end manner. All the experiments are conducted on one NVIDIA 2080Ti GPU. A truncated VGG19 [56] pre-trained on ImageNet [59] is taken as the backbone, with the fully connected layers and the last pooling layer removed. During training, the initial learning rate is 1e-5, and Adam optimizer is used. For better training and avoiding overfitting, random crop and horizontal flipping are applied for augmentation. Specifically, the crop size is \\(256\\times 256\\) for RSOC_building datasets, ShanghaiTech Part_A, and UCF_CC_50, and \\(512\\times 512\\) for RSOC_small-vehicle, RSOC_large-vehicle, RSOC_ship, CARPK, PUCPR+, DroneCrowd, ShanghaiTech Part_B and UCF_QNRF, since they have large image sizes. In addition, for all the datasets, 10% of the images are randomly sampled for validation from each training set. The batch size is set 1 for all the datasets.
### _Ablation studies_
To validate the effectiveness of each module of our approach, we conduct ablation studies on RSOC_building dataset. The baseline method is BL [32]. The specific settings are shown in Table II.
\\(\\bullet\\)**Effect of PSM and GCM.** From Table II we can observe that when PSM is introduced, the performance can achieve a significant improvement. Specifically, there are relative improvements of 21.28% and 17.04% w.r.t MAE and RMSE, demonstrating the robustness of the proposed PSM to the problem of large scale variation. To validate the robustness of the model to the complex background interference, we adopt a global context module. From Table II we can find that the GCM can boost the baseline method with a considerable elevation. In particular, the performance will gain by 24.93% and 24.31% w.r.t MAE and RMSE, which proves that it has made a significant impact on highlighting objects parts while diminishing background noise.
\\(\\bullet\\)**Effect of the hyperparameter \\(\\lambda\\)**, To verify the effectiveness of the proposed BCL loss function, we conduct experiments under the condition of different \\(\\lambda\\). As can be observed from Table III, when \\(\\lambda=0.1\\), we can obtain the best performance.
\\(\\bullet\\)**Different backbones.** Our proposed modules and loss function can be readily applied to any network structures to improve the performance. Here we apply them to VGG-19 and VGG-16 backbones and perform comparisons with the L2 loss-based models. The quantitative results from Table IV show that our method achieves better performance compared with the naive L2-based modes by a considerable margin.
\\begin{table}
\\begin{tabular}{c c c|c c} \\hline \\hline Baseline & PSM & GCM & MAE & RMSE \\\\ \\hline β & β & β & 11.51 & 15.96 \\\\ β & β & β & 9.06 & 13.24 \\\\ β & β & β & 8.64 & 12.08 \\\\ β & β & β & 7.54 & 10.52 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE II: Different settings on RSOC_building dataset.
\\begin{table}
\\begin{tabular}{c|c|c|c|c|c|c|c|c} \\hline \\hline \\multirow{2}{*}{Dataset} & \\multirow{2}{*}{Sensor} & \\multirow{2}{*}{\\#Images} & \\multirow{2}{*}{Training/Test} & \\multirow{2}{*}{Average Resolution} & \\multicolumn{3}{c|}{Count Statistics} \\\\ \\cline{6-9} & & & & & Total & Min & Average & Max \\\\ \\hline RSOC\\_building [7] & Satellite & 2468 & 1205/1263 & 512\\(\\times\\)512 & 76,215 & 15 & 30.88 & 142 \\\\ RSOC\\_small-vehicle [7] & Satellite & 280 & 222/58 & 2473\\(\\times\\)2339 & 148,838 & 17 & 531.56 & 8531 \\\\ RSOC\\_large-vehicle [7] & Satellite & 172 & 108/64 & 1552\\(\\times\\)1573 & 16,594 & 12 & 96.48 & 1336 \\\\ RSOC\\_ship [7] & Satellite & 137 & 97/40 & 2558\\(\\times\\)2668 & 44,892 & 50 & 327.68 & 1661 \\\\ \\hline \\hline CARPK [9] & Drone & 1448 & 989/459 & 720\\(\\times\\)1280 & 89,777 & 1 & 62 & 188 \\\\ \\hline PUCPR+ [9] & Camera & 125 & 100/25 & 720\\(\\times\\)1280 & 16,915 & 0 & 135 & 331 \\\\ \\hline DroneCrowd [44] & Drone & 33,600 & 24,600/9,00 & 1920\\(\\times\\)1080 & 4,864,280 & 25 & 144.8 & 455 \\\\ \\hline \\hline SHT\\_A [11] & CCTV & 482 & 300/182 & 589 \\(\\times\\) 868 & 241,677 & 33 & 501.4 & 3,139 \\\\ SHT\\_B [11] & CCTV & 716 & 400/316 & 768 \\(\\times\\) 1024 & 88,488 & 9 & 123.6 & 578 \\\\ UCF-QNRF [37] & CCTV & 1,535 & 1201/334 & 2013 \\(\\times\\) 2902 & 1,251,642 & 49 & 815 & 12,865 \\\\ UCF\\_CC\\_50 [38] & CCTV & 50 & β & 2101 \\(\\times\\) 2888 & 63,974 & 94 & 1,280 & 4,543 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: Statistics of the object counting datasets. Total-, min-, average- and max represent the total number, the minimum, average number and maximum number of instances in the datasets, respectively.
\\begin{table}
\\begin{tabular}{|l c|c|} \\hline & MAE & RMSE \\\\ \\hline \\(\\lambda\\)=0 & 8.18 & 12.61 \\\\ \\(\\lambda\\)=10 & 8.36 & 12.92 \\\\ \\(\\lambda\\)=1 & 8.02 & 11.96 \\\\ \\(\\lambda\\)=0.1 & **7.54** & **10.52** \\\\ \\(\\lambda\\)=0.01 & 7.88 & 11.02 \\\\ \\(\\lambda\\)=0.001 & 7.94 & 11.46 \\\\ \\hline \\end{tabular}
\\end{table} TABLE III: Impacts of different \\(\\lambda\\) on RSOC_building dataset.
### _Comparisons on RSOC dataset_
We compare our approach with state-of-the-art methods and show results in Table V and visualize some representative density maps in Fig. 6. Our model achieves substantial improvements on all the four subsets. Specifically, we improve the baseline with improvements of 34.49%, 6.76%, 17.85% and 11.01% on building, small-vehicle, large-vehicle and ship subsets in terms of MAE, respectively, indicating that our proposed method has a strong counting performance.
### _Comparisons on CARPK and PUCPR+ dataset_
Table VI reports the MAE and RMSE results on two car counting datasets, i.e., CARPK and PUCPR+ [9]. We compare our proposed approach with state-of-the-art car counting methods including detection-based counting methods (YOLO [64], Faster RCNN [65], LPN [9], SSD [67], YOLO9000 [68], RetinaNet [69], and LEP [42]), a regression-based counting method (One-Look Regression [66]), and density map estimation based methods (MCNN [11], CSRNet [20], and BL [32]). The results reveal that our method consistently performs better than the comparative methods, which demonstrate the superiority of our method both in sparse and congested scenarios. Specifically, compared with several outstanding object detectors such as Faster RCNN [65] and YOLO [64], our proposed method surpasses them by a large margin. Moreover, compared with One-Look Regression [66], our approach shows better performance. We conjecture that it may be uncontrollable when regressing the count directly. Furthermore, compared with the density map estimation methods, i.e., MCNN [11], CSRNet [20], and BL [32], our proposed method still obtains
\\begin{table}
\\begin{tabular}{l|c c|c c} \\hline \\hline MethodsBackbones & \\multicolumn{2}{c|}{VGG-19} & \\multicolumn{2}{c}{VGG-16} \\\\ \\cline{2-5} & MAE & RMSE & MAE & RMSE \\\\ \\hline L2-loss & 9.08 & 12.48 & 9.94 & 14.04 \\\\ \\hline PSGCNet(Ours) & **7.54** & **10.52** & **8.84** & **12.18** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: Performances of using different backbones on RSOC_building dataset.
Fig. 6: Density maps generated by Baseline (the middle row) and our method (the bottom row). The ground truth and estimated count are put at the bottom of each image. Compared with the baseline, our proposed model can obtain more accurate estimations across diverse scenarios.
Fig. 7: Visualization results on CARPK dataset. The top row shows the original image and the ground truth counts. The bottom one shows the density maps generated by our proposed method and the estimated counts.
the highest count scores. We visualize some qualitative results in Fig. 7. It demonstrates that the proposed method not only performs a better counting performance, but also shows strong localization ability.
### _Comparisons on DroneCrowd dataset_
We also evaluate our method on a more challenging dataset, called DroneCrowd [44]. Table VII lists the counting results w.r.t MAE and RMSE, PSGCNet achieves comparable performance when compared with the state of the art methods. To further analyse the results, we also report the performance on several subsets according to three video-level attributes, i.e., two categories of scales including _Large_ (the diameter of objects \\(>\\) 15 pixels) and _Small_ (the diameter of objects \\(\\leq\\) 15 pixels), three categories of illumination conditions including _Cloudy_, _Sunny_, and _Night_, two density levels including _Crowded_ (with the number of objects in each frame larger than 150) and _Sparse_ (with the number of objects in each frames
\\begin{table}
\\begin{tabular}{l|c|c c|c c|c c|c} \\hline \\hline MethodsDatasets & Year\\&Venue & \\begin{tabular}{c} RSOC\\_Building \\\\ \\end{tabular} & \\begin{tabular}{c} RSOC\\_Small-vehicle \\\\ \\end{tabular} & \\begin{tabular}{c} RSOC\\_Large-vehicle \\\\ \\end{tabular} &
\\begin{tabular}{c} RSOC\\_Ship \\\\ \\end{tabular} \\\\ \\hline MCNN [11] & 2016 CVPR & 13.65 & 16.56 & 488.65 & 1317.44 & 36.56 & 55.55 & 263.91 & 412.30 \\\\ CMTL [60] & 2017 AVSS & 12.78 & 15.99 & 490.53 & 1321.11 & 61.02 & 78.25 & 251.17 & 403.07 \\\\ CSRNet [20] & 2018 CVPR & 8.00 & 11.78 & 443.72 & 1252.22 & 34.10 & 46.42 & 240.01 & 394.81 \\\\ SANet [23] & 2018 ECCV & 29.01 & 32.96 & 497.22 & 1276.66 & 62.78 & 79.65 & 302.37 & 436.91 \\\\ SFCN [61] & 2019 CVPR & 8.94 & 12.87 & 440.70 & 1248.27 & 33.93 & 49.74 & 240.16 & 394.81 \\\\ SPN [21] & 2019 WACV & 7.74 & 11.48 & 445.16 & 1252.92 & 36.21 & 50.65 & 241.43 & 392.88 \\\\ SCAR [62] & 2019 NC & 26.90 & 31.35 & 497.22 & 1276.65 & 62.78 & 79.64 & 302.37 & 436.92 \\\\ CAN [22] & 2019 CVPR & 9.12 & 13.38 & 457.36 & 1260.39 & 34.56 & 49.63 & 282.69 & 423.44 \\\\ SFANet [63] & 2019 CVPR & 8.18 & 11.75 & 435.29 & 1284.15 & 29.04 & 47.01 & 201.61 & 332.87 \\\\ BL [32] & 2019 ICCV & 11.51 & 15.96 & 168.62 & 280.50 & 13.39 & 35.24 & 84.18 & 136.21 \\\\ ASPDNet [7, 8] & 2020 ICASSP/TGRS & 7.59 & 10.66 & 433.23 & 1238.61 & 18.76 & 31.06 & 193.83 & 318.95 \\\\ PSGCNet(Ours) & β & **7.54** & **10.52** & **157.55** & **245.31** & **11.00** & **17.65** & **74.91** & **112.11** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE V: Performance comparison on RSOC [7] dataset.
Fig. 8: Density maps generated by Baseline (the middle row) and our method (the bottom row). The ground truth and estimated count are put at the bottom of each image. Compared with the baseline, our proposed model can obtain more accurate estimations from sparse to highly congested scenes.
\\begin{table}
\\begin{tabular}{l|c|c|c} \\hline \\hline MethodsDatasets & \\begin{tabular}{c} CARPK [9] \\\\ \\end{tabular} &
\\begin{tabular}{c} PUCPR+ [9] \\\\ \\end{tabular} \\\\ \\hline YOLO [64] & 102.89 & 110.02 & 156.72 & 200.44 \\\\ \\hline *YOLO [64] & 48.89 & 57.55 & 156.00 & 200.42 \\\\ \\hline Faster RCNN [65] & 103.48 & 110.64 & 156.76 & 200.59 \\\\ \\hline *Faster RCNN [65] & 24.32 & 37.62 & 39.88 & 47.67 \\\\ \\hline One-Look Regression [66] & 59.46 & 68.84 & 21.88 & 36.73 \\\\ \\hline SSD [67] & 37.33 & 42.32 & 119.24 & 132.22 \\\\ \\hline YOLO9000 [68] & 38.59 & 43.18 & 97.96 & 133.25 \\\\ \\hline LPN [9] & 23.80 & 36.79 & 22.76 & 34.46 \\\\ \\hline RetinaNet [69] & 16.62 & 22.30 & 24.58 & 33.12 \\\\ \\hline LEP [42] & 51.83 & β & 15.17 & β \\\\ \\hline MCNN [11] & 39.10 & 43.30 & 21.86 & 29.53 \\\\ \\hline CSRNet [20] & 11.48 & 13.32 & 8.65 & 10.24 \\\\ \\hline BL [32] & 9.58 & 11.38 & 6.54 & 8.13 \\\\ \\hline PSGCNet(Ours) & **8.15** & **10.46** & **5.24** & **7.36** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE VI: Performance comparison on CARPK [9] and PUCPR+ [9] dataset. β*β indicates that the method has been fine-tuned on PUCPR+ dataset.
less than 150). From the performance of subsets, we can find that our proposed method performs well in the _Cloudy_, _Sunny_, and _Crowded_ subsets, degrades in the _Night_ subset, this may be attributed to extremely low illumination and severe class imbalance. In particular, STNNet [44] performs the best across the whole dataset. It is a multi-task learning model to jointly solve density map estimation, localization and tracking. The method also leverages both spatial and temporal information, in which a neighboring context loss is applied to capture relations among neighboring targets in consecutive frames. Even so, our proposed model achieves a comparatively good performance and even surpasses it in the _Sunny_ and _Crowded_ subsets.
### _Comparisons on crowd counting datasets_
To further validate the generalization ability and robustness of the proposed model, we extend it on four widely used crowd counting datasets, the counting results are reported in Table VIII. It demonstrates that our proposed approach can achieve consistent improvements compared with 15 state-of-the-art methods [11, 20, 22, 32, 45, 54, 55, 60, 61, 72, 77, 78, 79, 80, 81, 82]. Specifically, on ShanghaiTech dataset, our proposed model increases relative improvements of 12.4%/5.9% on Part_A and 15.4%/28.1% on Part_B, w.r.t, MAE/RMSE. Even on the more crowded UCF_QNRF and UCF_CC_50, we still improve the baseline with relative improvements of 12.5%/11.9% and 20.9%/14.5% w.r.t MAE/RMSE. It indicates that our proposed method achieves superior performance not only for sparse but also highly congested crowd scenes.
In consideration of some methods that may perform well on one dataset however poorly on other ones, for fairness, we adopt the average ranking evaluation strategy [55] to make a comprehensive evaluation (denoted by avg. R. in Tabel VIII). The average ranking value is obtained by summing all ranks that one method gains to divide the number of datasets it utilizes. The lower value indicates a higher rank. Therefore, our proposed method obtains the best average ranking, which reveals its powerful ability to deal with the diverse crowd scenes.
We visualize some estimated density maps of the proposed method and the baseline in Fig. 8, from which we can observe that our proposed method obtains more accurate estimations. Benefiting from the proposed PSM and GCM, our method can better reflect the scale variation of the pedestrians. Compared with the baseline method, our proposed model obtains more accurate estimations across diverse scenes from sparse to highly congested. Moreover, compared with baseline, our method obtains clearer density maps and shows stronger localization ability to a certain extent.
## V Conclusion
In this paper, we have presented a novel supervised learning framework for dense object counting in remote sensing images, named PSGCNet. Our PSGCNet is characterized by three components: 1) capturing multi-scale features with an effective pyramidal scale module; 2) alleviating the interferences of complex background with a lightweight global context module, and 3) a reliable supervision manner combined with Bayesian loss and counting loss, which is utilized to train the network and learn the count expectation at each annotation point. Extensive experiments on four remote sensing object counting datasets demonstrate the effectiveness and superiority of the proposed approach. Moreover, extension experiments on four widely used crowd counting benchmark datasets further validate the generalization ability and robustness of the model.
## References
* [1] D. Kang, Z. Ma, and A. B. Chan, \"Beyond counting: Comparisons of density maps for crowd analysis tasks--counting, detection, and tracking,\" _IEEE Transactions on Circuits and Systems for Video Technology_, vol. 29, no. 5, pp. 1408-1422, 2018.
* [2] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, \"Crowded scene analysis: A survey,\" _IEEE Transactions on Circuits and Systems for Video Technology_, vol. 25, no. 3, pp. 367-386, 2014.
* [3] S. Zhang, G. Wu, J. P. Costeira, and J. M. Moura, \"Understanding traffic density from large-scale web camera data,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2017, pp. 5898-5907.
* [4] C. Zhang, K. Kang, H. Li, X. Wang, R. Xie, and X. Yang, \"Data-driven crowd understanding: A baseline for a large-scale crowd dataset,\" _IEEE Transactions on Multimedia_, vol. 18, no. 6, pp. 1048-1061, 2016.
* [5] V. Lempitsky and A. Zisserman, \"Learning to count objects in images,\" _Advances in Neural Information Processing Systems_, 2010.
* [6] C. Arteta, V. Lempitsky, and A. Zisserman, \"Counting in the wild,\" in _Proceedings of the European Conference on Computer Vision_. Springer, 2016, pp. 483-498.
* [7] G. Gao, Q. Liu, and Y. Wang, \"Counting dense objects in remote sensing images,\" in _IEEE International Conference on Acoustics, Speech and Signal Processing_, 2020, pp. 4137-4141.
* [8] ----, \"Counting from sky: A large-scale data set for remote sensing object counting and a benchmark method,\" _IEEE Transactions on Geoscience and Remote Sensing_, pp. 3642-3655, 2020.
* [9] M.-R. Hsieh, Y.-L. Lin, and W. H. Hsu, \"Drone-based object counting by spatially regularized regional proposal network,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2017, pp. 4145-4153.
* [10] D. Du, Y. Qi, H. Yu, Y. Yang, K. Duan, G. Li, W. Zhang, Q. Huang, and Q. Tian, \"The unmanned aerial vehicle benchmark: Object detection and tracking,\" in _Proceedings of the European Conference on Computer Vision_, 2018, pp. 370-386.
* [11] Y. Zhang, D. Zhou, S. Chen, S. Gao, and Y. Ma, \"Single-image crowd counting via multi-column convolutional neural network,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2016, pp. 589-597.
* [12] V. A. Sindagi and V. M. Patel, \"Generating high-quality crowd density maps using contextual pyramid cnns,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2017, pp. 1861-1870.
* [13] Z.-Q. Cheng, J.-X. Li, Q. Dai, X. Wu, J.-Y. He, and A. G. Hauptmann, \"Improving the learning of multi-column convolutional neural network for crowd counting,\" in _Proceedings of the 27th ACM International Conference on Multimedia_, 2019, pp. 1897-1906.
* [14] F. Yu and V. Koltun, \"Multi-scale context aggregation by dilated convolutions,\" in _International Conference on Learning Representations_, 2016.
* [15] M. Lan, Y. Zhang, L. Zhang, and B. Du, \"Global context based automatic road segmentation via dilated convolutional neural network,\" _Information Sciences_, vol. 535, pp. 156-171, 2020.
* [16] K. He, X. Zhang, S. Ren, and J. Sun, \"Spatial pyramid pooling in deep convolutional networks for visual recognition,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 37, no. 9, pp. 1904-1916, 2015.
* [17] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, \"DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 40, no. 4, pp. 834-848, 2017.
* [18] L. Zhang, M. Lan, J. Zhang, and D. Tao, \"Stagewise unsupervised domain adaptation with adversarial self-training for road segmentation of remote-sensing images,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2021.
* [19] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, \"Going deeper with convolutions,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2015, pp. 1-9.
* [20] Y. Li, X. Zhang, and D. Chen, \"Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 1091-1100.
* [21] X. Chen, Y. Bin, N. Sang, and C. Gao, \"Scale pyramid network for crowd counting,\" in _IEEE Winter Conference on Applications of Computer Vision_, 2019, pp. 1941-1950.
* [22] W. Liu, M. Salzmann, and P. Fua, \"Context-aware crowd counting,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 5099-5108.
* [23] X. Cao, Z. Wang, Y. Zhao, and F. Su, \"Scale aggregation network for accurate and efficient crowd counting,\" in _Proceedings of the European Conference on Computer Vision_, 2018, pp. 734-750.
* [24] J. He, Z. Deng, and Y. Qiao, \"Dynamic multi-scale filters for semantic segmentation,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 3562-3572.
* [25] I. C. Duta, L. Liu, F. Zhu, and L. Shao, \"Pyramidal convolution: Rethinking convolutional neural networks for visual recognition,\" _arXiv preprint arXiv:2006.11538_, 2020.
* [26] N. Liu, Y. Long, C. Zou, Q. Niu, L. Pan, and H. Wu, \"Adcrowdnet: An attention-injective deformable convolutional network for crowd understanding,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 3225-3234.
* [27] J. Hu, L. Shen, and G. Sun, \"Squeeze-and-excitation networks,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 7132-7141.
* [28] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, \"Cbam: Convolutional block attention module,\" in _Proceedings of the European Conference on Computer Vision_, 2018, pp. 3-19.
* [29] X. Li, W. Wang, X. Hu, and J. Yang, \"Selective kernel networks,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 510-519.
* [30] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, \"Eca-net: Efficient channel attention for deep convolutional neural networks,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 11 534-11 542.
* [31] Z. Yang, L. Zhu, Y. Wu, and Y. Yang, \"Gated channel transformation for visual recognition,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 11 794-11 803.
* [32] Z. Ma, X. Wei, X. Hong, and Y. Gong, \"Bayesian loss for crowd count estimation with point supervision,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 6142-6151.
* [33] D. Kamenetsky and J. Sherrah, \"Aerial car detection and urban understanding,\" in _International Conference on Digital Image Computing: Techniques and Applications_, 2015, pp. 1-8.
* [34] T. Moranduzzo and F. Melgani, \"Automatic car counting method for unmanned aerial vehicle images,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 3, pp. 1635-1647, 2013.
* [35] S. An, W. Liu, and S. Venkatesh, \"Face recognition using kernel ridge regression,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2007, pp. 1-7.
* [36] A. B. Chan, Z.-S. J. Liang, and N. Vasconcelos, \"Privacy preserving crowd monitoring: Counting people without people models or tracking,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2008, pp. 1-7.
* [37] H. Hores, M. Tayyab, K. Athrey, D. Zhang, S. Al-Maadeed, N. Rajpoot, and M. Shah, \"Composition loss for counting, density map estimation and localization in dense crowds,\" in _Proceedings of the European Conference on Computer Vision_, 2018, pp. 532-546.
* [38] H. Idrees, I. Saleemi, C. Seibert, and M. Shah, \"Multi-source multi-scale counting in extremely dense crowd images,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2013, pp. 2547-2554.
* [39] G. Gao, J. Gao, Q. Liu, Q. Wang, and Y. Wang, \"Cm-based
\\begin{table}
\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \\hline \\multirow{2}{*}{Methods} & \\multirow{2}{*}{Speer(PS)} & \\multicolumn{2}{c|}{Overall} & \\multicolumn{2}{c|}{Large} & \\multicolumn{2}{c|}{Small} & \\multicolumn{2}{c|}{Closely} & \\multicolumn{2}{c|}{Sunsy} & \\multicolumn{2}{c|}{Single} & \\multicolumn{2}{c|}{Crowd}{} & \\multicolumn{2}{c}{Sques} \\\\ \\cline{3-14} & & \\multicolumn{1}{c|}{MAE} & RMSE & \\multicolumn{1}{c|}{MAE} & RMSE & MAE & RMSE & MAE & RMSE & MAE & RMSE & MAE & RMSE & MAE & RMSE \\\\ \\hline MCNN [11] & **20.98** & 34.7 & 42.5 & 36.8 & 44.1 & 31.7 & 40.1 & 21.0 & 27.5 & 39.0 & 43.9 & 67.2 & 68.7 & 29.5 & 35.3 & 37.7 & 46.2 \\\\ \\hline CMTL [60] & 2.31 & 56.7 & 65.9 & 53.5 & 63.2 & 61.5 & 69.7 & 59.5 & 66.6 & 67.8 & 48.2 & 58.3 & 81.6 & 88.7 & 42.2 & 47.9 \\\\ \\hline MSCN [70] & 1.76 & 58.0 & 55.2 & 68.4 & 77.9 & 57.5 & 71.1 & 64.5 & 85.8 & 53.8 & 65.5 & 46.8 & 57.3 & 91.4 & 106.4 & 38.7 & 48.8 \\\\ \\hline LGCFC [71] & 3.08 & 136.9 & 150.6 & 126.3 & 140.3 & 152.8 & 164.8 & 147.1 & 160.3 & 133.7 & 151.7 & 105.6 & 113.8 & 208.5 & 211.1 & 95.4 & 110.0 \\\\ \\hline SwitBackN [45] & 0.01 & 66.5 & 77.8 & 61.5 & 74.2 & 74.0 & 83.0 & 56.0 & 63.4 & 69.0 & 80.9 & 92.8 & 105.8 & 67.7 & 79.8 \\\\ \\hline AGCN [72] & 1.38 & 48.1 & 60.2 & 57.0 & 70.6 & 34.8 & 39.7 & 42.5 & 36.4 & 37.3 & 43.8 & 86.6 & 106.6 & 36.0 & 41.9 & 55.1 & 68.5 \\\\ \\hline AHOLON [73] & 0.16 & 165.6 & 167.7 & 166.7 & 168.9 & 16.8 & 165.9 & 160.5 & 162.3 & 174.8 & 177.1 & 162.3 & 164.3 & 165.5 & 167.7 & 165.6 & 167.8 & 167.8 \\\\ \\hline StackPooling [74] & 0.73 & 68.8 & 77.2 & 68.7 & 77.1 & 68.8 & 77.3 & 66.5 & 75.9 & 74.0 & 83.4 & 63.2 & 67.4 & 95.7 & 101.1 & 53.1 & 59.1 \\\\ \\hline DxM [75] & 2.32 & 36.5 & 47.3 & 41.5 & 54.7 & 28.9 & 33.1 & 45.4 & 88.6 & 26.5 & 31.3 & 29.5 & 34.0 & 56.3 & 68.3 & 24.9 & 28.7 \\\\ \\hline CSNet [20] & 3.92 & 19.8 & 25.6 & 17.8 & 25.4 & 22.9 & 25.8 & 12.8 & 16.1 & 29.1 & 22.5 & 42.3 & 25.8 & 22.0 & 24.0 & 19.6 & 26.5 \\\\ \\hline CAN [22] & 7.12 & 22.1 & 33.4 & 18.9 & 26.7 & 26.9 & 41.5 & **11.2** & **14.9** & 14.8 & 17.5 & 69.4 & 73.6 & **14.4** & **17.9** & 26.0 & 39.7 \\\\ \\hline DM-Count [69] & 100.4 & 18.4 & 27.0 & 19.2 & 29.6 & 17.2 & 22.4 & 11.4 & 16.3 & **12.6** & 52.1 & 53.1 & 17.6 & 21.8 & 18.9 & 29.6 \\\\ \\hline STNN [44] & 3.41 & **15.8** & **18.7** & **16.0** & **18.4** & **15.6** & **19.2** & 14.1 & 17.2density estimation and crowd counting: A survey,\" _arXiv preprint arXiv:2003.12783_, 2020.
* [40] V. A. Sindagi and V. M. Patel, \"A survey of recent advances in cm-based single image crowd counting and density estimation,\" _Pattern Recognition Letters_, vol. 107, pp. 3-16, 2018.
* [41] R. Bahmanyar, E. Vig, and P. Reinartz, \"Mrcnet: Crowd counting and density map estimation in aerial and ground imagery,\" _BMVC Workshop on Object Detection and Recognition for Security Screening_, 2019.
* [42] T. Stahl, S. L. Pintea, and J. C. Van Gemert, \"Divide and count: Generic object counting by image divisions,\" _IEEE Transactions on Image Processing_, vol. 28, no. 2, pp. 1035-1044, 2018.
* [43] W. Li, H. Li, Q. Wu, X. Chen, and K. N. Ngan, \"Simultaneously detecting and counting dense vehicles from drone images,\" _IEEE Transactions on Industrial Electronics_, vol. 66, no. 12, pp. 9651-9662, 2019.
* [44] L. Wen, D. Du, P. Zhu, Q. Hu, Q. Wang, L. Bo, and S. Lyu, \"Detection, tracking, and counting meets drones in crowds: A benchmark,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 7812-7821.
* [45] D. B. Sam, S. Surya, and R. V. Babu, \"Switching convolutional neural network for crowd counting,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2017, pp. 4031-4039.
* [46] F. Dai, H. Liu, Y. Ma, X. Zhang, and Q. Zhao, \"Dense scale network for crowd counting,\" in _Proceedings of the 2021 International Conference on Multimedia Retrieval_, 2021, pp. 64-72.
* [47] S. Bai, Z. He, Y. Qiao, H. Hu, W. Wu, and J. Yan, \"Adaptive dilated network with self-correction supervision for counting,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 4594-4603.
* [48] H. Bai, S. Wen, and S.-H. Gary Chan, \"Crowd counting on images with scale variation and isolated clusters,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops_, 2019, pp. 0-0.
* [49] R. R. Varior, B. Shuai, J. Tighe, and D. Modolo, \"Scale-aware attention network for crowd counting,\" _arXiv preprint arXiv:1901.06026_, 2019.
* [50] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, \"Deformable convolutional networks,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2017, pp. 764-773.
* [51] V. A. Sindagi and V. M. Patel, \"Ha-ccn: Hierarchical attention-based crowd counting network,\" _IEEE Transactions on Image Processing_, vol. 29, pp. 323-335, 2019.
* [52] A. Zhang, J. Shen, Z. Xiao, F. Zhu, X. Zhen, X. Cao, and L. Shao, \"Relational attention network for crowd counting,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 6788-6797.
* [53] A. Zhang, L. Yue, J. Shen, F. Zhu, X. Zhen, X. Cao, and L. Shao, \"Attentional neural fields for crowd counting,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 5714-5723.
* [54] Y. Miao, Z. Lin, G. Ding, and J. Han, \"Shallow feature based dense attention network for crowd counting,\" in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol. 34, no. 07, 2020, pp. 11 765-11 772.
* [55] X. Jiang, L. Zhang, M. Xu, T. Zhang, P. Lv, B. Zhou, X. Yang, and Y. Pang, \"Attention scaling for crowd counting,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, p. 4706-4715.
* [56] K. Simonyan and A. Zisserman, \"Very deep convolutional networks for large-scale image recognition,\" in _International Conference on Learning Representations_, 2015, 3.
* [57] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, \"Attention is all you need,\" in _Advances in Neural Information Processing Systems_, 2017, pp. 5998-6008.
* [58] X. Wang, R. Girshick, A. Gupta, and K. He, \"Non-local neural networks,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 7794-7803.
* [59] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" in _Advances in Neural Information Processing Systems_, 2012, pp. 1097-1105.
* [60] V. A. Sindagi and V. M. Patel, \"Cnn-based cascaded multi-task learning of high-level prior and density estimation for crowd counting,\" in _IEEE International Conference on Advanced Video and Signal Based Surveillance_, 2017, pp. 1-6.
* [61] Q. Wang, J. Gao, W. Lin, and Y. Yuan, \"Learning from synthetic data for crowd counting in the wild,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 8198-8207.
* [62] J. Gao, Q. Wang, and Y. Yuan, \"Scar: Spatial-/channel-wise attention regression networks for crowd counting,\" _Neurocomputing_, vol. 363, pp. 1-8, 2019.
* [63] L. Zhu, Z. Zhao, C. Lu, Y. Lin, Y. Peng, and T. Yao, \"Dual path multi-scale fusion networks with attention for crowd counting,\" _arXiv preprint arXiv:1902.01115_, 2019.
* [64] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, \"You only look once: Unified, real-time object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2016, pp. 779-788.
* [65] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster r-cnn: towards real-time object detection with region proposal networks,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 39, no. 6, pp. 1137-1149, 2016.
* [66] T. N. Mumdenk, G. Konjevod, W. A. Sakla, and K. Boakye, \"A large contextual dataset for classification, detection and counting of cars with deep learning,\" in _Proceedings of the European Conference on Computer Vision_, 2016, pp. 785-800.
* [67] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, \"Ssd: Single shot multibox detector,\" in _Proceedings of the European Conference on Computer Vision_, 2016, pp. 21-37.
* [68] J. Redmon and A. Farhadi, \"Yolo9000: better, faster, stronger,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2017, pp. 7263-7271.
* [69] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, \"Focal loss for dense object detection,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2017, pp. 2980-2988.
* [70] L. Zeng, X. Xu, B. Cai, S. Qiu, and T. Zhang, \"Multi-scale convolutional neural networks for crowd counting,\" in _IEEE International Conference on Image Processing_, 2017, pp. 465-469.
* [71] I. H. Laradji, N. Rostamzadeh, P. O. Pinheiro, D. Vazquez, and M. Schmidi, \"Where are the blobs: Counting by localization with point supervision,\" in _Proceedings of the European Conference on Computer Vision_, 2018, pp. 547-562.
* [72] Z. Shen, Y. Xu, B. Ni, M. Wang, J. Hu, and X. Yang, \"Crowd counting via adversarial cross-scale consistency pursuit,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 5245-5254.
* [73] D. Deb and J. Ventura, \"An aggregated multicolumn dilated convolution network for perspective-free counting,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, 2018, pp. 195-204.
* [74] S. Huang, X. Li, Z.-Q. Cheng, Z. Zhang, and A. Hauptmann, \"Stacked pooling: Improving crowd counting by boosting scale invariance,\" _arXiv preprint arXiv:1808.07456_, 2018.
* [75] Z. Zou, X. Su, X. Qu, and P. Zhou, \"Da-net: Learning the fine-grained density distribution with deformation aggregation network,\" _IEEE Access_, vol. 6, pp. 60745-60756, 2018.
* [76] B. Wang, H. Liu, D. Samaras, and M. Hoai, \"Distribution matching for crowd counting,\" in _Advances in Neural Information Processing Systems_, 2020.
* [77] X. Jiang, Z. Xiao, B. Zhang, X. Zhen, X. Cao, D. Doermann, and L. Shao, \"Crowd counting and density estimation by trellis encoder-decoder networks,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 6133-6142.
* [78] C. Xu, K. Qiu, J. Fu, S. Bai, Y. Xu, and X. Bai, \"Learn to scale: Generating multipolar normalized density maps for crowd counting,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 8382-8390.
* [79] D. Guo, K. Li, Z.-J. Zha, and M. Wang, \"Dadnet: Dilated-attention-deformable corner for crowd counting,\" in _Proceedings of the 27th ACM International Conference on Multimedia_, 2019, pp. 1823-1832.
* [80] M.-h. Oh, P. Olsen, and K. N. Ramamurthy, \"Crowd counting with decomposed uncertainty,\" in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol. 34, no. 07, 2020, pp. 11 799-11 806.
* [81] A. Luo, F. Yang, X. Li, D. Nie, Z. Jiao, S. Zhou, and H. Cheng, \"Hybrid graph neural networks for crowd counting,\" in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol. 34, no. 07, 2020, pp. 11 693-11 700.
* [82] L. Liu, Z. Qiu, G. Li, S. Liu, W. Ouyang, and L. Lin, \"Crowd counting with deep structured scale integration network,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 1774-1783. | Object counting, which aims to count the accurate number of object instances in images, has been attracting more and more attention. However, challenges such as large scale variation, complex background interference, and non-uniform density distribution greatly limit the counting accuracy, particularly striking in remote sensing imagery. To mitigate the above issues, this paper proposes a novel framework for dense object counting in remote sensing images, which incorporates a pyramidal scale module (PSM) and a global context module (GCM), dubbed PSGCNet, where PSM is used to adaptively capture multi-scale information and GCM is to guide the model to select suitable scales generated from PSM. Moreover, a reliable supervision manner improved from Bayesian and Counting loss (BCL) is utilized to learn the density probability and then compute the count expectation at each annotation. It can relieve non-uniform density distribution to a certain extent. Extensive experiments on four remote sensing counting datasets demonstrate the effectiveness of the proposed method and the superiority of it compared with state-of-the-arts. Additionally, experiments extended on four commonly used crowd counting datasets further validate the generalization ability of the model. Code is available at [https://github.com/gaoguangshuai/PSGCNet](https://github.com/gaoguangshuai/PSGCNet).
Object Counting, Pyramidal Scale, Global Context, Bayesian Loss, Remote Sensing | Summarize the following text. | 283 |
Rahul Kumar\\({}^{\\dagger}\\) Amar Raja Dibbu\\({}^{\\dagger}\\) Shrutendra Harsola\\({}^{*}\\)
**Vignesh Subrahmanian\\({}^{*}\\)** **Ashutosh Modi\\({}^{\\dagger}\\)**
\\({}^{\\dagger}\\) Indian Institute of Technology Kanpur (IIT Kanpur) \\({}^{*}\\) Intuit
{rahulkumar21,amard21}@iitk.ac.in
{shrutendra_harsola,vignesh_subrahmaniam}@intuit.com
{ashutoshm}@cse.iitk.ac.in
## 1 Introduction
Relational databases are pervasive in all modern-day organizations, from financial establishments to educational institutes. Typically, query languages such as SQL are used to extract the required data from relational databases. However, formulating queries in SQL needs mastery of the language itself; consequently, this excludes people (particularly those without technical background, e.g., financial accountants) who do not know SQL from using databases. It is imperative to develop techniques to address the research question, can relational databases be queried using natural language? In this paper, we take a step toward this goal; in particular, we explore if one could develop a natural language interface for accounting databases. In recent years, several large-scale general-purpose datasets (Deng et al., 2022) have been proposed for developing Text-to-SQL systems1, such as Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017). Such datasets,2 though cross-domain, are still not suitable for developing systems that could address real-world business use cases, such as accessing accounting databases via natural language interfaces. The primary reason is that these large-scale datasets have a considerable breadth regarding types of domains. However, they either lack certain domains (such as accounting) or have limited data and query types for specific domains (e.g., financial, sales, and marketing). In this paper, we try to address this gap by proposing a large-scale Text-to-SQL dataset (called **BookSQL**) for the accounting and business domain. We collaborate with financial experts to create a dataset that reflects actual accounting databases used in the industry.
Footnote 1: By Text-to-SQL system we refer to a system that, given a natural language query, automatically retrieves the desired information from a database or multiple databases by converting a natural language query to SQL query as an intermediate representation.
Footnote 2: By Text-to-SQL dataset we refer to a dataset having both the natural language queries with corresponding SQL formulation and correct answers along with the corresponding database against which queries are fired
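For illustration, a Text-to-SQL pair couples a natural-language question with its SQL formulation over a database. The following minimal sketch is our own hypothetical example; the table and column names are illustrative and are not taken from any of the datasets discussed in this paper.

```sql
-- Question: "How many invoices did the customer 'Acme Corp' receive in 2023?"
-- Hypothetical SQL formulation over an illustrative `invoices` table:
SELECT COUNT(*)
FROM invoices
WHERE customer_name = 'Acme Corp'
  AND invoice_date BETWEEN '2023-01-01' AND '2023-12-31';
```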
To the best of our knowledge, there is no large-scale dataset in the accounting domain that contains granular records of accounting books used in businesses. To give an idea about the scale of usage of accounting databases:
| Model   | Spider | BookSQL |
|---------|--------|---------|
| UniSAr  | 70%    | 3.8%    |
| SEDE    | 63.2%  | 0.0%    |
| RESDSQL | 80.5%  | 10.8%   |

Table 1: Performance (Exact Match Accuracy, cf. §5) of pre-trained SOTA Text-to-SQL models on Spider and the proposed **BookSQL** dataset. As can be observed, existing models have very poor performance on **BookSQL**, indicative of poor domain generalization.
There are around \(33\) million small businesses3 in the US alone. Most of these businesses use accounting software to maintain their books to keep track of their finances, i.e., money-in transactions (e.g., invoice and sales receipt) and money-out transactions (e.g., expense, purchase order, and bill payment). Additionally, for tax purposes, these books need to follow standard accounting principles like double-entry accounting,4 hierarchical chart of account structure,5 and accrual accounting.6 Transactions in the accounting database span multiple tables, and the corresponding SQL queries can involve complex operations such as aggregations, distinct counts, and nested queries to extract information from them (an illustrative query of this kind is sketched after the contribution list below). For a novice user, this is not an easy task. Moreover, as observed in our initial experiments (Table 1), existing state-of-the-art (SOTA) Text-to-SQL models trained on Spider have very poor performance on the domain-specific **BookSQL** dataset, pointing towards the need for an accounting domain-specific dataset, which will further drive the development of SOTA models for this domain. In a nutshell, in this resource paper, we make the following contributions:
Footnote 3: [https://tinyurl.com/mr3vrtj](https://tinyurl.com/mr3vrtj)
Footnote 4: [https://en.wikipedia.org/wiki/Double-entry_bookkeeping](https://en.wikipedia.org/wiki/Double-entry_bookkeeping)
Footnote 5: [https://en.wikipedia.org/wiki/Chart_of_accounts](https://en.wikipedia.org/wiki/Chart_of_accounts)
Footnote 6: [https://en.wikipedia.org/wiki/Basis_of_accounting](https://en.wikipedia.org/wiki/Basis_of_accounting)
1. We create a new and large-scale Text-to-SQL financial dataset referred to as **BookSQL**. The dataset consists of a financial-accounts database of 1 million records. The corresponding natural language queries are designed to address various practical intricacies of the accounting domain. BookSQL has 100k Query-SQL pairs, which is about 1.25 times the size of the existing largest Text-to-SQL dataset, WikiSQL. In particular, for designing the queries, we consulted financial experts to understand various practical use cases.
2. We run existing state-of-the-art models (including GPT-4) for the Text-to-SQL task on BookSQL to assess their performance and analyze the shortcomings of models trained on existing large-scale datasets such as Spider, pointing towards the need for specialized models for this domain. We release the dataset and model code via GitHub: [https://github.com/Exploration-Lab/BookSQL](https://github.com/Exploration-Lab/BookSQL).
## 2 Related Work
Given its importance in practical applications, developing natural language interfaces to databases has been an active area of research. Due to space constraints, we cannot cover all of it and refer the reader to the survey by Deng et al. (2022); here we outline some of the main works in this area. Several datasets have been proposed for the Text-to-SQL task in recent years. For example, the _Spider_ dataset (Yu et al., 2018) covers 138 different domains, and the large-scale WikiSQL dataset (Zhong et al., 2017) consists of 24,241 Wikipedia tables. Similarly, Squall (Shi et al., 2020), KaggleDBQA (Lee et al., 2021), and BIRD-SQL (Li et al., 2023) have been created to evaluate how well models generalize to unseen domains. Domain-specific datasets have also been proposed, such as those based on Yelp and IMDB (Yaghmazadeh et al., 2017), the Advising domain (Finegan-Dollak et al., 2018), MIMICSQL (Wang et al., 2020), SEDE (Hazoom et al., 2021),
\\begin{table}
\\begin{tabular}{c c c c c c c c c} \\hline \\hline
**Dataset** & **\\#Size** & **\\#DB** & **\\#D** & **\\#T/DB** & **Domain** & **ORDER** & **BY** & **GROUP** & **BY** & **NESTED** \\\\ \\hline Spider & 10,181 & 200 & 138 & 5.1 & Cross & 1335 & 1491 & 844 \\\\ WikiSQL & 80,654 & 26,521 & - & 1 & Cross & 0 & 0 & 0 \\\\ Advising & 3,898 & 208 & 1 & 10 & Single & 15 & 9 & 22 \\\\ BIRD & 12,751 & 95 & 37 & 7.3 & Cross & 2576 & 881 & 0 \\\\ IMDB & 131 & 1 & 1 & 16 & Single & 10 & 6 & 1 \\\\ Yelp & 128 & 1 & 1 & 7 & Single & 18 & 21 & 0 \\\\ \\hline
**BookSQL** & 100k & 1 & 1 & 7 & Single & 17,529 & 11,508 & 4,456 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Comparison of benchmark datasets with **BookSQL**. #Size, #DB, #D, and #T/DB represent the numbers of query-SQL pairs, databases, domains, and the average number of tables per domain, respectively. The '-' in the #D column indicates an unknown number of domains. The last 3 columns indicate the query types. The Yelp dataset is based on the Yelp website, IMDB is based on the movie domain, and the Advising dataset is based on the University Course domain.

the Restaurants domain (Tang and Mooney, 2001), and the Academic domain (Li and Jagadish, 2014). The purpose of these datasets is to evaluate the performance of models with a high degree of precision while disregarding their generalization characteristics.
**Comparison.** We compare BookSQL with other popular datasets in Table 2. As can be observed, BookSQL has a much larger number of Query-SQL pairs, has more diverse queries in terms of SQL clauses (e.g., ORDER BY), and involves more complex (and nested) queries. Benchmark datasets such as Spider have very wide coverage over various domains (138) but very few queries per domain (e.g., on average 74 queries per domain in the case of Spider), which limits performance in any specific domain (see also Table 1). Moreover, BookSQL can be merged with the existing Spider dataset to increase its coverage in the business domain.
**Models.** Various models have been proposed for the Text-to-SQL task Deng et al. (2022). Some state-of-the-art models include the non-invasive UniSAr model Dou et al. (2022) based on Seq2Seq architecture. The model has shown high accuracy on the multi-domain, multi-table Spider dataset. RESDSQL Li et al. (2023) decouples the schema linking and the skeleton parsing for Text-to-SQL generation. Schema linking identifies the table and columns required for a given question. Skeleton parsing first generates the SQL skeleton and then the final SQL. It achieves SOTA performance on the Spider benchmark.
## 3 BookSQL Dataset
Given the importance and wide prevalence of business databases across the world, the proposed dataset, BookSQL, focuses on the finance and accounting domain. Accounting databases are used across a wide spectrum of industries like construction, healthcare, retail, educational services, insurance, restaurant, real estate, etc. Businesses in these industries arrange their financial transactions into their own sets of categories (called a chart of accounts7 in accounting terminology). For example, a restaurant business could have categories like advertising, license fees, etc., while a real estate brokerage business could have categories like commissions, office supplies, etc. Keeping generalization in mind, the BookSQL dataset includes a variety of businesses from different industries. Hence, a Text-to-SQL system developed on BookSQL will be robust at handling various types of accounting databases. The total size of the database is 1 million records. The dataset was prepared under financial experts' supervision, and its statistics are provided in Table 3. The dataset consists of 27 businesses, and each business has around 35k - 40k transactions. The distributions of all businesses and their products are shown in Appendix Figure 3 and Figure 4.
Footnote 7: [https://www.investopedia.com/terms/c/chart-accounts.asp](https://www.investopedia.com/terms/c/chart-accounts.asp)
### BookSQL Tables
Figure 1 shows the detailed database schema. The schema is reflective of real-life databases used in the finance and accounting domain. There are seven tables in BookSQL, namely, Master Transactions, Customer, Employees, Product Service, Vendor, Chart of Account, and Payment Method tables. We arrived at this list of seven tables after examining the databases of several businesses and discussing with finance experts. Given the nature of the accounting domain, the majority of databases used by businesses across the globe are mainly restricted to these seven tables. The main table is the "Master Transaction" table (e.g., Appendix Table 8), which records money-in transactions (invoice, sales receipt, etc.) and money-out transactions (expense, purchase order, bill payment, etc.) This table also records additional corresponding transaction details, like the customer, vendor, product/service, credit account, debit account, and amount. The "Chart of accounts" table (e.g., Appendix Table 9) contains information on all account names and types. The "Customer" table (e.g., Appendix Table 10) contains all the customer's details, i.e., name, billing, and shipping address. The "Vendors" table (e.g., Appendix Table 11) contains all the vendor details of all the businesses, i.e., vendor names and billing addresses. The "Employees" table (e.g., Appendix
\\begin{table}
\\begin{tabular}{c c} \\hline \\hline
**BookSQL** & **Stats** \\\\ \\hline Size of the database & 1 million \\\\ Total Businesses & 27 \\\\ Size of Question-SQL Pair & 100k \\\\ Number of Easy SQL & 10,000 \\\\ Number of Medium SQL & 45,000 \\\\ Number of Hard SQL & 45,000 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Statistics of BookSQL

Table 12) contains information about all the business employees. The "Product service" table (e.g., Appendix Table 13) contains the details of all the products and services. The "Payment method" table (e.g., Appendix Table 14) contains different payment methods the business uses.
### Financial Constraints
For creating the dataset, we took existing accounting databases based on the schema described above and anonymized the names and entries in the tables, i.e., actual names, businesses, and numbers were replaced with fictional ones while adhering to the financial constraints (described next). This is done to maintain the privacy of individuals and businesses. The resulting database is a true reflection of a real-world accounting setting. Accounting databases follow certain accounting rules and financial constraints, which were respected when anonymizing the database. In particular, standard double-entry accounting was followed, which means every entry to an account needs a corresponding and opposite entry to a different account, i.e., debit and credit. So, the sum of debits should always be equal to the sum of credits for every transaction. All seven tables were partitioned by business_id. For a given transaction_id, the sum of the credits column should equal the sum of the debits column, and both should equal the amount column in the Master Transactions table. Credit (in the Master Transaction table) should be equal to the product of Quantity and Rate. The chart of accounts was anonymized using the industry-wise list published by a popular CPA.8 Business-specific custom fields were anonymized using the examples provided in the help articles of various accounting software. The resulting database was cross-checked with financial experts to make sure that it looks like a real-world accounts database.
Footnote 8: [https://hectogarcia.com/resources/](https://hectogarcia.com/resources/)
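To make these constraints concrete, below is a minimal sketch (not part of the released tooling) of how the double-entry and line-item checks could be run against a SQLite copy of the database; the file name booksql.db is hypothetical, while the table and column names follow the BookSQL schema.

```python
import sqlite3

# Hypothetical path to a BookSQL-style SQLite database; columns follow Figure 1.
conn = sqlite3.connect("booksql.db")

# Double-entry check: per transaction, the sum of credits must equal the sum of
# debits, and both must equal the recorded amount in the Master Transactions table.
violations = conn.execute("""
    SELECT transaction_id,
           SUM(credit) AS total_credit,
           SUM(debit)  AS total_debit,
           MAX(amount) AS txn_amount
    FROM master_txn_table
    GROUP BY transaction_id
    HAVING total_credit != total_debit OR total_credit != txn_amount
""").fetchall()

# Line-item check: credit should equal quantity * rate where both are present.
bad_line_items = conn.execute("""
    SELECT transaction_id
    FROM master_txn_table
    WHERE quantity IS NOT NULL AND rate IS NOT NULL
      AND credit != quantity * rate
""").fetchall()

print(f"{len(violations)} transactions violate double-entry, "
      f"{len(bad_line_items)} line items violate credit = quantity * rate")
```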
### Dataset Creation and Annotation
BookSQL dataset consists of 100k questions in natural language and their corresponding SQL on
Figure 1: BookSQL Database schema
multiple tables, covering 27 different businesses. We involved financial experts in the query creation process. We collaborated with two financial experts who have previously been involved in the creation of accounting software. Moreover, these experts have knowledge of and experience in dealing with customer interactions involving account books. The financial experts helped us on a pro bono basis since the creation of a Text-to-SQL system for the accounting domain would help them and their customers.
The question-SQL pair formulation process is as follows. With the help of financial experts, we first created a list of typical questions (based on the account book) that customers (or business people) usually ask, i.e., questions about the information that customers are interested in knowing. We tried to keep the questions (queries) as natural as possible to capture real-world scenarios. We relied on the experience of the financial experts to keep the list as exhaustive as possible. We also created the corresponding SQL query for each of the natural language queries in the list. The queries in the list were then used to create more queries via the process of templatization. Figure 2 explains the process with the help of an example.
In order to be as exhaustive as possible, with the help of experts, we arrived at a list of \(183\) unique natural language questions that customers typically ask when interacting with accounting databases. These natural language questions were used to create query templates, which were in turn used to generate a diverse range of Question-SQL pairs in BookSQL. Additionally, we performed a second round of verification of the BookSQL corpus and query templates with financial experts to verify consistency and veracity and to ensure that the dataset reflects real-world scenarios. Note that existing general Text-to-SQL datasets (e.g., Spider and WikiSQL) consist of databases from multiple domains, whereas BookSQL is focused on the financial domain; hence, the number of templates may appear small. However, the number of templates is still large when compared within a single domain: to give a rough estimate, the Spider dataset uses 5693 templates and spans 138 domains, so a rough estimate of the number of templates per domain is about 41 (\(\sim 5693/138\)). Note that Spider does not provide details about templates for each domain, so this is only a rough estimate. Moreover, questions in existing Text-to-SQL datasets (like Spider) are created by students Yu et al. (2018), whereas questions in BookSQL are created by financial experts who use accounting systems on a regular basis and are well-versed in the domain. Although our dataset is small in terms of the total number of templates, it is of high quality and more complex; hence it helps in learning models that would generalize well. Moreover, while experimenting with models, the queries in the test set are based on templates that are not used during training (see Section 5).
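As an illustration of the templatization step, the following is a minimal sketch of expanding one template into Question-SQL pairs; the placeholders mirror Figure 2, and the slot values and fill-in logic are hypothetical.

```python
from itertools import product

# One Question-SQL template in the style of Figure 2; {aggregation} and {customer}
# are placeholders that get substituted to generate concrete pairs.
question_tmpl = "What is the {aggregation} sales for {customer} in the last month?"
sql_tmpl = (
    "SELECT {agg_fn}(credit) FROM master_txn_table "
    "WHERE account_type IN ('Income', 'Other Income') AND customers = '{customer}' "
    "AND month(transaction_date) = month(current_timestamp) - 1"
)

# Hypothetical slot values; in BookSQL these come from the anonymized database.
aggregations = {"maximum": "MAX", "minimum": "MIN", "total": "SUM", "average": "AVG"}
customers = ["John", "Nathan Hernandez"]

pairs = []
for (agg_word, agg_fn), customer in product(aggregations.items(), customers):
    pairs.append({
        "question": question_tmpl.format(aggregation=agg_word, customer=customer),
        "sql": sql_tmpl.format(agg_fn=agg_fn, customer=customer),
    })

print(pairs[0]["question"])
print(pairs[0]["sql"])
```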
To the best of our knowledge, BookSQL is the first Text-to-SQL dataset to have multi-step questions, which require nested SQL queries to answer. For example - _"What products are selling less than last month/week?"_ It would first require computing monthly/weekly product-level sales and then comparing each product's current and last month's/week's sales. The BookSQL database schema also contains complex column types. Additionally, BookSQL is the first Text-to-SQL dataset to have extensive time-based filters like last month, this quarter to date, last financial year, between July and August, this week, yesterday, etc.
### Complexity of SQL in BookSQL
SQL queries in BookSQL are diverse and cover various levels of complexity, i.e., they cover the following operations: SELECT with multiple columns and aggregations, WHERE, GROUP BY, HAVING, ORDER BY, LIMIT, JOIN, INTERSECT, UNION, NOT IN, OR, AND, EXISTS, CONTAINS, as well as nested queries. Table 2 shows the comparisons
Figure 2: An example showing the pipeline for creating BookSQL dataset. Note, here we can replace _aggregation_entity_ by max, min, total, and average, and _customer_name_ can be replaced with any possible name to get the Question-SQL pair. Similarly, _date/period_ can be replaced with _last quarter, this quarter, last month_.
of all Text-to-SQL datasets. In terms of complexity, BookSQL consists of complex SQL queries containing 17,529 ORDER BY, 11,508 GROUP BY, and 4,456 NESTED queries. We further divided all Query-SQL pairs into three categories: Easy, Medium, and Hard, based on the complexity of the SQL. Table 4 shows examples for each category. Table 3 shows the main statistics of BookSQL. BookSQL consists of 7,193 Hard SQL queries, making it a more complex, large, and challenging dataset. We used the following criteria to decide on the complexity of a query; a heuristic sketch of this categorization is given after the list.
* EASY: simple queries with a single WHERE condition
* MEDIUM: multiple conditions in the WHERE clause and multiple columns in the SELECT clause
* HARD: JOIN, GROUP BY, inner queries, UNION, EXCEPT, as these are hard to predict from the natural language question.
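Below is a rough, hypothetical sketch of how these criteria could be applied automatically to a SQL string; the actual Easy/Medium/Hard labels in BookSQL were assigned during dataset construction, so this only illustrates the rules above.

```python
import re

# Keywords that mark a query as HARD; "(SELECT" catches inner/nested queries.
HARD_KEYWORDS = ("JOIN", "GROUP BY", "UNION", "EXCEPT", "(SELECT")

def classify_sql(sql: str) -> str:
    """Heuristically bucket a SQL query into EASY / MEDIUM / HARD per the criteria above."""
    s = " ".join(sql.upper().split())
    if any(kw in s for kw in HARD_KEYWORDS):
        return "HARD"
    # Count WHERE conditions (split on AND/OR) and SELECT columns.
    where = re.search(r"WHERE (.*?)(GROUP BY|ORDER BY|HAVING|LIMIT|$)", s)
    n_conditions = len(re.split(r"\bAND\b|\bOR\b", where.group(1))) if where else 0
    select = re.search(r"SELECT (.*?) FROM", s)
    n_columns = len(select.group(1).split(",")) if select else 1
    if n_conditions <= 1 and n_columns == 1:
        return "EASY"
    return "MEDIUM"

print(classify_sql("SELECT balance FROM customers WHERE customer_name = 'John'"))  # EASY
```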
## 4 Baseline Models
We benchmark existing state-of-the-art (SOTA) Text-to-SQL models on the BookSQL dataset.
**SEDE:** We fine-tuned the SEDE model (Hazoom et al., 2021) on the BookSQL dataset. SEDE is a T5-based sequence-to-sequence model (Raffel et al., 2020). It takes unordered schema items (tables and column names) along with questions as input and generates the corresponding SQL query as output.
**UniSAr:** We fine-tuned the UniSAr model (Dou et al., 2022) on the BookSQL train dataset, with T5-large as the base language model. UniSAr converts any seq-to-seq language model into a Text-to-SQL model via three non-invasive extensions: (1) Structure Mark to encode the database schema in the model input, (2) Constrained Decoding to generate well-structured SQL, and (3) SQL Completion for completing potentially missing JOIN relationships. For the BookSQL dataset, we removed the constrained decoding module of UniSAr, since it did not support SQL queries with the complex grammar present in BookSQL.
**RESDSQL:** RESDSQL (Li et al., 2023) decouples schema linking and skeleton-aware decoding for SQL generation. For schema linking, a cross-encoder is trained to rank the tables and columns required for a given query. For SQL generation, a seq-to-seq model with skeleton-aware decoding is used, which first generates an SQL skeleton, from which the model then predicts the actual SQL query. The masked self-attention in the decoder allows the first-generated skeleton to implicitly guide the subsequent SQL parsing.
**DIN-SQL + GPT4:** We use the prompt chaining
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline
**Complexity** & **Question** & **SQL** \\ \hline Easy & What is the balance owed by John? & SELECT balance from Customers where customer\_name = 'John' \\ \hline Medium & What is the maximum sales for John in the last month? & SELECT MAX(credit) FROM master\_txn\_table where account\_type in ('Income', 'Other Income') AND customer = 'John' AND month(transaction\_date) = month(current\_timestamp) - 1 \\ \hline Hard & What products are selling less than last month? & SELECT A.product\_service, revenue\_this\_month, revenue\_last\_month FROM (SELECT product\_service, SUM(credit) as revenue\_this\_month FROM master\_txn\_table WHERE account\_type in ('Income', 'Other Income') AND month(transaction\_date) = month(current\_timestamp) GROUP BY 1) AS A INNER JOIN (SELECT product\_service, SUM(credit) as revenue\_last\_month FROM master\_txn\_table WHERE account\_type in ('Income', 'Other Income') AND month(transaction\_date) = month(current\_timestamp) - 1 GROUP BY 1) AS B ON A.product\_service = B.product\_service WHERE revenue\_this\_month < revenue\_last\_month \\ \hline \hline \end{tabular}
\\end{table}
Table 4: Examples of Question-SQL pairs from BookSQLbased on complexity of the query.
technique proposed in Pourreza and Rafiei (2023). It decomposes the Text-to-SQL task into multiple sub-tasks and then solves each sub-task one by one by prompting GPT4 (Achiam et al., 2023) with sub-task-specific prompts. It uses the following sub-tasks:
1. **Schema Linking:** This module identifies references to database tables and columns required to answer the natural language question.
2. **Classification and Decomposition:** This module classifies each question into easy, non-nested complex, and nested complex. This signifies the type of SQL query required for the given question.
3. **SQL Generation:** This module generates the SQL using the output of previous modules.
4. **Self Correction module:** This module is responsible for correcting any minor mistakes in the SQL generated by the previous module.
Sample prompts for each of these sub-tasks are provided in Appendix Β§C.
**Dynamic few-shot prompt + GPT4 (DFew+GPT4):** We follow a dynamic few-shot prompting technique similar to Sun et al. (2023). First, a vector database is created by embedding train-set questions using the SentenceTransformers _all-MiniLM-L6-v2_ model.9 This model is trained on a dataset of 1 billion sentence pairs10 and is well suited for generating sentence embeddings. This embedding database is called trainDB. Then, at inference time, the embedding for the test question is created using the same SentenceTransformers model, and this embedding is used to do an ANN (Approximate Nearest Neighbor) search in trainDB to retrieve ten examples from the train set. These ten examples and the database schema are used to create the few-shot SQL generation prompt for GPT4. Pseudo-code and sample prompts are provided in Appendix Β§B. We use ChromaDB11 as the underlying vector database and for ANN search.
Footnote 9: [https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
Footnote 10: [https://huggingface.co/blog/lb-sentence-embeddings](https://huggingface.co/blog/lb-sentence-embeddings)
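As a minimal sketch of this retrieval step (the LangChain-based pseudo-code actually used is given in Appendix B), the train-set questions can be embedded with the same all-MiniLM-L6-v2 model and the nearest neighbors of a test question retrieved; the example questions here are only placeholders.

```python
from sentence_transformers import SentenceTransformer, util

# Same embedding model as used for building trainDB.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical train-set questions with their gold SQL (in practice, all of BookSQL train).
train_examples = [
    {"question": "What is the balance owed by John?",
     "sql": "SELECT balance FROM customers WHERE customer_name = 'John'"},
    {"question": "How much open credit does customer Ronald Bailey have?",
     "sql": "SELECT SUM(open_balance) FROM (SELECT DISTINCT transaction_id, open_balance "
            "FROM master_txn_table WHERE customers = 'Ronald Bailey')"},
]
train_embeddings = model.encode([ex["question"] for ex in train_examples],
                                convert_to_tensor=True)

def retrieve_few_shot(test_question: str, k: int = 10):
    """Return the k nearest train examples to build the few-shot prompt for GPT-4."""
    query_emb = model.encode(test_question, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, train_embeddings, top_k=k)[0]
    return [train_examples[h["corpus_id"]] for h in hits]

few_shot = retrieve_few_shot("How much does customer Ronald Bailey owe us?", k=2)
```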
## 5 Experiments, Results and Analysis
### Evaluation Metrics
We use the standard evaluation metrics (details in Appendix D) of Exact Match Accuracy (EMA) (Yu et al., 2018), Execution Accuracy (EA) (Yu et al., 2018), Partial Component Match F1 (PCM-F1) (Hazoom et al., 2021), BLEU-4 (Papineni et al., 2002), and ROUGE-L (Lin, 2004).
### Experimental Setup
We divide the dataset into \\(70\\%\\) train, \\(10\\%\\) validation, and \\(20\\%\\) test sets based on query templates. The test set contains \\(14.37\\%\\) easy, \\(78.43\\%\\) medium, and \\(7.2\\%\\) hard SQL queries. In order to check the generalization performance, queries in the test set are based on templates that are not used during training. Given limitations on the number of calls to OpenAI GPT4 API, we used a random \\(10\\%\\) of BookSQL test set for GPT4-based approaches. We provide details about training and hyper-parameters in Appendix E.
### Results
Table 5 and Table 6 show the performance of the baseline models. Table 5 shows the performance of
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline & \\multicolumn{3}{c}{**Spider**} & \\multicolumn{6}{c}{**BookSQL**} \\\\ \\cline{2-9}
**Model** & EMA & EA & EMA & PCM-F1 & EA & BLEU-4 & ROUGE-L \\\\ \\hline SEDE & 63.2\\% & - & 43.4\\% & 0.82 & 44.3\\% & 0.69 & 0.83 \\\\ UniSAr & 70\\% & - & 43.0\\% & 0.78 & 47.6\\% & 0.72 & 0.80 \\\\ RESDSQL & 80.5\\% & 84.1\\% & 51.5\\% & 0.81 & 54.4\\% & 0.74 & 0.81 \\\\ DIN-SQL+GPT4 & 60\\% & 85.3\\% & 9.3\\% & 0.63 & 7.6\\% & 0.43 & 0.68 \\\\ DFew+GPT4 & - & - & 47.5\\% & 0.89 & 67.2\\% & 0.86 & 0.90 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Results on Spider and BookSQL datasets. EMA refers to Exact Match Accuracy, EA refers to Execution Accuracy, and PCM-F1 refers to Partial Component Match F1. DFew+GPT4 refers to Dynamic few-shot prompt+GPT4

SOTA Text-to-SQL models fine-tuned on the BookSQL dataset. RESDSQL performs best with regard to exact match accuracy and execution accuracy. SEDE and UniSAr have poor exact match and execution accuracy scores. Though BookSQL and Spider are not directly comparable, we also include results of the models on the Spider dataset as a reference for comparison. As can be observed, the models that perform well on Spider do not perform well on BookSQL, indicating the complexity of the dataset. Table 5 also shows the in-context learning performance of GPT4 on the BookSQL test set. DIN-SQL+GPT4 could only get \(9.3\%\) exact match accuracy, while Dynamic few-shot prompt+GPT4 comes close to the best fine-tuned model, with an exact match accuracy of \(47.5\%\) and an execution accuracy of \(67.2\%\). Table 6 shows the performance on easy, medium, and hard queries. All models have perfect performance (\(100\%\) execution accuracy) on easy queries but struggle with medium and hard queries.
### Error Analysis
We observed that SOTA models fail on queries with date filters, nested queries, distinct aggregations, and domain-specific filters. Table 7 shows the outputs of the models on some examples from the test set. **DIN-SQL + GPT4** performs very poorly, with an execution accuracy of 7.6%. Perhaps the reason for the poor performance is that it uses the same static chain-of-thought prompt, irrespective of the test question. BookSQL questions are very diverse and require domain knowledge, and it is impossible to capture this diversity and domain knowledge in only a few examples in the prompt. Due to this, DIN-SQL fails whenever the test question is completely different from the examples provided in the prompt. **Dynamic few-shot prompt + GPT4** addresses the limitations of DIN-SQL by dynamically selecting few-shot examples for the prompt based on the test question. It significantly improves execution accuracy to 67.2%. Possible reasons for the remaining errors are: 1) getting confused between different columns (like a WHERE clause on the product_service vs. the account column - see Table 7), 2) mixing up the credit, debit, and amount columns and using incorrect columns in aggregations, 3) not generating nested SQL even when it is required to answer the test question correctly, and 4) failing when domain-specific information is required to generate SQL correctly. For example, transaction_type filters of invoice, sales receipt, and purchase order, or account_type filters of expense, income, account receivable, and account payable are incorrectly applied.
**SEDE** fails to generate correct SQL, possibly due to a lack of question and schema linking in the input to the T5 model. Due to this, it mixes up different columns like customer, vendor, product_service, and account. **UniSAr** performs poorly, possibly due to the complex queries introduced in BookSQL like date filters, nested queries, distinct aggregations, etc. UniSAr introduces constrained grammar-based decoding, which works well for simple queries but fails on such complex queries. **RESDSQL** is the best-performing model. Its remaining errors are possibly due to: 1) failure on complex time-based questions like _"What is average revenue for customer X in last 6 years"_ (see Table 7); 2) mixing up of credit and debit columns; 3) failure when distinct aggregations are required, like COUNT(DISTINCT transaction_id); 4) failure in the case of many nested queries.
## 6 Future Directions
Results show the poor performance of the SOTA models on BookSQL. We outline some of the possible directions for the future to improve performance.
**Multi-task learning:** One could employ a multi-task learning setup, i.e., in addition to optimizing the SQL generation objective, adding auxiliary objectives could help improve performance on hard SQL queries. These objectives could include (1) nested vs. non-nested SQL classification, (2) distinct keyword classification, and (3) date format classification.
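A minimal sketch of such a setup, assuming a T5-style backbone, is shown below; the auxiliary heads, label sets, and loss weight are hypothetical and only illustrate how the three classification objectives could be attached to the SQL-generation loss.

```python
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class MultiTaskText2SQL(nn.Module):
    """T5 SQL generator with auxiliary classification heads on the encoder output."""
    def __init__(self, model_name="t5-large", aux_weight=0.1):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        hidden = self.t5.config.d_model
        self.nested_head = nn.Linear(hidden, 2)    # nested vs. non-nested SQL
        self.distinct_head = nn.Linear(hidden, 2)  # DISTINCT keyword needed or not
        self.date_head = nn.Linear(hidden, 4)      # hypothetical date-format classes
        self.aux_weight = aux_weight
        self.ce = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, labels,
                nested_label, distinct_label, date_label):
        out = self.t5(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        # Mean-pool the encoder states as a cheap question representation.
        pooled = out.encoder_last_hidden_state.mean(dim=1)
        aux = (self.ce(self.nested_head(pooled), nested_label)
               + self.ce(self.distinct_head(pooled), distinct_label)
               + self.ce(self.date_head(pooled), date_label))
        return out.loss + self.aux_weight * aux
```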
**Pre-training:** For large databases, it is difficult for any model to relate the question tokens with column names when the question might refer to some table cell value. Before the Text-to-SQL task,
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Query** & **SEDE** & **UniSAr** & **RESDSQL** & **GPT4** \\\\ \\hline E & 100 & 100 & 100 & 100 \\\\ M & 43.08 & 46.49 & 62.12 & 71.35 \\\\ H & 15.00 & 12.34 & 15.00 & 22.08 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Execution Accuracy (in %) of various models on SQL queries of varying complexity. E refers to **Easy** query, M refers to **Medium** query and H refers to query with **Hard** complexity.
one could perform pre-training to better understand the question-table relationships. This can be done using masked modeling, by defining tasks such as column recovery and column prediction in which a few tokens are masked and the model tries to recover or predict them; a similar approach is proposed by Shi et al. (2020) via the GAP model.
**Multi-step few-shot prompting:** One could also generate SQL in multiple steps using dynamic few-shot prompting instead of generating in a single step.
**Value Encoding:** In-context learning models (GPT4) mix up different columns due to a lack of knowledge about table contents. Adding related table rows to the prompt could alleviate this issue.
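As a minimal, hypothetical sketch of this idea, a few rows whose cell values overlap with the question tokens could be fetched from the database and appended to the prompt:

```python
import sqlite3

def rows_for_prompt(db_path, question, table="master_txn_table", max_rows=3):
    """Fetch a few rows whose customer/vendor/product/account values appear in the
    question, so the prompt exposes actual cell values to the model."""
    conn = sqlite3.connect(db_path)
    tokens = [t.strip("?,.").lower() for t in question.split() if len(t) > 3]
    rows = []
    for col in ("customers", "vendor", "product_service", "account"):
        for tok in tokens:
            rows += conn.execute(
                f"SELECT * FROM {table} WHERE lower({col}) LIKE ? LIMIT ?",
                (f"%{tok}%", max_rows),
            ).fetchall()
    header = [d[1] for d in conn.execute(f"PRAGMA table_info({table})")]
    lines = [" | ".join(header)] + [" | ".join(map(str, r)) for r in rows[:max_rows]]
    return "Related rows:\n" + "\n".join(lines)
```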
## 7 Conclusion
In this paper, we propose BookSQL, a Text-to-SQL dataset that will have broad applications in the finance and accounting domain. The experimental outcomes of several Text-to-SQL models indicate considerable room for improvement. In the future, we aim to build a more robust model that can handle hard queries and improve performance.
\\begin{table}
\\begin{tabular}{p{142.3pt} p{284.5pt}} \\hline
1 & Question: & **What was the average invoice value for Biogenic municipal waste-fueled power generation?** \\\\ & Gold SQL: & SELECT avg(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr(account, βBiogenic municipal waste-fueled power generationβ) \\\\ & Few-shot GPT4: & SELECT avg(amount) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND product\\_service = βBiogenic municipal waste-fueled power generationβ \\\\ & SEDE: & SELECT avg(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr(account, βbiogenic municipal waste-fueled power generationβ) \\\\ & UniSAr: & SELECT avg(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND product\\_service = βBiogenic municipal waste-fueled power generationβ \\\\ & RESDSQL: & SELECT avg(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr(account, βbiogenic municipal waste-fueled power generationβ) \\\\ & RESDSQL: & SELECT avg(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr(account, βbiogenic municipal waste-fueled power generationβ) \\\\ & 2 & Question: & **What was the last invoice value for Drain cleaning in This week to date?** \\\\ & Gold SQL: & SELECT max(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr(account,βDrain cleaningβ) AND transaction\\_date BETWEEN date (current\\_date, βweekday 0β, β-7 daysβ) AND date(current\\_date) \\\\ & Few-shot GPT4: & SELECT credit FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND not\\_service = βDrain cleaningβ AND transaction\\_date BETWEEN date(current\\_date, βweekday 0β, β-7 daysβ) AND date(current\\_date) \\\\ & SEDE: & SELECT max(credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND customers = βdrain cleaningβ AND transaction\\_date BETWEEN date (current\\_date, βweekday 0β, β-7 daysβ) AND date (current\\_date) \\\\ & UniSAr: & SELECT max(credit) FROM master\\_txn\\_table WHERE transaction\\_date BETWEEN date (current\\_date, βweekday 0β, β-7 daysβ) AND date (current\\_date ) \\\\ & RESDSQL: & SELECT max (credit) FROM master\\_txn\\_table WHERE transaction\\_type = βinvoiceβ AND instr (account, βDrain cleaningβ) AND transaction\\_date BETWEEN date (current\\_date, βweekday βweekday 0β, β-7 daysβ) AND date (current\\_date ) \\\\ \\hline
3 & Question: & **What is my average revenue for Customer Nathan Hernandez in the last 6 years?** \\\\ & Gold SQL: & SELECT sum(credit)/6 FROM master\\_txn\\_table WHERE customers = βNathan Hernandezβ AND strftime(β\\%Yβ, transaction\\_date) =βsrftime(β\\%Yβ, current\\_date)-6-1 \\\\ & Few-shot GPT4: & SELECT sum(credit)/6 FROM master\\_txn\\_table WHERE customers = βNathan Hernandezβ AND strftime(β\\%Yβ, transaction\\_date) =βsrftime(β\\%Yβ, current\\_date)-6-1 \\\\ & SEDE: & SELECT avg(credit) FROM master\\_txn\\_table AS t1 JOIN chart\\_of\\_accounts AS t2 ON t1.account = t2.account\\_name WHERE account\\_type IN (βincomeβ,βother incomeβ) AND customers = βnathan hernandezβ AND transaction\\_date BETWEEN date(current\\_date,βstart of yearβ,β-6 yearβ) AND date(current\\_date,βstart of yearβ,β-1 dayβ) β \\\\ UniSAr: & SELECT avg(credit) FROM master\\_txn\\_table as T1 JOIN chart\\_of\\_accounts as T2 on T1.account = T2.account\\_name WHERE account\\_type in (βincomeβ, βOther Incomeβ) AND customers = βNathan Hernandezβ AND transaction\\_date BETWEEN date (current\\_date,'start of yearβ, β-6 yearβ) AND date (current\\_date,βstart of yearβ, β-1 dayβ) β \\\\ & RESDSQL: & SELECT sum (credit) /6 FROM master\\_txn\\_table WHERE customers = βNathan Hernandezβ AND strftime (β\\%Yβ, transaction\\_dateβ) \\textbackslash{{{}}}\\) \\\\ & & \\\\ \\hline \\end{tabular}
\\end{table}
Table 7: Error analysis for different models on the BookSQL test set
### Limitations
Since this is a resource paper, we release a large dataset and consequently focus less on modeling the Text-to-SQL system. We tested existing Text-to-SQL systems to see how well these fare on the new dataset. The results are indicative of considerable scope for improvement. In the future, we will focus on developing new models with better performance on BookSQL. Moreover, we hope that once the dataset is released, it will foster more research in this domain, resulting in more interesting models.
## Ethics Statement
Considering the privacy aspect, we create anonymized entries in the dataset. Moreover, the dataset was verified by financial experts to make sure that the entries adhere to accounting principles and are reflective of real-life scenarios. We will be releasing the dataset publicly for research uses. To the best of our knowledge, we are not aware of any other possible ethical consequences of the proposed dataset.
## References
* Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. _arXiv preprint arXiv:2303.08774_.
* Deng et al. (2022) Naihao Deng, Yulong Chen, and Yue Zhang. 2022. Recent advances in text-to-SQL: A survey of what we have and what we expect. In _COLING_, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
* Dou et al. (2022) Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, and Jian-Guang Lou. 2022. Unisar: A unified structure-aware autoregressive language model for text-to-sql.
* Finegan-Dollak et al. (2018) Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 351-360, Melbourne, Australia. Association for Computational Linguistics.
* Hazoom et al. (2021a) Moshe Hazoom, Vibhor Malik, and Ben Bogin. 2021a. Text-to-SQL in the wild: A naturally-occurring dataset based on stack exchange data. In _Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)_, pages 77-87, Online. Association for Computational Linguistics.
* Hazoom et al. (2021b) Moshe Hazoom, Vibhor Malik, and Ben Bogin. 2021b. Text-to-sql in the wild: a naturally-occurring dataset based on stack exchange data. _arXiv preprint arXiv:2106.05006_.
* Lee et al. (2021) Chia-Hsuan Lee, Oleksandr Polozov, and Matthew Richardson. 2021. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 2261-2273, Online. Association for Computational Linguistics.
* Li and Jagadish (2014) Fei Li and H. V. Jagadish. 2014. Constructing an interactive natural language interface for relational databases. _Proc. VLDB Endow._, 8(1):73-84.
* Li et al. (2023a) Haoyang Li, Jing Zhang, Cuiping Li, and Hong Chen. 2023a. ResdSql: Decoupling schema linking and skeleton parsing for text-to-sql. In _Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI)_.
* Li et al. (2023b) Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Chenhao Ma, Kevin C. C. Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023b. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls.
* Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Weijing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_, pages 311-318. Association for Computational Linguistics.
* Pourreza and Rafiei (2023) Mohammadreza Pourreza and Davood Rafiei. 2023. Din-sql: Decomposed in-context learning of text-to-sql with self-correction. _arXiv preprint arXiv:2304.11015_.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_, 21(1):5485-5551.
* Shi et al. (2020a) Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2020a. Learning contextual representations for semantic parsing with generation-augmented pre-training.
* Shi et al. (2020b) Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daume III, and Lillian Lee. 2020b. On the potential of lexico-logical alignments for semantic parsing to SQL queries. In _Findings of the Association for Computational Linguistics: EMNLP 2020_, pages 1849-1864, Online. Association for Computational Linguistics.
Ruoxi Sun, Sercan O Arik, Hootan Nakhost, Hanjun Dai, Rajarishi Sinha, Pengcheng Yin, and Tomas Pfister. 2023. Sql-palm: Improved large language modeladaptation for text-to-sql. _arXiv preprint arXiv:2306.00739_.
* Tang and Mooney (2001) Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In _Proceedings of the 12th European Conference on Machine Learning_, pages 466-477, Freiburg, Germany.
* Wang et al. (2020) Ping Wang, Tian Shi, and Chandan K. Reddy. 2020. Text-to-sql generation for question answering on electronic medical records. In _Proceedings of The Web Conference 2020_, WWW '20, page 350-361, New York, NY, USA. Association for Computing Machinery.
* Yaghmazadeh et al. (2017) Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. sqlizer: Query synthesis from natural language. _Proc. ACM Program. Lang._, 1(OOPSLA).
* Yu et al. (2018) Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task.
* Zhong et al. (2017) Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. _CoRR_, abs/1709.00103.
Figure 3: Sample BookSQL Business Distribution. The middle section shows the sample set of businesses, the inner section shows the industries associated with the corresponding business, and the outermost section shows the corresponding products of the business. This chart is made with the information available at: [https://www.ibisworld.com/united-states/list-of-industries/](https://www.ibisworld.com/united-states/list-of-industries/).
Figure 4: BookSQL Business Distribution. Here, the inner circle indicates the industries, the middle circle shows the sets of businesses associated with the respective industry, and the outermost circle indicates the corresponding products of the businesses. This chart is made with the information available at: [https://www.ibisworld.com/united-states/list-of-industries/](https://www.ibisworld.com/united-states/list-of-industries/).
## Appendix B Dynamic Few-shot Prompt + GPT4
Pseudo-code for dynamic few-shot train example selection for a given test question:
```
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import Chroma

# `examples` is the list of {"input": question, "output": sql} dicts from the train set.
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples,
    HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2"),
    Chroma,
    k=10,
    input_keys=["input"],
)
```
### Example Prompt
_Database schema:_
_Table master_txn_table, columns = [*, Transaction_ID, Transaction_DATE, Transaction_TYPE, Amount, CreatedDATE, CreatedUSER, Account, AR_paid, AP_paid, Due_DATE, Open_balance, Customers, Vendor, Product_Service, Quantity, Rate, Credit, Debit, payment_method, Misc]_
_Table chart_of_accounts, columns = [*, Account_name, Account_type]_
_Table customers, columns = [*, customer_name, customer_full_name, Billing_address, Billing_city, Billing_state, Billing_ZIP_code, Shipping_address, Shipping_city, Shipping_state, Shipping_ZIP_code, Balance]_
_Table employees, columns = [*, Employee_name, Employee_ID, Hire_date, Billing_rate, Deleted]_
_Table products, columns = [*, Product_Service, Product_Service_type]_
_Table vendors, columns = [*, Vendor_name, Billing_address, Billing_city, Billing_state, Billing_ZIP_code, Balance]_
_Table payment_method, columns = [*, Payment_method, Credit_card]_
_Foreign_keys = [master_txn_table.Account = chart_of_accounts.Account_name, master_txn_table.Customers = customers.customer_name, master_txn_table.Vendor = vendors.Vendor_name, master_txn_table.Product_Service = products.Product_Service, master_txn_table.payment_method = payment_method.payment_method]_
_Following are examples of questions and corresponding SQL queries. *10 few-shot examples from the train set*_
_Translate following question to SQL query._
_Input: How much open credit does customer Ronald Bailey have?_
_Output: SELECT_
## Appendix C DIN-SQL+GPT4 Prompts
Following section shows the sample prompts used in different DIN-SQL modules. For brevity, we have added only 1 few shot example in these sample prompts. Though in practice, 5-10 few shot examples are used and is mentioned at the end of prompt in * *.
### Schema Linking Prompt
Table master_txn_table, columns = [*, Transaction_ID, Transaction_DATE, Transaction_TYPE, Amount, CreatedDATE, CreatedUSER, Account, AR_paid, AP_paid, Due_DATE, Open_balance, Customers, Vendor, Product_Service, Quantity, Rate, Credit, Debit, payment_method, Misc]
Table chart_of_accounts, columns = [*, Account_name, Account_type]
Table customers, columns = [*, customer_name, customer_full_name, Billing_address, Billing_city, Billing_state, Billing_ZIP_code, Shipping_address, Shipping_city, Shipping_state, Shipping_ZIP_code, Balance]
Table employees, columns = [*, Employee_name, Employee_ID, Hire_date, Billing_rate, Deleted]
Table products, columns = [*, Product_Service, Product_Service_type]
Table vendors, columns = [*, Vendor_name, Billing_address, Billing_city, Billing_state, Billing_ZIP_code, Balance]
Table payment_method, columns = [*, Payment_method, Credit_card]
Foreign_keys = [master_txn_table.Account = chart_of_accounts.Account_name, master_txn_table.Customers = customers.customer_name, master_txn_table.Vendor = vendors.Vendor_name, master_txn_table.Product_Service = products.Product_Service, master_txn_table.payment_method = payment_method.payment_method]
Q: How much open credit does customer Ronald Bailey have?
S: select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = 'Ronald Bailey')
A: Let's think step by step. In the question \"How much open credit does customer Ronald Bailey?\", we are asked: \"How much open credit\", so we need column = [master_txn_table.open_balance] \"open credit does customer Ronald Bailey\", so we need column = [master_txn_table.transaction_id, master_txn_table.customers] Based on the columns and tables, we need these Foreign_keys = []. Based on the tables, columns, and Foreign_keys, The set of possible cell values are = [Ronald Bailey]. So the Schema_links are: Schema_links: [master_txn_table.open_balance, master_txn_table.customers, master_txn_table.transaction_id, Ronald Bailey]
*9 more few-shot examples*
### Classification prompt
Q: What are my transactions MTD?
schema_links: [master_txn_table.transaction_id, master_txn_table.amount, master_txn_table.transaction_date]
A: Let's think step by step. The SQL query for the question \"What are my transactions MTD?\" needs these tables = [master_txn_table], so we don't need JOIN. Plus, it doesn't require nested queries with (INTERSECT, UNION, EXCEPT, IN, NOT IN), and we need the answer to the questions = [\"\"]. So, we don't need JOIN and don't need nested queries, then the the SQL query can be classified as \"EASY\".
Q: How many products are never sold with total value higher than 5?
schema_links: [Product_Service.transaction_id, master_txn_table.transaction_type]
A: Let's think step by step. The SQL query for the question \"How many products are never sold with total value higher than 5?\" needs these tables = [Product_Service,master_txn_table], so we need JOIN. Plus, it requires nested queries with (INTERSECT, UNION, EXCEPT, IN, NOT IN) or inner query inside from clause, and we need the answer to the questions = [\"products that are sold with total value higher than 5\"]. So, we need JOIN and need nested queries, then the the SQL query can be classified as \"NESTED\"
Q: YTD, what was our smallest expense?
schema_links = [master_txn_table.account = chart_of_accounts.account_name, master_txn_table.credit, master_txn_table.transaction_date, master_txn_table.account_type, master_txn_table.debit]
A: Let's think step by step. The SQL query for the question \"YTD, what was our smallest expense?\" needs these tables = [master_txn_table, chart_of_accounts], so we need JOIN. Plus, it doesn't require nested queries with (INTERSECT, UNION, EXCEPT, IN, NOT IN), and we need the answer to the questions = [\"\"]. So, we need JOIN and don't need nested queries, then the SQL query can be classified as \"NON-NESTED\".
*More few-shot examples*
### SQL Generation
#### c.3.1 Easy Prompt
Q: \"How much open credit does customer Ronald Bailey?\"
Schema_links: [master_txn_table.open_balance, master_txn_table.transaction_id, master_txn_table.customers,Ronald Bailey]
SQL: select sum(open_balance) from ( select distinct transaction_id, open_balance from master_txn_table where customers = 'Ronald Bailey')
*4 more few-shot examples*
#### c.3.2 Non-Nested Complex Prompt
Q: \"How many Traveller accomodation did we sell to Ethan Walker today?\"
Schema_links: [master_txn_table.quantity,master_txn_table.customers, master_txn_table.product_service, master_txn_table.transaction_type, master_txn_table.transaction_date]
A: Let's think step by step. For creating the SQL for the given question, we need to join these tables = []. First, create an intermediate representation, then use it to construct the SQL query. Intermediate_representation: select sum(master_txn_table.quantity) from master_txn_table where master_txn_table.customers = 'Ethan Walker' and master_txn_table.product_service = 'Traveller accomodation' and master_txn_table.transaction_type in ('invoice','sales receipt') and master_txn_table.transaction_date BETWEEN date(current_date) AND date(current_date)
SQL: select sum(quantity) from master_txn_table where customers = 'Ethan Walker' and product_service = 'Traveller accomodation' and transaction_type in ('invoice','sales receipt') and transaction_date BETWEEN date(current_date) AND date(current_date)
*9 more few-shot examples*
#### c.3.3 Nested Complex Prompt
Q: \"How many products are never sold with total value higher than 5?\" Schema_links: [master_txn_table.product_service, master_txn_table.transaction_type, master_txn_table.credit, product_service.*]
A: Let's think step by step. \"How many products are never sold with total value higher than 5?\" can be solved by knowing the answer to the following sub-question \"Show me all the products which are never sold with total credit value higher than 5?\". The SQL query for the sub-question \"Show me all the products which are never sold with total credit value higher than 5?\" is SELECT count(*) FROM Product_Service WHERE product_service NOT IN ( SELECT product_service FROM master_txn_table WHERE transaction_type in ('invoice','sales receipt') group by product_service having sum(credit)>5) So, the answer to the question \"How many products are never sold with total value higher than 5?\" is = Intermediate_representation: SELECT count(Product_Service.*) FROM Product_Service WHERE Product_Service.product_service NOT IN ( SELECT master_txn_table.product_service FROM master_txn_table WHERE master_txn_table.transaction_type in ('invoice','sales receipt') group by master_txn_table.product_service having sum(master_txn_table.credit) > 5)
SQL: SELECT count(*) FROM Product_Service WHERE product_service NOT IN ( SELECT product_service FROM master_txn_table WHERE transaction_type in ('invoice','sales receipt') group by product_service having sum(credit) > 5)
*9 more few-shot examples*
#### c.4 Self Correction Prompt
For the given question, use the provided tables, columns, foreign keys, and primary keys to fix the given SQLite SQL QUERY for any issues. If there are any problems, fix them. If there are no issues, return the SQLite SQL QUERY as is.
Use the following instructions for fixing the SQL QUERY:
1) Use the database values that are explicitly mentioned in the question.
2) Pay attention to the columns that are used for the JOIN by using the Foreign_keys.
3) Use DESC and DISTINCT when needed.
4) Pay attention to the columns that are used for the GROUP BY statement.
5) Pay attention to the columns that are used for the SELECT statement.
6) Only change the GROUP BY clause when necessary (Avoid redundant columns in GROUP BY).
7) Use GROUP BY on one column only.
## Appendix D Evaluation Metrics
The following standard metrics are used:
* **Exact Match Accuracy (Yu et al., 2018):** Both predicted and the Gold SQL are decomposed into different SQL components like SELECT, WHERE, GROUP BY, etc. Predicted SQL is marked as correct if all SQL components exactly match with the Gold SQL.
* **Execution Accuracy (Yu et al., 2018):** Output of predicted SQL is the same as Gold SQL's output on execution against the database (a minimal sketch of this check is given after this list).
* **Partial Component Match F1 (Hazoom et al., 2021):** Both the predicted query and the gold query are parsed into trees using JSqlParser ([https://github.com/JSQLParser/JSqlParser](https://github.com/JSQLParser/JSqlParser)). These two parsed trees are compared, and an aggregated score is calculated based on the number of matching sub-trees.
* **BLEU-4 (Papineni et al., 2002):** It measures the number of matching n-grams between the predicted and the Gold SQL.
* **ROUGE-L (Lin, 2004):** It is based on the longest common sub-sequence (LCS) between the predicted and the Gold SQL. A longer shared sequence indicates more similarity between the predicted and the Gold SQL.
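To make the execution-based comparison concrete, the sketch below shows one minimal way to compute execution accuracy against a SQLite copy of the BookSQL database; the function name, the order-insensitive row comparison, and the treatment of un-executable predictions are our own illustrative choices, not the exact evaluation code used for the benchmark.

```python
import sqlite3
from collections import Counter

def execution_accuracy(pred_sql: str, gold_sql: str, db_path: str) -> bool:
    """Return True if the predicted SQL yields the same rows as the gold SQL."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # queries that fail to execute count as wrong
    finally:
        conn.close()
    # Compare the result multisets, ignoring row order.
    return Counter(map(tuple, pred_rows)) == Counter(map(tuple, gold_rows))
```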
## Appendix E Training Details and Hyper-parameters
All experiments were done on a single NVIDIA A10G Tensor Core GPU.
For SEDE, we used T5-Large as the base seq-to-seq model, with a learning rate of \\(5e-5\\) with 15 epochs and batch size of 6. For decoding, a beam size of 6 was used, with max decoding steps of 250.
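For illustration, the snippet below expresses the same hyper-parameters with the HuggingFace transformers API; it is only a sketch of an equivalent configuration (the original SEDE implementation may use a different training stack), and the output directory name is arbitrary.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

# Mirrors the SEDE settings reported above: lr 5e-5, 15 epochs, batch size 6,
# beam size 6, and at most 250 decoding steps.
args = Seq2SeqTrainingArguments(
    output_dir="sede-booksql",
    learning_rate=5e-5,
    num_train_epochs=15,
    per_device_train_batch_size=6,
    predict_with_generate=True,
    generation_num_beams=6,
    generation_max_length=250,
)
```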
For UniSAr, we use T5-Large as the base language model with a learning rate of 1e-5 and a maximum of 1024 tokens. We adopt polynomial_decay with 5,000 warmup updates. The dropout rate is 0.1. The optimizer is Adam with the default parameters. The max-update is set to 10,000. Empirically, the model obtained the best performance after about 10 \(\sim\) 15 epochs on BookSQL. Fairseq dynamically tunes the batch size to realize higher GPU utilization.
For RESDSQL, we used settings recommended by the original paper and code. The Schema Item Classifier module used a RoBERTa-large model with a learning rate of \\(1e-5\\) and an effective batch size of 32 (using gradient accumulation). topk_table_num value of 4 and topk_column_num value of 8 were used. For the text2sql module, a T5-large model was used with a learning rate of \\(5e-5\\) and an effective batch size of 32 (using gradient accumulation). Beam search decoding was used with num_beams set to 8 and num_return_sequences set to 8.
For DIN-SQL+GPT4 and Dynamic few shot prompt + GPT4, we used OpenAI GPT4 API with following settings: _n = 1, temperature=0.0, max_tokens=600, top_p = 1.0, frequency_penalty=0.0, presence_penalty=0.0._ Given limitations on the number of calls to OpenAI GPT4 API, we used a random 10% of BookSQL test set for GPT4-based approaches.
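For reference, a call with these settings might look like the sketch below, assuming the pre-1.0 `openai` Python client and the `gpt-4` model identifier; the helper function and prompt handling are illustrative.

```python
import openai  # assumes the pre-1.0 openai client interface

def generate_sql(prompt: str) -> str:
    """Query GPT-4 with the decoding settings listed above."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=1,
        temperature=0.0,
        max_tokens=600,
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0,
    )
    return response["choices"][0]["message"]["content"]
```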
\\begin{table}
\\begin{tabular}{l l l l l l l} \\hline \\hline
**Business Id** & **Vendor name** & **Billing address** & **Billing city** & **Billing state** & **Billing ZIP code** & **Balance** \\\\ \\hline
2 & Shelly Ramos & 82768 Dawn Crescent & West Cynthia & WY & 39877 & 4042.15 \\\\
2 & Jade Barnett & 782 Mitchell Camp & Grahambury & KS & 80370 & 12949.89 \\\\ & & Suite 676 & & & \\\\
2 & Nicole Jordan & 14959 Mccullough & East Kevinfurt & WI & 42930 & 5294.89 \\\\ & & Green Suite 029 & & & \\\\
2 & Adam Pena & 192 Brenda Gardens & Erinmouth & IA & 93008 & 6949.89 \\\\
2 & Jeffrey Roman & 784 Cameron Parks & North Gloriafurt & AR & 48141 & 7299.89 \\\\ & & Apt. 902 & & & \\\\
2 & Zachary Butler & 61717 Christopher & Port Joshua & MT & 44164 & 465.09 \\\\ & & Cliffs Apt. 122 & & & \\\\
2 & Taylor Moses & 19368 Jenny Courts & Kerristad & OR & 25430 & 65.09 \\\\ & & Apt. 094 & & & \\\\
2 & John Russo & Unit G387 Box 0856 & DPO & AA & 73133 & 1538.8 \\\\
2 & Robert Phillips & USCGC Steele & FPO & AA & 91533 & 55388.8 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 11: Vendor Table
\\begin{table}
\\begin{tabular}{l l l l l l} \\hline \\hline
**Business Id** & **Employee name** & **Employee ID** & **Hire date** & **Billing rate** & **Deleted** \\\\ \\hline
2 & Stephanie Baker & STE123 & 07/17/2022 & β & No \\\\
2 & Julia Rivera & JUL456 & 07/31/2002 & β & No \\\\
2 & Valerie Kline & VAL232 & 04/15/2012 & β & Yes \\\\
2 & Greg Cardenas & GRE443 & 08/27/2013 & β & No \\\\
2 & Mr. Zachary Levy & ZAC998 & 01/28/2000 & β & Yes \\\\
2 & Taylor Hughes & TAY009 & 07/17/2022 & β & Yes \\\\
2 & Jodi Bishop & JOD778 & 12/27/2016 & β & Yes \\\\
2 & Andrew Flores & AND667 & 05/20/2018 & β & No \\\\
2 & Earl Lee & EAR221 & 08/19/2002 & β & No \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 12: Employee Tables
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline
**Business Id** & **Product_service** & **Product_Service_type** \\\\ \\hline
2 & Hours & Service \\\\
2 & Services & Service \\\\
2 & Design & Service \\\\
2 & Installation & Service \\\\
2 & Lighting & Service \\\\
2 & Maintenance \\& Repair & Service \\\\
2 & Refunds \\& Allowances & Service \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 13: Product Service Table | Several large-scale datasets (e.g., WikiSQL, Spider) for developing natural language interfaces to databases have recently been proposed. These datasets cover a wide breadth of domains but fall short on some essential domains, such as finance and accounting. Given that accounting databases are used worldwide, particularly by non-technical people, there is an imminent need to develop models that could help extract information from accounting databases via natural language queries. In this resource paper, we aim to fill this gap by proposing a new large-scale Text-to-SQL dataset for the accounting and financial domain: BookSQL. The dataset consists of 100k natural language queries-SQL pairs, and accounting databases of 1 million records. We experiment with and analyze existing state-of-the-art models (including GPT-4) for the Text-to-SQL task on BookSQL. We find significant performance gaps, thus pointing towards developing more focused models for this domain. | Give a concise overview of the text below. | 185 |
# The Role of Provenance Management in Accelerating the Rate of Astronomical Research
NASA Exoplanet Science Institute, Infrared Processing and Analysis Center, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125, USA
Ewa Deelman
Information Sciences Institute, University of Southern California, 4676 Admiralty Way, Suite 1001, Marina del Rey, CA 90292, USA
## 1 Introduction
Astronomers need to understand the technical content of data sets and evaluate published claims based on them. All data products and records from all the steps used to create science data sets ideally would be archived, but the volume of data would be prohibitively high. The high-cadence surveys currently under development will exacerbate this problem; the Large Synoptic Survey Telescope alone is expected to deliver 60 PB of just raw data in its operational lifetime. There is therefore a need to create records of how data were derived \\(-\\) provenance - that contain sufficient information to enable replication of the data. A report issued by the National Academy of Sciences dedicated to the integrity of digital data recommends the curation of the provenance of data sets as part of its key recommendations [1].
Provenance records must meet strict specifications if they are to have value in supporting research. They must capture the algorithms, software versions, parameters, input data sets, hardware components and computing environments. The records should be standardized and captured in a permanent store that can be queried by end users. In this paper, we describe how the Montage image mosaic engine acts as a driver for the application in astronomy of provenance management methodologies now in development. Provenance management is an active field in many areas of science, and we describe work in earth sciences and oceanography that has applicability to astronomy. [2] describes provenance management in more detail.
## 2 Montage : A Case Study for Provenance Management
### What is Montage?
Montage ([http://montage.ipac.caltech.edu](http://montage.ipac.caltech.edu)) is a toolkit for aggregating astronomical images in Flexible Image Transport System (FITS) format into mosaics. Its scientific value derives from three features of its design:
* It uses algorithms that preserve the calibration and positional (astrometric) fidelity of the input images to deliver mosaics that meet user-specified parameters of projection, coordinates, and spatial scale. It supports all projections and coordinate systems in use in astronomy.
* It contains independent modules for analyzing the geometry of images on the sky, and for creating and managing mosaics.
* It is written in American National Standards Institute (ANSI)-compliant C, and is portable and scalable: the same engine runs on desktop, cluster, supercomputer environments or clouds running common Unix-based operating systems.
There are four steps in the production of an image mosaic:
1. Discover the geometry of the input images on the sky from the input FITS keywords and use it to calculate the geometry of the output mosaic on the sky.
2. Re-project the input images to the spatial scale, coordinate system, World Coordinate System (WCS) projection, and image rotation.
3. Model the background radiation in the input images to achieve common flux scales and background level across the mosaic.
4. Co-add the re-projected, background-corrected images into a mosaic.
Each production step has been coded as an independent engine run from an executive script. Figure 1 illustrates the second through fourth steps for the simple case of generating a mosaic from three input images. In practice, as many input images as necessary can be processed in parallel, limited only by the available hardware.
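As an illustration of how such an executive script can chain the independent engines, the sketch below runs a user-supplied list of steps in order and stops on the first failure; the step names and commands are placeholders, not the actual Montage module interfaces.

```python
import subprocess

def run_pipeline(steps):
    """Run each processing engine in sequence.

    `steps` is a list of (name, argv) pairs, e.g. the re-projection,
    background-rectification and co-addition engines of Section 2.1;
    the concrete commands are supplied by the caller.
    """
    for name, argv in steps:
        print(f"running step: {name}")
        result = subprocess.run(argv, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step '{name}' failed: {result.stderr}")
```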
### Production of Mosaics
In the production steps shown in Figure 1, the files output by one step become the input to the subsequent step. That is, the reprojected images are used as input to the background rectification. This rectification itself consists of several steps that fit a model to the differences between flux levels of each image, and in turn the rectified, reprocessed images are input to the co-addition engine. Thus the production of an image mosaic actually generates a volume of data that is substantially greater than the volume of the mosaic. Table 1 illustrates this result for two use cases that return 3-color mosaics from the Two Micron All Sky Survey (2MASS) images (see [http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html](http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html)). One is a 6 deg sq mosaic of \\(\\rho\\) Oph and the second is an All Sky mosaic. The table makes clear that the volume of intermediate products exceeds the mosaic size by factors of 30 to 50. The Infrared Processing and Analysis Center (IPAC) hosts an on-request image mosaic service (see Section 3) that delivers mosaics of user-specified regions of the sky, and it currently receives 25,000 queries per year. Were mosaics of the size of the \\(\\rho\\) Oph mosaic processed with such frequency, the service would produce 3.8 PB of data each year. Such volumes are clearly too high to archive.
\\begin{table}
\\begin{tabular}{l r r} \\hline \\hline & \\(\\rho\\) Oph 6 deg sq & All Sky Mosaic \\\\ \\hline \\# input images & 4,332 & 4,121,439 \\\\ \\# comp. steps & 25,258 & 24,030,310 \\\\ \\# intermediate products & 67,300 & 61,924,260 \\\\ Size of intermediate products & 153 GB & 126 TB \\\\ Mosaic Size & 2.4 GB & 4 TB \\\\ Annual Volume & 3.8 PB & \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Estimates of Files Generated in the Production of Image Mosaics. See text for an explanation of _Annual Volume_.
Figure 1: The processing steps used in computing an image mosaic with the Montage engine.
### The Scientific Need To Reprocess Mosaics
Montage makes three assumptions and approximations that affect the quality of the mosaics:
* Reprojection involves redistributing the flux from the input pixel pattern to the output pixel pattern. Montage uses a fast, custom algorithm that approximates tangent plane projections (geometric projections of the celestial sphere onto a tangent plane from a center of projection at the center of the sphere) as polynomial approximations to the pixel pattern on the sky, which can produce small distortions in the pixel pattern of the mosaic.
* There is no physical model of the sky background that predicts its flux as a function of time and wavelength. Montage assumes that the sky background is only significant at the lowest spatial frequencies, and rectifies the flux at these frequencies to a common level across all the input images. This approximation can confuse background flux with an astrophysical source present at the same frequencies, such as extended diffuse emission in a nebula or dust cloud.
* Co-additions of the reprojected, rectified images are weighted in a way that does not take into account outliers due to, e.g., residual cosmic ray hits.
Users have two options in investigating the impact of these three factors, and both involve knowing the provenance of the mosaics: 1. Analyze the output from intermediate steps to understand how the features in the mosaic originate. 2. Replace modules with implementations of new algorithms, such as a custom background rectification, and reprocess the mosaic.
## 3 Information Needed In Provenance Records
Column 1 of Table 2 lists all the information needed to specify a provenance record for an image mosaic. To illustrate the current quality of provenance recording, column 2 describes the provenance information that is made available to users by an on-line, on-request image mosaic service at [http://hachi.ipac.caltech.edu:8080/montage/](http://hachi.ipac.caltech.edu:8080/montage/). This service is hosted at IPAC, and returns mosaics of 2MASS, Sloan Digital Sky Survey (SDSS) and Digitized Sky Surveys at Space Telescope (DSS) images. When processing is complete, users are directed to a web page that contains links to the mosaic and to processing information. It is to the contents of these pages that column 2, table 2 refers.
The only information that is permanently recorded is the set of runtime parameters that specify the properties of the image mosaic (the coordinate system, projection, spatial sampling and so on), written as keywords in the mosaic file header. The file itself, as well as log files and the traceability to the input images, are deleted after 72 hours (but these can be reproduced if the user has a record of the specifications of the mosaic requested). There is no record of the execution environment, and the algorithm and software information are described on the project web page, which presumes that users know where to find them and that the web pages do not become stale.
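As a sketch of what capturing this information could look like for a single processing step, the snippet below assembles the runtime and environment fields of Table 2 into a dictionary; the field names are illustrative, and a production system would follow a standard model such as the Open Provenance Model [8].

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone

def file_checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def provenance_record(step, parameters, inputs, outputs, software_version):
    """Collect runtime and environment information for one processing step."""
    return {
        "step": step,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,
        "input_files": {p: file_checksum(p) for p in inputs},
        "output_files": {p: file_checksum(p) for p in outputs},
        "software_version": software_version,
        "environment": {
            "hardware": platform.machine(),
            "os": platform.platform(),
            "python": sys.version.split()[0],
        },
    }

# A record can then be serialized and stored in a queryable provenance store:
# json.dumps(provenance_record("reprojection", {...}, [...], [...], "4.0"))
```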
## 4 Experiments in Recording Provenance Information
The previous section reveals an obviously unsatisfactory state of affairs. We have therefore investigated how astronomers may take advantage of methodologies already under development in other fields to create and manage a permanent store of provenance records for the Montage engine. When complete, these investigations are intended to deliver an operational provenance system that will enable replication of any mosaic produced by Montage.
### Characteristics of Applications and Provenance Management
The design of Montage is well suited for the creation of provenance records, as follows (see [3] for more details):
* It is deterministic; that is, processing a common set of input files will yield the same output mosaic.
* It is component based, rather than monolithic.
* It is self-contained and requires, e.g., no distributed services.
* It runs on all common hardware platforms.
* It inputs data in self-describing standard formats.
* Its input data are curated and served over the long term.
\\begin{table}
\\begin{tabular}{l l} \\hline \\hline
**Information** & **Recorded In On-Request Service?** \\\\ \\hline
**Algorithms** & \\\\ Algorithm Design Documents & Accessible from Montage web page \\\\ Algorithm Version & Accessible from Montage web page \\\\
**Execution Environment** & \\\\ Specific hardware & No \\\\ OS and version & No \\\\ Process Control and Management Tools & No \\\\
**Software** & \\\\ Software Source Code, version & Accessible from Montage web page \\\\ Software Build Environment, version & Accessible from Montage web page \\\\ Compiler, version & Accessible from Montage web page \\\\ Dependencies and versions & Accessible from Montage web page \\\\ Test Plan Results & Accessible from Montage web page \\\\
**Runtime** & \\\\ Parameters & Included in output files \\\\ Input files, version & Retained for 72 hours after completion of job \\\\ Output Files, Log Files & Retained for 72 hours after completion of job \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Comparison of Required and Recorded Provenance Information
### Capturing the Provenance of Montage Processing
Many provenance systems are embedded in processing environments, which offer the benefits of efficient collection of self-contained provenance records, but at the cost of ease of interoperation with other provenance systems. Given that Montage can be run as a pipeline, it too can employ such a system, and indeed ([3]) has demonstrated this. In this paper, we will report instead on efforts to leverage an existing methodology to create a standardized provenance store that can interoperate with other applications. The methodology is the _Provenance Aware Service Oriented Architecture_ (PASOA) ([5]), an open source architecture already used in fields such as aerospace engineering, organ transplant management, and bioinformatics. In brief, when applications are executed they produce documentation of the process that is recorded in a _provenance store_, essentially a repository of provenance documents and records. The store is housed in a database so that provenance information can be queried and accessed by other applications.
In our investigation, Montage was run with the Pegasus framework [4]. Pegasus was developed to map complex scientific workflows onto distributed resources. It operates by taking the description of the processing flow in Montage (the abstract workflow) and mapping it onto the physical resources that will run it, and records this information in its logs. It allows Montage to run in multiple environments and takes full advantage of the parallelization inherent in the design. Pegasus has been augmented with PASOA to create a provenance record for Montage in eXtensible Markup Language (XML) that captures the information identified in Table 2. We show a section of this XML structure below, captured during the creation of a mosaic of M17:
<Tm1 version=\"1.0\" encoding=\"ISO-855-1\">> cinvocation nml=\"http/o/nsi.io.do/invocation\" xml=\"http/o/nsi.io.do/invocation\" xml=\"http/o/nsi.io.do/schema/iv-1.10\"> cinvocation nml=\"http/o/nsi.io.
## 5 Applications in Earth Sciences and Oceanography
While the work described above is an advanced experimental stage, Earth Sciences and Oceanography projects have for a number of years exploited operational provenance management systems [2]). We would suggest that astronomy has much to learn from these projects. Here we describe two examples, one involving an integrated pipeline, and one involving a complex data system that uses many instruments collecting a complex and dynamic data set.
### Example 1: The Moderate Resolution Imaging Spectroradiometer (MODIS)
An instrument launched in 1999 aboard the Terra platform, MODIS scans the Earth in 36 bands every two days. The raw ("level 0") data are transformed into calibrated, geolocated products ("level 1B"), which are then aggregated into global data products ("level 2") that are the primary science products. Examples include a global vegetation index map and a global sea surface temperature map. The raw data are archived permanently, but the level 1B data are much too large to archive. These data are retained for 30-60 days only. Consequently, the MODIS archive records all the process documentation needed to reproduce the Level 1B data from the raw satellite data. The process documentation includes the algorithms used, their versions, the original source code, a complete description of the processing environment and even the algorithm design documents themselves [6].
### Example Two: The Monterey Bay Aquarium Shore Side Data System (SSDS)
For the past four years, the SSDS has been used to track the provenance of complex data sets from many sources [7]. Oceanographers undertake campaigns that involve taking data from multiple sources (buoys, aircraft, underwater sensors, radiosondes and so on). These instruments measure quantities such as salinity and the amount of chlorophyll. These data are combined with published data, including satellite imagery, in simulations to predict oceanographic features, such as seasonal variations in water levels. The SSDS was developed to track the provenance of the data measured in the campaigns in a standardized central repository. Scientists use the SSDS to track back from derived data products to the metadata of the sensors, including their physical location, instrument and platform. The system automatically populates metadata fields, such as the positions of instruments on moving platforms.
## 6 Conclusions
* Tracking the provenance of data products will assume ever-growing importance as more and larger data sets are made available to astronomers.
* Methodologies such as PASOA are in use in aerospace and bioinformatics applications and show great promise for providing provenance stores for astronomy.
* Earth Science projects routinely track provenance information. There is much that astronomy can learn from them.
* There is also an effort in the provenance community to standardize on a provenance model [8], intended to foster interoperability between provenance systems and spur on the development of generic provenance capture and query tools.
## References
* [1] Committee on Ensuring the Utility and Integrity of Research Data in a Digital Age. _"Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age."_ National Academy of Sciences. 2009.
* [2] E. Deelman, B. Berriman, A. Chervenak, O. Corcho, Paul Groth, Luc Moreau. _\"Scientific Data Management: Challenges, Existing Technology, and Deployment_. Arie Shoshani and Doron Rotem, Editor. To be published by CRC Press/Taylor and Francis Books. 2009
* [3] P. Groth, E. Deelman, G. Juve, G. Mehta and B. Berriman. "Pipeline-Centric Provenance Model." Paper accepted for publication at Supercomputing 09. 2009.
* [4] E. Deelman, G. Singh, M-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G. B. Berriman, J. Good, A. Laity, J. C. Jacob and D. S. Katz. \"Pegasus: A Framework for Mapping Complex Scientific Workflows Onto Distributed Systems.\" Scientific Programming, 13, 219. 2005
* [5] S. Miles, P. Groth, S. Munroe, S. Jiang, T. Assandri, and L. Moreau. \"Extracting Causal Graphs from an Open Provenance Data Model\". Concurrency and Computation: Practice and Experience. 2007.
* [6] C. Tilmes and A. Fleig. "Provenance Tracking in an Earth Science Data Processing System". Second International Provenance and Annotation Workshop, IPAW, 221. 2008.
* [7] M. McCann and K. Gomes _\"Oceanographic Data Provenance Tracking with the Shore Side Data System,\"_. Second International Provenance and Annotation Workshop, IPAW, 309. 2008.
* [8] L. Moreau, J. Freire, J. Futrelle, R. McGrath, J. Myers, and P. Paulson, _\"The open provenance model\"_. University of Southampton, Technical Report, 2007. | The availability of vast quantities of data through electronic archives has transformed astronomical research. It has also enabled the creation of new products, models and simulations, often from distributed input data and models, that are themselves made electronically available. These products will only provide maximal long-term value to astronomers when accompanied by records of their provenance; that is, records of the data and processes used in the creation of such products. We use the creation of image mosaics with the Montage grid-enabled mosaic engine to emphasize the necessity of provenance management and to understand the science requirements that higher-level products impose on provenance management technologies. We describe experiments with one technology, the \"Provenance Aware Service Oriented Architecture\" (PASOA), that stores provenance information at each step in the computation of a mosaic. The results inform the technical specifications of provenance management systems, including the need for extensible systems built on common standards. Finally, we describe examples of provenance management technology emerging from the fields of geophysics and oceanography that have applicability to astronomy applications | Give a concise overview of the text below. | 214 |
# Trajectory Test-Train Overlap in
Next-Location Prediction Datasets
Massimiliano Luca\\({}^{1,2}\\)
Work done prior to joining Amazon
Luca Pappalardo\\({}^{3}\\)
Bruno Lepri\\({}^{2}\\)
and Gianni Barlacchi\\({}^{+}\\)
\\({}^{1}\\)Free University of Bolzano, Piazza Domenicani, 3, Bolzano, 39100, Italy.
\\({}^{2}\\)Bruno Kessler Foundation, Via Sommarive, 19, Trento, 38123, Italy.
\\({}^{3}\\)ISTI-CNR, Via Moruzzi, 1, Pisa, 56127, Italy.
\\({}^{4}\\)Amazon Alexa AI, Berlin, Germany.
## 1 Introduction
Next-location prediction is the task of forecasting which location an individual will visit, given their historical trajectories. It is crucial in many applications such as travel recommendation, and optimization [1; 2], early warning of potential public emergencies [3; 4; 5; 6], location-aware advertisements and geomarketing, and recommendation of friends in social network platforms [7; 8; 9; 10; 11]. Predicting an individual's location is challenging as it requires capturing human mobility patterns [12; 13] and combining heterogeneous data sources to model multiple factors influencing human displacements (e.g., weather, transportation mode, presence of points of interest and city events).
The striking development of Deep Learning (DL) and the availability of large-scale mobility data has offered an unprecedented opportunity to design powerful next-location predictors (NLs) and has driven test-set performance on mobility data to new heights [13]. However, little work has been done on how challenging these benchmarks are, what NLs learn, and their actual generalization capabilities. Although some studies investigate the predictability of human whereabouts and its relationship with the trajectories' features [14; 15], we know comparatively little about how the individuals' trajectories are distributed in mobility benchmarks, making it hard to understand and contextualize our observed results. Recent studies in natural language processing [16; 17] and computer vision [18] show that DL models excel on specific test sets but are not solving the underlying task. In this paper, we investigate whether it is the case for NLs too.
We perform an extensive study of the test sets of several public next-location benchmark datasets [13] and evaluate a set of state-of-the-art DL-based NLs on their generalization capability. We identify three levels of generalization that an NL should exhibit: (i) _known mobility_, requiring no generalization beyond recognizing trajectories seen during the training phase; (ii) _fragmentary mobility_, requiring generalization to novel compositions of previously observed trajectories; and (iii) _novel mobility_, requiring generalization to a sequence of movements not present in the training set. It is unclear how well state-of-the-art NLs perform on each of these three scenarios.
To address this compelling issue, we stratify mobility data by whether the trajectories in the test set also appear fully or partially in the training set. We quantify the overlap between trajectories with three measures accounting for different ways of computing the percentage of locations in the test trajectories that are also in the training trajectories.
We find that, in five next-location benchmark datasets, there is a severe problem of trajectory overlap between the test and training sets when composing them randomly: \(\sim\) 43% to 72% of test trajectories share at least 50% of their points with trajectories in the training set, and 7% to 14% of test sub-trajectories entirely overlap training sub-trajectories. In other words, based on the standard way training and test sets are split in the literature, a significant portion of the trajectories in the test sets have already been seen during training.
Based on these observations, we propose to evaluate NLs on _stratified test sets based on the overlap between trajectories in the training set_. We find significant variability in model performance, varying the percentage of overlap. Indeed, we find an accuracy \\(\\leq 5\\%\\) when predicting unseen trajectories (novel mobility) and \\(\\geq 90\\%\\) when predicting trajectories with high overlaps (known mobility). Surprisingly, we also find that DL-based NLs perform even worse than baseline models (e.g., Mobility Markov Chain or MMC [19]) when tested on novel mobility. Our results are consistent across the datasets analyzed and the NLs selected, demonstrating that current train/test splits are flawed, and more robust methods are needed to evaluate the generalization capabilities of NLs. We also show a way to improve next-location prediction accuracy, especially for the novel mobility scenario, injecting mobility laws into state-of-the-art NLs through a learning-to-rank task. In a nutshell, this paper provides the following novel contributions:
* We show that standard train/test splits of trajectory datasets generate a high trajectory overlap, proposing three metrics to quantify it;
* We evaluate NLs on stratified test sets and show that DL-based NLs do not generalize well on novel mobility, being outperformed by other simpler baselines (e.g., Mobility Markov Chains);
* We show how to improve the accuracy of DL-based NLs, especially for the novel mobility behavior, by performing a rerank of the models' scores based on spatial mobility patterns;
* Based on our findings, we provide a list of recommendations to improve datasets' creation and models' evaluation for next-location prediction.
## 2 Related Work
### Model Generalization
Measuring the generalization capabilities of deep neural networks has recently captured the attention of researchers in artificial intelligence. Lewis et al. [16] find that, in popular Question Answering (QA) datasets, 30% of test-set questions have a near-duplicate in the training sets and that all models perform worse on questions that cannot be memorized from training sets. Sen and Saffari [17] show that QA models do not generalize well on unseen question-context pairs. However, they still perform well on popular QA benchmarks because of their high overlap between train and test data. Liu et al. [20] go beyond the data and study the key factors that impact generalization in QA, finding that generalization is strongly affected by cascading errors from retrieval, question pattern frequency, and entity frequency.
### Predictability of Human Mobility
Several studies measure the limits of predictability of human mobility [12; 13]. Song et al. [14] analyze mobility traces of anonymized mobile phone users to find that 93% of the movements are potentially predictable. Zhang et al. [21] show that, when considering the mobility context (e.g., visiting time, kind of place visited), the upper bound of potential predictability in human mobility increases. Other studies show that this upper bound depends on the data scale and the processing techniques adopted [22; 23; 24]. In [15; 25], there is evidence that the so-called explorers (e.g., individuals without a routinary behavior) [26] are less predictable than the others. All the works discussed suggest that models may memorize certain trajectories (e.g., routinary mobility) while not being able to generalize well on novel mobility (i.e., mobility not observed during the training phase).
### Next-Location Prediction
Most NLs are based on (gated) recurrent neural networks (RNNs). RNNs [27] can efficiently deal with sequential data such as time series, in which values are ordered by time, or sentences in natural language, in which the order of the words is crucial to shaping its meaning. In Spatial Temporal Recurrent Neural Networks (ST-RNN) [28], RNNs are augmented with time- and space-specific transition matrices. Through linear interpolation, each RNN layer learns an upper and lower bound for the temporal and spatial matrices, which are then used to infer an individual's next visited location. Long Short-Term Memory Projection (LSTPM) [29] use sequential models to capture long- and short-term patterns in mobility data. The authors rely on a non-local network [30] for modeling long-term preferences and on geo-dilated RNNs inspired to capture short-term preferences [31]. More sophisticated models like DeepMove [32] use attention layers to capture the periodicity in mobility data. First, past and current trajectories are sent to a multi-modal embedding module to construct a dense representation of spatio-temporal and individual-specific information. Next, an attention mechanism extracts mobility patterns from historical trajectories, while a Gated Recurrent Unit (GRU) handles current trajectories. Finally, the multi-modal embedding, GRU, and attention layer outputs are concatenated to predict the future location. Recently, Spatio-Temporal Attention Network (STAN) [33] proposes to capture spatio-temporal information to leverage spatial dependencies explicitly. In particular, the authors use a multi-modal embedding layer to model historical trajectories and the GPS locations in the current trajectories. The embeddings are then forwarded to a spatio-temporal attention mechanism that selects a set of potential next locations. Many other works deal with spatio-temporal data using (gated) RNNs and attention mechanisms. Some of them also deal with the semantic meaning associated with locations. Examples of such models are Semantics-Enriched Recurrent Model (SERM) [34], Hierarchical Spatial-Temporal Long-Short Term Memory (HST-LSTM) [35], VANext [36], and Flashback [37]. Other Deep Learning solutions to next-location prediction have been discussed in a recent survey paper [13].
## 3 Problem Definition
Next-location prediction is commonly defined as the problem of predicting the next location an individual will visit given their historical movements, typically represented as spatio-temporal trajectories [13].
**Definition 1** (Trajectory): A spatio-temporal point \(p=(t,l)\) is a tuple where \(t\) indicates a timestamp and \(l\) a geographic location. A trajectory \(P=p_{1},p_{2},\ldots,p_{n}\) is a time-ordered sequence of \(n\) spatio-temporal points visited by an individual, who may have several trajectories, \(P_{1},\ldots,P_{k}\), where all the locations in \(P_{i}\) are visited before the locations in \(P_{i+1}\).
Given this definition, we formalize next-location prediction as follows:
**Problem 1** (Next-location prediction): Given the current trajectory of an individual \\(P_{k}=p_{1},p_{2},\\ldots,p_{n}\\) and their historical trajectories \\(\\mathcal{H}=P_{1},\\ldots,P_{k-1}\\), next-location prediction is the problem of forecasting the next point \\(p_{n+1}\\in P_{k}\\).
In other terms, a next-location predictor (NL) is a function \(\mathcal{M}(P_{k},\mathcal{H})\to p_{n+1}\), which takes the current trajectory \(P_{k}\) and the set \(\mathcal{H}\) of the individual's historical trajectories, and returns a spatio-temporal point \(p_{n+1}\) in \(P_{k}\).
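For concreteness, a minimal way to represent these objects in code is sketched below; the `Point` container and the `predict` interface are our own illustrative choices and are reused by the later sketches in this paper.

```python
from typing import List, NamedTuple

class Point(NamedTuple):
    t: float  # timestamp
    l: int    # location identifier

Trajectory = List[Point]

def next_location(model, current: Trajectory, history: List[Trajectory]) -> int:
    """M(P_k, H) -> p_{n+1}: predict the next location identifier given the
    current trajectory and the individual's historical trajectories."""
    return model.predict(current, history)  # hypothetical NL interface
```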
## 4 Trajectory Overlap
An NL should be able to predict an individual's next location in three scenarios: _(i)_ the NL has seen the individual's entire current trajectory during the training phase; _(ii)_ it has seen the current trajectory only partially, or it has seen a very similar trajectory of the same individual; _(iii)_ the current trajectory was absent from the training set. The latter scenario is essential, as machine learning models' ability to generalize is their capacity of making predictions on data never seen during the training phase [38].
However, in next-location prediction, there may be a significant _overlap_ between trajectories in the test set and those in the training set. For example, some test and training trajectories may belong to the same individual. Since human mobility is routinary, an individual's trajectories are similar to each other [12; 39], leading to scenarios _(i)_ and _(ii)_ above. Given this discussion, we investigate _the extent to which the overlap between trajectories in the test and training sets influences the model's ability to generalize_. We explore three ways to examine overlap: Jaccard Similarity (JS), Longest Common Subsequence (LCST), and Overlap From the End (OFE).
Jaccard Similarity (JS) measures the percentage of locations in the test trajectories that are also in the training trajectories, regardless of the order in which locations appear. Test trajectories with a high JS have many locations in common with training trajectories. In contrast, test trajectories with low JS should be less predictable as they are mainly composed of locations that arenot in the training trajectories. Formally, we define the JS between a trajectory \\(R\\in D_{\\text{test}}\\) and \\(P\\in D_{\\text{train}}\\) as:
\[\text{JS}(R,P)=\frac{|P\cap R|}{|P\cup R|}\]
We quantify the overlap between \\(R\\) and the training set as the maximum JS over all the trajectories in the training set:
\\[\\max_{P\\in D_{\\text{train}}}\\text{JS}(R,P).\\]
\\(\\text{JS}\\in[0,1]\\), where 1 indicates a full overlap (all locations in \\(R\\) are at least in a trajectory in \\(D_{\\text{train}}\\)) and 0 indicates no overlap (none of the locations in \\(R\\) are in the training set).
The Longest Common SubTrajectory (LCST) is the longest subtrajectory in common between two trajectories. Formally, given a training trajectory \(P=p_{1},p_{2},\ldots,p_{n}\) and a test trajectory \(R=r_{1},r_{2},\ldots,r_{m}\), we define a recursive function \(f(i,j)\) over the prefixes \(p_{1},\ldots,p_{i}\) and \(r_{1},\ldots,r_{j}\) as:

\[f(i,j)=\begin{cases}0,&\text{if }i=0\text{ or }j=0\\ f(i-1,j-1)+1,&\text{if }i,j>0\text{ and }p_{i}=r_{j}\\ \max\big(f(i-1,j),f(i,j-1)\big),&\text{if }i,j>0\text{ and }p_{i}\neq r_{j}\end{cases}\]

where \(n\) and \(m\) indicate the lengths of the training and test trajectories, respectively, and \(f(n,m)\in[0,\min(n,m)]\). The LCST between \(P\) and \(R\) is then:

\[\text{LCST}(P,R)=f(n,m)/m.\]
We quantify the overlap between R and the training set as the maximum LCST over all the trajectories in the training set:
\\[\\max_{P\\in D_{\\text{train}}}\\text{LCST}(R,P).\\]
The Overlap From End (OFE) enforces that the common subtrajectory is at the end of the two trajectories. Formally, given a trajectory \\(P=p_{1},p_{2},\\ldots,p_{n}\\), we define \\(P^{\\prime}=p_{n},\\ldots,p_{2},p_{1}\\) as its reversed trajectory. We then compute OFE\\((R,P)\\) with Algorithm 1 and quantify the overlap between \\(R\\) and the training set as the maximum OFE over all the trajectories in the training set:
\\[\\max_{P\\in D_{\\text{train}}}\\text{OFE}(R,P).\\]
In other terms, given a trajectory in the test set, we scan all the trajectories in the training set and, for each pair \((P,R)\), we count, starting from the last point, the number of common points. We then convert this number into a percentage of the test trajectory's length. Finally, the OFE of \(R\) is the highest percentage found.
```
overlaps ← dictionary()
for R' ∈ D_test do
    overlap ← 0
    for P' ∈ D_train do
        count ← 0
        for k ∈ {0, ..., min(|P'|, |R'|) - 1} do
            if R'[k] = P'[k] then
                count ← count + 1
            else
                break
            end if
        end for
        if count / |R'| > overlap then
            overlap ← count / |R'|
        end if
    end for
    overlaps[R'] ← overlap
end for
```
**Algorithm 1** OFE Computation
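A compact Python equivalent of Algorithm 1 is sketched below; it walks both trajectories backwards from their last point and stops at the first mismatch.

```python
def ofe(R, P):
    """Fraction of the test trajectory R that matches the training trajectory
    P when both are read backwards from the end."""
    count = 0
    for r, p in zip(reversed(R), reversed(P)):
        if r.l == p.l:
            count += 1
        else:
            break
    return count / len(R) if R else 0.0

def ofe_overlap(R, train_trajectories):
    """Overlap of R with the training set: the maximum OFE over D_train."""
    return max(ofe(R, P) for P in train_trajectories)
```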
## 5 Experimental Setup
### Datasets
We use five public datasets widely adopted in the literature to evaluate NLs [13] (see Table 1). Three of them (Gowalla, Foursquare New York, Foursquare Tokyo) are collected through social networking platforms, in which mobility traces are generated by the users' georeferenced posts (check-ins). Consequently, these mobility traces are sparse both in time and space. The other two datasets (Taxi Porto and Taxi San Francisco) describe GPS traces from taxis dense in space and time. In detail, Gowalla was a location-based social network platform that, like Foursquare, allowed users to check-in at so-called spots (venues) via a website or an app. The dataset [40] has almost six million check-ins collected over a year and a half, from February 2009 to October 2010. Each check-in contains the user identifier, location identifier, latitude and longitude pair, and timestamp. The dataset also contains information on the users' friendship network, which has around 200,000 nodes and one million edges. Foursquare is another location-based social network platform that allows users to check in into places. Data can be collected through the available APIs. A widely used dataset based on Foursquare is described in [41]. The information contained are the same as Gowalla, with additional information about the category of the venue. Piorkowski et al. [42] collected taxi trajectories in San Francisco in May 2008. Each point in a trajectory includes the taxi's identity, latitude, longitude, timestamp, and occupancy. Points are sampled every 10 seconds on average. Moreira et al. [43] (ECML/PKDD Challenge) collected taxi trajectories in Porto, Portugal. For each trajectory, we have the taxi's identifier, the latitude, longitude, and timestamp showing when the trip began. For each trajectory, data are sampled every 15 seconds. The dataset also includes auxiliary information for each trip, such as the trip's typology (e.g., sent from the central, demanded to the operator, demanded to the driver), the stand from which the taxi left, and a phone number identification for the passenger.
To extract trajectories from these datasets, we follow the same approach as in [32]: first, we filter out the users with fewer than ten records; second, we cut the sequence of records into several trajectories for each user based on the time interval between two neighboring records. As in [32], we choose 72 hours as the default interval threshold, based on common practice. Finally, we remove the users with fewer than five trajectories.
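A sketch of this preprocessing, assuming records are already sorted by user and time and reusing the `Point` container from Section 3, is shown below; the thresholds are the ones reported above.

```python
from collections import defaultdict

def build_trajectories(records, gap_hours=72, min_records=10, min_trajs=5):
    """records: iterable of (user_id, timestamp_in_seconds, location_id)."""
    by_user = defaultdict(list)
    for user, t, loc in records:
        by_user[user].append(Point(t, loc))

    trajectories = {}
    for user, points in by_user.items():
        if len(points) < min_records:
            continue                      # drop users with few records
        trajs, current = [], [points[0]]
        for prev, curr in zip(points, points[1:]):
            if curr.t - prev.t > gap_hours * 3600:
                trajs.append(current)     # cut on gaps longer than 72 hours
                current = []
            current.append(curr)
        trajs.append(current)
        if len(trajs) >= min_trajs:       # keep users with enough trajectories
            trajectories[user] = trajs
    return trajectories
```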
### Models
We validate our hypothesis by testing the generalization capability of the following state-of-the-art DL-based NLs (a minimal sketch of an RNN-based NL is shown after this list).
* **RNN**[27], the building block of the majority of NLs. RNNs are commonly adopted to model sequential data such as time series and natural language, in which the order of the items is crucial to shaping its meaning. RNNs are also widely used as building blocks of NLs to capture spatial and temporal patterns in the trajectories. An RNN is made of a sequence of gates, each one outputting a hidden state \\(h_{i}\\) based on the current input \\(x_{i}\\) and the previous gate \\(h_{i-1}\\). In this work, a gate is implemented as a hyperbolic tangent function (\\(\\tanh\\)).
* **ST-RNN**[28] enhances RNNs with time- and space-specific transition matrices. Each RNN layer learns an upper and lower bound for the temporal and spatial matrices via linear interpolation. These matrices are then used to predict where a person will go next.
* **Deep Move**[32] uses attention mechanisms to capture spatio-temporal periodicity in the historical trajectories. Also, the model uses GRUs (gated RNNs) to capture patterns in the current trajectory and relies on a multi-modal embedding to capture individual preferences and project trajectories in a low-dimensional space before passing them to the attention mechanisms and GRUs.
* **LSTPM**[29] combines long- and short-term sequential models: long-term patterns are modeled using a non-local network [30], short term preferences are captured using a geographic-augmented version of the concept of dilated RNNs [31].
* **STAN** explicitly captures spatio-temporal information using a multi-modal embedding to represent the trajectories and a spatio-temporal attention mechanism to capture patterns in the data [33]. The role of the attention mechanisms, supported by a balanced sampler, is to rank potential next locations.
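As a minimal illustration of the building block shared by these architectures, the sketch below implements a vanilla RNN next-location predictor in PyTorch; it is not any of the published models, and the embedding and hidden sizes are arbitrary.

```python
import torch
import torch.nn as nn

class SimpleNL(nn.Module):
    """Embed visited locations, run a tanh RNN over the current trajectory,
    and score every location as the possible next visit."""
    def __init__(self, n_locations, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(n_locations, emb_dim)
        self.rnn = nn.RNN(emb_dim, hidden_dim, nonlinearity="tanh",
                          batch_first=True)
        self.out = nn.Linear(hidden_dim, n_locations)

    def forward(self, location_ids):
        # location_ids: (batch, sequence_length) integer location identifiers
        x = self.embedding(location_ids)
        _, h = self.rnn(x)             # h: (num_layers, batch, hidden_dim)
        return self.out(h[-1])         # scores over all locations
```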
### Training
We split the trajectories into a training set, a validation set, and a test set for each dataset. All sets include trajectories from several users. We sort the trajectories temporally for each user and put the first 70% in the training set, the following 10% in the validation set, and the remaining 20% in the test set.
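The per-user temporal split can be expressed as in the sketch below (the helper names are illustrative):

```python
def split_user_trajectories(trajs):
    """Temporally ordered 70/10/20 split of one user's trajectories."""
    n_train, n_val = int(0.7 * len(trajs)), int(0.1 * len(trajs))
    return (trajs[:n_train],
            trajs[n_train:n_train + n_val],
            trajs[n_train + n_val:])

def split_dataset(trajectories_by_user):
    train, val, test = [], [], []
    for user, trajs in trajectories_by_user.items():
        tr, va, te = split_user_trajectories(trajs)
        train += [(user, t) for t in tr]
        val += [(user, t) for t in va]
        test += [(user, t) for t in te]
    return train, val, test
```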
All models are implemented with PyTorch and are made available through the library LibCity [44]. We follow the same configuration as [32] and use Adam [45] as optimizer.
We ran the experiments on a machine with 126GB of memory and two Nvidia RTX 2080Ti.
## 6 Testing Generalization Capability
We evaluate the performance of all models using the \\(k\\)-accuracy (ACC@\\(k\\)), the most common evaluation metric in the literature [13]. NLs output a list of all possible locations an individual will visit next ranked from the most to the least likely. ACC@\\(k\\) indicates how many times the true location is among the \\(k\\) top predicted locations. We evaluate all models using ACC@5.
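Concretely, ACC@\(k\) can be computed as in the following sketch, where each prediction is the ranked list of candidate locations returned by the model:

```python
def acc_at_k(ranked_predictions, true_locations, k=5):
    """ranked_predictions[i]: location ids for test case i, most likely first;
    true_locations[i]: the location actually visited next."""
    hits = sum(true in preds[:k]
               for preds, true in zip(ranked_predictions, true_locations))
    return hits / len(true_locations)
```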
We compare the DL models with Mobility Markov Chains (MMCs) [19], in which the visited locations are the states of a Markov chain and a transition matrix represents the first-order transition probabilities between these locations. The choice of MMCs as a baseline is justified because they cannot generalize as they summarize the training data.
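A minimal MMC baseline can be implemented by counting first-order transitions, as sketched below; the class interface is illustrative.

```python
from collections import defaultdict, Counter

class MMC:
    """First-order Mobility Markov Chain: P(next | current) is estimated by
    counting consecutive location pairs in the training trajectories."""
    def fit(self, train_trajectories):
        self.transitions = defaultdict(Counter)
        for traj in train_trajectories:
            for prev, curr in zip(traj, traj[1:]):
                self.transitions[prev.l][curr.l] += 1
        return self

    def predict(self, current_trajectory, k=5):
        """Return the k most likely next locations given the last location."""
        last = current_trajectory[-1].l
        return [loc for loc, _ in self.transitions[last].most_common(k)]
```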
For all datasets and overlap metrics (JS, LCST, and OFE), we compute the number of trajectories in the test set with an overlap with the training set between 0-20%, 20-40%, 40-60%, 60-80%, and 80-100%. Figure 1 shows the results for all the datasets analyzed.
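The stratification itself only requires assigning each test trajectory to a bin according to its maximum overlap with the training set, e.g.:

```python
def stratify_by_overlap(test_set, train_trajectories, overlap_fn,
                        bins=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """overlap_fn is one of js_overlap / lcst_overlap / ofe_overlap from the
    sketches above; bins correspond to 0-20%, ..., 80-100%."""
    stratified = {b: [] for b in bins}
    for user, R in test_set:
        score = overlap_fn(R, train_trajectories)
        for b in bins:
            if score <= b:
                stratified[b].append((user, R))
                break
    return stratified
```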
The percentage of trajectories with a high overlap (between 80% and 100%) varies widely with the overlap metric and the dataset. Taxi datasets have more trajectories with a high overlap than the datasets based on check-ins, suggesting that the overlap problem is more severe in GPS traces than in check-ins. We also observe that JS and LCST produce similar overlaps, while with OFE, the number of trajectories with low overlap is remarkably higher. This is due to the severe constraints that OFE imposes by definition (e.g., the overlap is evaluated starting only from the end of the trajectory).
\\begin{table}
\\begin{tabular}{l|c c c c} & & Users & Locations & Trajectories \\\\ \\hline Gowalla & [40] & 5300 & 125,771 & 72,593 \\\\ Foursquare NYC & [41] & 4390 & 13,960 & 12,519 \\\\ Foursquare Tokyo & [41] & 935 & 21,394 & 34,662 \\\\ Taxi Porto & [43] & 500 & 8524 & 94,214 \\\\ Taxi SF & [42] & 500 & 9321 & 103,120 \\\\ \\end{tabular}
\\end{table}
Table 1: Properties of the datasets adopted in our study. We describe each dataset's number of users, number of locations, and number of trajectories extracted.
In any case, Figure 1 highlights that a significant overlap exists between the test and the training set, introducing a bias when evaluating NLs using a random train-test split. Hence, we investigate to what extent this overlap affects model performance.
Figure 2 shows the performances for all the NLs and overlap metrics. Here, increasing the overlap induces a striking improvement in the model performance for both MMC and the NLs, which have similar performance. We present all performances in detail in the Supplementary A.
For example, for Foursquare New York and OFE, the performance of NLs is close to 100% on a test made of trajectories with an overlap with the training set in the range 80-100%. Results for Taxi Porto follow a similar increasing trend, although with less striking performance.
Overall, Figure 2 shows that model performance is strongly affected by trajectory test-train overlap, suggesting that NLs memorize trajectories instead of generalizing. NLs perform well on trajectories with high overlap with the training set but poorly on trajectories with low overlap. These results raise the question of how to improve the accuracy of NLs for low overlap scenarios.
Figure 1: Fraction of the test trajectories with an overlap of 0-20%, 20.40%, 40-60%, 60-80%, and 80-100% with the training trajectories, for the all datasets, for the evaluation metrics JS, LCST, and OFE.
## 7 Learning to Rank Locations Using Mobility Laws
A possible reason why NLs perform poorly on trajectories with low overlaps lies in the type of DL tools they rely on, i.e., RNNs: they focus on memorizing regularities in long sequences, thus limiting NLs' generalization capabilities. Wrong location predictions happen when the probabilities assigned to each potential location by the NL (i.e., the locations' scores) are low and relatively uniformly distributed. Our intuition is to rerank locations based on new scores obtained by injecting human mobility laws into NLs. We select three prominent human mobility laws [12; 13] (a sketch of how the corresponding features can be computed follows the list):
* the distance law [12]: people prefer travelling short distances. Given an individual's trajectories \\(P=p_{1},p_{2},\\ldots,p_{n}\\), we compute the Haversine distance between all the consecutive locations \\(p_{i},p_{i+1}\\) and consider the average of the distances as a feature \\(dist_{u}\\);
Figure 2: Results (in terms of ACC@5) for all the datasets and models. We compute the accuracy for the three overlap metrics (JS, LCST, OFE) and for five bins of percentage of trajectory overlap (from 0-20% to 80-100%).
* the visitation law [39]: the visits to a location decrease as the inverse square of the product of their visiting frequency and travel distance. We denote as \\(f\\) the number of visits to a location (by any individual) and compute how many people visit it within a distance \\(r\\). An individual's probability to visit location \\(p_{i+1}\\) is given by a power-law of the form \\(p_{i+1}(r,f)=\\mu_{i}/(rf)^{\\gamma}\\), with \\(\\gamma=1.6\\), a parameter fitted with the least squares method. We use the five most probable locations \\(top_{n},n\\in 1\\ldots 5\\) as an input to the reranker.
* the returner and explorer dichotomy [26]: individuals naturally split into two profiles based on their degree of spatial exploration. We compute the average radius of gyration \\(r_{g}(u)\\) and the 2-radius of gyration \\(r_{g}^{(2)}(u)\\) for each individual and compute the ratio \\(\\frac{r_{g}(u)}{r_{2g}(u)}\\) using the scikit-mobility [46] library. The profile of the user is then translated into a binary feature: 0 if the individual is a returner and 1 if the individual is an explorer. We denote this feature as \\(re_{u}\\).
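A sketch of how the distance-law and returner/explorer features can be derived from raw points is given below; it assumes a lookup table `coords` from location identifiers to (latitude, longitude), uses the conventional threshold \(r_{g}/r_{g}^{(2)}>2\) to label explorers, and omits the visitation-law feature for brevity.

```python
import math
from collections import Counter

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def mobility_features(user_points, coords):
    """user_points: all points of one individual; coords: location id -> (lat, lon)."""
    latlons = [coords[p.l] for p in user_points]
    # Distance law: average jump length between consecutive visits.
    dist_u = (sum(haversine_km(a, b) for a, b in zip(latlons, latlons[1:]))
              / max(len(latlons) - 1, 1))

    # Radius of gyration over a set of coordinates (centre-of-mass approximation).
    def gyration(points):
        cm = (sum(p[0] for p in points) / len(points),
              sum(p[1] for p in points) / len(points))
        return math.sqrt(sum(haversine_km(p, cm) ** 2 for p in points) / len(points))

    top2 = {loc for loc, _ in Counter(p.l for p in user_points).most_common(2)}
    r_g = gyration(latlons)
    r_g2 = gyration([coords[p.l] for p in user_points if p.l in top2])
    re_u = 1 if r_g / max(r_g2, 1e-9) > 2 else 0  # 1 = explorer, 0 = returner
    return dist_u, re_u
```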
Our approach consists in predicting the next location using a NL, and then combining into a single scoring model, i.e., a fully connected neural network, both the NL score for the location and the mobility laws. We trained the network using the binary cross-entropy loss \\(\\mathcal{L}=-\\sum_{i\\in\\{0,1\\}}y_{i}\\log p(y_{i})\\), where \\(y_{i}\\) is the label (i.e., 0 or 1) and \\(p(y_{i})\\) is the predicted probability.
The training dataset consists of vectors of the form \\([\\text{NL}_{i}(P),dist_{u},top_{1},\\ldots,top_{5},re_{u}]\\). We denote with \\(\\text{NL}_{i}(P)\\) the score of the NL for a given location \\(i\\) starting from a trajectory \\(P\\). The label is 1 if the location \\(i\\) is the individual's next-location and 0 otherwise. This means that in a dataset with \\(n\\) locations, for each trajectory we have a positive sample (e.g., the correct next location) and \\(n-1\\) negative samples for each trajectory. As the number of incorrect samples is much higher than the correct ones, for each correct location, we randomly sampled \\(k=20\\) wrong locations (e.g., locations that are different from the actual next location the individual will visit), as we found it to be a good trade-off between performance and dataset size.
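A sketch of the scoring model and its training loop is given below; the layer sizes, learning rate, and number of epochs are illustrative, and the eight input features follow the vector \([\text{NL}_{i}(P),dist_{u},top_{1},\ldots,top_{5},re_{u}]\) described above.

```python
import torch
import torch.nn as nn

class Reranker(nn.Module):
    """Fully connected network scoring one candidate location from the NL
    score plus the mobility-law features."""
    def __init__(self, n_features=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one score per candidate

def train_reranker(model, features, labels, epochs=10, lr=1e-3):
    """features: (num_samples, 8); labels: 1 for the true next location,
    0 for the k = 20 sampled negative locations of each trajectory."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw scores
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels.float())
        loss.backward()
        opt.step()
    return model
```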
Table 2 and Figure 3 show how the accuracy changes on the test trajectories with 0-20% overlap on all the datasets and models considered. Our reranking leads to improved accuracy regardless of the dataset and the overlap measures used. The biggest relative improvements are related to the trajectories with an overlap of 0-20%. Regarding check-in datasets, on Foursquare New York, the improvement varies from +3.25% (ST-RNN) to +9.38% (LSTPM). Similarly, on Foursquare Tokyo, the improvement varies from +5.69% (DeepMove) to +9.33% (STAN). In Gowalla, we have the lowest relative improvement on DeepMove (+4.43%) and the highest on RNN (+29.09%). Concerning taxi datasets, on Taxi Porto, the relative improvement on the average case (i.e., without stratifying the test set) varies from +2.68% (RNN) to +5.84% (STAN). On Taxi San Francisco, the relative improvement varies from +2.49% (RNN) to +5.74% (DeepMove). Regarding the 0-20% stratification, the largest relative improvement is associated with the metric JS, followed by LCST and OFE. On Foursquare New York, the relative improvement with JS is up to +96.15%, with LCST being +20.39%, and with OFE being +33.05%. Similarly, on Foursquare Tokyo, we have top relative improvements of +82.35%, +21.78%, and +24.36% with JS, LCST, and OFE, respectively. Finally, Gowalla's top relative improvements for JS, LCST, and OFE are +68.82%, +45.45%, and +50.03%. In general, taxi datasets are associated with the lowest relative improvement: with JS, it is up to +7.96%, with LCST +6.68%, and with OFE +7.05% on Taxi Porto. On the other hand, on Taxi San Francisco, the relative improvements for JS, LCST, and OFE are +5.82%, +9.68%, and +8.76%. The largest relative improvement is associated with the 0-20% overlap scenario. For example, the largest relative improvement on the 80-100% bin is 0.12%. In other words, our rerank strategy brings the largest improvement in accuracy, especially where NLs are the least accurate.
Figure 3: Results (in terms of ACC@5) for all the datasets for the three overlap metrics (JS, LCST, OFE) for trajectories with a 0-20% overlap. We provide the results for all datasets and the other overlaps in Supplementary B.
## 8 Discussion and Recommendations
This work finds that the models' performances are deeply affected by the level of overlap present in the test trajectories. Based on the amount of trajectory overlap, we identify three scenarios:
* **Known Mobility**: the NL sees the entire trajectory in the training phase (overlap between 80% and 100%). Predictive performance is much higher than the performance on a non-stratified test set (close to 100%) as the test trajectories are almost identical to the training trajectories.
* **Fragmentary Mobility**: the NL sees a significant portion of the trajectory (overlap between 20% and 80%). The majority of trajectories in the test set lies in this scenario. There is a drop in the model performance compared to the previous scenario, decreasing up to \\(\\sim\\)80%.
* **Novel Mobility**: the NL sees a tiny or no portion of the trajectory (overlap below 20%). A significant number of trajectories lie in this scenario. However, since NLs cannot rely on the trajectories already seen in the training phase, these are the most difficult trajectories to predict. Indeed, the performance of NLs on test sets with low overlap is considerably lower than the performance on a non-stratified test set.
\\begin{table}
\\begin{tabular}{l|l|c c c c} & NL + RR & & **JS** & LCST & OFE \\\\ \\hline \\multirow{5}{*}{Foursquare NYC} & RNN &.233 (+9.38\\%) & **.051 (+96.15\\%)** &.158 (+19.69\\%) &.241 (+26.17\\%) \\\\ & ST-RNN &.261 (+52.42\\%) & **.059 (+84.37\\%)** &.186 (+15.52\\%) &.299 (+31.14\\%) \\\\ & Deep Move &.277 (+6.94\\%) & **.084 (+64.71\\%)** &.213 (+19.66\\%) &.268 (+16.01\\%) \\\\ & LSTPM &.272 (+8.36\\%) & **.072 (+56.52\\%)** &.184 (+10.84\\%) &.271 (+18.34\\%) \\\\ & STAN &.281 (+6.43\\%) & **.101 (+32.89\\%)** &.214 (+15.05\\%) &.283 (+18.90\\%) \\\\ \\hline \\multirow{5}{*}{Foursquare TKY} & RNN &.196 (+6.56\\%) & **.028 (+40.02\\%)** &.123 (+21.78\\%) &.171 (+15.54\\%) \\\\ & ST-RNN &.213 (+7.58\\%) & **.057 (+67.65\\%)** &.133 (+15.64\\%) &.194 (+24.36\\%) \\\\ & Deep Move &.223 (+5.69\\%) & **.060 (+46.34\\%)** &.142 (+19.33\\%) &.201 (+19.64\\%) \\\\ & LSTPM &.233 (+6.88\\%) & **.074 (+75.45\\%)** &.151 (+21.77\\%) &.236 (+34.86\\%) \\\\ & STAN &.246 (+9.33\\%) & **.093 (+82.35\\%)** &.153 (+18.60\\%) &.239 (+32.04\\%) \\\\ \\hline \\multirow{5}{*}{Gowalla} & RNN &.142 (+29.09\\%) & **.157 (+68.82\\%)** &.016 (+45.45\\%) &.144 (+50.03\\%) \\\\ & ST-RNN &.149 (+8.76\\%) & **.143 (+28.83\\%)** &.033 (+17.86\\%) &.127 (+19.81\\%) \\\\ & Deep Move &.165 (+4.43\\%) & **.164 (+41.38\\%)** &.041 (+13.89\\%) &.151 (+32.46\\%) \\\\ & LSTPM &.171 (+12.50\\%) & **.182 (+61.06\\%)** &.043 (+34.38\\%) &.151 (+36.04\\%) \\\\ & STAN &.206 (+6.77\\%) & **.178 (+43.55\\%)** &.059 (+15.69\\%) &.146 (+22.69\\%) \\\\ \\hline \\multirow{5}{*}{Taxi Porto} & RNN &.421 (+2.68\\%) & **.069 (+4.54\\%)** &.296 (+1.02\\%) &.398 (+17.95\\%) \\\\ & ST-RNN &.427 (+2.64\\%) &.077 (+5.47\\%) &.313 (+3.98\\%) & **.418 (+5.55\\%)** \\\\ & DeepMove &.466 (+5.42\\%) & **.104 (+6.12\\%)** &.341 (+3.96\\%) &.434 (+6.11\\%) \\\\ & LSTPM &.457 (+6.52\\%) & **.095 (+6.74\\%)** &.336 (+6.32\\%) &.419 (+5.01\\%) \\\\ & STAN &.483 (+6.62\\%) & **.111 (+7.96\\%)** &.351 (+6.68\\%) &.440 (+7.05\\%) \\\\ \\hline \\multirow{5}{*}{Taxi SR} & RNN &.288 (+2.49\\%) &.193 (+4.89\\%) &.208 (+1.46\\%) & **.276 (+5.34\\%)** \\\\ & ST-RNN &.297 (+4.95\\%) &.200 (+5.82\\%) &.225 (+6.64\\%) & **.298 (+8.76\\%)** \\\\ \\cline{1-1} & Deep Move &.313 (+5.74\\%) &.202 (+4.12\\%) &.227 (+4.13\\%) & **.297 (+6.45\\%)** \\\\ \\cline{1-1} & LSTPM &.301 (+5.24\\%) &.199 (+3.65\\%) &.238 (+9.68\\%) & **.293 (+5.02\\%)** \\\\ \\cline{1-1} & STAN &.330 (+5.11\\%) &.208 (+3.48\\%) &.233 (+4.48\\%) & **.309 (+5.79\\%)** \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: ACC@5 of all the models on all the datasets after reranking (NL + RR). The first numeric column reports the overall (non-stratified) ACC@5; the JS, LCST, and OFE columns report ACC@5 on the 0-20 overlap bin under each metric, with the relative improvement over the NL without reranking in parentheses. We find a significant relative improvement, especially on the trajectories with a 0-20 overlap. Regarding check-in datasets, we have the greatest relative improvement on the stratification based on JS (in bold). In taxi datasets, we have similar improvements on JS and OFE: while on Taxi Porto we obtain the best improvements on JS, on Taxi San Francisco we reach the best improvements on OFE. In general, the improvements on check-in datasets are higher than on taxi datasets. No single model consistently yields the largest improvement.
While predicting known mobility is a simple task, inferring mobility patterns for fragmentary mobility and novel mobility presents challenges (e.g., dealing with locations that are under-represented or not represented at all in the training set). From a modeling perspective, this may suggest that current models are excellent at memorizing already seen trajectories but cannot generalize well. Some works suggest that reranking techniques or few-shot learning algorithms may help solve this problem [47]. Our results also indicate that NLs might not be evaluated adequately. In this sense, here we provide a set of recommendations for the evaluation of NLs:
1. MMCs achieve performance similar to NLs. Therefore, we claim that MMCs and other Markov chains approaches should always be used as a baseline.
2. Although NLs achieve good overall performance, they are significantly biased due to trajectory overlap. Besides the NLs' average performance, researchers should report the performance for the known mobility and novel mobility scenarios. It is indeed crucial to understand whether the improved performance of the proposed NL is actually due to its increasing generalization capability or because it is memorizing better the trajectories in the training set;
3. NLs achieve the worst performance on the 0-20 overlap bin. We can improve the performance on this bin, hence increasing NLs' generalization capability with the support of well-known spatial mobility laws, which are loosely captured by state-of-the-art NLs given their reliance on RNNs.
From other perspectives (e.g., urban planning and sustainability), having models that can generalize well is fundamentally important. First, NLs that generalize can be used to perform better simulations and to analyze what-if scenarios more realistically; for instance, we may be able to see how attractive a new POI in a specific place would be. We cannot solve such problems with a model that only memorizes seen trajectories. Generalizing models can also help urban planners make decisions about traffic and transportation and, thus, reduce pollution, and they can be used to better predict and understand the mobility of individuals who have never been seen in a region (e.g., tourists). Moreover, a generalized model may be geographically transferable (e.g., trained in one area and tested on a new territory). This may represent a significant step toward solutions to some of the United Nations' Sustainable Development Goals. In particular, we may use such models to run simulations or investigate pollution, inclusion, and the design of better cities in territories where data are scarce or unavailable.
## 9 Conclusions
In this work, we investigate the generalization capabilities of next-location predictors on public mobility datasets. We find that model performance is considerably affected by trajectory test-train overlap, suggesting that NLs memorize training trajectories rather than generalizing. We show that we can mitigate this issue by injecting mobility laws into state-of-the-art NLs, achieving significant relative improvements on test sets with low overlap with the training ones. In future work, we aim to consider other mobility laws and use more sophisticated models to rerank the results. It would also be helpful to use explainable AI techniques to better understand the role of mobility laws and the relations between the DL modules composing NLs.
## Declarations
**Funding** Luca Pappalardo has been partially supported by EU project SoBigData++ grant agreement 871042.
**Conflict of Interest** The authors have no competing interests to declare that are relevant to the content of this article.
**Ethics approval** not applicable
**Consent to participate** not applicable
**Consent for publication** not applicable
**Availability of data and materials** All the data are publicly available and can be downloaded using the links at github.com/scikit-mobility/DeepLearning4HumanMobility.
**Code availability** The code used to compute the overlap can be found at github.com/MassimilianoLuca/overlap-processing; the code of the models can be found at github.com/LibCity/Bigscity-LibCity
**Authors' contributions** M.L. designed the methodology to compute the overlap and the rerank methodology. G.B. directed the study. All the authors contributed to interpreting the results and writing the paper. G.B. developed this work prior to joining Amazon.
## References
* (1) Shi, Y., Feng, H., Geng, X., Tang, X., Wang, Y.: A survey of hybrid deep learning methods for traffic flow prediction. In: Proceedings of the 2019 3rd International Conference on Advances in Image Processing, pp. 133-138. Association for Computing Machinery (2019)
* (2) Khaidem, L., Luca, M., Yang, F., Anand, A., Lepri, B., Dong, W.: Optimizing transportation dynamics at a city-scale using a reinforcement learning framework. IEEE Access **8**, 171528-171541 (2020)
* (3) Barlacchi, G., Perentis, C., Mehrotra, A., Musolesi, M., Lepri, B.: Are you getting sick? predicting influenza-like symptoms using human mobility behaviors. EPJ Data Science, 27 (2017)
* (4) Canzian, L., Musolesi, M.: Trajectories of depression: unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis. In: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 1293-1304 (2015)
* (5) Pappalardo, L., Vanhoof, M., Gabrielli, L., Smoreda, Z., Pedreschi, D., Giannotti, F.: An analytical framework to nowcast well-being using mobile phone data. International Journal of Data Science and Analytics, 75-92 (2016)
* (6) Voukelatou, V., Gabrielli, L., Miliou, I., Cresci, S., Sharma, R., Tesconi, M., Pappalardo, L.: Measuring objective and subjective well-being: dimensions and data sources. International Journal of Data Science and Analytics (2020)
* (7) Zhu, W.-Y., Peng, W.-C., Chen, L.-J., Zheng, K., Zhou, X.: Modeling user mobility for location promotion in location-based social networks. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1573-1582 (2015)
* (8) Burbey, I., Martin, T.L.: A survey on predicting personal mobility. International Journal of Pervasive Computing and Communications (2012)
* (9) Wu, R., Luo, G., Shao, J., Tian, L., Peng, C.: Location prediction on trajectory data: A review. Big Data Min. Anal. **1**, 108-127 (2018)
* (10) Zheng, X., Han, J., Sun, A.: A survey of location prediction on twitter. IEEE Transactions on Knowledge and Data Engineering **30**(9), 1652-1671 (2018)
* (11) Zhao, L.: Event prediction in big data era: A systematic survey. arXiv preprint arXiv:2007.09815 (2020)
* [12] Barbosa, H., Barthelemy, M., Ghoshal, G., James, C.R., Lenormand, M., Louail, T., Menezes, R., Ramasco, J.J., Simini, F., Tomasini, M.: Human mobility: Models and applications. Physics Reports **734**, 1-74 (2018)
* [13] Luca, M., Barlacchi, G., Lepri, B., Pappalardo, L.: A survey on deep learning for human mobility. ACM Comput. Surv. **55**(1) (2021). [https://doi.org/10.1145/3485125](https://doi.org/10.1145/3485125)
* [14] Song, C., Qu, Z., Blumm, N., Barabasi, A.-L.: Limits of predictability in human mobility. Science, 1018-1021 (2010)
* [15] Amichi, L., Viana, A.C., Crovella, M., Loureiro, A.A.: Understanding individuals' proclivity for novelty seeking. In: Proceedings of the 28th International Conference on Advances in Geographic Information Systems, pp. 314-324 (2020)
* [16] Lewis, P., Stenetorp, P., Riedel, S.: Question and answer test-train overlap in open-domain question answering datasets. arXiv preprint arXiv:2008.02637 (2020)
* [17] Sen, P., Saffari, A.: What do models learn from question answering datasets? arXiv preprint arXiv:2004.03490 (2020)
* [18] Zhou, K., Liu, Z., Qiao, Y., Xiang, T., Loy, C.C.: Domain generalization: A survey. arXiv preprint arXiv:2103.02503 (2021)
* [19] Gambs, S., Killijian, M.-O., del Prado Cortez, M.N.: Next place prediction using mobility markov chains. In: Proceedings of the First Workshop on Measurement, Privacy, and Mobility, pp. 1-6 (2012)
* [20] Liu, L., Lewis, P., Riedel, S., Stenetorp, P.: Challenges in Generalization in Open Domain Question Answering (2021)
* [21] Zhang, C., Zhao, K., Chen, M.: Beyond the limits of predictability in human mobility prediction: Context-transition predictability. IEEE Transactions on Knowledge and Data Engineering (2022)
* [22] Smolak, K., Sila-Nowicka, K., Delvenne, J.-C., Wierzbinski, M., Rohm, W.: The impact of human mobility data scales and processing on movement predictability. Scientific Reports **11**(1), 1-10 (2021)
* [23] Kulkarni, V., Mahalunkar, A., Garbinato, B., Kelleher, J.D.: Examining the limits of predictability of human mobility. Entropy **21**(4), 432 (2019)
* [24] Hofman, J.M., Sharma, A., Watts, D.J.: Prediction and explanation in social systems. Science **355**(6324), 486-488 (2017)
* [25] do Couto Teixeira, D., Almeida, J.M., Viana, A.C.: On estimating the predictability of human mobility: the role of routine. EPJ Data Science **10**(1), 49 (2021)
* [26] Pappalardo, L., Simini, F., Rinzivillo, S., Pedreschi, D., Giannotti, F., Barabasi, A.-L.: Returners and explorers dichotomy in human mobility. Nature communications **6**(1), 1-8 (2015)
* [27] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning Internal Representations by Error Propagation, pp. 318-362. MIT Press (1986)
* [28] Liu, Q., Wu, S., Wang, L., Tan, T.: Predicting the next location: A recurrent model with spatial and temporal contexts. In: Thirtieth AAAI Conference on Artificial Intelligence (2016)
* [29] Sun, K., Qian, T., Chen, T., Liang, Y., Nguyen, Q.V.H., Yin, H.: Where to go next: Modeling long-and short-term user preferences for point-of-interest recommendation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 214-221 (2020)
* [30] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794-7803 (2018)
* [31] Chang, S., Zhang, Y., Han, W., Yu, M., Guo, X., Tan, W., Cui, X., Witbrock, M., Hasegawa-Johnson, M.A., Huang, T.S.: Dilated recurrent neural networks. Advances in neural information processing systems **30** (2017)
* [32] Feng, J., Li, Y., Zhang, C., Sun, F., Meng, F., Guo, A., Jin, D.: Deep-move: Predicting human mobility with attentional recurrent networks. In: Proceedings of the 2018 World Wide Web Conference, pp. 1459-1468 (2018)
* [33] Luo, Y., Liu, Q., Liu, Z.: Stan: Spatio-temporal attention network for next location recommendation. In: Proceedings of the Web Conference 2021, pp. 2177-2185 (2021)
* [34] Yao, D., Zhang, C., Huang, J., Bi, J.: Serm: A recurrent model for next location prediction in semantic trajectories. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 2411-2414 (2017)
* [35] Kong, D., Wu, F.: Hst-lstm: A hierarchical spatial-temporal long-short term memory network for location prediction. In: IJCAI, pp. 2341-2347 (2018)
* (36) Gao, Q., Zhou, F., Trajcevski, G., Zhang, K., Zhong, T., Zhang, F.: Predicting human mobility via variational attention. In: The World Wide Web Conference, pp. 2750-2756 (2019)
* (37) Yang, D., Fankhauser, B., Rosso, P., Cudre-Mauroux, P.: Location prediction over sparse user mobility traces using rnns: Flashback in hidden states! In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 2184-2190 (2020)
* (38) Kawaguchi, K., Kaelbling, L.P., Bengio, Y.: Generalization in deep learning. arXiv preprint arXiv:1710.05468 (2017)
* (39) Schlapfer, M., Dong, L., O'Keeffe, K., Santi, P., Szell, M., Salat, H., Anklesaria, S., Vazifeh, M., Ratti, C., West, G.B.: The universal visitation law of human mobility. Nature **593**(7860), 522-527 (2021)
* (40) Cho, E., Myers, S.A., Leskovec, J.: Friendship and mobility: user movement in location-based social networks. In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1082-1090 (2011)
* (41) Yang, D., Zhang, D., Zheng, V.W., Yu, Z.: Modeling user activity preference by leveraging user spatial temporal characteristics in lbsns. IEEE Transactions on Systems, Man, and Cybernetics: Systems **45**(1), 129-142 (2014)
* (42) Piorkowski, M., Sarafijanovic-Djukic, N., Grossglauser, M.: CRAWDAD dataset epfl/mobility (v. 2009-02-24). Downloaded from [https://crawdad.org/epfl/mobility/20090224](https://crawdad.org/epfl/mobility/20090224) (2009). [https://doi.org/10.15783/C7J010](https://doi.org/10.15783/C7J010)
* (43) Moreira-Matias, L., Gama, J., Ferreira, M., Mendes-Moreira, J., Damas, L.: Predicting taxi-passenger demand using streaming data. IEEE Transactions on Intelligent Transportation Systems **14**(3), 1393-1402 (2013)
* (44) Wang, J., Jiang, J., Jiang, W., Li, C., Zhao, W.X.: Libcity: An open library for traffic prediction. In: Proceedings of the 29th International Conference on Advances in Geographic Information Systems. SIGSPATIAL '21, pp. 145-148. Association for Computing Machinery, New York, NY, USA (2021). [https://doi.org/10.1145/3474717.3483923](https://doi.org/10.1145/3474717.3483923)
* (45) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* (46) Pappalardo, L., Simini, F., Barlacchi, G., Pellungrini, R.: scikit-mobility: A python library for the analysis, generation and risk assessment of mobility data. arXiv preprint arXiv:1907.07062 (2019)
* (47) Wang, Y., Yao, Q., Kwok, J.T., Ni, L.M.: Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys (CSUR) **53**(3), 1-34 (2020)
arxiv-format/2108_07323v1.md | # Clustering augmented Self-Supervised Learning: An application to Land Cover Mapping
Rahul Ghosh
[email protected]
Xiaowei Jia
[email protected]
University of Pittsburgh
Pittsburgh, PA, USA
Chenxi Lin
[email protected]
Zhenong Jin
[email protected]
Vipin Kumar
[email protected]
University of Minnesota
Minneapolis, MN, USA
2001
## 1. Introduction
Global demand for land resources to support human livelihoods and well-being through food, fiber, energy and living space will continue to grow in response to the population expansion and socioeconomic development. This poses a great challenge to the human society, given the increasing competition for land from the need to maintain other essential ecosystem services. Addressing this challenge will require timely information on land use and land cover changes, e.g., the conversion of forest to farmland or plantations, the loss of productive cropland due to urbanization, and the degradation of soil due to inappropriate management practices.
Recent advances in storing and processing remote sensing data collected by sensors onboard arrerafts or satellites provide tremendous potential for mapping a variety of land covers, including plantations (Han et al., 2017), agricultural facilities (Beng et al., 2017), roads (Zhou et al., 2018), buildings (Kang et al., 2018), and many more (Kang et al., 2018). Accurate mapping of these land covers can provide critical information at desired spatial and temporal scales to assist in decision making for development investment and sustainable resource management.
Given the success of machine learning, especially deep learning, in the domain of computer vision (e.g., image segmentation), researchers have found a lot of promise for using these techniques in automated land cover mapping at large scale through analysis of remote sensing data. Existing works have mostly focused on the supervised learning setup which requires ample labeled data. However, collecting land cover labels is often expensive and requires expert staff, equipment, and in-field measurements and thus can become a major obstacle for training advanced machine learning models.
One common approach to deal with limited availability of labeled datasets is to pre-train an ML model on existing large labels data sets for a related problem, and then refine it using a small number of labeled samples for the problem of interest. For example, models for image recognition are first trained using large-scale datasets like ImageNet (Beng et al., 2017) and then are fine-tuned on the limited-size dataset for the downstream task (Beng et al., 2017). However, such approaches cannot be used for remote sensing due to the difference in the spectral bands captured by different satellites and such large-scale labeled datasets for capturing all the data modalities are either unavailable or these
Figure 1. (a) Examples of high density Cashew plantations, low density Cashew plantations and other trees. We also show the decision boundaries (b) learned by traditional methods and (c) after the clustering structure is informed.
efforts are still in nascent stage, resulting in the need for more research.
Self-supervised learning is an alternative approach that learns feature representation from unlabeled images. Numerous methods have been proposed under this paradigm where the central idea is to propose various pretext tasks for the network to solve, in the hope that the network will learn important feature representations by minimizing the objective function of the pretext task, such as inpainting patches [25] and image colorization [14, 38]. The representation learned by these techniques can be transferred to a classification/segmentation model.
However, existing self-supervised learning methods can be less helpful for remote sensing data since the pretext tasks they create, e.g., colorization [34], do not make full use of all the spectral bands of remote sensing data to capture the land cover heterogeneity. For example, the identification of cashew plantations (Fig. 1 (a)) requires differentiating other trees from all types of cashew plantations with varying density. High-density plantations are easily separable with other trees while low-density plantations are more likely to be confused with other trees. These self-supervised learning methods can learn similar representation between low-density plantations and other trees, which can cause potential confusion amongst classes. This poses a challenge for the segmentation model to learn a decision boundary that can correctly classify all the modes in each class during the fine-tuning process (Fig. 1 (b)). Intuitively, if we can detect these modes by leveraging the information from all the spectral bands and inform the segmentation model of the obtained clustering structure, the segmentation model can easily learn decision boundaries to separate different classes as long as we have a few representative samples from each mode (Fig. 1 (c)).
In this paper, we develop a self-supervised learning framework, Clustering-Augmented Segmentation (CAS), which uses clustering to capture underlying land cover heterogeneity. In particular, our clustering algorithm is inspired by DEC [35], which is a representation learning method for image classification. Although optimizing the clustering at image-patch level improves the classification, it results in the loss of the fine-level details which severely degrades segmentation performance. To address this issue, we build an auto-encoder-based framework which promotes the discriminative representation learning by optimizing the clustering structure over image patches while also preserving the local pixel-wise information for reconstruction. Here the clustering structure helps better represent heterogeneous land covers while the pixel-wise information is essential for improving the segmentation accuracy. We define a loss function that combines the image patch-level clustering loss and the pixel-level reconstruction loss and then iteratively refine the obtained clustering and learning representations. It is noteworthy that our proposed method can also incorporate other clustering methods to capture land cover heterogeneity.
We show the superiority of our method over existing self-supervised learning methods in two societally relevant applications, cashew plantation mapping and crop detection. We have demonstrated the effectiveness of the proposed method in learning both discriminative feature representation and the underlying clustering structure. We also conduct active sampling to show the potential of achieving high mapping accuracy given a limited budget of annotating.
Our contributions can be summarized as follows:
* We develop a self-supervised learning framework that leverages DEC to capture land cover heterogeneity.
* We have demonstrated the effectiveness of the proposed method in learning with small labeled data in the context of two applications of great societal relevance.
* We release the code and dataset used in this work to promote reproducibility 1. Footnote 1: [https://drive.google.com/drive/folders/1Faf7m4e07y30g0CryHdelGaJwm7y9A7usp-sharing](https://drive.google.com/drive/folders/1Faf7m4e07y30g0CryHdelGaJwm7y9A7usp-sharing)
## 2. Related Work
### Land Use and Land Cover mapping
Mapping land use and land cover (LULC) changes is essential for managing natural resources and monitoring the impact of changing climate. Recent works [4] have explored deep learning techniques like feed forward neural networks (FFNN) [40], CNN [7, 30], and LSTM [10] for LULC mapping. CNNs have been shown to be effective in extracting both spectral and spatial information, whereas RNN and LSTM make use of the temporal information in modeling land cover transitions and have shown promising performance in sequence labelling. Land cover mapping can also be framed as a semantic segmentation problem [29, 31, 32], where each pixel in an aerial/satellite image is classified as a land cover class. One of the most widely used models in semantic segmentation is the Fully Convolutional Network (FCN) [17], which supplements the output of the deeper layers with that of the shallower layers to increase the resolution of the prediction. Based on this idea, several modifications to FCN were proposed in recent years, such as SegNet [1], DeconvNet [22] and UNet [28]. In this work, we adopt the UNet architecture, which consists of two paths, a contraction path (encoder) and a symmetric expanding path (decoder). The encoder consists of a stacked set of convolutional and max-pooling layers that captures the context and a semantic understanding of the image. The decoder involves convolutional and upconvolutional layers to generate precise label maps from the output of the encoder.
LULC mapping differs from the standard semantic segmentation in several ways. First, due to the heterogeneity in the land covers, the same class can look different in different areas and thus each class can have multiple modes/subclasses. Many of these land cover classes/subclasses cannot be easily distinguished using only RGB channels but require information from other spectral bands provided in remote sensing datasets. Moreover, existing segmentation methods require large amount of labeled data, which is often scarce in remote sensing. Several methods have been proposed to address this issue via pre-training [21]. Amongst these approaches, self-supervised learning has shown much success in improving the accuracy using limited annotated satellite images [9, 34].
### Representation Learning
Unsupervised learning and self-supervised learning are commonly used to generate feature representation without the need for labour-intensive annotations. Most unsupervised learning methods focus on reconstructing unlabeled data, such as auto-encoders [16, 27, 33] and deep belief networks (DBN) [15]. In the self-supervised setting, the networks learn discriminative representations after training with pseudo labels created from pretext tasks. The representations learned from such pretext tasks can then be transferred to the downstream tasks. Numerous pretext tasks have been explored in previous literature. For example, image colorization [14, 38] aims to predict the accurate color version of a photograph, given its gray-scale version as input. Effectively colorizing an image requires the extraction of visual features to capture the semantic understanding of the objects and therefore, visual features can be learned by accomplishing this task. Several deep-learning approaches have been proposed for deep image colorization models [13, 14, 39, 38]. Recently this technique has been adopted in the RS domain [34], where an auto-encoder is used to predict RGB channels given the input from other channels.
Another direction for pretext tasks, which is commonly used in Natural Language Processing, is representation learning based on context-similarity [19, 26], where the central idea is that words that appear in similar contexts should have similar representations. By redefining context as spatial neighborhoods, Tile2Vec [9] used this idea in the RS domain, where it promotes nearby tiles to have more similar representations than tiles that are far apart. Other popular pretext tasks used in computer vision include image inpainting [25], solving image-jigsaw [23], learning by counting [24], predicting rotations [5], etc. For a comprehensive understanding of self-supervised representation learning, we would like to redirect the reader to this survey [11].
Clustering has also been used for representation learning. In [37], the authors propose a recurrent framework for clustering and optimize a triplet loss for joint representation learning and clustering. DEC [35] starts with an initial feature representation and cluster assignment, and then iteratively refines both based on the confident samples using the Kullback-Leibler (KL) divergence loss. One major drawback of these approaches is their tendency to map arbitrary data samples into the same cluster due to the lack of a criterion that respects the local information in image patches. We introduce a reconstruction loss that helps preserve the local information which is essential for semantic segmentation.
## 3 Problem definition and preliminaries
In this section, we will introduce the available data and our objective. We will also briefly describe the general structure of the segmentation network.
### Problem setting
We consider the task of land cover mapping and frame it as a semantic segmentation problem, with the goal of predicting the land cover class of each pixel using the multi-spectral satellite/aerial imagery. In particular, we aim to predict the land cover class \\(\\mathbf{l}\\in\\{1,\\ldots,L\\}\\) of each pixel in an image. During the training process, we have access to limited labeled data and sufficient unlabeled data, which can be described as follows:
1. Limited labeled dataset with features and ground truth labels given as \\(\\mathbf{X^{l}}=[X^{l}_{1},\\dots,X^{l}_{N_{l}}]\\) where \\(X^{l}_{i}\\in\\mathbb{R}^{H\\times W\\times C}\\) is an aerial/satellite image of size \\((H,W)\\) and having \\(C\\) multi-spectral channels, and \\(\\mathbf{Y^{l}}=[Y^{l}_{1},\\dots,Y^{l}_{N_{l}}]\\) where \\(Y^{l}_{i}\\in\\mathbb{R}^{H\\times W\\times L}\\) and \\(L\\) is the number of land-cover classes.
2. Unlabeled dataset with features given as \\(\\mathbf{X^{u}}=[X^{u}_{1},\\dots,X^{u}_{N_{u}}]\\) where, \\(X^{u}_{1}\\in\\mathbb{R}^{H\\times W\\times C}\\). Due to the relatively high cost involved in labeling, it is more likely that \\(N_{u}>>N_{l}\\).
### Segmentation network
A segmentation network \\(f(X_{i};\\theta)\\) aims to predict the label of each pixel for an image \\(X_{i}\\). The parameter \\(\\theta\\) is estimated through a training process on a fully labeled dataset by minimizing an objective function of empirical risk, such as the pixel-wise cross entropy, as follows:
\\[\\mathcal{L}(\\theta|\\mathbf{X^{l}},\\mathbf{Y^{l}})=-\\frac{1}{NHW}\\sum_{i}\\sum_{ (h,w)}\\sum_{c}(Y_{i})^{c}_{h,w}\\log f(X_{i};\\theta)^{c}_{h,w} \\tag{1}\\]
where, \\(f(X_{i};\\theta)^{c}_{h,w}\\) is the likelihood of the \\((h,w)\\)'th pixel belonging to class \\(c\\) as predicted by the fully-convolutional network and \\((Y_{i})^{c}_{h,w}=1\\) if the \\((h,w)\\)'th pixel of image \\(i\\) belongs to the class \\(c\\).
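As a concrete illustration of Eq. 1, the snippet below computes the pixel-wise cross-entropy for a batch of predictions; the tensor layout and the use of PyTorch are our own assumptions about the implementation.

```python
import torch
import torch.nn as nn

def pixelwise_cross_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pixel-wise cross-entropy of Eq. (1).

    logits: (N, L, H, W) raw class scores produced by f(X; theta)
    labels: (N, H, W) integer land-cover class of each pixel
    The loss is averaged over all N*H*W pixels.
    """
    return nn.CrossEntropyLoss()(logits, labels)
```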
## 4 Method
In this section, we will describe our proposed method CAS. Annotating multi-spectral images is a labour-intensive process, and often the labeled datasets do not capture the heterogeneity of the earth due to differences in atmospheric conditions, geography and the season when the image was captured. As a result, the learned DNN models fail to generalize over the earth's surface. We start by describing the proposed self-supervised learning method CAS using large-scale unlabeled data. We then discuss fine-tuning the pre-trained network using the limited labeled dataset and the applications in few-shot learning and active learning.
In this paper, we use the UNet architecture [28] which consists of an encoder and a decoder, thus, formulating the segmentation function \\(f(X_{i};\\theta)\\) as a composition of two functions as follows:
\\[f(X_{i};\\theta)=g(h(X_{i};\\theta_{h});\\theta_{g}) \\tag{2}\\]
where, \\(h(X_{i};\\theta_{h})\\) is the encoder function with parameters \\(\\theta_{h}\\) which map the input image \\(X_{i}\\) to an embedding space and, \\(g(\\,\\cdot\\,;\\theta_{g})\\) is the decoder functions with parameters \\(\\theta_{g}\\) which maps the embeddings back to the image domain.
### Clustering-Augmented Self-supervised Learning (CAS)
The UNet model trained from scratch using limited labeled samples can easily overfit the training data. Hence, the learned embeddings become less informative which leads to a poor generalizability of the UNet model. We propose to use a clustering-based pretext learning task to help extract meaningful representation that helps address the land cover heterogeneity. In particular, we adapt DEC as the clustering method, which uses the clustering structure obtained at the image-patch level to naturally separate different land cover modes. We also use additional reconstruction loss to preserve fine-level image details and avoid degenerate solutions (e.g., collapsed clusters) resulting from the standard DEC. Both the DEC and the reconstruction objective are optimized during the self-supervised learning (i.e., model pre-training). In the following, we will describe the details of these involved components.
#### 4.1.1. Representation Learning with Clustering
The objective of self-supervised training is to pre-train the segmentation model to extract embeddings that naturally separate image patches with different land cover distributions. In CAS, such representation learning is conducted using large unlabeled dataset in two steps: Phase 1 - model initialization and Phase 2 - representation learning with clustering objective. In the first phase, we use the encoder-decoder from our UNet model and modify it by removing the skip connections and replacing the last classification layer by a reconstruction layer. This modified UNet model is tasked to reconstruct input images. By removing the skip connections, we handicap the use of input information in the reconstruction process, which forces the encoder-decoder model to extract better quality embeddings that fully capture representative features to reconstruct the image without the additional help from the skip connections. In this phase the model is trained by minimizing the following loss function:
\\[\\min\\frac{1}{N_{t}}\\sum_{i=1}^{N_{t}}\\|g(h(X_{i};\\theta_{h});\\theta_{g})-X_{i} \\|_{2}^{2}, \\tag{3}\\]
where \\(X_{i}\\in X^{l}\\cup X^{u}\\) and \\(N_{t}=(N_{l}+N_{u})\\). Given the obtained embeddings, we conduct KMeans clustering in the embedding space by minimizing the following loss function:
\\[\\begin{split}\\min\\frac{1}{N_{t}}\\sum_{i=1}^{N_{t}}\\|h(X_{i};\\theta_{h})-Ms_{i}\\|_{2}^{2}\\\\ s.t.\\quad s_{i}\\in\\{0,1\\}^{K},1^{T}s_{i}=1\\ \\forall i,\\end{split} \\tag{4}\\]
where \\(s_{i}\\) is the assignment vector for the \\(i\\)'th data point, \\(K\\) is the number of clusters, and the \\(k\\)'th column of \\(M\\) is the centroid of the \\(k\\)'th cluster. The pre-trained autoencoder along with the cluster centroids provide a good initialization point for the encoder parameters \\(\\theta_{h}\\) and cluster centroids \\(M\\).
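A minimal sketch of this initialization phase is shown below, assuming PyTorch `encoder`/`decoder` modules (the UNet without skip connections) and scikit-learn's KMeans; flattening the bottleneck feature map into a patch-level embedding and the optimizer settings are our assumptions rather than details reported here.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def pretrain_autoencoder(encoder, decoder, loader, epochs=50, lr=1e-3):
    """Phase 1: minimize the reconstruction loss of Eq. (3) over labeled + unlabeled patches."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in loader:                      # x: (B, C, H, W) image patches
            opt.zero_grad()
            loss = mse(decoder(encoder(x)), x)
            loss.backward()
            opt.step()
    return encoder, decoder

def init_centroids(encoder, loader, n_clusters):
    """Run KMeans (Eq. 4) on patch-level embeddings to initialize the centroid matrix M."""
    with torch.no_grad():
        z = torch.cat([encoder(x).flatten(1) for x in loader]).cpu().numpy()
    centroids = KMeans(n_clusters=n_clusters).fit(z).cluster_centers_
    return torch.tensor(centroids, dtype=torch.float32)
```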
In the second phase, the encoder parameters and the centroids are refined by learning from the high confidence assignments using an Expectation-Maximisation (EM) style algorithm inspired by the previous work [35]. In the E step the cluster assignment and the target assignment are computed while keeping the encoder parameters and cluster centroids fixed. Specifically, we use a soft-assignment based on the similarity of the embedded data point with the cluster centroid, measured using the Student's t-distribution [18]. Specifically, the soft-assignment of data \\(i\\) to cluster \\(j\\) is computed as follows:
\\[q_{ij}=\\frac{(1+\\|h(X_{i};\\theta_{h})-M_{j}\\|^{2}/\\alpha)^{-\\frac{\\alpha+1}{2}}}{\\sum_{j^{\\prime}=1}^{K}(1+\\|h(X_{i};\\theta_{h})-M_{j^{\\prime}}\\|^{2}/\\alpha)^{-\\frac{\\alpha+1}{2}}} \\tag{5}\\]
where \\(h(X_{i};\\theta_{h})\\) is the embedded data point, \\(\\alpha\\) is the degree of freedom which is set as 1 in our experiments, and \\(q_{ij}\\) is the probability of assigning the \\(i\\)'th data point to the \\(j\\)'th cluster.
Figure 2. Illustration of the self-supervised pre-trained architecture (best viewed in color). The components that are specifically present during the Pre-training and Fine-tuning stage are drawn in blue and red respectively, while the common components of these two stages are drawn in black. During the self-supervised pre-training step, the skip connections are removed and the classification layer is replaced by a reconstruction layer. These components, highlighted in red, are added back while fine-tuning using the limited labeled samples.
To strengthen the predictions and to promote learning from data points which are assigned with high confidence, the target assignment is computed as:
\\[p_{ij}=\\frac{q_{ij}^{2}/\\sum_{i}q_{ij}}{\\sum_{j^{\\prime}=1}^{K}(q_{ij^{\\prime}}^{2 }/\\sum_{i}q_{ij^{\\prime}})} \\tag{6}\\]
Once cluster assignment and the target assignment are computed, in the M step we estimate the encoder parameters and the cluster centroids using gradient descent while keeping the cluster and the target assignment fixed. The objective is defined as the KL divergence loss between the soft assignments and the target assignment as follows:
\\[\\min KL(P\\|Q)=\\min\\frac{1}{N_{t}}\\sum_{i=1}^{N_{t}}\\sum_{j=1}^{K}p_{ij}\\log \\frac{p_{ij}}{q_{ij}} \\tag{7}\\]
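For concreteness, the E-step quantities and the KL objective of Eqs. 5-7 could be computed as sketched below; the use of PyTorch and of flattened patch embeddings is an implementation assumption on our part.

```python
import torch

def soft_assignment(z: torch.Tensor, centroids: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Eq. (5): Student's t similarity between embeddings z (N, D) and centroids M (K, D)."""
    dist_sq = torch.cdist(z, centroids) ** 2                     # (N, K) squared distances
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q: torch.Tensor) -> torch.Tensor:
    """Eq. (6): sharpen the soft assignments to emphasize high-confidence points."""
    weight = q ** 2 / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_kl_loss(q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Eq. (7): KL(P || Q) averaged over the data points."""
    return (p * torch.log(p / q)).sum(dim=1).mean()
```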
This clustering-only approach faces a number of issues when used in the semantic segmentation setting. First, there is no provision to avoid degenerate solutions, where the parameters learned for the cluster centroids lead to a trivial solution with the clusters collapsed to a single entity and the representations being zeroed. Second, this approach cannot handle the special scenario where arbitrary data samples are mapped to tight clusters. Finally, since this approach only optimizes the clustering performance, it forces the embeddings of the data points in the same cluster to be very similar, so we start to lose the finer details of the original input images. This is evident from the similar reconstructions obtained from the embedding vectors of two different images of the same class, as shown in Figure 3 (a). This loss of fine-level image details becomes a serious issue in the semantic segmentation problem since we aim to assign a label to each pixel in the image instead of assigning a single label to the entire image as in the image classification setting.
#### 4.1.2 Preserving fine-level details
To enable learning from the confident samples while also preserving the finer details and overcome the issues mentioned in the previous subsection, CAS augments the KL Divergence based clustering loss with the reconstruction loss. Specifically, we add a decoder that reconstructs the data-point using the embeddings while the clustering task is performed at the bottle-neck layer. The encoder parameters, decoder parameters and the cluster centroids are refined according to the objective:
\\[\\mathcal{L}=\\frac{1}{N_{t}}\\sum_{i=1}^{N_{t}}\\left(\\sum_{j=1}^{K}p_{ij}\\log \\frac{p_{ij}}{q_{ij}}+\\lambda\\|g(h(X_{i};\\theta_{h});\\theta_{g})-X_{i}\\|_{2}^ {2}\\right), \\tag{8}\\]
where \\(\\lambda\\) is a hyper-parameter to balance the clustering loss and the reconstruction loss.
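Reusing the helper functions sketched above, one refinement step on the joint objective of Eq. 8 could look as follows; the value of lambda, the optimizer, and the flattening of the bottleneck features are placeholders rather than reported settings.

```python
import torch
import torch.nn.functional as F

def cas_training_step(encoder, decoder, centroids, x, optimizer, lam=0.1, alpha=1.0):
    """One CAS refinement step: patch-level clustering loss + pixel-level reconstruction loss (Eq. 8).

    `centroids` is an nn.Parameter of shape (K, D) registered with the optimizer,
    so the encoder/decoder weights and the centroids are refined jointly.
    """
    optimizer.zero_grad()
    z_map = encoder(x)                                  # bottleneck feature map
    recon = decoder(z_map)                              # pixel-wise reconstruction
    z = z_map.flatten(1)                                # patch-level embedding
    q = soft_assignment(z, centroids, alpha)            # Eq. (5)
    p = target_distribution(q).detach()                 # Eq. (6), held fixed in the M step
    loss = clustering_kl_loss(q, p) + lam * F.mse_loss(recon, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```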
The proposed modifications provide a number of benefits. First, the reconstruction loss prevents the model from collapsing to a degenerate solution by ensuring that the decoder can reconstruct the data point from its embedding. Second, since the decoder has to reconstruct the images from the embeddings, it prevents the embeddings from losing the fine-level details, which helps the segmentation. Finally, the trained decoder serves as a good initialization for the decoder of the segmentation network.
### Downstream applications
After obtaining the pre-trained model through self-supervised learning, we describe two downstream applications where we use labeled data to fine-tune the model.
#### 4.2.1 Few-shots segmentation
After training the encoder-decoder model, we feed the learned weight parameters to the U-Net segmentation model with skip connections (see Fig. 2). This model can be fine-tuned using pixel-wise labels by minimizing the cross-entropy loss using labeled data (see Eq. 1).
#### 4.2.2 Active learning
The clustering structure extracted by the proposed method also enables us to actively select query image patches so as to reduce the manual effort in data labeling. The objective is to select a small number of query image patches to be labeled so that the performance of the segmentation model is optimized after it is trained with these labeled patches. In particular, we uniformly select image patches from different clusters that are closest to the cluster centroids. Since the clustering structure automatically divides the whole data space into \\(K\\) disjoint sets of data points, uniformly selected patches are representative samples that cover different types of data in the entire data space.
Furthermore, we can extend this approach to handle the scenario where the budget (i.e., the number of query samples) is not divisible by the number of clusters. In this case, we aim to take more samples from clusters of higher uncertainty. Intuitively, each cluster contains images with similar data distribution and thus the labels predicted by a well-trained segmentation model should be similar for all the images within a cluster. Specifically, we first predict pixel-wise labels for all the images and then estimate the majority class for each image. We measure the uncertainty of each cluster \\(k\\) as the entropy of these obtained majority classes.
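A possible implementation of this uncertainty-aware budget allocation is sketched below; the function and variable names are ours, while the uniform split and the entropy-based tie-breaking follow the description above.

```python
import numpy as np

def cluster_entropies(pred_majority_class, cluster_ids, n_clusters):
    """Entropy of the predicted majority classes within each cluster (higher = more uncertain)."""
    entropies = np.zeros(n_clusters)
    for k in range(n_clusters):
        labels = pred_majority_class[cluster_ids == k]
        if len(labels) == 0:
            continue
        _, counts = np.unique(labels, return_counts=True)
        probs = counts / counts.sum()
        entropies[k] = -(probs * np.log(probs + 1e-12)).sum()
    return entropies

def allocate_query_budget(entropies, budget, n_clusters):
    """Spread the labeling budget uniformly; give the remainder to the most uncertain clusters."""
    per_cluster = np.full(n_clusters, budget // n_clusters)
    extra = budget - per_cluster.sum()
    per_cluster[np.argsort(-entropies)[:extra]] += 1
    return per_cluster
```

Within each cluster, the allocated number of patches is then drawn from those closest to the centroid, as described above.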
## 5 Experimental results
We evaluate our proposed strategy for semantic segmentation on two real-world applications of great societal impact. In the first example, we aim to map cashew plantations in Benin, which contribute nearly 10% of the country's export income. The Benin government is actively looking for inventory information of cashew to assist the distribution of their recent $100 million loan from the World Bank, aiming at further developing the cashew industry. In the second example, we investigate crop mapping in the US Midwest, the world's bread basket. Mapping crops is a key step towards many applications, such as forecasting yield, guiding sustainable management practices and evaluating progress in conservation efforts.
Figure 3: The reconstructed images for the same class using the embeddings learned by (a) DEC and (b) CAS.
### Datasets
* **Cashew Plantation Mapping** We use the multi-spectral images captured by AIRBUS in 2018 to study an area in Africa. The images have 4 spectral bands namely red, green, blue and NIR (near infrared) at a spatial resolution of 0.5 metres. For our experiment, we divide our study region into patches of size \\(68\\times 68\\) and each pixel within this patch is assigned a class label \\(l\\in\\{\\) Cashew, Forest, Urban, Background \\(\\}\\). The ground truth was created using manual annotation over the entire study region provided by our collaborators in Benin, Africa 2. Footnote 2: Given the proprietary nature of the Planet Lab composite and the Airbus imagery, we do not have permission to make this data publicly available.
* **Crop Mapping** We used publicly available multi-spectral images observed by the Sentinel-2 Constellation. The Sentinel-2 data product has 13 spectral bands 3 at three different spatial resolutions of 10, 20 and 60 metres. For consistency, bands with 20 and 60 metres resolution are resampled to 10 metres using the nearest neighbour method. For our experiment, we consider the region of southwestern Minnesota, US, where we aim to assign each pixel a class label \\(l\\in\\{\\) Corn, Soybean, Sugarbeets, Water, Urban \\(\\}\\). Our data were acquired on August 8, 2019. The labels are obtained from the USDA Crop Data Layer product [2].
Footnote 3: [https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SRtbands](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SRtbands)
### Baselines
We use the UNet architecture as the base model for semantic segmentation and compare our representation learning strategy against the following baselines. Here all the representation learning methods are trained on the entire training set (labeled + unlabeled data).
1. **OnlyLabeled** This method considers training a UNet from scratch, only using the labeled dataset.
2. **AutoEncoder** We pre-train the UNet model by transforming it into an autoencoder structure by removing the skip connections and conduct reconstruction in the final layer (described in Section 4).
3. **Tile2Vec** We adopt this method [9] to learn representations by leveraging spatial contextual similarities. To prevent the model from collapsing and providing a degenerate solution, we initialize the model using the AutoEncoder baseline. The model is optimized using a triplet loss among the anchor, neighbor and distant patches.
4. **Colorization**[34] The segmentation model has two independent branches which takes in the spectral bands and the RGB channels, respectively. The first branch is pre-trained using the colorization task and the second branch is pre-trained on ImageNet [3]. As proposed by the authors, both of the branches are fine-tuned separately on the limited labeled samples and we average their predictions as final outputs.
5. **DEC** We adopt the method presented in [35] to learn representations that optimizes a clustering-based loss. This optimisation is performed at the image patch-level and thus disregards the fine-level image details.
### Few-Shot Learning
Here we evaluate the methods in the few-shot learning setting, where we progressively increase the number of labeled samples for training. The average performance (mean F1 score) and standard deviation over 5 runs for all the algorithms are reported in Table 1. The model trained from scratch using only labeled instances (_OnlyLabeled_) performs the worst. _AutoEncoder_ takes advantage of the larger unlabeled dataset in learning the representations and thus shows an increase in performance over _OnlyLabeled_. The representations learned by _OnlyLabeled_ and _AutoEncoder_ do not capture discriminative information of land covers and thus they do not perform as well as _DEC_. The next baselines, _Tile2Vec_ and _Colorization_, make use of alternative ways of representation learning on the unlabeled data as described in Section 5.2. Each of these provides limited improvement over AutoEncoder.
\\begin{table}
\\begin{tabular}{|c||c|c|c|c|c|c||c|c|c|c|c|c|} \\hline & \\multicolumn{6}{c||}{**D1: Cashew Plantation Mapping**} & \\multicolumn{6}{c|}{**D2: Crop Mapping**} \\\\ \\hline
**Method** & 10 & 20 & 40 & 120 & 160 & 200 & 10 & 20 & 50 & 100 & 150 & 200 \\\\ \\hline \\hline OnlyLabeled & 0.402 & 0.572 & 0.609 & 0.704 & 0.712 & 0.724 & 0.426 & 0.634 & 0.700 & 0.788 & 0.809 & 0.837 \\\\ & (0.098) & (0.059) & (0.050) & (0.021) & (0.018) & (0.017) & (0.121) & (0.073) & (0.047) & (0.016) & (0.015) & (0.014) \\\\ \\hline AutoEncoder & 0.481 & 0.629 & 0.663 & 0.717 & 0.737 & 0.743 & 0.508 & 0.666 & 0.722 & 0.798 & 0.814 & 0.839 \\\\ & (0.098) & (0.053) & (0.035) & (0.026) & (0.018) & (0.016) & (0.139) & (0.054) & (0.051) & (0.016) & (0.013) & (0.007) \\\\ \\hline Tile2Vec & 0.507 & 0.632 & 0.686 & 0.739 & 0.740 & 0.745 & 0.566 & 0.688 & 0.757 & 0.800 & 0.825 & 0.841 \\\\ & (0.048) & (0.021) & (0.024) & (0.008) & (0.008) & (0.008) & (0.057) & (0.026) & (0.026) & (0.017) & (0.014) & (0.004) \\\\ \\hline Colorization & 0.609 & 0.660 & 0.710 & 0.756 & 0.762 & 0.776 & 0.543 & 0.678 & 0.729 & 0.789 & 0.823 & 0.837 \\\\ & (0.044) & (0.037) & (0.013) & (0.008) & (0.004) & (0.005) & (0.046) & (0.039) & (0.014) & (0.011) & (0.007) \\\\ \\hline DEC & 0.628 & 0.688 & 0.709 & 0.747 & 0.751 & 0.756 & 0.600 & 0.723 & 0.763 & 0.814 & 0.837 & 0.843 \\\\ & (0.024) & (0.016) & (0.016) & (0.008) & (0.008) & (0.007) & (0.043) & (0.023) & (0.019) & (0.008) & (0.007) & (0.007) \\\\ \\hline CAS(ours) & **0.674** & **0.721** & **0.736** & **0.767** & **0.774** & **0.783** & **0.656** & **0.759** & **0.792** & **0.831** & **0.845** & **0.847** \\\\ & (0.030) & (0.020) & (0.008) & (0.007) & (0.008) & (0.002) & (0.058) & (0.024) & (0.010) & (0.007) & (0.004) & (0.002) \\\\ \\hline All Data & \\multicolumn{4}{c||}{0.795 (1500 patches)} & \\multicolumn{4}{c|}{0.87 (700 patches)} \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Comparison with baselines in terms of Mean F1 Score (and standard deviation) with increasing number of samples. The last row (All Data) shows the performance of using all the available data for supervised training (without pre-training).
_Tile2Vec_ relies on the assumption that nearby spatial tiles are similar and far-away tiles are different, which can sometimes be inaccurate. _Colorization_ learns representations by learning to colorize the images, which can be ineffective in distinguishing regions where the color is not distinctive. Next, we see that our adaptation of _DEC_ (which captures information about different types of land covers via clustering) is able to do nearly as well or better (especially for a small number of samples) than schemes such as _Colorization_ that explicitly preserve fine-level details. Finally, our proposed scheme _CAS_ outperforms all these baselines.
In Fig. 4, we show the mapping results of different methods in several example regions from D1. The segmentation results shown are obtained from the models trained using 40 labeled samples. We can see that the detection results produced by CAS are more consistent with the ground truth and the satellite images. In contrast, other self-supervised learning methods (DEC and Colorization) often cannot precisely delineate land cover boundaries. This is because the plantations that are close to the boundary commonly have lower density and thus are more likely to be confused with other land covers.
#### 5.3.1 Effect of more labeled training samples
Due to the limited number of labeled samples in the downstream task, the performance of the models trained from scratch depends on the representativeness of that small subset of data points. The limited data samples do not capture the whole data domain, and thus the representations learned from them are not robust. Self-supervised learning aims to decouple the representation learning phase from the classification phase: _CAS_ leverages the unlabeled data to learn the representations and then learns the classification rules using the limited labeled dataset. With the increase in the number of labeled instances, the representations learned from them become increasingly more robust. This results in a reduction of the gain obtained by using the unlabeled data for representation learning. This is evident from the results shown in Table 1, where we increase the number of labeled patches for both datasets. We observe that the accuracy of all methods increases with the number of labeled patches.
### Clustering-based Evaluation of Representations
Here we evaluate the quality of the representations produced by different approaches using the quality of the clustering produced with them. Specifically, we measure the clustering performance using aggregated labels of image patches. For each image patch, we define the aggregated label as the majority label over all the pixels of this image patch. Intuitively, we expect image patches within a cluster to have the same aggregated labels. Hence, we estimate the clustering performance using the weighted entropy of aggregated labels. Specifically, given a clustering structure, we first compute the entropy of aggregated labels for each cluster. Then we compute the weighted average of the entropy values over all the clusters based on their cluster sizes. A lower value of the average entropy indicates better clustering performance.
Figure 4: Examples of land cover mapping made by different methods. The first column shows the reference RGB images and the second column shows the manually-created ground-truth data.
Figure 5: Average entropy of the clusters obtained by different methods on Dataset (a) D1 and (b) D2.
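As a sketch, the weighted entropy described above could be computed as follows; the variable names and the use of NumPy are our own assumptions.

```python
import numpy as np

def weighted_cluster_entropy(aggregated_labels, cluster_ids, n_clusters):
    """Size-weighted average entropy of patch-level aggregated labels within each cluster.

    aggregated_labels: (N,) majority land-cover class of each image patch
    cluster_ids:       (N,) cluster assignment of each patch
    Lower values indicate purer clusters.
    """
    total, weighted_sum = len(cluster_ids), 0.0
    for k in range(n_clusters):
        labels = aggregated_labels[cluster_ids == k]
        if len(labels) == 0:
            continue
        _, counts = np.unique(labels, return_counts=True)
        probs = counts / counts.sum()
        entropy = -(probs * np.log(probs + 1e-12)).sum()
        weighted_sum += (len(labels) / total) * entropy
    return weighted_sum
```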
We compare the clusters extracted by the baselines with our proposed method (Fig. 5). In _AutoEncoder_ and _Colorization_, KMeans clustering is conducted on the obtained embeddings. It can be seen that the proposed method significantly outperforms _Autoencoder_ and _Colorization_ in both datasets. _DEC_ and our proposed method _CAS_ achieve very pure clusters even using no more than five clusters. Besides, our method achieves similar performance with _DEC_ even though we simultaneously optimize the clustering performance and the reconstruction error. Although, _DEC_ achieves good clusters, it is plagued with the issues highlighted in Fig. 3, which hampers its segmentation performance.
Examples of the clusters formed by the methods are shown in Fig. 6. We observe that the clusters formed by _CAS_ capture the intra-class heterogeneity and are pure, while the clusters formed by the other methods highlight several issues which we motivated in the introduction. As shown in Fig. 6, one of the clusters formed by _AutoEncoder_ contains a mixture of high, medium and low density plantations, which points towards intra-class confusion. The images of the cluster formed by _Colorization_ are covered by other trees, low-density cashew, and a mixture of other trees and cashew, respectively. This highlights the inter-class confusion due to plantations being confused with other trees.
### Using Clusters for Active Sampling
Here we show the effectiveness of the active learning strategy. In particular, we use the obtained clusters to query patches rather than randomly sampling patches for labeling. Fig. 7 shows the segmentation performance when we label different amounts of samples either using our active learning approach or using random sampling. We also show the performance of random sampling both for the _CAS_ model and the best-performing baseline in each dataset (_Colorization_ in D1 and _DEC_ in D2).
The active learning method leads to better segmentation performance, especially when only a small number of samples is labeled. This demonstrates the effectiveness of using the clustering structure obtained from CAS to select the most representative samples given a limited labeling budget. When a sufficient number of samples is labeled (\(>\)200), all methods achieve similar performance.
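As a simple illustration of how the clustering structure can drive labeling queries, the sketch below spreads a fixed labeling budget across the clusters; the equal-per-cluster allocation is an assumption made for illustration only, whereas the actual strategy queries representative samples from the CAS clusters.

```r
# Sketch of cluster-guided sampling for labeling: instead of drawing patches at random,
# the annotation budget is spread across the clusters obtained from CAS.
# 'clusters' is an assumed vector of cluster assignments for the unlabeled patches.
active_sample <- function(clusters, budget) {
  per_cluster <- ceiling(budget / length(unique(clusters)))  # assumed equal allocation
  idx <- split(seq_along(clusters), clusters)
  unlist(lapply(idx, function(i) i[sample.int(length(i), min(per_cluster, length(i)))]))
}

# Usage with an assumed 'clusters' vector: indices of patches to send for annotation
# to_label <- active_sample(clusters, budget = 50)
```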
## 6. Conclusion
In this paper we propose the use of clustering-based self-supervised learning to pre-train the model for few-shot segmentation. This method is able to preserve fine-level details while also extracting a clustering structure that naturally separates heterogeneous land cover modes. The obtained clustering structure can also be used in an active learning setting. We conduct experiments on two real-world land-cover mapping datasets to show the benefits brought by using the abundant unlabeled data. Further, we compare our method with other forms of self-supervised learning adopted in the remote sensing domain, namely Colorization and Tile2Vec, to show the effectiveness of our proposed strategy.
Given the effectiveness of our proposed method in mapping heterogeneous land covers using limited labels, our framework has the potential for creating large-scale (e.g., global) land-cover maps using satellite imagery and a small amount of manually created labels. Moreover, our proposed framework can be applied generally to a variety of spatial datasets (e.g., traffic and crime data) which exhibit strong heterogeneity.
Although our proposed method has produced improved accuracy in land cover mapping, it remains limited in discovering temporal patterns from multi-temporal satellite data, which are often available in public satellite archives. Another important direction is to combine the clustering pretext task with pretext tasks that are defined to reflect land-cover distinctions based on domain knowledge.
Figure 6. The first three columns are separate clusters formed by CAS, which clearly show clusters of high, medium, and low density. The last two columns each show one of the clusters formed by AutoEncoder and Colorization, respectively.
Figure 7. Our method is compared with the next-best method when using active learning on Dataset (a) D1 and (b) D2. CAS_CLUSTER denotes active sampling from the clusters obtained from CAS.
## 7. Acknowledgements
This work was funded by the NSF awards 1838159 and 1739191 and National Aeronautics and Space Administration (NASA) Land Cover Land Use Change program, grant number 80NSSC20K1485. Rahul Ghosh is supported by UMII MNDrive Graduate Fellowship. Access to computing facilities was provided by the Minnesota Supercomputing Institute.
arxiv-format/2111_08478v1.md | # Spatial machine-learning model diagnostics:
A model-agnostic distance-based approach
Alexander Brenning
Friedrich Schiller University Jena, Department of Geography and Michael Stifel Center Jena for Data-Driven and Simulation Science (MSCJ), Jena, Germany
## 1 Introduction
Machine-learning (ML) and hybrid geostatistical-ML models such as regression-kriging have become increasingly popular in spatial prediction modeling (for example, Hengl et al., 2015; Sekulic et al., 2020). While parametric geostatistical techniques such as kriging provide estimates of predictive uncertainty that are backed by statistical theory, most ML models do not, and therefore computational estimation procedures are needed, which must be adapted to the spatial context (Brenning, 2012; Le Rest et al., 2014; Roberts et al., 2017). Similarly, the interpretation of complex black-box ML models can be challenging, and explicitly spatial perspectives are still limited. Overall, we can therefore detect a remarkable lack of model-agnostic computational diagnostics for an explicitly spatial model assessment and interpretation. The present contribution aims to enrich this field by offering novel tools and perspectives.
In _spatial model assessment_, (non-spatial) leave-one-out cross-validation (LOO-CV) and cross-validation in general have long been used in the context of geostatistical regionalization (Isaaks and Srivastava, 1989; Goovaerts, 2000; Webster and Oliver, 2007; Fouedjio and Klump, 2020), but they do not provide a spatially differentiated summary. Their results can furthermore be misleading when the spatial distribution of observations is uneven (Isaaks and Srivastava, 1989), as, for example, weather stations tend to be clustered in densely populated regions. Spatial adaptations proposed so far have partitioned the study region into spatially disjoint training and validation areas (Ruß and Brenning, 2010; Brenning, 2012; Bahn and McGill, 2013). Some studies have proposed to establish a distance buffer between training and test sets, or a spatial block size, based on the autocorrelation distance or some variant of it (Brenning, 2005; Le Rest et al., 2014; Roberts et al., 2017; Valavi et al., 2019), an _ad hoc_ practice that needs to be critically discussed. Considering these recent developments, there is a clear need and opportunity to establish model-agnostic model assessment tools that detect if and how the predictive performance of different models deteriorates as the prediction distance increases.
In _spatial model interpretation_, permutation-based variable importance (PVI) is a popular tool for interpreting ML models. Given its limitations when predictors are strongly dependent (Hooker and Mentch, 2019; Molnar, 2019), modifications have been proposed to interpret predictor effects in transformed space (Brenning, 2021), or to examine conditional importance measures (Strobl et al., 2008). A spatial adaptation of PVI has been proposed to measure how much a predictor contributes to the model's skill in different locations, such as adjacent regions (Ruß and Brenning, 2010b). Nevertheless, this spatial diagnostic does not yet show how each variable's contribution deteriorates or increases with spatial distance, depending on a model's structure and capabilities. To better elucidate such effects, this contribution proposes to extend the spatial PVI to a continuous distance scale in order to construct spatial variable importance profiles (SVIPs) as a novel model-agnostic interpretation tool for ML models.
This paper is organized around the two proposed ideas and two case studies. Specifically, the following section introduces the fundamental concepts of the proposed spatial model assessment and model interpretation tools. Two case studies are then introduced, representing a regionalization (or spatial regression) problem from environmental science, and a classification task from remote sensing. The results are finally discussed regarding their utility, their relationships to other resampling-based as well as theoretically derived performance estimates, and their broader implications.
## 2 Proposed method
### Spatial leave-one-out for model assessment
In spatial prediction of categorical response variables (i.e., classification) and quantitative response variables (i.e., regression or regionalization), we use a model \\(\\widehat{M}\\) to predict unobserved response values based on observed values of \\(p\\) predictor variables or features, \\(\\mathbf{f}=(f^{(1)},\\ldots,f^{(p)})^{T}\\). The model is trained on a training sample \\(L\\) comprising \\(n\\) observations of response and predictors, and its performance is estimated on a test sample or with a cross-validation (CV) or bootstrap estimator (Efron and Gong, 1983), including spatial resampling approaches (Brenning, 2012) and leave-one-out cross-validation (LOO-CV).
In LOO-CV, one observation \(\mathbf{o}_{i}:=(y_{i},\mathbf{f}_{i})\in L\) at a time is removed from the dataset in order to use it for error estimation, while the remaining data serves as the training sample \(L_{-i}:=L\setminus\{\mathbf{o}_{i}\}\) for training a model \(\widehat{M}_{i}\). Each of these \(n\) models is fitted for the sole purpose of predicting \(y_{i}\). This prediction is denoted by \(\hat{y}_{i}^{(-i)}\) to emphasize the removal of observation \(i\) from the training sample. The LOO error is calculated by comparing these \(n\) predictions to the observed responses \(y_{1},\ldots,y_{n}\) by means of some error (or accuracy) measure such as the root-mean-square error (RMSE) or the misclassification error rate. This LOO estimate is referred to as \(\widehat{err}_{L}^{loo}(M)\).
LOO estimation has been used in the comparison of regionalization models since it is implicitly spatial due to the spatial separation of training and test locations (for example, Goovaerts, 2000). Nevertheless, this estimator exerts no direct control on the separation distance, and the mean nearest-neighbour distance may be substantially smaller than the mean prediction distance, especially when observations are spatially clustered (Isaaks and Srivastava, 1989).
Several authors have proposed to enforce spatial exclusion buffers around the LOO test locations, making reference to the concepts of spatial autocorrelation and statistical independence (Brenning, 2005; Le Rest et al., 2014; Roberts et al., 2017; Pohjankukka et al., 2017; Veronesi and Schillaci, 2019); the theoretical shortcomings of such requirements are discussed later in Section 5.3.
From a practical perspective, however, it is critical to know how well a model is able to predict the response at relevant prediction distances that occur in the application of the model. The range of relevant distances may depend on the size of the gaps between point observations, or the maximum distance of regions to which the model is to be transferred.
For this purpose, spatial LOO with a buffer, or simply _spatial LOO_ in this study, is formally defined as follows, without imposing _a priori_ limitations on the separation distance, or prediction horizon.
In spatial LOO, the training sample for the \\(i\\)th iteration is defined as
\\[L_{-D(i,r)}:=L\\setminus D(i,r),\\]
where \\(D(i,r)\\) is the subset of \\(L\\) located within a spatial distance \\(\\leq r\\) from the spatial location \\(\\mathbf{x}_{i}\\) of observation \\(\\mathbf{o}_{i}\\). The actual spatial separation distance
\\[d_{i,r}:=\\min\\{|\\mathbf{x}_{i}-\\mathbf{x}_{j}|:\\mathbf{o}_{j}\\in L_{-D(i,r)}\\}\\]
may be (usually only slightly) greater than the specified \\(r\\), depending on the spatial distribution of observations. Therefore, the \\(d_{i,r}\\) (and not \\(r\\) itself) are recorded as the prediction distances, along with the predictions \\(\\hat{y}_{i}^{(-D(i,r))}\\). The recorded values are denoted by \\(\\hat{y}_{(k)}\\), \\(y_{(k)}\\) and \\(d_{(k)}\\), and the spatial LOO error, calculated from values recorded at an approximate distance of \\(r\\), is referred to as
\\[\\widehat{err}_{L}^{(r)}(M).\\]
Note that for \\(r=0\\), this spatial LOO becomes equivalent to conventional LOO if (and only if) all observations are at unique locations. In order to embed LOO within the spatial LOO framework, it is therefore convenient to define \\(D(i,r):=\\{\\mathbf{o}_{i}\\}\\) for \\(r<0\\).
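The core of this resampling scheme can be illustrated with the following base-R sketch; the data frame, its column names, and the model-fitting and prediction helpers are assumed placeholders for illustration, not the reference implementation.

```r
# A minimal sketch of spatial LOO with an exclusion buffer of radius r.
# 'd' is an assumed data frame with coordinates 'x', 'y' and response 'y_obs';
# 'fit_fun(train)' returns a fitted model and 'pred_fun(fit, newdata)' a prediction.
spatial_loo <- function(d, r, fit_fun, pred_fun) {
  out <- data.frame(obs = d$y_obs, pred = NA_real_, dist = NA_real_)
  for (i in seq_len(nrow(d))) {
    di <- sqrt((d$x - d$x[i])^2 + (d$y - d$y[i])^2)  # distances to the test location
    keep <- di > r                                   # drop the buffer set D(i, r)
    if (sum(keep) < 10) next                         # skip if too few training points remain
    out$dist[i] <- min(di[keep])                     # actual separation distance d_{i,r}
    fit <- fit_fun(d[keep, , drop = FALSE])
    out$pred[i] <- pred_fun(fit, d[i, , drop = FALSE])
  }
  out
}

# Illustrative use with a simple linear trend model (assumed data frame 'meuse_df'):
# res <- spatial_loo(meuse_df, r = 250,
#                    fit_fun  = function(tr) lm(y_obs ~ sqrt.dist + elev + x + y, data = tr),
#                    pred_fun = function(fit, nd) predict(fit, newdata = nd))
```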
### Spatial prediction error profiles (SPEPs)
In order to visualize the average relationship between prediction error and distance, a spatial LOO error \\(\\widehat{err}_{L}^{(r)}(M)\\) needs to be estimated as a function of prediction distance. For this purpose, the recorded distances \\(d_{(k)}\\) are binned. Within each of these bins, denoted by an index set \\(B=\\{k_{1},\\ldots,k_{c}\\}\\), the performance \\(\\widehat{err}_{L}^{(B)}(M)\\) is calculated from all corresponding \\(\\hat{y}_{(k)}\\) and \\(y_{(k)}\\) values, \\(k\\in B\\). The lag distance \\(\\hat{d}_{B}\\) assigned to this estimate is the median of the recorded distances, \\(\\mathrm{median}\\{d_{(k)}:k\\in B\\}\\).
The resulting series of \\((\\hat{d},\\widehat{err})\\) values is referred to as a _spatial prediction error profile_ (_SPEP_). It allows us to address the following key questions of spatial prediction modeling:
* How does model performance deteriorate with increasing distance from the training data? In other words, how well does a model fill local gaps, and how well does it extrapolate?
* How do models differ in their ability to predict at small and greater distances?
It shall be noted that training sample size may decrease substantially as the separation distance \(r\) increases, which in itself may result in a drop in model performance and in some cases a biased or poorly representative distribution of the remaining data. Also, individual observations can be used multiple times in the calculation of \(\widehat{err}_{L}^{(B)}(M)\) for a given distance bin since they may be located at a very similar distance from multiple other observation locations. As a result, the estimation of confidence intervals for \(\widehat{err}_{L}^{(r)}(M)\) cannot be addressed with standard parametric techniques. Similar issues are known from the estimation of confidence intervals for empirical semivariograms, where resampling procedures have therefore been proposed (Clark and Allingham, 2011; Olea and Pardo-Iguzquiza, 2011), which could be adapted to SPEPs.
In this study, \\(r\\) was chosen randomly within a desired range of separation distances, and \\(N=5000\\) and \\(N=25000\\) repetitions were used to obtain sufficient data for the estimation of profile functions in the Meuse and Maipo case studies, respectively. A substantially smaller number may be sufficient and will be optimized in the future. Binning was based on quadratically increasing breakpoints in order to show more detail at shorter distances. The resulting error estimates were slightly smoothed with a weighted moving average.
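Turning the recorded (distance, observed, predicted) triples into an error profile can be sketched as follows; the input is assumed to follow the output format of the spatial-LOO sketch above, and the breakpoints are illustrative.

```r
# Minimal sketch: bin the recorded prediction distances and compute the RMSE per bin.
# 'res' is assumed to collect obs, pred and dist over many spatial-LOO repetitions
# with randomly chosen buffer radii.
spep <- function(res, breaks) {
  res <- res[!is.na(res$pred), ]
  bin <- cut(res$dist, breaks = breaks, include.lowest = TRUE)
  data.frame(
    dist = tapply(res$dist, bin, median),                                  # lag distance per bin
    rmse = tapply((res$obs - res$pred)^2, bin, function(e) sqrt(mean(e)))  # binned RMSE
  )
}

# Quadratically increasing breakpoints emphasize short prediction distances:
breaks <- seq(0, sqrt(1500), length.out = 12)^2
```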
### Spatial variable importance profiles (SVIPs)
Model-agnostic tools that aid in the interpretation of ML models are a key topic in explainable artificial intelligence research, and permutation-based techniques are a simple and popular approach to this end (Molnar, 2019). Permutation-based variable importance (PVI) is defined as the decrease in model accuracy (or increase in error) obtained when making predictions \(\hat{y}_{(k)}\) using permuted or 'scrambled' feature values. Specifically, a model is first fitted using the undisturbed training data; its performance is measured on test data. Then, a feature's data is replaced with a randomly permuted series of data values, and the model's performance \(\widehat{err}_{L}^{(r)}(M)\) is measured a second time using this data, \(\{(\hat{y}_{(k)},y_{(k)},d_{(k)}):k\in B\}\). This is repeated multiple times for each predictor, and the mean decrease in predictive accuracy is measured for each variable (Molnar, 2019). This algorithm can be embedded in resampling-based model assessment procedures such as the bootstrap (Breiman, 2001) and CV, including spatial CV (Ruß and Brenning, 2010). It can also be applied to trained models that cannot be retrained, although with the disadvantage, in the present context, that the separation distance cannot be controlled.
One decision to make in permuting a predictor variable is where to take the candidate values from: (1) the test sample, (2) the training sample, or (3) the entire dataset. In LOO estimation, the test sample contains only one observation, which is why the entire dataset is chosen as the source of resampled values.
Similar to the estimation of SPEPs, the SVI at a specific separation distance is obtained by first binning the distances, and then estimating the prediction errors within these bins from the undisturbed and from the permuted data, respectively. The SVIP is obtained from these performance differences.
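The permutation step at the heart of this procedure can be sketched as follows; the helper names and the number of permutations are assumptions made for illustration.

```r
# Minimal sketch of the permutation step used for SVIPs: within one spatial-LOO
# iteration, the held-out location is re-predicted with the value of one predictor
# replaced by values resampled from the entire dataset.
permuted_predictions <- function(fit, pred_fun, test_row, d, var, n_perm = 10) {
  sapply(seq_len(n_perm), function(j) {
    disturbed <- test_row
    disturbed[[var]] <- sample(d[[var]], 1)  # draw a replacement value from all data
    pred_fun(fit, disturbed)
  })
}
# The mean increase in error relative to the undisturbed prediction, binned by the
# recorded separation distance, yields the spatial variable importance profile.
```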
Spatial LOO estimation with varying separation distances creates an opportunity to assess the contribution of each predictor in a spatially differentiated manner. Specifically, _spatial variable importance profiles_ (_SVIPs_) target the following questions:
* Which predictors contribute the most to a model's spatial prediction skill at a given prediction distance?
* Which predictors continue to be informative at greater distances, when extrapolating from the study region into uncharted terrain?
* How do models differ in their ability to exploit information related to predictors and/or location?
### Beyond geographic space
Distance concepts, and prediction models that try to overcome distance, are not only relevant in geographic space, but also in time, in space-time, in three-dimensional physical space, in phylogenetic trees, and in feature space, to name only a few such settings that are relevant to the spatial and environmental sciences. These emerging fields of distance-based CV estimation have been reviewed at depth by Roberts et al. (2017), and therefore only selected aspects are highlighted here as they relate to a possible adoption of distance-based prediction error and variable importance profiles in these applications.
In space-time, it is important to recognize the conceptual and also mathematical differences between distances in geographic space and in time (Cressie and Wikle, 2011). As a consequence, resampling schemes must be designed with a specific prediction objective in mind, for example hindcasting, forecasting, or regionalization. While recently proposed space-time resampling schemes focused on hindcasting and regionalization, or a combination of both (Meyer et al., 2018), forecasting has received particular attention in time-series research, where the consideration of an appropriate forecasting horizon has been identified as a critical issue in assessing model performance (Bergmeir and Benitez, 2012). The relationship between predictability and lead time has received much attention in climatology (Palmer and Hagedorn, 2006). Similar to the spatial tasks studied here, it is therefore of critical importance to choose suitable spatial, temporal, or spatio-temporal distance metrics and prediction horizons in the assessment of spatio-temporal ML models instead of estimating performances based on 'target' locations or times that are dictated by the sample itself, as in LOO-CV. In this context, the distinction between hindcasting and forecasting is of particular importance since the effects of events or external stimuli propagate into the future, not into the past.
In three-dimensional geological space, the vertical dimension often represents, to some extent, time (for example, in stratigraphic sequences), and in the atmospheric sciences, air masses are stratified mainly vertically and not horizontally. The concept of prediction distance may therefore depend on the specific application setting. In the case of geometric anisotropy, this can be accounted for by means of a linear coordinate transformation that defines a common distance metric across all three dimensions (Cressie and Wikle, 2011).
Distance-based profiles can further be applied in feature space using an appropriate distance metric in individual or multiple predictors (environmental blocking, Roberts et al., 2017). Given the dependencies among predictor variables, the Mahalanobis distance appears to be a reasonable choice in situations involving quantitative (real-valued) predictors.
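For quantitative predictors, a feature-space analogue of the separation distance could be computed as sketched below; the feature matrix and reference vector are assumed inputs.

```r
# Minimal sketch: Mahalanobis distances of all observations in an assumed numeric
# feature matrix 'X' from a reference feature vector 'x0', accounting for feature
# covariance; observations with a distance <= r would be excluded from the training
# sample, in analogy to the spatial buffer.
d_feat <- sqrt(mahalanobis(X, center = x0, cov = cov(X)))
```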
Distance-based profile functions of prediction error and variable importance, as proposed here, may be a useful tool in many of these situations. Meaningful concepts of prediction distance can be defined based on, for example, the forecasting horizon, the vertical or three-dimensional physical distance, or graph-theoretical distances in a phylogenetic tree (Roberts et al., 2017), in a stream network (Skøien et al., 2006), or in a social network. An extension of the proposed prediction error and variable importance profiles to these distance metrics is straightforward and will not be further explored in this study.
### Implementation
The proposed methods were implemented in the open-source data analysis environment R using the _sperrorest_ package, which provides a flexible framework for resampling-based model assessment (Brenning, 2012). The code is available in a GitHub repository under an open-source licence ([https://github.com/alexanderbrenning/spdiag](https://github.com/alexanderbrenning/spdiag)) and will be integrated into _sperrorest_ to provide additional user-level functions for the estimation and visualization of SPEPs and SVIPs.
## 3 Case Study 1: regionalization using ML and geostatistics
The first case study is a well-known dataset on topsoil heavy-metal concentration on a floodplain of the Meuse river in the Netherlands as included in the sp package in R (Pebesma and Bivand, 2005). It is widely used to introduce geostatistical interpolation techniques and demonstrate the combination of kriging with regression.
The combined use of spatial predictor variables, often derived from digital terrain models or remotely-sensed data, and spatial autocorrelation to spatially predict (or 'regionalize') a quantitative response variable is a common task in environmental science. This case study is a typical use case for kriging with external drift (Cressie, 1993) as well as for models that only exploit the information available from the predictor variables.
This study explores spatial prediction skill and variable importances of a selection of regionalization techniques that is intended to cover a broad spectrum from pure interpolation to spatial and non-spatial ML.
### Case study description: the Meuse dataset
The Meuse dataset contains 155 observations of (logarithmic) topsoil zinc concentration (_logZn_ in log-ppm) as the response variable, and several possible predictor variables. Zinc concentrations in this study area are related to the amount of contaminated sediment deposited on the floodplain, and therefore to predictors such as elevation (_elev_) and distance to river. This study uses these predictor variables, applying a square-root transformation to distance (_sqrt.dist_), in addition to UTM \\(x\\) and \\(y\\) as predictors that represent possible spatial trends. A linear model with these four predictors explains 72.6% of the variance of _logZn_, and has a residual autocorrelation range of 926 m with a nugget-to-sill ratio of 0.27. For comparison, _logZn_ itself has a range of 897 m with a nugget-to-sill ratio of 0.27, making it also very suitable for ordinary kriging interpolation without any trend predictors.
The floodplain is approximately 4 km \\(\\times\\) 1 km in size. The average nearest-neighbour distance of sampling locations is 112 m (minimum: 44 m). If the goal is to make spatial predictions on the floodplain itself, which is the usual use case for this dataset in the literature, it should be noted that the average prediction distance is 96 m (1st, 3rd quartiles: 53 and 120 m; see Appendix A). Mean nearest-neighbour distance and mean prediction distance are similar in this case study due to the relatively uniform, nearly random distribution of sampling locations on the floodplain.
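Summary distances of this kind can be obtained along the lines of the following sketch; the coordinate matrix is an assumed input, and the actual values depend on the meuse data as distributed with the sp package.

```r
# Minimal sketch: nearest-neighbour distances among sampling locations, used to judge
# which separation distances are well supported by the data. 'xy' is an assumed
# two-column matrix of coordinates (in m).
D <- as.matrix(dist(xy))
diag(D) <- Inf
nn_dist <- apply(D, 1, min)
c(mean = mean(nn_dist), min = min(nn_dist), median = median(nn_dist))
```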
### Regionalization techniques and their assessment
In this case study, spatial diagnostics of the following contrasting spatial prediction methods were compared:
1. Nearest-neighbour interpolation (NN) was chosen as a simple deterministic baseline method.
2. Ordinary kriging (OK) was included as a basic geostatistical technique without predictors of trend (Cressie, 1993).
3. Kriging with external drift (KED, or universal kriging), is a geostatistical technique that incorporates all four variables as linear predictors (Cressie, 1993).
4. Multiple linear regression (MLR) using the same four predictors was included as it (also) models a linear trend, but it does not exploit spatial dependence.
5. Geographically weighted regression (GWR) was selected as a locally linear model with spatially varying coefficients (Fotheringham et al., 2002).
6. Random forest (RF) was chosen as it is a popular nonlinear ML technique that is agnostic of the spatial setting (Breiman, 2001).
7. A combined OK-RF model was furthermore designed as an experimental hybrid geostatistical-ML technique that fades linearly from a pure OK interpolation at 0 m prediction distance to a pure RF at \\(\\geq 500\\) m distance.
Model parameter values and implementation details are reported in Appendix B. The chosen algorithm settings were not optimized since the present study is not a benchmarking exercise. The models were rather selected to illustrate the spatial behaviour of various model types. In particular, OK-RF was built for pedagogical reasons, as the proposed SPEPs and SVIPs should be able to highlight the contrasting short- and long-distance behaviours of this model. Figure 1 shows the prediction maps obtained with OK, KED, OK-RF, and RF.
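The linear fade of the experimental OK-RF hybrid can be expressed compactly as sketched below; the prediction vectors are assumed to come from separately fitted OK and RF models, and the use of the distance to the nearest training observation is an assumption consistent with the description above.

```r
# Minimal sketch of the OK-RF hybrid: fade linearly from pure OK at 0 m to pure RF
# at >= 500 m prediction distance. 'pred_ok' and 'pred_rf' are assumed predictions at
# the target locations, 'dist' the distance to the nearest training observation.
ok_rf_predict <- function(pred_ok, pred_rf, dist, fade = 500) {
  w <- pmin(dist / fade, 1)          # RF weight, increasing linearly with distance
  (1 - w) * pred_ok + w * pred_rf
}
```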
In order to contextualize SPEPs in the broader context of spatial model assessment, various other error estimators were also calculated:
1. Resubstitution error, estimated on the training sample, which is inherently overoptimistic;
2. LOO-CV at the level of sample locations, ignoring spatial autocorrelation;
3. Non-spatial, random 10-fold CV at the level of point observations, with the same limitation;
4. Spatial 10-fold CV using 10-means clustering to partition the study region (Ruß and Brenning, 2010a). Both types of 10-fold CV were repeated 50 times.
### Spatial prediction error profiles
In the Meuse case study, the SPEPs revealed a strong dependence of performance on prediction distance for all methods, with some surprising similarities between (geo-)statistical and ML techniques (Figure 2).
Overall, interpolation techniques that do not incorporate predictor variables (NN, OK), had higher prediction errors especially at greater prediction distances. In general terms, OK's increase in RMSE with distance is consistent with the skill expected based on the target variable's semivariogram (square root of nugget effect: 0.10, of total sill: 0.35), which provides a rough indication of the kriging prediction error at near-zero and at long prediction distances (greater than the autocorrelation range).
At short prediction distances (up to about 300 m), KED, GWR, and RF displayed relatively similar error profiles, closely followed by MLR. This is unexpected considering their completely disparate approaches to spatial prediction. Considering the distribution of prediction distances on the floodplain from the sampling locations, KED had an edge over RF, which was followed by GWR.
At greater distances, GWR and RF (and OK-RF, which is identical to RF at distances \\(\\geq 500\\) m) showed substantially higher prediction errors, indicating a poorer spatial transferability compared to the simpler linear models underlying KED and MLR.
The weakest, but still noticeable, dependence of performance on distance was found in MLR. The relatively large resubstitution error of MLR also underlines that its limited flexibility restricts its ability to fit --or to overfit-- local patterns or anomalies. Note that the drop in RMSE towards small separation distances is not in contradiction to this. If a left-out observation represents, for instance, a positive anomaly, then nearby observations will also tend to have above-average values. This will pull the prediction surface --for example, through a change in MLR's intercept-- towards larger predicted values, which will thus reduce the RMSE at short distances.

Figure 1: Spatial prediction maps of \(logZn\) in the Meuse case study using four selected geostatistical, ML, and hybrid models.
This indirect effect of spatial autocorrelation on non-spatial MLR and RF models can therefore explain distance-dependent variation in the predictive performances of non-spatial models. It was more pronounced in RF than in MLR since RF was better able to (over-)fit to the training sample --a behaviour that appears to be beneficial in short-distance regionalization in the presence of strong spatial autocorrelation.
Figure 2 also highlights limitations of the proposed approach. At distances shorter than the minimum nearest-neighbour distance (44 m), it will inevitably have a blind spot. But even at slightly greater separation distances around the median nearest-neighbour distance (107 m), only limited and possibly geographically biased data may be available for estimating spatial prediction skill. In this study, below-average nearest-neighbour distances occur throughout the study area, with the exception of the southeastern fringe.
Considering the longer prediction distances, in this case study, robust training sample sizes were available for all displayed distances. The average training sample size (out of \\(n=155\\)) dropped below 140 for separation distances >415 m, and below 100 only for >1000 m distance in spatial LOO resampling.
### Comparison to other performance estimators
As expected, spatial CV estimates of model performance showed a consistently larger prediction error than non-spatial CV estimates. This can be attributed to the greater mean separation distance between test and training locations of 298 m for spatial CV versus 116 and 112 m for non-spatial CV and LOO-CV, respectively. Considering the relatively uniform sample distribution, it is perhaps surprising that the mean prediction distance throughout the entire floodplain (i.e., at unsampled locations), 96 m, was even smaller than the mean prediction distances of the CV estimators. These differences in mean separation distances, which are usually not reported along with CV estimates, underline the need to assess model performance at specific distances more explicitly, and in a more targeted manner.
When comparing the various models, spatial and non-spatial CV estimators alike placed RF among the top 1-2 models, along with KED. However, due to their implicit focus on specific separation distances, these estimators failed to detect the consistently larger RMSE of RF at distances <100 m (+122% compared to KED) and >500 m (about +110% compared to KED and MLR). Although the practical relevance of the increase at large versus short separation distances will depend on the application scenario at hand, overall only the SPEPs revealed the outperformance of RF by KED in this case study.
Figure 2: Spatial prediction error profiles for the prediction of logZn in the Meuse case study. The x axes are square-root transformed. LOO-CV and other point estimators are placed close to their mean prediction distance. Left: All models; right: detailed view of KED, MLR, RF, and OKβRF. Models: NN (brown), OK (grey), KED (black), MLR (blue), GWR (light blue), RF (dark green), OKβRF (light green).
### Spatial variable importance profiles
Interpolation, regression, and hybrid models displayed a clear difference in the relative importance of _sqrt.dist_ and _elev_ compared to the _x/y_ coordinates. Specifically, MLR, KED and GWR showed, on average, very similar, distance-insensitive importance profiles for _sqrt.dist_ and _elev_. OK predictions were more dependent on location for short prediction distances than at greater ones, where spatial autocorrelation weakens and OK predictions approach the overall sample mean (Cressie, 1993). The NN method, in contrast, mathematically does not exhibit this averaging behaviour, and consequently the empirically estimated importance of \(x\) and \(y\) remained independent of distance. OK's and NN's inability to account for spatial trends is documented by flat SVIP lines at zero, which explains the poor predictive skill at long distances.
SVIPs of RF are furthermore instructive as they help to explain its distance-dependent prediction error. RF's importance of _sqrt.dist_ and _elev_ increased continuously towards shorter prediction distances, which is quite remarkable considering the non-spatial design of standard RF. As discussed in the previous section, it appears that this is evidence of RF's excellent ability to learn, and perhaps memorize, small variations in regression relationships, which are only valid locally near the training locations. This explains how RF implicitly benefits from spatial autocorrelation, even without explicitly exploiting prediction distance information.
OK-RF, in contrast, was deliberately designed to mainly rely on spatial autocorrelation at short distances via the use of OK, and to fade into a RF model up to a distance of 500 m. The SVIPs reflect this model structure very clearly as the importance of _sqrt.dist_ and _elev_ ramps up from 0 to 500 m prediction distance, where they become identical to the SVIP curves of RF. Similarly, OK-RF's SVIPs of \(x\) and \(y\) show a decreasing trend. They do not reach exactly zero; this is a consequence of the permutation method randomly assigning permuted \(x\) and \(y\) values that may correspond to short prediction distances, which in turn switches the OK-RF model from RF to OK mode.
Figure 3: Spatial variable importance profiles in the prediction of \\(logZn\\) in the Meuse case study. Models: NN (brown), OK (grey), KED (black), MLR (blue), GWR (light blue), RF (dark green), OKβRF (light green).
### Comparison to other importance estimators
Again, due to the relatively short mean prediction distances of established spatial and non-spatial resampling techniques, only the proposed SVIPs were able to make relevant distance-related differences in variable importance visible. In the case of OK-RF in particular, the conventional variable importance estimates were surprisingly inconsistent with the SVIP-based assessment, which proved to be the most reliable diagnostic for detecting the (in this case, known) spatial structure of this experimentally designed hybrid model.
## 4 Case study 2: spatial classification
Crop classification using multispectral satellite image time series is a broad and important ML task in environmental remote sensing. Knowledge of SPEPs is important in order to assess the potential of classifiers to be applied in adjacent study regions. This involves a large number of correlated predictors representing vegetation phenology, which are difficult to analyze separately but can be projected into a lower-dimensional transformed space for better visualization (Brenning, 2021).
### Case study description: the Maipo dataset
The dataset used is a well-documented case study consisting of 400 fields (7713 grid cells in total) with 4 different fruit-tree crops in central Chile (Pena and Brenning, 2015). To simulate use cases with typical learning sample sizes, data from 100 fields (25 from each crop type) was sampled repeatedly, and results were averaged. The feature set comprises 48 features representing visible and near- to shortwave-infrared spectral reflectances from Landsat images taken at 8 time points during one growing season, and 16 derived spectral indices. Specifically, the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI) were included. Refer to Pena and Brenning (2015) for details.
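The derived indices are simple band ratios; the sketch below assumes per-pixel reflectance vectors for the red, near-infrared, and shortwave-infrared Landsat bands, and the NIR-SWIR form of the NDWI is assumed here.

```r
# Minimal sketch of the derived spectral indices from assumed reflectance vectors.
ndvi <- (nir - red)  / (nir + red)    # normalized difference vegetation index
ndwi <- (nir - swir) / (nir + swir)   # normalized difference water index (NIR-SWIR form assumed)
```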
These features are strongly correlated with each other, especially for subsequent time points (since fruit-tree characteristics do not change dramatically within a few weeks) and for physiologically or mathematically related features. Correlation is particularly strong among image dates 1 and 2 (early-season features), within image date 3 (mid-season), and among dates 4-8 (late season).
### Classifiers and their assessment
In this case study, spatial diagnostics of three contrasting spatial and non-spatial classifiers were compared:
1. Random forest (RF) was chosen since it is a popular nonlinear technique that is widely used in remote sensing (Breiman, 2001; Pal, 2005).
2. Linear discriminant analysis (LDA) was included as it is a simple but robust classification technique.
3. A combination of (spatial) nearest-neighbour classification (at prediction distances \\(\\leq 100\\) m) with LDA (at greater distances) is included as a simple, illustrative approach that uses only spatial proximity or only remotely-sensed features, depending on target distance (NN-LDA).
Only the third, illustrative technique is designed to explicitly account for spatial dependence in the data. Details are given in the Appendix.
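The illustrative NN-LDA rule can be sketched as follows; the column names, the 100 m radius, and the vector of LDA class predictions are assumptions consistent with the description above.

```r
# Minimal sketch of NN-LDA: within 100 m of a training location, return the class of
# the nearest training observation; otherwise fall back to the LDA prediction.
# 'train' and 'test' are assumed data frames with coordinates 'x', 'y'; 'train$class'
# holds the crop labels and 'lda_pred' the LDA class predictions for 'test'.
nn_lda_predict <- function(train, test, lda_pred, radius = 100) {
  out <- as.character(lda_pred)
  for (i in seq_len(nrow(test))) {
    di <- sqrt((train$x - test$x[i])^2 + (train$y - test$y[i])^2)
    if (min(di) <= radius) out[i] <- as.character(train$class[which.min(di)])
  }
  factor(out)
}
```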
SPEPs and SVIPs were calculated for separation distances ranging from 30 m (i.e., grid resolution) to over 10 km (diameter of study area: about 40 km) using the misclassification rate as the error measure.
Similar to the Meuse case study, model performances and variable importances were furthermore estimated using other resampling-based techniques for comparison. In addition to LOO-CV, non-spatial CV and \(k\)-means-based spatial CV (see section 3.2), a second type of spatial CV was used in which fields (i.e. groups of grid cells) are resampled (field-level CV, the method used by Pena and Brenning, 2015).
Given the high dimensionality of the feature space and the strong correlations among features, the approach proposed by Brenning (2021) was adopted to estimate variable importances from a transformed perspective. Specifically, principal-component (PC) transformations were applied to feature subspaces spanned by early-, mid- and late-season predictors, respectively. For convenience, only the SVIPs of the first PCs are presented. SVI assessment in transformed PC space bypasses the problem that permutation techniques should not be applied to strongly dependent features (Hooker and Mentch, 2019; Brenning, 2021).
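A transformed-space permutation of one feature group can be sketched as follows; the feature matrix and column indices are assumed inputs, and only the first PC is permuted, in line with the profiles reported below.

```r
# Minimal sketch: permute the first principal component of one feature group and map
# the scores back to the original feature space before re-predicting with the
# trained classifier. 'X' is the assumed n x p feature matrix, 'cols' the columns of
# one seasonal feature group.
p <- prcomp(X[, cols])                 # centred PCA of the feature group
S <- p$x                               # PC scores
S[, 1] <- sample(S[, 1])               # permute the first PC only
X_perm <- X
X_perm[, cols] <- sweep(S %*% t(p$rotation), 2, p$center, "+")  # back-transform
# The drop in accuracy on X_perm, binned by separation distance, gives the SVIP of
# the first PC of this feature group.
```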
In the interpretation of the following results it is important to remember that predictions at less than 100 m distance and up to about 500 m distance may occur within fields that are already included in the training sample. This could be relevant in gap-filling applications, but not in the usual setting of land cover classification in which 'new' fields in the same or an adjacent region are to be classified. Note that in this case study, the training sample size in the LOO procedure does not decrease with increasing separation distance since the same number of fields is always sampled from the large pool of data in the remaining area.
### Spatial prediction error profiles
The SPEPs of LDA and NN-LDA clearly highlight the capability of the proposed approach to detect spatial differences in prediction skill (Figure 4). By construction, NN-LDA will flawlessly classify crops up to 100 m distance whenever the target location lies within a field from the training sample. At greater distances, NN-LDA is indistinguishable from LDA by design. This behaviour was effectively detected with the help of the SPEP.
RF interestingly also showed a decrease in error rate towards the shortest prediction distances. This can be attributed to the overfitting of RF to the training sample, which proves to be an advantage over LDA in within-field classification. Nevertheless, RF did not achieve the same skill in within-field classification as NN-LDA, which leverages expert knowledge on the spatial structure of the classification task. At distances beyond the field scale, LDA (and NN-LDA) outperformed RF, as detected previously with field-level CV (Pena and Brenning, 2015).
### Comparison to other performance estimators
The various CV estimators also showed important differences. Overall, error rates estimated at short prediction distances, and non-spatial random CV in particular, grossly underestimated the regional-scale prediction error. This holds true especially in the case of RF (due to overfitting) and NN-LDA (due to its spatial design). Beyond the field scale, i.e. at distances greater than about 500 m, error rates increased only slightly, which can be attributed to largely homogeneous agricultural and environmental conditions within this study region.
Considering the use case of classifying crops in the entire study region, prediction distances obtained with field-level CV resampling were most similar to regional-scale prediction distances (mean prediction distance of 871 m in field-level CV versus 831 m overall; see Appendix A for histograms). \\(k\\)-means-based spatial-CV resampling represented, on average, much larger prediction distances (6664 m). These were consistent with spatial-LOO prediction errors at similar distances, but they come at a substantially lower computational cost than spatial LOO estimation, which requires fitting a new model for each LOO prediction. Nevertheless, the results underline the importance of reporting mean prediction distances along with resampling-based performance measures, comparing them to the prediction distances of the application setting.
Figure 4: Spatial prediction error profiles for the classification of crop type in the Maipo case study (RF: dotted line, triangle; LDA: solid line, bullet; NNβLDA: dashed, empty circle). Point estimates from different CV types are plotted at their mean prediction distances. Results of LOO-CV and non-spatial random CV are visually indistinguishable.
### Spatial variable importance profiles
SVIPs were effectively able to detect the striking difference in model structure between LDA and NN-LDA (Figure 5). They clearly indicated that at short distances, NN-LDA did not make use of the available predictors. LDA and RF both showed increases in SVI at short distances for some groups of variables (e.g., first PC of late-season predictors, _Late1_), which may be indicative of overfitting. The generally higher SVIs in RF than in LDA (despite the poorer overall error rate) are attributed to a stronger concentration of RF on predictors associated with the _Early1_, _Late1_ and _Late2_ PCs, which only represent a fraction of the overall variance of the 64 available predictors.
## 5 Discussion
### Distance-based spatial model assessment and interpretation
In the model-agnostic spatial model assessment, SPEPs demonstrated their ability to highlight strengths and weaknesses of different models in predicting the response locally, and in transferring the modeled relationships to more distant regions. In combination with knowledge of the intended distribution of prediction distances, they allow modelers to make better-informed choices regarding model design and selection. In spatial model assessments, we need to shift the focus from the question 'At what distance does the test data become independent?' to 'At what distances would I like to predict the response?'
With regards to the case studies, KED was superior at very short distances due to its mathematical optimality as a best linear unbiased predictor, but RF achieved very similar performances even at relatively short prediction distances despite its non-geostatistical nature. In classification, SPEPs identified very sharply the contrasting behaviour of LDA and an experimentally designed hybrid NN-LDA that incorporates nearest-neighbour interpolation at short distances.
Figure 5: Spatial variable importance profiles in crop classification in the Maipo case study for the first PC of early- and mid-season features and the first two PCs of late-season features. Models: RF (dotted line), LDA (solid line), NNβLDA (dashed).
Again, RF showed a remarkable drop in prediction error towards short distances, which was interpreted as a positive side-effect of overfitting.
Given the ability of SPEPs to measure and visualize model performance seamlessly across scales, they bridge the scale gaps between resubstitution error, non-spatial model assessments, and different types of spatial assessment (e.g., field-level or region-based). This reminds us that predictive performance strongly depends on the objective of the prediction, e.g. local gap filling, or regional generalization and model transfer. SPEPs have the potential to offer a more differentiated perspective on model performance than simple non-spatial assessments of models such as kriging, spatial regression, and random forests, which may exhibit contrasting spatial behaviour by design. This has not been acknowledged sufficiently in previous benchmarking studies that (1) either ignore the scale of spatial prediction (Goovaerts, 2000; Fox et al., 2020), (2) postulate that spatial dependence needs to be taken care of based on the range of residual autocorrelation (Brenning, 2005; Roberts et al., 2017; Valavi et al., 2019), or (3) focus on other fixed scales of spatial prediction (e.g., Pena and Brenning, 2015; Goetz et al., 2015). After all, the spatial scale of model assessments should not be dictated by some poorly defined (residual?) spatial autocorrelation criterion, but by the purpose of the prediction.
In the context of model interpretation, SVIPs proved to be capable of identifying spatially aware model structures as present in the combined NN-LDA and OK-RF techniques examined in classification and regionalization, respectively. Despite the well-known limitations of permutation-based variable importance measures, this novel approach offers a simple and intuitive approach to spatially differentiated model-agnostic interpretation of ML models. It may also serve as a template for spatially nuancing other diagnostic tools for explainable ML, and for developing similar approaches in the spatio-temporal domain or in feature space.
### Computational versus theoretically motivated measures of spatial model performance
Theoretically derived measures of uncertainty such as kriging variances or prediction intervals of linear regression models provide a reliable uncertainty assessment when their model assumptions are satisfied. In the regionalization case study, computational SPEPs were consistent with theoretical expectations where available (OK, KED). Geostatistically based performance measures have a low computational cost, but it may be difficult to judge the effects of the possible violation of model assumptions. As a matter of fact, OK's stationarity assumption is violated when a trend or external drift is present, as in the Meuse case study, and therefore kriging variances output by OK and KED cannot be compared directly. Computational tools as presented in this study, in contrast, are model-agnostic and therefore allow us to compare different algorithms independently of their underlying assumptions and paradigms, and regardless of whether or not we believe they are satisfied. They offer a data-driven second opinion on prediction performance even where model-based variance estimates are available.
Nevertheless, we should acknowledge that SPEPs that only depend on distance, as presented here, only provide a simplified, stationary and isotropic perspective on predictive model behaviour, as only distance, and not location or orientation, is taken into account. Unlike the semivariogram in geostatistics, which can be estimated for specific directions by filtering suitable pairs of points, the directional approach cannot be transferred to the estimation of SPEPs and SVIPs.
### The role of autocorrelation and independence in spatial model assessment
It has previously been proposed to choose the buffer distance based on the range of residual autocorrelation (Brenning, 2005; Le Rest et al., 2014; Valavi et al., 2019). Nevertheless, this starts from the intuition that test samples must be independent, often without providing a precise definition (e.g., Pohjankukka et al., 2017; Valavi et al., 2019). What makes things worse is that the range of autocorrelation of the residuals is inevitably model-dependent, and in the case of overfitted models, the residuals provide a biased estimate of model error and consequently of the autocorrelation range.
From a practical perspective, independent test data is not even a desirable property in predictive tasks such as interpolation or regionalization that precisely build upon and require spatial dependence (Cressie, 1993). Not only geostatistical models but also hybrid ML interpolation techniques increasingly exploit this dependence (this study and Sekulic et al., 2020). In these situations, spatial prediction uncertainty will inevitably depend on prediction distance or horizon, which we must therefore incorporate in our model assessment and interpretation, as proposed in this study.
## 6 Conclusions
The proposed distance-based spatial model assessment and interpretation tools enrich the toolkit available for explaining the decisions of ML models in the spatial domain. They produce intuitively interpretable visualizations of the spatial transferability of modeled relationships. Results obtained in two environmental-science and remote-sensing case studies were encouraging and identified important differences as well as similarities among various statistical, geostatistical, ML, and hybrid models. SPEPs and SVIPs were effectively able to identify key differences, in particular whether a black-box model exploits proximity information or entirely relies on predictor-response relationships.
Compared with increasingly popular resampling-based spatial model assessments with a fixed spatial block size, the continuous-distance-based approach offers substantially more detail and relates performance directly to prediction distance. In this context, the practice of using the range of residual spatial autocorrelation as a minimum separation distance should be abandoned as it lacks a coherent theoretical justification and is not derived from the spatial prediction task at hand, which may specifically exploit spatial dependence. The distance-dependence of performance further implies that mean prediction distances of prediction tasks and of performance estimators such as CV should be comparable. They should routinely be reported in spatial prediction modeling.
It is suggested that the wider use of spatially aware model-assessment and interpretation tools has the potential to improve the practice of spatial prediction modeling in fields ranging from remote sensing to ecology and the environmental sciences. Data-driven diagnostics provide a valuable, assumption-free second opinion on model performance even in situations where theory-based prediction variances are available. These tools furthermore generate opportunities for designing improved classification and regionalization models that focus on a clearly defined spatial prediction horizon.
## References
* Bahn and McGill (2013) Bahn, V. and McGill, B. J. (2013). Testing the predictive performance of distribution models, _Oikos_**122**: 321-331.
* Bergmeir and Benitez (2012) Bergmeir, C. and Benitez, J. M. (2012). On the use of cross-validation for time series predictor evaluation, _Information Sciences_**191**: 192-213.
* Bivand and Yu (2020) Bivand, R. and Yu, D. (2020). _spgwr: Geographically Weighted Regression_. R package version 0.6-34. URL: [https://CRAN.R-project.org/package=spgwr](https://CRAN.R-project.org/package=spgwr)
* Breiman (2001) Breiman, L. (2001). Random forests, _Machine Learning_**45**: 5-32.
* Brenning (2005) Brenning, A. (2005). Spatial prediction models for landslide hazards: Review, comparison and evaluation, _Natural Hazards and Earth System Sciences_**5**(6): 853-862.
* Brenning (2012) Brenning, A. (2012). Spatial cross-validation and bootstrap for the assessment of prediction rules in remote sensing: The R package sperrorest, _2012 IEEE International Geoscience and Remote Sensing Symposium_, pp. 5372-5375.
* Brenning (2021) Brenning, A. (2021). Transforming feature space to interpret machine learning models, arXiv:2104.04295.
* Clark and Allingham (2011) Clark, R. G. and Allingham, S. (2011). Robust resampling confidence intervals for empirical variograms, _Mathematical Geosciences_**43**: 243-259.
* Cressie (1993) Cressie, N. A. C. (1993). _Statistics for Spatial Data_, John Wiley & Sons.
* Cressie and Wikle (2011) Cressie, N. A. C. and Wikle, C. K. (2011). _Statistics for Spatio-Temporal Data_, John Wiley & Sons.
* Efron and Gong (1983) Efron, B. and Gong, G. (1983). A leisurely look at the bootstrap, the jackknife, and cross-validation, _The American Statistician_**37**(1): 36-48.
* Fotheringham et al. (2002) Fotheringham, A. S., Brunsdon, C. and Charlton, M. E. (2002). _Geographically weighted regression_, Wiley, Chichester.
* Fouedjio and Klump (2020) Fouedjio, F. and Klump, J. (2020). Exploring prediction uncertainty of spatial data in geostatistical and machine learning approaches, _Environmental Earth Sciences_**78**: 1-24.
* Fox et al. (2020) Fox, E. W., Ver Hoef, J. M. and Olsen, A. R. (2020). Comparing spatial regression to random forests for large environmental data sets, _PLoS ONE_**15**(3): e0229509.
* Goetz et al. (2015) Goetz, J. N., Brenning, A., Petschko, H. and Leopold, P. (2015). Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling, _Computers & Geosciences_**81**: 1-11.
* Goovaerts (2000) Goovaerts, P. (2000). Geostatistical approaches for incorporating elevation into the spatial interpolation of rainfall, _Journal of Hydrology_**228**: 113-129.
* Graler et al. (2016) Graler, B., Pebesma, E. and Heuvelink, G. (2016). Spatio-temporal interpolation using gstat, _The R Journal_ **8**: 204-218.
* Hengl et al. (2015) Hengl, T., Heuvelink, G. B. M., Kempen, B., Leenaars, J. G. B., Walsh, M. G., Shepherd, K. D., Sila, A., MacMillan, R. A., Mendes de Jesus, J., Tamee, L. and Tondoh, J. E. (2015). Mapping soil properties of africa at 250 m resolution: Random forests significantly improve current predictions, _PLoS ONE_**10**: e0125814.
* Hooker and Mentch (2019) Hooker, G. and Mentch, L. (2019). Please stop permuting features: an explanation and alternatives, arXiv:1905.03151.
* Isaaks and Srivastava (1989) Isaaks, E. H. and Srivastava, R. M. (1989). _Applied Geostatistics_, Oxford University Press, New York.
* Le Rest et al. (2014) Le Rest, K., Pinaud, D., Monestiez, P., Chadoeuf, J. and Bretagnolle, V. (2014). Spatial leave-one-out cross-validation for variable selection in the presence of spatial autocorrelation, _Global Ecology and Biogeography_**23**(7): 811-820.
* Liaw and Wiener (2002) Liaw, A. and Wiener, M. (2002). Classification and regression by randomforest, _R News_**2**(3): 18-22.
* Meyer et al. (2018) Meyer, H., Reudenbach, C., Hengl, T., Katurji, M. and Nauss, T. (2018). Improving performance of spatio-temporal machine learning models using forward feature selection and target-oriented validation, _Environmental Modelling & Software_**101**: 1-9.
* Molnar (2019) Molnar, C. (2019). _Interpretable machine learning_. [https://christophm.github.io/interpretable-ml-book/](https://christophm.github.io/interpretable-ml-book/).
* Olea and Pardo-Iguzquiza (2011) Olea, R. and Pardo-Iguzquiza, E. (2011). Generalized bootstrap method for assessment of uncertainty in semivariogram inference, _Mathematical Geosciences_**43**: 203-228.
* Pal (2005) Pal, M. (2005). Random forest classifier for remote sensing classification, _International Journal of Remote Sensing_**26**(1): 217-222.
* Palmer and Hagedorn (2006) Palmer, T. and Hagedorn, R. (2006). _Predictability of Weather and Climate_, Cambridge University Press.
* Pena and Brenning (2015) Pena, M. A. and Brenning, A. (2015). Assessing fruit-tree crop classification from Landsat-8 time series for the Maipo valley, Chile, _Remote Sensing of Environment_**171**: 234-244.
* Pebesma and Bivand (2005) Pebesma, E. J. and Bivand, R. S. (2005). Classes and methods for spatial data in R, _R News_**5**(2): 9-13.
* Pohjankukka et al. (2017) Pohjankukka, J., Pahikkala, T., Nevalainen, P. and Heikkonen, J. (2017). Estimating the prediction performance of spatial models via spatial k-fold cross validation, _International Journal of Geographical Information Science_**31**(10): 2001-2019.
* Roberts et al. (2017) Roberts, D. R., Bahn, V., Ciuti, S., Boyce, M. S., Elith, J., Guillera-Arroita, G., Hauenstein, S., Lahoz-Monfort, J. J., Schroder, B., Thuiller, W., Warton, D. I., Wintle, B. A., Hartig, F. and Dormann, C. F. (2017). Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure, _Ecography_**40**(8): 913-929.
* Russ and Brenning (2010a) Russ, G. and Brenning, A. (2010a). Data mining in precision agriculture: Management of spatial information, _in_ E. Hullermeier, R. Kruse and F. Hoffmann (eds), _Computational Intelligence for Knowledge-Based Systems Design_, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pp. 350-359.
* Russ and Brenning (2010b) Russ, G. and Brenning, A. (2010b). Spatial variable importance assessment for yield prediction in precision agriculture, _in_ P. R. Cohen, N. M. Adams and M. R. Berthold (eds), _Advances in Intelligent Data Analysis IX_, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 184-195.
* Sekulic et al. (2020) Sekulic, A., Kilibarda, M., Heuvelink, G. B. M., Nikolic, M. and Bajat, B. (2020). Random forest spatial interpolation, _Remote Sensing_**12**(10): 1687.
* Skoien et al. (2006) Skoien, J. O., Merz, R. and Bloschl, G. (2006). Top-kriging - geostatistics on stream networks, _Hydrology and Earth System Sciences_ **10**(2): 277-287.
* Strobl et al. (2008) Strobl, C., Boulesteix, A.-L., Kneib, T., Augustin, T. and Zeileis, A. (2008). Conditional variable importance for random forests, _BMC Bioinformatics_**9**(1): 307.
* Valavi et al. (2019) Valavi, R., Elith, J., Lahoz-Monfort, J. J. and Guillera-Arroita, G. (2019). blockCV: An R package for generating spatially or environmentally separated folds for k-fold cross-validation of species distribution models, _Methods in Ecology and Evolution_**10**(2): 225-232.
* Venables and Ripley (2002) Venables, W. N. and Ripley, B. D. (2002). _Modern Applied Statistics with S_, fourth edn, Springer, New York.
* Veronesi and Schillaci (2019) Veronesi, F. and Schillaci, C. (2019). Comparison between geostatistical and machine learning models as predictors of topsoil organic carbon with a focus on local uncertainty estimation, _Ecological Indicators_**101**: 1032-1044.
* Webster and Oliver (2007) Webster, R. and Oliver, M. A. (2007). _Geostatistcs for Environmental Scientists_, John Wiley & Sons, Inc., Chichester.
## Appendix A Distribution of prediction distances
The histograms in Figures 6 and 7 display the distributions of prediction distances in both case studies under the different validation scenarios. For comparison, the diagrams also show the distribution of prediction distances in the prediction task itself, in which an ML model trained on a training sample is used to predict the response variable throughout the entire study area.
## Appendix B Model details
NN interpolation was implemented by predicting the observed value of the (single) nearest observation according to Euclidean distance in UTM coordinate space.
OK was applied as a global interpolation technique with a spherical semivariogram model that was re-fitted to each training sample using iteratively re-weighted least squares (Cressie, 1993). Empirical semivariograms were estimated each time with Cressie's robust estimator. KED's residual semivariogram was similarly modeled with a spherical model fitted to a robust empirical residual semivariogram. Microscale variability was represented as a measurement error variance instead of the more common nugget effect. This turns OK and KED into smoothers, while a nugget effect would produce a local discontinuity at the training locations. This only affects the reported resubstitution errors, which would be 0 otherwise. The OK and KED implementations in R's _gstat_ package were used (Graler et al., 2016).
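The following is a minimal R sketch of this OK/KED setup on the meuse data shipped with _sp_/_gstat_; the response (log-zinc), the drift term (sqrt(dist)) and the initial variogram parameters are illustrative assumptions and do not reproduce the exact case-study configuration (in particular, the measurement-error treatment of microscale variability is omitted here).

```r
# Hedged sketch of the OK / KED setup described above (not the authors' code):
# robust (Cressie) empirical semivariograms with fitted spherical models.
library(sp)
library(gstat)
data(meuse);      coordinates(meuse)  <- ~ x + y
data(meuse.grid); gridded(meuse.grid) <- ~ x + y

# OK: spherical model fitted to a robust empirical semivariogram, global kriging
v_ok <- variogram(log(zinc) ~ 1, meuse, cressie = TRUE)
m_ok <- fit.variogram(v_ok, vgm(psill = 0.6, model = "Sph", range = 900, nugget = 0.05))
ok   <- krige(log(zinc) ~ 1, meuse, meuse.grid, model = m_ok)

# KED: same, but with an external drift (here sqrt(dist)) in the trend
v_ked <- variogram(log(zinc) ~ sqrt(dist), meuse, cressie = TRUE)
m_ked <- fit.variogram(v_ked, vgm(psill = 0.3, model = "Sph", range = 900, nugget = 0.05))
ked   <- krige(log(zinc) ~ sqrt(dist), meuse, meuse.grid, model = m_ked)
```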
Figure 6: Histograms of prediction distances in the Meuse case study. Top left: Distribution based on the 155 training locations and 3103 target grid cells as prediction locations. Top right: LOO-CV. Bottom left: Non-spatial (random) CV. Bottom right: \\(k\\)-means-based spatial CV. Distributions for CV-based estimators are based on 50 repetitions.
GWR was applied using the R implementation in the package _spgwr_ (Bivand and Yu, 2020), with an inner CV for optimizing the bandwidth parameter. A global bandwidth parameter instead of local ones was chosen to reduce the probability of overfitting. Only _sqrt.dist_ and _elev_ served as predictors in this model; UTM \(x\) and \(y\) were only used to define the geographic space within which the coefficients vary.
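A corresponding R sketch of the GWR step is given below; the use of the meuse data and the exact formula are illustrative assumptions.

```r
# Hedged sketch of the GWR setup (not the authors' code): global bandwidth by CV.
library(sp)
library(spgwr)
data(meuse); coordinates(meuse) <- ~ x + y

bw  <- gwr.sel(log(zinc) ~ sqrt(dist) + elev, data = meuse)    # bandwidth chosen by CV
fit <- gwr(log(zinc) ~ sqrt(dist) + elev, data = meuse, bandwidth = bw)
```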
The RF model from the original R implementation in the _randomForest_ package was used with default settings, such as 500 trees (Liaw and Wiener, 2002).
OK-RF regionalization was implemented by fitting OK and RF models as described above, and by combining their predictions \\(\\hat{y}_{OK}\\) and \\(\\hat{y}_{RF}\\) depending on the prediction distance \\(d\\), up to a maximum distance \\(d_{\\text{max}}:=500\\) m:
\\[\\hat{y}_{OK\\text{-}RF}:=\\rho\\hat{y}_{RF}+(1-\\rho)\\hat{y}_{OK},\\]
where \\(\\rho=\\text{min}\\{d/d_{\\text{max}},1\\}\\). No attempt was made to optimize \\(d_{\\text{max}}\\) as this model was primarily designed for demonstration purposes.
LDA classification was based on the implementation in the _MASS_ package (Venables and Ripley, 2002).
NN-LDA was implemented based on the LDA classifier and a (one-)nearest-neighbour classifier that uses Euclidean distance in UTM coordinate space to measure proximity. The classifier switches from nearest-neighbour to LDA mode at a separation distance of 100 m with no transition zone. This simple and transparent setting was chosen for illustrative purposes. No attempt was made to optimize the threshold distance, which is nevertheless generally consistent with typical field sizes in the study area, or to implement a transition zone.
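Analogously, the NN-LDA switch can be sketched as follows; `pred_nn`, `pred_lda` and `dist_nn` are hypothetical placeholders as in the OK-RF sketch above.

```r
# Hedged sketch of the NN-LDA combination (not the authors' code):
# 1-NN prediction up to the switch distance, LDA prediction beyond it.
combine_nn_lda <- function(pred_nn, pred_lda, dist_nn, d_switch = 100) {
  ifelse(dist_nn <= d_switch, as.character(pred_nn), as.character(pred_lda))
}
```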
Figure 7: Histograms of prediction distances in the Maipo case study. Left: Distribution based on randomly selecting 100 fields for training and the remaining 300 fields as prediction locations. Center and right: Distributions for field-level CV and \(k\)-means-based spatial CV, respectively. All histograms are aggregated over 50 samples or CV repetitions.
# A sufficient condition for the subexponential asymptotics of GI/G/1-type Markov chains with queueing applications
Hiroyuki Masuyama
Department of Systems Science, Graduate School of Informatics, Kyoto University
Kyoto 606-8501, Japan
E-mail: [email protected]
## 1 Introduction
This paper studies the subexponential asymptotics of the stationary distribution of a GI/GI/1-type Markov chain (see, e.g., He 2014) without jumps from level \"infinity\" to level zero. For simplicity, we call such Markov chains _GI/GI/1-type Markov chains without disasters_ because they are often used to analyze semi-Markovian queues without \"disasters\", which are negative customers who remove all the customers in the system (including themselves) on their arrivals. It should be noted that every M/G/1-type Markov chain is a GI/GI/1-type Markov chain without disasters (see, e.g., He 2014).
Several researchers have studied the subexponential asymptotics of the stationary distributions of GI/GI/1-type Markov chains (including M/G/1-type ones). Asmussen and Moller (1999) derive subexponential asymptotic formulas for the stationary distribution of an M/GI/1-type Markov chain with subexponential level increments. Li and Zhao (2005) study a GI/GI/1-type Markov chain with subexponential level increments, though some of their asymptotic formulas are incorrect (for details, see Masuyama 2011). Takine (2004) presents a subexponential asymptotic formula for M/GI/1-type Markov chains, under the assumption that the integrated tail distribution of level increments is subexponential. It should be noted that Takine (2004)'s assumption does not necessarily imply the subexponentiality of level increments themselves (see, e.g., Remark 3.5 in Sigman 1999). Focusing on the period of the \(G\)-matrix, Masuyama (2011) establishes sufficient conditions for the subexponential asymptotics for M/GI/1-type Markov chains, which are weaker than those presented in the literature (Asmussen and Moller 1999; Li and Zhao 2005; Takine 2004), except that they are limited to M/G/1-type Markov chains. Masuyama (2011) also points out that Takine (2004)'s derivation of the asymptotic formula implicitly assumes the aperiodicity of the \(G\)-matrix. Kim and Kim (2012) weaken Masuyama (2011)'s sufficient condition in the case where the \(G\)-matrix is periodic. Kimura et al. (2013) present a comprehensive study on the subexponential asymptotics of GI/GI/1-type Markov chains. They study the _locally_ subexponential asymptotics (Asmussen et al. 2003) as well as the (ordinarily) subexponential asymptotics. The sufficient conditions presented in Kimura et al. (2013) are weaker than those reported in the literature mentioned above.
The main result of this paper is to present a new sufficient condition for the subexponential asymptotics of the stationary distribution of a GI/GI/1-type Markov chain without disasters. This sufficient condition is weaker than the corresponding one presented in Kimura et al. (2013).
In this paper, we demonstrate the application of the main result to the stationary queue length distribution in the (standard) BMAP/GI/1 queue (see, e.g., Lucantoni 1991). According to Takine (2000), the stationary queue length distribution in the BMAP/GI/1 queue is equivalent to the stationary distribution of a certain M/G/1-type Markov chain. Combining this fact and the main result of this paper, we derive four subexponential asymptotic formulas for the stationary queue length distribution. Two of the four formulas are proved under weaker conditions than the two corresponding ones presented in Masuyama et al. (2009); and the other two formulas are shown for a BMAP/GI/1 queue with consistently varying service times, which is not considered in Masuyama et al. (2009).
We also apply the main result of this paper to a single-server queue with Markovian arrivals and the \\((a,b)\\)-bulk-service rule, denoted by MAP\\(/\\mathrm{GI}^{(a,b)}/\\)1 queue (see, e.g., Singh et al. 2013). For the MAP\\(/\\mathrm{GI}^{(a,b)}/\\)1 queue, we construct a GI/GI/1-type Markov chain without disasters by observing the queue length process at departure points. Thus using the main result, we obtain a subexponential asymptotic formula for the stationary queue length distribution at departure points. Combining the obtained formula with the relationship between the stationary queue length distribution at departure points and that at an arbitrary time point, we have a subexponential asymptotic formula for the stationary queue length distribution at an arbitrary time point.
The rest of this paper is divided into four sections. Section 2 provides basic definitions, notation and preliminary results. Section 3 presents the main result of this paper. Sections 4 and 5 discuss the applications of the main result.
## 2 Preliminaries
### Basic definitions and notation
Let \\(\\mathbb{Z}=\\{0,\\pm 1,\\pm 2,\\dots\\}\\), \\(\\mathbb{Z}_{+}=\\{0,1,2,\\dots\\}\\) and \\(\\mathbb{N}=\\{1,2,3,\\dots\\}\\), respectively. For any distribution function \\(F\\) on \\(\\mathbb{R}_{+}:=[0,\\infty)\\), let \\(\\overline{F}=1-F\\) and \\(F_{\\mathrm{e}}\\) denote the equilibrium distribution function of \\(F\\), i.e., \\(F_{\\mathrm{e}}(x)=\\int_{0}^{x}\\overline{F}(y)\\mathrm{d}y\\)\\(\\int_{0}^{\\infty}\\overline{F}(y)\\mathrm{d}y\\) for \\(x\\geq 0\\), which is well-defined if \\(F\\) has a positive finite mean. For any nonnegative random variable \\(Y\\) with positive finite mean, let \\(Y_{\\mathrm{e}}\\) denote the equilibrium random variable of \\(Y\\) such that
\\[\\mathsf{P}(Y_{\\mathrm{e}}\\leq x)=\\frac{1}{\\mathsf{E}[Y]}\\int_{0}^{x}\\mathsf{ P}(Y>y)\\mathrm{d}y,\\qquad x\\in\\mathbb{Z}_{+};\\]
and \\(Y_{\\mathrm{de}}=\\lfloor Y_{\\mathrm{e}}\\rfloor\\), which is called the discretized equilibrium random variable of \\(Y\\). If \\(Y\\) is nonnegative integer-valued, then
\\[\\mathsf{P}(Y_{\\mathrm{de}}=k)=\\frac{1}{\\mathsf{E}[Y]}\\mathsf{P}(Y>k),\\qquad k \\in\\mathbb{Z}_{+}.\\]
We now define \\(\\boldsymbol{e}\\) and \\(\\boldsymbol{I}\\) as the column vector of ones and the identity matrix, respectively, with appropriate dimensions according to the context. The superscript \"\\(\\mathrm{t}\\)\" represents the transpose operator for vectors and matrices. The notation \\([\\,\\cdot\\,]_{i,j}\\) (rep. \\([\\,\\cdot\\,]_{i}\\)) denotes the \\((i,j)\\)th (resp. \\(i\\)th) element of the matrix (resp. vector) in the square brackets.
For any matrix sequence \\(\\{\\boldsymbol{M}(k);k\\in\\mathbb{Z}\\}\\), let \\(\\overline{\\boldsymbol{M}}(k)=\\sum_{l=k+1}^{\\infty}\\boldsymbol{M}(l)\\) and \\(\\overline{\\boldsymbol{M}}(k)=\\sum_{l=k+1}^{\\infty}\\overline{\\boldsymbol{M}}(l)\\) for \\(k\\in\\mathbb{Z}\\). For any two matrix sequences \\(\\{\\boldsymbol{M}(k);k\\in\\mathbb{Z}\\}\\) and \\(\\{\\boldsymbol{N}(k);k\\in\\mathbb{Z}\\}\\) such that their products are well-defined, let \\(\\{\\boldsymbol{M}*\\boldsymbol{N}(k);k\\in\\mathbb{Z}\\}\\) denote the convolution of \\(\\{\\boldsymbol{M}(k)\\}\\) and \\(\\{\\boldsymbol{N}(k)\\}\\), i.e.,
\\[\\boldsymbol{M}*\\boldsymbol{N}(k)=\\sum_{l\\in\\mathbb{Z}}\\boldsymbol{M}(k-l) \\boldsymbol{N}(l)=\\sum_{l\\in\\mathbb{Z}}\\boldsymbol{M}(l)\\boldsymbol{N}(k-l),\\qquad k\\in\\mathbb{Z}.\\]
In addition, for any square matrix sequence \\(\\{\\boldsymbol{M}(k);k\\in\\mathbb{Z}\\}\\), let \\(\\{\\boldsymbol{M}^{*n}(k);k\\in\\mathbb{Z}\\}\\) (\\(n\\in\\mathbb{N}\\)) denote the \\(n\\)-fold convolution of \\(\\{\\boldsymbol{M}(k)\\}\\) with itself, i.e.,
\\[\\boldsymbol{M}^{*n}(k)=\\sum_{l\\in\\mathbb{Z}}\\boldsymbol{M}^{*(n-1)}(k-l) \\boldsymbol{M}(l),\\qquad k\\in\\mathbb{Z},\\]
where \\(\\boldsymbol{M}^{*0}(0)=\\boldsymbol{I}\\) and \\(\\boldsymbol{M}^{*0}(k)=\\boldsymbol{O}\\) for \\(k\\in\\mathbb{Z}\\setminus\\{0\\}\\).
Finally, for simplicity, we may write \\(\\boldsymbol{Z}(x)=o(f(x))\\) and \\(\\boldsymbol{Z}(x)\\stackrel{{ x}}{{\\sim}}\\widetilde{\\boldsymbol{Z} }f(x)\\) to represent
\\[\\lim_{x\\to\\infty}\\frac{\\boldsymbol{Z}(x)}{f(x)}=\\boldsymbol{O},\\qquad\\lim_{x \\to\\infty}\\frac{\\boldsymbol{Z}(x)}{f(x)}=\\widetilde{\\boldsymbol{Z}},\\]
respectively.
The above definitions and notation for matrices are applied to vectors and scalars in an appropriate manner.
### Stationary distribution of GI/G/1-type Markov chain
Let \\(\\mathbb{M}_{0}=\\{1,2,\\dots,M_{0}\\}\\) and \\(\\mathbb{M}=\\{1,2,\\dots,M\\}\\), where \\(M_{0},M\\in\\mathbb{N}\\). We then define \\(\\{(X_{n},S_{n});n\\in\\mathbb{Z}_{+}\\}\\) as a Markov chain with state space \\(\\mathbb{F}:=(\\{0\\}\\times\\mathbb{M}_{0})\\cup(\\mathbb{N}\\times\\mathbb{M})\\) and transition probability matrix \\(\\boldsymbol{T}\\), which is given by
\\[\\boldsymbol{T}=\\left(\\begin{array}{ccccc}\\boldsymbol{B}(0)&\\boldsymbol{B}(1 )&\\boldsymbol{B}(2)&\\boldsymbol{B}(3)&\\cdots\\\\ \\boldsymbol{B}(-1)&\\boldsymbol{A}(0)&\\boldsymbol{A}(1)&\\boldsymbol{A}(2)& \\cdots\\\\ \\boldsymbol{B}(-2)&\\boldsymbol{A}(-1)&\\boldsymbol{A}(0)&\\boldsymbol{A}(1)& \\cdots\\\\ \\boldsymbol{B}(-3)&\\boldsymbol{A}(-2)&\\boldsymbol{A}(-1)&\\boldsymbol{A}(0)& \\cdots\\\\ \\vdots&\\vdots&\\vdots&\\vdots&\\ddots\\end{array}\\right), \\tag{2.1}\\]
where \\(\\boldsymbol{B}(0)\\) and \\(\\boldsymbol{A}(0)\\) in the diagonal blocks are \\(M_{0}\\times M_{0}\\) and \\(M\\times M\\) matrices, respectively. Each element of \\(\\boldsymbol{T}\\) is specified by two nonnegative integers \\((k,i)\\in\\mathbb{F}\\), where the first variable \\(k\\) is called _level_ and the second one \\(i\\) is called _phase_.
Throughout this paper, we make the following assumption:
**Assumption 2.1**: (i)_\\(\\boldsymbol{T}\\) is irreducible and stochastic; (ii)\\(\\sum_{k=1}^{\\infty}k\\boldsymbol{B}(k)\\boldsymbol{e}<\\infty\\); (iii)\\(\\boldsymbol{A}:=\\sum_{k\\in\\mathbb{Z}}\\boldsymbol{A}(k)\\) is irreducible and stochastic; (iv)\\(\\sum_{k\\in\\mathbb{Z}}|k|\\boldsymbol{A}(k)<\\infty\\); (v)\\(\\,\\sigma:=\\boldsymbol{\\pi}\\sum_{k\\in\\mathbb{Z}}k\\boldsymbol{A}(k) \\boldsymbol{e}<0\\), where \\(\\boldsymbol{\\pi}:=(\\pi_{i})_{i\\in\\mathbb{M}}\\) is the stationary probability vector of \\(\\boldsymbol{A}:=\\sum_{k\\in\\mathbb{Z}}\\boldsymbol{A}(k)\\)._
**Remark 2.1**: \\(\\boldsymbol{T}\\) is positive recurrent if and only if \\(\\sigma<0\\) and \\(\\sum_{k=1}^{\\infty}k\\boldsymbol{B}(k)\\boldsymbol{e}<\\infty\\), provided that \\(\\boldsymbol{T}\\) and \\(\\boldsymbol{A}\\) are irreducible and stochastic (see, e.g., Asmussen 2003, Chapter XI, Proposition 3.1). Therefore Assumption 2.1 is equivalent to condition (I) of Assumption 2 in Kimura et al. (2013).
**Remark 2.2**: For \\(k\\in\\mathbb{N}\\), we have \\(\\boldsymbol{B}(-k)\\boldsymbol{e}+\\sum_{l=-k+1}^{\\infty}\\boldsymbol{A}(l) \\boldsymbol{e}=\\boldsymbol{e}\\). Thus condition (iii) of Assumption 2.1 implies \\(\\lim_{k\\to\\infty}\\boldsymbol{B}(-k)=\\boldsymbol{O}\\), which shows that the one-step transition probability from level \"infinity\" to level zero is equal to zero, i.e., no \"disasters\" happen in the context of queueing models.
Let \\(\\boldsymbol{x}:=(\\boldsymbol{x}(0),\\boldsymbol{x}(1),\\boldsymbol{x}(2),\\dots)\\) denote the unique stationary probability vector of \\(\\boldsymbol{T}\\), where \\(\\boldsymbol{x}(0)\\) (resp. \\(\\boldsymbol{x}(k)\\); \\(k\\in\\mathbb{N}\\)) is a \\(1\\times M_{0}\\) (resp. \\(1\\times M\\)) subvector of \\(\\boldsymbol{x}\\) corresponding to level zero (resp. level \\(k\\)). To characterize \\(\\boldsymbol{x}=(\\boldsymbol{x}(0),\\boldsymbol
**Proposition 2.2** (Kimura et al. 2013, Lemma 3.1.2): _If Assumption 2.1 holds, then_
\\[\\lim_{n\\to\\infty}\\sum_{l=0}^{\\tau-1}\\boldsymbol{L}(n\\tau+l)=\\tau\\boldsymbol{e} \\boldsymbol{\\psi},\\]
_where_
\\[\\boldsymbol{\\psi}=\\boldsymbol{\\pi}(\\boldsymbol{I}-\\boldsymbol{R})(\\boldsymbol {I}-\\boldsymbol{\\Phi}(0))/(-\\sigma), \\tag{2.6}\\]
_and \\(\\tau\\) denotes the period of an Markov additive process with kernel \\(\\{\\boldsymbol{A}(k);k\\in\\mathbb{Z}\\}\\) (see Appendix B in Kimura et al. 2010)._
**Remark 2.3**: Proposition 2.1 implies that \\(\\boldsymbol{\\psi}\\) is finite.
### Long-tailed distributions
We begin with the definitions of the long-tailed class and higher-order long-tailed classes.
**Definition 2.1**: A nonnegative random variable \\(U\\) and its distribution \\(F_{U}\\) are said to be long-tailed if \\(\\mathsf{P}(U>x)>0\\) for all \\(x\\geq 0\\) and \\(\\mathsf{P}(U>x+y)\\stackrel{{ x}}{{\\sim}}\\mathsf{P}(U>x)\\) for some (thus all) \\(y>0\\). The class of long-tailed distributions is denoted by \\(\\mathcal{L}\\).
**Definition 2.2**: A nonnegative random variable \\(U\\) and its distribution \\(F_{U}\\) are said to be the \\(\\mu\\)th-order long-tailed if \\(U^{1/\\mu}\\in\\mathcal{L}\\), where \\(\\mu\\geq 1\\). The class of the \\(\\mu\\)th-order long-tailed distributions is denoted by \\(\\mathcal{L}^{\\mu}\\). Further if \\(U\\in\\mathcal{L}^{\\mu}\\) (resp. \\(F_{U}\\in\\mathcal{L}^{\\mu}\\)) for all \\(\\mu\\geq 1\\), we write \\(U\\in\\mathcal{L}^{\\infty}\\) (resp. \\(F_{U}\\in\\mathcal{L}^{\\infty}\\)) and call \\(U\\) (resp. \\(F_{U}\\)) infinite-order long-tailed.
The basic properties of the higher-order long-tailed classes (including the long-tailed class) are summarized in Proposition 2.3 below.
**Proposition 2.3** (Masuyama 2013, Lemmas A.1-A.3):
* \\(\\mathcal{L}^{\\mu_{2}}\\subset\\mathcal{L}^{\\mu_{1}}\\) _for_ \\(1\\leq\\mu_{1}<\\mu_{2}\\)_._
* _If_ \\(U\\in\\mathcal{L}^{\\mu}\\) _(_\\(\\mu\\geq 1\\)_), then_ \\(\\mathsf{P}(U>x)=\\exp\\{-o(x^{1/\\mu})\\}\\)_._
* \\(U\\in\\mathcal{L}^{\\mu}\\) _(_\\(\\mu\\geq 1\\)_) if and only if_ \\(\\mathsf{P}(U>x-\\xi x^{1-1/\\mu})\\stackrel{{ x}}{{\\sim}}\\mathsf{P}(U>x)\\) _for some (thus all)_ \\(\\xi\\in\\mathbb{R}\\backslash\\{0\\}\\)_._
Next we introduce the subexponential class, which is the largest tractable subclass of \\(\\mathcal{L}\\).
**Definition 2.3** (Goldie and Kluppelberg 1998; Sigman 1999): A nonnegative random variable \\(U\\) and its distribution \\(F_{U}\\) are said to be subexponential if \\(\\mathsf{P}(U>x)>0\\) for all \\(x\\geq 0\\) and
\\[\\mathsf{P}(U_{1}+U_{2}>x)\\stackrel{{ x}}{{\\sim}}2\\mathsf{P}(U>x),\\]
where \\(U_{i}\\)'s (\\(i=1,2,\\dots\\)) are independent copies of \\(U\\). The class of subexponential distributions is denoted by \\(\\mathcal{S}\\).
**Remark 2.4**: The class \(\mathcal{S}\) includes Pareto, heavy-tailed Weibull, lognormal, Burr, and loggamma distributions, among others (see, e.g., Goldie and Kluppelberg 1998).
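A quick Monte Carlo illustration of the defining property of \(\mathcal{S}\) can be sketched in R as follows; the Pareto construction via the inverse-CDF method is an illustrative assumption, and the ratio approaches 2 only as \(x\) grows, so the value observed at a fixed \(x\) is merely indicative.

```r
# Hedged sketch: Definition 2.3 for a Pareto(alpha = 1.5) random variable with
# exact tail P(U > x) = (1 + x)^(-1.5), which belongs to S.
set.seed(1)
rpareto <- function(n, alpha = 1.5) runif(n)^(-1 / alpha) - 1
u1 <- rpareto(1e6); u2 <- rpareto(1e6)
x  <- 50
mean(u1 + u2 > x) / ((1 + x)^(-1.5))   # roughly 2 (convergence in x is slow)
```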
The following proposition is used several times in the subsequent sections.
**Proposition 2.4** (Masuyama 2011, Proposition A.3): _Let \\(\\{\\boldsymbol{M}(k);k\\in\\mathbb{Z}_{+}\\}\\) and \\(\\{\\boldsymbol{N}(k);k\\in\\mathbb{Z}_{+}\\}\\) denote finite-dimensional nonnegative matrix sequences such that their convolution \\(\\{\\boldsymbol{M}*\\boldsymbol{N}(k);k\\in\\mathbb{Z}_{+}\\}\\) is well-defined and \\(\\boldsymbol{M}:=\\sum_{k=0}^{\\infty}\\boldsymbol{M}(k)\\) and \\(\\boldsymbol{N}:=\\sum_{k=0}^{\\infty}\\boldsymbol{N}(k)\\) are finite. Suppose that for some random variable \\(U\\in\\mathcal{S}\\),_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\boldsymbol{M}}(k)}{\\mathsf{P}(U>k)}= \\widetilde{\\boldsymbol{M}}\\geq\\boldsymbol{O},\\qquad\\lim_{k\\to\\infty}\\frac{ \\overline{\\boldsymbol{N}}(k)}{\\mathsf{P}(U>k)}=\\widetilde{\\boldsymbol{N}} \\geq\\boldsymbol{O},\\]
_where \\(\\widetilde{\\boldsymbol{M}}=\\widetilde{\\boldsymbol{N}}=\\boldsymbol{O}\\) is allowed. We then have_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\boldsymbol{M}*\\boldsymbol{N}}(k)}{\\mathsf{ P}(U>k)}=\\widetilde{\\boldsymbol{M}}\\boldsymbol{N}+\\boldsymbol{M}\\widetilde{ \\boldsymbol{N}}.\\]
Finally we describe two subclasses of \\(\\mathcal{S}\\), which are used to apply the main result of this paper to the BMAP/GI/1 queue in Section 4.
**Definition 2.4** (Shneer 2006): A nonnegative random variable \(U\) and its distribution function \(F_{U}\) and cumulative hazard function \(Q_{U}:=-\log\overline{F}_{U}\) belong to the subexponential concave class \(\mathcal{SC}\) with index \(\alpha\) (\(0<\alpha<1\)) if the following hold: (i) \(Q_{U}\) is eventually concave; (ii) \(\log x=o(Q_{U}(x))\); and (iii) there exists some \(x_{0}>0\) such that \(Q_{U}(x)/x^{\alpha}\) is nonincreasing for all \(x\geq x_{0}\), i.e.,
\\[\\frac{Q_{U}(x)}{Q_{U}(u)}\\leq\\left(\\frac{x}{u}\\right)^{\\alpha},\\qquad x\\geq u \\geq x_{0}.\\]
The subexponential concave class with index \\(\\alpha\\) is denoted by \\(\\mathcal{SC}_{\\alpha}\\).
**Remark 2.5**: \\(\\mathcal{SC}_{\\alpha}\\subset\\mathcal{L}^{1/\\beta}\\) for all \\(0<\\alpha<\\beta\\leq 1\\) (see Lemma A.6 in Masuyama 2013). In addition, typical examples of \\(Q_{U}\\in\\mathcal{SC}\\) are (i) \\(Q_{U}(x)=(\\log x)^{\\gamma}x^{\\alpha}\\) and (ii) \\(Q_{U}(x)=(\\log x)^{\\beta}\\), where \\(0<\\alpha<1\\), \\(\\beta>1\\) and \\(\\gamma\\in\\mathbb{R}\\). See Appendix A.2 in Masuyama (2013) for further remarks.
**Definition 2.5**: A nonnegative random variable \\(U\\) and its distribution function \\(F_{U}\\) belong to the consistent variation class \\(\\mathcal{C}\\) if \\(\\overline{F}_{U}(x)>0\\) for all \\(x\\geq 0\\) and
\\[\\lim_{v\\downarrow 1}\\liminf_{x\\to\\infty}\\frac{\\overline{F}_{U}(vx)}{\\overline{F }_{U}(x)}=1\\ \\ \\text{or equivalently,}\\ \\ \\lim_{v\\uparrow 1}\\limsup_{x\\to\\infty}\\frac{ \\overline{F}_{U}(vx)}{\\overline{F}_{U}(x)}=1.\\]
**Remark 2.6**: It is known that (i) \\(\\mathcal{C}\\subset\\mathcal{L}^{\\infty}\\) (see Lemma A.4 in Masuyama 2013); (ii) \\(\\mathcal{R}\\subset\\mathcal{C}\\subset\\mathcal{L}\\cap\\mathcal{D}\\subset \\mathcal{S}\\) where \\(\\mathcal{D}\\) and \\(\\mathcal{R}\\) denote the dominated variation class and the regular variation class, respectively (see, e.g., the introduction of Aleskeviene et al. 2008).
## 3 Main Result
Before presenting the main result, we first show a related result.
**Proposition 3.1** (Kimura et al. 2013, Theorem 3.1.1): _Suppose that (i) Assumption 2.1 is satisfied; and (ii) there exists some random variable \\(U\\) in \\(\\mathbb{Z}_{+}\\) with positive finite mean such that \\(U_{\\rm de}\\in\\mathcal{S}\\) and_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{A}}(k)\\mathbf{e}}{\\mathsf{P}(U>k)}=\\frac{\\bm {c}_{A}}{\\mathsf{E}[U]},\\qquad\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{B}}(k)\\mathbf{e }}{\\mathsf{P}(U>k)}=\\frac{\\mathbf{c}_{B}}{\\mathsf{E}[U]}, \\tag{3.1}\\]
_where \\(\\mathbf{c}_{A}\\) and \\(\\mathbf{c}_{B}\\) are \\(M\\times 1\\) and \\(M_{0}\\times 1\\) nonnegative vectors, respectively, satisfying \\(\\mathbf{c}_{A}\
eq\\mathbf{0}\\) or \\(\\mathbf{c}_{B}\
eq\\mathbf{0}\\). We then have_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{x}}(k)}{\\mathsf{P}(U_{\\rm de}>k)}=\\frac{ \\mathbf{x}(0)\\mathbf{c}_{B}+\\overline{\\mathbf{x}}(0)\\mathbf{c}_{A}}{-\\sigma}\\cdot\\mathbf{\\pi}.\\]
In this section, we present a more general result than the above proposition. For this purpose, we make the following assumption:
**Assumption 3.1**: There exists some random variable \\(Y\\) in \\(\\mathbb{Z}_{+}\\) such that
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{A}}}(k)\\mathbf{e}}{\\mathsf{P}(Y>k) }=\\mathbf{c}_{A},\\qquad\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{B}}}(k)\\mathbf{ e}}{\\mathsf{P}(Y>k)}=\\mathbf{c}_{B}, \\tag{3.2}\\]
where \\(\\mathbf{c}_{A}\\) and \\(\\mathbf{c}_{B}\\) are \\(M\\times 1\\) and \\(M_{0}\\times 1\\) nonnegative vectors, respectively, satisfying \\(\\mathbf{c}_{A}\
eq\\mathbf{0}\\) or \\(\\mathbf{c}_{B}\
eq\\mathbf{0}\\).
**Remark 3.1**: We suppose that (3.1) holds for some random variable \(U\) in \(\mathbb{Z}_{+}\) with positive finite mean (\(U_{\rm de}\in\mathcal{S}\) is not necessarily assumed). It then follows from (3.1) that
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{A}}}(k)\\mathbf{e}}{\\mathsf{P}(U_{ \\rm de}=k)}=\\mathbf{c}_{A},\\qquad\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{B}}(k)\\mathbf{e }}{\\mathsf{P}(U_{\\rm de}=k)}=\\mathbf{c}_{B},\\]
which yield
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{A}}}(k)\\mathbf{e}}{\\mathsf{P}(U_{ \\rm de}>k)}=\\mathbf{c}_{A},\\qquad\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{B }}}(k)\\mathbf{e}}{\\mathsf{P}(U_{\\rm de}>k)}=\\mathbf{c}_{B}.\\]
Thus Assumption 3.1 holds for \\(Y=U_{\\rm de}\\).
The following theorem is the main result of this paper.
**Theorem 3.1**: _Suppose that (i) Assumption 2.1 is satisfied; and (ii) Assumption 3.1 holds for some \\(Y\\in\\mathcal{S}\\). We then have_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{x}}(k)}{\\mathsf{P}(Y>k)}=\\frac{\\mathbf{x}(0) \\mathbf{c}_{B}+\\overline{\\mathbf{x}}(0)\\mathbf{c}_{A}}{-\\sigma}\\cdot\\mathbf{\\pi}. \\tag{3.3}\\]Before proving Theorem 3.1, we compare the above theorem with Proposition 3.1. According to Remark 3.1, condition (ii) of Proposition 3.1 is sufficient for condition (ii) of Theorem 3.1. On the other hand, the latter do not imply the former. To confirm this, we suppose that (3.2) holds for a random \\(Y\\) in \\(\\mathbb{Z}_{+}\\) such that
\\[\\mathsf{P}(Y>k)=\\left\\{\\begin{array}{ll}\\mathsf{P}(U_{\\rm de}>2n),&k=2n,\\;n \\in\\mathbb{Z}_{+},\\\\ \\frac{1}{2}\\left\\{\\mathsf{P}(U_{\\rm de}>2n)+\\mathsf{P}(U_{\\rm de}>2n+1)\\right\\},&k=2n+1,\\;n\\in\\mathbb{Z}_{+},\\end{array}\\right. \\tag{3.4}\\]
where \\(U\\) is a random variable in \\(\\mathbb{Z}_{+}\\) such that \\(U\\in\\mathcal{S}\\) and \\(U_{\\rm de}\\in\\mathcal{S}\\) (see Goldie and Kluppelberg 1998 and also Definition A.3 and Proposition A.2 in Masuyama 2011). It follows from \\(U_{\\rm de}\\in\\mathcal{S}\\) and (3.4) that \\(\\mathsf{P}(Y>k)\\stackrel{{ k}}{{\\sim}}\\mathsf{P}(U_{\\rm de}>k)\\) and thus \\(Y\\in\\mathcal{S}\\) (Sigma 1999, Proposition 2.8), which shows that condition (ii) of Theorem 3.1 holds for \\(Y\\in\\mathcal{S}\\
**Lemma 3.1**: _Suppose that Assumption 2.1 is satisfied. If Assumption 3.1 holds for some \(Y\in\mathcal{L}\), then_
\\[\\lim_{k\\to\\infty}\\sum_{m=1}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L }(m)}{\\mathsf{P}(Y>k)}=\\frac{\\mathbf{c}_{A}\\mathbf{\\pi}(\\mathbf{I}-\\mathbf{R})(\\mathbf{I}-\\mathbf{\\Phi}( 0))}{-\\sigma}, \\tag{3.7}\\] \\[\\lim_{k\\to\\infty}\\sum_{m=1}^{\\infty}\\frac{\\overline{\\mathbf{B}}(k+m) \\mathbf{L}(m)}{\\mathsf{P}(Y>k)}=\\frac{\\mathbf{c}_{B}\\mathbf{\\pi}(\\mathbf{I}-\\mathbf{R})(\\mathbf{I}-\\bm {\\Phi}(0))}{-\\sigma}. \\tag{3.8}\\]
_Proof._ See Appendix A.1.
**Lemma 3.2**: _Suppose that Assumption 2.1 is satisfied. If Assumption 3.1 holds for some \\(Y\\in\\mathcal{L}\\), then_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{R}}(k)}{\\mathsf{P}(Y>k)} =\\frac{\\mathbf{c}_{A}\\mathbf{\\pi}(\\mathbf{I}-\\mathbf{R})}{-\\sigma}, \\tag{3.9}\\] \\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{R}_{0}}(k)}{\\mathsf{P}(Y>k)} =\\frac{\\mathbf{c}_{B}\\mathbf{\\pi}(\\mathbf{I}-\\mathbf{R})}{-\\sigma}. \\tag{3.10}\\]
_Proof._ From (2.5), we have
\\[\\overline{\\mathbf{R}}(k)=\\left[\\overline{\\mathbf{A}}(k)+\\sum_{m=1}^{\\infty}\\overline{ \\mathbf{A}}(k+m)\\mathbf{L}(m)\\right](\\mathbf{I}-\\mathbf{\\Phi}(0))^{-1}. \\tag{3.11}\\]
Further it follows from (3.2) and \\(Y\\in\\mathcal{L}\\) that
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{A}}(k)}{\\mathsf{P}(Y>k)}\\leq\\lim_{k\\to \\infty}\\frac{\\overline{\\overline{\\mathbf{A}}}(k-1)\\mathbf{e}\\mathbf{e}^{\\mathrm{t}}- \\overline{\\overline{\\mathbf{A}}}(k)\\mathbf{e}\\mathbf{e}^{\\mathrm{t}}}{\\mathsf{P}(Y>k)}=\\bm {O}.\\]
Thus (3.11) yields
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{R}}(k)}{\\mathsf{P}(Y>k)}=\\lim_{k\\to \\infty}\\sum_{m=1}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m)}{\\mathsf{P}(Y>k )}(\\mathbf{I}-\\mathbf{\\Phi}(0))^{-1}. \\tag{3.12}\\]
Substituting (3.7) into (3.12), we obtain (3.9). Similarly, we can prove (3.10).
**Lemma 3.3**: _Suppose that Assumption 2.1 is satisfied. If Assumption 3.1 holds for some \\(Y\\in\\mathcal{S}\\), then_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{F}}(k)}{\\mathsf{P}(Y>k)}=\\frac{(\\mathbf{I}- \\mathbf{R})^{-1}\\mathbf{c}_{A}\\mathbf{\\pi}}{-\\sigma}. \\tag{3.13}\\]
_Proof._ It follows from (2.2) that
\\[\\sum_{k=0}^{\\infty}\\mathbf{F}(k)=(\\mathbf{I}-\\mathbf{R})^{-1}. \\tag{3.14}\\]
Further combining (2.2) with Lemma 6 in Jelenkovic and Lazar (1998) and (3.14) yields
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{F}}(k)}{\\mathsf{P}(Y>k)}=(\\mathbf{I}-\\mathbf{R}) ^{-1}\\lim_{k\\to\\infty}\\frac{\\overline{\\mathbf{R}}(k)}{\\mathsf{P}(Y>k)}(\\mathbf{I}-\\bm {R})^{-1}.\\]From this and (3.9), we have (3.13). \\(\\Box\\)
We now provide the proof of Theorem 3.1.
_Proof of Theorem 3.1._ Applying Proposition 2.4 to (2.3) and using (3.10), (3.13) and (3.14), we obtain
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\boldsymbol{x}}(k)}{\\mathsf{P}(Y>k)}=\\frac{ \\boldsymbol{x}(0)}{-\\sigma}\\left[\\boldsymbol{c}_{B}\\boldsymbol{\\pi}+ \\boldsymbol{R}_{0}(\\boldsymbol{I}-\\boldsymbol{R})^{-1}\\boldsymbol{c}_{A} \\boldsymbol{\\pi}\\right].\\]
Substituting (2.4) into the above equation yields (3.3). \\(\\Box\\)
## 4 Application to BMAP/GI/1 Queue
This section discusses the application of the main result to the standard BMAP/GI/1 queue.
### Model description
We first introduce the batch Markovian arrival process (BMAP) (Lucantoni 1991). Let \\(\\{J(t);t\\geq 0\\}\\) denote a Markov chain with state space \\(\\mathbb{M}=\\{1,2,\\ldots,M\\}\\), which is called background Markov chain. Let \\(\\{N(t);t\\geq 0\\}\\) denote the counting process of arrivals from the BMAP. We assume that the bivariate process \\(\\{(N(t),J(t));t\\geq 0\\}\\) is a Markov chain with state space \\(\\mathbb{Z}_{+}\\times\\mathbb{M}\\) and the following infinitesimal generator \\(\\boldsymbol{Q}\\):
\\[\\boldsymbol{Q}=\\left(\\begin{array}{ccccc}\\boldsymbol{C}&\\boldsymbol{D}(1)& \\boldsymbol{D}(2)&\\boldsymbol{D}(3)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{C}&\\boldsymbol{D}(1)&\\boldsymbol{D}(2)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{O}&\\boldsymbol{C}&\\boldsymbol{D}(1)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{O}&\\boldsymbol{O}&\\boldsymbol{C}&\\ddots\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\ddots\\end{array}\\right), \\tag{4.1}\\]
where \\(\\boldsymbol{D}(k)\\geq\\boldsymbol{O}\\) (\\(k\\in\\mathbb{N}\\)), \\([\\boldsymbol{C}]_{i,i}<0\\) (\\(i\\in\\mathbb{M}\\)), \\([\\boldsymbol{C}]_{i,j}\\geq 0\\) (\\(i\
eq j\\), \\(i,j\\in\\mathbb{M}\\)) and \\((\\boldsymbol{C}+\\sum_{k=1}^{\\infty}\\boldsymbol{D}(k))\\,\\boldsymbol{e}= \\boldsymbol{0}\\). Thus the BMAP is characterized by the rate matrices \\(\\{\\boldsymbol{C},\\boldsymbol{D}(1),\\boldsymbol{D}(2),\\ldots\\}\\).
Let \\(\\widehat{\\boldsymbol{D}}(z)=\\sum_{k=1}^{\\infty}z^{k}\\boldsymbol{D}(k)\\) and \\(\\boldsymbol{D}=\\widehat{\\boldsymbol{D}}(1)=\\sum_{k=1}^{\\infty}\\boldsymbol{D }(k)\\). It then follows from (4.1) that
\\[\\mathsf{E}[z^{N(t)}\\mbox{1l}(J(t)=j)\\mid J(0)=i]=\\left[\\exp\\{(\\boldsymbol{C}+ \\widehat{\\boldsymbol{D}}(z))t\\}\\right]_{i,j},\\quad i,j\\in\\mathbb{M},\\;t\\geq 0,\\]
and that \\(\\boldsymbol{C}+\\boldsymbol{D}\\) is the infinitesimal generator of the background Markov chain \\(\\{J(t);t\\geq 0\\}\\). For analytical convenience, we assume that \\(\\boldsymbol{C}+\\boldsymbol{D}\\) is irreducible, and then define \\(\\boldsymbol{\\varpi}:=(\\varpi_{i})_{i\\in\\mathbb{M}}>\\boldsymbol{0}\\) as the unique stationary probability vector of \\(\\boldsymbol{C}+\\boldsymbol{D}\\). In this setting, the mean arrival rate, denoted by \\(\\lambda\\), is given by
\\[\\lambda=\\boldsymbol{\\varpi}\\sum_{k=1}^{\\infty}\\boldsymbol{D}(k)\\boldsymbol{e}, \\tag{4.2}\\]
which is assumed to be strictly positive (i.e., \\(\\lambda>0\\)) in order to exclude a trivial case.
Customers are served on the first-come-first-served basis, and their service times are independent and identically distributed (i.i.d.) according to distribution function \\(H\\) with mean \\(h\\in(0,\\infty)\\) and \\(H(0)=0\\). We assume that the offered load \\(\\rho:=\\lambda h>0\\) satisfies
\\[\\rho<1,\\]
which ensures that the BMAP/GI/1 queue is stable (Loynes 1962).
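To make the quantities \(\boldsymbol{\varpi}\), \(\lambda\) and \(\rho\) concrete, here is a small R sketch for a two-state BMAP with batch sizes 1 and 2; the rate matrices and the mean service time are illustrative assumptions only.

```r
# Hedged sketch: stationary vector, arrival rate (4.2) and offered load of an
# illustrative BMAP with M = 2 background states and batch sizes 1 and 2.
C  <- matrix(c(-3, 1,
                2, -4), 2, 2, byrow = TRUE)    # transitions without arrivals
D1 <- matrix(c(1.5, 0.5,
               1.0, 0.0), 2, 2, byrow = TRUE)  # D(1): arrivals of batch size 1
D2 <- matrix(c(0.0, 0.0,
               0.5, 0.5), 2, 2, byrow = TRUE)  # D(2): arrivals of batch size 2
stopifnot(max(abs((C + D1 + D2) %*% rep(1, 2))) < 1e-12)  # generator rows sum to 0

# varpi solves varpi (C + D) = 0 with varpi e = 1 (least squares on the stacked system)
Q     <- C + D1 + D2
varpi <- qr.solve(rbind(t(Q), rep(1, 2)), c(0, 0, 1))

lambda <- sum(varpi %*% (1 * D1 + 2 * D2))     # lambda = varpi * sum_k k D(k) e
h      <- 0.2                                  # mean service time (illustrative)
rho    <- lambda * h                           # offered load; stability requires rho < 1
```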
Let \\(\\boldsymbol{y}(k)\\) denote a \\(1\\times M\\) vector such that \\([\\boldsymbol{y}(k)]_{i}=\\mathsf{P}(L=k,J=i)\\) for \\((k,i)\\in\\mathbb{Z}_{+}\\times\\mathbb{M}\\), where \\(L\\) and \\(J\\) denote generic random variables for the number of customers in the system and the state of the background Markov chain, respectively, in steady state. It is known that \\(\\boldsymbol{y}:=(\\boldsymbol{y}(0),\\boldsymbol{y}(1),\\boldsymbol{y}(2),\\dots)\\) is the stationary probability vector of the following transition probability matrix of M/G/1 type (Takine 2000):
\\[\\boldsymbol{T}_{\\rm M/G/1}:=\\left(\\begin{array}{ccccc}\\boldsymbol{P}(0)& \\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\boldsymbol{P}(3)&\\cdots\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\boldsymbol{P}(3)& \\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{O}&\\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\vdots&\\ddots\\end{array}\\right), \\tag{4.3}\\]
where \\(\\boldsymbol{P}(k)\\) (\\(k\\in\\mathbb{Z}_{+}\\)) denotes an \\(M\\times M\\) matrix such that
\\[\\widehat{\\boldsymbol{P}}(z):=\\sum_{k=0}^{\\infty}z^{k}\\boldsymbol{P}(k)=\\int_ {0}^{\\infty}\\exp\\{(\\boldsymbol{C}+\\widehat{\\boldsymbol{D}}(z))x\\}{\\rm d}H(x). \\tag{4.4}\\]
It is easy to see that \\(\\boldsymbol{T}_{\\rm M/G/1}\\) is equivalent to \\(\\boldsymbol{T}\\) in (2.1) with
\\[\\boldsymbol{A}(k)=\\left\\{\\begin{array}{ll}\\boldsymbol{P}(k+1),&k\\geq-1,\\\\ \\boldsymbol{O},&k\\leq-2,\\end{array}\\right.\\qquad\\boldsymbol{B}(k)=\\left\\{ \\begin{array}{ll}\\boldsymbol{P}(k),&k\\in\\mathbb{Z}_{+},\\\\ \\boldsymbol{P}(0),&k=-1,\\\\ \\boldsymbol{O},&k\\leq-2.\\end{array}\\right. \\tag{4.5}\\]
Note here that (4.2), (4.4) and \\(\\rho=\\lambda h\\) yield
\\[\\boldsymbol{\\varpi}\\sum_{k=1}^{\\infty}k\\boldsymbol{P}(k)\\boldsymbol{e}= \\boldsymbol{\\varpi}\\widehat{\\boldsymbol{P}}^{\\prime}(1)\\boldsymbol{e}= \\boldsymbol{\\varpi}\\sum_{k=1}^{\\infty}k\\boldsymbol{D}(k)\\boldsymbol{e}\\cdot \\int_{0}^{\\infty}x{\\rm d}H(x)=\\lambda h=\\rho. \\tag{4.6}\\]
We now define \\(\\boldsymbol{P}_{\\rm e}(k)\\) (\\(k\\in\\mathbb{Z}_{+}\\)) as an \\(M\\times M\\) matrix such that
\\[\\widehat{\\boldsymbol{P}}_{\\rm e}(z):=\\sum_{k=0}^{\\infty}z^{k}\\boldsymbol{P}_{ \\rm e}(k)=\\int_{0}^{\\infty}\\exp\\{(\\boldsymbol{C}+\\widehat{\\boldsymbol{D}}(z) )x\\}{\\rm d}H_{\\rm e}(x), \\tag{4.7}\\]
where \\(H_{\\rm e}\\) is the equilibrium distribution of the service time distribution \\(H\\). We then have the following lemma:
**Lemma 4.1**: 
\[\overline{\boldsymbol{P}}(k)\boldsymbol{e}=h\cdot\boldsymbol{P}_{\rm e}*\overline{\boldsymbol{D}}(k)\boldsymbol{e},\qquad k\in\mathbb{Z}_{+}. \tag{4.8}\]

_Proof._ Post-multiplying both sides of (4.7) by \(-\mathbf{C}-\widehat{\mathbf{D}}(z)\) and integrating the right-hand side by parts yield
\\[\\widehat{\\mathbf{P}}_{\\rm e}(z)(-\\mathbf{C}-\\widehat{\\mathbf{D}}(z))=h^{-1}(\\mathbf{I}-\\widehat {\\mathbf{P}}(z)),\\qquad|z|<1. \\tag{4.9}\\]
It follows from (4.9) and \\(-\\mathbf{C}\\mathbf{e}=\\mathbf{D}\\mathbf{e}=\\widehat{\\mathbf{D}}(1)\\mathbf{e}\\) that
\\[\\widehat{\\mathbf{P}}_{\\rm e}(z)\\frac{\\widehat{\\mathbf{D}}(1)\\mathbf{e}-\\widehat{\\mathbf{D}}(z )\\mathbf{e}}{1-z}=h^{-1}\\frac{\\mathbf{e}-\\widehat{\\mathbf{P}}(z)\\mathbf{e}}{1-z},\\qquad|z|<1. \\tag{4.10}\\]
Note here that
\\[\\sum_{k=0}^{\\infty}z^{k}\\overline{\\mathbf{D}}(k)\\mathbf{e}=\\frac{\\widehat{\\mathbf{D}}(1) \\mathbf{e}-\\widehat{\\mathbf{D}}(z)\\mathbf{e}}{1-z},\\qquad\\sum_{k=0}^{\\infty}z^{k}\\overline {\\mathbf{P}}(k)\\mathbf{e}=\\frac{\\mathbf{e}-\\widehat{\\mathbf{P}}(z)\\mathbf{e}}{1-z}.\\]
Substituting these equations into (4.10), we have
\\[\\widehat{\\mathbf{P}}_{\\rm e}(z)\\sum_{k=0}^{\\infty}z^{k}\\overline{\\mathbf{D}}(k)\\mathbf{e} =h^{-1}\\sum_{k=0}^{\\infty}z^{k}\\overline{\\mathbf{P}}(k)\\mathbf{e},\\]
and thus
\\[\\overline{\\mathbf{P}}(k)\\mathbf{e}=h\\cdot\\sum_{l=0}^{k}\\mathbf{P}_{\\rm e}(l)\\overline{\\bm {D}}(k-l)\\mathbf{e},\\qquad k\\in\\mathbb{Z}_{+},\\]
which shows that (4.8) holds.
### Asymptotic formulas for the queue length
In this subsection, we present some subexponential asymptotic formulas for the stationary queue length distribution of the BMAP/GI/1 queue. For this purpose, we use the following result:
**Corollary 4.1**: _Suppose that there exists some random variable \\(Y\\) in \\(\\mathbb{Z}_{+}\\) such that \\(Y\\in\\mathcal{S}\\) and_
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\overline{\\mathbf{P}}}(k)\\mathbf{e}}{{\\sf P}(Y>k)}= \\mathbf{c}\\geq\\mathbf{0},\
eq\\mathbf{0}. \\tag{4.11}\\]
_We then have_
\\[\\overline{\\mathbf{y}}(k)\\stackrel{{ k}}{{\\sim}}\\frac{\\overline{\\mathbf{ \\omega}}\\mathbf{c}}{1-\\rho}\\overline{\\mathbf{\\omega}}\\cdot{\\sf P}(Y>k). \\tag{4.12}\\]
_Proof._ Recall that \(\mathbf{T}_{\rm M/G/1}\) in (4.3) is equivalent to \(\mathbf{T}\) in (2.1) with block matrices \(\mathbf{A}(k)\) and \(\mathbf{B}(k)\) (\(k\in\mathbb{Z}\)) satisfying (4.5). Recall also that \(\boldsymbol{\varpi}\) is the stationary probability vector of \(\mathbf{C}+\mathbf{D}\). Thus (4.4) implies that \(\boldsymbol{\varpi}\) satisfies \(\boldsymbol{\varpi}\widehat{\mathbf{P}}(1)=\boldsymbol{\varpi}\) and corresponds to the stationary probability vector \(\mathbf{\pi}\) of \(\mathbf{A}=\sum_{k\in\mathbb{Z}}\mathbf{A}(k)\). Combining these facts with (4.5), (4.6) and (4.11), we have
\\[\\overline{\\overline{\\mathbf{A}}}(k)\\mathbf{e}\\stackrel{{ k}}{{\\sim}} \\overline{\\overline{\\mathbf{B}}}(k)\\mathbf{e}\\stackrel{{ k}}{{\\sim}}\\mathbf{c} \\cdot{\\sf P}(Y>k),\\]
\\[\\sigma=\\overline{\\mathbf{\\omega}}\\sum_{k=0}^{\\infty}(k-1)\\mathbf{P}(k)\\mathbf{e}=\\rho-1.\\]
Therefore (4.12) follows from Theorem 3.1 and \\([\\mathbf{y}(0)]_{i}+[\\overline{\\mathbf{y}}(0)]_{i}={\\sf P}(J=i)=\\varpi_{i}\\) (\\(i\\in\\mathbb{M}\\)).
In the following, we consider three cases: (i) the service time distribution is light-tailed; (ii) second-order long-tailed; and (iii) consistently varying.
#### 4.2.1 Light-tailed service time
Let \\(G\\) denote a random variable in \\(\\mathbb{Z}_{+}\\) such that \\(\\mathsf{P}(G=0)=0\\) and
\\[\\mathsf{P}(G=k)=\\frac{\\varpi\\boldsymbol{D}(k)\\boldsymbol{e}}{\\lambda_{G}}, \\qquad k\\in\\mathbb{N}, \\tag{4.13}\\]
where \\(\\lambda_{G}\\) is the arrival rate of batches, i.e., \\(\\lambda_{G}=\\boldsymbol{\\varpi}\\boldsymbol{D}\\boldsymbol{e}\\). From the definition of \\(G\\), we have \\(\\mathsf{E}[G]=\\lambda/\\lambda_{G}\\) and thus
\\[\\mathsf{P}(G_{\\rm de}>k)=\\frac{\\overline{\\boldsymbol{\\varpi}\\overline{ \\boldsymbol{D}}}(k)\\boldsymbol{e}}{\\lambda},\\qquad k\\in\\mathbb{Z}_{+}. \\tag{4.14}\\]
We now make the following assumption:
**Assumption 4.1**: There exists some \(\widetilde{\boldsymbol{d}}_{G}\geq\boldsymbol{0},\neq\boldsymbol{0}\) such that
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\boldsymbol{\\overline{D}}}(k)\\boldsymbol{e}} {\\mathsf{P}(G_{\\rm de}>k)}=\\widetilde{\\boldsymbol{d}}_{G}. \\tag{4.15}\\]
**Theorem 4.1**: _Suppose that \\(H\\) is light-tailed, i.e., \\(\\int_{0}^{\\infty}{\\rm e}^{\\delta x}{\\rm d}H(x)<\\infty\\) for some \\(\\delta>0\\). Further if Assumption 4.1 holds and \\(G_{\\rm de}\\in\\mathcal{S}\\), then_
\\[\\overline{\\boldsymbol{\\overline{P}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}h\\widehat{\\boldsymbol{P}}_{\\rm e}(1)\\widetilde{ \\boldsymbol{d}}_{G}\\cdot\\mathsf{P}(G_{\\rm de}>k), \\tag{4.16}\\]
_and_
\\[\\mathsf{P}(L>k,J=i)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1-\\rho} \\varpi_{i}\\cdot\\mathsf{P}(G_{\\rm de}>k). \\tag{4.17}\\]
_Proof._ It follows from (4.7) and \\(\\boldsymbol{\\varpi}(\\boldsymbol{C}+\\boldsymbol{D})=\\boldsymbol{0}\\) that
\\[\\boldsymbol{\\varpi}\\widehat{\\boldsymbol{P}}_{\\rm e}(1)=\\boldsymbol{\\varpi}, \\tag{4.18}\\]
and from (4.14) and (4.15) that
\\[\\boldsymbol{\\varpi}\\widetilde{\\boldsymbol{d}}_{G}=\\lambda. \\tag{4.19}\\]
Thus if (4.16) holds, then (4.18), (4.19) and Corollary 4.1 yield
\\[\\overline{\\boldsymbol{y}}(k)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1- \\rho}\\boldsymbol{\\varpi}\\cdot\\mathsf{P}(G_{\\rm de}>k),\\]
which shows that (4.17) holds.
In what follows, we prove (4.16). Let \\(\\boldsymbol{\\Lambda}(k)\\) (\\(k\\in\\mathbb{Z}_{+}\\)) denote
\\[\\boldsymbol{\\Lambda}(k)=\\left\\{\\begin{array}{ll}\\boldsymbol{I}+\\theta^{-1} \\boldsymbol{C},&k=0,\\\\ \\theta^{-1}\\boldsymbol{D}(k),&k\\in\\mathbb{N},\\end{array}\\right. \\tag{4.20}\\]
where \\(\\theta=\\max_{j\\in\\mathbb{M}}|[\\boldsymbol{C}]_{j,j}|\\). We then rewrite (4.7) as
\\[\\sum_{k=0}^{\\infty}z^{k}\\boldsymbol{P}_{\\rm e}(k)=\\int_{0}^{\\infty}\\sum_{n=0} ^{\\infty}{\\rm e}^{-\\theta x}\\frac{(\\theta x)^{n}}{n!}{\\rm d}H_{\\rm e}(x)\\left[ \\sum_{k=0}^{\\infty}z^{k}\\boldsymbol{\\Lambda}(k)\\right]^{n},\\]which implies that
\\[\\overline{\\mathbf{P}}_{\\rm e}(k)=\\int_{0}^{\\infty}\\sum_{n=1}^{\\infty}e^{-\\theta_{x}} \\frac{(\\theta x)^{n}}{n!}{\\rm d}H_{\\rm e}(x)\\overline{\\mathbf{\\Lambda}^{*n}}(k), \\qquad k\\in\\mathbb{Z}_{+}. \\tag{4.21}\\]
According to Corollary 3.3 in Sigman (1999), \\(G_{\\rm de}\\in\\mathcal{S}\\subset\\mathcal{L}\\) implies \\({\\sf P}(G>k)=o({\\sf P}(G_{\\rm de}>k))\\). It thus follows from (4.13), (4.14), (4.20) and \\(\\mathbf{\\varpi}>\\mathbf{0}\\) that for \\(i\\in\\mathbb{M}\\),
\\[[\\overline{\\mathbf{\\Lambda}}(k)\\mathbf{e}]_{i} =\\frac{\\lambda_{G}}{\\theta}\\frac{[\\overline{\\mathbf{D}}(k)\\mathbf{e}]_{i} }{\\lambda_{G}}\\leq\\frac{\\lambda_{G}}{\\theta\\varpi_{i}}\\frac{\\mathbf{\\varpi} \\overline{\\mathbf{D}}(k)\\mathbf{e}}{\\lambda_{G}}\\] \\[=\\frac{\\lambda_{G}}{\\theta\\varpi_{i}}{\\sf P}(G>k)=o({\\sf P}(G_{ \\rm de}>k)). \\tag{4.22}\\]
Using this and Proposition 2.4, we obtain
\\[\\overline{\\mathbf{\\Lambda}^{*n}}(k)=o({\\sf P}(G_{\\rm de}>k)),\\qquad n\\in\\mathbb{N}. \\tag{4.23}\\]
Note here that \\(H\\) is light-tailed if and only if \\(H_{\\rm e}\\) is light-tailed. Therefore similarly to the proof of Lemma 3.5 in Masuyama et al. (2009), we can readily prove from (4.21) and (4.23) that
\\[\\overline{\\mathbf{P}}_{\\rm e}(k)=o({\\sf P}(G_{\\rm de}>k)). \\tag{4.24}\\]
As a result, we obtain (4.16) by applying Proposition 2.4 to (4.8) and using (4.15) and (4.24). \\(\\Box\\)
Masuyama et al. (2009) present a similar result:
**Proposition 4.1** (Masuyama et al. 2009, Theorem 3.2): _Suppose that (i) \(H\) is light-tailed; and (ii) there exists some \(\widetilde{\mathbf{D}}\geq\mathbf{O},\neq\mathbf{O}\) such that \(\overline{\mathbf{D}}(k)\stackrel{{ k}}{{\sim}}\widetilde{\mathbf{D}}{\sf P}(G>k)\). Further if \(G\in\mathcal{S}\) and \(G_{\rm de}\in\mathcal{S}\), then (4.17) holds._
Theorem 4.1 shows that the condition \(G\in\mathcal{S}\) in Proposition 4.1 is not necessary for the subexponential asymptotic formula (4.17). In addition, condition (ii) of Proposition 4.1 implies Assumption 4.1, whereas the converse does not hold. This can be confirmed in the same way as in the comparison of Theorem 3.1 and Proposition 3.1 in Section 3. As a result, the conditions of Proposition 4.1 are more restrictive than those of Theorem 4.1.
#### 4.2.2 Second-order long-tailed service time
**Theorem 4.2**: _Suppose that (i) \(H_{\rm e}\in\mathcal{L}^{\mu}\) for some \(\mu\geq 2\); and (ii) \(\sum_{k=1}^{\infty}{\rm e}^{Q(k)}\mathbf{D}(k)<\infty\) for some cumulative hazard function \(Q\in\mathcal{SC}\) such that \(x^{1/\mu}=O(Q(x))\). We then have_
\\[\\overline{\\mathbf{P}}_{\\rm e}(k)\\stackrel{{ k}}{{\\sim}}\\mathbf{e}\\mathbf{ \\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda). \\tag{4.25}\\]
_In addition, if (iii) \\(H_{\\rm e}\\in\\mathcal{S}\\), then_
\\[\\overline{\\overline{\\mathbf{P}}}(k)\\mathbf{e}\\stackrel{{ k}}{{\\sim}}\\rho \\mathbf{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda), \\tag{4.26}\\]_and_
\\[\\mathsf{P}(L>k,J=i)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1-\\rho}\\varpi_{i} \\cdot\\overline{H}_{\\rm e}(k/\\lambda). \\tag{4.27}\\]
**Remark 4.1**: Condition (i) implies that \(\overline{H}_{\rm e}(x)=\exp\{-o(x^{1/\mu})\}\) (see Proposition 2.3 (ii)). Further condition (ii) implies that \(\overline{\boldsymbol{D}}(k)=o(\exp\{-\delta k^{1/\mu}\})\) for some \(\delta>0\). Thus \(\overline{\boldsymbol{D}}(k)=o(\overline{H}_{\rm e}(k))\).
_Proof of Theorem 4.2._ Let \\(T\\) denote a nonnegative random variable distributed with \\(H_{\\rm e}\\) independently of BMAP \\(\\{\\boldsymbol{C},\\boldsymbol{D}(1),\\boldsymbol{D}(2),\\dots\\}\\). We can readily obtain
\\[\\mathsf{P}(N(T)>k\\mid J(0)=i)\\stackrel{{ k}}{{\\sim}}\\mathsf{P}(T >k/\\lambda),\\qquad i\\in\\mathbb{M}, \\tag{4.28}\\]
by following the proof of Lemma 3.1 in Masuyama et al. (2009) and using Corollary B.1 instead of Lemma 2.1 in Masuyama et al. (2009). Further similarly to the proof of Lemma 3.2 in Masuyama et al. (2009), we can prove from (4.28) that
\\[\\mathsf{P}(N(T)>k,J(T)=j\\mid J(0)=i)\\stackrel{{ k}}{{\\sim}} \\varpi_{j}\\mathsf{P}(T>k/\\lambda),\\qquad i,j\\in\\mathbb{M},\\]
which shows that (4.25) holds.
Next we prove (4.26). According to Remark 4.1, \\(\\overline{\\boldsymbol{D}}(k)=o(\\exp\\{-\\delta k^{1/\\mu}\\})\\) for some \\(\\delta>0\\), which implies that
\\[\\overline{\\overline{\\boldsymbol{D}}}(k) \\leq o(\\exp\\{-(\\delta/2)k^{1/\\mu}\\})\\sum_{l=k+1}^{\\infty}\\exp\\{-( \\delta/2)l^{1/\\mu}\\}\\] \\[= o(\\exp\\{-(\\delta/2)k^{1/\\mu}\\}).\\]
Thus since \\(\\overline{H}_{\\rm e}(k/\\lambda)=\\exp\\{-o(k^{1/\\mu})\\}\\) (see Remark 4.1), we obtain
\\[\\overline{\\overline{\\boldsymbol{D}}}(k)=o(\\overline{H}_{\\rm e}(k/\\lambda)). \\tag{4.29}\\]
Applying Proposition 2.4 to (4.8) and using (4.25) and (4.29) yield
\\[\\overline{\\overline{\\boldsymbol{P}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}h\\boldsymbol{e}\\boldsymbol{\\varpi}\\sum_{k=0}^{\\infty} \\overline{\\boldsymbol{D}}(k)\\boldsymbol{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda )=\\rho\\boldsymbol{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\]
where the last equality is due to (4.2) and \\(\\rho=\\lambda h\\). Therefore we have (4.26).
Finally, from (4.26) and Corollary 4.1, we have
\\[\\overline{\\boldsymbol{y}}(k)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1- \\rho}\\boldsymbol{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\]
which shows that (4.27) holds. \\(\\Box\\)
We now compare Theorem 4.2 with a similar result presented in Masuyama et al. (2009), which is as follows:
**Proposition 4.2** (Masuyama et al. 2009, Theorem 3.1): _If (i) \\(H\\in\\mathcal{L}^{2}\\) and \\(H_{\\rm e}\\in\\mathcal{S}\\); and (ii) \\(\\sum_{k=1}^{\\infty}{\\rm e}^{\\phi\\sqrt{k}}\\boldsymbol{D}(k)<\\infty\\) for some \\(\\phi>0\\), then (4.27) holds._
Note that if \\(H\\in\\mathcal{L}^{2}\\), then \\(H_{\\rm e}\\in\\mathcal{L}^{2}\\) (see Lemma A.2 in Masuyama et al. 2009). Note also that \\(H_{\\rm e}\\in\\mathcal{L}^{2}\\) if and only if \\(H_{\\rm e}\\in\\mathcal{L}^{\\mu}\\) for some \\(\\mu\\geq 2\\) (see Proposition 2.3 (i)). Thus conditions (i) and (iii) of Theorem 4.2 are weaker than condition (i) of Proposition 4.2. Further if \\(Q(x)=\\phi\\sqrt{x}\\), then condition (ii) of Theorem 4.2 is reduced to condition (ii) of Proposition 4.2. As a result, Theorem 4.2 is a more general result than Proposition 4.2.
Actually, Asmussen et al. (1999) consider an M/GI/1 queue with arrival rate \\(\\lambda\\) and service time distribution \\(H\\), and the authors prove that if \\(H_{\\rm e}\\in\\mathcal{L}^{2}\\cap\\mathcal{S}\\),
\\[\\mathsf{P}(L>k)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1-\\rho} \\overline{H}_{\\rm e}(k/\\lambda).\\]
Theorem 4.2 includes this result as a special case whereas Proposition 4.2 does not.
#### 4.2.3 Consistently varying service time
**Theorem 4.3**: _Suppose that (i) \\(H_{\\rm e}\\in\\mathcal{C}\\) and \\(\\int_{0}^{\\infty}\\overline{H}_{\\rm e}(x){\\rm d}x<\\infty\\) and (ii) \\(\\overline{\\boldsymbol{D}}(k)=o(\\overline{H}_{\\rm e}(k))\\). We then have (4.25). Further if (iii) there exists some finite \\(\\widetilde{\\boldsymbol{d}}_{H}\\geq\\boldsymbol{0}\\) such that \\(\\overline{\\overline{\\boldsymbol{D}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}\\overline{H}_{\\rm e}(k/\\lambda)\\widetilde{ \\boldsymbol{d}}_{H}\\), then_
\\[\\overline{\\boldsymbol{\\boldsymbol{P}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}\\left(\\rho\\boldsymbol{e}+h\\widehat{\\boldsymbol{P}}_{\\rm e }(1)\\widetilde{\\boldsymbol{d}}_{H}\\right)\\overline{H}_{\\rm e}(k/\\lambda), \\tag{4.30}\\]
_and_
\\[\\mathsf{P}(L>k,J=i)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho+h \\varpi\\widetilde{\\boldsymbol{d}}_{H}}{1-\\rho}\\varpi_{i}\\cdot\\overline{H}_{\\rm e }(k/\\lambda). \\tag{4.31}\\]
Proof: As in the proof of Theorem 4.2, let \\(T\\) denote a nonnegative random variable distributed with \\(H_{\\rm e}\\) independently of BMAP \\(\\{\\boldsymbol{C},\\boldsymbol{D}(1),\\boldsymbol{D}(2),\\ldots\\}\\). It is easy to see that the conditions of Proposition B.2 are satisfied. Using Proposition B.2, we can obtain (4.28) and thus (4.25) in the same way as the proof of Theorem 4.2, where we do not require condition (iii).
In addition, applying Proposition 2.4 to (4.8) and using (4.25) and condition (iii), we obtain
\\[\\overline{\\boldsymbol{\\boldsymbol{P}}}(k)\\boldsymbol{e} \\stackrel{{ k}}{{\\sim}}h\\left(\\boldsymbol{e} \\varpi\\sum_{k=0}^{\\infty}\\overline{\\boldsymbol{D}}(k)\\boldsymbol{e}+\\widehat{ \\boldsymbol{P}}_{\\rm e}(1)\\widetilde{\\boldsymbol{d}}_{H}\\right)\\overline{H}_{ \\rm e}(k/\\lambda)\\] \\[=\\left(\\rho\\boldsymbol{e}+h\\widehat{\\boldsymbol{P}}_{\\rm e}(1) \\widetilde{\\boldsymbol{d}}_{H}\\right)\\overline{H}_{\\rm e}(k/\\lambda),\\]
where the last equality follows from (4.2) and \\(\\rho=\\lambda h\\). Therefore we have (4.30). Combining (4.30), (4.18) and Corollary 4.1 yields
\\[\\overline{\\boldsymbol{y}}(k)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho+h \\varpi\\widetilde{\\boldsymbol{d}}_{H}}{1-\\rho}\\varpi\\cdot\\overline{H}_{\\rm e }(k/\\lambda),\\]
which leads to (4.31).
Suppose \\(\\widetilde{\\mathbf{d}}_{H}={\\bf 0}\\). It then follows that asymptotic formula (4.31) in Theorem 4.3 has the same expression as (4.27) in Theorem 4.2. The two theorems assume that \\(\\overline{\\mathbf{D}}(k)=o(\\overline{H}_{\\rm e}(k))\\) (see Remark 4.1 and condition (ii) of Theorem 4.3) and thus that the service time distribution has a dominant impact on the tail of the stationary queue length distribution.
Conversely, the following theorem assumes, as with Theorem 4.1, that the batch size distribution has a dominant impact on the tail of the stationary queue length distribution.
**Theorem 4.4**: _Suppose that conditions (i) and (ii) of Theorem 4.3 are satisfied. Further suppose that Assumption 4.1 holds for \\(G_{\\rm de}\\in{\\cal S}\\) such that \\(\\overline{H}_{\\rm e}(k/\\lambda)=o({\\sf P}(G_{\\rm de}>k))\\). We then have (4.16) and thus (4.17)._
_Proof._ As shown in the proof of Theorem 4.3, the asymptotics (4.25) holds under conditions (i) and (ii) of Theorem 4.3. From (4.25) and \\(\\overline{H}_{\\rm e}(k/\\lambda)=o({\\sf P}(G_{\\rm de}>k))\\), we have (4.24), i.e., \\(\\overline{\\mathbf{P}}_{\\rm e}(k)=o({\\sf P}(G_{\\rm de}>k))\\). The rest of the proof is the same as that of Theorem 4.1. \\(\\Box\\)
## 5 Application to MAP/GI\\({}^{(a,b)}\\)/1 Queue
In this section, we apply our main result to a single-server queue with Markovian arrivals and the \((a,b)\)-bulk-service rule, which is denoted by the MAP/GI\({}^{(a,b)}\)/1 queue (Singh et al. 2013).
### Model description
We assume that the arrival process is a Markovian arrival process (MAP), which is a special case of the BMAP \(\{\mathbf{C},\mathbf{D}(1),\mathbf{D}(2),\dots\}\) (introduced in Section 4) such that \(\mathbf{D}(k)=\mathbf{O}\) for all \(k\geq 2\). For convenience, we use the symbols defined for the BMAP in Section 4, though we denote, for simplicity, \(\mathbf{D}(1)\) by \(\mathbf{D}\). Thus the MAP is characterized by \(\{\mathbf{C},\mathbf{D}\}\). As with Section 4, we assume that \(\mathbf{C}+\mathbf{D}\) is irreducible and that the arrival rate \(\lambda\) is strictly positive, i.e.,
\\[\\lambda=\\mathbf{\\varpi}\\mathbf{D}\\mathbf{e}>0, \\tag{5.1}\\]
where \\(\\varpi\\) is the unique stationary probability vector of \\(\\mathbf{C}+\\mathbf{D}\\).
We also assume that the server works according to the \((a,b)\)-bulk-service rule (Singh et al. 2013). To explain this rule, suppose that \(l\) customers are waiting in the queue at the completion of a service. The rule is then as follows (a small illustrative sketch is given after the list):
* If \\(0\\leq l<a\\), the server keeps idle until the queue length is equal to the lower threshold \\(a\\) and then starts serving all the \\(a\\) customers when the queue length reaches \\(a\\); and
* If \\(l\\geq a\\), the server immediately starts serving \\(\\min(l,b)\\) customers in the queue and makes the other \\(l-b\\) customers (if any) be in the queue.
The service times are assumed to be independent of the number of customers in service and i.i.d. according to distribution function \\(H\\) with mean \\(h\\in(0,\\infty)\\) and \\(H(0)=0\\). We assume that the offered load \\(\\rho=\\lambda h\\) satisfies
\\[\\rho<b, \\tag{5.2}\\]
under which the system is stable (Loynes 1962).
It should be noted that since \\(\\boldsymbol{D}(k)=\\boldsymbol{O}\\) for all \\(k\\geq 2\\), (4.4) and (4.7) are reduced to
\\[\\widehat{\\boldsymbol{P}}(z) =\\int_{0}^{\\infty}\\exp\\{(\\boldsymbol{C}+z\\boldsymbol{D})x\\} \\mathrm{d}H(x), \\tag{5.3}\\] \\[\\widehat{\\boldsymbol{P}}_{\\mathrm{e}}(z) =\\int_{0}^{\\infty}\\exp\\{(\\boldsymbol{C}+z\\boldsymbol{D})x\\} \\mathrm{d}H_{\\mathrm{e}}(x). \\tag{5.4}\\]
In addition, since \\(\\overline{\\boldsymbol{D}}(0)=\\boldsymbol{D}\\) and \\(\\overline{\\boldsymbol{D}}(k)=\\boldsymbol{O}\\) for all \\(k\\in\\mathbb{N}\\), it follows from Lemma 4.1 that
\\[\\overline{\\boldsymbol{P}}(k)\\boldsymbol{e}=h\\cdot\\boldsymbol{P}_{\\mathrm{e}}( k)\\boldsymbol{D}\\boldsymbol{e},\\qquad k\\in\\mathbb{Z}_{+}. \\tag{5.5}\\]
### Queue length process
Let \\(L^{(a,b)}(t)\\) (\\(t\\geq 0\\)) denote the total number of customers in the system at time \\(t\\). Let \\(J(t)\\) (\\(t\\geq 0\\)) denote the state of the background Markov chain at time \\(t\\). Let \\(0=t_{0}\\leq t_{1}\\leq t_{2}\\leq\\cdots\\) denote time points at each of which a service is completed.
Let \\(L^{(a,b)}_{n}\\) and \\(J_{n}\\) (\\(n\\in\\mathbb{Z}_{+}\\)) denote
\\[L^{(a,b)}_{n}=\\lim_{\\varepsilon\\downarrow 0}L^{(a,b)}(t_{n}+\\varepsilon), \\quad J_{n}=\\lim_{\\varepsilon\\downarrow 0}J(t_{n}+\\varepsilon).\\]
Thus \\(L^{(a,b)}_{n}\\) and \\(J_{n}\\) denote the number of customers in the queue and the state of the background Markov chain, respectively, immediately after the completion of the \\(n\\)th service. It follows (Singh et al. 2013) that \\(\\{(L^{(a,b)}_{n},J_{n});n\\in\\mathbb{N}_{+}\\}\\) is a discrete-time Markov chain with state space \\(\\mathbb{Z}_{+}\\times\\mathbb{M}\\), whose transition probability matrix \\(\\boldsymbol{T}^{(a,b)}_{+}\\) is given by
\\[\\boldsymbol{T}^{(a,b)}_{+}=\\left(\\begin{array}{ccccccccc}\\boldsymbol{P}_{0 }(0)&\\boldsymbol{P}_{0}(1)&\\boldsymbol{P}_{0}(2)&\\cdots&\\boldsymbol{P}_{0}(a) &\\cdots&\\boldsymbol{P}_{0}(b)&\\cdots\\\\ \\boldsymbol{P}_{1}(0)&\\boldsymbol{P}_{1}(1)&\\boldsymbol{P}_{1}(2)&\\cdots& \\boldsymbol{P}_{1}(a)&\\cdots&\\boldsymbol{P}_{1}(b)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\vdots&\\ddots&\\vdots&\\ddots\\\\ \\boldsymbol{P}_{a-1}(0)&\\boldsymbol{P}_{a-1}(1)&\\boldsymbol{P}_{a-1}(2)& \\cdots&\\boldsymbol{P}_{a-1}(a)&\\cdots&\\boldsymbol{P}_{a-1}(b)&\\cdots\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a )&\\cdots&\\boldsymbol{P}(b)&\\cdots\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a )&\\cdots&\\boldsymbol{P}(b)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\vdots&\\ddots&\\vdots&\\ddots\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a )&\\cdots&\\boldsymbol{P}(b)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\cdots&\\boldsymbol{P}(a -1)&\\cdots&\\boldsymbol{P}(b-1)&\\cdots\\\\ \\boldsymbol{O}&\\boldsymbol{O}&\\boldsymbol{P}(0)&\\cdots&\\boldsymbol{P}(a-2)& \\cdots&\\boldsymbol{P}(b-2)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\vdots&\\ddots&\\vdots&\\ddots\\\\ \\end{array}\\right), \\tag{5.6}\\]
It then follows from (5.9) that
\\[\\boldsymbol{y}^{(a,b)}(k) =\\frac{1}{\\eta}\\sum_{l=0}^{k}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[(- \\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{k-l}(-\\boldsymbol{C})^{-1},\\qquad 0 \\leq k\\leq a-1, \\tag{5.10}\\] \\[\\boldsymbol{y}^{(a,b)}(a) =\\frac{1}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\int_{0}^{\\infty}\\boldsymbol {P}(x,0)\\overline{H}(x)\\mathrm{d}x,\\] (5.11) \\[\\boldsymbol{y}^{(a,b)}(k) =\\frac{1}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\int_{0}^{\\infty}\\boldsymbol {P}(x,k-a)\\overline{H}(x)\\mathrm{d}x\\] \\[\\qquad\\qquad+\\frac{1}{\\eta}\\sum_{l=a+1}^{k}\\boldsymbol{y}_{+}^{( a,b)}(l)\\int_{0}^{\\infty}\\boldsymbol{P}(x,k-l)\\overline{H}(x)\\mathrm{d}x,\\quad k \\geq a+1. \\tag{5.12}\\]
Note here that \\(H_{\\mathrm{e}}^{\\prime}(x)=h^{-1}\\overline{H}(x)\\) for \\(x\\geq 0\\). Note also that
\\[\\int_{0}^{\\infty}\\boldsymbol{P}(x,k)H_{\\mathrm{e}}^{\\prime}(x)\\mathrm{d}x=\\int _{0}^{\\infty}\\boldsymbol{P}(x,k)\\mathrm{d}H_{\\mathrm{e}}(x)=\\boldsymbol{P}_{ \\mathrm{e}}(k),\\]
where the last equality is due to (4.7). Thus (5.11) and (5.12) can be rewritten as
\\[\\boldsymbol{y}^{(a,b)}(a) =\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\boldsymbol{P}_{\\mathrm{e}} (0), \\tag{5.13}\\] \\[\\boldsymbol{y}^{(a,b)}(k) =\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\boldsymbol{P}_{\\mathrm{e}} (k-a)\\] \\[\\quad+\\frac{h}{\\eta}\\sum_{l=a+1}^{k}\\boldsymbol{y}_{+}^{(a,b)}(l) \\boldsymbol{P}_{\\mathrm{e}}(k-l), k\\geq a+1. \\tag{5.14}\\]
### Asymptotic formulas for the queue length
Let \\(\\boldsymbol{S}(0)\\) denote a \\(bM\\times bM\\) matrix such that
\\[\\boldsymbol{S}(0)=\\left(\\begin{array}{ccccccc}\\boldsymbol{P}_{0}(0)& \\boldsymbol{P}_{0}(1)&\\boldsymbol{P}_{0}(2)&\\cdots&\\boldsymbol{P}_{0}(a)& \\cdots&\\boldsymbol{P}_{0}(b-1)\\\\ \\boldsymbol{P}_{1}(0)&\\boldsymbol{P}_{1}(1)&\\boldsymbol{P}_{1}(2)&\\cdots& \\boldsymbol{P}_{1}(a)&\\cdots&\\boldsymbol{P}_{1}(b-1)\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\vdots&\\ddots&\\vdots\\\\ \\boldsymbol{P}_{a-1}(0)&\\boldsymbol{P}_{a-1}(1)&\\boldsymbol{P}_{a-1}(2)& \\cdots&\\boldsymbol{P}_{a-1}(a)&\\cdots&\\boldsymbol{P}_{a-1}(b-1)\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a) &\\cdots&\\boldsymbol{P}(b-1)\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a )&\\cdots&\\boldsymbol{P}(b-1)\\\\ \\vdots&\\vdots&\\vdots&\\ddots&\\vdots&\\ddots&\\vdots\\\\ \\boldsymbol{P}(0)&\\boldsymbol{P}(1)&\\boldsymbol{P}(2)&\\cdots&\\boldsymbol{P}(a )&\\cdots&\\boldsymbol{P}(b-1)\\\\ \\end{array}\\right), \\tag{5.15}\\]and let \\(\\mathbf{S}(k)\\) (\\(k\\in\\mathbb{N}\\)) denote a \\(bM\\times M\\) matrix such that
\\[\\mathbf{S}(k)=\\left(\\begin{array}{c}\\mathbf{P}_{0}(k+b-1)\\\\ \\mathbf{P}_{1}(k+b-1)\\\\ \\vdots\\\\ \\mathbf{P}_{a-1}(k+b-1)\\\\ \\mathbf{P}(k+b-1)\\\\ \\vdots\\\\ \\mathbf{P}(k+b-1)\\end{array}\\right). \\tag{5.16}\\]
Further let \\(\\mathbf{S}(-k)\\) (\\(k=1,2,\\ldots,b\\)) denote an \\(M\\times bM\\) matrix such that
\\[\\mathbf{S}(-k)=\\left(\\overbrace{\\mathbf{O},\\mathbf{O},\\ldots,\\mathbf{O}}^{k-1},\\mathbf{P}(0),\\bm {P}(1),\\ldots,\\mathbf{P}(b-k)\\right). \\tag{5.17}\\]
We then rewrite (5.6) as
\\[\\mathbf{T}_{+}^{(a,b)}=\\left(\\begin{array}{c|cccc}\\mathbf{S}(0)&\\mathbf{S}(1)&\\mathbf{S}(2) &\\mathbf{S}(3)&\\cdots\\\\ \\hline\\mathbf{S}(-1)&\\mathbf{P}(b)&\\mathbf{P}(b+1)&\\mathbf{P}(b+2)&\\cdots\\\\ \\mathbf{S}(-2)&\\mathbf{P}(b-1)&\\mathbf{P}(b)&\\mathbf{P}(b+1)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\vdots&\\ddots\\\\ \\mathbf{S}(-b)&\\mathbf{P}(1)&\\mathbf{P}(2)&\\mathbf{P}(3)&\\cdots\\\\ \\mathbf{O}&\\mathbf{P}(0)&\\mathbf{P}(1)&\\mathbf{P}(2)&\\cdots\\\\ \\mathbf{O}&\\mathbf{O}&\\mathbf{P}(0)&\\mathbf{P}(1)&\\cdots\\\\ \\vdots&\\vdots&\\vdots&\\vdots&\\ddots\\end{array}\\right), \\tag{5.18}\\]
which is a GI/G/1-type Markov chain without disasters.
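The block structure in (5.16)-(5.18) can be assembled mechanically. The small numpy sketch below is purely illustrative (the function names are ours) and assumes the \(M\times M\) blocks \(\boldsymbol{P}(k)\) and the boundary blocks \(\boldsymbol{P}_{l}(k)\) are available as callables:

```python
import numpy as np

def S_minus(k, b, P, M):
    """S(-k) of Eq. (5.17): k-1 zero blocks followed by P(0), ..., P(b-k); size M x bM."""
    blocks = [np.zeros((M, M))] * (k - 1) + [P(j) for j in range(b - k + 1)]
    return np.hstack(blocks)

def S_plus(k, a, b, P, P_boundary, M):
    """S(k) of Eq. (5.16): P_0(k+b-1), ..., P_{a-1}(k+b-1) stacked on b-a copies of P(k+b-1)."""
    top = [P_boundary(l, k + b - 1) for l in range(a)]   # boundary blocks P_l(.)
    bottom = [P(k + b - 1)] * (b - a)                    # repeated homogeneous block
    return np.vstack(top + bottom)
```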
**Lemma 5.1**: _Suppose that the arrival process is the MAP \\(\\{\\mathbf{C},\\mathbf{D}\\}\\), i.e., a BMAP characterized by \\(\\{\\mathbf{C},\\mathbf{D}(1),\\mathbf{D}(2),\\ldots\\}\\) such that \\(\\mathbf{D}(k)=\\mathbf{O}\\) for all \\(k\\geq 2\\). If \\(H_{\\rm e}\\in\\mathcal{L}^{2}\\), then_
\\[\\overline{\\mathbf{P}}_{\\rm e}(k) \\stackrel{{ k}}{{\\sim}}\\mathbf{e}\\mathbf{\\varpi}\\cdot \\overline{H}_{\\rm e}(k/\\lambda), \\tag{5.19}\\] \\[\\overline{\\overline{\\mathbf{P}}}(k) \\stackrel{{ k}}{{\\sim}}\\rho\\mathbf{e}\\cdot\\overline{H}_{ \\rm e}(k/\\lambda),\\] (5.20) \\[\\overline{\\overline{\\mathbf{S}}}(k) \\stackrel{{ k}}{{\\sim}}\\rho\\mathbf{e}\\cdot\\overline{H}_{ \\rm e}(k/\\lambda). \\tag{5.21}\\]
_Proof._ Since conditions (i) and (ii) of Theorem 4.2 are satisfied, the asymptotic equation (5.19) holds. Substituting (5.19) into (5.5) and using (5.1) and \(\rho=\lambda h\) yields
\\[\\overline{\\overline{\\mathbf{P}}}(k)\\mathbf{e}\\stackrel{{ k}}{{\\sim}}h\\mathbf{e} \\mathbf{\\varpi}\\mathbf{D}\\mathbf{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda)=\\rho\\mathbf{e}\\cdot \\overline{H}_{\\rm e}(k/\\lambda),\\]
which shows that (5.20) holds. Further applying (5.20) to (5.7) and using \\((-\\mathbf{C})^{-1}\\mathbf{D}\\mathbf{e}=\\mathbf{e}\\), we obtain for \\(l=0,1,\\ldots,b-1\\),
\\[\\overline{\\overline{\\mathbf{P}}}_{l}(k)\\mathbf{e}\\stackrel{{ k}}{{\\sim}} \\left[(-\\mathbf{C})^{-1}\\mathbf{D}\\right]^{a-l}\\rho\\mathbf{e}\\cdot\\overline{H}_{\\rm e}(k/ \\lambda)=\\rho\\mathbf{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda).\\]Finally, incorporating this and (5.20) into (5.16) yields (5.21). \\(\\Box\\)
**Theorem 5.1**: _If \\(H_{\\rm e}\\in\\mathcal{L}^{2}\\cap\\mathcal{S}\\), then_
\\[\\overline{\\boldsymbol{y}}_{+}^{(a,b)}(k) \\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{b-\\rho} \\boldsymbol{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda), \\tag{5.22}\\] \\[\\overline{\\boldsymbol{y}}^{(a,b)}(k) \\stackrel{{ k}}{{\\sim}}\\frac{h}{\\eta}\\frac{b}{b- \\rho}\\boldsymbol{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda). \\tag{5.23}\\]
_Proof._ Note that \\(\\boldsymbol{T}_{+}^{(a,b)}\\) in (5.18) is equivalent to \\(\\boldsymbol{T}\\) in (2.1) with
\\[\\boldsymbol{A}(k)=\\left\\{\\begin{array}{ll}\\boldsymbol{P}(k+b),&k\\geq-b,\\\\ \\boldsymbol{O},&k\\leq-b-1,\\end{array}\\right.\\qquad\\boldsymbol{B}(k)=\\left\\{ \\begin{array}{ll}\\boldsymbol{S}(k),&k\\geq-b,\\\\ \\boldsymbol{O},&k\\leq-b-1.\\end{array}\\right. \\tag{5.24}\\]
It then follows from (4.6) and (5.2) that
\\[\\boldsymbol{\\varpi}\\sum_{k\\in\\mathbb{Z}}k\\boldsymbol{A}(k)\\boldsymbol{e}= \\boldsymbol{\\varpi}\\sum_{k=-b}^{\\infty}k\\boldsymbol{P}(k+b)\\boldsymbol{e}= \\rho-b<0.\\]
It also follows from (5.20), (5.21) and (5.24) that
\\[\\overline{\\boldsymbol{\\overline{A}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}\\rho\\boldsymbol{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\quad \\overline{\\boldsymbol{\\overline{B}}}(k)\\boldsymbol{e}\\stackrel{{ k}}{{\\sim}}\\rho \\boldsymbol{e}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\]
where the dimensions of \(\overline{\boldsymbol{\overline{A}}}(k)\boldsymbol{e}\) and \(\overline{\boldsymbol{\overline{B}}}(k)\boldsymbol{e}\) are different from each other. Combining these results and Theorem 3.1 yields
\\[\\overline{\\boldsymbol{y}}_{+}^{(a,b)}(k)\\stackrel{{ k}}{{\\sim}} \\frac{\\rho\\sum_{k=0}^{\\infty}\\boldsymbol{y}_{+}^{(a,b)}(k)\\boldsymbol{e}}{b- \\rho}\\boldsymbol{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda)=\\frac{\\rho}{b- \\rho}\\boldsymbol{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\]
which shows that (5.22) holds.
Next we prove (5.23). From (5.14), we have for \\(k\\geq a\\),
\\[\\overline{\\boldsymbol{y}}^{(a,b)}(k) =\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\overline{\\boldsymbol{P}}_{ \\rm e}(k-a)\\] \\[\\qquad+\\frac{h}{\\eta}\\overline{\\boldsymbol{y}_{+}^{(a,b)}* \\boldsymbol{P}}_{\\rm e}(k)-\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b )}(l)\\overline{\\boldsymbol{P}}_{\\rm e}(k-l).\\]
Applying (5.19), (5.22) and Proposition 2.4 to the above equation and using the long-tailed property of \\(H_{\\rm e}\\), we obtain
\\[\\lim_{k\\to\\infty}\\frac{\\overline{\\boldsymbol{y}}^{(a,b)}(k)}{ \\overline{H}_{\\rm e}(k/\\lambda)} =\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l)\\left[ (-\\boldsymbol{C})^{-1}\\boldsymbol{D}\\right]^{a-l}\\boldsymbol{e}\\boldsymbol{ \\varpi}-\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l) \\boldsymbol{e}\\boldsymbol{\\varpi}\\] \\[\\qquad+\\frac{h}{\\eta}\\left[\\sum_{l=0}^{\\infty}\\boldsymbol{y}_{+}^ {(a,b)}(l)\\boldsymbol{e}\\boldsymbol{\\varpi}+\\frac{\\rho}{b-\\rho}\\boldsymbol{ \\varpi}\\sum_{l=0}^{\\infty}\\boldsymbol{P}_{\\rm e}(l)\\right]\\] \\[=\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_{+}^{(a,b)}(l) \\boldsymbol{e}\\boldsymbol{\\varpi}-\\frac{h}{\\eta}\\sum_{l=0}^{a}\\boldsymbol{y}_ {+}^{(a,b)}(l)\\boldsymbol{e}\\boldsymbol{\\varpi}+\\frac{h}{\\eta}\\left[ \\boldsymbol{\\varpi}+\\frac{\\rho}{b-\\rho}\\boldsymbol{\\varpi}\\right]\\] \\[=\\frac{h}{\\eta}\\frac{b}{b-\\rho}\\boldsymbol{\\varpi},\\]where the second equality follows from
\\[(-\\mathbf{C})^{-1}\\mathbf{D}\\mathbf{e}=\\mathbf{e},\\quad\\mathbf{\\varpi}\\sum_{l=0}^{\\infty}\\mathbf{P}_{ \\rm e}(l)=\\mathbf{\\varpi},\\quad\\sum_{l=0}^{\\infty}\\mathbf{y}_{+}^{(a,b)}(l)\\mathbf{e}=1.\\]
The proof is completed. \\(\\Box\\)
**Remark 5.1**: Suppose \(a=b=1\). It then follows that the MAP/GI\({}^{(a,b)}\)/1 queue is reduced to the standard MAP/GI/1 queue, which is a special case of the BMAP/GI/1 queue. Further from (5.8) and (5.10), we have
\\[1=\\frac{h}{\\eta}+\\frac{\\mathbf{y}_{+}^{(1,1)}(0)(-\\mathbf{C})^{-1}\\mathbf{e}}{\\eta}=\\frac{ h}{\\eta}+\\mathbf{y}^{(1,1)}(0)\\mathbf{e}=\\frac{h}{\\eta}+1-\\rho, \\tag{5.25}\\]
where the last equality holds because \\(\\mathbf{y}^{(1,1)}(0)\\mathbf{e}=1-\\rho\\) (due to Little's law). The equation (5.25) yields \\(h/\\eta=\\rho\\). Substituting this into (5.23), we have
\\[\\overline{\\mathbf{y}}^{(1,1)}(k)\\stackrel{{ k}}{{\\sim}}\\frac{\\rho}{1- \\rho}\\mathbf{\\varpi}\\cdot\\overline{H}_{\\rm e}(k/\\lambda),\\]
which is consistent with (4.27) in Theorem 4.2.
## Appendix A Proofs
### Proof of Lemma 3.1
We prove (3.7) only. The proof of (3.8) is omitted because it is similar to that of (3.7).
According to Proposition 2.2, we fix \\(\\varepsilon>0\\) arbitrarily and \\(m_{*}:=m_{*}(\\varepsilon)\\) such that for all \\(m\\geq m_{*}\\) and \\(l=0,1,\\ldots,\\tau-1\\),
\\[\\mathbf{e}(\\tau\\mathbf{\\psi}-\\varepsilon\\mathbf{e}^{\\rm t})\\leq\\sum_{l=0}^{\\tau-1}\\mathbf{L}( \\lfloor m/\\tau\\rfloor\\tau+l)\\leq\\mathbf{e}(\\tau\\mathbf{\\psi}+\\varepsilon\\mathbf{e}^{\\rm t}).\\] (A.1)
Further since \\(\\mathbf{L}(m)\\leq\\mathbf{e}\\mathbf{e}^{\\rm t}\\) for all \\(m\\in\\mathbb{N}\\), it follows from (3.2) and \\(Y\\in\\mathcal{L}\\) that
\\[\\limsup_{k\\to\\infty}\\sum_{m=1}^{m_{*}-1}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m) }{{\\sf P}(Y>k)}\\leq\\sum_{m=1}^{m_{*}-1}\\limsup_{k\\to\\infty}\\frac{\\overline{ \\overline{\\mathbf{A}}}(k+m-1)\\mathbf{e}\\mathbf{e}^{\\rm t}-\\overline{\\overline{\\mathbf{A}}}(k+m )\\mathbf{e}\\mathbf{e}^{\\rm t}}{{\\sf P}(Y>k)}=\\mathbf{O},\\]
and thus
\\[\\lim_{k\\to\\infty}\\sum_{m=1}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m)}{{ \\sf P}(Y>k)}=\\lim_{k\\to\\infty}\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A}}( k+m)\\mathbf{L}(m)}{{\\sf P}(Y>k)}.\\] (A.2)To prove (3.7) it suffices to show that for any fixed \\(\\varepsilon>0\\),
\\[\\limsup_{k\\to\\infty}\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k +m)\\mathbf{L}(m)}{\\mathsf{P}(Y>k)} \\leq\\mathbf{c}_{A}(\\mathbf{\\psi}+\\varepsilon\\mathbf{e}^{\\mathrm{t}}/\\tau),\\] (A.3) \\[\\liminf_{k\\to\\infty}\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A }}(k+m)\\mathbf{L}(m)}{\\mathsf{P}(Y>k)} \\geq\\mathbf{c}_{A}(\\mathbf{\\psi}-\\varepsilon\\mathbf{e}^{\\mathrm{t}}/\\tau).\\] (A.4)
Indeed, letting \\(\\varepsilon\\downarrow 0\\) in (A.3) and (A.4) we obtain
\\[\\lim_{k\\to\\infty}\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k +m)\\mathbf{L}(m)}{\\mathsf{P}(Y>k)}=\\mathbf{c}_{A}\\mathbf{\\psi}=\\frac{\\mathbf{c}_{A}\\mathbf{\\pi}( \\mathbf{I}-\\mathbf{R})(\\mathbf{I}-\\mathbf{\\Phi}(0))}{-\\sigma},\\]
where the second equality is due to (2.6). Substituting the obtained equation into (A.2), we have (3.7).
We first prove (A.3). By definition, \\(\\{\\overline{\\mathbf{A}}(k);k\\in\\mathbb{Z}_{+}\\}\\) is nonincreasing. We thus obtain
\\[\\sum_{m=m_{*}}^{\\infty}\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m) \\leq\\sum_{n=\\lfloor m_{*}/\\tau\\rfloor}^{\\infty}\\sum_{l=0}^{\\tau- 1}\\overline{\\mathbf{A}}(k+n\\tau+l)\\mathbf{L}(n\\tau+l)\\] \\[\\leq\\sum_{n=\\lfloor m_{*}/\\tau\\rfloor}^{\\infty}\\overline{\\mathbf{A}} (k+n\\tau)\\sum_{l=0}^{\\tau-1}\\mathbf{L}(n\\tau+l)\\] \\[\\leq\\sum_{n=\\lfloor m_{*}/\\tau\\rfloor}^{\\infty}\\frac{1}{\\tau} \\sum_{i=0}^{\\tau-1}\\overline{\\mathbf{A}}(k+n\\tau-i)\\cdot\\sum_{l=0}^{\\tau-1}\\mathbf{L} (n\\tau+l).\\]
Substituting (A.1) into the above inequality yields
\\[\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m)}{ \\mathsf{P}(Y>k)} \\leq\\sum_{n=\\lfloor m_{*}/\\tau\\rfloor}^{\\infty}\\sum_{i=0}^{\\tau- 1}\\frac{\\overline{\\mathbf{A}}(k+n\\tau-i)\\mathbf{e}}{\\mathsf{P}(Y>k)}(\\mathbf{\\psi}+ \\varepsilon\\mathbf{e}^{\\mathrm{t}}/\\tau).\\] \\[=\\frac{\\overline{\\overline{\\mathbf{A}}}(k+\\lfloor m_{*}/\\tau\\rfloor \\tau-\\tau)\\mathbf{e}}{\\mathsf{P}(Y>k)}(\\mathbf{\\psi}+\\varepsilon\\mathbf{e}^{\\mathrm{t}}/ \\tau).\\] (A.5)
From (A.5), (3.2) and \\(Y\\in\\mathcal{L}\\), we have (A.3).
Next we prove (A.4). Since \\(\\{\\overline{\\mathbf{A}}(k)\\}\\) is nonincreasing, we have
\\[\\sum_{m=m_{*}}^{\\infty}\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m) \\geq\\sum_{n=\\lceil m_{*}/\\tau\\rceil}^{\\infty}\\sum_{l=0}^{\\tau-1} \\overline{\\mathbf{A}}(k+n\\tau+l)\\mathbf{L}(n\\tau+l)\\] \\[\\geq\\sum_{n=\\lceil m_{*}/\\tau\\rceil}^{\\infty}\\overline{\\mathbf{A}}(k +n\\tau+\\tau+1)\\sum_{l=0}^{\\tau-1}\\mathbf{L}(n\\tau+l)\\] \\[\\geq\\sum_{n=\\lceil m_{*}/\\tau\\rceil}^{\\infty}\\frac{1}{\\tau}\\sum_ {i=1}^{\\tau}\\overline{\\mathbf{A}}(k+n\\tau+\\tau+i)\\cdot\\sum_{l=0}^{\\tau-1}\\mathbf{L}(n \\tau+l).\\]Combining this with (A.1) yields
\\[\\sum_{m=m_{*}}^{\\infty}\\frac{\\overline{\\mathbf{A}}(k+m)\\mathbf{L}(m)}{{\\sf P} (Y>k)} \\geq\\sum_{n=\\lceil m_{*}/\\tau\\rceil+1}^{\\infty}\\sum_{i=1}^{\\tau} \\frac{\\overline{\\mathbf{A}}(k+n\\tau+i)\\mathbf{e}}{{\\sf P}(Y>k)}(\\mathbf{\\psi}-\\varepsilon \\mathbf{e}^{\\rm t}/\\tau).\\] \\[=\\frac{\\overline{\\overline{\\mathbf{A}}}(k+\\lceil m_{*}/\\tau\\rceil\\tau+ \\tau)\\mathbf{e}}{{\\sf P}(Y>k)}(\\mathbf{\\psi}-\\varepsilon\\mathbf{e}^{\\rm t}/\\tau).\\]
Therefore similarly to (A.3), we can obtain (A.4). \\(\\Box\\)
## Appendix B Cumulative process sampled at heavy-tailed random times
This section summarizes some of the results presented in Masuyama (2013), which are used in Sections 4 and 5.
Let \\(\\{B(t);t\\geq 0\\}\\) denote a stochastic process on \\((-\\infty,\\infty)\\), where \\(|B(0)|<\\infty\\) with probability one (w.p.1). We assume that there exist regenerative points \\(0\\leq\\tau_{0}<\\tau_{1}<\\tau_{2}<\\cdots\\) such that \\(\\{B(t+\\tau_{n})-B(\\tau_{n});t\\geq 0\\}\\) (\\(n\\in\\mathbb{Z}_{+}\\)) is independent of \\(\\{B(u);0\\leq u<\\tau_{n}\\}\\) and is stochastically equivalent to \\(\\{B(t+\\tau_{0})-B(\\tau_{0});t\\geq 0\\}\\). The process \\(\\{B(t);t\\geq 0\\}\\) is called _(regenerative) cumulative process_, which is introduced by Smith (1955).
Let \\(\\Delta\\tau_{0}=\\tau_{0}\\) and \\(\\Delta\\tau_{n}=\\tau_{n}-\\tau_{n-1}\\) for \\(n\\in\\mathbb{N}\\). Let
\\[\\Delta B_{n}=\\left\\{\\begin{array}{ll}B(\\tau_{0}),&n=0,\\\\ B(\\tau_{n})-B(\\tau_{n-1}),&n\\in\\mathbb{N},\\end{array}\\right.\\Delta B_{n}^{*}= \\left\\{\\begin{array}{ll}\\sup_{0\\leq t\\leq\\tau_{0}}\\max(B(t),0),&n=0,\\\\ \\sup_{\\tau_{n-1}\\leq t\\leq\\tau_{n}}B(t)-B(\\tau_{n-1}),&n\\in\\mathbb{N}.\\end{array}\\right.\\]
It is easy to see that \\(\\Delta B_{n}^{*}\\geq\\Delta B_{n}\\) for \\(n\\in\\mathbb{Z}_{+}\\) and that \\(\\{\\Delta\\tau_{n};n\\in\\mathbb{N}\\}\\) (resp. \\(\\{\\Delta B_{n};n\\in\\mathbb{N}\\}\\) and \\(\\{\\Delta B_{n}^{*};n\\in\\mathbb{N}\\}\\)) is a sequence of i.i.d. random variables, which is independent of \\(\\Delta\\tau_{0}\\) (resp. \\(\\Delta B_{0}\\) and \\(\\Delta B_{0}^{*}\\)).
**Remark B.1**: The counting process \\(\\{N(t);t\\geq 0\\}\\) of BMAP \\(\\{\\mathbf{C},\\mathbf{D}(1),\\mathbf{D}(2),\\dots\\}\\) is a cumulative process such that regenerative points are hitting times to any fixed background state and the regenerative cycle follows a phase-type distribution (see equations (3.3)-(3.5) in Masuyama 2013).
We now assume that
\\[{\\sf P}(0\\leq\\Delta\\tau_{n}<\\infty)={\\sf P}(0\\leq\\Delta B_{n}^{*} <\\infty)=1\\ \\ (n=0,1),\\] \\[{\\sf E}[|\\Delta B_{1}|]<\\infty,\\ \\ \\ 0<{\\sf E}[\\Delta\\tau_{1}]< \\infty,\\ \\ \\ b:={\\sf E}[\\Delta B_{1}]\\big{/}{\\sf E}[\\Delta\\tau_{1}]>0.\\]
We then obtain the following results.
**Proposition B.1** (Masuyama 2013, Theorem 3.3): _Suppose that \(T\) is a nonnegative random variable independent of \(\{B(t);t\geq 0\}\). Further suppose that (i) \(T\in\mathcal{L}^{\mu}\) for some \(\mu\geq 2\); (ii) \({\sf E}[(\Delta\tau_{1})^{2}]<\infty\); and (iii) \({\sf E}[\exp\{Q(\Delta B_{n}^{*})\}]<\infty\) (\(n=0,1\)) for some cumulative hazard function \(Q\in{\cal SC}\) such that \(x^{1/\mu}=O(Q(x))\). We then have \({\sf P}(B(T)>bx)\stackrel{{ x}}{{\sim}}{\sf P}(T>x)\)._
**Corollary B.1**: _Suppose that \(T\) is a nonnegative random variable independent of \(\{(N(t),J(t));t\geq 0\}\), where \(\{N(t)\}\) and \(\{J(t)\}\) denote the counting process and the background Markov chain, respectively, of BMAP \(\{\mathbf{C},\mathbf{D}(1),\mathbf{D}(2),\dots\}\) introduced in subsection 4.1. Suppose that (i) \(T\in\mathcal{L}^{\mu}\) for some \(\mu\geq 2\); and (ii) \(\sum_{k=1}^{\infty}\exp\{Q(k)\}\mathbf{D}(k)<\infty\) for some cumulative hazard function \(Q\in\mathcal{SC}\) such that \(x^{1/\mu}=O(Q(x))\). We then have \(\mathsf{P}(N(T)>k)\stackrel{{ k}}{{\sim}}\mathsf{P}(T>k/\lambda)\)._
_Proof._ It suffices to prove that conditions (i)-(iii) of Proposition B.1 are satisfied. For this purpose, fix \\(B(t)=N(t)\\) for \\(t\\geq 0\\). Since the regenerative cycle follows a phase-type distribution (see Remark B.1), we have \\(\\mathsf{E}[(\\Delta\\tau_{1})^{2}]<\\infty\\). Further since \\(\\{B(t)=N(t);t\\geq 0\\}\\) is nondecreasing, we have \\(\\Delta B_{n}^{*}=\\Delta B_{n}\\) for all \\(n\\in\\mathbb{Z}_{+}\\). Therefore it follows from the renewal reward theorem (see, e.g., Wolff 1989, Chapter 2, Theorem 2) that
\\[\\frac{\\mathsf{E}[\\Delta B_{1}^{*}]}{\\mathsf{E}[\\Delta\\tau_{1}]}=\\lambda\\in(0,\\infty),\\qquad\\frac{\\mathsf{E}[\\exp\\{Q(\\Delta B_{1}^{*})\\}]}{\\mathsf{E}[ \\Delta\\tau_{1}]}=\\boldsymbol{\\pi}\\sum_{k=1}^{\\infty}\\exp\\{Q(k)\\}\\mathbf{D}(k)\\mathbf{e}<\\infty,\\]
which lead to \\(\\mathsf{E}[\\exp\\{Q(\\Delta B_{1}^{*})\\}]<\\infty\\) and thus \\(\\mathsf{E}[(\\Delta B_{1})^{2}]<\\infty\\).
It remains to prove \\(\\mathsf{E}[\\exp\\{Q(\\Delta B_{0}^{*})\\}]<\\infty\\). Let \\(i_{0}\\) denote the background state at regenerative points, i.e., \\(J(\\tau_{n})=i_{0}\\) for all \\(n\\in\\mathbb{Z}_{+}\\). Suppose that there exists some \\(i\\in\\mathbb{M}\\) such that
\\[\\mathsf{E}[\\exp\\{Q(N(\\tau_{0}))\\}\\cdot\\mbox{\\rm 1l}(J(\\tau_{0})=i_{0})\\mid J(0)=i]=\\infty,\\] (B.1)
where \\(\\tau_{0}=\\inf\\{t\\geq 0;J(t)=i_{0}\\}\\). Let \\(T_{i}^{\\geqslant\\tau_{0}}=\\inf\\{t\\geq\\tau_{0};J(t)=i\\}\\). Since the background Markov chain is irreducible, we have
\\[\\mathsf{P}(T_{i}^{\\geqslant\\tau_{0}}<\\tau_{1}\\mid J(\\tau_{0})=i_{0})>0,\\] (B.2)
where \\(\\tau_{1}=\\inf\\{t\\geq\\tau_{0};J(t)=i_{0}\\}\\). It follows from \\(\\Delta B_{1}^{*}=N(\\tau_{1})-N(\\tau_{0})\\), (B.1) and (B.2) that
\\[\\mathsf{E}[\\exp\\{Q(\\Delta B_{1}^{*})\\}] =\\mathsf{E}[\\exp\\{Q(N(\\tau_{1})-N(\\tau_{0}))\\}]\\] \\[\\geq\\mathsf{P}(T_{i}^{\\geqslant\\tau_{0}}<\\tau_{1}\\mid J(\\tau_{0})= i_{0})\\] \\[\\qquad\\times\\mathsf{E}[\\exp\\{Q(N(\\tau_{1})-N(T_{i}^{\\geqslant \\tau_{0}}))\\}\\mid J(T_{i}^{\\geqslant\\tau_{0}})=i,T_{i}^{\\geqslant\\tau_{0}}< \\tau_{1}]\\] \\[=\\mathsf{P}(T_{i}^{\\geqslant\\tau_{0}}<\\tau_{1}\\mid J(\\tau_{0})=i_ {0})\\] \\[\\qquad\\times\\mathsf{E}[\\exp\\{Q(N(\\tau_{0}))\\}\\mid J(0)=i]=\\infty,\\]
which is inconsistent with \(\mathsf{E}[\exp\{Q(\Delta B_{1}^{*})\}]<\infty\). Thus (B.1) is not true. As a result, for any \(i\in\mathbb{M}\), we have \(\mathsf{E}[\exp\{Q(N(\tau_{0}))\}\cdot\mbox{\rm 1l}(J(\tau_{0})=i_{0})\mid J(0)=i]<\infty\), which implies that \(\mathsf{E}[\exp\{Q(\Delta B_{0}^{*})\}]<\infty\). \(\Box\)
A similar result is presented in Masuyama (2013).
**Proposition B.2** (Masuyama 2013, Corollary 3.1): _Suppose that \(T\) is a nonnegative random variable independent of \(\{(N(t),J(t));t\geq 0\}\), where \(\{N(t)\}\) and \(\{J(t)\}\) denote the counting process and the background Markov chain, respectively, of BMAP \(\{\mathbf{C},\mathbf{D}(1),\mathbf{D}(2),\dots\}\) introduced in subsection 4.1. Suppose that (i) \(T\in\mathcal{C}\); (ii) \(\mathsf{E}[T]<\infty\); and (iii) \(\overline{\mathbf{D}}(k)=o(\mathsf{P}(T>k))\). We then have \(\mathsf{P}(N(T)>k)\stackrel{{ k}}{{\sim}}\mathsf{P}(T>k/\lambda)\)._
## Acknowledgments
Research of the author was supported in part by Grant-in-Aid for Young Scientists (B) of Japan Society for the Promotion of Science under Grant No. 24710165.
## References
* Aleskeviciene et al. (2008) Aleskeviciene, A., Leipus, R., & Siaulys, J. (2008). Tail behavior of random sums under consistent variation with applications to the compound renewal risk model. _Extremes, 11_(3), 261-279.
* Asmussen (2003) Asmussen, S. (2003). _Applied Probability and Queues_ (2nd ed.). New York: Springer.
* Asmussen & Moller (1999) Asmussen, S., & Moller J. R. (1999). Tail asymptotics for M/G/1 type queueing processes with subexponential increments. _Queueing Systems, 33_(1-3), 153-176.
* Asmussen et al. (1999) Asmussen, S., Kluppelberg, C., & Sigman, K. (1999). Sampling at subexponential times, with queueing applications. _Stochastic Processes and their Applications, 79_(2), 265-286.
* Asmussen et al. (2003) Asmussen, S., Foss, S., & Korshunov, D. (2003). Asymptotics for sums of random variables with local subexponential behaviour. _Journal of Theoretical Probability, 16_(2), 489-518.
* Cinlar (1975) Cinlar, E. (1975). _Introduction to Stochastic Processes_. Englewood Cliffs, NJ: Prentice-Hall.
* Goldie & Kluppelberg (1998) Goldie, C. M., & Kluppelberg, C. (1998). Subexponential distributions. In R. J. Adler, R. E. Feldman, & M. S. Taqqu (Eds), A Practical Guide to Heavy Tails: Statistical Techniques and Applications (pp. 435-459). Boston: Birkhauser.
* He (2014) He, Q. M. (2014). _Fundamentals of Matrix-Analytic Methods_. New York: Springer.
* Jelenkovic & Lazar (1998) Jelenkovic, P. R., & Lazar, A. A. (1998). Subexponential asymptotics of a Markov-modulated random walk with queueing applications. _Journal of Applied Probability, 35_(2), 325-347.
* Kim & Kim (2012) Kim, B., & Kim, J. (2012). A note on the subexponential asymptotics of the stationary distribution of M/G/1 type Markov chains. _European Journal of Operational Research, 220_(1), 132-134.
* Kimura et al. (2010) Kimura, T., Daikoku, K., Masuyama, H., & Takahashi, Y. (2010). Light-tailed asymptotics of stationary tail probability vectors of Markov chains of M/G/1 type. _Stochastic Models, 26_(4), 505-548.
* Kimura et al. (2013) Kimura, T., Masuyama, H., & Takahashi, Y. (2013). Subexponential asymptotics of the stationary distributions of GI/G/1-type Markov chains. _Stochastic Models, 29_(2), 190-239.
* Li & Zhao (2005) Li, Q. L., & Zhao, Y. Q. (2005). Heavy-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. _Advances in Applied Probability, 37_(2), 482-509.
* Loynes (1962) Loynes, R. M. (1962). The stability of a queue with non-independent inter-arrival and service times. _Mathematical Proceedings of the Cambridge Philosophical Society, 58_(3), 497-520.
* Lucantoni (1991) Lucantoni, D. M. (1991). New results on the single server queue with a batch Markovian arrival process. _Stochastic Models, 7_(1), 1-46.
* Masuyama (2011) Masuyama, H. (2011). Subexponential asymptotics of the stationary distributions of M/G/1-type Markov chains. _European Journal of Operational Research, 213_(3), 509-516.
* Masuyama (2013) Masuyama, H. (2013). Tail asymptotics for cumulative processes sampled at heavy-tailed random times with applications to queueing models in Markovian environments. _Journal of the Operations Research Society of Japan, 56_(4), 257-308.
* Masuyama et al. (2009) Masuyama, H., Liu, B., & Takine, T. (2009). Subexponential asymptotics of the BMAP/GI/1 queue. _Journal of the Operations Research Society of Japan, 52_(4), 377-401.
* Shneer (2006) Shneer, V. V. (2006). Estimates for interval probabilities of the sums of random variables with locally subexponential distributions. _Siberian Mathematical Journal, 47_(4), 779-786.
* Sigman (1999) Sigman, K. (1999). Appendix: A primer on heavy-tailed distributions. _Queueing Systems, 33_(1-3), 261-275.
* Singh et al. (2013) Singh, G., Gupta, U. C., & Chaudhry, M. L. (2013). Computational analysis of bulk service queue with Markovian arrival process: MAP/R\\({}^{(a,b)}\\)/1 queue. _Opsearch, 50_(4), 582-603.
* Smith (1955) Smith, W. L. (1955). Regenerative stochastic processes. _Proceedings of the Royal Society of London, Series A, 232_(1188), 6-31.
* Takine (2000) Takine, T. (2000). A new recursion for the queue length distribution in the stationary BMAP/G/1 queue. _Stochastic Models, 16_(2), 335-341.
* Takine (2004) Takine, T. (2004). Geometric and subexponential asymptotics of Markov chains of M/G/1 type. _Mathematics of Operations Research, 29_(3), 624-648.
* Wolff (1989) Wolff, R. W. (1989). _Stochastic Modeling and the Theory of Queues_. Englewood Cliffs, NJ: Prentice Hall.

**Abstract:** The main contribution of this paper is to present a new sufficient condition for the subexponential asymptotics of the stationary distribution of a GI/G/1-type Markov chain without jumps from level "infinity" to level zero. For simplicity, we call such Markov chains _GI/G/1-type Markov chains without disasters_ because they are often used to analyze semi-Markovian queues without "disasters", which are negative customers who remove all the customers in the system (including themselves) on their arrivals. In this paper, we demonstrate the application of our main result to the stationary queue length distribution in the standard BMAP/GI/1 queue. Thus we obtain new asymptotic formulas and prove the existing formulas under weaker conditions than those in the literature. In addition, applying our main result to a single-server queue with Markovian arrivals and the \((a,b)\)-bulk-service rule (i.e., \(\mathrm{MAP}/\mathrm{GI}^{(a,b)}/1\) queue), we obtain a subexponential asymptotic formula for the stationary queue length distribution.
**Keywords:** Subexponential asymptotics; GI/G/1-type Markov chain; disaster; bulk service; BMAP/GI/1 queue; \\(\\mathrm{MAP}/\\mathrm{GI}^{(a,b)}/1\\) queue
**Mathematics Subject Classification:** Primary 60K25; Secondary 60J10
arxiv-format/1808_02601v2.md | Constraints on the hybrid equation of state with a crossover hadron-quark phase transition in the light of GW170817
Cheng-Ming Li\\({}^{1,6}\\)
[email protected]
Yan Yan\\({}^{2}\\)
[email protected]
Jin-Jun Geng\\({}^{3}\\)
[email protected]
Yong-Feng Huang\\({}^{3}\\)
[email protected]
Hong-Shi Zong\\({}^{1,4,5,6}\\)
[email protected] \\({}^{1}\\) Department of Physics, Nanjing University, Nanjing 210093, China \\({}^{2}\\) School of mathematics and physics, Changzhou University, Changzhou, Jiang 213164, China \\({}^{3}\\) School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China \\({}^{4}\\) Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing 210093, China \\({}^{5}\\) State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, CAS, Beijing, 100190, China \\({}^{6}\\) Nanjing Proton Source Research and Design Center, Nanjing 210093, China
## I Introduction
The simultaneous direct detection of the gravitational wave (GW) and its electromagnetic counterpart by the LIGO-VIRGO collaboration [1] and \(\sim\)70 astronomical detectors [2] opens a new era of multi-messenger astronomy. All these observations indicate that the event GW170817 is related to a binary neutron star (BNS) merger. Many follow-up works have appeared since then. In Ref. [3], the central engine of the short gamma-ray burst (GRB) was studied, while in Ref. [4] heavy elements and their abundance in the universe were investigated. In addition, the internal structure of neutron stars (NSs) has also been studied with thorough analyses of the new data [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], but definitive answers are still difficult to find.
It is believed that with more observations of GW events in the future, a better understanding of and constraint on the equation of state (EOS) can be achieved, thus considerably promoting research on dense nuclear matter physics [19]. In fact, during the inspiral phase, a star can exert a static tidal field on its companion in the binary, and the quadrupolar response to this field is characterized by the EOS-dependent tidal deformability (TD) parameter. In Refs. [20; 21; 22; 23], the authors demonstrate the connection between this parameter and the inspiral signal of the GW. From the observational data of GW170817, the LIGO-VIRGO collaboration provided a constraint on the dimensionless TD for 1.4 \(M_{\odot}\) as \(\Lambda(1.4M_{\odot})\leq 800\) [1]. The upper limit is revised to be 900 for a low-spin prior in the recent paper [24]. This restriction considerably influences the study of pure hadronic NSs [6], quark stars [11], and hybrid stars (HSs) [9; 15]. It is noteworthy that for the study of HSs, different aspects of and approaches to the hadron-quark phase transition will lead to different results. For example, in Ref. [9], a first-order phase transition is considered with the parametrization approach, while in Ref. [15] a smooth phase transition with the Gibbs construction is adopted.
Different from the Gibbs construction, the three-window interpolating approach corresponds to a crossover hadron-quark phase transition, and the resulting EOS can be differentiated to infinite order in the transition region. In addition, this interpolating approach is feasible especially when we demand small radii of NSs, i.e. \(R\lesssim 13\) km, or an EOS that is soft at low density but stiff at high density [25]. Considering the possibility of HSs with a crossover between hadronic matter and quark matter inside [25; 26; 27; 28; 29; 30; 31], it is reasonable to evaluate the influence of the TD constraint on stars of this type. Thus, in this paper, we will investigate the constraint on HSs constructed with the three-window interpolating approach [26; 27] to connect the quark phase and the hadronic phase, which are described by the 2+1 flavors NJL model [32; 33; 34; 35; 36] and the relativistic mean field (RMF) NL3\(\omega\rho\) model [37; 38], respectively. It is noteworthy that many studies [25; 27; 28; 29; 30] with this approach have obtained good results for the mass of HSs, namely, a maximum mass compatible with 2 \(M_{\odot}\). However, the choice of the interpolating parameters \((\tilde{\mu},\Gamma)\) seems somewhat arbitrary in the relevant studies. With the recently updated source properties of GW170817 [24] as well as four other constraints on hybrid EOSs (the mass constraint from PSR J0348+0432 [39], the studies of the hadron-quark transition in Refs. [40; 41] implying that \(\mu_{\rm deconfinement}>\mu_{\rm ChiralRestoration}\sim 1\) GeV at zero temperature with finite chemical potential, the stability of hybrid EOSs [29], and the stability of the heaviest HS), we try to restrict the parameter space and demonstrate the type of the two stars in the binary for nine representative hybrid EOSs.
The article is organized as follows: In Sec. II, we present the EOS of hadronic matter at low densities and calculate the EOS of quark matter at high densities. A link between the two phases via the three-window interpolating approach is also introduced. Then the methods of constraining the parameters are presented in Sec. III. In Sec. IV, we give the resulting hybrid EOSs and their restricted parameter space. A brief summary and discussion are provided in Sec. V. Finally, detailed derivations and calculations of the quark condensate are presented in Appendix VI.
## II Construction of the hybrid EOS
### EOS of hadronic matter
The RMF model NL3\\(\\omega\\rho\\)[37; 38] is very successful in describing the confined hadronic matter in beta-equilibrium. The Lagrangian of it reads
\\[\\mathcal{L} = \\sum_{N=p,n}\\bar{\\psi}_{N}[\\gamma^{\\mu}(i\\partial_{\\mu}-g_{\\omega N }\\omega_{\\mu}-\\frac{g_{\\rho N}}{2}\\tau\\cdot\\rho_{\\mu})-(m_{N}-g_{\\sigma N} \\sigma)]\\psi_{N} \\tag{1}\\] \\[+ \\frac{1}{2}\\partial_{\\mu}\\sigma\\partial^{\\mu}\\sigma-\\frac{1}{2} m_{\\sigma}^{2}\\sigma^{2}-\\frac{1}{4}\\Omega^{\\mu\
u}\\Omega_{\\mu\
u}+\\frac{1}{2}m_{ \\omega}^{2}\\omega_{\\mu}\\omega^{\\mu}-\\frac{1}{4}\\rho^{\\mu\
u}\\cdot\\rho_{\\mu \
u}+\\frac{1}{2}m_{\\rho}^{2}\\rho^{\\mu}\\cdot\\rho_{\\mu}\\] \\[- \\frac{1}{3}bm_{N}(g_{\\sigma N}\\sigma)^{3}-\\frac{1}{4}c(g_{\\sigma N }\\sigma)^{4}+\\Lambda_{\\omega}(g_{\\omega}^{2}\\omega_{\\mu}\\omega^{\\mu})(g_{\\rho }^{2}\\rho_{\\mu}\\cdot\\rho^{\\mu}).\\]
Compared with the RMF model NL3, this Lagrangian has one more term, i.e., the nonlinear \(\omega\rho\) term, resulting in a softer density dependence of the symmetry energy. In addition, the exclusion of a quartic term in the \(\omega\)-meson makes the EOS of the NL3\(\omega\rho\) model very stiff at large densities. Thus the neutron star constructed with NL3\(\omega\rho\) has a very large maximum mass, which is calculated to be about 2.75 solar masses (\(M_{\odot}\)), well above the 2.01\(\pm\)0.04 M\({}_{\odot}\) constraint of PSR J0348+0432 [39]. In Ref. [37], we can see from calculations of microscopic neutron matter that this model is compatible with various critical constraints: theoretical, experimental and astrophysical. The saturation properties of NL3\(\omega\rho\) are as follows: saturation density \(\rho_{0}=0.148\) fm\({}^{-3}\), energy per nucleon \(E/A=-16.2\) MeV, incompressibility \(K=271.6\) MeV, symmetry energy \(J=31.7\) MeV, and slope of the symmetry energy \(L=55.5\) MeV.
It is known that the structure of a neutron star can be divided into four parts, that is, the envelope, the outer crust, the inner crust and the liquid core as the energy density increases. The envelope of the neutron star, with energy density smaller than \(10^{6}\) g/cm\({}^{3}\), possesses a tiny mass (\(10^{-10}\)\(M_{\odot}\)), and its conformation and structure can also be affected by many factors such as strong magnetic fields [42] and the accretion of interstellar matter. Therefore, in this paper, we will restrict our calculation to \(\epsilon>10^{6}\) g/cm\({}^{3}\). Then, to build an EOS for the hadronic matter, in the outer crust where \(\rho<3\times 10^{-4}\) fm\({}^{-3}\) we employ the Baym-Pethick-Sutherland (BPS) EOS, which describes the nuclear matter in this region quite well; in the inner crust and the core where \(\rho>3\times 10^{-4}\) fm\({}^{-3}\) we adopt the NL3\(\omega\rho\) EOS, which characterizes the properties of hadronic matter in this region very well. Meanwhile, these two EOSs intersect at the density of \(3\times 10^{-4}\) fm\({}^{-3}\). As a result, the maximum mass of the neutron star calculated with this hadronic EOS is about 2.754 \(M_{\odot}\) with a radius \(R=13.01\) km, which also implies a very small mass for the outer crust. In addition, we do not consider the contribution of hyperons in this paper because the interactions among them are complicated and still unknown.
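As a rough illustration of how the crust and core pieces are glued together, the sketch below (our own; the table layout is an assumption, not the actual construction code) simply switches from the BPS table to the NL3\(\omega\rho\) table at the matching density:

```python
import numpy as np

RHO_MATCH = 3.0e-4  # fm^-3, crust-core matching density quoted in the text

def combined_hadronic_eos(bps_table, nl3wr_table, rho_match=RHO_MATCH):
    """Glue the BPS (outer crust) and NL3-omega-rho (inner crust + core) EOS tables.
    Each table is assumed to be an (N, 3) array with columns (rho [fm^-3], P, epsilon)."""
    low = bps_table[bps_table[:, 0] < rho_match]         # BPS rows below the matching density
    high = nl3wr_table[nl3wr_table[:, 0] >= rho_match]   # NL3-omega-rho rows above it
    return np.vstack([low, high])                         # combined table, ordered in density
```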
### EOS of quark matter
The Lagrangian of the 2+1 flavors NJL model has the general form
\\[\\mathcal{L}= \\bar{\\psi}(i\
ot{\\partial}-m)\\psi+\\sum_{\\rm i=0}^{8}G[(\\bar{\\psi }\\lambda_{\\rm i}\\psi)^{2}+(\\bar{\\psi}i\\gamma_{5}\\lambda_{\\rm i}\\psi)^{2}]\\] \\[-K\\left(\\det[\\bar{\\psi}(1+\\gamma_{5})\\psi]+\\det[\\bar{\\psi}(1- \\gamma_{5})\\psi]\\right), \\tag{2}\\]
here \\(G\\) and \\(K\\) are four-fermion and six-fermion coupling constant, respectively; \\(\\lambda^{\\rm i},{\\rm i}=1\\to 8\\) is the Gell-Mann matrix and \\(\\lambda^{0}=\\sqrt{\\frac{2}{3}}\\)\\(I\\) (\\(I\\) is the identity matrix). In this model, the quark propagator \\(S_{\\rm i}\\) can be expressed as
\\[S_{\\rm i}(p^{2})=\\frac{1}{\
ot{p}-M_{\\rm i}}, \\tag{3}\\]
where the subscript \\({\\rm i}=u,d,s\\) denotes the flavor of the quark and \\(M_{\\rm i}\\) represents the constituent quark mass. Then the gap equation can be derived with the mean field approximation as
\\[M_{\\rm i}=m_{\\rm i}-4G\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i}+2K\\langle\\bar{\\psi} \\psi\\rangle_{\\rm j}\\langle\\bar{\\psi}\\psi\\rangle_{\\rm k}. \\tag{4}\\]
Here \\(\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i}\\) and \\(m_{\\rm i}\\) are the quark condensate and current quark mass of flavor i respectively, and (i, j, k) is a permutation of \\((u,d,s)\\). On account of the isospin symmetry between u and d quark in 2+1 flavors NJL model, we can obtain that \\(M_{\\rm u}=M_{\\rm d}\\), \\(\\langle\\bar{\\psi}\\psi\\rangle_{\\rm u}=\\langle\\bar{\\psi}\\psi\\rangle_{\\rm d}\\) and \\(m_{\\rm u}=m_{\\rm d}\\). By definition, the quark condensate is
\\[\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i} = -\\int\\frac{{\\rm d}^{4}p}{(2\\pi)^{4}}{\\rm Tr}[iS^{\\rm i}(p^{2})] \\tag{5}\\] \\[= -N_{\\rm c}\\int_{-\\infty}^{+\\infty}\\frac{{\\rm d}^{4}p}{(2\\pi)^{4} }\\frac{4iM_{\\rm i}}{p^{2}-M_{\\rm i}^{2}}.\\]
The trace \"Tr\" is performed in Dirac and color spaces. To proceed with the following calculation, we will make a Wick rotation from Minkowski space to Euclidean space and introduce the Proper Time Regularization (PTR). After that, a generalization from zero temperature and chemical potential to zero temperature but finite chemical potential will be made. The detailed definition and derivation can be found in the Appendix VI. Then the quark condensate becomes
\\[\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i} = \\left\\{\\begin{array}{ll}-\\frac{3M_{\\rm i}}{4\\pi^{2}}\\int_{\\tau _{\\rm UV}}^{\\infty}{\\rm d}\\tau\\frac{e^{-\\tau M_{\\rm i}^{2}}}{\\tau^{2}},&(for \\ T=0,\\,\\mu=0)\\\\ -\\frac{3M_{\\rm i}}{\\pi^{2}}\\int_{\\sqrt{\\mu^{2}-M_{\\rm i}^{2}}}^{+\\infty}{\\rm d }p\\frac{\\left[1-{\\rm Erf}(\\sqrt{M_{\\rm i}^{2}+p^{2}}\\sqrt{\\tau_{\\rm UV}}) \\right]p^{2}}{\\sqrt{M_{\\rm i}^{2}+p^{2}}},&(for\\ T=0,\\,\\mu\
eq 0,\\ and\\ M_{\\rm i}<\\mu)\\\\ \\frac{3M_{\\rm i}}{4\\pi^{2}}\\left[-M_{\\rm i}^{2}{\\rm Ei}(-M_{\\rm i}^{2}\\tau_{ \\rm UV})-\\frac{e^{-M_{\\rm i}^{2}\\tau_{\\rm UV}}}{\\tau_{\\rm UV}}\\right],&(for \\ T=0,\\,\\mu\
eq 0,\\ and\\ M_{\\rm i}>\\mu)\\end{array}\\right.\\] \\[\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i} = \\left\\{\\begin{array}{ll}-\\frac{3M_{\\rm i}}{4\\pi^{2}}\\int_{\\tau _{\\rm UV}}^{\\infty}{\\rm d}\\tau\\frac{e^{-\\tau M_{\\rm i}^{2}}}{\\tau^{2}},&(for \\ T=0,\\,\\mu=0)\\\\ -\\frac{3M_{\\rm i}}{\\pi^{2}}\\int_{\\sqrt{\\mu^{2}-M_{\\rm i}^{2}}}^{+\\infty}{\\rm d }p\\frac{\\left[1-{\\rm Erf}(\\sqrt{M_{\\rm i}^{2}+p^{2}}\\sqrt{\\tau_{\\rm UV}}) \\right]p^{2}}{\\sqrt{M_{\\rm i}^{2}+p^{2}}},&(for\\ T=0,\\,\\mu\
eq 0,\\ and\\ M_{\\rm i}<\\mu)\\\\ \\frac{3M_{\\rm i}}{4\\pi^{2}}\\left[-M_{\\rm i}^{2}{\\rm Ei}(-M_{\\rm i}^{2}\\tau_{ \\rm UV})-\\frac{e^{-M_{\\rm i}^{2}\\tau_{\\rm UV}}}{\\tau_{\\rm UV}}\\right],&(for \\ T=0,\\,\\mu\
eq 0,\\ and\\ M_{\\rm i}>\\mu)\\end{array}\\right.
The dynamical quark masses listed in Table I seem abnormally low compared to what is standard in the literature, and the difference between the dynamical mass in vacuum for the strange quark and its critical chemical potential is relatively large (\(\sim\)40 MeV). In the following, we demonstrate the reasons for that: it is well known that the NJL model is not a renormalizable theory, so we need to use an appropriate regularization to eliminate the ultraviolet (UV) divergence. In the framework of the usual NJL model, the three dimensional (3D) momentum cutoff (\(\Lambda_{\rm UV}\)) regularization is often used to realize that. In this regularization scheme, the dynamical quark masses are \(M_{\rm u}\sim 350\) MeV, \(M_{\rm s}\sim 520\) MeV, which are much larger than the corresponding dynamical quark masses obtained herein, and the chiral phase transition in this case for zero temperature and finite chemical potential is first order. It should be pointed out that for a QCD effective model, \(\Lambda_{\rm UV}\) sets the range of applicability of the effective model. Under the normal NJL model framework, the UV cutoff \(\Lambda_{\rm UV}\) is about 630 MeV, which means that the NJL model regularized by the 3D momentum cutoff cannot in principle be used for physical systems with energy scales greater than \(\Lambda_{\rm UV}=630\) MeV. We know that the energy scale involved in the study of neutron stars is about 1 GeV; thus, in this case, we have to abandon the commonly used 3D momentum cutoff and use PTR instead. This is because PTR is not plagued by a hard truncation of the UV momenta. In this scheme, the integration limit \(\tau_{\rm UV}\) is actually a soft cutoff, with the integration variable \(\tau\) appearing in the exponential function, and the UV cutoff \(\Lambda_{\rm UV}=(\tau_{\rm UV})^{-1/2}\) is set to be larger than 1 GeV by fitting the experimental data. Additionally, the chiral phase transition for \(T=0\) with finite chemical potential is a crossover in PTR. From the above we can see that different regularization schemes lead to different results. In fact, a certain regularization approach is already employed in the process of parameter fixing. For example, in Refs. [30; 45; 46], PTR is also used in the NJL model, and the dynamical masses of quarks in these studies (\(M_{\rm u}\sim 210\) MeV, \(M_{\rm s}\sim 400\) MeV) are also much smaller than the usual dynamical quark masses in the normal NJL model (\(M_{\rm u}\sim 350\) MeV, \(M_{\rm s}\sim 520\) MeV). In Fig. 1, we show the quark number densities versus the chemical potential for \(m_{\rm u}=3.3\) MeV and 3.4 MeV, whose corresponding dynamical masses of the \(s\) quark in vacuum are fixed to be about 360 MeV. The difference between the dynamical mass in vacuum and \(\mu_{\rm C}\) for the \(s\) quark is about 40 MeV. For other studies in the framework of the 2+1 flavors NJL model with PTR, such as Refs. [30; 46], the difference is also large, that is, \(\sim 40\) MeV in Ref. [46] and \(\sim 80\) MeV in Ref. [30].
Considering the internal environment of a hybrid star, we have to take the chemical equilibrium and electric charge neutrality into account,
\\[\\left\\{\\begin{array}{l}\\mu_{\\rm d}=\\mu_{\\rm u}+\\mu_{\\rm e}.\\\\ \\mu_{\\rm s}=\\mu_{\\rm u}+\\mu_{\\rm e}.\\\\ \\frac{2}{3}\\rho_{\\rm u}-\\frac{1}{3}\\rho_{\\rm d}-\\frac{1}{3}\\rho_{\\rm s}-\\rho_{ \\rm e}=0.\\end{array}\\right. \\tag{7}\\]
Then we can get the baryon chemical potential dependence of the quark densities, which is presented in Fig. 2. As we can see, for a given flavor of quark, the density dependences on the baryon chemical potential for \(m_{\rm u}=3.3\) MeV and 3.4 MeV are also very similar. The critical baryon chemical potential is \(\mu_{B}^{c}=600\) MeV for the \(u\) and \(d\) quarks, and \(\mu_{B}^{c}=920\) MeV for the \(s\) quark. After the corresponding \(\mu_{B}^{c}\), each density in this figure increases monotonically and smoothly.
By definition, at zero temperature and finite chemical potential, the EOS of QCD can be written as [47; 48]
\\[P(\\mu)=P(\\mu=0)+\\int_{0}^{\\mu}d\\mu^{\\prime}\\rho(\\mu^{\\prime}), \\tag{8}\\]
Figure 2: Considering the chemical equilibrium and electric charge neutrality of the quark system, density of \\(u,d\\) and \\(s\\) quark with parameters fixed for \\(m_{\\rm u}=\\)3.3 MeV, 3.4 MeV are shown respectively. Two red lines nearly coincide and so do two green lines and two blue lines.
Figure 1: Quark number density of \\(u,d\\) and \\(s\\) quark as a function of \\(\\mu\\) at \\(T=0\\) with parameters fixed for \\(m_{\\rm u}=\\)3.3 MeV, 3.4 MeV respectively. Four lines (two red lines and two green lines) nearly coincide and so do the other two blue lines.
here \\(P(\\mu=0)\\) represents the negative pressure of the vacuum, which is taken as a phenomenological, model-dependent parameter. Furthermore, it reflects the confinement of QCD, just as in the MIT bag model. As in Ref. [28], we identify \\(P(\\mu=0)\\) with \\(-B\\) (the vacuum bag constant). From Eq. (8), we can deduce that the similar behavior of the quark densities in the two schemes \\(m_{\\rm u}=3.3\\) MeV and \\(3.4\\) MeV results in similar EOSs of quark matter. Thus we take the scheme \\(m_{\\rm u}=3.4\\) MeV for the following study. Once the value of \\(B\\) is determined, the energy density can be calculated by [49; 50]
\\[\\epsilon=-P+\\sum_{i}\\mu_{i}\\rho_{\\rm i}. \\tag{9}\\]
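To make Eqs. (8) and (9) concrete, the following minimal numerical sketch computes \\(P(\\mu)\\) and \\(\\epsilon(\\mu)\\) from a tabulated quark number density; the density profile, the bag-constant value and the chemical-potential grid are placeholder assumptions for illustration only, not values used in this work.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Placeholder: tabulated total quark number density rho(mu) in GeV^3,
# e.g. as obtained from the gap equations of the 2+1 flavors NJL model.
mu = np.linspace(0.0, 1.8, 600)                            # chemical potential [GeV]
rho = np.where(mu > 0.35, 0.05 * (mu - 0.35) ** 2, 0.0)    # dummy density profile

B_quarter = 0.170          # assumed bag constant B^(1/4) [GeV]
B = B_quarter ** 4         # vacuum bag constant B [GeV^4]

# Eq. (8): P(mu) = P(mu=0) + int_0^mu rho(mu') dmu', with P(mu=0) = -B
P = -B + cumulative_trapezoid(rho, mu, initial=0.0)

# Eq. (9): epsilon = -P + mu * rho (written here for a single effective species)
eps = -P + mu * rho
```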
### Hybrid EOS constructed by a three-window modeling
To obtain a hybrid EOS with a crossover hadron-quark phase transition, we have to employ a suitable interpolating approach to connect the hadronic EOS and the quark EOS. In Refs. [25; 26; 27; 28; 29; 30], a three-window modeling is adopted. In particular, Refs. [25; 26; 29] employ an \\(\\epsilon\\)-interpolation in the \\(\\epsilon-\\rho\\) plane or a \\(P\\)-interpolation in the \\(P-\\rho\\) plane, while Refs. [28; 30] take a \\(P\\)-interpolation in the \\(P-\\mu\\) plane. Just as Ref. [29] states, the three-window modeling is a phenomenological approach. Beyond mere interpolation, different interpolating schemes introduce different additional thermodynamic corrections to the interpolated variables; these corrections have to preserve the thermodynamic consistency between the variables. In fact, any of the interpolating approaches above is applicable. Although the hybrid EOSs in these three schemes involve different variables, they all satisfy thermodynamic consistency in the crossover region and should therefore match each other when compared in the same plane. In addition, for very small or very large densities, the hybrid EOSs revert to the hadronic EOS or the quark EOS, and thus also match each other. In this paper, we use the same interpolating approach as Refs. [28; 30]. By definition, the interpolation function is
\\[P(\\mu) =P_{\\rm H}(\\mu)f_{-}(\\mu)+P_{\\rm Q}(\\mu)f_{+}(\\mu),\\] \\[f_{\\pm}(\\mu) =\\frac{1}{2}(1\\pm\\tanh{(\\frac{\\mu-\\tilde{\\mu}}{\\Gamma})}), \\tag{10}\\]
and the energy density is obtained from the thermodynamic relation
\\[\\epsilon(\\mu) =\\epsilon_{\\rm H}(\\mu)f_{-}(\\mu)+\\epsilon_{\\rm Q}(\\mu)f_{+}(\\mu)+ \\Delta\\epsilon,\\] \\[\\Delta\\epsilon =\\mu(P_{\\rm Q}-P_{\\rm H})g(\\mu), \\tag{11}\\]
where \\(P_{\\rm H}\\) and \\(P_{\\rm Q}\\) denote the pressure in the hadronic phase and the quark phase, respectively. The sigmoid interpolating functions \\(f_{\\pm}\\) realize a smooth DPT in the region \\(\\tilde{\\mu}-\\Gamma\\lesssim\\mu\\lesssim\\tilde{\\mu}+\\Gamma\\), which is called the interpolation window. In this region, hadrons and quarks coexist and interact strongly. In Eq. (11), \\(\\Delta\\epsilon\\) is the additional term that guarantees thermodynamic consistency, with \\(g(\\mu)=\\frac{2}{\\Gamma}(e^{\\rm X}+e^{-\\rm X})^{-2}\\) and \\({\\rm X}=(\\mu-\\tilde{\\mu})/\\Gamma\\). From Eqs. (10) and (11), we can see that there are two parameters in our interpolating procedure: the central baryon chemical potential of the interpolating region, \\(\\tilde{\\mu}\\), and half of the interpolating interval, \\(\\Gamma\\).
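A minimal sketch of this interpolation, implementing Eqs. (10) and (11) directly, is given below; the hadronic and quark pressures and energy densities are passed in as placeholder callables standing in for the NL3\\(\\omega\\rho\\) and NJL results.

```python
import numpy as np

def f_plus(mu, mu_bar, gamma):
    """Sigmoid window function f_+ of Eq. (10)."""
    return 0.5 * (1.0 + np.tanh((mu - mu_bar) / gamma))

def f_minus(mu, mu_bar, gamma):
    """Complementary window function f_- of Eq. (10)."""
    return 1.0 - f_plus(mu, mu_bar, gamma)

def g(mu, mu_bar, gamma):
    """Weight g(mu) entering the correction term of Eq. (11)."""
    x = (mu - mu_bar) / gamma
    return (2.0 / gamma) / (np.exp(x) + np.exp(-x)) ** 2

def hybrid_pressure(mu, P_H, P_Q, mu_bar, gamma):
    """Eq. (10): interpolated pressure."""
    return P_H(mu) * f_minus(mu, mu_bar, gamma) + P_Q(mu) * f_plus(mu, mu_bar, gamma)

def hybrid_energy_density(mu, P_H, P_Q, eps_H, eps_Q, mu_bar, gamma):
    """Eq. (11): interpolated energy density including the Delta epsilon term."""
    delta_eps = mu * (P_Q(mu) - P_H(mu)) * g(mu, mu_bar, gamma)
    return (eps_H(mu) * f_minus(mu, mu_bar, gamma)
            + eps_Q(mu) * f_plus(mu, mu_bar, gamma) + delta_eps)
```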
From the study above, we can conclude that the constructed hybrid EOS contains three undetermined parameters in total, i.e., \\(B\\), \\(\\tilde{\\mu}\\), and \\(\\Gamma\\). Thus we can regard our hybrid EOS as a function of these three parameters.
## III Methods
In our study, we consider the following five constraints to restrict the EOS of hybrid stars:
(1) The mass constraint from PSR J0348+0432 requires the maximum mass of the neutron star to be larger than \\(1.97~{}M_{\\odot}\\) [39].
(2) Because of the uncertainty in \\(\\mu_{\\rm deconfinement}\\), many studies employ the assumption that \\(\\mu_{\\rm deconfinement}\\sim\\mu_{\\rm ChiralRestoration}\\) [26; 27; 28]. However, studies of the QCD phase diagram [40; 41] imply that \\(\\mu_{\\rm deconfinement}>\\mu_{\\rm ChiralRestoration}\\sim 1\\) GeV at zero temperature and finite chemical potential. Thus, in this paper, we take the relatively loose constraint \\(\\tilde{\\mu}-\\Gamma\\geq 1\\) GeV in the hybrid construction.
(3) The latest update of the source properties of GW170817 from the LIGO and Virgo collaborations [24] shows that the combined dimensionless tidal deformability \\(\\tilde{\\Lambda}\\) changes considerably compared with the earlier value, namely \\(\\tilde{\\Lambda}\\sim 280^{+490}_{-190}\\) for the symmetric 90% credible interval and \\(\\tilde{\\Lambda}\\sim 280^{+410}_{-230}\\) for the highest-posterior-density (HPD) 90% credible interval. It is defined as
\\[\\tilde{\\Lambda}=\\frac{16}{13}\\frac{(M_{1}+12M_{2})M_{1}^{4}\\Lambda_{1}+(M_{2}+ 12M_{1})M_{2}^{4}\\Lambda_{2}}{(M_{1}+M_{2})^{5}}. \\tag{12}\\]
Here \\(\\Lambda_{1},\\Lambda_{2}\\) are the tidal deformabilities of the two members of the BNS, and \\(M_{1},M_{2}\\) are the corresponding gravitational masses. The detailed calculation of \\(\\Lambda\\) and its dependence on \\(M\\) can be found in Ref. [22]. With the additional waveform model SEOBNRT, the chirp mass \\(\\mathcal{M}=(M_{1}M_{2})^{3/5}(M_{1}+M_{2})^{-1/5}\\) is fixed to \\(1.186\\pm 0.0001M_{\\odot}\\) (this value determines the relation between \\(M_{1}\\) and \\(M_{2}\\)); a short numerical sketch of Eq. (12) is given after this list.
(4) The stability of the interpolation between the quark EOS and the hadronic EOS demands \\({\\rm d}P/{\\rm d}\\rho>0\\), which is very restrictive for the interpolated EOS [29]. In fact, \\({\\rm d}P/{\\rm d}\\rho\\) is related to the sound velocity of the system, defined as \\(v=\\sqrt{{\\rm d}P/{\\rm d}\\epsilon}\\). Via Eqs. (8) and (9), using \\({\\rm d}P=\\rho\\,{\\rm d}\\mu\\) we have \\({\\rm d}\\epsilon=-{\\rm d}P+\\mu\\,{\\rm d}\\rho+\\rho\\,{\\rm d}\\mu=\\mu\\,{\\rm d}\\rho\\), and therefore \\(v^{2}={\\rm d}P/{\\rm d}\\epsilon=\\frac{1}{\\mu}\\,{\\rm d}P/{\\rm d}\\rho\\). Thus this constraint is equivalent to \\(v^{2}>0\\).
(5) The stability of the hybrid star with maximum mass requires \\(\\mu_{\\rm C}>\\mu_{\\rm BE}\\), where \\(\\mu_{\\rm C}\\) is the baryon chemical potential at the center of the star and \\(\\mu_{\\rm BE}\\) is the baryon chemical potential at the intersection of the quark and hadronic binding energies. For \\(\\mu<\\mu_{\\rm BE}\\), hadronic matter is more stable, with a lower binding energy than quark matter; for \\(\\mu>\\mu_{\\rm BE}\\), the inverse is true. Therefore, \\(\\mu_{\\rm C}>\\mu_{\\rm BE}\\) must be satisfied to forbid the quark matter at the center of the heaviest star from decaying into hadronic matter. Only in this way can the deconfined regime (pure or mixed phase) be reached, so that a hybrid star, rather than a pure neutron star (a scenario we find ruled out by the latest GW170817 data on tidal deformability), can exist.
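For illustration, the sketch below evaluates the combined tidal deformability of Eq. (12) and recovers the companion mass from the fixed chirp mass; the masses and the \\(\\Lambda_{1},\\Lambda_{2}\\) values passed in are placeholders, since in the actual analysis they follow from solving the stellar-structure and tidal equations for a given hybrid EOS.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_tilde(M1, M2, L1, L2):
    """Combined dimensionless tidal deformability, Eq. (12)."""
    num = (M1 + 12.0 * M2) * M1 ** 4 * L1 + (M2 + 12.0 * M1) * M2 ** 4 * L2
    return (16.0 / 13.0) * num / (M1 + M2) ** 5

def companion_mass(M1, chirp=1.186):
    """Solve chirp = (M1*M2)**(3/5) * (M1+M2)**(-1/5) for M2 (solar masses)."""
    def f(M2):
        return (M1 * M2) ** 0.6 / (M1 + M2) ** 0.2 - chirp
    return brentq(f, 0.5, 3.0)

# Placeholder usage with illustrative numbers
M1 = 1.50
M2 = companion_mass(M1)
print(M2, lambda_tilde(M1, M2, 400.0, 600.0))
```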
## IV Results
We choose \\(B^{\\frac{1}{4}}=167\\), 170, and 171 MeV as three representative values to compare the EOSs of quark matter and hadronic matter; the result is shown in Fig. 3. We can see that for a larger value of \\(B^{\\frac{1}{4}}\\), the pressure at a given \\(\\mu_{B}\\) is also larger, but the quark EOSs do not differ much among these three cases. The intersections of the quark EOSs with the NL3\\(\\omega\\rho\\) EOS are located at around \\(\\mu_{B}=1.3\\) GeV. We then calculate the binding energy \\(\\epsilon/\\rho\\) of quark matter for the three representative values of \\(B^{\\frac{1}{4}}\\) and compare the result with that of the NL3\\(\\omega\\rho\\) model, as shown in Fig. 4. From this figure, we find that, for a given density, the binding energy increases with \\(B^{\\frac{1}{4}}\\), and the intersections of the quark and hadronic binding energies are close to \\(\\rho=0.004\\) GeV\\({}^{3}\\). To the left of the intersection, the binding energy of hadrons is smaller than that of quarks, indicating that hadrons are more stable than quarks; to the right of the intersection, conversely, quarks are more stable, with a smaller binding energy than hadrons.
We then extend our study to various hybrid EOS models with different parameter sets \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)\\). With the five constraints of Sec. III, we can obtain a reasonable choice of the parameter set. First, we consider the constraints on \\((B^{\\frac{1}{4}},\\Gamma)\\) and \\((B^{\\frac{1}{4}},\\tilde{\\mu})\\) for an appropriate value of \\(\\tilde{\\mu}\\) and \\(\\Gamma\\), respectively. In other words, regarding the allowed space of \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)\\) as a three-dimensional region, we extract its projections onto the \\(\\Gamma\\)-\\(B^{\\frac{1}{4}}\\) plane and the \\(\\tilde{\\mu}\\)-\\(B^{\\frac{1}{4}}\\) plane, respectively; a schematic of such a constraint scan is sketched in the code below. The result is presented in Fig. 5. From the graph, we can see that for the hybrid EOS, \\(B^{\\frac{1}{4}}\\) is restricted to the range (166.16, 171.06) MeV, and as \\(B^{\\frac{1}{4}}\\) increases, the allowed intervals of \\(\\Gamma\\) and \\(\\tilde{\\mu}\\) shrink, with both the upper and lower limits rising. In particular, for \\(B^{\\frac{1}{4}}=166.16\\) MeV, the ranges of \\(\\Gamma\\) and \\(\\tilde{\\mu}\\) are (1.47, 2.51) GeV and (2.47, 3.51) GeV, respectively; for \\(B^{\\frac{1}{4}}=171.06\\) MeV, \\(\\Gamma\\) and \\(\\tilde{\\mu}\\) are constrained to 3.37 and 4.37 GeV, respectively.
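The projection described above can be pictured as a simple grid scan; the sketch below is purely schematic, with the five physical checks of Sec. III collapsed into a placeholder function (in the real analysis each check requires building the hybrid EOS and solving the stellar-structure and tidal equations).

```python
import numpy as np

def satisfies_all_constraints(B4, mu_bar, gamma):
    """Placeholder for the five checks of Sec. III (maximum mass, the
    mu_bar - Gamma >= 1 GeV window condition, the SEOBNRT tidal bound,
    dP/drho > 0, and stability of the heaviest star).  Only the purely
    analytic window condition is evaluated here."""
    return mu_bar - gamma >= 1.0

B4_grid  = np.linspace(0.160, 0.175, 31)   # B^(1/4) [GeV]
mu_grid  = np.linspace(1.5, 4.5, 61)       # mu_bar  [GeV]
gam_grid = np.linspace(0.5, 3.5, 61)       # Gamma   [GeV]

allowed = [(B4, m, g) for B4 in B4_grid for m in mu_grid for g in gam_grid
           if satisfies_all_constraints(B4, m, g)]

# Projections of the allowed 3D region, analogous to Fig. 5
proj_B4_gamma = {(B4, g) for B4, m, g in allowed}
proj_B4_mu    = {(B4, m) for B4, m, g in allowed}
```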
Generally, to obtain the constraint on the sub parameter set \\((\\tilde{\\mu},\\Gamma)\\), \\(B^{\\frac{1}{4}}\\) has to be fixed to a certain value. In the following, we study this for three representative schemes, i.e., \\(B^{\\frac{1}{4}}=167\\) MeV, 170 MeV, and 171 MeV; the result is shown in Fig. 6. From the comparison of the three subgraphs (a), (b), and (c), we can see that the allowed parameter space of \\((\\tilde{\\mu},\\Gamma)\\) first expands and then narrows as \\(B^{\\frac{1}{4}}\\) increases. For \\(B^{\\frac{1}{4}}=167\\) MeV, the allowed region is long and narrow, with \\(\\tilde{\\mu}\\in(2.57,3.56)\\) GeV and \\(\\Gamma\\in(1.54,2.56)\\) GeV. In addition, the vertical distance between the \\(\\tilde{\\mu}-\\Gamma=\\)1 line and the SEOBNRT line is about 0.03 GeV for \\(\\tilde{\\mu}=2.5\\) GeV, while for \\(\\tilde{\\mu}=3.6\\) GeV the distance is about 0.04 GeV. For \\(B^{\\frac{1}{4}}=170\\) MeV, the allowed space is larger than that of \\(B^{\\frac{1}{4}}=167\\) MeV, with \\(\\tilde{\\mu}\\) constrained to (2.99, 3.99) GeV and \\(\\Gamma\\) constrained to (1.85, 2.99) GeV. The vertical distance between the \\(\\tilde{\\mu}-\\Gamma=\\)1 line and the SEOBNRT line is here about 0.14 GeV for \\(\\tilde{\\mu}=2.99\\) GeV, and 0.22 GeV for \\(\\tilde{\\mu}=4\\) GeV. For \\(B^{\\frac{1}{4}}=171\\) MeV, the area of the allowed
Figure 3: Comparison of quark EOSs and hadronic EOS. The black solid line is the NL3\\(\\omega\\rho\\) EOS while the red dashed line, the green dot-dashed line and the blue dotted line are the quark EOSs with \\(B^{\\frac{1}{4}}=\\)167 MeV, 170 MeV, and 171 MeV respectively.
Figure 4: Comparison of binding energy of hadrons and quarks. The black solid line is for the NL3\\(\\omega\\rho\\) EOS while the red dashed line, the green dot-dashed line and the blue dotted line are for the quark EOSs with \\(B^{\\frac{1}{4}}=\\)167 MeV, 170 MeV, and 171 MeV respectively.
region shrinks compared to \\(B^{\\frac{1}{4}}=170\\) MeV, with (3.42, 4.36) GeV and (2.42, 3.36) GeV for the ranges of \\(\\tilde{\\mu}\\) and \\(\\Gamma\\), respectively, and the vertical extent of the constrained area first expands and then narrows as \\(\\tilde{\\mu}\\) increases. In fact, from our calculations we also find the following two trends with increasing \\(B^{\\frac{1}{4}}\\): (i) the intersection of the \\(\\tilde{\\mu}-\\Gamma=1\\) line with the SEOBNRT line, as well as its intersection with the \\((v/c)^{2}_{min}=0\\) line, moves to the right in the \\(\\tilde{\\mu}\\)-\\(\\Gamma\\) plane; (ii) the SEOBNRT line tends toward the \\(\\tilde{\\mu}\\) axis, while the \\((v/c)^{2}_{min}=0\\) line tends toward the \\(\\tilde{\\mu}-\\Gamma=1\\) line. It should be mentioned that the mass constraint is not shown in Fig. 6, because it is relatively loose compared with the other four constraints.
For a more detailed demonstration of the properties of the hybrid EOSs with parameter sets in the constrained region of Fig. 6, we choose three representative points of \\((\\tilde{\\mu},\\Gamma)\\) for each of the schemes \\(B^{\\frac{1}{4}}=167\\) MeV, \\(170\\) MeV, and \\(171\\) MeV, obtaining nine hybrid EOSs. We then calculate the corresponding sound velocities, \\(M-R\\) relations and tidal deformabilities \\((\\Lambda_{1},\\Lambda_{2})\\), which are shown in Fig. 7, Fig. 8 and Fig. 9, respectively. From Fig. 7, we can see that all sound velocities of the hybrid stars are smaller than \\(0.7\\) times the speed of light, demonstrating the consistency of the hybrid EOSs. In Fig. 8, the maximum gravitational masses of the HSs range from \\(2.10~{}M_{\\odot}\\) to \\(2.19~{}M_{\\odot}\\) with radii from \\(11.99\\) km to \\(12.13\\) km, well beyond the mass constraint of \\(1.97~{}M_{\\odot}\\). The radius of the hybrid stars with a mass of \\(1.4~{}M_{\\odot}\\) ranges from \\(11.90\\)
Figure 5: Constraints on parameter set \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)\\). The gray shaded region is the allowed space for the sub parameter set (a) \\((B^{\\frac{1}{4}},\\Gamma)\\), and (b) \\((B^{\\frac{1}{4}},\\tilde{\\mu})\\) respectively with five constraints considered in Sec. III
Figure 6: Constraints on sub parameter set \\((\\tilde{\\mu},\\Gamma)\\) with (a) \\(B^{\\frac{1}{4}}=167\\) MeV, (b) \\(B^{\\frac{1}{4}}=170\\) MeV, and (c) \\(B^{\\frac{1}{4}}=171\\) MeV respectively. The gray shaded region is the allowed parameter space for these three cases. The black solid line, green dashed line, red dotted line, and orange dot-dashed line correspond to the constraints (2), (3), (4), and (5) in Sec. III, respectively. The mass constraint (1) does not appear in these graphs because this constraint is relatively loose. When \\(\\tilde{\\mu}>1.95\\) GeV, the maximum masses of hybrid stars constructed by the hybrid EOS are already well beyond \\(1.97~{}M_{\\odot}\\).
to 12.18 km. Detailed information on the HSs based on these nine hybrid EOSs is listed in Table 2. According to the chirp mass prediction from SEOBNRT, the masses of the two stars in the BNS are calculated to be 1.17 \\(M_{\\odot}\\)-1.36 \\(M_{\\odot}\\) and 1.36 \\(M_{\\odot}\\)-1.59 \\(M_{\\odot}\\), respectively. Therefore, we also present the corresponding central baryon chemical potentials \\(\\mu_{C}\\) for 1.17 \\(M_{\\odot}\\), 1.36 \\(M_{\\odot}\\), and 1.59 \\(M_{\\odot}\\) in this table. We can see that the value of \\(\\mu_{C}\\)(1.17) for each hybrid EOS is larger than the corresponding \\(\\tilde{\\mu}-\\Gamma\\), i.e., the starting point of the DPT in our hybrid EOS, suggesting that both stars of the BNS from GW170817 can be the HSs shown in Table 2. In addition, the nine values of \\(\\mu_{C}\\) in this table all lie within their corresponding interpolating windows, namely the phase transition region, demonstrating that the heaviest star constructed with our hybrid EOS does not have a pure quark core but a mixed phase inside. The combined dimensionless tidal deformability \\(\\tilde{\\Lambda}\\) with a flat prior (symmetric/HPD) is also shown in Table 2; all values lie within the 90% credible interval predicted by SEOBNRT. In Fig. 9, we can see that the constraint on the tidal deformability pair \\(\\Lambda_{1}\\) and \\(\\Lambda_{2}\\) from SEOBNRT shrinks significantly compared to the former one. Although the \\(\\Lambda_{1}\\)-\\(\\Lambda_{2}\\) relation for the NL3\\(\\omega\\rho\\) EOS is very close to the former constraint, it is far outside the recent prediction of SEOBNRT. In contrast, the results from the nine representative hybrid EOSs are all in accordance with the constraint. Among them, the hybrid EOSs with \\(B^{\\frac{1}{4}}=\\)167 and 170 MeV give very similar tidal deformability parameters.
## V Summary and discussion
In this paper, we use the constraint on the tidal deformability from the additional waveform model SEOBNRT in the latest GW170817 source properties [24] to restrict the hybrid EOS constructed by a smooth three-window interpolation in the \\(P-\\mu\\) plane [28; 30] between the hadronic phase and the quark phase. The quark matter is described by the 2+1 flavors NJL model and the hadronic matter is characterized by the RMF NL3\\(\\omega\\rho\\) model [37; 38]. In the 2+1 flavors NJL model, there are seven model parameters, five of which can be fixed by fitting five experimental data once the other two (\\(m_{\\rm u}\\) and \\(m_{\\rm s}\\)) are determined. To satisfy the prediction for these two parameters from the recent study [44], we choose two parameter sets with \\(m_{\\rm u}=\\)3.3 MeV and 3.4 MeV, respectively, and find that the quark densities in the two schemes are very similar, which naturally leads to similar EOSs. Thus the parameter set with \\(m_{\\rm u}=\\)3.4 MeV is taken as the representative one for our calculations. It is noteworthy that three parameters remain free in the hybrid EOS, i.e., \\(B^{\\frac{1}{4}}\\) from the quark EOS, and \\(\\tilde{\\mu}\\) and \\(\\Gamma\\) from the interpolating procedure.
Then, using the constraint from SEOBNRT, the mass measurement of PSR J0348+0432 [39], the studies of the hadron-quark transition in Refs. [40; 41] implying that \\(\\mu_{\\rm deconfinement}>\\mu_{\\rm ChiralRestoration}\\sim 1\\) GeV at zero temperature and finite chemical potential, the stability of the hybrid EOS [29], and the stability of the heaviest HS, we restrict the sub parameter sets (\\(B^{\\frac{1}{4}}\\), \\(\\tilde{\\mu}\\)) and (\\(B^{\\frac{1}{4}}\\), \\(\\Gamma\\)) to a
\\begin{table}
\\begin{tabular}{c c c c c c c c c c c} \\hline \\hline
\\(B^{\\frac{1}{4}}\\) & \\(\\tilde{\\mu}\\) & \\(\\Gamma\\) & \\(M_{\\rm max}\\) & \\(R_{m}\\) & \\(\\mu_{C}\\) & \\(R(1.4)\\) & \\(\\mu_{C}(1.17)\\) & \\(\\mu_{C}(1.36)\\) & \\(\\mu_{C}(1.59)\\) & \\(\\tilde{\\Lambda}\\) \\\\
[MeV] & [GeV] & [GeV] & [\\(M_{\\odot}\\)] & [km] & [GeV] & [km] & [GeV] & [GeV] & [GeV] & (symmetric/HPD) \\\\ \\hline
 & 2.6 & 1.58 & 2.10 & 12.13 & 1.60 & 12.15 & 1.15 & 1.18 & 1.23 & 601/570 \\\\
167 & 3.0 & 1.98 & 2.14 & 12.00 & 1.63 & 12.00 & 1.15 & 1.18 & 1.23 & 595/565 \\\\
 & 3.5 & 2.48 & 2.17 & 12.07 & 1.65 & 12.09 & 1.15 & 1.18 & 1.23 & 590/561 \\\\ \\hline
 & 3.0 & 1.90 & 2.15 & 11.99 & 1.63 & 11.90 & 1.15 & 1.19 & 1.23 & 602/571 \\\\
170 & 3.4 & 2.30 & 2.17 & 12.00 & 1.65 & 11.99 & 1.15 & 1.19 & 1.24 & 595/565 \\\\
 & 3.8 & 2.70 & 2.19 & 12.01 & 1.67 & 12.00 & 1.16 & 1.19 & 1.24 & 592/561 \\\\ \\hline
 & 3.45 & 2.45 & 2.14 & 12.00 & 1.66 & 12.00 & 1.16 & 1.20 & 1.25 & 632/600 \\\\
171 & 3.8 & 2.75 & 2.17 & 12.03 & 1.67 & 12.18 & 1.16 & 1.19 & 1.24 & 599/568 \\\\
 & 4.2 & 3.19 & 2.17 & 12.00 & 1.68 & 12.00 & 1.16 & 1.19 & 1.25 & 624/592 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Some quantities of HSs corresponding to the nine representative hybrid EOSs: maximum gravitational mass \\(M_{\\rm max}\\), radius \\(R_{m}\\), central baryon chemical potential \\(\\mu_{C}\\), radius of 1.4\\(M_{\\odot}\\) star \\(R(1.4)\\), central baryon chemical potential of 1.17\\(M_{\\odot}\\) star \\(\\mu_{C}\\)(1.17), central baryon chemical potential of 1.36\\(M_{\\odot}\\) star \\(\\mu_{C}\\)(1.36), central baryon chemical potential of 1.59\\(M_{\\odot}\\) star \\(\\mu_{C}\\)(1.59), and the combined dimensionless tidal deformability \\(\\tilde{\\Lambda}\\) with flat prior (symmetric/HPD).
reasonable space by projecting the allowed space of \\((B^{\\frac{1}{4}}\\), \\(\\tilde{\\mu}\\), \\(\\Gamma)\\) onto the \\(\\Gamma\\)-\\(B^{\\frac{1}{4}}\\) plane and the \\(\\tilde{\\mu}\\)-\\(B^{\\frac{1}{4}}\\) plane, respectively. We find that \\(B^{\\frac{1}{4}}\\) is well constrained to the range (166.16, 171.06) MeV, differing from the result of (134.1, 141.4) MeV in Ref. [11] and {(140, 143) MeV for \\(a_{4}=0.5\\); (147, 155) MeV for \\(a_{4}=0.6\\)} in Ref. [15]. In addition, different values of \\(B^{\\frac{1}{4}}\\) result in different parameter spaces of \\((\\tilde{\\mu},\\Gamma)\\). Therefore, we set \\(B^{\\frac{1}{4}}=\\)167 MeV, 170 MeV, and 171 MeV, respectively, to study the difference. We find that as \\(B^{\\frac{1}{4}}\\) increases, the restricted parameter space of \\((\\tilde{\\mu},\\Gamma)\\) moves to the upper right along the line \\(\\tilde{\\mu}-\\Gamma=1\\), first becoming larger and then shrinking. For a detailed study of the constrained hybrid EOS, we choose nine representative parameter sets and calculate their corresponding sound velocities, \\(M-R\\) relations and tidal deformabilities. These representative hybrid EOSs are relatively soft, yet the maximum masses of the corresponding HSs are well beyond 2 \\(M_{\\odot}\\), with radii of about 12 km. By comparing the phase transition window \\(\\tilde{\\mu}-\\Gamma\\lesssim\\mu\\lesssim\\tilde{\\mu}+\\Gamma\\) with the central baryon chemical potentials of the 1.17 \\(M_{\\odot}\\), 1.36 \\(M_{\\odot}\\), 1.59 \\(M_{\\odot}\\), and \\(M_{\\rm max}\\) stars, we can see that both member stars of the BNS from GW170817 are HSs, and that they do not have a pure quark core but a mixed phase at the center. Moreover, although the pure neutron star constructed with the NL3\\(\\omega\\rho\\) model has already been excluded by the tidal deformability observation from GW170817, this model is still suggested to be effective for describing the hadronic phase in HSs.
As a further point, it should be noted that we also
Figure 7: The sound velocities of the nine representative hybrid EOSs with parameter set of \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)=\\)(167, 2.6, 1.58), (167, 3.0, 1.98), (167, 3.5, 2.48), (170, 3.0, 1.9), (170, 3.4, 2.3), (170, 3.8, 2.7), (171, 3.45, 2.45), (171, 3.8, 2.75), and (171, 4.2, 3.19), corresponding to the red solid line, red dotted line, red dashed line, green dashed line, green dotted line, green solid line, blue dotted line, and blue solid line respectively.
Figure 8: The \\(M-R\\) relation of the hybrid stars constructed by the nine representative hybrid EOSs with parameter set of \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)=\\)(167, 2.6, 1.58), (167, 3.0, 1.98), (167, 3.5, 2.48), (170, 3.0, 1.9), (170, 3.4, 2.3), (170, 3.8, 2.7), (171, 3.45, 2.45), (171, 3.8, 2.75), and (171, 4.2, 3.19), corresponding to the red solid line, red dashed line, green dashed line, green dotted line, blue dashed line, blue dotted line, blue dashed line, blue dotted line, and blue solid line respectively. The gray shaded area represents the mass constraint of PSR J0348+0432.
Figure 9: Comparison of the tidal deformability of hybrid stars constructed by the nine representative hybrid EOSs with parameter set of \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)=\\)(167, 2.6, 1.58), (167, 3.0, 1.98), (167, 3.5, 2.48), (170, 3.0, 1.9), (170, 3.4, 2.3), (170, 3.8, 2.7), (171, 3.45, 2.45), (171, 3.8, 2.75), and (171, 4.2, 3.19), corresponding to the red solid line, red dotted line, red dashed line, green dashed line, green dotted line, blue dashed line, blue dotted line, and blue solid line respectively. For \\(B^{\\frac{1}{4}}=167\\) and 170 MeV, the tidal deformability is very similar. The orange solid line represents the tidal deformability \\((\\Lambda_{1},\\Lambda_{2})\\) calculated by NL3\\(\\omega\\rho\\) EOS. Both the black dashed line and gray dashed line are the 90% posterior probability enclosed inside for the low spin prior case in GW170817. The difference is that the gray one represents the former prediction and the black one is the recent prediction in the light of the additional waveform model SEOBNRT. The brown dotted line indicates the \\(\\Lambda_{1}=\\Lambda_{2}\\) boundary.
considered the possibility of a hybrid EOS constructed with the NL3 hadronic model, but could not find a parameter set satisfying the five constraints presented in this paper. In addition, the Maxwell construction between the hadronic phase and the quark phase can be viewed as the limiting case \\(\\Gamma=0\\) with \\(\\tilde{\\mu}\\) fixed to the intersection of the quark EOS and the hadronic EOS in the \\(P-\\mu\\) plane. From Fig. 5(a), we can see that the allowed parameter space implies \\(\\Gamma\\neq 0\\); thus hybrid EOSs constructed with the NL3\\(\\omega\\rho\\) model and the 2+1 flavors NJL model by this approach should be excluded.
In short, calculations of the hybrid EOS are still model dependent, but two prospects are promising: on the one hand, a better constrained tidal deformability from future GW observations will help to further reduce the parameter space; on the other hand, the determination of the hadron-quark transition point \\(\\mu_{\\rm deconfinement}\\) and of the EOS from the first principles of QCD is expected to give a definitive answer in the future.
###### Acknowledgements.
This work is supported in part by the National Natural Science Foundation of China (under Grants No. 11475085, No. 11535005, No. 11690030, No. 11473012, and No. 11873030), the Fundamental Research Funds for the Central Universities (under Grant No. 020414380074), the National Basic Research Program of China (\"973\" Program, Grant No. 2014CB845800), the Strategic Priority Research Program of the Chinese Academy of Sciences \"Multi-waveband Gravitational Wave Universe\" (Grant No. XDB23040000), the National Major state Basic Research and Development of China (Grant No. 2016YFE0129300), the National Post-doctoral Program for Innovative Talents (Grant No. BX201700115), and by the China Postdoctoral Science Foundation funded project (Grant No. 2017M620199).
## VI Appendix: Derivation of quark condensate
In QCD, the quark condensate is defined in Minkowski space. However, it is noteworthy that nonperturbative approaches, such as lattice QCD (LQCD), are usually formulated and computed in Euclidean space, because the Euclidean QCD action at zero chemical potential defines a probability measure for which various numerical simulation algorithms are available. Moreover, calculating in Euclidean space is not only a pragmatic choice: Euclidean lattice field theory is currently considered a primary candidate for a rigorous definition of interacting quantum field theory, since it makes the definition of the generating functional via a proper limiting procedure possible [51]. Thus we perform a Wick rotation to translate the calculations from Minkowski space to Euclidean space. In addition, we introduce the PTR because the NJL model is not renormalizable. The PTR is defined as
\\[\\frac{1}{A^{n}} =\\frac{1}{(n-1)!}\\int_{0}^{\\infty}{\\rm d}\\tau\\,\\tau^{n-1}e^{-\\tau A}\\] \\[\\xrightarrow{\\rm UV\\ cutoff}\\frac{1}{(n-1)!}\\int_{\\tau_{\\rm UV}}^{\\infty}{\\rm d}\\tau\\,\\tau^{n-1}e^{-\\tau A}. \\tag{13}\\]
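For later use, it may help to write out the \\(n=1/2\\) instance of Eq. (13), which is the form actually applied to \\(1/\\sqrt{p^{2}+M_{i}^{2}}\\) in the derivation below (a worked step only, using \\((n-1)!=\\Gamma(1/2)=\\sqrt{\\pi}\\)):

\\[\\frac{1}{\\sqrt{A}}=\\frac{1}{\\sqrt{\\pi}}\\int_{0}^{\\infty}{\\rm d}\\tau\\,\\tau^{-1/2}e^{-\\tau A}\\xrightarrow{\\rm UV\\ cutoff}\\frac{1}{\\sqrt{\\pi}}\\int_{\\tau_{\\rm UV}}^{\\infty}{\\rm d}\\tau\\,\\tau^{-1/2}e^{-\\tau A},\\qquad A=p^{2}+M_{i}^{2}.\\]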
With the two operations above, the quark condensate defined in Eq. (5) at zero temperature and chemical potential becomes
\\[\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i} =-N_{\\rm c}\\int_{-\\infty}^{+\\infty}\\frac{{\\rm d}^{4}p^{\\rm E}}{(2\\pi)^{4}}\\frac{4M_{i}}{(p^{\\rm E})^{2}+M_{i}^{2}}\\] \\[=-\\frac{N_{\\rm c}}{(2\\pi)^{4}}\\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty}{\\rm d}^{3}\\overrightarrow{p}\\,{\\rm d}p_{4}\\frac{4M_{i}}{p_{4}^{2}+\\overrightarrow{p}{}^{2}+M_{i}^{2}}\\] \\[=-\\frac{3M_{i}}{\\pi^{2}}\\int_{0}^{+\\infty}{\\rm d}p\\frac{p^{2}}{\\sqrt{p^{2}+M_{i}^{2}}}\\] \\[=-\\frac{3M_{i}}{\\pi^{\\frac{5}{2}}}\\int_{\\tau_{\\rm UV}}^{\\infty}\\int_{0}^{+\\infty}{\\rm d}\\tau\\,{\\rm d}p\\,\\tau^{-\\frac{1}{2}}p^{2}e^{-\\tau(M_{i}^{2}+p^{2})}\\] \\[=-\\frac{3M_{i}}{4\\pi^{2}}\\int_{\\tau_{\\rm UV}}^{\\infty}{\\rm d}\\tau\\frac{e^{-\\tau M_{i}^{2}}}{\\tau^{2}}, \\tag{14}\\]
here the superscript E denotes the Euclidean space.
Since the temperature of NSs can be approximated as zero compared with the chemical potential, we generalize our calculation to zero temperature and finite chemical potential. In Euclidean space, this is equivalent to performing the transformation [52]
\\[p_{4}\\to p_{4}+i\\mu. \\tag{15}\\]
And then we can derive the quark condensate in the following,\\[\\langle\\bar{\\psi}\\psi\\rangle_{\\rm i} =-N_{\\rm c}\\int_{-\\infty}^{+\\infty}\\frac{{\\rm d}^{4}p}{(2\\pi)^{4}} \\frac{4M_{\\rm i}}{(p_{4}+i\\mu)^{2}+M_{\\rm i}^{2}+\\,\\overline{p}^{\\,2}} \\tag{16}\\] \\[=-\\frac{3M_{\\rm i}}{\\pi^{3}}\\int_{0}^{+\\infty}\\!{\\rm d}p\\int_{- \\infty}^{+\\infty}{\\rm d}p_{4}\\frac{p^{2}}{(p_{4}+i\\mu)^{2}+M_{\\rm i}^{2}+p^{2}}\\] \\[=\\left\\{\\begin{array}{ll}-\\frac{3M_{\\rm i}}{\\pi^{2}}\\int_{\\sqrt {\\mu^{2}-M_{\\rm i}^{2}}}^{+\\infty}{\\rm d}p\\frac{\\left[1-{\\rm Erf}(\\sqrt{M_{\\rm i }^{2}+p^{2}}\\sqrt{\\tau_{\\rm UV}})\\right]^{2}}{\\sqrt{M_{\\rm i}^{2}+p^{2}}},&M_{ \\rm i}<\\mu\\\\ \\frac{3M_{\\rm i}}{4\\pi^{2}}\\left[-M_{\\rm i}^{2}{\\rm Ei}(-M_{\\rm i}^{2}\\tau_{ \\rm UV})-\\frac{e^{-M_{\\rm i}^{2}\\tau_{\\rm UV}}}{\\tau_{\\rm UV}}\\right],&M_{ \\rm i}>\\mu\\end{array}\\right.\\]
where \\({\\rm Ei}({\\rm x})=-\\int_{-x}^{+\\infty}{\\rm d}y\\,\\frac{e^{-y}}{y}\\) is the exponential integral function and \\({\\rm Erf}({\\rm x})=\\frac{2}{\\sqrt{\\pi}}\\int_{0}^{x}e^{-\\eta^{2}}{\\rm d}\\eta\\) is the error function. We can see that the quark condensate depends on the constituent quark mass and the chemical potential. Specifically, for \\(\\mu<M_{\\rm i}\\), the quark condensate is independent of the chemical potential, just like the result in Ref. [53].
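As a quick numerical cross-check (a sketch only, not part of the original analysis), the \\(M_{\\rm i}>\\mu\\) branch of Eq. (16), which coincides with the vacuum result of Eq. (14), can be compared against a direct numerical integration of the proper-time integral; the mass and cutoff values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

M = 0.36                  # assumed dynamical quark mass [GeV]
tau_uv = 1.0 / 1.1 ** 2   # tau_UV = Lambda_UV**(-2), assuming Lambda_UV = 1.1 GeV

# Closed form of the M_i > mu branch of Eq. (16)
closed = 3.0 * M / (4.0 * np.pi ** 2) * (
    -M ** 2 * expi(-M ** 2 * tau_uv) - np.exp(-M ** 2 * tau_uv) / tau_uv)

# Direct numerical integration of the last line of Eq. (14)
integral, _ = quad(lambda tau: np.exp(-tau * M ** 2) / tau ** 2, tau_uv, np.inf)
numeric = -3.0 * M / (4.0 * np.pi ** 2) * integral

print(closed, numeric)    # the two values should agree
```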
## References
* Abbott and [2017]B. P. Abbott and et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. **119**, 161101 (2017).
* Abbott and [2017]B. P. Abbott and et al. (LIGO Scientific Collaboration and Virgo Collaboration), Astrophys. J. Lett. **848**, L12 (2017).
* Abbott and [2017]B. P. Abbott and et al. (LIGO Scientific Collaboration, Virgo Collaboration, F. Gamma-Ray Burst Monitor, and INTEGRAL), Astrophys. J. Lett. **848**, L13 (2017).
* Abbott _et al._ [2017]B. P. Abbott, R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, and et al. (LIGO Scientific Collaboration and Virgo Collaboration), Astrophys. J. Lett. **850**, L39 (2017).
* Annala _et al._ [2018]E. Annala, T. Gorda, A. Kurkela, and A. Vuorinen, Phys. Rev. Lett. **120**, 172703 (2018).
* Fattoyev _et al._ [2018]F. J. Fattoyev, J. Piekarewicz, and C. J. Horowitz, Phys. Rev. Lett. **120**, 172702 (2018).
* Margalit and Metzger [2017]B. Margalit and B. D. Metzger, Astrophys. J. Lett. **850**, L19 (2017).
* Bauswein _et al._ [2017]A. Bauswein, O. Just, H.-T. Janka, and N. Stergioulas, Astrophys. J. Lett. **850**, L34 (2017).
* Paschalidis _et al._ [2018]V. Paschalidis, K. Yagi, D. Alvarez-Castillo, D. B. Blaschke, and A. Sedrakian, Phys. Rev. D **97**, 084038 (2018).
* Shibata _et al._ [2017]M. Shibata, S. Fujibayashi, K. Hotokezaka, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Tanaka, Phys. Rev. D **96**, 123012 (2017).
* Zhou _et al._ [2018]E.-P. Zhou, X. Zhou, and A. Li, Phys. Rev. D **97**, 083015 (2018).
* Ruiz _et al._ [2018]M. Ruiz, S. L. Shapiro, and A. Tsokaros, Phys. Rev. D **97**, 021501 (2018).
* Radice _et al._ [2018]D. Radice, A. Perego, F. Zappa, and S. Bernuzzi, Astrophys. J. Lett. **852**, L29 (2018).
* Rezzolla _et al._ [2018]L. Rezzolla, E. R. Most, and L. R. Weih, Astrophys. J. Lett. **852**, L25 (2018).
* Nandi and Char [2018]R. Nandi and P. Char, Astrophys. J **857**, 12 (2018).
* Zhu _et al._ [2018]Z.-Y. Zhu, E.-P. Zhou, and A. Li, Astrophys. J **862**, 98 (2018), arXiv:1802.05510.
* Ai _et al._ [2018]S. Ai, H. Gao, Z.-G. Dai, X.-F. Wu, A. Li, B. Zhang, and M.-Z. Li, Astrophys. J **860**, 57 (2018), arXiv:1802.00571.
* Ma _et al._ [2018]Y.-L. Ma, H. K. Lee, W.-G. Paeng, and M. Rho, (2018), arXiv:1804.00305 [nucl-th].
* Bose _et al._ [2018]S. Bose, K. Chakravarti, L. Rezzolla, B. S. Sathyaprakash, and K. Takami, Phys. Rev. Lett. **120**, 031102 (2018).
* Flanagan and Hinderer [2008]E. E. Flanagan and T. Hinderer, Phys. Rev. D **77**, 021502 (2008).
* Hinderer [2008]T. Hinderer, Astrophys. J **677**, 1216 (2008).
* Hinderer _et al._ [2010]T. Hinderer, B. D. Lackey, R. N. Lang, and J. S. Read, Phys. Rev. D **81**, 123016 (2010).
* Most _et al._ [2018]E. R. Most, L. R. Weih, L. Rezzolla, and J. Schaffner-Bielich, Phys. Rev. Lett. **120**, 261103 (2018).
* Abbott _et al._ [2018]B. P. Abbott _et al._ (Virgo, LIGO Scientific), (2018), arXiv:1805.11579 [gr-qc].
* Kojo _et al._ [2016]T. Kojo, P. D. Powell, Y. Song, and G. Baym, Nucl. Phys. A **956**, 821 (2016), the XXV International Conference on Ultrarelativistic Nucleus-Nucleus Collisions: Quark Matter 2015.
* Masuda _et al._ [2013]K. Masuda, T. Hatsuda, and T. Takatsuka, Prog. Theor. Exp. Phys. **2013** (2013), 10.1093/ptep/ptt045.
* Masuda _et al._ [2013]K. Masuda, T. Hatsuda, and T. Takatsuka, Astrophys. J **764**, 12 (2013).
* Zhao _et al._ [2015]T. Zhao, S.-S. Xu, Y. Yan, X.-L. Luo, X.-J. Liu, and H.-S. Zong, Phys. Rev. D **92**, 054012 (2015).
* Whittenbury _et al._ [2016]D. L. Whittenbury, H. H. Matevosyan, and A. W. Thomas, Phys. Rev. C **93**, 035807 (2016).
* Li _et al._ [2017]C.-M. Li, J.-L. Zhang, T. Zhao, Y.-P. Zhao, and H.-S. Zong, Phys. Rev. D **95**, 056018 (2017).
* Li _et al._ [2018]C.-M. Li, J.-L. Zhang, Y. Yan, Y.-F. Huang, and H.-S. Zong, Phys. Rev. D **97**, 103013 (2018).
* Klevansky [1992]S. P. Klevansky, Rev. Mod. Phys. **64**, 649 (1992).
* Buballa [2005]M. Buballa, Phys. Rep. **407**, 205 (2005).
* Buballa _et al._ [2004]M. Buballa, F. Neumann, M. Oertel, and I. Shovkovy, Phys. Lett. B **595**, 36 (2004).
* Kishn _et al._ [2007]T. Kishn, D. Blaschke, F. Sandin, C. Fuchs, A. Faessler, H. Grigorian, G. Ropke, and J. Trumper, Phys. Lett. B **654**, 170 (2007).
* Pereira _et al._ [2016]R. C. Pereira, P. Costa, and C. m. c. Providencia, Phys. Rev. D **94**, 094001 (2016).
* Horowitz and Piekarewicz [2001]C. J. Horowitz and J. Piekarewicz, Phys. Rev. Lett. **86**, 5647 (2001).
* Fortin _et al._ [2016]M. Fortin, C. Providencia, A. R. Raduta, F. Gulminelli, J. L. Zdunik, P. Haensel, and M. Bejger, Phys. Rev. C **94**, 035804 (2016).
* Antoniadis _et al._ (2013)J. Antoniadis, P. C. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon, T. Driebe, J. W. T. Hessels, V. M. Kaspi, V. I. Kondratiev, N. Langer, T. R. Marsh, M. A. McLaughlin, T. T. Pennucci, S. M. Ransom, I. H. Stairs, J. van Leeuwen, J. P. W. Verbiest, and D. G. Whelan, Science **340** (2013).
* Fukushima (2008)K. Fukushima, Phys. Rev. D **77**, 114028 (2008).
* Fukushima and Hatsuda (2011)K. Fukushima and T. Hatsuda, Rep. Prog. Phys. **74**, 014001 (2011).
* Chabrier and Schatzman (1994)G. Chabrier and E. Schatzman, _The Equation of State in Astrophysics, by Edited by Gilles Chabrier, Evry Schatzman, Cambridge, UK: Cambridge University Press, 1994_ (1994).
* Hatsuda and Kunihiro (1994)T. Hatsuda and T. Kunihiro, Phys. Rep. **247**, 221 (1994).
* Tanabashi and et al. (2018)M. Tanabashi and et al. (Particle Data Group), Phys. Rev. D **98**, 030001 (2018).
* Zhang _et al._ (2016)J.-L. Zhang, Y.-M. Shi, S.-S. Xu, and H.-S. Zong, Mod. Phys. Lett. A **31**, 1650086 (2016).
* Zhao _et al._ (2018)Y.-P. Zhao, C.-M. Li, and H.-S. Zong, J. Exp. Theor. Phys. **127**, 64 (2018).
* Zong and Sun (2008)H.-S. Zong and W.-M. Sun, Int. J. Mod. Phys. A **23**, 3591 (2008).
* Zong and Sun (2008)H.-S. Zong and W.-M. Sun, Phys. Rev. D **78**, 054001 (2008).
* Yan _et al._ (2012)Y. Yan, J. Cao, X.-L. Luo, W.-M. Sun, and H.-S. Zong, Phys. Rev. D **86**, 114028 (2012).
* Benvenuto and Lugones (1995)O. G. Benvenuto and G. Lugones, Phys. Rev. D **51**, 1989 (1995).
* Roberts and Schmidt (2000)C. Roberts and S. Schmidt, Prog. Part. Nucl. Phys. **45, Supplement 1**, S1 (2000).
* Zong _et al._ (2005)H.-S. Zong, L. Chang, F.-Y. Hou, W.-M. Sun, and Y.-X. Liu, Phys. Rev. C **71**, 015205 (2005).
* Halasz _et al._ (1998)M. A. Halasz, A. D. Jackson, R. E. Shrock, M. A. Stephanov, and J. J. M. Verbaarschot, Phys. Rev. D **58**, 096007 (1998).

###### Abstract

In this paper, we use the recently updated source properties of GW170817 to constrain the hybrid equation of state (EOS) constructed by a three-window modeling between the hadronic EOS and the quark EOS. Specifically, the hadronic EOS is described by the NL3\\(\\omega\\rho\\) model, whose corresponding pure neutron star (NS) is already excluded by the constraint on the tidal deformability (TD) from GW170817, and the quark EOS is calculated with the 2+1 flavors Nambu-Jona-Lasinio (NJL) model. We also consider four other constraints on the hybrid EOS. As a result, we find that the parameter set \\((B^{\\frac{1}{4}},\\tilde{\\mu},\\Gamma)\\) can be well constrained, indicating the possible existence of a hybrid star (HS) with a crossover inside. The type of the two stars in the binary system is also given for nine representative hybrid EOSs. Furthermore, the HSs restricted by the five constraints do not suggest a pure quark core but a mixed phase in the center.
Key-words: equation of state, crossover, hybrid star, GW170817
pacs: 12.38.Lg, 25.75.Nq, 21.65.Mn
# Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML
Simon Raedler\\({}^{1,2}\\)
Juergen Mangler\\({}^{1}\\) and Stefanie Rinderle-Ma\\({}^{1}\\)
\\({}^{1}\\)TUM School of Computation, Information and Technology;
Department of Computer Science, Technical University of Munich,
Boltzmannstrasse 3, Garching b. Munchen, 85748, Germany.
\\({}^{2}\\)Business Informatics Group, Technical University of Vienna,
Favoritenstrasse 9-11/194-3, Vienna, 1040, Austria.
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
[email protected];
###### Abstract
**Methods:** This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering with the systems modeling language SysML. The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of the data processing steps within the machine learning support.

**Results:** By consolidating the knowledge of domain and machine learning experts, a powerful tool to describe machine learning tasks by formalizing knowledge using the systems modeling language SysML is introduced. The method is evaluated based on two use cases, i.e., a smart weather system that predicts weather forecasts based on sensor data, and a waste prevention case for 3D printer filament that cancels printing if the intended result cannot be achieved (image processing). Further, a user study is conducted to gather insights from potential users regarding the perceived workload and usability of the elaborated method.

**Conclusion:** Integrating machine learning-specific properties into systems engineering techniques allows non-data scientists to understand formalized knowledge, to define specific aspects of a machine learning problem, and to document knowledge about the data; it further supports data scientists in using the formalized knowledge as input for an implementation via (semi-)automatic code generation. In this respect, this work contributes by consolidating knowledge from various domains and therefore fosters the integration of machine learning in industry by involving several stakeholders.
Keywords: Model-Driven Engineering, SysML, Systems Engineering, Machine Learning, Knowledge Formalization, Data-Driven Engineering, PLM, MDE4AI
## Acknowledgments.
This project has been partially supported and funded by the Austrian Research Promotion Agency (FFG) via the Austrian Competence Center for Digital Production (CDP) under the contract number 881843.
## 1 Introduction
Leveraging data to allow experts to make informed decisions during the lifecycle of a product has recently been termed data-driven engineering [1]. The knowledge required for implementing data-driven engineering is two-fold [2]: i) profound machine learning skills with respect to data processing and analytics and the implementation of algorithms, and ii) domain knowledge regarding the product of interest, relevant product lifecycle data, and the related business processes with the entangled IT infrastructures, needed to identify data provenance and information flows. Regarding i), a recent industrial survey revealed that companies have few machine learning experts and too little knowledge to implement solutions themselves; furthermore, few experts are available on the market [3].
To connect domain and machine learning knowledge, various methods have recently been proposed in the literature [4, 5]. However, these methods lack support for defining machine learning tasks and do not sufficiently represent the perspective of engineers. Additionally, they mainly integrate engineering methods into data science methodologies to support data scientists, rather than allowing engineers to apply the methods themselves when elaborating machine learning support.
Therefore, this work aims to integrate machine learning knowledge into systems engineering to support engineers in the definition of machine learning tasks, to consequently enable data-driven engineering and, ultimately, to support product development in defining the prerequisites for machine learning integration. In particular, means of Model-Based Engineering (MBE) are adapted to define tasks for data-driven engineering by leveraging data from the product lifecycle of a system. The method of this work builds upon the systems modeling language SysML [6], a general-purpose modeling language allowing a system to be formalized from various viewpoints and disciplines. The interdisciplinary formalization of systems knowledge is referred to as Model-Based Systems Engineering (MBSE) [7]. Additionally, the CRISP-DM [8] methodology is used as a basis for organizing the machine learning task definition. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a methodology consisting of common approaches used by data mining professionals to work out a data mining project from inception (requirements and business understanding) through processing (data understanding, data preparation and modeling) to evaluation and deployment. Ultimately, the method proposed in this work aims to formalize machine learning tasks during product development and to use the formalized knowledge to derive parts of the machine learning implementation and to guide it. The method is evaluated using a case study representing a weather station with multiple subsystems to predict weather forecasts, and a second study that prevents waste of 3D printer filament by canceling printing if the intended result cannot be achieved.
The contribution of this work is manifold:
* The proposal of a SysML metamodel extension to include stereotypes that are used to describe machine learning functions for domain-specific data objects
* A method that fits the latest research areas of the modeling community, referred to as MDE4AI [9; 10]
* A means of structuring the models based on the CRISP-DM methodology.
* Two case studies using the proposed concepts for modeling machine learning support based on simple input data, followed by a discussion of the strengths and weaknesses of the method.
* A user study showing the workload and usability of the method as rated by experts and computer scientists.
This work lays a foundation for allowing non-programmers to define machine learning tasks by formalizing knowledge from the problem domain into a high-level model and to communicate formalized knowledge. Additionally, semantic connection of data from various Product-Lifecycle Management (PLM) [11] sources allows to describe the origination and composition of data relations. With the availability of such models, the goal is to support the automatic decomposition of SysML models and the (semi-)automatic generation of executable machine learning modules.
This work constitutes an extension of our previous work presented in [12] and expands [12] in several ways by
* providing more extensive background information to foster understanding.
* extending the presented method with a generic and fine-grained sample of the modeling method.
* applying the method in two case studies from industry.
* conducting a user study with mechanical engineers and computer scientists on the perceived workload and usability of the method.
* discussing advantages and disadvantages of the method in a more thorough way.
The remainder of this paper is structured as follows: Section 2 presents the background regarding MBSE, data science methodologies and related work on data-driven engineering. In Section 3, the elaborated method is introduced in detail and evaluated based on two case studies in Section 4. Further, a user study is presented in Section 5 that evaluates the perceived workload and the usability of the method with mechanical engineers and computer scientists. Based on the findings of the evaluation and the user study, an extensive discussion of advantages and disadvantages is presented in Section 6. Finally, the study is summarized in Section 7 together with concluding remarks on future work.
## 2 Background
First, the concepts of model-based systems engineering (MBSE) and the systems modeling language SysML are explained. Second, machine learning and the CRISP-DM [8] methodology are introduced, acting as a basis for the method presented in Section 3. Next, related methods are depicted with special focus on data-driven engineering. Finally, Section 2.4 presents a summary of the background.
### Model-Based Systems Engineering and SysML
Systems engineering, particularly MBSE, aims to integrate various engineering disciplines in product development to establish a single source of truth by formalizing the requirements, behavior, structure and parametric relations of a system. Conventional systems engineering focuses on storing artifacts in several (text) documents that have to be maintained in case of changes. In a model-based method, the relevant information to describe an abstract system is stored in a model [13]. The literature concerning graphical MBSE methods promises increased design performance while supporting the communication of the relevant stakeholders of a system [14, 15]. MBSE is a term that explicitly considers aspects of a system; nevertheless, other terms can be considered interchangeable depending on the level of automation and the focus of the application1.

Independent of the level of automation and the focus of the modeling language, a metamodel defines the modeling concept, the relations and all possible instances of a specific set of models. Models are instances of metamodels describing a specific system. The model characteristics must match all aspects of the associated metamodel. However, extensions such as additional attributes can be added directly to a model without changing the metamodel. If a metamodel does not represent an aspect, an extension for a specific group of use cases can be defined using so-called stereotypes [16]. A stereotype is a means of modeling to extend metaclasses by defining additional semantics for a specific class concept. A metaclass is a class describing a set of classes, e.g. the metaclass _block_ is a general-purpose structuring mechanism that describes a system, subsystem, logical or physical component without the software-specific details implicitly given in UML structured classes [6]. The use of stereotypes in modeling methods has been proven to support the understanding and standardization of a model [17].

In MBSE, the Systems Modeling Language SysML is the most prominent modeling language [18]. SysML is based on the UML standard, with a special focus on the formalization of systems instead of modeling classes and objects for software engineering. The language supports the formalization of structural, behavioral and functional specifications [19]. Structural diagrams describe the composition of systems and subsystems with their attributes and relations [16; 19]. Figure 1 depicts core elements of a block definition diagram modeled in the Eclipse-based open-source software Papyrus2. At the top of Figure 1, a block with the name _Human_ is defined, consisting of one attribute of type _String_ with the attribute name _Name_ and the visibility _public_, indicated by the plus (+). A block can also have operations, ports etc., which are not relevant for this work and are therefore not introduced here. Underneath the _Human_ block, two inheriting elements are defined by the white arrows between the blocks. The attribute _Name_ is inherited from the parent block, marked by the trailing dash. One child has an additional property _Age_, which only affects this block (as long as no deeper inheritance is available). The second block consists of a subsystem, indicated by the black diamond, which denotes a part association (a.k.a. composition). A part association determines that one block describes a whole element and that a part of the whole element is additionally described in another element3. The 1 and the 0..2 indicate the multiplicity, allowing the cardinality, i.e. the number of elements, to be defined.
In this example, it means that one element _Child2_ can have zero, one or two legs. The white diamond between _Leg_ and _Shoe_ indicates a shared association, which is a weaker form of the part association. It refers to a relationship where the part element remains valid if the whole element is deleted, e.g. if the element _Leg_ is no longer valid, the _Shoe_ is still valid. The multiplicity * indicates that one can have any number of shoes. Since various software tools represent these parts slightly differently, the depiction of a block definition diagram can vary.
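For readers more familiar with code than with SysML notation, the block structure of Figure 1 can be loosely mirrored in an ordinary object-oriented sketch; the Python classes below are an illustration only and are not part of the modeling method. Inheritance corresponds to the white arrows, the part and shared associations to the black and white diamonds, and the multiplicities to the sizes of the collections.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Shoe:
    """Target of the shared association: a Shoe may outlive its Leg."""
    size: int = 42


@dataclass
class Leg:
    shoes: List[Shoe] = field(default_factory=list)  # multiplicity *


@dataclass
class Human:
    """Parent block with the public attribute Name."""
    name: str = ""


@dataclass
class Child1(Human):
    """Inherits Name and adds the property Age."""
    age: int = 0


@dataclass
class Child2(Human):
    """Whole element of the part association: owns 0..2 legs."""
    legs: List[Leg] = field(default_factory=list)  # at most two in the model
```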
In SysML, the execution of single activities can be modeled using activity diagrams. A state diagram has an entry-point and an exit-point. The arrow between the states indicates a transition and describes that one state has been completed and another is active. Behind a state, the execution of one or multiple activities can be triggered, whereas an activity is a sequential execution of single actions [6], see Figure 2.
### Data Science and Methodologies
Data Science and Business Intelligence refer to the extraction of information and knowledge from data through analysis to assist people with various types of insights, such as analysis or prediction, among many others [20, 21]. The mining of such information to derive knowledge is called data mining (DM) [22]. Machine learning (ML) is a subfield of DM that allows computer programs to improve automatically through experience [23]. Machine learning algorithms aim to solve a (specific) problem so as to eliminate the need for explicit programming [24].
Figure 1: Block Definition Diagram sample with a human.
Figure 2: State diagram sample.
To support the implementation of machine learning applications, general-purpose methodologies have been proposed [8, 25, 26]. Additionally, extensions of such methods with particular support for data science in the engineering domain have been introduced [4, 27]. In the literature, the CRISP-DM [8] and KDD [26] methods are assessed in a comparative study [28]. According to [28], CRISP-DM is a kind of implementation of the KDD process. In the following, CRISP-DM is described and used as the basis for the structure of the proposed method described in Section 3.
In CRISP-DM, six core steps are defined supporting the implementation of a DM application:
1. **Business Understanding:** Project objectives, requirements and an understanding at the business level are established. Based thereon, a DM problem is defined and a rough roadmap is elaborated.
2. **Data Understanding:** Data is collected to understand the situation from a data point of view.
3. **Data Preparation:** The construction of the final dataset for the learning algorithm based on raw data and data transformations.
4. **Modeling:** One or several algorithms are selected and applied to the dataset elaborated in the previous step. In this step, so-called hyperparameter tuning is applied to vary parameter values and achieve the best possible result.
5. **Evaluation:** The result of the algorithm is evaluated against metrics and the objectives from the first step.
6. **Deployment:** The achievements are presented in a way that a customer or an implementation team can use it for further integration.
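As an illustration only (not part of the proposed method), these six steps can be loosely mapped onto the skeleton of a typical machine learning script; the file name, feature names and the scikit-learn calls below are generic placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# 1. Business understanding: objective, e.g. predict a sensor value (documented elsewhere)

# 2. Data understanding: collect and inspect the raw data
raw = pd.read_csv("sensor_data.csv")          # placeholder data source
print(raw.describe())

# 3. Data preparation: cleaning and construction of the final dataset
data = raw.dropna()
X, y = data.drop(columns=["target"]), data["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# 4. Modeling: select an algorithm and tune its hyperparameters
model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# 5. Evaluation: compare the result against the objectives of step 1
print(mean_absolute_error(y_test, model.predict(X_test)))

# 6. Deployment: persist the model for integration, e.g. joblib.dump(model, "model.pkl")
```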
### Related Work
In the literature, various methods supporting the formalization of data-driven engineering or machine learning using modeling languages have been proposed. The method of [29] is based on the Kevoree Modeling Framework KMF [30], which is similar to the Eclipse Modeling Framework (EMF) that is the basis for the open-source modeling framework Papyrus4. [29] proposes to model the domain knowledge and small learning units in a single domain modeling method, since both are highly entangled. The method is based on a textual modeling syntax and describes what should be learned, how, and from which attributes and relations. Additionally, templates are given to render code based on the model. However, the open-source framework seems to be out of maintenance, since the repository has not been updated since 20175.
Footnote 5: [https://github.com/dulkeboard/kevoree-modeling-framework](https://github.com/dulkeboard/kevoree-modeling-framework)
An actively maintained framework family with means to model machine learning is shown in [31]. The method is based on the MontiAnna framework [32] and focuses on modeling artificial neural networks. The MontiAnna framework is part of the MontiCore Workbench Family [33]. Similar to [29], textual modeling is used to formalize the learning units and the related input and output. The formalization is used as input for template-based code generation. However, the method does not reflect domain-specific (business) knowledge from an engineering perspective.
In [34], focus is put on the integration of executable machine learning units modeled on a cloud platform, enabling the fast deployment of distributed systems. However, as of the current development state, the method is rigid regarding extensibility and advanced data preparation. Additionally, the integration of domain knowledge is hardly given, and a focus on the formalization of data-driven algorithms is not present.
The integration of ML in CPS modeling is supported by the textual modeling framework ThingML+[35]. The method extends the ThingML [36] modeling method, intended to support the development of IoT devices. As with the other methods, focus is put on machine learning modeling without considering domain knowledge. The method allows deriving executable code based on model transformation using xtext.
### Summary
MBSE has been proven beneficial in increasing the design performance of systems [14, 15]. According to [37], the number of components and functions will increase in the future, leading to more complex systems and requiring advanced support in development and analysis using means of data science. Development support for data science is given in methodologies such as CRISP-DM. However, guidance specific to the engineering domain is limited [27], and an integration into a model-based method is, to the best of the authors' knowledge, unavailable. In the literature, various methods introduce specific metamodels and languages to describe a data science task and eventually enable executable code to be derived. However, these methods are not based on an MBSE-compatible modeling language such as SysML, but instead introduce isolated domain-specific modeling environments. Therefore, little support for interdisciplinary communication is given, and the methods are more applicable to computer scientists than to domain outsiders such as mechanical engineers with little knowledge of programming. Moreover, the domain-specific modeling methods are not aligned with the CRISP-DM methodology, leading to little support from a methodological perspective. Last but not least, the proposed methods use model transformation to reduce the implementation effort, but are seldom built in a generic way that allows the modeling or the derivation of code to be extended without extensive changes in the generation. Therefore, maintenance and applicability in practice are rather limited.
## 3 Method
This section describes a method to formalize machine learning tasks based on SysML and the application of an extended metamodel. In the following, first, the extension of the SysML metamodel using stereotypes is described. Special attention is given to the package structure for organizing the stereotypes, to extensibility for different purposes, and to generalization so that stereotypes can be used for multiple use cases. Second, a package structure aligned with the CRISP-DM methodology is presented, which guides the application of the newly defined stereotypes. Next, a syntax and semantics are introduced that allow the formalized machine learning model enriched with the introduced stereotypes to be interpreted. Finally, SysML state diagrams are used to define the tasks' execution order.
### Metamodel Extension using Stereotypes
In the following subsections, six packages are introduced, which group stereotypes that semantically describe required functionalities. Subsequently, an exemplary stereotype hierarchy for defining higher-order functions for domain-specific data transformation purposes is described in detail.
#### 3.1.1 Stereotype Package Structure
SysML packages are used to group and organize a model and to reduce the complexity of system parts. Similarly, they can be applied to organize stereotypes, as depicted in Figure 3.
The organization of the stereotypes is as follows: in _Common_, general stereotypes are defined that are used as a basis in other packages, e.g., a stereotype _ML_ is defined in _Common_, and each stereotype related to machine learning inherits from it to indicate that it is a machine learning stereotype. Additionally, stereotypes can be defined that categorize other stereotypes, e.g., an abstract _Pre-Processing_ stereotype allows identifying that all inheriting stereotypes are introduced for the data preparation step of the CRISP-DM methodology. In _Attributes_, stereotypes for a more detailed definition of attributes are defined. These attribute stereotypes cannot be applied to blocks, only to attributes of a block. Consequently, the stereotypes extend primitive data types such as _Integer_ or _Float_. The purpose of the extension is to add characteristics that describe the data, e.g., valid ranges of a value, the format of a datetime property, or a regular expression to collect or describe a part of a text value. The package _DataStorage_ defines available data interfaces from a general perspective, required for loading and processing data from various data sources, e.g., SQL servers, Application Programmable Interfaces (APIs), or other file formats (e.g., CSV). The purpose of these stereotypes is to support the _data understanding_ step of the CRISP-DM methodology. Additionally, they bridge the gap between business and data understanding due to the explicit formats; further details are given in Section 3.3. In the _Algorithm_ package, various machine learning algorithms are defined and grouped with respect to algorithm types, e.g., regression or clustering algorithms. Particularly, the focus is put on key characteristics of an algorithm implementation, such as mandatory hyper-parameters or the stereotype description. Optional algorithm parameters are not described in the stereotype, but can be added during the modeling, as later illustrated in Figure 6. The _PreProcessing_ package (a.k.a. data preparation) is the most complex and extensive package due to the number of functionalities required. Additionally, a survey revealed that computer scientists spend the most effort on preparing and cleaning data [38]. Within this package, functions are defined that allow transforming data so that a cleaned dataset applicable to the machine learning algorithm is obtained. Finally, the _AlgorithmWorkflow_ package consists of stereotypes for the states of the state diagram, allowing to define the implementation order of the machine learning tasks. Typically in SysML, states are connected to activities, which are a sequence of execution steps. However, in practice, we found that it is very time-consuming to prepare activities first. Additionally, a function abstracted as a single block can be considered as a set of activities. Consequently, state diagrams are used instead of activity diagrams to reduce the implementation effort and complexity.

Figure 3: The organization of the metamodel.
#### 3.1.2 Stereotypes Hierarchy
As mentioned in Section 3.1.1, each package represents a specific hierarchy of stereotypes, allowing to describe various aspects of machine learning subtasks. An example definition of stereotypes related to data pre-processing is depicted in Figure 4. As described in Section 2.1, stereotypes can be hierarchically composed to describe specific attributes only once for a set of stereotypes. On top, the _ML_ stereotype defined in the _Common_ package is depicted, indicating that all inheriting stereotypes are related to machine learning. Formalizing a machine learning task is intended to be iterative, which is why some stereotypes are abstract, illustrated by italic letters. If a stereotype is abstract, it means that the stereotype requires further detailing or that a child stereotype with additional information is required, e.g., _DataTransformation_ cannot be used without further details as it can be an arbitrary transformation of data. The purpose of abstraction is to support the early definition of tasks in the product development without details already being known, e.g., the final file format used to store the data. From top to bottom in Figure 4, the level of detail increases and the task is chosen in a more fine-grained manner. Consequently, leaves are the most fine-grained representation. The inheritance additionally allows grouping functions of a specific kind, e.g., functions regarding outlier detection. Due to the grouping of functions, the composition of stereotypes strongly depends on the preferences of the implementing expert and the purpose of the composition in terms of inheritance of attributes. Note that attributes defined in a parent stereotype are also available in a child or grandchild stereotype, respectively. Therefore, each level should only represent mandatory attributes. This especially applies to algorithms with many hyper-parameters, e.g., logistic regression with more than 20 parameters and attributes6. In case a parameter is not defined in the stereotype, it can still be added during the modeling and application of the stereotypes. A sample can be found in Section 4. Additionally, it is possible to add a set of values using _Enumerations_ for a single attribute, e.g., _MissingValueFunction_ highlighted in green. In this respect, modeling is more precise and guided by a fixed set of valid options. Similarly, specific stereotypes can be used as an attribute type, which means that only blocks or attributes applying the specific stereotype can be assigned, e.g., _Method_Attribute_Input_ indicating that only properties with a stereotype defined in the package _Attributes_ can be applied, because each attribute stereotype inherits from that stereotype. Finally, the keyword _BlackBox_ can be used if a function shall be hidden for security reasons or if the implementation is unknown, e.g., _BlackBox_Outliers_ on the right side of Figure 4.
### Package Structure Guiding the Implementation
CRISP-DM as described in Section 2.2 consists of six steps, each describing a specific aspect required for the development of a machine learning project. Figure 5 illustrates the package structure aligned with the CRISP-DM methodology. _Business Understanding_ consists of block definition diagrams describing the system under study and its composition from a system configuration point of view. In this respect, the VAMOS method (Variant Modeling with SysML, [39]) is integrated to describe a specific system configuration. The integration of the VAMOS method focuses on the data interfaces and attributes of a particular configuration of a system, as different configurations of a system might lead to different data output. In this method, the VAMOS method is used to focus on data interfaces. Therefore, other systems engineering knowledge is represented in other diagrams, which are out of the scope of this work. Still, the knowledge modeled in other diagrams is connected to the instance of a block used in the VAMOS method and, therefore, multiple disciplines are enabled to work on the same model. The second step, _Data Understanding_, details the _Business Understanding_ with the definition of the delivered data on an attribute and data format level. Particularly, the data type and the name of the delivered data attribute are described using block definition diagrams. Additionally, attribute stereotypes are used to describe the data in detail as described in Section 3.1.1. With the application of stereotypes on the block level, the type of data interface is defined, e.g., CSV files or SQL servers. With the formalization of the interfaces in this package, the information exchange between systems engineering and data engineering can be considered as completed. Based on the _Data Understanding_, the _Pre-Processing_ is applied to transform and prepare the data into a final dataset that can be used in the _Modeling_. The _Pre-Processing_ requires the most effort due to the possibly large number of data transformations needed to create a dataset usable for machine learning. The result of the _Pre-Processing_ is a final dataset, considered to be ready for the machine learning algorithm. Within the _Modeling_, algorithms are applied to the final dataset. Additionally, train-test splitting and other functions required by the machine learning algorithm are applied. In the _Evaluation_ package, various metrics are used to assess and prove the validity of the algorithm results of the _Modeling_ package. Finally, the _Workflow_ package describes the execution order of the formalization in the previous packages using state diagrams. For each state, a custom stereotype is applied that allows connecting a block associated with a stereotype inheriting from _ML_. Assigning blocks to states removes the necessity to define activities, making the method less heavy-weight in its application and reducing the time needed to formalize the machine learning. Typically in CRISP-DM, the very last step is the _deployment_. However, the deployment is considered out of scope in this work and therefore the method ends with the workflow.

Figure 4: The metamodels for data pre-processing/preparation.
### Syntax and Semantics
For the purpose of implementing ML functionalities, the use of the functional programming paradigm is intuitive [40]. It utilizes higher-order functions, invoked on (data-)objects, which return objects. This allows for the step-by-step decomposition, filtering, and transformation of data without side effects (changes to variables), in contrast to the imperative programming paradigm.

Figure 5: The implementation structure aligned with CRISP-DM.
This sequence of function invocations aligns well with how UML and other modeling languages implement abstraction levels to reflect a relevant selection of properties and focus on the aspects of interest [16]. Functions are black boxes with processing capability that are associated with the (data-)artifacts upon which they can be called and with the data artifacts they produce as output. The abstraction is realized by describing functions or sets of functions with a single stereotype and instances with blocks.
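As a small illustration of this side-effect-free, functional style in Python, such a chain of invocations on data objects might look as follows; the file name and column names are assumptions made purely for illustration.

```python
import pandas as pd

# Each call returns a new (data-)object; the original frame is not mutated.
result = (
    pd.read_csv("measurements.csv")
      .dropna()
      .assign(temp_celsius=lambda df: (df["temp_fahrenheit"] - 32) / 1.8)
      .groupby("sensor")["temp_celsius"]
      .mean()
)
print(result)
```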
A class in UML is defined, among other things, by attributes, stereotypes, operations (methods), constraints, and relationships to other classes. In SysML, a block describes a system or subsystem with a similar definition as a class in UML. A machine learning task and the respective subtasks can be seen as a system with subsystems. Therefore, each subtask is modeled using blocks, aligned with the syntax described in Section 2.1. Particularly, only input values, represented as attributes of a block, and the relations to other blocks are modeled. The operations (methods) are defined as stereotypes with abstracted implementations. Attributes defined on the stereotype are mandatory input values for the definition of a machine learning subtask. The attributes defined on a block itself are optional, serving documentation purposes or extending the stereotype with fine-grained details, e.g., the _utc_ attribute in the _Format_Date2_ block in Figure 6. The output of a subtask (block) is implicitly defined in the implementation of the code snippet related to a stereotype and not explicitly depicted in the model. The output of a block can be used as input for other blocks, e.g., the _CSV_1_ block as input for the _Format_Date_ block.
Figure 6 depicts a few samples of the aforementioned syntax and semantics. On top right, a date conversion subtask is modeled as _Format_Date_. The date conversion stereotype has a mandatory attribute to define the format of the output of the conversion. The input for the date conversion is the block _CSV_1_, connected using a part association. In this sample, the _date_ attribute is the only input value matching due to the stereotype _Datetime_. However, if the input is ambiguous because the datetime is stored for instance as integer or multiple attributes of the connected block are in the correct input format, it is necessary to add additional
attributes to the date conversion to select the particular input, e.g., a new attribute whose value is the particular input attribute from the connected block. The block _Format_Date2_ inherits from _Format_Date_. Therefore, the input and the attributes are the same except for manually overwritten values, e.g., changes to the output datetime format or the additionally added attribute _utc_.
Another sample in Figure 6 shows the integration of multiple inputs. The _Merge_DF_ block consists of two input blocks and the attributes on which the merging function shall be applied are defined using an attribute that consists of two values (_MergeOn_). The _MergeOn_ attribute is mandatory and therefore defined on the stereotype.
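To make the mapping from blocks to executable functions more tangible, the following minimal Python/pandas sketch shows one possible realization of the _Format_Date_ and _Merge_DF_ subtasks of Figure 6. It is only an illustration: the file names, delimiters, and the column name _date_ of _CSV_1_ are assumptions and not part of the model; only _date_date_ is taken from the data understanding of the use case.

```python
import pandas as pd

# CSV_1 / CSV_2: data sources described by the CSV stereotype
# (paths, delimiters, and encodings are illustrative assumptions).
csv_1 = pd.read_csv("station.csv", delimiter=";", encoding="utf-8")
csv_2 = pd.read_csv("forecast.csv", delimiter=",", encoding="utf-8")

# Format_Date / Format_Date2: convert the attributes carrying the
# Datetime stereotype into a common representation; the utc flag
# corresponds to the optional attribute added on the block.
csv_1["date"] = pd.to_datetime(csv_1["date"], utc=True)
csv_2["date_date"] = pd.to_datetime(csv_2["date_date"], utc=True)

# Merge_DF: join both inputs on the attributes listed in MergeOn.
merged = pd.merge(csv_1, csv_2, left_on="date", right_on="date_date")
```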
Although the implicit execution order of the subtasks is given by the associations and the necessity to compute inputs first, the execution order might still be ambiguous, e.g., whether to execute _Format_Date_ or _Merge_DF_ first. As described in Section 2.1, structural diagram elements such as blocks require integration into behavioral diagrams to allow the definition of an execution order [16].
To enable the connection of a block with a state in a state diagram, custom stereotypes are applied. The stereotypes for the states consist of a single mandatory attribute. The mandatory attribute references a block with a stereotype that inherits from the root parent stereotype _ML_.
Figure 6: Machine learning data pre-processing based on a sample in Section 4.
## 4 Case Studies
This section presents two case studies, i.e., a weather system that predicts the weather based on sensor data, and an image similarity check that makes it possible to assess whether the actual print of a 3D model on a 3D printer corresponds to the desired output. As a result, the printing process can be stopped prematurely, saving filament and time.
### UC1 - Weather Forecast based on Sensor Data
Figure 7 illustrates the composition of the weather system, which is split into two parts. On the left side, a local station is equipped with various sensors, delivering a CSV file with measurements; on the right side, a weather forecast service additionally delivers a CSV file with forecasts over the internet.
From a systems engineering perspective, the weather system is a cyber-physical system and can be configured with various sensors. Figure 8 depicts the SysML model of the weather system with a specific configuration aligned with Figure 7. Particularly, Figure 8 depicts a method aligned with [39] that allows formalizing variations. Additionally, modeling the system from a business perspective is the first step of the method. Focus is put on the values of interest, which are the output values of the subsystems, to keep the business understanding as concise as possible. In the middle of the figure, the core weather system configuration is depicted. The surrounding subsystems are sensors or other subsystems, e.g., an API (right side). The attributes of the sensors are the output values of each subsystem, aligning with the CRISP-DM business understanding that aims to get a general idea of the system and of where the data originates.

Figure 7: Illustration of the weather system use case.
To transform the business understanding into valuable data understanding, connections between the system in the business understanding and output data formats are established. Particularly, a _realization_ connection between the CPS and blocks describing the data format using stereotypes inheriting from _ML_ is modeled. In the blocks, each attribute has a type representing the actual data type in the data source and a stereotype with an _ML_ attribute describing the representation in the machine learning method, e.g., the _CSV_2_ attribute _date_date_ is of type _String_ and is mapped to the stereotype _Datetime_, which considers aspects such as the datetime format. Additionally, stereotype attributes such as the _Encoding_ or the _Delimiter_ are defined to describe the composition of the _CSV_ file.
Figure 6 depicts a set of subtasks applied to the data sources defined in Figure 9. For an explanation of Figure 6, please refer to Section 3.
Figure 10 illustrates the application of a train-test split and the integration of the split data into two different regression algorithms, which are specified in a mandatory attribute. According to the definition of the stereotypes, no further parameters are mandatory. For the _RandomForestRegressor_, the optional hyper-parameter _max_depth_ is defined.
Figure 11 depicts the prediction and the application of metrics such as the mean absolute error (MAE). The mandatory parameter _text_ is a placeholder allowing text to be added that is output together with the evaluation result.
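To sketch what the blocks in Figures 10 and 11 abstract, the following scikit-learn snippet performs a train-test split, fits a _RandomForestRegressor_ with the optional _max_depth_ hyper-parameter next to a second regressor, and evaluates the predictions with the MAE. The synthetic data and the choice of a linear regression as the second algorithm are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Stand-in for the final dataset produced by the pre-processing step.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Train-test split block; the split ratio would be a stereotype attribute.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Two regression algorithm blocks; max_depth is the optional
# hyper-parameter modeled on the RandomForestRegressor block.
models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(max_depth=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Evaluation block: mean absolute error (MAE) printed with a text label.
    print(f"MAE ({name}): {mean_absolute_error(y_test, y_pred):.3f}")
```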
Figure 8: Business Understanding of the weather system.
The method's final step is integrating the blocks into an execution workflow. Figure 12 illustrates the execution order of the algorithm steps. As can be seen, the _Format_Date2_ block modeled in Figure 6 is not depicted in the workflow, meaning that it is not taken into account during the implementation and is left out as an artifact from the formalization time. The states' names are chosen such that the workflow and the blocks connected with the _ML_Block_Connection_ stereotype are readily understandable.

Figure 9: Data Understanding of the weather system.

Figure 10: Modeling of machine learning algorithms.

Figure 11: Evaluation of the weather forecast prediction.
As the scope of this work is to formalize machine learning rather than to improve the executable code or to derive the code automatically, the machine learning results and the implementation itself are not depicted and are left to future work.
### UC2 - 3D Printer Success Evaluation during Printing
The purpose of the application is to detect faulty 3D prints during the printing process by comparing the actual status of the printed model with the intended model. This use case illustrates the method's applicability to other data sources, such as image data, and the integration of the method into an executable workflow engine. Additionally, the integration of pre-trained models is shown by using TensorFlow Hub. The idea of image similarity is based on an image similarity tutorial7.
Footnote 7: [https://towardsdatascience.com/image-similarity-with-deep-learning-c17d83068f59](https://towardsdatascience.com/image-similarity-with-deep-learning-c17d83068f59)
The use case process is described below and illustrated in Figure 13. We adopt the CPEE process engine [41, 42] to orchestrate the application process, as the CPEE provides a lightweight and straightforward user interface to orchestrate any application that allows interaction via REST web services. Figure 13 shows the workflow of the application, consisting of image generation and printing. The first three process steps define the slicing of an STL file and the generation of the reference images. Particularly, a Python script is called that generates the slices based on a given STL file and stores the generated reference images for a later comparison and similarity check. The second part of the process consists of a loop that prints a slice, takes a photo with a camera mounted above the center of the working area, and calls a similarity script to compare the intended and the actually printed model. The image similarity algorithm is defined using the machine learning formalization method proposed here. The defined algorithm provides a similarity index that is compared to a threshold value. If the threshold is exceeded, the printing process is aborted; otherwise, it is repeated.

Figure 12: Sample integration of the workflow.
The machine learning model integrated into the printing process is formalized below. Figure 14 shows input data consisting of two images: the image sliced from the STL file and the photo from the 3D printer camera. In contrast to the first use case, the data attributes are not further detailed with stereotypes because the input data do not show any variations, i.e. the format and resolution of the images do not change.
Figure 15 depicts the scaling of the images such that they have the same dimensions. The conversion parameter \(L\) allows comparing the images on a black-and-white (grayscale) basis. Normalization of the pixel and color values to the range between 0 and 1 is also applied. The normalization in the block _Convert_PixelsAndNormalize_ should be defined as a new stereotype. In this case, however, we show the application of the _CustomCode_ stereotype, which allows the injection of program code and thus rapid prototyping. However, flaws such as vulnerability or hijacking of the method might lead to reduced understanding and reproducibility. Additionally, it is not the purpose of the method to insert programmed code. For further discussion, see Section 6.3.

Figure 13: Workflow integrating the formalized machine learning method to abort 3D printing early.

Figure 14: Image definition used for the similarity prediction.
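A minimal Python sketch of this pre-processing, using Pillow and NumPy, could look as follows; the file names and the target resolution are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def load_and_normalize(path, size=(224, 224)):
    """Scale the image, convert it to grayscale ('L'), and normalize
    the pixel values to the range [0, 1]."""
    image = Image.open(path).resize(size).convert("L")
    return np.asarray(image, dtype=np.float32) / 255.0

reference = load_and_normalize("slice_042.png")  # rendered STL slice
actual = load_and_normalize("camera_042.jpg")    # photo from the printer
```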
With respect to potentially incorrect use of the method, Figure 16 depicts the wrongly used stereotype _CustomCode_ on top and, below it, the correct use of stereotypes achieving the same result with a slightly changed code sequence.
Further, the two images are fed to the classification algorithm, as illustrated in Figure 17. The input value _Model_ describes a TensorFlow Hub input, i.e., a pre-trained model to classify images. Finally, the result is measured using the _cosine_ distance metric. The threshold for canceling the printing is implemented in the workflow and can be adjusted by the user.
Finally, Figure 18 depicts the execution sequence of the algorithm.
Figure 16: On top the wrong application of the method and below correct use.
Figure 15: Image scaling and normalization used for data preprocessing.
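Building on the pre-processing sketch above, the similarity prediction of Figure 17 could be realized roughly as follows. The concrete TensorFlow Hub handle is an assumption; any image feature-vector model can be substituted, and the grayscale images are simply stacked to three channels to match the model input.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from scipy.spatial.distance import cosine

# Pre-trained feature extractor from TensorFlow Hub (assumed handle).
MODEL_URL = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"
embed = hub.KerasLayer(MODEL_URL)

def feature_vector(gray_image):
    """Embed a normalized grayscale image of shape (224, 224)."""
    rgb = np.repeat(gray_image[..., np.newaxis], 3, axis=-1)   # fake RGB
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.float32)
    return embed(batch).numpy().flatten()

# reference and actual are the normalized images from the previous sketch.
similarity = 1.0 - cosine(feature_vector(reference), feature_vector(actual))
print(f"cosine similarity: {similarity:.3f}")
```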
## 5 User Study
Typical users of the presented method are computer scientists and engineers from various disciplines, depending on the application area. Therefore, this study aims to assess and compare computer scientists' and mechanical engineers' subjective workload and user experience regarding understanding, modifying, and creating machine learning functions in a model-based method. Further, the time required for applying changes or creating constructs in SysML is assessed to allow a comparison of the participants based on previous experience, e.g., prior programming or modeling knowledge. Since the study and the modeling are conducted using the SysML modeling tool Papyrus8, it is impossible to eliminate distortions due to the usability of the underlying tool, e.g., "How to model a block". Therefore, the study director will provide verbal assistance if a participant requires support due to the tool's usability.

Figure 17: Integration of the pre-trained model and prediction with cosine distance to express the similarity of the images.

Figure 18: The execution workflow of the TensorFlow-based prediction algorithm.
Large sample sizes would be necessary to enable a quantitative evaluation, which is not feasible due to resource constraints. Therefore, the principles of discount usability are applied, testing only a small group of users and identifying the main usability problems through small qualitative user studies with three to five users, a detailed scenario, and a think-aloud method [43]. According to [43], there is a 70% chance of finding 80% of the usability issues with five users. However, there are reports in the literature that increasing the number of participants from five to ten significantly changes the number of issues found [44]. In this respect, a total of 12 users were tested, equally distributed among the two groups, Computer Scientists (CS) and Mechanical Engineers (ME).
In the following, the experimental setting is illustrated. Next, an introduction to the evaluation procedure is given, followed by an introduction of the test cases in Section 5.3. Finally, the results of the user studies are depicted in Section 5.4. A discussion on the implications from the user study is given in Section 6.4.
### Experimental Setting
The user study was conducted with 12 participants. Each participant has a university degree (B.Sc., M.Sc., or Ph.D.) and received a basic introduction to programming at university. Half of the participants are CSs and half are MEs. Other engineers could serve as potential users and equally valid test users as well. However, to obtain a more homogeneous group, the engineers were limited to MEs.
Due to the participants' different knowledge in modeling, programming, and data science, a self-assessment of their experience was made at the beginning of the user test. Table 1 summarizes the knowledge levels of the participants based on their highest university degree, years of experience, position at the current job, and self-assessment on the three relevant dimensions.
### Evaluation Procedure
The study started with a basic introduction to SysML and an overview of the method introduced in this work, taking approximately 10 minutes and involving the presentation of two predefined block definition diagrams as samples with a focus on the modeling and understanding of a block definition diagram and the application of the introduced stereotypes.
Following this, the users had to perform three tasks, i.e., (1) showing that they understand the purpose of the modeling and the basic idea of the method by describing the modeled methods in Figure 6, (2) replacing a _CSV_ stereotype with a _Text-file_ stereotype and redefining the attribute properties of the text file, and (3) adding a new function by connecting a new block with a particular stereotype to an existing block.
Each of the tasks (1) - (3) is subdivided into sub-activities to allow fine-grained evaluation of the tasks and the performance achieved by the participants. The sub-activities are presented with their tasks in Table 2.
For each participant, the time taken to perform the tasks is recorded. After each of the three tasks, the NASA Task Load Index (NASA-TLX, [45, 46]) and the System Usability Scale (SUS, [47]) questionnaires are filled out by the users to assess the participants' subjective workload and the perceived usability. Before filling out the questionnaires, the users were explicitly told to evaluate the method's usability, not Papyrus's.
\\begin{table}
\\begin{tabular}{l l l l l l l} \\hline \\hline User & Univ. & Years of & \\multirow{2}{*}{Position} & \\multirow{2}{*}{Programming} & Data Science & UML/ \\\\ & Degree & & Experience & & Skills & Skills & SysML \\\\ \\hline CS-1 & B.Sc. & 5 & & Software & 7 & 3 & 6 \\\\ CS-2 & M.Sc. & 3 & & Software & 8 & 6 & 7 \\\\ CS-3 & M.Sc. & 1 & & Ph.D. & 7 & 6 & 3 \\\\ CS-4 & M.Sc. & 2 & & Ph.D. & 6 & 7 & 6 \\\\ CS-5 & M.Sc. & 1 & & Student & 6 & 7 & 8 \\\\ CS-6 & B.Sc. & 1 & & Application & 7 & 4 & 4 \\\\ & & & Manager & & & 4 & 1 & 2 \\\\ ME-1 & M.Sc. & 6 & & Project & & & \\\\ ME-2 & B.Sc. & 11 & & Manager & 2 & 3 & 1 \\\\ & & & Digital & & & \\\\ ME-3 & Ph.D. & 10 & & Engineering & 6 & 4 & 8 \\\\ & & & Manager & & & \\\\ ME-4 & B.Sc. & 2 & & Simulation & 2 & 2 & 1 \\\\ ME-5 & M.Sc. & 3 & & Engineer & & & \\\\ ME-6 & M.Sc. & 3 & & Expert & 2 & 1 & 3 \\\\ & & & Powertrain & & & \\\\ ME-6 & M.Sc. & 1 & & Manufacturing & & & \\\\ ME-6 & M.Sc. & 1 & & Engineer & & 1 & 2 & 1 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Participants of the user study aligned with self-assessment of experience.
### Test Cases
Table 2 depicts the subtasks required to accomplish the tasks of the user study. Each subtask is assessed by the study leader to determine whether it is completed correctly or not. If a user could not find a specific button due to the usability of Papyrus but could justify why it was being searched for, e.g., "I need to remove a stereotype and add a new one so that a new function is defined", the task is evaluated as correct.
To achieve reproducibility, the tasks were set exactly with the following wordings:
Task 1 Understanding: Please describe what can be seen in the currently displayed diagram and what function it fulfills. Additionally, please answer the following questions:
1. What are the two input files, and in which format?
2. What values are stored within CSV_2?
3. What is the type of date_date, and how is it represented in the ML model?
4. What are the path and encoding of the two input files?
5. What are the properties of DataFrame_Merge Stereotype?
Task 2 Function Exchange: Behind the _TextFile_ function presented here, a _CSV_ stereotype is defined. However, the type is incorrect. Please change the file type to _Text-File_. Additionally, set the encoding to _UTF-8_ and the path to _C:/file.txt_.
Task 3 Adding a Function: In the following view, you can see two input files connected to a merge block. Additionally, a normalization of the merge block is required. Please add the function for _Normalization_ and set the value of the normalization method to _MaxAbsScalar_.

\begin{table}
\begin{tabular}{l|l}
**Main Task** & **Subtask** \\ \hline
**Task 1 Understanding** & Identification of input files \\ & Description of values stored in CSV_2 input file \\ & Description of attributes of the data stereotype of CSV_2 values \\ & Identification of stereotype properties, e.g. path of CSV_2 file \\
**Task 2 Changes** & Stereotype identified \\ & Stereotype removed \\ & Stereotype added \\ & Stereotype attribute identified \\ & Stereotype attribute value set \\
**Task 3 Modeling** & Block added to view \\ & Block associated with input \\ & Stereotype added \\ & Stereotype attribute value set \\ \end{tabular}
\end{table}
Table 2: The three main tasks to be performed by the participants, with subtasks that can be used to assess whether the task has been completed.
### Survey Results
Figure 19 shows boxplots of the required times for the individual tasks grouped per task and training of the participants in CS or ME.
For Task 1, the time required is higher than for Tasks 2 and 3, whereas Tasks 2 and 3 show a comparable average and distribution. One reason for the higher time for Task 1 is that the users had to describe a model, which is more time-consuming. It was also observed that repetitive tasks made the users faster, which was also reported as feedback by the participants. Further, the dispersion of Task 1 is higher for ME than for CS. This scatter might be explained by the varying experience levels of the participants with respect to modeling and data science. However, there was no correlation between the time spent and the correctness of the execution of the sub-activities. Regarding the dispersion of CS, interestingly, Tasks 2 and 3 vary more than Task 1. This can mainly be explained by the familiarity with the Papyrus modeling environment. Thus, participants with more Papyrus experience completed the tasks much faster than those who used Papyrus for the first time.

Figure 19: The time required by the participants per task and training direction.
Figure 20 shows the result of the individual tasks in terms of correctness in relation to the subtasks of Table 2. CS perform better for T1 and T2, which can be explained by the more extensive prior UML experience of CS obtained during their university education. In T3, however, ME perform better. This can be explained by an outlier among the CS who performed significantly below the average. The overall accuracy of ME increased over the course of the tasks, although the average of T2 is lower than that of T1.
The results of the NASA-TLX test, which indicates the perceived workload of the participants for the specific tasks, are presented in Figure 21. The lower the value of a NASA-TLX dimension, the lower the perceived workload. Consequently, a low scale value is seen as positive. The _Effort_ dimension shows, for example, that with increasing experience, i.e., from task to task, the perceived effort decreases. Further, the frustration increases and the performance decreases compared to T1. For T3, the standard error is larger than for T1 and T2. Both might be explained by the increasing complexity of the tasks. However, this contrasts with the accuracy achieved in Figure 20.

Figure 20: The degree of correct performance of the tasks.
The raw overall scores of the tasks are depicted in Figure 22. According to [48, 49], the workload is categorized as 'medium', which is the second-best score, ranging from 10 to 29 points. The cumulative results of CS and ME show a decreasing workload over the course of the tasks. For CS, the workload appears to be higher than for ME, especially for T3. Based on the user feedback, no justification can be given for the difference between CS and ME.
The results of the SUS test with different rating scales are shown in Table 3 based on [50].
Figure 21: Result of the NASA-TLX questionnaire.
Figure 22: NASA-TLX overall score.
\\begin{table}
\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|} \\hline \\multirow{2}{*}{**Variable**} & \\multicolumn{2}{l|}{**SUS**} & \\multicolumn{2}{l|}{**SPCore**} & \\multicolumn{1}{l|}{**Percentile**} & \\multicolumn{1}{l|}{**SD**} & \\multicolumn{1}{l|}{**Min**} & \\multicolumn{1}{l|}{**Max**} & \\multicolumn{1}{l|}{**1. Quartile**} & \\multicolumn{1}{l|}{**Median**} & \\multicolumn{1}{l|}{**3. Quartile**} & \\multicolumn{1}{l|}{**Adjective**} & \\multicolumn{1}{l|}{**Quartile**} & \\multicolumn{1}{l|}{**Acceptability**} \\\\ & \\multicolumn{1}{l|}{**Score**} & \\multicolumn{1}{l|}{**(mean)**} & & & & & & & & & & & & \\\\ \\hline
**T1 - CS** & 75.0 & 72.77 & 10.7 & 60.0 & 92.5 & 67.5 & 71.25 & 86.8
Figure 23 presents the SUS score as a boxplot, prepared with an online tool for analyzing SUS questionnaire results [50].
The _adjective scale_ score in the boxplot is aligned with [51], which is based on [52]. The figure highlights that each task achieves the rating 'good' for both CS and ME. The standard error of CS is slightly higher than for ME, which can also be seen in Table 3. The values of the quartile scale shown in Table 3 are according to [53] and those of the acceptability scale according to [52]. ME increased the score in T3, while T1 and T2 are equal. CS decreased the score over the tasks. However, the changes in the scores are small and therefore no conclusions are drawn from them.
Figure 24 depicts the percentile scale based on [54]. Since SUS scores are not uniformly or normally distributed, a percentile scale was created based on 5000 SUS studies. The comparison shows that the tests achieved a percentile between 60 and 79. T3 of ME overperformed with 79. For CS and ME, the average percentile is 66. T1 and T2 for ME have exactly the same value, which is why they are shown as one colour in the figure.
## 6 Discussion

In the following, first, the stereotypes and the proposed structure are discussed. Next, the benefits and shortcomings of the modeling semantics are assessed with a particular focus on the applicability and potentially ambiguous interpretation. Next, potential risks of model-driven machine learning and future work are presented. Finally, the implications of the user study are presented and discussed.
### Stereotypes and Structure of the Custom Metamodel
The integration of custom stereotypes has been proven beneficial in the literature [17]. In this method, the use of stereotypes to encapsulate and abstract knowledge about machine learning tasks is beneficial as implementation details are hidden, thus supporting communication between different engineers not necessarily experienced in machine learning or programming. By structuring the stereotypes using packages, a stereotype organization aligned with the CRISP-DM methodology is given, supporting refinements and extensions in a fine-grained, hierarchical manner. Particularly, the definition of blackbox and abstract stereotypes allows the description of various functions without the necessity to specify each machine learning function in detail. In the custom metamodel, custom _Enumerations_ are defined to limit the number of attribute values, which reduces wrong specifications in the model. Another opportunity to reduce the scope of possible selections is to reduce the number of allowed stereotypes, e.g., only stereotypes inheriting from the abstract stereotype _PreProcessing_ can be assigned as a value for a specific attribute. However, the filtering of stereotypes requires specific rules that have not yet been integrated or elaborated. Although various methods are defined using stereotypes, the level of detail might be too little for practical application. _DateConversion_, for example, can be applied to manifold input values and various outputs, e.g., an output representation as a string or as Coordinated Universal Time (UTC). Adding multiple _DateConversion_ stereotypes for each case is possible. Still, with a growing number of stereotypes, the complexity of selecting the correct, unambiguous stereotype increases while the maintainability decreases. Similarly, if too many stereotype attributes have to be set, the complexity and the effort of the application increase. With respect to these uncertainties regarding the level of detail required for a fine-grained definition of machine learning tasks, industrial case studies have to be conducted to elaborate and validate a sufficient degree of detail and to additionally define future work.

Figure 24: Percentile curve of the SUS questionnaire.
### Complexity of Unambiguous Modeling
The definition of an implementation structure aligned with the CRISP-DM methodology, starting from the business understanding and ending with the definition of evaluation and workflows, promises to be useful due to the integration of a comprehensive and mature methodology into an MBSE method. Additionally, more experienced computer scientists aware of CRISP-DM can rely on their experience and on the benefits of CRISP-DM. Furthermore, in practice, one third of data scientists lack business understanding and communication skills [38], which can be supported by the model-based integration of CRISP-DM.
Each block implementing an _ML_ stereotype within the implementation structure can be seen as an encapsulated subtask. Each subtask provides an output that can be used as input for another block. However, the given method does not explicitly specify the output of a block. Therefore, the output is defined by the implementing computer scientist, which may lead to different results due to the varying experience behind such decisions and the looseness of the semantics, which allows creating arbitrary associations that may not be implementable. In this respect, future work requires the integration of model checking to reduce orphan associations, infeasible implementations, and unwanted side effects when changing associations.
Despite the ambiguity of the modeling and the potential errors in the associations, the method supports the elaboration and definition of machine learning tasks from early development on, which is beneficial. The authors believe that the flaws at the beginning of the method's application diminish with continued use due to the possibility of reusing certain parts of the formalization. The reuse additionally allows preserving knowledge and contributes to standardization in the modeling and implementation, which further leads to a reduction of cost and risk in the design [37] and in the maintenance of machine learning applications.
### Potential of Model-Driven Machine Learning
The given proposal to describe machine learning tasks using a model-based method has benefits but also disadvantages. A core disadvantage is the initial effort to introduce stereotypes and formalize the model. In this respect, traditional programming might be less time-consuming and, therefore, users might use the _CustomCode_ stereotype to inject code. However, it is not the purpose of the method to allow code injection, due to vulnerability risks and the reduced documentation and understanding by others. Consequently, future work is required to investigate an extension of the method that allows generating code from the model, but with limitations so that code injections as described in the use case are not possible. Another disadvantage of the stereotypes is the potential maintenance effort if interfaces are proprietary or change rapidly, e.g., due to configuration changes or the replacement of machines. Closely related, for large projects the complexity of the resulting models might be very high, including potential errors in the model or ambiguous associations, which might be very hard to find and thus lead to additional communication effort. Nevertheless, the shortcoming of a complex ramp-up might also turn into a benefit in the end due to the possibility of introducing model libraries containing well-defined models, leading to standardized parts that can be reused. Further, the method allows using the formalization as documentation of the implemented technologies, which improves the maintainability and extendability for various engineers. Additionally, with further investigations regarding model validation and model debugging features, errors in the semantics can be found and repaired without actually implementing the machine learning application. However, to use this efficiently, the integration into advanced model lifecycle management [55] might be necessary to allow collaborative working.
Due to the non-programming description of machine learning, the method is promising for increasing the communication among various disciplines. In particular, with the integration of the general-purpose language SysML and the intersection of CRISP-DM and MBSE, the heterogeneous communities are broadly supported, which favors the implementation of machine learning in industrial practice and supports shifting knowledge regarding machine learning within enterprises. Further, the method can be integrated into early product development due to the abstract definition, which allows foreseeing various data interfaces that might otherwise have been forgotten during the development. This potentially leads to increased accuracy of the machine learning applications and might reduce the number of failing machine learning projects, which is a well-known problem in industry [3]. In this section, the advantages and potential shortcomings of the method have been shown. However, a key advantage of the formalized knowledge has not been detailed yet. The machine-readable artifacts (models) can be used with model transformations to generate executable code, such as a Python script. Particularly, each _ML_ stereotype encodes the knowledge to describe a specific subtask, which corresponds to a function in a programming language, e.g., a date conversion. The function parameters are defined in the stereotype (mandatory parameters) or on the block (optional parameters). Since stereotypes have to be uniquely named, each can be mapped to a generic code template in a dedicated programming language, e.g., Python. The templates consist of fixed code and generic parts with placeholders, which are filled based on the model's attributes. The state diagram defines the execution order; all blocks are well-encapsulated functionalities; hence, each block can generate a single code cell in a Jupyter Notebook9. With the automatic derivation of executable machine learning code, the effort for documentation and implementation is reduced, potentially leading to fewer errors in the interpretation. In this respect, future work consists of implementing a proof of concept showing that a derivation and decomposition of formalized machine learning knowledge is beneficial.
Footnote 9: [https://ipython.org/notebook.html](https://ipython.org/notebook.html)
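As a rough sketch of such template-based generation (not the generator of this work), the following Python snippet fills generic code templates for two hypothetical stereotypes from model attributes and emits the resulting notebook cells in the order given by the workflow; all names and templates are assumptions for illustration.

```python
from string import Template

# One generic code template per stereotype; the placeholders are filled
# from the attributes of the block applying the stereotype.
TEMPLATES = {
    "DateConversion": Template(
        "$output = pd.to_datetime($input['$column']).dt.strftime('$format')\n"
    ),
    "Merge": Template("$output = pd.merge($left, $right, on=$merge_on)\n"),
}

def render(block):
    """Render the code cell for a single block of the workflow."""
    return TEMPLATES[block["stereotype"]].substitute(block["attributes"])

# Blocks in the execution order defined by the state diagram (assumed data).
workflow = [
    {"stereotype": "DateConversion",
     "attributes": {"output": "csv_1", "input": "csv_1",
                    "column": "date", "format": "%Y-%m-%d"}},
    {"stereotype": "Merge",
     "attributes": {"output": "merged", "left": "csv_1",
                    "right": "csv_2", "merge_on": "['date']"}},
]

notebook_cells = [render(block) for block in workflow]
print("".join(notebook_cells))
```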
### Implications from the User Study
The user study was conducted with two groups that are representative for using the method presented in this work in practice. The results show that the majority of the tasks were successfully accomplished. From a study perspective, the users could perform each task without additional guidance on the modeling method. Still, problems occurred with the user-interface of Papyrus, e.g., expanding a group of elements to select a _block_ element for modeling. However, learning effects could be observed among the tasks on both CS and ME.
The assessment of the NASA-TLX showed that the mental demand for each task is comparable. A similar observation can be made for the level of frustration, which is slightly lower for the first task. Contrary to expectations, the participants perceived the effort as decreasing. With regard to the tasks, the effort for modeling should have been higher than for understanding a model. Nevertheless, it can be implied that, in terms of task load, both CS and ME can use the method without being overly strained.
From a usability perspective, the method achieved good results. Users rated especially the consistency of the method as very high. Comparing the method with others using the percentile curve, it achieved a rank of over 66.
However, these first positive results could be due to some shortcomings in the study design. In particular, the need to rate the method rather than Papyrus might have had a larger impact on the study than expected: the users' usability impression is likely tied more to the experience with Papyrus than to the method, although they were told beforehand to focus on the method. In this respect, a paper prototype where users move paper snippets on a table might have been more valuable. Furthermore, most of the participants reported their data science knowledge as low and yet were able to explain what happens in a given model or to create a model building block themselves. However, modeling their own data science application might not be possible, as their general understanding of data science is too low.
Nevertheless, it can be seen as a result of the study that the modeled knowledge can be used as a communication medium. Therefore, it should also be possible for non-data scientists to perform a plausibility analysis, as they can gain an understanding of the process without understanding programming code.
However, this would need to be evaluated in a further study. Similarly, an evaluation of the results with the help of a larger study should be sought.
## 7 Conclusions
In this work, the definition of machine learning tasks by means of SysML is depicted. Particularly, the metamodel of SysML is extended with stereotypes to reflect functions from the machine learning domain. Additionally, the CRISP-DM methodology is used as the basis for the structure of the models to organize the development with specific viewpoints. The method is evaluated in a case study showing the integration of machine learning task definitions in a cyber-physical system, as well as in a second case study where a workflow engine is integrated to interrupt a 3D printing task if the desired result cannot be achieved. Additionally, a user study is performed to collect an overview of the perceived workload using the NASA-TLX questionnaire and to check the usability of the method using the SUS questionnaire. The findings of the evaluation showed that the entire workflow of a machine learning solution can be reflected using SysML. Additionally, the connection between the domain of (mechanical/electrical) engineers and machine learning experts is shown. With the MBSE integration and the involvement of various stakeholders from different disciplines, an improvement in communication is expected, as indicated by the user study. The user study implies that non-experts in data science can use the method as a medium of communication. Future work consists of extending the method to automatically derive executable machine learning code acting as a basis for the implementation. In addition, a case study must be conducted to determine the minimum level of detail required to sufficiently define a machine learning model that can be used for communication, and thus guide the implementation of the executable code through the formalization of the machine learning model.
## References
* [1] J. Trauer, S. Schweigert-Recksiek, L. Onuma Okamoto, K. Spreitzer, M. Mortl, M. Zimmermann, in _Balancing Innovation and Operation_ (The Design Society, 2020). [https://doi.org/10.35199/NORDDESIGN2020.46](https://doi.org/10.35199/NORDDESIGN2020.46)
* [2] M. Hesenius, N. Schwenzfeier, O. Meyer, W. Koop, V. Gruhn, in _2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE)_ (IEEE, Montreal, QC, Canada, 2019), pp. 35-41. [https://doi.org/10.1109/RAISE.2019.00014](https://doi.org/10.1109/RAISE.2019.00014)
* [3] S. Radler, E. Rigger, A Survey on the Challenges Hindering the Application of Data Science, Digital Twins and Design Automation in Engineering Practice. Proceedings of the Design Society **2**, 1699-1708 (2022). [https://doi.org/10.1017/pds.2022.172](https://doi.org/10.1017/pds.2022.172)
* [4] S. Radler, E. Rigger, in _Product Lifecycle Management Enabling Smart X_, vol. 594 (Springer International Publishing, Cham, 2020), pp. 680-694. [https://doi.org/10.1007/978-3-030-62807-9_54](https://doi.org/10.1007/978-3-030-62807-9_54)
* [5] P. Stanula, A. Ziegenbein, J. Metternich, Machine learning algorithms in production: A guideline for efficient data source selection. Procedia CIRP **78**, 261-266 (2018). [https://doi.org/10.1016/j.procir.2018.08.177](https://doi.org/10.1016/j.procir.2018.08.177)
* [6] OMG. OMG Systems Modeling Language (OMG SysML(tm), Version 1.6) (2019). URL [http://www.omg.org/spec/SysML/1.6/PDF/](http://www.omg.org/spec/SysML/1.6/PDF/)
* [7] J.A. Estefan, Survey of model-based systems engineering (MBSE) methodologies. Incose MBSE Focus Group **25**, 1-70 (2007). URL [https://edisciplinas.usp.br/pluginfile.php/5348231/mod_resource/content/1/MBSE_Methodology_Survey_RevB.pdf](https://edisciplinas.usp.br/pluginfile.php/5348231/mod_resource/content/1/MBSE_Methodology_Survey_RevB.pdf)
* [8] R. Wirth, J. Hipp, CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining (2000)
* [9] L. Burgueno, A. Burdusel, S. Gerard, M. Wimmer, in _Proceedings of the 22nd International Conference on Model Driven Engineering Languages and Systems_ (IEEE Press, 2021), MODELS '19, pp. 168-169. [https://doi.org/10.1109/MODELS-C.2019.00028](https://doi.org/10.1109/MODELS-C.2019.00028). URL [https://doi.org/10.1109/MODELS-C.2019.00028](https://doi.org/10.1109/MODELS-C.2019.00028)
* [10] L. Burgueno, M. Kessentini, M. Wimmer, S. Zschaler, MDE Intelligence 2021: 3rd Workshop on Artificial Intelligence and Model-Driven Engineering. 2021 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C) pp. 148-149 (2021). [https://doi.org/10.1109/MODELS-C53483.2021.00026](https://doi.org/10.1109/MODELS-C53483.2021.00026)
* [11] J. Stark, _PLM Implementation Strategy and Plan_ (Springer International Publishing, Cham, 2016), pp. 555-565. [https://doi.org/10.1007/978-3-319-24436-5_29](https://doi.org/10.1007/978-3-319-24436-5_29)
* [12] S. Radler, E. Rigger, J. Mangler, S. Rinderle-Ma, in _2022 IEEE 20th International Conference on Industrial Informatics (INDIN)_ (IEEE, Perth, Australia, 2022), pp. 546-551. [https://doi.org/10.1109/INDIN51773.2022.9976107](https://doi.org/10.1109/INDIN51773.2022.9976107)
* [13] A.M. Madni, M. Sievers, Model-Based Systems Engineering: Motivation, Current Status, and Needed Advances. Disciplinary Convergence in Systems Engineering Research pp. 311-325 (2018)
* [14] T. Huldt, I. Stenius, State-of-practice survey of model-based systems engineering. Systems Engineering **22**(2), 134-145 (2019). [https://doi.org/10.1002/sys.21466](https://doi.org/10.1002/sys.21466)
* [15] K. Henderson, A. Salado, Value and benefits of model-based systems engineering (MBSE): Evidence from the literature. Systems Engineering **24**(1), 51-66 (2021). [https://doi.org/10.1002/sys.21566](https://doi.org/10.1002/sys.21566)
* [16] M. Brambilla, J. Cabot, M. Wimmer, _Model-Driven Software Engineering in Practice_. Synthesis Lectures on Software Engineering (Springer International Publishing, Cham, 2017). [https://doi.org/10.1007/978-3-031-02549-5](https://doi.org/10.1007/978-3-031-02549-5)
* [17] L. Kuzniarz, M. Staron, C. Wohlin, in _Proceedings. 12th IEEE International Workshop on Program Comprehension, 2004._ (IEEE, Bari, Italy, 2004), pp. 14-23. [https://doi.org/10.1109/WPC.2004.1311043](https://doi.org/10.1109/WPC.2004.1311043)
* [18] A. Albers, C. Zingel, in _Smart Product Engineering_, ed. by M. Abramovici, R. Stark (Springer, Berlin, Heidelberg, 2013), Lecture Notes in Production Engineering, pp. 83-92. [https://doi.org/10.1007/978-3-642-30817-8_9](https://doi.org/10.1007/978-3-642-30817-8_9)
* [19] J. Holt, S. Perry, _SysML for Systems Engineering: A Model-Based Approach_. Computing and Networks (Institution of Engineering and Technology, 2013). URL [https://books.google.de/books?id=JIRHAgAAQBAJ](https://books.google.de/books?id=JIRHAgAAQBAJ)
* [20] L.N. Sanchez-Pinto, Y. Luo, M.M. Churpek, Big Data and Data Science in Critical Care. Chest **154**(5), 1239-1248 (2018). [https://doi.org/10.1016/j.chest.2018.04.037](https://doi.org/10.1016/j.chest.2018.04.037)
* [21] W. Grossmann, S. Rinderle-Ma, _Fundamentals of Business Intelligence_ (Springer, 2015)
* [22] F. Provost, T. Fawcett, Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data **1**(1), 51-59 (2013). [https://doi.org/10.1089/big.2013.1508](https://doi.org/10.1089/big.2013.1508)
* [23] J.G. Carbonell, R.S. Michalski, T.M. Mitchell, in _Machine Learning_, ed. by R.S. Michalski, J.G. Carbonell, T.M. Mitchell (Morgan Kaufmann, San Francisco (CA), 1983), pp. 3-23. [https://doi.org/10.1016/B978-0-08-051054-5.50005-4](https://doi.org/10.1016/B978-0-08-051054-5.50005-4)
* [24] J. Koza, F. Bennett III, D. Andre, M. Keane, Automated Design Of Both The Topology And Sizing Of Analog Electrical Circuits Using Genetic Programming (1998). [https://doi.org/10.1007/978-94-009-0279-4_9](https://doi.org/10.1007/978-94-009-0279-4_9)
* [25] U. Shafique, H. Qaiser, A Comparative Study of Data Mining Process Models (KDD, CRISP-DM and SEMMA). International Journal of Innovation and Scientific Research **12**(1), 217-222 (2014). URL [http://www.ijisr.issr-journals.org/abstract.php?article=IJISR-14-281-04](http://www.ijisr.issr-journals.org/abstract.php?article=IJISR-14-281-04)
* [26] U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, The KDD process for extracting useful knowledge from volumes of data. Communications of the ACM **39**(11), 27-34 (1996)
* [27] in _Data Science - Analytics and Applications_, ed. by P. Haber, T. Lampoltshammer, M. Mayr, K. Plankensteiner (Springer Fachmedien, Wiesbaden, 2021), pp. 38-43. [https://doi.org/10.1007/978-3-658-32182-6_7](https://doi.org/10.1007/978-3-658-32182-6_7)
* [28] A. Azevedo, M.F. Santos, KDD, SEMMA and CRISP-DM: A Parallel Overview. IADIS European Conference Data Mining pp. 182-185 (2008)
* [29] T. Hartmann, A. Moawad, F. Fouquet, Y.L. Traon, The Next Evolution of MDE: A Seamless Integration of Machine Learning into Domain Modeling. 2017ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems (MODELS) pp. 180-197 (2017). [https://doi.org/10.1109/MODELS.2017.32](https://doi.org/10.1109/MODELS.2017.32)
* [30] F. Fouquet, G. Nain, B. Morin, E. Daubert, O. Barais, N. Plouzeau, J.M. Jezequel, in _Models 2012_ (2012). URL [https://hal.inria.fr/hal-00714558](https://hal.inria.fr/hal-00714558)
* [31] E. Kusmenko, S. Pavlitskaya, B. Rumpe, S. Stuber, in _2019 34th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW)_ (IEEE, San Diego, CA, USA, 2019), pp. 126-133. [https://doi.org/10.1109/ASEW.2019.00042](https://doi.org/10.1109/ASEW.2019.00042)
* [32] E. Kusmenko, S. Nickels, S. Pavlitskaya, B. Rumpe, T. Timmermanns, in _2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems (MODELS)_ (2019), pp. 283-293. [https://doi.org/10.1109/MODELS.2019.00012](https://doi.org/10.1109/MODELS.2019.00012)
* [33] B. Rumpe, K. Holldobler, R. Aachen (eds.), _MontiCore 5 Language Workbench_, edition 2017 edn. No. Band 32 in Aachener Informatik-Berichte, Software-Engineering (Shaker Verlag, Aachen, 2017)
* [34] A. Bhattacharjee, Y. Barve, S. Khare, S. Bao, Z. Kang, A. Gokhale, T. Damiano, in _2019 IEEE International Conference on Big Data (Big Data)_ (2019), pp. 1607-1612. [https://doi.org/10.1109/BigData47090.2019.9006518](https://doi.org/10.1109/BigData47090.2019.9006518)
* [35] A. Moin, M. Challenger, A. Badii, S. Gunnemann, A Model-Driven approach to machine learning and software modeling for the IoT: Generating full source code for smart Internet of Things (IoT) services and cyber-physical systems (CPS). Software and Systems Modeling (2022). [https://doi.org/10.1007/s10270-021-00967-x](https://doi.org/10.1007/s10270-021-00967-x)
* [36] N. Harrand, F. Fleurey, B. Morin, K.E. Husa, in _Proceedings of the ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems_ (Association for Computing Machinery, New York, NY, USA, 2016), MODELS '16, pp. 125-135. [https://doi.org/10.1145/2976767.2976812](https://doi.org/10.1145/2976767.2976812)
* Systems Engineering Vision 2025. Tech. rep., INCOSE, San Diego, California (2014)
* [38] State of Data Science Report 2022. Tech. rep. (2022). URL [https://www.anaconda.com/state-of-data-science-report-2022](https://www.anaconda.com/state-of-data-science-report-2022)
* [39] T. Weikliens, _Variant Modeling with SysML_ (Leanpub, 2014). URL [https://leanpub.com/vamos](https://leanpub.com/vamos)* [40] K. Nygaard, Basic concepts in object oriented programming. ACM SIGPLAN Notices **21**(10), 128-132 (1986). [https://doi.org/10.1145/323648.323751](https://doi.org/10.1145/323648.323751)
* [41] J. Mangler, S. Rinderle-Ma, in _Proceedings of the BPM Demo Sessions 2014 Colocated with the 12th International Conference on Business Process Management (BPM 2014), Eindhoven, The Netherlands, September 10, 2014_, _CEUR Workshop Proceedings_, vol. 1295, ed. by L. Limonad, B. Weber (CEUR-WS.org, 2014), p. 51. URL [http://ceur-ws.org/Vol-1295/paper22.pdf](http://ceur-ws.org/Vol-1295/paper22.pdf)
* [42] J. Mangler, S. Rinderle-Ma. Cloud Process Execution Engine: Architecture and Interfaces (2022). [https://doi.org/10.48550/arXiv.2208.12214](https://doi.org/10.48550/arXiv.2208.12214)
* [43] J. Nielsen, _Usability Engineering_ (Academic Press, Boston, 1993)
* [44] L. Faulkner, Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers **35**(3), 379-383 (2003). [https://doi.org/10.3758/BF03195514](https://doi.org/10.3758/BF03195514)
* [45] S.G. Hart, L.E. Staveland, in _Advances in Psychology_, _Human Mental Workload_, vol. 52, ed. by P.A. Hancock, N. Meshkati (North-Holland, 1988), pp. 139-183. [https://doi.org/10.1016/S0166-4115](https://doi.org/10.1016/S0166-4115)(08)62386-9
* [46] S.G. Hart, Nasa-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting **50**(9), 904-908 (2006). [https://doi.org/10.1177/154193120605000909](https://doi.org/10.1177/154193120605000909)
* [47] J. Brooke, SUS: A 'Quick and Dirty' Usability Scale. Usability Evaluation In Industry pp. 207-212 (1996). [https://doi.org/10.1201/9781498710411-35](https://doi.org/10.1201/9781498710411-35)
* [48] P.A. HANCOCK, E. NAJMEDIN MESHKATI, Human mental workload. Human mental workload **52**, XVI-382 p (1988)
* [49] A.D. Prabaswari, C. Basumerda, B.W. Utomo, The Mental Workload Analysis of Staff in Study Program of Private Educational Organization. IOP Conference Series: Materials Science and Engineering **528**(1), 012,018 (2019). [https://doi.org/10.1088/1757-899X/528/1/012018](https://doi.org/10.1088/1757-899X/528/1/012018)
* [50] J. Blattgerste, J. Behrends, T. Pfeiffer, in _Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments_ (Association for Computing Machinery, New York, NY, USA, 2022), PETRA '22, pp. 237-246. [https://doi.org/10.1145/3529190.3529216](https://doi.org/10.1145/3529190.3529216)
* MeasuringU (2018). URL [https://measuringu.com/interpret-sus-score/](https://measuringu.com/interpret-sus-score/)
* [52] A. Bangor, P.T. Kortum, J.T. Miller, An Empirical Evaluation of the System Usability Scale. International Journal of Human-Computer Interaction **24**(6),574-594 (2008). [https://doi.org/10.1080/10447310802205776](https://doi.org/10.1080/10447310802205776)
* [53] A. Bangor, P. Kortum, J. Miller, Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies **4**(3), 114-123 (2009)
* [54] J. Sauro, J.R. Lewis, _Quantifying the User Experience: Practical Statistics for User Research_ (Morgan Kaufmann, 2016)
* [55] A. Fisher, M. Nolan, S. Friedenthal, M. Loeffler, M. Sampson, M. Bajaj, L. Van-Zandt, K. Hovey, J. Palmer, L. Hart, 3.1.1 Model Lifecycle Management for MBSE. INCOSE International Symposium **24**(1), 207-229 (2014). [https://doi.org/10.1002/j.2334-5837.2014.tb03145.x](https://doi.org/10.1002/j.2334-5837.2014.tb03145.x) | **Motivation:** Systems Engineering is a transdisciplinary and integrative approach, that enables the design, integration, and management of complex systems in systems engineering life cycles. In order to use data generated by cyber-physical systems (CPS), systems engineers cooperate with data scientists, to develop customized mechanisms for data extraction, data preparation, and/or data transformation. While interfaces in CPS systems may be generic, data generated for custom applications must be transformed and merged in specific ways so that insights into the data can be interpreted by system engineers or dedicated applications to gain additional insights. To foster efficient cooperation between systems engineers and data scientists, the systems engineers have to provide a fine-grained specification that describes (a) all parts of the CPS, (b) how the CPS might interact, (c) what data is exchanged between them, (d) how the data interrelates, and (e) what are the requirements and goals of the data extraction. A data scientist can then iteratively(including further refinements of the specification) prepare the necessary custom machine-learning models and components. | Write a summary of the passage below. | 217 |
Waheed U. Bajwa and Dustin G. Mixon
Preliminary versions of some of the results reported in this paper were presented at the \\(50\\)th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 1-5, 2012 [1]. WUB is with the Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854 (Email: [email protected]). DGM is with the Department of Mathematics and Statistics, Air Force Institute of Technology, Dayton, OH 45433 (Email: [email protected]).The research of WUB is supported in part by the National Science Foundation under grant CCF-1218942 and by the Army Research Office under grant W911NF-14-1-0295. The research of DGM is supported in part by the AFOSR Young Investigator Research Program award, by the National Science Foundation under grant DMS-1321779, and by the Air Force Office of Scientific Research under grant F4FGA05076J002.
######
#### I-B2 Spectral Unmixing in Hyperspectral Remote Sensing
Hyperspectral remote sensing has a number of civilian and defense applications, which typically involve identifying remote objects from their spectral signatures. Because of the low spatial resolution of hyperspectral imaging systems in most of these applications, individual hyperspectral pixels tend to comprise multiple objects (e.g., soil and vegetation). Spectral unmixing is the problem of decomposition of a \"mixed\" hyperspectral pixel into its constituent objects. In order to pose this spectral unmixing problem in terms of the subspace unmixing problem studied in this paper, we need two assumptions that are often invoked in the literature. First, the spectral variability of each object in different scenes can be captured through a low-dimensional subspace. Second, the mixture of spectra of different objects into a hyperspectral pixel can be described by a linear model. The spectral unmixing problem under these assumptions is the subspace unmixing problem under the PS3 model, with \\(y\\in\\mathbb{R}^{D}\\) denoting the \\(D\\)-dimensional hyperspectral pixel of an imaging system with \\(D\\) spectral bands, \\(\\{\\mathcal{S}_{i}\\subset\\mathbb{R}^{D}\\}_{i=1}^{N}\\) denoting the low-dimensional subspaces of \\(\\mathbb{R}^{D}\\) associated with the spectra of individual objects, \\(N\\) denoting the total number of objects of interest, and \\(y\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}+\\text{noise}\\) with \\(n:=|\\mathcal{A}|\\ll N\\) since only a small number of objects are expected to contribute to a single hyperspectral pixel.
#### I-B3 Group Model Selection in High-Dimensional Statistics
Model selection in statistical data analysis is the problem of learning the relationship between the samples of a dependent or response variable (e.g., the malignancy of a tumor, the health of a network) and the samples of independent or predictor variables (e.g., the expression data of genes, the traffic data in the network). There exist many applications in statistical model selection where the implication of a single predictor in the response variable implies presence of other related predictors in the true model. In such situations, the problem of model selection is often reformulated in a \"group\" setting. This problem of group model selection in high-dimensional settings, where the number of predictors tends to be much larger than the number of samples, can also be posed as the subspace unmixing problem under the PS3 model. In this context, \\(y\\in\\mathbb{R}^{D}\\) denotes the \\(D\\)-dimensional response variable with \\(D\\) representing the total number of samples, \\(N\\) denotes the total number of groups of predictors that comprise the design matrix, \\(\\{\\mathcal{S}_{i}\\subset\\mathbb{R}^{D}\\}_{i=1}^{N}\\) denotes the low-dimensional subspaces of \\(\\mathbb{R}^{D}\\) spanned by each of the groups of predictors, and \\(y\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}+\\text{noise}\\) with \\(\\mathcal{A}\\) denoting the indices of the groups of predictors that truly affect the response variable.
#### I-B4 Sparsity Pattern Recovery in Block-Sparse Compressed Sensing
Compressed sensing is an alternative sampling paradigm for signals that have sparse representations in some orthonormal bases. In recent years, the canonical compressed sensing theory has been extended to the case of signals that have block-sparse representations in some orthonormal bases. Sparsity pattern recovery in block-sparse compressed sensing is the problem of identifying the nonzero \"block coefficients\" of the measured signal. The problem of sparsity pattern recovery in block-sparse compressed sensing, however, can also be posed as the subspace unmixing problem under the PS3 model. In this context, \\(y\\in\\mathbb{R}^{D}\\) denotes the \\(D\\)-dimensional measurement vector with \\(D\\) being the total number of measurements, \\(N\\) denotes the total number of blocks of coefficients, \\(\\{\\mathcal{S}_{i}\\subset\\mathbb{R}^{D}\\}_{i=1}^{N}\\) denotes the low-dimensional subspaces of \\(\\mathbb{R}^{D}\\) spanned by the \"blocks of columns\" of the composite matrix \\(\\Phi\\Psi\\) with \\(\\Phi\\) being the measurement matrix and \\(\\Psi\\) being the sparsifying basis, and \\(y\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}+\\text{noise}\\) with \\(\\mathcal{A}\\) denoting the indices of the nonzero blocks of coefficients of the signal in \\(\\Psi\\).
### _Relationship to Prior Work_
Since the subspace unmixing problem under the PS3 model has connections to a number of application areas, it is invariably related to prior works in some of those areas. In the context of multiuser detection, the work that is most closely related to ours is [3]. However, the setup of [3] can be considered a restrictive version of the two random signal generation models considered in here. Roughly speaking, the signal generation model in [3] can be described as a _randomly-modulated PS3 model_, \\(y\\in\\sum_{i\\in\\mathcal{A}}\\varepsilon_{i}\\mathcal{S}_{i}+\\text{noise}\\) with \\(\\{\\varepsilon_{i}\\}_{i=1}^{N}\\) being independent and identically distributed isotropic random variables. In addition, the results of [3] do not allow for an explicit control of the _family-wise error rate_ (FWER) and also rely on parameters that cannot be easily translated into properties of the subspaces alone. Finally, [3] relies on a convex optimization procedure for multiuser detection that has superlinear (in \\(D\\) and \\(N\\)) computational complexity.
In the context of group model selection and block-sparse compressed sensing, our work can be considered related to [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. None of these works, however, help us understand the problem of subspace unmixing under the PS3 model in its general form. Some of these works, when translated into the general subspace unmixing problem under the PS3 model, consider only random subspaces [5, 6, 7, 15], study subspaces generated through a Kronecker operation [11, 12, 13, 14, 15, 16, 17, 18], or ignore additive noise in the observation [19]. Some other works that do not translate into \\(\\mathcal{X}_{N}\\) being a collection of random/Kronecker-structured subspaces suggest that, fixing the dimensions of subspaces, the total number of active subspaces can at best scale as \\(O\\left(\\sqrt{D}\\right)\\)[7, 8, 9, 10]--the so-called \"square-root bottleneck.\" Further, many of these works either focus on computational approaches that have superlinear complexity [4, 5, 6, 7, 10, 13, 17, 19] or suggest that low-complexity approaches suffer from the \"dynamic range of active subspaces\" [9, 14]. Finally, none of these works help control the FWER of the subspace unmixing problem.
We conclude this discussion by pointing out that the subspace unmixing problem under the PS3 model is effectively a solved problem for the case of one-dimensional subspaces (\\(d=1\\)). Notable works in this regard that neither consider random subspaces nor suffer from the square-root bottleneck include [20, 21, 22, 23, 24]. Among these works, [20, 21, 22] focus on computational approaches with superlinear complexity and do not facilitate control of the FWER, while [23, 24] analyze a low-complexity approach. Despite the fact that [20, 21, 22, 23, 24] do not explicitly address the subspace unmixing problem, one of the main insights offered by these works is that the square-root bottleneck in high-dimensional problems can often be broken through the use of appropriate random signal models. We leverage this insight in the following and rely on two random signal generation models for our analysis that can be considered natural generalizations of the ones in [20, 21, 22, 23, 24] for multi-dimensional (\\(d>1\\)) subspaces.
### _Our Contributions_
The main contributions of this paper are as follows. First, it formally puts forth the problem of subspace unmixing under the PS3 model that provides a mathematically unified view of many problems studied in other application areas. Second, it presents a low-complexity solution to the problem of unmixing under the PS3 model that has linear complexity in \\(D\\), \\(N\\), and the dimensions of the individual subspaces. Third, it presents comprehensive analyses of the proposed solution, termed _marginal subspace detection_ (MSD), under two random signal generation models that, while assuming the contributions of different subspaces to the observation to be random, do not require the subspaces themselves to be random. In particular, the resulting analyses rely on geometric measures that can be explicitly computed in polynomial time and provide means of controlling the FWER of the subspace unmixing problem at any level \\(\\alpha\\in[0,1]\\). Finally, the analyses under both signal generation models neither suffer from the square-root bottleneck nor get affected by the dynamic range of the active subspaces. We conclude by pointing out that a preliminary version of this work appeared in [1]. However, that work was focused primarily on group model selection, it did not account for noise in the observation, and the ensuing analysis lacked details in terms of the metrics of multiple hypothesis testing.
### _Notation and Organization_
The following notational convention is used throughout the rest of this paper. We use the standard notation \\(:=\\) to denote definitions of terms. The notation \\(|\\cdot|\\) is used for both the cardinality of a set and the absolute value of a real number. Similarly, \\(\\|\\cdot\\|_{2}\\) is used for both the \\(\\ell_{2}\\)-norm of a vector and the operator 2-norm of a matrix. The notation \\(\\setminus\\) denotes the set difference operation. Finally, we make use of the following \"_Big-O_\" notation for scaling relations: \\(f(n)=O(g(n))\\) if \\(\\exists c_{o}>0,n_{o}:\\forall n\\geq n_{o},f(n)\\leq c_{o}g(n)\\), \\(f(n)=\\Omega(g(n))\\) (alternatively, \\(f(n)\\succeq g(n)\\)) if \\(g(n)=O(f(n))\\), and \\(f(n)=\\Theta(g(n))\\) if \\(g(n)=O(f(n))\\) and \\(f(n)=O(g(n))\\).
The rest of this paper is organized as follows. In Sec. II, we formulate the problem of subspace unmixing under the PS3 model, put forth the two random signal generation models studied in this paper, define the relevant metrics used to measure the performance of subspace unmixing algorithms, and introduce different geometric measures of the collection of subspaces involved in the subspace unmixing problem. In Sec. III, we describe our proposed algorithm for subspace unmixing under the PS3 model. In Sec. IV, we provide an analysis of the proposed algorithm under one of the random signal generation models and discuss the significance of our results in the context of related results in the literature on group model selection and block-sparse compressed sensing. In Sec. V, we extend the analysis in Sec. IV to provide the most general results for unmixing under the PS3 model. In Sec. VI, we present some numerical results to support our analyses and we finally conclude in Sec. VII.
## II Problem Formulation
Consider the \\(D\\)-dimensional Euclidean space \\(\\mathbb{R}^{D}\\) and the Grassmann manifold \\(\\mathfrak{G}(d,D)\\), which denotes the collection of all \\(d\\)-dimensional subspaces of \\(\\mathbb{R}^{D}\\). Next, consider a collection of \\(N\\gg D/d\\gg 1\\) subspaces given by \\(\\mathcal{X}_{N}=\\left\\{\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\right\\}\\) such that \\(\\mathcal{S}_{1},\\ldots,\\mathcal{S}_{N}\\) are pairwise disjoint: \\(\\mathcal{S}_{i}\\cap\\mathcal{S}_{j}=\\{0\\}\\ \\forall i,j=1,\\ldots,N,i\
eq j\\). Heuristically, this means each of the subspaces in \\(\\mathcal{X}_{N}\\) is low-dimensional and, collectively, the subspaces can potentially \"fill\" the ambient space \\(\\mathbb{R}^{D}\\). The fundamental assumptions in the problem of subspace unmixing under the parsimonious subspace-sum (PS3) model considered in this paper are that only a small number \\(n<D/d\\ll N\\) of the subspaces are active at any given instance and the observation \\(y\\in\\mathbb{R}^{D}\\) corresponds to a noisy version of an \\(x\\in\\mathbb{R}^{D}\\) that lies in the sum of the active subspaces. Mathematically, we can formalize these assumptions by defining \\(\\mathcal{A}=\\{i:\\mathcal{S}_{i}\\in\\mathcal{X}_{N}\\text{ is active}\\}\\), writing \\(x\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}\\), and stating that the observation \\(y=x+\\eta\\), where \\(\\eta\\in\\mathbb{R}^{D}\\) denotes noise in the observation. For the sake of this exposition, we assume \\(\\eta\\) to be either bounded energy, deterministic error, i.e., \\(\\|\\eta\\|_{2}<\\epsilon_{\\eta}\\), or independent and identically distributed (i.i.d.) Gaussian noise with variance \\(\\sigma^{2}\\), i.e., \\(\\eta\\sim\\mathcal{N}(0,\\sigma^{2}I)\\).
The final detail we need in order to complete formulation of the problem of subspace unmixing is a mathematical model for generation of the \"noiseless signal\" \\(x\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}\\). In this regard, we first assume the following probabilistic model for the _activity pattern_ of the underlying subspaces:
* _Random Activity Pattern:_ The set of indices of the active subspaces \\(\\mathcal{A}\\) is a random \\(n\\)-subset of \\(\\{1,\\ldots,N\\}\\) with \\(\\Pr(\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\})=1/\\binom{N}{n}\\).
Next, we state the most-general generative model, termed _random directions model_, for \\(x\\) studied in this paper.
* _Random Directions Model:_ Conditioned on the random activity pattern \\(\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\}\\), the noiseless signal \\(x\\) can be expressed as \\(x:=\\sum_{j=1}^{n}x_{i_{j}}\\). Next, define an \\(n\\)-tuple of (unit) direction vectors as \\[\\mathfrak{X}^{n}:=\\big{(}x_{i_{1}}/\\|x_{i_{1}}\\|_{2},\\ldots,x_{i_{n}}/\\|x_{i_{ n}}\\|_{2}\\big{)}\\in\\mathfrak{B}^{n}:=(\\mathbb{S}^{D-1}\\cap\\mathcal{S}_{i_{1}}) \\times\\cdots\\times(\\mathbb{S}^{D-1}\\cap\\mathcal{S}_{i_{n}}),\\] where \\(\\mathbb{S}^{D-1}\\) denotes the unit sphere in \\(\\mathbb{R}^{D}\\). Then, \\(\\mathfrak{X}^{n}\\) is drawn independently of \\(\\mathcal{A}\\) from \\(\\mathfrak{B}^{n}\\) according to a product probability measure \\(\\lambda_{\\mathfrak{B}^{n}}\\) on \\(\\mathfrak{B}^{n}\\); that is, for all Borel sets \\(B^{n}\\subset\\mathfrak{B}^{n}\\), we have \\[\\Pr(\\mathfrak{X}^{n}\\in B^{n}|\\mathcal{A})=\\Pr(\\mathfrak{X}^{n}\\in B^{n})= \\lambda_{\\mathfrak{B}^{n}}(B^{n}).\\]
Given this random directions generative model, the goal of subspace unmixing in this paper is to identify the set of indices of active subspaces \\(\\mathcal{A}\\) using knowledge of the collection of subspaces \\(\\mathcal{X}_{N}\\) and the noisy observation \\(y\\in\\mathbb{R}^{D}\\). In particular, our focus is on unmixing solutions with linear (in \\(d\\), \\(N\\), and \\(D\\)) computational complexity.
A few remarks are in order now regarding the stated assumptions and signal generation model. First, the assumption of pairwise disjointness of the subspaces is much weaker than the assumption of linear independence of the subspaces, which is typically invoked in the literature on subspace-based information processing [25, 2].2 In particular, while pairwise disjointness implies pairwise linear independence, it does not preclude the possibility of an element in one subspace being representable through a linear combination of elements in two or more subspaces. Second, the rather mild assumption on the randomness of the activity pattern can be interpreted as the lack of a priori information concerning the activity pattern of subspaces. Third, unlike works such as [5, 6, 7, 15] in the literature on group model selection and block-sparse compressed sensing, the random directions model does not assume that the collection of subspaces \\(\\mathcal{X}_{N}\\) are drawn randomly from \\(\\mathfrak{G}(d,D)\\). Rather, \\(\\mathcal{X}_{N}\\) can be any arbitrary (random or deterministic) collection of subspaces and the model makes a significantly weaker assumption that the contributions of active subspaces to the observation \\(y\\) point in random directions that are independent of the indices of active subspaces. It is worth noting here that the random directions model is one of the key reasons our analysis will be able to break the square-root bottleneck for arbitrary collections of subspaces that satisfy certain geometric properties (cf. Sec. IV-C and Sec. V). And while the motivation for this model comes from the existing literature on compressed sensing and model selection [19, 20, 21, 22], the algorithmic and analytical approaches used in here as well as the nature of the final results are fundamentally different from earlier works. In particular, while our work allows \\(\\lambda_{\\mathfrak{B}^{n}}\\) to be any arbitrary product probability measure, prior works such as [19, 20, 21, 22] provide results for a significantly restrictive class of product probability measures.
Although the random directions model is adequate for the problem of unmixing under the PS3 model, our forthcoming analysis will also require the description of an alternative generative model for the noiseless signal \\(x\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}\\). The purpose of the alternative model, which we term _fixed mixing bases model_, is twofold. First, it will turn out that results derived under the (seemingly restrictive) fixed mixing bases model can be generalized for the random directions model in a straightforward manner (cf. Sec. V). Second, despite the somewhat specialized nature of the fixed mixing bases model, it does arise explicitly in application areas such as group model selection and block-sparse compressed sensing in which the contribution of each subspace is explicitly representable using a fixed orthonormal basis. Formally, the fixed mixing bases model has the following description.
* _Fixed Mixing Bases Model:_ Each subspace \\(\\mathcal{S}_{i}\\) in the collection \\(\\mathcal{X}_{N}\\) is associated with an orthonormal basis \\(\\Phi_{i}\\in\\mathbb{R}^{D\\times d}\\), i.e., \\(\\mathrm{span}(\\Phi_{i})=\\mathcal{S}_{i}\\) and \\(\\Phi_{i}^{\\mathrm{T}}\\Phi_{i}=I\\). Further, there is a deterministic but unknown collection of \"mixing coefficients\" \\(\\{\\theta_{j}\\in\\mathbb{R}^{d},j=1,\\ldots,n\\}\\) such that the noiseless signal \\(x\\) is given by \\(x:=\\sum_{j=1}^{n}x_{i_{j}}\\) with \\(x_{i_{j}}:=\\Phi_{i_{j}}\\theta_{j}\\in\\mathcal{S}_{i_{j}}\\), where the random activity pattern \\(\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\}\\).
Readers familiar with detection under the classical linear model [26, Sec. 7.7] will recognize the assumption \\(x=\\sum_{j=1}^{n}\\Phi_{i_{j}}\\theta_{j}\\) as a simple generalization of that setup for the problem of subspace unmixing. Notice that unlike the random directions model, the fixed mixing bases model has no randomness associated with the contribution \\(x_{i_{j}}\\) of each active subspace, which is a relaxation of related assumptions in the literature on model selection and compressed sensing [19, 20, 21, 22]. On the other hand, unlike the fixed mixing bases model, the random directions model is completely agnostic to the representation of the contribution \\(x_{i_{j}}\\) of each active subspace to the observation \\(y\\).
### _Performance Metrics_
In this paper, we address the problem of subspace unmixing under the PS3 model by transforming it into a multiple hypothesis testing problem (cf. Sec. III). While several measures of error have been used over the years in multiple hypothesis testing problems, the two most widely accepted ones in the literature remain the _family-wise error rate_ (FWER) and the _false discovery rate_ (FDR) [27]. Mathematically, if we use \\(\\widehat{\\mathcal{A}}\\subset\\{1,\\ldots,N\\}\\) to denote an estimate of the indices of active subspaces returned by an unmixing algorithm then controlling the FWER at level \\(\\alpha\\) in our setting means \\(\\texttt{FWER}:=\\Pr(\\widehat{\\mathcal{A}}\
ot\\subset\\mathcal{A})\\leq\\alpha\\). In words, \\(\\texttt{FWER}\\leq\\alpha\\) guarantees that the probability of declaring even one inactive subspace as active (i.e., a single _false positive_) is controlled at level \\(\\alpha\\). On the other hand, controlling the FDR in our setting controls the _expected proportion_ of inactive subspaces that are incorrectly declared as active by an unmixing algorithm [28].
While the FDR control is less stringent than the FWER control [28], our goal in this paper is control of the FWER under both signal generation models. This is because control of the FDR in the case of dependent test statistics, which will be the case in our setting (cf. Sec. III), is a challenging research problem [29]. Finally, once we control the FWER at some level \\(\\alpha\\), our goal is to have as large a fraction of active subspaces identified as active by the unmixing algorithm as possible. The results reported in the paper in this context will be given in terms of the _non-discovery proportion_ (NDP), defined as \\(\\texttt{NDP}:=\\frac{|\\mathcal{A}\\setminus\\bar{\\mathcal{A}}|}{|\\mathcal{A}|}\\).
### _Preliminaries_
In this section, we introduce some definitions that will be used throughout the rest of this paper to characterize the performance of our proposed approach to subspace unmixing under both the random directions model and the fixed mixing bases model. It is not too difficult to convince oneself that the \"hardness\" of subspace unmixing problem should be a function of the \"similarity\" of the underlying subspaces: _the more similar the subspaces in \\(\\mathcal{X}_{N}\\), the more difficult it should be to tell them apart_. In order to capture this intuition, we work with the similarity measure of _subspace coherence_ in this paper, defined as:
\\[\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j}):=\\max_{w\\in\\mathcal{S}_{i},z\\in \\mathcal{S}_{j}}\\frac{|\\langle w,z\\rangle|}{\\|w\\|_{2}\\|z\\|_{2}}, \\tag{1}\\]
where \\((\\mathcal{S}_{i},\\mathcal{S}_{j})\\) denote two \\(d\\)-dimensional subspaces in \\(\\mathbb{R}^{D}\\). Note that \\(\\gamma:\\mathfrak{G}(d,D)\\times\\mathfrak{G}(d,D)\\rightarrow[0,1]\\) simply measures cosine of the smallest principal angle between two subspaces and has appeared in earlier literature [10, 30]. In particular, given (any arbitrary) orthonormal bases \\(U_{i}\\) and \\(U_{j}\\) of \\(\\mathcal{S}_{i}\\) and \\(\\mathcal{S}_{j}\\), respectively, it follows that \\(\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j}):=\\|U_{i}^{\\mathrm{T}}U_{j}\\|_{2}\\). Since we are interested in unmixing _any_ active collection of subspaces, we will be stating our main results in terms of the _local \\(2\\)-subspace coherence_ and the _quadratic-mean subspace coherence_ of individual subspaces, defined in the following.
**Definition 1** (Local \\(2\\)-Subspace Coherence).: Given a collection of subspaces \\(\\mathcal{X}_{N}=\\big{\\{}\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\big{\\}}\\), the local \\(2\\)-subspace coherence of subspace \\(\\mathcal{S}_{i}\\) is defined as \\(\\gamma_{2,i}:=\\max_{j\
eq i,k\
eq i:j\
eq k}\\big{[}\\gamma(\\mathcal{S}_{i}, \\mathcal{S}_{j})+\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{k})\\big{]}\\).
**Definition 2** (Quadratic-Mean Subspace Coherence).: Given a collection of subspaces \\(\\mathcal{X}_{N}=\\big{\\{}\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\big{\\}}\\), the quadratic-mean subspace coherence of subspace \\(\\mathcal{S}_{i}\\) is defined as \\(\\gamma_{\\textsf{rms},i}:=\\sqrt{\\frac{1}{N-1}\\sum_{j\
eq i}\\gamma^{2}(\\mathcal{ S}_{i},\\mathcal{S}_{j})}\\).
In words, \\(\\gamma_{2,i}\\) measures closeness of \\(\\mathcal{S}_{i}\\) to the worst pair of subspaces in the collection \\(\\mathcal{X}_{N}^{-i}:=\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{i}\\}\\), while \\(\\gamma_{\\textsf{rms},i}\\) measures its closeness to the entire collection of subspaces in \\(\\mathcal{X}_{N}^{-i}\\) in terms of the quadratic mean. Note that \\(\\gamma_{\\textsf{rms},i}\\) is a generalization of the _mean square coherence_ defined in [31] to the case of multi-dimensional subspaces. It follows from the definition of subspace coherence that \\(\\gamma_{2,i}\\in[0,2]\\) and \\(\\gamma_{\\textsf{rms},i}\\in[0,1]\\), with \\(\\gamma_{2,i}=\\gamma_{\\textsf{rms},i}=0\\) if and only if every subspace in \\(\\mathcal{X}_{N}^{-i}\\) is orthogonal to \\(\\mathcal{S}_{i}\\), while \\(\\gamma_{2,i}=2\\) (resp., \\(\\gamma_{\\textsf{rms},i}=1\\)) if and only if two (resp., all) subspaces in \\(\\mathcal{X}_{N}^{-i}\\) are the same as \\(\\mathcal{S}_{i}\\). Because of our assumption of pairwise disjointness, however, we have that \\(\\gamma_{2,i}\\) (resp., \\(\\gamma_{\\textsf{rms},i}\\)) is strictly less than \\(2\\) (resp., 1) in this paper. We conclude our discussion of the local \\(2\\)-subspace coherence and the quadratic-mean subspace coherence by noting that both of them are trivially computable in polynomial time.
_Remark 1_.: Since \\(n\\) subspaces contribute to the observation, it is natural to ask whether one should utilize some measure of _local \\((n-1)\\)-subspace coherence_ in lieu of \\(\\gamma_{2,i}\\) and \\(\\gamma_{\\textsf{rms},i}\\) to analyze the problem of subspace unmixing; see, e.g, the related notion of _cumulative coherence_ for \\(d=1\\) in the literature on compressed sensing [32]. While this is a valid line of reasoning, measures such as local \\((n-1)\\)-subspace coherence cannot be explicitly computed in polynomial time. In contrast, \\(\\gamma_{2,i}\\) and \\(\\gamma_{\\textsf{rms},i}\\) are not only easily computable, but their use in our analysis also allows for linear scaling of the number of active subspaces for appropriate collections of subspaces.
The next definition we need to characterize the performance of subspace unmixing is _active subspace energy_.
**Definition 3** (Active Subspace Energy).: Given the set of indices of active subspaces \\(\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\}\\) and the noiseless signal \\(x=\\sum_{j=1}^{n}x_{i_{j}}\\) with \\(x_{i_{j}}\\in\\mathcal{S}_{i_{j}}\\), the energy of the \\(i_{j}\\)-th active subspace is defined as \\(\\mathcal{E}_{i_{j}}:=\\|x_{i_{j}}\\|_{2}^{2}\\).
In the case of the fixed mixing bases model, notice that \\(\\mathcal{E}_{i_{j}}\\equiv\\|\\theta_{j}\\|_{2}^{2}\\) due to the orthonormal nature of the mixing bases. Inactive subspaces of course contribute no energy to the observation, i.e., \\(\\mathcal{E}_{i}=0\\)\\(\\forall i\
ot\\in\\mathcal{A}\\). But it is important for us to specify the energy of active subspaces for subspace unmixing. Indeed, active subspaces that contribute too little energy to the final observation to the extent that they get buried in noise cannot be identified using any computational method.
Next, the low-complexity algorithm proposed in this paper requires an additional definition of _cumulative active subspace energy_ to characterize its unmixing performance under the PS3 model.
**Definition 4** (Cumulative Active Subspace Energy).: Given the set of indices of active subspaces \\(\\mathcal{A}\\), the cumulative active subspace energy is defined as \\(\\mathcal{E}_{\\mathcal{A}}:=\\sum_{i\\in\\mathcal{A}}\\mathcal{E}_{i}\\).
In words, cumulative active subspace energy can be considered a measure of \"signal energy\" and together with the noise energy/variance, it characterizes signal-to-noise ratio for the subspace unmixing problem.
Finally, we also need the definition of _average mixing coherence_ of individual subspaces for the analysis of our proposed unmixing algorithm under the fixed mixing bases model.
**Definition 5** (Average Mixing Coherence).: Given a collection of subspaces \\(\\mathcal{X}_{N}=\\big{\\{}\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\big{\\}}\\) and the associated mixing bases \\(\\mathcal{B}_{N}:=\\big{\\{}\\Phi_{i}:\\mathrm{span}(\\Phi_{i})=\\mathcal{S}_{i},\\Phi _{i}^{\\mathrm{T}}\\Phi_{i}=I,i=1,\\ldots,N\\big{\\}}\\) under the fixed mixing bases model, the average mixing coherence of subspace \\(\\mathcal{S}_{i}\\) is defined as \\(\\rho_{i}:=\\frac{1}{N-1}\\left\\|\\sum_{j\
eq i}\\Phi_{i}^{\\mathrm{T}}\\Phi_{j} \\right\\|_{2}\\).
In words, average mixing coherence measures the \"niceness\" of the mixing bases in relation to each of the subspaces in the collection \\(\\mathcal{X}_{N}\\). Since we are introducing average mixing coherence for the first time in the literature,3 it is worth understanding its behavior. First, unlike (local \\(2\\)-)subspace coherence, it is not invariant to the choice of the (mixing) bases. While this suggests it won't be useful for analysis of the general subspace unmixing problem, we will later see in Sec. V that the average mixing coherence is intricately related to the quadratic-mean subspace coherence under the random directions model. Second, note that \\(\\rho_{i}\\in[0,1]\\). To see this, observe that \\(\\rho_{i}=0\\) if the subspaces in \\(\\mathcal{X}_{N}\\) are orthogonal to each other. Further, we have from triangle inequality and the definition of subspace coherence that \\(\\rho_{i}\\leq\\sum_{j\
eq i}\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j})/(N-1)\\leq 1\\). Clearly, the _average subspace coherence_ of the subspace \\(\\mathcal{S}_{i}\\), defined as \\(\\overline{\\gamma}_{i}:=\\sum_{j\
eq i}\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j})/(N-1)\\), is a trivial upper bound on \\(\\rho_{i}\\). We conclude by noting that the average mixing coherence, \\(\\rho_{i}\\), is trivially computable in polynomial time under the fixed mixing bases model.
## III Marginal Subspace Detection for Subspace Unmixing
We present our low-complexity approach to subspace unmixing in this section, while its performance under the two random signal generation models introduced in Sec. II will be characterized in the following sections. Recall that the observation \\(y\\in\\mathbb{R}^{D}\\) is given by \\(y=x+\\eta\\) with \\(x\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}\\). Assuming the cardinality of the set of indices of active subspaces, \\(n=|\\mathcal{A}|\\), is known, one can pose the subspace unmixing problem as an \\(M\\)-ary hypothesis testing problem with \\(M=\\binom{N}{n}\\). In this formulation, we have that the \\(k\\)-th hypothesis, \\(\\mathcal{H}_{k},\\,k=1,\\ldots,M\\), corresponds to one of the \\(M\\) possible choices for the set \\(\\mathcal{A}\\). While an optimal theoretical strategy in this setting will be to derive the \\(M\\)-ary maximum likelihood decision rule, this will lead to superlinear computational complexity since one will have to evaluate \\(M=\\binom{N}{n}\\succeq\\left(\\frac{N}{n}\\right)^{n}\\) test statistics, one for each of the \\(M\\) hypotheses, in this formulation. Instead, since we are interested in low-complexity approaches in this paper, we approach the problem of subspace unmixing as \\(N\\) individual binary hypothesis testing problems. An immediate benefit of this approach, which transforms the problem of subspace unmixing into a multiple hypothesis testing problem, is the computational complexity: _we need only evaluate \\(N\\) test statistics in this setting_. The challenges in this setting of course are specifying the decision rules for each of the \\(N\\) binary hypotheses and understanding the performance of the corresponding low-complexity approach in terms of the Fwer and the NDP. We address the first challenge by describing a matched subspace detector-based multiple hypothesis testing approach to subspace unmixing in the following, while the second challenge will be addressed for the fixed mixing bases model and the random directions model in Sec. IV and Sec. V, respectively.
In order to solve the problem of subspace unmixing, we propose to work with \\(N\\) binary hypothesis tests on the observation \\(y=x+\\eta\\), as defined below.
\\[\\mathcal{H}_{0}^{k}\\ :\\ x\\in\\sum_{j=1}^{n}\\mathcal{S}_{i_{j}}\\quad \\text{s.t.}\\quad k\
ot\\in\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\},\\quad k=1, \\ldots,N, \\tag{2}\\] \\[\\mathcal{H}_{1}^{k}\\ :\\ x\\in\\sum_{j=1}^{n}\\mathcal{S}_{i_{j}}\\quad \\text{s.t.}\\quad k\\in\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\},\\quad k=1, \\ldots,N. \\tag{3}\\]
In words, the null hypothesis \\(\\mathcal{H}_{0}^{k}\\) being true signifies that subspace \\(\\mathcal{S}_{k}\\) is not active, while the alternative hypothesis \\(\\mathcal{H}_{1}^{k}\\) being true signifies that \\(\\mathcal{S}_{k}\\) is active. Note that if we fix a \\(k\\in\\{1,\\ldots,N\\}\\) then deciding between \\(\\mathcal{H}_{0}^{k}\\) and \\(\\mathcal{H}_{1}^{k}\\) is equivalent to detecting a subspace \\(\\mathcal{S}_{k}\\) in the presence of an interference signal and additive noise. While this setup is reminiscent of the subspace detection problem studied in [2], the fundamental differences between the binary hypothesis test(s) in our problem and that in [2] are that: (\\(i\\)) the interference signal in [2] is assumed to come from a _single_, known subspace, while the interference signal in our problem setup is a function of the underlying activity pattern of the subspaces; and (\\(ii\\)) our problem setup involves multiple hypothesis tests (with dependent test statistics), which therefore requires control of the Fwer. Nonetheless, since _matched subspace detectors_ are known to be (quasi-)optimal in subspace detection problems [2], we put forth the test statistics for our \\(N\\) binary hypothesis tests that are based on matched subspace detectors.
Specifically, in order to decide between \\(\\mathcal{H}_{0}^{k}\\) and \\(\\mathcal{H}_{1}^{k}\\) for any given \\(k\\), we compute the test statistic \\(T_{k}(y):=\\|U_{k}^{T}y\\|_{2}^{2}\\), where \\(U_{k}\\) denotes any orthonormal basis of the subspace \\(\\mathcal{S}_{k}\\). Notice that \\(T_{k}(y)\\) is invariant to the choice of the basis \\(U_{k}\\) and therefore it can be computed irrespective of whether \\(x\\) is generated under the random directions model or the fixed mixing bases model. In order to relate this test statistic to the classical subspace detection literature, note that \\(T_{k}(y)=\\|U_{k}U_{k}^{T}y\\|_{2}^{2}=\\|\\mathcal{P}_{\\mathcal{S}_{k}}y\\|_{2}^{2}\\). That is, the test statistic is equivalent to projecting the observation onto the subspace \\(\\mathcal{S}_{k}\\) and computing the energy of the projected observation, which is the same operation that arises in the classical subspace detection literature [2, 34]. The final decision between \\(\\mathcal{H}_{0}^{k}\\) and \\(\\mathcal{H}_{1}^{k}\\) then involves comparing the test statistic against a threshold \\(\\tau_{k}\\):
\\[T_{k}(y)\\begin{array}{l}\\mathcal{H}_{1}^{k}\\\\ \\mathcal{H}_{0}^{k}\\end{array}\\tau_{k},\\quad k=1,\\ldots,N. \\tag{4}\\]
Once we obtain these marginal decisions, we can use them to obtain an estimate of the set of indices of the active subspaces by setting \\(\\widehat{\\mathcal{A}}=\\{k:\\mathcal{H}_{1}^{k}\\text{ is accepted}\\}\\). We term this entire procedure, outlined in Algorithm 1, as _marginal subspace detection_ (MSD) because of its reliance on detecting the presence of subspaces in the active set using marginal test statistics. The challenge then is understanding the behavior of the test statistics for each subspace under the two hypotheses and specifying values of the thresholds \\(\\{\\tau_{k}\\}\\) that lead to acceptable FWER and NDP figures for the two random signal generation models under consideration. Further, a key aspect of any analysis of MSD involves understanding the number of active subspaces that can be tolerated by it as a function of the subspace collection \\(\\mathcal{X}_{N}\\), the ambient dimension \\(D\\), the subspace dimension \\(d\\), etc. In order to address these questions, one would ideally like to understand the distributions of the test statistics for each of the \\(N\\) subspaces under the two different hypotheses. However, specifying these distributions under the two signal generation models of Sec. II _and_ ensuring that (\\(i\\)) the final results can be interpreted in terms of the geometry of the underlying subspaces, and (\\(ii\\)) the number of active subspaces can be allowed to be almost linear in \\(\\frac{D}{d}\\) appears to be an intractable problem. Therefore, we will instead focus on characterizing the (right and left) tail probabilities (i.e., \\(\\Pr\\left(T_{k}(y)\\geq\\tau\\big{|}\\mathcal{H}_{0}^{k}\\right)\\) and \\(\\Pr\\left(T_{k}(y)\\leq\\tau\\big{|}\\mathcal{H}_{1}^{k}\\right)\\)) of the test statistics for each subspace under the two random signal generation models.
## IV Performance of Marginal Subspace Detection Under the Fixed Mixing Bases Model
Our goal in this section is performance characterization of MSD under the assumption of \\(x\\) being generated using the fixed mixing bases model. Interestingly, we will later see in Sec. V that the results derived in this setting can be generalized to the case of \\(x\\) being generated using the random directions model in a straightforward manner.
We begin with an evaluation of \\(\\Pr\\left(T_{k}(y)\\geq\\tau\\big{|}\\mathcal{H}_{0}^{k}\\right)\\), which will help control the fwer of MSD at a prescribed level \\(\\alpha\\). To this end, we assume an arbitrary (but fixed) \\(k\\in\\{1,\\ldots,N\\}\\) in the following and derive the right-tail probability under the null hypothesis, i.e., \\(y=\\sum_{j=1}^{n}x_{ij}+\\eta=\\sum_{j=1}^{n}\\Phi_{ij}\\theta_{j}+\\eta\\) and \\(k\
ot\\in\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\}\\), where the \\(\\Phi_{i}\\)'s denote the _fixed_ mixing bases. In order to facilitate the forthcoming analysis, we note that since \\(T_{k}(y)\\) is invariant to the choice of \\(U_{k}\\), we have \\(T_{k}(y)=\\big{\\|}\\sum_{j=1}^{n}U_{k}^{\\mathrm{T}}\\Phi_{i_{j}}\\theta_{j}+U_{k} ^{\\mathrm{T}}\\eta\\big{\\|}_{2}^{2}\\equiv\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{ \\mathrm{T}}\\Phi_{i_{j}}\\theta_{j}+\\Phi_{k}^{\\mathrm{T}}\\eta\\big{\\|}_{2}^{2}\\). We now state the result that characterizes the right-tail probability of \\(T_{k}(y)\\) under the null hypothesis, \\(\\mathcal{H}_{0}^{k}\\).
**Lemma 1**.: _Under the null hypothesis \\(\\mathcal{H}_{0}^{k}\\) for any fixed \\(k\\in\\{1,\\ldots,N\\}\\), the test statistic has the following right-tail probability for the fixed mixing bases model:_
1. _In the case of bounded deterministic error_ \\(\\eta\\) _and the assumption_ \\(\\tau>(\\epsilon_{\\eta}+\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}})^{2}\\)_, we have_ \\[\\Pr\\left(T_{k}(y)\\geq\\tau\\big{|}\\mathcal{H}_{0}^{k}\\right)\\leq e^{2}\\exp \\left(-\\frac{c_{0}(N-n)^{2}\\big{(}\\sqrt{\\tau}-\\epsilon_{\\eta}-\\rho_{k}\\sqrt{n \\mathcal{E}_{\\mathcal{A}}}\\big{)}^{2}}{N^{2}\\gamma_{2,k}^{2}\\mathcal{E}_{ \\mathcal{A}}}\\right).\\] (5)
2. _In the case of i.i.d. Gaussian noise_ \\(\\eta\\)_, define_ \\(\\epsilon:=\\sigma\\sqrt{d+2\\delta+2\\sqrt{d\\delta}}\\) _for any_ \\(\\delta>0\\)_. Then, under the assumption_ \\(\\tau>(\\epsilon+\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}})^{2}\\)_, we have_ \\[\\Pr\\left(T_{k}(y)\\geq\\tau\\big{|}\\mathcal{H}_{0}^{k}\\right)\\leq e^{2}\\exp \\left(-\\frac{c_{0}(N-n)^{2}\\big{(}\\sqrt{\\tau}-\\epsilon-\\rho_{k}\\sqrt{n \\mathcal{E}_{\\mathcal{A}}}\\big{)}^{2}}{N^{2}\\gamma_{2,k}^{2}\\mathcal{E}_{ \\mathcal{A}}}\\right)+\\exp(-\\delta).\\] (6)
_Here, the parameter \\(c_{0}:=\\frac{\\epsilon^{-1}}{256}\\) is an absolute positive constant._
The proof of this lemma is given in Appendix A. Our next goal is evaluation of \\(\\Pr\\left(T_{k}(y)\\leq\\tau\\big{|}\\mathcal{H}_{1}^{k}\\right)\\), which will help understand the ndp performance of MSD under the fixed mixing bases model when its fwer is controlled at level \\(\\alpha\\). In this regard, we once again fix an arbitrary \\(k\\in\\{1,\\ldots,N\\}\\) and derive the left-tail probability under the alternative hypothesis, \\(\\mathcal{H}_{1}^{k}\\), i.e., \\(y=\\sum_{j=1}^{n}\\Phi_{i_{j}}\\theta_{j}+\\eta\\) such that the index \\(k\\in\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\}\\).
**Lemma 2**.: _Under the alternative hypothesis \\(\\mathcal{H}_{1}^{k}\\) for any fixed \\(k\\in\\{1,\\ldots,N\\}\\), the test statistic has the following left-tail probability for the fixed mixing bases model:_
1. _In the case of bounded deterministic error_ \\(\\eta\\) _and under the assumptions_ \\(\\mathcal{E}_{k}>(\\epsilon_{\\eta}+\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}- \\mathcal{E}_{k})})^{2}\\) _and_ \\(\\tau<(\\sqrt{\\mathcal{E}_{k}}-\\epsilon_{\\eta}-\\rho_{k}\\sqrt{n(\\mathcal{E}_{ \\mathcal{A}}-\\mathcal{E}_{k})})^{2}\\)_, we have_ \\[\\Pr\\left(T_{k}(y)\\leq\\tau\\big{|}\\mathcal{H}_{1}^{k}\\right)\\leq e^{2}\\exp \\left(-\\frac{c_{0}(N-n)^{2}\\big{(}\\sqrt{\\mathcal{E}_{k}}-\\sqrt{\\tau}- \\epsilon_{\\eta}-\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})} \\big{)}^{2}}{(2N-n)^{2}\\gamma_{2,k}^{2}(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_ {k})}\\right).\\] (7)
2. _In the case of i.i.d. Gaussian noise_ \\(\\eta\\)_, define_ \\(\\epsilon:=\\sigma\\sqrt{d+2\\delta+2\\sqrt{d\\delta}}\\) _for any_ \\(\\delta>0\\)_. Then, under the assumptions_ \\(\\mathcal{E}_{k}>(\\epsilon+\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E} _{k})})^{2}\\) _and_ \\(\\tau<(\\sqrt{\\mathcal{E}_{k}}-\\epsilon-\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}- \\mathcal{E}_{k})})^{2}\\)_, we have_ \\[\\Pr\\left(T_{k}(y)\\leq\\tau\\big{|}\\mathcal{H}_{1}^{k}\\right)\\leq e^{2}\\exp \\left(-\\frac{c_{0}(N-n)^{2}\\big{(}\\sqrt{\\mathcal{E}_{k}}-\\sqrt{\\tau}- \\epsilon-\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}\\big{)}^{ 2}}{(2N-n)^{2}\\gamma_{2,k}^{2}(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})} \\right)+\\exp(-\\delta).\\] (8)_Here, the parameter \\(c_{0}:=\\frac{\\epsilon^{-1}}{256}\\) is an absolute positive constant._
The proof of this lemma is provided in Appendix B. Before proceeding with the implications of Lemmas 1 and 2 for the fixed mixing bases model, it is instructive to provide an intuitive interpretation of these lemmas for individual subspaces (i.e., in the absence of a formal correction for multiple hypothesis testing [27, 28]). We provide such an interpretation in the following for the case of bounded deterministic error \\(\\eta\\), with the understanding that extensions of our arguments to the case of i.i.d. Gaussian noise \\(\\eta\\) are straightforward.
### _Discussion of the Lemmata_
Lemma 1 characterizes the probability of _individually_ rejecting the null hypothesis \\(\\mathcal{H}_{0}^{k}\\) when it is true under the fixed mixing bases model (i.e., declaring the subspace \\(\\mathcal{S}_{k}\\) to be active when it is inactive). Suppose for the sake of argument that \\(\\mathcal{H}_{0}^{k}\\) is true and \\(\\mathcal{S}_{k}\\) is orthogonal to every subspace in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\), in which case the \\(k\\)-th test statistic reduces to \\(T_{k}(y)\\equiv\\|\\eta\\|_{2}^{2}\\). It is then easy to see in this hypothetical setting that the decision threshold \\(\\tau_{k}\\) must be above the _noise floor_, \\(\\tau_{k}>\\epsilon_{\\eta}^{2}\\), to ensure one does not reject \\(\\mathcal{H}_{0}^{k}\\) when it is true. Lemma 1 effectively generalizes this straightforward observation under the fixed mixing bases model to the case when the \\(\\mathcal{S}_{k}\\) cannot be orthogonal to every subspace in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\). First, the lemma states in this case that an _effective noise floor_, defined as \\(\\epsilon_{\\text{eff}}^{2}:=(\\epsilon_{\\eta}+\\rho_{k}\\sqrt{n\\mathcal{E}_{ \\mathcal{A}}})^{2}\\), appears in the problem and the decision threshold must now be above this effective noise floor, \\(\\tau_{k}>\\epsilon_{\\text{eff}}^{2}\\), to ensure one does not reject \\(\\mathcal{H}_{0}^{k}\\) when it is true. It can be seen from the definition of the effective noise floor that \\(\\epsilon_{\\text{eff}}\\) has an intuitive additive form, with the first term \\(\\epsilon_{\\eta}\\) being due to the additive error \\(\\eta\\) and the second term \\(\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\) being due to the mixing with non-orthogonal bases (subspaces). In particular, \\(\\epsilon_{\\text{eff}}\\searrow\\epsilon_{\\eta}\\) as the average mixing coherence \\(\\rho_{k}\\searrow 0\\) (recall that \\(\\rho_{k}\\equiv 0\\) for the case of \\(\\mathcal{S}_{k}\\) being orthogonal to the subspaces in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\)). Second, once a threshold above the effective noise floor is chosen, the lemma states that the probability of rejecting the true \\(\\mathcal{H}_{0}^{k}\\) decreases exponentially as the gap between the threshold and the effective noise floor increases and/or the local \\(2\\)-subspace coherence \\(\\gamma_{2,k}\\) of \\(\\mathcal{S}_{k}\\) decreases. In particular, the probability of rejecting the true \\(\\mathcal{H}_{0}^{k}\\) in this case has the intuitively pleasing characteristic that it approaches zero exponentially fast as \\(\\gamma_{2,k}\\searrow 0\\) (recall that \\(\\gamma_{2,k}\\equiv 0\\) for the case of \\(\\mathcal{S}_{k}\\) being orthogonal to the subspaces in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\)).
We now shift our focus to Lemma 2, which specifies the probability of individually rejecting the alternative hypothesis \\(\\mathcal{H}_{1}^{k}\\) under the fixed mixing bases model when it is true (i.e., declaring the subspace \\(\\mathcal{S}_{k}\\) to be inactive when it is indeed active). It is once again instructive to first understand the hypothetical scenario of \\(\\mathcal{S}_{k}\\) being orthogonal to every subspace in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\). In this case, the \\(k\\)-th test statistic under \\(\\mathcal{H}_{1}^{k}\\) being true reduces to \\(T_{k}(y)\\equiv\\|x_{k}+U_{k}^{\\mathrm{T}}\\eta\\|_{2}^{2}\\), where \\(x_{k}\\) denotes the component of the noiseless signal \\(x\\) that is contributed by the subspace \\(\\mathcal{S}_{k}\\). Notice in this hypothetical setting that the rotated additive error \\(U_{k}^{\\mathrm{T}}\\eta\\) can in principle be antipodally aligned with the signal component \\(x_{k}\\), thereby reducing the value of \\(T_{k}(y)\\). It is therefore easy to argue in this idealistic setup that ensuring one does accept \\(\\mathcal{H}_{1}^{k}\\) when it is true requires: (\\(i\\)) the energy of the subspace \\(\\mathcal{S}_{k}\\) to be above the _noise floor_, \\(\\mathcal{E}_{k}>\\epsilon_{\\eta}^{2}\\), so that the test statistic remains strictly positive; and (\\(ii\\)) the decision threshold \\(\\tau_{k}\\) to be _below_ the _subspace-to-noise gap_, \\(\\tau_{k}<(\\sqrt{\\mathcal{E}_{k}}-\\epsilon_{\\eta})^{2}\\), so that the antipodal alignment of \\(U_{k}^{\\mathrm{T}}\\eta\\) with does not result in a false negative. We now return to the statement of Lemma 2 and note that it also effectively generalizes these straightforward observations under the fixed mixing bases model to the case when the \\(\\mathcal{S}_{k}\\) cannot be orthogonal to every subspace in \\(\\mathcal{X}_{N}\\setminus\\{\\mathcal{S}_{k}\\}\\). First, similar to the case of Lemma 1, this lemma states in this case that an _effective noise floor_, defined as \\(\\epsilon_{\\text{eff}}^{2}:=(\\epsilon_{\\eta}+\\rho_{k}\\sqrt{n(\\mathcal{E}_{ \\mathcal{A}}-\\mathcal{E}_{k})})^{2}\\), appears in the problem and the energy of the subspace \\(\\mathcal{S}_{k}\\) must now be above this effective noise floor, \\(\\mathcal{E}_{k}>\\epsilon_{\\text{eff}}^{2}\\), to ensure that the test statistic remains strictly positive. In addition, we once again have an intuitive additive form of \\(\\epsilon_{\\text{eff}}\\), with its first term being due to the additive error \\(\\eta\\), its second term being due to the mixing with non-orthogonal bases (subspaces), and \\(\\epsilon_{\\text{eff}}\\searrow\\epsilon_{\\eta}\\) as the average mixing coherence \\(\\rho_{k}\\searrow 0\\). Second, the lemma states that the decision threshold must now be below the _subspace-to-effective-noise gap_, \\(\\tau_{k}<(\\sqrt{\\mathcal{E}_{k}}-\\epsilon_{\\text{eff}})^{2}\\). Third, once a threshold below the subspace-to-effective-noise gap is chosen, the lemma states that the probability of rejecting the true \\(\\mathcal{H}_{1}^{k}\\) decreases exponentially as the gap between \\((\\sqrt{\\mathcal{E}_{k}}-\\epsilon_{\\text{eff}})^{2}\\) and the threshold increases and/or the local \\(2\\)-subspace coherence \\(\\gamma_{2,k}\\) of \\(\\mathcal{S}_{k}\\) decreases. In particular, Lemma 2 once again has the intuitively pleasing characteristic that the probability of rejecting the true \\(\\mathcal{H}_{1}^{k}\\) approaches zero exponentially fast as \\(\\gamma_{2,k}\\searrow 0\\).
### _Main Results for the Fixed Mixing Bases Model_
It can be seen from the preceding discussion that increasing the values of the decision thresholds \\(\\{\\tau_{k}\\}\\) in MSD should decrease the FWER under the fixed mixing bases model. Such a decrease in the FWER will of course come at the expense of an increase in the NDP. We will specify this relationship between the \\(\\tau_{k}\\)'s and the NDP in the following. But we first characterize one possible choice of the \\(\\tau_{k}\\)'s that helps control the FWER of MSD at a predetermined level \\(\\alpha\\) for the fixed mixing bases model. The following theorem makes use of Lemma 1 and the Bonferroni correction for multiple hypothesis testing [27].
**Theorem 1**.: _The family-wise error rate of the marginal subspace detection (Algorithm 1) can be controlled at any level \\(\\alpha\\in[0,1]\\) under the fixed mixing bases model by selecting the decision thresholds \\(\\{\\tau_{k}\\}_{k=1}^{N}\\) as follows:_
1. _In the case of bounded deterministic error_ \\(\\eta\\)_, select_ \\[\\tau_{k}=\\left(\\epsilon_{\\eta}+\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}+ \\frac{\\gamma_{2,k}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{\\mathcal{A}}\\log\\left( \\frac{e^{2}N}{\\alpha}\\right)}\\right)^{2},\\quad k=1,\\ldots,N.\\]
2. _In the case of i.i.d. Gaussian noise_ \\(\\eta\\)_, select_ \\[\\tau_{k}=\\left(\\sigma\\sqrt{d+2\\log\\left(\\frac{2N}{\\alpha}\\right)+2\\sqrt{d\\log \\left(\\frac{2N}{\\alpha}\\right)}}+\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}+ \\frac{\\gamma_{2,k}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{\\mathcal{A}}\\log\\left( \\frac{e^{2}2N}{\\alpha}\\right)}\\right)^{2},\\quad k=1,\\ldots,N.\\]
Proof.: The Bonferroni correction for multiple hypothesis testing dictates that the FWER of the MSD is guaranteed to be controlled at a level \\(\\alpha\\in[0,1]\\) as long as the probability of false positive of each _individual_ hypothesis is controlled at level \\(\\frac{\\alpha}{N}\\)[27], i.e., \\(\\Pr\\left(T_{k}(y)\\geq\\tau_{k}\\big{|}\\mathcal{H}_{0}^{k}\\right)\\leq\\frac{ \\alpha}{N}\\). The statement for the bounded deterministic error \\(\\eta\\) can now be shown to hold by plugging the prescribed decision thresholds into Lemma 1. Similarly, the statement for the i.i.d. Gaussian noise \\(\\eta\\) can be shown to hold by plugging \\(\\delta:=\\log\\left(\\frac{2N}{\\alpha}\\right)\\) and the prescribed decision thresholds into Lemma 1.
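To make the threshold selection concrete, the following minimal sketch evaluates the Gaussian-noise thresholds of Theorem 1 directly from the coherence quantities. It is only an illustration: the coherence inputs and the absolute constant \\(c_{0}\\) are assumed to be available, and the default values below are placeholders rather than values prescribed by the paper.

```python
import numpy as np

def msd_thresholds_gaussian(rho, gamma2, sigma, d, n, N, E_A, alpha, c0=1.0):
    """Decision thresholds of Theorem 1 (i.i.d. Gaussian noise), one per subspace.

    rho and gamma2 are length-N arrays holding the average mixing coherences
    rho_k and the local 2-subspace coherences gamma_{2,k}; c0 is the absolute
    constant from the analysis (placeholder value here).
    """
    rho = np.asarray(rho, dtype=float)
    gamma2 = np.asarray(gamma2, dtype=float)
    log_term = np.log(2.0 * N / alpha)
    # sigma * sqrt(d + 2 log(2N/alpha) + 2 sqrt(d log(2N/alpha)))
    noise_floor = sigma * np.sqrt(d + 2.0 * log_term + 2.0 * np.sqrt(d * log_term))
    mixing_1 = rho * np.sqrt(n * E_A)
    mixing_2 = (gamma2 * N / (N - n)) * np.sqrt(E_A * np.log(np.e ** 2 * 2.0 * N / alpha) / c0)
    return (noise_floor + mixing_1 + mixing_2) ** 2
```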
A few remarks are in order now regarding Theorem 1. We once again limit our discussion to the case of bounded deterministic error, since its extension to the case of i.i.d. Gaussian noise is straightforward. In the case of deterministic error \\(\\eta\\), Theorem 1 requires the decision thresholds to be of the form \\(\\tau_{k}=(\\epsilon_{\\eta}+\\epsilon_{m,1}+\\epsilon_{m,2})^{2}\\), where \\(\\epsilon_{\\eta}\\) captures the effects of the additive error, \\(\\epsilon_{m,1}\\) is due to the mixing with non-orthogonal bases, and \\(\\epsilon_{m,2}\\) (which is invariant to the choice of the mixing bases) captures the effects of both the mixing with non-orthogonal subspaces and the FWER\\(\\alpha\\).4 Other factors that affect the chosen thresholds under the fixed mixing bases model include the total number of subspaces, the number of active subspaces, and the cumulative active subspace energy. But perhaps the most interesting aspect of Theorem 1 is the fact that as the mixing bases/subspaces become \"closer\" to being orthogonal, the chosen thresholds start approaching the noise floor \\(\\epsilon_{\\eta}^{2}\\): \\(\\tau_{k}\\searrow\\epsilon_{\\eta}^{2}\\) as \\(\\rho_{k},\\gamma_{2,k}\\searrow 0\\).
Footnote 4: In here, we are suppressing the dependence of \\(\\epsilon_{m,1}\\) and \\(\\epsilon_{m,2}\\) on the subspace index \\(k\\) for ease of notation.
While Theorem 1 helps control the FWER of MSD under the fixed mixing bases model, it does not shed light on the corresponding NDP figure for MSD. In order to completely characterize the performance of MSD for the fixed mixing bases model, therefore, we also need the following theorem.
**Theorem 2**.: _Suppose the family-wise error rate of the marginal subspace detection (Algorithm 1) for the fixed mixing bases model is controlled at level \\(\\alpha\\in[0,1]\\) by selecting the decision thresholds \\(\\{\\tau_{k}\\}_{k=1}^{N}\\) specified in Theorem 1. Then the estimate of the indices of active subspaces returned by MSD under the fixed mixing bases model satisfies \\(\\widehat{\\mathcal{A}}\\supset\\mathcal{A}_{*}\\) with probability exceeding \\(1-\\varepsilon\\), where:_
1. _In the case of bounded deterministic error_ \\(\\eta\\)_, we have_ \\(\\varepsilon:=N^{-1}+\\alpha\\) _and_ \\[\\mathcal{A}_{*}:=\\left\\{i\\in\\mathcal{A}:\\mathcal{E}_{i}>\\left(2\\epsilon_{\\eta}+\\rho_{i}\\sqrt{n\\mathcal{E}_{1,i}}+\\frac{\\gamma_{2,i}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{2,i}}\\right)^{2}\\right\\}\\] _with parameters_ \\(\\mathcal{E}_{1,i}:=\\left(\\sqrt{\\mathcal{E}_{\\mathcal{A}}}+\\sqrt{\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{i}}\\right)^{2}\\) _and_ \\(\\mathcal{E}_{2,i}:=\\left(\\sqrt{\\mathcal{E}_{\\mathcal{A}}\\log(\\frac{e^{2}N}{\\alpha})}+(2-\\frac{n}{N})\\sqrt{2(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{i})\\log(eN)}\\right)^{2}\\)_._
2. _In the case of i.i.d. Gaussian noise_ \\(\\eta\\)_, we have_ \\(\\varepsilon:=N^{-1}+\\frac{3}{2}\\alpha\\) _and_ \\[\\mathcal{A}_{*}:=\\left\\{i\\in\\mathcal{A}:\\mathcal{E}_{i}>\\left(2\\epsilon+\\rho_{i}\\sqrt{n\\mathcal{E}_{1,i}}+\\frac{\\gamma_{2,i}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{2,i}}\\right)^{2}\\right\\}\\] _with the three parameters_ \\(\\epsilon:=\\sigma\\sqrt{d+2\\log\\left(\\frac{2N}{\\alpha}\\right)+2\\sqrt{d\\log\\left(\\frac{2N}{\\alpha}\\right)}}\\)_,_ \\(\\mathcal{E}_{1,i}:=\\left(\\sqrt{\\mathcal{E}_{\\mathcal{A}}}+\\sqrt{\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{i}}\\right)^{2}\\) _and_ \\(\\mathcal{E}_{2,i}:=\\left(\\sqrt{\\mathcal{E}_{\\mathcal{A}}\\log(\\frac{e^{2}2N}{\\alpha})}+(2-\\frac{n}{N})\\sqrt{2(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{i})\\log(eN)}\\right)^{2}\\)_._
Proof.: In order to prove the statement for the bounded deterministic error \\(\\eta\\), pick an arbitrary \\(i\\in\\mathcal{A}_{*}\\) and notice that the assumptions within Lemma 2 for the subspace \\(\\mathcal{S}_{i}\\in\\mathcal{X}_{N}\\) are satisfied by virtue of the definition of \\(\\mathcal{A}_{*}\\) and the choice of the decision thresholds in Theorem 1. It therefore follows from (7) in Lemma 2 that \\(i\\not\\in\\widehat{\\mathcal{A}}\\) with probability at most \\(N^{-2}\\). We can therefore conclude by a simple union bound argument that \\(\\mathcal{A}_{*}\\not\\subset\\widehat{\\mathcal{A}}\\) with probability at most \\(N^{-1}\\). The statement now follows from a final union bound over the events \\(\\mathcal{A}_{*}\\not\\subset\\widehat{\\mathcal{A}}\\) and \\(\\widehat{\\mathcal{A}}\\not\\subset\\mathcal{A}\\), where the second event is needed since we are _simultaneously_ controlling the FWER at level \\(\\alpha\\). Likewise, the statement for the i.i.d. Gaussian noise \\(\\eta\\) can be shown to hold by first plugging \\(\\delta:=\\log\\left(\\frac{2N}{\\alpha}\\right)\\) into (8) in Lemma 2 and then making use of similar union bound arguments.
_Remark 2_.: An astute reader will notice that we are being loose in our union bounds for the case of i.i.d. Gaussian noise. Indeed, we are double counting the event that the sum of squares of \\(d\\) i.i.d. Gaussian random variables exceeds \\(\\epsilon^{2}\\), once during Lemma 1 (which is used for FWER calculations) and once during Lemma 2 (which is used for this theorem). In fact, it can be shown through a better bookkeeping of probability events that \\(\\varepsilon=N^{-1}+\\alpha\\) for i.i.d. Gaussian noise also. Nonetheless, we prefer the stated theorem because of the simplicity of its proof.
It can be seen from Theorem 2 that if one controls the FWER of the MSD using Theorem 1 then its NDP figure for the fixed mixing bases model satisfies \\(\\texttt{NDP}\\leq\\frac{|\\mathcal{A}\\setminus\\mathcal{A}_{*}|}{n}\\) with probability exceeding \\(1-N^{-1}-\\Theta(\\alpha)\\). Since \\(\\mathcal{A}_{*}\\subset\\mathcal{A}\\), it then follows that the NDP figure is the smallest when the cardinality of \\(\\mathcal{A}_{*}\\) is the largest. It is therefore instructive to understand the nature of \\(\\mathcal{A}_{*}\\) under the fixed mixing bases model, which is the set of indices of active subspaces that are guaranteed to be identified as active by the MSD algorithm. Theorem 2 tells us that _any_ active subspace whose energy is not \"too small\" is a member of \\(\\mathcal{A}_{*}\\) under the fixed mixing bases model. Specifically, in the case of bounded deterministic error, the threshold that determines whether the energy of an active subspace is large or small for the purposes of identification by MSD takes the form \\((2\\epsilon_{\\eta}+\\tilde{\\epsilon}_{m,1}+\\tilde{\\epsilon}_{m,2})^{2}\\). Here, similar to the case of Theorem 1, we observe that \\(\\tilde{\\epsilon}_{m,1}\\) and \\(\\tilde{\\epsilon}_{m,2}\\) are _pseudo-noise terms_ that appear _only_ due to the mixing with non-orthogonal bases/subspaces and that depend upon additional factors such as the total number of subspaces, the number of active subspaces, the cumulative active subspace energy, and the FWER.5 In particular, we once again have the intuitive result that \\(\\tilde{\\epsilon}_{m,1},\\tilde{\\epsilon}_{m,2}\\searrow 0\\) as \\(\\rho_{i},\\gamma_{2,i}\\searrow 0\\), implying that any active subspace whose energy is on the order of the noise floor will be declared as active by the MSD algorithm in this setting. Since this is the best that any subspace unmixing algorithm can be expected to accomplish, one can argue that the MSD algorithm under the fixed mixing bases model performs near-optimal subspace unmixing for the case when the average mixing coherences and the local \\(2\\)-subspace coherences of individual subspaces in the collection \\(\\mathcal{X}_{N}\\) are significantly small. Finally, note that this intuitive understanding of MSD can be easily extended to the case of i.i.d. Gaussian noise, with the major difference being that \\(\\epsilon_{\\eta}\\) in that case gets replaced by \\(\\epsilon=\\sigma\\sqrt{d+2\\log\\left(\\frac{2N}{\\alpha}\\right)+2\\sqrt{d\\log\\left( \\frac{2N}{\\alpha}\\right)}}\\).
Footnote 5: We are once again suppressing the dependence of \\(\\tilde{\\epsilon}_{m,1}\\) and \\(\\tilde{\\epsilon}_{m,2}\\) on the subspace index for ease of notation.
### _Breaking the Square-Root Bottleneck_
Theorem 1 establishes that the FWER of MSD under the fixed mixing bases model can be controlled at any level \\(\\alpha\\in[0,1]\\) through appropriate selection of the decision thresholds. Further, Theorem 2 shows that the selected thresholds enable the MSD algorithm to identify all active subspaces whose energies exceed _effective_ noise floors characterized by additive error/noise, average mixing coherences, local \\(2\\)-subspace coherences, etc. Most importantly, these effective noise floors approach the \"true\" noise floor as the average mixing coherences and the local \\(2\\)-subspace coherences of individual bases/subspaces approach zero, suggesting near-optimal nature of MSD for such collections of mixing subspaces in the \"\\(D\\) smaller than \\(N\\)\" setting. But we have presented no mathematical evidence to suggest the average mixing coherences and the local \\(2\\)-subspace coherences of individual bases/subspaces can indeed be small enough for the effective noise floors of Theorem 2 to be on the order of \\(\\big{(}\\text{true noise floor}+o(1)\\big{)}\\). Our primary goal in this section is to provide evidence to this effect by arguing for the existence of collection of bases/subspaces whose average mixing coherences and local \\(2\\)-subspace coherences approach zero at significantly fast rates. But in the process, we also make an important observation in the context of group model selection and block-sparsity pattern recovery, namely, _it is possible to break the square-root bottleneck in such problems without resorting to either random/Kronecker-structured or one-dimensional subspaces_ (cf. Sec. I-A and Sec. II).
_Remark 3_.: Note that an approach is said to break the square-root bottleneck as long as it allows \\(nd=\\Omega(D^{\\varrho})\\) with \\(\\varrho>1/2\\) for _some_ collections of subspaces; see, e.g., [20, 21, 35] for one-dimensional subspaces. Prior to this work, however, there existed no results that could be translated into such a guarantee for _any_ given collection of (non-random) multi-dimensional subspaces in the \\(D\\) smaller than \\(N\\) setting.
Recall from the statement of Theorem 2 and the subsequent discussion that the effective noise floor for the \\(i\\)-th subspace involves additive pseudo-noise terms of the form
\\[\\epsilon_{f}^{i}:=\\rho_{i}\\sqrt{n\\mathcal{E}_{1,i}}+\\frac{\\gamma_{2,i}N}{N-n} \\sqrt{c_{0}^{-1}\\mathcal{E}_{2,i}}, \\tag{9}\\]
where \\(\\sqrt{\\mathcal{E}_{1,i}}=\\Theta\\Big{(}\\sqrt{\\mathcal{E}_{\\mathcal{A}}}\\Big{)}\\) and \\(\\sqrt{\\mathcal{E}_{2,i}}=\\Theta\\Big{(}\\sqrt{\\mathcal{E}_{\\mathcal{A}}\\log(N/ \\alpha)}\\Big{)}\\). Since we are assuming that the number of active subspaces \\(n=O(N)\\), it follows that \\(\\epsilon_{f}^{i}=\\Theta\\Big{(}\\rho_{i}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\Big{)} +\\Theta\\Big{(}\\gamma_{2,i}\\sqrt{\\mathcal{E}_{\\mathcal{A}}\\log(N/\\alpha)}\\Big{)}\\). In order to ensure \\(\\epsilon_{f}^{i}=o(1)\\), therefore, we need the following two conditions to hold under the fixed mixing bases model:
\\[\\rho_{i} =O\\left(\\frac{1}{\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}}\\right),\\quad \\text{and} \\tag{10}\\] \\[\\gamma_{2,i} =O\\left(\\frac{1}{\\sqrt{\\mathcal{E}_{\\mathcal{A}}\\log(N/\\alpha)}} \\right). \\tag{11}\\]
Together, we term the conditions (10) and (11) as _subspace coherence conditions_. Both these conditions are effectively statements about the geometry of the mixing subspaces and the corresponding mixing bases. In order to understand the implications of these two conditions, we parameterize the cumulative active subspace energy as \\(\\mathcal{E}_{\\mathcal{A}}=\\Theta(n^{\\delta})\\) for \\(\\delta\\in[0,1]\\). Here, \\(\\delta=0\\) corresponds to one extreme of the cumulative active subspace energy staying constant as the number of active subspaces increases, while \\(\\delta=1\\) corresponds to other extreme of the cumulative active subspace energy increasing linearly with the number of active subspaces.
We now turn our attention to the extreme of \\(\\delta=1\\), in which case the subspace coherence conditions reduce to \\(\\rho_{i}=O(n^{-1})\\) and \\(\\gamma_{2,i}=O(n^{-1/2}\\log^{-1/2}(N/\\alpha))\\). We are interested in this setting in understanding whether there indeed exist subspaces and mixing bases that satisfy these conditions. We have the following theorem in this regard, which also sheds light on the maximum number of active subspaces that can be tolerated by the MSD algorithm under the fixed mixing bases model.
**Theorem 3**.: _Suppose the number of active subspaces satisfies \\(n\\leq\\min\\left\\{\\sqrt{N}-1,\\frac{c_{1}^{2}D(N-1)}{(Nd-D)\\log(N/\\alpha)}\\right\\}\\) for some constant \\(c_{1}\\in(0,1)\\). Then there exist collections of subspaces \\(\\mathcal{X}_{N}=\\left\\{\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\right\\}\\) and corresponding mixing bases \\(\\mathcal{B}_{N}=\\left\\{\\Phi_{i}:\\mathrm{span}(\\Phi_{i})=\\mathcal{S}_{i},\\Phi_{i}^{ \\mathrm{T}}\\Phi_{i}=I,i=1,\\ldots,N\\right\\}\\) such that \\(\\rho_{i}\\leq n^{-1}\\) and \\(\\gamma_{2,i}\\leq c_{2}n^{-1/2}\\log^{-1/2}(N/\\alpha)\\) for \\(i=1,\\ldots,N\\), where \\(c_{2}\\geq\\max\\{2c_{1},1\\}\\) is a positive numerical constant._
Proof.: The proof of this theorem follows from a combination of results reported in [33]. To begin, note from the definition of local \\(2\\)-subspace coherence that \\(\\frac{\\gamma_{2,i}}{2}\\leq\\mu(\\mathcal{X}_{N}):=\\max_{i\\neq j}\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j})\\). We now argue there exist \\(\\mathcal{X}_{N}\\)'s such that \\(\\mu(\\mathcal{X}_{N})=0.5c_{2}n^{-1/2}\\log^{-1/2}(N/\\alpha)\\), which in turn implies \\(\\gamma_{2,i}\\leq c_{2}n^{-1/2}\\log^{-1/2}(N/\\alpha)\\) for such collections of subspaces. The quantity \\(\\mu(\\mathcal{X}_{N})\\), termed _worst-case subspace coherence_, has been investigated extensively in the literature [33, 36]. The first thing we need to be careful about is the fact from [36, Th. 3.6][33, Th. 2.3] that \\(\\mu(\\mathcal{X}_{N})\\geq\\sqrt{\\frac{Nd-D}{D(N-1)}}\\), which is ensured by the conditions \\(n\\leq\\frac{c_{2}^{2}D(N-1)}{(Nd-D)\\log(N/\\alpha)}\\) and \\(c_{2}\\geq 2c_{1}\\). The existence of such collections of subspaces now follows from [33], which establishes that the worst-case subspace coherences of many collections of subspaces (including subspaces drawn uniformly at random from \\(\\mathfrak{G}(d,D)\\)) come very close to meeting the lower bound \\(\\sqrt{\\frac{Nd-D}{D(N-1)}}\\).

In order to complete the proof, we next need to establish that if a collection of subspaces has \\(\\mu(\\mathcal{X}_{N})=0.5c_{2}n^{-1/2}\\log^{-1/2}(N/\\alpha)\\) then there exists _at least_ one corresponding mixing bases for that collection such that \\(\\rho_{i}\\leq n^{-1}\\). In this regard, note that \\(\\rho_{i}\\leq\\nu(\\mathcal{B}_{N}):=\\max_{i}\\rho_{i}\\). The quantity \\(\\nu(\\mathcal{B}_{N})\\), termed _average group/block coherence_, was introduced in [1] and investigated further in [33]. In particular, it follows from [33, Lemma 3.4] that every collection of subspaces \\(\\mathcal{X}_{N}\\) has at least one mixing bases with \\(\\nu(\\mathcal{B}_{N})\\leq\\frac{\\sqrt{N}+1}{N-1}\\), which can in turn be upper bounded by \\(n^{-1}\\) for \\(n\\leq\\sqrt{N}-1\\).
Recall that our problem formulation calls for \\(n<D/d\\ll N\\). Theorem 3 helps quantify these inequalities under the fixed mixing bases model for the case of linear scaling of cumulative active subspace energy. Specifically, note that \\(\\frac{D(N-1)}{(Nd-D)\\log(N/\\alpha)}=O\\left(\\frac{D}{d\\log(N/\\alpha)}\\right)\\) for large \\(N\\). We therefore have that Theorem 3 allows the number of active subspaces to scale linearly with the extrinsic dimension \\(D\\) modulo a logarithmic factor. Stated differently, Theorem 3 establishes that the total number of _active dimensions_, \\(nd\\), can be proportional to the extrinsic dimension \\(D\\), while the total number of subspaces in the collection, \\(N\\), affects the number of active dimensions only through a logarithmic factor. Combining Theorem 3 with the earlier discussion, therefore, one can conclude that the MSD algorithm under the fixed mixing bases model does not suffer from the "square-root bottleneck" of \\(nd=O(\\sqrt{D})\\) despite the fact that its performance is being characterized in terms of polynomial-time computable measures. This is in stark contrast to related results in [7, 8, 9, 10] on group model selection and block-sparsity pattern recovery, which do not allow for linear scaling of the number of active dimensions in _any_ setting due to the fundamental limit \\(\\mu(\\mathcal{X}_{N})\\geq\\sqrt{\\frac{Nd-D}{D(N-1)}}\\) (cf. Remark 3). Finally, we note that the constraint \\(n=O(\\sqrt{N})\\) in Theorem 3 appears due to our use of [33, Lemma 3.4], which not only guarantees existence of appropriate mixing bases but also provides a polynomial-time algorithm for obtaining those mixing bases. If one were interested in merely proving existence of "good" mixing bases then this condition can be relaxed to \\(n=O(N)\\) by making use of [33, Th. 3.2] instead in the proof.
Since Theorem 3 guarantees existence of subspaces and mixing bases that satisfy the subspace coherence conditions for \\(\\delta=1\\), it also guarantees the same for any other sublinear scaling \\((0\\leq\\delta<1)\\) of cumulative active subspace energy. Indeed, as \\(\\delta\\searrow 0\\), the subspace coherence conditions (cf. (10) and (11)) only become more relaxed. In fact, it turns out that the order-wise performance of the MSD algorithm no longer remains a function of the mixing bases for certain collections of subspaces when cumulative active subspace energy reaches the other extreme of \\(\\delta=0\\). This assertion follows from the following theorem and the fact that \\(\\delta=0\\) reduces the subspace coherence conditions to \\(\\rho_{i}=O(n^{-1/2})\\) and \\(\\gamma_{2,i}=O(\\log^{-1/2}(N/\\alpha))\\).
**Theorem 4**.: _Suppose the number of active subspaces satisfies \\(n\\leq\\frac{c_{3}D(N-1)}{Nd-D}\\) for some constant \\(c_{3}\\in(0,1)\\) and the total number of subspaces in the collection \\(\\mathcal{X}_{N}\\) satisfies \\(N\\leq\\alpha\\exp(n/4)\\). In such cases, there exist collections of subspaces that satisfy \\(\\mu(\\mathcal{X}_{N}):=\\max_{i\\neq j}\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j})\\leq n^{-1/2}\\). Further, all such collections satisfy \\(\\rho_{i}\\leq n^{-1/2}\\) and \\(\\gamma_{2,i}\\leq\\log^{-1/2}(N/\\alpha)\\) for \\(i=1,\\ldots,N\\)._
Proof.: The proof of this theorem also mainly follows from [33], which establishes that there exist many collections of subspaces for which \\(\\mu(\\mathcal{X}_{N})=\\sqrt{\\frac{Nd-D}{c_{3}D(N-1)}}\\) for appropriate constants \\(c_{3}\\in(0,1)\\). Under the condition \\(n\\leq\\frac{c_{3}D(N-1)}{Nd-D}\\), therefore, it follows that \\(\\mu(\\mathcal{X}_{N})\\leq n^{-1/2}\\). Since \\(\\gamma_{2,i}\\leq 2\\mu(\\mathcal{X}_{N})\\), we in turn obtain \\(\\gamma_{2,i}\\leq\\log^{-1/2}(N/\\alpha)\\) under the condition \\(N\\leq\\alpha\\exp(n/4)\\). Finally, we have from the definition of the average mixing coherence that \\(\\rho_{i}\\leq\\mu(\\mathcal{X}_{N})\\), which in turn implies \\(\\rho_{i}\\leq n^{-1/2}\\) and this completes the proof of the theorem.
Once again, notice that Theorem 4 allows linear scaling of the number of active dimensions as a function of the extrinsic dimension. In words, Theorem 4 tells us that MSD can be used for unmixing of collections of subspaces that are _approximately equi-isoclinic_ [36], defined as ones with the same principal angles between any two subspaces, regardless of the underlying mixing bases as long as the cumulative active subspace energy does not scale with the number of active subspaces.
We conclude our discussion of the fixed mixing bases model by reiterating that since this model is not invariant to the choice of bases, it does not address the subspace unmixing problem in its most general form. Nonetheless, as noted earlier, analysis of MSD under this model leads to equivalent results under the random directions model in a straightforward manner (cf. Sec. V). Further, the subspace unmixing problem in the context of group model selection and block-sparse compressed sensing is precisely given by the fixed mixing bases model. As such, the results reported in this section are also useful in their own right.
## V Performance of Marginal Subspace Detection Under the Random Directions Model
While Sec. IV provides results for the subspace unmixing problem for the fixed mixing bases model, it does not provide us with the most general results for subspace unmixing. First, the results have been derived under the fixed mixing bases model, which is arguably not the best model for the problem of subspace unmixing. Second, the thresholds selected in Theorem 1 require knowledge of the mixing bases due to their dependence on the average mixing coherences of the subspaces. Third, the performance of MSD described in Theorem 2 is also a function of the average mixing coherences of the subspaces. A natural question to ask at this point is whether it is possible to derive results for subspace unmixing in the sense that they do not require explicit use of the mixing bases. It turns out that doing so is relatively easy as long as one considers the random directions model discussed in Sec. II.
In order to leverage the results of Sec. IV for the random directions model, we first use the probabilistic method to establish that any collection of subspaces \\(\\mathcal{X}_{N}\\) has associated with it at least one corresponding collection of orthonormal bases \\(\\mathcal{U}_{N}:=\\big{\\{}U_{i}:\\mathrm{span}(U_{i})=\\mathcal{S}_{i},U_{i}^{\\mathrm{ T}}U_{i}=I,i=1,\\ldots,N\\big{\\}}\\) such that \\(\\rho_{i}(\\mathcal{U}_{N})=O\\big{(}\\frac{\\gamma_{\\mathsf{rms},i}\\sqrt{\\log(dN)}}{ \\sqrt{N}}\\big{)}\\).
**Lemma 3**.: _Let \\(d\\geq 3\\) and fix any \\(c_{4}>1\\). Then every collection of subspaces \\(\\mathcal{X}_{N}=\\big{\\{}\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D),i=1,\\ldots,N\\big{\\}}\\) has at least one collection of orthonormal bases \\(\\mathcal{U}_{N}=\\big{\\{}U_{i}:\\mathrm{span}(U_{i})=\\mathcal{S}_{i},U_{i}^{ \\mathrm{T}}U_{i}=I,i=1,\\ldots,N\\big{\\}}\\) such that_
\\[\\rho_{i}=\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}U_{i}^{\\mathrm{T}}U_{j}\\Big{\\|}_{2}<\\bar{\\rho}_{i}:=\\frac{\\gamma_{\\mathsf{rms},i}\\sqrt{\\log(c_{4}d^{2}N)}}{\\sqrt{c_{0}^{\\prime}(N-1)}},\\quad i=1,\\ldots,N.\\]
_Here, the parameter \\(c_{0}^{\\prime}:=\\frac{e^{-\\frac{3}{2}}}{256}\\) is an absolute positive constant._
The proof of this lemma is provided in Appendix C. Lemma 3 helps us overcome all the challenges associated with the analysis of Sec. IV that have been outlined at the start of this section. Specifically, notice that all the results reported in Sec. IV under the fixed mixing bases model can have the \\(\\rho_{i}\\)'s in them replaced with upper bounds on the average mixing coherences. To this end, Lemma 3 provides such upper bounds, \\(\\bar{\\rho}_{i}\\), that only depend on the geometry of the underlying collection of subspaces. This, coupled with the fact that the MSD algorithm is invariant to the choice of subspace bases, implies that the results of Sec. IV immediately lead us to equivalent results for subspace unmixing that are fully characterized in terms of the local 2-subspace coherences and the quadratic-mean subspace coherences of the underlying subspaces. Nonetheless, there is still one point that has been left unaddressed in this discussion: _it seems we are requiring the signal \\(x=\\sum_{j=1}^{n}x_{i_{j}}\\) to have been generated under the fixed mixing bases model, with the subspace bases being given by the ones in Lemma 3._ We now argue that this requirement is in fact unnecessary for the case of \\(x\\) being generated under the random directions model.
Let \\(x=\\sum_{j=1}^{n}x_{i_{j}}\\) be a signal generated according to the random directions model. We can then rewrite \\(x\\) as
\\[x=\\sum_{j=1}^{n}\\sqrt{\\mathcal{E}_{i_{j}}}\\frac{x_{i_{j}}}{\\|x_{i_{j}}\\|_{2}}=\\sum_{j=1}^{n}U_{i_{j}}\\big{(}\\sqrt{\\mathcal{E}_{i_{j}}}\\widetilde{\\theta}_{i_{j}}\\big{)}=\\sum_{j=1}^{n}U_{i_{j}}\\theta_{i_{j}}, \\tag{12}\\]

where \\(\\theta_{i_{j}}:=\\sqrt{\\mathcal{E}_{i_{j}}}\\widetilde{\\theta}_{i_{j}}\\), while the unit vector \\(\\widetilde{\\theta}_{i_{j}}\\in\\mathbb{R}^{d}\\) denotes the expansion of \\(x_{i_{j}}/\\|x_{i_{j}}\\|_{2}\\) under the (fixed) collection of orthonormal bases \\(\\mathcal{U}_{N}\\) obtained in Lemma 3; in other words, \\(\\widetilde{\\theta}_{i_{j}}=U_{i_{j}}^{\\mathrm{T}}(x_{i_{j}}/\\|x_{i_{j}}\\|_{2})\\). Given that \\(\\mathfrak{X}^{n}=\\big{(}x_{i_{1}}/\\|x_{i_{1}}\\|_{2},\\ldots,x_{i_{n}}/\\|x_{i_{n}}\\|_{2}\\big{)}\\) is drawn independently of \\(\\mathcal{A}\\), it follows that \\(\\Xi^{n}:=\\big{(}\\widetilde{\\theta}_{i_{1}},\\ldots,\\widetilde{\\theta}_{i_{n}}\\big{)}\\) is also independent of \\(\\mathcal{A}\\) under the random directions model. Consequently, conditioning (12) on \\(\\Xi^{n}\\) under the random directions model reduces it to the fixed mixing bases model. It is then straightforward to derive results equivalent to Lemma 1 and Lemma 2 under the random directions model by combining the analysis of Sec. IV with Lemma 3 and noting that
\\[\\Pr\\Big{(}T_{k}(y)\\gtreqless\\tau\\big{|}\\mathcal{H}_{0}^{k}\\Big{)}=\\int_{\\Xi^{n} }\\Pr\\Big{(}T_{k}(y)\\gtreqless\\tau\\big{|}\\mathcal{H}_{0}^{k},\\Xi^{n}\\Big{)} \\,\\lambda_{\\mathfrak{B}^{n}}(\\Xi^{n}). \\tag{13}\\]
This trivially leads to the following theorem concerning the FWER of MSD under the random directions model.
**Theorem 5**.: _Fix any \\(\\alpha\\in[0,1]\\) and define_
\\[\\bar{\\rho}_{k}:=\\frac{\\gamma_{\\mathsf{rms},k}\\sqrt{\\log(c_{4}d^{2}N)}}{\\sqrt{c_ {0}^{\\prime}(N-1)}},\\quad k=1,\\ldots,N,\\]_where \\(c_{4}>1\\) is a fixed constant and \\(c_{0}^{\\prime}\\) is as defined in Lemma 3. Then the family-wise error rate of the marginal subspace detection (Algorithm 1) can be controlled at any level \\(\\alpha\\in[0,1]\\) under the random directions model by selecting the decision thresholds \\(\\{\\tau_{k}\\}_{k=1}^{N}\\) as follows:_
1. _In the case of bounded deterministic error_ \\(\\eta\\)_, select_ \\[\\tau_{k}=\\left(\\epsilon_{\\eta}+\\bar{\\rho}_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}+\\frac{\\gamma_{2,k}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{\\mathcal{A}}\\log\\left(\\frac{e^{2}N}{\\alpha}\\right)}\\right)^{2},\\quad k=1,\\ldots,N.\\]
2. _In the case of i.i.d. Gaussian noise_ \\(\\eta\\)_, select_ \\[\\tau_{k}=\\left(\\sigma\\sqrt{d+2\\log\\left(\\frac{2N}{\\alpha}\\right)+2\\sqrt{d\\log\\left(\\frac{2N}{\\alpha}\\right)}}+\\bar{\\rho}_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}+\\frac{\\gamma_{2,k}N}{N-n}\\sqrt{c_{0}^{-1}\\mathcal{E}_{\\mathcal{A}}\\log\\left(\\frac{e^{2}2N}{\\alpha}\\right)}\\right)^{2},\\quad k=1,\\ldots,N.\\]
Similar to Theorem 5, one can also trivially derive an equivalent of Theorem 2 under the random directions model by simply replacing \\(\\rho_{i}\\) with \\(\\bar{\\rho}_{i}:=\\frac{\\gamma_{\\text{rms},i}\\sqrt{\\log\\left(c_{4}d^{2}N\\right)}}{\\sqrt{c_{0}^{\\prime}(N-1)}}\\) in the definition of the set \\(\\mathcal{A}_{*}\\) within the theorem statement. In conclusion, the advantages of MSD outlined in Sec. IV for the fixed mixing bases model remain valid for the random directions model; the only difference here being that the measure of average mixing coherence gets replaced by the ratio of the measure of quadratic-mean subspace coherence and square-root of the total number of subspaces (modulo a logarithmic factor). Further, given that \\(\\gamma_{\\text{rms},i}=O(\\gamma_{2,i})\\), the subspace coherence condition (10) is simpler to satisfy under the random directions model for subspaces that are not too similar to each other (cf. (11)). Finally, it is straightforward to combine this discussion with Theorem 3 and Theorem 4 and conclude that the MSD algorithm also does not suffer from the square-root bottleneck under the random directions model.
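Since the thresholds in Theorem 5 depend on the mixing bases only through \\(\\bar{\\rho}_{k}\\), the substitution is easy to sketch. In the snippet below, \\(c_{0}^{\\prime}\\) and \\(c_{4}\\) are the constants of Lemma 3 and their default values are placeholders (the numerical section later tunes \\(c_{0}^{\\prime}\\)).

```python
import numpy as np

def rho_bar(gamma_rms, d, N, c0_prime=1.0, c4=2.0):
    """Basis-free surrogate for the average mixing coherence (Lemma 3 / Theorem 5):
    rho_bar_k = gamma_rms_k * sqrt(log(c4 * d^2 * N)) / sqrt(c0' * (N - 1))."""
    gamma_rms = np.asarray(gamma_rms, dtype=float)
    return gamma_rms * np.sqrt(np.log(c4 * d ** 2 * N)) / np.sqrt(c0_prime * (N - 1))
```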
## VI Numerical Results
In this section, we report results of numerical experiments that further shed light on the relationships between the local 2-subspace coherences, quadratic-mean subspace coherences, average mixing coherences, and the MSD algorithm for the problem of subspace unmixing. The subspaces used in all these experiments are independently drawn at random from \\(\\mathfrak{G}(d,D)\\) according to the natural uniform measure induced by the Haar measure on the _Stiefel manifold_\\(\\mathbb{S}(d,D)\\), which is defined as \\(\\mathbb{S}(d,D):=\\{U\\in\\mathbb{R}^{D\\times d}:U^{\\mathrm{T}}U=I\\}\\). Computationally, we accomplish this by resorting to the numerical algorithm proposed in [37] for random drawing of elements from \\(\\mathbb{S}(d,D)\\) according to the Haar measure. In doing so, we not only generate subspaces \\(\\mathcal{X}_{N}=\\{\\mathcal{S}_{i}\\}_{i=1}^{N}\\) from \\(\\mathfrak{G}(d,D)\\) for the random directions model, but we also generate the associated mixing bases \\(\\mathcal{B}_{N}=\\{\\Phi_{i}\\}_{i=1}^{N}\\) from \\(\\mathbb{S}(d,D)\\) for the fixed mixing bases model. Mathematically, given a subspace \\(\\mathcal{S}_{i}\\in\\mathfrak{G}(d,D)\\) and its equivalence class in the Stiefel manifold \\([\\mathcal{S}_{i}]\\subset\\mathbb{S}(d,D)\\), its associated mixing basis \\(\\Phi_{i}\\in\\mathbb{S}(d,D)\\) is effectively drawn at random from \\([\\mathcal{S}_{i}]\\) according to the Haar measure on \\([\\mathcal{S}_{i}]\\). It is important to note here that once we generate the \\(\\mathcal{S}_{i}\\)'s and the \\(\\Phi_{i}\\)'s, they remain fixed throughout our experiments. In other words, our results are not averaged over different realizations of the subspaces and the mixing bases; rather, they correspond to a _fixed_ set of subspaces (random directions models) and mixing bases (fixed mixing bases model).
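One concrete way to realize such draws is sketched below: orthonormal bases are obtained from the QR decomposition of Gaussian matrices with a sign correction, which also yields Haar-distributed elements of \\(\\mathbb{S}(d,D)\\). This is a common alternative to the specific routine of [37] used in the paper, and the within-subspace rotation used to obtain a mixing basis is likewise only illustrative.

```python
import numpy as np

def random_stiefel(D, d, rng):
    """Draw a D x d matrix U with U^T U = I, Haar-distributed on S(d, D):
    QR of a Gaussian matrix, with column signs fixed so the law is exactly Haar."""
    Q, R = np.linalg.qr(rng.standard_normal((D, d)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
D, d, N = 600, 3, 2000
# Subspaces S_i = span(U_i) for the random directions model ...
U_N = [random_stiefel(D, d, rng) for _ in range(N)]
# ... and mixing bases Phi_i for the fixed mixing bases model, drawn from the
# equivalence class [S_i] by a random d x d rotation within each subspace.
B_N = [U @ random_stiefel(d, d, rng) for U in U_N]
```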
Our first set of experiments evaluates the local 2-subspace coherences and quadratic-mean subspace coherences of the \\(\\mathcal{S}_{i}\\)'s and the average mixing coherences of the corresponding \\(\\Phi_{i}\\)'s for different values of \\(d\\), \\(D\\), and \\(N\\). The results of these experiments are reported in Figs. 1 and 2. Specifically, Fig. 1(a) and Fig. 1(b) plot \\(\\sum_{i=1}^{N}\\gamma_{2,i}/N\\) as well as the range of the \\(\\gamma_{2,i}\\)'s using error bars for \\(N=1500\\) and \\(N=2000\\), respectively. Similarly, Fig. 1(c) and Fig. 1(d) plot \\(\\sum_{i=1}^{N}\\gamma_{\\text{rms},i}/N\\) as well as the range of the \\(\\gamma_{\\text{rms},i}\\)'s using error bars for \\(N=1500\\) and \\(N=2000\\), respectively. Finally, Fig. 1(e) and Fig. 1(f) plot \\(\\sum_{i=1}^{N}\\rho_{i}/N\\) as well as the range of the \\(\\rho_{i}\\)'s using error bars for \\(N=1500\\) and \\(N=2000\\), respectively. It can be seen from these figures that all three coherence measures under consideration decrease with an increase in \\(D\\), while they increase with an increase in \\(d\\). In addition, it appears from these figures that the \\(\\gamma_{2,i}\\)'s and the \\(\\rho_{i}\\)'s start concentrating around their average values for larger values of \\(d\\) and \\(D\\). In contrast, the \\(\\gamma_{\\text{rms},i}\\)'s appear highly concentrated around their average values, which is attributable to the random generation of the \\(\\mathcal{S}_{i}\\)'s. Another important thing to notice from Fig. 1 is that the average mixing coherences tend to be more than two orders of magnitude smaller than the local 2-subspace coherences, which is indeed desired under the fixed mixing bases model according to the discussion in Sec. IV. We can also make a similar observation from Fig. 1(c)-(d) about the \\(\\bar{\\rho}_{i}\\)'s defined in Theorem 5 for the random directions model; e.g., \\(\\bar{\\rho}_{i}=(7\\times 10^{-2})\\gamma_{\\text{rms},i}\\) for \\(d=3\\) and \\(N=2000\\) under the assumption of \\(c_{0}^{\\prime}=c_{4}=1\\) (more on this assumption later). Finally, since the error bars in Fig. 1 do not give insights into distributions of the \\(\\gamma_{2,i}\\)'s, \\(\\gamma_{\\text{rms},i}\\)'s and \\(\\rho_{i}\\)'s, we also plot histograms of the three coherences in Fig. 2 for \\(N=2000\\) corresponding to \\(D=600\\) (Figs. 2(a), 2(c), and 2(e)) and \\(D=1400\\) (Figs. 2(b), 2(d), and 2(f)).
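For reference, the three coherence measures can be computed from a set of bases as sketched below. The expression for \\(\\rho_{i}\\) follows the display in Lemma 3; the pairwise coherence \\(\\gamma(\\mathcal{S}_{i},\\mathcal{S}_{j})\\) is taken to be the largest singular value of \\(U_{i}^{\\mathrm{T}}U_{j}\\), \\(\\gamma_{2,i}\\) is taken to be the sum of the two largest pairwise coherences of \\(\\mathcal{S}_{i}\\), and \\(\\gamma_{\\text{rms},i}\\) their root mean square; these readings of the definitions (which are stated earlier in the paper) are assumptions of the sketch.

```python
import numpy as np

def pairwise_coherence(U, V):
    # Largest principal-angle cosine between span(U) and span(V); assumed to
    # match the definition of gamma(S_i, S_j) used earlier in the paper.
    return np.linalg.norm(U.T @ V, 2)

def coherence_profile(bases):
    """Local 2-subspace (gamma_2), quadratic-mean (gamma_rms), and average
    mixing (rho) coherences of a collection of D x d orthonormal bases."""
    N = len(bases)
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            G[i, j] = G[j, i] = pairwise_coherence(bases[i], bases[j])
    off = ~np.eye(N, dtype=bool)
    gamma_2 = np.array([np.sort(G[i, off[i]])[-2:].sum() for i in range(N)])
    gamma_rms = np.sqrt((G ** 2).sum(axis=1) / (N - 1))
    total = np.sum(bases, axis=0)  # used to form sum_{j != i} Phi_i^T Phi_j
    rho = np.array([np.linalg.norm(bases[i].T @ (total - bases[i]), 2) / (N - 1)
                    for i in range(N)])
    return gamma_2, gamma_rms, rho
```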
Our second set of experiments evaluates the performance of the MSD algorithm for subspace unmixing under both the fixed mixing bases and the random directions models. We run these experiments for _fixed_ subspaces and mixing bases for the following four sets of choices for \\((d,D,N)\\): \\((3,600,2000),(3,1400,2000),(15,600,2000)\\), and \\((15,1400,2000)\\). The results reported for these experiments are averaged over 5000 realizations of subspace activity patterns, mixing coefficients, and additive Gaussian noise. In all these experiments, we use \\(\\sigma=0.01\\) and \\(\\mathcal{E}_{\\mathcal{A}}=n\\), divided equally among all active subspaces, which means that all active subspaces lie above the additive noise floor. In terms of the selection of thresholds for Algorithm 1, we rely on Theorem 1 and Theorem 5 for fixed mixing bases model and random directions model, respectively, but with a small caveat. Since our proofs use a number of probabilistic bounds, the theorem statements invariably result in conservative thresholds. In order to remedy this, we use the thresholds \\(\\bar{\\tau}_{k}:=c_{1}^{2}\\tau_{k}\\) with \\(\\tau_{k}\\) as in Theorem 1 and Theorem 5_but_ using \\(c_{0}=c_{4}=1\\), \\(c_{1}\\in(0,1)\\), and \\(c_{0}^{\\prime}\\in[1,d]\\). We tune the parameter \\(c_{1}\\) using cross validation and set \\(c_{1}=0.136\\) and \\(c_{1}=0.107\\) for \\(d=3\\) and \\(d=15\\), respectively. In order to understand the effects of tuning \\(c_{0}^{\\prime}\\) for the random directions models, we set \\(c_{0}^{\\prime}=c_{0}\\) for \\(d=3\\), while we explicitly tune it by cross validation for \\(d=15\\) and set \\(c_{0}^{\\prime}=15\\) in this case. Finally, we set the final thresholds to control the FWER in all these experiments at level \\(\\alpha=0.1\\).
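The detection step itself is simple to sketch: each marginal statistic \\(T_{k}(y)=\\|\\Phi_{k}^{\\mathrm{T}}y\\|_{2}^{2}\\) is compared against its decision threshold, consistent with how \\(T_{k}(y)\\) and \\(\\tau_{k}\\) are used throughout the analysis; the scaled thresholds \\(\\bar{\\tau}_{k}\\) described above are assumed to be computed separately.

```python
import numpy as np

def marginal_subspace_detection(y, bases, tau):
    """Marginal subspace detection: declare S_k active whenever the marginal
    statistic T_k(y) = ||Phi_k^T y||_2^2 meets or exceeds the threshold tau_k."""
    T = np.array([np.linalg.norm(Phi.T @ y) ** 2 for Phi in bases])
    return {k for k in range(len(bases)) if T[k] >= tau[k]}
```

Averaging the resulting false alarms and misses over many draws of the subspace activity pattern, mixing coefficients, and noise then yields the FWER, NDP, and FDP estimates reported below.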
Fig. 1: Plots of local 2-subspace coherences, quadratic-mean subspace coherences, and average mixing coherences for different values of \\(d\\), \\(D\\), and \\(N\\). (a) and (b) correspond to local 2-subspace coherences, (c) and (d) correspond to quadratic-mean subspace coherences, and (e) and (f) correspond to average mixing coherences. The error bars in the plots depict the range of coherences for the different subspaces.

Fig. 2: Histograms of local 2-subspace coherences, quadratic-mean subspace coherences, and average mixing coherences for \\(N=2000\\) and different values of \\(d\\). (a) and (b) correspond to local 2-subspace coherences, (c) and (d) correspond to quadratic-mean subspace coherences, and (e) and (f) correspond to average mixing coherences.

The results of these experiments for our choices of the parameters are reported in Fig. 3(a) and Fig. 3(b) for \\(d=3\\) and \\(d=15\\), respectively. We not only plot the FWER and the NDP in these figures for both the fixed mixing bases and the random directions models, but we also plot another metric, the _false-discovery proportion_ (FDP), defined as \\(\\texttt{FDP}:=\\frac{|\\widehat{\\mathcal{A}}\\setminus\\mathcal{A}|}{|\\widehat{\\mathcal{A}}|}\\), as a measure of the FDR. Indeed, the expectation of the FDP is the FDR [27]. We first compare the FWER plots for \\(D=600\\) and \\(D=1400\\) in these figures for the fixed mixing bases model (solid and dashed lines). We can see from Fig. 2 that the \\(\\gamma_{2,i}\\)'s and the \\(\\rho_{i}\\)'s are smaller for \\(D=1400\\), which means that the thresholds \\(\\bar{\\tau}_{k}\\)'s are also smaller for \\(D=1400\\) (cf. Theorem 1). But Fig. 3 shows that the FWER for \\(D=1400\\) mostly remains below that for \\(D=600\\), which suggests that Theorem 1 is indeed capturing the correct relationship between the FWER of MSD and the properties of the underlying mixing bases. In addition, the NDP plots (solid and dashed lines) in these figures for \\(D=600\\) and \\(D=1400\\) under the fixed mixing bases model also help validate Theorem 2. Specifically, Theorem 2 suggests that the NDP of MSD should remain small for larger values of \\(n\\) as long as the \\(\\gamma_{2,i}\\)'s and the \\(\\rho_{i}\\)'s remain small. Stated differently, since the \\(\\gamma_{2,i}\\)'s and the \\(\\rho_{i}\\)'s are smaller for \\(D=1400\\) than for \\(D=600\\) (cf. Fig. 2), Theorem 2 translates into a smaller NDP figure for larger values of \\(n\\) for \\(D=1400\\). It can be seen from the NDP plots in Fig. 3 that this is indeed the case. Finally, we turn our attention to the FWER, NDP, and FDP plots in Fig. 3(a) and Fig. 3(b) for thresholds under the random directions model (circles (\\(\\circ\\)) and crosses (\\(\\times\\))). Careful examination of these plots confirms that indeed: (\\(i\\)) the MSD algorithm does not require explicit knowledge of the mixing bases for calculations of the decision thresholds; and (\\(ii\\)) the upper bounds derived in Lemma 3 for the \\(\\rho_{i}\\)'s are (order-wise) tight. Specifically, it can be seen from Fig. 3 that the thresholds derived in Sec. V for the random directions model result in performance that is either close to (\\(d=3\\) with untuned \\(c^{\\prime}_{0}\\)) or similar to (\\(d=15\\) with tuned \\(c^{\\prime}_{0}\\)) the one using thresholds that rely on knowledge of the mixing bases.
Fig. 3: Plots of FWER, NDP, and FDP as a function of the number of active subspaces, \\(n\\), under both the fixed mixing bases (FMB) and the random directions (RD) models. These plots correspond to \\(D=600\\) (FMB: solid lines; RD: circles (\\(\\circ\\))) and \\(D=1400\\) (FMB: dashed lines; RD: crosses (\\(\\times\\))).

## VII Conclusion

In this paper, we motivated and posed the problem of subspace unmixing under the parsimonious subspace-sum (PS3) model as well as discussed its connections with problems in wireless communications, hyperspectral imaging, high-dimensional statistics and compressed sensing. We proposed and analyzed a low-complexity algorithm, termed _marginal subspace detection_ (MSD), that solves the subspace unmixing problem under the PS3 model by turning it into a multiple hypothesis testing problem. We showed that the MSD algorithm can be used to control the family-wise error rate at any level \\(\\alpha\\in[0,1]\\) for an arbitrary collection of subspaces on the Grassmann manifold under two random signal generation models. We also established that the MSD algorithm allows for linear scaling of the number of active subspaces as a function of the ambient dimension. Numerical results presented in the paper further validated the usefulness of the MSD algorithm and the accompanying analysis. Future work in this direction includes design and analysis of algorithms that perform better than the MSD algorithm as well as study of the subspace unmixing problem under mixing models other than the PS3 model.
## Appendix A Proof of Lemma 1
We begin by defining \\(\\widetilde{T}_{k}(y):=\\sqrt{T_{k}(y)}\\) and noting \\(\\widetilde{T}_{k}(y)\\leq\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j }}\\theta_{j}\\big{\\|}_{2}+\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\eta\\big{\\|}_{2}\\). In order to characterize the right-tail probability of \\(T_{k}(y)\\) under \\(\\mathcal{H}_{0}^{k}\\), it suffices to characterize the right-tail probabilities of \\(Z_{1}^{k}:=\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j}}\\theta_{j} \\big{\\|}_{2}\\) and \\(Z_{2}^{k}:=\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\eta\\big{\\|}_{2}\\) under \\(\\mathcal{H}_{0}^{k}\\). This is rather straightforward in the case of \\(Z_{2}^{k}\\). In the case of deterministic error \\(\\eta\\), we have \\(Z_{2}^{k}\\geq\\epsilon_{\\eta}\\) with zero probability. In the case of \\(\\eta\\) being distributed as \\(\\mathcal{N}(0,\\sigma^{2}I)\\), we have that \\(\\eta_{k}:=\\Phi_{k}^{\\mathrm{T}}\\eta\\in\\mathbb{R}^{d}\\sim\\mathcal{N}(0,\\sigma^ {2}I)\\). In that case, the right-tail probability of \\(Z_{2}^{k}\\) can be obtained by relying on a concentration of measure result in [38, Sec. 4, Lem. 1] for the sum of squares of i.i.d. Gaussian random variables. Specifically, it follows from [38] that \\(\\forall\\delta_{2}>0\\),
\\[\\Pr\\left(Z_{2}^{k}\\geq\\sigma\\sqrt{d+2\\delta_{2}+2\\sqrt{d\\delta_{2}}}\\right) \\leq\\exp(-\\delta_{2}). \\tag{14}\\]
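As an illustrative aside (not part of the paper's derivation), the tail bound in (14) is easy to check by Monte Carlo simulation; the parameter values below are arbitrary.

```python
import numpy as np

# Monte Carlo check of (14): Pr(||eta_k|| >= sigma*sqrt(d + 2*delta + 2*sqrt(d*delta))) <= exp(-delta)
rng = np.random.default_rng(1)
d, sigma, delta, trials = 10, 0.01, 3.0, 200_000
threshold = sigma * np.sqrt(d + 2 * delta + 2 * np.sqrt(d * delta))
eta = sigma * rng.standard_normal((trials, d))
empirical = np.mean(np.linalg.norm(eta, axis=1) >= threshold)
print(f"empirical tail = {empirical:.4f}, bound exp(-delta) = {np.exp(-delta):.4f}")
```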
We now focus on the right-tail probability of \\(Z_{1}^{k}\\), conditioned on the null hypothesis. Recall that \\(\\mathcal{A}\\) is a random \\(n\\)-subset of \\(\\{1,2,\\ldots,N\\}\\) with \\(\\Pr(\\mathcal{A}=\\{i_{1},i_{2},\\ldots,i_{n}\\})=1/\\binom{N}{n}\\). Therefore, defining \\(\\bar{\\Pi}:=(\\pi_{1},\\ldots,\\pi_{N})\\) to be a random permutation of \\(\\{1,\\ldots,N\\}\\) and using \\(\\Pi:=(\\pi_{1},\\ldots,\\pi_{n})\\) to denote the first \\(n\\)-elements of \\(\\bar{\\Pi}\\), the following equality holds in distribution:
\\[\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j}}\\theta_{j}\\Big{\\|}_{2}\\ :\\ k\\not\\in\\mathcal{A}\\ \\stackrel{{ dist}}{{=}}\\ \\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\ :\\ k\\not\\in\\Pi. \\tag{15}\\]
We now define a probability event \\(E_{0}^{k}:=\\big{\\{}\\Pi=(\\pi_{1},\\ldots,\\pi_{n}):k\\not\\in\\Pi\\big{\\}}\\) and notice from (15) that
\\[\\Pr(Z_{1}^{k}\\geq\\delta_{1}\\big{|}\\mathcal{H}_{0}^{k})=\\Pr\\Bigg{(}\\Big{\\|}\\sum _{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\geq\\delta_ {1}\\big{|}E_{0}^{k}\\Bigg{)}. \\tag{16}\\]
The rest of this proof relies heavily on a Banach-space-valued Azuma's inequality (Proposition 1) stated in Appendix D. In order to make use of Proposition 1, we construct an \\(\\mathbb{R}^{d}\\)-valued Doob's martingale \\((M_{0},M_{1},\\ldots,M_{n})\\) on \\(\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\) as follows:
\\[M_{0} :=\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{ j}}\\big{|}E_{0}^{k}\\big{]}\\theta_{j},\\ \\ \\text{and} \\tag{17}\\] \\[M_{\\ell} :=\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{ j}}\\big{|}\\pi_{1}^{\\ell},E_{0}^{k}\\big{]}\\theta_{j},\\ \\ell=1,\\ldots,n, \\tag{18}\\]where \\(\\pi_{1}^{\\ell}:=(\\pi_{1},\\ldots,\\pi_{\\ell})\\) denotes the first \\(\\ell\\) elements of \\(\\Pi\\). The next step involves showing that the constructed martingale has bounded \\(\\ell_{2}\\) differences. In order for this, we define
\\[M_{\\ell}(u):=\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}} \\big{|}\\pi_{1}^{\\ell-1},\\pi_{\\ell}=u,E_{0}^{k}\\big{]}\\theta_{j} \\tag{19}\\]
for \\(u\\in\\{1,\\ldots,N\\}\\setminus\\{k\\}\\) and \\(\\ell=1,\\ldots,n\\). It can then be established using techniques very similar to the ones used in the _method of bounded differences_ for scalar-valued martingales that [39, 40]
\\[\\|M_{\\ell}-M_{\\ell-1}\\|_{2}\\leq\\sup_{u,v}\\|M_{\\ell}(u)-M_{\\ell}(v)\\|_{2}. \\tag{20}\\]
In order to upper bound \\(\\|M_{\\ell}(u)-M_{\\ell}(v)\\|_{2}\\), we define a \\(D\\times d\\) matrix \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\) as
\\[\\widetilde{\\Phi}_{\\ell,j}^{u,v}:=\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}} \\big{|}\\pi_{1}^{\\ell-1},\\pi_{\\ell}=u,E_{0}^{k}\\big{]}-\\mathbb{E}\\big{[}\\Phi_{ \\pi_{j}}\\big{|}\\pi_{1}^{\\ell-1},\\pi_{\\ell}=v,E_{0}^{k}\\big{]},\\quad\\ell=1, \\ldots,n, \\tag{21}\\]
and note that \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=0\\) for \\(j<\\ell\\) and \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=\\Phi_{u}-\\Phi_{v}\\) for \\(j=\\ell\\). In addition, notice that the random variable \\(\\pi_{j}\\) conditioned on \\(\\big{\\{}\\pi_{1}^{\\ell-1},\\pi_{\\ell}=u,E_{0}^{k}\\big{\\}}\\) has a uniform distribution over \\(\\{1,\\ldots,N\\}\\setminus\\{\\pi_{1}^{\\ell-1},u,k\\}\\), while \\(\\pi_{j}\\) conditioned on \\(\\big{\\{}\\pi_{1}^{\\ell-1},\\pi_{\\ell}=v,E_{0}^{k}\\big{\\}}\\) has a uniform distribution over \\(\\{1,\\ldots,N\\}\\setminus\\{\\pi_{1}^{\\ell-1},v,k\\}\\). Therefore, we get \\(\\forall j>\\ell\\),
\\[\\widetilde{\\Phi}_{\\ell,j}^{u,v}=\\frac{1}{N-\\ell-1}\\left(\\Phi_{u}-\\Phi_{v} \\right). \\tag{22}\\]
It now follows from the preceding discussion that
\\[\\|M_{\\ell}(u)-M_{\\ell}(v)\\|_{2}=\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{ \\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\theta_{j}\\big{\\|}_{2} \\stackrel{{(a)}}{{\\leq}}\\sum_{j=1}^{n}\\big{\\|}\\Phi_{k}^{ \\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\big{\\|}_{2}\\|\\theta_{j}\\|_{2}\\] \\[\\leq\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\left(\\Phi_{u}-\\Phi_{v}\\right) \\big{\\|}_{2}\\|\\theta_{\\ell}\\|_{2}+\\frac{\\sum_{j>\\ell}\\big{\\|}\\Phi_{k}^{ \\mathrm{T}}\\left(\\Phi_{u}-\\Phi_{v}\\right)\\big{\\|}_{2}\\|\\theta_{j}\\|_{2}}{N- \\ell-1}\\] \\[\\leq\\left(\\gamma(\\mathcal{S}_{k},\\mathcal{S}_{u})+\\gamma(\\mathcal{ S}_{k},\\mathcal{S}_{v})\\right)\\left(\\|\\theta_{\\ell}\\|_{2}+\\frac{\\sum_{j>\\ell}\\| \\theta_{j}\\|_{2}}{N-\\ell-1}\\right), \\tag{23}\\]
where \\((a)\\) is due to the triangle inequality and submultiplicativity of the operator norm. It then follows from (20), (23) and definition of the local \\(2\\)-subspace coherence that
\\[\\|M_{\\ell}-M_{\\ell-1}\\|_{2}\\leq\\underbrace{\\gamma_{2,k}\\left(\\|\\theta_{\\ell} \\|_{2}+\\frac{\\sum_{j>\\ell}\\|\\theta_{j}\\|_{2}}{N-\\ell-1}\\right)}_{b_{\\ell}}. \\tag{24}\\]
The final bound we need in order to utilize Proposition 1 is that on \\(\\|M_{0}\\|_{2}\\). To this end, note that \\(\\pi_{j}\\) conditioned on \\(E_{0}^{k}\\) has a uniform distribution over \\(\\{1,\\ldots,N\\}\\setminus\\{k\\}\\). It therefore follows that
\\[\\|M_{0}\\|_{2}=\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\big{(}\\sum_{\\begin{subarray}{c}q=1\\\\ q\\neq k\\end{subarray}}^{N}\\frac{\\Phi_{q}}{N-1}\\big{)}\\theta_{j}\\Big{\\|}_{2}\\stackrel{{(b)}}{{\\leq}}\\frac{1}{N-1}\\Big{\\|}\\sum_{\\begin{subarray}{c}q=1\\\\ q\\neq k\\end{subarray}}^{N}\\Phi_{k}^{\\mathrm{T}}\\Phi_{q}\\Big{\\|}_{2}\\Big{\\|}\\sum_{j=1}^{n}\\theta_{j}\\Big{\\|}_{2}\\stackrel{{(c)}}{{\\leq}}\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}. \\tag{25}\\]
Here, \\((b)\\) is again due to submultiplicativity of the operator norm, while \\((c)\\) is due to definitions of the average mixing coherence and the cumulative active subspace energy as well as the triangle inequality and the Cauchy-Schwarz inequality. Next, we make use of [41, Lemma B.1] to note that \\(\\zeta_{\\mathcal{B}}(\\tau)\\) defined in Proposition 1 satisfies\\(\\zeta_{\\mathcal{B}}(\\tau)\\leq\\tau^{2}/2\\) for \\((\\mathcal{B},\\|\\cdot\\|)\\equiv\\big{(}L_{2}(\\mathbb{R}^{d}),\\|\\cdot\\|_{2}\\big{)}\\). Consequently, under the assumption \\(\\delta_{1}>\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\), it can be seen from our construction of the Doob martingale \\((M_{0},M_{1},\\ldots,M_{n})\\) that
\\[\\Pr\\left(\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j} }\\theta_{j}\\Big{\\|}_{2}\\geq\\delta_{1}\\big{|}E_{0}^{k}\\right) =\\Pr\\left(\\|M_{n}\\|_{2}\\geq\\delta_{1}\\big{|}E_{0}^{k}\\right)= \\Pr\\left(\\|M_{n}\\|_{2}-\\|M_{0}\\|_{2}\\geq\\delta_{1}-\\|M_{0}\\|_{2}\\big{|}E_{0}^{ k}\\right)\\] \\[\\stackrel{{(d)}}{{\\leq}}\\Pr\\left(\\|M_{n}-M_{0}\\|_{2 }\\geq\\delta_{1}-\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\,\\big{|}E_{0}^{k}\\right)\\] \\[\\stackrel{{(e)}}{{\\leq}}e^{2}\\exp\\left(-\\frac{c_{0} \\big{(}\\delta_{1}-\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\big{)}^{2}}{\\sum_ {\\ell=1}^{n}b_{\\ell}^{2}}\\right), \\tag{26}\\]
where \\((d)\\) is mainly due to the bound on \\(\\|M_{0}\\|_{2}\\) in (25), while \\((e)\\) follows from the Banach-space-valued Azuma inequality in Appendix D. In addition, we can establish using (24), the inequality \\(\\sum_{j>\\ell}\\|\\theta_{j}\\|_{2}\\leq\\sqrt{n\\mathcal{E}_{\\mathcal{A}}}\\), and some tedious algebraic manipulations that
\\[\\sum_{\\ell=1}^{n}b_{\\ell}^{2}=\\gamma_{2,k}^{2}\\sum_{\\ell=1}^{n} \\bigg{(}\\|\\theta_{\\ell}\\|_{2}+\\frac{\\sum_{j>\\ell}\\|\\theta_{j}\\|_{2}}{N-\\ell-1} \\bigg{)}^{2}\\leq\\gamma_{2,k}^{2}\\mathcal{E}_{\\mathcal{A}}\\left(\\frac{N}{N-n} \\right)^{2}. \\tag{27}\\]
Combining (16), (26) and (27), we therefore obtain \\(\\Pr(Z_{1}^{k}\\geq\\delta_{1}\\big{|}\\mathcal{H}_{0}^{k})\\leq e^{2}\\exp\\left(- \\frac{c_{0}(N-n)^{2}\\big{(}\\delta_{1}-\\rho_{k}\\sqrt{n\\mathcal{E}_{\\mathcal{A} }}\\big{)}^{2}}{N^{2}\\gamma_{2,k}^{2}\\mathcal{E}_{\\mathcal{A}}}\\right)\\).
We now complete the proof by noting that
\\[\\Pr\\big{(}T_{k}(y)\\geq\\tau\\big{|}\\mathcal{H}_{0}^{k}\\big{)} =\\Pr\\Big{(}\\widetilde{T}_{k}(y)\\geq\\sqrt{\\tau}\\big{|}\\mathcal{H}_{ 0}^{k}\\Big{)}\\leq\\Pr\\big{(}Z_{1}^{k}+Z_{2}^{k}\\geq\\sqrt{\\tau}\\big{|}\\mathcal{ H}_{0}^{k}\\big{)}\\] \\[\\leq\\Pr\\big{(}Z_{1}^{k}+Z_{2}^{k}\\geq\\sqrt{\\tau}\\big{|}\\mathcal{H }_{0}^{k},Z_{2}^{k}<\\epsilon_{2}\\big{)}+\\Pr\\big{(}Z_{2}^{k}\\geq\\epsilon_{2} \\big{|}\\mathcal{H}_{0}^{k}\\big{)}\\] \\[\\leq\\Pr\\big{(}Z_{1}^{k}\\geq\\sqrt{\\tau}-\\epsilon_{2}\\big{|} \\mathcal{H}_{0}^{k}\\big{)}+\\Pr\\big{(}Z_{2}^{k}\\geq\\epsilon_{2}\\big{)}\\,. \\tag{28}\\]
The two statements in the lemma now follow from the (probabilistic) bounds on \\(Z_{2}^{k}\\) established at the start of the proof and the probabilistic bound on \\(Z_{1}^{k}\\) obtained in the preceding paragraph.
## Appendix B Proof of Lemma 2
We once again define \\(\\widetilde{T}_{k}(y):=\\sqrt{T_{k}(y)}\\) and note that \\(\\widetilde{T}_{k}(y)\\geq\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j }}\\theta_{j}\\big{\\|}_{2}-\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\eta\\big{\\|}_{2}\\). Therefore, characterization of the left-tail probability of \\(Z_{1}^{k}:=\\big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j}}\\theta_{j} \\big{\\|}_{2}\\) and the right-tail probability of \\(Z_{2}^{k}:=\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\eta\\big{\\|}_{2}\\) under \\(\\mathcal{H}_{1}^{k}\\) helps us specify the left-tail probability of \\(T_{k}(y)\\) under \\(\\mathcal{H}_{1}^{k}\\). Since the right-tail probability of \\(Z_{2}^{k}\\) for both deterministic and stochastic errors has already been specified in the proof of Lemma 1, we need only focus on the left-tail probability of \\(Z_{1}^{k}\\) under \\(\\mathcal{H}_{1}^{k}\\) in here.
In order to characterize \\(\\Pr(Z_{1}^{k}\\leq\\delta_{1}\\big{|}\\mathcal{H}_{1}^{k})\\), we once again define \\(\\bar{\\Pi}:=(\\pi_{1},\\ldots,\\pi_{N})\\) to be a random permutation of \\(\\{1,\\ldots,N\\}\\) and use \\(\\Pi:=(\\pi_{1},\\ldots,\\pi_{n})\\) to denote the first \\(n\\)-elements of \\(\\bar{\\Pi}\\). We then have the following equality in distribution:
\\[\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{i_{j}}\\theta_{j} \\Big{\\|}_{2}\\ :\\ k\\in\\mathcal{A}\\ \\stackrel{{ dist}}{{=}}\\ \\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j} \\Big{\\|}_{2}\\ :\\ k\\in\\Pi. \\tag{29}\\]We now define a probability event \\(E_{1}^{k}:=\\big{\\{}\\Pi=(\\pi_{1},\\ldots,\\pi_{n}):k\\in\\Pi\\big{\\}}\\) and notice from (29) that
\\[\\Pr(Z_{1}^{k}\\leq\\delta_{1}\\big{|}\\mathcal{H}_{1}^{k})=\\Pr\\Bigg{(}\\Big{\\|}\\sum_{ j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\leq\\delta_{1} \\big{|}E_{1}^{k}\\Bigg{)}. \\tag{30}\\]
Next, we fix an arbitrary \\(i\\in\\{1,\\ldots,n\\}\\) and define another probability event \\(E_{2}^{i}:=\\{\\pi_{i}=k\\}\\). It then follows that
\\[\\Pr\\Bigg{(}\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\leq\\delta_{1}\\big{|}E_{1}^{k}\\Bigg{)} =\\sum_{i=1}^{n}\\Pr\\Bigg{(}\\Big{\\|}\\sum_{j=1}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\leq\\delta_{1}\\big{|}E_{1}^{k},E_{2}^{i}\\Bigg{)}\\Pr(E_{2}^{i}\\big{|}E_{1}^{k})\\] \\[=\\sum_{i=1}^{n}\\Pr\\Bigg{(}\\Big{\\|}\\theta_{i}+\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\leq\\delta_{1}\\big{|}E_{1}^{k},E_{2}^{i}\\Bigg{)}\\Pr(E_{2}^{i}\\big{|}E_{1}^{k})\\] \\[\\overset{(a)}{\\leq}\\sum_{i=1}^{n}\\Pr\\Bigg{(}\\Big{\\|}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\geq\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}\\big{|}E_{2}^{i}\\Bigg{)}\\Pr(E_{2}^{i}\\big{|}E_{1}^{k}), \\tag{31}\\]
where \\((a)\\) follows from the facts that (\\(i\\)) \\(\\|\\theta_{i}+\\sum_{j\\neq i}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\|_{2}\\geq\\|\\theta_{i}\\|_{2}-\\|\\sum_{j\\neq i}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\|_{2}\\), (\\(ii\\)) \\(\\|\\theta_{i}\\|_{2}\\) conditioned on \\(E_{2}^{i}\\) is \\(\\sqrt{\\mathcal{E}_{k}}\\), and (\\(iii\\)) \\(E_{2}^{i}\\subset E_{1}^{k}\\). It can be seen from (30) and (31) that our main challenge now becomes specifying the right-tail probability of \\(\\|\\sum_{j\\neq i}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\|_{2}\\) conditioned on \\(E_{2}^{i}\\). To this end, we once again rely on Proposition 1 in Appendix D.
Specifically, we construct an \\(\\mathbb{R}^{d}\\)-valued Doob martingale \\((M_{0},M_{1},\\ldots,M_{n-1})\\) on \\(\\sum_{j\\neq i}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\) as follows. We first define \\(\\Pi^{-i}:=(\\pi_{1},\\ldots,\\pi_{i-1},\\pi_{i+1},\\ldots,\\pi_{n})\\) and then define

\\[M_{0}:=\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}}\\big{|}E_{2}^{i}\\big{]}\\theta_{j},\\quad\\text{and} \\tag{32}\\] \\[M_{\\ell}:=\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}}\\big{|}\\pi_{1}^{-i,\\ell},E_{2}^{i}\\big{]}\\theta_{j},\\ \\ell=1,\\ldots,n-1, \\tag{33}\\]
where \\(\\pi_{1}^{-i,\\ell}\\) denotes the first \\(\\ell\\) elements of \\(\\Pi^{-i}\\). The next step in the proof involves showing \\(\\|M_{\\ell}-M_{\\ell-1}\\|_{2}\\) is bounded for all \\(\\ell\\in\\{1,\\ldots,n-1\\}\\). To do this, we use \\(\\pi_{\\ell}^{-i}\\) to denote the \\(\\ell\\)-th element of \\(\\Pi^{-i}\\) and define
\\[M_{\\ell}(u):=\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}}\\big{|}\\pi_{1}^{-i,\\ell-1},\\pi_{\\ell}^{-i}=u,E_{2}^{i}\\big{]}\\theta_{j} \\tag{34}\\]
for \\(u\\in\\{1,\\ldots,N\\}\\setminus\\{k\\}\\) and \\(\\ell=1,\\ldots,n-1\\). It then follows from the argument in Lemma 1 that \\(\\|M_{\\ell}-M_{\\ell-1}\\|_{2}\\leq\\sup_{u,v}\\|M_{\\ell}(u)-M_{\\ell}(v)\\|_{2}\\). We now define a \\(D\\times d\\) matrix \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\) as
\\[\\widetilde{\\Phi}_{\\ell,j}^{u,v}:=\\mathbb{E}\\big{[}\\Phi_{\\pi_{j}}\\big{|}\\pi_{1} ^{-i,\\ell-1},\\pi_{\\ell}^{-i}=u,E_{2}^{i}\\big{]}-\\mathbb{E}\\big{[}\\Phi_{\\pi_{j }}\\big{|}\\pi_{1}^{-i,\\ell-1},\\pi_{\\ell}^{-i}=v,E_{2}^{i}\\big{]},\\quad\\ell=1, \\ldots,n. \\tag{35}\\]
It is then easy to see that \\(\\forall j>\\ell+1,j\\neq i\\), the random variable \\(\\pi_{j}\\) conditioned on the events \\(\\{\\pi_{1}^{-i,\\ell-1},\\pi_{\\ell}^{-i}=u,E_{2}^{i}\\}\\) and \\(\\{\\pi_{1}^{-i,\\ell-1},\\pi_{\\ell}^{-i}=v,E_{2}^{i}\\}\\) has a uniform distribution over the respective sets of remaining indices, which differ only in the elements \\(u\\) and \\(v\\); consequently, \\(\\|\\Phi_{k}^{\\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\|_{2}\\leq\\frac{1}{N-\\ell-1}\\|\\Phi_{k}^{\\mathrm{T}}(\\Phi_{u}-\\Phi_{v})\\|_{2}\\) for all such \\(j\\). Further, in the case of \\(\\ell<i-1\\), it can be further argued that \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=0\\)\\(\\forall j<\\ell\\), \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=\\Phi_{u}-\\Phi_{v}\\) for \\(j=\\ell\\), and \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=\\frac{1}{N-\\ell-1}(\\Phi_{u}-\\Phi_{v})\\) for \\(j=\\ell+1\\). Combining all these facts together, we have the following upper bound:
\\[\\|M_{\\ell}(u)-M_{\\ell}(v)\\|_{2}=\\big{\\|}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\theta_{j}\\big{\\|}_{2}\\stackrel{{(b)}}{{\\leq}}\\sum_{\\begin{subarray}{c}j\\geq\\ell\\\\ j\\neq i\\end{subarray}}\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\big{\\|}_{2}\\|\\theta_{j}\\|_{2}\\]
\\[\\stackrel{{(c)}}{{\\leq}}\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\left(\\Phi_{u}-\\Phi_{v}\\right)\\big{\\|}_{2}\\bigg{(}\\|\\theta_{\\ell}\\|_{2}1_{\\{\\ell\\neq i\\}}+\\|\\theta_{\\ell+1}\\|_{2}1_{\\{\\ell\\neq i-1\\}}+\\sum_{\\begin{subarray}{c}j>\\ell+1\\\\ j\\neq i\\end{subarray}}\\frac{\\|\\theta_{j}\\|_{2}}{N-\\ell-1}\\bigg{)}\\]
\\[\\leq\\big{(}\\gamma(\\mathcal{S}_{k},\\mathcal{S}_{u})+\\gamma(\\mathcal{S}_{k},\\mathcal{S}_{v})\\big{)}\\bigg{(}\\|\\theta_{\\ell}\\|_{2}1_{\\{\\ell\\neq i\\}}+\\|\\theta_{\\ell+1}\\|_{2}1_{\\{\\ell\\neq i-1\\}}+\\sum_{\\begin{subarray}{c}j>\\ell+1\\\\ j\\neq i\\end{subarray}}\\frac{\\|\\theta_{j}\\|_{2}}{N-\\ell-1}\\bigg{)}. \\tag{36}\\]
Here, \\((b)\\) and \\((c)\\) follow from the preceding facts that \\(\\widetilde{\\Phi}_{\\ell,j}^{u,v}=0\\)\\(\\forall j<\\ell\\) and \\(\\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\widetilde{\\Phi}_{\\ell,j}^{u,v}\\big{\\|}_{2}\\leq \\big{\\|}\\Phi_{k}^{\\mathrm{T}}\\left(\\Phi_{u}-\\Phi_{v}\\right)\\big{\\|}_{2}\\) for \\(j=\\ell\\) and \\(j=\\ell+1\\). Consequently, it follows from (36) and definition of the local \\(2\\)-subspace coherence that
\\[\\|M_{\\ell}-M_{\\ell-1}\\|_{2}\\leq\\underbrace{\\gamma_{2,k}\\bigg{(}\\|\\theta_{\\ell}\\|_{2}1_{\\{\\ell\\neq i\\}}+\\|\\theta_{\\ell+1}\\|_{2}1_{\\{\\ell\\neq i-1\\}}+\\sum_{\\begin{subarray}{c}j>\\ell+1\\\\ j\\neq i\\end{subarray}}\\frac{\\|\\theta_{j}\\|_{2}}{N-\\ell-1}\\bigg{)}}_{b_{\\ell}}. \\tag{37}\\]
The next step needed to utilize Proposition 1 involves an upper bound on \\(\\|M_{0}\\|_{2}\\), which is given as follows:
\\[\\|M_{0}\\|_{2}=\\Big{\\|}\\sum_{j\\neq i}\\Phi_{k}^{\\mathrm{T}}\\big{(}\\sum_{\\begin{subarray}{c}q=1\\\\ q\\neq k\\end{subarray}}^{N}\\frac{\\Phi_{q}}{N-1}\\big{)}\\theta_{j}\\Big{\\|}_{2}\\leq\\frac{1}{N-1}\\Big{\\|}\\sum_{\\begin{subarray}{c}q=1\\\\ q\\neq k\\end{subarray}}^{N}\\Phi_{k}^{\\mathrm{T}}\\Phi_{q}\\Big{\\|}_{2}\\Big{\\|}\\sum_{j\\neq i}\\theta_{j}\\Big{\\|}_{2}\\stackrel{{(d)}}{{\\leq}}\\rho_{k}\\sqrt{(n-1)(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}. \\tag{38}\\]
Here, \\((d)\\) primarily follows from the fact that, conditioned on \\(E_{2}^{i}\\), \\(\\sum_{j\\neq i}\\|\\theta_{j}\\|_{2}^{2}=\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k}\\).
Our construction of the Doob martingale, Proposition 1 in Appendix D, [41, Lemma B.1] and the assumption \\(\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}>\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}- \\mathcal{E}_{k})}\\) now provides us the following upper bound:
\\[\\Pr\\bigg{(}\\bigg{\\|}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq i\\end{subarray}}^{n}\\Phi_{k}^{\\mathrm{T}}\\Phi_{\\pi_{j}}\\theta_{j}\\Big{\\|}_{2}\\geq\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}\\big{|}E_{2}^{i}\\bigg{)}\\]
\\[=\\Pr\\big{(}\\|M_{n-1}\\|_{2}-\\|M_{0}\\|_{2}\\geq\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}-\\|M_{0}\\|_{2}\\big{|}E_{2}^{i}\\big{)}\\]
\\[\\stackrel{{(e)}}{{\\leq}}\\Pr\\Big{(}\\|M_{n-1}-M_{0}\\|_{2}\\geq\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}-\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}\\,\\big{|}E_{2}^{i}\\Big{)}\\]
\\[\\leq e^{2}\\exp\\left(-\\frac{c_{0}\\big{(}\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}-\\rho_{k}\\sqrt{n(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}\\big{)}^{2}}{\\sum_{\\ell=1}^{n-1}b_{\\ell}^{2}}\\right), \\tag{39}\\]
where \\((e)\\) is primarily due to the bound on \\(\\|M_{0}\\|_{2}\\) in (38). Further, it can be shown using (37), the inequality \\(\\sum_{\\ell=1}^{n-1}\\|\\theta_{\\ell}\\|_{2}1_{\\{\\ell\\neq i\\}}\\cdot\\|\\theta_{\\ell+1}\\|_{2}1_{\\{\\ell\\neq i-1\\}}\\leq(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})\\), and some tedious manipulations that the following holds:
\\[\\sum_{\\ell=1}^{n-1}b_{\\ell}^{2}\\leq\\gamma_{2,k}^{2}(\\mathcal{E}_{\\mathcal{A}}- \\mathcal{E}_{k})\\left(\\frac{2N-n}{N-n}\\right)^{2}. \\tag{40}\\]
Combining (30), (31), (39) and (40), we obtain \\(\\Pr(Z_{1}^{k}\\leq\\delta_{1}\\big{|}\\mathcal{H}_{1}^{k})\\leq e^{2}\\exp\\left(- \\frac{c_{0}(N-n)^{2}\\left(\\sqrt{\\mathcal{E}_{k}}-\\delta_{1}-\\rho_{k}\\sqrt{n( \\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}\\right)^{2}}{(2N-n)^{2}\\gamma_{2,k}^ {2}(\\mathcal{E}_{\\mathcal{A}}-\\mathcal{E}_{k})}\\right)\\).
The proof of the lemma can now be completed by noting that
\\[\\Pr\\left(T_{k}(y)\\leq\\tau\\big{|}\\mathcal{H}_{1}^{k}\\right) =\\Pr\\left(\\widetilde{T}_{k}(y)\\leq\\sqrt{\\tau}\\big{|}\\mathcal{H}_{1}^ {k}\\right)\\leq\\Pr\\left(Z_{1}^{k}-Z_{2}^{k}\\leq\\sqrt{\\tau}\\big{|}\\mathcal{H}_{1} ^{k}\\right)\\] \\[\\leq\\Pr\\left(Z_{1}^{k}-Z_{2}^{k}\\leq\\sqrt{\\tau}\\big{|}\\mathcal{H}_ {1}^{k},Z_{2}^{k}<\\epsilon_{2}\\right)+\\Pr\\left(Z_{2}^{k}\\geq\\epsilon_{2}\\big{|} \\mathcal{H}_{1}^{k}\\right)\\] \\[\\leq\\Pr\\left(Z_{1}^{k}\\leq\\sqrt{\\tau}+\\epsilon_{2}\\big{|} \\mathcal{H}_{1}^{k}\\right)+\\Pr\\left(Z_{2}^{k}\\geq\\epsilon_{2}\\right). \\tag{41}\\]
The two statements in the lemma now follow from the (probabilistic) bounds on \\(Z_{2}^{k}\\) established at the start of the proof of Lemma 1 and the probabilistic bound on \\(Z_{1}^{k}\\) obtained in the preceding paragraph.
## Appendix C Proof of Lemma 3
We begin with _any_ arbitrary (but fixed) collection of orthonormal bases of the subspaces \\(\\{\\mathcal{S}_{i}\\}_{i=1}^{N}\\), denoted by \\(\\big{\\{}\\Psi_{i}\\in\\mathbb{R}^{D\\times d}\\big{\\}}_{i=1}^{N}\\). Next, let \\(\\big{\\{}R_{i}\\in\\mathbb{R}^{d\\times d}\\big{\\}}_{i=1}^{N}\\) be a collection of random rotation matrices that are drawn in an independent manner using the Haar measure, \\(\\lambda_{R}\\), on the space \\(O(d)\\) of \\(d\\times d\\) rotation matrices. Given these \\(R_{i}\\)'s, notice that \\(\\{R_{i}\\Psi_{i}\\}_{i=1}^{N}\\) also form a collection of orthonormal bases of the subspaces \\(\\{\\mathcal{S}_{i}\\}_{i=1}^{N}\\). Our goal now is to leverage the probabilistic method and establish that
\\[\\Pr\\left(\\bigcap_{i=1}^{N}\\left\\{\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}R_{i}^{\\mathrm{T}}\\Psi_{i}^{\\mathrm{T}}\\Psi_{j}R_{j}\\Big{\\|}_{2}<\\bar{\\rho}_{i}\\right\\}\\right)>0. \\tag{42}\\]
Assuming (42) holds, it then follows that there exists a _deterministic_ collection \\(\\mathcal{Q}_{N}=\\big{\\{}Q_{i}\\in O(d)\\big{\\}}_{i=1}^{N}\\) such that
\\[\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}(Q_{i}\\Psi_{i})^{\\mathrm{T}}(Q_{j}\\Psi_{j})\\Big{\\|}_{2}<\\bar{\\rho}_{i},\\quad i=1,\\ldots,N. \\tag{43}\\]
We can afterward define the promised bases as \\(U_{i}:=Q_{i}\\Psi_{i}\\), which then completes the proof of the lemma.
In order to establish (42), notice that
\\[\\Pr\\left(\\bigcap_{i=1}^{N}\\left\\{\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}R_{i}^{\\mathrm{T}}\\Psi_{i}^{\\mathrm{T}}\\Psi_{j}R_{j}\\Big{\\|}_{2}<\\bar{\\rho}_{i}\\right\\}\\right)=1-\\Pr\\Bigg{(}\\bigcup_{i=1}^{N}\\left\\{\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}R_{i}^{\\mathrm{T}}\\Psi_{i}^{\\mathrm{T}}\\Psi_{j}R_{j}\\Big{\\|}_{2}\\geq\\bar{\\rho}_{i}\\right\\}\\Bigg{)}\\]
\\[\\geq 1-\\sum_{i=1}^{N}\\Pr\\Bigg{(}\\frac{1}{N-1}\\Big{\\|}\\sum_{j\\neq i}R_{i}^{\\mathrm{T}}\\Psi_{i}^{\\mathrm{T}}\\Psi_{j}R_{j}\\Big{\\|}_{2}\\geq\\bar{\\rho}_{i}\\Bigg{)}. \\tag{44}\\]
Thus, if we can establish that each term in the summation in (44) is strictly upper bounded by \\(N^{-1}\\) then that equivalently establishes (42). To this end, we first fix the index \\(i=1\\) since identical results for other indices follow in a similar manner. Next, let \\(\\|B\\|_{S(p)},1\\leq p<\\infty\\), denote the Schatten \\(p\\)-norm of the matrix \\(B\\), defined as \\(\\|B\\|_{S(p)}:=\\left(\\sum_{k\\geq 1}s_{k}^{p}(B)\\right)^{1/p}\\), where \\(s_{k}(B)\\) denotes the \\(k\\)-th largest singular value of \\(B\\)[42]. It then follows from the definitions of \\(\\|\\cdot\\|_{2}\\) and \\(\\|\\cdot\\|_{S(p)}\\) that
\\[\\frac{1}{N-1}\\Big{\\|}\\sum_{j>1}R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{j} R_{j}\\Big{\\|}_{2}\\leq\\rho_{1,S(p)}:=\\frac{1}{N-1}\\Big{\\|}\\sum_{j>1}R_{1}^{ \\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{j}R_{j}\\Big{\\|}_{S(p)}\\leq\\frac{d^{1/p} }{N-1}\\Big{\\|}\\sum_{j>1}R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{j}R_{j} \\Big{\\|}_{2}. \\tag{45}\\]
We therefore have from (45) that \\(\\Pr\\big{(}\\frac{1}{N-1}\\|\\sum_{j>1}R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}} \\Psi_{j}R_{j}\\|_{2}\\geq\\bar{\\rho}_{1}\\big{)}\\leq\\Pr(\\rho_{1,S(p)}\\geq\\bar{\\rho} _{1})\\).
In order to bound \\(\\Pr(\\rho_{1,S(p)}\\geq\\bar{\\rho}_{1})\\), we once again utilize Proposition 1 in Appendix D. To this end, we construct an \\(\\mathbb{R}^{d\\times d}\\) matrix-valued Doob's martingale \\((M_{0},M_{1},\\ldots,M_{N-1})\\) as follows: \\(M_{0}\\equiv 0\\) and
\\[M_{\\ell}=\\sum_{j=2}^{N}R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{j}\\mathbb{E }\\big{[}R_{j}|R_{1},R_{2},\\ldots,R_{\\ell+1}\\big{]}\\stackrel{{(a)} }{{=}}\\sum_{j=2}^{\\ell+1}R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{j}R_{j}, \\quad\\ell=1,\\ldots,N-1, \\tag{46}\\]
where \\((a)\\) follows from the mutual independence and zero mean of the random rotation matrices. Next, notice that
\\[\\forall\\ell\\geq 1,\\quad\\|M_{\\ell}-M_{\\ell-1}\\|_{S(p)} =\\|R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{\\ell+1}R_{\\ell+1} \\|_{S(p)}\\leq d^{1/p}\\|R_{1}^{\\mathrm{T}}\\Psi_{1}^{\\mathrm{T}}\\Psi_{\\ell+1}R_{ \\ell+1}\\|_{2}\\] \\[\\leq d^{1/p}\\|R_{1}\\|_{2}\\|\\Psi_{1}^{\\mathrm{T}}\\Psi_{\\ell+1}\\|_ {2}\\|R_{\\ell+1}\\|_{2}=d^{1/p}\\gamma(\\mathcal{S}_{1},\\mathcal{S}_{\\ell+1}), \\tag{47}\\]
Finally, in order to translate Proposition 1 for \\((\\mathcal{B},\\|\\cdot\\|)\\equiv(\\mathbb{R}^{d\\times d},\\|\\cdot\\|_{S(p)})\\), note that \\(\\forall p\\geq 2,\\zeta_{\\mathcal{B}}(\\tau)\\leq\\frac{p-1}{2}\\tau^{2}\\)[43]. It then follows that
\\[\\Pr(\\rho_{1,S(p)}\\geq\\bar{\\rho}_{1}) =\\int_{R_{1}\\in O(d)}\\Pr\\Big{(}\\|M_{N-1}\\|_{S(p)}\\geq(N-1)\\bar{ \\rho}_{1}\\,\\big{|}\\,R_{1}\\Big{)}\\lambda_{R}(dR_{1})\\] \\[\\leq e^{\\max\\{\\frac{p}{2},2\\}}\\exp\\bigg{(}-\\frac{c_{0}(N-1)^{2} \\bar{\\rho}_{1}^{2}}{d^{2/p}\\sum_{j>1}\\gamma^{2}(\\mathcal{S}_{1},\\mathcal{S}_{ j})}\\bigg{)}\\int_{R_{1}\\in O(d)}\\lambda_{R}(dR_{1})\\] \\[=e^{\\max\\{\\frac{p}{2},2\\}}\\exp\\bigg{(}-\\frac{c_{0}(N-1)\\bar{\\rho} _{1}^{2}}{d^{2/p}\\gamma_{\\mathrm{Im},1}^{2}}\\bigg{)}. \\tag{48}\\]
Finally, replacing \\(p=4\\log(d)\\) and \\(\\bar{\\rho}_{1}=\\frac{\\gamma_{\\mathrm{Im},1}\\sqrt{\\log(c_{4}d^{2}N)}}{\\sqrt{ \\zeta_{0}(N-1)}}\\) in (48) results in \\(\\Pr(\\rho_{1,S(p)}\\geq\\bar{\\rho}_{1})\\leq(c_{4}N)^{-1}<N^{-1}\\). This suffices to establish the statement of the lemma.
## Appendix D Banach-Space-Valued Azuma's Inequality
In this appendix, we state a Banach-space-valued concentration inequality from [43] that is central to some of the proofs in this paper.
**Proposition 1** (Banach-Space-Valued Azuma's Inequality).: _Fix \\(s>0\\) and assume that a Banach space \\((\\mathcal{B},\\|\\cdot\\|)\\) satisfies_
\\[\\zeta_{\\mathcal{B}}(\\tau):=\\sup_{\\begin{subarray}{c}u,v\\in\\mathcal{B}\\\\ \\|u\\|=\\|v\\|=1\\end{subarray}}\\bigg{\\{}\\frac{\\|u+\\tau v\\|+\\|u-\\tau v\\|}{2}-1 \\bigg{\\}}\\leq s\\tau^{2}\\]
_for all \\(\\tau>0\\). Let \\(\\{M_{k}\\}_{k=0}^{\\infty}\\) be a \\(\\mathcal{B}\\)-valued martingale satisfying the pointwise bound \\(\\|M_{k}-M_{k-1}\\|\\leq b_{k}\\) for all \\(k\\in\\mathbb{N}\\), where \\(\\{b_{k}\\}_{k=1}^{\\infty}\\) is a sequence of positive numbers. Then for every \\(\\delta>0\\) and \\(k\\in\\mathbb{N}\\), we have_
\\[\\Pr\\big{(}\\|M_{k}-M_{0}\\|\\geq\\delta\\big{)}\\leq e^{\\max\\{s,2\\}}\\exp\\bigg{(}- \\frac{c_{0}\\delta^{2}}{\\sum_{\\ell=1}^{k}b_{\\ell}^{2}}\\bigg{)},\\]
_where \\(c_{0}:=\\frac{e^{-1}}{256}\\) is an absolute constant._
_Remark 4_.: Theorem 1.5 in [43] does not explicitly specify \\(c_{0}\\) and also states the constant in front of \\(\\exp(\\cdot)\\) to be \\(e^{s+2}\\). Proposition 1 stated in its current form, however, can be obtained from the proof of Theorem 1.5 in [43].
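To make the proposition concrete, the following is an illustrative Python simulation, not part of the original analysis, that empirically checks the tail behaviour of \\(\\|M_{k}-M_{0}\\|\\) for a simple \\(\\ell_{2}\\)-valued martingale with bounded differences. The dimension, number of steps, and bound \\(b\\) are arbitrary illustrative choices, and the absolute constants in the proposition are conservative, so the printed bound is loose even though the rapid decay of the empirical tail in \\(\\delta\\) is clearly visible.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, n_trials, b = 8, 200, 20000, 1.0   # arbitrary illustrative choices
c0 = np.exp(-1) / 256                           # absolute constant from the proposition

def final_deviation():
    # Independent, symmetric increments with l2 norm exactly b: a special case
    # of bounded-difference martingale increments in the Hilbert space R^d
    # (for which the smoothness constant s can be taken as 1/2 <= 2).
    steps = rng.standard_normal((n_steps, d))
    steps *= b / np.linalg.norm(steps, axis=1, keepdims=True)
    return np.linalg.norm(steps.sum(axis=0))    # ||M_k - M_0|| with M_0 = 0

deviations = np.array([final_deviation() for _ in range(n_trials)])

for mult in (1.0, 1.5, 2.0, 2.5):
    delta = mult * np.sqrt(n_steps) * b
    empirical = np.mean(deviations >= delta)
    bound = np.exp(2.0) * np.exp(-c0 * delta**2 / (n_steps * b**2))
    print(f"delta = {mult:.1f}*sqrt(k)*b   empirical tail {empirical:.4f}   bound {bound:.4f}")
```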
## References
* [1] W. U. Bajwa and D. Mixon, \"Group model selection using marginal correlations: The good, the bad and the ugly,\" in _Proc. 50th Annu. Allerton Conf. Communication, Control, and Computing_, Monticello, IL, Oct. 2012, pp. 494-501.
* [2] L. L. Scharf and B. Friedlander, \"Matched subspace detectors,\" _IEEE Trans. Signal Processing_, vol. 42, no. 8, pp. 2146-2157, Aug. 1994.
* [3] L. Applebaum, W. U. Bajwa, M. F. Duarte, and R. Calderbank, \"Asynchronous code-division random access using convex optimization,\" _Phy. Commun._, vol. 5, no. 2, pp. 129-147, Jun. 2012.
* [4] M. Yuan and Y. Lin, \"Model selection and estimation in regression with grouped variables,\" _J. Roy. Statist. Soc. Ser. B_, vol. 68, no. 1, pp. 49-67, 2006.
* [5] F. Bach, \"Consistency of the group lasso and multiple kernel learning,\" _J. Machine Learning Res._, vol. 9, no. 6, pp. 1179-1225, Jun. 2008.
* [6] Y. Nardi and A. Rinaldo, \"On the asymptotic properties of the group lasso estimator for linear models,\" _Electron. J. Stat._, vol. 2, pp. 605-633, 2008.
* [7] J. Huang and T. Zhang, \"The benefit of group sparsity,\" _Ann. Statist._, vol. 38, no. 4, pp. 1978-2004, Aug. 2010.
* [8] Y. C. Eldar, P. Kuppinger, and H. Bolcksei, \"Block-sparse signals: Uncertainty relations and efficient recovery,\" _IEEE Trans. Signal Processing_, vol. 58, no. 6, pp. 3042-3054, Jun. 2010.
* [9] Z. Ben-Haim and Y. C. Eldar, \"Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements,\" _IEEE J. Select. Topics Signal Processing_, vol. 5, no. 5, pp. 1032-1047, Sep. 2011.
* [10] E. Elhamifar and R. Vidal, \"Block-sparse recovery via convex optimization,\" _IEEE Trans. Signal Processing_, vol. 60, no. 8, pp. 4094-4107, Aug. 2012.
* [11] S. Cotter, B. Rao, K. Engan, and K. Kreutz-Delgado, \"Sparse solutions to linear inverse problems with multiple measurement vectors,\" _IEEE Trans. Signal Processing_, vol. 53, no. 7, pp. 2477-2488, Jul. 2005.
* [12] J. Tropp, A. Gilbert, and M. Strauss, \"Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,\" _Signal Processing_, vol. 86, no. 3, pp. 572-588, Apr. 2006.
* [13] J. Tropp, \"Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,\" _Signal Processing_, vol. 86, no. 3, pp. 589-602, Apr. 2006.
* [14] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst, \"Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms,\" _J. Fourier Anal. Appl._, vol. 14, no. 5-6, pp. 655-687, Dec. 2008.
* [15] M. Stojnic, F. Parvaresh, and B. Hassibi, \"On the reconstruction of block-sparse signals with an optimal number of measurements,\" _IEEE Trans. Signal Processing_, vol. 57, no. 8, pp. 3075-3085, Aug. 2009.
* [16] Y. C. Eldar and H. Rauhut, \"Average case analysis of multichannel sparse recovery using convex relaxation,\" _IEEE Trans. Inform. Theory_, vol. 56, no. 1, pp. 505-519, Jan. 2010.
* [17] G. Obozinski, M. Wainwright, and M. Jordan, \"Support union recovery in high-dimensional multivariate regression,\" _Ann. Statist._, vol. 39, no. 1, pp. 1-47, Jan. 2011.
* [18] M. Davies and Y. Eldar, \"Rank awareness in joint sparse recovery,\" _IEEE Trans. Inform. Theory_, vol. 58, no. 2, pp. 1135-1146, Feb. 2012.
* [19] W. U. Bajwa, M. F. Duarte, and R. Calderbank, \"Conditioning of random block subdictionaries with applications to block-sparse recovery and regression,\" _IEEE Trans. Inform. Theory_, vol. 61, no. 7, pp. 4060-4079, Jul. 2015.
* [20] J. A. Tropp, \"On the conditioning of random subdictionaries,\" _Appl. Comput. Harmon. Anal._, vol. 25, pp. 1-24, 2008.
* [21] ----, \"Norms of random submatrices and sparse approximation,\" in _C. R. Acad. Sci., Ser. I_, Paris, 2008, vol. 346, pp. 1271-1274.
* [22] E. J. Candes and Y. Plan, \"Near-ideal model selection by \\(\\ell_{1}\\) minimization,\" _Ann. Statist._, vol. 37, no. 5A, pp. 2145-2177, Oct. 2009.
* [23] W. U. Bajwa, R. Calderbank, and S. Jafarpour, \"Why Gabor frames? Two fundamental measures of coherence and their role in model selection,\" _J. Commun. Netw._, vol. 12, no. 4, pp. 289-307, Aug. 2010.
* [24] W. U. Bajwa, R. Calderbank, and D. G. Mixon, \"Two are better than one: Fundamental parameters of frame coherence,\" _Appl. Comput. Harmon. Anal._, vol. 33, no. 1, pp. 58-78, Jul. 2012.
* [25] D. Manolakis, C. Siracusa, and G. Shaw, \"Hyperspectral subpixel target detection using the linear mixing model,\" _IEEE Trans. Geoscience Remote Sens._, vol. 39, no. 7, pp. 1392-1409, Jul. 2001.
* [26] S. M. Kay, _Fundamentals of Statistical Signal Processing: Detection Theory_. Upper Saddle River, NJ: Prentice Hall, 1998.
* [27] A. Farcomeni, \"A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion,\" _Statistical Methods in Medical Research_, vol. 17, no. 4, pp. 347-388, Aug. 2008.
* [28] Y. Benjamini and Y. Hochberg, \"Controlling the false discovery rate: A practical and powerful approach to multiple testing,\" _J. Roy. Statist. Soc. Ser. B_, vol. 57, no. 1, pp. 289-300, 1995.
* [29] Y. Benjamini, A. M. Krieger, and D. Yekutieli, \"Adaptive linear step-up procedures that control the false discovery rate,\" _Biometrika_, vol. 93, no. 3, pp. 491-507, 2006.
* [30] Z. Drmac, \"On principal angles between subspaces of Euclidean space,\" _SIAM J. Matrix Analy. App._, vol. 22, no. 1, pp. 173-194, 2000.
* [31] A. Barg, A. Mazumdar, and R. Wang, \"Restricted isometry property of random subdictionaries,\" _IEEE Trans. Inform. Theory_, vol. 61, no. 8, pp. 4440-4450, Aug. 2015.
* [32] J. A. Tropp, \"Greed is good: Algorithmic results for sparse approximation,\" _IEEE Trans. Inform. Theory_, vol. 50, no. 10, pp. 2231-2242, Oct. 2004.
* [33] R. Calderbank, A. Thompson, and Y. Xie, \"On block coherence of frames,\" _Appl. Comput. Harmon. Anal._, vol. 38, no. 1, pp. 50-71, Jan. 2015.
* [34] A. P. Schaum, \"Spectral subspace matched filtering,\" in _Proc. SPIE 4381, Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VII_, Orlando, FL, Apr. 2001, pp. 1-17.
* [35] P. Kuppinger, G. Durisi, and H. Bolcskei, \"Uncertainty relations and sparse signal recovery for pairs of general signal sets,\" _IEEE Trans. Inform. Theory_, vol. 58, no. 1, pp. 263-277, Jan. 2012.
* [36] P. W. H. Lemmens and J. J. Seidel, \"Equi-isoclinic subspaces of Euclidean spaces,\" _Indagationes Mathematicae (Proceedings)_, vol. 76, no. 2, pp. 98-107, 1973.
* [37] F. Mezzadri, \"How to generate random matrices from the classical compact groups,\" _Notices of the AMS_, vol. 54, no. 5, pp. 592-604, May 2007.
* [38] B. Laurent and P. Massart, \"Adaptive estimation of a quadratic functional by model selection,\" _Ann. Statist._, vol. 28, no. 5, pp. 1302-1338, Oct. 2000.
* [39] C. McDiarmid, \"On the method of bounded differences,\" in _Surveys in Combinatorics_, J. Siemons, Ed. Cambridge University Press, 1989, pp. 148-188.
* [40] R. Motwani and P. Raghavan, _Randomized Algorithms_. New York, NY: Cambridge University Press, 1995.
* [41] M. Donahue, C. Darken, L. Gurvits, and E. Sontag, \"Rates of convex approximation in non-Hilbert spaces,\" in _Constructive Approximation_. New York, NY: Springer, Jun. 1997, vol. 13, no. 2, pp. 187-220.
* [42] R. A. Horn and C. R. Johnson, _Topics in Matrix Analysis_. Cambridge University Press, 1994.
* [43] A. Naor, \"On the Banach-space-valued Azuma inequality and small set isoperimetry of Alon-Roichman graphs,\" _Combinatorics, Probability and Computing_, vol. 21, no. 04, pp. 623-634, Jul. 2012. | We consider a wireless network comprising a large number of users in which some of the users simultaneously transmit data to a base station. It is imperative for the base station in this case to identify the users that are communicating with it at any given instance, which is termed as the problem of multiuser detection. This problem of multiuser detection in wireless networks can also be posed as a subspace unmixing problem under the PS3 model. In this context, users in the network communicate with the base station using \\(D\\)-dimensional codewords in \\(\\mathbb{R}^{D}\\), each individual user is assigned a codebook that spans a low-dimensional subspace \\(\\mathcal{S}_{i}\\) of \\(\\mathbb{R}^{D}\\), the total number of users in the network is \\(N\\), the number of active users at any given instanceis \\(n\\ll N\\), and the base station receives \\(y\\in\\sum_{i\\in\\mathcal{A}}\\mathcal{S}_{i}+\\text{noise}\\) due to the superposition property of the wireless medium, where \\(\\mathcal{A}\\) denotes the indices of the users actively communicating with the base station. | Write a summary of the passage below. | 267 |
mdpi/029c6223_8555_496c_ba86_e867d7d26d98.md | Assessment of the Ecological Condition of Informal Settlements Using the Settlement Surface Ecological Index
Naledzani Mudau
1School of Geography, Archaeological & Environmental Studies, Faculty of Science, University of the Witwatersrand, Johannesburg 2000, South Africa; [email protected]
1
Paidamwoyo Mhangara
1School of Geography, Archaeological & Environmental Studies, Faculty of Science, University of the Witwatersrand, Johannesburg 2000, South Africa; [email protected]
## 1 Introduction
The urban ecosystem provides services that directly impact human health and security, including runoff mitigation, urban cooling, and air purification [1]. The availability of vegetation in urban areas can help improve air quality and reduce flood severity [2]. In addition, improving green infrastructure in informal settlements can help enhance social and cultural interaction [3; 4]. In addition, an increase in the impervious surface and low vegetation cover negatively affects the local microclimate and increases the formation of surface urban heat island (SUHI) [5]. Exposure to excessive heat in certain areas can cause physiological and socio-economic stress, amplify existing health issues, and increase premature death or disability [6]. Populations in informal settlements are likely to be affected the most by increased heat exposure as the dwelling structures in these areas lack cooling services [7]. Urbanization can also result in soil erosion or contamination, which may threaten human health [8].
Satellite images have been widely used to assess and detect land use features such as human settlement developments [9; 10; 11; 12], informal settlements [13; 14; 15], and urban growth rates [16; 17]. Vegetation cover and density of impervious surface are the two land cover classes that have been thoroughly investigated in an attempt to automatically detect informal settlements from satellite imagery [18; 19; 20]. Other studies have used satellite images to map and measure urban morphology [21; 22].
Urban surfaces and their characteristics play an important role in achieving sustainable and resilient cities as they can influence the people's quality of life and the settlements' environmental conditions [23]. Several studies have assessed the environmental conditionsof cities using a vegetation-impervious surface-soil (VIS) model [24] and medium spatial resolution images [25; 26]. In addition to the VIS model, assessing other biophysical characteristics such as surface temperature, wetness, and air quality provides more variables to assess the surface ecological status of cities. The remote sensing ecological index (RSEI) is the first model that utilizes remotely sensed data to assess the status of the urban ecology of cities [27]. RSEI uses vegetation index, humidity, land surface temperature, and built-up and bareness index [27]. Researchers have explored the RSEI to evaluate the status of urban ecology across cities [28; 29; 30].
The methodologies commonly used for mapping land cover biophysical characteristics include thresholding of radiometric values and the maximum-likelihood algorithm (MLA). Due to the heterogeneity of land use features in urban areas, these methods suffer from spectral mixing issues. A hierarchical algorithm that uses textural features has proven to perform better than MLA [31]. Using object-based image analysis (OBIA) to map urban land cover from high-resolution imagery improves the results significantly compared to pixel-based classification and MLA [32].
The previous studies conducted on the assessment of ecological studies are limited to a city level, providing data required to develop a city-level intervention. Since informal settlements are illegal and may lack essential services such as sanitation, water, and waste removal [33], they pose several social and environmental challenges. Environmental challenges such as land degradation and pollution of natural resources have been associated with informal settlements [34]. This may be attributed to unmanaged land use activities and lack of access to basic services [33]. In addition, the effect of climate change and global change may be more severe in informal settlements than in formal settlements as they are located in undesirable locations such as flood-prone or high-slope areas [35]. With future urbanization expected to take place mostly in developing countries that are already struggling with informal settlement developments, understanding the vulnerability of informal settlements can help develop interventions to improve the wellbeing of people living in informal settlements.
The studies that used RSEI to assess the ecological conditions of the cities assess the broad land-use classes. Since human settlements are not the same, a detailed analysis of urban ecological conditions is required to improve understanding of urban environments and develop necessary solutions to build green infrastructure across a city. This study builds on RSEI and develops a settlement surface ecological index (SSEI) that uses the tree, grassland, impervious surface, soil, land surface temperature, and vegetation moisture to assess the ecological status of informal and formal settlements using biophysical parameters derived from high and medium spatial resolution imagery.
## 2 Study Area
The study area covers residential, commercial, and industrial areas in the eastern part of the City of Tshwane, Gauteng, South Africa, and lies between \\(-25^{\\circ}41^{\\prime}\\) and \\(-25^{\\circ}46^{\\prime}\\) latitude and \\(28^{\\circ}17^{\\prime}\\) and \\(28^{\\circ}26^{\\prime}\\) longitude. The study area contains low- and high-density residential areas on the eastern side of the metropolitan municipality and formal high-density and informal settlements in Mamelodi township, located about 30 km from the city's central business district; see Figure 1.
The main economic activities in the municipality are finance and manufacturing. About 16% of the municipality's households lived in informal dwellings in 2016 [36]. Mamelodi is one of the townships experiencing increased development of informal settlements [37]. The municipality plans to upgrade informal settlements by providing essential services or top structures [38]. In addition, several initiatives are aimed at greening the city to improve air quality and sociocultural interaction, especially in townships. The zoom-in pictures of selected areas are shown in Appendix A.
The selection of the different human settlement types was performed through visual interpretation using the Google Earth platform. The maps of the selected settlement were generated using Esri base maps. The description of the selected settlements was guided by Census 2011 metadata [39] and the South African National Land Cover Classification standard, 19144-2:2014.
## 3 Data
We used Satellite Pour Observation de la Terre (SPOT) 7 images acquired on 8 November 2017 to classify urban land cover classes. The SPOT 7 sensor acquires images both in multispectral and panchromatic modes. The scene IDs of the images used are IMG_SPOT7_MS_201711070754099_ORT_SPOT7_20180925_0743431nq51eejut843_1_R3C2 and img_spot7_ms_201711070754099_ort_spot7_20180925_0743431nq51eejut843_1_r3c2. The spectral bands, wavelength, and spatial resolution of SPOT 7 are presented in Table 1.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
**Spectral Band** & **Wavelength (\\(\\mu\\)m)** & **Spatial Resolution (m)** \\\\ \\hline
\\multicolumn{3}{c}{SPOT 7} \\\\ \\hline
Panchromatic & 0.45–0.75 & 1.5 \\\\ \\hline
Blue & 0.45–0.52 & 6 \\\\ \\hline
Green & 0.53–0.59 & 6 \\\\ \\hline
Red & 0.62–0.69 & 6 \\\\ \\hline
Near-Infrared & 0.76–0.89 & 6 \\\\ \\hline
\\multicolumn{3}{c}{Landsat 8} \\\\ \\hline
Panchromatic & 0.50–0.68 & 15 \\\\ \\hline
Coastal Blue & 0.43–0.45 & 30 \\\\ \\hline
Blue & 0.45–0.51 & 30 \\\\ \\hline
Green & 0.53–0.59 & 30 \\\\ \\hline
Red & 0.64–0.67 & 30 \\\\ \\hline
Near-Infrared & 0.85–0.88 & 30 \\\\ \\hline
Short-wave Infrared 1 & 1.57–1.65 & 30 \\\\ \\hline
Short-wave Infrared 2 & 2.11–2.29 & 30 \\\\ \\hline
Cirrus & 1.36–1.38 & 30 \\\\ \\hline
Thermal Infrared 1 & 10.6–11.19 & 100 \\\\ \\hline
Thermal Infrared 2 & 11.50–12.51 & 100 \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 1: List of the spectral bands, wavelengths, and spatial resolutions of the SPOT 7 and Landsat 8 satellite images.
Figure 1: Location of the study area and different human settlement types assessed in the study.
A Landsat 8 image acquired on 08 November 2017, scene ID LC81700782017312LGN00, was used to derive land surface temperature and vegetation moisture information.
## 4 Methodology
### Classification of Urban Land-Use Classes
The classification of impervious surface, tree, grass, and soil cover from the SPOT 7 satellite image was performed using the object-based image analysis (OBIA) technique in Trimble eCognition Software. The OBIA technique has been widely used to detect urban land cover and land use classes from high spatial resolution satellite imagery [18; 40]. This technique has generated more accurate results than pixel-based urban land use classification [32]. The first fundamental step of OBIA is image segmentation. This process partitions an image into image objects that represent desired land use or land cover features with similar spectral and spatial properties [41]. The quality of results achieved using OBIA techniques depends on the image objects created during image segmentation [41].
The multiresolution segmentation method was used to partition the image into image objects. This bottom-up, region-merging technique partitions an image into objects based on user-defined homogeneity criteria, i.e., scale, compactness, and shape [42]. Two segmentation levels were created from the multispectral bands using scale parameters of 400 and 25. The scale parameter of 400 generated Level 1 image objects representing non-built-up and built-up land cover and land use classes, whereas the scale parameter of 25 generated Level 2 image objects representing urban land use objects. Compactness and shape parameters of 0.5 and 0.1 were selected for both segmentation levels. The scale, compactness, and shape parameters were selected through trial and error by visually inspecting the results and adjusting the segmentation parameters [43; 44].
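To make the two-level segmentation step concrete, the sketch below uses the graph-based felzenszwalb segmenter from scikit-image as an open-source stand-in. It is not the proprietary multiresolution segmentation algorithm of eCognition used in the study, and the parameter values are purely illustrative rather than equivalents of the scale parameters of 400 and 25.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def two_level_segmentation(band):
    """Illustrative two-level segmentation of a single band (e.g., NIR).

    The 'scale' values play a role loosely analogous to the coarse (Level 1)
    and fine (Level 2) scale parameters used in the study, but the algorithm
    and numbers are only stand-ins for demonstration.
    """
    band = band.astype(np.float64)
    level1 = felzenszwalb(band, scale=400, sigma=1.0, min_size=500)  # coarse objects
    level2 = felzenszwalb(band, scale=25, sigma=0.5, min_size=50)    # fine objects
    return level1, level2

# Example usage with a random array standing in for a SPOT 7 NIR band.
nir = np.random.default_rng(1).random((256, 256))
lvl1, lvl2 = two_level_segmentation(nir)
print(lvl1.max() + 1, "Level 1 objects;", lvl2.max() + 1, "Level 2 objects")
```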
The OBIA rule-based classification technique was used to classify Level 1 image objects into built-up and non-built-up classes. A ruleset that uses radiometric values, vegetation indices, and textural features was developed for this purpose, with built-up and non-built-up areas separated using the gray-level co-occurrence matrix (GLCM) dissimilarity texture [45]. The GLCM dissimilarity texture measures the distance between pairs of pixels within an image object [45]. Due to the heterogeneity of land use features in urban areas, image objects in built-up areas are expected to have higher dissimilarity texture values than those in non-built-up areas [12]. Most image objects in built-up areas are also expected to have higher brightness values than non-built-up areas [46]. A ruleset that combines GLCM dissimilarity and brightness values is therefore expected to separate built-up from non-built-up areas.
Level 2 image objects were classified into trees, grass, impervious surface, and soil using the soil-adjusted vegetation index (SAVI) [47], GLCM dissimilarity texture [45], Pantex [48], and iron oxide index [49].
The SAVI is a vegetation index that minimizes the impact of soil properties in vegetated areas [47]. It has been widely used to separate vegetation from other land cover classes, such as impervious surfaces, soil, and water [50; 51]. Impervious surfaces, including roads and building structures, are expected to have lower SAVI values than the soil and vegetation classes [52; 53]. SAVI was used to separate trees from grass features.
Pantex is a texture-derived built-up presence index that uses GLCM for different directions and displacements and has proven to improve the classification of human settlement land use types compared to other built-up indices [48]. Building structures are expected to have higher Pantex values than other land-use classes [48].
The GLCM dissimilarity texture was used to distinguish open areas from building structures. The iron oxide or iron index measures the amount of iron on the surface using the red and blue bands [49]. This index was used to distinguish grass from soil classes.
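As an illustration of the per-object features discussed above, the following sketch computes SAVI, the iron oxide index, and GLCM dissimilarity with NumPy and scikit-image, assuming the SPOT 7 bands and an object mask are available as arrays. The Pantex computation is omitted, and the classification thresholds are hypothetical placeholders rather than the rules used in the study's eCognition ruleset.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def object_features(red, nir, blue, pan, mask):
    """Compute illustrative per-object features for one image object.

    red, nir, blue: 2-D reflectance arrays; pan: panchromatic band;
    mask: boolean array selecting the pixels of the image object.
    """
    r, n, b = red[mask], nir[mask], blue[mask]

    # Soil-adjusted vegetation index (L = 0.5 is the usual adjustment factor).
    L = 0.5
    savi = np.mean((n - r) / (n + r + L) * (1.0 + L))

    # Iron oxide index: ratio of red to blue reflectance.
    iron = np.mean(r / np.maximum(b, 1e-6))

    # GLCM dissimilarity on a bounding-box window of the panchromatic band,
    # quantised to 32 grey levels.
    rows, cols = np.where(mask)
    window = pan[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    grey = np.uint8(np.interp(window, (window.min(), window.max()), (0, 31)))
    glcm = graycomatrix(grey, distances=[1], angles=[0], levels=32,
                        symmetric=True, normed=True)
    dissimilarity = graycoprops(glcm, "dissimilarity")[0, 0]
    return savi, iron, dissimilarity

def classify_object(savi, iron, dissimilarity):
    # Hypothetical thresholds for illustration only.
    if dissimilarity > 2.0:
        return "impervious surface"
    if savi > 0.35:
        return "tree"
    if savi > 0.15:
        return "grass"
    return "soil" if iron > 1.2 else "impervious surface"
```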
### Mapping of Land Surface Temperature
LST was derived using band 10 of the Landsat 8 satellite. The LST was calculated using the following formula:
\\[LST=\\frac{BT}{1+(\\lambda BT/\\rho)\\ln\\epsilon} \\tag{1}\\]
where \\(BT\\) is the at-sensor brightness temperature, \\(\\lambda\\) is the wavelength of the emitted radiance, \\(\\rho\\) is a constant obtained as \\(\\rho=h\\,c/\\sigma\\approx 1.438\\times 10^{-2}\\) m K, where \\(h\\) is Planck's constant, \\(c\\) is the velocity of light, and \\(\\sigma\\) is the Boltzmann constant, and \\(\\epsilon\\) is the land surface emissivity.
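As a concrete illustration, a minimal Python sketch of this calculation for Landsat 8 band 10 is given below. The radiance rescaling factors and thermal constants shown are typical values published for Landsat 8 and would normally be read from the scene's MTL metadata file, and the NDVI-threshold emissivity estimate is a common simplification rather than the exact emissivity model used in the study.

```python
import numpy as np

# Typical Landsat 8 band 10 values; the actual values come from the scene's
# MTL file (RADIANCE_MULT_BAND_10, RADIANCE_ADD_BAND_10, K1, K2).
ML, AL = 3.342e-4, 0.1          # radiance rescaling factors
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants [K]
LAMBDA = 10.895e-6              # band 10 centre wavelength [m]
RHO = 1.438e-2                  # h * c / sigma [m K]

def land_surface_temperature(dn_band10, ndvi):
    """Estimate LST (deg C) from Landsat 8 band 10 digital numbers."""
    radiance = ML * dn_band10 + AL
    bt = K2 / np.log(K1 / radiance + 1.0)       # brightness temperature [K]

    # Simple NDVI-threshold emissivity (a common approximation).
    pv = np.clip(np.square((ndvi - 0.2) / (0.5 - 0.2)), 0.0, 1.0)
    emissivity = 0.004 * pv + 0.986

    lst = bt / (1.0 + (LAMBDA * bt / RHO) * np.log(emissivity))
    return lst - 273.15                          # convert to deg C
```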
### Mapping of Vegetation Moisture
The assessment of vegetation moisture in the study area was performed by analyzing the normalized difference moisture index (NDMI), which measures vegetation water content. The NDMI was derived using the following formula:
\\[NDMI=\\frac{(NIR-SWIR1)}{(NIR+SWIR1)} \\tag{2}\\]
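A corresponding NumPy sketch, assuming surface-reflectance arrays for the NIR and SWIR1 bands (Landsat 8 bands 5 and 6), might look as follows.

```python
import numpy as np

def ndmi(nir, swir1, eps=1e-6):
    """Normalized difference moisture index from NIR and SWIR1 reflectance."""
    nir = nir.astype(np.float64)
    swir1 = swir1.astype(np.float64)
    return (nir - swir1) / (nir + swir1 + eps)
```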
### The Assessment of Settlement Surface Ecological Index
The SSEI is a function of the urban land cover classes, LST, and vegetation moisture, and it was calculated using the following formula:
\\[\\text{SSEI}=(\\text{Tree cover}+\\text{Grass cover}+\\text{Vegetation moisture})-(\\text{Impervious surface}+\\text{LST}) \\tag{3}\\]
The assessment of the SSEI was performed over 300 m \\(\\times\\) 300 m blocks of the selected settlement types. The assessed biophysical characteristics were standardized to the 0–1 range before being combined. The index is defined as the difference between the characteristics that improve urban ecology and those that negatively alter the urban ecosystem; higher (positive) values therefore represent better settlement ecological conditions, whereas lower (negative) values represent the worst conditions.
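A minimal sketch of this scoring, assuming the per-block mean values of the five components have already been extracted into a pandas DataFrame with one row per 300 m by 300 m block, is shown below. The column names are hypothetical, and the example values are taken from two rows of Table 3 purely for illustration.

```python
import pandas as pd

def min_max(series):
    """Rescale a column to the 0-1 range across all assessed blocks."""
    return (series - series.min()) / (series.max() - series.min())

def add_ssei(blocks: pd.DataFrame) -> pd.DataFrame:
    """Append an SSEI column following Eq. (3).

    Expected (hypothetical) columns: 'tree', 'grass', 'ndmi',
    'impervious', 'lst' holding per-block fractions / mean values.
    """
    norm = {col: min_max(blocks[col])
            for col in ["tree", "grass", "ndmi", "impervious", "lst"]}
    blocks = blocks.copy()
    blocks["ssei"] = (norm["tree"] + norm["grass"] + norm["ndmi"]) - \
                     (norm["impervious"] + norm["lst"])
    return blocks

# Example usage with two blocks (old informal medium density and
# informal medium-density new development, values from Table 3):
df = pd.DataFrame({
    "tree": [0.63, 0.08], "grass": [0.06, 0.42], "ndmi": [-0.02, -0.14],
    "impervious": [0.31, 0.47], "lst": [34.4, 42.7],
})
print(add_ssei(df)[["ssei"]])
```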
### Quality Assurance
Quality assurance of the mapped VIS classes was performed by comparing the classification results with manually selected image objects representing the mapped classes. A total of 588 impervious surface, 71 tree, 42 grass, and 99 soil samples were created through visual image interpretation using a random sampling method. The accuracy assessment used Trimble eCognition 9.0 software [41]. We assessed overall, producer, and user accuracies. The producer and user accuracy measurements assess the errors of omission and commission of the selected samples on the classified image.
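For reference, overall, producer's, and user's accuracies can be computed from the reference and predicted labels of the sample objects with a simple confusion-matrix routine such as the sketch below. This is an illustrative stand-in, not the eCognition accuracy-assessment tool used in the study, and the labels in the usage example are made up.

```python
import numpy as np

def accuracy_report(reference, predicted, classes):
    """Confusion matrix with overall, producer's, and user's accuracy."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for ref, pred in zip(reference, predicted):
        cm[idx[ref], idx[pred]] += 1          # rows: reference, cols: predicted

    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)  # 1 - omission error
    users = np.diag(cm) / cm.sum(axis=0)      # 1 - commission error
    return cm, overall, dict(zip(classes, producers)), dict(zip(classes, users))

# Toy usage with hypothetical labels:
classes = ["impervious", "soil", "tree", "grass"]
ref = ["tree", "tree", "soil", "grass", "impervious", "soil"]
pred = ["tree", "grass", "soil", "grass", "impervious", "impervious"]
cm, oa, pa, ua = accuracy_report(ref, pred, classes)
print(cm, f"overall accuracy = {oa:.2f}")
```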
## 5 Results
### Image Segmentation
The multiresolution image segmentation process with scale, compactness, and shape values of 400, 0.1, and 0.5, respectively, generated built-up and non-built-up image objects; see Figure 2. The segmentation results show that non-built-up areas adjacent to built-up areas were accurately separated from built-up areas. The segmentation parameters also created image objects that represent different human settlement types. Some of the open spaces within settlements were separated from the residential areas; there were, however, a few cases where open-space objects were merged into residential-area image objects. The image objects in industrial and commercial areas mostly contained individual building structures since these structures have a clear contrast with the surrounding areas.
The Level 2 multiresolution segmentation separated impervious surface, soil, and vegetation image objects. However, impervious surfaces, such as roads and building structures in industrial areas and in low-density settlements, experienced oversegmentation. Oversegmentation in low-density formal settlements may be attributed to the different orientations of the building structures.
Undersegmentation was observed in informal settlements where more than one building structure was merged into one image object.
### Image Classification
The OBIA ruleset-based classification method that uses GLCM dissimilarity texture successfully classified built-up areas from non-built-up land cover classes; see Figure 3 and Table 2. The major roads were successfully classified as built-up areas. Some areas with bare soil and no building structures were classified as built-up areas; image objects formed in such areas had apparent contrast differences from the surrounding non-built-up areas. Some open-space image objects with small human settlement segments were classified as non-built-up areas. Non-built-up areas are primarily found in the bottom left of the study area.
Figure 2: Level 1 multiresolution segmentation results over SPOT 7 images (**a**) and Level 2 multiresolution segmentation results over SPOT 7 images (**b**).
The use of Pantex and dissimilarity texture distinguished building structures from vegetation and soil because building structures in the study area have a clear contrast with the surrounding land use classes. The use of SAVI separated trees from the grass class. The separation of bare soil and vegetated areas was successfully achieved using the iron index. The results show that the built-up areas in the bottom left of the study area contained higher tree coverage than the other built-up areas. In contrast, the rest of the built-up areas in the middle and top of the study area contain a higher percentage of impervious surface; see Figure 3. The soil class can be seen mainly from the center to the left and top right corner of the study area.
An overall accuracy of 91.1% was achieved in classifying trees, grass, impervious surface, and soil cover; see Table 2. The soil class achieved producer and user accuracies below 90%. The errors of omission in this class were found in the formal and informal settlements, where gravel road segments were merged and classified as impervious surface or vegetation.
The grass class achieved poor accuracy compared to the other classes: some of the grass objects were misclassified as trees, and some grass segments were misclassified as open spaces. Some of the building structures in informal settlements were not classified as impervious surfaces but were misclassified as soil. This may be attributed to the fact that some of the dwelling structures in informal settlements are small and were merged with soil image objects.
### Assessment of Biophysical Characteristics
The spatial distribution of the assessed human settlement types over LST and vegetation moisture is shown in Figures 4 and 5.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
**Class** & **User Accuracy \\%** & **Producer Accuracy \\%** \\\\ \\hline Impervious surface & 97.9 & 94.4 \\\\ \\hline Soil & 87.2 & 82.8 \\\\ \\hline Trees & 89.7 & 98.6 \\\\ \\hline Grass & 66.6 & 63.6 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Producer and user accuracies achieved during the classification of impervious surface, soil, grass, and tree cover.
Figure 3: Classification results and the location of assessed settlements.
The new medium-density informal settlement experienced the highest LST of over 40 \\({}^{\\circ}\\)C, and the old medium-density informal settlement experienced the lowest LST of about 34 \\({}^{\\circ}\\)C among the assessed formal and informal settlement types; see Table 3. The lower LST in this area may be attributed to the settlement being located next to the mountain [53]. The formal settlements experienced LSTs of 36–39 \\({}^{\\circ}\\)C on the assessment day. The low-density formal settlement with high tree cover experienced the lowest LST among them, while the formal low-density new development experienced the highest. The LST in industrial areas was almost 2 \\({}^{\\circ}\\)C higher than in commercial areas.
Figure 4: Spatial distribution of the different settlement types over the LST layer.
Figure 5: Spatial distribution of assessed settlement types over vegetation moisture layer.
The vegetation moisture was very low in all the settlement types, with values lower than 0.1; see Figure 5 and Table 3. The settlement with the highest tree cover experienced the highest vegetation moisture of 0.07 compared to other settlement types.
The results show that the informal settlement types comprised less than 50% impervious surface cover, with the new medium-density informal settlement containing the highest impervious surface cover of 47%. In comparison, the new informal settlement development contained the least impervious cover of 17%. The tree cover in informal settlements varied from 8% to 63%, with the old informal settlement having the highest tree cover while the new medium-density informal settlement contained only 8% tree cover. The soil cover in the informal settlements was extremely low, with a maximum cover of only 3%. The grass cover varied from 6% to 42%, with the new medium-density informal settlement having the highest grass cover compared to other informal settlement types.
Table 4 shows the normalized values of the assessed parameters. The normalized values were used to calculate the SSEI.
The low-density formal settlements had the highest tree cover, 69%, compared to other settlement types. In contrast, the formal medium-density settlements with backyard shacks and the formal settlements with shacks had the lowest tree cover. The soil cover was low in formal areas except in the formal medium-density new development and the formal settlements with shacks, which had 46% and 48% soil cover, respectively. The impervious surface cover in formal settlements ranged between 24% and 91%. The low-density formal settlement with high tree cover had the least impervious surface cover, while the formal high-density settlement with shacks had the highest (91%). Industrial areas had the second-highest impervious surface cover of 90%, while commercial areas had 77%. Industrial and commercial areas had zero soil cover, with 10% and 22% vegetation cover, respectively.
### Assessment of Settlement Surface Ecological Index (SSEI)
The status of urban surface ecology is better in low-density formal settlements than in the other settlement types; see Figure 6. The SSEI of informal settlements in the study area varies according to the composition of their biophysical characteristics. The old medium-density informal settlement with a high cover of trees is in a better condition than the new medium-density informal settlement with a very low coverage of trees.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline
**Settlement Type** & **LST\\_Mean** & **Soil \\%** & \\begin{tabular}{c} **Vegetation** \\\\ **Moisture** \\\\ \\end{tabular} & \\begin{tabular}{c} **Impervious** \\\\ **Surface \\%** \\\\ \\end{tabular} & \\begin{tabular}{c} **Tree \\%** \\\\ \\end{tabular} &
\\begin{tabular}{c} **Grass \\%** \\\\ \\end{tabular} \\\\ \\hline Commercial & 36.90 & 0 & 0.00 & 77 & 19 & 3 \\\\ \\hline Industrial & 38.67 & 0 & \\(-\\)0.02 & 90 & 4 & 6 \\\\ \\hline Formal high density with shacks & 38.31 & 1 & \\(-\\)0.07 & 91 & 2 & 5 \\\\ \\hline Formal medium density & 37.88 & 16 & \\(-\\)0.05 & 66 & 6 & 11 \\\\ \\hline Formal shacks & 38.70 & 48 & \\(-\\)0.09 & 43 & 1 & 7 \\\\ \\hline Old informal medium density & 34.41 & 0 & \\(-\\)0.02 & 31 & 63 & 6 \\\\ \\hline Informal medium-density new development & 42.67 & 3 & \\(-\\)0.14 & 47 & 8 & 42 \\\\ \\hline Formal low density & 36.05 & 1 & 0.07 & 24 & 69 & 5 \\\\ \\hline Formal low-density new development & 39.16 & 3 & \\(-\\)0.03 & 53 & 8 & 37 \\\\ \\hline Formal medium-density new development & 37.76 & 46 & \\(-\\)0.08 & 49 & 0 & 4 \\\\ \\hline Formal high density (clusters) & 37.81 & 3 & 0.00 & 65 & 23 & 8 \\\\ \\hline Informal new development & 39.77 & 3 & \\(-\\)0.06 & 17 & 47 & 33 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Proportions and values of biophysical characteristics assessed during the study.
The commercial area and the formal high-density clusters have the lowest positive SSEI values. All other settlement types have negative SSEI values, with the formal settlement with backyard shacks and the new informal settlements having the worst ecological conditions. Grass cover and soil cover have an insignificant negative correlation with the SSEI. Since vegetation moisture is positively correlated with tree cover, the results show that increasing tree coverage can improve the ecological conditions of the settlements, reduce surface temperature, reduce soil erosion, and reduce the impact of flooding.
The results show a need to improve the ecological conditions of medium-density informal and formal settlements. The SSEI can also be used during the planning of the interventions, i.e., tree planting strategies in areas with high cover of impervious surfaces may be different from those with low impervious surface cover.
\\begin{table}
\\begin{tabular}{l l l l l l l} \\hline \\hline
**Settlement Type** & **LST** & \\begin{tabular}{l} **Vegetation** \\\\ **Moisture** \\\\ \\end{tabular} & **Soil** & \\begin{tabular}{l} **Impervious** \\\\ **Surface** \\\\ \\end{tabular} & **Tree** & **Grass** \\\\ \\hline Commercial & 0.30 & 0.00 & 0.66 & 0.81 & 0.28 & 0.28 \\\\ \\hline Industrial & 0.52 & 0.00 & 0.59 & 0.99 & 0.06 & 0.06 \\\\ \\hline Formal high density with shacks & 0.47 & 0.00 & 0.30 & 1.00 & 0.03 & 0.03 \\\\ \\hline Formal medium density & 0.42 & 0.33 & 0.42 & 0.66 & 0.09 & 0.09 \\\\ \\hline Formal shacks & 0.52 & 1.00 & 0.20 & 0.35 & 0.01 & 0.01 \\\\ \\hline Old informal medium density & 0.00 & 0.00 & 0.59 & 0.19 & 0.91 & 0.91 \\\\ \\hline
\\begin{tabular}{l} Informal medium-density new \\\\ development \\\\ \\end{tabular} & 1.00 & 0.06 & 0.00 & 0.41 & 0.12 & 0.12 \\\\ \\hline Formal low density & 0.20 & 0.02 & 1.00 & 0.09 & 1.00 & 1.00 \\\\ \\hline Formal low-density new development & 0.58 & 0.06 & 0.54 & 0.49 & 0.00 & 0.12 \\\\ \\hline Formal medium-density new & 0.41 & 0.96 & 0.25 & 0.43 & 0.00 & 0.00 \\\\ \\hline Formal high density (clusters) & 0.41 & 0.06 & 0.67 & 0.65 & 0.33 & 0.33 \\\\ \\hline Informal new development & 0.65 & 0.06 & 0.37 & 0.00 & 0.68 & 0.68 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Normalized values of the classification results.
Figure 6: SSEI values of different settlement types.
## 6 Discussion
The condition of surface ecology is influenced by the physical, chemical, and biological characteristics of the area of interest and is affected by natural processes and land use activities. Several studies have assessed the ecological conditions of cities using VIS and RSEI extracted from medium spatial resolution images. These studies show that impervious surface cover affects the condition of surface ecology, with higher cover resulting in unhealthy surface ecological conditions [28; 30]. These studies generated information that can support initiatives to manage and improve urban ecosystems at the city level. This study demonstrated that using biophysical parameters derived from high and medium spatial resolution images provides detailed information on the surface ecological conditions of different settlement types. The study shows that the surface ecological condition varies from one informal settlement to another. The results show that informal settlements with lower impervious surface and high tree cover have better ecological conditions than those with lower vegetation cover. The same trend was seen in formal settlements. This is well aligned with the previous studies that assessed ecological conditions using RSEI derived from medium spatial resolution imagery [29; 30]. The results also show that some of the formal settlements have less healthy environmental conditions than some of the informal settlements; such areas have higher impervious surface cover and lower tree cover. The assessment of the results shows that settlements with higher tree cover have better ecological conditions than those with higher grass cover. Informal settlements with higher grass cover and higher impervious surface cover were in a poorer surface ecological state than the other assessed informal settlements. Further investigation of the impact of separating grass from trees on the SSEI needs to be conducted. The assessment of the index for different informal settlement types has the potential to provide valuable information that can be incorporated during the planning of upgrade projects. High spatial resolution imagery and information on the location of informal settlements are not always available and may be a limitation in assessing this index in certain countries or cities.
## 7 Conclusions
The study assessed the surface ecological conditions of informal settlements. The analysis of impervious surface, tree and grass cover, LST, and vegetation moisture provides valuable information on the environmental vulnerability of informal settlements and other settlement types. The results achieved in this study can be used to develop green strategies suitable for the different informal settlements to improve wellbeing. The results can also be used to develop disaster management strategies to reduce the impact of disasters on informal settlements. In addition, the information provided in this study can be used as input during the planning of informal settlement upgrade projects to ensure that the planning of services takes into account the need to reduce the environmental vulnerability of the settlements. As urbanization and the effects of climate change continue to be challenges for many city authorities, addressing the environmental challenges of informal and formal settlements is key to achieving sustainable development. The developed index contributes to ongoing research to build resilient settlements and offers practical measurements that can be used as the foundation for further work to understand the resilience of the cities.
Author Contributions: Conceptualization, N.M. and P.M.; methodology, N.M.; validation, N.M.; formal analysis, N.M.; investigation, N.M.; writing--original draft preparation, N.M.; writing--review and editing, N.M. and P.M.; visualization, N.M.; supervision, P.M. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The data produced in this research may be made available upon request.
Conflicts of Interest: The authors declare no conflict of interest.
## Appendix A
## References
* (1) Bolund, P.; Hunhammar, S. Ecosystem services in urban areas. _Ecol. Econ._**1999**, _29_, 293-301. [CrossRef]
* (2) Gallagher, J.; Baldauf, R.; Fuller, C.H.; Kumar, P.; Gill, L.W.; McNabola, A. Passive Methods for Improving Air Quality in the Built Environment: A Review of Porous and Solid Barriers. _Atmos. Environ._**2015**, _120_, 61-70. Available online: [https://www.sciencedirect.com/science/article/pii/S1352231015303204](https://www.sciencedirect.com/science/article/pii/S1352231015303204) (accessed on 11 August 2023). [CrossRef]
* (3) Adegun, O.B. Green Infrastructure Can Improve the Lives of Slum Dwellers in African Cities. _Front. Sustain. Cities_**2021**, \\(3\\), 621051. [CrossRef]
* (4) Haase, D.; Kabisch, S.; Haase, A.; Andersson, E.; Banzhaf, E.; Baro, F.; Brenck, M.; Fischer, L.K.; Frantzeskaki, N.; Kabisch, N.; et al. Greening cities-To be socially inclusive? About the alleged paradox of society and ecology in cities. _Habitat Int._**2017**, _64_, 41-48. Available online: [https://www.sciencedirect.com/science/article/pii/S0197397516309390](https://www.sciencedirect.com/science/article/pii/S0197397516309390) (accessed on 11 August 2023). [CrossRef]
* (5) Morabito, M.; Crisci, A.; Guerri, G.; Messeri, A.; Congedo, L.; Munafo, M. Surface urban heat islands in Italian metropolitan cities: Tree cover and impervious surface influences. _Sci. Total Environ._**2021**, _751_, 142334. Available online: [https://www.sciencedirect.com/science/article/pii/S0048969720358630](https://www.sciencedirect.com/science/article/pii/S0048969720358630) (accessed on 11 August 2023). [CrossRef]
* (6) Anderson, B.; Bell, M. Weather-related mortality: How heat, cold, and heat waves affect mortality in the United States. _Epidemiology_**2009**, _20_, 205. [CrossRef]
* (7) Ramsay, E.E.; Fleming, G.M.; Faber, P.A.; Barker, S.F.; Sweeney, R.; Taruc, R.R.; Chown, S.L.; Duffy, G.A. Chronic heat stress in tropical urban informal settlements. _Iscience_**2021**, _24_, 103248. Available online: [https://www.sciencedirect.com/science/article/pii/S2589004221012177](https://www.sciencedirect.com/science/article/pii/S2589004221012177) (accessed on 11 August 2023). [CrossRef]
* (8) Wang, L.Y.; Xiao, Y.; Rao, E.M.; Jiang, L.; Xiao, Y.; Ouyang, Z.Y. An Assessment of the Impact of Urbanization on Soil Erosion in Inner Mongolia. _Int. J. Environ. Res. Public Health_**2018**, _15_, 550. Available online: [https://www.mdpi.com/1660-4601/15/3/550](https://www.mdpi.com/1660-4601/15/3/550) (accessed on 11 August 2023). [CrossRef]
* (9) Kemper, T.; Mudau, N.; Mhangara, P.; Pesaresi, M. Towards an automated monitoring of human settlements in South Africa using high resolution SPOT satellite imagery. In _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences--ISPRS Archives_; International Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2015; pp. 1389-1394.
* (10) Pesaresi, M.; Politis, P. _GHS-BUILT-S R2022A--GHS Built-Up Surface Grid, Derived from Sentinel2 Composite and Landsat, Multitemporal (1975-2030)_; European Commission, Joint Research Centre (JRC): Brussels, Belgium, 2022.
* (11) Marconcini, M.; Metz-Marconcini, A.; Ureyen, S.; Palacios-Lopez, D.; Hanke, W.; Bachofer, F.; Zeidler, J.; Esch, T.; Gorelick, N.; Kakarla, A.; et al. Outlining where humans live, the World Settlement Footprint 2015. _Sci. Data_**2020**, \\(7\\), 1-14. [CrossRef]
* (12) Mudau, N.; Mapurisa, W.; Tsoeleng, T.; Mashalane, M. Towards development of a national human settlement layer using high resolution imagery: A contribution to SDG reporting. _S. Afr. J. Geomat._**2022**, \\(9\\), 1-12. [CrossRef]
* (13) Gram-Hansen, B.J.; Helber, P.; Varatharajan, I.; Azam, F.; Coca-Castro, A.; Kopackova, V.; Bilinski, P. Mapping informal settlements in developing countries using machine learning and low resolution multi-spectral data. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27-28 January 2019; pp. 361-368.
* (14) Luo, E.; Kuffer, M.; Wang, J. Urban poverty maps-From characterising deprivation using geo-spatial data to capturing deprivation from space. _Sustain. Cities Soc._**2022**, _84_, 104033. Available online: [https://www.sciencedirect.com/science/article/pii/S2210670](https://www.sciencedirect.com/science/article/pii/S2210670) 722003535 (accessed on 11 August 2023). [CrossRef]
* (15) Owusu, M.; Kuffer, M.; Belgiu, M.; Grippa, T.; Lennert, M.; Georganos, S.; Vanhuysse, S. Towards user-driven earth observation-based slum mapping. _Comput. Environ. Urban Syst._**2021**, _89_, 101681. Available online: [https://www.sciencedirect.com/science/article/pii/S0198971521000880](https://www.sciencedirect.com/science/article/pii/S0198971521000880) (accessed on 11 August 2023). [CrossRef]
* (16) Yadav, V.; Ghosh, S.K. Assessment and prediction of urban growth for a mega-city using CA-Markov model. _Geocarto Int._**2021**, _36_, 1960-1992. [CrossRef]
* (17) Borana, S.L.; Vaishnav, A.; Yadav, S.K.; Parihar, S.K. Urban Growth Assessment Using Remote Sensing, GIS and Shannon's Entropy Model: A Case Study of Bhilwara City, Rajasthan. In Proceedings of the 2020 3rd International Conference on Emerging Technologies in Computer Engineering: Machine Learning and Internet of Things (ICETCE), Jaipur, India, 7-8 February 2020; pp. 1-6.
* (18) Mudau, N.; Mangara, P. Investigation of informal settlement indicators in a densely populated area using very high spatial resolution satellite imagery. _Sustainability_**2021**, _13_, 4735. [CrossRef]
* (19) Fallatah, A.; Jones, S.; Mitchell, D.; Kohli, D. Mapping informal settlement indicators using object-oriented analysis in the Middle East. _Int. J. Digit Earth_**2019**, _12_, 802-824. [CrossRef]
* (20) Hofmann, P.; Strobl, J.; Blaschke, T.; Kux, H. Detecting informal settlements from QuickBird data in Rio de Janeiro using an object based approach. In _Object-Based Image Analysis_; Springer: Berlin/Heidelberg, Germany, 2008; pp. 531-553.
* (21) Chen, W.; Wu, A.N.; Biljecki, F. Classification of Urban Morphology with Deep Learning: Application on Urban Vitality. _Comput. Environ. Urban Syst._**2021**, _90_, 101706. [CrossRef]
* (22) Wang, J.; Fleischmann, M.; Venerandi, A.; Romice, O.; Kuffer, M.; Porta, S. EO + Morphometrics: Understanding cities through urban morphology at large scale. _Landscr Urban._**2023**, _233_, 104691. [CrossRef]
* (23) Croce, S.; Vettorato, D. Urban surface uses for climate resilient and sustainable cities: A catalogue of solutions. _Sustain. Cities Soc._**2021**, _75_, 103313. [CrossRef]* (24) Ridd, M.K. Exploring a V-I-S (vegetation-impervious surface-soil) model for urban ecosystem analysis through remote sensing: Comparative anatomy for cities. _Int. J. Remote Sens._**1995**, _16_, 2165-2185. [CrossRef]
* (25) Phinn, S.; Stanford, M.; Scarth, P.; Murray, A.T.; Shyy, P.T. Monitoring the composition of urban environments based on the vegetation-impervious surface-soil (VIS) model by subpixel analysis techniques. _Int. J. Remote Sens._**2002**, _23_, 4131-4153. [CrossRef]
* (26) Aina, Y.A.; Adam, E.; Ahmed, F.; Wafer, A.; Alshuwaiikhat, H.M. Using multisource data and the V-I-S model in assessing the urban expansion of Riyadh city, Saudi Arabia. _Eur. J. Remote Sens._**2019**, _52_, 557-571. [CrossRef]
* (27) Hu, X.; Xu, H. A new remote sensing index for assessing the spatial heterogeneity in urban ecological quality: A case from Fuzhou City, China. _Ecol. Indic._**2018**, _89_, 11-21. Available online: [https://www.sciencedirect.com/science/article/pii/S1470160](https://www.sciencedirect.com/science/article/pii/S1470160) $times$18300827 (accessed on 11 August 2023). [CrossRef]
* (28) Yue, H.; Liu, Y.; Li, Y.; Lu, Y. Eco-Environmental Quality Assessment in China's 35 Major Cities Based On Remote Sensing Ecological Index. _IEEE Access_**2019**, \\(7\\), S1295-S1311. [CrossRef]
* (29) Firojaei, M.K.; Fathololoumi, S.; Weng, Q.; Kiavarz, M.; Alavipanah, S.K. Remotely sensed urban surface ecological index (RUSEI): An analytical framework for assessing the surface ecological status in urban environments. _Remote Sens._**2020**, _12_, 2029. [CrossRef]
* (30) Li, J.; Gong, J.; Guldmann, J.M.; Yang, J. Assessment of urban ecological quality and spatial heterogeneity based on remote sensing: A case study of the rapid urbanization of wuhan city. _Remote Sens._**2021**, _13_, 4440. [CrossRef]
* (31) Setiawan, H.; Mathieu, R.; Thompson-Fawcett, M. Assessing the applicability of the V-I-S model to map urban land use in the developing world: Case study of Yogyakarta, Indonesia. _Comput. Environ. Urban Syst._**2006**, _30_, 503-522. Available online: [https://www.sciencedirect.com/science/article/pii/S0198971505000360](https://www.sciencedirect.com/science/article/pii/S0198971505000360) (accessed on 11 August 2023). [CrossRef]
* (32) Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. _Remote Sens. Environ._**2011**, _115_, 1145-1161. [CrossRef]
* (33) UN-Habitat. The challenge of slums: Global report on human settlements 2003. _Manag. Environ. Qual. Int. J._**2004**, _15_, 337-338. [CrossRef]
* (34) Devi, P.P.; Lowry, J.H.; Weber, E. Global environmental impact of informal settlements and perceptions of local environmental threats: An empirical case study in Suva, Fiji. _Habitat Int._**2017**, _69_, 58-67. Available online: [https://www.sciencedirect.com/science/article/pii/S0197397516310293](https://www.sciencedirect.com/science/article/pii/S0197397516310293) (accessed on 11 August 2023). [CrossRef]
* (35) Ramin, B. Slums, climate change and human health in sub-Saharan Africa. _Bull. World Health Organ._**2009**, _87_, 886. Available online: [https://europepmc.org/articles/PMC2789375](https://europepmc.org/articles/PMC2789375) (accessed on 11 August 2023). [CrossRef]
* (36) Cooperative Governance and Traditional Affairs. 2020. Available online: [https://www.cogta.gov.za/index.php/page/36/?option=com_docman&task=cat_view&gid=111&limit=5&limitsart=5&order=name&dir=ASC](https://www.cogta.gov.za/index.php/page/36/?option=com_docman&task=cat_view&gid=111&limit=5&limitsart=5&order=name&dir=ASC) (accessed on 11 August 2023).
* (37) Mudau, N.; Mhangara, P. Towards understanding informal settlement growth patterns: Contribution to SDG reporting and spatial planning. _Remote Sens Appl._**2022**, _27_, 100801. [CrossRef]
* (38) City of Tshwane. Human Settlements. 2020. Available online: [https://www.dffe.gov.za/sites/default/files/reports/environmentoutlook_chapter5.pdf](https://www.dffe.gov.za/sites/default/files/reports/environmentoutlook_chapter5.pdf) (accessed on 11 August 2023).
* (39) Statistics South Africa. Census 2011 Metadata. Available online: [http://www.statssa.gov.za/census/census_2011/census_products/Census](http://www.statssa.gov.za/census/census_2011/census_products/Census). 2011_Metadata.pdf (accessed on 11 August 2023).
* (40) De Pinho, C.M.D.; Fonseca, L.M.G.; Korting, T.S.; de Almeida, C.M.; Kux, H.J.H. Land-cover classification of an intra-urban environment using high-resolution images and object-based image analysis. _Int. J. Remote Sens._**2012**, _33_, 5973-5995. [CrossRef]
* (41) Definiens. Definiens Definiens Developer 7 Developer 7 Reference Book Reference Book. 2007. Available online: www.definiens.comwww.definiens.com (accessed on 11 August 2023).
* (42) Baatz, M. Multi resolution segmentation: An optimum approach for high quality multi scale image segmentation. In _Beutrage zum AGIT-Symposium_; Salzburg: Heidelberg, Germany, 2000; pp. 12-23.
* (43) Mugiraneza, T.; Nascetti, A.; Ban, Y. WorldView-2 data for hierarchical object-based urban land cover classification in Kigali: Integrating rule-based approach with urban density and greenness indices. _Remote Sens._**2019**, _11_, 2128. [CrossRef]
* (44) Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. _ISPRS J. Photogramm. Remote Sens._**2017**, _130_, 277-293. Available online: [https://www.sciencedirect.com/science/article/pii/S092427161](https://www.sciencedirect.com/science/article/pii/S092427161) 630661X (accessed on 11 August 2023). [CrossRef]
* (45) Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. In _IEEE Transactions on Systems, Man, and Cybernetics_; IEEE: New York, NY, USA, 1973; pp. 610-621.
* (46) Van de Voorde, T.; De Genst, W.; Canters, F.; Stephenne, N.; Wolff, E.; Binard, M. _Extraction of Land Use/Land Cover-Related Information from Very High Resolution Data in Urban and Suburban Areas_; Millpress: Rotterdam, The Netherlands, 2004; pp. 237-244. Available online: [https://www.researchgate.net/publication/237134807](https://www.researchgate.net/publication/237134807) (accessed on 11 August 2023).
* (47) Huete, A.R. A soil-adjusted vegetation index (SAVI). _Remote Sens. Environ._**1988**, _25_, 295-309. Available online: [https://www.sciencedirect.com/science/article/pii/003442578890106X](https://www.sciencedirect.com/science/article/pii/003442578890106X) (accessed on 11 August 2023). [CrossRef]
* (48) Pesaresi, M.; Gerhardinger, A.; Kayitakire, F. A Robust Built-Up Area Presence Index by Anisotropic Rotation-Invariant Textual Measure. _IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens._**2008**, \\(1\\), 180-192. [CrossRef]* White et al. (1997) White, K.; Walden, J.; Drake, N.; Eckardt, F.; Settlell, J. Mapping the iron oxide content of dune sands, Namib Sand Sea, Namibia, using landsat thematic mapper data. _Remote Sens. Environ._**1997**, _62_, 30-39. Available online: [https://www.sciencedirect.com/science/article/pii/S0034425797000680](https://www.sciencedirect.com/science/article/pii/S0034425797000680) (accessed on 11 August 2023). [CrossRef]
* Stefanov and Ramsey (2001) Stefanov, W.L.; Ramsey, M.S.; Christensen, P.R. Monitoring urban land cover change: An expert system approach to land cover classification of semiarid to arid urban centers. _Remote Sens. Environ._**2001**, _77_, 173-185. Available online: [https://www.sciencedirect.com/science/article/pii/S0034425701002048](https://www.sciencedirect.com/science/article/pii/S0034425701002048) (accessed on 11 August 2023). [CrossRef]
* Chen and Sun (2015) Chen, X.; Sun, G.; Wang, Z. A case study on the urban impervious surface distribution based on a BCI index. In Proceedings of the International Symposium on Multispectral Image Processing and Pattern Recognition, Enshi, China, 31 October-1 November 2015; Available online: [https://api.semanticscholar.org/CorpusID:130192599](https://api.semanticscholar.org/CorpusID:130192599) (accessed on 11 August 2023).
* Ukhnaa et al. (2019) Ukhnaa, M.; Huo, X.; Gaudel, G. Modification of urban built-up area extraction method based on the thematic index-derived bands. _IOP Conf. Ser Earth Environ Sci._**2019**, _227_, 62009. [CrossRef]
* Peng et al. (2020) Peng, X.; Wu, W.; Zheng, Y.; Sun, J.; Hu, T.; Wang, P. Correlation analysis of land surface temperature and topographic elements in Hangzhou, China. _Sci. Rep._**2020**, _10_, 10451. [CrossRef] [PubMed]
**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

To manage urban ecological ecosystems adequately, understanding the urban areas' biophysical characteristics is required. This study developed a settlement surface ecological index (SSEI) using tree, soil, impervious surface and grass covers, land surface temperature (LST), and soil moisture derived from Satellite Pour L'Observation de la Terre (SPOT) 7 and Landsat 8 satellite images. The assessment of the SSEI was conducted over twelve sites of 300 m by 300 m. The selected sites contained formal and informal settlements of varying building densities. The SSEI values ranged from \\(-0.3\\) to \\(0.54\\). Seven assessed areas are in the worst ecological condition with an SSEI below zero. Only three settlement types had an SSEI index value of \\(0.2\\) and above, and two of these areas were informal settlements. The formal low-density settlement with higher tree coverage displayed the highest index value of \\(0.54\\), slightly higher than the medium-density informal settlement. Overall, there is no significant difference in the SSEI values between the surface ecological condition of formal and informal settlements. The results achieved in this study can be used to understand urban ecology better and develop urban greening strategies at a city or settlement level.
Keywords: informal settlements; land surface temperature; urban ecology
# Cartosat-1 Stereo Imagery: Potentialities about Orientation, DSM Extraction and Orthorectification
M. Crespi, F. Barbato, L. De Vendictis, G. Iannucci, D. Poli, F. Volpe, X. Wang

DITS, Area di Geodesia e Geomatica, Sapienza Universita di Roma, via Eudossiana 18, 00184 Rome, Italy (mattia.crespi, laura.decureductis)@uniroma1.it

CyberCity AG, in der Luberzen 9, CH 8902 Urdorf, Switzerland (dpoli, xwang)@cybercity.tv
## 1 Introduction
Cartosat-1 was launched on 5\\({}^{\\mathrm{th}}\\) May, 2005 at Satish Dhawan Space Centre (SDSC) SHAR, Sriharikota, India's spaceport (Figure 1).
The satellite flies along a Sun-synchronous orbit at a mean altitude of 618 km, with an inclination of 97.87 degrees and a mean revolution period equal to 97.12 minutes.
The Sun-synchronous orbit provides imagery collection under near-constant illumination conditions and repetitive coverage of the same area in a specified interval.
The Cartosat-1 sensor offers an across-track resolution of 2.5 m in panchromatic mode.
In addition the optical sensor is configured with two pushbroom cameras which are mounted such that one camera is looking at +26\\({}^{\\circ}\\) (band F) and the other at -5\\({}^{\\circ}\\) (band A) along the track. These two cameras provide stereoscopic image pairs in the same pass (Krishnaswamy et al., 2006).
The Institutes of Area di Geodesia e Geomatica - Sapienza Universita di Roma, Eurimage S.p.A. and CyberCity AG joined as Investigators the ISPRS-ISRO Cartosat-1 Scientific Assessment Programme (C-SAP), for the investigation of Cartosat-1 stereo imagery for DEM/DSM generation, 3D features extraction and orthophoto production (C-Sap, 2006). This paper reports the second part of the investigations carried out on the available stereo imagery (TS-6).
After a brief description of the available and used data, the results of images orientation with a generic rigorous model and the comparison between the DSMs are presented and discussed.
## 2 Data
The data set of this investigation (TS-6) consisted of:
* a stereopair from Cartosat-1 satellite with corresponding metadata files;
* 43 ground points;
* DSM derived from a block of 6 aerial photos
In the next paragraphs the main data characteristics are described.
Figure 1: Cartosat-1 satellite
### Cartosat-1 stereo scenes
The two stereo images were acquired on 8th June 2005 in the morning; the time interval between the two images is 53 sec. Each image is 12'000 x 3'000 pixel large, with a ground resolution of 2.5 m.
The imagery width is 3'000 pixels because only 3'000 of the 12'000 sensor detectors were active; the overlap is almost 90%.
The scenes cover an area of approximately 30.0 x 7.5 km\\({}^{2}\\) over the city of Rome and its suburbs, characterized by quite flat terrain with elevations ranging between 20 and 60 m (Figure 2); the morphology is mainly of urban type, with buildings of different heights, generally with no more than 6-8 floors (20-25 m).
The scenes were acquired in panchromatic mode with attitude angles (roll, pitch, yaw) close to zero and approximately equal for both of them; as already mentioned the stereo viewing is guaranteed (with a B/H ratio of about 0.6) by the sensor structure which is configured such that one camera scans the ground with off-nadir angle of +26\\({}^{\\circ}\\) (forward image: band F) and the other with off-nadir angle of -5\\({}^{\\circ}\\) (backward image: band A).
The metadata files contain information on the acquisition time, image location, mean attitude angles and sensor geometry (detectors looking angles), but no data about ephemeris.
### Ground Control Points
For this investigation 43 ground points, well distributed over the whole area and suited for acting both as Ground Control Points - GCPs and as Check Points - CPs, were available. All these points were surveyed by geodetic class GPS surveys with 3D accuracy (RMSE) of 5-10 cm (Figure 3).
### Reference DSM
The reference DSM has been extracted from a block of 6 aerial photos (scale 1:9'000, footprint: 14 cm), on an area of about 3.5 x 4 km\\({}^{2}\\) (Figure 4).
The two strips have an along track and an across track overlap of about 60% and 20% respectively.
Unfortunately the 6 aerial photos available for the reference DSM extraction cover a small area of the satellite imagery (about 6 % only), characterized by a wide variety of urban environments (tall and low buildings, parks with high trees and bare soil).
## 3 Imagery orientation
The PCI-OrthoEngine software was used to orient the stereopair with an approach based on a so called \"generic rigorous model\" (Dowman and Michalis, 2003).
It is well known that a rigorous model should describe the physical properties of the sensor and its image acquisition mode; it is an approach mainly photogrammetric (collinearity equations) accounting for the satellite position, the sensor attitude and characteristics, and an eventual final cartographic transformation. (Crespi, 2006/3)
The used version of this software (v. 10.0) implements the well-known Toutin's rigorous model for several high-resolution satellites including IKONOS, Quickbird, EROS A and many others, but the sensor model of Cartosat-1 is not yet included (PCI, 2006).
However it's possible to create an orbital model for Cartosat-1 images using a generic rigorous model that necessitates manually entering approximate parameters about the satellite image and orbit data: across and along track angle, IFOV, mean satellite altitude, orbital period, orbit eccentricity and inclination, pixel spacing and scene centre. The corresponding values for Cartosat-1 were available in the metadata files.
After the block adjustment, it is possible to analyze the results of the stereo images orientation in terms of residuals both on GCPs and CPs.
### Results of imagery orientation
In this paragraph the residuals of the CPs for the stereo scenes are reported (Table 5, Figure 6) and discussed. Several tests have been carried out using a different number of GCPs, always homogeneously distributed, in order to find the lowest number of GCPs needed to obtain the accuracy (RMSE on North and East CPs residuals) assessment.

Figure 2: Cartosat-1 stereo scenes: band A (left) and band F (right)

Figure 3: Distribution of the 43 ground points

Figure 4: Reference DSM from aerial photos
Overall a horizontal and vertical accuracy at sub-pixel level is achievable. The accuracy stabilizes with a GCPs number between 21 and 25.
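For reference, the RMSE figures of Table 5 follow directly from the East, North and height residuals of the check points after the block adjustment; a minimal NumPy sketch of this computation, with purely hypothetical residual values, is given below.

```python
import numpy as np

def rmse_of_residuals(residuals):
    """Root mean square error of CP residuals, one value per component (E, N, H)."""
    residuals = np.asarray(residuals, dtype=float)      # shape: (num_check_points, 3)
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# Hypothetical residuals (metres) of a few check points; columns are E, N, H.
cp_residuals = np.array([
    [ 1.8, -1.2,  2.0],
    [-2.1,  1.5, -1.6],
    [ 1.9, -1.4,  2.2],
])
rmse_e, rmse_n, rmse_h = rmse_of_residuals(cp_residuals)
print(f"RMSE E = {rmse_e:.2f} m, N = {rmse_n:.2f} m, H = {rmse_h:.2f} m")
```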
## 4 DSM Generation
DEM and DSM generation from high resolution satellite imagery is a hot topic for some years (Toutin, 2001; Toutin and Cheng, 2001; Toutin, 2004/1; Toutin, T., 2004/2).
In our experiment the DSM was generated at a resolution of 2 pixels (5 m), using 21 GCPs well distributed over the whole images for the orientation.
A quantitative evaluation of the DSM was carried out by the comparison with the reference DSM generated from the aerial photos, over three zones with different characteristics: one urban zone and 2 open zones.
We computed the 3D differences (Cartosat-1 DSM minus aerial photos DSM), by Geomagic Studio 9 software, between the interpolated DSM and the reference DSM, showing the results of the raw computations, without any a posteriori manual editing procedure. In Table 7 the results obtained are reported.
Overall the accuracy (RMSE) of the generated DSM is at the level of 1-2 pixel, better for open than for urban zones as likely expected. Nevertheless it is mandatory to point out some differences.
In fact for the test relative to the urban zone, where usually there are streets in very dense residential zones, the high percentage of buildings tends to raise DSM heights over the road level (positive errors) (Crespi et al. 2006/2) and, in the same time, to smooth the building edges (negative errors) (Figure 8).
As regards the tests over the open zones, a negative value of the average has to be evidenced, due to negative residuals localized in correspondence of tree-lined zones; this bias is likely to be caused again by the mentioned smoothing effect, even if further insights will be developed, since a negative bias seems to be evident almost all over the open areas (Figure 9).
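The statistics reported in Table 7 are obtained from the pixel-wise difference between the Cartosat-1 DSM and the reference DSM over each zone; a rough NumPy sketch, assuming the two DSMs are already co-registered on the same 5 m grid and using random surfaces as stand-ins for the real data, is:

```python
import numpy as np

def dsm_difference_stats(dsm_test, dsm_ref, mask=None):
    """Mean, standard deviation and RMSE of (test - reference) heights over a zone."""
    diff = dsm_test - dsm_ref
    if mask is not None:
        diff = diff[mask]                 # restrict to the urban or open zone under test
    return diff.mean(), diff.std(), np.sqrt(np.mean(diff ** 2))

rng = np.random.default_rng(0)
dsm_ref = rng.normal(40.0, 5.0, size=(200, 200))             # reference heights (m)
dsm_test = dsm_ref + rng.normal(0.5, 3.0, size=(200, 200))   # test DSM with bias and noise
mean, std, rmse = dsm_difference_stats(dsm_test, dsm_ref)
print(f"mean = {mean:.2f} m, std = {std:.2f} m, RMSE = {rmse:.2f} m")
```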
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
\multicolumn{5}{|c|}{**STEREO ORIENTATION - RIGOROUS MODEL**} \\ \hline
**GCP** & **GCP + CP** & \multicolumn{3}{c|}{**RMSE**} \\ \cline{3-5}
 & & **E (m)** & **N (m)** & **H (m)** \\ \hline
9 & 9 + 34 & 1.81 & 1.76 & 2.91 \\ \hline
12 & 12 + 31 & 2.01 & 1.70 & 2.27 \\ \hline
15 & 15 + 28 & 2.01 & 1.69 & 2.14 \\ \hline
18 & 18 + 25 & 2.03 & 1.64 & 1.93 \\ \hline
**21** & **21 + 22** & **1.93** & **1.40** & **1.98** \\ \hline
**25** & **25 + 18** & **2.02** & **1.46** & **1.77** \\ \hline
29 & 29 + 14 & 2.04 & 1.46 & 1.97 \\ \hline
34 & 34 + 9 & 2.01 & 1.41 & 1.57 \\ \hline
\end{tabular}
\end{table}
Table 5: RMSE of the CPs residuals with 3D orientation

Figure 6: Graph of the RMSE of the CPs residuals with 3D orientation

Table 7: Results of DSM accuracy over urban and open zones

Figure 8: Example of a profile of the differences

Figure 9: Negative errors in correspondence of the tree-lines
## 5 Discussions and Conclusions
The Institutes of Area di Geodesia e Geomatica, Eurimage S.p.A. and CyberCity AG are participating as Investigators in the frame of the ISPRS-ISRO Cartosat-1 Scientific Assessment Programme (C-SAP), in order to investigate the potentiality of Cartosat-1 stereo scenes.
During this first research period, these Institutes were involved in the processing of the dataset associated to TS-6 (Rome, Italy); in particular, DEM/DSMs generation, 3D features extraction for city modeling and orthophotos generation were investigated (Crespi et al. 2006/1, 2006/2).
The results, representing an update of those discussed in (Crespi et al. 2006/1), showed that a good orientation with horizontal and vertical accuracy at sub-pixel level may be obtained with a \"generic rigorous model\" too. It has to be underlined that these results may suffer for the singularity of the considered stereo pair, which is quite narrow due to the fact that only 3'000 of 12'000 sensor detectors were active.
Results about DSM extraction indicate that an accuracy (RMSE) of 1-2 pixel may be achieved with commercial image matchers, even if significant biases are evidenced over urban zones and with isolate tree-lines over bare soil.
Future research prospects will be focused on:
* test on larger areas, when new complete scenes will be available in the frame of the ISPRS-ISRO Cartosat-1 Scientific Assessment Programme (C-SAP)
* additional investigations on the DSM generation and 3D feature extraction, with manual measurement of breaklines and masspoints, image matching enhancement by suitable pre-filtering and use of more refined image matchers (Fraser et al., 2002; Gruen and Zhang, 2002; Kocaman et al. 2006; Poli, 2006; Ulm, 2003)
* studies on image quality, which can remarkably affect the achievable geometric accuracy both in orientation, DSM extraction and final orthophoto production (Zhang, 2005)
Finally, we are designing an original rigorous model for stereo orientation of Cartosat-1 imagery and to insert it in the software SISAR (Crespi et al., 2006/3) implemented at the Area di Geodesia e Geomatica -Sapienza Universita di Roma.
## References
* Crespi et al. (2006) Crespi, M., Barbato, F., De Vendictis, L., Poli D., Volpe, F., Wang, X., 2006/1. Orientation, orthorectification, DSM extraction and 3D city modeling by Cartosat-1 stereo imagery: first results of a test over Rome. Proceedings of ISPRS International Symposium on Geospatial Databases for Sustainable Development, Goa, India, 27-30 September
* Crespi et al. (2006) Crespi, M., De Vendictis, L., Onori, R., Volpe, F., 2006/2. DSM extraction from Quickbird Basic Stereo and Standard Orthoready imagery: quality assessment and comparison. Proceedings of 26th EARSeL Symposium New Developments and Challenges in Remote Sensing,Warsaw, 29 May- 2 June
* 7 July, Paris
* C-Sap (2006) C-Sap, 2006. [http://commission4.luphost.net/c-sap.htm](http://commission4.luphost.net/c-sap.htm). Last visited August 2006
* Dowman and Michalis (2003) Dowman, I. J., Michalis, P., 2003. Generic rigorous model for along track stereo satellite sensors. Proceedings of ISPRS Workshop High Resolution Mapping from Space 2003, Hannover, 4-6 October. Proceedings on CDROM
* Fraser et al. (2002) Fraser, C.S., Baltsavias, E., Gruen, A., 2002. Processing of Ikonos imagery for submetre 3D positioning and building extraction. _Phot. Eng. and Remote Sensing_, 56(3): pp. 177-194
* Gruen and Zhang (2002) Gruen, A., Zhang L., 2002. Automatic DTM Generation from Three-Line-Scanner (TLS) images. IAPRS, Vol. 34, Part 2A, Graz, Austria, pp. 131-137
* Kocaman et al. (2006) Kocaman, S., Zhang, L., Gruen, A., Poli, D., 2006. 3D city modeling from high-resolution satellite images. Proceedings of ISPRS Workshop Topographic Mapping from Space (with Special Emphasis on Small Satellites), Ankara, 14-16 February
* Krishnaswamy et al. (2005) Krishnaswamy, M., Kalyanaraman, S., 2005. Indian Remote Sensing Satellite Cartosat-1: Technical features and data products. [http://www.gisdevelopment.net/technology/rs/techrs023.htm](http://www.gisdevelopment.net/technology/rs/techrs023.htm) (last visited August 2006)
* PCI (2006) PCI, 2006. Manual of Orthoengine Software, v. 10
* Poli (2006) Poli, 2006. Reality-based 3D City Models from Aerial and Satellite Data. GEOInformatics, March 2006, Volume 9, pp. 8-11.
* Toutin (2001) Toutin, T., 2001. Elevation modeling from satellite visible and infrared (VIR) data: a review. _Int. J. of Remote Sensing_, 22: pp. 1097-1125
* Toutin and Cheng (2001) Toutin, T., Cheng, P., 2001. DEM with Stereo IKONOS: A Reality if , _Earth Obs. Mag._, 10(7): pp. 13-17
* Toutin (2004) Toutin, T., 2004/1. DTM Generation from IKONOS In-track Stereo Images using 3D physical model. _Phot. Eng. and Remote Sensing_ (70): pp. 695-702
* Toutin (2004) Toutin, T., 2004/2. DSM Generation from QuickBird In-track Stereo Images with 3D Physical Modelling, _Int. J. of Remote Sensing_, 25: 5181-5193
* Ulm (2003) Ulm, K., 2003. Reality-based 3D city models with CyberCity-Modeler (CCTMTM) and laserscanner data. Optical 3D Measurement Techniques, 22-25 September 2003, ETH Zurich
* Zhang (2005) Zhang L., 2005. Automatic Digital Surface Model (DSM) Generation from Linear Array Images. PhD Dissertation, Institute of Geodesy and Photogrammetry. ETH Zurich, ISBN 3-906467-55-4
## Acknowledgements
The authors would like to thank very much Dr. Nandakumar for the coordination of the ISPRS-ISRO Cartosat-1 Scientific Assessment Programme (C-SAP) and for accepting our Institutes as Investigators.

## Abstract

CARTOSAT-1 satellite was launched in May 2005; the optical sensor is configured with two pushbroom cameras, namely Aft and Fore, tilted in the along-track direction by -5\\({}^{\\circ}\\) and +26\\({}^{\\circ}\\), providing stereoscopic imagery in the same pass with a ground resolution of 2.5 m.
Within the ISRO-ISPRS Cartosat Scientific Assessment Program (C-SAP), investigations were carried out on a stereopair belonging to the test field called TS-6, acquired on 8th June 2005 over the city of Rome, Italy (30 x 7.5 km\\({}^{2}\\)), in order to test the sensor potentialities for DSM generation.
The stereo images were processed with PCI-OrthoEngine software, using a generic rigorous sensor model; the image orientation and the DSM geocoding were based on GCPs collected by GPS surveys with a mean 3D accuracy of 5-10 cm.
For the evaluation of the DSM accuracy, it was compared with the DSM generated from a block of 6 aerial photos (scale 1:9'000) over an area of about 3.5 x 4 km\\({}^{2}\\).
An orientation with horizontal and vertical accuracy at sub-pixel level may be obtained with a \"generic rigorous model\"; as regards DSM extraction, an accuracy ranging from 1 to 2 pixel, depending on the feature of the land cover (urban zone, bare soil) is achievable with commercial image matchers.
# SST-ReversibleNet: Reversible-prior-based Spectral-Spatial Transformer for Efficient Hyperspectral Image Reconstruction
Zeyu Cai, Jian Yu, Ziyu Zhang, Chengqian Jin, Feipeng Da
## I Introduction
With rich and unique features [1], hyperspectral images (HSIs) have been widely used for analysis and scene applications such as precision agriculture [2], national security [3], environmental protection [4], and astronomical observations [5]. In computer vision, HSIs can be extensively used for object tracking [6, 7], material classification [8], feature extraction [9], and medical image analysis [10].
To obtain spectral images, traditional methods typically scan scenes along the 1D or 2D spatial dimension or along spectral channels, sacrificing time through multiple exposures to reconstruct the spectral data of the scene. Although traditional methods perform well in terms of spectral detection range and accuracy [11], they are unsuitable for dynamic detection and therefore consumer applications. Recently, researchers have exploited advances in compressed sensing (CS) theory to capture HSIs using Snapshot Compressed Imaging (SCI) systems [12], which compress information from snapshots along the spectral dimension into a single 2D measurement. Among current SCI systems, coded aperture snapshot spectral imaging (CASSI) [13] stands out as a promising research direction.
Although the spectral data cube in the CASSI system is modulated by a coded mask and then dispersed, the complete data cube can be reconstructed from the redundant image information. Spectral reconstruction methods can be classified as traditional, Plug-and-Play (PnP), End-to-End (E2E) and Deep Unfolding Network (DU).
Traditional methods perform reconstructions based on overcomplete dictionaries or sparse spectral features that rely on hand-crafted priors and assumptions [14, 15]. The main drawback of these traditional methods is the need to manually adjust parameters, resulting in poor robustness and slow reconstruction. In recent years, deep learning methods have demonstrated powerful capabilities in image generation and reconstruction [16], such as image super-resolution, image denoising, and rain and fog removal [17, 18], and have also been applied to spectral image reconstruction. PnP introduces a denoising module based on the traditional method, but with limited improvement in reconstruction speed and accuracy. The current SOTA methods all belong to E2E and DU. The E2E directly establishes the mapping between the measurement and truth data, and the DU uses a depth module to simulate the iterations in a convex optimization algorithm. Although both E2E and DU have achieved good performance,
there are still limitations to the current methods.

Fig. 1: PSNR-Params-GFLOPs comparisons of our SST-ReversibleNet and SOTA HSI reconstruction methods. The vertical axis is PSNR (dB), the horizontal axis is GFLOPs (computational cost), and the circle radius is Params (memory cost).
1) The E2E method is similar to an open-loop control system where the measurements no longer guide the reconstruction process during the reconstruction, in addition to lacking interpretability and a DU-like iterative framework. As a result, E2E is inefficient in improving network performance by increasing network depth and limits the scope for improving accuracy.
2) The DU networks are based on convex optimisation algorithms, but require transposition and invertible operations on the operation matrix during iteration. These conditions limit the structure of the network module and impose requirements on the design of the coding mask, as described in the specific detailed work in Sec. II.
3) The denoising modules in the Transformer-based E2E methods and DU networks learn either the global self-similarity of the spectral dimension or the local correlation of the spatial dimension, ignoring the global correlation of the spectral cube in both the spectral and spatial dimensions.
The motivation of this paper is to find an interpretable E2E method with a structure similar to DU, but not subject to the constraints of convex optimization methods, thus bridging the gap between E2E and DU. Furthermore, we are looking for a mapping network that learns both the self-similarity of the transformer-based spectral dimension and the spatial global dependence of the transformer-based spatial dimension, taking into account memory consumption and computational complexity.
To address the above issues, and inspired by the reversible nature of the optical path, we propose a framework based on the reversible optical path prior (Reversible-prior). Based on the learning of the residuals between the estimated and actual measurements of the reversible optical path, the new framework forms a closed loop that can effectively improve the reconstruction capability of the model, and the structure is shown in Fig. 2. Based on the new framework, a mapping network of Spectral-Spatial Transformer is designed to learn spectral and spatial self-similarity and global correlation using efficient spectral self-attention and spatial self-attention, respectively. We plug Spectral-Spatial Transformer into the reversible prior-based framework to establish a novel HSI reconstruction method, a Spectral-Spatial Transformer network based on reversible prior (SST-ReversibleNet). Finally, based on the unique design of the new framework, we propose a new reversible loss. Through the above proposed and improved methods, we establish a series of effective small-to-large SST-ReversibleNet families (SSTs), which outperform the state-of-the-art (SOTA) methods by a very large margin, as shown in Fig. 1.
Our contributions can be summarized as follows:
1) We propose a new framework that bridges the gap between E2E and DU, allowing E2E methods to have the interpretability and iterative capabilities of DU. In addition, we design a new reversible loss based on the new framework.
2) We present a Spectral-Spatial transformer module that can balance the parameters and reconstruction accuracy without deepening the depth of the module.
3) Our SST-ReversibleNet dramatically outperforms SOTA methods by a large margin while requiring cheaper computational and memory costs. Besides, SST-ReversibleNet yields more visually satisfying results in real-world HSI reconstruction.
## II Related Works
### _Methods of HSI reconstruction_
**End-to-end method** The E2E method has a powerful mapping capability by directly finding strong mapping relationships between measurements and spectral cubes, so the network structure is concise and diverse. E2E can be divided into Convolutional Neural Networks (CNN-based) and transformer-based networks. CNN-based networks [19, 20, 21, 22], such as TSA-net [21], learn local spatial correlations to reconstruct data, which has the advantage of fast inference, but tends to lose global features. Similarly, transformer-based networks use spectral self-attention to learn global similarity in spectral dimensions, or combine CNNs in space to compensate for local spatial information. However, the E2E networks all ignore how CASSI systems work and lack theoretical interpretability and flexibility.
**Deep unfolding network** The DU uses multi-stage network iterations to map measurements down a gradient into the HSI cube. DUs are derived from convex optimization algorithms, Half Quadratic Splitting (HQS), Alternating Direction Method of Multipliers (ADMM) and Proximal Gradident Descent (PGD) are common optimization algorithms with strong interpretability. These methods typically decompose the objective function into a data fidelity term and a regularized decoupling term, producing iterative schemes consisting of alternating solutions to a data subproblem and a prior sub-problem. However, the optimization-based approach has some conditional constraints in the solution process, and as in the HQS expansion framework, the 2-stage iterative process can be described as [23]:
\\[x_{k+1}=\\left(\\Phi^{\\mathrm{T}}\\Phi+\\mu\\mathrm{I}\\right)^{-1}\\left(\\Phi^{ \\mathrm{T}}y+\\mu z_{k}\\right) \\tag{1}\\]
\\[z_{k+1}=\\underset{z}{argmin}\\frac{1}{2\\left(\\sqrt{\\tau_{k+1}/\\mu_{k+1}}\\right) }\\left\\|z-x_{k+1}\\right\\|^{2}+R\\left(z\\right) \\tag{2}\\]
where \\(\\mathrm{I}\\) is an identity matrix. \\(\\Phi\\) is a fat matrix. \\(X_{k+1}\\) and \\(z_{k+1}\\) are two subproblems of (k+1)-stage. \\(\\mu\\), \\(\\mu_{k+1}\\) are hyperparameters. \\(R\\left(.\\right)\\) is a mapping function. It is clear from the formula that the optimization formula is valid on the premise that \\(\\left(\\Phi^{\\mathrm{T}}\\Phi+\\mu\\mathrm{I}\\right)^{-1}\\) is invertible. In addition, operations such as transpose multiplication \\(\\Phi^{\\mathrm{T}}\\Phi\\) and \\(\\Phi^{\\mathrm{T}}y\\) are involved in the operation.
### _3D cube feature extraction module_
Both E2E and DU require feature extraction in the measurement space. Much of the previous work has revolved around extracting local spatial information using CNN [24, 25], but these CNN-based models have limitations in capturinglong-range spatial dependencies and modelling non-local self-similarity. Recently, the emerging Transformer has provided a solution to address the shortcomings of CNN. MST [26] proposed the first spectral transformer-based model for HSI reconstruction. MST treats spectral maps as tokens and computes self-attention along the spectral dimension. In addition, TSA-Net [21] uses transformer-based spectral modules and CNN-based spatial modules to learn a non-linear mapping from the 2D measurement to the 3D hyperspectral cube. All these approaches ignore the global correlation in the spatial dimension.
## III Model of CASSI System
In CASSI system, the 3D hyperspectral cube is modulated via a coded mask and then dispersed by a dispersive prism (Fig. 3). Mathematically, considering a 3D HSIs cube, denoted by \\(X\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\), where \\(n_{x}\\), \\(n_{y}\\), \\(c\\) represent the HSIs's height, width, and number of wavelengths. \\(M\\in\\mathbb{R}^{n_{x}\\times n_{y}}\\) denoted a pre-defined mask. For each wavelength \\(m=1,2\\cdots c\\), the spectral image is modulated, and we can express it as:
\\[X^{\\prime}\\left(:,:,m\\right)=X\\left(:,:,m\\right)\\odot M \\tag{3}\\]
Where \\(X^{\\prime}\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\) denotes the modulated spectral data-cube, and \\(\\odot\\) denotes the element-wise multiplication. After passing the dispersive prism, \\(X^{\\prime}\\) becomes tilted and is considered to be sheared along the y-axis. We use \\(X^{\\prime\\prime}\\in\\mathbb{R}^{n_{x}\\times\\left(n_{y}+d\\left(c-1\\right)\\times c\\right)}\\) to denote the dispersed HSIs cube, where d represents the shifting step. We assume \\(\\lambda_{c}\\) to be the reference wavelength, i.e., \\(X^{\\prime\\prime}\\left(:,:,m\\right)\\) is not sheared along the y-axis. Then we have
\\[X^{\\prime\\prime}\\left(x,y,m\\right)=X^{\\prime}\\left(x,y+d_{m},m\\right) \\tag{4}\\]
where (x, y) represents coordinates of a point on the 3D HSI, \\(d_{m}\\) represents the spatial shifting of the m-th channel on \\({}^{\\prime\\prime}\\). Finally, the captured 2D compressed measurement \\(Y\\in\\mathbb{R}^{n_{x}\\times\\left(n_{y}+d\\left(c-1\\right)\\right)}\\) can be obtained by:
\\[Y=\\sum_{m=1}^{c}X^{\\prime\\prime}\\left(:,:,m\\right)+G \\tag{5}\\]
where \\(G\\in\\mathbb{R}^{n_{x}\\times\\left(n_{y}+d\\left(c-1\\right)\\right)}\\) is the random noise generated by the photon sensing detector during the measurement.
## IV Proposed Method
### _Overall architecture based on reversible prior_
Previous E2E methods look for violent mapping relations to obtain the solution to Eq. (5) in a single pass, which means that they only have the upper half of the process (Reconstruction Net) of Fig. 2. The single irreversible process also means that end-to-end methods cannot fine-tune the inference results, leading to a partial reduction in the performance of the model. According to the principle of optical path reversibility, it is easy to project the 3D cube of the reconstruction results back into the 2D measurement space relative to its inverse process. The construction of the residuals of the measured and reprojected data and the fine-tuning of the gap between the last learned data and the true value based on the residuals is the main difference between our network and the E2E and the DU. The overall architecture of SST-ReversibleNet is shown in Fig. 4. and is divided into a reversible module and a reconstruction subnet, which are represented as follows.
\\[z_{n}=\\mathcal{G}\\left(x_{n}\\right) \\tag{6}\\]
\\[x_{n+1}=\\mathcal{F}_{n+1}\\left(y-z_{n}\\right)+x_{n} \\tag{7}\\]
where \\(y\\) is the actual measurement from the CCD camera, \\(\\mathcal{G}\\) is the mapping of the spectral 3D cube to the 2D measurement, \\(\\mathcal{F}\\) is the mapping from the input to the spectral 3D cube, \\(z_{n}\\) is the output of the \\(n\\)-stage inverse process, and \\(x_{n+1}\\) is the reconstruction result of the \\(\\left(n+1\\right)\\)-stage.
According to the number of stages \\(n\\) of the output \\(x_{n}\\), we establish four SST-ReversibleNet with small, medium, large and extremely large parameter sizes and computational costs: SST-S (n=1), SST-M(n=2), SST-L (n=4), SST-Lplus (n=9). In SST-S, we use reversible prior between SpatialAB and SpecialAB, while in other networks, we only use reversible prior between two SST modules.
### _Reversible module_
The implementation of the inverse process is based on the output of the spectral reconstruction network at the n-th stage to obtain the predicted value of the spectral cube \\(x_{n}\\). According to Eq. (3) and Eq. (4), the predicted value \\(\\left({x_{n}}^{\\prime\\prime}\\right)^{-1}=x_{n}\\) can be projected back into the measurement space after mask encoding, dispersion and blending. As shown in the blue line in Fig. 4, \\(\\left({x_{n}}^{\\prime\\prime}\\right)^{-1}\\), \\(\\left({x_{n}}^{\\prime}\\right)^{-1}\\) correspond to the inverse predicted values of \\({x_{n}}^{\\prime\\prime}\\) and \\({x_{n}}^{\\prime}\\) in the forward process, respectively, and the inverse process is described as:
\\[\\left({x_{n}}^{\\prime}\\right)^{-1}\\left(x,y,:\\right)=\\left({x_{n}}^{\\prime \\prime}\\right)^{-1}\\left(x,y,:\\right)\\odot M \\tag{8}\\]
\[\left({x_{n}}^{\prime}\right)^{-1}\left(x,y,:\right)=\left({x_{n}}^{\prime\prime}\right)^{-1}\left(x,y,:\right)\odot M \tag{8}\]

\[z_{n}\left(x,y\right)=\sum_{m=1}^{c}\left({x_{n}}^{\prime}\right)^{-1}\left(x,y+d_{m},m\right) \tag{9}\]

Fig. 2: Schematic diagram of the reversible optical path. According to the principle of the reversible optical path, our network also includes two stages: forward and reverse.

Fig. 3: Imaging process of CASSI.
\\[y_{n+1}=\\begin{cases}y&\\text{if }n=0\\\\ y-z_{n}&\\text{otherwise}\\end{cases} \\tag{10}\\]
### _Reconstruction subnet_
The function of the Reconstruction subnet is to establish a mapping between the different inputs and outputs, mainly consisting of a spectral-spatial transformer. In addition, considering that the measurement space is a compressed 2D space and that there is aliasing of data from different channels, we introduced a module for unaliasing and feature mapping before the mapping network input and after the output.
Given the measurement after initialization \\(x_{n+1}\\in\\mathbb{R}^{n_{x}\\times\\left({n_{y}+d_{m}}\\right)}\\), firstly, we divide the aliased data into input signals with different wavebands according to the backlight propagation. Obtained the initialized signal \\(X_{n+1}^{\\prime}\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\) as:
\\[x_{n+1}^{\\prime}\\left(x,y,m\\right)=y_{n+1}\\left(x,d_{m}:d_{m}+h\\right),m=1,2, \\ldots,c \\tag{11}\\]
where \\(x\\), \\(y\\) are the spatial coordinates of a point on the 3D cube, \\(d_{m}\\) is the offset of the spectral image on the m-channel, and \\(h\\) is the height of the 3D cube.
**Spectral Unmixing.** Subsequently, we use the prior of the mask to guide the input to unmix by passing the shifted y concatenated with mask M, then through convolution with \\(conv1*1\\) kernel to back to input signal \\(X_{n}^{\\prime\\prime}\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times 2c}\\longrightarrow X_{n} \\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\). The spectral unmixing is realized through the convolution layer with varying sizes to solve the aliasing problem under different receptive fields(\\(conv3*3\\), \\(conv5*5\\), \\(conv7*7\\)).
**Spectral-Spatial Transformer.** The proposed reconstruction subnet aims to reconstruct high-quality HSIs from the spectral images after unmixing. We use a W-shaped spectral-spatial transformer module (SST, Fig. 5), which is composed of encoding and decoding of spectral features and the encoding and decoding between spatial channels.
Both the SST-Spectral and SST-Spatial use an encoder-decoder unet-like architecture, which are connected by a series of nested dense SpatialAB and SpectralAB blocks, respectively. This architecture is designed to fuse the gaps between the feature maps of the encoder and decoder for the same feature in different dimensions.
**Spectral-Spatial-Wise Multi-Head Self-Attention.** The Cube of the spectrum has a spatial correlation in the spatial dimension, which is related to the target's properties and the surface's reflectivity. While, in the spectral dimension, the continuity of the spectrum determines that the adjacent spectra are similar, and the farther the spectral distance is, the more ranges are complementary. And since \\(W=H>>M\\)
Fig. 4: Diagram of the framework structure based on the reversible prior, the upper half is the forward process, reconstruction subnet including unmixing block, spectral-spatial transformer and mapping block. The blue line in the lower half indicates the inverse process, the reversible module. The reconstruction subnet corresponds to the inverse of the CASSI optical path and the reversible module corresponds to the forward direction of the CASSI optical path, both directions allowing the SST to form a closed-loop iterative capability. A reversible loss is proposed on the inverse of the network.
capturing spectral-wise interactions will be less cost-effective than modeling spatial-wise correlations. However, when the model reaches a certain scale, a single method cannot continue to mine the information of spectral Cube.
Our SpatialAB and SpectralAB are inspired by Swin-Transformer and MST respectively. SpectralAB is consistent with MSAB in MST [26]. SpectralAB's goal is to treats each spectral feature map as a token and calculates self-attention along the spectral dimension. The input \\(x_{n}\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\) is reshaped into tokens \\(x\\in\\mathbb{R}^{n_{x}n_{y}\\times c}\\). Then \\(x\\) is linearly projected into \\(queryQ\\), \\(keyK\\), \\(valueV\\in\\mathbb{R}^{n_{x}n_{y}\\times c}\\), and \\(Q=xW^{Q},K=xW^{K},V=xW^{V}\\), where \\(W^{Q},W^{K},W^{V}\\in\\mathbb{R}^{c\\times c}\\). Subsequently, \\(Q\\), \\(K\\), and \\(V\\) into N heads along the spectral channel dimension: \\(Q=[Q_{1},\\ldots,Q_{N}]\\), \\(k=[k_{1},\\ldots,k_{N}]\\), \\(v=[v_{1},\\ldots,v_{N}]\\). Therefore, the formula for each \\(head_{j}^{Spectral}\\) and SpectralAB is:
\\[head_{j}^{Spectral}=Softmax\\left(\\sigma_{j}Q_{j}K_{j}^{T}\\right)V_{j} \\tag{12}\\]
\\[SpectralAB\\left(X\\right)=\\underset{j=1}{concat}\\left(head_{j}\\right)W+f\\left(V\\right) \\tag{13}\\]
where \\(K_{j}^{T}\\) denotes the transposed matrix of \\(K_{j}\\). \\(W\\in\\mathbb{R}^{c\\times c}\\)are learnable parameters, \\(f\\left(\\cdot\\right)\\) is the function to generate position embedding.
SpatialAB makes improvements based on Swin-Transformer [24]. We remove Avgpooling blocks, and add the Feature Forward Network (FFN) module and a LayerNorm layer (Fig. 5). SpatialAB's goal is to treats each local spatial feature map as a token and calculates self-attention along the spatial dimension. The input \\(x_{n}\\in\\mathbb{R}^{n_{x}\\times n_{y}\\times c}\\) is reshaped into tokens \\(x\\in\\mathbb{R}^{\\frac{n_{x}}{\\sigma}\\cdot\\frac{n_{y}}{\\sigma}\\times s\\cdot s \\cdot c}\\), s represents the window-size (set to 8 by default) of each window. Than \\(x\\) is linearly projected into query \\(Q\\), key \\(K\\), and value \\(V\\), and \\(W^{Q},W^{K},W^{V}\\in\\mathbb{R}^{s\\times s}\\). Then, the next module adopts a windowing configuration that is shifted from that of the preceding layer, by displacing the windows by \\(\\left(\\left\\lfloor\\frac{s}{2}\\right\\rfloor,\\left\\lfloor\\frac{s}{2}\\right\\rfloor\\right)\\) pixels from the regularly partitioned windows.The formula for spatial Attention is as follows:
\\[Attention=SoftMax(QK^{T}/\\sqrt{d}+B)V \\tag{14}\\]
Where \\(Q,K,V\\in\\mathbb{R}^{s^{2}\\times c}\\) are the query, key and value matrices; \\(B\\) is the relative position embedding, \\(B\\in\\mathbb{R}^{s^{2}\\times s^{2}}\\); \\(d\\) is the \\(query/key\\) dimension, and \\(s^{2}\\) is the number of patches in a window.
### _Loss Function._
Our network has reversible module and reconstruction sub-net, so our loss includes outputting and reversible loss. The outputting loss is calculated as the L2 loss of \\(x_{out}-x_{truth}\\). The reversible loss calculation \\(x_{out}\\) is mapped back to the CCD under the nature of the reversible optical path to obtain the L2 loss of the \\(\\mathcal{G}\\left(x_{out}\\right)\\) value to the actual measurement \\(y\\). We defined the loss function as follows:
\\[\\mathcal{L}=\\left\\|x_{out}-x_{truth}\\right\\|_{2}^{2}+\\xi\\cdot\\left\\|\\mathcal{ G}\\left(x_{out}\\right)-y\\right\\|_{2}^{2} \\tag{15}\\]
where \\(x_{out}\\) is the final predicted values of the network, \\(\\mathcal{G}\\) represents the process of mask coding and dispersion of predicted values, \\(y\\) is the measurement of CCD. \\(\\xi\\) is the penalty coefficient, which is set to 0.2 by default.
## V Experiments
### _Experiment Setup_
In our implementation, the number of spectral channels c is 28, wavelengths from 450 nm to 650 nm. We perform experiments on both simulation and real HSIs datasets.
**Simulation HSIs Data.** We use two simulation hyperspectral image datasets, CAVE [27] and KAIST [28]. CAVE dataset is composed of 32 hyperspectral images at a spatial size of 512
\(\times\) 512. KAIST dataset consists of 30 hyperspectral images at a spatial size of 2704 \(\times\) 3376. Following the schedule of TSA-Net, we adopt CAVE as the training set. 10 scenes from KAIST are selected for testing.

Fig. 5: Diagram of Spectral-Spatial Transformer. (a) SST adopts a W-shaped structure. (b) SpatialAB consists of a Window Multi-head Self-Attention (MSA), a Shifted-Window MSA, an FFN, and three layer normalizations. (c) SpectralAB consists of a Spectral-MSA, an FFN, and two layer normalizations. (d) Components of FFN.
**Real HSIs Data.** We use the real HSIs dataset collected by the CASSI system developed in TSA-Net [21].
**Evaluation Metrics.** We adopt peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [29] as the metrics to evaluate the HSI reconstruction performance.
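For reference, PSNR on reconstructions normalised to [0, 1] can be computed as below; SSIM relies on the windowed statistics defined in [29] and is usually taken from an existing implementation rather than re-derived.

```python
import numpy as np

def psnr(x_rec, x_true, peak=1.0):
    """Peak signal-to-noise ratio in dB for signals scaled to [0, peak]."""
    mse = np.mean((np.asarray(x_rec, float) - np.asarray(x_true, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.random((28, 256, 256))                         # toy ground-truth cube
rec = np.clip(gt + rng.normal(0, 0.01, gt.shape), 0, 1) # reconstruction with 1% noise
print(round(psnr(rec, gt), 2))                          # roughly 40 dB
```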
**Implementation Details.** We implement SST-ReversibleNet in Pytorch. Our SST-S, SST-M, SST-L are trained on 1 \\(\\times\\) RTX 3090 GPU, and SST-LPlus is trained on 2 \\(\\times\\) RTX 3090 GPUs. We adopt Adam optimizer (\\(\\beta_{1}=0.9\\) and \\(\\beta_{2}=0.999\\)) for 300 epochs. The learning rate is set to \\(4\\times 10^{-4}\\) in the beginning and is halved every 50 epochs during the training procedure. The metrics of PSNR and SSIM are employed to evaluate the reconstruction quality.
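Those optimiser settings translate directly into PyTorch; in the sketch below a single convolution stands in for the network and the body of the training epoch is left out, so it only demonstrates the schedule (halving every 50 epochs via a step scheduler).

```python
import torch

model = torch.nn.Conv2d(28, 28, 3, padding=1)         # stand-in for SST-ReversibleNet
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(300):
    # ... one training epoch over the CAVE crops would run here ...
    optimizer.step()          # placeholder update so the snippet runs end to end
    scheduler.step()

print(optimizer.param_groups[0]["lr"])                 # 4e-4 halved every 50 epochs
```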
### _Simulation results_
We compare the Params, GFLOPs, PSNR, and SSIM of our SST-ReversibleNet with several SOTA HSI reconstruction algorithms, including \\(\\lambda\\)-net [22], ADMM-Net [30], TSA-Net [21], DIP-HSI [33], DGSMP [32], BIRNAT [34], MST series [26], [36], CST series [37], HDNet [35], and DAUHST series [23]. The Params, GFLOPs are tested with the same settings (test size = 256 \\(\\times\\) 256), PSNR and SSIM results of different methods on 10 scenes in the simulation datasets are listed in Table I.
**(i)** Our best model SST-LPlus yields very impressive results, i.e., 39.16 dB in PSNR and 97.4% in SSIM, which is more than 3 dB higher in PSNR than the best published SOTA models and more than 1.5% higher in SSIM. SST-LPlus significantly outperforms DAUHST-9stg, BIRNAT, MST++, MST-L, HDNet, TSA-net and \(\lambda\)-net in PSNR by 0.80, 1.58, 3.82, 4.07, 4.19, 6.86 and 7.39 dB, with 0.7%, 1.4%, 2.1%, 2.4%, 3.1%, 5.8% and 8.4% improvements in SSIM, suggesting the effectiveness of our method.
Fig. 6 plots the visual comparisons of our SST-LPlus and other SOTA methods on Scene 5 with 4 (out of 28) spectral channels. The top-right part shows the zoomed-in patches of the white boxes in the entire HSIs, the reconstructed HSIs produced by SSTs have more spatial details and clearer texture in different spectral channels than other SOTA methods. In addition, as illustrated in Fig. 7, in A, B, and C three positions, although all the restoration algorithms can better describe the qualitative trend of spectral changes, the spectral curves of the SSTs have higher spectral accuracy and better perceptual quality.
Fig. 6: Visual comparisons of our SST-ReversibleNet and other SOTA methods of Scene 5 with 4 out of 28 spectral channels on the KAIST dataset.
Fig. 7: Spectral curves of the SOTA methods in Fig. 6 on the randomly selected regions A, B and C.
**(ii)** It can be observed that our SST-ReversibleNet significantly surpasses SOTA methods by a large margin while requiring much cheaper memory and computational costs. Compared with the other Transformer-based methods CST-L and MST-L, our SST-S outperforms CST-L [37] by 0.59 dB but only costs 35.3% (1.06/3.00) Params and 71.3% (19.83/27.81) GFLOPs, and SST-S outperforms MST-L [26] by 1.62 dB but only costs 29.0% (1.06/3.66) Params and 70.4% (19.83/28.15) GFLOPs. Likewise, our SST-M outperforms DAUHST-5stg [23] by 0.13 dB but only costs 61.3% (2.11/3.44) Params and 78.5% (35.03/44.61) GFLOPs, and SST-M outperforms CST-L-plus by 1.76 dB but only costs 70.3% (2.11/3.00) Params and 87.4% (35.03/40.1) GFLOPs. More specifically, our SST-M acquires the equivalent SSIM (96.7%) of DAUHST-9stg [23] (the best model at present), but only costs 34.3% (2.11/6.15) Params and 44.1% (35.03/79.5) GFLOPs. In addition, our SST-L and SST-LPlus outperform other competitors by very large margins. We provide PSNR-Params-GFLOPs comparisons of different reconstruction algorithms in Fig. 1.
### _Real data results_
To verify the effect of the proposed method on real data, five compressive measurements captured by the real spectral SCI system are utilized for testing. For fair comparisons, all of the methods are trained on the CAVE datasets using the fixed real mask with 11-bit shot noise injected. Fig. 8 plots the visual comparisons of the proposed SST-M and the existing SOTA methods DGSMP [32], TSA-Net [21], GAP-Net [31], BIRNAT [34], MST++ [36], HDNet [35], CST [37], and DAUHST [23]. Our SST surpasses previous algorithms in terms of high-frequency structural detail reconstruction and real noise suppression. In Scene 2, the proposed method is able to restore more texture and detail, especially at the edges of the flowers.
### _Ablation study_
To analyze the contributions of the different components, we conduct ablation studies on the impact of the reversible prior, the spectral self-attentive blocks (spectralAB) and the spatial self-attentive blocks (spatialAB) on the model. Table II to Table V show the results of the comparison between PSNR and SSIM under different settings. In Table V, we build two networks A and B with a similar number of parameters and GFLOPs as SST-S. In this case, A uses only the self-attention of the spectral channels and B only looks for correlations in the spatial dimension.
The results show that 1) the impact of the reversible prior on the model is crucial: comparing the two networks of Table II with a similar number of parameters, one obtained by deepening the network and one using the reversible prior, the use of the reversible prior effectively improves the reconstruction ability of the model, with PSNR and SSIM improving by 3.19 dB and 3.7%, respectively. 2) The reversible loss acts as a constraint on the model: without changing the number of parameters or operations, PSNR and SSIM improve by 0.05 dB and 0.2%, respectively. 3) The shape of the network structure also has a clear impact on the reconstruction quality: the W-shaped structure improves PSNR and SSIM by 2.44 dB/1.3% and 1.15 dB/0.7%, respectively, compared to the Unet-like and Unet+-like shapes. 4) The spectral-spatial transformer has a clear advantage over the spectral-only and spatial-only transformers, especially in the comparison between model B and SST-S: model B has 7% more parameters and 10% more GFLOPs than SST-S, but its reconstruction results are 0.94 dB lower than those of SST-S.
Fig. 8: Real HSI reconstruction comparison of two Scenes. 6 out of 28 spectra are randomly selected.
### _Difference with DU_
To explore the differences between our iterative method and DU, we compare the iterative processes of SST-LPlus and DAUHST-9stg on the simulated dataset. A randomly selected scene is visualised both in individual spectral channels and in RGB. In addition, below the 636.3 nm visualization, we extract the feature changes at the different stages. Below the image, we list the PSNR and SSIM changes from 1stg to 9stg, as shown in Fig. 9.
From the visual analysis of the results, we believe that the reversible framework benefits from the learning of residuals, which effectively improves the learning of features: DAUHST learns a large number of global features at 1stg and fine-tunes from 2stg onwards. Although SST only starts fine-tuning at 4stg, it learns more global features from 1stg to 4stg in the early stages and achieves results beyond DAUHST from 7stg to 9stg. We therefore believe that the feature learning capabilities of SST and DU are not the same.
## VI Conclusion
Inspired by the reversible light path, this paper proposes a novel SST-ReversibleNet for CASSI. The new framework significantly improves the reconstruction metrics and can be used in other algorithms. We use a W-shaped spectral-spatial transformer module to improve spatial and spectral feature extraction. In addition, we design a reversible loss. With these novel techniques, we establish a set of highly efficient SST-ReversibleNet models. Quantitative experiments show that our method outperforms SOTA algorithms by a wide margin, even when using significantly fewer parameters and GFLOPs. However, as shown in Fig. 8, our framework is not as effective on real datasets as it is on simulated datasets. We believe this is due to the lack of noise estimation in the reversible module, but there is currently a lack of relevant datasets. Therefore, our future work will be to construct noisy datasets based on real scenarios and to optimise our framework.
## References
* [1] X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, "Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world," _IEEE Signal Processing Magazine_, vol. 33, no. 5, pp. 95-108, 2016.
* [2] T. Ishida, J. Kurihara, F. A. Viray, S. B. Namucao, E. C. Paringit, G. J. Perez, Y. Takahashi, and J. J. Marciano Jr., "A novel approach for vegetation classification using UAV-based hyperspectral imaging," _Computers and Electronics in Agriculture_, vol. 144, pp. 80-85, 2018.
* [5] S. De Angelis, E. Ammannito, T. Di Iorio, M. C. De Sanctis, P. O. Manzari, F. Liberati, F. Tarchi, M. Dami, M. Olivieri, C. Pompei _et al._, \"The spectral imaging fidelity: Setup characterization,\" _Review of Scientific Instruments_, vol. 86, no. 9, p. 093101, 2015.
* [6] Y. Li, Y. Shi, K. Wang, B. Xi, J. Li, and P. Gamba, \"Target detection with unconstrained linear mixture model and hierarchical denoising autoencoder in hyperspectral imagery,\" _IEEE Transactions on Image Processing_, vol. 31, pp. 1418-1432, 2022.
* [7] M. H. Kim, T. A. Harvey, D. S. Kirtle, H. Rushmeier, J. Dorsey, R. O. Prum, and D. J. Brady, \"3d imaging spectroscopy for measuring hyperspectral patterns on solid objects,\" _TOG_, vol. 31, no. 4, pp. 1-11, 2012.
* [8] F. Xiong, J. Zhou, and Y. Qian, \"Material based object tracking in hyperspectral videos,\" _IEEE Transactions on Image Processing_, vol. 29, pp. 3719-3733, 2020.
* [9] J. Li, Q. Hu, and M. Ai, \"Rift: Multi-modal image matching based on radiation-variation insensitive feature transform,\" _IEEE Transactions on Image Processing_, vol. 29, pp. 3296-3310, 2020.
* [10] T. Liu, H. Liu, Y.-F. Li, Z. Chen, Z. Zhang, and S. Liu, \"Flexible first spectral imaging enhancement for industrial robot infrared vision sensing,\" _IEEE Transactions on Industrial Informatics_, vol. 16, no. 1, pp. 544-554, 2019.
* [11] M. Wang, Q. Wang, and J. Chanussot, \"Tensor low-rank constraint and \\(I\\)0 total variation for hyperspectral image mixed noise removal,\" _IEEE Journal of Selected Topics in Signal Processing_, vol. 15, no. 3, pp. 718-733, 2021.
* [12] H. Du, X. Tong, X. Cao, and S. Lin, \"A prism-based system for multispectral video acquisition,\" in _ICCV_, 2009, pp. 175-182.
* [13] A. Wagadarikar, R. John, R. Willett, and D. Brady, \"Single disperser design for coded aperture snapshot spectral imaging,\" _Applied Optics_, vol. 47, no. 10, pp. B44-B51, 2008.
* [14] L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, \"Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging,\" _TPAMI_, vol. 39, no. 10, pp. 2104-2111, 2016.
* [15] S. Zhang, L. Wang, Y. Fu, X. Zhong, and H. Huang, \"Computational hyperspectral imaging based on dimension-discriminative low-rank tensor recovery,\" in _ICCV_, 2019, pp. 10 183-10 192.
* [16] A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lucic, and C. Schmid, \"Vivit: A video vision transformer,\" in _ICCV_, 2021, pp. 6836-6846.
* [17] Y. Song, Z. He, H. Qian, and X. Du, \"Vision transformers for single image dehazing,\" _IEEE Transactions on Image Processing_, vol. 32, pp. 1927-1941, 2023.
* [18] J. Liang, H. Zeng, and L. Zhang, \"Details or artifacts: A locally discriminative learning approach to realistic image super-resolution,\" in _CVPR_, 2022, pp. 5657-5666.
* [19] L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, \"Dual-camera design for coded aperture snapshot spectral imaging,\" _Applied Optics_, vol. 54, no. 4, pp. 848-858, 2015.
* [20] Z. Yang, Y. Wei, and Y. Yang, \"Associating objects with transformers for video object segmentation,\" _NeurIPS_, vol. 34, pp. 2491-2502, 2021.
* [21] Z. Meng, J. Ma, and X. Yuan, \"End-to-end low cost compressive spectral imaging with spatial-spectral self-attention,\" in _ECCV_, 2020, pp. 187-204.
* [22] X. Miao, X. Yuan, Y. Pu, and V. Athitsos, "\(\lambda\)-net: Reconstruct hyperspectral images from a snapshot measurement," in _ICCV_, 2019, pp. 4059-4069.
* [23] Y. Cai, J. Lin, H. Wang, X. Yuan, H. Ding, Y. Zhang, R. Timofte, and L. Van Gool, \"Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging,\" _NeurIPS_, 2022.
* [24] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," in _ICCV_, 2021, pp. 10012-10022.
* [25] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, "Deformable DETR: Deformable transformers for end-to-end object detection," in _ICLR_, 2021.
* [26] Y. Cai, J. Lin, X. Hu, H. Wang, X. Yuan, Y. Zhang, R. Timofte, and L. Van Gool, \"Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction,\" in _CVPR_, 2022, pp. 17 502-17 511.
* [27] J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, \"Multispectral imaging using multiplexed illumination,\" in _ICCV_, 2007, pp. 1-8.
* [28] I. Choi, M. Kim, D. Gutierrez, D. Jeon, and G. Nam, \"High-quality hyperspectral reconstruction using a spectral prior,\" _TOG_, vol. 36, no. 6, p. 218, 2017.
* [29] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, \"Image quality assessment: from error visibility to structural similarity,\" _TIP_, vol. 13, no. 4, pp. 600-612, 2004.
* [30] J. Ma, X.-Y. Liu, Z. Zhou, and X. Yuan, \"Deep tensor admm-net for snapshot compressive imaging,\" in _ICCV_, 2019, pp. 10 223-10 232.
* [31] Z. Meng, S. Jalali, and X. Yuan, \"Gap-net for snapshot compressive imaging,\" _arXiv preprint arXiv:2012.08364_, 2020.
* [32] T. Huang, W. Dong, X. Yuan, J. Wu, and G. Shi, \"Deep gaussian scale mixture prior for spectral compressive imaging,\" in _CVPR_, 2021, pp. 16 216-16 225.
* [33] Z. Meng, Z. Yu, K. Xu, and X. Yuan, \"Self-supervised neural networks for spectral snapshot compressive imaging,\" in _ICCV_, 2021, pp. 2622-2631.
* [34] Z. Cheng, B. Chen, R. Lu, Z. Wang, H. Zhang, Z. Meng, and X. Yuan, \"Recurrent neural networks for snapshot compressive imaging,\" _TPAMI_, 2022.
* [35] X. Hu, Y. Cai, J. Lin, H. Wang, X. Yuan, Y. Zhang, R. Timofte, and L. Van Gool, \"Hdnet: High-resolution dual-domain learning for spectral compressive imaging,\" in _CVPR_, 2022, pp. 17 542-17 551.
* [36] Y. Cai, J. Lin, Z. Lin, H. Wang, Y. Zhang, H. Pfister, R. Timofte, and L. Van Gool, "MST++: Multi-stage spectral-wise transformer for efficient spectral reconstruction," in _CVPR_, 2022, pp. 745-755.
* [37] Y. Cai, J. Lin, X. Hu, H. Wang, X. Yuan, Y. Zhang, R. Timofte, and L. Van Gool, "Coarse-to-fine sparse transformer for hyperspectral image reconstruction," in _ECCV_, 2022, pp. 686-704.
Camille Le Coz
LMD/IPSL, Ecole Polytechnique, Institut Polytechnique de Paris, ENS, PSL Research University, Sorbonne Universite, CNRS, Palaiseau France
Alexis Tantet
LMD/IPSL, Ecole Polytechnique, Institut Polytechnique de Paris, ENS, PSL Research University, Sorbonne Universite, CNRS, Palaiseau France
Remi Flamary
CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
Riwal Plougonven
LMD/IPSL, Ecole Polytechnique, Institut Polytechnique de Paris, ENS, PSL Research University, Sorbonne Universite, CNRS, Palaiseau France
###### Abstract
As a proof of concept, the L2- and the Wasserstein-barycenters are applied to combine two models from the S2S database, namely the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) models. The performances of the two (weighted) MMEs are evaluated for the prediction of weekly 2m-temperature over Europe for seven winters. The weights given to the models in the barycenters are optimized with respect to two metrics, the CRPS and the proportion of skillful forecasts. These weights have an important impact on the skill of the two barycenter-based MMEs. Although the ECMWF model has an overall better performance than NCEP, the barycenter-ensembles are generally able to outperform both. However, the best MME method, as well as the optimal weights, depends on the metric. These results constitute a promising first implementation of this methodology before moving to the combination of more models.
## 1 Introduction
**Multi-model ensemble methods (MME).** Multi-model ensemble (MME) methods have been shown to improve forecast skill at different time scales, from short- (Heizenreder et al., 2006; Casanova and Ahrens, 2009) and medium-range (Hamill, 2012; Hagedorn et al., 2012) to seasonal forecasting (Palmer et al., 2004; Alessandri et al., 2011; Kirtman et al., 2014). The added value of MMEs over single-model ensemble (SME) forecasts has been attributed to several factors. First, there is in general no "best" single model (Hagedorn et al., 2005): the relative performances of the single models vary depending on the considered target (i.e. region and variable of interest, metric, etc.). The MME can take advantage of the complementary skills of the single models and is able to perform better on average. Second, Hagedorn et al. (2005) identified error cancellation and the non-linearity of the metric as the main reasons for the MME performance being better than the average performance of the single models. Third, MMEs allow us to explore a new dimension of uncertainty which remains unexplored by SMEs, namely the uncertainty due to _model formulation_: by construction, an SME accounts for uncertainties in the model initialization and can introduce variations in parameterizations to sample some of the associated uncertainty, but it cannot account for uncertainty in the model formulation itself. Besides, Weigel et al. (2008) showed that an MME can improve the predictive skill if and only if the single models 'fail to capture the full amount of forecast uncertainty'. Finally, MMEs benefit from their larger ensemble size, which implies a better sampling of the underlying probability distribution.
**MME for sub-seasonal to seasonal (S2S) forecasts.** Sub-seasonal to seasonal forecasting bridges the gap between weather (medium-range) and seasonal forecasting. It corresponds to the time range between two weeks and up to two months. Predictions at this time scale are the focus of the Sub-seasonal to Seasonal (S2S) prediction project, whose objectives are to improve their skill and to promote their use (Vitart et al., 2017). As part of this project, a database containing S2S forecasts from twelve (originally eleven) operational centers has been made available to the research community. One of the main research questions the S2S project aims to answer is "what is the benefit of a multimodel forecast for subseasonal to seasonal prediction and how can it be constructed and implemented?" (Vitart et al., 2017).
Several studies have investigated the potential benefits of MMEs for S2S forecasts (Vigaud et al., 2017, 2020; Specq et al., 2020; Wang et al., 2020; Materia et al., 2020; Zheng et al., 2019; Pegion et al., 2019). These studies use different MME methods, variables and evaluation criteria, but they all concur that MMEs generally perform as well as or better than SMEs. Moreover, Vigaud et al. (2017) and Specq et al. (2020) suggest that MMEs improve not only the skill but also the reliability of the probabilistic forecasts. However, the studies using the pooling method point out that the better performance of the MME is also related to its larger number of members (Zheng et al., 2019; Specq et al., 2020; Wang et al., 2020). Among other studies, Karpechko et al. (2018) investigate a specific Sudden Stratospheric Warming event with an MME, while Ferrone et al. (2017) evaluate several MME methods, but neither of them compares the skill of the MME to that of the SMEs. Thus, while the potential benefits of MMEs at the sub-seasonal scale have clearly been established, the question of how to best combine ensemble forecasts from different models has received little attention. The present study focuses on this question.
The most direct and often used method for multi-model combination is the "pooling method". It consists in simply concatenating the ensemble members from the different models. The members of the new multi-model ensemble can have the same weights or be given different weights based on the models' skills (e.g. Weigel et al. (2008) at the seasonal scale and Wanders and Wood (2016) at the sub-seasonal scale). Among the above-mentioned studies, all used some variation of the pooling method except Vigaud et al. (2017, 2020) and Ferrone et al. (2017). These focused on the prediction of terciles, and therefore coupled multi-model combination with other post-processing methods (e.g. model output statistics methods such as extended logistic regression) to predict the terciles directly. Also focusing on quantiles, Gonzalez et al. (2021) used a sequential learning algorithm to linearly combine predictors (from the SMEs but also from the climatology and persistence), the weights being updated at each step depending on the previous performances. Some other methods for weighted MMEs have been explored, but have not yet been applied to the sub-seasonal scale. At the seasonal scale, Rajagopalan et al. (2002) developed a Bayesian methodology to combine ensemble forecasts for categorical predictands (also used by Robertson et al. (2004) and Weigel et al. (2008) at the seasonal scale to obtain the terciles of the MME). Other approaches, such as Ensemble Model Output Statistics (EMOS) and Bayesian Model Averaging (BMA), adopt a probability distribution perspective and aim at building the PDF of the MME. In the EMOS method, an assumption is made on the shape of the PDF of the MME and its parameters are then optimized with respect to a chosen score (e.g. the CRPS) over a training period (Gneiting et al., 2005). In contrast, in the BMA method, an assumption is made on the shape of the input distributions. The MME's PDF is then their weighted average, the weights being equal to the posterior probabilities of the input models (Raftery et al., 2005). The barycenter framework introduced below contains BMA as a particular case, as will be discussed later.
**MME as barycenters of discrete distributions: metrics and weights.** In this study, we propose to revisit the combination of multiple ensemble forecasts from a different point of view. We consider each ensemble forecast as a discrete probability distribution and reformulate the multi-model ensemble as a barycenter of these distributions. We show that, in this framework, the pooled MME is actually the (weighted) barycenter with respect to the \(L_{2}\)-distance. As we work with distributions instead of collections of ensemble members, the notion of barycenter can then be extended to other metrics in the space of distributions. In particular, a natural distance in the distribution space is the Wasserstein distance that stems from optimal transport theory (Villani, 2003). The Wasserstein distance is defined as the cost of the optimal transport between two distributions.
Optimal transport and the Wasserstein distance have been used in diverse applications in weather and climate science. They have been used to measure the response of climate attractors to different forcings (Robin et al., 2017) and to evaluate the performance of different climate models (Vissio et al., 2020) or of different parameterizations (Vissio and Lucarini, 2018). Moreover, Papayiannis et al. (2018) use Wasserstein barycenters to downscale wind speed from an atmospheric model to a point location. Robin et al. (2019) develop a multivariate bias-correction method based on optimal transport, while Ning et al. (2014) use it in the framework of data assimilation to deal with structural errors in forecasts.
Here, we use Wasserstein barycenters as a tool to build multi-model ensembles and compare them to the more traditional \(L_{2}\)-barycenter (i.e. the pooling method). We investigate the impact of this change of metric on the MME's performance. The two barycenters (Wasserstein and \(L_{2}\)) are applied to the combination of two models from the S2S database. Focusing on two models also allows us to explore the importance of the weights given to the models in the barycenters. We indeed use weighted barycenters to take into account the single-model performances. The weights are learnt from the data and the two barycenter-based MMEs are compared for their respective optimal weights.
The paper is organized as follows. The use of the two barycenters as MME methods is presented in Section 2. We first link the pooling method to the \(L_{2}\)-barycenter (2.2.1) before introducing the Wasserstein distance and its barycenter (2.2.2). The case study is described in Section 3, including the datasets and the evaluation metrics. The skills of the two MMEs and the two SMEs are evaluated and compared in Section 4. The results are discussed in Section 5, and the main conclusions are highlighted in Section 6.
## 2 Multi-model ensemble methods
### Ensemble forecasts as discrete probability distributions
At the S2S time scale, it is necessary to move from a deterministic to a probabilistic approach using ensemble forecasting (e.g. Kalnay (2003)). The ensemble is typically generated by perturbing the initial conditions and running the model for each of these perturbed states. The set of perturbed initial conditions represent the initial uncertainty associated with possible errors in the initial state of the atmosphere. This initial uncertainty is then transferred in time by the model. Thus, an ensemble forecast is a set of \\(N\\) perturbed forecasts representing the evolution in time of the probability density of atmospheric variables according to the model formulation.
An ensemble forecast aims at sampling the probability distribution of the forecasted variable. Now, this can also be considered as a discrete probability distribution \\(\\mu\\) such that
\\[\\mu=\\sum_{i=1}^{N}a_{i}\\delta_{\\mathbf{x}_{i}}\\]where \\(N\\) is the number of members in the ensemble, \\(\\mathbf{x}_{i}\\in\\mathbb{R}^{n_{t}}\\) is the position of the Dirac corresponding to the time-series of the \\(i\\)-th member, \\(n_{t}\\) is the number of time steps, and \\(a_{i}\\) is the weight of the \\(i\\)-th member (such that \\(\\forall i\\leq n,a_{i}\\geq 0\\) and \\(\\sum_{i=1}^{N}a_{i}=1\\)). Thus, here, \\(\\mathbf{x}_{i}\\) does not represent an instantaneous state, but a time series. In a standard ensemble forecast, all the members are equi-probable, so they have equal weights \\(a_{i}=\\frac{1}{N}\\). In the remainder of this section, we will look at ensemble forecast time-series from the angle of discrete distributions.
### Barycenters for multi-model combination
The goal of multi-model ensemble methods can be rephrased as combining the imperfect information from these distributions to obtain a new discrete distribution representing better the true probability distribution function of the forecasted variable. A way to summarize a collection of distributions \\((\\mu_{1},\\mu_{2}, ,\\mu_{d})\\) is to compute their barycenter. The barycenter is found by solving the following minimization problem
\\[\\operatorname*{arg\\,min}_{\\mu}\\quad\\sum_{k=1}^{d}\\lambda_{k}.d(\\mu_{k},\\mu)^ {2} \\tag{1}\\]
where \\(\\lambda_{k}\\) represents the weights given to distributions, and \\(d(.,.)\\) is a distance between distributions. The barycenter, also known as the Frechet mean, is effectively the distribution that best represents the input distributions with respect to a criterion given by the chosen distance \\(d\\).
As a first step, both to demonstrate the feasibility of the approach and to investigate the properties and skills of the different barycenters, we restrict ourselves to the barycenter of two distributions (or ensemble forecasts). We use the following notations for the two discrete probability distributions \(\mu_{1}\) and \(\mu_{2}\) on the space \(\Omega=\mathbb{R}^{n_{t}}\):
\\[\\mu_{1}=\\sum_{i=1}^{N_{1}}a_{i}\\delta_{\\mathbf{x}_{i}}\\quad\\text{and}\\quad\\mu _{2}=\\sum_{j=1}^{N_{2}}b_{j}\\delta_{\\mathbf{y}_{j}} \\tag{2}\\]
with \\(X=(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{N_{1}})\\in\\Omega^{N_{1}}\\), \\(Y=(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{N_{2}})\\in\\Omega^{N_{2}}\\), and where \\(\\mathbf{a}=(a_{1},\\ldots,a_{N_{1}})\\in\\Sigma_{N_{1}}\\) and \\(\\mathbf{b}=(b_{1},\\ldots,b_{N_{2}})\\in\\Sigma_{N_{2}}\\) are probability vectors. The barycenter of \\(\\mu_{1}\\) and \\(\\mu_{2}\\) with respect to a distance \\(d\\) is then given by
\\[\\operatorname*{arg\\,min}_{\\mu}\\quad\\alpha.d(\\mu_{1},\\mu)^{2}+(1-\\alpha).d(\\mu _{2},\\mu)^{2} \\tag{3}\\]
where \\(\\alpha\\in[0,1]\\) represents the weight given to the first distribution, the second distribution having a weight of \\(1-\\alpha\\). These weights have an important impact on the barycenter and will allow us to take into account the fact that one distribution has generally better skill. They can be set from a priori knowledge, e.g. when one model is expected to represent better the variable than the other one and so it is given more weight. They can also be learned a posteriori from the data (e.g. from past forecasts). We choose this second approach. More details on the estimation of the weights are given in Section 3.3.
#### 2.2.1 \\(L_{2}\\) barycenter
The \\(L_{2}\\) distance between two distributions \\(\\mu_{1}\\) and \\(\\mu_{2}\\) is given by \\(\\left\\|\\mu_{1}-\\mu_{2}\\right\\|_{2}=\\left(\\int\\left(\\mu_{1}(z)-\\mu_{2}(z) \\right)^{2}dz\\right)^{1/2}\\). When using this distance in the barycenter equation (3), one can find the analytical formula for the \\(L_{2}\\)-barycenter:
\\[\\mu_{L_{2}}^{\\alpha}=\\alpha.\\mu_{1}+(1-\\alpha).\\mu_{2}=\\alpha\\sum_{i=1}^{N_{1 }}a_{i}\\delta_{\\mathbf{x}_{i}}+(1-\\alpha)\\sum_{j=1}^{N_{2}}b_{j}\\delta_{ \\mathbf{y}_{j}} \\tag{4}\\]
The first part of Equation (4) shows that the \(L_{2}\)-barycenter is a weighted average of distributions. This is similar to the BMA method, which can thus be seen as an \(L_{2}\)-barycenter with specific weights derived from the Bayesian framework. Another difference is that BMA makes assumptions on the shape of the input PDFs, whereas we use the ensembles directly as discrete probability distributions (second part of (4)).
From the second part of Equation (4), one can see that the \(L_{2}\)-barycenter corresponds to the concatenation of the members of the two ensembles with a re-scaling of the members' weights using \(\alpha\). This can be seen as "pooling" together the ensemble members from the two models. The pooling method is a simple and well-established MME method. It has been used at the seasonal scale (Hagedorn et al., 2005; Weigel et al., 2008; Robertson et al., 2004; Becker et al., 2014), at the decadal scale (Smith et al., 2013) and at the medium-range scale (Hamill, 2012; Hagedorn et al., 2012). More recently, it has also been applied to ensemble sub-seasonal forecasts (Karpechko et al., 2018; Pegion et al., 2019; Zheng et al., 2019; Specq et al., 2020; Materia et al., 2020). It is interesting to note that the majority of MME methods for S2S forecasts use pooling and thus, implicitly, the \(L_{2}\) distance. However, the \(L_{2}\) distance is not the only possible distance in the distribution space. In the following, we introduce another distance, the Wasserstein distance, and its associated barycenter.
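To make the correspondence between Eq. (4) and the pooling method concrete, a minimal sketch (in Python/NumPy) of the weighted \(L_{2}\)-barycenter of two ensembles is given below: the members are concatenated and their weights re-scaled by \(\alpha\) and \(1-\alpha\). The function and variable names are ours and only serve as an illustration of the construction.

```python
import numpy as np

def l2_barycenter(members1, members2, alpha, a=None, b=None):
    """Weighted L2-barycenter (pooling) of two ensemble forecasts, as in Eq. (4).

    members1: array of shape (N1, n_t) -- member time series of the first model
    members2: array of shape (N2, n_t) -- member time series of the second model
    a, b    : member weights (uniform by default)
    Returns the support (member time series) and weights of the barycenter.
    """
    members1, members2 = np.asarray(members1), np.asarray(members2)
    n1, n2 = len(members1), len(members2)
    a = np.full(n1, 1.0 / n1) if a is None else np.asarray(a)
    b = np.full(n2, 1.0 / n2) if b is None else np.asarray(b)
    support = np.vstack([members1, members2])                  # union of the supports
    weights = np.concatenate([alpha * a, (1.0 - alpha) * b])   # re-scaled member weights
    return support, weights

# Example with ensemble sizes similar to the perturbed members of ECMWF and NCEP
rng = np.random.default_rng(0)
ens_ecmwf = rng.normal(0.0, 1.0, size=(50, 4))   # (members, weekly lead times)
ens_ncep = rng.normal(0.5, 1.5, size=(15, 4))
support, weights = l2_barycenter(ens_ecmwf, ens_ncep, alpha=0.8)
assert support.shape == (65, 4) and np.isclose(weights.sum(), 1.0)
```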
#### 2.2.2 Wasserstein barycenter
The Wasserstein distance stems from the optimal transport theory and can be seen as the cost of transportation between two distributions \\(\\mu_{1}\\) and \\(\\mu_{2}\\). It can be defined on discrete distributions as:
\\[W_{2}^{2}(\\mu_{1},\\mu_{2})=\\ \\min_{T\\in U(\\mathbf{a},\\mathbf{b})}\\langle \\mathbf{T},\\mathbf{C}\\rangle=\\min_{T\\in U(\\mathbf{a},\\mathbf{b})}\\sum_{i,j}t_ {i,j}\\|\\mathbf{x}_{i}-\\mathbf{y}_{j}\\|^{2} \\tag{5}\\]
where \\(U(\\mathbf{a},\\mathbf{b})=\\left\\{\\mathbf{T}\\in\\mathbb{R}_{+}^{N_{1},N_{2}}: \\mathbf{T}\\mathbf{1}_{N_{2}}=\\mathbf{a}\\ \\ \\text{and}\\ \\ \\mathbf{T}^{T}\\mathbf{1}_{N_{1}}=\\mathbf{b}\\right\\}\\) is the set of all the feasible transport matrix \\(\\mathbf{T}\\) between the probability vectors \\(\\mathbf{a}\\) and \\(\\mathbf{b}\\) (with \\(\\mathbf{1}_{N}\\) standing for the all-ones vector of size \\(N\\)), and \\(\\mathbf{C}\\in\\mathbb{R}^{N_{1}\\times N_{2}}\\) is a distance matrix whose elements are the pairwise squared euclidean distances between the elements of \\(X\\) and \\(Y\\). Here, \\(\\left(\\langle\\mathbf{T},\\mathbf{C}\\rangle\\right)^{1/2}\\) is the cost associated with the transport \\(\\mathbf{T}\\) (with \\(\\langle.,.\\rangle\\) being the element-wise matrix multiplication operator). The elements \\(t_{i,j}\\) of a transport matrix \\(\\mathbf{T}\\) describe the amount of mass going from \\(x_{i}\\) to \\(y_{j}\\), while \\(d_{i,j}\\) is the cost of moving one mass unit from \\(\\mathbf{x}_{i}\\) to \\(\\mathbf{y}_{j}\\). The minimization problem consists in searching for the optimal transport among all the feasible ones in \\(U(\\mathbf{a},\\mathbf{b})\\), i.e. the transport associated with the lowest cost denoted as the 2-Wasserstein distance. This is a short description of the Wasserstein distance for discrete distributions, for more information see Peyre and Cuturi (2020), Santambrogio (2015) or Villani (2003).
The barycenter of two distributions is a special case for which there is a closed-form expression depending on the optimal plan \\(\\mathbf{T}^{*}\\):
\\[\\mu_{W_{2}}^{\\alpha}(\\mu_{1},\\mu_{2})=\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{2}}t_{ i,j}^{*}\\delta_{\\alpha\\mathbf{x}_{i}+(1-\\alpha)\\mathbf{y}_{j}} \\tag{6}\\]
where the \\(\\mathbf{T}^{*}=(t_{i,j}^{*})\\) is the optimal transport matrix between \\(\\mu_{1}\\) and \\(\\mu_{2}\\), i.e. the solution of the minimization problem (5) (Santambrogio, 2015, theorem 5.27). For a linear problem on a convex polytope such as \\(U(\\mathbf{a},\\mathbf{b})\\), the optimum is always achieved on a vertex of the feasible region. Thus, the minimum is found on a vertex of the feasible set \\(U(\\mathbf{a},\\mathbf{b})\\), which means that the optimal matrix \\(\\mathbf{T}^{*}\\) has at most \\(N_{1}+N_{2}-1\\) non-zeros elements. Thus, the Wasserstein barycenter \\(\\mu_{W_{2}}\\) in (6) has a maximum of \\(N_{1}+N_{2}-1\\) points, or members in terms of ensemble forecast (and not \\(N_{1}\\times N_{2}\\) as the formula suggests).
An illustration of the Wasserstein barycenter between two distributions is shown in Figure 1. One can see an example with two 2D discrete distributions, their optimal transport plan \(\mathbf{T}^{*}\) and their Wasserstein barycenter. In this example, all the points of a given distribution have the same weights. However, the first distribution has more points than the second one, so its points individually carry smaller weights than those of the second distribution. This is representative of ensemble forecasts: all their members are usually equi-probable (i.e. they have the same weights) but the number of members per forecast varies from one model to the other. The optimal transport between the two input distributions is represented here by lines between their points. As shown by Equation (6), for \(\alpha=0.5\), the points of the Wasserstein barycenter are located in the middle of these lines. The weights of the points in the barycenter distribution are equal to the mass (\(t_{i,j}^{*}\)) transported along the lines.
#### 2.2.3 Illustration of the application of barycenters to ensemble forecasts
The \\(L_{2}\\) and \\(W_{2}\\) distances have different properties in the distribution space, and so lead to different multi-model ensembles. An interesting property of the \\(W_{2}\\)-distance is that it captures the proximity of two discrete distributions as a whole, whereas the \\(L_{2}\\)-distance treats all the Dirac distributions independently. For example, contrary to the behavior of the \\(W_{2}\\)-distance illustrated in Fig. 1, the \\(L_{2}\\)-distance between two distributions with disjoint supports is equal to the sum of their \\(L_{2}\\)-norms, no matter how distant their supports. These properties will reflect on the corresponding barycenters.
Figure 2 gives an illustration of the two barycenters applied to synthetic ensemble forecasts. The \\(L_{2}\\)-barycenter in Figure 1(b) is the concatenation of the two input ensembles in Figure 1(a). The weight of each member of the \\(L_{2}\\)-barycenter is given by the weight of the corresponding member in the input ensemble (\\(1/N_{1}\\) or \\(1/N_{2}\\)) multiplied by the corresponding model's weight (\\(\\alpha\\) or \\(1-\\alpha\\)). The \\(W_{2}\\)-barycenter shown in Fig. 1(c) has a different structure compared to the \\(L_{2}\\)-barycenter. One major difference is that none of its members belongs to the input ensembles. In other words, while the support of the \\(L_{2}\\)-barycenter is the union of the input's supports, the support of the \\(W_{2}\\)-barycenter is on the path of the optimal transport and so can be different. This also allows the \\(W_{2}\\)-barycenter to retain some characteristics of the input distributions. For example, in this illustrative case, both input distributions are unimodal and so is the \\(W_{2}\\)-barycenter. This is however not the case for the \\(L_{2}\\)-barycenter which has two modes, each associated with the mode of one of the input distributions.
These properties of the two barycenters may be attractive for different types of forecast error: the artificial example of Figure 2 is constructed to illustrate fundamental differences between the two barycenters. It may be tempting to interpret the \(W_{2}\)-barycenter as a means to correct biases, or to ponder the advantage of the \(L_{2}\)-barycenter in preserving bi-modality, which may be of interest (Bertossa et al., 2023). Actual forecasts (Figure 3) are more complex and an assessment of different barycenters will require extensive testing with different criteria. The purpose of Figure 2 is to emphasize the (hitherto largely untapped) variety of ways to build multi-model ensemble forecasts.

Figure 1: Illustration of the \(W_{2}\)-barycenter between two 2D discrete distributions \(\mu_{1}\) and \(\mu_{2}\). The weights of the points are indicated by the size of the markers. The lines represent the optimal transport between \(\mu_{1}\) and \(\mu_{2}\), with the intensity of the line indicating the mass transferred from one point to the other. The \(W_{2}\)-barycenter’s points for \(\alpha=0.5\) are located in the middle of these lines.

Figure 2: Illustration of multi-model ensembles using barycenters for a synthetic variable. The ensemble can be seen as a discrete distribution whose points are the time series of each member. The weight of the points in the distribution is indicated here by the thickness of the line.
Figure 3 shows a similar example as Figure 2 but for two real S2S forecasts. In Fig. 3(a), one can see the forecasted 2m-temperature in Paris according to the ECMWF (in blue) and to the NCEP (in orange) models from the S2S database. Their \(L_{2}\)- and \(W_{2}\)-barycenters are shown in Fig. 3(b) and 3(c) respectively. It is interesting to note that, for a given parameter \(\alpha\), the two barycenters have different ensemble members but their ensemble means are the same. The difference between them is the way they represent the forecast uncertainty. This raises the question: how different are the \(L_{2}\)- and \(W_{2}\)-distributions and which one better captures the forecast uncertainty?
## 3 Data and methodology
In this study, we focus on the sub-seasonal forecasting of 2m-temperature during boreal winter in Europe. The energy demand is higher in winter, and is particularly dependent on the temperature (due to the use of electrical heating). Moreover, temperature is a large-scale field for which forecast models generally have good skill, making it a good starting point to evaluate the different MME methods.
### Data
To make predictions for this case and to validate these predictions, we need both a dataset of ensemble forecasts from multiple dynamical models, to which the barycenters will be applied, and a reference dataset against which to evaluate the skill of the forecasts.
#### 3.1.1 S2S data
For this first implementation, we selected two models from the S2S database (Vitart et al., 2017). The first one is the European Centre for Medium-Range Weather Forecasts (ECMWF) model, which has been shown to have some skill for European winters at the sub-seasonal scale (Monhart et al., 2018; Bueler et al., 2020; Goutham et al., 2022). For the second one, we chose the National Centers for Environmental Prediction (NCEP) model, since its development was essentially independent from that of ECMWF. As explained earlier, one of the advantages of multi-model ensembles is to better sample and compensate for model errors; if the models have a similar construction, this error is not sampled properly. Moreover, both models have a long time range and a large ensemble size. Their main characteristics are summarized in Table 1.
The 2m-temperature forecasts and reforecasts from both models were retrieved from the S2S database through the ECMWF's Meteorological Archival and Retrieval System (MARS). We retrieve the forecasts of the temperature daily mean directly on a \(1.5^{\circ}\times 1.5^{\circ}\) grid for our study domain shown in Figure 4, that is Europe (\(34^{\circ}N-74^{\circ}N\), \(13^{\circ}W-40^{\circ}E\)). We select the forecasts started during the months of December-January-February (DJF) for the 2015-2022 period, for which ECMWF and NCEP have matching starting dates. At the end, we obtain a total of 180 simulations. Here, we are only using the perturbed forecast members of the forecasts to build the discrete distributions. For consistency, we also discard the control member of the reforecasts for the calibration.

Figure 3: Same as Fig. 2 but for two real sub-seasonal ensemble forecasts of 2m-temperature over Paris (initialized at 2018-02-01 00:00:00). The reference 2m-temperature is added in black.
Due to model errors, models tend to drift away from the observed climate toward the model climatology as lead time increases (Takaya, 2019). It is thus important to calibrate extended-range forecasts. The model climatology can be estimated from the reforecasts, which are "retrospective" forecasts run with the same model as the forecast but in the past, while the observed climatology is derived from a reference data set (Sect. 3.1.2 below). Then, calibration methods can be applied to statistically correct the forecasts. Here, we use the mean and variance adjustment (MVA, Leung et al. (1999); Manzanas et al. (2019)) method to calibrate the forecasts as in Goutham et al. (2022).

Table 1: Description of the two models from the S2S database. See Vitart et al. (2017) for more details.

| | ECMWF | NCEP |
| --- | --- | --- |
| **Forecasts** | | |
| Time range | d 0-46 | d 0-44 |
| Resolution | Tco639/Tco319 L137 | T126 L64 |
| Ensemble size | 50+1 | 15+1 |
| Frequency | Twice a week (Monday, Thursday) | Daily |
| **Reforecasts** | | |
| Method | On the fly | Fixed |
| Period | Past 20 years | 1999-2010 |
| Frequency | Twice a week | Daily |
| Ensemble size | 10+1 | 3+1 |

Figure 4: Study domain with the \(1.5^{\circ}\times 1.5^{\circ}\) lat/lon grid from the forecasts. Only the land points (in grey) are used in this case study.
The reforecast production method differs from center to center. At ECMWF, reforecasts are produced "on the fly", that is, each forecast is provided with its set of corresponding reforecasts (initialized on the same day of the year over the previous 20 years). On the contrary, NCEP produced the reforecasts at once for all the days of a fixed period. From this "fixed" set, we select all the reforecasts initialized on the same day of the year as the forecast's initialization date. This calibration approach is more favorable to the ECMWF forecasts since the climatological statistics are computed over a longer period (20 years and 10 members versus 12 years and 3 members) and their evolution is taken into account thanks to the rolling reforecast period, contrary to the NCEP forecasts for which a fixed time period is used. However, this approach was chosen to have a similar construction for both models and in accordance with the reforecast availability.
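For reference, a minimal sketch of the MVA calibration as applied per grid point and lead time is given below: the forecast anomalies with respect to the model climatology (estimated from the reforecasts) are rescaled by the ratio of the observed to model climatological standard deviations and re-centred on the observed climatological mean. This is a simplified illustration under our own naming conventions, not the exact implementation.

```python
import numpy as np

def mva_calibration(fc_members, refc_clim, obs_clim):
    """Mean and variance adjustment (MVA) of an ensemble forecast.

    fc_members: raw forecast members for one grid point and lead time
    refc_clim : reforecast values used to estimate the model climatology
    obs_clim  : reference (here MERRA-2) values over the climatological period
    """
    fc_members = np.asarray(fc_members, float)
    mu_m, sd_m = np.mean(refc_clim), np.std(refc_clim, ddof=1)
    mu_o, sd_o = np.mean(obs_clim), np.std(obs_clim, ddof=1)
    # Re-centre on the observed climatological mean and re-scale the anomalies
    return mu_o + (fc_members - mu_m) * (sd_o / sd_m)
```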
#### 3.1.2 Reference data
The Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2, Gelaro et al. (2017)) reanalysis is used as reference for the calibration and the validation of the forecasts. We use a reanalysis as reference in order to have a spatially and temporally complete dataset. We choose the MERRA-2 reanalysis because it is a recent reanalysis covering both the calibration and validation periods, and it is based on a different global circulation model than both the ECMWF and the NCEP S2S forecasts. This would not be the case, for example, with the ERA5 reanalysis, which is also produced by ECMWF with a model similar to the one used for the S2S forecasts (Hersbach et al., 2020). The 2m-temperature daily means were retrieved from the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) on MERRA-2's native grid, i.e. a 0.5\(^{\circ}\) lat \(\times\) 0.625\(^{\circ}\) lon grid. The data was re-gridded onto the same \(1.5^{\circ}\times 1.5^{\circ}\) lat/lon grid as the forecasts using bi-linear interpolation with the Climate Data Operators (CDO, Schulzweida (2022)).
We also use the MERRA-2 reanalysis to build a 30-year rolling climatology. The climatology is a common benchmark for forecast validation, and is used to compute skill scores (see Section 3.2). For a given day of the year, the climatology corresponds to the MERRA-2 values for the same day over the previous 30 years. Using a rolling climatology allows us to take into account the climatic trend of the temperature. The climatology is thus an ensemble with 30 members, each one corresponding to a different year.
### Skill metrics
The performances of the forecasts are evaluated for weekly averages over weeks 3 to 6, which corresponds to the sub-seasonal time scale. We only evaluate the forecasts' performance over the land points of our study domain (that is, a total of 465 grid-points indicated in grey in Fig. 4), and, when spatial averaging is needed, the scores are weighted by the cosine of the latitude in order to account for the spherical geometry of the Earth. We consider several metrics to evaluate and compare the performances of the different ensemble forecasts.
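The cosine-latitude weighting of the spatial averages can be sketched as follows (a minimal illustration with our own function name, assuming the scores and latitudes of the land grid-points are stored as flat arrays):

```python
import numpy as np

def lat_weighted_mean(score, lat):
    """Cosine-latitude weighted spatial average of a score over the land grid-points.

    score: array of shape (n_points,) -- score at each land grid-point
    lat  : array of shape (n_points,) -- latitude of each grid-point, in degrees
    """
    w = np.cos(np.deg2rad(np.asarray(lat)))
    return float(np.sum(w * np.asarray(score)) / np.sum(w))
```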
#### 3.2.1 Continuous Ranked Probability Score (CRPS)
The CRPS is a widely used score for probabilistic forecasts of a continuous variable (Matheson and Winkler, 1976; Hersbach, 2000; Wilks, 2019). The CRPS is actually the squared \(L_{2}\) distance between the Cumulative Distribution Function (CDF) of the ensemble forecast and the CDF of the observation:
\\[\\text{CRPS}_{n}=\\int_{-\\infty}^{+\\infty}\\left[F_{\\text{fc},n}(y)-F_{\\text{ ref},n}(y)\\right]^{2}dy \\tag{7}\\]
where \\(F_{\\text{fc}}\\) is the CDF of the forecast, \\(F_{\\text{ref}}\\) is the CDF of the observation and \\(n\\in\\llbracket 1,N\\rrbracket\\) is the simulation's number. The CDF are computed empirically from the ensembles. In general the reference is deterministic, and its CDF is then a step function
\\[F_{\\text{ref}}(y)=\\begin{cases}0&\\text{if }y<x_{\\text{ref}},\\\\ 1&\\text{if }y\\geq x_{\\text{ref}}.\\end{cases}\\]
where \\(x_{\\text{ref}}\\) is the deterministic value of the reference.
In the remainder, we evaluate the models with respect to their CRPS averaged over the simulations. We denote this mean CRPS by CRPSm (to avoid any confusion with \(\text{CRPS}_{n}\) above). The CRPSm is negatively oriented, with 0 being the perfect score.
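In practice, the CRPS of a (possibly weighted) discrete ensemble against a deterministic observation can be computed with the well-known kernel (energy-distance) form of Eq. (7). A minimal sketch, with our own function name, is given below.

```python
import numpy as np

def crps_ensemble(members, weights, obs):
    """Empirical CRPS of a (possibly weighted) ensemble against a scalar observation.

    Uses the kernel identity CRPS = E|X - y| - 0.5 E|X - X'|, which is
    equivalent to the integral of Eq. (7) for a discrete forecast distribution.
    """
    x = np.asarray(members, float)
    w = np.asarray(weights, float)
    term1 = np.sum(w * np.abs(x - obs))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# CRPSm: average of the CRPS over the n = 1..N simulations
# crpsm = np.mean([crps_ensemble(m, w, o) for m, w, o in simulations])
```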
#### 3.2.2 Proportions of skillful forecasts (CRPSp)
The second performance score we consider for the evaluation of the models is the proportion of skillful forecasts (Goutham et al., 2022). It corresponds to the percentage of simulations for which the model has a better CRPS than the climatology:
\\[\\text{CRPSp}=\\frac{100}{N}\\sum_{n=1}^{N}\\left[\\text{CRPS}_{\\text{fc},n}<\\text{ CRPS}_{\\text{clim},n}\\right] \\tag{8}\\]
where \\(N\\) is the number of simulations. The proportion of skillful forecasts (CRPSp) is positively oriented, and we consider that the model has some skill if 50% of the forecasts are skillful. It is also more robust to outliers than the CRPSm because it will not be affected much by a few simulations very far from the ground truth.
#### 3.2.3 Brier Score
In order to investigate different attributes of the ensemble forecasts, we also compute their Brier score and the associated decomposition for the event "2m-temperature below normal". The Brier score is a measure of accuracy for probability forecasts of dichotomous events (similarly to the CRPS for a continuous predictand, Wilks (2019)). It is equal to the mean square error of such probabilistic forecasts:
\\[\\text{Brier score}=\\frac{1}{N}\\sum_{n=1}^{N}(y_{n}-o_{n})^{2} \\tag{9}\\]
where \\(N\\) is the number of simulations (here \\(N=180\\) starting-dates), \\(y_{n}\\) is the forecasted probability of the event and \\(o_{n}\\) the observed event (i.e. 1 if it occurs, 0 otherwise). If the probability forecasts \\(y_{n}\\) are only allowed to take a finite number of values (here \\([0,0.2,0.4,0.6,0.8,1.0]\\)), it can be decomposed in three terms including the reliability and the resolution, which quantify different attributes of the forecasts (Murphy, 1973; Wilks, 2019). The reliability characterizes the correspondence between the predicted probability of the event with respect to the ensemble forecast and the relative event's frequency conditioned on the forecasted probability (and is positively oriented). The resolution describes whether different predictions lead to different outcomes (and is negatively oriented).
The ensemble forecast for the temperature is transformed into a probabilistic forecast \(y\) for this dichotomous event by counting the proportion of ensemble members that are in the lower tercile of the climatology and rounding it to the nearest allowed value.
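A sketch of this computation is given below: the ensemble is turned into a binned probability forecast of the "below lower tercile" event, and the Brier score with its reliability and resolution terms (Murphy, 1973) is then evaluated. Function and variable names are ours, and the binning follows the allowed probability values listed above.

```python
import numpy as np

ALLOWED = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

def tercile_probability(members, weights, clim_values):
    """Forecast probability of '2m-temperature below the climatological lower tercile'."""
    members, weights = np.asarray(members, float), np.asarray(weights, float)
    threshold = np.quantile(np.asarray(clim_values, float), 1.0 / 3.0)
    p = np.sum(weights[members < threshold])
    return ALLOWED[np.argmin(np.abs(ALLOWED - p))]   # round to the nearest allowed value

def brier_decomposition(y, o):
    """Brier score, reliability and resolution terms for binned probability forecasts.

    y: forecast probabilities, taking values in ALLOWED; o: observed events (0 or 1).
    The decomposition satisfies BS = REL - RES + UNC (Murphy, 1973).
    """
    y, o = np.asarray(y, float), np.asarray(o, float)
    bs = np.mean((y - o) ** 2)                       # Eq. (9)
    obar = o.mean()
    rel = res = 0.0
    for yk in ALLOWED:
        mask = y == yk
        if mask.any():
            nk, ok = mask.sum(), o[mask].mean()
            rel += nk * (yk - ok) ** 2
            res += nk * (ok - obar) ** 2
    return bs, rel / len(y), res / len(y)
```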
#### 3.2.4 Relationships between skill metrics and barycenters
* The Wasserstein distance could also be used as a score for forecast validation. However, in the case of a deterministic observation, the Wasserstein distance is equivalent to the RMSE (see appendix A). This means that it does not take into account the uncertainty information of the ensemble. Thus, we do not use it for the evaluation of the forecasts in this study.
* It is interesting to note that for univariate distributions, the \(L_{2}\)-barycenter is also the barycenter with respect to the energy distance (see appendix B). Thus, since the CRPS is identical to the energy distance in 1D (see Wilks, 2019), we can expect good CRPS performance for the \(L_{2}\)-barycenter.
* One can show that the CRPS of the \(L_{2}\)-barycenter \(\mu_{L_{2}}^{\alpha}(\mu_{1},\mu_{2})\) can be expressed as a function of the CRPS of \(\mu_{1}\) and \(\mu_{2}\):
\[\text{CRPS}(\mu_{L_{2}},\nu_{obs})=\alpha.\text{CRPS}(\mu_{1},\nu_{obs})+(1-\alpha).\text{CRPS}(\mu_{2},\nu_{obs})-\alpha.(1-\alpha).\text{CRPS}(\mu_{1},\mu_{2}) \tag{10}\]
where \(\nu_{obs}\) is the observation (see appendix C for proof). A numerical check of this identity is sketched after this list.
### Model combinations with barycenters
#### 3.3.1 Estimation of the weight \\(\\alpha\\)
The ECMWF and NCEP ensembles are combined following the barycenter-based MME method described in Section 2. We consider two multi-model ensembles, one based on the \(L_{2}\)-barycenter and one based on the \(W_{2}\)-barycenter. The weights given to the models in the new multi-model ensemble are represented by the parameter \(\alpha\in[0,1]\) (see Eq. (3)). That is, the weight \(\alpha\) is given to the ECMWF ensemble and the weight \(1-\alpha\) is given to the NCEP ensemble. We estimate this parameter \(\alpha\) from the data.
We perform cross-validation to validate the multi-model forecasts. To account for the non-stationarity associated with the seasonal cycle, the 180 simulations are divided into seven folds corresponding to seven winters (with 25 or 26 simulations each). Simulations from six winters are used to determine the optimal value of \(\alpha\), and this value is then used to compute the barycenter ensembles for the remaining winter. To find the optimal \(\alpha\), we use a grid search in the interval \([0,1]\) with a step of 0.02.
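This leave-one-winter-out grid search can be sketched as follows. Here `build_mme` and `score` are hypothetical placeholders standing, respectively, for either barycenter construction and for the chosen metric (e.g. the CRPSm, assumed negatively oriented; for the CRPSp one would maximize instead); the rest of the naming is ours.

```python
import numpy as np

def optimal_alpha_per_fold(folds, build_mme, score, n_grid=51):
    """Leave-one-winter-out grid search for the model weight alpha.

    folds    : list of winters, each a list of (ens_ecmwf, ens_ncep, obs) simulations
    build_mme: function (ens1, ens2, alpha) -> (members, weights) of the MME
    score    : function mapping a list of (members, weights, obs) triples to a scalar,
               assumed negatively oriented (e.g. the CRPSm)
    """
    grid = np.linspace(0.0, 1.0, n_grid)          # step of 0.02 for n_grid = 51
    alphas = []
    for k in range(len(folds)):                   # the k-th winter is held out
        train = [sim for i, f in enumerate(folds) if i != k for sim in f]
        costs = []
        for a in grid:
            mmes = [build_mme(e1, e2, a) + (obs,) for e1, e2, obs in train]
            costs.append(score(mmes))
        alphas.append(grid[int(np.argmin(costs))])
    return alphas                                 # one optimal alpha per fold
```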
Remark: We mentioned earlier that the BMA method is a particular case of the \(L_{2}\)-barycenter. In the case of BMA, the weights are the posterior probabilities of the input models; they represent the probability that the associated model is the best model.
#### 3.3.2 Optimal \\(\\alpha\\) on the data
In fact, for each barycenter we estimate two optimal \(\alpha\) values, one for the mean CRPSm and one for the mean CRPSp, averaged over weeks 3 to 6 and over the whole domain. These optimal \(\alpha\) values are shown in Figure 5. The narrow spread of the distributions of the optimal \(\alpha\) for a given metric and barycenter indicates that the weights are consistent across the different folds of the cross-validation. Our training period of six winters thus seems to be sufficient to derive stable weights.
One can see that for both barycenters and both scores, the optimal \\(\\alpha\\) values are above 0.5. That means that more weight is given to the ECMWF ensemble, which is in agreement with the known skill of the models. In the next section, we will see that the ECMWF ensemble indeed tends to show better performance than the NCEP ensemble for all the scores. For the \\(L_{2}\\)-barycenter, both scores are associated with similar optimal values for \\(\\alpha\\), around 0.8. For the \\(W_{2}\\) barycenter, the scores tend to be more sensitive to the value of \\(\\alpha\\). The optimal value of \\(\\alpha\\) is around 0.85 for the CRPS, and around 0.7 for the CRPSp.
Figure 5: Distributions of the optimal parameter \\(\\alpha\\) obtained by cross-validation for (a) the \\(L_{2}\\)-barycenter and (b) the \\(W_{2}\\)-barycenter. The optimal \\(\\alpha\\) across the seven folds are represented as boxplots and their mean is represented by a square marker.
## 4 Results
### Models evaluation
In this section, we validate and compare the weekly 2m-temperature forecasts from the two multi-model (barycenter) ensembles and the two single-model ensembles. All the scores shown here for the former are obtained with their respective optimal weight \(\alpha\).
**Spatial performances.** Figure 6 shows an inter-comparison of the CRPSm and CRPSp of the different ensembles. In the first panel (Fig. 6a), one can see the average of the CRPSm over the domain. The distribution over the starting dates is represented by its mean in the plot, and is used to test whether the models' scores are significantly different from each other at the 5% significance level. The significance is inferred using the Wilcoxon signed-rank test, a non-parametric paired statistical test (Wilcoxon, 1945). A first surprising observation is that the CRPS of the climatology decreases with lead-time. This can be explained by the effect of the seasonality on the performance of the climatology. Indeed, the CRPS of the climatology has a strong annual cycle that goes from about \(0.9^{\circ}C\) in summer to \(1.5^{\circ}C\) in winter (values obtained for the period 2016-2022 and averaged over Europe). Due to our choice of forecast selection (i.e. forecasts initialized in DJF), the six-week lead time contains more weeks of spring while the four-week lead time contains mostly weeks of winter. We also observe that the NCEP ensemble has a significantly lower performance than the other ensemble forecasts, and even than the climatology on average. The three other ensembles have very similar mean CRPSm, with the \(L_{2}\)-barycenter being slightly better, followed by the \(W_{2}\)-barycenter. However, the skills of the two barycenter ensembles are not significantly different from that of the ECMWF ensemble at the 5% significance level, except at week 5 (see Table 2).
Similarly, the second panel (Fig. 6b) shows the mean of the (spatially averaged) CRPSp distribution over the starting dates. It is interesting to note that, despite being worse than the climatology on average, NCEP is better than the climatology in terms of proportion for weeks 3 to 5. Indeed, its CRPSp is above 50%, meaning that NCEP performs better than the climatology in terms of CRPS at more than half of the grid-points. However, the three other ensembles perform significantly better than the NCEP one. This time, the \(W_{2}\)-barycenter ensemble has a significantly better proportion of skillful forecasts than both ECMWF and the \(L_{2}\)-barycenter. The \(L_{2}\)-barycenter also performs better than ECMWF, but the difference is significant at the 5% level only for week 4 (see Table 3).
Figure 6: Mean of (a) the spatial average of the CRPSm and (b) the spatial proportion of skillful forecasts CRPSp of the weekly 2m-temperature for the different models and as a function of the lead-time.
**Spatial distribution of the best models.** The performance of the ensembles also varies across the domain. The spatial patterns are similar for all four ensembles, in the sense that variations across the domain for each model tend to dominate over differences between models (not shown here). An exception is the NCEP ensemble, which shows clearly worse performance than the three other ensembles. Thus, for easier comparison, we build best-model maps that show, for each grid-point, which of the four ensembles performs best with respect to the chosen score. The best-model maps with respect to the CRPSm for the different weeks are shown in Figure 7. The best model varies across the domain. There are no grid-points for which the NCEP ensemble performs significantly better than the ECMWF ensemble. However, in agreement with the average score from Fig. 6a, the \(L_{2}\)-barycenter has the best CRPSm at a majority of grid-points, but is significantly different from the ECMWF ensemble only in specific areas, mostly in the North-East of Europe. This shows that, around this area, the information provided by the NCEP ensemble to the \(L_{2}\)-barycenter is helpful for forecasting, even though the performance of the NCEP ensemble alone tends to be worse than that of the ECMWF ensemble.
Similarly, best-model maps with respect to the proportion of skillful forecasts (CRPSp) are shown in Figure 8. Since this time we have a distribution of nominal values (true when the forecast is better than
Figure 7: Best-model maps with respect to the CRPSm of the 2m-temperature for different lead-times. The color of the pixels indicates which of the ensemble forecasts has the best score at this location. The crosses indicate grid-points for which the best model performs significantly better than the ECMWF ensemble at the 5% significance level (according to the Wilcoxon signed-rank test).
climatology, false otherwise), we use the McNemar test to test whether the best model is significantly better than the ECMWF ensemble (McNemar, 1947). The best-model maps for the CRPSp tend to be less smooth than those for the CRPSm (in terms of spatial correlation). We can observe that the \\(W_{2}\\)-barycenter performs best at a majority of grid-points, but also that the significance of this result is limited to some areas, in northern and eastern Europe, depending on the lead-time. There are also a few grid-points where the \\(L_{2}\\)-barycenter performs better than the ECMWF ensemble. Thus, as for the CRPSm (Fig. 7), a barycenter performs significantly better than the ECMWF ensemble in northern Europe, but the barycenter performing best for the CRPSp is the \\(W_{2}\\) rather than the \\(L_{2}\\) one.
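The McNemar test on these paired binary outcomes can be sketched as follows. The snippet is illustrative (the exact variant used in the paper is not specified): at a given grid point, each starting date yields a boolean "beats the climatology in terms of CRPS" for the two compared ensembles, and the exact test is based on the discordant pairs only.

```python
# Hedged sketch of an exact McNemar test on paired boolean skill series.
import numpy as np
from scipy.stats import binom

def mcnemar_exact(skill_a, skill_b):
    """Two-sided exact McNemar p-value for two paired boolean series
    (one entry per starting date)."""
    skill_a = np.asarray(skill_a, dtype=bool)
    skill_b = np.asarray(skill_b, dtype=bool)
    b = np.sum(skill_a & ~skill_b)   # A skillful, B not
    c = np.sum(~skill_a & skill_b)   # B skillful, A not
    n = b + c
    if n == 0:
        return 1.0                   # no discordant pairs: no evidence of a difference
    k = min(b, c)
    p = 2.0 * binom.cdf(k, n, 0.5)   # exact binomial tail on the discordant pairs
    return min(1.0, p)
```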
### Forecast attributes
So far, the choice of scores has put the focus on the accuracy of the forecasts, without distinguishing the different attributes responsible for that accuracy (Wilks, 2019). Moreover, the focus was on forecasting the whole distribution. However, in applications, one may be interested in predicting statistics about specific parts of the distribution, e.g. cold spells for the energy sector. To this end, we now focus on predicting the probability that the temperature will be in the lower tercile of the distribution compared to the climatology.
Figure 8: Best-model maps with respect to the proportion of skillful forecasts CRPSp of the 2m-temperature for different lead-times. The color of the pixels indicates which of the ensemble forecasts has the best score at this location. The crosses indicate grid-points for which the best model performs significantly better than the ECMWF ensemble at the 5% significance level (according to the McNemar test).
In particular, we want to investigate the impact of the weight \\(\\alpha\\) on different attributes of such forecasts. In order to do that, we use the Brier score and its decomposition into resolution and reliability.
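As a reference for how these attributes are obtained in practice, the sketch below shows one way to estimate the lower-tercile event probability from an ensemble and to compute the Brier score together with its Murphy decomposition into reliability, resolution and uncertainty. It is a hedged illustration: the binning of forecast probabilities and the variable names are assumptions, not the paper's implementation.

```python
# Illustrative Brier score and Murphy decomposition for the lower-tercile event.
import numpy as np

def tercile_probability(members, threshold):
    """Event probability estimated as the fraction of members below the
    lower-tercile threshold of the climatology."""
    return np.mean(np.asarray(members) < threshold, axis=0)

def brier_decomposition(prob, outcome, n_bins=11):
    """Returns (Brier score, reliability, resolution, uncertainty) so that
    approximately BS = reliability - resolution + uncertainty."""
    prob = np.asarray(prob, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    n = prob.size
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(prob, bins) - 1, 0, n_bins - 1)
    o_bar = outcome.mean()
    reliability = resolution = 0.0
    for k in range(n_bins):
        sel = idx == k
        n_k = sel.sum()
        if n_k == 0:
            continue
        p_k, o_k = prob[sel].mean(), outcome[sel].mean()
        reliability += n_k * (p_k - o_k) ** 2 / n
        resolution += n_k * (o_k - o_bar) ** 2 / n
    uncertainty = o_bar * (1.0 - o_bar)
    brier = np.mean((prob - outcome) ** 2)
    return brier, reliability, resolution, uncertainty
```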
Figure 9 shows the reliability and resolution for weeks 3 and 4 of the different ensembles: the ECMWF and NCEP ensembles, and the two barycenter ensembles for values of \\(\\alpha\\) varying from 0 (equal to NCEP) to 1 (equal to ECMWF). This allows us to trace the variation of the Brier score and of the forecasts' attributes as more weight is given to one or the other initial ensemble. For both lead-times, the ECMWF ensemble has a better accuracy (i.e. better Brier score) but also better reliability and resolution than the NCEP ensemble. While the starting and end points of the two barycenter curves for varying \\(\\alpha\\) are the same, the paths followed differ. For both barycenters, the progression of the two attributes is relatively smooth between \\(\\alpha=0\\) and \\(\\alpha=0.6\\), but becomes more nonlinear as more weight is given to the ECMWF ensemble. For \\(\\alpha<0.6\\), the \\(L_{2}\\)-barycenter has a better reliability than the \\(W_{2}\\)-barycenter for a given resolution. Conversely, the \\(W_{2}\\)-barycenter has a better resolution for a given reliability. However, in this range the overall Brier score is larger for both barycenters than for the ECMWF ensemble. On the contrary, for \\(\\alpha>0.6\\), the Brier scores of the barycenters become smaller than the ECMWF scores. This is mostly due to the better resolution of both barycenters for week 3, while both reliability and resolution are improved for week 4. Overall, for a given value of \\(\\alpha\\), the \\(L_{2}\\)-barycenter has a better Brier score and reliability and the \\(W_{2}\\)-barycenter has a better resolution (not shown here). It is interesting to note that the best reliability and the best resolution are not reached for the same value of \\(\\alpha\\) and that the optimal value of \\(\\alpha\\) depends on the choice of barycenter. It also does not necessarily correspond to the best Brier score.
## 5 Discussion
### Performance of the MME
We evaluated the two (barycenter-based) multi-model and the two single-model ensembles of 2m-temperature with respect to different metrics: the mean CRPS (CRPSm), the proportion of skillful forecasts (CRPSp), as well as their Brier score decomposition into resolution and reliability. In general, the multi-model ensembles
Figure 9: Reliability-resolution diagrams showing the evolution of the forecasts' attributes for the Brier score of the lower tercile of the temperature with respect to the barycentric weight \\(\\alpha\\) for (a) week 3 and (b) week 4. The optimal values of \\(\\alpha\\) with respect to the different attributes are indicated in the inserts. The dashed lines represent isolines of the Brier score, with largest values of the Brier score to the top left of the plots.
improve forecast skill compared to the single-model ones. On average, the \\(W_{2}\\)-barycenter has a significantly better CRPSp. It is also the best barycenter for this score at most of the locations where a barycenter performs significantly better than ECMWF. On the other hand, the \\(L_{2}\\)-barycenter has a better average CRPSm and tends to be the best barycenter at locations where the results are significant. In other words, the \\(L_{2}\\)-barycenter is better on average but the \\(W_{2}\\)-barycenter is better more often with respect to the CRPS. However, the best model also depends on the location. In particular, the ECMWF ensemble outperforms the other ensembles over (most of) the Iberian peninsula for both metrics. Regarding the prediction of the lowest tercile of the temperature distribution, it is also the \\(L_{2}\\)-barycenter that performs best in terms of accuracy (i.e. Brier score). The decomposition of the Brier score into reliability and resolution allows us to investigate how these attributes contribute to the accuracy of the ensemble forecasts. The \\(L_{2}\\)-barycenter has an overall better reliability than the \\(W_{2}\\)-barycenter, but the \\(W_{2}\\)-barycenter tends to have a better resolution.
There can be several reasons why merging the ECMWF ensemble with the less skillful NCEP ensemble leads to barycenters with improved performance. First, NCEP generally has less skill than ECMWF, but can occasionally be better for some locations, lead times or initialization dates. The barycenters can exploit this information to improve skill thanks to error cancellation and to the non-linearity of the skill metrics (Hagedorn et al., 2005). Second, the ECMWF ensemble may have good performance but be overconfident. In that case, adding a model with lower skill increases the spread of the ensemble and can move the ensemble mean towards the truth, as shown by Weigel et al. (2008) for seasonal ensemble forecasts. This can be seen from Equation (10) in the case of the \\(L_{2}\\)-barycenter's CRPS. This equation shows that the CRPS of the \\(L_{2}\\)-barycenter is composed of two parts: the weighted average of the CRPS of the single-model ensembles and the (weighted) CRPS between them. Thus, if one model has a worse CRPS than the other, it can still improve the CRPS of the barycenter if the CRPS between the models is large enough to compensate for its own CRPS.
An example of an interesting case for which NCEP performed best is the cold-wave event that occurred in February 2018 in France. Figure 3 shows the ECMWF and NCEP forecasts initialized on 2018-02-01 (ensembles 1 and 2 respectively in the legend) as well as the daily 2m-temperature according to MERRA-2 (reference in black). One can observe that ECMWF seems to miss the cold-wave in week 4. On the other hand, some members of NCEP do predict an important temperature decrease. The peak of the temperature drop is shifted by one or two days in NCEP but remains within the same week, which is the typical resolution at the sub-seasonal timescale. Moreover, the large spread of NCEP's members during that week translates well the large uncertainty of the forecast.
### Importance of the model's weights
Equal weighting of the models in the pooling method is the simplest and most commonly used approach. However, a few studies investigate the use of weighted multi-model ensembles, with divergent results (at the weather or seasonal scale in Weigel et al. (2008); Casanova and Ahrens (2009); Kharin and Zwiers (2002) and at the climate scale in Haughton et al. (2015)). At the sub-seasonal scale, Wanders and Wood (2016) used multi-variable linear regression on the ensemble means to derive the model weights. They show that the weighted multi-model ensemble has better deterministic but also better probabilistic performance (in terms of the Brier score) than the non-weighted multi-model one. In this paper, we also derive the weights from the models' past performances, but using different criteria: the CRPSm and CRPSp (both related to the CRPS). Thus, we optimize the weight taking into account the whole distribution instead of focusing on the ensemble mean. In agreement with Wanders and Wood (2016), we observe that the weights have a large impact on the performance of the barycenter ensembles. In fact, the superior skill of the barycenters was obtained for an optimal weighting learned from past data. This result does not hold for all values of the weight. The two barycenters do outperform the NCEP ensemble for all weights, but the ECMWF ensemble outperforms them for low values of the parameter \\(\\alpha\\) (equal to the weight on ECMWF). It is consistent that, due to the lower skill of the NCEP ensemble compared to the ECMWF ensemble, more weight should be put on the latter. This difference in performance is also reflected in the optimal model weights, with the optimal values of \\(\\alpha\\) for both barycenters and metrics being above 0.5.
The optimal weight depends on the barycenter method and on the metric. The optimal values of the weight are similar for both domain-averaged metrics for the \\(L_{2}\\)-barycenter (around 0.8), while they are more sensitive to the metric for the \\(W_{2}\\)-barycenter (around 0.9 for the CRPSm and 0.7 for the CRPSp). Moreover, in the case of the Brier score for the lowest tercile of the temperature and its decomposition, the reliability and the resolution do not have the same optimal value of the weight. Thus, the choice of the weight has to be made according to the targeted application.
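To make the weight-selection step concrete, the following sketch shows one possible cross-validated grid search over the weight for the L2-barycenter (i.e. weighted pooling), scored by the mean CRPS. The data layout, the leave-one-out split and the grid of candidate weights are assumptions made for illustration; they are not the exact protocol of the paper.

```python
# Hedged sketch of learning the barycentric weight alpha from past forecasts.
import numpy as np

def crps_weighted(members, weights, obs):
    """Kernel-form CRPS of a weighted discrete ensemble against a scalar obs."""
    members = np.asarray(members, float)
    weights = np.asarray(weights, float)
    t1 = np.sum(weights * np.abs(members - obs))
    t2 = 0.5 * np.sum(weights[:, None] * weights[None, :]
                      * np.abs(members[:, None] - members[None, :]))
    return t1 - t2

def l2_barycenter_crps(ens1, ens2, obs, alpha):
    """CRPS of the alpha-weighted pooled (L2-barycenter) ensemble for one case."""
    members = np.concatenate([ens1, ens2])
    weights = np.concatenate([np.full(len(ens1), alpha / len(ens1)),
                              np.full(len(ens2), (1 - alpha) / len(ens2))])
    return crps_weighted(members, weights, obs)

def select_alpha(cases, alphas=np.linspace(0.0, 1.0, 21)):
    """cases: list of (ens_ecmwf, ens_ncep, obs) tuples from past dates.
    Leave-one-out cross-validation: for each held-out case, pick the alpha
    that minimizes the mean CRPS on the training cases, then average."""
    chosen = []
    for i in range(len(cases)):
        train = [c for j, c in enumerate(cases) if j != i]
        scores = [np.mean([l2_barycenter_crps(e1, e2, y, a) for e1, e2, y in train])
                  for a in alphas]
        chosen.append(alphas[int(np.argmin(scores))])
    return float(np.mean(chosen))
```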
In this work, we assume that there is no best model and that the models are complementary as they sample different parts of the forecast uncertainty. Our aim is thus to find the best combination of models, and the weights represent the contribution of each model to this combination. This is different from the BMA method, which assumes that there is a best model but is uncertain about which one (Raftery et al., 2005). The BMA is a weighted average of distributions, similar to the \\(L_{2}\\)-barycenter, but its weights represent the probability of the associated model being the best one. However, it would be possible to combine our barycenter approach with a Bayesian framework with uncertainty on the weights (instead of on the models as in BMA). We could in principle assign priors to the weights of our barycenters and follow a Bayesian approach to derive the corresponding posterior distributions, instead of using (estimated) optimal deterministic weights. To our knowledge, however, such an approach has not been developed yet and it remains to be shown whether or not it would be feasible analytically, or at least tractable computationally.
## 6 Conclusion
We have explored methods to combine ensemble forecasts from multiple models based on barycenters of ensembles. Building on the recognition of the relevance of probabilistic forecasts for S2S prediction, we work directly in the probability distribution space. That is, the ensemble forecasts are manipulated as discrete probability distributions. This allows us to use existing tools from this space and in particular the notion of barycenter. The barycenter of distributions is the probability distribution that best represents the collection of input distributions (with respect to a given metric). The barycenter can thus be seen as the combination of these distributions and so can be used to build an MME. The barycenter is defined with respect to a metric (i.e. a distance between distributions). Here, we explore two barycenters based on different metrics: the \\(L_{2}\\)-distance and the Wasserstein (\\(W_{2}\\))-distance. We show that the \\(L_{2}\\)-barycenter is in fact equivalent to the well-known pooling method and compare it to the new \\(W_{2}\\)-barycenter based method.
This first application of this framework to S2S prediction is illustrated for the combination of two single-model ensembles to predict the winter surface temperature over Europe. We show that despite the superior skill of one of the single-model ensembles over the other, it is still advantageous to combine them into a barycenter. This reconfirms the interest of multi-model methods shown by previous studies. However, the comparison of the two barycenter-based MMEs does not clearly single out a better method. The best method depends on the chosen metric, with the \\(L_{2}\\)-barycenter generally performing better with respect to the mean Continuous Ranked Probability Score (CRPSm), and the \\(W_{2}\\)-barycenter with respect to the proportion of skillful forecasts (CRPSp).
Moreover, we also highlight the importance of weighting the models within the MME. The models' weights have a significant impact on both MMEs' performance. They are particularly important in our case study, where the single-model ensembles have contrasting skill. In order to optimize the performance of the MMEs, we learn the weights from past forecasts (using cross-validation here). The weights are selected so as to maximize the MME's skill with respect to the chosen metrics: the CRPSm and CRPSp. This approach can easily be extended to other metrics.
This study is a proof of concept to develop the framework and investigate the properties of the barycenter-based MMEs. These results constitute a promising first step towards improving S2S predictions using barycenters to merge ensemble forecasts. A next step would be to implement the barycenter-based MMEs for the combination of more than two models with weights estimated from the data.
## Acknowledgment
The authors thank Naveen Goutham for code and advice on forecast calibration. The authors thank the institut de Mathematiques pour la Planete Terre for (partial) funding (iMPT 2021). This research was produced within the framework of Energy4Climate Interdisciplinary Center (E4C) of IP Paris and Ecole des Ponts ParisTech. This research was supported by 3rd Programme d'Investissements d'Avenir [ANR-18-EUR-0006-02].
## References
* Bertossa et al. (2023) Bertossa, C., Hitchcock, P., DeGaetano, A., and Plougonven, R. (2023). Coherent Bimodal Events in Ensemble Forecasts of 2-m Temperature. _Weather and Forecasting_.
* Bueler et al. (2020) Bueler, D., Beerli, R., Wernli, H., and Grams, C. M. (2020). Stratospheric influence on ECMWF sub-seasonal forecast skill for energy-industry-relevant surface weather in European countries. _Quarterly Journal of the Royal Meteorological Society_, 146(733):3675-3694.
* Ferrone et al. (2017) Ferrone, A., Mastrangelo, D., and Malguzzi, P. (2017). Multimodel probabilistic prediction of 2 m-temperature anomalies on the monthly timescale. _Advances in Science and Research_, 14:123-129.
* Gonzalez et al. (2021) Gonzalez, P. L. M., Brayshaw, D. J., and Ziel, F. (2021). A new approach to extended-range multi-model forecasting: Sequential learning algorithms. _Quarterly Journal of the Royal Meteorological Society_, 147(741):4269-4282.
* Hagedorn et al. (2012) Hagedorn, R., Buizza, R., Hamill, T. M., Leutbecher, M., and Palmer, T. N. (2012). Comparing TIGGE multimodel forecasts with reforecast-calibrated ECMWF ensemble forecasts. _Quarterly Journal of the Royal Meteorological Society_, 138(668):1814-1827.
* Hagedorn et al. (2005) Hagedorn, R., Doblas-Reyes, F. J., and Palmer, T. (2005). The rationale behind the success of multi-model ensembles in seasonal forecasting -- I. Basic concept. _Tellus A: Dynamic Meteorology and Oceanography_, 57(3):219-233.
* Haughton et al. (2015) Haughton, N., Abramowitz, G., Pitman, A., and Phipps, S. J. (2015). Weighting climate model ensembles for mean and variance estimates. _Climate Dynamics_, 45(11):3169-3181.
* Heizenreder et al. (2006) Heizenreder, D., Trepte, S., and Denhard, M. (2006). SRNWP-PEPS: A regional multi-model ensemble in Europe. _The European Forecaster: Newsletter of the WGCEF_, 11.
* Hersbach et al. (2020) Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horanyi, A., Munoz-Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G., Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G., Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J., Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S., Hogan, R. J., Holm, E., Janiskova, M., Keeley, S., Laloyaux, P., Lopez, P., Lupu, C., Radnoti, G., de Rosnay, P., Rozum, I., Vamborg, F., Villaume, S., and Thepaut, J.-N. (2020). The ERA5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_, 146(730):1999-2049.
* Kalnay (2003) Kalnay, E. (2003). _Atmospheric modeling, data assimilation and predictability_. Cambridge University Press.
* Karpechko et al. (2018) Karpechko, A. Y., Charlton-Perez, A., Balmaseda, M., Tyrrell, N., and Vitart, F. (2018). Predicting Sudden Stratospheric Warming 2018 and Its Climate Impacts With a Multimodel Ensemble. _Geophysical Research Letters_, 45(24):13,538-13,546.
* Manzanas et al. (2019) Manzanas, R., Gutierrez, J. M., Bhend, J., Hemri, S., Doblas-Reyes, F. J., Torralba, V., Penabad, E., and Brookshaw, A. (2019). Bias adjustment and ensemble recalibration methods for seasonal forecasting: a comprehensive intercomparison using the C3S dataset. _Climate Dynamics_, 53(3):1287-1305.
* Matheson and Winkler (1976) Matheson, J. E. and Winkler, R. L. (1976). Scoring Rules for Continuous Probability Distributions. _Management Science_, 22(10):1087-1096.
* McNemar (1947) McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percentages. _Psychometrika_, 12(2):153-157.
* Monhart et al. (2018) Monhart, S., Spirig, C., Bhend, J., Bogner, K., Schar, C., and Liniger, M. A. (2018). Skill of Subseasonal Forecasts in Europe: Effect of Bias Correction and Downscaling Using Surface Observations. _Journal of Geophysical Research: Atmospheres_, 123(15):7999-8016.
* Ning et al. (2014) Ning, L., Carli, F. P., Ebtehaj, A. M., Foufoula-Georgiou, E., and Georgiou, T. T. (2014). Coping with model error in variational data assimilation using optimal mass transport. _Water Resources Research_, 50(7):5817-5830.
* Papayiannis et al. (2018) Papayiannis, G. I., Galanis, G. N., and Yannacopoulos, A. N. (2018). Model aggregation using optimal transport and applications in wind speed forecasting. _Environmetrics_, 29(8):e2531.
* Peyre and Cuturi (2020) Peyre, G. and Cuturi, M. (2020). Computational Optimal Transport.
* Robin et al. (2019) Robin, Y., Vrac, M., Naveau, P., and Yiou, P. (2019). Multivariate stochastic bias corrections with optimal transport. _Hydrology and Earth System Sciences_, 23(2):773-786.
* Robin et al. (2017) Robin, Y., Yiou, P., and Naveau, P. (2017). Detecting changes in forced climate attractors with Wasserstein distance. _Nonlinear Processes in Geophysics_, 24(3):393-405.
* Santambrogio (2015) Santambrogio, F. (2015). Progress in nonlinear differential equations and their applications. In _Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling_, volume 87. Birkhauser Cham.
* Schulzweida (2022) Schulzweida, U. (2022). CDO User Guide.
* Smith et al. (2013) Smith, D. M., Scaife, A. A., Boer, G. J., Caian, M., Doblas-Reyes, F. J., Guemas, V., Hawkins, E., Hazeleger, W., Hermanson, L., Ho, C. K., Ishii, M., Kharin, V., Kimoto, M., Kirtman, B., Lean, J., Matei, D., Merryfield, W. J., Muller, W. A., Pohlmann, H., Rosati, A., Wouters, B., and Wyser, K. (2013). Real-time multi-model decadal climate predictions. _Climate Dynamics_, 41(11):2875-2888.
* Specq et al. (2020) Specq, D., Batte, L., Deque, M., and Ardilouze, C. (2020). Multimodel Forecasting of Precipitation at Subseasonal Timescales Over the Southwest Tropical Pacific. _Earth and Space Science_, 7(9):e2019EA001003.
* Forecast System Design, Configuration, and Complexity. In Robertson, A. W. and Vitart, F., editors, _Sub-Seasonal to Seasonal Prediction_, pages 245-259. Elsevier.
* Villani (2003) Villani, C. (2003). _Topics in Optimal Transportation_. American Mathematical Society.
* Vissio et al. (2020) Vissio, G., Lembo, V., Lucarini, V., and Ghil, M. (2020). Evaluating the Performance of Climate Models Based on Wasserstein Distance. _Geophysical Research Letters_, 47(21):e2020GL089385.
* Vissio and Lucarini (2018) Vissio, G. and Lucarini, V. (2018). Evaluating a stochastic parametrization for a fast-slow system using the Wasserstein distance. _Nonlinear Processes in Geophysics_, 25(2):413-427.
* Vitart et al. (2017) Vitart, F., Ardilouze, C., Bonet, A., Brookshaw, A., Chen, M., Codorean, C., Deque, M., Ferranti, L., Fucile, E., Fuentes, M., Hendon, H., Hodgson, J., Kang, H.-S., Kumar, A., Lin, H., Liu, G., Liu, X., Malguzzi, P., Mallas, I., Manoussakis, M., Mastrangelo, D., MacLachlan, C., McLean, P., Minami, A., Mladek, R., Nakazawa, T., Najm, S., Nie, Y., Rixen, M., Robertson, A. W., Ruti, P., Sun, C., Takaya, Y., Tolstykh, M., Venuti, F., Waliser, D., Woolnough, S., Wu, T., Won, D.-J., Xiao, H., Zaripov, R., and Zhang, L. (2017). The Subseasonal to Seasonal (S2S) Prediction Project Database. _Bulletin of the American Meteorological Society_, 98(1):163 - 173.
* Wanders and Wood (2016) Wanders, N. and Wood, E. F. (2016). Improved sub-seasonal meteorological forecast skill using weighted multi-model ensemble simulations. _Environmental Research Letters_, 11(9):094007.
* Wang et al. (2020) Wang, Y., Ren, H.-L., Zhou, F., Fu, J.-X., Chen, Q.-L., Wu, J., Jie, W.-H., and Zhang, P.-Q. (2020). Multi-Model Ensemble Sub-Seasonal Forecasting of Precipitation over the Maritime Continent in Boreal Summer. _Atmosphere_, 11(5).
* Weigel et al. (2008) Weigel, A. P., Liniger, M. A., and Appenzeller, C. (2008). Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts? _Quarterly Journal of the Royal Meteorological Society_, 134(630):241-260.
* Wilcoxon (1945) Wilcoxon, F. (1945). Individual Comparisons by Ranking Methods. _Biometrics Bulletin_, 1(6):80-83.
* Wilks (2019) Wilks, D. S. (2019). Forecast Verification. In Wilks, D. S., editor, _Statistical Methods in the Atmospheric Sciences (Fourth Edition)_, pages 369-483. Elsevier, fourth edition.
* Zheng et al. (2019) Zheng, C., Chang, E. K.-M., Kim, H., Zhang, M., and Wang, W. (2019). Subseasonal to Seasonal Prediction of Wintertime Northern Hemisphere Extratropical Cyclone Activity by S2S and NMME Models. _Journal of Geophysical Research: Atmospheres_, 124(22):12057-12077.
## Appendix A Wasserstein distance as a measure of performance
Let us consider an ensemble forecast with \\(N\\) members and the corresponding deterministic observation, both written as discrete probability distributions on the space \\(\\Omega=\\mathbb{R}^{n_{t}}\\), respectively \\(\\mu\\) and \\(\\nu\\):
\\[\\mu=\\sum_{i=1}^{N}a_{i}\\delta_{\\mathbf{x}_{i}}\\quad\\text{and}\\quad\\nu=\\delta_{\\mathbf{y}} \\tag{11}\\]
with \\(X=(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{N})\\in\\Omega^{N}\\) being the ensemble members, \\(\\mathbf{a}=(a_{1},\\ldots,a_{N})\\in\\Sigma_{N}\\) their weights, and \\(\\mathbf{y}\\) the observed time-series.
In that case, there is only one feasible transport matrix in \\(U(\\mathbf{a},\\mathbf{b})\\): \\(\\mathbf{T}=\\mathbf{a}\\). That is, the mass of the different members all go to the observation. The equation for the squared 2-Wasserstein distance becomes
\\[W_{2}^{2}(\\mu,\
u) =\\min_{T\\in U(\\mathbf{a},\\mathbf{b})}\\sum_{i=1}^{N}t_{i,j}.\\| \\mathbf{x}_{i}-\\mathbf{y}\\|^{2}=\\sum_{i=1}^{N}a_{i}.\\sum_{k=1}^{nt}\\left(x_{i,k} -y_{k}\\right)^{2}\\] \\[W_{2}^{2}(\\mu,\
u) =\\sqrt{\\sum_{i=1}^{N}\\sum_{k=1}^{nt}a_{i}.\\left(x_{i,k}-y_{k} \\right)^{2}}\\]
Thus, the \\(W_{2}\\)-distance is equal to the RMSE over the time-steps and the ensemble members (up to a multiplicative factor \\(1/\\sqrt{n_{t}}\\)). It does not take into account the information on the forecast uncertainty carried by the ensemble spread.
## Appendix B Energy distance and its associated barycenter
Let \\(\\mu_{1}\\) and \\(\\mu_{2}\\) be two distributions, and \\(F_{1}\\) and \\(F_{2}\\) be their CDF. That is, \\(\\forall x\\in\\mathbb{R}\\)
\\[F_{1}(x)=\\int_{-\\infty}^{x}\\mu_{1}(t)dt\\quad\\text{and}\\quad F_{2}(x)=\\int_{- \\infty}^{x}\\mu_{2}(t)dt\\]
The squared energy distance between \\(\\mu_{1}\\) and \\(\\mu_{2}\\) is
\\[\\mathcal{E}^{2}(\\mu_{1},\\mu_{2}) =\\int_{-\\infty}^{+\\infty}\\left[F_{1}(x)-F_{2}(x)\\right]^{2}dx\\] \\[=\\int_{-\\infty}^{+\\infty}\\left[\\int_{-\\infty}^{x}\\mu_{1}(t)dt- \\int_{-\\infty}^{x}\\mu_{2}(t)dt\\right]^{2}dx\\]
The energy barycenter of \\(\\mu_{1}\\) and \\(\\mu_{2}\\) is the solution of the following minimisation problem
\\[\\mu_{\\mathcal{E}}^{\\alpha}=\\underset{\\mu}{\\arg\\min}\\underbrace{\\alpha. \\mathcal{E}^{2}(\\mu,\\mu_{1})+(1-\\alpha).\\mathcal{E}^{2}(\\mu,\\mu_{2})}\\]
We have:
\\[\\frac{d}{d\\mu}\\mathcal{B}(\\mu) =\\frac{d}{d\\mu}\\left[\\alpha\\int_{-\\infty}^{+\\infty}\\left[F(x)-F_{ 1}(x)\\right]^{2}dx+(1-\\alpha)\\int_{-\\infty}^{+\\infty}\\left[F(x)-F_{2}(x) \\right]^{2}dx\\right]\\] \\[=\\frac{d}{d\\mu}\\int_{-\\infty}^{+\\infty}\\left(\\alpha\\left[\\int_{- \\infty}^{x}\\mu(t)dt-\\int_{-\\infty}^{x}\\mu_{1}(t)dt\\right]^{2}+(1-\\alpha)\\left[ \\int_{-\\infty}^{x}\\mu(t)dt-\\int_{-\\infty}^{x}\\mu_{2}(t)dt\\right]^{2}\\right)dx\\] \\[=\\int_{-\\infty}^{+\\infty}\\frac{d}{d\\mu}\\left(\\alpha\\left[\\int_{- \\infty}^{x}\\left(\\mu(t)-\\mu_{1}(t)\\right)dt\\right]^{2}+(1-\\alpha)\\left[\\int_{ -\\infty}^{x}\\left(\\mu(t)-\\mu_{2}(t)\\right)dt\\right]^{2}\\right)dx\\] \\[=\\int_{-\\infty}^{+\\infty}\\left(2\\alpha\\int_{-\\infty}^{x}\\left[\\mu (t)-\\mu_{1}(t)\\right]dt\\mu(x)\\right)dx+\\int_{-\\infty}^{+\\infty}\\left(2(1- \\alpha)\\int_{-\\infty}^{x}\\left[\\mu(t)-\\mu_{2}(t)\\right]dt\\mu(x)\\right)dx\\] \\[=2\\int_{-\\infty}^{+\\infty}\\left(\\int_{-\\infty}^{x}\\left(\\mu(t)- \\left[\\alpha\\mu_{1}(t)+(1-\\alpha)\\mu_{2}(t)\\right]\\right)dt\\right)\\mu(x)dx\\]
Thus,
\\[\\frac{d}{d\\mu}\\mathcal{B}(\\mu_{L_{2}}^{\\alpha})=0\\]
where \\(\\mu_{L_{2}}^{\\alpha}=\\alpha\\mu_{1}+(1-\\alpha)\\mu_{2}\\) is the \\(L_{2}\\) barycenter. The \\(L_{2}\\) barycenter is also a barycenter for the energy distance.
## Appendix C CRPS and \\(L_{2}\\)-barycenter
Let \\(\\mu_{1}\\) and \\(\\mu_{2}\\) be two distributions corresponding to two ensemble forecasts, and \\(\\mu_{L_{2}}^{\\alpha}\\) be their \\(L_{2}\\)-barycenter. The CRPS of the barycenter \\(\\mu_{L_{2}}^{\\alpha}\\) with respect to the observation \\(\
u_{obs}\\) can be computed as follow
\\[\\text{CRPS}(\\mu_{L2},\
u_{obs})=\\alpha\\text{CRPS}(\\mu_{1},\
u_{obs})+(1-\\alpha )\\text{CRPS}(\\mu_{2},\
u_{obs})-\\alpha(1-\\alpha)\\text{CRPS}(\\mu_{1},\\mu_{2})\\]
**Proof:** The CRPS can be applied to any pair of distributions \\(\\mu_{1}\\) and \\(\\mu_{2}\\). Let us consider two discrete probability distributions on the space \\(\\Omega=\\mathbb{R}^{n_{t}}\\) such that
\\[\\mu_{1}=\\sum_{i=1}^{N_{1}}a_{i}\\delta_{\\mathbf{x}_{i}}\\quad\\text{and}\\quad\\mu _{2}=\\sum_{j=1}^{N_{2}}b_{j}\\delta_{\\mathbf{y}_{j}}\\]with \\(X=(\\mathbf{x}_{1},\\ldots,\\mathbf{x}_{N_{1}})\\in\\Omega^{N_{1}}\\), \\(Y=(\\mathbf{y}_{1},\\ldots,\\mathbf{y}_{N_{2}})\\in\\Omega^{N_{2}}\\), and where \\(\\mathbf{a}=(a_{1},\\ldots,a_{N_{1}})\\in\\Sigma_{N_{1}}\\) and \\(\\mathbf{b}=(b_{1},\\ldots,b_{N_{2}})\\in\\Sigma_{N_{2}}\\) are probability vectors.
If we substitute the expressions for \\(\\mu_{1}\\) and \\(\\mu_{2}\\) in the formula of the CRPS from Eq. 7, we obtain
\\[\\text{CRPS}(\\mu_{1},\\mu_{2}) =\\int_{-\\infty}^{+\\infty}\\left[F_{\\mu_{1}}(u)-F_{\\mu_{2}}(u) \\right]^{2}du\\] \\[=\\int_{-\\infty}^{+\\infty}\\left[\\sum_{i=1}^{N_{1}}a_{i}I(x_{i} \\leq u)-\\sum_{j=1}^{N_{2}}b_{j}I(y_{j}\\leq u)\\right]^{2}du\\] \\[=\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{2}}a_{i}b_{j}|x_{i}-y_{j}|-\\sum _{i=1}^{N-1}\\sum_{k=i+1}^{N_{1}}a_{i}a_{k}|x_{i}-x_{k}|-\\sum_{j=1}^{N_{2}-1} \\sum_{l=j+1}^{N_{2}}b_{j}b_{l}|y_{j}-y_{l}| \\tag{12}\\]
Then, we substitute \\(\\nu_{obs}=\\sum_{k=1}^{N_{0}}d_{k}\\delta_{\\mathbf{y}_{k}}\\) and \\(\\mu_{L2}=\\sum_{k=1}^{N_{1}+N_{2}}c_{k}\\delta_{\\mathbf{x}_{L2\\,k}}=\\alpha\\sum_{i=1}^{N_{1}}a_{i}\\delta_{\\mathbf{x}_{1\\,i}}+(1-\\alpha)\\sum_{j=1}^{N_{2}}b_{j}\\delta_{\\mathbf{x}_{2\\,j}}\\) in the formula of the CRPS from Eq. 12.
\\[\\text{CRPS}(\\mu_{L2},\
u_{obs}) =\\sum_{i=1}^{N_{1}+N_{2}}\\sum_{j=1}^{N_{0}}c_{i}d_{j}|x_{L2i}-y_{ j}|-\\sum_{i=1}^{N_{1}+N_{2}-1}\\sum_{j=i+1}^{N_{1}+N_{2}}c_{i}c_{j}|x_{L2i}-x_{L2j} |-\\sum_{i=1}^{N_{0}-1}\\sum_{j=i+1}^{N_{0}}d_{i}d_{j}|y_{i}-y_{j}|\\] \\[=\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{0}}\\alpha a_{i}d_{j}|x_{1\\,i}-y _{j}|+\\sum_{i=1}^{N_{2}}\\sum_{j=1}^{N_{0}}(1-\\alpha)b_{i}d_{j}|x_{2\\,i}-y_{j}|\\] \\[\\quad-\\sum_{i=1}^{N_{1}-1}\\sum_{j=i+1}^{N_{1}}\\alpha^{2}a_{i}a_{ j}|x_{1\\,i}-x_{1j}|-\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{2}}\\alpha(1-\\alpha)a_{i}b_{j} |x_{1\\,i}-x_{2\\,j}|\\] \\[\\quad-\\sum_{i=1}^{N_{2}-1}\\sum_{j=i+1}^{N_{2}}(1-\\alpha)^{2}b_{i} b_{j}|x_{2\\,i}-x_{2\\,j}|-\\sum_{i=1}^{N_{0}-1}\\sum_{j=i+1}^{N_{0}}d_{i}d_{j}|y_{i }-y_{j}|\\] \\[=\\alpha\\left[\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{0}}a_{i}d_{j}|x_{1 \\,i}-y_{j}|-\\sum_{i=1}^{N_{1}-1}\\sum_{j=i+1}^{N_{1}}a_{i}a_{j}|x_{1\\,i}-x_{1\\, j}|-\\sum_{i=1}^{N_{0}-1}\\sum_{j=i+1}^{N_{0}}d_{i}d_{j}|y_{i}-y_{j}|\\right]\\] \\[\\quad+(1-\\alpha)\\left[\\sum_{i=1}^{N_{2}}\\sum_{j=1}^{N_{0}}b_{i}d_{ j}|x_{2\\,i}-y_{j}|-\\sum_{i=1}^{N_{2}-1}\\sum_{j=i+1}^{N_{2}}b_{i}b_{j}|x_{2\\,i}-x_{ 2\\,j}|-\\sum_{i=1}^{N_{0}-1}\\sum_{j=i+1}^{N_{0}}d_{i}d_{j}|y_{i}-y_{j}|\\right]\\] \\[\\quad+\\alpha(1-\\alpha)\\sum_{i=1}^{N_{1}-1}\\sum_{j=i+1}^{N_{1}}a_{ i}a_{j}|x_{1\\,i}-x_{1\\,j}|+(1-\\alpha)\\alpha\\sum_{i=1}^{N_{2}-1}\\sum_{j=i+1}^{N_{2}}b_{ i}b_{j}|x_{2\\,i}-x_{2\\,j}|\\] \\[\\quad-\\alpha(1-\\alpha)\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{2}}a_{i}b_{ j}|x_{1\\,i}-x_{2\\,j}|\\] \\[=\\alpha\\text{CRPS}(\\mu_{1},\
u_{obs})+(1-\\alpha)\\text{CRPS}(\\mu_{2 },\
u_{obs})\\] \\[\\quad-\\alpha(1-\\alpha)\\left[\\sum_{i=1}^{N_{1}}\\sum_{j=1}^{N_{2}}a_{ i}b_{j}|x_{1\\,i}-x_{2\\,j}|-\\sum_{i=1}^{N_{1}-1}\\sum_{j=i+1}^{N_{1}}a_{i}a_{j}|x_{1\\,i}-x_{1\\, j}|-\\sum_{i=1}^{N_{2}-1}\\sum_{j=i+1}^{N_{2}}b_{i}b_{j}|x_{2\\,i}-x_{2\\,j}|\\right]\\] \\[=\\alpha\\text{CRPS}(\\mu_{1},\
u_{obs})+(1-\\alpha)\\text{CRPS}(\\mu_{2 },\
u_{obs})-\\alpha.(1-\\alpha)\\text{CRPS}(\\mu_{1},\\mu_{2})\\]
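The identity can also be verified numerically in a few lines. The snippet below is an illustrative check (the random ensembles and sizes are arbitrary): it implements the CRPS between two weighted discrete distributions in the form of Eq. 12 and confirms that the CRPS of the pooled (L2-barycenter) ensemble decomposes as stated.

```python
# Numerical check of the CRPS mixture identity for the L2-barycenter.
import numpy as np

def crps(x, wx, y, wy):
    """CRPS between two weighted discrete distributions (Eq. 12 form)."""
    x, wx, y, wy = map(np.asarray, (x, wx, y, wy))
    cross = np.sum(wx[:, None] * wy[None, :] * np.abs(x[:, None] - y[None, :]))
    self_x = 0.5 * np.sum(wx[:, None] * wx[None, :] * np.abs(x[:, None] - x[None, :]))
    self_y = 0.5 * np.sum(wy[:, None] * wy[None, :] * np.abs(y[:, None] - y[None, :]))
    return cross - self_x - self_y

rng = np.random.default_rng(1)
x1, x2, obs = rng.normal(size=8), rng.normal(1.0, 1.0, size=12), np.array([0.3])
w1, w2 = np.full(8, 1 / 8), np.full(12, 1 / 12)
alpha = 0.7

lhs = crps(np.concatenate([x1, x2]),
           np.concatenate([alpha * w1, (1 - alpha) * w2]),
           obs, np.array([1.0]))
rhs = (alpha * crps(x1, w1, obs, np.array([1.0]))
       + (1 - alpha) * crps(x2, w2, obs, np.array([1.0]))
       - alpha * (1 - alpha) * crps(x1, w1, x2, w2))
print(np.isclose(lhs, rhs))  # True
```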
\\begin{table}
\\begin{tabular}{l|c|c|c|c|c} & Week 3 & Week 4 & Week 5 & Week 6 \\\\ \\hline \\hline ECMWF - NCEP & 3.2e-08 & 3.6e-06 & 6.3e-04 & 8.1e-09 \\\\ W2 - NCEP & 2.0e-10 & 2.4e-08 & 1.3e-05 & 1.4e-10 \\\\ L2 - NCEP & 6.4e-12 & 9.9e-10 & 8.4e-07 & 3.9e-12 \\\\ W2 - ECMWF & 4.5e-01 & 8.5e-02 & 1.0e-02 & 8.6e-01 \\\\ L2 - ECMWF & 4.9e-01 & 9.6e-02 & 4.6e-03 & 7.5e-01 \\\\ W2 - L2 & 1.8e-01 & 7.7e-02 & 5.0e-05 & 3.1e-02 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Two-sided Wilcoxon test's p-value for the spatial CRPSm. The green color indicates the weeks at which the two models are significantly different at the 5% significance level (i.e. p-values \\(<0.05\\)).
\\begin{table}
\\begin{tabular}{l|c|c|c|c} & Week 3 & Week 4 & Week 5 & Week 6 \\\\ \\hline \\hline ECMWF - NCEP & 1.4e-08 & 2.0e-04 & 3.7e-04 & 8.5e-08 \\\\ W2 - NCEP & 1.1e-16 & 2.1e-12 & 1.3e-12 & 1.1e-14 \\\\ L2 - NCEP & 2.0e-13 & 9.3e-08 & 1.6e-08 & 2.2e-11 \\\\ W2 - ECMWF & 1.5e-02 & 2.8e-04 & 2.7e-03 & 4.9e-02 \\\\ L2 - ECMWF & 2.1e-01 & 1.5e-02 & 1.0e-01 & 6.5e-01 \\\\ W2 - L2 & 3.6e-04 & 1.1e-06 & 2.5e-06 & 5.7e-06 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Two-sided Wilcoxon test's p-value for the spatial CRPSp. The green color indicates the weeks at which the two models are significantly different at the 5% significance level (i.e. p-values \\(<0.05\\)).
isprs/b52b3497_02c9_4060_9d9d_c9dd0475d099.md | **ORTHORECTIFICATION OF MONOSCOPIC BILSAT IMAGES BY A NEW DIFFERENTIAL IMAGE RECTIFICATION METHOD**
# Introduction
Global Positioning System (GPS) and star tracker on board Bilsat are used to find the position and attitude (orientation) of the CCD images to be rectified. Orthorectification procedures are usually accomplished by projecting the image to the flat earth surface, where the earth curvature can be corrected explicitly [2]. However, the new method projects the images directly on to the ellipsoid and corrects the relief displacements from height information extracted from a Digital Elevation Model (DEM) taking into account atmospheric refraction as well[7].
## 2 Application of the Method to BILSAT Images
The parameters required for the orthorectification method, i.e. position and attitude, are obtained from the Bilsat telemetry file of the corresponding epoch, while the atmospheric parameters are obtained from the meteorological stations of the region. Additionally, the SRTM DEM [1] with 3 arc second spacing is used for relief displacement corrections.
Differential image rectification methods rectify the image pixel by pixel. For this reason, rectification of the 2048 x 2048 pixel Bilsat images is computationally very demanding. The resection algorithm is illustrated step by step by the flowchart in Figure 1.
The collinearity equation between the camera center, CCD array and the corresponding ground point can be written as;
\\[\\begin{bmatrix}X_{0}\\\\ Y_{0}\\\\ Z_{0}\\end{bmatrix}=\\begin{bmatrix}X_{c}\\\\ Y_{c}\\\\ Z_{c}\\end{bmatrix}+s\\begin{bmatrix}\\mathbf{r}_{\\mathbf{x}}\\\\ \\mathbf{r}_{\\mathbf{y}}\\\\ \\mathbf{r}_{\\mathbf{z}}\\end{bmatrix} \\tag{1}\\]
In this equation X\\({}_{0}\\), Y\\({}_{0}\\) and Z\\({}_{0}\\) are the coordinates of the corresponding point on the earth surface. X\\({}_{\\text{c}}\\), Y\\({}_{\\text{c}}\\) and Z\\({}_{\\text{c}}\\) are the coordinates of the camera. s is the distance between the ground point and the camera. \\(\\mathbf{r}\\) is the direction vector between camera center, corresponding sensor and the ground point. Both the position and the direction vectors are measured in earth fixed reference frame. To represent the direction vector in earth fixed reference frame consecutive rotations are needed.
The first step is to transform each pixel from image coordinates to camera coordinates. The unit of the pixels is converted from pixel value to mm by using the size of a single sensor element in the CCD array and the coordinates of the principal point of the CCD. The next step is the application of lens distortion corrections to the pixel positions by using the lens distortion parameters of the onboard camera of Bilsat. The lens distortion effect is modeled by 4 parameters: 2 for the radial lens distortion and 2 for the tangential (asymmetric) lens distortion.
Figure 1: Flowchart for image rectification and mapping with transformations T1-T5.
By using the corrected pixel coordinates the direction vector (unit vector) \\(\\mathbf{r}\\), of each pixel is computed with respect to camera coordinate system, \\(S_{C}\\).
\\[\\mathbf{r}^{C}=\\left[x,y,z\\right]/\\sqrt{x^{2}+y^{2}+z^{2}} \\tag{2}\\]
where x, y and z are the corrected pixel coordinates in photo coordinate system in mm.
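A compact sketch of this pixel-to-direction step is given below. It is illustrative only: the pixel pitch, principal point, focal length and distortion coefficients are placeholder values, and a Brown-type distortion model and a camera axis along -z are assumptions, since the exact Bilsat calibration model is not reproduced here.

```python
# Illustrative sketch: pixel indices -> metric photo coordinates ->
# distortion correction -> unit direction vector in the camera frame (Eq. 2).
import numpy as np

pixel_pitch = 15e-3          # mm per pixel (assumed)
cx, cy = 1024.0, 1024.0      # principal point in pixels (assumed)
focal = 150.0                # focal length in mm (assumed)
k1, k2 = -1e-8, 0.0          # radial distortion coefficients (assumed)
p1, p2 = 0.0, 0.0            # tangential (decentering) coefficients (assumed)

def pixel_to_direction(col, row):
    # T1: pixel indices to metric photo coordinates about the principal point
    x = (col - cx) * pixel_pitch
    y = (row - cy) * pixel_pitch
    # T2: radial + tangential distortion correction (Brown model, assumed)
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    xc, yc = x - dx, y - dy
    # Unit direction vector in the camera frame; camera assumed to look along -z
    r = np.array([xc, yc, -focal])
    return r / np.linalg.norm(r)

print(pixel_to_direction(0, 0))
```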
The next step is to transform the direction vector \\(\\mathbf{r}^{C}\\), from camera coordinate system to the Earth Centered Earth Fixed reference frame, \\(S_{E}\\) :
\\[\\mathbf{r}=\\mathbf{R}_{C}^{E}\\mathbf{r} \\tag{3}\\]
where \\(\\mathbf{R}_{C}^{E}\\) consists of following consecutive rotation matrixes [3]
\\[\\mathbf{R}_{C}^{E}=\\mathbf{R}_{I}^{E}\\mathbf{R}_{O}^{I}\\mathbf{R}_{B}^{O} \\mathbf{R}_{C}^{B}. \\tag{4}\\]
In Eq. (4) \\(\\mathbf{R}_{C}^{B}\\) is the fixed rotation matrix from the camera coordinate system to the body-fixed reference system. \\(\\mathbf{R}_{B}^{O}\\), the rotation matrix from the body-fixed reference frame to the orbital reference frame, is calculated in terms of quaternions on the basis of the observed star coordinates delivered by the high precision star tracker onboard the satellite (Strikwerda and Junkins [4]). \\(\\mathbf{R}_{O}^{I}\\) is the rotation matrix from the orbital coordinate system to the inertial reference system, formed by using the position and velocity vector of the satellite, while \\(\\mathbf{R}_{I}^{E}\\) gives the rotation matrix from the inertial reference system to the earth fixed reference system. During these two rotations precession, nutation and polar motion are taken into account.
The next step is to compute the coordinates of the intersection points of the direction vectors with the earth surface. To achieve this, the collinearity equation and the equation of a point on the ellipsoidal surface are used so that the direct projection of the image coordinates onto the surface of the ellipsoid can be accomplished. The equation of an ellipsoid of revolution is:
\\[\\frac{X_{0}^{2}+Y_{0}^{2}}{a^{2}}+\\frac{Z_{0}^{2}}{b^{2}}=1 \\tag{5}\\]
where a and b are the semimajor and semiminor axes and \\(X_{o}\\), \\(Y_{o}\\) and \\(Z_{o}\\) are the Cartesian coordinates of the point on the ellipsoidal surface. The WGS 84 reference ellipsoid is used as the Earth model. After substitution of Eq. (5) into Eq. (1), the following collinearity equation is obtained:
\\[\\left[\\frac{S_{Ex}^{2}+S_{Ey}^{2}}{a^{2}}+\\frac{S_{Ez}^{2}}{b^{2} }\\right]*s^{2}+2*\\left[\\frac{S_{Ex}*X_{cam}+S_{Ey}*Y_{cam}}{a^{2}}+\\frac{S_{Ez }*Z_{cam}}{b^{2}}\\right]*s \\tag{6}\\] \\[+\\frac{X_{cam}^{2}+Y_{cam}^{2}}{a^{2}}+\\frac{Z_{cam}^{2}}{b^{2}} -1=0\\]Eq (6) is a quadratic equation, for this reason two solutions for s are available. The smaller root is the correct solution because the larger s will give the distance between the camera and the other side of the ellipsoid. Substituting smaller s in Eq (1) gives the Cartesian coordinates of the intersection point. After computing the intersection point, atmospheric refraction correction is applied by computing the zenith angle and its correction with atmospheric parameters.
The Cartesian coordinates of the intersection point on the reference ellipsoid are converted to geodetic coordinates. The (iterative) transformation to compute the ellipsoidal longitude and latitude is given in [5, p. 199]. However, these geodetic coordinates are not yet the exact coordinates, because the ellipsoidal height of a point is very rarely exactly zero. For this reason the ellipsoidal height of that point is extracted from the DEM and an iterative procedure starts, which corrects the geodetic coordinates of the point using the elevation differences. The elevations are obtained from the DEM both for the previous and the current iteration steps. The iteration steps can be explained as follows:
First, the direction vector measured in earth fixed coordinates should be converted into local ellipsoidal coordinates by the rotation matrix
\\[\\mathrm{R}=\\boldsymbol{Q}_{i}\\boldsymbol{R}_{2}\\text{(90 - $\\varphi$)}\\boldsymbol{R}_{3}\\text{($\\lambda$)} \\tag{7}\\]
where \\(\\boldsymbol{Q}_{1}\\) is a reflection matrix, \\(\\varphi\\) and \\(\\lambda\\) are the geodetic or ellipsoidal coordinates in terms of latitude and longitude.
Then, the initial height of the intersection point, \\(h_{0}\\left(\\lambda_{0},\\phi_{0}\\right)\\), is set to zero. The corresponding ellipsoidal height of that point is obtained from the DEM and, by using the zenith angle \\(z\\), the absolute value of the relief displacement \\(d_{n}\\) is computed. By using the ellipsoidal parameters \\(V\\), \\(c\\) and the azimuth angle \\(\\alpha\\), the changes in the geodetic coordinates are computed and the geodetic coordinates are corrected. The new ellipsoidal height of the corrected coordinates is obtained from the DEM and the difference between the two heights, \\(\\Delta h_{n}\\), is compared with the threshold value; the iteration continues until the required accuracy is reached. This procedure is given in Eq. (8) and illustrated in Fig. 2.
\\[\\Delta h_{n}=h_{n}\\left(\\lambda_{n-1},\\phi_{n-1}\\right)-h_{n-1}\\left(\\lambda_{n-1},\\phi_{n-1}\\right)\\] (8a)
\\[d_{n}=\\Delta h_{n}\\tan(z)\\] (8b)
\\[\\lambda_{n}=\\lambda_{n-1}+d_{n}\\sin(\\alpha)\\left(\\frac{V}{c}\\right)_{n-1}\\left(\\frac{1}{\\cos(\\phi_{n-1})}\\right)\\] (8c)
\\[\\phi_{n}=\\phi_{n-1}+d_{n}\\cos(\\alpha)\\left(\\frac{V^{3}}{c}\\right)_{n-1}\\] (8d)
where
\\[c=\\frac{a^{2}}{b},\\qquad V_{n}=\\sqrt{1+\\frac{a^{2}-b^{2}}{b^{2}}\\cos^{2}(\\phi_{n})}\\] (8e) [4, p. 105]
The threshold value to be satisfied is \\(\\left|\\Delta h_{n}\\right|<\\varepsilon\\), finally
\\[\\lambda_{a}=\\lambda_{n},\\quad\\quad\\phi_{a}=\\phi_{n}\\text{, \\ \\ \\ \\ \\ }h_{a}=h_{n}\\text{.}\\]
After n iterations, the final position of the point becomes \\(\\lambda_{a},\\phi_{a}\\) and \\(h_{a}\\). The geodetic coordinates are then converted to isothermal coordinates in terms of UTM coordinates [5].
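The iterative relief-displacement correction of Eq. (8) can be sketched as below. This is an illustrative implementation with assumed interfaces (in particular, `dem_height(lat, lon)` stands in for a bilinear lookup into the SRTM grid, and the 1 m convergence threshold is our choice, not the paper's); angles are in radians and heights in metres.

```python
# Illustrative iteration for the relief displacement correction (Eq. 8).
import numpy as np

A_WGS84, B_WGS84 = 6378137.0, 6356752.3142
C_POLAR = A_WGS84**2 / B_WGS84                    # polar radius of curvature c = a^2/b
E2P = (A_WGS84**2 - B_WGS84**2) / B_WGS84**2      # second eccentricity squared

def relief_correction(lat0, lon0, zenith, azimuth, dem_height, eps=1.0, max_iter=20):
    lat, lon, h_prev = lat0, lon0, 0.0            # start on the ellipsoid (h = 0)
    h = h_prev
    for _ in range(max_iter):
        h = dem_height(lat, lon)                  # Eq. (8a): DEM height at current position
        dh = h - h_prev
        if abs(dh) < eps:
            break
        d = dh * np.tan(zenith)                   # Eq. (8b): horizontal displacement
        V = np.sqrt(1.0 + E2P * np.cos(lat)**2)   # Eq. (8e)
        lon += d * np.sin(azimuth) * (V / C_POLAR) / np.cos(lat)   # Eq. (8c)
        lat += d * np.cos(azimuth) * (V**3 / C_POLAR)              # Eq. (8d)
        h_prev = h
    return lat, lon, h

# Usage with a toy DEM that is flat at 850 m:
lat, lon, h = relief_correction(np.deg2rad(39.9), np.deg2rad(32.8),
                                zenith=np.deg2rad(20.0), azimuth=np.deg2rad(120.0),
                                dem_height=lambda la, lo: 850.0)
print(np.rad2deg(lat), np.rad2deg(lon), h)
```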
At the final stage, after computing the map coordinates of each pixel, the image is resampled in order to compute the brightness values at the map coordinate grid. Among the resampling algorithms, the nearest neighbour method is used in order to conserve the original brightness values of the pixels [6].
## 3 Implementation of the Method
The new differential image rectification method is implemented in Matlab software. The software reads the attitude and position data at the time of exposure of the image from a file and computes the velocity of the satellite. Furthermore, the software requires the camera's inner and outer orientation parameters and the atmospheric parameters from the user. The software eliminates the relief displacements using the corresponding DEM and converts the ellipsoidal coordinates to UTM map coordinates. The algorithm is applied to the images supplied by Bilten. The raw and rectified images of Ankara are shown in Figs. 3-4.
Figure 2: Illustration of the collinearity equation and the iterative solution for the relief displacement correction.
Fig. 3 Raw image of Ankara.
**References**:
[1] G. Sun, K. J. Ranson, V. I. Kharuk, and K. Kovacs, \"Validation of surface height from shuttle radar topography mission using shuttle laser altimeter,\" _Remote Sens. Environ._, vol. 88, pp. 401-411, 2003.
[2] E. M. Mikhail, C. McGlone, and J. S. Bethel, _Introduction to Modern Photogrammetry_. Chichester, U.K.: Wiley, 2001.
[3] O. Montenbruck, Quaternion representation of BIRD orientation and reference system transformations, DLR, Wessling, Germany, 2000.
[4] T. E. Strikwerda and J. L. Junkins, \"Star pattern recognition and spacecraft attitude Determination,\" Virginia Polytech., Blacksburg, VA, 1981.
* [5] S. Heitz, _Coordinates in Geodesy_. Berlin, Germany: Springer-Verlag, 1988.
* [6] G. J. Grevera and J. K. Udupa, \"Shape-based interpolation of multidimensional grey-level images,\" _IEEE Trans. Med. Imag._, vol. 15, no. 6, pp. 881-892, Dec. 1996.
* [7] Mahmut O. Karslioglu and Jurgen Friedrich. \"A New Differential Geometric Method to Rectify Digital Images of the Earth's Surface Using Isothermal Coordinates,\" IEEE Transactions On Geoscience And Remote Sensing, Vol. 43, No. 3, March 2005 | Karslioglu, M. O., Middle East Technical University, Department of Civil Engineering,
Geodesy and Photogrammetry Division, 06531 Ankara, [email protected].
Friedrich, J., Information Technology and Electronic Research Institute, TUBITAK Bilten
06531 Ankara. [email protected].
Bettemir, O. H., Middle East Technical University, Department of Civil Engineering,
Geodesy and Photogrammetry Division, 06531 Ankara, [email protected].
Abstract -- A new rectification method is implemented for the generation of orthoimages of the earth surface from monoscopic digital images. In the rectification procedure Bilsat images are used by taking into account the camera calibration parameters of Bilsat and atmospheric corrections. The new method maps every pixel vertically along the surface normal onto a curved surface of reference frame WGS84 directly under the condition that a precise enough surface elevation model is available. The ellipsoidal coordinates (latitude, longitude and height) of each pixel calculated are then transformed into isothermal coordinates i.e. the UTM projection coordinates. Resampling of the Bilsat images is accomplished on the basis of the transformation result between ellipsoidal geodetic coordinates and isothermal coordinates. | Provide a brief summary of the text. | 270 |
arxiv-format/1909_04901v1.md | # Development of Instruments for Space Exploration Using Meteorological-balloons
Debashis Bhowmick
Indian Centre for Space Physics, 43 Chalantika, Garia Station Rd., Kolkata 700084, India
Sandip K. Chakrabarti
Ritabrata Sarkar
Arnab Bhattacharya
Indian Centre for Space Physics, 43 Chalantika, Garia Station Rd., Kolkata 700084, India
S.N. Bose National Centre for Basic Sciences, JD Block, Salt Lake, Kolkata 700097, India
Tata Institute of Fundamental Research, Homi Bhaba Road, Colaba, Mumbai 400005, India
## 1 Introduction
Traditionally, for balloon-borne astrophysical observations, large balloons of several million cubic meters are used (e.g., Ref. [1]) with payloads of several thousand kilograms. These are typically equipped with ballasts and valves to have long flights of several days to several months duration. These also typically have, apart from the main instruments, accurate pointing instruments to acquire data from precise directions. At the other end of the spectrum, there are meteorological balloons which can generally carry 'use and throw' equipment totaling a few hundred grams for measuring atmospheric parameters up to a height of \\(\\sim 20-25\\) km on a daily basis.
With the advent of modern miniaturized instruments, it is now possible to explore space using light weight payloads. This aspect has been the major goal of research at the Indian Centre for Space Physics (ICSP), which has systematically developed a paradigm to study various objects emitting high energy radiation in space from very light weight meteorological balloons [2, 3]. Being light weight, these balloons can carry at most about five kilograms of payload, which must contain not only the main measurement unit, but also the auxiliary instruments, power supply, parachutes for re-entry etc. Thus a great deal of innovation is required to make these low-cost space missions a success. One of our motivations is to test cubesat and nanosat instruments prior to actually flying them to space. Being low cost, our procedure is affordable and is a great learning tool for college and university students.
The instruments in these experiments can be used to measure the intensity of ionizing radiations, particularly X-rays, which is very useful for the study of Cosmic Rays (CRs), solar activity, the X-ray background and accreting compact objects. It is also possible to detect high energy Gamma-Ray Bursts (GRBs) in these kinds of experiments. Apart from these extra-terrestrial events, Terrestrial Gamma-ray Flashes (TGFs) from the cloud formation region of the atmosphere are other types of interesting and yet to be understood events which can be recorded by the instruments.
In the present paper, we discuss in detail the instrumentation in this new paradigm of exploring space with balloons of small size and limited capabilities. As discussed in Ref. [2, 3], the balloons we use conventionally are rubber weather balloons, and often two balloons are tied up together to fly heavier payloads of up to \\(4\\) kilograms, reaching a ceiling altitude of about \\(35-39\\) km. We also use plastic (polyethylene) balloons of about \\(7-9\\) kg weight which can carry a combined payload of \\(\\sim 6\\) kg to a height of \\(40-42\\) km. We do not use any pointing device and thus we adjust our launch window to observe the target object(s) for a significant period of time, unless we want to measure only the CRs. We also tag each photon event (along with its timing and spectral information) with the concurrent attitude of the payload [3, 4]. This enables us to compute the RA and DEC of the detector direction at the time each photon is recorded, in conjunction with the instantaneous GPS information of the payload. However, the actual directional information of the recorded photons is limited by the Field-of-View (FoV) of the collimator used in the detector, which is independent of the detector direction. Depending on the science goal and experimental conditions, we have used different collimators with FoV varying between 6-15\\({}^{\\circ}\\) and sometimes as wide as 40\\({}^{\\circ}\\). The measurement of the detector or payload direction is also subject to instrumental and systematic errors, which have been calculated as \\(\\sim\\) 0.3-1.8\\({}^{\\circ}\\) depending on the rotational speed of the balloon. The other major part of the error comes in due to the slewing movement of the payload and the rate of data sampling for recording. We expect to improve this in the future.
In the next Sec. 2 we briefly discuss the experimental aspects and mission strategies of this novel space exploration program. Subsequently, in Sec. 3, 4 and 5, we describe typical instruments which have been flown. Of course, many of these flights were dedicated to test the feasibility. For each instrument, we also present the electronic circuits used, the laboratory tests conducted before flying and an illustrative flight data of the corresponding detector. Finally, in Sec. 6, we summarize our results.
## 2 A Brief Mission Overview
A brief discussion of the experimental strategy with light-weight radiation detectors has been presented in Ref. [3]. Study of correlation between cosmic rays and solar activities using multi-mission data has been carried out in Ref. [5]. Presently, we discuss how each Mission is executed.
As already mentioned in the introduction, the major goal of these experiments is to measure several extraterrestrial and atmospheric radiation through light-weight radiation detectors onboard meteorological balloons. The payload used in this purpose contains the main detector for the radiation measurement using Geiger-Muller (GM) counters or scintillator detectors and ancillary equipment to supplement the data and help the mission operation. The carrier is usually one or two (depending on the payload weight) hydrogen filled rubber balloons or a plastic balloon capable of lifting payloads of \\(\\sim\\) 5 kg or less. The flight generally has no fixed cruising level (no ballasts used) and goes up to the maximum height till the balloon bursts and comes down with the help of a parachute (for rubber balloons) or using the torn balloon itself (in case of plastic balloons). The thermal shielding to the instruments is provided by using a styrofoam (thermocol) box, which also acts as the payload structure in which the instruments are embedded. Since unlike a rocket borne instrument, the frequency of the mechanical vibration during the entire flight is relatively low, this structure serves quite well, acting as a shock absorber during the entire flight. At the time of landing, the impact could be a bit severe. The payload box, along with additional shock absorber system made of simple hollow plastic cylinders placed at the bottom of the payload, absorbs the impact efficiently. Since a typical flight lasts for a few hours, the study of wind pattern is made carefully to ensure that the landing takes place within about a hundred kilometers of the launch site. Typically, we use two launching windows: the pre-monsoon window in April-May and the post-monsoon window in October-November[3]. A balloon flight trajectory along with a typical picture of the payload is shown in Fig. 1.
To ensure cost-effectiveness of our Missions, one of the most important tasks in this type of mission is to recover the payload on landing. This is necessary both for retrieving the payload for further use in future missions and for recovering the experimental data, which is stored onboard in data storage devices (using micro-SD cards), as currently we do not transmit the data during flight. The recovery of the payload relies on our accurate flight path prediction and the tracking device. The flight path and expected landing location are calculated ahead of time by giving appropriate weightage to the balloon flight simulator[6] results. We also modify the parachute or balloon lift in order to avoid any particular patch (water body, hills, jungles) of land for landing. The tracking device onboard the payload transmits the live location (obtained by the GPS receiver onboard the payload) to the mobile ground stations on vehicles which follow the payload near the predicted landing location. As a backup, we also use an SMS alert system which transmits the payload location on landing to several payload recovery vehicles.
Since we cannot afford to place a pointing device due to weight constraints, a very important issue is the determination of the payload attitude. Apart from the omnidirectional measurements such as atmospheric radiation, the payload attitude information is crucial to know the incoming direction of the detected radiation. The attitude measurement instrument is a very light weight Inertial Measurement Unit (IMU) chip which measures and saves the attitude data at the detection of every
Figure 1: (Left:) Schematic drawing of a balloon flight trajectory. (Right:) A typical payload used in the experiment: external view showing the overall payload box (top) and internal view showing the main measurement unit and other ancillary instruments (bottom).
photon. These data are used during the offline data analysis. The details of the attitude measurement will be published in Ref. [4] (in preparation). However, to have a maximum exposure time of the source of interest in the detector, we need to adjust the launch schedule and payload tilt angle (payload z-axis w.r.t. zenith) in such a way that the source approaches as close to the zenith as possible when the payload is near the maximum altitude. To avoid major corrections due to atmospheric absorption, we do not observe specific sources which are beyond \\(\\sim 45^{\\circ}\\) from the zenith.
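The conversion from recorded attitude to sky coordinates can be sketched as follows. This is a simplified, hedged illustration (not the flight or analysis software): the detector axis is assumed to be available as altitude/azimuth from the IMU, the GPS provides time and location, and a low-precision sidereal-time approximation is used.

```python
# Hedged sketch: detector-axis alt/az + GPS time/location -> RA/Dec (degrees).
import numpy as np

def radec_from_altaz(alt_deg, az_deg, lat_deg, lon_east_deg, jd_utc):
    """Azimuth is measured from North towards East; output RA/Dec in degrees."""
    alt, az, lat = np.deg2rad([alt_deg, az_deg, lat_deg])
    dec = np.arcsin(np.sin(alt) * np.sin(lat) + np.cos(alt) * np.cos(lat) * np.cos(az))
    ha = np.arctan2(-np.sin(az) * np.cos(alt),
                    np.sin(alt) * np.cos(lat) - np.cos(alt) * np.sin(lat) * np.cos(az))
    d = jd_utc - 2451545.0                                  # days since J2000.0
    gmst_h = (18.697374558 + 24.06570982441908 * d) % 24.0  # approximate GMST [hours]
    lst = np.deg2rad(((gmst_h + lon_east_deg / 15.0) % 24.0) * 15.0)
    ra = np.rad2deg(lst - ha) % 360.0
    return ra, np.rad2deg(dec)

# Example: detector axis 10 deg from zenith, towards the south-west, over Kolkata
print(radec_from_altaz(80.0, 225.0, 22.5, 88.4, 2458000.5))
```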
There is no pressure chamber to protect the detectors from the rarefied atmosphere at high altitude. However, we conduct extensive tests on the detectors in simulated pressure and temperature chambers in the laboratory, to study the effects on them under such extreme conditions. We measure the atmospheric pressure and temperatures inside the payload and outside using sensors in each of our flights. These atmospheric parameters up to very high altitude can be used in long term weather predictions. Additionally, in some of the experiments an optical sensor (sun-sensor) is implemented to verify if the sun is inside the FoV of the detector. This brief discussion highlights the key points of the overall experiment and in the following sections we present the main radiation measurement units in more detail.
## 3 The Geiger-Muller Counter
One of the simplest measurements one could do is to measure integrated radiation counts in the atmosphere. We present the results of miniature Geiger-Muller Counters (GMC) in one of our missions. GMCs have been traditionally used for such purpose [7]. Data from several experiments may be used to study the CR variation with time or location.
The count pulses produced in the GM counter, due to the interaction of the incident \\(\\alpha\\), \\(\\beta\\) or \\(\\gamma\\)-rays (particles) are processed and stored in a micro-SD card. At the same time, we also acquire latitude, longitude, altitude and GPS time information from the GPS receiver. Hence, when the payload is launched, we detect high energy radiation counts mainly from the secondary cosmic rays, as a function of all the three coordinates. The detailed block diagram of the system is given in Fig. 2, along with the picture of the assembled detector featuring the GM tube which is used in the balloon flight experiment.
We used the GM counter (Model LND712) from _LND, INC_. The detailed dimensions and specifications of the detector can be found in the data sheet provided in Ref. [8]. The dimensions of the detector assembly shown in Fig. 2 are 15\\(\\times\\)13\\(\\times\\)14 cm\\({}^{3}\\). The overall dimensions of the total payload box, which embeds the detector assembly and other ancillary instruments, are 25\\(\\times\\)19\\(\\times\\)17 cm\\({}^{3}\\), and it weighs about \\(1.8\\) kg. A single \\(2.0\\) kg category rubber balloon is enough for a complete flight of about three hours' duration.
### High Voltage Power Supply
We generated the 500 V required for the GM counter biasing from a 5 V DC supply. The circuit consists of an oscillator followed by a voltage multiplier. We used transformer coupling for producing a
Figure 2: (Left:) Block diagram of the Geiger–Müller counter setup and (right:) the assembled GM counter with power-supply battery.
high voltage and then used a voltage doubler circuit to achieve our goal.
### Readout System
The output signal from the GMC anode is taken through a suitable coupling capacitance and passed through a resistor–transistor logic circuit to convert the signal into a pulse.
The output of the logic circuit is connected to a microcontroller as an interrupt signal. The microcontroller counts the interrupt events per second and stores the raw format data in a micro-SD card. We use an ATmega32 microcontroller with a \\(\\sim\\) 11 MHz crystal for the clock. The choice of microcontroller and the clock speed are sufficient for the radiation count rates we are interested in. A Real Time Clock (RTC) chip is used to generate an interrupt signal every \\(1\\) s, and the count rate is transferred to the micro-SD card along with the time stamp.
### A sample result of atmospheric radiation counts
We flew the payload consisting of the GM counter as the main measurement unit on several occasions to measure the integral radiation counts in the atmosphere at different heights. These atmospheric radiations are mainly due to the interaction of Galactic cosmic ray particles and solar energetic particles with the atmospheric nuclei. The window of the GM tube was directed upwards in the zenith direction, without any collimator. Thus, the detector provides an omnidirectional measurement of the atmospheric radiation. A result of our measurement made on 14th May, 2011 (Mission Id. D13) using the GM counter flown onboard a single rubber balloon is shown in Fig. 3, where the radiation count variation at different heights is plotted. The detected radiation count rate shows a maximum (Regener-Pfotzer maximum,[9] hereafter RP-max) near \\(\\sim\\) 16 km during both ascent and descent of the detector. The RP-max arises due to the balance between the generation of secondary radiation from cosmic ray interactions in the atmosphere and its subsequent diminution by absorption and decay processes at different altitudes. The radiation count gradually diminishes with height above the RP-max. However, the atmospheric radiation strongly depends on solar activity, latitude etc. The long term variability of the atmospheric radiation and its anti-correlation with solar activity have been studied by us using scintillator detectors in similar experiments [5].
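As a rough illustration of how such a count-rate profile can be derived from the logged data, the sketch below bins the per-second GM counts by the GPS altitude and locates the maximum. The file name and column names are assumptions for illustration only, not the actual onboard format.

```python
import csv
from collections import defaultdict

def rp_max_from_log(path, bin_km=0.5):
    """Bin per-second GM counts by altitude and return (altitude, rate) of the maximum.

    Assumes a CSV log with columns: gps_time, latitude, longitude, altitude_m, counts.
    """
    sums, secs = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            alt_km = float(row["altitude_m"]) / 1000.0
            b = round(alt_km / bin_km) * bin_km          # altitude bin centre
            sums[b] += float(row["counts"])
            secs[b] += 1                                  # one record per second
    rates = {b: sums[b] / secs[b] for b in sums}          # counts per second
    peak = max(rates, key=rates.get)
    return peak, rates[peak]

# Example (hypothetical file name): altitude of the Regener-Pfotzer maximum for one flight
# alt, rate = rp_max_from_log("D13_gmc_log.csv")
```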
## 4 Single Crystal Scintillator Detectors
For the purpose of X-ray and gamma-ray detection at different energies, from extraterrestrial sources and in the atmosphere, we use scintillator detectors mainly with Thallium doped Sodium Iodide (NaI(Tl)) crystals integrated with a Photo-Multiplier Tube (PMT) for the signal readout. This type of small X-ray detector is particularly useful for the study of solar activity, which emits high intensity X-rays for which the background noise is less severe. We used integrated detector units (Model 3M3/3 and 2M2/2) with scintillator crystals and PMTs made by _Saint-Gobain Crystals_.[10]
Figure 3: Radiation counts detected by the GM counter at different heights in the atmosphere during the ascent (black) and descent (gray) of the payload.
### Detector Specifications
In this integrated design, the PMT is optically coupled directly to the scintillator crystal. The scintillator is mounted in a container (usually aluminum), and the PMT is shielded with mu-metal. The scintillator container and mu-metal shield are hermetically sealed together to form a low-mass and light-tight housing for the detector. The crystal used in the 3M3/3 model is cylindrical in shape with a size of 3\" in diameter and height. The 2M2/2 model contains a crystal of the same shape except that the size is 2\" in diameter and height. The weight of the detector including the PMT is \\(\\sim\\) 2100 g for the 3M3/3 and \\(\\sim\\) 1500 g for the 2M2/2. The dimension of the assembled detector is about 17\\(\\times\\)17\\(\\times\\)40 cm\\({}^{3}\\). The overall payload box has dimensions of about 40\\(\\times\\)40\\(\\times\\)70 cm\\({}^{3}\\) and weighs about 3 kg for the 2M2/2 and 4 kg for the 3M3/3 model. A double rubber balloon configuration or a single plastic balloon is enough for a decent mission with these instruments [3].
### Electronics for the Single Crystal Detectors
The PMT fitted with the scintillator crystal is provided with a bias voltage from the high voltage power supply. The signal readout system consists of an analog front-end circuit, a data processing/acquiring unit, a low voltage (\\(\\pm 5\\) V and \\(3.3\\) V) DC-DC converter unit and data storage unit. The overall signal readout scheme for the scintillator detector is shown in Fig. 4.
The basic purposes of the electronics signal readout system are:
* To generate a high voltage (\\(\\sim 950\\) V or \\(\\sim 650\\) V for 3M3/3 or 2M2/2 respectively) for biasing the PMT and a low voltage (\\(\\pm 5\\) V, \\(3.3\\) V) for electronics.
* To amplify or process the pulse generated from the detector for the signal processing.
* To process the pulse signals and record the event data for post-facto analysis.
* To work in a space-like environment, in a temperature range of \\(-5^{\\circ}\\)C to \\(+35^{\\circ}\\)C (with qualification in the range from \\(-10^{\\circ}\\)C to \\(+40^{\\circ}\\)C) and under near-vacuum conditions, without significant change in performance.
The overall electronic circuit may be subdivided into: front-end electronics, power supply unit, digital signal processing unit and data storage unit which we discuss in more details subsequently.
#### 4.2.1 Front-end electronics
The analog front-end circuit is responsible for processing the analog signal after getting a pulse signal from the detector. This includes: preamplifier, post-amplifier, triggering unit and the peak detector.
**Preamplifier.** To reproduce the pulses that appear on the anode of the PMT (which are of short duration and spiky), it is necessary to have a wide band amplifier with high open loop gain to process
Figure 4: The block diagram of the detector and electronic readout system for the \\(2^{\\prime\\prime}\\times 2^{\\prime\\prime}\\) single crystal scintillator detector. A similar readout system is used for the \\(3^{\\prime\\prime}\\times 3^{\\prime\\prime}\\) scintillator. The detector assembly along with the electronic readout system, collimator and power supply is shown on the right.
fast and low amplitude electrical pulses from the PMT. This is accomplished by using a single high frequency operational amplifier (op-amp) in an inverting amplifier configuration, since the polarity of the detector's output is negative in current feedback mode. The operational characteristics of the preamplifier are summarized in Table 1.
**Post-amplifier.** The preamplifier output is further amplified in the post-amplifier without affecting the pulse shape (i.e. decay time). It also provides a low impedance to the following processing and analyzing circuit. Since it is difficult for a single amplifier to cover the large dynamic energy range of the detector from \\(15\\) keV to \\(2\\) MeV, two different amplifiers with different voltage gains are provided. The first amplifier (G1) covers the lower energy range of \\(15-140\\) keV while the second amplifier (G2) covers the energy range \\(100\\) keV - \\(2\\) MeV. The operational characteristics of the post-amplifier (G1) are given in Table 2. The G2 amplifier is basically a unity gain amplifier while the other features are the same as for G1.
The post-amplifiers have the following features:
* The amplifier circuit is provided with \\(\\pm 5\\) V power supply for its operation.
* The total power consumption in the front-end electronics is \\(120\\) mW.
* The gain and saturation level of the post-amplifier can be adjusted according to the requirement.
| Specification | Value |
| --- | --- |
| Input supply voltage | \\(\\pm(5\\pm 0.5)\\) V |
| Rise time of the output pulse | \\(3\\)\\(\\mu\\)s |
| Decay time of the output pulse | \\(10-12\\)\\(\\mu\\)s |
| Polarity of the output | Unipolar |
| Voltage gain | \\(35\\) |
| Saturation level | \\(5.0\\) V |

Table 1: Specifications of the preamplifier.
* The voltage gain depends only on passive components.
* Band pass filters are provided to minimize the low and high frequency noise.
**Triggering and peak detector circuits.** The output from the G1 amplifier is fed to the input of the triggering circuit. In the present experiment, we use two comparators: one for low (\\(15-140\\) keV) and another for high (\\(100-2000\\) keV) energy. The low and high energies are distinguished by two different preset reference voltages set during testing/calibration in the laboratory. Peak detectors are provided for the two amplifiers (G1 and G2) so that their outputs can be analyzed separately.
#### 4.2.2 Power supply distribution unit
Power supply distribution unit consists of high voltage and low voltage power supplies.
**High voltage power supply.** The main function of the HV supply is to bias the PMT (at +ve supply). The HV supply is adjusted such that the PMT gets a bias voltage of \\(\\sim 650\\) V for the 2M2/2 and \\(\\sim 950\\) V for the 3M3/3 detectors. We use EMCO F40[11] as the high voltage module. Since a balloon borne payload reaches \\(\\sim 40-42\\) km above ground, in this near-vacuum situation the use of such a high voltage requires potting with a very good quality insulating material to prevent
| Specification | Value |
| --- | --- |
| Input supply voltage | \\(\\pm(5\\pm 0.5)\\) V |
| Rise time of the output pulse | \\(3\\)\\(\\mu\\)s |
| Decay time of the output pulse | \\(10-12\\)\\(\\mu\\)s |
| Polarity of the output | Unipolar |
| Voltage gain of two stages | \\(\\sim 12\\) |
| Saturation level | \\(5.0\\) V |

Table 2: Specifications of the G1 amplifier.
electrostatic discharge. A silicone elastomeric substrate from Dow Corning [12] has been used as a potting material.
**Low voltage power supplies.** The low voltage DC-DC power supply unit generates voltages of \\(\\pm 5\\) V and \\(3.3\\) V. The \\(\\pm 5\\) V supply is required for the analog front-end while the onboard computing unit works with the \\(3.3\\) V supply.
#### 4.2.3 Digital data processing, acquisition and control unit.
We use a Mini2440 board (ARM9 family processor) [13] as the main board for the data acquisition. The system continuously monitors for the trigger interrupt signal (event) and when it is found, the computing unit processes the signal in the following way.
The system gathers the detector Pulse Height (PH) data along with a sub-second time stamp in a temporary buffer. After a pre-defined time interval \\(\\delta t\\) all the accumulated events recorded in the buffer are transferred to the onboard micro-SD card and the buffer is cleared. During the first part of \\(\\delta t\\), namely \\(\\delta t_{1}\\), data is procured and temporarily stored in the buffer. During the second part of \\(\\delta t\\), namely \\(\\delta t_{2}\\), the data is transferred to the micro-SD card. We chose this method since it is faster to record the data in the buffer (RAM) rather than directly in the micro-SD card. Depending on the expected event rate, \\(\\delta t_{1}\\) and \\(\\delta t_{2}\\) vary. The whole processing cycle is shown in Figs. 5, 6 and 8.
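The double-buffering logic can be summarized by the following simplified sketch. It is written in Python purely to illustrate the scheme; the actual implementation runs on the ARM9 board, and the interval values and helper names (`read_event`, `write_to_sd`) are placeholders.

```python
import time
from collections import deque

def acquisition_cycle(read_event, write_to_sd, dt1=8.0, dt2=2.0):
    """One delta-t cycle: buffer events in RAM for dt1 seconds, then flush them to the SD card.

    read_event() is assumed to return a (pulse_height, subsecond, sun_sensor) tuple
    or None when no trigger occurred; write_to_sd() persists a list of events.
    """
    buffer = deque()
    t0 = time.time()
    while time.time() - t0 < dt1:            # Process1: acquire events into the RAM buffer
        event = read_event()
        if event is not None:
            buffer.append(event)
    write_to_sd(list(buffer))                # Process2: flush the buffer to the micro-SD card
    buffer.clear()                           # the buffer is cleared for the next cycle
```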
Process1 (P1) in Fig. 6 describes the process of digitizing each event and storing it into the temporary buffer memory (RAM). The analog PH signals from G1 and G2 are digitized using the 10 bit ADC embedded in the processor (ARM9). However, to exclude the low energy noise and the saturated events, we consider only the events in channels 100-1020. This PH information of the deposited energy, along with the time stamp of the corresponding event and the digitized value from the sun sensor, is packed together and stored in the buffer. After storing in the temporary buffer, a reset pulse of \\(10~{}\\mu\\)s is issued from the computing unit to reset the event (discharging the peak-hold circuit at the peak detector) to allow a fresh event to be captured.

Figure 5: Flow diagram for initializing the data processor to process and write the events in the SD card. The subsequent acquisition of the event (Process1) and writing of the data on the SD card (Process2) are shown in Figs. 6 and 8.
The overall time sequence for the processing of a single event is shown in Fig. 7.
In Process2 (P2) (flow chart in Fig. 8), the computing unit writes the content of the temporary
Figure 6: Flow chart for processing (Process1 part in Fig. 5) an event both in low and high energy parts and accumulating in the temporary buffer.
Figure 7: Timing diagram of processing an event.
buffer memory, acquired during the designated time interval of Process1, into permanent storage, i.e., an onboard micro-SD card. After the completion of the data transfer to the micro-SD card the control goes back to P1.
#### 4.2.4 Data format for storing
Writing to the micro-SD card requires the computing unit to access the card, which is much more time consuming than accessing the RAM. We therefore optimize the data structure to reduce the data size so that the number of CPU cycles required to write the data to the SD card is minimized. To reduce the number of bytes, the computing unit writes the data in hexadecimal format. The maximum possible outcome from the ADC is \\(1023\\) (0x3FF in hexadecimal) which requires \\(3\\) bytes. To distinguish between the low channel energy data (G1) and the high channel data (G2), an offset interval of \\(2000\\) is added to the high channel. Thus the maximum possible outcome for the high channel becomes \\(2000\\) + \\(1023\\) = \\(3023\\) (0xBCF in hexadecimal). Thus \\(3\\) bytes can be allocated for the energy data. The sun-sensor data from the ADC is binned into \\(70\\) different levels. Thus each level is \\(1023\\) / \\(70\\) =
Figure 8: Flow chart showing the process of writing the buffer data (Process2 part in Fig. 5) in the SD card. The flow input “From Previous Step” refers to the input shown in Fig. 5.
\\(14\\) (0xE in hexadecimal) requiring only one byte. The division of a second is done based on the processor clock speed and resolution of the embedded timer hardware in the computing unit. The maximum value for such settings is \\(64859\\) which decreases to zero in one second. This requires \\(4\\) bytes. One byte is used for a delimiter : between energy and sun-sensor data. Finally a space is used as a delimiter between two events. Thus a total of \\(10\\) bytes are required for storing an event in micro SD card. A pictorial representation of the packet format for a single event is given in Fig. 9.
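A minimal sketch of this 10-byte packet encoding (and its inverse, used offline) is given below. Only the byte budget is taken from the text; the exact ordering of the fields within the packet and the helper names are illustrative assumptions.

```python
def encode_event(channel, high_gain, subsecond, sun_level):
    """Pack one event into the 10-character hexadecimal record written to the micro-SD card.

    channel    : 0-1023 ADC pulse-height channel
    high_gain  : True for G2 events (an offset of 2000 is added to the channel)
    subsecond  : hardware timer value, at most 64859, counting down within one second
    sun_level  : binned sun-sensor reading, assumed here to fit in one hex character
    """
    value = channel + (2000 if high_gain else 0)      # at most 3023 -> 3 hex characters
    return f"{value:03X}:{sun_level:01X}{subsecond:04X} "

def decode_event(record):
    """Inverse of encode_event, for offline analysis of the raw dump."""
    value = int(record[0:3], 16)
    return {
        "channel": value % 2000,
        "high_gain": value >= 2000,
        "sun_level": int(record[4], 16),
        "subsecond": int(record[5:9], 16),
    }

# Example: encode_event(512, False, 30000, 7) -> '200:77530 '  (10 characters)
```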
### Extracting Readable Data from the System
The hexadecimal data stored in the micro-SD card is first extracted with a Matlab[14] program to get the data in ASCII format. In the micro-SD card, to save CPU time cycles and storage space, the bytes are written in the raw format. The software program running in the ARM9 computing unit keeps track of the number of memory units that are being written to put the new data in new locations, thus preventing overwriting.
To extract the raw data stored in the blocks of the card, a low level access is required. To facilitate the data extraction and for redundancy, another on-board data storage card is used which stores minimal information of time stamp, memory address already used in the primary card with the raw data and the total number of events (low energy, high energy and total counts) in FAT format. This redundancy allows us to have a quick-look of the data.
Figure 9: Showing the packet format for a single event data (total of 10 bytes).
### Laboratory Tests
Before each mission, we carry out several laboratory tests on the detector assembly to check its health and performance. The following standard functionality tests were conducted on the detector assembly.
* Detector performance test under normal laboratory condition.
* Energy-channel calibration and resolution of the detector.
* Detector performance stability under the low pressure condition.
* Detector performance under variable temperature to mimic the ascent and descent of the payload.
In the following sections, we discuss briefly the tests performed on the detector assembly before each mission.
#### 4.4.1 Tests under normal laboratory condition
The primary check on the detector performance is to test its behaviour under normal conditions and also tuning its input parameters, such as, HV settings, reference voltage values in the trigger circuit etc. for its optimum performance. We used radioactive sources, such as Eu\\({}^{152}\\), Ba\\({}^{133}\\), Cs\\({}^{137}\\), Am\\({}^{241}\\) for calibration.
In Fig. 10, we show the pulse height spectrum in ADC channels for the Eu\\({}^{152}\\) radiation source, for both the low energy and high energy channels. The applied bias voltage is chosen so that the gain of the system keeps the desired energy range within the measurable limit (output \\(3.3\\) V). The gain parameters can be adjusted from the post amplifier section. An adjustment is made so that the minimum bias voltage can be obtained with the highest resolution (at 59.5 keV of the Am\\({}^{241}\\) source). This spectrum shows different radiation lines, such as two lines at \\(39.50\\) and \\(121.9\\) keV in the low energy spectrum and six lines at \\(245\\), \\(344\\), \\(780\\), \\(960\\), \\(1110\\) and \\(1410\\) keV in the high energy spectrum.
#### 4.4.2 Calibration and resolution
We used different emission lines at known energies from various radioactive sources to calibrate the detector channels, i.e., to convert the PH of the events into photon energy. The channel-energy relations for low energy events in G1 and high energy events in G2 are shown in Fig. 11, and they respectively follow linear relations of the form \\(E=-3.12+0.16\\,C\\) and \\(E=-49.75+2.01\\,C\\).
We calculated the resolution of the detector in a standard way at various energies by fitting the detected lines using Gaussian functions. The Full Width at Half Maximum (FWHM) and peak energy (E\\({}_{peak}\\)) values were obtained from the fitting. For example, the resolution calculated for the detector at \\(59.5\\) keV of Am\\({}^{241}\\) source is 23.07% (FWHM/E\\({}_{peak}\\)). The energy spectrum for this radiation source along with the Gaussian fit at the \\(59.5\\) keV energy is shown in Fig. 12.
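The channel-to-energy conversion and the resolution estimate can be reproduced offline with a few lines of standard fitting code. The sketch below uses the linear calibrations quoted above and a Gaussian fit around a line; it is a simplified SciPy-based illustration with assumed array names, not the analysis code actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def channel_to_energy(channel, high_gain=False):
    """Apply the linear calibrations E = -3.12 + 0.16*C (G1) and E = -49.75 + 2.01*C (G2)."""
    return (-49.75 + 2.01 * channel) if high_gain else (-3.12 + 0.16 * channel)

def gaussian(e, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

def line_resolution(energies, counts, window):
    """Fit a Gaussian inside `window` (keV) and return FWHM / E_peak in percent."""
    sel = (energies > window[0]) & (energies < window[1])
    p0 = [counts[sel].max(), energies[sel][np.argmax(counts[sel])], 5.0]
    (amp, mu, sigma), _ = curve_fit(gaussian, energies[sel], counts[sel], p0=p0)
    fwhm = 2.3548 * abs(sigma)            # FWHM = 2*sqrt(2*ln2)*sigma
    return 100.0 * fwhm / mu

# Example (hypothetical spectrum array): resolution of the 59.5 keV Am-241 line
# res = line_resolution(channel_to_energy(np.arange(1024)), spectrum_counts, (45, 75))
```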
The resolutions of the \\(2^{\\prime\\prime}\\) and \\(3^{\\prime\\prime}\\) detectors at different calibration line energies are listed in Table 3 and plotted in Fig. 13.
Figure 11: Low (left) and high (right) energy-channel calibration. The low energy part shows a gain factor of \\(0.16\\) keV/channel while the high energy part has a gain factor of \\(2.01\\) keV/channel.
Figure 12: Energy spectrum of the low energy in the detector from Am241 radiation source. Gaussian fit at \\(59.5\\) keV gives 23.07% resolution. Residuals to the fit are shown in the lower panel.
The gain stability for long duration operations was also tested under normal laboratory conditions for all the detectors. The tests were satisfactory. This is explained in more detail in Sec. 5.3.2.
After calibration of the detector, the energy spectrum obtained with the \\(\\mathrm{Eu}^{152}\\) radiation source is plotted in Fig. 14.
#### 4.4.3 Test under low pressure condition
Due to the weight constraint on the payload, we cannot use a pressure vessel to keep the detector under constant pressure condition. So the detector is exposed to low pressure condition in the atmosphere up to \\(\\sim 42\\) km during the flight. We performed a test where the detector is kept in a
| Energy (keV) | \\(2^{\\prime\\prime}\\) resolution (%) | \\(3^{\\prime\\prime}\\) resolution (%) |
| --- | --- | --- |
| 39.50 | 28.72 | 34.24 |
| 59.54 | 18.22 | 23.07 |
| 80.997 | 16.23 | 18.75 |
| 121.9 | 13.91 | 15.33 |
| 661.657 | 8.20 | 8.47 |
| 1408.0 | 6.45 | 6.98 |

Table 3: Resolutions of the \\(2^{\\prime\\prime}\\) and \\(3^{\\prime\\prime}\\) detectors at different calibrator line energies.
Figure 13: Energy resolution of the \\(2^{\\prime\\prime}\\) and \\(3^{\\prime\\prime}\\) detector at different energies as given in Table 3.
pressure chamber in which the pressure is gradually reduced until \\(\\sim 0.5\\) mbar is reached, which roughly corresponds to \\(\\sim\\) 55 km altitude in the atmosphere. All the single crystal and phoswich detectors subjected to this test show that the gain is unaffected by the pressure variation. As an example, the test result for the phoswich detector is shown in Fig. 25 in Sec. 5.3.3.
#### 4.4.4 Test under temperature variation
We conducted tests on the detectors by keeping them in a low temperature test chamber in which the temperature was changed from room temperature (about \\(27^{\\circ}\\)C) down to \\(\\sim-10^{\\circ}\\)C. Due to the thermal insulation provided by the thermocol enclosure of our payload, the temperature inside the payload box is maintained well within this limit during the flight. This test also shows no significant effect on the detector gain or other parameters due to progressive changes in temperature.
### Detection of solar radiation
We used 2\" and 3\" diameter single crystal scintillator detectors onboard balloon flights in several missions, for the measurement of atmospheric radiation due to CR interaction and extraterrestrial
Figure 14: Measured energy spectrum of Eu\\({}^{152}\\) after the calibration. Left and right panels are for the low energy (left) and the high energy (right) part.
radiations. For example, on 25th April, 2013 (Mission Id. D33), we used a 2\" diameter scintillator detector onboard a carrier of two rubber balloons tied together, to measure solar radiation. During this epoch, the sun was in a highly active phase with frequent solar flares. The experiment was designed and scheduled in such a way that when the sun is closest to the zenith, the payload is also near its highest altitude, thereby increasing the solar exposure to the detector and enhancing the chance of flare detection. Figure 15 shows the 25-60 keV radiation counts in the detector during a part of the flight. The plot from the altitude of 12 km till the balloon burst is shown. We also plot the solar irradiation measured by the detector onboard GOES satellite[15] in 3-25 keV range (scaled up by 10\\({}^{10}\\)), for the sake of comparison. However, our data is influenced by several experimental and environmental effects which we state below.
During the time of the experiment, the closest approach of the sun to the zenith was \\(\\sim 10^{\\circ}\\). At the time of the highest payload altitude, the sun was at \\(\\sim 12^{\\circ}\\) from the zenith. To be able to have the sun inside our FoV during the experiment we needed to tilt the payload axis by \\(\\sim 12^{\\circ}\\) w.r.t. the zenith. This optimizes the detector exposure to the sun at higher payload altitudes. However, due to the free rotation of the payload, the relative position of the sun in the detector FoV (40\\({}^{\\circ}\\)) changed (since the detector axis is inclined at 12\\({}^{\\circ}\\) with respect to the rotation axis). So we expect a variation in the solar exposure, and hence in the intensity of the detected solar radiation, due to this rotational motion. This is evident from the \\(\\sim 33\\) km altitude data in Figure 15, where a dip is seen due to non-detection even when the flare is on. This type of observation does not affect the spectrum very much, though correction of the raw data is needed, taking care of the attitude of the payload. Since the altitude of the payload in the atmosphere varies with time, the residual atmosphere causing the attenuation of the solar radiation is also modified. The light curve of flares depends on the energy of the emitted photons. The major difference between the light curves detected by GOES and by this experiment is due to the different energy ranges of the detectors. Thus, we see sharp peaks only when the radiation is of energy higher than the GOES range. We verified that the spectrum shifts towards lower energy with higher counts, as the balloon goes up. At lower altitudes (\\(\\sim\\) 14 to 16 km) we missed a part of the flare as seen in the GOES data. This is due to the strong atmospheric absorption. The detailed analysis of the data will be published elsewhere [16].
## 5 The Phoswich Detector
To use an X-ray detector for the study of extra-solar sources, where the intensity of the source radiation is relatively low, it is important to reduce the background counts in the detector. Passive shielding of the detector alone is not always sufficient, and hence the phoswich technique is used [17] to reduce the background through an anti-coincidence method using two different scintillator crystals with different pulse decay times. Energy depositions in different crystals by an event can be identified from the corresponding pulse shapes, so that the partial energy depositions causing significant background in the primary crystal can be identified and eliminated.
Figure 15: Radiation counts (25-60 keV) (black) in a 2β single crystal scintillator detector onboard a balloon flight during a solar flare and its comparison with the GOES data in 3-25keV (gray).
A complete phoswich X-ray detector module includes a phoswich scintillator detector, an on-board computing unit, a power supply, and a large data storage capability. The entire system weighs only \\(4.5\\) kg. Since it has a very flexible modular architecture, this payload can be reused after each flight and any of its units can be changed/replaced if needed. The dimension of the assembled detector is about 17\\(\\times\\)17\\(\\times\\)31 cm\\({}^{3}\\), while the overall dimension of the whole payload box is about 40\\(\\times\\)40\\(\\times\\)70 cm\\({}^{3}\\), and it weighs about 5.8 kg. This payload can only be flown with a \\(7-9\\) kg category plastic balloon.
### Detector Specifications
The heart of the module is a low-energy gamma-ray/hard X-ray detector system. The detector consists of thallium doped sodium iodide (NaI(Tl)) and sodium doped cesium iodide (CsI(Na)) scintillator crystals stacked together and viewed by a PMT. This assembly was produced by M/S Scionix Holland BV, The Netherlands [18].
The NaI(Tl) crystal is \\(3\\) mm thick and \\(116\\) mm in diameter and the CsI(Na) crystal is \\(25\\) mm thick with the same diameter. The two crystals are optically coupled and hermetically sealed, with an entrance window on the NaI side, and the PMT (diameter \\(76\\) mm) is optically coupled to the CsI crystal through a light guide \\(10\\) mm thick. Both crystals are used in X-ray astronomy as a special choice of scintillator (with \\(10^{-2}\\) to \\(10^{-3}\\) mole Tl and Na impurities) by virtue of the following properties:
1. Relatively high effective atomic number (\\(32\\) for the NaI and \\(54\\) for the CsI crystal), and hence good absorption of hard X-rays.
2. Efficient optical light production (\\(415\\) nm wavelength emission from NaI and \\(420\\) nm wavelength emission from CsI).
3. In the current configuration of phoswich detector, the NaI(Tl) crystal is sensitive to X-ray photons of \\(15-100\\) keV and the CsI(Na) crystal in the energy range of \\(100-1000\\) keV. This is because it resides below the NaI crystal which absorbs the photons below 100 keV. Both the crystals are sensitive to charged particle background.
The light signal from the CsI(Na) crystal has a different scintillation decay time (\\(630\\) ns) than that from the NaI(Tl) crystal (\\(250\\) ns) and hence this distinction may be used to eliminate the events with partial energy deposition in both crystals. The scintillator signals from both the crystals are used in anti-coincidence for the background rejection. The high energy gamma-ray and charged particles which deposit their energy partially in both the crystals are identified and eliminated by this method.
The interaction of X-ray photons of energy up to \\(100\\) keV with the NaI and CsI crystals is fully dominated by the photo-electric process, and the absorbed radiation (secondary electron-hole pairs absorbed by the impurities) converts into light photons (due to the decay of the excited impurities). These photons eventually strike the photocathode of the PMT (gain \\(\\sim 10^{6}\\)) and are converted into a narrow electrical pulse whose magnitude (pulse height) is proportional to the energy of the incident radiation. The energy resolution (FWHM) of the scintillator phoswich is expected to be \\(18\\%\\) at \\(60\\) keV and the Pulse Height (PH) variation across the crystal is expected to be less than \\(3\\%\\). The radioactive isotopes Am\\({}^{241}\\) (\\(59.5\\) keV), Eu\\({}^{152}\\) (\\(39.5\\) keV, \\(121.9\\) keV and \\(344.44\\) keV), Ba\\({}^{133}\\) (\\(30.97\\) keV and \\(80.997\\) keV) and Cs\\({}^{137}\\) (\\(32.194\\) keV and \\(661.657\\) keV) are used for laboratory calibration of the detectors.
### Detector Electronics and Readout System
The schematic block diagram of the overall phoswich detector is given in Fig. 16 along with a picture of the assembled phoswich detector. The front-end electronics receives signals from the PMT. These signals are amplified, digitized and analyzed in the way discussed below.
#### 5.2.1 The front-end electronics
Signal pulses from the PMT are amplified in a preamplifier and two post amplifiers (G1 and G2). G1 covers energy range from \\(15-100\\) keV and G2 from \\(100\\) keV up to \\(1\\) MeV. Due to different decay times of pulses in NaI and CsI crystals, a Pulse Shape Discriminator (PSD) technique is used to measure the width of the pulses giving the Pulse Shape (PS). An analog signal at the output of G1 originated from either crystal is used to measure the pulse shape. The width of the pulse at a certain fractional level of the signal peak voltage is measured using a counter. The output value of the counter is recorded as the pulse shape value of the event signal. The pulse height information is
Figure 16: (Left:) the schematic block diagram of the electronics and readout system for the phoswich detector. (Right:) assembled phoswich detector along with the collimator and detector electronics.
digitized using \\(10\\) bits ADC embedded in the processor providing 1023 channels. The processing of an event in different stages and the corresponding time lapse is shown in Fig. 17.
The data is stored into a buffer memory (RAM), for a preset time interval which depends on the experimental condition with expected maximum count rate. For example, if the experiment is done under high radiation environment, to limit the buffered memory, we must have a lower preset time. After that the buffered data is written on the data storage unit from the memory.
The key features of the phoswich electronics module are:
* Amplify phoswich detector output pulses while retaining the original shape of the pulse. This is to measure both energy and shape of the pulse to allow identification of the origin of the pulse (i.e., NaI or CsI crystal).
* Generate regulated power supply from the core power supply (\\(16\\) V, \\(10000\\) mAh battery backup) and provide appropriate Low Voltage DC (LVDC) to the electronics circuits and High Voltage DC (HVDC) to the PMT of the detector.
* Work in space-like environment in a temperature range of \\(-5^{\\circ}\\)C to \\(+35^{\\circ}\\)C (inside the temperature shielding) and qualify in the range from \\(-10^{\\circ}\\)C to \\(+40^{\\circ}\\)C without significant change in the performance.
Figure 17: Different stages of processing of an event pulse. First \\(4-5\\)\\(\\mu\\)s is used to measure the pulse, next \\(5\\) (\\(10\\)) \\(\\mu\\)s is used to digitize (ADC) the pulse height for pulse from single (double) crystal. Another \\(5\\)\\(\\mu\\)s elapses to digitize the sun-sensor data and final \\(10\\)\\(\\mu\\)s is required to reset the system.
* Optimize the power and space requirements, resulting in smaller dimensions and lower weight, so as to be convenient for balloon borne programs.
#### 5.2.2 Digital data processing, acquisition and control unit
The digital data processing and acquisition system works almost identically to that described for the single crystal detector in Sec. 4.2.3. The only difference in this case is the generation of the PSD value to distinguish the origin of the events in the different crystals from their decay properties. This difference is shown in Fig. 18. In this case, each event energy or digitized PH value is also associated with its corresponding PSD count value, along with the other components mentioned for the single crystal detector, and packed together as a data packet unit to be stored in the temporary buffer.
#### 5.2.3 Data format for storing
The format of the event data used to write in the SD card is similar to that used in single crystal detector described in Sec. 4.2.4. But in this case, an extra \\(2\\) bytes for the PSD information and an extra delimiter of \\(1\\) byte between sub-second and PSD count data are required for each event. So we need a total of \\(13\\) bytes to record an event data. The division of the data packet for one event is shown in Fig. 19.
The extraction of the data from the SD card is similar to that for the single crystal detector as discussed in detail in Sec. 4.3.
#### 5.2.4 Power supplies for different units (DC-DC converters)
A high voltage DC-DC converter is used for generating the bias voltage (\\(\\sim 650\\) V) for the PMT. A low voltage DC-DC converter is used to provide the required supply voltage to drive the front-end

Figure 19: The packet format of a single event in the phoswich detector.
Figure 18: Flow chart for processing (Process1 part in Fig. 5) an event both in low and high energy parts and accumulating in the temporary buffer for the phoswich detector.
electronics of the detector. The whole power needed by the detector is supplied from the on board \\(10000\\) mAh, \\(16\\) V battery power system.
### Laboratory Tests
We have conducted similar tests in the laboratory, as discussed in Sec. 4.4, for the phoswich detector, since both types of detectors operate under similar conditions. The following functionality tests were conducted on the detector assembly.
* Detector performance test under normal laboratory condition.
* Energy-channel calibration and resolution of the detector.
* Detector performance stability under low pressure condition.
* Detector performance under variable temperature.
Here in the following sections we discuss briefly the tests performed on the detector assembly.
#### 5.3.1 Tests under normal laboratory condition
We used calibrating sources as mentioned in Sec. 4.4.1. To discriminate the spectrum from different crystals of phoswich data, first the PSD data is plotted. The plot helps in distinguishing events coming from NaI(Tl) or CsI(Na) crystals. Fig. 20 shows a PSD plot of the data for the Am\\({}^{241}\\) radiation source. We find two distinguishable peaks in the plot due to pulses in NaI and CsI with different decay times. The value at the minimum point near \\(\\sim 15\\) between the two peaks is taken as the PS cut value for separating events from two crystals during the analysis.
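The PS cut described here can be applied in the offline analysis with a few lines of code. The sketch below builds the pulse-shape histogram, takes the minimum between the two peaks as the cut, and splits the events into NaI and CsI populations; the array names and the pulse-shape range are assumptions for illustration.

```python
import numpy as np

def split_by_pulse_shape(ps_values, energies, ps_range=(0, 40)):
    """Separate NaI(Tl) and CsI(Na) events using the minimum of the PSD histogram.

    ps_values : pulse-shape (decay-time) value recorded for each event
    energies  : corresponding deposited energies
    """
    hist, edges = np.histogram(ps_values, bins=ps_range[1] - ps_range[0], range=ps_range)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # look for the valley between the two decay-time peaks (found near ~15 for the Am-241 data)
    i = 2 + np.argmin(hist[2:-2])
    cut = centres[i]
    nai = energies[ps_values < cut]       # faster decay -> NaI(Tl)
    csi = energies[ps_values >= cut]      # slower decay -> CsI(Na)
    return cut, nai, csi
```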
Fig. 21 shows the channel spectrum of the low energy data (from G1) for Am\\({}^{241}\\) source. The applied bias voltage is so chosen that the gain of the system permits desired energy range within measurable limit (output \\(3.3\\) V). The gain parameters can be adjusted from the post amplifier section. An adjustment is done so that the minimum bias voltage can be obtained with the highest resolution (at \\(59.5\\) keV of Am\\({}^{241}\\) source).
Figure 21: Channel spectrum of the events in NaI crystal in low energy from Am\\({}^{241}\\) radiation source.
Figure 20: Pulse decay time plot of the event pulses in phoswich using Am\\({}^{241}\\) radiation source. The two peaks near \\(10\\) and \\(24\\) are due to pulses generated in NaI and CsI respectively and the minimum value near \\(15\\) is the PS cut value to separate the pulses in two crystals.
#### 5.3.2 Calibration and resolution
The detector calibration provides the channel-energy relations for low energy events in NaI (G1) and high energy events (G2), which are shown in Fig. 22. A linear fitting of the data gives gain factors of \\(0.096\\pm 0.005\\) and \\(1.09\\pm 0.02\\) keV/channel respectively, and these relations are used to convert the PHs into energy deposition information.
The energy spectra of the detector for the Am\\({}^{241}\\) radiation source, obtained using this calibration information, are shown in Fig. 23 for both the low and high energy parts.
We calculated the resolution of the detector at various energies by fitting the detected lines using Gaussian functions, in a similar way as discussed in Sec. 4.4.2. For example, the resolution obtained for the detector at \\(59.5\\) keV of the Am\\({}^{241}\\) source is calculated as \\(18.72\\%\\). The energy spectrum of the detector showing the \\(59.5\\) keV line fitted with a Gaussian is plotted in Fig. 23. The gain factors and resolutions achieved in our design are comparable to those of other similar detectors used for satellite borne space experiments like RT-2[19] and BeppoSAX[20].
The gain stability of the detector for long duration operation was also tested under normal
Figure 22: Calibration of the detector for low energy events (left) and high energy events (right) using different radiation sources. For low energy calibration we used lines at 30.97 (Ba\\({}^{133}\\)), 32.19 (Cs\\({}^{137}\\)), 39.5 (Eu\\({}^{152}\\)) and 59.5 (Am\\({}^{241}\\)) keV. For high energy calibration we used lines at 344.44 (Eu\\({}^{152}\\)), 356.01 (Ba\\({}^{133}\\)) and 661.66 (Cs\\({}^{137}\\)) keV.
laboratory conditions and it was found to be satisfactory. For example, we show the dynamic energy spectra of the detector with time in Fig. 24 showing the peaks of Am\\({}^{241}\\) radiation source for a long time of about \\(11\\) ks. The 3\\(\\sigma\\) gain variation during this entire time, as measured w.r.t. the \\(59.5\\) keV line peak, is only about 18.6% of the energy resolution at that energy.
Figure 23: Energy spectrum of the low energy events in the detector from Am241 radiation source. The Gaussian fit of the line at 59.5 keV of the low energy spectrum gives 18.72% resolution. The residual plot of the fitting is shown in the lower panel.
Figure 24: Dynamic energy spectrum of the detector with time to test the stability of the detector for long operations. Am\\({}^{241}\\) source was used.
#### 5.3.3 Test under low pressure and low temperature condition
As in the case of single crystal detector, the phoswich detectors were tested under very low pressure condition. We gradually reduced the pressure inside a pressure vessel containing the detector till \\(\\sim 0.5\\) mBar (equivalent to pressure at \\(\\sim\\) 55 km in the atmosphere) to see any effect of the pressure variation on the detector operation. Fig. 25 shows that the detector operation presented in the upper panel in terms of the dynamic channel spectrum is unaffected by the pressure changes shown in the lower panel.
The phoswich detector has also been tested under low temperature conditions, as was done for the single crystal detectors, and showed no significant effect on its functionality.
Figure 25: Detector stability test under low pressure condition. The upper panel shows the channel spectrum for Am\\({}^{241}\\) source with time, while the lower panel shows the pressure variation.
### Detection of Crab radiation by phoswich detector
We show, in Fig. 26, an example of the radiation measurement using the phoswich detector during its flight onboard a plastic meteorological balloon on 7th May, 2017 (Mission Id. 102). The experiment was designed to measure radiation from the Crab pulsar. The closest approach of the Crab to the zenith during the time of the experiment was about \\(2^{\\circ}\\). So, we scheduled the experiment in such a way that the payload reaches near its burst altitude when the Crab is near the zenith. In this way we can minimize the atmospheric absorption of the source radiation. The detector viewing direction was aligned with the payload rotation axis (i.e., the zenith direction), so there was only a small modulation due to the free rotation of the payload. The sensitivity of the detector is limited by the atmospheric background radiation and the absorption of the source radiation in the atmosphere, which is altitude dependent. From the previous background measurement experiment, we calculated the minimum sensitivity of the detector at 40 km in the energy range of 20-60 keV to be \\(\\sim 200\\) mCrab[21]. There was no other source brighter than this inside the FoV (\\(15^{\\circ}\\)) of the detector during the experiment, which was confirmed using all sky monitor data onboard a satellite (Swift/BAT[22]). So the excess radiation at the peak, which is well beyond the \\(3\\sigma\\) significance level above the background, is indeed from the Crab pulsar.
The detected radiation count rate in the selected energy range of 25-60 keV shows the RP-max near \\(\\sim 15\\) km during ascent and descent. The origin of this radiation in the atmosphere has been discussed in Sec. 3.3 and in more detail in Ref. [5]. The radiation excess near the highest altitude of the payload indicates the detection of radiation from the Crab. The sudden dip in the count rate near the maximum altitude is due to the payload attitude change during the balloon burst. The Figure also shows the background counts in the absence of any significant astrophysical sources inside the FoV of the detector. This background data was taken from another mission (Mission Id. D96, on 15th October, 2016) with the same instrument. The timing information of the background counts was adjusted to compare with the source data, using the payload altitude information of both missions. A more detailed experimental methodology and results of the temporal and spectral measurement of the Crab radiation in another similar experiment can be found in Ref. [21].
## 6 Summary
Balloon borne space exploration has been an accepted and efficient method to observe high energy radiation from space for several decades. With the advent of miniaturized instruments, it has become possible to send payloads of mass less than 5-6 kg, containing the main science measurement units, location and attitude measurement units as well as the power supply for the entire mission, and still achieve significant science goals [2, 3, 5, 16]. Though these detectors are modest in size, the technology that is developed in the course of these experiments can be used to test any new detector
Figure 26: Radiation counts (black data points) detected by the phoswich detector during its entire flight onboard a plastic meteorological balloon (see text for description). Background data in absence of significant astrophysical radiation source inside detector FoV is shown by gray data points. The gray solid line shows the detector altitude profile.
concepts. These detectors can also be used for regular monitoring of the cosmic ray intensity, background etc., apart from the radiation study of astrophysical objects. In the present paper, we showed how even normal payloads such as scintillator detectors or phoswich detectors, which could have been flown to space with regular satellites or large balloons and rockets, may also be integrated into our low-mass and low-cost space exploration missions. We do not have pointing systems and do not transmit data to the ground. This led us to innovate new procedures to 'tag' every received photon with its directional information as obtained from the Inertial Measurement Unit (IMU) chipsets[4] and specially designed hardware to save data onboard in micro-SD cards. Our present tagging is done with an accuracy of \\(0.3-1.8\\) degrees depending on the rotational speed of the balloon. Our logic circuit circumvents the need for more sophisticated electronics by simply writing the data on SD cards at regular time intervals during which data is not collected. Our data quality is decided by the shielding materials[21]. The test and evaluation procedure of every instrument is carried out very strictly. We also carry out calibration of these instruments in near space conditions by simulating in-flight pressure and temperature variations. The time-stamped and attitude-tagged photons are analyzed keeping in mind the different levels of atmospheric absorption at different altitudes[21]. These details are beyond the scope of this paper and will be presented elsewhere.
### Disclosures
The authors have no relevant financial interests in the manuscript and no other potential conflicts of interest to disclose.
###### Acknowledgements.
The authors would like to thank Dr. S. Mondal, Mr. S. Chakraborty, Mr. S. Midya, Mr. H. Roy, Mr. R. C. Das and Mr. U. Sardar for their valuable help in various forms during the mission operations and data collection. This work has been done under partial financial support from the Science and Engineering Research Board (SERB, Department of Science and Technology, Government of India) project no. EMR/2016/003870. We also thank the Ministry of Earth Sciences (Government of India) for partial financial support. Grant-in-aid from the Department of Higher Education, Government of West Bengal is acknowledged by DB, SKC, RS and AB to carry out the research at ICSP.
## References
* [1] N. Yajima, N. Izutsu, T. Imamura, _et al._, _Scientific Ballooning: Technology and Applications of Exploration Balloons Floating in the Stratosphere and the Atmospheres of Other Planets_, Springer, Berlin (2009).
* [2] S. K. Chakrabarti, D. Bhowmick, S. Chakraborty, _et al._, \"Study of properties of cosmic rays and solar x-ray flares by balloon borne experiments,\" _IJP_**88**, 333-341 (2014).
* [3] S. K. Chakrabarti, R. Sarkar, D. Bhowmick, _et al._, \"Study of high energy phenomena from near space using low-cost meteorological balloons,\" _Exp. Astron._**43(3)**, 311-338 (2017).
* [4] R. Sarkar _et al._, \"Payload attitude measurement using micro-electronic inertia-measurement unit for small balloon borne missions,\" _Exp. Astron._**In Preparation** (2019b).
* [5] R. Sarkar, S. K. Chakrabarti, P. S. Pal, _et al._, \"Measurement of secondary cosmic ray intensity at regener-pfotzer height using low-cost weather balloons and its correlation with solar activity,\" _AdSpRes_**60**, 991-998 (2017).
* [6] \"Cambridge university space flight landing predictor.\" [http://predict.habhub.org](http://predict.habhub.org). Accessed: 19 Mar. 2019.
* [7] A. N. Charakhchyan, G. A. Bazilevskaya, Y. I. Stozhkov, _et al._, \"Investigation of the long-term variations of cosmic ray latitude effect in the earth atmosphere,\" in _Proc. 14th ICRC, Munchen_, _Proc. SPIE_**3**, 1020-1024 (1975).
* [8] \"Lnd, inc..\" [http://www.lndinc.com/products/geiger-mueller-tubes/712-2/](http://www.lndinc.com/products/geiger-mueller-tubes/712-2/). Accessed: 10 Mar. 2019.
* [9] E. Regener and G. Pfotzer, \"Intensity of the cosmic ultra-radiation in the stratosphere with the tube-counter,\" _Nature_**134**, 325 (1934).
* [10] \"Saint-gobain crystals.\" [https://www.crystals.saint-gobain.com/products/radiation](https://www.crystals.saint-gobain.com/products/radiation). Accessed: 15 Mar. 2019.
* [11] \"Emco high voltage.\" [https://www.xppower.com/Product/F-Series](https://www.xppower.com/Product/F-Series). Accessed: 19 Mar. 2019.
* [12] \"Dow corning.\" [https://consumer.dow.com/en-us/pdp.sylgard184siliconeelastome](https://consumer.dow.com/en-us/pdp.sylgard184siliconeelastome). Accessed: 15 Mar. 2019.
* [13] \"Friendly elec.\" [http://www.friendlyarm.com](http://www.friendlyarm.com). Accessed: 15 Mar. 2019.
* [14] \"Math works.\" [https://in.mathworks.com](https://in.mathworks.com). Accessed: 19 Mar. 2019.
* [15] \"Space weather prediction center.\" [http://www.swpc.noaa.gov](http://www.swpc.noaa.gov). Accessed: 10 Oct. 2018.
* [16] S. K. Chakrabarti, R. Sarkar, and D. Bhowmick, \"Observations of solar flares and compact x-ray sources using meteorological balloons,\" _Exp. Astron._**In Preparation** (2019).
* [17] L. E. Peterson, \"Instrumental technique in x-ray astronomy,\" _Ann. Rev. Astron. Astrophys._**13**, 423-509 (1975).
* [18] \"Scionix holland bv.\" [https://www.scionix.nl](https://www.scionix.nl). Accessed: 19 Mar. 2019.
* [19] D. Debnath, A. Nandi, A. R. Rao, _et al._, \"Instruments of rt-2 experiment onboard coronas-photon and their test and evaluation i: ground calibration of rt-2/s and rt-2/g,\" _Exp. Astron._**29(1-2)**, 1-25 (2011).
* [20] F. Frontera, E. Costa, D. dal Fiume, _et al._, \"The high energy instrument pds on-board the bepposax x-ray astronomy satellite,\" _A&A_**122**, 357-369 (1997).
* [21] R. Sarkar, S. K. Chakrabarti, D. Bhowmick, _et al._, \"Detection of crab radiation with a meteorological balloon borne phoswich detector,\" _Exp. Astron._**Accepted for publication** (2019a).
* [22] H. A. Krimm, S. T. Holland, R. H. D. Corbet, _et al._, \"The swift/bat hard x-ray transient monitor,\" _ApJSS_**209**, 14 (2013). | Indian Centre for Space Physics is engaged in studying terrestrial and extra-terrestrial high energy phenomena from meteorological balloon borne platforms. A complete payload system with such balloons is at the most about five kilograms of weight. One has to adopt innovative and optimal design for various components of the experiment, so that the data can be procured at decent heights of \\(\\sim 35-42\\) km and at the same time, some scientific goals are achieved. In this paper, we mainly describe the instruments in detail and present their test and calibration results. We discuss, how we implemented and tested three major instruments, namely, a Geiger-Muller counter, a single crystal scintillator detector and a phoswich type scintillator detector for our missions. We also present some flight data of a few missions to demonstrate the capability of such experiments.
X-ray detectors and instrumentation, Scintillator detectors, X-ray sources, Weather balloon-borne experiment.
*Ritabrata Sarkar, [email protected]_
## 1 Introduction
A _digital surface model (DSM)_ is an important and valuable data source for many remote sensing applications, like building detection and reconstruction, cartographic analysis, urban planning, environmental investigations and disaster assessment tasks. The use of DSMs for those remote sensing applications is motivated by the fact that they already provide geometric descriptions of the topographic surface. With recent advances in sensor technologies, it became possible to generate DSMs with a _ground sampling distance (GSD)_ smaller than 1 m not only from land surveying, aerial images, laser ranging data, or _interferometric synthetic aperture radar (InSAR)_, but also using satellite stereo images. The main advantages of satellite photogrammetric DSMs are the large land coverage and the possibility to access remote areas. However, DSMs generated with image-based matching approaches miss objects like steep walls in urban areas or feature unwanted outliers and noise due to temporal changes, matching errors or occlusions. To overcome these problems, algorithms from computer vision have been analyzed and adapted to satellite imagery. For example, filtering techniques such as a geostatistical filter integrated with a hierarchical surface fitting technique, a threshold slope-based filter, or a Gaussian noise removal filter are commonly used for DSM quality improvement. Moreover, some methodologies propose to fuse DSMs obtained from different data sources to compensate for the limitations and gaps which each of them has individually [1].
With recent developments devoted to deep learning, it became possible to achieve top scores on many tasks including image processing. As a result, several works have already investigated their applicability for remote sensing applications, like landscape classification, building and road extraction, or traffic monitoring. Recently, a class of neural networks called _generative adversarial networks (GANs)_ was applied to three-dimensional remote sensing data and proved to be suitable. Mainly, the generation of large-scale 3D surface models with refined building shapes at _level of detail (LoD)_ 2 from stereo satellite DSMs was studied using _conditional generative adversarial networks (cGANs)_[2, 3]. In this paper, we follow those ideas and propose a hybrid cGAN architecture which couples half-meter resolution satellite _panchromatic (PAN)_ images and DSMs to produce 3D surface models not only with refined 3D building shapes, but also with completed structures, more accurate outlines, and sharper edges.
## 2 Methodology
The birth of GAN-based domain adaptation neural networks, introduced by Goodfellow _et al._[4], yielded great achievements in generating realistic images. The idea behind the adversarial manner of learning is to train a pair of networks in a competing way: a _generator_\\(G\\) that tries to fool the discriminator by making the source domain look like the target domain as much as possible, and a _discriminator_\\(D\\) that tries to differentiate between the target domain and the transformed source domain. Taking the source distribution as input instead of a uniform distribution, and using this external information to restrict both the generator in its output and the discriminator in its expected input, leads to the conditional type of GANs. The objective function for cGANs can be expressed through a two-player minimax game

\\[G^{\\star}=\\arg\\min_{G}\\max_{D}\\mathcal{L}_{\\text{cGAN}}(G,D)+\\lambda\\mathcal{L}_{L_{1}}(G) \\tag{1}\\]
between the generator and the discriminator, where \\(G\\) intends to minimize the objective function \\(\\mathcal{L}_{\\text{cGAN}}(G,D)\\) against the \\(D\\) that aims to maximize it. Moreover, it should be mentioned that in the first term of Eq. (1) we use an objective function with least squares instead of the common negative log likelihood. The second term in Eq. (1) regularizes the generator and produces the output near the ground truth in an \\(L_{1}\\) sense.
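In a deep-learning framework the two terms of Eq. (1) translate into a least-squares adversarial loss plus a weighted L1 term. The following PyTorch sketch shows one way to write them; it is a schematic illustration rather than the exact implementation, and the weight for the L1 term is a placeholder since its value is not stated here.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()   # least-squares GAN objective instead of the negative log likelihood
l1 = nn.L1Loss()

def discriminator_loss(D, dsm, target, fake):
    """D sees the stereo DSM concatenated with either the ground-truth or the generated DSM."""
    real_pred = D(torch.cat([dsm, target], dim=1))
    fake_pred = D(torch.cat([dsm, fake.detach()], dim=1))
    return 0.5 * (mse(real_pred, torch.ones_like(real_pred)) +
                  mse(fake_pred, torch.zeros_like(fake_pred)))

def generator_loss(D, dsm, target, fake, lam=100.0):
    fake_pred = D(torch.cat([dsm, fake], dim=1))
    adv = mse(fake_pred, torch.ones_like(fake_pred))   # try to make D label the fake as "real"
    return adv + lam * l1(fake, target)                # L1 term keeps the output near the ground truth
```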
In our previous work, we already adapted the architecture proposed by Isola _et al._[5] to obtain refined 3D surface models from the noisy and inaccurate stereo DSMs. Now, we propose a new cGAN architecture that integrates depth information from stereo DSMs together with spectral information from PAN images, as the latter provides sharper information about building silhouettes, which allows not only a better reconstruction of building outlines but also of their missing construction parts. Since intensity and depth information have different physical meanings, we propose a hybrid \\(G\\) network where two separate _UNet_[6]-type networks with the same architecture are used: we feed one with the PAN image and the other with the stereo DSM, generating a so-called _WNet_ architecture. Before the last upsampling layer, which leads to the final output size, we concatenate the intermediate features from both streams. Moreover, we extend the network with an additional convolutional layer of size \\(1\\times 1\\), which plays the role of fusing information from the different modalities. As investigated earlier, this fusion can correct small failures in the predictions by automatically learning which stream of the network provides the best prediction result [7]. Finally, the \\(\\tanh\\) activation function \\(\\sigma_{\\text{tanh}}(z)=\\tanh(z)\\) is applied to the top layer of the \\(G\\) network. \\(D\\) is represented by a binary classification network with a _sigmoid_ activation function \\(\\sigma_{\\text{sigm}}(z)=\\frac{1}{1+\\text{e}^{-z}}\\) applied to the top layer to output the probability that the input image belongs either to class 1 (\"real\") or class 0 (\"generated\"). It has five convolutional layers, each followed by a leaky _rectified linear unit (ReLU)_ activation function
\\[\\sigma_{\\text{leaky ReLU}}(z)=\\begin{cases}z,&\\text{if }z>0\\\\ az,&\\text{otherwise}\\end{cases}\\]
with a negative slope \\(a\\) of 0.2. The input to \\(D\\) is a concatenation of a stereo DSM with either a WNet-generated 3D surface model or a ground-truth DSM. A simplified representation of the proposed network architecture is demonstrated in Fig. 1.
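The following is a schematic PyTorch sketch of this design: two parallel encoder-decoder streams (PAN and DSM) fused before the last upsampling layer through concatenation and a \(1\times 1\) convolution with a \(\tanh\) output, plus a five-layer discriminator with leaky ReLU (slope 0.2) and a sigmoid. Layer counts and channel widths are illustrative assumptions; the actual WNet streams are full UNets.

```python
import torch
import torch.nn as nn

class MiniUNetStream(nn.Module):
    """A deliberately small encoder-decoder standing in for one UNet stream;
    it returns intermediate features at half of the input resolution."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(True))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(True))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(True))
        self.dec2 = nn.Conv2d(base * 4, base, 3, padding=1)  # applied after the skip concatenation

    def forward(self, x):
        e1 = self.enc1(x)                               # full resolution
        e2 = self.enc2(e1)                              # 1/2 resolution
        e3 = self.enc3(e2)                              # 1/4 resolution
        d3 = self.dec3(e3)                              # back to 1/2 resolution
        return self.dec2(torch.cat([d3, e2], dim=1))    # UNet-style skip connection

class WNetGenerator(nn.Module):
    """Two parallel streams (PAN and DSM) fused before the last upsampling layer."""
    def __init__(self, base=32):
        super().__init__()
        self.pan_stream = MiniUNetStream(in_ch=1, base=base)
        self.dsm_stream = MiniUNetStream(in_ch=1, base=base)
        self.up = nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1)  # last upsampling
        self.fuse = nn.Conv2d(base, 1, kernel_size=1)                          # 1x1 fusion layer

    def forward(self, pan, dsm):
        f = torch.cat([self.pan_stream(pan), self.dsm_stream(dsm)], dim=1)
        return torch.tanh(self.fuse(torch.relu(self.up(f))))

class Discriminator(nn.Module):
    """Five convolutional layers with leaky ReLU (slope 0.2) and a sigmoid on top;
    the input is the stereo DSM concatenated with a generated or ground-truth LoD2 DSM."""
    def __init__(self, in_ch=2, base=64):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4, base * 8):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True)]
            ch = out_ch
        layers.append(nn.Conv2d(ch, 1, 4, padding=1))   # fifth convolutional layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.sigmoid(self.net(x))
```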
## 3 Study Area and Experiments
Experiments have been performed on WorldView-1 data showing the city of Berlin, Germany, within a total area of 410 km\\({}^{2}\\). As input data, we used a stereo DSM and one of six very high-resolution PAN images, both with a resolution of 0.5 m. The PAN image is orthorectified. As ground truth, the LoD2-DSM, generated with a resolution of 0.5 m from a _city geography markup language (CityGML)_ data model, was used for learning the mapping function between the noisy DSM and the LoD2-DSM with better building shape quality. The detailed methodology on LoD2-DSM creation is given in our previous work. A CityGML data model is freely available on the download portal Berlin 3D ([http://www.businesslocationcenter.de/downloadportal](http://www.businesslocationcenter.de/downloadportal)).
The implementation of the proposed WNet-cGAN is done with the _PyTorch_ python package. For the training process, the satellite images were tiled into patches of size 256 \(\times\) 256 px to fit into a single NVIDIA TITAN X (Pascal) GPU with 12 GB of memory. The total number of epochs was set to 200 with a batch size of 5. We trained the DSM-to-LoD2 WNet-cGAN network with minibatch _stochastic gradient descent (SGD)_ using the ADAM optimizer. The initial learning rate was set to \(\alpha=0.0002\) and the momentum parameters to \(\beta_{1}=0.5\) and \(\beta_{2}=0.999\).
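A minimal sketch of this training configuration is given below, reusing the loss helpers sketched in Section 2; the dataset object yielding (PAN, DSM, LoD2) patch triplets and the alternating update scheme are assumptions of this sketch, not the exact training script.

```python
import torch
from torch.utils.data import DataLoader

def train(generator, discriminator, dataset, epochs=200, batch_size=5, lr=2e-4):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    # Adam with the reported hyper-parameters: lr = 0.0002, betas = (0.5, 0.999).
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for pan, dsm, lod2_gt in loader:   # 256 x 256 patch triplets
            opt_d.zero_grad()
            # discriminator_loss / generator_loss as sketched in Section 2
            discriminator_loss(generator, discriminator, pan, dsm, lod2_gt).backward()
            opt_d.step()
            opt_g.zero_grad()
            generator_loss(generator, discriminator, pan, dsm, lod2_gt).backward()
            opt_g.step()
```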
## 4 Results and Discussion
Two selected areas of the resulting LoD2-like DSM generated from combined spectral and depth information, together with the LoD2-like DSM from a single image, are illustrated in Fig. 2. From Fig. 2b and Fig. 2g we can see that the refinement of building shapes from stereo DSMs alone is a very challenging task, for several reasons. First of all, the presence of vegetation can influence the reconstruction, as some parts of buildings are covered by trees. Besides, the stereo DSM is itself very noisy, due to failures in the generation algorithms. This means that in most cases the roof types and, as a result, their shapes are indistinguishable. On the other hand,
Figure 1: Schematic overview of the proposed architecture for the building shape refinement in the 3D surface model by WNet-cGAN using depth and spectral information.
looking at Fig. 2a and Fig. 2f we can see that the edges and outlines are very well visible in the PAN image. Refinement of 3D buildings from the PAN image alone, though, would be very difficult, as it does not contain the 3D information, which is essential. Therefore, the combination of these two types of information is a good compromise which combines their advantages.
It can be clearly seen that the hybrid WNet-cGAN architecture is able to reconstruct more complete building structures than the cGAN trained on a single data source (see the highlighted buildings in Fig. 2d). Even complicated building forms are preserved in the reconstructed 3D surface model. An obvious example is the zigzag-shaped building in the upper-left part of Fig. 2d. This information could only be obtained from the PAN image (see Fig. 2a).
The second example depicts a smaller but scaled area for better visual investigation. Here, the central building is complete and more details are distinguishable. Besides, the ridge lines of the roofs are also much better visible. One can even guess to which roof type parts of a building belong: gable or hip. A clear contribution of the spectral information to the building shape refinement task can be seen at the upper-right building in Fig. 2i. We can notice that this building structure is more complete. The outlines of all buildings are more clearly rectilinear and the building shapes become more symmetrical. To look at the 3D information in more detail, we illustrate some building profiles (Fig. 3). We can see that roof forms like gable and hip are clearly improved. The ridge lines tend to have sharp peaks. With the profile in Fig. 3d we again highlight the ability of the proposed architecture to reconstruct even complicated buildings, which is difficult using stereo DSM information alone.
To quantify the quality of the generated DSMs, we evaluated the metrics _mean absolute error (MAE)_, _root mean squared error (RMSE)_, _normalized median absolute deviation (NMAD)_ and _normalized correlation coefficient (NCC)_, commonly used for 3D surface model accuracy investigation, on the cGAN and WNet-cGAN setups, and report their performance in Table 1. As we are interested in quantifying the improvements of the building shapes on the DSMs only, the above-mentioned metrics were measured only within the areas where buildings are present, plus a three-pixel buffer around each of them. This was achieved by employing the binary building mask and a dilation procedure on the footprint boundaries. From the obtained results we can see that the DSM from WNet-cGAN is better than the original stereo DSM and the DSM generated by the cGAN model on all proposed metrics. This is reasonable, as the spectral information provides additional cues that help to reconstruct the building structures more accurately and in more detail, which is not possible using the stereo DSM alone. This feature especially influences
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & **MAE, m** & **RMSE, m** & **NMAD, m** & **NCC, m** \\\\ \\hline
**Stereo DSM** & 3.00 & 5.97 & 1.48 & 0.90 \\\\
**cGAN** & 2.01 & 4.78 & 0.86 & 0.92 \\\\
**WNet-cGAN** & **1.79** & **4.36** & **0.67** & **0.94** \\ \hline \end{tabular}
\\end{table}
Table 1: Prediction accuracies of the cGAN and WNet-cGAN models on the investigated metrics over the Berlin area.
Figure 2: Visual analysis of DSMs, generated by stereo cGAN and WNet-cGAN architectures, over selected urban areas. The DSM images are color-shaded for better visualization.
the corners, outlines, and ridge lines. As the NCC metric indicates how well the form of an object resembles the ground-truth object, the gain of 4 % compared to the stereo DSM and the 2 % improvement over the DSM generated by the cGAN model across the whole test area, which includes thousands of buildings, demonstrate the advantage of using complementary information for such complicated tasks. The high RMSE values (on the order of 5 m) are due to the difference in acquisition time between the available DSM generated from stereo satellite images and the given ground-truth data. As a result, several buildings are missing from or newly constructed in the more recent data set.
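For reference, the evaluation protocol described above can be sketched as follows; the only assumptions beyond the text are the function and argument names and the standard 1.4826 normalization constant of the NMAD.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dsm_metrics(pred_dsm, gt_dsm, building_mask, buffer_px=3):
    # Evaluate only inside building footprints plus a three-pixel buffer.
    mask = binary_dilation(building_mask, iterations=buffer_px)
    p, g = pred_dsm[mask], gt_dsm[mask]
    diff = p - g
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    ncc = np.corrcoef(p, g)[0, 1]   # normalized correlation coefficient
    return {"MAE": mae, "RMSE": rmse, "NMAD": nmad, "NCC": ncc}
```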
## 5 Conclusion
Refinement and filtering techniques from the literature for _digital surface model (DSM)_ quality improvement are adequate either for small-scale DSMs or for DSMs with no discontinuities. As a result, there is a need to develop a refinement procedure that can handle discontinuities, mainly building forms in urban regions, in high-resolution large-scale DSMs. A common strategy in remote sensing for refinement procedures is the use of all available information from different data sources. Their combination helps to compensate for the errors and gaps in each individual data source.
We present a method for automatic large-scale DSM generation with building shapes refined to _level of details (LoD)_ 2 from multiple spaceborne remote sensing data sources on the basis of _conditional generative adversarial networks (cGANs)_. The designed end-to-end WNet-cGAN integrates the contextual information from height and spectral images to produce good-quality 3D surface models. The obtained results show the potential of the proposed methodology to generate more complete building structures in DSMs. The network is able to learn how to complement the strengths and weaknesses of the _panchromatic (PAN)_ image and the stereo DSM: for instance, the stereo DSM provides elevation information about the objects, while the PAN image provides texture information and, as a result, more accurate building boundaries and silhouettes.
## References
* [1] M. Karkee, B. L. Steward, and S. A. Aziz, \"Improving quality of public domain digital elevation models through data fusion,\" _Biosystems Engineering_, vol. 101, no. 3, pp. 293-305, 2008.
* [2] K. Bittner, P. d'Angelo, M. Körner, and P. Reinartz, "Automatic large-scale 3D building shape refinement using conditional generative adversarial networks," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, Utah_, 2018, pp. 18-22.
* [3] K. Bittner, P. d'Angelo, M. Körner, and P. Reinartz, "DSM-to-LoD2: Spaceborne stereo digital surface model refinement," _Remote Sensing_, vol. 10, no. 12, p. 1926, 2018.
* [4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" in _Advances in neural information processing systems_, 2014, pp. 2672-2680.
* [5] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, \"Image-to-image translation with conditional adversarial networks,\" _ArXiv preprint arXiv:1611.07004_, 2016.
* [6] O. Ronneberger, P. Fischer, and T. Brox, \"U-net: Convolutional networks for biomedical image segmentation,\" in _International Conference on Medical Image Computing and Computer-Assisted Intervention_, Springer, 2015, pp. 234-241.
* [7] K. Bittner, F. Adam, S. Cui, M. Körner, and P. Reinartz, "Building footprint extraction from VHR remote sensing images combined with normalized DSMs using fused fully convolutional networks," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 11, no. 8, pp. 2615-2629, 2018.
Figure 3: Visual analysis of selected building profiles in generated DSMs.

We describe the workflow of a _digital surface model (DSM)_ refinement algorithm using a hybrid _conditional generative adversarial network (cGAN)_ whose generative part consists of two parallel networks merged at the last stage, forming a WNet architecture. The inputs to the so-called WNet-cGAN are stereo DSMs and _panchromatic (PAN)_ half-meter resolution satellite images. Fusing these helps to propagate fine detailed information from the spectral image and to complete the missing 3D knowledge about building shapes from the stereo DSM. Besides, it refines the building outlines and edges, making them more rectangular and sharp.
Ksenia Bittner\({}^{1}\), Marco Körner\({}^{2}\), Peter Reinartz\({}^{1}\)
\({}^{1}\) Remote Sensing Technology Institute, German Aerospace Center (DLR), Wessling, Germany -
(ksenia.bittner, peter.reinartz)@dlr.de
\\({}^{2}\\) Technical University of Munich, Munich, Germany - [email protected]
Conditional generative adversarial networks, digital surface model, 3D scene refinement, 3D building shape, data fusion, satellite images
# Accuracy Assessment of 3D Models Generated From Google Street View Imagery
N. Bruno\({}^{1}\), R. Roncella
Universita degli Studi di Parma, Dipartimento di Ingegneria e Architettura, Parco Area delle Scienze 181A, 43124, Parma, Italy - [email protected], [email protected]
Footnote 1: Corresponding author
## 1 Introduction
Nowadays, the amount of images and open data available on-line is constantly increasing. The literature provides many examples of public domain and freely available images used for 3D reconstruction. For instance, 3D reconstruction can be obtained from generic tourist photos gathered from the web (Wahbeh et al., 2016), frames extracted from videos (Condorelli and Rinaudo, 2018), crowdsourced images (Somogyi et al., 2016; Frahm et al., 2013) and so on.
Among all the web image collections, services such as Google Street View represent a wide database of images at street level (Anguelov et al., 2010), freely available and continuously updated. It was launched in several cities in the United States in 2007 and today supplies a great amount of geo-referenced panoramic images that cover many areas of the world, including cities and rural areas. Such a database can represent a great opportunity for several applications, ranging from quick 3D city model reconstruction (Cavallo, 2015; Torii et al., 2009; Micusik and Kosecka, 2009), to historical documentation of urban areas, monitoring, reconstruction of lost buildings, forensic analyses (Abate et al., 2018) and so on.
In this context, several works addressed the possibilities given by Google Street View imagery (Cavallo, 2015), but few have investigated metric applications and evaluated the accuracy of 3D models obtained by processing multiple Google Street View images of the same area.
To record the actual dimensions of the space being photographed, Google vehicles are also equipped with laser scanners that measure up to 50 meters over \(180^{\circ}\) in front of the vehicle (Cumminis, 2012). Generally, consecutive panoramas are acquired with an average distance of 5-10 m (Agarwal et al., 2015); they can be compared to a traditional photogrammetric strip and, thus, processed to reconstruct portions of a city at nearly zero cost.
The latest updates in many software solutions (e.g. PhotoScan by Agisoft, Pix4Dmapper by Pix4D, MicMac, etc.) have implemented the spherical camera model, making it possible to directly process spherical (equirectangular) images. Today, in fact, the rapid development of low-cost sensors for the acquisition of spherical images (Barazzetti et al., 2018), without the need for complex stitching of images acquired with traditional pinhole cameras (Fangi, 2007; Fangi, 2009), has rekindled the interest in spherical photogrammetry.
In this context, the goal of the present work is to test the accuracy and reliability of the 3D models obtained from Google Street View panoramas. In addition, a workflow was implemented to automatically download and process the images, in order to speed up and simplify image processing, also over wide areas.
## 2 Google Street View Images
Google Street View is a technology implemented in several Google services/applications (e.g. Google Maps, Google Earth) which provides the user, interested in viewing a particular location on the map, with panoramic images at street level. Since its introduction in 2007, data quality, location coverage and connected services have grown constantly, with an ever-increasing base of active and passive users (since 2012 users can provide panoramas acquired from a single spot, and since 2017 they can create their own Street View route). In recent years the locations covered by the service have grown as well, including the interiors of relevant monuments/landmarks (e.g. the White House), museums, natural/recreational parks, and so on. Since 2014 the platform allows to show (where available) panoramas acquired at different epochs, making the tool particularly useful to assess the previous state of relevant features. Finally, in recent years (2017), the equipment used for panorama acquisition has been improved with higher-resolution cameras, the introduction of a laser scanner, and better GNSS (Global Navigation Satellite System) and Inertial Navigation Systems (INS) to evaluate the location and pose of every panorama. The equipment is usually mounted on vehicles (vans, cars, etc.), but can also be used with tricycles, boats, snowmobiles, or inserted in backpacks and trolleys to allow the acquisition of the data in otherwise unreachable locations.
Depending on the devices used to record panorama and navigation data, Google identifies two different product quality levels: Street View ready and Street View ready Pro Grade, with the latter considered to provide a high degree of accuracy and image quality.
As far as data access is concerned, several options can be considered, the simplest, but least customizable, being the use of web or desktop applications such as Google Maps or Google Earth. At the same time, Google provides public APIs (Application Programming Interfaces) for requesting panoramas and associated data/metadata of a given area. The REST (Representational State Transfer) Street View Static API allows the transfer of data with standard HTTP (HyperText Transfer Protocol) requests, but the same operations can be performed with JavaScript commands using the JavaScript Google Maps API. Several free and open source tools, based on both APIs, are available on the web for downloading and processing Street View panoramas, with different levels of customization. In the present work an in-house developed code based on the REST API has been used.
Panoramas are distributed in equi-rectangular projection at different levels of resolution ("zoom"), ranging from 416\(\times\)208 pixels (zoom = 0) to 13312\(\times\)6656 pixels (zoom = 5).
Most photogrammetric software packages nowadays implement spherical camera models and can directly process images in equi-rectangular projection. However, especially for commercial software, it is usually hard to infer exactly how the Structure from Motion (SfM), Bundle Adjustment (BA) and dense image matching procedures are implemented. Even if image formation follows the same rules (central projection of object points) regardless of the projection used to represent the image data, with the equi-rectangular projection the images have an additional distortion effect besides the actual perspective transformation. In particular, especially outside the central (equatorial) part of the image, image features are stretched along the horizontal (longitude) direction. How this issue is tackled by the software is often omitted in user reference manuals and user forums and is therefore unknown. Some authors (see for instance Barazzetti et al., 2014) showed that using a pin-hole based dataset might lead to better results than others where different projections, with stronger (usually anisotropic) scale variations, are used.
By accessing the Google Maps REST API, further information can be extracted and injected into the EXIF header of the images (GPS location, panorama orientation, associated depth map, etc.). The methodology used for the automatic download of the Google panoramas is summarized in Figure 1.
The developed application allows the user to specify a location expressed in WGS84 coordinates or with its corresponding PanoID (a unique alphanumeric string of characters that identifies the panorama image and can be found inside the URL (Uniform Resource Locator) of the selected panorama) and automatically finds all the neighbouring panoramas. Otherwise, the user can specify a list of (WGS84) locations or PanoIDs (or panorama URLs) and the application downloads every single image.
Subsequently, for each image, the application (1) sends an HTTP request to download the information (metadata) associated with the selected panorama image. Such data is transferred as a JSON (JavaScript Object Notation) file which contains several pieces of information: the time of acquisition, the panorama location and orientation in the WGS84 reference system, the maximum zoom (i.e. resolution) available for the panorama, and the depth map associated with the panorama image.
At the same time, the image data is downloaded (2): the API requires that each single tile (the panorama is divided into tiles of 512\(\times\)512 pixels) is requested separately; the total image is then composed from the subsequent requests.
Exterior orientation parameters, stored in the JSON metadata file, are written in the EXIF of each single panorama (3).
Metadata are further processed (4) to extract the depth information: the data is optimized (compressed) for web transmission, and additional operations are required to obtain a depth map representation of the scene acquired in the panorama (an in-depth description of the computation algorithm can be found in Cavallo, 2015). The final depth map (512\(\times\)256 values) can then be used to obtain a point cloud with associated RGB information coming from the panoramic image.
It is worth noting that, to limit the size of the depth data stored in the JSON file, the scene is simplified into a set of planar regions (the maximum allowed number of planes is 255) which approximate the spatial distribution of the points acquired by the laser scanning equipment. The available data should not be considered useful for accurate 3D reconstruction, but can provide some information about the approximate depth of the scene.
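As an illustration, the following NumPy sketch turns a decoded 512\(\times\)256 depth map and the corresponding panorama into a coloured point cloud. Depth values are assumed to be metric distances along each viewing ray, and the axis conventions are illustrative; they must be adapted to the actual decoded data.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb_panorama):
    h, w = depth.shape                                # e.g. 256, 512
    x, y = np.arange(w), np.arange(h)
    lon = (x - w / 2) * (2 * np.pi / w)               # longitude per column
    lat = (y - h / 2) * (np.pi / h)                   # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    # Unit ray directions for every pixel of the (downsampled) panorama grid.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.cos(lat) * np.cos(lon),
                     np.sin(lat)], axis=-1)
    valid = np.isfinite(depth) & (depth > 0)
    points = dirs[valid] * depth[valid, None]
    # Sample colours from the full-resolution panorama at the same relative position.
    ph, pw, _ = rgb_panorama.shape
    ys, xs = np.nonzero(valid)
    colors = rgb_panorama[ys * ph // h, xs * pw // w]
    return points, colors
```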
At the end of the image download procedure, the equi-rectangular panoramas can be converted into pin-hole projection: the user specifies the area of the panorama to extract. Since a spherical image spans a 360\(^{\circ}\times\)180\(^{\circ}\) Field of View (FOV), it cannot be entirely converted into a single planar pin-hole image. The conversion is straightforward: the image data is resampled with the following procedure:
1. The object space reference system (XYZ) is rotated (X'Y'Z') so that its Z' axis passes through the centre of the (planar) image space (\(\xi\eta\)) of the pin-hole model, and its X' and Y' axes are parallel, respectively, to the \(\xi\) and \(\eta\) axes.
2. The equi-rectangular image data (whose resolution is \(w\times h\) pixels) is considered lying on a spherical surface (unit sphere); image coordinates \((x,y)\) are converted into longitude (\(\varphi\)) and latitude (\(\lambda\)) of the projected point on the unit sphere: \[\varphi=\frac{(x-x_{0})\cdot\pi}{h}\tag{1}\] \[\lambda=\frac{(y-y_{0})\cdot\pi}{h}\tag{2}\] where \(x_{0}=w/2\) and \(y_{0}=h/2\) represent the centre of the equi-rectangular image.
3. The generic point (\(\varphi,\lambda\)) casts a ray in object space which passes through the origin, with the following parametric equation: \[\begin{vmatrix}X\\ Y\\ Z\end{vmatrix}=\begin{vmatrix}t\cdot\cos\varphi\sin\lambda\\ t\cdot\sin\varphi\sin\lambda\\ t\cdot\cos\lambda\end{vmatrix}\tag{3}\]
4. The rotation computed at point (1.) is applied to the parametric vector of eq. 3;
5. Considering a valid value for the parameter \(t\) (e.g. \(t=1\)), the resulting array represents the homogeneous
Figure 1: The developed pipeline.
coordinates of the corresponding point in image space equivalent to the point \\(|\\xi\\quad\\eta\\quad 1|\\).
If inverse transformation (from pin-hole to equi-rectangular image space) is required the same approach can be used:
* The homogeneous coordinates of a generic pin-hole image points are considered (\\(|\\xi\\quad\\eta\\quad 1|\\));
* The homogeneous vector is normalized and represents the corresponding ray in (pin-hole) image space;
* The array is transformed with the inverse of the rotation described in (1.). The resulting array (\(a\quad b\quad c\)) represents the intersection between the cast ray and the unit sphere;
* It's corresponding longitude and latitude on unit sphere can be easily computed: \\[\\varphi=\\sin^{-1}b\\] (4) \\[\\lambda=\\tan^{-1}(a/c)\\] (5)
* Inverting eqs. (1) and (2) it is possible to derive the image coordinates of the corresponding point in equi-rectangular image space.
If exterior orientation (EO) parameters of the original panorama are known, it's quite simple to derive the corresponding EO parameters of the newly generated pin-hole (planar) image, considering that the centre of perspective is the same for the two projective models (i.e. camera centre is the same) and the rotation of the planar image space derives from the composition of the equi-rectangular pose and the rotation computed in the previous step (1.).
As far as interior orientation (IO) parameters are concerned, once the area to be extracted (selected by the user) is known, and considering that in the previous approach the pin-hole image plane is tangent to the unit sphere, the principal distance (1 unit) and the image plane size (depending on the choice of the user) can be easily computed. Usually, spherical images are considered already corrected w.r.t. lens distortion.
The whole procedure was automated with Matlab code and an example of the output is shown in Figure 2.
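The following NumPy sketch reproduces the conversion through the inverse mapping described above (the original implementation was in Matlab): for every pixel of the output pin-hole image a ray is cast, rotated into the panorama frame and mapped back to equi-rectangular pixel coordinates. The yaw/pitch parameterization of the extracted view, the axis conventions and the nearest-neighbour resampling are simplifying assumptions of this sketch.

```python
import numpy as np

def rotation_yaw_pitch(yaw, pitch):
    """Rotation aligning the pin-hole optical axis with the chosen view direction."""
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return r_yaw @ r_pitch

def equirect_to_pinhole(pano, out_size=(1024, 1024), hfov_deg=90.0, yaw=0.0, pitch=0.0):
    ph, pw = pano.shape[:2]
    H, W = out_size
    f = (W / 2) / np.tan(np.radians(hfov_deg) / 2)        # principal distance in pixels
    u, v = np.meshgrid(np.arange(W) - W / 2, np.arange(H) - H / 2)
    rays = np.stack([u, -v, np.full_like(u, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    rays = rays @ rotation_yaw_pitch(yaw, pitch).T         # rotate into the panorama frame
    lon = np.arctan2(rays[..., 0], rays[..., 2])           # longitude of each ray
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))      # latitude of each ray
    x = (lon / (2 * np.pi) + 0.5) * pw                     # back to equi-rectangular pixels
    y = (0.5 - lat / np.pi) * ph
    xi = np.clip(x.astype(int), 0, pw - 1)
    yi = np.clip(y.astype(int), 0, ph - 1)
    return pano[yi, xi]                                    # nearest-neighbour resampling
```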
## 3 Accuracy Evaluation
To evaluate the accuracy of the photogrammetric models obtained by processing images provided by Google Street View, different approaches have been tested. To ensure repeatability of the results, the methodology described has been tested in three different case studies with different geometric characteristics:
* a square (Piazza Duomo in Parma, Italy);
* a street (via Cardinal Ferrari, on the south side of Parma Cathedral, Italy);
* the internal courtyard of a palace (Costabili palace in Ferrara, Italy);
For each of these case studies, a reference dataset was available and was used to evaluate the accuracy of the models obtained from Google images. Piazza Duomo and via Cardinal Ferrari were surveyed in 2017, integrating total station, GPS and terrestrial laser scanning (TLS) measurements (Bruno and Roncella, 2018). Palazzo Costabili, instead, was the subject of a topographic and high-resolution close-range photogrammetric survey in 2015.
To evaluate the accuracy, a first aspect considered was the effect that different ground control solutions have on image orientation and dense matching. Three different control solutions have been applied to each dataset analysed.
* \"NO GPS\" solution: in this case, images were oriented using only Ground Control Point (GCP) coordinates. The GPS coordinates of the camera centre position provided by Google were not used as observation in Bundle Block Adjustment (BBA). In this way the accuracy was evaluated without considering possible errors introduced by inaccurate GPS measurements.
* \"GPS LW (Low Weight)\" solution: the ground control was based only on the GPS coordinated of camera centre position provided by Google. No GCP were used. However, the camera centre positions were used in the stochastic model of BBA with a low weight, so as not to constrain too much the relative orientation solution between panoramas and act only as georeferencing. This was important to evaluate the internal coherence of the panoramas without inserting particular constraints to the block.
* \"GPS HW (High Weight)\" solution: also in this case the ground control was based only on GPS camera centre coordinates, without any GCP. Differently from the previous solution, such coordinates were used with a high weight in BBA. The aim was to evaluate the accuracy of the GPS measurements provided by Google and their consistency with the topographic reference dataset.
In addition, it was evaluated how much the conversion from the equi-rectangular to the planar (pin-hole) projection model could improve image orientation and dense matching. In particular, from each dataset the planar projections of some limited areas were extracted and then processed applying the three different ground control solutions described before.
Figure 3: The different options for image processing.
Summarising, for each case study, six different processing tests have been done: using equi-rectangular or planar images and applying different ground control solutions (Figure 3).
All data were processed with Agisoft Photoscan using the same processing pipeline.
As far as image orientation is concerned, equi-rectangular images have been processed using the spherical camera model, while planar images were oriented using the IO parameters estimated during the conversion process. The tie point extraction was performed working with images at the original size ("high" accuracy in Photoscan terminology) and using the following set of parameters to define the different control solutions:
Dense matching was performed using images down-sampled by a factor of 2 ("high quality" in Photoscan terminology) so as not to overload the processing times. The Digital Surface Model (DSM) was generated setting the maximum number of polygons in the final mesh to 1/5 of the number of points in the previously generated dense point cloud ("high quality" in Photoscan terminology).
The results were checked by analysing the residuals on:
* re-projections (tie point BBA residuals of collinearity equations), to evaluate the internal consistency of the photogrammetric block;
* camera centre position residuals: in the solutions where (independent) GCPs were used, the camera centre residuals allowed to evaluate the accuracy of the GPS measurements; in the solutions where the ground control was based only on GPS camera centre coordinates, an evaluation of the internal consistency of the block is provided;
* check points residuals, to estimate the accuracy of the final object reconstruction.
Finally, the DSMs obtained from each dataset and from each processing pipeline were compared with the mesh surface model obtained by the high-resolution survey just mentioned. The two datasets were referred to the same reference system and a further alignment with the ICP algorithm was performed to reduce possible residual systematic effects.
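As an example of this comparison step, the sketch below uses Open3D (an assumption; the paper does not name the tool) to perform the ICP alignment and compute cloud-to-reference distance statistics.

```python
import numpy as np
import open3d as o3d

def align_and_compare(dense_cloud_xyz, reference_xyz, max_corr_dist=0.5):
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dense_cloud_xyz))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(reference_xyz))
    # ICP refinement to remove residual systematic offsets before comparing.
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(result.transformation)
    # Distance of every aligned point to its nearest reference point.
    dists = np.asarray(source.compute_point_cloud_distance(target))
    return dists.mean(), dists.std()
```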
## 4 Results
### Piazza Duomo case study
In this regard, masking the images to exclude the areas that show the sky, and manually identifying some tie points in the areas where object features are not matched, significantly improved panorama orientation.
Another problem observed during dense matching is that some parts of the image block are not registered correctly with the others (maybe due to the limited number of corresponding tie points), producing noticeable shifts between subsets of points in the final dense cloud. Figure 6 shows this effect over three sides of the square.
\\begin{table}
\\begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \\hline
**Control solution** & **CC\* enabled** & **GCP enabled** & **CC\* precision** & **GCP precision\*\*** \\ \hline NO GPS & no & yes & / & 0.02 m \\ \hline GPS LW & yes & no & 10 m & / \\ \hline GPS HW & yes & no & 0.01 m & / \\ \hline \multicolumn{5}{l}{\* Camera centre position} \\ \multicolumn{5}{l}{\*\* Set equal to 2 cm considering the accuracy of the reference surveys} \\ \end{tabular}
\\end{table}
Table 1: Parameters used for orientation process.
Figure 4: Piazza Duomo
Again, to reduce this relative misalignment, manual tie point identification has proven to be effective: tie points create additional constraints between portions of the model, reducing the shift and rotation effects that would otherwise occur.
On the basis of these considerations, in the equi-rectangular images presented in this paper the areas occupied by sky have been masked during feature extraction and manual tie points (used as GCP or CP) have been collimated.
On the contrary, planar images are not affected by the above-mentioned effects, probably because most of the image frames the object to be reconstructed. For this reason, in the following case studies, planar images were not masked, but manual tie points were collimated anyway and used as GCPs or CPs to check the orientation solution.
Below, stats on collinearity equation residuals, check points error and DSM comparisons are provided.
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Dataset (equi-rectangular)** & **GCP** & **CP** & **RMS CP** & **St. dev. X** & **St. dev. Y** & **St. dev. Z** \\ \hline
2015 NO GPS & 6 & 11 & 0.15 & 0.088 & 0.046 & 0.068 \\ \hline
2015 GPS LW & 0 & 17 & 7.218 & 1.056 & 0.534 & 0.694 \\ \hline
2015 GPS HW & 0 & 17 & 7.006 & 1.453 & 0.402 & 0.735 \\ \hline
2017 NO GPS & 6 & 11 & 0.261 & 0.169 & 0.067 & 0.157 \\ \hline
2017 GPS LW & 0 & 17 & 9.284 & 0.704 & 1.844 & 1.170 \\ \hline
2017 GPS HW & 0 & 17 & 9.784 & 0.906 & 1.980 & 1.366 \\ \hline
2015-17 NO GPS & 6 & 11 & 0.296 & 0.195 & 0.100 & 0.152 \\ \hline
2015-17 GPS LW & 0 & 17 & 7.839 & 0.833 & 1.390 & 1.068 \\ \hline
2015-17 GPS HW & 0 & 17 & 6.906 & 0.748 & 1.250 & 0.891 \\ \hline \end{tabular}
Table 2: Equirectangular dataset: stats on collinearity residuals.
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Dataset (equi-rectangular)** & **GCP** & **CP** & **RMS CP** & **St. dev. X** & **St. dev. Y** & **St. dev. Z** \\ \hline
2015 NO GPS & 6 & 16 & 0.327 & 0.208 & 0.159 & 0.141 \\ \hline
2015 GPS LW & 0 & 22 & 3.307 & 0.446 & 0.559 & 0.349 \\ \hline
2015 GPS HW & 0 & 22 & 3.493 & 0.694 & 0.559 & 0.327 \\ \hline
2017 NO GPS & 6 & 16 & 0.134 & 0.091 & 0.366 & 0.077 \\ \hline
2017 GPS LW & 0 & 22 & 5.891 & 0.595 & 1.536 & 0.726 \\ \hline
2017 GPS HW & 0 & 22 & 5.148 & 0.447 & 1.423 & 0.623 \\ \hline
2015-17 NO GPS & 6 & 16 & 0.211 & 0.159 & 0.176 & 0.093 \\ \hline
2015-17 GPS LW & 0 & 22 & 6.223 & 0.744 & 1.377 & 0.810 \\ \hline
2015-17 GPS HW & 0 & 22 & 10.94 & 2.502 & 1.908 & 1.544 \\ \hline \end{tabular}
Table 3: Equirectangular dataset: stats on residuals [m].
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Dataset (planar)** & **GCP** & **CP** & **RMS CP** & **St. dev. X** & **St. dev. Y** & **St. dev. Z** \\ \hline
2015 NO GPS & 6 & 11 & 0.15 & 0.088 & 0.046 & 0.068 \\ \hline
2015 GPS LW & 0 & 17 & 7.218 & 1.056 & 0.534 & 0.694 \\ \hline
2015 GPS HW & 0 & 17 & 7.006 & 1.453 & 0.402 & 0.735 \\ \hline
2017 NO GPS & 6 & 11 & 0.261 & 0.169 & 0.067 & 0.157 \\ \hline
2017 GPS LW & 0 & 17 & 9.284 & 0.704 & 1.844 & 1.170 \\ \hline
2017 GPS HW & 0 & 17 & 9.784 & 0.906 & 1.980 & 1.366 \\ \hline
2015-17 NO GPS & 6 & 11 & 0.296 & 0.195 & 0.100 & 0.152 \\ \hline
2015-17 GPS LW & 0 & 17 & 7.839 & 0.833 & 1.390 & 1.068 \\ \hline
2015-17 GPS HW & 0 & 17 & 6.906 & 0.748 & 1.250 & 0.891 \\ \hline \end{tabular}
Table 5: Planar dataset: stats on residuals [m].
\begin{tabular}{|c|c|c|} \hline
**Equi-rect/planar** & **Dataset** & **Standard deviation [m]** \\ \hline
Equi-rectangular & 2015 NO GPS & 0.244 \\ \hline
Equi-rectangular & 2017 NO GPS & 0.211 \\ \hline
Equi-rectangular & 2015-17 NO GPS & 0.208 \\ \hline
Equi-rectangular & 2015-17 NO GPS & 0.119 \\ \hline
Equi-rectangular & 2017-10 NO GPS & 0.113 \\ \hline
Equi-rectangular & 2015-17 NO GPS & 0.11 \\ \hline
Equi-rectangular & 2015-17 NO GPS & 0.11 \\ \hline \end{tabular}
### Cardinal Ferrari street case study
Cardinal Ferrari street is a quite narrow street (10\(\times\)100 m) located on the south side of the Cathedral of Parma. Also in this case three datasets are available (2014, 2015 and 2016), composed of 10, 11 and 11 images respectively, with an average base-length of 11 m. This case study represents the typical geometry of Google Street View image acquisition, where images are acquired along an approximately straight trajectory.
This block geometry is particularly disadvantageous for image orientation, first of all because the camera centres are aligned, leaving one degree of freedom (the rotation of the image block around the direction of the vehicle trajectory) that is hardly estimable if only GPS camera station positions are used. In addition, due to the conformation of a road (narrow and long), the distance from the camera centre of the elements framed in the image is highly variable. In particular, the elements on the street sides in front of the camera are very close to the shooting point (and therefore strongly deformed in the panoramas), the objects along the longitudinal development of the road have a high perspective distortion and, finally, the elements at the end of the street are points at almost infinite distance. Therefore, between consecutive panoramas there are remarkable depth changes, and the perspective distortion affecting the same element changes considerably. This jeopardizes image orientation and dense matching due to difficulties in finding feature correspondences.
In this case study, processing the three datasets (2014, 2015 and 2016) independently resulted in invalid image orientation or incomplete reconstruction of most of the elements during dense matching. Therefore, to entirely reconstruct the model, it was necessary to process the three datasets together, both in the case of equi-rectangular and planar images. In this way the base-lengths (and perspective variations) between consecutive images are reduced, improving the matching. Nevertheless, the dataset ("All GPS LW" in the table below), where no GCPs were used and the GPS control was applied with a low weight, failed to orient regardless of manual tie point identification.
The test using planar images has been made on two areas of the south side of the cathedral, using only the panoramas taken in front of these areas. Also in this case, it was necessary to process together all the images belonging to the different years. The use of the planar projection, indeed, does not overcome the previously mentioned problems related to the disadvantageous block geometry, since the re-projection simply removes the deformations produced by the spherical mapping.
On the contrary, pin-hole images allowed the orientation of all the datasets, regardless of the control solution adopted (Table 9).
Also in this case, the analysis of camera centre and CP residuals demonstrates the low consistency between GPS and GCP observations, even if it is higher than in the Piazza Duomo case study.
Despite the high residuals observed, all the models obtained have the correct dimensions (i.e. scale factor): relevant distances were sampled on the object and no significant discrepancies were observed (differences from the real lengths of up to a few centimetres).
As far as the DSM analysis is concerned, also in this case the reconstructed surface is rather noisy. All the models have been compared with the reference DSM obtained from the laser scanning survey and no significant discrepancies have been noticed between the different processing approaches. All the control solutions (GCP, GPS LW, GPS HW) have proven to be reliable, with residual standard deviations of ca. 10 cm. In
particular, planar projection slightly improved DSM reconstruction (Table 11).
### Costabili palace case study
Costabili palace is located in the city centre of Ferrara (Italy) and houses the National Archaeological Museum of Ferrara. It is a particular case study since, due to the increasing interest of Google in documenting public/cultural sites, panoramas have also been acquired inside the palace, enabling a virtual tour of the Museum.
The test presented here takes into account only the images acquired in the internal courtyard (Figure 9). It is a pedestrian area, so the acquisitions were done by walking operators and not by a moving car. This acquisition method has reasonably improved the GPS measurement accuracy, as will be shown by the results provided below.
Currently only one dataset is available: it dates to 2013 and is composed of 30 panoramas acquired with an average base-length of 3 m. Block geometry, conformation and site dimensions are advantageous: it is a small (20\(\times\)25 m) closed courtyard, avoiding the drawbacks related to acquisition along streets or in wide spaces seen in the previous examples. Images are arranged along the four sides of the courtyard perimeter, with a short base-length, which produces a high overlap between images. In addition, the shooting points are quite close to the building façades, so the buildings occupy the main part of the image frame, reducing the areas that show the sky. Nevertheless, for uniformity with the previous examples, also in this case the parts of the images that depict the sky were masked.
On the basis of these considerations, and as will be confirmed by the stats below, the expected accuracy should be higher than in previous case studies.
The analysis of residuals demonstrates a good accuracy of the results: the internal coherence of the block is very high, with re-projection residuals of only 1 pixel, and the stats on camera centre positions and check points show residuals of a few centimetres (mean check point RMS equal to 7 cm).
In particular, GPS observations of the camera centre positions are consistent with the GCP measurements, with residuals up to 14 cm (equi-rectangular) and up to 7 cm (planar). Using planar projections instead of equi-rectangular ones improves the results, in particular for the orientation solutions based on GPS control: in these cases the residuals can be halved. Using GCP control, instead, planar and panoramic images are equivalent.
The DSM analysis shows results in accordance with the ones relating to orientation. The average value of standard deviation is 4 cm and only the model processed using equi-rectangular images and high weight of the GPS control (GPS HW) has a higher standard deviation.
In general, the model surface is less noisy than in the previous case studies, probably thanks to the good overlap between panoramas and the high resolution of the object in the images.
## 5 Conclusions
The research presented in this paper aimed to evaluate the accuracy of photogrammetric models obtained from Google
\\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Dataset (planar)** & **Tie points** & **Projections n.** & **Reproj. res. [pix]** & **Camera centre res. [m]** \\ \hline South NO GPS & 5436 & 23288 & 1.26 & 0.057 \\ \hline South GPS LW & 5436 & 23288 & 1.26 & 0.066 \\ \hline South GPS HW & 5436 & 23288 & 1.25 & 0.061 \\ \hline Est NO GPS & 6277 & 31066 & 1.16 & 0.073 \\ \hline Est GPS LW & 6277 & 31066 & 1.16 & 0.069 \\ \hline Est GPS HW & 6277 & 31066 & 1.15 & 0.068 \\ \hline \end{tabular}
\\end{table}
Table 14: Planar dataset: stats on collinearity residuals.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline
**Equi-rect/planar** & **Dataset** & **Standard deviation [m]** \\\\ \\hline Equi-rectangular & All NO GPS & 0.101 \\\\ \\hline Equi-rectangular & All GPS HW & 0.102 \\\\ \\hline Planar & scarcity NO GPS & 0.086 \\\\ \\hline Planar & scarcity GPS LW & 0.085 \\\\ \\hline Planar & scarcity GPS HW & 0.091 \\\\ \\hline Planar & 3rd bay All NO GPS & 0.096 \\\\ \\hline Planar & 3rd bay All GPS LW & 0.097 \\\\ \\hline Planar & 3rd bay All GPS HW & 0.100 \\\\ \\hline \\end{tabular}
\\end{table}
Table 11: DSM comparison.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline
**Dataset** & \\multirow{2}{*}{**tie points**} & \\multirow{2}{*}{**Projections**} & \\multirow{2}{*}{**Res.**} & **Reproj.** & **Camera** \\\\
**(equal-rectangular)** & & & **n.** & **[pix]** & **res.[m]** \\\\ \\hline SO GPS & 12393 & 32052 & 3.8 & 0.260 \\\\ \\hline GPS LW & 12393 & 32052 & 3.8 & 0.213 \\\\ \\hline GPS HW & 12444 & 31842 & 3.93 & 0.101 \\\\ \\hline \\end{tabular}
\\end{table}
Table 12: Equirectangular dataset: stats on collinearity residuals.
Figure 9: Palazzo Costabili internal courtyard. Northern and southern façades.
Street View images. The tests conducted on three case studies showed a great variability of results, connected to the geometric characteristics of the image block and to the accuracy of the GPS measurements.
As far as the geometry is concerned, wide places and narrow streets present different obstacles: in wide spaces (such as squares), a large part of the equi-rectangular image shows the sky and, if not masked, many tie points could be matched on it, making the orientation solution unstable. In addition, the objects far from the shooting points have low resolution, leading to noisy DSM reconstruction. In this context, converting equi-rectangular images into pin-hole ones, framing only a restricted part of the scene, improves DSM reconstruction.
In narrow streets, camera centres are generally aligned, introducing possible systematic rotations if only GPS camera station positions are used as ground control. Again, if the base-length of acquisition is large (ca. 10 m), consecutive panoramas are characterized by remarkable depth changes and perspective effects which make it difficult to match corresponding features. Using different datasets together (such as images acquired in different epochs) improves the solution: having more images, more favourable base-lengths can often be found.
The quality of the GPS observations plays a very important role in the accuracy of the final 3D models. No information is provided by Google about precision and accuracy, but in two of the three cases analysed the GPS data have proven to be quite inaccurate (errors up to a few metres). In the Palazzo Costabili case study, instead, the residuals on the camera centre GPS coordinates are ca. 25 cm (equi-rectangular) and ca. 6 cm (pin-hole). In any case, the use of GPS observations as ground control is useful to improve the convergence of the orientation solution, providing initial camera centre positions.
The use of Google Street View images has proven to be generally suitable for 3D reconstruction if highly accurate models are not required. To improve accuracy and verify the quality of the obtained solution, the use of some GCPs and CPs is desirable.
## References
* Abate et al. (2018) Abate, D., Toschi, I., Sturdy-Colls, C., Remondino, F., 2018. Panoramic images, 2d feature-based and change detection methods for the documentation of contaminated crime scenes. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2, 1-8.
* Agarwal et al. (2015) Agarwal, P., Burgard, W., Spinello, L., 2015. Metric Localization using Google Street View, IEEE Conference on Computer Vision and Pattern Recognition.
* Anguelov et al. (2010) Anguelov, D., Dulong, C., Filip, D., Fueh, C., Lafon, S., Lyon, R., Ogale, A., Vincent, L., Weaver, J., 2010. Google Street View: Capturing the World at Street Level, _Computer_, vol. 42.
* Barazzetti et al. (2018) Barazzetti, L., Previtali, M., Roncoroni, F., 2018. Can we use low-cost 360 degree cameras to create accurate 3D models?, _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2, 69-75.
* Barazzetti et al. (2014) Barazzetti, L., Previtali, M., & Scaioni, M., 2014. Simultaneous registration of gnomonic projections and central perspectives. _The Photogrammetric Record_, 29(147), 278-296.
* Bruno and Roncella (2018) Bruno, N. and Roncella, R., 2018. A restoration oriented HBIM system for cultural heritage documentation: the case study of Parma cathedral. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2, 171-178.
* Cavallo (2015) Cavallo, M., 2015. 3D City Reconstruction From Google Street View, Comput. Graph. J
* Condorelli and Rinaudo (2018) Condorelli, F. and Rinaudo, F., 2018. Cultural heritage reconstruction from historical photographs and videos, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2, 259-265.
* Soon in 3D?. Educating Silicon, Retrieved January 3, 2012.
* Fangi (2007) Fangi, G., 2007. The multi-image spherical panoramas as a tool for architectural survey. In: The Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 36(5/C53), pp. 311-316.
* Fangi (2009) Fangi, G., 2009. Further developments of the spherical photogrammetry for cultural heritage. In: XXII International Committee for Cultural Heritage (CIPA), pp. 11-15.
* Frahm et al. (2013) Frahm, J.M., Heinly, J., Zheng, E., Dunn, E., Fite-Georgel, P. & Pollefeys, M., 2013. Geo-registered 3D models from crowdsourced image collections, _Geo-spatial Information Science_, 16:1, 55-60.
* Micusik and Kosecka (2009) Micusik, B. and Kosecka, J., 2009. Piecewise planar city 3d modeling from street view panoramic sequences. In CVPR09, 2009.
* Somogyi et al. (2016) Somogyi, A., Barsi, A., Molnar, B., Lovas, T., 2016. Crowdsourcing based 3D modeling, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B5, 587-590.
* Torii et al. (2009) Torii, A., Havlena, M., Pajdla, T., 2009. From Google Street View to 3D City models. IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops.
* Wahbeh et al. (2016) Wahbeh, W., Nebiker, S., Fangi, G., 2016. Combining public domain and professional panoramic imagery for the accurate and dense 3D reconstruction of the destroyed Bel temple in Palmyra. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, III-5, 81-88.

Google Street View is a technology implemented in several Google services/applications (e.g. Google Maps, Google Earth) which provides the user, interested in viewing a particular location on the map, with panoramic images (represented in equi-rectangular projection) at street level. Generally, consecutive panoramas are acquired with an average distance of 5-10 m and can be compared to a traditional photogrammetric strip and, thus, processed to reconstruct portions of a city at nearly zero cost. Most of the photogrammetric software packages available today implement spherical camera models and can directly process images in equi-rectangular projection. Although many authors provided in the past relevant works that involved the use of Google Street View imagery, mainly for 3D city model reconstruction, very few references can be found about the actual accuracy that can be obtained with such data. The goal of the present work is to present preliminary tests (at the time of writing just three case studies have been analysed) about the accuracy and reliability of the 3D models obtained from Google Street View panoramas.
Spherical Photogrammetry, Equi-rectangular Panoramas, Accuracy, 3D reconstruction, Google Street View | Write a summary of the passage below. | 245 |
# Satellite Imagery and AI: A New Era in Ocean Conservation, from Research to Deployment and Impact
Patrick Beukema
AI2
[email protected]
Favyen Bastani
AI2
[email protected]
Piper Wolters
AI2
[email protected]
Henry Herzog
AI2
[email protected]
Joe Ferdinando
AI2
[email protected]
## 1 Introduction
Unprecedented environmental catastrophes compounded by ruthlessly efficient fishing are pushing our oceans to the brink. Entire species have gone missing seemingly overnight. Last year, 10 billion snow crabs vanished from the Bering Sea prompting the fishery to close for the first time in its history [1]. Worldwide, it is estimated that 34% of fisheries are unsustainably harvested [2], a concerning trend that continues to escalate.
Remote satellite data coupled with artificial intelligence provide a means to monitor and deter unregulated and illegal fishing, one of the biggest threats to marine ecosystems. Although no single satellite can provide adequate coverage of the entire planet, employing many satellites with a variety of passive and active sensors enhances the likelihood of identifying destructive behavior as it occurs. Large, publicly available image data from a diverse constellation of satellites enables real time monitoring of the entirety of the world's oceans.
This paper provides an overview of three novel computer vision models designed for near real time vessel detection. Each of these models has been deployed in Skylight [3], a maritime intelligence platform, supporting international conservation efforts through real-time monitoring. Skylight is provided for free to users worldwide, spanning 308 organizations and over 60 countries.
## 2 Building computer vision models for maritime intelligence
Achieving high performance is critical for our users, who cannot afford to expend limited resources, such as fuel, on pursuing non-existent vessels. However, while performance is paramount, there are other important considerations that anchor research and development, including minimizing latency and ensuring adequate interpretability. It is essential to report a vessel's presence as quickly as possible. Although the overall latency (from the vessel to the satellite, then to the model, the API, and finally the user) is dominated by the downlink latency (see table 1 "Latency"), computational efficiency is important. This efficiency facilitates high throughput iteration and regular upgrading. In addition, the model outputs should be interpretable. If the model commits egregious errors or its reasoning is opaque, our users cannot (and should not) trust its outputs. For this reason, the platform outputs a simple crop centered on each vessel detection (Fig. 2.2 D-F) to allow users to visually inspect every detection. We aim for transparency and share documentation about the model creation and ML strategy to help establish confidence in the machine intelligence.
In the following sections we provide a brief description of the unique characteristics and modeling strategy for each satellite. Table 1 provides a high level overview of each vessel detection service. The code and model architectures alongside complete processing pipelines and additional details have been open sourced on GitHub [5; 6].
### Vessel detection in VIIRS imagery
The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on board the Suomi-NPP and NOAA-20 satellites collect visible and infrared images during both the day and night [7]. While not originally intended as a real time vessel monitoring data source, the low latency (\\(\\sim 2\\) hrs), global coverage, satellite redundancy, and unique signal characteristics make the VIIRS sensor a useful tool in the fight against illegal fishing. However, the low spatial resolution (750m) precludes distinguishing vessels from non vessels prima facie (see example detection in Fig. 2.2D). Therefore, care must be taken to achieve high precision.
The modeling strategy adopted a three-stage approach. The first stage consisted of a classical computer vision model, trained without supervision, to extract all possible sources of light. This was achieved with a simple 2D kernel. In the second stage, all known non-vessel light sources (lightning, gas flares, moonlit clouds, the northern and southern lights, and ionospheric particles from within the South Atlantic Anomaly) are removed through a series of postprocessing steps [8]. These non-vessel light sources often exhibit stereotyped distributional patterns (unlike vessels) that are amenable to rules-based logic. Additionally, we implemented statistical tests to identify unusually geographically distributed vessels coincident with scan lines, and suppressed false positives at the frame's extremities due to the "noise smile" [9] to control the false positive rate. The final stage involved filtering all positive detections through a regularly updated 2D CNN. This CNN was trained on human annotated
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline & Provider & PX (m) & Signal & Revisit rate & Latency & Vessel Count (10/2023) \\\\ \\hline VIIRS & NASA & 750 & watts & 2x/night & 2.5 hrs & 145,063 \\\\ Sentinel-1 & ESA & 10 & radar & 14 days & 5 hrs & 182,234 \\\\ Sentinel-2 & ESA & 10 & optical & 5 days & 5 hrs & 430,467 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Overview of each satellite detection service
Figure 1: Data-flow depiction of a real-time streaming computer vision service for vessel detection in satellite imagery. An orbiting satellite images a vessel. The image is downlinked to a ground station. That data is copied to Skylight owned servers and processed by a computer vision model [4]. The resulting vessel detection is reported to our users through a GUI and available via an API.
image labels (correct/incorrect) with four channels (nanowatts, land water masks [10], moonlight, and clouds [11; 12]). This model was specifically designed to run in resource-constrained environments, requiring only modest hardware (4 GB RAM, no GPU).
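To make the first stage concrete, below is a minimal sketch of unsupervised point-source extraction with a simple 2D kernel; the kernel shape, window size, and threshold value are illustrative assumptions, and the open-sourced repository [5] remains the authoritative implementation.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def extract_light_sources(radiance, threshold=5.0):
    """Stage 1: flag pixels that stand out from their local background.

    radiance: 2D array of Day/Night Band values (nanowatts).
    Returns an (N, 2) array of (row, col) candidate detections.
    """
    # Center-minus-surround kernel: compares each pixel to the mean of its
    # 5x5 neighborhood, which suppresses the smooth background glow.
    kernel = -np.ones((5, 5)) / 24.0
    kernel[2, 2] = 1.0
    contrast = convolve(radiance.astype(float), kernel, mode="reflect")

    # Keep only local maxima whose contrast exceeds the detection threshold.
    is_peak = maximum_filter(contrast, size=5) == contrast
    return np.argwhere(is_peak & (contrast > threshold))
```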
End-to-end deep learning based approaches were also evaluated. However, given the simplicity and limited spatial extent of the objects (1-2 pixels, as shown in Fig. 2D), and their sparse distribution, end-to-end deep learning models required significantly more computational power to achieve performance comparable to our hybrid design. Our hybrid approach is designed to be highly efficient, which allows for regular and economical retraining using new labeled data. Such efficiency is particularly beneficial for machine learning-specific continuous integration and continuous delivery (CI/CD) pipelines, including extensive model-specific integration testing. Thorough testing (especially within the CI/CD framework) is beneficial for preventing regressions during phases of fast-paced development, and is typically prohibitively expensive with conventional large-scale DNNs.
### Vessel detection in S1 imagery
Sentinel-1 (S1) is a satellite constellation from the ESA providing 10 m/pixel synthetic aperture radar (SAR) imagery, which measures properties of energy reflections from the planet surface [13; 14]. It captures Earth's land surface and coastal waters roughly every two weeks. The visual features are more complex than VIIRS, requiring detailed visual discrimination to distinguish vessels from islands and fixed marine infrastructure, and we found that deep learning methods were required. We created a dataset of 55,499 vessels (point labels) annotated by maritime experts.
We developed a detection model consisting of a standard Faster-RCNN [15] head, with a customized backbone. The backbone consists of a small 13-layer fully-convolutional encoder, which outputs feature maps at four resolutions that are processed through a feature pyramid network (FPN) [16]. We adapt the backbone to input not only the current target S1 image (with two bands, VV and VH polarizations) in which vessels should be detected, but also one or more aligned historical images of the same region at different times. These historical images enable the model to learn that marine objects consistently present in the same location, such as fixed platforms and islands, are unlikely to represent transient objects like vessels. We process the images independently through the backbone, and at each resolution we concatenate the features of the target image with the pooled features of the historical images and pass the result to the FPN. Note that an early version of this model architecture scored fourth place in the xView3 competition [17]. The training data used here does not overlap with the xView3 data and, in contrast to xView3, is 100% human annotated (rather than machine annotated).
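The fragment below is a minimal PyTorch sketch of that backbone fusion logic; the module name, the use of max-pooling over the historical features, and the tensor shapes are illustrative assumptions, and the released repository [6] is the authoritative implementation.

```python
import torch
import torch.nn as nn

class HistoricalFusionBackbone(nn.Module):
    """Runs a shared encoder over the target S1 image and its historical images,
    then concatenates target and time-pooled historical features per scale."""

    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # maps (B, 2, H, W) -> list of 4 feature maps

    def forward(self, target, history):
        # target:  (B, 2, H, W)     VV and VH polarizations of the current scene
        # history: (B, N, 2, H, W)  N aligned historical images of the same region
        b, n = history.shape[:2]
        target_feats = self.encoder(target)
        history_feats = self.encoder(history.flatten(0, 1))

        fused = []
        for tf, hf in zip(target_feats, history_feats):
            hf = hf.view(b, n, *hf.shape[1:]).max(dim=1).values  # pool over time
            fused.append(torch.cat([tf, hf], dim=1))
        return fused  # handed to the FPN and Faster-RCNN head downstream
```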
### Vessel detection in S2 imagery
Sentinel-2 (S2) is another ESA satellite constellation, providing optical imagery with four bands at 10 m/pixel, six bands at 20 m/pixel, and three bands at 60 m/pixel [18]. It captures Earth's land surface
Figure 2: Example satellite imagery (top row) and sample detections (bottom row) from a VIIRS image (A, D) near the Ecuadorian coast, an S1 image (B, E) from the North Sea and an S2 image (C, F) from the Maldives. Scale bars are approximate. Confidence scores \\(\\geq\\) 0.95.
and coastal waters roughly every 5 days. The same maritime experts annotated 43,102 vessels (point labels) in S2 imagery.
The various S2 bands provide rich information about the physical objects present in a scene. For example, RGB bands already enable distinguishing most vessels from other marine objects, but additional bands can be leveraged to further improve accuracy due to the different reflectance signatures of vessels and other objects. Thus, we developed a detection model that, like our S1 model, uses a Faster-RCNN detection head, but we couple it with a much larger Swin Transformer [19] backbone that has sufficient parameters to perform complex analysis of the S2 bands.
We found that pre-training the backbone on SatlasPretrain [20], a large-scale remote sensing dataset, further improved performance. Unlike with S1, we did not observe a performance increase from inputting historical images. We speculate that this is because the visual signatures of vessels in S2 optical images are sufficiently distinct from stationary objects (e.g., platforms or wind turbines), so historical images are not needed.
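As an illustration of how the additional bands can be assembled into a model input, the sketch below stacks a plausible subset of S2 bands; the exact band subset and normalization used by the deployed model are assumptions here, and the released repository [6] documents the actual configuration.

```python
import numpy as np

# A plausible subset of Sentinel-2 L1C bands: the four native 10 m bands plus
# two 20 m short-wave infrared bands resampled to 10 m. Illustrative only.
BANDS = ["B02", "B03", "B04", "B08", "B11", "B12"]

def stack_s2_bands(band_arrays, scale=10000.0):
    """Stack selected S2 bands into a (C, H, W) float32 tensor in [0, 1].

    band_arrays: dict mapping band name to a 2D array of L1C digital numbers,
    already resampled to a common 10 m grid.
    """
    channels = [np.clip(band_arrays[b].astype(np.float32) / scale, 0.0, 1.0)
                for b in BANDS]
    return np.stack(channels, axis=0)
```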
### Model evaluation, validation, and deployment
While there is no single metric that we use to determine when a model is ready to be released to our users, each model is continuously evaluated against a variety of criteria as it passes from research, to staging, to deployment. During the research phase, the primary method of evaluation consists of offline F1 scores against large and randomly held out validation datasets (where possible). For S1, we also compared the model against a previous version that had been submitted to the xView3 competition (the new model improved from 70.1% to 82.7% F1). The S2 model was not part of an external competition, but exhibited a similar F1 score of 0.81. There was no large annotated dataset for the VIIRS model; however, we did compare its performance against the industry standard model on previously human validated frames [9]. Because offline evaluation of the VIIRS model was limited, we supplemented it with extensive unit and integration tests (CI/CD), covering a variety of known failure modes (aurora, moonlit clouds, imaging artifacts, etc.).
Once all failure modes have been addressed, we transition models into an online staging environment that replicates production dataflows (i.e. streaming and real-time inference). Importantly, all of the inferenced data in the staging environment is new (out-of-sample) data. Models are evaluated in this staging environment for extended periods (months) by subject matter experts versed in maritime intelligence and regularly updated in response to that feedback.
Once confident that a new model is performant, we deploy models from staging into the production environment. In production, the primary method of evaluation consists of user feedback (internal and external) which gives us prompt notice of performance degradation or model drift.
After deployment, models are regularly upgraded (monthly release cadence) in response to feedback and/or new information sources. It is worth noting that while the data sources are (largely) stable, ocean activity is highly dynamic. For example, marine infrastructure (wind turbines, oil platforms, etc.) is constantly evolving. These dynamics must be tracked and addressed through regular maintenance to sustain high performance. For example, to improve the precision of each of the above models, we recently added an additional postprocessing step that geofences false positive detections coincident with recently detected [21] marine infrastructure produced by Bastani et al. [20].
### Satellite and GPS correlations
Every vessel detection is augmented by the addition of GPS information, when available, provided by the Automatic Identification System (AIS). Most vessels broadcast their locations, but some do not. Notably, it is possible to obfuscate one's position by suppressing AIS, but quite challenging to evade detection from a satellite. Vessels that neglect to broadcast their locations, but are still visible under radar or satellite imagery, are especially relevant to analysts. In a typical satellite image, there are many detected vessels, and many possible candidate matches from AIS signals in the vicinity. Therefore, it is necessary to correlate the signals from these two information sources. Fig. 3 shows a depiction of the correlation process. Matching geolocations from an image against those determined by GPS can be formulated as a minimum-weight matching problem in a bipartite graph. We apply the Jonker-Volgenant algorithm [22] as implemented by Pedregosa et al. [23] to assign matches.
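A minimal sketch of this correlation step is shown below; the use of SciPy's `linear_sum_assignment` (a modified Jonker-Volgenant solver) and the distance cutoff are illustrative assumptions rather than the exact production configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def correlate(detections, ais_positions, max_km=2.0):
    """Match image detections to AIS broadcasts by minimum total distance.

    detections, ais_positions: lists of (lat, lon) tuples.
    Returns (detection_index, ais_index) pairs; unmatched detections are the
    candidate "dark" vessels of interest to analysts.
    """
    cost = np.array([[haversine_km(d[0], d[1], a[0], a[1]) for a in ais_positions]
                     for d in detections])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_km]
```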
## 3 Best practices for shipping computer vision for maritime intelligence
Deploying computer vision models into a real time streaming context has offered many valuable lessons, particularly around closing the gap between offline batch and online streaming performance.
* Near real-time satellite data may exhibit unique features or artifacts that will be missed if training on historical imagery. If streaming inference is the goal, ensure that the model is trained against the same data that will ultimately be referenced in production.
* The performance of static models, i.e. those with frozen weights, is at risk of regression due to model or data drift. Ensure that model iteration is as simple and as automated as possible in order to facilitate seamless retraining from feedback.
* Allow ample time to empirically assess the model performance in real world conditions and at appropriate temporal and spatial scales to identify and correct problems as they occur. For example, VIIRS is highly sensitive to the lunar cycle.
* Users cannot rely on machine intelligence unless it is consistently available and reliable. Expect to dedicate significant engineering resources beyond ML to ensure that the model is always online and maintained.
* Employ the best practices from software engineering during research and development. In particular, continuous integration and continuous deployment, unit and integration tests, code quality enforcement, and documentation are all essential.
* Satellite imagery exhibits massive variation at global scale and therefore it can be challenging to anticipate how performance will degrade on out-of-sample data. For example, after deployment we discovered hitherto unknown sources of false positives due to the Aurora Borealis/Australis for VIIRS, newly constructed wind turbines in the North Sea for the S1 model, and Sargassum patches [24] in the Caribbean for the S2 model.
## 4 Equity considerations
Real-time vessel detections from each of these models are provided through Skylight, a free maritime intelligence platform. The purpose of these models specifically, and Skylight more broadly, is to help nations protect their marine resources and promote ocean health for future generations. While we believe that the benefits of this technology (both within our platform and as open source repositories) outweigh potential risks, there is a possibility these models could be used for ignoble ends. We do not have a straightforward response that obviates these concerns and we do not take the decision to open source this technology lightly. We chose to open source these models because we believe that both the machine learning research community and the conservation community using the vessel detections should have complete transparency and full access into the underlying model architectures and logic (both for inference and training). In addition, these models are made possible by the existing open geospatial community, especially NASA and the ESA, which provide both historical data used for training and near real-time data for inference and monitoring.
Figure 3: A. Depiction of the correlation process. We compute the haversine distance between vessels in imagery and as located by AIS, then minimize the distance over the pairs. B. Panel from the Skylight UI showing radar detections. Correlated = black, uncorrelated = red.
## Acknowledgments and Disclosure of Funding
### Allen Institute for Artificial Intelligence (AI2)
All authors are employees at AI2. Skylight is a product of AI2.
### Computer Vision Annotation team
Ebenezer Aidoo (Ghana Navy) and James Curtis Carter (Ghana Navy)
### Defense Innovation Unit
The Defense Innovation Unit contributed funding which supported creation of the VIIRS and S2 vessel detection models and supported the improvements of the S1 model. The S1 model described in this paper was an extension of a previous model created by an AI2 team that was submitted to the DIU xView3 competition.
### NASA & LANCE (VIIRS)
We acknowledge the use of data and imagery from NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) system ([https://earthdata.nasa.gov/lance](https://earthdata.nasa.gov/lance)), part of NASA's Earth Observing System Data and Information System (EOSDIS). The data products that were used for VIIRS model are described here: [https://lance.modaps.eosdis.nasa.gov/viirs/](https://lance.modaps.eosdis.nasa.gov/viirs/). The following products are used. For SuomiNPP, VNP02DNB_NRT (light), VNP03DNB_NRT (supporting data), VNP02MOD_NRT (gas flares). For NOAA20, VJ102DNB_NRT (light), VJ103DNB_NRT (supporting data), VJ102MOD_NRT (gas flares). In addition, we use the cloud masks created by the University of Wisconsin SSEC ([https://www.earthdata.nasa.gov/learn/find-data/near-real-time/viirs-a](https://www.earthdata.nasa.gov/learn/find-data/near-real-time/viirs-a)) [12, 11]. Additional details and code to use these data can be found on the VIIRS GitHub repository [5].
### ESA (European Space Agency) and the Copernicus Data Space Ecosystem (S1 and S2)
Data for the S1 and S2 constellations are available at [https://dataspace.copernicus.eu/](https://dataspace.copernicus.eu/). Additional details of the specific products used for each satellite model are provided below.
* S1: Level-1 GRD (Ground Range Detected) data were used. Models were trained with both VV and VH polarization modes.
* S2: Level-1C data (orthorectified top-of-atmosphere reflectance, with sub-pixel multispectral registration). In addition to the L1C data, we also apply cloud detection to suppress false positives due to clouds. To do so we use the s2 cloud detector (s2cloudless) available from PyPI [25]. More details on this algorithm are available in [26].
Note that we previously used Copernicus Open Access Hub which is deprecated as of October 2023, and replaced by the Copernicus Data Space Ecosystem.
## References
* [1] Alaska Department of Fish and Game. 2022/23 bering sea snow crab season closed, 2022. URL [https://www.adfg.alaska.gov/static/applications/dcfnewrelease/1441272349.pdf](https://www.adfg.alaska.gov/static/applications/dcfnewrelease/1441272349.pdf). Accessed on December 7, 2023.
* [2] FAO. _The State of World Fisheries and Aquaculture 2020: Sustainability in Action_. Rome, 2020.
* [3] Skylight, 2023. URL [https://www.skylight.global/](https://www.skylight.global/). Accessed on: September 1 2023.
* [4] Lutz Roeder. Netron, visualizer for neural network models, 2023. URL [https://github.com/lutzroeder/netron](https://github.com/lutzroeder/netron).
* [5] Allen Institute for AI. Vessel detection with viirs. [https://github.com/allenai/vessel-detection-viirs](https://github.com/allenai/vessel-detection-viirs), 2023. Accessed on: September 1 2023.
* [6] Allen Institute for AI. Vessel detection with sentinels. [https://github.com/allenai/vessel-detection-sentinels](https://github.com/allenai/vessel-detection-sentinels).
* 5879, 2017. URL [https://api.semanticscholar.org/CorpusID:134847136](https://api.semanticscholar.org/CorpusID:134847136).
* [8] Andreas Nilsson, Neil Suttie, Joseph S. Stoner, and Raimund Muscheler. Recurrent ancient geomagnetic field anomalies shed light on future evolution of the south atlantic anomaly. _Proceedings of the National Academy of Sciences of the United States of America_, 119, 2022. URL [https://api.semanticscholar.org/CorpusID:249434557](https://api.semanticscholar.org/CorpusID:249434557).
* [9] Christopher D. Elvidge, Mikhail N. Zhizhin, Kimberly E. Baugh, and Feng-Chi Hsu. Automatic boat identification system for viirs low light imaging data. _Remote. Sens._, 7:3020-3036, 2015. URL [https://api.semanticscholar.org/CorpusID:10198494](https://api.semanticscholar.org/CorpusID:10198494).
* [10] M. L. Carroll, C. M. DiMiceli, R. A. Sohlberg J. R. G. Townshend, S. Devadiga A. I. Elders, A. M. Sayer, and R. C. Levy. Development of an operational land water mask for modis collection 6, and influence on downstream data products. _International Journal of Digital Earth_, 10(2):207-218, 2017. doi: 10.1080/17538947.2016.1232756. URL [https://doi.org/10.1080/17538947.2016.1232756](https://doi.org/10.1080/17538947.2016.1232756).
* [11] S. Ackerman et al. Viirs atmosphere l2 cloud mask product. NASA MODIS Adaptive Processing System, Goddard Space Flight Center, USA, 2017.
* [12] S. Ackerman et al. Viirs atmosphere l2 cloud mask product. NASA MODIS Adaptive Processing System, Goddard Space Flight Center, USA, 2017.
* [13] Ramon Torres, Paul Snoeij, Dirk Geudtner, David Bibby, Malcolm Davidson, Evert Attema, Pierre Potin, BjOm Rommen, Nicolas Floury, Mike Brown, et al. Gmes sentinel-1 mission. _Remote sensing of environment_, 120:9-24, 2012.
* [14] Karen Fletcher. _SENTINEL 1: ESA's Radar Observatory Mission for GMES Operational Services_. European Space Agency, 2012.
* [15] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-time Object Detection with Region Proposal Networks. _Advances in Neural Information Processing Systems_, 28, 2015.
* [16] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature Pyramid Networks for Object Detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 2117-2125, 2017.
* [17] Fernando Paolo, Tsu-ting Tim Lin, Ritwik Gupta, Bryce Goodman, Nirav Patel, Daniel Kuster, David Kroodsma, and Jared Dunnmon. xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery. _Advances in Neural Information Processing Systems_, 35:37604-37616, 2022.
* [18] European Space Agency. Sentinel-2: Esa's optical high-resolution mission for gmes operational services. ESA Publications, 2012. URL [https://www.esa.int/About_Us/ESA_Publications/ESA_SP-1322_2_Sentinel_2](https://www.esa.int/About_Us/ESA_Publications/ESA_SP-1322_2_Sentinel_2). ESA SP-1322/2.
* [19] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10012-10022, 2021.
* [20] Favyen Bastani, Piper Wolters, Ritwik Gupta, Joe Ferdinando, and Ani Kembhavi. SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023.
* [21] Allen Institute for AI. Geospatial data products. [https://github.com/allenai/satlas/blob/main/GeospatialDataProducts.md](https://github.com/allenai/satlas/blob/main/GeospatialDataProducts.md), 2023.
* [22] Roy Jonker and Ton Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. _Computing_, 38:325-340, 1987. URL [https://api.semanticscholar.org/CorpusID:7806079](https://api.semanticscholar.org/CorpusID:7806079).
* [23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12:2825-2830, 2011.
* [24] Kristie S. T. Alleyne, Donald Harry Johnson, Francis C. Neat, Hazel A. Oxenford, and Henri Valles. Seasonal variation in morphotype composition of pelagic sargassum influx events is linked to oceanic origin. _Scientific Reports_, 13, 2023. URL [https://api.semanticscholar.org/CorpusID:257367760](https://api.semanticscholar.org/CorpusID:257367760).
* [25] Sentinel Hub. sentinel2-cloud-detector. [https://github.com/sentinel-hub/sentinel2-cloud-detector](https://github.com/sentinel-hub/sentinel2-cloud-detector), 2023. Accessed: 2023-11-18.
* [26] Sergii Skakun, Jan Wevers, Carsten Brockmann, Georgia Doxani, Matej Aleksandrov, Matej Baic, David Frantz, Ferran Gascon, Luis Gomez-Chova, Olivier Hagolle, Dan Lopez-Puigolders, Jerome Louis, Matic Lubei, Gonzalo Mateo-Garcia, Julien Osman, Devis Persestutin, Bringfried Pflug, Jernej Puc, Rudolf Richter, Jean-Claude Roger, Pat Scaramuzza, Eric Vermote, Nejc Vesel, Anze Zupanc, and Lojze Zust. Cloud mask intercomparison exercise (cmix): An evaluation of cloud masking algorithms for Landsat 8 and sentinel-2. _Remote Sensing of Environment_, 274:112990, 2022. ISSN 0034-4257. doi: [https://doi.org/10.1016/j.rse.2022.112990](https://doi.org/10.1016/j.rse.2022.112990). URL [https://www.sciencedirect.com/science/article/pii/S0034425722001043](https://www.sciencedirect.com/science/article/pii/S0034425722001043). | Illegal, unreported, and unregulated (IUU) fishing poses a global threat to ocean habitats. Publicly available satellite data offered by NASA and the European Space Agency (ESA) provide an opportunity to actively monitor this activity. Effectively leveraging satellite data for maritime conservation requires highly reliable machine learning models operating globally with minimal latency. This paper introduces three specialized computer vision models designed for synthetic aperture radar (Sentinel-1), optical imagery (Sentinel-2), and nighttime lights (Suomi-NPP/NOAA-20). It also presents best practices for developing and delivering real-time computer vision services for conservation. These models have been deployed in Skylight, a real time maritime monitoring platform, which is provided at no cost to users worldwide. | Give a concise overview of the text below. | 151 |
# Performance Analysis of BLE 5.1 New Feature Angle of Arrival for Relative Positioning
implementation details and the testing conditions. Section 5 presents the experiment results and inferences that can be drawn. It also gives further analysis and extensions. Finally, section 6 concludes the paper.
## 2 Related work
Estimating the angle of an incoming signal is, in essence, the problem of determining a wireless device's relative position. The problem of Angle-of-Arrival estimation for a transmitted signal has been extensively researched and partially tackled. To measure the phase delay between time-shifted replicas of the same time-varying signal received by adjacent antennas, an antenna array at the receiver is unavoidable. Multiple signal classification (MUSIC) (Schmidt, 1986), which yields high angular resolution, is the most common method for determining the AoA from the measured phase delay. Determining the angular position of a transmitter in commodity wireless systems generally relies on the signal intensity of received packets (RSSI). The accuracy of localization frameworks based on iBeacon technology has been examined in (Li et al., 2016) and (Lin et al., 2015) using the Bluetooth Low Energy (BLE) technique. The first reduced the average localization error to 4 m, which is only possible when 36 beacons are deployed. The second achieved localization errors of up to 5 m when restricted to two adjacent subareas, out of a testing area composed of 12 subareas. The experimental results in (Ji et al., 2015) present a detailed discussion of the relationship between the number of BLE beacons and the accuracy of the localization service. The work of De Blasio et al. was carried out in a 168 m\({}^{2}\) testbed encompassing 12 devices using the BLE 5.0 standard; the accuracy is reported to be less than 2.5 m (De Blasio et al., 2018). Existing positioning systems based on these technologies have a major flaw in that the Bluetooth channel assignment has to be specified precisely, which is extremely difficult to obtain. Furthermore, different BLE channels have distinct properties and can exhibit different characteristics, which leads to a wide range of positioning accuracies when relying on RSSI (Powar et al., 2017). To resolve these issues, MUSIC has been used to localize BLE transmitters based on AoA measurements conducted by a set of nodes (Monfared et al., 2018).
The new direction-finding feature introduced in the BLE 5.1 standard is a significant development that reshapes the problem of indoor localization. In particular, the MUSIC mechanism requires multiple coherent Radio Frequency (RF) chains, whereas the BLE approach uses a single RF channel together with a multi-element antenna array and an RF switch that decides which element is selected among the available options (Bluetooth SIG, 2019b). Under the assumption that the transmitted sequence is known, the simulations conducted in (Zhu, Bocus, 2018) assessed the achievable accuracy in this setting. Nevertheless, thorough experimental results and the related analysis have yet to appear, since physical implementations of this scheme were not practically feasible at the time. To the best of our knowledge, this work is the first to provide a comprehensive empirical analysis of the accuracy of the new BLE 5.1 AoA feature under several different testing conditions.
## 3 Working principle
Bluetooth is a wireless communication standard, particularly designed for wireless personal area networks (WPANs) that require low power consumption and low data rates at low cost. In 2010, the Bluetooth SIG merged Bluetooth and BLE into the Bluetooth Core Specification, version 4.0. In the physical layer (PHY), Bluetooth and BLE both operate in the same frequency band, the 2.4 GHz industrial, scientific, and medical (ISM) band. The medium access method of Bluetooth and BLE adopts a hybrid time-frequency division multiplexing scheme. The allocated 80 MHz bandwidth is divided into 40 orthogonal RF channels with central frequencies equally spaced by 2 MHz.
There are two different types of BLE channels: advertising channels and data channels. The three advertising channels occupy channels 37, 38 and 39; they are used for new device discovery, connection configuration and signal broadcasting. The remaining 37 channels are data channels used to exchange data. After a connection is established between the transmitter and the receiver, the adaptive frequency hopping (AFH) scheme is used to reduce the negative impact of signal interference via random selection of transmission channels. The channel access policy can be dynamically modified according to the actual transmission conditions: the poorer the connection quality observed on a channel, the lower the probability that this channel is accessed. Moreover, consecutive communication events between adjacent nodes are separated by a fixed time interval.
With respect to signal transmission, BLE utilises Gaussian Frequency Shift Keying (GFSK) binary modulation with two possible symbol rates: 1 Mbps and 2 Mbps. In general, four different transmission modes are available for a BLE connection: two uncoded modes with symbol rates of 1 Mbps and 2 Mbps respectively, and two coded modes with data rates of 125 kbps and 500 kbps. However, only the uncoded transmission modes in the physical layer support the new BLE direction-finding mechanism, AoA. Hence, only the mandatory PHY mode, LE 1M (i.e., the typical configuration for the BLE uncoded radio physical layer) with a data rate of 1 Mbps, is considered in the rest of this paper.
### Angle of arrival mechanism specified by Bluetooth 5.1
The Bluetooth user device makes its location available to the receiver side by sending direction-finding enabled packets from the transmitter node at low power consumption. The transmitter device employs only a single antenna, while the receiver device uses multiple antennas grouped as an antenna array, along with an RF switch to switch from one antenna to another. Both in-phase (I) and quadrature (Q) samples of the received signal are captured by the receiving device in order to calculate the phase difference between the detectable time-delayed replicas of the same radio signal. The angular position can then be determined from the computed phase delay. The AoA mechanism specified in the Bluetooth 5.1 standard is depicted in Figure 1.
Figure 1: Bluetooth 5.1 AoA mechanism.
In detail, the phase difference \(\varphi\) of the transmitted signal, measured at the receiving device between two adjacent antennas, can be obtained using the formula:

\[\varphi=\frac{2\pi d\cos\theta}{\lambda}, \tag{1}\]

where \(\lambda\) is the signal wavelength, \(d\) is the distance between the antennas, and \(\theta\) is the angle of arrival.
The value of \\(\\theta\\) can be expressed in an alternative way using the following equation:
\[\theta=\cos^{-1}\left(\frac{\varphi\lambda}{2\pi d}\right). \tag{2}\]
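As a quick numerical illustration of Equations 1 and 2 (assuming, for example, a half-wavelength antenna spacing and a representative channel frequency; these values are not those of the actual test hardware):

```python
import numpy as np

c = 3.0e8              # speed of light (m/s)
f = 2.44e9             # a representative 2.4 GHz ISM channel frequency (Hz)
lam = c / f            # wavelength, roughly 0.123 m
d = lam / 2.0          # assumed spacing between adjacent antennas

def aoa_from_phase(phi):
    """Invert Eq. (2): angle of arrival in degrees from a phase difference in radians."""
    arg = np.clip(phi * lam / (2.0 * np.pi * d), -1.0, 1.0)
    return np.degrees(np.arccos(arg))

# Round trip through Eq. (1) for a true arrival angle of 45 degrees.
phi = 2.0 * np.pi * d * np.cos(np.radians(45.0)) / lam
print(aoa_from_phase(phi))   # ~45.0
```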
### Packet format and antenna switching time
An additional field, called the Constant Tone Extension (CTE), enables the angular determination capability of BLE devices; the format of uncoded packets using the LE 1M transmission mode in the PHY is presented in Table 1. The CTE is a sequence of consecutive binary 1s transmitted without whitening. Because a run of identical bits introduces no modulation-induced phase shifts, these unwhitened 1-valued bits provide a section of constant tone after transmission from one side to the other. The CTE duration can vary, and normally takes around 16-160 \(\mu\)s. The number of symbols included within the CTE is configured by the application layer, and this enables an adequate collection of data packets and IQ sample sets to be received. The CTE consists of several subperiods: a guard period with no operation performed (4 \(\mu\)s), a reference period of 8 \(\mu\)s, and time slots used for data sampling and antenna switching with two possible durations, 1 \(\mu\)s and 2 \(\mu\)s. In particular, 2 \(\mu\)s slots are supported by all direction-finding enabled BLE devices, whereas support for 1 \(\mu\)s slots is not as thorough as for 2 \(\mu\)s slots.
During the reference period, the receiving BLE device collects 8 IQ samples (when sampling at 1 MS/s) using only one antenna. Each subsequent sample slot captures one IQ sample, regardless of the sample slot length used. The switching pattern can be manually configured; the simplest possible pattern makes use of two antennas and lasts the shortest possible duration (16 \(\mu\)s).
## 4 Experimental Setup
To scrutinize the fidelity of the direction-finding capability introduced in the Bluetooth 5.1 specification, we selected Bluetooth devices from Texas Instruments (TI): a transmitter board with a single antenna and a receiving board with multiple embedded antennas. TI also provides the corresponding technical documents and implementation tools. We therefore used the SimpleLink(tm) CC1352R device (Texas Instruments, 2021a) mounted on a launch board as the transmitter, and the SimpleLink(tm) Angle of Arrival BoosterPack (Texas Instruments, 2021b) with two groups of antenna arrays as the receiver. These working prototypes are shown in Figure 2 and Figure 3.
From the IQ samples collected during the CTE, the phase difference between adjacent antennas and the AoA can be estimated and calculated using Equation 1 and Equation 2, respectively.
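The sketch below illustrates the phase-difference estimation from IQ samples; it assumes the samples have already been grouped by antenna and compensated for the constant tone's own rotation between sample slots, which is a simplification of the actual processing.

```python
import numpy as np

def phase_difference(iq_antenna_a, iq_antenna_b):
    """Mean phase offset (radians) between IQ samples from two adjacent antennas.

    iq_antenna_a, iq_antenna_b: complex arrays (I + jQ) captured in alternating
    CTE sample slots, already corrected for the tone's per-slot rotation.
    """
    # The conjugate product cancels the common carrier phase and leaves only
    # the inter-antenna term; averaging before taking the angle reduces noise.
    return np.angle(np.mean(iq_antenna_a * np.conj(iq_antenna_b)))
```

The resulting value can then be converted to an angle estimate via Equation 2.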
## 5 Experimental Result & Result Analysis
As described in the Bluetooth 5.1 specification (Bluetooth SIG, 2019b), this technical development makes it possible to achieve positioning accuracy at the sub-meter or even centimeter level using the new Bluetooth / Bluetooth Low Energy 5.1 features (i.e., AoA and AoD). Therefore, we set up a series of experiments to empirically evaluate the performance, particularly the measurement accuracy, of the BLE 5.1 AoA function under different testing conditions and to explore the characteristics underlying the results of the different groups of analogous experiments. Specifically, the tests were performed in four typical scenarios: an ideal environment (i.e., an anechoic chamber with no multi-path effect), different positions within an underground mine (i.e., a metal mine in NSW), an open area and an office area, as presented in Figure 5. For each testing environment, both intermediate and final results were collected repeatedly. The intermediate result, also called 'raw data', refers to the phase difference computed from the signals received by adjacent antennas. The final result is the ultimate outcome of the AoA measurement, i.e., the estimated AoA value. To collect data more comprehensively and make the result analysis more reliable, the measurements were carried out at a set of different angles (i.e., 45\({}^{\circ}\), 90\({}^{\circ}\), 135\({}^{\circ}\)). According to the requirements of the experimental equipment, the placement of the BLE receiver at the different measurement angles is presented in Figure 6.
### Ideal environment
The plots in Figure 7 are all obtained from the tests in an ideal measurement environment, an anechoic chamber within UTS tech lab. The distance between the transmitter and receiver is 5 m. The results were collected at three different angles (i.e., 45deg, 90deg, 135deg).
Figure 5: Experimental site for different testing scenarios (from left to right, top to bottom): (a) Anechoic chamber; (b) Middle of tunnel in the underground mine; (c) Edge of tunnel in the underground mine; (d) Open site; (e) Office area.
Figure 6: Device placement at the angle of 45Β°, 90Β°, 135Β°.
Figure 7: Results of the tests in the ideal environment.
### Underground mine
The plots in Figure 8 and 9 are all obtained from the tests in an underground mine (a metal mine in Australia). The distance between the transmitter and the receiver is 5 m. The results were collected at three different angles (i.e., 45\\({}^{\\circ}\\), 90\\({}^{\\circ}\\), 135\\({}^{\\circ}\\)) and different position of tunnel (i.e., middle of tunnel and edge of tunnel).
#### 5.2.1 Middle of tunnel
Figure 8 demonstrates that the AoA measurement accuracy is affected by various obstacles in underground mines, and the fluctuation of the average error measured at an angle of 135\({}^{\circ}\) is much larger than at the other angles.
#### 5.2.2 Edge of tunnel
Figure 9 reveals that obvious obstructions (e.g., walls) that lead to signal reflection and multi-path effects noticeably degrade the localization accuracy. Since the devices are deployed at the edge of the tunnel, the overall performance degrades compared with the middle of the tunnel. The fluctuation of the average error measured at an angle of 135\({}^{\circ}\) remains worse than in the other directions.
### Open site
The following plots are all obtained from the tests in an open space, a lawn on campus. The distance between the transmitter and the receiver is 5 m. The results were collected at three different angles (i.e., 45\({}^{\circ}\), 90\({}^{\circ}\), 135\({}^{\circ}\)).
### Office area
The following plots are all obtained from the tests in the office area. The distance between the transmitter and the receiver is 5 m. The results were collected at three different angles (i.e., 45\\({}^{\\circ}\\), 90\\({}^{\\circ}\\), 135\\({}^{\\circ}\\)).
Overall, the performance in the office area is the worst among all testing environments. The fluctuation of the average error measured at an angle of 135\({}^{\circ}\) is similar to that at 90\({}^{\circ}\), while 45\({}^{\circ}\) outperforms the rest, which is quite different from the other testing environments.
Additionally, we note that a particular pattern repeatedly occurs in each result plot obtained from the office area. Therefore, a series of follow-up tests was conducted to verify the consistency of this phenomenon. We performed the measurements at additional angles, including 45\({}^{\circ}\), 90\({}^{\circ}\), 135\({}^{\circ}\), 0\({}^{\circ}\) and -45\({}^{\circ}\). The distance between the transmitter and the receiver was also varied (i.e., 5 m, 2 m, 1 m). The detailed placement of the BLE transmitter and receiver at the angles of 0\({}^{\circ}\) and -45\({}^{\circ}\) can be found in Figure 12.
#### 5.4.1 Distance of 5 m
Figure 11: Results of the tests in the office area.
Figure 8: Results of the tests at the middle of tunnel in underground mines.
Figure 12: Placement of BLE devices at the angle of 0\\({}^{\\circ}\\), -45\\({}^{\\circ}\\).
Figure 9: Results of the tests at the edge of tunnel in underground mines.
#### 5.4.2 Distance of 2 m:
#### 5.4.3 Distance of 1 m:
Figures 13-15 reveal that each error plot indeed includes a unique pattern, and each pattern shows up regularly, no matter how we changed the distance and angle during the measurements. We initially inferred that this regular pattern is related to the signal reflection caused by the office environment, which includes many obstructions. In fact, based on our further investigation of the raw data, the deterministic reason for this phenomenon is the way transmission channels are randomly selected. To be specific, the same BLE data channel has a consistent transmission characteristic that affects the propagation path of the RF signal. Consequently, every time the same sequence of BLE channels is selected, the corresponding accuracy results follow the same variation trend, and this leads to the occurrence of the 'regular pattern'.
### Root Mean Square Error (RMSE)
Lastly, we examined the Root Mean Square Error (RMSE) for each testing condition, as shown in Figure 16. Each RMSE value was computed from the average values obtained over all available datasets for that case. The plot shows that the direction-finding capability at the angles of -45\({}^{\circ}\) and 135\({}^{\circ}\) is less stable than at the other angles. The discontinuities that occur beyond the angles of 0\({}^{\circ}\) and 90\({}^{\circ}\) demonstrate that the direction-finding capability of the BLE 5.1 AoA mechanism degrades significantly as the angle between the propagation direction of the transmitted signal and the axis of the antenna array grows, particularly when it exceeds 135 degrees. This can be considered a threshold for deciding whether the accuracy of the BLE 5.1 AoA function is satisfactory.
## 6 Conclusion
In this paper, the accuracy of direction finding capability specified by BLE 5.1 standard was empirically evaluated. We can provide the following insights.
1) As expected, the angular detection result is highly sensitive to the testing environment, and achieving AoA-based positioning accuracy within a few centimetres remains difficult. Specifically, the number of large metal or concrete obstructions that may cause signal reflection and thus multi-path effects, the strength of interference sources, and the placement of the experimental equipment, all determined by the testing environment, impose strict constraints on the precision of the AoA estimation in different ways.
2) The error plots obtained from the office environment, with a few desks, walls, and Wi-Fi access points, surprisingly always show a particular regular pattern. We believe this phenomenon is due to the random selection of data channels, which leads to different levels of multi-path effects caused by the propagation and reflection of RF signals within the office area.
3) The RMSE of the AoA estimation in the different cases indicates that the accuracy of angular positioning varies significantly with the measurement angle. Within the range of 0 to 90 degrees, the performance remains relatively stable; however, the results degrade significantly if the measurement angle falls outside this range (e.g., -45\({}^{\circ}\), 135\({}^{\circ}\)).
Overall, the new BLE 5.1 AoA feature can be used effectively in open environments. In other environments with obstacles,
Figure 16: The result of RMSE for all testing environments and conditions.
Figure 13: A further average error plot including more angles at the distance of 5 m.
Figure 14: A further average error plot including more angles at the distance of 2 m.
Figure 15: A further average error plot including more angles at the distance of 1 m.
reliable results can only be guaranteed when the transmitted signal is received under line-of-sight conditions.
## References
* [1]Bluetooth SIG (2019) Press release: Bluetooth enhances support for location services with new direction finding feature.
* [2]Bluetooth SIG (2019) Bluetooth Legacy Specification. Available online: [https://www.bluetooth.com/](https://www.bluetooth.com/) specifications/archived-specifications/.
* [3]Conte, G., De M.M., Nacci, A., Rana, V., Sciuto, D., 2014. BlueSentinel: A first approach using iBeacon for an energy efficient occupancy detection system. _Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings_, 11-19. doi.org/10.1145/2676061.2674078.
* [4]Dahlgren, E., Mahmood, H., 2014. Evaluation of indoor positioning based on Bluetooth Smart technology. Master's thesis, Chalmers.
* [5]De Blasio, G., Quesada-Arencibia, A., Garcia, C.R., Rodriguez, J.C., Moreno-Diaz, R., 2018. A protocol-channel-based indoor positioning performance study for bluetooth low energy. _In IEEE Access_, vol(6), 33440-33450. doi.org/10.1109/ACCESS.2018.2837497.
* [6]Ji, M., Kim, J., Jeon, J., Cho, Y., 2015. Analysis of positioning accuracy corresponding to the number of blee beacons in indoor positioning system. _2015 17th International Conference on Advanced Communication Technology (ICACT)_, 92-95. doi.org/10.1109/ICACT.2015.7224764.
* [7]Kempke, B., Pannuto, P., Campbell, B., Dutta, P., 2016. Surepoint: Exploiting ultra wideband flooding and diversity to provide robust, scalable, high-fidelity indoor localization. _In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM (SenSys 16)_. Association for Computing Machinery, New York, NY, USA, 137-149. doi.org/10.1145/2994551.2994570.
* [8]Kempke, B., Pannuto, P., Dutta, P., 2015. Polypoint: Guiding indoor quadrotors with ultra-wideband localization. _Proceedings of the 2nd International Workshop on Hot Topics in Wireless_, 16-20. doi.org/10.1145/2799650.2799651.
* [9]Kumar, S., Gil, S., Katabi, D., Rus, D., 2014. Accurate indoor localization with zero start-up cost. _Proceedings of the 20th Annual International Conference on Mobile Computing and Networking_, 483-494. doi.org/10.1145/2639108.2639142.
* [10]Li, B., Zhao, K., Sandoval, E.B., 2020. A UWB-Based Indoor Positioning System Employing Neural Networks. _Journal of Geovisualization and Spatial Analysis_ 4.2: 1-9.
* [11]Li, B., Zhao, K., Shen, X., 2020. Dilution of Precision in Positioning Systems Using Both Angle of Arrival and Time of Arrival Measurements. _IEEE Access_, 8, 192506-192516. doi.org/10.1109/ACCESS.2020.3033281.
* [12]Li, X., Xu, D., Wang, X., Muhammad, R., 2016. Design and implementation of indoor positioning system based on ibecacon. _2016 International Conference on Audio, Language and Image Processing (ICALIP)_, 126-130. doi.org/10.1109/ICALIP.2016.7846648.
* [13]Lin, X.Y., Ho, T.W., Fang, C.C., Yen, Z.S., Yang, B.J., Lai, F., 2015. A mobile indoor positioning system based on ibecacon technology. _2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)_, 4970-4973. doi.org/10.1109/EMBC.2015.7319507.
* [14]Monfared, S., Nguyen, T.H., Petrillo, L., De Doncker, P., Horlin, F., 2018. Experimental demonstration of ble transmitter positioning based on aoa estimation. _2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)_, 856-859. doi.org/10.1109/PIMRC.2018.8580796.
* [15]Powar, J., Gao, C., Harle, R., 2017. Assessing the impact of multi-channel BLE beacons on fingerprint-based positioning. _2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN)_, 1-8. doi.org/10.1109/IPIN.2017.8115871.
* [16]Sanchez, C., Ceriani, S., Taddei, P., Wolfart, E., Sequeira, V., 2015. STEAM sensor tracking and mapping. Second Annual Microsoft Indoor Localization Competition.
* [17]Schmidt, R., 1986. Multiple emitter location and signal parameter estimation. _IEEE Transactions on Antennas and Propagation_, 34(3), 276-280. doi.org/10.1109/TAP.1986.1143830.
* [18]Texas Instruments, revised February 2021. CC1352R SimpleLink(tm) High-Performance Multi-Band Wireless MCU datasheet.
* [19]Texas Instruments, revised March 2021. CC2652R SimpleLink(tm) Multiprotocol 2.4 GHz Wireless MCU datasheet.
* [20]Vasisht, D., Kumar, S., Katabi, D., 2016. Decimeter-Level localization with a single WiFi Access Point. _In 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16)_, 165-178.
* [21]Woolley, M., 2019. Bluetooth Direction Finding: A Technical Overview. Bluetooth Resources.
* [22]Zhu, Z., Bocus, M.Z., 2018. A computationally efficient method for direction finding with known transmit sequence. _2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)_, 1-6. doi.org/10.1109/IPIN.2018.8533794.
## Acknowledgements
This paper is generated to provide background information for a research project, called \"Deep IoT: Collision Avoidance System Based on Ultra-wideband Technology\". This project received grant funding from the Australian Government.
Michael Giering, Vivek Venugopalan and Kishore Reddy
United Technologies Research Center
E. Hartford, CT 06018, USA
Email: {gierinmj, venugov, reddykk}@utrc.utc.com
## I Motivation
Navigation and situational awareness of optionally manned vehicles requires the integration of multiple sensing modalities such as Light Detection and Ranging (LiDAR) and video, but could just as easily be extended to other modalities including Radio Detection And Ranging (RADAR), Short-Wavelength Infrared (SWIR) and Global Positioning System (GPS). Spatio-temporal registration of information from multi-modal sensors is technically challenging in its own right. For many tasks such as pedestrian and object detection that make use of multiple sensors, decision support methods rest on the assumption of proper registration. Most approaches in LiDAR-video [1], for instance, build separate vision and LiDAR feature extraction methods and identify common anchor points in both. Alternatively, generating a single feature set on LiDAR, video and optical flow enables the system to capture mutual information among modalities more efficiently. The ability to dynamically register information from the available data channels for perception-related tasks can alleviate the need for anchor points _between_ sensor modalities. We see auto-registration as a prerequisite for operating on multi-modal information with confidence.
Deep neural networks (DNN) lend themselves in a seamless manner to data fusion on time series data. For some challenges in which the modalities share significant mutual information, the features generated on the fused information can provide insight that neither input alone can [2]. In effect, this is the ML version of "the whole is greater than the sum of its parts".
Autonomous navigation places significant constraints on the speed of perception algorithms and their ability to drive decision making in real-time. Though computationally intensive to train, our implemented DCNN runs easily within our real-time frame rate of 8 fps and could accommodate more standard rates of 30 fps. With most research in deep neural networks focused on algorithmic improvements and novel applications, a significant benefit to applied researchers is sometimes underappreciated. The automated feature generation of DNNs enables us to create multi-modal systems with far less overhead. The need for domain experts and hand-crafted feature design is lessened, allowing more rapid prototyping and testing. The generalization of auto-registration across multiple assets is clearly a path to be explored.
In this paper, the main contributions are: (i) formulation of an image registration problem as a fusion of modalities from different sensors, namely LIDAR (L), video (Grayscale or R,G,B) and optical flow (U,V); (ii) performance evaluation of deep convolutional neural networks (DCNN) with various input parameters, such as kernel filter size and different combinations of input channels (R,G,B,Gr,L,U,V); (iii) fusion of patch-level and image-level predictions to generate alignment at the frame-level. The experiments were conducted using a publicly available dataset from FORD and the University of Michigan [3]. The DCNN implementation was executed on an NVIDIA Tesla K40 GPU with 2880 cores and compute power of 5 TFLOPS (single precision). The paper is organized into the following sections: Section I describes the introduction and motivation for this work; Section II provides a survey of the related work; the problem formulation along with the dataset description and the preprocessing is explained in Section III; Section IV gives the details of the DCNN setup for the different experiments; Section V describes the experiments and the post-processing steps for visualizing the qualitative results; finally Section VI summarizes the paper and concludes with future research thrusts.
## II Previous Work
A great deal has been published on various multi-modal fusion methods [4, 5, 6, 7]. The most common approaches generate features of interest in each modality separately and create a decision support mechanism that aggregates features across modalities. If spatial alignment is required across modalities, as it is for LiDAR-video, such filter methods [8] are required to ensure proper inter-modal registration. These filter methods for leveraging 3D LiDAR and 2D images are often geometric in nature and make use of projections between the different data spaces.
Automatic registration of 2D video and 3D LiDAR has been a widely researched topic for over a decade [9, 10, 11, 1]. Its application in real-time autonomous navigation makes it a challenging problem. The majority of 2D-3D registration algorithms are based on feature matching. Geometric features like corners and edges are extracted from detected vanishing points [12, 13], line segments [14, 15], and shadows [16]. Feature-based approaches generally rely on dense 3D point clouds and additional knowledge of the relative sun position and GPS/inertial navigation system (INS). Another approach used for video and LiDAR auto-registration is to reconstruct a 3D point cloud from video sequences using structure from motion (SFM) and perform 3D-3D registration [17, 18]. 3D-3D registration is more difficult and computationally expensive compared to 2D-3D registration.
The use of deep neural networks to analyze multi-modal sensor inputs has increased sharply in just the last few years, including audio-video [2, 19], image/text [20], image/depth [21] and LiDAR-video. To the best of our knowledge, the use of multi-modal deep neural networks for dynamic LiDAR-video registration has not been presented.
A common challenge for data fusion methods is deciding at what level features from the differing sensor streams should be brought together. The deep neural network (DNN) approach most similar to the more traditional data fusion methods is to train DNNs independently on sensor modalities and then use the high-level outputs of those networks as inputs to a subsequent aggregator, which could also be a DNN. This is analogous to the earlier example of learning 3D/2D features and the process of identifying common geometric features.
It is possible however to apply DNNs with a more agnostic view enabling a unified set of features to be learned across multi-modal data. In these cases the input channels aren't differentiated. Unsupervised methods including deep Boltzmann machines and deep auto-encoders for learning such joint representations have been successful.
Deep convolutional neural networks (DCNN) enable a similar agnostic approach to input channels. A significant difference is that target data is required to train them as classifiers. This is the approach chosen by us for automating the registration of LiDAR-video and optical-flow, in which we are combining 1D/3D/2D data representations respectively to learn a unified model across as many as 6D.
## III Problem Statement
Being able to detect and correct the misalignment (registration, calibration) among sensors of the same or different kinds is critical for decision support systems operating on their fused information streams. For our work, DCNNs were implemented for the detection of small spatial misalignments in LiDAR and video frames. The methodology is directly applicable to temporal registration as well. LiDAR-video data collected from a driverless car was chosen as the multi-modal fusion test case. LiDAR-video is a common combination for providing perception capabilities to many types of ground and airborne platforms, including driverless cars [8].
### _Ford LiDAR-video Dataset and Experimental Setup_
The Ford LiDAR-video dataset [3] was collected by an autonomous Ford F-250 vehicle equipped with the following perception and navigation sensors:
* Velodyne HDL-64E LiDAR with two blocks of lasers spinning at 10 Hz and a maximum range of 120m.
* Point Grey Ladybug3 omni-directional camera system with six 2-Mega-pixel cameras collecting video data at 8fps with \\(1600\\times 1600\\) resolution.
* Two Riegl LMS-Q120 LIDAR sensors installed in the front of the vehicle generating range and intensity data when the laser sweeps its \\(80^{\\circ}\\) field of view (FOV).
* Applanix POS-LV420 INS with Trimble GPS system providing the 6 degrees of freedom (DOF) estimates at 100 Hz.
* Xsens MTi-G sensor consisting of accelerometer, gyroscope, magnetometer, integrated GPS receiver, static pressure sensor and temperature sensor. It measures the GPS co-ordinates of the vehicle and also provides the 3D velocity and 3D rate of turn.
This dataset was generated by the vehicle while driving in and around the Ford research campus and downtown Dearborn, Michigan. The data includes feature-rich downtown areas as well as featureless empty parking lots. As shown in Figure 1, we divided the dataset into training and testing sections, A to B and C to D respectively. They were chosen in a manner that minimizes the likelihood of contamination between training and testing. Because of this, the direction of the light source is never the same in the testing and training sets.
### _Optical Flow_
In the area of mobile robot navigation, optical flow has been widely used to estimate egomotion [22], depth maps [23], reconstruct dynamic 3D scene depth [24], and segment moving objects [25]. Optical flow provides information about the scene dynamics and is expressed as an estimate of velocity at each pixel from two consecutive frames, denoted by \(\vec{u}\) and \(\vec{v}\). The motion field from these two frames is measured by the motion of the pixel brightness pattern, where the changes in image brightness are due to camera or object motion. [26] describes the algorithm for computing optical flow from images that is used during the preprocessing step. Figure 2 shows an example of the optical flow computed using two consecutive frames from the Ford LiDAR-video dataset. By including optical flow as input channels, we imbue the DCNN with information on the dynamics observed across time steps.

Fig. 1: Training (A to B) and testing (C to D) tracks in downtown Dearborn, Michigan.
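For illustration, the sketch below produces the (U,V) channels for a pair of consecutive frames. The paper uses the algorithm of [26]; OpenCV's Farneback flow is used here only as a stand-in, and the function name is ours.

```python
import cv2
import numpy as np

def compute_flow_channels(prev_bgr, curr_bgr):
    """Return the horizontal (U) and vertical (V) optical-flow channels for two consecutive frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow as a stand-in for the method of [26].
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]
    return u.astype(np.float32), v.astype(np.float32)
```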
### _Preprocessing_
At each video frame timestep, the inputs to our model consist of \(C\) channels of data, with \(C\) ranging from 3 to 6 channels. The channels consist of grayscale _Gr_ or _(R,G,B)_ information from the video, the horizontal and vertical components of optical flow _(U,V)_, and depth information \(L\) from the LiDAR. The data from each modality is reshaped to a fixed size of \(800\times 256\) values, which are partitioned into \(p\times p\) patches at a prescribed stride. Each \(p\times p\) patch is stacked across the \(C\) channels, effectively generating a vector of \(p\times p\times C\) values. The different preprocessing parameters are denoted by patch size \(p\), stride \(s\) and the number of input channels \(C\).
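A minimal sketch of this patch-generation step is given below, assuming the channels have already been resampled to \(800\times 256\); the function name and the NumPy implementation are our own choices, not the paper's.

```python
import numpy as np

def extract_patches(channels, p=32, s=32):
    """Stack the C input channels (each 800 x 256) and cut the stack into
    p x p patches at stride s; each patch is a p x p x C block."""
    stack = np.dstack(channels)                  # H x W x C
    H, W, _ = stack.shape
    patches = [stack[y:y + p, x:x + p, :]
               for y in range(0, H - p + 1, s)
               for x in range(0, W - p + 1, s)]
    return np.asarray(patches)                   # N_patches x p x p x C
```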
Preprocessing is repeated \(N\) times, where \(N\) is the number of offset classes. For each offset class, the video (R,G,B) and optical flow (U,V) channels are kept static and the depth (L) channel from the LiDAR is shifted by the offset, simulating a misalignment between the video and the LiDAR sensors. In order to accurately detect the misalignment between the LiDAR and video sensor data, each patch must carry sufficient information, so a threshold is applied per channel. Because the LiDAR data has regions of sparsity, LiDAR patches with low variance (\(\sigma^{2}<15\%\)) are dropped from the final dataset. This eliminates the majority of the patches, reducing the size of the training and testing sets by approximately \(80\%\). Figure 3(a) shows the \(N=9\) classes of elliptically distributed offsets and Figure 3(b) shows a \(p\times p\) patch stacked across the \(C\) channels.
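The sketch below illustrates these two operations: shifting only the LiDAR channel to simulate a misalignment, and discarding sparse LiDAR patches. The text does not specify how the \(\sigma^{2}<15\%\) threshold is normalized, so the range-normalized variance used here is an assumption, as are the function names.

```python
import numpy as np

def shift_lidar(L, dx, dy):
    """Translate only the LiDAR depth channel by (dx, dy) pixels to simulate a
    misalignment; the video and optical-flow channels are left untouched."""
    return np.roll(np.roll(L, dy, axis=0), dx, axis=1)

def keep_lidar_patch(lidar_patch, rel_var_thresh=0.15):
    """Keep a patch only if its range-normalized LiDAR variance exceeds the
    threshold; sparse, low-information patches are dropped."""
    rng = float(lidar_patch.max() - lidar_patch.min()) + 1e-6
    return float(np.var(lidar_patch / rng)) >= rel_var_thresh
```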
## IV Model Description
Our models for auto-registration are DCNNs trained to classify the current misalignment of the LiDAR-video data streams into one of a predefined set of offsets. DCNNs are among the most successful deep learning models to date in fielded applications. Because the algorithm shares weights during the training phase, it has fewer model parameters and trains more efficiently. DCNNs are particularly useful for problems in which local structure is important, such as object recognition in images and temporal information for voice recognition. The alternating steps of convolution and pooling generate features at multiple scales, which in turn imbues DCNNs with scale-invariant characteristics.
The model shown in Figure 4 consists of 3 pairs of convolution-pooling layers and estimates the offset between the LiDAR-video inputs at each time step. For each patch within a timestep, there are \(N\) variants with the LiDAR-video-optical flow inputs offset by the predetermined amounts. The CNN outputs to a softmax layer, thereby providing an offset classification value for each patch of the frame. As described in Section III-C, \(32\times 32\) patches were stacked across the different channels and provided as the input to the DCNN. All \(6\) channels _RGBLUV_ were used for the majority of the experiments, whereas only \(4\) channels were required for the _RGBL_ and the _GrLUV_ experiments. The first convolutional layer uses \(32\) filters (or kernels) of size \(5\times 5\times C\) with a stride of \(1\) pixel and padding of \(2\) pixels on the edges. The following pooling layer generates the input data (of size \(16\times 16\times 32\)) for the second convolutional layer. This layer uses \(32\) filters of size \(5\times 5\times 32\) with a stride of \(1\) pixel and padding of \(2\) pixels on the edges. A second pooling layer, similar to the first one, is used to generate input of size \(8\times 8\times 32\) for the third convolutional layer, which uses \(64\) filters of size \(5\times 5\times 32\) with the same stride and padding as the previous convolutional layer. The third pooling layer, with the same configuration as the two previous pooling layers, connects to an output softmax layer with labels corresponding to the \(N=9\) classes. The DCNN described above was trained using stochastic gradient descent with a mini-batch size of \(100\). The DCNN is configured with Rectified Linear Units (ReLUs), as they train several times faster than their equivalents with \(\tanh\) connections [27].

Fig. 2: Optical flow: Hue indicates orientation and saturation indicates magnitude

Fig. 3: Preprocessing steps
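A sketch of this architecture is given below in PyTorch; the choice of framework and the use of \(2\times 2\) max pooling are our assumptions (the pooling type is not stated, but \(2\times 2\) pooling matches the \(32\to 16\to 8\) spatial reduction described above).

```python
import torch
import torch.nn as nn

class RegistrationDCNN(nn.Module):
    """Three conv-pool pairs followed by a classifier over N offset classes,
    mirroring the filter counts and sizes described in the text."""
    def __init__(self, in_channels=6, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 32x32 -> 16x16
            nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 16x16 -> 8x8
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 8x8 -> 4x4
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):                                      # x: (B, C, 32, 32)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))            # logits; softmax is applied in the loss
```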
The NVIDIA Kepler-series K40 GPUs [28] are very FLOPS/Watt efficient and are being used to drive real-time image processing capabilities [29]. These GPUs consist of 2880 cores with 12 GB of on-board device memory (RAM). Deep learning applications have been targeted on GPUs previously [30], and these implementations are both compute and memory bound. Stacking of channels results in a vector of \(32\times 32\times C\), which is well suited to the Single Instruction, Multiple Data (SIMD) architecture of the GPUs. At the same time, the training batch fits in the GPU memory, so the utilization of the K40 GPU's memory is very high. This also allows our experiments to run on a single GPU instead of partitioning the different layers over multiple GPUs.
## V Experiments
### _Dataset using elliptically distributed offsets_
In our experiments, an elliptically distributed set of \(N=9\) offsets of the LiDAR-video data was considered. The LiDAR data is displaced along an ellipse with a major axis of \(32\) pixels and a minor axis of \(16\) pixels, rotated clockwise from the x-axis by \(45^{\circ}\), as shown in Figure 3(a). Separate training and testing sets were generated from the two different tracks shown in Figure 1 for all \(N=9\) offsets of the LiDAR data. The training and testing tracks cover regions never seen by one another and also have different lighting conditions. Our preprocessing step described in Section III-C results in \(223,371\) and \(126,513\) patches for testing and training, extracted from \(469\) and \(224\) images respectively.
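The offset classes can be generated as in the sketch below. Treating the quoted 32- and 16-pixel axes as full axis lengths (semi-axes of 16 and 8 pixels) and reserving one class for the zero, aligned offset are our assumptions about the exact layout.

```python
import numpy as np

def elliptical_offsets(n=9, a=16.0, b=8.0, rotation_deg=-45.0):
    """Generate n offset classes: a zero offset plus n-1 points on an ellipse
    rotated clockwise by 45 degrees from the x-axis."""
    t = np.deg2rad(rotation_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    angles = np.linspace(0.0, 2.0 * np.pi, n - 1, endpoint=False)
    pts = np.stack([a * np.cos(angles), b * np.sin(angles)], axis=1) @ R.T
    return np.rint(np.vstack([[0.0, 0.0], pts])).astype(int)   # (n, 2) pixel offsets (dx, dy)
```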
In the testing phase, for each frame a simple voting scheme is used to aggregate the patch-level offset predictions into a single frame-level prediction. A sample histogram of the patch-level predictions is shown in Figure 5. We color each patch of the frame with a color corresponding to the predicted class, as also shown in Figure 5.

Fig. 5: Left: Histogram of the classes detected by all patches. Right: Location of patches color coded with the predicted class.
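A minimal version of this voting step is sketched below; the function name is ours.

```python
import numpy as np

def frame_level_prediction(patch_predictions):
    """Majority vote over the per-patch offset-class predictions of one frame."""
    classes, counts = np.unique(np.asarray(patch_predictions), return_counts=True)
    return int(classes[np.argmax(counts)])
```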
### _Experimental results_
Table I lists the input configurations and CNN parameters explored, ranked in order of increasing accuracy. We averaged the values across the diagonal of the confusion matrix to determine the image-level and patch-level accuracy. Patch-level accuracy is the individual performance of all the \(32\times 32\) patches from the testing images. Classifications of patches belonging to a single time-step are voted to predict the shift for image-level accuracy. In Table I, the first 3 columns show the results for different numbers of filters in the convolutional layers with a fixed filter size and the input channels _RGBLUV_. We observed that the image- and patch-level accuracy decreased with an increase in the number of filters. For the experiments shown in columns 4 and 5, the filter size was increased, with the number of filters held constant at \((32,32,64)\). We observed that for the 6 channels _RGBLUV_, a filter size of 9 gave the best image-level accuracy of \(63.03\%\). Column 6 shows the results of our experiment after dropping the optical flow _UV_ channels. The image- and patch-level accuracy decreased in this case, indicating that optical flow contributes significantly to image registration. The remaining experiments utilized the grayscale information _Gr_ instead of _RGB_ and produced the best results, with \(76.69\%\) and \(41.05\%\) image- and patch-level accuracy respectively; the confusion matrix for this configuration is shown in Figure 6. Table II shows that using information from consecutive frames increases performance significantly.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
**Channels** & \multicolumn{5}{c|}{RGBLUV} & RGBL & \multicolumn{2}{c|}{GrLUV} \\ \hline
**Filter size** & 5 & 5 & 5 & 7 & 9 & 5 & 5 & 9 \\ \hline
**\# of filters** & (32,32,32) & (32,32,64) & (64,64,64) & \multicolumn{5}{c|}{(32,32,64)} \\ \hline
**Image level accuracy(\%)** & 61.75 & 61.06 & 60.09 & 61.79 & 63.03 & 60.66 & 68.03 & **76.69** \\ \hline
**Patch level accuracy(\%)** & 38.74 & 38.57 & 38.49 & 38.03 & 39.00 & 39.28 & 40.96 & **41.05** \\ \hline \end{tabular}
\end{table} TABLE I: Results for different combinations of input channels \(C\) and CNN parameters

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
**Number of consecutive time-steps used** & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** \\ \hline
**Accuracy(\%)** & 76.33 & 85.42 & 88.88 & 90.30 & 92.52 & 93.85 & 94.29 & 95.12 \\ \hline \end{tabular}
\end{table} TABLE II: Performance using temporal information
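As a concrete illustration of the accuracy computation mentioned above, the sketch below averages the per-class values along the diagonal of a confusion matrix; treating rows as the true offset classes is an assumption.

```python
import numpy as np

def mean_diagonal_accuracy(confusion):
    """Average the per-class accuracies along the diagonal of a confusion matrix
    whose rows correspond to the true offset classes."""
    confusion = np.asarray(confusion, dtype=float)
    per_class = np.diag(confusion) / np.clip(confusion.sum(axis=1), 1e-9, None)
    return float(per_class.mean())
```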
## VI Conclusions and Future Work
In this paper, we proposed a deep learning method for LiDAR-video registration. We demonstrated the effect of filter size, number of filters and different input channels. We also showed the advantage of using temporal information, optical flow and grayscale imagery. The next step in taking this work forward is to complete our development of a deep auto-registration method for ground and aerial platforms requiring no a priori calibration ground truth. Our aerospace applications in particular present noisier data with an increased number of degrees of freedom. The extension of these methods to simultaneously register information across multiple platforms and larger numbers of modalities will provide interesting challenges that we look forward to working on.

Fig. 4: Experimental setup of the LiDAR-video DCNN with \(5\times 5\) convolution
## References
* [1] C. Bodensteiner and M. Arens, \"Real-time 2D Video 3D LiDAR Registration,\" in _Pattern Recognition (ICPR), 2012 21st International Conference on_. IEEE, 2012, pp. 2206-2209.
* [2] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, \"Multimodal Deep Learning,\" in _Proceedings of the 28th International Conference on Machine Learning (ICML-11)_, 2011, pp. 689-696.
* [3] G. Pandey, J. R. McBride, and R. M. Eustice, \"Ford Campus Vision And Lidar Data Set,\" _The International Journal of Robotics Research_, vol. 30, no. 13, pp. 1543-1552, 2011.
* [4] A. Ross and A. Jain, \"Information Fusion In Biometrics,\" _Pattern recognition letters_, vol. 24, no. 13, pp. 2115-2125, 2003.
* [5] K. Gregor and Y. LeCun, \"Learning Representations By Maximizing Compression,\" _arXiv preprint arXiv:1108.1169_, 2011.
* [6] Y. Wu, E. Y. Chang, K. C.-C. Chang, and J. R. Smith, \"Optimal Multimodal Fusion For Multimedia Data Analysis,\" in _Proceedings of the 12th annual ACM international conference on Multimedia_. ACM, 2004, pp. 572-579.
* [7] C. G. Snoek, M. Worring, J. C. Van Gemert, J.-M. Geusebroek, and A. W. Smeulders, \"The Challenge Problem For Automated Detection Of 101 Semantic Concepts In Multimedia,\" in _Proceedings of the 14th annual ACM international conference on Multimedia_. ACM, 2006, pp. 421-430.
* [8] S. Thrun, \"Google's driverless car,\" _Ted Talk, Ed_, 2011.
* [9] L. Wang and U. Neumann, \"A Robust Approach For Automatic Registration Of Aerial Images With Untextured Aerial LIDAR Data,\" in _Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on_. IEEE, 2009, pp. 2623-2630.
* [10] H. Kim, C. D. Correa, and N. Max, \"Automatic registration of lidar and optical imagery using depth map stereo,\" in _Computational Photography (ICCP), 2014 IEEE International Conference on_. IEEE, 2014, pp. 1-8.
* [11] A. Mastin, J. Kepner, and J. Fisher, \"Automatic Registration of LIDAR and Optical Images of Urban Scenes,\" in _Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on_. IEEE, 2009, pp. 2639-2646.
* [12] L. Liu and I. Stamos, \"A systematic approach for 2d-image to 3d-range registration in urban environments,\" in _Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on_, Oct 2007, pp. 1-8.
* [13] M. Ding, K. Lyngbaek, and A. Zakhor, \"Automatic registration of aerial imagery with untextured 3d lidar models,\" in _Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on_, June 2008, pp. 1-8.
* [14] C. Frueh, R. Sammon, and A. Zakhor, \"Automated texture mapping of 3d city models with oblique aerial imagery,\" in _3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004. Proceedings. 2nd International Symposium on_, Sept 2004, pp. 396-403.
* [15] I. Stamos, L. Liu, C. Chen, G. Wolberg, G. Yu, and S. Zokai, \"Integrating automated range registration with multiview geometry for the photorealistic modeling of large-scale scenes,\" _International Journal of Computer Vision_, vol. 78, no. 2-3, pp. 237-260, 2008. [Online]. Available: [http://dx.doi.org/10.1007/s11263-007-0089-1](http://dx.doi.org/10.1007/s11263-007-0089-1)
* [16] A. J. Troccoli and P. K. Allen, \"A shadow based method for image to model registration,\" in _In IEEE Workshop on Image and Video Registration, Conf. on Comp. Vision and_, 2004.
* [17] W. Zhao, D. Nister, and S. Hsu, \"Alignment of continuous video onto 3d point clouds,\" in _Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on_, vol. 2, June 2004, pp. II-II.
* [18] L. Liu, I. Stamos, G. Yu, G. Wolberg, and S. Zokai, "Multiview geometry for texture mapping 2d images onto 3d range data," in _Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on_, vol. 2, June 2006, pp. 2293-2300.
* [19] Y. Kim, H. Lee, and E. M. Provost, \"Deep Learning for Robust Feature Generation in Audovisual Emotion Recognition,\" in _Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on_. IEEE, 2013, pp. 3687-3691.
* [20] N. Srivastava and R. Salakhutdinov, \"Multimodal Learning With Deep Boltzmann Machines,\" in _Advances in neural information processing systems_, 2012, pp. 2222-2230.
* [21] I. Lenz, H. Lee, and A. Saxena, \"Deep Learning for Detecting Robotic Grasps,\" _arXiv preprint arXiv:1301.3592_, 2013.
* [22] K. Prazdny, \"Egomotion and relative depth map from optical flow,\" _Biological Cybernetics_, vol. 36, no. 2, pp. 87-102, 1980. [Online]. Available: [http://dx.doi.org/10.1007/BF00361077](http://dx.doi.org/10.1007/BF00361077)
* [23] B. Shahrary and M. Brown, \"Robust depth estimation from optical flow,\" in _Computer Vision., Second International Conference on_, Dec 1988, pp. 641-650.
* [24] Y. Yang, Q. Liu, R. Ji, and Y. Gao, \"Dynamic 3D Scene Depth Reconstruction via Optical Flow Field Rectification,\" _PLoS ONE_, vol. 7, p. 47041, Nov. 2012.
* [25] S.-Y. Chien, S.-Y. Ma, and L.-G. Chen, \"Efficient moving object segmentation algorithm using background registration technique,\" _Circuits and Systems for Video Technology, IEEE Transactions on_, vol. 12, no. 7, pp. 577-586, Jul 2002.
* [26] C. Liu, \"Beyond Pixels: Exploring New Representations and Applications for Motion Analysis,\" Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 2009, A4N022221.
* [27] V. Nair and G. E. Hinton, \"Rectified Linear Units Improve Restricted Boltzmann Machines,\" in _Proceedings of the 27th International Conference on Machine Learning (ICML-10)_, 2010, pp. 807-814.
* [28] NVIDIA Inc., \"NVIDIA's Next Generation CUDA Compute Architecture: Kepler TM GK110,\" Whitepaper, May 2012.
* [29] V. Venugopal and S. Kannan, \"Accelerating real-time lidar data processing using gpus,\" in _Circuits and Systems (MWSCAS), 2013 IEEE 56th International Midwest Symposium on_, Aug 2013, pp. 1168-1171.
* [30] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet Classification With Deep Convolutional Neural Networks,\" in _Advances in neural information processing systems_, 2012, pp. 1097-1105.
Fig. 6: Confusion Matrix for elliptically distributed \\(N=9\\) classes using Greyscale, Optical Flow and LiDAR channels with a filter size of 9 | The ability to simultaneously leverage multiple modes of sensor information is critical for perception of an automated vehicle's physical surroundings. Spatio-temporal alignment of registration of the incoming information is often a prerequisite to analyzing the fused data. The persistence and reliability of multi-modal registration is therefore the key to the stability of decision support systems ingesting the fused information. LiDAR-video systems like on those many driverless cars are a common example of where keeping the LiDAR and video channels registered to common physical features is important. We develop a deep learning method that takes multiple channels of heterogeneous data, to detect the misalignment of the LiDAR-video inputs. A number of variations were tested on the Ford LiDAR-video driving test data set and will be discussed. To the best of our knowledge the use of multi-modal deep convolutional neural networks for dynamic real-time LiDAR-video registration has not been presented. | Provide a brief summary of the text. | 182 |