# Adaptive Multiscale Reversible Column Network for SAR Ship Detection

Tianxiang Wang and Zhangfan Zeng

Manuscript received 16 October 2023; revised 15 December 2023 and 21 February 2024; accepted 6 March 2024. Date of publication 12 March 2024; date of current version 22 March 2024. This work was supported in part by the National Natural Science Foundation of China under Grant 62273135 and in part by the Natural Science Foundation of Hubei Province under Grant 2021CFB503. _(Corresponding author: Zhangfan Zeng.)_

Tianxiang Wang is with the School of Computer and Information Engineering, Hubei University, Wuhan 430062, China (e-mail: [email protected]). Zhangfan Zeng is with the School of Artificial Intelligence, Hubei University, Wuhan 430062, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/JSTARS.2024.3376070

## I Introduction

The development of synthetic aperture radar (SAR) technology has significantly advanced the field of remote-sensing target detection, particularly demonstrating outstanding potential in maritime monitoring [1, 2, 3]. Unlike optical sensors, which are suboptimal in stringent conditions due to their sensitivity to weather and lighting, SAR excels in various weather conditions and offers higher data rates and larger processing capacities [4]. Furthermore, SAR benefits from the unique ability to penetrate challenging environments for imaging and to continuously acquire geographical information even in complex weather conditions. These advantages make SAR valuable for applications in various fields, such as target detection and recognition [5, 6], geomorphology and terrain mapping [7], and segmentation and classification [8, 9, 10].

Maritime ship detection is a crucial task in the fields of maritime traffic control, maritime resource exploitation, and maritime environmental protection [11, 12, 13]. Recent studies [1, 2, 3] have shown that SAR also plays an important role in ship detection due to its unique imaging mechanism. However, SAR data can be subject to various degradation factors, noise effects, and variabilities during the imaging process. Unlike spectral variability [14], SAR images mainly suffer from topology and adverse changes in environments, leading to variations in the microwave signals of targets. Topology in SAR imaging typically refers to the geometric arrangement of land features, including their spatial relationships and characteristics. More specifically, the variation in the incidence angle affects the interaction between the radar signal and the terrain. In areas with complex topography, radar signals may cast shadows or result in layover effects, which can lead to distorted or ambiguous representations in the SAR image. Speckle is a common artifact in SAR imaging, resulting from the interference of radar signals with multiple scattering centers on the ground. Consequently, SAR ship detection faces significant challenges in complex coastal environments and harsh marine conditions. For instance, in coastal environments, SAR microwave signals can encounter reflection, scattering, and refraction from ships, coastal constructions, and inshore constructions, which can result in false detections. In offshore scenarios, complex marine conditions such as sea surface turbulence and ship wakes introduce intricate background clutter in SAR images. Moreover, the large scale differences in SAR images, often caused by different observation geometries, pose challenges for SAR ship detection.
Recent methods can be broadly categorized into traditional methods and deep-learning-based methods [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. Most of the traditional methods are designed from the perspective of signal processing. Among them, constant false alarm rate (CFAR)-based methods are widely used in SAR ship detection, as explored in related research papers [15, 16, 17, 18, 19]. The CFAR method is a statistically based signal processing technique that adaptively estimates the statistical properties of the background clutter and determines an appropriate threshold to suppress the background clutter and detect the target signal. However, in complex environments with strong clutter, multiple targets, and inhomogeneous noise, the CFAR algorithm may not be able to accurately estimate the statistical properties of the background, resulting in missed detections or false alarms.

In recent years, deep learning technology has been widely applied to SAR ship detection [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. Compared to the CFAR method, deep learning methods exhibit stronger automatic feature learning ability and robustness, avoiding the tediousness and uncertainty associated with manual design processes. However, deep learning methods developed for generic object detection, such as Faster R-CNN [37] and FoveaBox [38], and for remote-sensing object detection with optical sensors [39, 40], when directly applied to SAR ship detection, are susceptible to the influence of clutter, significant scattering of target information, and large scale differences, leading to degraded detection performance. As such, numerous improvement methods applicable to SAR ship detection have been proposed. Specifically, to enhance the performance of multiscale ship detection in SAR images, feature pyramid network (FPN)-based methods have been widely proposed [20]. Jiao et al. [21] proposed a densely connected end-to-end network to extract features at different scales. Cui et al. [22] reported a novel DAPN-based multiscale extraction method with a CBAM attention mechanism module. Zhang et al. [23] presented a novel quad FPN for integrating feature maps in SAR ship detection. Li et al. [24] proposed a multidimensional domain deep learning network with complementary features in the spatial and frequency domains. Zhou et al. [25] embedded a novel Res2-based idea into the backbone network to further extract multiscale ship information. Based on the bidirectional convolutional structure, Yu et al. [26] proposed the two-way convolutional network and utilized multiscale feature mapping to process SAR images. To mitigate the influence of complex background clutter, Zhao et al. [27] designed a dual-feature fusion attention mechanism that combines shallow features and denoised features, effectively reducing clutter from complex backgrounds. Zhang et al. [28] employed a one-stage detector with a frequency attention module, which can process frequency-domain information adaptively and suppress the sea clutter in SAR images. Wang et al. [29] presented a multifeature fusion network, which reduces the interference of complex backgrounds by extracting frequency-domain features and combining spatial, high-frequency, and low-frequency texture information. Bai et al. [30] introduced the feature enhancement pyramid and the shallow feature reconstruction module to address scattered spots and noise.
Furthermore, the significance of sidelobes is amplified by the weak scattering from small ships and the presence of background clutter. To alleviate the impact of sidelobes, Zhou et al. [31] proposed feature extraction with sidelobe awareness (FESA), which addresses the sidelobe effect by incorporating maximum pooling and minimum pooling operations along the \\(X\\) and \\(Y\\) axes, respectively. Zhou et al. [32] integrated the respective advantages of a convolutional neural network (CNN) and self-attention to enhance the extraction of scattering information from small targets. Compared to anchor-based methods, researchers have observed advantages in anchor-free methods such as flexibility, simplified designs, adaptability to complex scenes, and lower computational complexity. Jiang et al. [33] constructed an anchor-free detector, which is designed with a foreground enhancement module to reduce the impact of complex backgrounds. Hu et al. [34] constructed an anchor-free balanced attention network, which introduced dynamic convolution to build a local attention module for local information and designed a nonlocal attention module to extract nonlocal features. To effectively address sparsely labeled samples and imbalanced categories, Gao et al. [35] proposed an attention-dense-CycleGAN method for ship automatic target recognition, specifically designed for the optical-to-SAR transfer learning task. The transformer is also employed for SAR ship detection tasks due to its capability to capture long-term dependencies. A vision transformer architecture named CRTransSar was proposed by Xia et al. [36], combining transformer and CNN to enhance context learning. However, the lack of utilization of local information and multilevel feature representation in transformer-based methods makes multiscale ship detection more challenging, especially for small-scale ships.

Despite the progress in SAR ship detection achieved by these methods, the challenges posed by environmental clutter and scale differences need to be further addressed. Specifically, due to the presence of environmental clutter, ship features are similar to surrounding noise features, which results in the masking or obscuring of semantically weaker ship features. Moreover, the prevalence of small-scale ships in SAR images leads to deterioration in the performance of multiscale ship detection. The previous methods, which entirely adopt the FPN-based strategy, alleviate the challenge of significant scale differences among ships to some extent. However, owing to the structural and available-pixel constraints of small-scale ships and the unique imaging mechanism of SAR, feature information for small-scale ships will be lost or erroneously identified as background during the feature extraction process. This can be attributed to the fact that deep learning methods widely used in SAR ship detection usually follow the information bottleneck (IB) principle [41, 42], under which information is compressed or even discarded. Therefore, from the perspective of feature learning, the application of the IB principle is inappropriate for SAR image ship detection. Disentangled feature learning [43, 44] is widely used in the area of computer vision, especially for target detection. This method embeds task-relevant semantic and positional information into separate, decoupled dimensions while maintaining the same amount of information as the input.
Revcol, proposed by Cai et al. [45], is the most representative network based on disentangled feature learning. In Revcol, a multiplexed subcolumn network is utilized for information extraction, where the information from different layers in different subcolumns is embedded into the next subcolumn using a reversible transformation. Thanks to the reversible transformation, information partially lost in one subcolumn network can be recovered in another subcolumn network. This shows that disentangled feature learning is effective for multiscale ship detection, particularly for small-scale ships. Based on the preceding analysis, in order to enhance the performance of multiscale ship detection, especially for small-scale ships, as well as to address the impact of complex background clutter in SAR images, an adaptive multiscale reversible column network for SAR ship detection (AMRCNet) is proposed in this article. The main contributions of this article are as follows.

1. A novel ship detection network, AMRCNet, is proposed, which demonstrates superior multiscale ship detection performance in complex environments and mitigates the negative effects of complex background clutter, improving the detection accuracy of ships in SAR images.
2. To address the degraded SAR multiscale ship detection caused by the loss of partial information under the IB principle, Revcol is applied to reconstruct the backbone network of YOLOv8 [46], resulting in the reversible column network with C2f modules (Revcol-C2f) as the backbone for feature extraction.
3. To mitigate complex background clutter in SAR ship detection, this article proposes a novel multiplexed adaptive spatial pyramid pooling layer (MASPPF). In MASPPF, multiplexed large-kernel pooling operations and adaptive fusion are utilized to improve the target perception capabilities of the network and mitigate the influence of background clutter in environments of all scales.
4. To further enhance the detection capabilities of multiscale ships, especially small ships, in SAR images, this article constructs an adaptive sampling FPN (ASFPN). Within ASFPN, an adaptive downsampling module (ADM) is devised to address issues related to acquiring irrelevant information and information loss during feature pyramid downsampling (DS). The ADM facilitates the extraction of semantic information and precise spatial localization. Furthermore, a bidirectional fusion of shallow and deep features at different scales is implemented to merge feature information.

In order to validate the effectiveness of our method, a large number of ablation experiments and comparative experiments have been implemented on the SAR ship detection dataset (SSDD) [47] and the high-resolution SAR images dataset (HRSID) [48]. The rest of the article is structured as follows: Section II describes the proposed method. Section III validates the proposed method. Section IV offers a comprehensive discussion of the experimental results. Finally, Section V concludes this article.

## II Proposed Method

The general architecture of the proposed method is shown in Fig. 1 and consists of three parts: 1) the backbone, 2) the neck, and 3) the head. The implementation details of each module are described as follows. First, the general architecture of the network is described. Following that, the Revcol-C2f feature extraction network, the MASPPF module, and the ASFPN structure are detailed, respectively.
### _General Architecture of AMRCNet_

AMRCNet is the proposed adaptive multiscale reversible column network, where YOLOv8s is used as the baseline. The overall structure of AMRCNet is illustrated in Fig. 1. AMRCNet consists of three parts: 1) the backbone, 2) the neck, and 3) the detection head.

Fig. 1: General architecture of the AMRCNet network. It consists of three main parts: 1) the feature extraction part with Revcol-C2f as the backbone and MASPPF as the feature enhancement layer; 2) the neck combining ADM and RepGFPN; and 3) the parallel detection head.

The Revcol-C2f backbone is proposed to alleviate information loss and acquire comprehensive multiscale ship information in complex environments, where Revcol is leveraged to reconstruct the YOLOv8 backbone network. The MASPPF module is introduced after Revcol-C2f to augment the receptive field and mitigate the influence of background clutter. The ASFPN neck is reconstructed by adding an ADM to obtain more adaptive multiscale ship information, and it is based on the FPN and PAN [49] network structures. The detection head utilizes the decoupled head of YOLOv8, which processes classification and regression separately. In particular, the Revcol-C2f backbone is composed of four feature extraction layers denoted as \\(Ln\\) (where \\(n\\) ranges from 1 to 4), playing a pivotal role in extracting discriminative features. Thanks to the hierarchical feature extraction process, multiscale feature maps are obtained for each layer, where C2, C3, and C4 represent the outputs of the first three stages of the Revcol-C2f backbone. C5 is the feature map obtained by MASPPF enhancement of the final output of Revcol-C2f. During the FPN fusion, intermediate feature maps, denoted as F1, F2, F3, and F4, are generated. These feature maps result from the fusion of intermediate levels within the feature pyramid, enabling the integration of information across different scales. Finally, the fused multilevel feature maps, denoted as P1, P2, P3, and P4, are generated, which is beneficial for detecting multiscale ships.

### _Feature Extraction Backbone Revcol-C2f_

In conventional SAR ship detection methods, the IB principle is widely applied. However, this approach can cause small and medium ship information to be lost or recognized as background in complex environments. On the contrary, disentangled feature learning may be more favorable for preserving this information. In this article, to obtain rich multiscale ship information in SAR images and mitigate information loss, Revcol is utilized to reconstruct the YOLOv8s backbone, named the Revcol-C2f backbone, as shown in Fig. 2(a). The multilevel feature extraction structure of YOLOv8s, specifically the C2f structure built on the CSP structure, facilitates an improved capture of target information across various scales of ships. The use of reversible transformations helps retain large amounts of extracted information while mitigating information conflicts. In particular, the Revcol-C2f backbone combines embeddings from different layers and employs disentanglement mechanisms to address the potential information loss encountered when using a single-column network to extract features from deeper layers. In addition, Revcol-C2f promotes the fusion of deeper semantic information with shallow features, resulting in a more comprehensive and informative representation of the input data.

Fig. 2: (a) Revcol-C2f module. (b) C2f module. (c) Fusion module. Red arrows represent reversible transformations, blue arrows denote feature extraction within the current subnetwork, and green arrows signify cross-subnetwork multiscale information embedding.

The reversible transformation can be explained by the following formulas:

\\[\\text{Forward}:x_{t}=F_{t}(x_{t-1},x_{t-m+1})+\\gamma x_{t-m} \\tag{1}\\]

\\[\\text{Inverse}:x_{t-m}=\\gamma^{-1}\\left[x_{t}-F_{t}(x_{t-1},x_{t-m+1})\\right] \\tag{2}\\]

where \\(t\\) indexes each subnetwork's \\(t\\)th layer (\\(t{\\geq}1\\)), \\(m\\) represents the number of subnetworks (\\(m{\\geq}2\\)), \\(F_{t}\\) represents the \\(n{\\times}\\)C2f module at the \\(t\\)th layer, \\(\\gamma\\) denotes a fixed parameter set to 0.5 in this article, and \\(x_{t}\\) represents the feature map of the \\(t\\)th layer in the network.
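To make the reversible transformation in (1) and (2) concrete, the following minimal PyTorch sketch shows a single reversible connection between subcolumns and verifies that the carried feature can be recovered exactly. The `C2fBlock` placeholder, channel width, and tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class C2fBlock(nn.Module):
    """Stand-in for the C2f feature-extraction block (assumption)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )

    def forward(self, x_prev_level: torch.Tensor, x_prev_column: torch.Tensor) -> torch.Tensor:
        # F_t receives x_{t-1} (current column, lower level) and
        # x_{t-m+1} (previous column); both are assumed to share one shape here.
        return self.body(torch.cat([x_prev_level, x_prev_column], dim=1))


class ReversibleConnection(nn.Module):
    """x_t = F_t(x_{t-1}, x_{t-m+1}) + gamma * x_{t-m}, with an exact inverse."""

    def __init__(self, channels: int, gamma: float = 0.5):
        super().__init__()
        self.f_t = C2fBlock(channels)
        self.gamma = gamma

    def forward(self, x_prev_level, x_prev_column, x_carried):
        # Eq. (1): the carried feature x_{t-m} is added with weight gamma.
        return self.f_t(x_prev_level, x_prev_column) + self.gamma * x_carried

    def inverse(self, x_t, x_prev_level, x_prev_column):
        # Eq. (2): recover x_{t-m}, so information dropped by F_t in one
        # subcolumn can still be reconstructed from the others.
        return (x_t - self.f_t(x_prev_level, x_prev_column)) / self.gamma


if __name__ == "__main__":
    conn = ReversibleConnection(channels=64)
    x_prev_level = torch.randn(1, 64, 40, 40)   # x_{t-1}
    x_prev_column = torch.randn(1, 64, 40, 40)  # x_{t-m+1}
    x_carried = torch.randn(1, 64, 40, 40)      # x_{t-m}
    x_t = conn(x_prev_level, x_prev_column, x_carried)
    recovered = conn.inverse(x_t, x_prev_level, x_prev_column)
    print(torch.allclose(recovered, x_carried, atol=1e-4))  # True
```

Because the inverse recovers \\(x_{t-m}\\) exactly, information compressed away by \\(F_{t}\\) in one subcolumn remains recoverable from the other subcolumns, which is the property Revcol-C2f relies on.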
The detailed implementation process of Revcol-C2f is as follows:

1. To achieve tensor dimensions consistent with the downsampled dimensions in YOLOv8s, a convolutional layer with a 4\\(\\times\\)4 kernel, a stride of 4, and no padding is employed.
2. Following the implementation of Revcol, the downsampled feature maps are passed through an extraction network consisting of three subnetworks with reversible transformations.
3. The C2f module is introduced to extract the features of the current subnetwork and the previous subnetwork; the structure of the C2f module is shown in Fig. 2(b).
4. The first subnetwork is equivalent to the original feature extraction network in YOLOv8. In the second and third subnetworks, a fusion module is employed to combine the features from the previous subnetworks with the features of the current subnetwork, where features are extracted by C2f. This fusion process is illustrated in Fig. 2(c).
5. The information from the same layer in the previous subnetwork is embedded through reversible transformations.
6. To address the problem of information collapse, the intermediate supervision method proposed by Revcol is incorporated into the second subnetwork. Through this approach, the network receives feedback and gradients not only at the final output but also at multiple intermediate layers.

Feature fusion is accomplished by the fusion module through the upsampling of low-resolution features and the DS of high-resolution features, followed by a sum operation. DS refers to the process of reducing the spatial resolution of features using regular convolutional operations.

### _Multiplexed Adaptive Spatial Pyramid Pooling Layer_

Due to the unique imaging mechanism of SAR, the effective feature information of ships can be affected by the clutter dispersed around the ship, leading to excessive semantic differences, especially for small-scale ships. Attention mechanisms such as FESA [31], coordinate attention [50], and EMA [51] demonstrate that pooling operations can effectively extract more relevant feature information in a local region. Moreover, the receptive field can be greatly enlarged by successive pooling operations with identical kernels, as in SPPF [52]. It is believed that utilizing different receptive fields to obtain a wider range of feature information can help mitigate the effects of scattered clutter, noise, and complex environments on the detection of ships in SAR images. However, the few fixed receptive fields used in SPPF are prone to introducing background clutter.
This can lead to the mixture of background noise features with ship features, resulting in blurred ship details. Therefore, multiple pooling layers with different kernels are added to obtain receptive fields at various scales. Subsequently, the receptive fields are enlarged by multiplexed parallel pooling. Ultimately, an adaptive fusion of the different receptive fields is conducted to obtain more suitable features. The overall structure, referred to as MASPPF, is illustrated in Fig. 3.

Fig. 3: Structure of MASPPF.

In particular, pooling operations with kernel sizes 5 and 7 are utilized for local feature extraction. Receptive fields of size 9 \\(\\times\\) 9 and 13 \\(\\times\\) 13 are acquired by the right branch after successive \\(5\\times 5\\) pooling, while the left branch acquires receptive fields of size 11 \\(\\times\\) 11 and \\(17\\times 17\\), respectively, after successive \\(7\\times 7\\) pooling. The adaptive fusion is implemented in three steps. First, a learnable parameter \\(w_{n}\\) (\\(1\\leq n\\leq 6\\)) is assigned to each feature map from the pooling layers. Second, raw scores are adaptively obtained from these learnable parameters. Finally, the scores are passed through the ReLU function to obtain the final adaptive scores, as shown in

\\[\\varepsilon_{k}=\\text{Relu}\\left(\\frac{w_{k}}{\\mu+\\sum_{n=1}^{6}w_{n}}\\right) \\tag{3}\\]

where \\(w_{n}\\) represents the learnable parameters, \\(\\varepsilon_{k}\\) is the adaptive score for the \\(k\\)th layer of the feature map, and \\(k\\) specifies the layer. \\(\\mu\\) represents a constant value of 0.0001, which is added to prevent a score of exactly 1. Relu represents the ReLU activation function. The feature map of MASPPF for each layer can be described as follows:

\\[X_{1}=\\text{CBS}(x) \\tag{4}\\]
\\[X_{2}=\\text{MaxPool2d}_{5}(X_{1}) \\tag{5}\\]
\\[X_{3}=\\text{MaxPool2d}_{5}(X_{2}) \\tag{6}\\]
\\[X_{4}=\\text{MaxPool2d}_{5}(X_{3}) \\tag{7}\\]
\\[X_{5}=\\text{MaxPool2d}_{7}(X_{2}) \\tag{8}\\]
\\[X_{6}=\\text{MaxPool2d}_{7}(X_{5}) \\tag{9}\\]
\\[X_{\\text{out}}=\\text{CBS}\\left(\\text{Cat}\\left(\\varepsilon_{1}X_{1},\\varepsilon_{2}X_{2},\\varepsilon_{3}X_{3},\\varepsilon_{4}X_{4},\\varepsilon_{5}X_{5},\\varepsilon_{6}X_{6}\\right)\\right) \\tag{10}\\]

where \\(x\\) is the input feature map, and CBS consists of a full convolutional layer, BatchNorm2d, and the SiLU activation function. MaxPool2d\\({}_{5}(.)\\) and MaxPool2d\\({}_{7}(.)\\) represent the maximum pooling layers with kernel size 5, stride 1, padding 2 and kernel size 7, stride 1, padding 3, respectively. \\(X_{1}\\) represents the output of CBS\\((.)\\), and \\(X_{2},X_{3},X_{4},X_{5},\\) and \\(X_{6}\\) represent the outputs of the different pooling layers, respectively. \\(X_{\\text{out}}\\) represents the final output of the MASPPF module.
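For reference, a minimal PyTorch sketch of the MASPPF computation in (3)-(10) is given below; the hidden channel width of the input CBS layer and the output channel count are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class CBS(nn.Module):
    """Conv + BatchNorm2d + SiLU, the standard YOLOv8-style block."""

    def __init__(self, c_in: int, c_out: int, k: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class MASPPF(nn.Module):
    """Multiplexed 5x5/7x7 max pooling with adaptive fusion, eqs. (3)-(10)."""

    def __init__(self, c_in: int, c_out: int, mu: float = 1e-4):
        super().__init__()
        c_hidden = c_in // 2                          # SPPF-style bottleneck (assumption)
        self.cbs_in = CBS(c_in, c_hidden, k=1)
        self.pool5 = nn.MaxPool2d(5, stride=1, padding=2)
        self.pool7 = nn.MaxPool2d(7, stride=1, padding=3)
        self.w = nn.Parameter(torch.ones(6))          # learnable weights w_1..w_6
        self.mu = mu
        self.cbs_out = CBS(6 * c_hidden, c_out, k=1)

    def forward(self, x):
        x1 = self.cbs_in(x)
        x2 = self.pool5(x1)    # 5x5 receptive field
        x3 = self.pool5(x2)    # 9x9
        x4 = self.pool5(x3)    # 13x13
        x5 = self.pool7(x2)    # 11x11
        x6 = self.pool7(x5)    # 17x17
        # Eq. (3): adaptive scores derived from the learnable weights.
        eps = torch.relu(self.w / (self.mu + self.w.sum()))
        feats = [x1, x2, x3, x4, x5, x6]
        fused = torch.cat([e * f for e, f in zip(eps, feats)], dim=1)
        return self.cbs_out(fused)


if __name__ == "__main__":
    masppf = MASPPF(512, 512)
    print(masppf(torch.randn(1, 512, 20, 20)).shape)  # torch.Size([1, 512, 20, 20])
```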
### _Adaptive Sampling FPN_

In SAR images, the scale differences of ships make it difficult for existing feature fusion networks to effectively fuse multiscale ship features. In most existing FPNs, upsampling and DS operations are employed to modify the resolution of feature maps at different levels. However, as the number of upsampling and DS operations increases, the risk of losing valuable feature information rises, thereby diminishing the effectiveness of information fusion. Furthermore, the weak correlation among features in individual layers, after shallow and deep feature maps are fused through concatenation followed by convolutional processing, hampers the detection performance of ships in SAR images. To alleviate these problems, an adaptive sampling feature pyramid network (ASFPN) is introduced, as shown in Fig. 1, to aid the detection of multiscale ships in SAR images. In ASFPN, an ADM is introduced to alleviate information loss during the DS process. Furthermore, to address the introduction of irrelevant information resulting from upsampling, multiple bidirectional fusions are employed to effectively integrate the abundant shallow features acquired from the ADM and the deep feature information from upsampling.

The implementation process of the ADM is shown in Fig. 4 and proceeds as follows. The ADM is divided into two components: 1) adaptive feature score acquisition and 2) sampling information fusion. In the first component, average pooling with a kernel size of 3, a stride of 1, and padding of 1 is applied, which helps alleviate the influence of the background. Subsequently, a full convolutional layer is applied to capture the relationships between different channels and spatial locations. The generated feature maps are then divided into nonoverlapping regions of size 2\\(\\times\\)2. Furthermore, each region is transformed to increase its channel dimension by a factor of 4. Finally, applying the Softmax function to these transformed regions yields scores for each feature within the 2\\(\\times\\)2 kernel region, indicating their respective importance or relevance. In the second component, group convolution is first utilized (with the number of groups equal to the input channels divided by 16) to perform DS. The downsampled feature map is then resized to match the feature map obtained in the first component. Next, information weights are re-established by multiplying the obtained scores with the corresponding feature values from the downsampled feature map. Finally, the most relevant information is aggregated by summing the weighted values within each 2\\(\\times\\)2 window. By following the aforementioned steps, information loss during DS can be effectively reduced.

Fig. 4: Structure of ADM.

The feature map for each layer of the ADM can be described as follows:

\\[P_{1}=\\text{Rearrange}\\left(\\text{Conv}_{1\\times 1}\\left(\\text{AvgPool}_{3\\times 3}(X_{\\text{in}})\\right)\\right) \\tag{11}\\]
\\[P_{2}=\\text{Rearrange}\\left(\\text{GConv}_{3\\times 3,2}(X_{\\text{in}})\\right) \\tag{12}\\]
\\[X_{\\text{out}}=\\text{Sum}\\left(\\text{Reweight}\\left(\\text{Softmax}(P_{1}),P_{2}\\right)\\right) \\tag{13}\\]

where \\(\\text{AvgPool}_{3\\times 3}\\) represents the average pooling operation with kernel size 3, stride 1, and padding 1, and \\(\\text{Conv}_{1\\times 1}\\) represents the full convolutional layer. \\(\\text{GConv}_{3\\times 3,2}\\) represents a group convolution operation with a kernel size of 3 and a stride of 2. The number of groups in the group convolution is equal to \\(C_{\\text{in}}/16\\), where \\(C_{\\text{in}}\\) represents the number of input feature map channels. Rearrange represents the special feature map adjustment operation, Softmax is the Softmax function, Reweight represents the multiplication that reweights the features with the scores, and Sum sums the weighted values.
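To make (11)-(13) concrete, the following is a minimal PyTorch sketch of the ADM under stated assumptions: the Rearrange operation is modeled with `pixel_unshuffle`, and the group convolution is given \\(4C_{\\text{in}}\\) output channels so that each output location carries one candidate value per 2\\(\\times\\)2 window position. Neither detail is specified exactly in the text, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ADM(nn.Module):
    """Sketch of the adaptive downsampling module (ADM), eqs. (11)-(13)."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 16 == 0, "the paper sets groups = C_in / 16"
        # Branch 1: AvgPool(3x3, s=1, p=1) -> 1x1 conv -> per-window scores.
        self.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        # Branch 2: group convolution for DS (kernel 3, stride 2, groups = C/16).
        # 4*C output channels so every output location carries one candidate
        # value per 2x2 window position (an assumption about "Rearrange").
        self.gconv = nn.Conv2d(channels, 4 * channels, kernel_size=3,
                               stride=2, padding=1, groups=channels // 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Branch 1: adaptive feature scores, eq. (11).
        s = self.proj(self.pool(x))                      # (B, C, H, W)
        s = F.pixel_unshuffle(s, 2)                      # (B, 4C, H/2, W/2)
        s = s.view(b, c, 4, h // 2, w // 2)              # group each 2x2 window
        s = F.softmax(s, dim=2)                          # scores per window position
        # Branch 2: downsampled candidate features, eq. (12).
        p = self.gconv(x).view(b, c, 4, h // 2, w // 2)
        # Reweight and aggregate, eq. (13).
        return (s * p).sum(dim=2)                        # (B, C, H/2, W/2)


if __name__ == "__main__":
    adm = ADM(64)
    print(adm(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 40, 40])
```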
The feature map of each layer of ASFPN can be described as follows:

\\[F_{4}=f\\left(\\text{Cat}\\left(\\text{ADM}(C_{4}),\\text{MASPPF}(C_{5})\\right)\\right) \\tag{14}\\]
\\[F_{3}=f\\left(\\text{Cat}\\left(\\text{ADM}(C_{3}),\\text{UpS}(F_{4}),C_{4}\\right)\\right) \\tag{15}\\]
\\[F_{2}=f\\left(\\text{Cat}\\left(\\text{ADM}(C_{2}),\\text{UpS}(F_{3}),C_{3}\\right)\\right) \\tag{16}\\]
\\[P_{1}=F_{1}=f\\left(\\text{Cat}\\left(\\text{UpS}(F_{2}),C_{2}\\right)\\right) \\tag{17}\\]
\\[P_{2}=f\\left(\\text{Cat}\\left(\\text{Conv}_{3\\times 3,2}(P_{1}),F_{2}\\right)\\right) \\tag{18}\\]
\\[P_{3}=f\\left(\\text{Cat}\\left(\\text{ADM}(F_{2}),\\text{Conv}_{3\\times 3,2}(P_{2}),F_{3}\\right)\\right) \\tag{19}\\]
\\[P_{4}=f\\left(\\text{Cat}\\left(\\text{ADM}(F_{3}),\\text{Conv}_{3\\times 3,2}(P_{3}),F_{4}\\right)\\right) \\tag{20}\\]

where \\(C_{2},C_{3},C_{4},\\) and \\(C_{5}\\) represent the feature maps extracted from different layers of the backbone network, \\(F_{1},F_{2},F_{3},\\) and \\(F_{4}\\) represent intermediate features obtained from the top-down and bottom-up paths, UpS represents the upsampling operation, \\(\\text{Conv}_{3\\times 3,2}\\) represents the full convolutional DS operation with a 3\\(\\times\\)3 kernel and a stride of 2, and ADM represents the adaptive downsampling module. \\(f(.)\\) represents 3\\(\\times\\)C2f feature fusion operations.

## III Experiment

The proposed method is validated on two public datasets: 1) SSDD [47] and 2) HRSID [48]. In this section, the two datasets and the experimental setup are briefly described first. Then, the evaluation metrics are presented. Finally, ablation experiments and comparison experiments are conducted.

### _Dataset Description_

The two public datasets, SSDD and HRSID, include data on nearshore and ocean-going ships in complex environments. An overview of the SSDD dataset is presented in Table I. As shown in the table, the multiscale SSDD dataset provides 1160 SAR image samples from the Radarsat-2, TerraSAR-X, and Sentinel-1 satellites. The average size of the images is about 500 \\(\\times\\) 500 pixels. The polarizations are HH, VV, VH, and HV. The resolution ranges from about 1 to 15 m. The environments of the ships in the dataset range from good to poor sea states, and from complex docking scenarios to simple offshore scenarios, as shown in Fig. 5(a). According to the statistics, the SSDD dataset comprises 2587 ships, with the smallest ship measuring 5 pixels in width and 4 pixels in height, occupying only 20 pixels. In contrast, the largest ship has dimensions of \\(180\\times 308\\) pixels, occupying 55 440 pixels. The largest ship is thus 2772 times larger than the smallest ship, which indicates that the ship scales in the SSDD dataset vary greatly. Therefore, SSDD is well suited for measuring the multiscale SAR ship detection performance of the network.

The HRSID dataset was published by Wei et al. [48], and an overview is shown in Table II. The HRSID dataset provides 5604 SAR image samples from Sentinel-1, TerraSAR-X, and TanDEM-X. These images have an average size of \\(800\\times 800\\) pixels, polarization modes of HH, HV, and VV, resolutions of 0.5 m, 1 m, and 3 m, and locations including Houston, Sao Paulo, Barcelona, Chittagong, Bangladesh, and other important international shipping lanes. The environments of the ships in the dataset range from good to poor sea states, and from complex docking scenarios to simple offshore scenarios, as shown in Fig. 5(b).
Compared with the multiscale SSDD dataset, the HRSID dataset contains a wider variety of complex scenarios, which makes it suitable for measuring the model's performance in detecting SAR ships in complex scenes.

Fig. 5: Sample images from the two experimental datasets. (a) SSDD. (b) HRSID.

### _Hyperparameters and Environment Settings_

The images in the SSDD and HRSID datasets are resized to 640\\(\\times\\)640 at input. All experiments are implemented using PyTorch, with hyperparameters fine-tuned based on the YOLO series. The network is trained using the stochastic gradient descent method with a learning rate of 0.01. The momentum is set to 0.937, and the weight decay is set to 0.0005. The model is trained for 300 epochs. The software environment includes PyTorch 1.11.0, Python 3.8 (Ubuntu 20.04), and CUDA 11.3. The hardware environment consists of an Intel(R) Xeon(R) Gold 6330 CPU and an RTX 3090 GPU with 24 GB of memory. To enhance the diversity of the training dataset, mosaic augmentation is utilized during the training phase. In addition, the proposed model is trained from scratch rather than using pretrained models.

### _Evaluation Metrics_

The object detection evaluation metrics of the COCO [53] dataset are utilized and detailed in Table III. They are summarized as follows.

* AP: the primary metric for measuring the accuracy of detection results. It is the average precision across IoU thresholds ranging from 0.5 to 0.95. A higher AP indicates more accurate detection results.
* AP50: the AP at an IoU threshold of 0.5. In many applications, 0.5 is a commonly used IoU threshold.
* AP75: the AP at an IoU threshold of 0.75. In more stringent applications, a higher IoU threshold can better measure algorithm performance.
* APs: the AP for small objects (area smaller than \\(32^{2}\\) pixels). Since small objects are usually more challenging to detect, this metric helps evaluate algorithm performance in such cases.
* APm: the AP for medium objects (area between \\(32^{2}\\) and \\(96^{2}\\) pixels). This metric helps evaluate algorithm performance in detecting medium objects.
* APl: the AP for large objects (area larger than \\(96^{2}\\) pixels). This metric helps evaluate algorithm performance in detecting large objects.

Precision and recall are defined as

\\[\\text{Precision}=\\frac{\\text{TP}}{\\text{TP}+\\text{FP}} \\tag{21}\\]

\\[\\text{Recall}=\\frac{\\text{TP}}{\\text{TP}+\\text{FN}} \\tag{22}\\]

where TP (true positives), FP (false positives), and FN (false negatives) refer to the numbers of correct detections, false alarms, and missed targets, respectively. AP is defined as

\\[\\text{AP}=\\int_{0}^{1}P(R)\\,dR \\tag{23}\\]

where \\(P(R)\\) denotes the precision-recall curve. The AP value, defined as the area enclosed by the \\(P(R)\\) curve and the axes, is calculated separately for each IoU threshold in this article.
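For reference, the following minimal sketch shows how the AP of (23) can be computed from scored detections at a single IoU threshold using all-point interpolation. The values reported in this article follow the COCO protocol, which additionally averages over 101 recall points and the ten IoU thresholds from 0.5 to 0.95, so this routine only illustrates the definition.

```python
import numpy as np


def average_precision(scores, is_tp, num_gt):
    """AP = integral of P(R) dR for one IoU threshold, eqs. (21)-(23).

    scores: detection confidences; is_tp: 1 for a correct detection (TP),
    0 for a false alarm (FP); num_gt: number of ground-truth ships.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_gt, 1)                        # TP / (TP + FN)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)  # TP / (TP + FP)
    # Make precision monotonically non-increasing, then integrate P(R).
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([precision[0] if precision.size else 0.0], precision))
    return float(np.sum(np.diff(recall) * precision[1:]))


if __name__ == "__main__":
    # Three detections (two correct, one false alarm) against four ships.
    print(round(average_precision([0.9, 0.8, 0.6], [1, 0, 1], num_gt=4), 3))  # 0.417
```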
### _Evaluations of the Proposed Method_

In this section, YOLOv8s is used as the baseline network, and ablation experiments are implemented to verify the effectiveness of the proposed modules. Comparison experiments are then conducted with representative classical deep learning networks to verify the superiority of the proposed network.

#### III-D1 Ablation Study

In AMRCNet, three modules (Revcol-C2f, ASFPN, and MASPPF) are proposed to enhance ship detection performance in SAR images. To validate the performance of AMRCNet, the experimental results of the designed modules are compared with the baseline. For the sake of objectivity and fairness, identical hyperparameters are used in the ablation experiments. The ablation experiment is divided into two parts. In the first part, the modules in the baseline network are replaced by the designed modules. In the second part, the designed modules are incrementally added to the baseline network.

_Part 1. (a) Analysis of the effectiveness of Revcol-C2f:_ Revcol-C2f utilizes the hierarchical CSP structure in C2f to acquire rich multiscale ship information at various gradients. Subsequently, the ship information supplemented by Revcol is embedded with the reversible transformation. Experiments are conducted to confirm the effectiveness of the proposed Revcol-C2f, and the results are shown in Tables IV and V. It is noteworthy that on the HRSID dataset, AP, AP50, AP75, APs, APm, and APl are improved by 3%, 1.6%, 3.9%, 2%, 1.6%, and 24.3%, respectively. On the SSDD dataset, the gains in AP, AP75, APs, APm, and APl reach 1.4%, 2%, 1.4%, 1.6%, and 6.6%, respectively. From the gains in APs, APm, and APl, it can be confirmed that Revcol-C2f captures information about ships at different scales. The improvements are mainly due to the fact that Revcol-C2f, which includes multiple subnetworks, enables the extraction of features at different hierarchical levels. Meanwhile, the improvements in AP, AP50, and AP75 can be attributed to the application of the reversible transformation with the disentangled feature learning concept. This mechanism effectively alleviates the challenge of ship information loss, thereby contributing to the overall improvement in performance. To illustrate the experimental effect, the visualization results of Revcol-C2f are shown in Figs. 7, 8, and 9. It is observed that the missed detection of ships has decreased.

_(b) Analysis of the effectiveness of MASPPF:_ To validate the effectiveness of MASPPF, ablation experiments are conducted for analysis, and the results are presented in Tables IV and V. It can be observed that various metrics of the network exhibit effective improvements. AP75, being a more stringent object detection metric, reveals a notable increase on both the SSDD and HRSID datasets, with improvements of 2.7% and 2.5%, respectively, compared to the baseline. This demonstrates MASPPF's ability to suppress the impact of clutter around ships. This is attributed to MASPPF employing multiplexed large-kernel pooling operations to capture comprehensive information about the ship hull within a region. Moreover, the adaptive fusion is able to obtain receptive fields suitable for ships of different scales, which ultimately contributes to mitigating the impact of clutter. To verify the superior performance of MASPPF over similar methods, comparative experiments were conducted with the most effective spatial pyramid methods, including SPPF [52], SPPCSPC [54], and SPPFCSPC [55]. Features were extracted by the YOLOv8s backbone network, and all experiments were conducted on the HRSID dataset. The results of the comparative experiments are shown in Table VI. It is evident that MASPPF achieves better results with a minimal increase in parameters. In particular, MASPPF achieves a 1.6% AP75 improvement compared to the suboptimal SPPFCSPC. Furthermore, Grad-CAM [56] is used to display the heat map results of different methods, as shown in Fig. 6. It is clearly shown that MASPPF is more sensitive to the specific location of ships and is able to obtain more valuable information, thereby mitigating the impact of clutter.
_(c) Analysis of the effectiveness of ASFPN:_ ASFPN is an improved FPN, which is used to further enhance the performance of multiscale ship detection. Ablation experiments were conducted to demonstrate the effectiveness and superiority of ASFPN, and the results are presented in Tables V and VI. Compared to the FPN-PAN used in the baseline, there is a significant improvement on the HRSID dataset, with the evaluation metrics AP, AP50, AP75, and APs increasing by 3.6%, 2.1%, 5.7%, and 7%, respectively. Likewise, on the SSDD dataset, AP, AP50, AP75, APs, and APm increase by 2.2%, 3.7%, 2.6%, 2.5%, and 1.7%, respectively. The visual results of ASFPN are shown in Figs. 7 and 8, where the accuracy of detecting small ships can be clearly observed. The preceding experimental and visual results adequately demonstrate the effectiveness of ASFPN in multiscale ship detection. Within ASFPN, the ADM plays a crucial role in alleviating the acquisition of irrelevant information and mitigating information loss, leading to the improvement of AP75. This is because the ADM employs an attention-based DS operation, ensuring the preservation of information from ships. Furthermore, the combination of the ADM and bidirectional information fusion operations effectively integrates feature information from ships at different scales, resulting in the improvement in APs, APm, and APl. To verify the effectiveness of the ADM, experiments were conducted by adding the ADM to RepGFPN. Comparison experiments were performed on the HRSID dataset with PAN-FPN [49] and RepGFPN [57]. The results, shown in Table VII, indicate that the AP75 metric is improved by 5.7% and 1.3% compared to PAN-FPN and RepGFPN, respectively. This proves that the ADM can capture more relevant information during DS, which enables it to achieve better performance in more demanding environments.

Fig. 6: Heat maps for different spatial pyramid methods on the HRSID dataset. (a) SPPF. (b) SPPCSPC. (c) SPPFCSPC. (d) MASPPF.

Fig. 8: Detection results for different methods in complex inland environments on the HRSID dataset. Green, blue, and red boxes represent TP, FP, and FN, respectively. (a), (b), (c), (d), (e), and (f) represent real boxes, YOLOv8s, Revcol, ASFPN, Revcol+ASFPN, and AMRCNet, respectively.

Fig. 9: Detection results in harsh inhomogeneous noise environments on the SSDD dataset. (a), (b), (c), (d), (e), and (f) represent real boxes, YOLOv8s, Revcol, ASFPN, Revcol+ASFPN, and AMRCNet, respectively.

_Part 2:_ With Revcol-C2f and ASFPN combined, the metrics AP, AP50, AP75, and APs reach 68.9%, 91.9%, 79.6%, and 59.7%, respectively. The overall performance is superior, especially in small-object detection, compared to using the Revcol-C2f module alone. Furthermore, improvements can also be found when switching to the SSDD dataset. When MASPPF is added to complete AMRCNet, AP, AP50, AP75, APs, APm, and APl are improved by 0.9%, 0.8%, 1.1%, 0.9%, 0.2%, and 13.6%, respectively, on the HRSID dataset. For a more intuitive illustration of the effect of each proposed module, the ship detection results of different combinations are separately displayed on the HRSID dataset, which contains complex marine environments, as shown in Figs. 7 and 8. It is shown that the missed detection rate is significantly reduced after the addition of each module compared to the baseline. On the SSDD dataset, the visualization of the detection results for the various methods is also shown in Fig. 9.
As can be seen from the visualized images, even in the more challenging evaluation environments, AMRCNet is able to accurately locate and detect ships in SAR images. Although the SSDD dataset itself covers a wide range of SAR ship scales, the method achieves better multiscale detection results than the other methods in terms of APs, APm, and APl. This improvement indicates that AMRCNet is well suited to the task of multiscale SAR ship target detection.

In order to measure the model's performance for SAR ship detection under complex scenarios, as well as to validate its robustness and generalization ability, comparative experiments with other state-of-the-art methods are conducted on the HRSID dataset. The experimental results are shown in Table XI. The compared methods include Faster R-CNN, FoveaBox, Libra R-CNN [60], Deformable DETR, HRSDNet, FCOS, Cascade R-CNN [61], and FEPS-Net. The results consistently demonstrate that the proposed method outperforms other state-of-the-art methods across various evaluation metrics. On the HRSID dataset, the proposed method achieves the highest AP (69.8%) and AP75 (80.7%) among the compared methods, indicating its superior robustness and generalization capabilities in complex scenarios. Moreover, the proposed method achieves a well-balanced performance in terms of APs, APm, and APl, showcasing its strong detection ability across ships of different scales. Notably, APm and APl reach 81.0% and 52.5%, respectively. Although APs is slightly lower compared to the suboptimal FEPS-Net, the overall improvements in AP and the other metrics are significant, especially in inference time. In Tables X and XI, it can be clearly observed that the proposed method outperforms Deformable DETR, which is based on the transformer architecture, across various evaluation metrics. This is primarily attributed to the proposed modules, which aim to mitigate the impact of background clutter and make effective use of local information and multilevel feature representation. Typically, anchor-free networks, such as FoveaBox and FCOS in Tables X and XI, face difficulties in achieving precise target localization. However, it is noteworthy that the AP of the proposed method is much higher than that of FoveaBox and FCOS, which means that the reduced localization accuracy of traditional anchor-free methods can be effectively addressed. In addition, compared to traditional two-stage networks such as Faster R-CNN, Libra R-CNN, and Cascade R-CNN, the proposed method uses only half the number of parameters and achieves better performance. YOLOv5s serves as a lightweight model with a short inference time (10.9 ms) and a relatively small number of parameters (7.12 million), but the proposed method outperforms it across all key metrics. These experimental results emphasize that the proposed method strikes a commendable balance between speed and parameter efficiency while maintaining superior detection performance.

## IV Discussion

In this section, the performance of Revcol-C2f, MASPPF, ASFPN, and the overall network is analyzed with the aid of the experimental results presented above.

Fig. 13: PR curves of the ablation experiments on the HRSID dataset.

### _Reversible Column Network With C2f for Backbone Extraction (Revcol-C2f)_

The proposed Revcol-C2f is primarily designed to alleviate the issue of information loss during feature extraction and enhance the extraction capability of multiscale target information. The experimental results in Tables V and VI demonstrate the
superiority of Revcol-C2f over IB-based feature extraction backbones. Furthermore, Figs. 7(c), 8(c), and 9(c) show more accurate detection results for targets of different scales in different environments. The improvement is attributed to the adoption of the disentangled feature learning concept and multiple subnetworks for feature extraction. The reversible transformation in Revcol, together with the disentangled feature learning concept, preserves more target information and mitigates the issue of information loss. Moreover, multisubnetwork feature extraction is able to capture richer feature information for multiscale ships with the help of the reversible transformation.

### _Multiplexed Adaptive Spatial Pyramid Pooling_

MASPPF has proved to be an effective solution to background clutter, as shown by the ablation analysis presented in Fig. 4 and Table V. This is primarily attributed to the implementation of multiplexed large-kernel pooling operations and adaptive fusion. Multiplexed large-kernel pooling operations capture richer receptive fields of different sizes, catering to different scales of ships. Meanwhile, the adaptive fusion process selects more suitable receptive fields for targets, effectively excluding background clutter. Compared to other similar methods, MASPPF demonstrates outstanding performance, as shown in Table VI and Fig. 6.

### _Adaptive Sampling FPN_

This method is specifically designed to further meet the requirements of multiscale target feature fusion. In SAR images, the presence of significant noise and weak semantic information makes it challenging to perform effective information extraction through ordinary DS operations and information fusion without losing small-scale information and introducing noise. ASFPN addresses these challenges by first utilizing the ADM to mitigate the loss of accurate target information. Subsequently, multiple bidirectional information fusion operations are employed to overcome scale differences. The effectiveness of ASFPN is demonstrated by the ablation experiment results, as revealed in Tables IV and V. In comparison to other FPN-based methods, the proposed method emphasizes the detection of small and medium targets. The improvements in APs, APm, and AP75 in Table VII further attest to the effectiveness of the ADM in reducing information loss during DS. The visual results in Figs. 7(d), 8(d), and 9(d) support these conclusions.

### _Overall Network_

The overall experimental results in Tables X and XI comprehensively demonstrate the effectiveness of the proposed method for ship detection in SAR images. In comparison to anchor-based methods, such as YOLOv5s, MSSDNet, and HRSDNet, the proposed method achieves superior overall detection results. When compared to anchor-free methods such as FCOS and FoveaBox, the issues of target omission and inaccurate positioning caused by the absence of predefined anchors are mitigated through the use of the disentangled feature learning concept and ASFPN. Compared to traditional two-stage methods such as Faster R-CNN, Libra R-CNN, and Cascade R-CNN, the proposed method is more lightweight and exhibits better detection performance. Relative to transformer-based methods such as Deformable DETR, the proposed method is more effective for multiscale ship detection, especially for small-scale ships. Overall, the proposed method achieves a more effective balance between computational complexity and detection performance.
## V Conclusion

In this article, an integrated approach comprising Revcol-C2f, MASPPF, and ASFPN is proposed to address challenges related to the information loss caused by sampling, the impact of complex background clutter, and multiscale ship detection in SAR images. Specifically, the Revcol-C2f module reconfigures the YOLOv8 backbone using a reversible column network, effectively preserving target information and improving ship detection accuracy. Subsequently, the proposed MASPPF module mitigates the influence of background clutter. In addition, to address the issues of scale differences and semantic information loss, the ASFPN module fuses shallow and deep feature maps to enhance the multiscale target detection capability at different stages. Ultimately, these modules are integrated into the AMRCNet architecture, demonstrating excellent SAR ship detection performance in complex environments and mitigating the negative impact of scattering noise. Experimental results validate the effectiveness of the proposed method, showing improved detection performance for multiscale ship targets and reduced false and missed alarms under complex background clutter. In particular, the AP reaches 71.1% and 69.8% on the SSDD and HRSID datasets, respectively. Future work on SAR ship detection can prioritize real-time implementation, explore multimodal fusion techniques, and expand the application of transfer learning to enhance overall detection performance and promote generalization across radar domains.

## References

* [1] Q. Sun, M. Liu, S. Chen, F. Lu, and M. Xing, \"Ship detection in SAR images based on multilevel superpixel segmentation and fuzzy fusion,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Apr. 2023, Art. no. 5206215.
* [2] C. Qin, X. Wang, G. Li, and Y. He, \"A semi-soft label-guided network with self-distillation for SAR instance ship detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Jul. 2023, Art. no. 5211814.
* [3] B. Pan, Z. Xu, T. Shi, T. Li, and Z. Shi, \"An imbalanced discriminant alignment approach for domain adaptive SAR ship detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Aug. 2023, Art. no. 5108114.
* [4] G. Gao, L. Yao, W. Li, L. Zhang, and M. Zhang, \"Onboard information fusion for multisatellite collaborative observation: Summary, challenges, and perspectives,\" _IEEE Geosci. Remote Sens. Mag._, vol. 11, no. 2, pp. 40-59, Jun. 2023.
* [5] L. Chen, R. Luo, J. Xing, Z. Li, Z. Yuan, and X. Cai, \"Geospatial transformer is what you need for aircraft detection in SAR imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, Mar. 2022, Art. no. 5225715.
* [6] F. Ma, X. Sun, F. Zhang, Y. Zhou, and H.-C. Li, \"What catch your attention in SAR images: Saliency detection based on soft-superpixel lacunarity cue,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Dec. 2023, Art. no. 5200817.
* [7] D. Marzi and P. Gamba, \"Inland water body mapping using multitemporal Sentinel-1 SAR data,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 14, pp. 11789-11799, Dec. 2021, doi: 10.1109/JSTARS.2021.3127748.
* [8] R. Shang, M. Liu, L. Jiao, J. Feng, Y. Li, and R. Stolkin, \"Region-level SAR image segmentation based on edge feature and label assistance,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, Oct. 2022, Art. no. 5237216.
* [9] F. Gao et al., \"Cross-modality features fusion for synthetic aperture radar image segmentation,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 61, Aug. 2023, Art. no. 5214814.
* [10] Y.
Guan et al., \"Fishing vessel classification in SAR images using a novel deep learning model,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Sep. 2023, Art. no. 5215821.
* [11] X. Wu, D. Hong, and J. Chanussot, \"UIU-Net: U-Net in U-Net for infrared small object detection,\" _IEEE Trans. Image Process._, vol. 32, pp. 364-376, Dec. 2023.
* [12] T. Wu et al., \"MTU-Net: Multilevel TransUNet for space-based infrared tiny ship detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Jan. 2023, Art. no. 5601015.
* [13] L. Ying, D. Miao, and Z. Zhang, \"3WM-AugNet: A feature augmentation network for remote sensing ship detection based on three-way decisions and multigranularity,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Sep. 2023, Art. no. 1001219.
* [14] D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, \"An augmented linear mixing model to address spectral variability for hyperspectral unmixing,\" _IEEE Trans. Image Process._, vol. 28, no. 4, pp. 1923-1938, Apr. 2019.
* [15] J. Ai, Z. Cao, Y. Mao, Z. Wang, F. Wang, and J. Jin, \"An improved bilateral CFAR ship detection algorithm for SAR image in complex environment,\" _J. Radars_, vol. 10, no. 4, pp. 499-515, Mar. 2021.
* [16] X. Leng, K. Ji, K. Yang, and H. Zou, \"A bilateral CFAR algorithm for ship detection in SAR images,\" _IEEE Geosci. Remote Sens. Lett._, vol. 12, no. 7, pp. 1536-1540, Jul. 2015.
* [17] H. Dai, L. Du, Y. Wang, and Z. Wang, \"A modified CFAR algorithm based on object proposals for ship target detection in SAR images,\" _IEEE Geosci. Remote Sens. Lett._, vol. 13, no. 12, pp. 1925-1929, Dec. 2016.
* [18] T. Li, Z. Liu, R. Xie, and L. Ran, \"An improved superpixel-level CFAR detection method for ship targets in high-resolution SAR images,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 1, pp. 184-194, Jan. 2017.
* [19] C. Wang, F. Bi, W. Zhang, and L. Chen, \"An intensity-space domain CFAR method for ship detection in HR SAR images,\" _IEEE Geosci. Remote Sens. Lett._, vol. 14, no. 4, pp. 529-533, Apr. 2017.
* [20] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, \"Feature pyramid networks for object detection,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2017, pp. 936-944.
* [21] J. Jiao et al., \"A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection,\" _IEEE Access_, vol. 6, pp. 20881-20892, 2018.
* [22] Z. Cui, Q. Li, Z. Cao, and N. Liu, \"Dense attention pyramid networks for multi-scale ship detection in SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 11, pp. 8983-8997, Nov. 2019.
* [23] T. Zhang, X. Zhang, and X. Ke, \"Quad-FPN: A novel quad feature pyramid network for SAR ship detection,\" _Remote Sens._, vol. 13, no. 14, Jul. 2021, Art. no. 2771.
* [24] D. Li, Q. Liang, H. Liu, Q. Liu, H. Liu, and G. Liao, \"A novel multidimensional domain deep learning network for SAR ship detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, Mar. 2022, Art. no. 5203213.
* [25] K. Zhou, M. Zhang, H. Wang, and J. Tan, \"Ship detection in SAR images based on multi-scale feature extraction and adaptive feature fusion,\" _Remote Sens._, vol. 14, no. 3, Feb. 2022, Art. no. 755.
* [26] L. Yu, H. Wu, Z. Zhong, L. Zheng, Q. Deng, and H. Hu, \"TWC-Net: A SAR ship detection using two-way convolution and multi-scale feature mapping,\" _Remote Sens._, vol. 13, no. 13, Jun. 2021, Art. no. 2558.
* [27] M. Zhao, X. Zhang, and A.
Kaup, \"Multitask learning for SAR ship detection with Gaussian-mask joint segmentation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Aug. 2023, Art. no. 5214516.
* [28] L. Zhang, Y. Liu, W. Zhao, X. Wang, G. Li, and Y. He, \"Frequency-adaptive learning for SAR ship detection in clutter scenes,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Feb. 2023, Art. no. 5215514.
* [29] S. Wang, Z. Cai, and J. Yuan, \"Automatic SAR ship detection based on multifeature fusion network in spatial and frequency domains,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Apr. 2023, Art. no. 4102111.
* [30] L. Bai, C. Yao, Z. Ye, D. Xue, X. Lin, and M. Hui, \"Feature enhancement pyramid and shallow feature reconstruction network for SAR ship detection,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 16, pp. 1042-1056, Jan. 2023.
* [31] Y. Zhou, H. Liu, F. Ma, Z. Pan, and F. Zhang, \"A sidelobe-aware small ship detection network for synthetic aperture radar imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Apr. 2023, Art. no. 5205516.
* [32] Z. Zhou et al., \"HRL-SARDet: A lightweight SAR target detection algorithm based on hybrid representation learning enhancement,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, Mar. 2023, Art. no. 5203922.
* [33] Y. Jiang, W. Li, and L. Liu, \"R-CenterNet+: Anchor-free detector for ship detection in SAR images,\" _Sensors_, vol. 21, no. 17, Aug. 2021, Art. no. 5693.
* [34] Q. Hu, S. Hu, and S. Liu, \"BANet: A balance attention network for anchor-free ship detection in SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, Jan. 2022, Art. no. 5222212.
* [35] G. Gao, Y. Dai, X. Zhang, D. Duan, and F. Guo, \"ADCG: A cross-modality domain transfer learning method for synthetic aperture radar in ship automatic target recognition,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5109114.
* [36] R. Xia et al., \"CRTransSar: A visual transformer based on contextual joint representation learning for SAR ship detection,\" _Remote Sens._, vol. 14, no. 6, 2022, Art. no. 1488.
* [37] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster R-CNN: Towards real-time object detection with region proposal networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 39, no. 6, pp. 1137-1149, Jun. 2017.
* [38] T. Kong, F. Sun, H. Liu, Y. Jiang, L. Li, and J. Shi, \"FoveaBox: Beyond anchor-based object detection,\" _IEEE Trans. Image Process._, vol. 29, pp. 7389-7398, Jun. 2020.
* [39] D. Hong et al., \"Cross-city matters: A multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks,\" _Remote Sens. Environ._, vol. 299, 2023, Art. no. 113856.
* [40] D. Hong et al., \"SpectralGPT: Spectral foundation model,\" 2024, _arXiv:2311.07113_.
* [41] N. Tishby, F. C. Pereira, and W. Bialek, \"The information bottleneck method,\" 2000, _arXiv:physics/0004057_.
* [42] N. Tishby and N. Zaslavsky, \"Deep learning and the information bottleneck principle,\" in _Proc. IEEE Inf. Theory Workshop_, 2015.
* [43] G. Desjardins, A. Courville, and Y. Bengio, \"Disentangling factors of variation via generative entangling,\" 2012, _arXiv:1210.5474_.
* [44] G. Hinton, \"How to represent part-whole hierarchies in a neural network,\" _Neural Comput._, vol. 35, no. 3, pp. 413-452, 2023.
* [45] Y. Cai et al., \"Reversible column networks,\" in _Proc. Int. Conf. Learn. Representations_, 2023. [Online]. Available: [https://openreview.net/forum?id=O.czVtU0FY](https://openreview.net/forum?id=O.czVtU0FY)
* [46] J.
Glenn, \"Ultralytics YOLOv8,\" GitHub, 2023. [Online]. Available: [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) * [47] T. Zhang et al., \"SAR ship detection dataset (SSDD): Official release and comprehensive data analysis,\" _Remote Sens._, vol. 13, no. 18, 2021, Art. no. 3690. * [48] S. Wei, X. Zeng, Q. Qu, M. Wang, H. Su, and J. Shi, \"HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation,\" _IEEE Access_, vol. 8, pp. 120234-120254, 2020. * [49] Z. Zhang, K. Zhang, Z. Li, and Y. Qiao, \"PANNet: Path aggregation network for instance segmentation,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 8759-8768. * [50] O. Hou, D. Zhou, and J. Feng, \"Coordinate attention for efficient mobile network design,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2021, pp. 13713-13722. * [51] D. Ouyang et al., \"Efficient multi-scale attention module with cross-spatial learning,\" in _Proc. IEEE Int. Conf. Acoust., Speech, Signal Process._, 2023, pp. 1-5. * [52] J. Glenn, \"YOLOv5 release v6.1\" GitHub, Accessed: Jan. 1, 2022. [Online]. Available: [https://github.com/ultralytics/yolov5/releases/tag/v6.1](https://github.com/ultralytics/yolov5/releases/tag/v6.1) * [53] T.-Y. Lin et al., \"Microsoft COCO: Common objects in context,\" in _Proc. Eur Conf. Comput. Vis._, 2014, pp. 740-755. * [54] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, \"YOLOv7: Trainable bag-of-frefreibse sets new state-of-the-art for real-time object detectors,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2023, pp. 7464-7475. * [55] C. Li et al., \"YOLOv6 v3.0: A full-scale reloading,\" 2023, _arXiv:2301.05586_. * [56] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, \"Grad-CAM: Visual explanations from deep networks via gradient-based localization,\" in _Proc. IEEE/CVF Int. Conf. Comput. Vis._, 2017, pp. 618-626. * [57] X. Xu, Y. Jiang, W. Chen, Y. Huang, Y. Zhang, and X. Sun, \"DAMO-YOLO: A report on real-time object detection design,\" 2022, _arXiv:2211.15444v2_. * [58] Z. Tian, C. Shen, H. Chen, and T. He, \"FCOS: Fully convolutional one-stage object detection,\" in _Proc. IEEE/CVF Int. Conf. Comput. Vis._, 2019, pp. 9626-9635. * [59] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, \"Deformable DETR: Deformable transformers for end-to-end object detection,\" 2020, _arXiv:2010.04159_. * [60] J. Pang et al., \"Libra R-CNN: Towards balanced learning for object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2019, pp. 821-830. * [61] Z. Cai and N. Vasconcelos, \"Cascade R-CNN: Delving into high quality object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 6154-6162. \\begin{tabular}{c c} & Tianxiang Wang received the Bachelor of Engineering degree in software engineering from Yibin University, Yibin, China, in 2022. He is currently working toward the Master of Engineering degree in software engineering with the School of Computer and Information Engineering, Hubei University. His research interests include computer vision and remote-sensing image target detection and recognition. \\\\ \\end{tabular} \\begin{tabular}{c c} & Zhangfan Zeng received the Bachelor of Science (B.S.) degree in electrical and information engineering from Wuhan University, Wuhan, China, in 2006, the Master of Science (M.S.) degree in communications engineering from The University of Manchester, Manchester, U.K., in 2007, and the Ph.D. 
degree in communications engineering from the University of Birmingham, Birmingham, U.K., in 2013. From 2008 to 2009, he was an Algorithm Engineer with Guangdong Nortel Network, China. From 2013 to 2014, he was a Protocol Stack Software Engineer with NextG-Com, Ltd., Staines-upon-Thames, U.K. From January 2015 to September 2015, he was a Senior Engineer with Cobham Wireless, Stevenage, U.K. Since December 2015, he has been a full-time Associate Professor with Hubei University, Wuhan. His research interests include wireless communication and digital signal processing. \\\\ \\end{tabular}
Ship detection via synthetic aperture radar (SAR) is widely used in maritime safety, maritime traffic control, and related applications. Recently, deep learning methods have been employed to improve the performance of SAR ship detection to a large extent. However, the presence of clutter in SAR images and the large-scale differences of ships diminish detection performance in complex environments. As such, in this article, a novel adaptive multiscale reversible column network is proposed. First, the idea of disentangled feature learning is applied to construct reversible column networks with a C2f module to alleviate the problems of large-scale differences and the loss of ship information. Second, a multiplexed adaptive spatial pyramid pooling module is proposed to alleviate the impact of complex background clutter through multiplexed pooling operations and adaptive fusion. Finally, a novel feature pyramid network with an adaptive downsampling module is designed to reduce the information loss caused by downsampling while enhancing the multiscale ship detection capability. The effectiveness of the proposed method is validated on two public datasets: 1) the SAR ship detection dataset and 2) the high-resolution SAR images dataset. The experimental results show that the proposed method achieves better results than current state-of-the-art methods for SAR ship detection in complex environments and with large-scale differences of ships.

Index Terms—Adaptive downsampling, feature extraction, reversible column network, ship detection, synthetic aperture radar (SAR).
# Phase synchronization processing method for alternating bistatic mode in distributed SAR

Zhihua He (1,2), Feng He (1,2), Junli Chen (2), Haifeng Huang (1,2), and Diannong Liang (1,2)

1. School of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China; 2. Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200230, China

## 1 Introduction

Spaceborne distributed synthetic aperture radar (SAR) is an innovative spaceborne remote sensing instrument based on a combination of the satellite formation technique and the SAR technique, which can effectively generate high-resolution global digital elevation models (DEMs) [1, 2, 3]. However, the separation of the transmitter and the receiver also raises several new challenges, such as phase, time, and beam synchronization [4, 5, 6]. Phase synchronization refers to the demodulated phase deviation of bistatic echoes caused by the transmitter carriers and the receiver oscillators. The uncompensated phase errors may cause not only a distortion of bistatic SAR focusing [6, 7, 8, 9], but also interferometric phase errors, which may in turn cause a low-frequency modulation of the DEM along the azimuth [6] and therefore degrade the interferometric SAR height measurement performance [1]. Several studies on phase referencing in interferometric SAR have been published in recent years. The use of continuous duplex satellite links with an ultra-stable local oscillator (USO) signal for oscillator drift compensation was first suggested in [10]. Then, a pulsed alternate synchronization method was proposed in [11, 12], which extracted the phase errors between carriers by a mutual exchange of radar pulses between the satellites, and a phase synchronization processing on the ground. This synchronization scheme was successfully applied to the TanDEM-X interferometry system [1]. The drawbacks of the pulsed alternate synchronization method are that the nominal bistatic SAR data acquisition process is periodically interrupted and that the synchronization frequency is always less than 10 Hz. To deal with this problem, other possible phase synchronization methods without inter-satellite links have also been suggested [13, 14, 15, 16]. A promising method is to use the alternating bistatic mode, which needs neither ground control points nor major modifications of the radar system. This synchronization method can be used as a backup for the pulsed alternate synchronization method. Based on the assumption of scatter reciprocity, a phase synchronization processing method in the alternating bistatic mode was mentioned in [1]. However, there is no further discussion in [1] of the phase synchronization processing procedure and its synchronization accuracy.
Based on these studies, the sub-aperture processing method and the echo-domain processing method are further studied, and the phase synchronization processing errors are analyzed. These investigations are validated by the results of simulation experiments.

## 2 Phase synchronization signal model

When a distributed SAR works in the alternating bistatic mode, both transmitters transmit a radar signal on a pulse-to-pulse basis, and both receivers receive the scattered signal from the ground simultaneously, so that two monostatic and two bistatic SAR images can be acquired during a single pass, as illustrated in Fig. 1. The significant advantage of the alternating bistatic mode is that it is possible to derive the carrier phase difference by the comparison of the two bistatic echoes without any inter-satellite links. The alternating bistatic mode has been used as a DEM acquisition mode during a single pass of the TanDEM-X [1]. Ignoring the initial phase, the carrier phase of satellite \\(i\\) at time \\(t\\) can be expressed as [12] \\[\\phi_{i}(t)=2\\pi\\int_{t_{0}}^{t}\\left(f_{0}+\\Delta f_{i}\\right)\\mathrm{d}t+n_{ \\phi_{i}}(t) \\tag{1}\\] where \\(i\\in\\{1,2\\}\\), \\(f_{0}\\) represents the nominal frequency at the start time \\(t_{0}\\), \\(\\Delta f_{i}\\) is the constant frequency offset, and \\(n_{\\phi_{i}}(t)\\) is the carrier phase noise with its phase spectrum represented by a composite power-law model as follows [7,17]: \\[S_{\\phi}(f)=M(af^{-4}+bf^{-3}+cf^{-2}+df^{-1}+e) \\tag{2}\\] where \\(M=f_{i}/f_{\\mathrm{ref}}\\) is the frequency up-conversion factor and \\(f_{\\mathrm{ref}}\\) is the reference oscillator frequency; the coefficients \\(a\\), \\(b\\), \\(c\\), \\(d\\), and \\(e\\) describe the contributions of random walk frequency noise (RWFM), flicker frequency noise (FFM), white frequency noise (WFM), flicker phase noise (FPM), and white phase noise (WPM). Satellite \\(i\\) transmits a radar signal which is received by satellite \\(j\\) after a delay \\(\\tau_{ij}\\) corresponding to the time taken by the signal to travel back from the ground scene, with a demodulated phase \\[\\varphi_{ij}(t)=\\phi_{i}(t-\\tau_{ij})-\\phi_{j}(t)=\\] \\[-2\\pi f_{i}\\tau_{ij}+2\\pi\\Delta f_{ij}t+n_{\\phi_{i}}(t-\\tau_{ij})-n_{\\phi_{j} }(t) \\tag{3}\\] where \\(f_{i}=f_{0}+\\Delta f_{i}\\), and \\(\\Delta f_{ij}=\\Delta f_{i}-\\Delta f_{j}\\) is the constant carrier frequency offset. Apart from the required interferometric phase term, the demodulated phase contains interferogram distortions along the azimuth caused by the carrier frequency offset and the bistatic carrier phase noise, which have to be compensated before interferometric data processing. The objective of phase synchronization is to compensate the bistatic demodulated phase as in the case of a monostatic SAR. The ideal compensation phase is given by \\[\\varphi_{c}(t)=(\\phi_{1}(t-\\tau_{12})-\\phi_{1}(t))-\\varphi_{12}(t)=\\] \\[2\\pi\\Delta f_{21}t+n_{\\phi_{2}}(t)-n_{\\phi_{1}}(t)=\\phi_{2}(t)-\\phi_{1}(t) \\tag{4}\\] where \\((\\phi_{1}(t-\\tau_{12})-\\phi_{1}(t))\\) is the equivalent monostatic SAR echo phase, and \\(\\varphi_{12}(t)\\) is the demodulated bistatic echo phase. It can be shown that the ideal compensation phase is the carrier phase difference between satellite 2 and satellite 1.

## 3 Phase synchronization processing method

The key to phase synchronization in the alternating bistatic mode is to extract the carrier phase difference from the recorded echoes by an appropriate phase synchronization processing method.
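As a concrete, deliberately simplified illustration of this signal model, the following sketch synthesizes two carrier phase-noise realizations whose one-sided PSDs follow the power-law form of (2) and builds the ideal compensation phase (4), i.e., the quantity the processing methods below try to recover. All numerical values (sampling rate, duration, frequency offsets, and power-law coefficients) are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def power_law_phase_noise(fs, n, coeffs, rng):
    """Synthesize a phase-noise realization (rad) whose one-sided PSD follows the
    composite power-law form of (2): S(f) = a f^-4 + b f^-3 + c f^-2 + d f^-1 + e."""
    a, b, c, d, e = coeffs
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    f[0] = f[1]                                   # avoid the singularity at f = 0
    psd = a * f**-4 + b * f**-3 + c * f**-2 + d * f**-1 + e
    w = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    spec = w * np.sqrt(psd * fs * n) / 2          # shape white noise so its periodogram matches psd
    spec[0] = 0.0                                 # remove the arbitrary DC component
    return np.fft.irfft(spec, n=n)

# --- purely illustrative parameters (not taken from the paper) ---
fs, T = 100.0, 5.0                                # slow-time sampling rate [Hz] and duration [s]
n = int(fs * T)
t = np.arange(n) / fs
coeffs = (1e-4, 1e-5, 1e-6, 1e-8, 1e-10)          # assumed power-law coefficients at the carrier
df1, df2 = 0.2, -0.3                              # assumed constant frequency offsets [Hz]

rng = np.random.default_rng(0)
n_phi1 = power_law_phase_noise(fs, n, coeffs, rng)   # carrier phase noise of satellite 1
n_phi2 = power_law_phase_noise(fs, n, coeffs, rng)   # carrier phase noise of satellite 2

# ideal compensation phase, eq. (4): phi_c(t) = 2*pi*(df2 - df1)*t + n_phi2(t) - n_phi1(t)
phi_c = 2 * np.pi * (df2 - df1) * t + n_phi2 - n_phi1
print("peak-to-peak compensation phase [rad]:", float(np.ptp(phi_c)))
```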
For an ideal system under the scatter reciprocity assumption with a small bistatic angle, systematic deviations between the two bistatic SAR images are mainly due to carrier phase differences [1]. Therefore, it is possible to extract the carrier phase difference from the interferogram of the two bistatic SAR images. Ignoring the carrier phase noise changes in range fast time, the recorded slave radar echo from the master radar is [18] \\[s_{12}(t,t_{m})=\\exp(-\\mathrm{j}\\varphi(t_{m}))p_{r}(t)*m_{12}(t,t_{m})+n_{12 }(t) \\tag{5}\\] and the recorded master radar echo from the slave radar is \\[s_{21}(t,t_{m}+t_{m0})=\\exp(\\mathrm{j}\\varphi(t_{m}+t_{m0}))p_{r}(t)*\\] \\[m_{21}(t,t_{m}+t_{m0})+n_{21}(t) \\tag{6}\\] where \\(p_{r}(t)\\) is the transmitting signal, \\(t_{m0}=0.5/\\mathrm{PRF}\\) and PRF is the pulse repetition frequency, \\(\\varphi(t_{m})\\approx\\varphi_{c}(t_{m})\\) is the expected carrier phase difference, \\(n_{ij}(t)\\) is the echo-domain noise, and \\(m_{ij}(t,t_{m})\\) is the integration of the scatter radar cross section (RCS) in the antenna beam \\[m_{ij}(t,t_{m})=\\] \\[\\int_{\\tau_{1}}^{\\tau_{2}}\\sigma(\\tau,t_{m})w_{a}(t_{m}-t_{p})\\exp(-\\mathrm{ j}2\\pi f_{i}\\tau)\\delta(t-\\tau)\\mathrm{d}\\tau \\tag{7}\\] Figure 1: Illustration of alternating bistatic data acquisition mode where \\(\\tau_{1}\\) and \\(\\tau_{2}\\) are the delay times from the near end and far end of the swath, \\(w_{a}(t_{m})\\) is the two-way antenna beam pattern in the azimuth direction, \\(t_{p}\\) is the beam center crossing time, \\(\\sigma(\\tau)\\) is the RCS distribution over the delay time \\(\\tau\\) and \\(\\delta(t)\\) is a delta function. Considering only one scatter and ignoring the noise in SAR imaging processing, the slave radar echo from the master radar can be expressed as \\[s_{12}(t,t_{m})=\\sigma p_{r}(t-\\tau_{12})w_{a}(t_{m}-t_{p})\\times\\] \\[\\exp(-{\\rm j}2\\pi f_{1}\\tau_{12}+{\\rm j}2\\pi\\Delta f_{12}t_{m}+{\\rm j}\\varphi_{ e}(t_{m})) \\tag{8}\\] where \\(\\varphi_{e}(t_{m})\\approx n_{\\phi_{1}}(t_{m})-n_{\\phi_{2}}(t_{m})\\), and \\(\\tau_{12}\\) is the delay of the scatter \\[\\tau_{12}=\\frac{2}{c}\\sqrt{R_{0}^{2}+v_{a}^{2}(t_{m}-t_{p})^{2}}\\approx\\] \\[\\frac{2}{c}R_{0}+\\frac{1}{2f_{1}}k_{a}(t_{m}-t_{p})^{2} \\tag{9}\\] where \\(c\\) is the speed of light, \\(R_{0}\\) is the shortest slant range, \\(v_{a}\\) is the satellite velocity, and \\(k_{a}=2v_{a}^{2}/\\lambda R_{0}\\) is the Doppler rate in the stripmap mode. After range compression processing, the signal becomes \\[s_{12}^{rc}(t,t_{m})=\\sigma\\exp\\bigg{(}-{\\rm j}\\frac{4\\pi f_{1}}{c}R_{0}\\bigg{)} \\rho_{r}\\bigg{(}t-\\frac{2R_{0}}{c}\\bigg{)}\\times\\] \\[w_{a}(t_{m}-t_{p})\\exp(-{\\rm j}\\pi k_{a}(t_{m}-t_{p})^{2})\\times\\] \\[\\exp({\\rm j}2\\pi\\Delta f_{12}t_{m}+{\\rm j}\\varphi_{e}(t_{m})) \\tag{10}\\] where \\(\\rho_{r}(t)={\\rm sinc}~{}(\\pi Bt)\\) is the range impulse response and \\(B\\) is the bandwidth of the transmitting signal. Ignoring the phase noise difference term \\(\\varphi_{e}(t_{m})\\), the signal after azimuth compression processing is \\[s_{12}^{img}(t,t_{m})=\\sigma\\exp\\bigg{(}-{\\rm j}\\frac{4\\pi f_{1}}{c}R_{0} \\bigg{)}\\rho_{r}\\bigg{(}t-\\frac{2R_{0}}{c}\\bigg{)}\\times\\] \\[\\rho_{a}\\bigg{(}t_{m}-t_{p}-\\frac{\\Delta f_{12}}{k_{a}}\\bigg{)}\\exp({\\rm j}2 \\pi\\Delta f_{12}t_{p})\\exp\\bigg{(}{\\rm j}\\pi\\frac{\\Delta f_{12}^{2}}{k_{a}} \\bigg{)} \\tag{11}\\] where \\(\\rho_{a}(t_{m})\\) is the azimuth impulse response. 
It is shown that a linear phase \\(2\\pi\\Delta f_{12}t_{p}\\) is introduced by the frequency offset, which means that the linear phase error in the echo domain is transferred to the image domain while maintaining the same magnitude. Therefore, it is possible to estimate the carrier phase difference by interferometric processing from the two bistatic SAR images. Equation (11) shows that the frequency offset will cause an azimuth shift, given by \\[\\Delta x=\\frac{\\Delta f_{12}}{k_{a}}v_{a}=\\beta\\frac{cR_{0}}{2v_{a}} \\tag{12}\\] where \\(\\beta=\\Delta f_{12}/f_{0}\\) is the relative frequency deviation. For example, a constant azimuth shift of 171 meters can be observed for \\(v_{a}\\) = 7 km/s, \\(R_{0}\\) = 800 km, and \\(\\beta\\) = 10\\({}^{-8}\\). Therefore, it is possible to obtain a coarse \\(\\Delta f_{12}\\) by means of the co-registration offset between monostatic and bistatic SAR images [7]. As a kind of generalization, when the imaging processing aperture is reduced to one range line, the equivalent azimuth resolution is \\(\\rho_{a}=L_{s}\\), where \\(L_{s}\\) is the synthetic aperture length [14]. In this situation, a correlation processing method can be directly used in the echo domain to extract the carrier phase differences without SAR imaging and interferometric processing, by exploiting the strong correlations between two adjacent bistatic echoes. The first step is to interpolate the echo \\(s_{21}(t,t_{m}+t_{m0})\\) along the azimuth to obtain an echo \\(s_{21}(t,t_{m})\\) registered to the echo \\(s_{12}(t,t_{m})\\). Then a correlation processing is applied to these two echoes \\[r(t,t_{m})=s_{12}^{*}(-t,t_{m})*s_{21}(t,t_{m})=\\] \\[\\exp({\\rm j}2\\varphi(t_{m}))m(t)+n(t) \\tag{13}\\] where \\(n(t)\\) is the noise term, and \\(m(t)\\) is the scene term: \\[m(t)=\\rho_{r}(t)*(m_{12}^{*}(-t,t_{m})*m_{21}(t,t_{m})). \\tag{14}\\] Under the scatter reciprocity assumption, the RCS distributions of the two echoes are approximately equal and the last term is approximately a delta function \\(\\delta(t)\\). Therefore, the cross-correlation signal is a sinc function with a resolution of \\(1/B\\) s and a peak position at time 0, and the peak phase contains the carrier phase difference, the estimated value of which is \\[\\hat{\\varphi}(t_{m})=\\frac{1}{2}\\arg\\{r(0,t_{m})\\} \\tag{15}\\] where \\(r(0,t_{m})\\) is the cross-correlation value at time \\(0\\) and \\(\\arg\\{\\cdot\\}\\) represents the phase extraction operation. This estimation has deviations due to the noise and to the scatter reciprocity assumption when it is applied to real data. To sum up, the proposed sub-aperture phase synchronization processing procedure is shown in Fig. 2. Firstly, a sub-aperture of the echo is selected, and SAR imaging is performed to obtain a bistatic interferogram. Secondly, standard interferometric SAR processing is performed, including co-registration, phase flattening, phase filtering, and phase unwrapping. Finally, the estimated carrier phase difference is obtained by range averaging. The echo-domain processing method can be carried out without the imaging and interferometric processing procedures. The preprocessing step consists of interpolating the echo along the azimuth, and the correlation processing step extracts the position and phase of the correlated peaks.
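The echo-domain correlation step (13)-(15) can be sketched on a single pair of synthetic range lines. Under the scatter reciprocity assumption the two echoes share the same scene response, so half the phase of the correlation peak recovers the carrier phase difference; the scene, the noise level, and the phase value below are arbitrary assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rg = 512                                       # range samples per echo (assumed)
phi_true = 0.7                                   # carrier phase difference to recover [rad]

# under scatter reciprocity the two range lines share the same (random) scene response
scene = rng.normal(size=n_rg) + 1j * rng.normal(size=n_rg)
noise = lambda: 0.05 * (rng.normal(size=n_rg) + 1j * rng.normal(size=n_rg))
s12 = np.exp(-1j * phi_true) * scene + noise()   # slave receives master, cf. eq. (5)
s21 = np.exp(+1j * phi_true) * scene + noise()   # master receives slave (already registered)

# correlation processing, eq. (13); np.correlate conjugates its second argument
r = np.correlate(s21, s12, mode="full")
peak = int(np.argmax(np.abs(r)))
phi_hat = 0.5 * np.angle(r[peak])                # eq. (15): half the peak phase
print(f"true {phi_true:.3f} rad, estimated {phi_hat:.3f} rad, peak lag {peak - (n_rg - 1)}")
```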
The echo-domain processing method has higher processing efficiency because the correlation processing time is equivalent to the range compression processing in the SAR imaging; however, its robustness is lower than that of the sub-aperture processing method due to the influence of the scene type.

## 4 Phase synchronization processing error analysis

The phase errors caused by SAR imaging processing, the along-track baseline, image misregistration, and noise are analyzed as follows. For the random phase noise component of bistatic SAR echoes, the phase errors \\(\\gamma(t_{p})\\) after SAR imaging can be estimated with sufficient accuracy by integrating the phase errors \\(\\varphi_{e}(t_{m})\\) in the echo domain over the image processing aperture time \\(T_{a}\\) \\[\\gamma(t_{p})=\\frac{1}{T_{a}}\\int_{t_{p}-T_{a}/2}^{t_{p}+T_{a}/2}\\varphi_{e}(t_ {m})\\mathrm{d}t_{m}. \\tag{16}\\] SAR image integration corresponds to a low-pass filter with the transfer function \\(H_{az}(f)=\\sin(\\pi T_{a}f)/(\\pi T_{a}f)\\). According to the signal model shown in Fig. 3, the variance of the residual phase errors introduced by SAR imaging is \\[\\sigma_{\\Delta\\gamma}^{2}=2\\int_{0}^{\\infty}S_{\\varphi}(f)(H_{az}(f)-1)^{2} \\mathrm{d}f. \\tag{17}\\] The factor 2 reflects the use of two independent transmitting and receiving oscillators in bistatic echoes. It can be concluded from the above analysis that SAR image processing is equivalent to transfer through a low-pass filter with a cut-off frequency of 1/\\(T_{a}\\). Therefore, the frequency offset and the lower-frequency part of the phase noise in the carrier phase difference can be retained in the image domain. The image-domain phase synchronization processing method in the alternating bistatic mode is similar to the pulsed alternate synchronization method in the bistatic mode with synchronization frequency \\(f_{\\mathrm{syn}}=1/T_{a}\\). It is possible to increase the synchronization frequency by decreasing the imaging processing aperture time \\(T_{a}\\) to reduce the interpolation and aliasing phase errors [6], thus reducing the phase errors introduced by SAR image processing. The azimuth resolution is \\(\\rho_{a}=Kv_{a}/B_{a}\\) with a \\(1/K\\) sub-aperture, where \\(B_{a}\\) is the Doppler bandwidth. Then the critical along-track baseline is given by \\[B_{\\mathrm{img}}^{\\mathrm{crit}}=\\frac{\\lambda r_{0}}{2\\rho_{a}\\cos^{2}\\phi}= L_{s}/K \\tag{18}\\] where \\(\\phi\\) is the squint angle. The correlation coefficient as a function of the along-track baseline \\(t_{m0}v_{a}\\), the image misregistration \\(\\Delta y\\), and the noise is [19] \\[\\gamma=\\gamma_{B}\\gamma_{\\mathrm{Coreg}}\\gamma_{\\mathrm{SNR}}=\\] \\[\\left(1-\\frac{|t_{m0}v_{a}|}{B_{\\mathrm{img}}^{\\mathrm{crit}}} \\right)\\mathrm{sinc}\\left(\\frac{\\Delta y}{\\rho_{a}}\\right)\\frac{1}{1+ \\mathrm{SNR}^{-1}}\\approx\\] \\[\\left(1-K\\frac{v_{a}}{2\\mathrm{PRF}\\cdot L_{s}}\\right)\\frac{0.98 }{1+\\mathrm{SNR}^{-1}} \\tag{19}\\] where \\(\\gamma_{B}\\), \\(\\gamma_{\\mathrm{Coreg}}\\), and \\(\\gamma_{\\mathrm{SNR}}\\) are the correlation coefficients caused by the along-track baseline, image misregistration, and noise, respectively, SNR is the signal-to-noise ratio, and a residual misregistration of 0.1 resolution cell is considered as \\(\\Delta y/\\rho_{a}=0.1\\).
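A short sketch may help quantify (18) and (19) for representative numbers. The satellite velocity, PRF, synthetic aperture length, and SNR below are assumed values (the 0.5 s full-aperture time quoted in the next paragraph corresponds to a synthetic aperture length of roughly 3.5 km at 7 km/s); note that numpy's sinc already includes the factor of pi.

```python
import numpy as np

# illustrative (assumed) system parameters
v_a = 7.0e3          # satellite velocity [m/s]
prf = 3.0e3          # pulse repetition frequency [Hz]
L_s = 3.5e3          # synthetic aperture length [m] (~ v_a * 0.5 s)
snr = 10 ** (10 / 10)  # assumed image signal-to-noise ratio (10 dB)

for K in (1, 4, 10):
    b_crit = L_s / K                              # critical along-track baseline, eq. (18)
    g_B = 1 - (v_a / (2 * prf)) / b_crit          # baseline term; |t_m0 v_a| = v_a / (2 PRF)
    g_coreg = np.sinc(0.1)                        # 0.1-cell residual misregistration (~0.98)
    g_snr = 1 / (1 + 1 / snr)                     # noise term
    gamma = g_B * g_coreg * g_snr                 # eq. (19)
    print(f"K={K:2d}: B_crit={b_crit:7.1f} m, gamma={gamma:.3f}")
```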
When using the echo-domain processing method, \\(\\rho_{a}=L_{s}\\) and \\(K\\) achieves its maximum value \\(L_{s}B_{a}/v_{a}\\). The correlation coefficient then achieves its minimum value \\[\\gamma=\\left(1-\\frac{B_{a}}{2\\mathrm{PRF}}\\right)\\frac{0.98}{1+\\mathrm{SNR}^{- 1}}\\geq\\frac{0.49}{1+\\mathrm{SNR}^{-1}} \\tag{20}\\] where \\(\\mathrm{PRF}\\geq B_{a}\\) is considered. It can be seen that there are strong correlations between the two bistatic images. This is also the reason that the phase synchronization can be performed. The phase error can be derived from the correlation coefficient [19]. According to the analysis above, the phase synchronization processing error is mainly determined by SAR imaging and noise. Because it is difficult to determine independent processing views by using the echo-domain processing method, the phase errors caused by the noise in the correlated peaks can be analyzed directly instead. Assuming that the correlation processing bandwidth is the same as the receiver bandwidth, and considering the estimated phase divided by 2, the phase error caused by the noise is [11] \\[\\sigma_{\\varphi}\\approx\\frac{1}{\\sqrt{4\\mathrm{SNR}}}. \\tag{21}\\]

Figure 2: Image-domain phase synchronization processing algorithm

Figure 3: Residual phase errors introduced by SAR imaging

Fig. 4 shows the standard deviation (STD) of the predicted phase errors caused by SAR imaging processing as a function of \\(1/T_{a}\\). In this example, a typical space-borne 10 MHz reference oscillator is used with spectral coefficients \\(\\{a=-92\\) dB, \\(b=-87\\) dB, \\(c=-197\\) dB, \\(d=-127\\) dB, \\(e=-152\\) dB\\(\\}\\)[7], the carrier frequency is 10 GHz corresponding to an \\(M\\) value of \\(10^{3}\\), and the synthetic aperture time is approximately 0.5 s with full-aperture processing, which corresponds to an equivalent synchronization frequency of 2 Hz. The STD of the phase errors caused by SAR imaging processing is 2.43\\({}^{\\circ}\\). According to the phase error curve shown in Fig. 4, it is possible to reduce the phase errors using a sub-aperture SAR image processing method. However, the processing aperture has a lower limit because the processing bandwidth must be larger than \\(4\\Delta f_{12}\\) to ensure correct phase filtering and unwrapping. The appropriate sub-aperture time for the image-domain processing method is between \\(T_{a}/4\\) and \\(T_{a}/10\\). Fig. 5 shows the STD of the predicted phase errors caused by noise as a function of SNR. We can see that an SNR of 38 dB in the correlated peaks causes a phase error of 0.36\\({}^{\\circ}\\). It can be concluded that the phase error of the sub-aperture processing method with a larger processing aperture is mainly introduced by SAR imaging, while the phase error of the echo-domain processing method is mainly introduced by noise.

## 5 Simulation experiments

The simulated alternating bistatic echoes are obtained by filtering and decimating an actual stripmap SAR echo and multiplying the result by a set of simulated carrier phase differences. Under the scatter reciprocity assumption, the simulated echoes realistically reflect changes in the scatter and noise properties of actual alternating bistatic echoes, except for the half-PRF time offset [14]. The phase synchronization accuracy can be determined by comparing the estimated carrier phase differences with the simulated carrier phase differences after phase synchronization processing.
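The theoretical error predictions discussed above (Figs. 4 and 5) can be checked with a short numerical sketch of (17) and (21). The frequency grid, its upper cutoff, and the choice to scale the oscillator PSD by the square of the up-conversion factor M (phase-noise power growing with the square of the frequency ratio, which is what roughly reproduces the quoted 2.43 degrees) are assumptions made here for illustration.

```python
import numpy as np

# oscillator power-law coefficients quoted above (dB -> linear) and M = 1e3
a, b, c, d, e = (10 ** (x / 10.0) for x in (-92.0, -87.0, -197.0, -127.0, -152.0))
M = 1.0e3
# assumption: the PSD at the carrier is M**2 times the reference-oscillator PSD
S_phi = lambda f: M**2 * (a * f**-4 + b * f**-3 + c * f**-2 + d * f**-1 + e)

f = np.linspace(1e-4, 100.0, 200_001)            # frequency grid; 100 Hz cutoff is an assumption
df = f[1] - f[0]

def sigma_imaging_deg(T_a):
    """STD of the residual phase error introduced by SAR imaging, eq. (17)."""
    H = np.sinc(T_a * f)                          # np.sinc(x) = sin(pi x)/(pi x) = H_az(f)
    return np.degrees(np.sqrt(np.sum(2.0 * S_phi(f) * (H - 1.0) ** 2) * df))

print("full aperture, T_a = 0.5 s :", round(sigma_imaging_deg(0.5), 2), "deg")   # ~2.4 deg
print("1/10 aperture, T_a = 0.05 s:", round(sigma_imaging_deg(0.05), 2), "deg")

snr = 10 ** (38 / 10)                             # 38 dB SNR of the correlated peak
print("noise error, eq. (21)      :", round(np.degrees(1.0 / np.sqrt(4.0 * snr)), 2), "deg")  # ~0.36 deg
```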
In this simulation, the carrier frequency offset is 0.5 Hz, and the analytical coefficients described in Section 4 are used as the phase noise simulation parameters. If the frequency offset is greater than 0.5 Hz, a coarse frequency offset compensation can be performed before the phase synchronization processing to make sure that the frequency offset is less than 0.5 Hz. Fig. 6 shows simulation examples of the carrier phase noise difference for a time interval of 5 s. The phase synchronization processing results are shown in Fig. 7 - Fig. 9. The phase synchronization processing results using the full aperture (\\(K=1\\)) are given in Fig. 7. It is shown that the interferometric phase reflects the carrier phase differences. The STD of the phase errors between the interferometric phase averaged over the range and the simulated carrier phase differences is 2.96\\({}^{\\circ}\\), which fits fairly well with the theoretical prediction in Fig. 4. The phase synchronization processing results using a 1/10 aperture (\\(K=10\\)) are given in Fig. 8. Because the phase errors introduced by SAR imaging processing are decreased by the sub-aperture processing, the STD of the phase errors is reduced to 0.55\\({}^{\\circ}\\). The echo-domain phase synchronization processing results (\\(K=L_{s}B_{a}/v_{a}\\)) are given in Fig. 9. The mean and STD of the correlated peak position are 0 and 0.06 ns, respectively, and the SNR of the correlated peak is approximately 38 dB. It is shown that the correlated peak phase reflects the carrier phase differences. The STD of the phase errors is 0.37\\({}^{\\circ}\\), which fits fairly well with the theoretical prediction in Fig. 5.

## 6 Conclusions

This paper presents phase synchronization processing methods for the alternating bistatic mode in distributed SAR. From the results of the simulation experiments, the sub-aperture processing method gives a phase synchronization accuracy of \\(0.55^{\\circ}\\) using a 1/10 aperture, and the echo-domain processing method gives a phase synchronization accuracy of \\(0.37^{\\circ}\\); both methods achieve high phase synchronization processing accuracy. Although the echo-domain processing method has higher processing efficiency, its robustness is lower than that of the sub-aperture processing method. The work of this paper offers support for the system design and realization of spaceborne distributed SAR phase synchronization.

## References

* [1] G. Krieger, A. Moreira, H. Fiedler, et al. TanDEM-X: a satellite formation for high-resolution SAR interferometry. _IEEE Trans. on Geoscience Remote Sensing_, 2007, 45(11): 3317-3341. * [2] A. Moccia, G. Krieger, P. Lombardo. Tutorial: future SAR systems: principles and applications. _Proc. of the European Conference on Synthetic Aperture Radar_, 2010: 83-166. * [3] G. Krieger, I. Hajnsek, K. P. Papathanassiou, et al. Interferometric synthetic aperture radar (SAR) missions employing formation flying. _Proceedings of the IEEE_, 2010, 98(5): 816-843. * [4] W. Q. Wang. GPS-based time & phase synchronization processing for distributed SAR. _IEEE Trans. on Aerospace and Electronic Systems_, 2009, 45(3): 1040-1051. * [5] L. J. Zhao, X. D. Liang, C. B. Ding. Multi-PRF link method for bistatic SAR synchronization. _Journal of Electronics & Information Technology_, 2010, 32(6): 1338-1342. (in Chinese) * [6] Y. S. Zhang, D. N. Liang, Z. Dong. Analysis of frequency synchronization error in space-borne parasitic interferometric SAR system. _Proc.
of the European Conference on Synthetic Aperture Radar_, 2006: 1-4. * [7] G. Krieger, M. Younis. Impact of oscillator noise in bistatic and multistatic SAR. _IEEE Geoscience Remote Sensing Letters_, 2006, 3(3): 424-428. * [8] L. Kuang, X. F. Shen, W. L. Yang. Influence of frequency stability on bistatic SAR resolution. _Journal of University of Electronic Science and Technology of China_, 2009, 38(2): 165-168. (in Chinese) * [9] X. M. Xie, Y. M. Pi. Impact and estimation of frequency source noise on bistatic SAR. _Systems Engineering and Electronics_, 2010, 32(2): 275-278. (in Chinese) * [10] M. Eineder. Oscillator clock drift compensation in bistatic interferometric SAR. _Proc. of the International Geoscience and Remote Sensing Symposium_, 2003: 1449-1451. * [11] M. Younis, R. Metzig, G. Krieger. Performance prediction of a phase synchronization link for bistatic SAR. _IEEE Geoscience Remote Sensing Letters_, 2006, 3(3): 429-433. * [12] M. Younis, R. Metzig, G. Krieger, et al. Performance prediction and verification for the synchronization link of TanDEM-X. _Proc. of the International Geoscience and Remote Sensing Symposium_, 2007: 1-4. * [13] W. Q. Wang, C. B. Ding, X. D. Liang. Time and phase synchronisation via direct-path signal for bistatic synthetic aperture radar systems. _IET Radar Sonar Navigation_, 2008, 2(1): 1-11. * [14] Z. H. He, F. He, J. L. Chen, et al. Echo-domain phase synchronization algorithm for bistatic SAR in alternating bistatic/ping-pong mode. _IEEE Geoscience Remote Sensing Letters_, 2012, 9(4): 604-608. * [15] W. Q. Wang. Approach of adaptive synchronization for bistatic SAR real-time imaging. _IEEE Trans. on Geoscience Remote Sensing_, 2007, 45(9): 2695-2700. * [16] P. Lopez-Dekker, J. J. Mallorqui, P. Serra-Morales, et al. Phase synchronization and Doppler centroid estimation in fixed receiver bistatic SAR systems. _IEEE Trans. on Geoscience Remote Sensing_, 2008, 46(11): 3459-3471. * [17] IEEE Standards Coordinating Committee 27 on Time and Frequency. _IEEE standard definitions of physical quantities for fundamental frequency and time metrology-random instabilities_. New York: Institute of Electrical and Electronics Engineers, Inc., 1999. * [18] I. G. Cumming, F. H. Wong. _Digital processing of synthetic aperture radar: algorithms and implementation_. London: Artech House, 2005. * [19] D. Just, R. Bamler. Phase statistics of interferograms with application to synthetic aperture radar. _Applied Optics_, 1994, 33(20): 4361-4368.

Figure 7: Phase synchronization processing results (\\(K=1\\))

Figure 8: Phase synchronization processing results (\\(K=10\\))

Figure 9: Phase synchronization processing results (\\(K=L_{s}B_{a}/v_{a}\\))

## Biographies

Zhihua He was born in 1982. He received his B.S. and Ph.D. degrees in signal processing from the National University of Defense Technology in 2005 and 2011, respectively. He is currently a lecturer with the Institute of Space Electronic and Information Technology, National University of Defense Technology. His main scientific interests are in SAR system design and advanced radar signal processing.

Feng He was born in 1976. He received his B.S. and Ph.D. degrees in signal processing from National University of Defense Technology in 1998 and 2005, respectively. His main scientific interests are in radar system theory, adaptive signal processing, and radar signal processing.
Junli Chen was born in 1971. He is currently working toward a Ph.D. degree at Shanghai Jiao Tong University. His main scientific interests are in spaceborne SAR, task analysis and simulation, and system integration and testing.

Haifeng Huang was born in 1976. He received his Ph.D. degree in signal processing from the National University of Defense Technology in 2005. His main scientific interests are in interferometric SAR system design and radar signal processing.

E-mail: [email protected]

Diannong Liang was born in 1936. He is a professor at the National University of Defense Technology. His main scientific interests are in radar system theory, adaptive signal processing, and array signal processing.
In distributed synthetic aperture radar (SAR), the alternating bistatic mode can perform phase referencing without a synchronization link between the two satellites, in contrast to the pulsed alternate synchronization method. The key to phase synchronization processing is to extract the oscillator phase differences from the bistatic echoes. A signal model of phase synchronization in the alternating bistatic mode is presented. The phase synchronization processing method is then studied. To reduce the phase errors introduced by SAR imaging, a sub-aperture processing method is proposed. To generalize the sub-aperture processing method, an echo-domain processing method using the correlation of bistatic echoes is proposed. Finally, the residual phase errors of both proposed processing methods are analyzed. Simulation experiments validate the proposed phase synchronization processing methods and the phase error analysis results.

Keywords: distributed synthetic aperture radar (SAR), alternating bistatic mode, phase synchronization, sub-aperture processing, echo-domain processing.

**DOI: 10.1109/JSEE.2013.00049**
# Detecting Aircraft in Low-Resolution Multispectral Images: Specification of Relevant IR Wavelength Bands Florian Maire and Sidonie Lefebvre Manuscript received December 26, 2014; revised May 12, 2015; accepted July 10, 2015. Date of publication August 25, 2015; date of current version December 21, 2015. This work was supported by the French Procurement Agency (DGA).The authors are with the ONERA, The French Aerospace Lab, 91761 Palaiseau, France (e-mail: [email protected]).Color versions of one or more of the figures in this paper are available online at [http://ieeexplore.ieee.org.Digital](http://ieeexplore.ieee.org.Digital) Object Identifier 10.1109/JSTARS.2015.2457514 ## I Introduction Progress made during the last 50 years in optics sensors enhanced the use of infrared (IR) detection for scientific, surveillance, and military applications. IR sensors enable to detect targets that cannot be set apart from their surroundings in the visible spectral range, thanks to their emitted heat. In the last decade, the usefulness of multispectral or hyperspectral sensors for remote sensing assignments has been proven [1, 2] and some studies [3, 4] emphasize their potential for target detection. Multispectral sensors sample the incoming light from the scene in several, about 10 or less, wavelength bands, whereas hyperspectral sensors collect data in hundreds of narrow contiguous spectral bands. These sensors provide a powerful means to discriminate different materials on the basis of their unique spectral signatures. However, few multispectral sensors are, for now, available in the IR field. The underlying goal of this paper is to address the specification of a low-resolution multispectral IR sensor for stealth aircraft detection, from a signal processing perspective. Our objective is twofold: 1. designing an anomaly detection algorithm for multispectral images of low-resolution target; 2. specifying the IR wavelength bands which should be used in such applications. For many reasons, the experimental approaches do not allow to evaluate _real_ multispectral IR signature (IRS) (aircrafts not available, safety reasons, etc.). More important, this would require to design a multispectral sensor with relevant IR wavelength bands for this application, which is precisely what we address in this paper. A significant research effort to model and predict aircraft IR radiation [5, 6, 7] has paved the way for computer programs allowing to simulate aircraft IRS, given a set of input parameters. In this context, ONERA has been continually developing over the last 30 years a simulation program for combat aircraft IRS, CRIRA [8]. However, CRIRA does not account for the output dispersion induced by uncertainty on input data (aircraft aspect angles, meteorological conditions, optical properties, etc.) and is, thus, coupled with uncertainty propagation methods [9]. As a consequence, for a given input data, the simulated IRS is no longer a single value, but a set of possible IRS which should include any experimentally measured IRS. IRS simulated through CRIRA has already been used to specify a general method to detect aircraft in low-resolution spectrally integrated IR images [10]. In this paper, we consider multispectral aircraft IRS: each pixel is a vector whose coordinates correspond to the irradiance of the optronic scene partially integrated over a specified set of bands of the IR spectrum. In order to be useful, the sensor should be able to detect an aircraft far ahead. 
This explains the coarse resolution of the images we consider (\\(16\\times 16\\) pixels), where the aircraft extends over at most 10 pixels. The sensor would indeed be too cumbersome otherwise. Even though hyperspectral images provide higher spectral resolution, the reason for focusing on multispectral data is twofold. First, given the typically low signal-to-noise ratio (SNR) of our application, any significant irradiance from a potential anomaly would be difficult to record in the hyperspectral narrow spectral bands. Second, since the IRS spectral components in hyperspectral images are quite correlated [11, 12], the discriminant signal conveyed by a potential anomaly would present more regularities and is likely to be drowned in the intrinsic noise. Therefore, no significant gain in discrimination capability would be observed using hyperspectral images, and we believe that considering larger wavelength bands is beneficial in our application. Target detection in multi/hyperspectral images has given rise to a wealth of research efforts [4, 13, 14]. The typical objective of these methods is to detect small and rare objects in a background clutter. Given a statistical model, most of the target detection algorithms derive from the Neyman-Pearson likelihood ratio test (LRT) or from the generalized LRT (GLRT), when some parameters are unknown. A highly sought-after feature for these detectors is the constant false alarm rate (CFAR) property, implying that the probability of false alarm (PFA) does not depend on any unknown parameter. Therefore, it is theoretically possible to set the test rule so that the detector achieves a given false alarm rate. In some applications, a characteristic spectral signature of the target is _a priori_ known [4, 15, 16, 17]. The corresponding detection methods are referred to as _matched filter_ algorithms in the literature. Although these algorithms are CFAR [15], their detection performance relies upon the quality of the target reference spectrum, which may be difficult to obtain in practice. Moreover, they are not suitable for targets whose spectral signature cannot be described by a model, which is the case in the situation we are interested in. Another approach, referred to as _anomaly detection_, does not require any knowledge about the target spectral signature [13]. In this framework, it is assumed that most of the image is composed of a background clutter, whose first- and second-order statistics may be unknown. Anomaly detection algorithms aim at identifying areas or pixels of the image which significantly differ from the background. More precisely, in the multi/hyperspectral imagery context, this generally boils down to finding pixels whose spectral properties stand out from those of the background. Reed and Yu [18] proposed a CFAR anomaly detector, referred to as the Reed Xiaoli (RX) detector and considered the benchmark among the anomaly detection algorithms designed for multi/hyperspectral images. In the RX detector, the background pixels are supposed to be independent and identically distributed (i.i.d.) with an unknown multivariate Gaussian distribution. Under this assumption, the GLRT amounts to comparing the squared Mahalanobis distance between the sampled background distribution and a pixel under test to a detection threshold. A significant research effort based on the RX detector has thus emerged, mostly focusing on the estimation of background distribution moments.
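For reference, a minimal sketch of the global RX statistic just described follows: each spectral pixel is scored by its squared Mahalanobis distance to background moments estimated from the whole image. The toy image, band count, and the small ridge term added for numerical stability are assumptions made for illustration only.

```python
import numpy as np

def rx_scores(img):
    """Global RX anomaly scores: squared Mahalanobis distance of each spectral
    pixel to background statistics estimated from the whole image.
    img has shape (rows, cols, bands)."""
    rows, cols, bands = img.shape
    x = img.reshape(-1, bands)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(bands))   # small ridge for stability
    centered = x - mu
    d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return d2.reshape(rows, cols)

# toy example: 16x16, 4-band Gaussian background with one 2x2 "hot" patch
rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16, 4))
img[7:9, 7:9, :] += 3.0
scores = rx_scores(img)
print("hottest pixel:", np.unravel_index(scores.argmax(), scores.shape))
```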
As a matter of fact, the homogeneous multivariate Gaussian distribution assumption is generally not suitable for real backgrounds as a whole, and deviations from this model lead to high false alarm rates. Among the different estimation methods proposed, reviewed in [13] and [19], we can mention: 1. local RX [20, 21], in which the covariance matrix and the mean are estimated locally in a window around each pixel under test; 2. subspace RX [22], in which the background statistics are determined after carrying out a principal components analysis and removing the first (high variance) principal components; 3. kernel RX [23], in which a nonlinear transformation is performed on the input data, in order to better account for high-order correlations between spectral bands; 4. cluster-based or class-conditional RX, in which a clustering is first performed on the image. This clustering may be achieved with a mixture of Gaussian distributions model [24, 25]. The background statistics are then estimated within each class, and the Mahalanobis distances between the pixel under test and each of the classes are determined. The final result is the minimum of these distances. Although in their seminal paper [18], Reed and Yu assume the knowledge of the target optical pattern, most of the _evolved_ RX detectors process each pixel separately, and do not account for the target spatial pattern [13]. However, a promising way of accounting for spatial contextual information was proposed in [26]. To achieve hyperspectral classification, this method uses a support vector machine algorithm with composite kernels for the clustering step, in order to add information about surrounding area of each pixel: a sum of two kernels is considered, one for the pixel signature, and the other one for the mean and/or the standard deviation of signatures of the surrounding pixels. Unfortunately, this is not suited to low-resolution objects: in the case of aircraft multispectral IRS, nearly all aircraft pixels are adjacent to several background pixels. Therefore, to the best of our knowledge, detector in multi/hyperspectral images for low-resolution targets, whose spectral signature is unknown and taking into account the target sprawl, is yet to be proposed. This is particularly a relevant issue in the context of aircraft detection, as progress in the area of stealth technologies has allowed to reduce aircraft IRS. As a consequence, a stealth aircraft IR irradiance may hardly be higher than that of the background or from any decoy. This, combined with the low spatial resolution of multispectral images, explains the high false alarm rate that we observed in detection algorithms _ala_ RX, which only exploit the spectral features conveyed by the IRS. Fig. 1 gives a hint of the kind of images that we are going to cope with and illustrates the two types of dispersion that should be handled by the proposed method. 1. Spatial dispersion: The geometry of the aircraft is different in the three images. 2. Spectral dispersion: The irradiance varies in intensity and in the different wavelength bands. On one hand, the multispectral IRS (a) and (c) has a similar spectral profile, whereas IRS (b) feature two distinct intensity peaks. On the other hand, the irradiance intensity in (a) and (b) is similar while it is clearly lower in (c). 
Some recent work by Liu _et al._[11] has proposed multispectral band selection for stealth aircraft detection by analyzing contrast characteristics between the two main constituents of the aircraft's plume gas, i.e., H\\({}_{2}\\)O and CO\\({}_{2}\\), and the background. In this paper, we consider the whole aircraft IRS and we propose an innovative method for aircraft detection in a multispectral image: it simultaneously takes advantage of spectral and spatial discriminant features to reveal anomalies. It combines the Mahalanobis transform embedded in the RX algorithm with some level set techniques proposed in [10]. In most cases, aircraft correspond to hot temperatures at the sensor level. Hence, it is natural to rely on a detection test that considers the hottest pixels, i.e., the pixels associated with the highest IRS, in the sensed image, and, therefore, in its Mahalanobis transform. If these pixels are close, they are likely to come from a target; otherwise, they are considered as part of the clutter. Instead of manually testing the neighborhood of each hot pixel, we propose to take advantage of a powerful tool in image analysis: the level sets [27, 28, 29]. We believe that the proposed method is rather general and could be applied to other setups such as ground-level surveillance where targets would be observed from the nadir view-angle. The results emphasize that, in the context of aircraft detection, 1) this new test significantly improves detection results compared to other RX-based methods that could be considered, especially when a low false alarm rate is required, and 2) there is a great interest in using multispectral IRS rather than integrated IRS, as long as the IR bands are well chosen. As a matter of fact, the detection performances turn out to vary greatly according to the number and the location of spectral bands in the IR spectrum. We used a genetic algorithm (GA) [30] to optimize the detection performances and to provide the set of 2, 3, or 4 optimal elementary band combinations, which should be privileged in any aircraft detection application. This paper is organized as follows. The basic features of our IRS simulation are briefly outlined in Section II, the multispectral detection algorithm is introduced in Section III, and the wavelength band selection strategy is detailed in Section IV. Finally, simulation results are reported in Section V.

## II IRS Simulation

The main principle of multispectral IR detection of aircraft is to compare the irradiance measured by the sensor due to the aircraft with that coming from the atmospheric background clutter, spectrally integrated in about 10 contiguous bands. The knowledge of both types of irradiance is, therefore, mandatory to design a detection method. In this paper, we do not make use of real IR data, but of simulated IRS. However, our simulator CRIRA has been compared with some experimental data, like the ones provided by a campaign which took place at Paris Orly airport in 2007. This validation study showed good agreement for a front or a rear view between experimental and simulated IRS; see [31] and [32] for more details. Yet, this point is crucial: indeed, any attempt to evaluate a detection test on synthetic images should be motivated by the fact that the associated IRS dispersion is realistic. We describe, in this section, the method used to simulate multispectral images, virtually recorded by the sensor.
The contributions to the aircraft IRS can be classified into heat source emission and airframe-reflected light from the surrounding background. The emission comes mainly from: 1. the wings and the airframe, which are aerodynamically heated; 2. the air intakes, each composed of a cavity with an internal source represented by the first stages of the low pressure compressor; 3. the engine plume hot gases; 4. the nozzle and the metallic components heated by the combustion gases; 5. some mechanical or electrical components, which are heat sources. The second class is composed of light coming from the atmosphere, the Earth's ground, and the sun, which is reflected by the aircraft. We consider a daylight air-to-ground attack in a mid-latitude region by three different combat aircraft, flying at low altitude (800-1200 ft) without afterburning. The multispectral IR sensor is located on the ground, at a distance of 20 km from the target, in the flight direction. The aircraft is supposed to be spatially resolved, in each wavelength band, on a \\(16\\times 16\\) pixel image by the sensor. This scenario is a typical one for a surveillance sensor, which aims at detecting menacing aircraft soon enough to organize a defense reaction for a strategic area. Among the input parameters expected by our computer simulation program CRIRA, 28 (listed in Table I) are left unspecified in our scenario: 9 describe IR optical properties of the various aircraft surfaces, 7 are related to flight conditions, and 12 relate to atmospheric conditions. Atmospheric conditions determine the atmospheric absorption and attenuation and lead to significant IRS variations in different battlefields. The generation of a large database of simulated multispectral IRS for the three aircraft is made with a Quasi-Monte Carlo sampling of the 28 uncertain input data. The Quasi-Monte Carlo method makes use of a slightly different kind of sampling than the Monte Carlo one, as described in [33]: the pseudorandom numbers are replaced with uniformly distributed deterministic sequences, the low discrepancy sequences. As a matter of fact, the discrepancy \\({D_{N}}^{*}\\) is a measure of the uniformity of the point dispersion. A low discrepancy sequence is characterized by a \\(O\\left(\\log(N)^{p}/N\\right)\\) discrepancy, where \\(p\\) is the problem's dimension, that is to say the number of uncertain factors in this paper. In this study, we use a Faure-Tezuka sequence [34], but any other low discrepancy sequence with good space-filling properties in low-order projections could be used. A single run of our simulation requires about 3 min (each simulation is performed in parallel on a 64-bit Sun Fire workstation with four quad-core Intel Xeon processors); we thus keep the number of simulation runs below 10 000 for each aircraft. As output, CRIRA provides \\(16\\times 16\\) pixel multispectral images of the contrast between the irradiance due to the aircraft and that due to the background, in \\(K_{0}=10\\) bands. We must, therefore, add a relevant background clutter model. We make a Gaussian white noise assumption for the background: when an aircraft is observed on a clear sky background, this assumption is completely realistic.

Fig. 1: (a)–(c) Three multispectral IR signatures of the same aircraft in the same scenario, in the range of 2000–3000 cm\\({}^{-1}\\) with 10 bands of spectral width equal to 100 cm\\({}^{-1}\\) each.
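Returning to the input sampling described above, a low-discrepancy design over the 28 uncertain factors can be generated in a few lines. SciPy does not provide a Faure-Tezuka generator, so a scrambled Sobol sequence is used here as a stand-in with similar uniformity properties; the bounds are placeholders for the physical ranges of the CRIRA inputs.

```python
import numpy as np
from scipy.stats import qmc

p = 28                 # number of uncertain input factors
n = 1024               # number of simulation runs (kept well below 10 000)

# the paper uses a Faure-Tezuka sequence; a scrambled Sobol sequence stands in here
sampler = qmc.Sobol(d=p, scramble=True, seed=0)
u = sampler.random(n)                       # points in the unit hypercube [0, 1)^28

# scale each factor to its (assumed) physical range, e.g. aspect angles, emissivities, ...
lower = np.zeros(p)                         # placeholder lower bounds
upper = np.ones(p)                          # placeholder upper bounds
inputs = qmc.scale(u, lower, upper)         # one row = one CRIRA input vector
print(inputs.shape)                         # (1024, 28)
```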
For each multispectral image, the background spectral pixels are independent realizations of a multivariate Gaussian distribution with zero mean and diagonal covariance matrix whose coefficients \\(\\{s_{k}\\}_{k=1}^{K_{0}}\\) are detailed in (1). To account for spatial dependence of the background, a textured Gaussian process would have been more appropriate to model a cloudy sky, but we preferred to focus on a simple background for this first application of our method. By default, the multispectral IRS simulator provides \\(K_{0}=10\\) elementary bands each having a spectral width of 100 cm\\({}^{-1}\\), in the 2000-3000 cm\\({}^{-1}\\) range, that will be denoted band II in the following. A typical standard deviation \\(s_{\\text{I}}=.058\\) has been estimated on measured sky images, spectrally integrated over the band II, and scaled in order to be consistent with images levels. Let the band II be decomposed in \\(K_{0}\\) contiguous subbands of the type \\(\\{b_{k}=[\\sigma_{k}^{-},\\sigma_{k}^{+}]\\}_{k=1}^{K_{0}}\\). Under the assumptions that 1) the background clutter is a photon noise and 2) the incident photons follow a Poisson distribution, the standard deviation \\(s_{k}\\) of the background clutter corresponding to the subband \\([\\sigma_{k}^{-},\\sigma_{k}^{+}]\\) may be expressed as a function of \\(s_{\\text{I}}\\) as follows: \\[s_{k}=s_{\\text{I}}\\sqrt{\\frac{\\tilde{\\ell}_{k}(\\sigma_{k}^{+}-\\sigma_{k}^{-})/ \\tilde{\\sigma}_{k}}{\\sum_{u=1}^{K_{0}}\\tilde{\\ell}_{u}(\\sigma_{u}^{+}-\\sigma_{ u}^{-})/\\tilde{\\sigma}_{u}}} \\tag{1}\\] where \\(\\tilde{\\ell}_{k}\\) and \\(\\bar{\\sigma}_{k}\\), respectively, denote the background spectral luminance and the mean wavelength of the subband \\(b_{k}=[\\sigma_{k}^{-},\\sigma_{k}^{+}]\\). The mean values of the background spectral luminance \\(\\{\\ell_{k}\\}_{k=1}^{K_{0}}\\) were estimated using MATISSE, a background scene generator developed at ONERA for the computation of natural background spectral radiance images [35]. A remote sensor, monitoring the sky to protect a sensitive area or installation, acquires most of the time background images. Modeling a clear sky background by a field of independent identically distributed multivariate Gaussian variables, with zero mean and a diagonal covariance matrix \\(\\Sigma=(\\sigma_{1}^{2},\\ldots,\\sigma_{K_{0}}^{2})\\), it is reasonable to assume that the continuous flow of background images allows a proper estimation of \\(\\Sigma\\). In the following, we assume that the background distribution, denoted \\(\\mathcal{B}\\), is known. ## III A Level Set Approach to Anomaly Detection ### _Statistical Framework_ A multispectral image with \\(K_{0}\\) spectral bands is a function \\(f:\\Omega\\rightarrow\\mathbb{R}^{K_{0}}\\), where \\(\\Omega\\) is a discrete and finite subset of \\(\\mathbb{R}^{2}\\). The set of multispectral images featuring \\(K_{0}\\) bands is denoted \\(\\mathcal{F}_{K_{0}}\\), and let for all \\(f\\in\\mathcal{F}_{K_{0}}\\), \\(\\Omega_{B}(f)\\) and \\(\\Omega_{T}(f)\\) denote the subspaces of \\(\\Omega\\) related to \\(f\\) background and target support, respectively. We assume that \\(\\Omega_{T}(f)\\) and \\(\\Omega_{B}(f)\\) are complementary subsets of \\(\\Omega\\), i.e. \\(\\Omega=\\Omega_{B}(f)\\cup\\Omega_{T}(f)\\) and \\(\\Omega_{B}(f)\\cap\\Omega_{T}(f)=\\{\\emptyset\\}\\). In addition, let \\(|\\Omega|\\) denotes the number of pixels in the image \\(f\\). 
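As a brief aside to the background model above, the per-band noise levels in (1) can be computed directly once the band edges and mean background luminances are known. The luminance values below are placeholders (the paper uses MATISSE outputs), and s_I = 0.058 is the integrated-band value quoted above.

```python
import numpy as np

def subband_stds(edges_cm1, luminance, s_I=0.058):
    """Background-noise standard deviation per subband, eq. (1).
    edges_cm1: K0+1 band edges [cm^-1]; luminance: mean background spectral
    luminance per subband (placeholder values here, MATISSE outputs in the paper)."""
    edges = np.asarray(edges_cm1, dtype=float)
    lum = np.asarray(luminance, dtype=float)
    width = np.diff(edges)                        # sigma_k^+ - sigma_k^-
    center = 0.5 * (edges[:-1] + edges[1:])       # band-centre wavenumber, sigma_bar_k
    w = lum * width / center
    return s_I * np.sqrt(w / w.sum())

edges = np.linspace(2000.0, 3000.0, 11)           # 10 contiguous 100 cm^-1 subbands (band II)
lum = np.ones(10)                                 # placeholder: flat background luminance
print(subband_stds(edges, lum))                   # roughly s_I / sqrt(10) per band for a flat spectrum
```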
Finally, for all \\(x\\in\\Omega\\), the vector \\(Y_{x}:=(Y_{x}^{(1)},\\ldots,Y_{x}^{(K_{0})})=f(x)\\) is referred to as a spectral pixel, where for all \\(k\\in\\{1,\\ldots,K_{0}\\}\\), \\(Y_{x}^{(k)}\\) denotes the pixel \\(x\\) irradiance integrated over the \\(k\\)th spectral band. On the basis of this observation, a decision between two hypotheses, the null hypothesis \\(H_{0}\\) corresponding to sky background and the alternative hypothesis \\(H_{1}\\), standing for everything but \\(H_{0}\\), shall be made. We, thus, assume that the set \\(\\mathcal{F}_{K_{0}}\\) can be written as \\(\\mathcal{F}_{K_{0}}=H_{0}\\cup H_{1}\\). In our framework, an anomaly detection is a statistical test \\(\\phi\\), mapping any multispectral image \\(f\\in\\mathcal{F}_{K_{0}}\\) to \\(\\{0,1\\}\\) such that \\[\\phi:f\\to\\begin{cases}0\\Longrightarrow f\\in H_{0}\\\\ 1\\Longrightarrow f\\in H_{1}\\,.\\end{cases} \\tag{2}\\] In the following, \\(\\Phi\\) will denote the set of all possible mappings from \\(\\mathcal{F}_{K_{0}}\\) to \\(\\{0,1\\}\\), for any \\(K_{0}\\in\\mathbb{N}^{*}\\). Positive samples are images which actually contain a target, and negative samples are background images and do not contain any target. The performance of a detection test \\(\\phi\\in\\Phi\\) is characterized by the two following statistics: 1. the probability to detect true positive samples \\(\\text{P}_{\\text{D}}(\\phi)\\), defined for any \\(f\\in\\mathcal{F}_{K_{0}}\\) as \\[\\text{P}_{\\text{D}}(\\phi)=\\mathbb{P}[\\,\\phi(f)=1\\,|\\,f\\in H_{1}]\\] (3) 2. the probability to predict positive samples that are actually negative \\(\\text{P}_{\\text{FA}}(\\phi)\\), also referred to as false alarm rate, and defined as \\[\\text{P}_{\\text{FA}}(\\phi)=\\mathbb{P}[\\,\\phi(f)=1\\,|\\,f\\in H_{0}]\\,.\\] (4) When these probabilities are analytically intractable, one may use the following estimates: \\[\\widehat{\\text{P}_{\\text{D}}}(\\phi)=\\frac{1}{|H_{1}|}\\sum_{f\\in H_{1}}\\phi(f) \\,,\\qquad\\widehat{\\text{P}_{\\text{FA}}}(\\phi)=\\frac{1}{|H_{0}|}\\sum_{f\\in H_{0 }}\\phi(f)\\] where \\(|H_{1}|\\) and \\(|H_{0}|\\) denote the number of positive and negative samples in the data set, respectively. In addition to \\(\\phi\\), anomaly detection methods such as the RX algorithm [18] use a pixel-level test which we will denote \\(\\psi\\) in the following. Similarly to \\(\\phi\\), \\(\\psi\\) is a mapping from \\(\\mathcal{F}_{K_{0}}\\times\\Omega\\) to \\(\\{0,1\\}\\), such that \\[\\psi:(f,x)\\to\\begin{cases}0,&\\text{if }x\\in\\Omega_{B}(f)\\\\ 1,&\\text{if }x\\in\\Omega_{T}(f)\\,.\\end{cases} \\tag{5}\\] Defining \\(\\phi\\) and \\(\\psi\\) are two important issues when implementing a detection algorithm. In particular, there is always a compromise between increasing the probability of detection \\(\\text{P}_{\\text{D}}\\) while keeping the, PFA, \\(\\text{P}_{\\text{FA}}\\) low. For any given detector \\(\\phi\\in\\Phi\\), this tradeoff between these two statistics may be described by the receiver operating characteristic curve [ROC(\\(\\phi\\))], which plots \\(\\text{P}_{\\text{D}}(\\phi)\\) versus \\(\\text{P}_{\\text{FA}}(\\phi)\\). In this paper, we characterize a detection test performances with the following two statistics, which take into account a tradeoff between \\(\\text{P}_{\\text{D}}(\\phi)\\) and \\(\\text{P}_{\\text{FA}}(\\phi)\\). 1. A global indicator \\(S_{1}(\\phi)\\) defined as the area under the curve ROC(\\(\\phi\\)). 
This is a standard way to measure the performance of a binary classifier \\(\\phi\\). 2. A set of local indicators \\(\\{S_{2}(\\phi,\\epsilon),\\,\\epsilon\\in(0,1)\\}\\) defined as the probability of detection given a fixed alarm rate \\(\\epsilon\\), i.e., \\(S_{2}(\\phi,\\epsilon)=\\max\\text{P}_{\\text{D}}(\\phi)\\) such that \\(\\text{P}_{\\text{FA}}(\\phi)\\leq\\epsilon\\), where \\((\\text{P}_{\\text{FA}}(\\phi),\\text{P}_{\\text{D}}(\\phi))\\in\\text{ROC}(\\phi)\\). ### _RX Anomaly Detection Test Applied to Our Framework_ If we model the background as a \\(K_{0}\\)-dimensional multivariate Gaussian distribution \\(\\mathcal{B}=\\mathcal{N}(\\mu,\\Gamma)\\), a proper anomaly detector is the well-known RX detector [18]. In this case, the log-likelihood function of the background distribution is proportional to the Mahalanobis distance \\(d_{M}(\\,\\cdot\\,,\\mathcal{B})\\). For any \\(f\\in\\mathcal{F}_{K_{0}}\\) and \\(x\\in\\Omega\\), \\(d_{M}(\\,\\cdot\\,,\\mathcal{B})\\) provides a similarity measure between \\(f(x)\\) and the background distribution \\(\\mathcal{B}\\) such that \\[d_{M}(f(x),\\mathcal{B})=(f(x)-\\mu)^{(T)}\\Gamma^{-1}(f(x)-\\mu)\\,. \\tag{6}\\] For any multispectral image \\(f\\in\\mathcal{F}_{K_{0}}\\), let \\(\\hat{f}\\) denote, in the following, the Mahalanobis transform of \\(f\\) defined for all \\(x\\in\\Omega\\) as \\[\\hat{f}:x\\to d_{M}(f(x),\\mathcal{B})\\,. \\tag{7}\\] Although the spatial structure of \\(f\\) is preserved by the Mahalanobis transform, \\(\\hat{f}\\) does not inherit any quantitative spectral information from \\(f\\) but provides a cartography where _high_ gray levels are likely to be anomalies. Fig. 2 illustrates the Mahalanobis transform of a synthetic multispectral image \\(f\\), with \\(K_{0}=4\\) spectral bands. As explained in Section II, we assume that the background distribution is known and we first consider the base RX version \\(\\phi_{RX}\\). Under this assumption, for any multispectral image \\(f\\in\\mathcal{F}_{K_{0}}\\), the Mahalanobis transform of a pixel \\(x\\in\\Omega_{B}(f)\\) is the random variable \\(\\hat{f}(x)\\sim\\chi^{2}_{K_{0}}\\), where \\(\\chi^{2}_{K_{0}}\\) is the chi-squared distribution with \\(K_{0}\\) degrees of freedom. The PFA, in this case, is given by \\[\\text{P}_{\\text{FA}}(\\phi_{RX}) =\\mathbb{P}\\left[\\{\\exists\\,x\\in\\Omega,\\ \\hat{f}(x)>\\alpha\\}\\,|\\,f\\in H_{0}\\right]\\] \\[=|\\Omega|\\int_{\\alpha}^{\\infty}\\chi^{2}_{K_{0}}(u)\\mathrm{d}u \\tag{8}\\] by independence. The threshold \\(\\alpha\\) can, thus, be set to achieve a specific (constant) false alarm rate (CFAR property). Let \\(\\phi^{(1)}_{\\alpha}\\in\\Phi\\), with \\(\\alpha\\in(0,1)\\), be an anomaly detection test derived from the RX detector, defined by \\[\\phi^{(1)}_{\\alpha}(f)=\\openone_{[q_{\\alpha};\\infty[}\\left(\\max_{x\\in\\Omega} \\hat{f}(x)\\right) \\tag{9}\\] where \\(q_{\\alpha}\\) is the test threshold. A convenient property of this test is that the threshold \\(q_{\\alpha}\\) can be set analytically in order to achieve a given false alarm rate: \\(\\text{P}_{\\text{FA}}(\\phi^{(1)}_{\\alpha})\\leq\\alpha\\). Indeed, a basic proposition on the distribution of an i.i.d. 
random field yields \\[\\text{P}_{\\text{FA}}(\\phi^{(1)}_{\\alpha})=\\mathbb{P}[\\max_{x\\in\\Omega}\\hat{f}(x)\\geq q_{\\alpha}\\,|\\,f\\in H_{0}\\,]=1-F_{\\chi^{2}_{K_{0}}}(q_{\\alpha})^{|\\Omega|} \\tag{10}\\] where \\(F_{\\chi^{2}_{K_{0}}}\\) is the cumulative distribution function of the \\(\\chi^{2}_{K_{0}}\\) distribution. As a consequence, \\(q_{\\alpha}\\) may be set as the \\((1-\\alpha)^{1/|\\Omega|}\\)-quantile of the \\(\\chi^{2}_{K_{0}}\\) distribution. However, as a by-product, for low false alarm rates \\(q_{\\alpha}\\) can become very large, especially when \\(|\\Omega|\\) is high, yielding poor detection probabilities. A more general test \\(\\phi^{(2)}_{\\alpha,\\beta}\\), derived from \\(\\phi^{(1)}_{\\alpha}\\), consists in detecting an anomaly in a multispectral image \\(f\\in\\mathcal{F}_{K_{0}}\\) when at least one pixel is unlikely to be a realization of a \\(K_{0}\\)-degree-of-freedom chi-squared distribution, that is, when at least one pixel exceeds a high quantile or falls below a low quantile of a \\(\\chi^{2}_{K_{0}}\\) random field. \\(\\phi^{(2)}_{\\alpha,\\beta}\\) can be regarded as the two-tailed counterpart of \\(\\phi^{(1)}_{\\alpha}\\). This test features the advantage of detecting anomalies with either a positive or a negative contrast with respect to the background IRS. \\(\\phi^{(2)}_{\\alpha,\\beta}\\) may be expressed as \\[\\phi^{(2)}_{\\alpha,\\beta}(f)=\\max\\left\\{\\mathbf{1}_{[0;q_{\\beta}]}\\left(\\min_{x\\in\\Omega}\\hat{f}(x)\\right),\\;\\mathbf{1}_{[q_{\\alpha};\\infty[}\\left(\\max_{x\\in\\Omega}\\hat{f}(x)\\right)\\right\\} \\tag{11}\\] where \\(q_{\\alpha}\\) and \\(q_{\\beta}\\) are the two test thresholds. \\(q_{\\alpha}\\) is defined as previously, while, using the same argument as for \\(q_{\\alpha}\\), \\(q_{\\beta}\\) is set as the \\(1-(1-\\beta)^{1/|\\Omega|}\\)-quantile of the \\(\\chi^{2}_{K_{0}}\\) distribution. ### _A Detection Test Combining Spectral and Spatial Information_ Although, in our context, an aircraft is weakly resolved, its typical IRS spreads over a small set of adjacent pixels (Fig. 1). As a consequence, the associated Mahalanobis transform also features meaningful adjacent pixels, information that is left unused by \\(\\phi^{(1)}_{\\alpha}\\) and \\(\\phi^{(2)}_{\\alpha,\\beta}\\). An alternative to \\(\\phi^{(1)}_{\\alpha}\\) is thus to study some well-chosen level sets of \\(\\hat{f}\\). Level sets have long proved their usefulness in image processing [27, 28, 29]. Here, they provide a handy tool for testing the spatial proximity of pixels that are unlikely realizations of \\(\\mathcal{B}\\). Indeed, the assumption that the background pixels are not spatially correlated implies that the observation of a large level set at a high quantile of the \\(\\chi^{2}_{K_{0}}\\) distribution is an unlikely event under \\(H_{0}\\) and should help to identify a target. Hence, the level set analysis offers more protection against false positive samples than the standard RX test \\(\\phi^{(1)}_{\\alpha}\\). This allows the test threshold \\(q_{\\alpha}\\) to be decreased, which in turn will mathematically increase the true positive detection probability, while maintaining a reasonable false alarm rate. Let us recall some basic definitions used in level set analysis (refer, for instance, to [36] for further details). 
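Before turning to the level set tools, it may help to see how the thresholds of \\(\\phi^{(1)}_{\\alpha}\\) and \\(\\phi^{(2)}_{\\alpha,\\beta}\\) can be calibrated in practice. The sketch below (Python with SciPy, written as an illustrative companion to (9)-(11) rather than as the authors' implementation) sets \\(q_{\\alpha}\\) and \\(q_{\\beta}\\) from \\(\\chi^{2}_{K_{0}}\\) quantiles and applies the one- and two-tailed tests to a Mahalanobis-transformed image.

```python
import numpy as np
from scipy.stats import chi2

def rx_thresholds(alpha, beta, K0, n_pixels):
    """Thresholds of the one- and two-tailed RX tests (9) and (11)."""
    q_alpha = chi2.ppf((1.0 - alpha) ** (1.0 / n_pixels), df=K0)       # upper threshold
    q_beta = chi2.ppf(1.0 - (1.0 - beta) ** (1.0 / n_pixels), df=K0)   # lower threshold
    return q_alpha, q_beta

def mahalanobis_transform(img, mu, cov):
    """Pixel-wise Mahalanobis transform (7); img has shape (H, W, K0)."""
    diff = img - mu
    return np.einsum("hwk,kl,hwl->hw", diff, np.linalg.inv(cov), diff)

def phi1(f_hat, q_alpha):
    """One-tailed test (9): detect if at least one pixel exceeds q_alpha."""
    return int(f_hat.max() >= q_alpha)

def phi2(f_hat, q_alpha, q_beta):
    """Two-tailed test (11): detect if some pixel is above q_alpha or below q_beta."""
    return int(f_hat.max() >= q_alpha or f_hat.min() <= q_beta)
```

As noted above, for small \\(\\alpha\\) and a large pixel count \\(|\\Omega|\\), the resulting \\(q_{\\alpha}\\) becomes very large, which is precisely the limitation that motivates the level-set tests described next.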
In the sequel, let \\(\\bar{\\Omega}\\) be the \\(\\mathbb{R}^{2}\\) extension of \\(\\Omega\\) and, for any function \\(g:\\Omega\\to\\mathbb{R}\\), let \\(I(g):\\bar{\\Omega}\\to\\mathbb{R}\\) denote the bicubic interpolation of \\(g\\) on \\(\\bar{\\Omega}\\) (many other interpolation schemes could be used as well). Moreover, for any \\((\\alpha,\\beta)\\in\\mathbb{R}^{2}\\), let \\(\\mathcal{C}^{+}_{\\alpha}(g)\\) and \\(\\mathcal{C}^{-}_{\\beta}(g)\\) be the two level sets defined by \\[\\mathcal{C}^{+}_{\\alpha}(g)=\\{x\\in\\bar{\\Omega},\\,I(g)(x)\\geq\\alpha\\}\\qquad\\mathcal{C}^{-}_{\\beta}(g)=\\{x\\in\\bar{\\Omega},\\,I(g)(x)\\leq\\beta\\}\\,.\\] The interpolation implies that \\(I(g)\\) is continuous on \\(\\bar{\\Omega}\\) and that regularity conditions hold. In particular, the \\(\\alpha\\)-level line \\(\\mathcal{L}^{+}_{\\alpha}(g)\\) (respectively, \\(\\mathcal{L}^{-}_{\\beta}(g)\\)) exists and is defined as \\(\\mathcal{L}^{+}_{\\alpha}(g):=\\partial\\mathcal{C}^{+}_{\\alpha}(g)\\) (respectively, \\(\\mathcal{L}^{-}_{\\beta}(g):=\\partial\\mathcal{C}^{-}_{\\beta}(g)\\)). Finally, let \\(C^{+}_{\\alpha}(g)\\) be the set of the closed elements of \\(\\mathcal{C}^{+}_{\\alpha}(g)\\) and similarly define \\(C^{-}_{\\beta}(g)\\), \\(L^{+}_{\\alpha}(g)\\), and \\(L^{-}_{\\beta}(g)\\). We propose a third detection test \\(\\phi^{(3)}_{\\alpha,\\nu}\\in\\Phi\\), making use of these level set tools. For convenience, we begin by defining the corresponding set of anomalous pixels of any \\(f\\in\\mathcal{F}_{K_{0}}\\) \\[A^{(3)}_{\\alpha,\\nu}(f)=\\{\\,x\\in\\text{c},\\ \\text{c}\\in C^{+}_{\\alpha}(\\hat{f}),\\,\\text{Per}(\\text{c})>\\nu\\,\\} \\tag{12}\\] where, for any level line l or level set c, \\(\\text{Per}(\\text{l})\\) and \\(\\text{Per}(\\text{c})\\) both denote its perimeter. The pixel-level and image-level detection tests may, thus, be expressed as follows: \\[\\psi^{(3)}_{\\alpha,\\nu}(f,x)=\\mathbf{1}_{A^{(3)}_{\\alpha,\\nu}(f)}(x) \\tag{13}\\] \\[\\phi^{(3)}_{\\alpha,\\nu}(f)=\\mathbf{1}_{[\\nu;\\infty[}\\left(\\max_{\\text{l}\\in L^{+}_{\\alpha}(\\hat{f})}\\text{Per}(\\text{l})\\right). \\tag{14}\\]
Fig. 2: (a) Synthetic multispectral IRS featuring \\(K_{0}=4\\) spectral bands and (b) its Mahalanobis transform.
\\(\\phi^{(3)}_{\\alpha,\\nu}\\) exploits both spatial and spectral information in that it only retains sets of adjacent pixels of the Mahalanobis transform above the noise level. Moreover, compared to \\(\\phi^{(1)}_{\\alpha}\\) or \\(\\phi^{(2)}_{\\alpha,\\beta}\\), it conveys spatial information about the target location. Fig. 3 illustrates the appropriateness of taking into account some well-chosen level lines in the detection test \\(\\phi^{(3)}_{\\alpha,\\nu}\\) on a \\(64\\times 64\\) multispectral image used for illustration purposes. The top panel (a) displays the level line set \\(L^{+}_{\\alpha}(\\hat{f})\\). Even though \\(L^{+}_{\\alpha}(\\hat{f})\\) contains the target, it also features level lines that belong to the background and which could potentially yield false alarms. Conversely, the bottom panel (b) displays only the elements of \\(L^{+}_{\\alpha}(\\hat{f})\\) whose perimeter exceeds a threshold \\(\\nu\\): this set is actually restricted to the target. To focus on the relevant level sets, \\(\\alpha\\) and \\(\\nu\\) should be set such that an \\(\\alpha\\)-level set with perimeter larger than \\(\\nu\\) is an event with low probability for a \\(\\chi^{2}_{K_{0}}\\) random field. 
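The perimeter-based test itself can be prototyped along the following lines. This sketch approximates the closed \\(\\alpha\\)-level lines of an interpolated Mahalanobis transform with marching-squares contours (scikit-image is used here purely as an implementation convenience and is not part of the original method) and flags an image as soon as one closed level line is long enough; how \\(\\alpha\\) and \\(\\nu\\) should be chosen is discussed immediately below.

```python
import numpy as np
from scipy.ndimage import zoom
from skimage import measure

def max_closed_contour_perimeter(f_hat, alpha, upsample=4):
    """Largest perimeter among the closed alpha-level lines of the interpolated field."""
    smooth = zoom(f_hat, upsample, order=3)              # cubic interpolation, standing in for I(f_hat)
    best = 0.0
    for contour in measure.find_contours(smooth, alpha):
        if not np.allclose(contour[0], contour[-1]):     # keep closed level lines only
            continue
        segment_lengths = np.linalg.norm(np.diff(contour, axis=0), axis=1)
        best = max(best, segment_lengths.sum() / upsample)   # back to original pixel units
    return best

def phi3(f_hat, alpha, nu):
    """Detection test (14): is there a closed alpha-level line with perimeter > nu?"""
    return int(max_closed_contour_perimeter(f_hat, alpha) > nu)
```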
Therefore, \\(\\alpha\\) may be taken as a \\((1-10^{-p})\\)-quantile of the \\(\\chi^{2}_{K_{0}}\\) distribution, where \\(p\\geq 1\\). However, the distribution of the maximum perimeter of an \\(\\alpha\\)-level set of a \\(\\chi^{2}_{K_{0}}\\) field, denoted \\(\\mathcal{P}_{\\alpha}\\), is intractable. Thus, there is no analytical value for the threshold \\(\\nu\\) such that \\(\\phi^{(3)}_{\\alpha,\\nu}\\) achieves a prescribed false alarm rate. Nevertheless, for a rough false alarm rate \\(\\text{P}_{\\text{FA}}(\\phi^{(3)}_{\\alpha,\\nu})\\simeq 10^{-r}\\), where \\(r>0\\), the calibration of the test may be achieved thanks to simulations: \\(\\nu\\) may be set as a Monte Carlo approximation of the \\((1-10^{-r})\\)-quantile of \\(\\mathcal{P}_{\\alpha}\\), since one can sample from \\(\\mathcal{P}_{\\alpha}\\). Complementarily to \\(\\phi^{(3)}_{\\alpha,\\nu}\\) and similarly to \\(\\phi^{(2)}_{\\alpha,\\beta}\\), we define a last detection test \\(\\phi^{(4)}_{\\alpha,\\beta,\\nu}\\in\\Phi\\), defined for any \\(f\\in\\mathcal{F}_{K_{0}}\\) as \\[A^{(4)}_{\\alpha,\\beta,\\nu}(f)=\\{\\,x\\in\\text{c},\\,\\text{c}\\in C^{+}_{\\alpha}(\\hat{f})\\cup C^{-}_{\\beta}(\\hat{f}),\\,\\text{Per}(\\text{c})>\\nu\\,\\}\\] \\[\\psi^{(4)}_{\\alpha,\\beta,\\nu}(f,x)=\\mathbf{1}_{A^{(4)}_{\\alpha,\\beta,\\nu}(f)}(x) \\tag{15}\\] \\[\\phi^{(4)}_{\\alpha,\\beta,\\nu}(f)=\\mathbf{1}_{[\\nu;\\infty[}\\left(\\max_{\\ell\\in L^{+}_{\\alpha}(\\hat{f})\\cup L^{-}_{\\beta}(\\hat{f})}\\text{Per}(\\ell)\\right). \\tag{16}\\] The possible anomalies are likely to belong either to an \\(\\alpha\\)-upper level set or to a \\(\\beta\\)-lower level set of a \\(|\\Omega|\\)-dimensional \\(\\chi^{2}_{K_{0}}\\) random field. \\(\\alpha\\) and \\(\\beta\\) may be set as high and low quantiles of the \\(\\chi^{2}_{K_{0}}\\) distribution, respectively. \\(\\nu\\) can be defined in the same way as in the \\(\\phi^{(3)}_{\\alpha,\\nu}\\) case. As mentioned in Section I, when the background distribution \\(\\mathcal{B}\\) is unknown, one can substitute the mean \\(\\mu\\) and the covariance matrix \\(\\Sigma\\) with their maximum likelihood estimates \\(\\hat{\\mu}\\) and \\(\\hat{\\Sigma}\\). Moreover, it was demonstrated in [18] that the CFAR property remains valid: instead of a \\(\\chi^{2}_{K_{0}}\\) distribution, the test statistic follows a beta distribution, whose parameters only depend upon \\(K_{0}\\) and \\(|\\Omega|\\). Thus, the four detection tests we have proposed may also be used when the background distribution \\(\\mathcal{B}\\) is unknown. The only requirement is a sufficient number of background samples \\(f\\in H_{0}\\) to obtain a good estimate of the test thresholds via Monte Carlo sampling, which is always the case for monitoring applications. Moreover, the scope of application of this method could be extended to address target detection in multispectral satellite images or aerial photographs. In such setups, the potential target operates on the ground and the scene is monitored from the nadir view angle. Possible nonhomogeneous backgrounds (e.g., forest, sea, and desert) could be handled by defining \\(\\mathcal{B}\\) as a mixture model. ## IV Wavelength Bands Selection Our database of simulated aircraft IRS consists of multispectral images featuring \\(K_{0}=10\\) elementary spectral bands \\(\\{b_{k}\\}_{k=1}^{10}\\) evenly spaced across the 2000-3000 cm\\({}^{-1}\\) spectral range (actually, 10 bands of spectral width \\(100\\,\\mathrm{cm}^{-1}\\)). 
However, as already mentioned in [17], [37], and [11], the number of bands \\(K\\) and their location in the 2000-3000 cm\\({}^{-1}\\) spectrum both have a huge impact on the detection performance, regardless of the choice of the detector \\(\\phi\\in\\Phi\\). Fig. 4 displays the distribution of the Mahalanobis transform of pixels in a multispectral image \\(f\\in\\mathcal{F}_{K}\\) with \\(K=1,\\ 2,\\ 4,\\ \\mathrm{and}\\ 6\\) bands. On the one hand, the plain lines correspond to the analytic distribution of the background pixels, i.e., the chi-squared distribution with 1, 2, 4, and 6 degrees of freedom. On the other hand, the dashed line refers to the empirical distribution of the target pixels, denoted \\(T\\), which does not vary significantly with \\(K\\). The hatched areas below the plain lines correspond to the respective false alarm rates achieved with the different values of \\(K\\). This shows, first, that in the case of our distribution \\(T\\), the higher the number of bands \\(K\\) involved in the multispectral image, the higher the expected false alarm rate. Second, a higher number of spectral bands enables the identification of different spectral features and thus leads to an easier discrimination between background and target pixels. However, compared with broader band measurements, narrow spectral bands may significantly reduce the SNR. As a consequence, considering groups of consecutive elementary bands is a promising prospect. Third, the location of some elementary bands is such that they do not provide any information about the targets. As a matter of fact, the atmospheric absorption phenomenon, due in particular to H\\({}_{2}\\)O and CO\\({}_{2}\\) in the 2000-3000 cm\\({}^{-1}\\) range, makes some elementary bands irrelevant. For all these reasons, a tradeoff on the band number \\(K\\) and their respective bandwidths should be made, and a band selection step is, therefore, mandatory. We consider multispectral images with \\(K\\) bands as follows. 1. Each band \\(\\{r_{k}\\}_{k=1}^{K}\\) is a group of consecutive elementary bands. 2. A spectral band \\(r_{k}\\) may not consist of more than \\(C\\) consecutive elementary bands. 3. Two spectral bands \\(r_{k}\\) and \\(r_{k+1}\\) are separated by at least one elementary band. The first constraint aims at aggregating similar contiguous bands, the second forces multiple bands to be taken into account, and the last one can be regarded as a way to decouple correlated information. These constraints induce a set \\(\\mathcal{E}_{K}\\) of the possible combinations providing multispectral images with \\(K\\) bands. For all \\(\\gamma\\in\\mathcal{E}_{K}\\), define \\(T_{\\gamma}\\) as the mapping \\(\\mathcal{F}_{10}\\rightarrow\\mathcal{F}_{K}\\), such that for all \\(f\\in\\mathcal{F}_{10}\\), \\(T_{\\gamma}(f)\\) is the multispectral image \\(f\\) corresponding to the group \\(\\gamma\\) (cf. Fig. 5). More precisely, we have \\[T_{\\gamma}(f)=(T_{\\gamma}(f)^{(1)},\\ldots,T_{\\gamma}(f)^{(K)}),\\qquad T_{\\gamma}(f)^{(k)}=\\sum_{\\ell\\in r_{k}}f^{(\\ell)}\\quad\\forall\\,k\\in\\{1,\\ldots,K\\}. \\tag{17}\\] \\(T_{\\gamma}(f)\\) is an additive transformation since 1) the target pixels in each band are already integrated data (and therefore additive) and 2) the background pixels of different bands must be added up in order to satisfy the variance model (1). 
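As a small illustration of (17), the following sketch (Python; the particular grouping and the random image are only hypothetical examples, the grouping being one element of \\(\\mathcal{E}_{2}\\)) sums consecutive elementary bands into grouped bands.

```python
import numpy as np

def group_bands(f, groups):
    """Apply T_gamma of (17): f has shape (H, W, K0); groups lists, for each grouped
    band r_k, the indices of its consecutive elementary bands."""
    return np.stack([f[..., idx].sum(axis=-1) for idx in groups], axis=-1)

# Hypothetical gamma in E_2: elementary bands {0, 1} and {4, 5, 6},
# separated by the left-out elementary bands 2-3.
f10 = np.random.rand(16, 16, 10)                # placeholder 10-band image
f2 = group_bands(f10, [[0, 1], [4, 5, 6]])      # resulting shape: (16, 16, 2)
```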
For a given detector \\(\\phi\\in\\Phi\\), we propose, in this section, a method to find the element \\(\\gamma^{*}\\in\\mathcal{E}_{K}\\) such that for all \\(\\gamma\\in\\mathcal{E}_{K}\\) either \\((i)\\) or \\((ii)\\) holds: \\[(i)\\ S_{1}{}^{\\gamma}(\\phi)\\leq S_{1}{}^{\\gamma^{*}}(\\phi)\\,,\\qquad(ii)\\ S_{2}{}^{\\gamma}(\\phi)\\leq S_{2}{}^{\\gamma^{*}}(\\phi)\\] where \\(S_{1}{}^{\\gamma}(\\phi)\\) and \\(S_{2}{}^{\\gamma}(\\phi)\\), respectively, refer to the detection statistics \\(S_{1}(\\phi)\\) and \\(S_{2}(\\phi)\\) defined in Section III, but applied to the multispectral images \\(T_{\\gamma}(f)\\) instead of \\(f\\). Any combination \\(\\gamma\\in\\mathcal{E}_{K}\\) may be parameterized by some vector \\(\\theta\\in\\Theta_{K}\\), where \\(\\Theta_{K}\\) is the subset of \\(\\mathbb{N}^{2K}\\) defined as follows: \\[\\theta\\in\\Theta_{K},\\quad\\theta=(\\theta_{1},\\ldots,\\theta_{2K})\\quad\\text{where}\\quad\\begin{cases}\\theta_{1}\\geq 0\\ \\text{and}\\ \\theta_{2},\\theta_{3},\\ldots,\\theta_{2K}>0\\\\ \\sum_{k=1}^{2K}\\theta_{k}\\leq K_{0}\\,.\\end{cases} \\tag{18}\\] For all \\(k\\in\\{1,\\ldots,K\\}\\), \\(\\theta_{2k}\\) denotes the number of consecutive elementary band(s) in the group \\(r_{k}\\). Conversely, \\(\\theta_{2k-1}\\) denotes the number of elementary band(s) left out between \\(r_{k-1}\\) and \\(r_{k}\\). Thus, for each \\(\\theta\\in\\Theta_{K}\\), there exists a unique \\(\\gamma\\in\\mathcal{E}_{K}\\) and conversely, and with some abuse of notation we write \\(T_{\\theta}(f)\\) for the multispectral image \\(f\\) corresponding to the band grouping \\(\\theta\\). As a consequence, finding the combination \\(\\gamma^{*}\\in\\mathcal{E}_{K}\\) that satisfies relation \\((i)\\) or \\((ii)\\) defined above is equivalent to finding the corresponding optimal parameter \\(\\theta^{*}\\in\\Theta_{K}\\). Fig. 6 provides an example of such a parameterization. Finding the optimal parameter \\(\\theta^{*}\\in\\Theta_{K}\\) is a discrete optimization problem with constraints. For this reason, we chose, in this work, to use a GA [30] to perform this task. By analogy with the evolutionary theory, which predicts that, in a random population, only the individuals most adapted to the environment will survive, a GA looks for the gene that corresponds to the fittest individual. In our context, for a given number of band groups \\(K\\), the population consists of the possible band combinations \\(\\theta\\in\\Theta_{K}\\), the genes are the parameters \\(\\theta_{1},\\ldots,\\theta_{2K}\\), and the fitness function (fitting environment measure) \\(\\mu\\) is either the function \\(\\theta\\to S_{1}{}^{\\theta}(\\phi)\\) or \\(\\theta\\to S_{2}{}^{\\theta}(\\phi)\\), where, with some abuse of notation, \\(S_{i}^{\\theta}(\\phi)\\) denotes the statistic \\(S_{i}(\\phi)\\) evaluated on the images \\(T_{\\theta}(f)\\). By deriving successive generations from an initial random population, the GA will provide _in fine_ individuals belonging to the last generation, which are the fittest with respect to the environment. Algorithm 1 summarizes the different steps performed at each iteration of the GA. Implementation issues are discussed in Section V. With a proper population size \\(N\\) and a generation number \\(M\\) specified, this constrained and discrete optimization problem may be solved with the routine ga available in the MATLAB Global Optimization Toolbox.
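To make the parameterization (18) concrete, the sketch below decodes a vector \\(\\theta\\) into the corresponding elementary-band groups and checks its feasibility. Such a helper could serve as the constraint handler of a GA implementation; it is only illustrative, since the optimization reported here relies on the MATLAB ga routine.

```python
def is_feasible(theta, K0=10):
    """Constraints (18): theta = (t1, ..., t_2K) with t1 >= 0, the other entries
    strictly positive, and the total number of elementary bands bounded by K0."""
    return theta[0] >= 0 and all(t > 0 for t in theta[1:]) and sum(theta) <= K0

def decode(theta):
    """Map theta to the band groups {r_k}: odd-position entries are gaps (bands left out),
    even-position entries are group lengths."""
    groups, pos = [], 0
    for gap, length in zip(theta[0::2], theta[1::2]):
        pos += gap
        groups.append(list(range(pos, pos + length)))
        pos += length
    return groups

# theta = (1, 2, 2, 3): skip band 0, group bands 1-2, skip bands 3-4, group bands 5-7.
print(is_feasible((1, 2, 2, 3)))   # True
print(decode((1, 2, 2, 3)))        # [[1, 2], [5, 6, 7]]
```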
Fig. 4: Distribution of the Mahalanobis transform of a background multispectral pixel featuring \\(K=1,2,4,\\,\\mathrm{and}\\,6\\) bands (plain lines) and the empirical Mahalanobis transform of a multispectral target pixel (dashed line).
## V Results of the Methodology's Application ### _Detectors Comparison_ In this section, we compare the different detectors defined in Section III applied to the raw multispectral images \\(f\\in\\mathcal{F}_{10}\\). The multispectral image databases for the three combat aircraft will be denoted as \\(\\mathcal{F}_{10}^{(1)}\\), \\(\\mathcal{F}_{10}^{(2)}\\), and \\(\\mathcal{F}_{10}^{(3)}\\). \\(\\mathcal{F}_{10}^{(\\text{all})}\\) will stand for the complete database gathering the three aircraft. A first graphical evidence of the relevance of simultaneously taking into account high and low level sets of the Mahalanobis transform \\(\\hat{f}\\) in the detection test is given in Fig. 7. It displays the ROC curves, in log scale, of the four detectors \\(\\phi^{(1)}\\), \\(\\phi^{(2)}\\), \\(\\phi^{(3)}\\), and \\(\\phi^{(4)}\\), obtained by applying these detectors with different parameters \\(\\alpha\\), \\(\\beta\\), and \\(\\nu\\) to \\(N=5000\\) multispectral images \\(\\{f_{n}\\}_{n=1}^{5000}\\in\\mathcal{F}_{10}^{(2)}\\) and \\(|H_{0}|=5000\\) background images. For notational simplicity, we omit the parameters in the detection test denomination. Clearly, the ROC curve associated with \\(\\phi^{(1)}\\) is below that of \\(\\phi^{(2)}\\), and the one associated with \\(\\phi^{(3)}\\) is below that of \\(\\phi^{(4)}\\), implying that the latter two detectors outperform the former two. Quantitatively, Fig. 8 provides the statistics \\(S_{1}\\) and \\(S_{2}\\) characterizing the four detectors.
Fig. 8: Statistics of the four detectors applied to \\(f\\in\\mathcal{F}_{10}^{(2)}\\).
First, this confirms the conjecture made with Fig. 7. Then, this shows that when a low false alarm rate is required, the information conveyed by the level lines improves the detection significantly: for \\(\\text{P}_{\\text{FA}}<10^{-3}\\), \\(\\text{P}_{\\text{D}}(\\phi^{(1)})\\) and \\(\\text{P}_{\\text{D}}(\\phi^{(2)})\\) shrink dramatically. In the following, we compare the two best detectors, namely the RX-based detector \\(\\phi^{(2)}\\) and the level set based \\(\\phi^{(4)}\\), for each aircraft. Fig. 9 plots the ROC curves in log scale for the four different databases. In Fig. 9(a)-(c), the training set consisted of \\(N=5000\\) multispectral images \\(f\\) of \\(\\mathcal{F}_{10}^{(1)}\\), \\(\\mathcal{F}_{10}^{(2)}\\), and \\(\\mathcal{F}_{10}^{(3)}\\), respectively, along with \\(|H_{0}|=5000\\) background images, while in Fig. 9(d) \\(N=15\\,000\\) multispectral images \\(f\\in\\mathcal{F}_{10}^{(\\text{all})}\\) were used along with \\(|H_{0}|=5000\\) background images. Most of the type \\(1\\) aircraft feature a single hot pixel: as a consequence, it is no surprise that the two ROC curves in Fig. 9(a) are closer than in any other scenario. Indeed, the detector \\(\\phi^{(4)}\\) ROC curve is achieved for level sets that encompass one or at most two pixels, which is equivalent to the min/max test \\(\\phi^{(2)}\\). The cases of aircraft 2 and 3 are interesting because the detection performance of \\(\\phi^{(2)}\\) and \\(\\phi^{(4)}\\) does not evolve similarly: \\(\\phi^{(2)}\\) detects type \\(2\\) aircraft more easily than type \\(3\\), and conversely for \\(\\phi^{(4)}\\). 
This actually matches the characteristics of aircraft \\(2\\) and \\(3\\): since, most of the time, aircraft \\(2\\) has a higher IRS than aircraft \\(3\\), the min/max test \\(\\phi^{(2)}\\) detects aircraft \\(2\\) better than aircraft \\(3\\). On the other hand, because the aircraft \\(3\\) IRS is characterized by a relatively low but constant level, it is easier to detect with the level set test \\(\\phi^{(4)}\\) than aircraft \\(2\\). In all the cases, the level set test outperforms the min/max RX detector. In addition to being more efficient than \\(\\phi^{(2)}\\), the detector \\(\\phi^{(4)}\\) provides spatial information about the aircraft location in the multispectral images. The set of the estimated target pixels is defined as the inner pixels of the contour line \\(l\\in L_{\\alpha}^{+}(\\hat{f})\\cup L_{\\beta}^{-}(\\hat{f})\\) having the largest perimeter. If this perimeter is lower than the threshold \\(\\nu\\), no target is detected and thus the set of estimated target pixels is empty. Fig. 10 illustrates the spatial information conveyed by \\(\\phi^{(4)}\\): Fig. 10(a) represents a (difficult) multispectral image \\(f\\) of a type 2 aircraft, and Fig. 10(b) shows the corresponding Mahalanobis transform \\(\\hat{f}\\) and the biggest contour lines of the sets \\(L_{\\alpha}^{+}(\\hat{f})\\) in yellow and \\(L_{\\beta}^{-}(\\hat{f})\\) in cyan for two fixed thresholds \\(\\alpha\\) and \\(\\beta\\), respectively. Finally, Fig. 10(c) displays the set of the estimated target pixels (the white pixels) and the true target pixels (inner pixels of the green line). To make the location test more challenging, a background was added to the original \\(16\\times 16\\) images [a Gaussian white noise with variance given in (1)] so that, in Fig. 10, the detection algorithm is applied to a \\(48\\times 48\\) sample. ### _Optimal Band Selection_ The raw database provides standard multispectral images \\(f\\in\\mathcal{F}_{10}\\). We consider, in the following, multispectral images derived from this database, featuring \\(K=2\\), \\(3\\), or \\(4\\) groups \\(\\{r_{k}\\}_{k=1}^{K}\\) of consecutive elementary subbands and for which the constraints imposed in Section IV hold. In this implementation, we added a group length restriction: for all \\(K\\in\\{2,3,4\\}\\) and for all \\(1\\leq k\\leq K\\), \\(\\theta_{2k}\\leq 4\\). Moreover, we used the following specifications for the GA. 1. Given two parents, say \\(\\theta^{(1)}\\) and \\(\\theta^{(2)}\\), the cross-over operator [Step 4-(b)] sets the genes of a child, say \\(\\theta^{\\prime}=(\\theta^{\\prime}_{1},\\ldots,\\theta^{\\prime}_{2K})\\), such that only the genes unshared by the parents, i.e., the genes in the set \\(G_{1,2}:=\\{1\\leq\\ell\\leq 2K,\\ \\theta^{(1)}_{\\ell}\\neq\\theta^{(2)}_{\\ell}\\}\\), are globally refreshed. This is achieved by choosing \\(\\theta^{\\prime}\\) with a discrete uniform random draw on the following set: \\[\\mathcal{S}_{\\text{glob}}(G_{1,2}):=\\{\\forall\\,g\\in G_{1,2}\\quad\\theta^{\\prime}_{g}\\not\\in(\\theta^{(1)}_{g},\\theta^{(2)}_{g}),\\quad\\forall\\,g\\not\\in G_{1,2}\\quad\\theta^{\\prime}_{g}=\\theta_{g},\\quad\\theta^{\\prime}\\in\\Theta_{K}\\}.\\] (19)
Fig. 10: Location of a zone of interest in a multispectral image using the detector \\(\\phi^{(4)}\\). (a) \\(f\\in\\mathcal{F}_{10}^{(2)}\\). (b) \\(\\hat{f}\\), \\(L_{\\alpha}^{+}(\\hat{f})\\), and \\(L_{\\beta}^{-}(\\hat{f})\\). (c) Part of the target detected and true target location.
2. 
The mutation step [Step 4-(c)] allows the gene of any individual to randomly change at each iteration with a probability of 1%. If a mutation occurs locally on the gene \\(\\theta_{k}\\) of some individual \\(\\theta\\in\\Theta_{K}\\), \\(\\theta_{k}\\) is replaced with a discrete uniform random draw on the set \\[\\mathscr{S}_{\\text{loc}}(k):=\\{0\\leq\\ell\\leq 2K,\\] \\[(\\theta_{1},\\ldots,\\theta_{k-1},\\ell,\\theta_{k+1},\\ldots,\\theta_ {2K})\\in\\Theta_{K}\\}.\\] (20) Fig. 11 shows some ROC curves obtained by applying the detector \\(\\phi^{(4)}\\) to a database featuring \\(N=5000\\) multispectral images \\(\\{f_{n}\\in\\mathcal{F}_{3}^{(1)}\\}_{n=1}^{5000}\\) and \\(|H_{0}|=5000\\) background images. The four curves correspond to different band combinations \\(\\theta\\in\\Theta_{3}\\): the plain curve is achieved with the parameter \\(\\theta^{*}\\in\\Theta_{3}\\) which maximizes the criterion \\(\\theta\\to S_{1}^{\\theta}(\\phi^{(4)})\\). The dashed curves are related to other parameters \\(\\left\\{\\theta^{(i)}\\in\\Theta_{3}\\right\\}_{i=1}^{3}\\) specified in Fig. 11(b). Given similar band combination, the resulting area under ROC curve (AUC) turns out to be significantly different: it is, therefore, crucial for the detection performance to know the optimal band combination. This band selection method could also help to discriminate decoys from real aircraft, provided that their emission spectrum does not perfectly match the radiation of the airplane. If not, following the method developed in [38], an additional classification step making use of geometric features of the targets will be necessary. Using the detector \\(\\phi^{(4)}\\) and multispectral images of aircraft 1, Fig. 12 shows the GA outcomes for \\(K=2\\) band combinations and using different parameters. The blue curves are obtained for the fitness function \\(\\mu=S_{1}(\\phi^{(4)})\\) and the black ones for \\(\\mu=S_{2}(\\phi^{(4)},\\epsilon=5\\times 10^{-2})\\). The plain and dashed lines only differ according to the GA parameters: the population size and the generation number, referred to as \\(N\\) and \\(M\\) in Algorithm 1, respectively. For each scenario, the quantitative results of Fig. 12(b) confirm that the GA provides band combinations which are coherent with the fitness function \\(\\mu\\) and that the optimization improves with \\(M\\) and \\(N\\). Still, the computation time considerably increases with these two parameters. Therefore, given that the detection performances vary very little, the parameters \\(\\mu=S_{1}\\), \\(N=10\\), and \\(M=50\\) will be used in the following. Fig. 13 displays the ROC curves of the detector \\(\\phi^{(4)}\\) applied to the multispectral images \\(f\\in\\mathcal{F}^{(1)}\\) [Fig. 13(a)], \\(f\\in\\mathcal{F}^{(2)}\\) [Fig. 13(b)], \\(f\\in\\mathcal{F}^{(3)}\\) [Fig. 13(c)], and \\(f\\in\\mathcal{F}^{(\\text{all})}\\) [Fig. 13(d)]. The GA was applied to these four data sets with the parameters \\(\\mu\\), \\(N\\), and \\(M\\) specified above to obtain the optimal parameters \\(\\theta_{2}^{*}\\), \\(\\theta_{3}^{*}\\), and \\(\\theta_{4}^{*}\\) corresponding to the \\(K=2\\), \\(K=3\\), and \\(K=4\\) optimal band combinations, respectively. For each data type, the detector \\(\\phi^{(4)}\\) is applied to the multispectral images featuring: 1. the optimal band combinations for \\(K=2\\) combinations \\(\\theta_{2}^{*}\\) (plain black curves); 2. the optimal band combinations for \\(K=3\\) combinations \\(\\theta_{3}^{*}\\) (dotted black curves); 3. 
the optimal band combinations for \\(K=4\\) combinations \\(\\theta_{4}^{*}\\) (dashed black curves); 4. the images integrated over the band II spectrum, i.e., monospectral \\(\\theta_{\\text{II}}^{*}\\) (plain green curves); 5. the standard multispectral images, i.e., with \\(K_{0}=10\\) elementary subbands, \\(\\theta_{10}^{*}\\) (plain blue curves).
Fig. 12: Optimization with different GA parameters. (a) ROC curves. (b) Optimized criterion.
For all data sets, the \\(K=2\\) band combination outperforms the other band combinations with \\(K=3\\) and \\(K=4\\). However, there is no such general ranking between the optimal combinations of \\(K=3\\) and \\(K=4\\) bands. In comparison, the standard multispectral images with \\(K_{0}=10\\) subbands are always less efficient than the optimal \\(K=2\\) band combination but may achieve better performance than the optimal \\(K=3\\) and \\(K=4\\) band combinations. For \\(K=3\\) and \\(K=4\\), the optimal band combinations \\(\\theta_{K}^{*}\\in\\Theta_{K}\\) are identical for the different aircraft. For \\(K=2\\), the optimal band combinations are the same for two aircraft and only differ by one element for the third one. This simulation shows that, provided the optimal band combinations are known, there is a definite interest in using multispectral images to perform detection instead of monospectral images: indeed, the ROC curves obtained for the monospectral data set are always below the best multispectral ROC curve. Regardless of the data set, the optimal parameter \\(\\theta^{*}\\) always presents the same spectral profile: a first short group involving two elementary subbands early in band II and a second group of three or four elementary subbands further in band II. The optimal nature of the \\(K=2\\) band combinations may be explained by two arguments. 1. _Algorithm justification_: For any standard multispectral image \\(f\\in\\mathcal{F}_{10}\\), consider the multispectral image \\(T_{\\theta^{*}}(f)\\in\\mathcal{F}_{2}\\), (\\(\\theta^{*}\\in\\Theta_{2}\\)). If a target is present in the multispectral image \\(T_{\\theta^{*}}(f)\\), then it is likely to appear in inverted contrast in the first image and in positive contrast in the second one. Indeed, for the first group, the noise level is high (the noise level is higher at the beginning of band II) and the aircraft signature is low (only two bands are taken into account in the first group of \\(\\theta^{*}\\)). Conversely, the several bands of the second group provide a stronger target signal while maintaining a low noise level. As a consequence, a target embedded in a multispectral image \\(T_{\\theta^{*}}(f)\\in\\mathcal{F}_{2}\\) may, thus, be detected either with the level set \\(L_{\\alpha}^{+}(\\hat{f})\\) and/or \\(L_{\\beta}^{-}(\\hat{f})\\). 2. _Physical justification_: The emissions of the engine plume hot gases, mainly composed of H\\({}_{2}\\)O and CO\\({}_{2}\\), radiate at 2.7, 4.3, and \\(4.9\\,\\upmu\\)m and may, thus, leave a signature at the beginning of band II. Hence, [11] selected the spectral band from 4.17 to \\(4.55\\,\\upmu\\)m for their application. Moreover, the CO\\({}_{2}\\) absorption band in the neighborhood of \\(3.5\\,\\upmu\\)m corresponds to the gap between the first and the second group. Finally, the information conveyed by the second band group coincides with the fuselage reflectance, whose signature lies at \\(4.3\\,\\upmu\\)m. 
Note that the optimal band combinations with \\(K=3\\) or \\(K=4\\) tend to adopt the same spectral profile as \\(\\theta^{*}\\in\\Theta_{2}\\) by artificially adding one or two irrelevant band group(s) at the end of band II to reach their target number of groups \\(K\\). As a consequence, in addition to providing an explanation of the optimality of \\(\\theta_{2}^{*}\\), these two arguments provide a justification of the suboptimality of the band combinations \\(\\theta_{3}^{*}\\) and \\(\\theta_{4}^{*}\\). In this paper, we focus on the single-sensor case, but if multiple sensors are used, our method still applies; only the optimal band combination could differ. ## VI Conclusion In this paper, we have introduced a novel method to perform anomaly detection in low-resolution multispectral images. The detection task is challenging because the targets feature simultaneously spectral and spatial dispersion and little prior knowledge is available. The proposed detector \\(\\phi^{(4)}\\), combining a Neyman-Pearson LRT and a study of relevant level sets, is designed to handle these dispersions and is shown to outperform the standard RX detector in our case. This detector was then used to identify the optimal spectral band combination for stealth aircraft detection, i.e., the band number and their location in the band II spectrum. It turns out that, for three different military aircraft, the same spectral profile, featuring a combination of \\(K=2\\) bands, provides the best detection performance. The optimization method and, at large, the proposed detection methodology can be extended to other situations: other targets, different backgrounds, etc. In particular, an interesting issue that remains to be addressed in the aircraft detection context is anomaly detection in a cloudy sky background. In [10], a fractional Brownian motion was used to model this kind of textured background for monospectral images. Provided a model extending this pattern to the multispectral approach, it would be interesting to study how the detector \\(\\phi^{(4)}\\) copes with such a background. ## Acknowledgment The authors would like to thank the Associate Editor and three anonymous referees for helpful suggestions. ## References * [1] L. E. Hoff, A. M. Chen, X. Yu, and E. M. Winter, "Enhanced classification performance from multiband infrared imagery," in Proc. 29th Asilomar Conf. Signals, Syst., Comput., 1996, pp. 837-841. * [2] C. I. Chang and S. S. Chiang, "Anomaly detection and classification for hyperspectral imagery," _IEEE Trans. Geosci. Remote Sens._, vol. 40, no. 6, pp. 1314-1325, Jun. 2002. * [3] J. Karlholm and I. Renhorn, "Wavelength band selection method for multispectral target detection," _Appl. Opt._, vol. 41, pp. 6786-6795, 2002. * [4] D. Manolakis, D. Marden, and G. Shaw, "Hyperspectral image processing for automatic target detection applications," _Lincoln Lab. J._, vol. 14, no. 1, pp. 79-116, 2003. * [5] M. Johansson and M. Dalenbring, "SIGGE, a prediction tool for aeronautical IR signatures, and its applications," in _Proc. 9th AIAA/ASME Joint Thermophys. Heat Transfer Conf._, 2006, vol. 3276. * [6] A. Rao and S. P. Mahulikar, "Aircraft powerplant and plume infrared signature modelling and analysis," in _Proc. 43rd AIAA Aerosp. Sci. Meeting Exhibit_, 2005. * [7] M. Noah, J. Kristl, J. Schroeder, and B. P. Sandford, "NIRATAM-NATO infrared air target model," in _Proc. SPIE Surveillance Technol._, vol. 1479, 1991, pp. 275-282. * [8] G. Gauffre, "Aircraft infrared radiation modeling," _Rech. Aerosp._, vol. 4, pp. 245-265, 1981. * [9] S. Lefebvre, A. Roblin, S. Varet, and G. Durand, "A methodological approach for statistical evaluation of aircraft infrared signature," _Reliab. Eng. Syst. Saf._, vol. 95, pp. 484-493, 2010. * [10] J. Jakubowicz, S. Lefebvre, F. Maire, and E. Moulines, "Detecting aircraft with a low-resolution infrared sensor," _IEEE Trans. Image Process._, vol. 21, no. 6, pp. 3034-3041, Jun. 2012. * [11] F. Liu, X. Shao, P. Han, B. Xiangli, and C. Yang, "Detection of infrared stealth aircraft through their multispectral signatures," _Opt. Eng._, vol. 53, no. 9, 2014. * [12] H. Su, Y. Sheng, P. Du, and K. Liu, "Adaptive affinity propagation with spectral angle mapper for semi-supervised hyperspectral band selection," _Appl. Opt._, vol. 51, no. 14, pp. 2656-2663, 2012. * [13] S. Matteoli, M. Diani, and G. Corsini, "A tutorial overview of anomaly detection in hyperspectral images," _IEEE Aerosp. Electron. Syst. Mag._, vol. 25, no. 7, pp. 5-27, Jul. 2010. * [14] D. Manolakis, "Taxonomy of detection algorithms for hyperspectral imaging applications," _Opt. Eng._, vol. 44, no. 6, 2005, Art. no. 066403. * [15] F. Robey, D. Fuhrmann, E. Kelly, and R. Nitzberg, "A CFAR adaptive matched filter detector," _IEEE Trans. Aerosp. Electron. Syst._, vol. 28, no. 1, pp. 208-216, Jan. 1992. * [16] A. Stocker, I. Reed, and X. Yu, "Multidimensional signal processing for electro-optical target detection," in _Proc. SPIE OE/LASE'90_, Los Angeles, CA, USA, Jan. 14-19, 1990, pp. 218-231. * [17] C. Schwartz, J. Cederquist, and M. Eismann, "Target detection using infrared spectral sensors," in _Proc. SPIE Int. Symp. Opt. Sci., Eng., Instrum._, 1996, pp. 182-194. * [18] I. Reed and X. Yu, "Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution," _IEEE Trans. Acoust., Speech, Signal Process._, vol. 38, no. 10, pp. 1760-1770, Oct. 1990. * [19] D. Borghys, V. Achard, S. R. Rotman, N. Gorelik, C. Perneel, and E. Schweicher, "Hyperspectral anomaly detection: A comparative evaluation of methods," in _Proc. URSI Gen. Assem. Sci. Symp. (GASS)_, 2011, pp. 1-4. * [20] L. Ma and J. Tian, "Anomaly detection in hyperspectral images based on the improved RX algorithms," in _Proc. SPIE Int. Soc. Opt. Eng._, 2007. * [21] Y. Taitano, B. Geier, and K. Bauer, "A locally adaptable iterative RX detector," _EURASIP J. Adv. Signal Process._, vol. 2010, pp. 11:1-11:10, 2010. * [22] A. Schaum, "Hyperspectral anomaly detection: Beyond RX," _Proc. SPIE_, vol. 6565, 2007, Art. no. 656502. * [23] H. Kwon and N. Nasrabadi, "Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery," _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 2, pp. 388-397, Feb. 2005. * [24] S. G. Beaven, D. Stein, and L. E. Hoff, "Comparison of Gaussian mixture and linear mixture models for classification of hyperspectral data," in _Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS)_, 2000, pp. 1597-1599. * [25] A. D. Stocker and A. P. Schaum, "Application of stochastic mixing models to hyperspectral detection problems," _Proc. SPIE_, vol. 3071, pp. 47-60, 1997. * [26] G. Camps-Valls, L. Gomez-Chova, J. Munoz-Mari, J. Vila-Frances, and J. Calpe-Maravilla, "Composite kernels for hyperspectral image classification," _IEEE Geosci. Remote Sens. Lett._, vol. 3, no. 1, pp. 93-97, Jan. 2006. * [27] N. Paragios and S. Osher, _Geometric Level Set Methods in Imaging, Vision, and Graphics_. New York, NY, USA: Springer, 2003. * [28] P. Monasse and F. Guichard, "Fast computation of a contrast invariant image representation," _IEEE Trans. Image Process._, vol. 9, no. 5, pp. 860-872, May 2000. * [29] S. Masnou and J.-M. Morel, "Level lines based disocclusion," in _Proc. IEEE Int. Conf. Image Process. (ICIP)_, 1998, vol. 3, pp. 259-263. * [30] D. Goldberg, _Genetic Algorithms in Search, Optimization, and Machine Learning_. Reading, MA, USA: Addison-Wesley, 1989. * [31] E. Coiro, C. Chatelard, G. Durand, S. Langlois, and J. M. Martinenq, "Experimental validation of an aircraft infrared signature code for commercial airliners," in _Proc. 43rd AIAA Thermophys. Conf._, 2012, Paper 2012-3190. * [32] E. Coiro, "Global illumination technique for aircraft infrared signature calculations," _J. Aircraft_, vol. 50, pp. 103-113, 2013. * [33] B. Tuffin, "Simulation accélérée par les méthodes de Monte Carlo et quasi-Monte Carlo : théorie et applications," Ph.D. dissertation, Univ. Rennes I, Rennes, France, 1997. * [34] S. Tezuka and H. Faure, "I-binomial scrambling of digital nets and sequences," _J. Complexity_, vol. 19, 2003. * [35] P. Simoneau _et al._, "MATISSE: Advanced Earth modeling for imaging and scene simulation," in _Proc. Int. Symp. Remote Sens._, 2002, pp. 39-48. * [36] G. Matheron, _Random Sets and Integral Geometry_. Hoboken, NJ, USA: Wiley, 1975. * [37] X. Yu, I. Reed, and A. Stocker, "Comparative performance analysis of adaptive multispectral detectors," _IEEE Trans. Signal Process._, vol. 41, no. 8, pp. 2639-2656, Aug. 1993. * [38] F. Maire, S. Lefebvre, E. Moulines, and R. Douc, "Aircraft classification with a low resolution infrared sensor," in _Proc. IEEE Statist. Signal Process. Workshop (SSP)_, 2011, pp. 761-764.
Florian Maire received the Ph.D. degree in statistics from the Université Pierre et Marie Curie, Paris, France, in 2014. He is currently working as a Postdoctoral Research Fellow with University College Dublin, Dublin, Ireland. His research interests include uncertainty modeling, inference in missing data models, and computational statistics at large.
Sidonie Lefebvre received the Ph.D. degree in mechanics of materials from the École Centrale Paris, Paris, France, in 2006. She is currently working as a Research Scientist in Statistics with the Department of Theoretical and Applied Optics, ONERA, Paris, France. Her research interests include the design and modeling of computer experiments, uncertainty and sensitivity analysis, and target detection and classification.
We address the problem of detecting a stealth aircraft flying far away from an observer under limited visibility conditions using its multispectral signature. In such an environment, the aircraft is a very low-contrast target, i.e., the target spectral signature may have a similar magnitude to the background clutter. Therefore, methods accounting only for the spectral features of the target, while leaving aside its spatial pattern, may either lead to poor detection statistics or a high false alarm rate. We propose a new detection method that accounts for both spectral and spatial dispersions by inferring level sets of the Mahalanobis transform of the multispectral image. This combines the approach of the well-known Reed-Xiaoli (RX) detector with some elements of the level set methods for shape analysis. This algorithm is in turn used to specify the wavelength bands that maximize the aircraft detection probability for a given false alarm rate. This methodology is illustrated in a typical scenario, consisting of a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 30 000 simulated multispectral infrared signatures (IRS). The results emphasize that, in the context of aircraft detection, there is great interest in using multispectral IRS rather than integrated IRS, as long as the IR bands are well chosen.
_Index Terms_—Aircraft detection, anomaly detection, multispectral infrared signature (IRS), spectral band selection.
Digital Object Identifier 10.1109/JSTARS.2015.2457514
# Few-Shot Transfer Learning for SAR Image Classification Without Extra SAR Samples
Tianxiang Wang is not an author of this work; the authors are Yuan Tai, Yihua Tan, Shengzhou Xiong, Zhaojin Sun, and Jinwen Tian. Manuscript received December 5, 2021; revised February 9, 2022; accepted February 25, 2022. Date of publication March 1, 2022; date of current version March 21, 2022. This work was supported in part by the National Natural Science Foundation of China under Grant 41371339 and in part by the Fundamental Research Funds for the Central Universities under Grant 2017KFYXJ179. _(Corresponding author: Yihua Tan.)_ The authors are with the School of Artificial Intelligence and Automation, State Key Laboratory of Multispectral Information Processing Technology, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/JSTARS.2022.3155406 ## I Introduction Synthetic aperture radar (SAR) imaging benefits from the ability of radar signals to propagate in occluded weather or at night. Radar signals are sent from moving antennas, and the reflected signals are collected for subsequent signal processing to produce high-resolution images regardless of weather conditions and shielding. Therefore, SAR imaging is a powerful technique in many applications, such as continuous environmental monitoring, large-scale surveillance [1], Earth remote sensing [2], and military investigation. Image classification is one of the basic tasks in these applications. Recently, deep learning (DL) has become a popular and effective solution to SAR image classification when a large number of labeled training samples is available. However, in some application scenarios, only a few labeled SAR samples of the targets of interest can be obtained because of collection difficulty. In such scenarios, the problem has to be studied by few-shot learning, which trains the network with few labeled SAR samples (fewer than 20) of each class. Existing few-shot learning methods for SAR are mainly divided into two types. 1. Transfer learning-based (TL-based) methods that match features between the electro-optical (EO) and SAR domains with extra similar SAR samples. For example, Rostami _et al._[3] proposed to minimize the distance between the feature distributions of unlabeled SAR and EO samples and then fine-tuned the network with few labeled SAR samples. The performance of this method relies on the extra unlabeled SAR samples, which are required to belong to classes similar to those of the testing samples. 2. Meta-learning-based methods that learn from similar labeled SAR samples without using EO samples. For example, Wang _et al._[4] pretrained the network with hundreds of labeled samples of seven supporting categories in the MSTAR dataset before fine-tuning the network with few labeled samples of the three target categories. However, a large number of extra labeled SAR samples is required by such methods. In general, extra SAR samples, including unlabeled samples of the novel classes or labeled SAR samples of similar categories (in [4], the novel classes and the supporting classes were different subclasses of tanks), are necessary to achieve good performance for the existing TL-based few-shot methods and meta-learning-based few-shot methods. However, in some extreme application scenarios, such as surveillance, it is unrealistic to collect extra similar SAR samples, which will result in a severe decline in the performance of existing few-shot learning methods. 
Therefore, a few-shot learning method for SAR image classification that can mitigate the difficulty of the scarcity of extra SAR samples is critical, which we refer to as the extreme few-shot learning method. In this article, we propose a TL-based extreme few-shot learning method that can reduce the dependence on extra similar SAR samples. In fact, two core reasons make extra SAR samples critical to the performance of existing few-shot TL methods. First, the large domain shift between EO and SAR samples results in part of the features extracted by a network pretrained with EO samples being unsuitable for SAR image classification. For example, Fig. 1 shows the comparison between samples of aircraft and vehicles in the EO and SAR domains. We can see that their shapes are similar, but their textures are quite different. This phenomenon indicates that SAR and EO samples of the same category share “common features,” such as shape, and present distinct “individual features,” such as texture. Therefore, common features of SAR samples extracted by a network pretrained with EO samples are also very effective for SAR image classification. On the other hand, the extraction of individual features of SAR samples is transferred from that of EO samples, which is harmful to SAR image classification. Existing methods have to learn from extra SAR samples of similar classes to supplement those individual features of targets in the SAR domain. Second, extra SAR samples can first update the network parameters to a suitable initial point so that the network can be well trained more easily with few labeled SAR samples. Based on the aforementioned analyses, we must take measures to compensate for the loss of the two advantages brought by extra SAR samples, since we have only few labeled samples in the extreme few-shot case. For the first loss, that of the supplementation of individual features, if we can strengthen common features and suppress the transferring of individual features of EO samples, it is likely that the network can classify SAR samples more accurately based on the enhanced common features. This aim can be realized by transferring common features from a source network to a target network, and connecting features between them is a popular and effective way [5, 6, 7]. In this article, we transfer common features from a complicated source network to a simplified target network also by connecting features between both networks. Differently from prior work, we propose a novel transferring structure for the extreme few-shot case. Specifically, we construct a complex source network to capture rich features and a small target network that is more easily trained in the extreme few-shot case. The transferring structure connecting the source and target networks is in charge of enhancing the transfer of common features and suppressing the transfer of individual features. However, to achieve this goal, the following two factors need to be considered. 1. Which features of the source network are common features and which are individual features. 2. How different layers of the target network are adjusted to receive the two kinds of features with suitable weights. This can be implemented by connecting all the feature channels in the source network to each layer of the target network, in which each connection corresponds to a weight combination. The combination includes two types of weights that indicate the transferring extent at the layer level and at the feature-channel level. However, it is not easy to determine both of them manually, so the transferring structure needs to be learnable. 
An attention mechanism is a popular way to learn the weights of the features we care about by designing various attention modules [8, 9, 10]. However, most attention mechanism-based methods construct the attention module with fully connected (FC) layers, which bring a large number of parameters due to the high dimensions of the features; this increases the difficulty of training the attention module and makes such attention modules difficult to train well in the extreme few-shot case. Therefore, in this article, we propose a novel attention module, namely the connection-free attention module, which avoids the large number of parameters brought by the FC layer by using learnable vectors instead of FC layers. For the second loss, that of a good initial point, we have to design an appropriate parameter update strategy with few labeled samples. Considering that the features extracted by different parameters of the network are of different importance to the classification ability, it is reasonable to infer that training the important parameters more accurately can give the network better generalization ability. By concentrating on updating important parameters, we can probably mitigate the optimization problem and avoid training all the parameters equally. Based on this point, we first need to find a way to measure the importance of the parameters of the network. A Bayesian convolutional neural network (Bayesian-CNN) [11] models the uncertainty of the parameters, which provides a basis for measuring their importance. Thus, we can regard parameters with higher uncertainty as less important, because the important parameters should be stable for a well-trained network. Therefore, the Bayesian-CNN is introduced as the target network to measure the importance of each parameter according to its uncertainty. Furthermore, to train these important parameters more accurately, based on the Bayesian-CNN, we propose a training strategy for the extreme few-shot case, namely accurately updating important parameters (AUIP). Specifically, we first pretrain the target network with EO samples using the initial learning rate, and then we train the target network with few labeled SAR samples using an adaptive learning rate for each parameter according to its uncertainty. Generally, the structure of our method is shown in Fig. 2, which consists of the following three parts. 1. A complex CNN is introduced as the source network to capture rich features of EO samples. 2. Several connection-free attention modules are constructed to selectively transfer common features from the source network to the target network. 3. A small Bayesian-CNN is introduced as the target network to capture effective features for SAR image classification.
Fig. 1: Comparison of samples between the EO domain and the SAR domain. (a) Comparison of aircraft in the EO and SAR domains. (b) Comparison of vehicles in the EO and SAR domains.
The training details are introduced in Section III-F and Algorithm 1. In summary, the contributions of this article are as follows. 1. A novel few-shot transfer learning method for SAR image classification is proposed for the extreme few-shot case. It focuses on application scenarios in which no extra similar SAR samples are available. 2. The connection-free attention module is proposed to selectively transfer common features from a complex source network to a small target network in the extreme few-shot case. 3. The Bayesian-CNN is introduced as the target network to measure the importance of its parameters. 
Based on this, the AUIP training strategy is proposed to solve the problem that the number of training samples is insufficient to update all parameters to suitable values in the extreme few-shot case. ## II Related Work ### _Few-Shot Learning in SAR Image Classification_ Few-shot classification methods learn a classifier with only few labeled training samples of each class [12]. The TL strategy, which transfers knowledge from extra similar SAR samples, is widely used in the SAR domain. Huang _et al._[13] used an unsupervised learning method to generate many unlabeled SAR samples to train a deep autoencoder. Zhang _et al._[14] transferred knowledge from another SAR task, where labeled data were easy to obtain. Shang _et al._[15] modified a CNN with an information recorder used to store spatial features of labeled samples to label the unlabeled samples according to spatial similarity. However, in the extreme few-shot case, the extra SAR samples are unavailable, causing a decline in the performance of these methods. On the other hand, the meta-learning strategy in image classification [16] has become popular in few-shot learning; it predicts novel classes based on few labeled samples and a meta-dataset containing thousands of base classes with many labeled samples. In the SAR domain, for lack of base classes, Wang _et al._[4] repeatedly chose seven classes of the MSTAR dataset [17] a thousand times to construct the meta-dataset and then fine-tuned the network with few labeled samples of the three target classes. The base classes are very similar to the target classes in this method to ensure performance. However, in the extreme few-shot case, the base classes are unavailable. ### _Feature Transferring_ Transferring features from a source network to a target network has become popular, and recently, attention-based methods have proven effective. Jang _et al._[7] constructed an attention-based meta-network consisting of two kinds of attention modules to selectively transfer features from a source network to a target network.
Fig. 2: Training process. Each connection-free attention module generates a \\(w^{m,n}\\) and a \\(\\lambda^{m,n}\\) to weight the feature channels and the feature connection when transferring features from the \\(m\\)th layer of the source network to the \\(n\\)th layer of the target network. The target network is responsible for classifying SAR samples. In the Bayesian ResNet block, each parameter of the ResNet block is defined by a Gaussian distribution. Note that during the testing process, only the target network works.
Ji _et al._[18] also proposed an attention-based meta-network with a different structure from that of Jang _et al._[7]. This meta-network learns relative similarities between features and applies the identified similarities to control the distillation intensities of all possible pairs. Zagoruyko _et al._[6] computed statistics of features across the channel dimension to construct a spatial attention module to transfer features from a source network to a target network. These methods utilized attention modules to selectively transfer effective features from a source network to a target network, enhancing the feature extraction ability of the target network. However, these attention-based methods are unsuitable for the extreme few-shot case because of the large number of parameters brought by the attention modules in them. 
Thus, in this article, a lightweight connection-free attention module is proposed to transfer common features in the extreme few-shot case.

### _Bayesian Convolutional Neural Network_

The Bayesian approach has been studied in the field of learning neural networks for a few decades [19]. Several methods have been proposed for the Bayesian-CNN, such as the Laplace approximation [20], variational inference [21, 22], and probabilistic back-propagation [23]. Recently, problems in different fields have been studied with the Bayesian-CNN. Kendall and Gal [24] modeled the uncertainty with the Bayesian-CNN to study the confidence of the output of the network. Kendall _et al._ [25] decided the weights among tasks in the multitask learning problem according to the uncertainty of each task. Ebrahimi _et al._ [26] applied the Bayesian-CNN to continual learning and achieved state-of-the-art performance on several open datasets. However, the potential of the Bayesian-CNN in the field of few-shot transfer learning has not been exploited.

## III Problem Formulation and Proposed Method

### _Problem Formulation_

This article aims to learn a network without extra SAR samples and with only a few labeled SAR samples, namely extreme few-shot learning. Specifically, let \(D_{T}=(X_{T},Y_{T})\) be the few labeled SAR samples of the target classes and \(D_{T}^{\prime}\) be the extra SAR samples of similar classes, including labeled or unlabeled samples. \(X\) and \(Y\) are the images and the corresponding class labels, respectively. Compared with common few-shot TL methods that train a network with both \(D_{T}=(X_{T},Y_{T})\) and \(D_{T}^{\prime}\), an extreme few-shot learning method trains a network only with \(D_{T}=(X_{T},Y_{T})\). In general, TL-based few-shot methods belong to semisupervised algorithms because they use unlabeled SAR samples to support training, and meta-learning-based few-shot methods follow the episode training strategy, which needs extra labeled SAR samples of classes similar to the target classes. In contrast, extreme few-shot learning methods are supervised algorithms and do not follow the episode training strategy.

### _Overview of the Proposed Method_

In this section, we first give an overview of the structure of the source network and the target network, and then briefly introduce the training process. As shown in Fig. 2, the source network contains eight ResNet blocks, namely the 8-Resblock-based CNN. The target network is a Bayesian-CNN that only contains four Bayesian ResNet blocks, namely the 4-Resblock-based Bayesian-CNN. The ResNet block is the basic unit of ResNet [27], which contains three convolution layers and one down-sampling layer. The Bayesian ResNet block represents each parameter of the ResNet block as a Gaussian distribution. The details of the connection-free attention module and the Bayesian-CNN are introduced in Sections III-C and III-D, respectively. The training process is generally divided into two parts.

1. The source network and the target network are pretrained independently from scratch with EO samples \(D_{S}=(X_{S},Y_{S})\).
2. The target network and the connection-free attention modules are trained with a few labeled SAR samples \(D_{T}=(X_{T},Y_{T})\).

As shown in Fig. 2, the cores of the training process are as follows.
1. \(\lambda^{m,n}\) and \(w^{m,n}\) are learned by a connection-free attention module to weight the feature connection \((S^{m},T^{n})\) and the feature channels in the feature pair \((S^{m}(x_{t}),T^{n}(x_{t}))\), respectively, where \(S^{m}\) and \(T^{n}\) represent the \(m\)th layer of the source network and the \(n\)th layer of the target network, respectively, and \(S^{m}(x_{t})\) and \(T^{n}(x_{t})\) are the features of the \(m\)th layer of the source network and the \(n\)th layer of the target network for the input \(x_{t}\), respectively.
2. The target network is trained with the AUIP training strategy.

The details of the training scheme are shown in Algorithm 1.

### _Learn to Transfer Common Features With Connection-Free Attention Module_

We aim to enhance the common features and suppress the EO-specific (individual) features in the target network \(T_{\theta}\), parameterized by \(\theta\), by transferring common features from the source network \(S\). To achieve this aim, we connect all the feature channels in the source network to each layer of the target network with different weights. In addition, the connection-free attention module is proposed to learn the weights for both the feature channels in the feature pair \((S^{m}(x_{t}),T_{\theta}^{n}(x_{t}))\) and the feature connection \((S^{m},T^{n})\) when transferring \(S^{m}(x_{t})\) to \(T_{\theta}^{n}(x_{t})\). Specifically, the following two operations are needed for transferring \(S^{m}(x_{t})\) to \(T_{\theta}^{n}(x_{t})\).

1. The numbers of feature channels of the feature pair \((S^{m}(x_{t}),T^{n}(x_{t}))\) are equalized by a \(1\times 1\) convolution function \(z_{\theta}^{m,n}\).
2. The F-norm of the difference between \(S^{m}(x_{t})\) and \(z_{\theta}^{m,n}(T_{\theta}^{n}(x_{t}))\) is minimized. This operation is also called "feature matching" [5].

Based on the two operations, the following equation is given:

\[\mathcal{L}_{\text{match}}^{m,n}(\theta\mid x_{t})=\sum_{c}\|S^{m}(x_{t})_{c}-(z_{\theta}^{m,n}(T_{\theta}^{n}(x_{t})))_{c}\|_{F}^{2} \tag{1}\]

where \(m\) and \(n\) represent the \(m\)th layer of the source network and the \(n\)th layer of the target network, respectively, \(c\) indexes the \(c\)th feature channel, and \(S^{m}(x_{t})\) is resized to the same spatial size as \(T_{\theta}^{n}(x_{t})\). Note that \(\theta\) contains the parameters of both \(T_{\theta}\) and \(z_{\theta}\), but \(z_{\theta}\) is not included in the target network (see Fig. 2). To weight the feature channels and the feature connection when transferring \(S^{m}(x_{t})\) to \(T_{\theta}^{n}(x_{t})\), Jang _et al._ [7] constructed a meta-network implemented with FC layers to obtain \(w^{m,n}\) and \(\lambda^{m,n}\), which is essentially an attention module. Thus, the meta-network is renamed the FC-based attention module in this article for easier understanding. In fact, the number of parameters of the FC-based attention module reaches \(10^{6}\) because of the FC layers, making it difficult to train well in the extreme few-shot case. Therefore, to avoid the large number of parameters brought by the FC layers, we propose the connection-free attention module consisting of channel attention and feature-connection attention. The number of parameters of our attention module only reaches \(10^{3}\).
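Before detailing the attention weights, the channel equalization and feature matching in (1) can be made concrete with the minimal PyTorch-style sketch below; the module name, tensor shapes, and the use of bilinear resizing are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMatchingLoss(nn.Module):
    """Sketch of the matching term in (1): a 1x1 convolution z equalizes the
    channel count of the target feature, then the squared Frobenius norm of
    the per-channel difference to the (resized) source feature is summed."""

    def __init__(self, target_channels: int, source_channels: int):
        super().__init__()
        # z_theta^{m,n}: 1x1 convolution mapping target channels -> source channels
        self.z = nn.Conv2d(target_channels, source_channels, kernel_size=1)

    def forward(self, source_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
        # source_feat: (B, C_s, H_s, W_s); target_feat: (B, C_t, H_t, W_t)
        projected = self.z(target_feat)                        # (B, C_s, H_t, W_t)
        # Resize the source feature to the spatial size of the projected target feature
        source_resized = F.interpolate(source_feat, size=projected.shape[-2:],
                                       mode="bilinear", align_corners=False)
        diff = source_resized - projected                      # per-channel difference
        # Sum of squared Frobenius norms over channels, averaged over the batch
        return diff.pow(2).sum(dim=(-2, -1)).sum(dim=1).mean()
```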
The comparison between the meta-network and our attention module is shown in Fig. 3. Specifically, the \(w^{m,n}\) for weighting the feature channels in the feature pair \((S^{m}(x_{t}),T_{\theta}^{n}(x_{t}))\) is computed by the channel attention as

\[w^{m,n}=\text{softmax}(\text{AvgPooling}(\mathbf{f}(S^{m}(x_{t}),\text{CA}))) \tag{2}\]

where CA represents the channel attention, which is a \(1\times C\) vector with \(C\) the number of feature channels of \(S^{m}(x_{t})\), \(\mathbf{f}(a,b)\) denotes the channel-wise element-wise operation in which each element in the \(c\)th channel of \(a\) is multiplied by the \(c\)th element of \(b\), AvgPooling is the global average pooling operation, softmax is the softmax function that makes \(\sum_{c}w_{c}^{m,n}=1\), and \(w^{m,n}\) is a \(1\times C\) vector. In addition, the \(\lambda^{m,n}\) for weighting the feature connection \((S^{m},T^{n})\) is computed by the feature-connection attention as

\[\lambda^{m,n}=\text{ReLU6}(\mathbf{g}(\text{AvgPooling}(S^{m}(x_{t})),\text{FCA})) \tag{3}\]

where FCA represents the feature-connection attention, which is a \(C\times 1\) vector, \(\mathbf{g}(a,b)\) denotes the matrix multiplication of \(a\) and \(b\), ReLU6 [28] is a function that prevents \(\lambda^{m,n}\) from becoming too large, and \(\lambda^{m,n}\) is a scalar. Note that each parameter of both the channel attention and the feature-connection attention is learnable, and we use \(\phi\) to represent the parameters of the attention module. The comparison results between the meta-network and our attention module are given in Table IX, and the detailed difference between our method and the method in [7] is discussed in Section IV-F. Based on the outputs of our attention module, the objective function for transferring features from the source network to the target network is defined as follows:

\[\mathcal{L}_{\text{transfer}}\left(\theta,\phi\mid x_{t}\right)=\sum_{m,n\in\mathcal{Q}}\lambda^{m,n}\sum_{c}\mathbf{f}(\mathcal{F}_{c}^{m,n},w_{c}^{m,n}) \tag{4}\]

where \(\mathcal{Q}\) is the set of candidate feature connections, \(\mathbf{f}\) is the same as that in (2), and \(\mathcal{F}^{m,n}=\|S^{m}(x_{t})-z_{\theta}^{m,n}(T_{\theta}^{n}(x_{t}))\|_{F}^{2}\) is a \(1\times C\) vector.
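Under the same illustrative assumptions, the sketch below shows how the two learnable vectors CA and FCA could produce \(w^{m,n}\) and \(\lambda^{m,n}\) as in (2) and (3) and weight the per-channel matching terms as in (4); the per-sample handling of the weights and all identifiers are assumptions for illustration only, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConnectionFreeAttention(nn.Module):
    """Sketch of the connection-free attention module: two learnable vectors
    (channel attention CA and feature-connection attention FCA) replace the
    FC layers, so only 2*C parameters are introduced per connection."""

    def __init__(self, source_channels: int):
        super().__init__()
        self.ca = nn.Parameter(torch.ones(source_channels))    # CA, a 1 x C vector
        self.fca = nn.Parameter(torch.ones(source_channels))   # FCA, a C x 1 vector

    def forward(self, source_feat: torch.Tensor):
        # source_feat: (B, C, H, W) = S^m(x_t)
        # Eq. (2): channel-wise scaling, global average pooling, softmax over channels
        scaled = source_feat * self.ca.view(1, -1, 1, 1)
        pooled = scaled.mean(dim=(-2, -1))                      # (B, C) global average pooling
        w = F.softmax(pooled, dim=1)                            # w^{m,n}, sums to 1 over channels
        # Eq. (3): pooled source feature multiplied with FCA, clipped by ReLU6
        lam = F.relu6(source_feat.mean(dim=(-2, -1)) @ self.fca)  # one scalar per sample
        return w, lam

def transfer_loss(per_channel_match: torch.Tensor, w: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Eq. (4) for a single feature connection: weight the per-channel matching
    terms F^{m,n} (shape (B, C)) by w^{m,n} and the whole connection by lambda^{m,n}."""
    return (lam * (per_channel_match * w).sum(dim=1)).mean()
```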
### _Learn to Accurately Update Important Parameters_

To compensate for the loss of the good initial point that extra SAR samples would otherwise provide, we design a training strategy that gives the network better generalization ability by training the important parameters more accurately, namely AUIP. To do so, we first need to measure the importance of the parameters of the network. To achieve this goal, the Bayesian-CNN [11] is introduced as the target network because it models the uncertainty of the parameters, which provides a basis for measuring their importance.

First, we briefly introduce the Bayesian-CNN. The Bayesian-CNN models the parameter uncertainty by representing each parameter with a Gaussian distribution. Specifically, the \(i\)th parameter \(\theta_{i}\in\theta\) is defined as a Gaussian distribution with mean \(\mu_{i}\) and standard deviation \(\sigma_{i}\), represented as \(\theta_{i}\sim N(\mu_{i},\sigma_{i})\). When the Bayesian-CNN propagates forward, \(\theta_{i}\) is sampled as

\[\xi_{i}=\mu_{i}+\log(1+\exp(\sigma_{i}))\circ\epsilon,\quad\epsilon\sim N(0,1) \tag{5}\]

where \(\xi_{i}\) participates in the calculation of the network output as the sampled value of \(\theta_{i}\), \(\circ\) denotes element-wise multiplication, and \(\epsilon\) is a random value sampled from the Gaussian distribution \(N(0,1)\).

Fig. 3: Comparison between the FC-based attention module and our connection-free attention module. The FC-based attention module contains two FC layers with \((C+1)\times C\) parameters, and our attention module contains channel attention and feature-connection attention with \(2\times C\) parameters.

To visualize the difference between the CNN and the Bayesian-CNN, Fig. 4 shows a comparative example of a \(3\times 3\) convolution kernel in the CNN and the Bayesian-CNN. As shown in Fig. 4, each parameter in the CNN is a fixed value, whereas in the Bayesian-CNN it is a Gaussian distribution. In fact, the output of the Bayesian-CNN during the training process is obtained by averaging the network outputs over multiple samplings of the parameters

\[\text{output}=\frac{1}{N}\sum_{j=1}^{N}B(x_{t},\xi^{j}) \tag{6}\]

where \(B\) represents the Bayesian-CNN, output is the output of the Bayesian-CNN, representing the score for each class of a training sample \(x_{t}\), \(N\) is the number of sampling times, and \(\xi^{j}\) is the \(j\)th sampling result of the network parameters.

Intuitively, the important features of the Bayesian-CNN should be stable, because the output of a well-trained Bayesian-CNN for a certain sample should be relatively fixed across different samplings. Therefore, the parameters extracting these important features should be less volatile in each sampling. Based on the abovementioned analysis, it is reasonable to infer that the parameters with low standard deviation significantly impact the network output, which shows the importance of such parameters to the output of the network. Ebrahimi _et al._ [26] verified this inference in the continual learning task [29]; they focused on the unimportant parameters of the network to solve the problem of "forgetting." Unlike [26], this article focuses on the important parameters by increasing their learning rates and reducing those of the unimportant parameters. Specifically, the learning rate \(\beta_{i}\) of \(\theta_{i}\in\theta\) is scaled according to the standard deviation \(\sigma_{i}\) by

\[\beta_{i}\leftarrow\beta*\gamma_{i} \tag{7}\]

where \(\gamma_{i}=\frac{1}{\log(1+e^{\sigma_{i}})}\) ensures a positive value and \(\beta\) is the initial learning rate. Fig. 5 illustrates how important and unimportant parameters change while transferring from the EO domain to the SAR domain [see Fig. 5(b) and (c)]. The values of \(\mu_{i}\) and \(\sigma_{i}\) of the important parameters (the blue one and the green one, respectively) change more significantly. Note that Fig. 5 is a schematic illustration rather than an actual result.
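A minimal sketch of the weight sampling in (5) and the AUIP learning-rate scaling in (7) is given below, assuming that the \(\sigma_{i}\) appearing in (7) is the same raw deviation parameter passed through the softplus in (5); class and function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BayesianConv2d(nn.Module):
    """Sketch of a Bayesian convolution layer: each weight has a mean mu and a
    raw deviation sigma; a weight sample follows (5),
    xi = mu + log(1 + exp(sigma)) * eps with eps ~ N(0, 1)."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_ch, in_ch, k, k))
        self.sigma = nn.Parameter(torch.full((out_ch, in_ch, k, k), -3.0))

    def sample_weight(self) -> torch.Tensor:
        std = torch.log1p(torch.exp(self.sigma))   # softplus keeps the std positive
        eps = torch.randn_like(std)                # eps ~ N(0, 1)
        return self.mu + std * eps                 # Eq. (5)

def auip_learning_rates(base_lr: float, sigma: torch.Tensor) -> torch.Tensor:
    """Eq. (7): per-parameter learning rate beta_i = beta * gamma_i with
    gamma_i = 1 / log(1 + exp(sigma_i)); parameters with lower uncertainty
    (smaller sigma_i), i.e., the important ones, receive larger learning rates."""
    return base_lr / torch.log1p(torch.exp(sigma))
```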
### _Objective Function_

Up to now, we can learn the target network in the extreme few-shot case by enhancing the common features of the target network and updating the important parameters of the target network more accurately. However, learning the Bayesian-CNN requires estimating the posterior distribution of its parameters, and a general loss function for image classification, such as the cross-entropy loss \(L_{\text{CE}}\), cannot learn the posterior distribution. Instead, a popular method for training the Bayesian-CNN is to learn an approximating distribution \(q(\theta|\iota)\), parameterized by \(\iota\), that minimizes the Kullback-Leibler (KL) divergence with the true Bayesian posterior of the parameters

\[\iota^{*}=\arg\min_{\iota}\text{KL}(q(\boldsymbol{\theta}|\iota)\|P(\boldsymbol{\theta}|x_{t},y_{t})). \tag{8}\]

This objective function can be deduced as

\[\mathcal{L}_{\text{bayes}}(\theta|x_{t},y_{t},\iota)=\text{KL}[q(\theta|\iota)\|P(\theta)]-E_{q(\theta|\iota)}[\log(P(y_{t}|x_{t},\theta))] \tag{9}\]

where \(P(\theta)\) represents the prior distribution we set and \(q(\theta|\iota)\) is the variational posterior distribution. Further, (9) can be approximated using \(N\) Monte Carlo samples from the variational posterior [30]

\[\mathcal{L}_{\text{bayes}}(\theta|x_{t},y_{t},\iota)\approx\sum_{i=1}^{N}\log q\left(\boldsymbol{\theta}^{i}|\iota\right)-\log P\left(\boldsymbol{\theta}^{i}\right)-\log\left(P\left(y_{t}|x_{t},\theta^{i}\right)\right) \tag{10}\]

where \(N\) is the number of sampling times. Finally, based on (10), the total objective function is given by

\[\mathcal{L}_{\text{total}}\left(\theta,\phi|x_{t},y_{t},\iota\right)=\mathcal{L}_{\text{transfer}}\left(\theta,\phi\mid x_{t}\right)+\mathcal{L}_{\text{bayes}}(\theta|x_{t},y_{t},\iota) \tag{11}\]

where \(\mathcal{L}_{\text{transfer}}\left(\theta,\phi\mid x_{t}\right)\) is the objective function for transferring features and \(\mathcal{L}_{\text{bayes}}(\theta|x_{t},y_{t},\iota)\) is the objective function for the classification task.

### _Training Scheme_

First, we pretrain the source and target networks with EO samples to obtain rich features and reduce the transferring difficulty. Then, we learn the parameters of the attention modules and the target network with a few labeled SAR samples. In each epoch, we first update the parameters with \(\mathcal{L}_{\text{transfer}}\) to transfer common features, then update them with \(\mathcal{L}_{\text{bayes}}\) to learn to classify images, and finally update them with both objective functions to learn to transfer common features and classify images together. Algorithm 1 shows the details of our training scheme.

Fig. 4: Comparison of parameters of the CNN (left-hand side) and the Bayesian-CNN (right-hand side).

Fig. 5: Diagram of the evolution of parameter distributions through TL from the EO domain to the SAR domain. (a) Parameters initialized by Gaussian distributions. (b) Posterior distributions after pretraining with training samples of the EO domain. (c) Posterior distributions after training with training samples of the SAR domain.
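The Monte Carlo approximation in (10) can be sketched as follows; the scale-mixture prior follows the setting described later in Section IV-B3, and the helpers `sample_weights` and `forward_with` are assumed interfaces of the Bayesian target network, not an actual API. The total loss of (11) is then obtained by adding \(\mathcal{L}_{\text{transfer}}\) to this term.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal

def log_mixture_prior(theta: torch.Tensor, sigma1: float = 1.0,
                      sigma2: float = 0.05, pi: float = 0.5) -> torch.Tensor:
    """Log-density of a two-component zero-mean Gaussian scale-mixture prior P(theta)
    (variances 1 and 0.0025, as assumed from the settings in Section IV-B3)."""
    p1 = Normal(0.0, sigma1).log_prob(theta).exp()
    p2 = Normal(0.0, sigma2).log_prob(theta).exp()
    return torch.log(pi * p1 + (1.0 - pi) * p2).sum()

def mc_bayes_loss(model, x, y, n_samples: int = 3) -> torch.Tensor:
    """Monte Carlo approximation of (10): for each sampled weight vector theta^i,
    accumulate log q(theta^i | iota) - log P(theta^i) - log P(y | x, theta^i)."""
    loss = torch.zeros(())
    for _ in range(n_samples):
        theta, log_q = model.sample_weights()      # assumed helper: weights and log q(theta | iota)
        logits = model.forward_with(theta, x)      # assumed helper: forward pass with sampled weights
        log_lik = -F.cross_entropy(logits, y, reduction="sum")
        loss = loss + log_q - log_mixture_prior(theta) - log_lik
    return loss                                    # summed over samples, as written in (10)
```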
## IV Experiment

### _Data Preparation_

#### IV-A1 Ship Dataset

The ship dataset of the EO domain [31] contains 4000 RGB \(80\times 80\) images, which are taken from Planet satellite imagery of the San Francisco Bay area. All the samples of the dataset are used to pretrain the model. The ship dataset of the SAR domain comes from a public release dataset [32], which contains three classes of images, including 1596 positive samples of ships, 3192 false-positive samples of ship-like areas, and 9588 negative samples of ocean areas. Fig. 6 shows nine samples of the three classes. In this article, we define a binary classification problem, where each sample is considered to contain ships (positive data points) or no ship (false positives and negatives). To balance the numbers of positive and negative samples, the ship dataset of the SAR domain contains all 1596 positive samples, 798 false-positive samples, and 798 negative samples. The false-positive samples and negative samples are randomly selected. The training set is randomly chosen from the dataset, and the testing set consists of all the other samples.

#### IV-A2 Aircraft Dataset

The aircraft dataset of the EO domain [17] contains 3891 positive samples of aircraft and 8154 negative background samples in 408 images. We build an aircraft dataset of the SAR domain. The dataset contains 224 aircraft from TerraSAR-X images of Singapore National Airport, with a resolution of 3 m, 112 aircraft from Wuxi airport of China, with a resolution of 1 m, and 528 negative samples taken from the background around the aircraft. Fig. 7 shows six examples in the aircraft dataset of the SAR domain. The training set is randomly chosen from the dataset, and the testing set consists of all the other samples.

#### IV-A3 Vehicle Dataset

The vehicle dataset of the EO domain [17] contains 2639 positive samples and 8154 negative samples of background. The vehicle dataset of the SAR domain is the MSTAR dataset [33], which consists of a training set and a testing set. Fig. 8 shows nine samples of three different classes. The training samples are randomly sampled from the training set, and all the samples of the testing set are used to verify our method.

Fig. 6: Samples of the ship dataset in the SAR domain. (a) The samples of positives in SAR images. (b) The samples of false positives in SAR images. (c) The samples of negatives in SAR images.

Fig. 7: Samples of the aircraft dataset in the SAR domain. (a) The samples of aircraft in SAR images. (b) The samples of negatives in SAR images.

### _Settings_

#### IV-B1 Few-Shot Task

The few-shot task assumes that only a few labeled data of each category are used to train the network, namely N-way K-shot. For example, in the 2-way 8-shot case, eight examples are selected for each of the two categories. Note that, for the ship dataset, the negative training samples are randomly and equally selected from the two kinds of negative samples to balance the number of positive and negative samples. For example, in the 2-way 2-shot case, we randomly take one negative sample from the ship-like areas and one from the ocean areas when taking two positive samples of ships.

#### IV-B2 Pretraining

All the networks are pretrained with the corresponding EO dataset from scratch until the objective function converges. The batch size is 256 and the learning rate is 0.1. The pretraining takes little time because the EO datasets only contain thousands of samples. All the pretrained parameters are preserved for the tasks of ship and aircraft classification because the number of categories of the EO dataset is the same as that of the SAR dataset. However, for vehicle classification, the pretrained parameters of the FC layer are discarded because the vehicle dataset of the SAR domain contains ten categories of targets, whereas the EO dataset only has two categories.

#### IV-B3 Training

For the parameters of the attention modules and the target network, the optimizer is Adam with learning rates of \(10^{-4}\) and 0.1, respectively. The prior distribution is a mixture of two zero-mean Gaussian distributions with variances of 1 and 0.0025. The batch size equals the number of training samples. We train each model with the training samples for 200 epochs and test it on the testing set.
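To make the N-way K-shot sampling rule of Section IV-B1 concrete, a minimal sketch for the ship dataset is given below; the data structures are assumed to be plain lists of image chips, and the function name is illustrative.

```python
import random

def sample_ship_episode(positives, ship_like_negs, ocean_negs, k_shot: int, seed: int = 0):
    """Sketch of the 2-way K-shot sampling rule for the ship dataset:
    K positive ship chips, plus K negatives split equally between the two
    kinds of negative samples (ship-like areas and ocean areas)."""
    rng = random.Random(seed)
    pos = rng.sample(positives, k_shot)
    half, rest = k_shot // 2, k_shot - k_shot // 2
    neg = rng.sample(ship_like_negs, half) + rng.sample(ocean_negs, rest)
    train = [(img, 1) for img in pos] + [(img, 0) for img in neg]
    rng.shuffle(train)
    return train

# Example: in the 2-way 2-shot case this takes two ship samples plus one
# negative sample from each of the two negative categories, as described above.
```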
The SAR samples are resized to \(32\times 32\). We randomly select the training samples of the SAR datasets four times, represented as seeds 0-3, where "seed" is the value used by the code to generate random numbers. In order to minimize the shift across different imaging methods, the targets of the EO domain and the SAR domain belong to the same class. For example, the network is pretrained with the ship dataset of the EO domain when the testing samples are ships in the SAR domain.

### _Compared Methods_

Several few-shot learning methods are compared with ours, including the TL-based method [3], the meta-learning-based method [34], and the SAR-pretrained-based method [35]. All the compared methods are trained without extra SAR samples.

#### IV-C1 DTLF

The DTLF [3] is a few-shot learning method that transfers knowledge from the EO domain to the SAR domain with many unlabeled SAR samples.

#### IV-C2 Diversity Transfer Network (DTN)

The DTN [34] is a meta-learning-based method that learns to transfer knowledge from similar labeled samples and combines it with support features to generate training samples for novel categories.

#### IV-C3 NWPU

The NWPU [35] is a SAR-pretrained model that pretrains the network on an annotated EO dataset with a top-two smooth loss function to tackle label noise and imbalanced class problems.

#### IV-C4 L2T

The L2T [7] is a TL method that uses attention modules to decide what features should be transferred to which layer of the network. It performs well when training samples are limited.

#### IV-C5 TransMatch

The TransMatch [36] is a semisupervised few-shot learning method that transfers knowledge from unlabeled data. In this article, we replace these unlabeled samples with a few labeled SAR samples.

### _Results_

In this article, the results are reported as classification accuracy. Each accuracy is represented by the mean and standard deviation over ten experiments, and the subscript represents the standard deviation.

#### IV-D1 Comparative Results With Other Few-Shot Learning Methods

We compare our method with three few-shot learning methods on all datasets, including the TL-based few-shot learning method [3], the meta-learning-based few-shot learning method [34], and the SAR-pretrained-based method [35]. All the compared methods are trained without extra SAR samples. The results are given in Tables I-III. In these tables, the boldface numbers indicate the best performance, as they do in Tables IV-IX. We can see that our method achieves the highest accuracy in all cases.

#### IV-D2 Comparative Results With Shallow Networks

Some recent works show that shallow networks can achieve performance similar to that of complex methods with deep networks in some application scenarios [37, 38, 39, 40]. Therefore, we compare our method with several shallow networks trained from scratch for the N-way 8-shot case, including A-Net [37], LeNet [41], and VGG-11 [42]. We adjust the hyperparameters to get the best performance of these shallow networks. The results are given in Table IV. We can see that A-Net outperforms the other two shallow networks and achieves performance similar to ours for seed 0 of the vehicle dataset. This is because A-Net is specially designed to classify SAR images with limited training samples.

Fig. 8: Samples of the vehicle dataset in the SAR domain. (a) The samples of T62 in SAR images. (b) The samples of D7 in SAR images. (c) The samples of 2S1 in SAR images.
However, a comprehensive analysis of the experimental results on the three datasets shows that our method has obvious performance advantages. Besides, the simple structure of A-Net reduces its ability to fit data, which limits its performance potential.

#### IV-D3 Ablation Experiments for the Training Strategy and Target Network

The results are given in Tables V-VII. In these tables, direct transfer (DT) means the target network is pretrained with the resized EO samples so that all parameters of the network are pretrained. Fine-tune (FT) means the target network is pretrained with the EO samples of the original size so that the parameters of the FC layer of the target network are randomly initialized. Note that in these two settings only the target network is used (there is no source network). Attention module means that we use the connection-free attention modules to transfer features from the source network to the target network. Besides, Bayes means the target network is the 4-Resblock-based Bayesian-CNN, Bayes_AUIP means that the AUIP training strategy is used to train the network, and 4-Res means the target network is the 4-Resblock-based CNN. The results show that the Bayesian-CNN without the AUIP strategy reaches a performance similar to that of the common CNN. This proves that the key to improving the performance is the AUIP training strategy rather than the Bayesian-CNN structure.

#### IV-D4 Ablation Experiments for the Connection-Free Attention Module

To verify the superiority of the connection-free attention module over the FC-based attention module, we conduct ablation experiments on each dataset for the 8-shot case. Table VIII shows that the attention module brings maximum accuracy improvements of 3.2%, 7.1%, and 4.7% on the ship, aircraft, and vehicle datasets, respectively. This is because the performance depends on the ratio \(N^{0.74}/D\) [43], where \(N\) is the number of parameters in the model and \(D\) is the data size. The FC-based attention module contains \(10^{6}\) parameters, theoretically requiring 27 542 training samples, whereas the connection-free attention module contains \(10^{3}\) parameters, theoretically requiring 165 training samples. By comparison, in the extreme few-shot case, the FC-based attention module exacerbates the overfitting phenomenon.

#### IV-D5 Ablation Experiments for Samples of Pretraining

To verify that our method is not limited to the class of the EO samples, we pretrain our model with EO samples of other classes (ship or aircraft) and train it with a few labeled SAR vehicle samples. The results are given in Table IX. Compared with the results in Table III, although the best performance occurs when the EO and SAR samples belong to the same class, pretraining our model with EO samples of other classes still yields better performance than the other few-shot learning methods. This is because common features still exist between EO ships or aircraft and SAR vehicles. Thus, these common features can be transferred from the source network to the target network, enhancing the ability of the target network to extract common features.

#### IV-D6 Analysis of the Performance Equivalence

To illustrate the performance of our method, the 4-Resblock-based CNN trained from scratch with different amounts of SAR samples is compared with our method. The colored number in Table X means that the performance of our method in the 8-shot case of seed 0 is equivalent to that of the network trained with 800 SAR images.
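The parameter-count argument of Section IV-D4 can be checked with a few lines of arithmetic, assuming the required data size is taken as \(D\approx N^{0.74}\); the result of about 166 samples for the connection-free module is close to the 165 quoted above, with the difference due to rounding.

```python
# Rough check of the sample-requirement argument in Section IV-D4: if the data
# size D needed to avoid overfitting scales like N^0.74 (with N the number of
# parameters), the two attention modules require very different amounts of data.
for name, n_params in [("FC-based attention module", 10**6),
                       ("connection-free attention module", 10**3)]:
    d_needed = n_params ** 0.74
    print(f"{name}: N = {n_params:g}, N^0.74 = {d_needed:,.0f} training samples")

# Expected output (approximately):
#   FC-based attention module: N = 1e+06, N^0.74 = 27,542 training samples
#   connection-free attention module: N = 1000, N^0.74 = 166 training samples
```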
\begin{table}
\begin{tabular}{c|cccc|cccc|cccc}
\hline \hline
Dataset & \multicolumn{4}{c|}{Ship} & \multicolumn{4}{c|}{Aircraft} & \multicolumn{4}{c}{Vehicle} \\
\hline
Methods & A-Net [37] & LeNet [41] & VGG-11 [42] & Ours & A-Net [37] & LeNet [41] & VGG-11 [42] & Ours & A-Net [37] & LeNet [41] & VGG-11 [42] & Ours \\
\hline
seed 0 & \(79.21_{\pm 0.05}\) & \(73.53_{\pm 4.25}\) & \(76.57_{\pm 9.06}\) & **89.24\({}_{\pm 1.23}\)** & \(78.43_{\pm 4.23}\) & \(77.73_{\pm 2.12}\) & \(78.14_{\pm 1.69}\) & **84.62\({}_{\pm 0.44}\)** & \(63.19_{\pm 1.27}\) & \(56.60_{\pm 1.57}\) & \(54.17_{\pm 1.56}\) & **65.42\({}_{\pm 1.64}\)** \\
\hline
seed 1 & \(78.52_{\pm 4.84}\) & \(73.83_{\pm 3.54}\) & \(77.47_{\pm 4.14}\) & **85.97\({}_{\pm 2.72}\)** & \(69.85_{\pm 1.52}\) & \(68.48_{\pm 1.43}\) & \(70.12_{\pm 3.54}\) & **73.12\({}_{\pm 0.52}\)** & \(59.40_{\pm 3.26}\) & \(55.49_{\pm 1.46}\) & \(54.33_{\pm 1.42}\) & **63.79\({}_{\pm 1.39}\)** \\
\hline
seed 2 & \(77.92_{\pm 1.52}\) & \(78.06_{\pm 2.78}\) & \(75.89_{\pm 2.17}\) & **86.29\({}_{\pm 1.00}\)** & \(71.42_{\pm 2.43}\) & \(70.22_{\pm 3.25}\) & \(72.67_{\pm 2.31}\) & **76.52\({}_{\pm 0.43}\)** & \(62.42_{\pm 0.41}\) & \(59.18_{\pm 1.62}\) & \(57.26_{\pm 2.60}\) & **66.45\({}_{\pm 1.12}\)** \\
\hline
seed 3 & \(74.26_{\pm 1.77}\) & \(74.20_{\pm 1.12}\) & \(77.41_{\pm 3.57}\) & **87.16\({}_{\pm 2.20}\)** & \(71.92_{\pm 3.28}\) & \(70.21_{\pm 1.32}\) & \(71.07_{\pm 3.23}\) & **75.44\({}_{\pm 0.73}\)** & \(53.90_{\pm 1.39}\) & \(49.16_{\pm 1.33}\) & \(45.01_{\pm 1.06}\) & **60.45\({}_{\pm 1.91}\)** \\
\hline \hline
\end{tabular}
\end{table} TABLE IV: Comparison With Shallow Networks Trained From Scratch for the N-Way 8-Shot Case in Accuracy (in %)

TABLE V: Results of the Ablation Experiment on the Ship Dataset With the DT Method and the Fine-Tune Method for the 2-Way 4-Shot Case in Accuracy (in %). The table compares the 4-Res, Bayes, and Bayes_AUIP target networks under the attention-module, direct-transfer, and fine-tune transfer settings.

#### IV-D7 Analysis of the Feature-Connection Weight

To understand the process of transferring features, we record the values of \(\lambda^{m,n}\) for the feature connections on the vehicle dataset for the 10-way 8-shot case in Table XI. Each row represents a layer of the source network, and each column represents a layer of the target network. As given in Table XI, the features of the middle layers are more likely to be transferred to the target model: \(\lambda^{3,4}=0.62\), \(\lambda^{3,3}=0.76\), \(\lambda^{3,2}=0.75\), \(\lambda^{3,1}=0.73\), \(\lambda^{2,4}=0.66\), \(\lambda^{2,3}=0.79\), \(\lambda^{2,2}=0.73\), and \(\lambda^{2,1}=0.72\). The weights of the other feature connections are much smaller than these values. This phenomenon indicates that the features in the middle layers are more likely to be the common features suitable for transferring from the EO domain to the SAR domain.
#### IV-D8 Analysis and Visualization of Common Features

To understand the common features, we visualize the features of EO and SAR samples of the same class extracted by different parameters of the source network with the method of Zeiler and Fergus [44]. Specifically, we first pretrain the source network with the corresponding EO dataset until the accuracy reaches 99%. Then, we visualize different features of the source network when inputting an image. The results are shown in Fig. 9. In Fig. 9, the top four images are the input images, and below them are the feature maps extracted from them by different convolution kernels of different layers. For example, "1-4" in Fig. 9(b) represents the feature map extracted by the fourth convolution kernel of the first layer of the source network (there are four layers in the source network, see Fig. 2). Compared with the EO features in the first layer, which contain rich target information [see Fig. 9(b)], the SAR features are more about the background. Thus, the features in the first layer are unsuitable for transferring. Besides, the EO features of the fourth layer [see Fig. 9(e)] come from the last layer of the source network, which is usually thought of as containing high-level semantic features. Therefore, the SAR features should be as similar as possible to the EO features if they are common features. However, there seems to be a big difference between them, which indicates that the features in the fourth layer are unlikely to be common features. In contrast, the second and third layers extract features related to the target in both the EO samples and the SAR samples, i.e., the shape of the target, which indicates that these features are more likely to belong to the common features. Besides, this analysis of the common features is consistent with the weights for transferring, indicating that our method transfers common features from the source network to the target network. Finally, we conclude that the common features are more likely to appear in the middle layers of the network, making the middle-layer features suitable for transferring from the EO domain to the SAR domain. This conclusion is consistent with Huang _et al._ [45]. However, they reached the conclusion that "middle layers are worth transferring" through experiments without further studying why these layers are valuable.

Fig. 9: Comparison of features of EO and SAR samples of the same class extracted from parameters in different layers of the source network. (a) Example samples. (b) Features in the 1st layer. (c) Features in the 2nd layer. (d) Features in the 3rd layer. (e) Features in the 4th layer.

### _Discussion_

We mainly discuss our method from the aspects of performance and the relationship with related few-shot learning methods.

#### IV-E1 Performance

First, compared with other few-shot learning methods, our method brings maximum accuracy improvements of 6.3%, 10.4%, and 14.8% in the extreme few-shot case for the ship, aircraft, and vehicle datasets, respectively. We explain the reasons as follows.

1. We strengthen the common features by transferring them from the source network to the target network with the proposed attention module, to some extent compensating for the loss of the individual features that extra SAR samples would otherwise provide.
2. We design an appropriate parameter update strategy with a few labeled SAR samples by training the important parameters more accurately, compensating for the loss of the good initial point that extra SAR samples would otherwise provide.
Second, as given in Table IX, compared with the meta-network, our attention module brings maximum accuracy improvements of 3.2%, 7.1%, and 4.7% on the ship, aircraft, and vehicle datasets, respectively. This is because the attention module has fewer parameters, allowing it to be fully trained to transfer common features to the target network. Finally, by comparing the results of seed 2 (the third row of each table) in Tables V and VI, we find that the network trained with two samples achieves higher accuracy than the one trained with four samples, which means that some samples may play a harmful role in the extreme few-shot case.

#### IV-E2 Relationship With Other Methods

Here, we mainly discuss the relationship between our method and two related methods [3, 7]. First, the similarity between our method and the method in [3] is that both match the features between two networks. The differences are as follows.

1. We do not use extra SAR samples, whereas the method in [3] needed extra similar unlabeled SAR samples to provide enough knowledge.
2. We first transfer common features to the target network and then train the network with a few labeled SAR samples, which is a TL-based method, whereas the method in [3] trained the network with EO samples and SAR samples at the same time, which is essentially a domain adaptation method.
3. We selectively transfer different features, whereas the method in [3] only matched the high-level semantic features.

Second, the similarity between our method and the method in [7] is that both transfer features from the source network to the target network in a learned way. The differences are as follows.

1. Our method is designed for the extreme few-shot case, whereas the method in [7] is not designed for the few-shot case.
2. The method in [7] constructed meta-networks to transfer features, but the number of parameters of the meta-networks reaches \(10^{6}\), which makes them difficult to train fully in the extreme few-shot case, whereas our attention module only contains \(10^{3}\) parameters, giving it a better ability to transfer features.
3. The target network of our method is a Bayesian-CNN so that it can train the important parameters more accurately, whereas the target network of the method in [7] is just a CNN.

For the other compared methods, DTN [34] and TransMatch [36] are meta-learning-based methods that follow the episode training strategy, whereas our method is a transfer-based method that does not follow that strategy. Besides, NWPU [35] is a simple fine-tuning method that does not weight features, whereas our method transfers "common features" by weighting features and trains the network with the AUIP strategy.

## V Conclusion and Future Work

This article proposed a novel few-shot transfer learning method for application scenarios in which no extra similar SAR samples are available, namely extreme few-shot learning. In this case, the connection-free attention module was proposed to transfer common features from the source network to the target network, and the AUIP training strategy was proposed to update the important parameters of the Bayesian-CNN more accurately. We also find that the common features are more likely to appear in the middle layers of the network, which is valuable for designing more targeted transfer learning methods from the EO domain to the SAR domain. In our future work, how to model the quality of samples and how to treat samples of different quality differently is a problem that needs to be studied.

## References

* [1] V. C.
Koo _et al._, \"A new unmanned aerial vehicle synthetic aperture radar for environmental monitoring,\" _Prog. Electromagn. Res._, vol. 122, pp. 245-268, 2012. * [2] C. L. V. Cooke and K. A. Scott, \"Estimating sea ice concentration from SAR: Training convolutional neural networks with passive microwave data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 7, pp. 4735-4747, Jul. 2019. * [3] M. Rostami, S. Kolouri, E. Eaton, and K. Kim, \"Deep transfer learning for few-shot SAR image classification,\" _Remote Sens._, vol. 11, no. 11, 2019, Art. no. 1374. * [4] L. Wang, X. Bai, R. Xue, and F. Zhou, \"Few-shot SAR automatic target recognition based on Conv-BiLSTM prototypical network,\" _Neucomputing_, vol. 443, pp. 235-246, 2021. * [5] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, \"FiltNets: Hints for thin deep nets,\" in _Proc. 3rd Int. Conf. Learn. Representations_, 2015, pp. 1-13. * [6] S. Zagoruyko and N. Komodakis, \"Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer,\" in _Proc. 5th Int. Conf. Learn. Representations_, 2017. * [7] Y. Jang, H. Lee, S. J. Hwang, and J. Shin, \"Learning what and where to transfer,\" in _Proc. 36th Int. Conf. Mach. Learn._, 2019, pp. 3030-3039. * [8] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, \"Squeeze-and-excitation networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 42, no. 8, pp. 2011-2033, Aug. 2020. * [9] S. Woo, J. Park, J. Lee, and I. S. Kweon, \"CBAM: Convolutional block attention module,\" in _Proc. 15th Eur. Conf. Comput. Vis._, 2018, pp. 3-19. * [10] F. Wang _et al._, \"Residual attention network for image classification,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2017, pp. 6450-6458. * [11] C. Blundell, J. Combes, K. Kavukcuoglu, and D. Wierstra, \"Weight uncertainty in neural networks,\" in _Proc. Int. Conf. Mach. Learn._, 2015, pp. 1613-1622. * [12] Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, \"Generalizing from a few examples: A survey on few-shot learning,\" _ACM Comput. Surv._, vol. 53, no. 3, pp. 1-34, 2020. * [13] Z. Huang, Z. Pan, and B. Lei, \"Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data,\" _Remote Sens._, vol. 9, no. 9, 2017, Art. on 907. * [14] D. Zhang, J. Liu, W. Heng, K. Ren, and J. Song, \"Transfer learning with convolutional neural networks for SAR ship recognition,\" in _Proc. IOP Conf. Ser. Mater. Sci. Eng._, 2018, Art. on 072001. * [15] R. Shang, J. Wang, L. Jiao, R. Stolkin, B. Hou, and Y. Li, \"SAR targets classification based on deep memory convolution neural networks and transfer parameters,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 8, pp. 2834-2846, Aug. 2018. * [16] D. Ha, A. M. Dai, and Q. V. Le, \"Hypernetworks,\" in _Proc. 5th Int. Conf. Learn. Representations_, 2017. [Online]. Available: [https://openreview.net/forum?id=kpAc4e1lx](https://openreview.net/forum?id=kpAc4e1lx) * [17] H. Zhu, X. Chen, W. Dai, K. Fu, Q. Ye, and J. Jiao, \"Orientation robust object detection in aerial images using deep convolutional neural network,\" in _Proc. IEEE Int. Conf. Image Process._, 2015, pp. 3735-3739. * [18] M. Ji, B. Heo, and S. Park, \"Show, attend and distill: Knowledge distillation via attention-based feature matching,\" in _Proc. 35th AAAI Conf. Artif. Intell. 33rd Conf. Innov. Appl. Artif. Intell. 11th Symp. Educ. Adv. Artif. Intell._, 2021, pp. 7945-7952. * [19] T. 
By _et al._, \"Bayesian methods for adaptive models,\"Ph.D. dissertation, California Inst. Technol., Pasadena, CA, USA, 1992. * [20] D. J. C. MacKay, \"A practical Bayesian framework for backpropagation networks,\" _Neural Comput._, vol. 4, no. 3, pp. 448-472, 1992. * [21] G. E. Hinton and D. van Camp, \"Keeping the neural networks simple by minimizing the description length of the weights,\" in _Proc. 6th Annu. ACM Conf. Comput. Learn. Theory_, 1993, pp. 5-13. * [22] A. Graves, \"Practical variational inference for neural networks,\" in _Proc. Int. Conf. Neural Inf. Process. Syst._, 2011, pp. 2348-2356. * [23] J. M. Hernandez-Lobato and R. P. Adams, \"Probabilistic backpropagation for scalable learning of Bayesian neural networks,\" in _Proc. 32nd Int. Conf. Mach. Learn._, 2015, pp. 1861-1869. * [24] A. Kendall and Y. Gal, \"What uncertainties do we need in Bayesian deep learning for computer vision?,\" in _Proc. Int. Conf. Neural Inf. Process. Syst._, 2017, pp. 5574-5584. * [25] A. Kendall, Y. Gal, and R. Cipolla, \"Multi-task learning using uncertainty to weigh losses for scene geometry and semantics,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 7482-7491. * [26] S. Ebrahimi, M. Elhoseiny, T. Darrell, and M. Rohrbach, \"Uncertainty-guided continual learning with Bayesian neural networks,\" in _Proc. 8th Int. Conf. Learn. Representations_, 2020. * [27] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2016, pp. 770-778. * [28] A. Krizhevsky and G. Hinton, \"Convolutional deep belief networks on CIFAR-10,\" _Unpublished Manuscript_, vol. 40, no. 7, pp. 1-9, 2010. * [29] J. McClelland, B. L. McNaughton, and R. C. O'Reilly, \"Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory,\" _Psychol. Rev._, vol. 102, pp. 419-457, 1995. * [30] C. Blundell, J. Combes, K. Kavukcuoglu, and D. Wierstra, \"Weight uncertainty in neural network,\" in _Proc. 32nd Int. Conf. Mach. Learn._, 2015, pp. 1613-1622. * [31] R. Hammell, \"Data retrieved from Kaggle.\" Accessed: Feb. 1, 2019. [Online]. Available: [https://www.kaggle.com/rahmell/ships-in-satellite-imagery](https://www.kaggle.com/rahmell/ships-in-satellite-imagery) * [32] C. P. Schwegmann, W. Kleynhans, B. Salmon, L. W. Makane, and R. G. V. Meyer, \"A SAR Ship Dataset for Detection, Discrimination and Analysis,\" _Distributed by IEEE Dataport_, 2017, doi: 10.21227/H2RK82. * [33] J. R. Diemunsch and J. Wissinger, \"Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR,\" in _Proc. Algo. Synth. Aperture Radar Imagery V_, 1998, vol. 3370, pp. 481-492. * [34] M. Chen _et al._, \"Diversity transfer network for few-shot learning,\" in _Proc. 34th AAAI Conf. Artif. Intell._, 2020, pp. 10559-10566. * [35] Z. Huang, C. O. Dumitru, Z. Pan, B. Lei, and M. Datcu, \"Classification of large-scale high-resolution SAR images with deep transfer learning,\" _IEEE Geosci. Remote Sens. Lett._, vol. 18, no. 1, pp. 107-111, Jan. 2021. * [36] Z. Yu, L. Chen, Z. Cheng, and J. Luo, \"TransMatch: A transfer-learning scheme for semi-supervised few-shot learning,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2020, pp. 12853-12861. * [37] S. Chen, H. Wang, F. Xu, and Y.-Q. 
Jin, \"Target classification using the deep convolutional networks for SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 8, pp. 4806-4817, Aug. 2016. * [38] S. Zagoruyko and N. Komodakis, \"Wide residual networks,\" in _Proc. Brit. Mach. Vis. Conf. (BMVC)_, 2016, pp. 87.1-87.12 Art. no. 87. * [39] C.-H. Chang, \"Deep and shallow architecture of multilayer neural networks,\" _IEEE Trans. Neural Netw. Learn. Syst._, vol. 26, no. 10, pp. 2477-2486, Oct. 2015. * [40] S. S. Mannelli, E. Vanden-Eijnden, and L. Zdeborova, \"Optimization and generalization of shallow neural networks with quadratic activation functions,\" in _Advances in Neural Information Processing Systems_, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds. Curran Associates, vol. 33, 2020, pp. 13445-13455. * [41] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, \"Gradient-based learning applied to document recognition,\" _Proc. IEEE_, vol. 86, no. 11, pp. 2278-2324, Nov. 1998. * [42] K. Simonyan and A. Zisserman, \"Very deep convolutional networks for large-scale image recognition,\" in _Proc. 3rd Int. Conf. Learn. Representations_, 2015. * [43] J. Kaplan _et al._, \"Scaling laws for neural language models,\" 2020, _arXiv:2001.08361_. * [44] M. D. Zeiler and R. Fergus, \"Visualizing and understanding convolutional networks,\" in _Proc. 13th Eur. Conf. Comput. Vis._, 2014,pp. 818-833. * [45] Z. Huang, Z. Pan, and B. Lei, \"What, where, and how to transfer in SAR target recognition based on deep CNNs,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 4, pp. 2324-2336, Apr. 2020. * [46] G. E. Hinton, O. Vinyals, and J. Dean, \"Distilling the knowledge in a neural network,\" 2015, _arXiv:1503.02531_. * [47] S. Ahn, S. X. Hu, A. Damianu, N. D. Lawrence, and Z. Dai, \"Variational information distillation for knowledge transfer,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2019, pp. 9155-9163. \\begin{tabular}{c c} & Yuan Tai received the B.E. degree in automation from the Huazhong University of Science and Technology, Wuhan, China, in 2017. He is currently working toward the Ph.D. degree with the National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan China. \\\\ \\end{tabular} \\begin{tabular}{c c} & Yihua Tan (Member, IEEE) received the Ph.D. degree in pattern recognition and intelligent systems from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2004. Since 2005, he has been with the School of Artificial Intelligence and Automation, HUST, where he is currently a Professor. From 2005 to 2006, he was a Postdoctoral Staff with the Department of Electronics and Information, HUST. From 2010 to 2011, he was a Visiting Scholar with Purdue University, where he worked on remote sensing image analysis. He has authored more than 80 papers in journals and conferences. His research interests include digital image/video processing and analysis, object detection and classification, and machine learning. \\\\ \\end{tabular} Shengzhou Xiong received the master's degree in pattern recognition and intelligent systems in 2017 from the Huazhong University of Science and Technology, Wuhan, China, where he is currently working toward the Ph.D. degree with the National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Artificial Intelligence and Automation. 
\\\\ \\end{tabular} \\begin{tabular}{c c} & Zhaojin Sun is currently working toward the B.E degree in automation with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jinwen Tian received the Ph.D. degree in pattern classification and intelligent systems from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 1998. He was with the School of Artificial Intelligence and Automation, HUST, where he is currently a Professor. His research interests include remote sensing image analysis, image compression, computer vision, and fractal geometry. \\\\ \\end{tabular}
_Abstract_—Deep learning-based synthetic aperture radar (SAR) image classification is an open problem when training samples are scarce. Transfer learning-based few-shot methods are effective for dealing with this problem by transferring knowledge from the electro-optical (EO) to the SAR domain. The performance of such methods relies on extra SAR samples, such as unlabeled samples of the novel classes or labeled samples of similar classes. However, it is unrealistic to collect sufficient extra SAR samples in some application scenarios, namely the extreme few-shot case. In this case, the performance of such methods degrades seriously. Therefore, few-shot methods that reduce the dependence on extra SAR samples are critical. Motivated by this, a novel few-shot transfer learning method for SAR image classification in the extreme few-shot case is proposed. We propose the connection-free attention module to selectively transfer features shared between EO and SAR samples from a source network to a target network, to compensate for the loss of information that extra SAR samples would otherwise provide. Based on the Bayesian convolutional neural network, we propose a training strategy for the extreme few-shot case that focuses on updating important parameters, namely accurately updating important parameters (AUIP). The experimental results on the three real-SAR datasets demonstrate the superiority of our method.

_Index Terms_—Bayesian convolutional neural network (Bayesian-CNN), few-shot transfer learning, synthetic aperture radar (SAR).
# Deep Learning Algorithm for Satellite Imaging Based Cyclone Detection

Snehlata Shakya\({}^{\text{\textregistered}}\), Sanjeev Kumar, and Mayank Goswami

Manuscript received November 3, 2019; revised December 31, 2019; accepted January 18, 2020. Date of publication January 31, 2020; date of current version March 2, 2020. _(Corresponding author: Snehlata Shakya.)_ Snehlata Shakya is with the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee 247667, India, and also with the Department of Clinical Physiology, Lund University, 221 00 Lund, Sweden (e-mail: [email protected]). Sanjeev Kumar is with the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee 247667, India (e-mail: [email protected]). Mayank Goswami is with the Divyadishti Imaging Laboratory, Department of Physics, Indian Institute of Technology Roorkee, Roorkee 247667, India (e-mail: [email protected]). This article has supplementary downloadable material available at [http://ieeexplore.ieee.org](http://ieeexplore.ieee.org), provided by the authors. Digital Object Identifier 10.1109/JSTARS.2020.2970253

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/)

## I Introduction

### _Remote Sensing (RS)_

RS applications, mainly via satellite imagery, have expanded from conventional meteorology, geological exploration, and oceanography toward homeland security, urban planning, ecology, and several other novel unconventional fields. Cost-effective unmanned aerial vehicle (UAV) and weather balloon approaches have shared the burden but suffer from limitations such as 1) a low elevation point for imaging, 2) stability issues under bad weather conditions, and 3) dependence on navigation satellites. Meteorological applications, especially weather forecasting for disaster readiness, still require dedicated but costly satellite infrastructure. Routine scheduling of multispectral satellite imaging requires optimized schemes fulfilling different reward opportunities such as operating time windows, changeover efforts between two consecutive imaging tasks, cloud-coverage effects, etc. [1, 2]. Sometimes the expected data are not available in abundance at the landscape scale [3]. Trading off several such engineering factors limits the temporal resolution of the imaging dataset, typically to an hour. Resource optimization equally affects the temporal resolution when imaging is performed using UAVs or weather balloons.

### _Image Processing Framework_

The sparsely acquired imaging datasets have limited temporal resolution, which may affect the accuracy of the analysis [4]. Expert individuals can perceive and estimate missing information (such as whether a set of images depicts a storm, a pathway of clouds, the location of a vortex, etc.) just by seeing consecutive image frames. However, the accuracy of the analysis again depends on this person's experience and, of course, the temporal resolution of the time frames. Although classical image processing techniques have proven useful, the element of humanlike perception can only be replicated via an artificial neural network (ANN) based image processing algorithm. Several fields of research and applications have exploited this direction, but the state of the art for weather prediction analysis is still under development [4].
A few examples based on ANNs include rainfall prediction in limited-data settings [3], estimation of hydrological variables to forecast runoff at ungauged river basins [5], air quality index estimation and prediction [6], analysis and prediction of an individual's movements/locations [7], optical flow based interpolation for structure preservation in tomography images to improve data quality [8, 9], and precipitation nowcasting posed as a spatio-temporal sequence forecasting problem [10]. Studies have addressed weather forecasting problems using machine-learning algorithms [10, 11]. Deep learning (DL) algorithms, which learn the characteristic features in a hierarchical manner, have been introduced into the RS community due to the availability of data. DL has a wide variety of applications in RS, including image preprocessing, pixel-based classification, target identification, and scenario understanding. A recent survey shows that DL algorithms (owing to their feature learning abilities) outperform existing, commonly used image processing algorithms in the hybrid field of agriculture and RS [12]. It is also shown that an outcome can be enhanced with spatio-temporal interpolation by hybridizing discriminatively trained predictive models with a deep neural network [11]. A deep neural network with stacked denoising autoencoders is found to be better at predicting air temperature than standard ANNs [13]. For image preprocessing purposes, specific deep networks such as a deconvolution network [14] and a sparse denoising autoencoder [15] have been constructed. Pixel-based classification is already employed in the field of geoscience and RS. Aspects of handcrafted feature description [15], discriminative feature learning [16], and powerful classifier design [17] have been successfully tested. DL methods are thus well suited for the extraction of high-frequency low-level features, such as edges, contours, and outlines of objects, as well as the shape, size, color, and rotation angle of the targets [18]. Training of such an algorithm requires a significant amount (quantitative reference) of images containing a vast set of features/characters categorized under supervision. _In the absence of a large dataset, one approach is to artificially enrich the data using interpolation techniques to generate missing time frames and densify the feature-based information content._

#### I-C1 Interpolation

Interpolation might increase the probability of distinguishing successive peaks in the frequency domain. This probability can be controlled using an apt method of interpolation. It is expected that this step will enhance the interpretability of characters for the DL algorithm. Interpolation has been used for enhancing image quality [19, 20]. Another technique involves optical flow-based temporal interpolation using backward warping [8, 9]. The optical flow based method is the preferred approach for an atmosphere full of clouds to obtain interpolation-related image processing characters. The image velocity estimated by optical flow can be used for tasks ranging from supervised scene interpretation to unsupervised dynamic investigation. Many methods for computing optical flow have been proposed; for in-depth insight, we refer to Barron _et al._ [21]. The process of determining optical flow is generally carried out by utilizing a brightness constancy constraint equation (BCCE). This relation makes use of the spatio-temporal derivatives of image intensity [22, 23].
Determining optical flow with the BCCE is an ill-posed problem. Classical gradient-based methods, for example, Cauchy's method [24, 25], Newton's method [26], Marquardt's method [27], the conjugate gradient method [28], and quasi-Newton methods [26, 29], are useful for solving the underlying optimization problem. Smoothness constraints in Horn and Schunck's method help to minimize distortions in the optical flow estimate [30]. A combination of local and global methods was introduced by Bruhn _et al._ [31] to deal with the ill-posedness, with spatial and temporal derivatives used as constraints. The choice of initial guess affects the performance of an iterative optimization technique, which may end at a global or a local extremum. Previously estimated optical flow fields were used as initial estimates by Giaccone and Jones [32]. A perceptually weighted optical flow was also proposed by Malo _et al._ [33]. The backward-warping [20] method for temporal interpolation was initially proposed by Ehrhardt _et al._ [8, 9]. This discussion motivates a sensitivity analysis using various optimization methods and error estimates.

### _Motivation and Methodology_

In this article, a DL algorithm is used for the classification of satellite images into storm and nonstorm categories. It is also used for locating the eye of the storm so that a regression model can be fitted automatically for prediction. The training process involves the formation of a system of equations, constraints, and its preferred solutions as input. These systems of equations and constraints can be solved by classical optimization techniques. There are several optimization methods and error estimates that can be used, but the DL algorithm may not be sensitive to all of them. Thus, it is important to choose a proper combination of methods for a given dataset. The present work illustrates a case study for cyclone data. The manuscript presents an exhaustive study testing multiple mathematical frameworks (14) and several error estimates (6) for converting sparse data into sufficient data, enhancing its usability for the DL algorithm. Multiple satellite datasets containing cyclones are tested and compared with previously reported estimations. Optical flow estimation methods are compared in terms of error after temporal interpolation. A generalized optical flow model incorporating fractional-order (FO) gradients is preferred over Brox's method and Horn and Schunck's method after testing. This model furnishes a dense optical flow while preserving discontinuities at sharp boundaries [34, 35], which is desirable for images with clouds. In order to perform interpolation, inversion of the optical flow vector is required. A sensitivity analysis is performed for inverting the optical flow vector using 14 classical optimization-based methods. Performance metrics, namely 1) mean square error (MSE), 2) mean difference error (MDE), 3) number of sites of disagreement (NSD), 4) percentage error (PE), 5) peak signal-to-noise ratio (PSNR), and 6) sharpness, are used for choosing the best solution method. The DL platform and the artificially enriched data carrying storm features are then used for classifying the cyclonic weather and estimating the location of the cyclone vortex. Finally, a regression method is used for predicting the path of the cyclone. The same is illustrated in Fig. 1 in the form of a flow diagram. Satellite data available at NASA and ISRO servers are used for training and testing purposes.
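The inversion targeted by this sensitivity analysis can be stated pointwise: given a forward displacement field \(\mathbf{u}\), find for each target location \(\mathbf{x}\) a source location \(\mathbf{y}\) satisfying \(\mathbf{y}+\mathbf{u}(\mathbf{y})=\mathbf{x}\). The sketch below illustrates the problem on a synthetic, analytically defined field, with scipy.optimize.root standing in for the classical iterative schemes compared later; it is an illustration of the formulation, not the solver set used in the article.

```python
import numpy as np
from scipy.optimize import root

def flow(y):
    """Synthetic, smooth forward displacement field u(y) in pixels."""
    y = np.asarray(y, dtype=float)
    return np.array([1.5 + 0.1 * np.sin(0.05 * y[1]),
                     -0.8 + 0.1 * np.cos(0.05 * y[0])])

def residual(y, x_target):
    # Inversion condition: the source point y must satisfy y + u(y) = x.
    return y + flow(y) - x_target

x_target = np.array([40.0, 25.0])
for method in ("hybr", "broyden1"):
    sol = root(lambda y: residual(y, x_target), x0=x_target.copy(), method=method)
    print(method, sol.x)
```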
## II Methodology

This section is divided into two parts: 1) temporal interpolation algorithms and 2) DL frameworks. The interpolation part explains the mathematical formulation for the estimation of optical flow and the interpolation algorithm.

### _Optical Flow_

We present an improved version of Horn and Schunck's method [22] for computing the optical flow field. A detailed version can be found elsewhere [34]; here, a summary is presented. The energy functional [22] is given as follows:

\[E_{\mathrm{data}}\left(p\right)=\int_{\Omega}\Big{[}\big{(}\nabla I^{T}p+I_{t}\big{)}^{2}+\alpha^{2}\left(|\nabla u|^{2}+|\nabla v|^{2}\right)\Big{]}dxdy \tag{1}\]

where \(\nabla I\) is the gradient of the image intensity, \(I_{t}\) is the temporal derivative of the image intensity, \(p=(u,v)^{T}\) is the velocity vector (optical flow components), and \(\alpha\!>\!0\) is a regularizing parameter. This model was combined with that of Nagel and Enkelmann [38], and the energy functional becomes [35]

\[E_{\rm data}\left(p\right)=\int_{\Omega}\Big{[}\big{(}\nabla I^{T}p+I_{t}\big{)}^{2}+\beta^{2}\left(p^{T}p+\lambda(|\nabla p|)^{2}\right)\Big{]}d\Omega. \tag{2}\]

Here, \(\beta\) is a constant. This model preserves discontinuities and gives a dense optical flow. It was further improved by incorporating FO derivatives of the optical flow components [34]. The energy functional is given as

\[E_{\rm data}\left(p\right)=\int_{\Omega}\Big{[}\big{(}\nabla I^{T}p+I_{t}\big{)}^{2}+\beta^{2}\left(p^{T}p+\lambda(|D^{\alpha}p|)^{2}\right)\Big{]}\,d\Omega. \tag{3}\]

Here, \(D^{\alpha}:=(D_{x}^{\alpha},D_{y}^{\alpha})^{T}\) is the left fractional derivative operator of Riemann-Liouville and \(|D^{\alpha}u|\) is defined as

\[|D^{\alpha}u|=\sqrt{\Big{(}(D_{x}^{\alpha}u)^{2}+\big{(}D_{y}^{\alpha}u\big{)}^{2}\Big{)}}.\]

This model is a generalization of the integer-order variational optical flow models. The variational functional (3) can also be written as follows:

\[E_{\rm data}\left(p\right)=\int_{\Omega}\Big{[}\big{(}\nabla I^{T}p+I_{t}\big{)}^{2}+\beta^{2}\left(u^{2}+v^{2}\right)+\lambda\left(|D^{\alpha}u|^{2}+|D^{\alpha}v|^{2}\right)\Big{]}\,d\Omega. \tag{4}\]

A detailed description of the method can be found elsewhere [34]. We choose \(\alpha\!=\!0.8\), \(\beta\!=\!600\), and \(\lambda\!=\!1\); detailed information about the choice of these values is given by Kumar _et al._ [34]. We also estimate the optical flow using Brox's method and Horn and Schunck's method. The technical details are not given here for the sake of brevity; interested readers may refer to [30] for more details.

### _Temporal Interpolation Algorithms_

Ehrhardt _et al._ [8, 9] have proposed an interpolation technique that utilizes a weighted combination of the two frames warped along the optical flow. The interpolation equation is given as follows:

\[I\left(x,t\right)=\left(1-\delta t\right)\cdot I\left(\mathbf{x}-\delta t\cdot\mathbf{u},t_{i}\right)+\delta t\cdot I\left(\mathbf{x}-(1-\delta t)\cdot\mathbf{u}^{-1},t_{i+1}\right). \tag{5}\]

Here, \(I\) is the image intensity, \(\mathbf{x}\) is the position vector, \(t_{i}\) and \(t_{i+1}\) are two consecutive time instances, \(\mathbf{u}=(u,v)\) is the optical flow vector, \(\delta t=t-t_{i}\), and time is normalized, i.e., \(t_{i+1}-t_{i}=1\). For computing \(\mathbf{u}^{-1}\), Ehrhardt _et al._ [8, 9] have used the Newton-Raphson method.
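Once the forward flow \(\mathbf{u}\) and its inverse \(\mathbf{u}^{-1}\) are available, (5) can be applied directly: the two neighboring frames are warped toward the intermediate time and blended with weights \((1-\delta t)\) and \(\delta t\). A minimal sketch using scipy.ndimage.map_coordinates for the warping is given below; the flow arrays are assumed to come from any of the optical flow models above, and the constant-flow usage example is synthetic.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow_y, flow_x, scale):
    """Sample image at positions x - scale*flow, as required by (5)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.array([yy - scale * flow_y, xx - scale * flow_x])
    return map_coordinates(image, coords, order=1, mode="nearest")

def interpolate_frame(I0, I1, flow, inv_flow, dt):
    """Temporal interpolation of (5): I(x, t) for t = t_i + dt, 0 < dt < 1."""
    u, v = flow            # forward flow from frame I0 to I1
    u_inv, v_inv = inv_flow  # inverse flow (from any inversion method)
    from_I0 = warp(I0, v, u, dt)                # I(x - dt*u, t_i)
    from_I1 = warp(I1, v_inv, u_inv, 1.0 - dt)  # I(x - (1-dt)*u^{-1}, t_{i+1})
    return (1.0 - dt) * from_I0 + dt * from_I1

# Usage with a constant synthetic flow of one pixel per frame in x:
I0 = np.random.rand(64, 64)
u = np.ones_like(I0); v = np.zeros_like(I0)
I_half = interpolate_frame(I0, np.roll(I0, 1, axis=1), (u, v), (-u, -v), dt=0.5)
```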
Shakya and Kumar [39] have also used this method for interpolation, with the optical flow vectors computed from the FO-based method. We use various techniques for the numerical inversion and interpolation required by the above formula.

**Method 1**: _Moore-Penrose pseudoinverse of the optical flow vector._

**Method 2**: _Scattered data interpolation (SDI) for the inverse of the optical flow vector._

**Method 3**: _Nearest neighbor (NN) calculation for the inverse of the optical flow vector._

**Method 4**: _Thin plate spline (TPS) interpolation._

**Method 5**: _Kernel regression (KR)._

**Method 6**: _Sigmoid function interpolation (SFI)._

A brief description of the above-mentioned methods is given as follows.

**Method 1**: _Moore-Penrose pseudoinverse of the optical flow vector: A matrix \(\mathbf{A}_{m\times n}\), with \(m=n\) or \(m\!\neq\!n\), can be decomposed using the singular value decomposition into one diagonal matrix \(\boldsymbol{\Sigma}\) and two orthogonal matrices \(\mathbf{U}\) and \(\mathbf{V}\) such that \(\mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}\). The pseudoinverse of the matrix is then \(\mathbf{A}^{+}=\mathbf{V}\boldsymbol{\Sigma}^{+}\mathbf{U}^{T}\). This method is used to compute the inverse of the optical flow vector, and interpolation is then done using (5)._

**Method 2**: _SDI for the inverse of the optical flow vector: SDI is another technique that has been used to compute the inverse of the optical flow vector. Following Shakya and Kumar [39], if we define a forward transformation \(\mathbf{T}\) from the source image space \(\mathbf{s}\) to the target image space \(\mathbf{t}\), the points in the source space are specified as \(\mathbf{s}=\mathbf{t}+\mathbf{T}(\mathbf{t})\)._

Then, the inverse transformation is given as

\[\mathbf{I}\left(\mathbf{s}\right)=\frac{\sum_{i}w\left(d_{i}\right)\cdot\mathbf{I}\left(\mathbf{t_{i}+T}\right)}{\sum_{i}w\left(d_{i}\right)} \tag{6}\]

where

\[w\left(d_{i}\right)=\left\{\begin{array}{ll}\left(\frac{1}{d_{i}}-\frac{1}{R}\right)^{2},&\text{if }d_{i}\leq R\\ 0,&\text{otherwise}\end{array}\right. \tag{7}\]

is the distance weight function associated with each interpolation point, \(d_{i}\) is the distance of the interpolated point from the \(i\)th data point, and \(R\) is the search radius.

Fig. 1: Flow diagram describing the methodology.

**Method 3**: _NN calculation for the inverse of the optical flow vector: The negative of the nearest forward transformation is used as the inverse transformation to estimate the inverse of the optical flow vector. If the nearest forward transformed point lies outside the source voxel, then an average over the voxels surrounding the forward transformed points is used._

**Method 4**: _TPS interpolation: Given a set of \(k\) points \(P_{j}(x_{j},y_{j})\) and heights \(h_{j}\) for \(j=1,\,2,\ldots,\,k\), the TPS interpolant is defined as [41]_

\[f\left(x,y\right)=a_{0}+a_{x}x+a_{y}y+\sum_{j=1}^{k}c_{j}U\left(x-x_{j},\;y-y_{j}\right). \tag{8}\]

Here, the constants \(c_{1},c_{2},\ldots,c_{k},a_{0},a_{x},a_{y}\) need to be found such that \(f(x_{j},y_{j})=h_{j}\) for all \(j=1,2,\ldots,k\). The term \(U(x,y)\) is defined as

\[U\left(x,y\right)=\left(x^{2}+y^{2}\right)\ln\left(x^{2}+y^{2}\right).\]

**Method 5**: _KR:_ Following Muhlenstadt and Kuhnt [42], consider a set \(\mathbf{x}=\left\{x_{1},x_{2},\ldots,x_{n}\right\}\) with \(N\) simplices \(S_{j}\), \(j=1,2,\ldots,N\), and vertices \(x_{0}^{j},x_{1}^{j},\ldots,x_{k}^{j}\).
\\(T\\) is the Delaunay triangulation of the set \\(\\mathbf{x}\\). A linear function \\(\\hat{y}_{j}(x)=\\beta_{0}^{j}+x^{T}\\beta^{j}\\) for all \\(S_{j}\\) can be fitted that will interpolate the vertices. The _KR_ interpolator is constructed on polygons and it is combined with locally fitted linear functions [42] \\[\\hat{y}_{j}\\left(x\\right):=\\left\\{\\begin{array}{ll}y_{i}&x=x_{i},i=1,2, \\ldots,n\\\\ \\frac{\\sum_{j=1}^{N}g_{j}(x)\\hat{y}_{j}(x)}{\\sum_{j=1}^{N}g_{j^{\\prime}}(x)}& \\text{elsewhere}\\end{array}\\right.. \\tag{9}\\] **Method 6**: _SFI:_ We used a sigmoid function for interpolation. For univariate logistic curve, it is defined as \\[S\\left(x\\right)=\\frac{1}{1+e^{-\\beta x}} \\tag{10}\\] where \\(\\beta\\) is the logistic growth rate or steepness of the logistic curve. The above-mentioned methods are categorized as direct interpolation methods. We are also incorporating iterations on some of those techniques. Solution is updated, locally and globally, after each iteration. The local method inverts point by point in target space, whereas the global method inverts the whole field in source space [40]. Therefore, results of interpolation with 14 methods: 1) pseudoinverse, 2) pseudoinverse with local convergence, 3) pseudoinverse with global convergence, 4) _SDI_, 5) _SDI_ with local convergence, 6) _SDI_ with global convergence, 7) _NN_, 8) _NN_ with local convergence, 9) _NN_ with global convergence, 10) random with local convergence and 11) random with global convergence, 12) TPS interpolation, 13) _KR_, and 14) _SFI_ are compared (see Table III). Here, an initial choice of optical flow vector is chosen randomly between maximum and minimum values of optical flow components. We use the following metrics for comparing the results from different interpolation techniques. 1. _MDE:_ Let \\(I_{\\tau}^{\\mathrm{intp}}(x)\\) and \\(I_{\\tau}^{\\mathrm{orig}}(x)\\) represent interpolated and original image intensities at pixel position \\(x\\) in the \\(\\tau\\mathrm{th}\\) frame. \\(N_{\\tau}\\) denotes the number of interpolated images and \\(\\Omega_{\\tau}\\) the set of pixels in frame \\(\\tau\\). Then, the MDE is defined as [8] \\[\\mathrm{MDE}=\\frac{1}{N_{\\tau}}\\sum_{\\tau=1}^{N_{\\tau}}\\frac{1}{\\left|\\Omega_ {\\tau}\\right|}\\sum_{\\mathbf{x}\\in\\Omega_{\\tau}}\\big{|}I_{\\tau}^{\\mathrm{intp} }\\left(\\mathbf{x}\\right)-I_{\\tau}^{\\mathrm{orig}}\\left(\\mathbf{x}\\right) \\big{|}\\text{.}\\] (11) 2. _NSD:_ This metric is defined by the number of pixels where the difference between \\(I_{\\tau}^{\\mathrm{intp}}(x)\\) and \\(I_{\\tau}^{\\mathrm{orig}}(x)\\) is greater than a threshold value \\(\\Theta\\). It is defined as follows [8, 9]: \\[\\mathrm{NSD}=\\sum_{\\tau=1}^{N_{\\tau}}\\sum_{\\mathbf{x}\\in\\Omega_{\\tau}}\\delta \\left|I_{\\tau}^{\\mathrm{intp}}\\left(\\mathbf{x}\\right)-I_{\\tau}^{\\mathrm{orig} }\\left(\\mathbf{x}\\right)\\right|\\] (12) where \\[\\delta\\left(z\\right)=\\left\\{\\begin{array}{ll}\\theta,&\\text{if }z<0\\\\ 1,&\\text{otherwise}\\end{array}\\right..\\] (13) Here, \\(\\theta\\) is threshold and it is chosen 5% in the present study. 3. 
3. _PE:_ It is defined as follows [43]:

\[\mathrm{PE}=\frac{\mathrm{Avg\ Error}}{\frac{1}{N}\frac{1}{M}\sum\sum\left[u_{\mathrm{orig}}^{2}+v_{\mathrm{orig}}^{2}\right]^{1/2}}. \tag{14}\]

Here, Avg Error is the average error defined as [43]

\[\mathrm{Avg\ Error}=\left[\sum\sum\left[\frac{D1_{e}^{2}+D2_{e}^{2}}{N_{e}M_{e}}\right]^{1/2}\right] \tag{15}\]

where \(D1_{e}=u_{\mathrm{orig}}-u_{\mathrm{int}}\) and \(D2_{e}=v_{\mathrm{orig}}-v_{\mathrm{int}}\). Moreover, \(u\) and \(v\) are the horizontal and vertical components of the flow, respectively. The suffixes orig and int are used for the original and interpolated flow, \((N, M)\) is the size of the ground truth image, and \((N_{e},\;M_{e})\) is the size of \(D1_{e}\).

4. _PSNR:_ A PSNR value is calculated between the true and estimated flow [43]

\[\mathrm{PSNR}=10\,\mathrm{log}_{10}\frac{255^{2}}{\mathrm{MSE}} \tag{16}\]

where MSE is the mean squared error

\[\mathrm{MSE}=\frac{1}{M\times N}\sum_{i=0}^{M}\sum_{j=0}^{N}\left(I^{\mathrm{int}}\left(i,j\right)-I^{\mathrm{orig}}\left(i,j\right)\right)^{2}.\]

5. _Sharpness ratio:_ This metric is defined as follows:

\[\mathrm{sharpness}=\frac{\sum\left\{\left(df/dx\right)^{2}+\left(df/dy\right)^{2}\right\}}{N}. \tag{17}\]

### _DL Frameworks_

For classification purposes, a basic Keras model [44] was trained on the dataset. The platform provided by Google Colaboratory [45] was used for training. Standard convolutional neural network (CNN) models such as Xception [46], NASNetMobile, and MobileNet were also applied to the preprocessed dataset using Keras applications. A naive methodology for object detection and localization is to break the image into multiple segments, feed each segment to the model, and obtain a label. It is highly likely that the object will appear half cropped in one segment; to capture the complete object in a segment, images at different scales must be processed. This increases the computational cost and the inference time, making the model impractical in real-time scenarios. This idea, implemented as a sliding-window technique, is fairly outdated [47]. To overcome the cost in time as well as computation, you only look once (YOLO) [37] is preferred in this work for detecting the circular rotating area. The YOLO system helps in detecting objects in real time. It consists of 23 convolutional layers and uses batch normalization and leaky ReLU activations [37]. Unlike the sliding-window technique or a region proposal network, YOLO considers the whole image and thus encodes contextual information about the classes and their appearance. Fast region-based CNN (R-CNN) [48], a top detection method, mistakes background patches for objects because it cannot see the larger context. The number of background errors is significantly reduced with YOLO in comparison with fast R-CNN [47]. YOLO divides the input image into \(N\times N\) grid cells. A particular grid cell detects an object if the object's center falls into that cell. Each grid cell predicts \(B\) bounding boxes and confidence scores for those boxes. The confidence score indicates the reliability of the prediction. For each bounding box, five values \(x\), \(y\), \(w\), \(h\), and the confidence score are predicted, where \((x,y)\) represents the center of the box relative to the grid cell. The width (\(w\)) and height (\(h\)) are predicted relative to the whole image.
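To make the grid-cell encoding concrete, the sketch below decodes a YOLO-style output tensor of shape \(S\times S\times 5B\) into absolute corner boxes, under the stated convention that \((x,y)\) are offsets within a cell and \((w,h)\) are fractions of the whole image. It is an illustrative decoder, not the original YOLO implementation, and the tensor values in the example are made up.

```python
import numpy as np

def decode_yolo_grid(pred, img_w, img_h, conf_thresh=0.5):
    """Convert an S x S x (B*5) prediction tensor into corner boxes.

    Each of the B predictions per cell is (x, y, w, h, confidence), with
    (x, y) relative to the cell and (w, h) relative to the whole image.
    """
    S = pred.shape[0]
    B = pred.shape[2] // 5
    boxes = []
    for row in range(S):
        for col in range(S):
            for b in range(B):
                x, y, w, h, conf = pred[row, col, 5 * b:5 * b + 5]
                if conf < conf_thresh:
                    continue
                cx = (col + x) / S * img_w      # box center in pixels
                cy = (row + y) / S * img_h
                bw, bh = w * img_w, h * img_h
                boxes.append((cx - bw / 2, cy - bh / 2,
                              cx + bw / 2, cy + bh / 2, conf))
    return boxes

# Example: a single confident detection in cell (3, 4) of a 7x7 grid.
pred = np.zeros((7, 7, 10))
pred[3, 4, 0:5] = [0.5, 0.5, 0.2, 0.3, 0.9]
print(decode_yolo_grid(pred, img_w=1200, img_h=1024))
```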
In YOLO, an individual grid cell is allowed to contain a single class and can predict only two bounding boxes. This spatial constraint limits the number of nearby objects that can be predicted; overall, the model struggles with the detection of small objects that appear in groups. RetinaNet [49], a relatively sophisticated model, uses a ResNet backbone and a feature pyramid network. Single-stage detectors are less accurate but have fast inference times, whereas two-stage detectors are more accurate but take significant time during inference. RetinaNet is a single-stage detector whose accuracy is improved by modifying the loss function; it outperforms the two-stage detector faster R-CNN in terms of speed as well as accuracy. The cross-entropy loss is reshaped into the focal loss (FL) (18) by adding a modulating factor \((1-p_{t})^{\gamma}\) with focusing parameter \(\gamma\), which downweights easy examples and focuses training on hard negatives

\[\mathrm{FL}(p_{t})=-(1-p_{t})^{\gamma}\mathrm{log}\left(p_{t}\right) \tag{18}\]

where the focusing parameter \(\gamma\) is tested in the range [0, 5] in the experiment; \(\gamma=2\) works best in our experiment. The focusing parameter smoothly adjusts the rate at which easy examples are downweighted, through the modulating factor [49]. If an example is misclassified and the model's estimated probability \(p_{t}\) is small, the modulating factor is near 1 and the loss is unaffected. For high probability values, the factor goes to 0 and the loss for well-classified examples is downweighted.

## III Experimental Results

### _Dataset_

For interpolation, we used the satellite images obtained from KALPANA-I. Image data are downloaded from the India Meteorological Department (IMD) archive [50] for the cyclone in June 2007. In particular, we processed the images of cyclone Yemyin for June 21, 2007. A depression area was declared by IMD near east-southeast of Kakinada, Andhra Pradesh, India. For DL classification, we downloaded data from the IMD archive [50], which contains images of cyclones from the year 1990 until recent times, with increased accuracy and coverage in recent years. With the advancement in technology, infrared, midinfrared, short-wavelength infrared, and water vapor images of the recent cyclones were also included in the archive. The raw data selected for the analysis are given in Table I. For prediction, we trained the model with several random cyclone images downloaded from the Internet. We also downloaded images from the Meteorological and Oceanographic Satellite Data Archival Center (MOSDAC) [51]. We tested the model on the Ockhi cyclone that occurred on December 2, 2017. Python codes and libraries are used for creating the labels. The resolution of each image is \(1024\times 1200\) pixels.

### _Interpolation Results_

Temporal interpolation plays an important role when the image data are available only at large time intervals. We process images of cyclone Yemyin, which occurred in June 2007. Images obtained from the satellite KALPANA-I are captured at 1-h intervals. We first used two images 2 h apart (05:00:03 hours and 07:00:03 hours (GMT) on June 21, 2007) and generated the image at the intermediate hour (06:00:03 hours). The ground truth image is also available for comparison. In order to generate the intermediate images, we used (5). The optical flow is estimated with three approaches: Brox's method [30], Horn and Schunck's method [22], and the FO derivative based method [34].
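The comparison reported next (Tables II and III) amounts to computing the error measures of Section II between an interpolated frame and the recorded ground truth at 06:00:03 hours. A minimal sketch of MDE, NSD, and PSNR for one frame pair is given below; image loading and the interpolation step are assumed to have been done already, and the arrays in the usage example are synthetic.

```python
import numpy as np

def mde(interp, orig):
    """Mean difference error between interpolated and original intensities."""
    return float(np.mean(np.abs(interp - orig)))

def nsd(interp, orig, theta=0.05):
    """Number of sites where the absolute difference exceeds the threshold."""
    return int(np.sum(np.abs(interp - orig) >= theta))

def psnr(interp, orig, peak=255.0):
    """Peak signal-to-noise ratio with the corrected per-pixel MSE."""
    mse = np.mean((interp - orig) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Usage (in practice: ground truth at 06:00:03 hours vs. an interpolated frame).
orig = np.random.rand(64, 64)
interp = orig + 0.01 * np.random.randn(64, 64)
print(mde(interp, orig), nsd(interp, orig), psnr(interp, orig, peak=1.0))
```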
It is found that Brox's method and the FO derivative based method provide results with acceptable error estimates for the considered data. Results by Horn and Schunck's method are omitted here for brevity. The results are compared in Tables II and III, respectively. A comparative study is made among the different interpolation methods mentioned above [Methods 1-6] with global and local convergence. Error is computed on both image intensities and optical flow vectors. MDE is one of the error estimates that computes the difference of image intensities between the original and the interpolated image. Looking at the MDE values for the different interpolation methods in Tables II and III, SFI has the lowest values: 2.33 with Brox's method and 2.34 with the FO-based method. This indicates that the performance of SFI is comparatively better than that of the other methods if MDE is the criterion. The different metric values are, in general, lower for the FO-based method. The next error measure is NSD, the number of pixels where the difference of image intensities between the original and the interpolated image is greater than 0.05; a smaller NSD therefore indicates closer similarity to the original image. The behavior of this error estimator is consistent with the MDE values, i.e., SFI gives the minimum NSD. Another error estimate is PE, which is computed from the difference between the optical flow components estimated from the original image and from the interpolated image. The behavior of PE is different from what we observed with the previous two error measures, which were based on image intensity differences. A minimum value of 0.46 is observed with the TPS interpolation method when Brox's method is used for computing the optical flow. However, we observe a very small value of 0.09 with the other methods (\(P_{\mathrm{inv}}\) + local and global iterations, SDI, SDI + local and global iterations, NN, NN + local and global iterations, random + local and global iterations) when the optical flow components are estimated with the FO derivative based method. The PSNR value is also best for these interpolation methods. From the overall observations, we prefer to use the NN method with local convergence because of its lower computational cost, fairly comparable error estimates, and acceptable number of iterations before convergence.

Next, we generated 14 images between the images captured at 05:00:03 hours and 07:00:03 hours. The interpolated images are shown in Fig. 2(a1)-(a13). The leftmost image in the first row and the rightmost image in the fourth row [see Fig. 2(a) and (b)] are the original images captured at 05:00:03 and 07:00:03 hours, respectively. Images are interpolated at approximately 8-min intervals; the first row contains images from 05:00:03 hours to 05:24:03 hours (left to right). The second row displays the images from 05:32:03 hours to 05:56:03 hours (left to right). The third row contains images from 06:04:03 hours to 06:28:03 hours (left to right). The last row has images from 06:36:03 hours to 07:00:03 hours (left to right). On close observation, we find that the second image in the first row is close to the image captured at 05:00:03 hours and that the second last image in the last row is very similar to the image at 07:00:03 hours. Ground truth images are not available for such small temporal steps (8 min). Four instances are marked with red boundaries around the cyclone boundary.
It can be observed that the artificially interpolated images show a smooth transformation of the cyclone from its appearance at 05:00:03 hours (circular shape and relatively small diameter) into the cyclone depicted in the second recorded image (relatively larger diameter and oval shape). These images, missing during the measurement but artificially generated, are expected to help the neural network classify the storm. A similarity check between the interpolated and original images is performed using a shape comparison algorithm [52]. The relative change (with respect to the first recorded image) in the Hausdorff dimension decreases along the time series of interpolated images, as shown in Fig. 2(c).

Fig. 2: (a) Top left and (b) bottom right images are the original (recorded via satellite: movie A) images captured on June 21, 2007 at 05:00:03 hours and 07:00:03 hours (GMT), respectively. From the second image (a1) in the first row to the second last image (a13) in the fourth row are the interpolated images. In total, 14 images are generated between 05:00:03 hours and 07:00:03 hours at equal time intervals. The interpolated images translate characters well in between both recorded images. (c) Hausdorff dimension for each image.

### _Deep Learning_

We applied DL techniques for two purposes: 1) classifying an image as being part of a storm or not containing any characters indicating the existence of a storm and 2) predicting the storm location in the near future.

#### III-A1 Classification of Storm and Nonstorm Weather Conditions

Tropical cyclones are a regular phenomenon in the North Indian Ocean. They affect the Indian subcontinent mainly from May until mid-December, causing significant loss of life and damage to property. The regional specialized meteorological center for tropical cyclones over the North Indian Ocean, IMD, keeps track of the North Indian Ocean cyclones and their trajectories and issues a four-stage warning for any cyclonic weather system. The four stages are the _precyclone watch, cyclone alert, cyclone warning, and post-landfall outlook_. These stages of tracking and warning issuance are determined by the different stages of development of a cyclonic storm. According to IMD's classification, a North Indian Ocean cyclone generally starts out as a depression with a wind speed of 31-50 km/h over the Bay of Bengal or the Arabian Sea. A depression intensifies into a deep depression when the wind speed reaches 51-62 km/h and the system starts drawing in more moisture. When the wind speeds further intensify to 63-88 km/h with longer sustenance, the IMD classifies it as a cyclonic storm and assigns it a name. The next stages beyond the severe cyclonic storm are reached when the wind speeds peak in the range of 118-165 km/h, with the potential for huge damage to life and property. On further intensification, the cyclone is subsequently classified into a very severe, extremely severe, and super cyclonic system depending on the wind speeds. Weather prediction and sensitivity analysis of these storms have profound sociological and humanitarian value. The advancements in RS, weather prediction research, and evacuation procedures in the last decade have significantly decreased the loss of life caused by these cyclones. The sensitivity and accuracy analysis of these predictions is very important in this domain. A false alarm can cause extraneous expenditure in mobilizing and evacuating people, and a delayed response to weather intensification can amplify the damage incurred.
DL has shown encouraging performance for the characterization of complex patterns such as storm classification [51]. We deploy a CNN framework using Keras (an open-source neural network library) for the classification of storm images into the five categories specified by IMD. The satellite images (infrared and visual) of eight cyclones from IMD are fed as input to the CNN, and the model is trained on the Google Colaboratory platform to extract relevant features from these images. Temporal interpolation of images using optical flow based interpolation [8, 9] is used to augment the data. The results are highly promising and leave considerable scope for future work. We first preprocessed the images to remove unnecessary information, which would otherwise make the model computationally costly.

Image preprocessing: Before training, we preprocessed the images. This step removes unwanted and noisy features and also increases the efficiency of the model. The preprocessing steps mentioned in [53] are used as a skeleton, with some changes that were optimal for our dataset. A raw image from the dataset was cropped to remove header text and white edges. Subsequently, image binarization with some additional changes was applied to the images. The objective of this step was to remove unnecessary information such as grid lines, geographical boundaries, and landscape. In classic image binarization, the image is converted into a binary image: pixel intensities above a certain threshold are converted to one and the others to zero. In our algorithm, the output is still an RGB image; the pixels with intensities above a threshold retain their original values, while the other pixels are set to the minimum, i.e., zero. As a result, only the vortex and peripheral cloud patches are retained with their original pixel values, and other features are removed. The threshold was taken to be a suitably optimized multiple (generally 1.4) of the median of the pixel intensities of the image. Subsequently, image erosion was applied to the processed image. The mathematical definition of image erosion is presented in [53]. Fig. 3 shows the preprocessing steps for a satellite image; unnecessary features such as grid lines and landscape are filtered out.

Model training and output: A total of 995 images were downloaded from the IMD archive [50], but data augmentation was required for training our model. Optical flow based temporal interpolation was applied to the images, and ten images were obtained between each pair of temporally consecutive images. A basic Keras model was trained on the preprocessed dataset, using the platform provided by Google Colaboratory. The model was trained on 6930 augmented images and validated on 2970 images, randomly shuffled and split from our original database. The model used is depicted in Table IV. The model was compiled with the categorical cross-entropy loss and the Adam optimizer [53]. A peak validation accuracy of 97% is achieved for classifying storm or nonstorm. Standard CNN models such as Xception, NASNetMobile, and MobileNet were also applied to the preprocessed dataset using Keras applications; their accuracy is lower due to the inconsistency of the Helen cyclone data. The model was then trained on six out of the eight cyclones mentioned in Table I and validated on the other two (in these eight cyclone image sets, inconsistencies have been removed by data augmentation and other processes). A peak validation score of 58% was achieved. The increase in accuracy highlights the power of data preprocessing.
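Setting the cropping aside, the preprocessing described above is a soft binarization: pixels brighter than roughly 1.4 times the image median keep their original values, the rest are zeroed, and an erosion then removes thin residual structures such as grid lines. A minimal sketch with NumPy and scipy.ndimage is shown below; the factor 1.4 and the erosion kernel size are the tunable choices mentioned above, and the file name in the comment is hypothetical.

```python
import numpy as np
from scipy.ndimage import grey_erosion

def preprocess(image, factor=1.4, erosion_size=3):
    """Retain vortex/cloud pixels, suppress grid lines and landscape."""
    img = np.asarray(image, dtype=float)
    gray = img if img.ndim == 2 else img.mean(axis=2)
    threshold = factor * np.median(gray)     # threshold = 1.4 x median intensity
    mask = gray >= threshold
    if img.ndim == 2:
        kept = img * mask                    # pixels below threshold -> 0
        return grey_erosion(kept, size=(erosion_size, erosion_size))
    kept = img * mask[..., None]             # keep original RGB values above threshold
    return np.stack(
        [grey_erosion(kept[..., c], size=(erosion_size, erosion_size))
         for c in range(kept.shape[2])], axis=2)

# Usage (file name is hypothetical):
# cleaned = preprocess(plt.imread("kalpana_ir_frame.png"))
```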
### _Prediction of Storm_

A cyclone is a system of winds rotating about a center of low atmospheric pressure, termed the \"eye,\" with wind velocities above a certain limit. Here, we present a model to track the eye (or center) of the cyclone. The aim of our model is to analyze the patterns present in its movement and then use them to predict the path of the cyclone. In order to achieve a robust model, we need a large amount of data. We did not have any official repository that provides cyclone-annotated data; therefore, we had to manually annotate bounding boxes on the cyclone images. We took the cyclone images and annotated them using the Python library Matplotlib [55]. Unfortunately, we were only able to extract hundreds of images, and any DL network would easily overfit on such a small dataset. To counter this, we had to rely on augmentation. Robustness is incorporated in the modeling by artificially adding noise. We performed different augmentation techniques such as scaling, translation, cropping, rotation, adding Gaussian noise, and perturbing hue and saturation. After augmentation, we were able to create 5000 images for training and 500 images for testing. For testing, we used the archived cyclone images provided by the India Meteorological Department. These satellite images were captured at half-hour intervals over different spectra (visible, infrared). We applied interpolation to generate intermediate images so as to provide a large number of images from a particular cyclone. Our model consists of the following two phases.

1. Training phase (see Fig. 5).
2. Prediction phase (see Fig. 6).

The DL model RetinaNet requires a CSV file with the following format: path2image, \(x1\), \(y1\), \(x2\), \(y2\), \(obj\), where path2image is the complete path of the image, and (\(x1,y1\)) and (\(x2,y2\)) are the top-left and bottom-right coordinates of the bounding box, respectively. As our goal was only to detect a cyclone, \(obj\) was set to cyclone.

Training phase: We provide two CSV files: one for training and the other for validation. The training was done on an NVIDIA GTX 1080 graphics card and took 24 h of processing time. We saved the weights of the network so that it can be used later without training the network from scratch.

Prediction phase: In this phase, a sequence of images is fed to the RetinaNet initialized with the weights saved earlier. It produces the bounding boxes around the cyclones, i.e., it outputs the (\(x1\), \(y1\)) and (\(x2\), \(y2\)) coordinates (see Fig. 7). As we are only interested in the location of the centers, we calculated the center coordinates from the top-left and bottom-right coordinates and saved them to a CSV file.

The path of a cyclone is highly temporal; thus, another DL-based model, the long short-term memory (LSTM) network, could be used to extract its temporal essence. But it has its own drawbacks: to train an LSTM, we need a high-frequency dataset, which was not available [56]. Furthermore, some of the frames were missing from the videos, which led to sudden changes in the center coordinates. Therefore, there was no homogeneity within the dataset, which forced us to find an alternative; RetinaNet is used instead. The model we use for training takes care of class imbalance by modifying its loss and taking into account the examples that are less frequent in the dataset [49, 57].
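The bookkeeping around RetinaNet is plain CSV handling: one annotation row per image with the two box corners and the class label, and, at prediction time, the eye location taken as the center of the predicted box. A minimal sketch is given below; the image paths and box coordinates are placeholders, and the detector call itself is assumed to be provided by whichever RetinaNet implementation is used.

```python
import csv

def write_annotation(csv_path, rows):
    """rows: iterable of (path2image, x1, y1, x2, y2) tuples."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        for path2image, x1, y1, x2, y2 in rows:
            writer.writerow([path2image, x1, y1, x2, y2, "cyclone"])

def box_center(x1, y1, x2, y2):
    """Eye location taken as the center of the predicted bounding box."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

# Example: one training annotation (placeholder path/coordinates) and
# the center of one predicted box.
write_annotation("train_annotations.csv",
                 [("images/ockhi_0001.png", 412, 518, 472, 575)])
print(box_center(412, 518, 472, 575))   # -> (442.0, 546.5)
```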
Fig. 5: Training phase.

Fig. 6: Prediction phase.

The data of the cyclone named Ockhi, taken from MOSDAC [58], are used for the final analysis. We have processed multiple case studies/datasets from May 2016 to December 2017, and several instances with convincing cyclonic activity are observed in this duration. The case study from November 29, 2017 to December 5, 2017 shows the presence of a cyclone (named Ockhi). This cyclone was found transiting from the Indian Ocean via peninsular India toward the Arabian Sea. These data comprise 44 images for each day (with a 30-min time difference between consecutive images). The data of December 2, 2017 are discussed in Fig. 7. This image set is densified into 95 images using the interpolation technique, with a 15-min time interval between images. Fig. 7(a1)-(a8) shows images from the whole day. Fig. 7(b1) contains an overlay of the images of Fig. 7(a1) and (a8). Further, Fig. 7(c1) is a \(\times 5\) zoomed section showing the existence of two eyes of the cyclone, or vortices, denoted \(V_{c_{1}}\) and \(V_{c_{2}}\). It also shows the path of the eye of the cyclone for the whole day, estimated by the DL algorithm and traced manually. Fig. 7(d1) shows the separate trajectories (in red: bigger circles by DL) predicted by both methods. The DL algorithm estimated the coordinates of 95 eyes, out of which 61 (with unique coordinates) are used for polynomial regression fitting, extended toward both ends. Coordinates (71 unique) of the eyes [see Fig. 7(d1), shown as black circles] are also located manually. Six persons were given the data and apt training to visually locate the eye of the vortex. The averaged manual data [see Fig. 7(d1), shown as blue dots] are fitted using the same polynomial regression method for predicting the path and compared with the predicted path obtained by the DL algorithm. The path predicted by the DL algorithm appears to originate near the first eye [see Fig. 7(a1) and (d1)] and the eye from the final image of the data [see Fig. 7(a8) and (d1)]. One can visualize the clear presence of the eye of the cyclone in Fig. 7(a1)-(a4). In the successive frames, the six persons had to estimate approximate centers. We note [see Fig. 7(a4)] that the eye of the cyclone does not always remain at the center. This might be the reason for the difference between the manually and DL extracted paths of the cyclone in Fig. 7(c1). The average difference between the fitted curve and the original position (extracted manually) is found to be less than five pixels in every case that we analyzed. If cyclonic activity was absent in the images, our model correctly labeled them as containing no cyclone and output no bounding boxes. One pixel is equivalent to 1 km. The average and maximum velocities of the cyclone can be estimated from the coordinates of the eyes; the coordinate information with respect to time is used for this calculation. In this particular case, these are found to be 30.1 and 140.2 km/h, respectively. Of course, a correlation is required for estimating the ground reality. Close observation of the initial few images depicts the appearance of two cyclones, one at the bottom left and another, rather weak one, on the right side. The weaker cyclone, on the right side, dissolves after 9 h. The DL algorithm clearly avoids classifying it as a storm. We have found many instances (not in this case study) in which multiple cyclones exist in a time sequence or arise later; in some instances, a cyclone with a strong appearance dies out while a weaker one turns into a strong cyclone. All such cases are tested successfully. The algorithm is also tested with the option in which images contain multiple cyclones together.
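The path fitting and the speed figures quoted above follow from the time-stamped eye coordinates alone: a low-order polynomial regression of the \(x\) and \(y\) coordinates against time gives the smoothed track, and consecutive displacements (1 pixel \(\approx\) 1 km, 15-min spacing) give the speeds. A minimal NumPy sketch is given below; the coordinates in the example are synthetic, and the polynomial degree is an assumed choice.

```python
import numpy as np

def fit_track(times_h, xs, ys, degree=3):
    """Polynomial regression of eye coordinates against time (hours)."""
    px = np.polyfit(times_h, xs, degree)
    py = np.polyfit(times_h, ys, degree)
    return lambda t: (np.polyval(px, t), np.polyval(py, t))

def speeds_kmh(times_h, xs, ys, km_per_pixel=1.0):
    """Speed between consecutive eye positions (pixels -> km, time in hours)."""
    dx = np.diff(xs) * km_per_pixel
    dy = np.diff(ys) * km_per_pixel
    dt = np.diff(times_h)
    return np.hypot(dx, dy) / dt

# Synthetic example: eye positions every 15 min over 3 hours.
t = np.arange(0, 3.0, 0.25)
x = 400 + 8 * t + np.random.randn(t.size)
y = 520 - 5 * t + np.random.randn(t.size)
track = fit_track(t, x, y)
v = speeds_kmh(t, x, y)
print(track(1.5), float(v.mean()), float(v.max()))
```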
Fig. 7: Automatic location of the eye of the vortex of a cyclone: (a1)-(a8) time series images (Movie 1) showing extracted boundaries of the vortex between 00:00:00 hours and 23:30:00 hours; (b1) overlapped images a1 (red) and a8 (green); (c1) \(\times\)5 zoom covering only the cyclone region, with the paths traversed by the eye of the vortex (black: via DL algorithm, blue: manually); (d1) \(\times\)11 zoom showing the coordinates of the eye of the vortex, with initial and final vortex locations \(v_{c_{1}}\) and \(v_{c_{8}}\), respectively.

The multiple-cyclone option is not used in Fig. 7. The processed data between May 17, 2016 and December 6, 2017 are shown in Fig. 8. Table V contains the details of five different cyclones that developed in this duration. The respective data are obtained from the Joint Typhoon Warning Center (JTWC) and shown in Fig. 8(a) [59, 60, 61, 62]. The JTWC data tracks are obtained from the web repositories of 1) NCAR Lab, UCAR (yellow pin shape: only in the case of Ockhi) and 2) the North Indian Ocean Best Track Data by the Naval Meteorology and Oceanography Command of the U.S. Navy (multicolored cyclone shape). The coordinates of the eyes of all five cyclones are obtained manually and shown in Fig. 8(b) and (c) using red dots [best seen in the composite image in Fig. 8(d)]. Fig. 8(c) contains the data related to Ockhi on the Arabian Sea side of the Indian Peninsula; the other four storms were present on the Bay of Bengal side. The root-mean-square error (RMSE) is estimated between the coordinates obtained by DL and the coordinates obtained manually, with the manually obtained coordinates assumed to be near the true values. The DL algorithm has successfully detected the instances of cyclonic activity over the two-year duration at the respective locations, matching the manually obtained locations with less than 16% error. Weaker instances (wind speed less than 55 km/h), namely TWO, NADA, MAARUTHA, and FOUR, are skipped to keep this figure clear. Fig. 8(c) shows the date-wise, color-tagged estimated path (coordinates of the eye of the cyclone) obtained using the DL algorithm (green, pink, blue, yellow-red, blue, yellow, and green, respectively, from November 29 to December 4, 2017). Fig. 8(d) is made by overlaying the images shown in Fig. 8(a)-(c). For Ockhi, the estimates by the DL algorithm, those reported earlier, and the manual estimates are close to each other from November 30, 2017 to December 4, 2017. For VARDHA (green dots by DL and the curve with cyclone markers by JTWC), both match even better. It is observed that the DL estimation and the manual estimation stay close to each other, but the JTWC tracks digress when the storm is about to weaken. The slope of the latter part of the Mora and Ockhi tracks differs slightly; otherwise, the JTWC tracks show similar character with a minor offset. The slight difference may be due to the difference in perception of the eye of the vortex between a human user and the DL algorithm. The processed flat images do not convey \(z\)-axis perception; the eye of a cyclone visually appears as a hole or darker region surrounded by clouds rotating in a spiral or circular fashion. Sometimes, the camera of the satellite is not directly above the eye; the eye is aligned according to the curvature of the Earth but the satellite's camera is not. Images taken in such conditions give the impression that there are clouds on top of the eye, hiding it. In such situations, the only way to locate the eye is to assume that it lies at the center of the cyclone, which may not be true. It is observed that the DL algorithm performs better than a human user at identifying the eye in such conditions. We note that there is a difference between the JTWC data from the two sources for Ockhi.
One is depicted using a yellow button and the other by storm-shaped markers. Movie 2 contains the time series migration of Ockhi in a detailed fashion.

Fig. 8: Tracing the cyclone path. (a) Cyclones between 2016 and 2017 from JTWC records over the Indian subcontinent with traced paths. (b) Location coordinates of cyclones in the year 2016 using the DL algorithm. (c) Location coordinates of cyclones in the year 2017 using the DL algorithm. (d) Overlay of the coordinates of the cyclone paths.

## IV Conclusion

Several datasets are used as training and testing sets for the development of a DL algorithm. Several interpolation methods are also tested for enhancing the performance of the DL algorithm. The basic CNN model outperforms all the standard models in the classification regime of RS images. In particular, the YOLO model is suggested for detecting and locating the cyclone, and the RetinaNet model is suggested for predicting the location of the storm. Pointwise conclusions are given as follows.

1. The FO-based approach is slightly better than Brox's approach for optical flow estimation for interpolation.
2. The SFI approach is better as far as the MDE and NSD error estimates are concerned, but, for a slight compromise in performance, the nearest neighbor method is considerably faster.
3. If a high-frequency dataset is not available, then RetinaNet is a better model than LSTM.
4. The performance of the DL algorithm improves with a dataset densified using interpolation and data augmentation.
5. Tracking and predicting the eye of a cyclone is much more accurate with the DL algorithm than with the manual process, provided that the images do not contain more than one cyclone.

Finally, DL algorithms for classification and prediction of storms in the near future are successfully tested. For classifying storm or nonstorm, an accuracy of 97% is achieved, and for detection of the cyclone, a confidence greater than 84% is achieved. The DL algorithm contains two different neural networks, one for classification and another for locating the eye of the cyclone, trained separately. The performance of classification is expected to affect the performance of locating the eye of the cyclone. The flat images fail to impart the effect due to the curvature of the Earth, which needs to be incorporated separately in the future, if such information is made available. The outcome of the presented work depicts the involved mathematical nature of the issues and highlights an approach to find an optimal classical preprocessing candidate before employing a neural network, in the form of a complete DL algorithm. _We note that an advanced postprocessing DL algorithm, such as the one presented in this article, can help to predict the cyclone pathway for early preparedness, with more accuracy and speed and without much human intervention._

## Acknowledgment

The authors gratefully acknowledge the Indian Space Research Organization, MOSDAC, and other similar open-access repositories for providing data. The authors acknowledge the early stage data processing contribution made by K. Kumar and SPARK Intern A. Agarawal. The authors would also like to acknowledge the SPARK Intern Program of the Indian Institute of Technology Roorkee.

## References

* [1] W. C. Lin, D. Y. Liao, C. Y. Liu, and Y. Y. Lee, \"Daily imaging scheduling of an earth observation satellite,\" _IEEE Trans. Syst., Man, Cybern. A, Syst. Humans_, vol. 35, no. 2, pp. 213-223, Mar. 2005.
* [2] Y. Li, R. Wang, and M. Xu, \"Scheduling and rescheduling of imaging satellite based on ant colony optimization,\" _J. Comput. Inf. Syst._ vol. 9, no. 16, pp. 6503-6510, 2013.
* [3] G. Langella, A. Basile, A. Bonfante, and F.
Terrible, \"High-resolution space-time rainfall analysis using integrated ANN inference systems,\" _J. Hydrol._, vol. 387, no. 3/4, pp. 328-342, Jun. 15, 2010. * [4] J. Li, X. Huang, and J. Gong, \"Deep neural network for remote-sensing image interpretation: Status and perspectives,\" _Nat. Sci. Rev._, May 2, 2019. * [5] N. Valizadeh _et al._, \"Artificial intelligence and geo-statistical models for stream-flow forecasting in ungauged stations: State of the art,\" _Natural Hazards_, vol. 86, pp. 1377-1392, 2017. * [6] X. Zhao, T.Xu, Y. Fu, E. Chen, andH. Guo, \"Incorporating spatio-temporal smoothness for air quality inference,\" in _Proc. IEEE Int. Conf. Data Mining_, 2017, pp. 1177-1182. * [7] Q. Liu, S. Wu, L. Wang, and T. Tan, \"Predicting the next location: A recurrent model with spatial and temporal contexts,\" in _Proc. 30th AAAI Conf. Artif. Intell._, 2016, pp. 194-200. * [8] J. Ehrhardt, D. Saring, and H. Handels, \"Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method,\" _Methods Inf. Med._, vol. 46, no. 3, pp. 300-307, 2007. * [9] J. Ehrhardt, D. Saring, and H. Handels, \"Interpolation of temporal image sequences by optical flow based registration,\" in _Bildverarbeitung fur die Medizin_, H. Handels, J. Ehrhardt, A. Horsch, H. P. Meinzer, and T. Tolkdorff, Eds. Berlin, Germany: Informatik aktuell Springer, 2006, pp. 256-260. * [10] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W. K. Wong, and W.-C. Woo, \"Convolutional LSTM network: A machine learning approach for precipitation nowcasting,\" in _Advances in Neural Information Processing Systems_, vol. 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds. Red Hook, NY, USA: Curran, 2015, pp. 802-810. * [11] A. Grover, A. Kapoor, and E. J. Horvitz, \"A deep hybrid model for weather forecasting,\" in _Proc. 21st ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining_, Aug. 10-13, 2015, pp. 379-386. * [12] A. Kamilaris and F. X. Prenta-Bolda, \"Deep learning in agriculture: A survey,\" _Comput. Electron. Agriculture_, vol. 147, pp. 70-90, Apr. 2018. * [13] M. Hossain, B. Rekabdar, S. J. Louis, and S. Dascalu, \"Forecasting the weather of Nevada: A deep learning approach,\" in _Proc. Int. Joint Conf. Neural Nenc._, Killarney, Ireland, Jul. 12-17, 2015. * [14] J. Zhang, P. Zhong, Y. Chen, and S. Li, \"L\\({}_{1/2}\\)-regularized deconvolution network for the representation and restoration of optical remote sensing images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 5, pp. 2617-2627, May 2014. * [15] W. Huang, L. Xiao, Z. Wei, H. Liu, and S. Tang, \"A new pan sharpening method with deep neural networks,\" _IEEE Geosci. Remote Sens._, vol. 12, no. 5, pp. 1037-1041, May 2015. * [16] X. Jia, B. C. Kuo, and M. M. Crawford, \"Feature mining for hyperspectral image classification,\" _Proc. IEEE_, vol. 101, no. 3, pp. 676-697, Mar. 2013. * [17] F. Melgani and L. Bruzzone, \"Classification of hyperspectral remote sensing images with support vector machines,\" _IEEE Trans. Geosci. Remote Sens._, vol. 42, no. 8, pp. 1778-1790, Aug. 2004. * [18] L. Zhang, L. Zhang, and B. Du, \"Deep learning for remote sensing data: A technical tutorial on the state of the art,\" _IEEE Geosci. Remote Sens. Mag._, vol. 4, no. 2, pp. 22-40, Jun. 2016. * [19] J. A. Parker, R. V. Kenyon, and D. E. Troxel, \"Comparison of interpolating methods for image resampling,\" _IEEE Trans. Med. Imag._, vol. MI-2, no. 1, pp. 31-39, Mar. 1983. * [20] T. Gurdan, M. R. Oswald, D. Gurdan, and D. 
Cremers, \"Spatial and temporal interpolation of multi-view image sequences,\" in _German Conference on Pattern Recognition_ (Lecture Notes in Computer Science), vol. 8753, X. Jiang, J. Hornegger, and R. Koch, Eds. Cham, Switzerland. Springer, 2014. * [21] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, \"Performance of optical flow techniques,\" _Int. J. Comput. Vision_, vol. 12, pp. 43-77, 1994. * [22] B. K. P. Horn and B. G. Schunck, \"Determining optical flow,\" _Artif. Intell._, vol. 17, no. 1-3, pp. 185-203, 1981. * [23] B. Lucas and T. Kanade, \"An iterative image registration technique with an application to stereo vision,\" in _Proc. 7th Int. Joint Conf. Artif. Intell._, Vancouver, BC, Canada, 1981, pp. 674-679. * [24] L. M. G. Drummond and B. F. Svaiter, \"A steepest descent method for vector optimization,\" _J. Comput. Appl. Math._, vol. 175, no. 2, pp. 395-414, 2005. * [25] J. Barzilai and J. M. Borwein, \"Two-point step size gradient methods,\" _IMA J. Numer Anal._, vol. 8, no. 1, pp. 141-148, 1988. * [26] R. Fletcher, _Practical Methods of Optimization_, 2nd ed. New York, NY, USA: Wiley, 1987. * [27] D. Marquardt, \"An algorithm for least-squares estimation of nonlinear parameters,\" _SIAM J. Appl. Math._, vol. 11, no. 2, pp. 431-441, 1963. * [28] R. Fletcher and C. M. Reeves, \"Function minimization by conjugate gradients,\" _Comput. J._, vol. 7, no. 2, pp. 149-154, 1964. * [29] W. C. Davidson, \"Variable metric method for minimization,\" _SIAM J. Optim._, vol. 1, pp. 1-17, 1991. * [30] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, \"High accuracy optical flow estimation based on a theory for warping,\" in _Proc. Eur. Conf. Comput. Vision_, 2004, vol. 4, pp. 25-36. * [31] A. Bruhn, J. Weickert, and C. Schnorr, \"Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods,\" _Int. J. Comput. Vision_, vol. 61, no. 3, pp. 211-231, 2005. * [32] P. R. Giacone and G. A. Jones, \"Feed-forward estimation of optical flow,\" in _Proc. IET Conf._, 1997, 204-208. * [33] J. Malo, J. Gutierrez, I. Epifnio, and F. J. Ferri, \"Perceptually weighted optical flow for motion-based segmentation in MPEG-4 paradigm,\" _Electron. Lett._, vol. 36, no. 20, pp. 1693-1694, 2000. * [34] P. Kumar, S. Kumar, and R. Balasubramanian, \"A fractional order variational model for the robust estimation of optical flow from image sequences,\" _Optik--Int. J. Light Electron Opt._, vol. 127, no. 20, pp. 8710-8727, 2016. * [35] P. Kumar and S. Kumar, \"A modified variational functional for estimating dense and discontinuity preserving optical flow in various spectrum,\" _AEU--Int. J. Electron. Commun._, vol. 70, no. 3, pp. 289-300, 2016. * [36] M. Shi and V. Solo, \"Empirical choice of smoothing parameters in robust optical flow estimation,\" in _Proc. IEEE Int. Conf. Acoust., Speech, Signal Process._, 2003, vol. 3, pp. 349-352. * [37] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, \"You only look once: Unified, real-time object detection,\" in _Proc. IEEE Conf. Comput. Vision Pattern Recognit._, 2016, pp. 779-788. * [38] H. H. Nagel and W. Enkelmann, \"An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. PAMI-8, no. 5, pp. 565-593, Sep. 1986. * [39] S. Shakya and S. Kumar, \"Characterising and predicting the movement of clouds using fractional-order optical flow,\" _IET Image Process._, vol. 13, no. 8, pp. 1375-1381, 2019. * [40] W. R. Crum, O. Camara, and D. J. 
Hawkes, \"Methods for inverting dense displacement fields: Evaluation in brain image registration,\" in _Proc. 10th Int. Conf. Med. Image Comput. Comput. Assisted Intervention--Part I_, 2007, pp. 900-907. * [41] D. C. Wilson and B. A. Mair, \"Thin-plate spline interpolation,\" in _Sampling, Wavelets, and Tomography. Applied and Numerical Harmonic Analysis_, J. J. Benedetto and A. I. Zayed, Eds. Boston, MA, USA: Birkhauser, 2004. * [42] T. Muhlenstadt and S. Kuhnt, \"Kernel interpolation,\" _Comput. Statist. Data Anal._, vol. 55, no. 11, pp. 2962-2974, 2011. * [43] S. Kumar, S. Kumar, N. Sukavanam, and R. Balasubramanian, \"Dual tree fractional quaternion wavelet transform for disparity estimation,\" _ISA Trans._, vol. 53, no. 2, pp. 547-559, 2014. * [44] [Online]. Available: [https://keras.ioapplications/](https://keras.ioapplications/) * [45] [Online]. Available: [https://colab.research.google.com/notebooks/welcome:injpyn#](https://colab.research.google.com/notebooks/welcome:injpyn#) * [46] F. Chollet, \"Xception: Deep learning with depthwise separable convolutions,\" in _Proc. Conf. Comput. Vision Pattern Recognit._, 2017, pp. 1800-1807. * [47] J. D. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, \"You only look once: Unified, real-time object detection,\" 2015, _arXiv:1506.02640_. * [48] S. Ren, K. He, R. B. Girshick, and J. Sun, \"Faster R-CNN: Towards real-time object detection with region proposal networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 39, no. 6, pp. 1137-1149, Jun. 2017. * [49] T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollar, \"Focal loss for dense object detection,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 42, no. 2, pp. 318-327, Feb. 2020. * [50] [Online]. Available: [http://satellite.imd.gov.in/archive/](http://satellite.imd.gov.in/archive/) * [51] X. Yu X. Wu, C. Luo, and P. Ren, \"Deep learning in remote sensing scene classification: A data augmentation enhanced convolutional neural network framework,\" _GISci. Remote Sens._, vol. 54, no. 5, pp. 741-758, 2017. * [52] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, \"Comparing images using the Hausdorff distance,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 15, no. 9, pp. 850-863, Sep. 1993. * [53] T.-L. Pao and J.-H. Yeh, \"Typhoon locating and reconstruction from the infrared satellite cloud image,\" _J. Multimedia_, vol. 3, pp. 45-50, 2008. * [54] [Online]. Available: [http://www.smcnewedhi.imd.gov.in/images/pdf/publications/preliminary-report/helen_2013.pdf](http://www.smcnewedhi.imd.gov.in/images/pdf/publications/preliminary-report/helen_2013.pdf) * [55] [Online]. Available: [https://matplotlib.org/](https://matplotlib.org/) * [56] [Online]. Available: [https://keras.io](https://keras.io) * [57] J. M. Johnson and T. M. Khoshgoftaar, \"Survey on deep learning with class imbalance,\" _J. Big Data_, vol. 6, 2019, Art. no. 27. * [58] [Online]. Available: [https://www.mosdac.gov.in/](https://www.mosdac.gov.in/) * [59] \"JTWC track: Naval Meteorology and Oceanography Command\" [Online]. Available: [https://www.metoc.navy.mil/jfwc/jfwc.html?north-indian-ocean](https://www.metoc.navy.mil/jfwc/jfwc.html?north-indian-ocean) * [60] \"National Center for Atmospheric Research database\" [Online]. Available: [http://hurricane.kat.ucar.edu/realtime/plots/html/2017/16/302107/1](http://hurricane.kat.ucar.edu/realtime/plots/html/2017/16/302107/1) * [61] M. Samy, S. K. Karthikeyan, S. Durai, and R. 
Sherif, \"Ockhi cyclone and its impact in the Kanyakumari district of southern Tamil Nadu, India: An aftermath analysis,\" _Int. J. Recent Res. Aspects_, pp. 466-469, Apr. 2018. * [62] A. A. Fousiya and A. M. Lone, \"Cyclone Ockhi and its impact over Minicoy Island, Lakshadweep, India,\" _Current Sci._, vol. 115, no. 5, pp. 819-820, Sep. 10, 2018. \begin{tabular}{c c} & Snehlata Shakya received the M.Sc. degree from Nehru College Chhibramach, Chhatrapati Shahu Ji Maharaj University, Kanpur, India, in 2006, and the Ph.D. degree in mechanical engineering from the Indian Institute of Technology Kanpur, Kanpur, India, in 2014. She has been working with the University of Bergen, Bergen, Norway, and the University of Linkoping, Linkoping, Sweden, on various assignments. She is currently a Postdoctoral Employee with the Department of Clinical Physiology, Lund University, Lund, Sweden. Her research interests include computerized tomography, magnetic resonance imaging, medical imaging, image processing, inverse problems, error estimates, and fluid flow analysis. \\ \end{tabular} \begin{tabular}{c c} & Sanjeev Kumar received the M.Sc. degree in applied mathematics and the Ph.D. degree in mathematics from the Indian Institute of Technology Roorkee, Roorkee, India, in 2003 and 2008, respectively. He is currently an Associate Professor with the Department of Mathematics, Indian Institute of Technology Roorkee. His research interests include computer vision and mathematical imaging, inverse problems in imaging and control, and machine learning. \\ \end{tabular} \begin{tabular}{c c} & Mayank Goswami received the B.E. degree in electrical engineering from the Madhav Institute of Technology and Science, Gwalior, India, in 2006, and the M.Tech. and Ph.D. degrees in nuclear engineering and technology from the Indian Institute of Technology Kanpur, Kanpur, India, in 2009 and 2014, respectively. He is currently an Assistant Professor with the Department of Physics, Indian Institute of Technology Roorkee, Roorkee, India. His research interests include image and signal processing, instrumentation for noninvasive and direct imaging, and algorithm development for inverse problems using AI and classical optimization. \\ \end{tabular}
Satellite images are primary data in weather prediction modeling. Deep learning, a viable candidate for automatic image processing, requires large sets of annotated data with diverse characteristics for training. The accuracy of weather prediction improves when the data have a relatively dense temporal resolution. We employ interpolation and data augmentation techniques to enhance the temporal resolution and to diversify the characteristics of a given dataset. The algorithm requires classical approaches during the preprocessing steps. Three optical flow methods using 14 different constraint optimization techniques and five error estimates are tested here. The artificially enriched data (the optimal combination from the previous exercise) are used as a training set for a convolutional neural network that classifies images as storm or nonstorm. Several cyclone datasets (eight cyclones of different classes) were used for training. A deep learning model is trained and tested with the artificially densified and classified storm data for cyclone classification and for locating the cyclone vortex, giving at least 90% and 84% accuracy, respectively. In the final step, we show that the linear regression method can be used for predicting the cyclone path.
# Formation Control of Nonholonomic Multirobot Systems Over Robot Coordinate Frames and Its Application to LiDAR-Based Robots

Kazunori Sakurama, Chunlai Peng, Ryo Asai, Hirokazu Sakata, and Mitsuhiro Yamazumi

Manuscript received 5 January 2024; accepted 19 April 2024. Date of publication 16 May 2024; date of current version 23 October 2024. This work was supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 22H01511 and Grant 23H04468, and in part by the project of Theory of Innovative Mechanical Systems, a collaboration between Kyoto University and Mitsubishi Electric Corporation. Recommended by Associate Editor C. N. Jones. _(Corresponding author: Kazunori Sakurama.)_ Kazunori Sakurama is with the Graduate School of Engineering Science, Osaka University, Toyonaka 560-8531, Japan (e-mail: [email protected]). Chunlai Peng, Ryo Asai, and Hirokazu Sakata are with the Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan (e-mail: [email protected]; [email protected]; [email protected]). Mitsuhiro Yamazumi is with the Advanced Technology Research and Development Center, Mitsubishi Electric Corporation, Amagasaki 661-8661, Japan (e-mail: [email protected]). Digital Object Identifier 10.1109/TCSI.2024.397018

## I Introduction

Multirobot systems and multiagent systems have been studied enthusiastically for decades because of their wide range of applications [1, 2], especially in the field of control engineering [3, 4]. For these systems, distributed control, based on local communication and/or observation, is important for reducing computational burdens [5]. Many types of distributed controllers have been designed for various control tasks, including consensus [6, 7], coverage [8], and attitude synchronization [9, 10]. Formation control is one of the most important tasks, aiming to achieve a desired configuration of robots [11] for efficient inspection, investigation, and surveillance [12, 13]. In particular, distance-based formation control has been investigated in many papers [14, 15, 16, 17, 18]. In this formation, the desired configuration is prescribed by desired distances between robots. In many studies concerning formation control, the motions of the robots are assumed to be controllable in any direction. However, common mobile robots, e.g., automatic guided vehicles (AGVs), unmanned surface vehicles (USVs), unmanned underwater vehicles (UUVs), and unmanned aerial vehicles (UAVs), usually have a nonholonomic constraint under which robots cannot slide in the lateral direction. Such a robot is controlled via the speed toward the heading direction and the rotation of that direction in the robot coordinate frame. In addition, measurement is also tied to the robot coordinate frame, particularly when using onboard sensors available indoors, including cameras and light detection and ranging (LiDAR). This type of measurement is said to be relative, as it depends on the position and rotation of each robot. The formation control of nonholonomic robots has been investigated widely for the displacement-based formation [19, 20, 21, 22], which requires the global frame (or the absolute bearing) under the assumption that the graph is connected. To avoid using the global frame, distance-based formation is applied to nonholonomic robots [23, 24, 25], which requires only relative measurements. For this formation, a rigid graph is necessary.
The bearing-based formation requires relative bearings [26, 27, 28]. Recently, Zhao et al. [29] developed a general method to transform any type of gradient-based controller into a controller of nonholonomic robots, where required measurements and graph topologies were not discussed, because they depend on objective functions. Especially, Zhao et al. [29] applied this method to distance-based control and designed a nonholonomic controller with relative measurements, and Zhao [30] applied the method of [29] to the affine formation control. The significant disadvantages of the existing work are that these papers do not discuss the performance of controllers and that they assume certain graph conditions, e.g., the rigidity and connectedness, while no discussion is provided on what happens for general graphs. In contrast, in this article, we design a distributed controller using relative measurements with the best performance, which is applicable to any graph. To do so, first, the control space of each robot and the measurements of other robots are defined in the robot coordinate frame as a subspace of the tangent space of \\(\\text{SE}_{d}\\) and the group action of \\(\\text{SE}_{d}\\), respectively. Next, a gradient-based method is developed by using the projection of the gradient flow of an objective function onto the control space. Then, we derive a necessary and sufficient condition of network topologies to achieve formation under this setting. Next, we design a distributed controller using only measurement in robot coordinate frames by applying the gradient-based method with a clique-based objective function. The clique-based function consists of functions each of which depends on the states of the robots belonging to a clique (i.e., complete subgraph). By employing a clique-based function rather than edge-based ones, the developed controller is guaranteed to have the best performance in that it has the least undesired minima. The effectiveness of the method is demonstrated in two ways. First, 100 simulations in 3-D space are conducted for comparison between the developed and existing methods. Second, the developed method is applied to an experiment with a team of LiDAR-based mobile robots in 2-D space to demonstrate its practical effectiveness. This article is an expanded version of the conference publication of Sakurama [31]. Compared with the conference version, the following points are updated. 1. In this article, the best performance of the proposed method is newly guaranteed in that it has the least undesired minima in Section VI. The performance advantage is illustrated through 100 simulations in Section VII, where the proposed controller always achieves the desired configuration, while the existing one fails in many cases to get stuck in an undesired minimum point. 2. In this article, we employ the group theory on the special Euclidean group to model the kinematics with the nonholonomic constraint and the relative measurements as the differential equation over \\(\\text{SE}(d)\\) and the group action over \\(\\text{SE}(d)\\), respectively, in Section II-B. This description allows us to guarantee the existence of the solution of the kinematics and the best performance of the controller and justify the expression of the relative measurements. In contrast, in the conference version, due to the lack of an appropriate description, there is no such guarantee or justification. 3. 
In this article, we rebuild the controller using the projection of the gradient flow onto the control space in (27) and (35) in Sections V and VI. This expression gives us a core idea of the proposed method more intuitively and allows us to prove theorems and lemmas rigorously. In the conference version, the controller was expressed in the matrix-multiplied form, and the proofs were not provided. 4. A simulation result for a large-scale system over a time-varying graph is provided in Section VII to show the scalability of the designed controller and the possibility of orientation control in a practical setting. Experimental results are added in Section VIII, which show the practicability of the method in the real world. This article is organized as follows. Section II provides mathematical preliminaries. Section III gives the robot model under the nonholonomic constraint with measurement in the robot coordinate frame. In Section IV, the target problem is formulated. Section V introduces the gradient-based method for the multirobot system. Section VI is the main part of this article, which provides the controller with the best performance. Sections VII and VIII show the simulation and experiment results, respectively. Section IX concludes this article. ## II Preliminaries ### _Notation_ Let \\(\\mathbb{R}\\) be the set of real numbers, and let \\(\\mathbb{R}_{+}\\) be the set of nonnegative real numbers. The Euclidean inner product and Euclidean norm are denoted as \\(\\langle\\cdot,\\cdot\\rangle\\) and \\(\\|\\cdot\\|\\), respectively. The transpose of a matrix is represented with the superscript \\(\\top\\). The determinant of a matrix is denoted by \\(\\det(\\cdot)\\). Let \\(\\mathbf{0}_{d}\\in\\mathbb{R}^{d}\\) and \\(\\mathbf{1}_{d}\\in\\mathbb{R}^{d}\\) represent the vector with all components \\(0\\) and \\(1\\), respectively, and let \\(I_{d}\\in\\mathbb{R}^{d\\times d}\\) represent the \\(d\\)-dimensional identity matrix. Let \\(\\text{SO}_{d}\\subset\\mathbb{R}^{d\\times d}\\) be the special orthogonal group, i.e., the set of the orthogonal matrices of dimension \\(d\\) with determinant one. Let \\(\\text{SE}_{d}\\subset\\mathbb{R}^{(d+1)\\times(d+1)}\\) be the special Euclidean group, the set of the matrices consisting of the elements in \\(\\text{SO}_{d}\\) and \\(\\mathbb{R}^{d}\\) in the following way: \\[\\text{SE}_{d}=\\left\\{\\begin{bmatrix}R&x\\\\ \\mathbf{0}_{d}^{\\top}&1\\end{bmatrix}\\in\\mathbb{R}^{(d+1)\\times(d+1)}:(R,x) \\in\\text{SO}_{d}\\times\\mathbb{R}^{d}\\right\\}.\\] Let \\(\\text{Skew}_{d}\\subset\\mathbb{R}^{d\\times d}\\) be the set of the skew-symmetric matrices of dimension \\(d\\). For a unit vector \\(b\\in\\mathbb{R}^{d}\\), the orthogonal projection onto the subspace \\(\\{bv:v\\in\\mathbb{R}\\}\\) is denoted by \\(\\text{proj}_{b}:\\mathbb{R}^{d}\\rightarrow\\mathbb{R}^{d}\\), and the orthogonal projection onto the orthogonal complement of the subspace is denoted by \\(\\text{proj}_{b}^{\\perp}:\\mathbb{R}^{d}\\rightarrow\\mathbb{R}^{d}\\), that is, for \\(x\\in\\mathbb{R}^{d}\\) \\[\\text{proj}_{b}x=bb^{\\top}x,\\quad\\text{proj}_{b}^{\\perp}x=(I_{d}-bb^{\\top})x. \\tag{1}\\] The orthogonal projection onto \\(\\text{Skew}_{d}\\) is denoted by \\(\\text{proj}_{\\text{Skew}_{d}}:\\mathbb{R}^{d\\times d}\\rightarrow\\mathbb{R}^{d \\times d}\\), that is, for a matrix \\(X\\in\\mathbb{R}^{d\\times d}\\) \\[\\text{proj}_{\\text{Skew}_{d}}X=\\frac{1}{2}(X-X^{\\top}). 
\\tag{2}\\] For elements \\(x_{1}\\in\\mathcal{X}_{1},x_{2}\\in\\mathcal{X}_{2},\\ldots,x_{n}\\in\\mathcal{X}_{n}\\) of sets \\(\\mathcal{X}_{1},\\mathcal{X}_{2},\\ldots,\\mathcal{X}_{n}\\) and a set \\(\\mathcal{C}\\subset\\{1,2,\\ldots,n\\}\\) of positive integers, let \\(x_{\\mathcal{C}}\\) be the \\(|\\mathcal{C}|\\) tuple consisting of \\(x_{i}\\) for \\(i\\in\\mathcal{C}\\), denoted as follows: \\[x_{\\mathcal{C}}=(x_{i_{1}},x_{i_{2}},\\ldots,x_{i_{|\\mathcal{C}|}})\\in\\prod_{i \\in\\mathcal{C}}X_{i}\\] where \\(|\\mathcal{C}|\\) is the number of the elements of \\(\\mathcal{C}\\) and \\(i_{1},i_{2},\\ldots,i_{|\\mathcal{C}|}\\in\\mathcal{C}\\) satisfy \\(1\\leq i_{1}<i_{2}<\\cdots<i_{|\\mathcal{C}|}\\leq n\\). Let \\(\\text{ave}(\\cdot)\\) be the componentwise average of a tuple \\(x_{\\mathcal{C}}\\) consisting of vectors \\(x_{i}\\in\\mathbb{R}^{d}\\) for \\(i\\in\\mathcal{C}\\), that is, \\[\\text{ave}(x_{\\mathcal{C}})=\\frac{1}{|\\mathcal{C}|}\\sum_{i\\in\\mathcal{C}}x_{i}.\\]For a tuple \\(x_{\\mathcal{C}}\\in(\\mathbb{R}^{d})^{|\\mathcal{C}|}\\) and a set \\(\\mathcal{A}\\subset(\\mathbb{R}^{d})^{|\\mathcal{C}|}\\), the distance between \\(x_{\\mathcal{C}}\\) and \\(\\mathcal{A}\\) is defined as follows: \\[\\mathrm{dist}(x_{\\mathcal{C}},\\mathcal{A})=\\inf_{y_{\\mathcal{C}}\\in\\mathcal{A }}\\sqrt{\\sum_{i\\in\\mathcal{C}}\\|x_{i}-y_{i}\\|^{2}}. \\tag{3}\\] For a function \\(V:(\\mathbb{R}^{d})^{n}\\to\\mathbb{R}\\) of \\(x_{\\mathcal{N}}\\), where \\(\\mathcal{N}=\\{1,2,\\ldots,n\\}\\), its gradient with respect to \\(x_{i}\\) is denoted by \\(\ abla_{i}V:(\\mathbb{R}^{d})^{n}\\to\\mathbb{R}^{d}\\), that is, \\[\ abla_{i}V(x_{\\mathcal{N}})=\\frac{\\partial V}{\\partial x_{i}}(x_{\\mathcal{N }}).\\] The zero set of a function \\(f:(\\mathbb{R}^{d})^{n}\\to\\mathbb{R}^{m}\\) is denoted by \\(f^{-1}(0)\\subset(\\mathbb{R}^{d})^{n}\\), that is, \\(f^{-1}(0)=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}:f(x)=0\\}\\). ### _Matrix Lie Groups_ Some properties of matrix Lie groups are provided, which are basically the standard definitions for general Lie groups [32]. First, matrix Lie groups are defined as follows. **Definition 1**: _A closed subset \\(\\mathcal{M}\\) of \\(\\mathbb{R}^{m\\times m}\\) is called a matrix Lie group if \\(\\mathcal{M}\\) is a set of matrices of nonzero determinant satisfying the following: 1) \\(M_{1}\\), \\(M_{2}\\in\\mathcal{M}\\Rightarrow M_{1}M_{2}\\in\\mathcal{M}\\); 2) \\(I_{d}\\in\\mathcal{M}\\); and 3) \\(M\\in\\mathcal{M}\\Rightarrow M^{-1}\\in\\mathcal{M}\\)._ The group action is defined to a matrix Lie group as follows. **Definition 2**: _For a matrix Lie group \\(\\mathcal{M}\\subset\\mathbb{R}^{m\\times m}\\) and a subset \\(\\mathcal{B}\\subset\\mathbb{R}^{m}\\), \\(\\mathcal{M}\\) is said to act on \\(\\mathcal{B}\\) if \\(Mb\\in\\mathcal{B}\\) holds for any \\(M\\in\\mathcal{M}\\) and \\(b\\in\\mathcal{B}\\)._ As a property of a differentiable manifold, the tangent space of a matrix Lie group is defined as follows. 
**Definition 3**: _For a matrix Lie group \\(\\mathcal{M}\\subset\\mathbb{R}^{m\\times m}\\), the tangent space of \\(\\mathcal{M}\\) at \\(M_{0}\\in\\mathcal{M}\\), denoted by \\(T_{M_{0}}\\mathcal{M}\\), is defined as follows:_ \\[T_{M_{0}}\\mathcal{M}=\\{M^{\\prime}(0)\\in\\mathbb{R}^{m\\times m}:\\\\ M\\in\\mathcal{C}^{1}((-\\varepsilon,\\,\\varepsilon),\\,\\mathcal{M}),\\,M (0)=M_{0}\\} \\tag{4}\\] _for a constant \\(\\varepsilon>0\\), where \\(\\mathcal{C}^{1}(J,\\mathcal{M})\\) denotes the set of the continuously differentiable functions with the domain \\(J\\) and the range \\(\\mathcal{M}\\), and \\(M^{\\prime}\\) represents the derivative of \\(M\\in\\mathcal{C}^{1}((-\\varepsilon,\\varepsilon),\\,\\mathcal{M})\\), that is, \\(M^{\\prime}(s)=d\\,M/ds(s)\\)._ The tangent space \\(T_{L_{n}}\\) at the identity matrix (i.e., a matrix Lie algebra) is representative of the tangent space at each point in the following way: \\[T_{M_{0}}\\mathcal{M}=\\{M_{0}U:U\\in T_{L_{n}}\\mathcal{M}\\}. \\tag{5}\\] For a matrix Lie group \\(\\mathcal{M}\\subset\\mathbb{R}^{m\\times m}\\), consider a matrix differential equation \\[\\dot{M}(t)=M(t)U(t) \\tag{6}\\] of \\(M(t)\\in\\mathcal{M}\\) with a function \\(U(t)\\in\\mathbb{R}^{m\\times m}\\). Then, \\(M(t)\\in\\mathcal{M}\\) holds as long as \\(\\dot{M}(t)\\in T_{M(t)}\\mathcal{M}\\), equivalently, \\(U(t)\\in T_{L_{n}}\\mathcal{M}\\) from (5). Actually, the following lemma is obtained. **Lemma 1**: _For a matrix Lie group \\(\\mathcal{M}\\subset\\mathbb{R}^{m\\times m}\\), let \\(M:\\mathbb{R}_{+}\\to\\mathbb{R}^{m\\times m}\\) be the solution of the differential equation (6) for \\(M(0)\\in\\mathcal{M}\\). Assume that the solution uniquely exists as a continuously differentiable function. Then, \\(M(t)\\in\\mathcal{M}\\) holds for any \\(t\\in\\mathbb{R}_{+}\\) if and only if \\(U(t)\\in T_{L_{n}}\\mathcal{M}\\) holds for any \\(t\\in\\mathbb{R}_{+}\\)._ From Definition 3, \\(M(t)\\in\\mathcal{M}\\) holds if and only if \\(\\dot{M}(t)\\in T_{M(t)}\\mathcal{M}\\) holds for any \\(t\\in\\mathbb{R}_{+}\\). Equation (5) guarantees that this condition is equivalent to the existence of \\(\\ddot{U}(t)\\in T_{L_{n}}\\mathcal{M}\\) satisfying \\(\\dot{M}(t)=M(t)\\ddot{U}(t)\\) for any \\(t\\). Hence, for each \\(t\\in\\mathbb{R}_{+}\\), the solution \\(M(t)\\) of (6) stays on \\(\\mathcal{M}\\) if and only if \\(U(t)\\in T_{L_{n}}\\mathcal{M}\\) holds. ### _Graph-Theoretic Concepts_ Some graph-theoretic concepts are introduced to describe the connection of the robots in multirobot systems, such as neighbor sets, cliques [33, 34], and rigidity [14, 35]. Consider an undirected graph \\(G=(\\mathcal{N},\\,\\mathcal{E})\\) with a node set \\(\\mathcal{N}=\\{1,2,\\ldots,n\\}\\) and an edge set \\(\\mathcal{E}\\). The _neighbor set_ of node \\(i\\in\\mathcal{N}\\) is defined as follows: \\[\\mathcal{N}_{i}=\\{j\\in\\mathcal{N}:\\{i,j\\}\\in\\mathcal{E}\\}\\cup\\{i\\}. \\tag{7}\\] For a node subset \\(\\mathcal{C}\\subset\\mathcal{N}\\) of \\(G\\), the subgraph induced by \\(\\mathcal{C}\\) is denoted as \\(G|_{\\mathcal{C}}=(\\mathcal{C},\\,\\mathcal{E}|_{\\mathcal{C}})\\), where \\(\\mathcal{E}|_{\\mathcal{C}}\\subset\\mathcal{E}\\) contains the edges of the nodes in \\(\\mathcal{C}\\), that is, \\(\\{i,j\\}\\in\\mathcal{E},i,\\,j\\in\\mathcal{C}\\Leftrightarrow\\{i,\\,j\\}\\in\\mathcal{ E}|_{\\mathcal{C}}\\). A node subset \\(\\mathcal{C}\\subset\\mathcal{N}\\) is called a _clique_ if the induced subgraph is complete. 
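Numerically, these graph-theoretic objects are straightforward to compute. The following is a minimal sketch, assuming Python with the networkx package (the article prescribes no software, and the example graph is invented for illustration), of the neighbor sets in (7), the clique test, and the enumeration of the maximal cliques introduced in the next paragraph.

```python
import networkx as nx

# Undirected graph G = (N, E) on nodes {1,...,7}; the edge set is illustrative only.
G = nx.Graph()
G.add_nodes_from(range(1, 8))
G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6), (6, 7)])

def neighbor_set(G, i):
    # Neighbor set N_i of (7): the neighbors of node i plus i itself.
    return set(G.neighbors(i)) | {i}

def is_clique(G, C):
    # C is a clique iff the induced subgraph G|_C is complete.
    C = list(C)
    return all(G.has_edge(u, v) for k, u in enumerate(C) for v in C[k + 1:])

print(neighbor_set(G, 3))        # {1, 2, 3, 4}
print(is_clique(G, {1, 2, 3}))   # True
print(is_clique(G, {3, 4, 5}))   # False: {3, 5} is not an edge

# Maximal cliques C_1, ..., C_q, used heavily in the sequel:
print(list(nx.find_cliques(G)))
```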
A clique \\(\\mathcal{C}\\) is said to be _maximal_ if \\(\\mathcal{C}\\) is not contained by any other cliques. The index set of the maximal cliques in \\(G\\) is denoted as \\(\\mathrm{Clq}=\\{1,2,\\ldots,q\\}\\), with which the maximal cliques are described as \\(\\mathcal{C}_{k}\\subset\\mathcal{N}\\) for \\(k\\in\\mathrm{Clq}\\). The index subset of the maximal cliques that node \\(i\\in\\mathcal{N}\\) belongs to is denoted as follows: \\[\\mathrm{Clq}_{i}=\\{k\\in\\mathrm{Clq}:i\\in\\mathcal{C}_{k}\\}.\\] The following relation connects the concepts of the neighbor sets and the maximal cliques: \\[\\mathcal{N}_{i}=\\bigcup_{k\\in\\mathrm{Clq}_{i}}\\mathcal{C}_{k}. \\tag{8}\\] For a graph \\(G\\) and a tuple of vectors \\(x_{\\mathcal{N}}^{*}\\in(\\mathbb{R}^{d})^{n}\\), a pair \\((G,x_{\\mathcal{N}}^{*})\\) is called a framework. A framework \\((G,x_{\\mathcal{N}}^{*})\\) is said to be _rigid_ if there exists an open set \\(\\mathcal{O}\\supset\\mathcal{X}^{*}\\), such that \\[\\bigcap_{\\{i,j\\}\\in\\mathcal{E}}\\tilde{\\mathcal{X}}_{ij}^{*}\\cap\\mathcal{O}= \\mathcal{X}^{*} \\tag{9}\\] where \\[\\tilde{\\mathcal{X}}_{ij}^{*}=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}: \\|x_{i}-x_{j}\\|=\\|x_{i}^{*}-x_{j}^{*}\\|\\} \\tag{10}\\] \\[\\mathcal{X}^{*}=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}:\\exists( \\Phi,\\,\\tau)\\in\\mathrm{SO}_{d}\\times\\mathbb{R}^{d}\\] \\[\\text{s.t. }x_{i}=\\Phi x_{i}^{*}+\\tau\\,\\,\\forall i\\in\\mathcal{N}\\}. \\tag{11}\\] Note that \\(\\cap_{\\{i,j\\}\\in\\mathcal{E}}\\tilde{\\mathcal{X}}_{ij}^{*}\\) is the set of the structures constructed by the joints and bars corresponding to the nodes and edges in \\(G\\). The definition (9) requires that each of the structures is locally congruent to \\(x_{\\mathcal{N}}^{*}\\). This concept is equivalent to the conventional rigidity. In contrast, a structure can be constructed by the maximal cliques in \\(G\\), instead of the edges, with the set \\[\\tilde{\\mathcal{X}}_{k}^{*}=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}: \\exists(\\Phi_{k},\\,\\tau_{k})\\in\\mathrm{SO}_{d}\\times\\mathbb{R}^{d}\\] \\[\\text{s.t. }x_{i}=\\Phi_{k}x_{i}^{*}+\\tau_{k}\\,\\,\\forall i\\in \\mathcal{C}_{k}\\} \\tag{12}\\] for \\(k\\in\\mathrm{Clq}\\). The structures constructed by the edges and those by the maximal cliques are locally equivalent as follows. **Lemma 2**: _For framework \\((G,x_{\\mathcal{N}}^{*})\\), there exists an open set \\(\\tilde{\\mathcal{O}}\\supset\\mathcal{X}^{*}\\), such that_ \\[\\bigcap_{(i,j)\\in\\mathcal{E}}\\tilde{\\mathcal{X}}_{ij}^{*}\\cap\\tilde{\\mathcal{O}} =\\bigcap_{k\\in\\mathrm{CI}_{\\mathbf{q}}}\\tilde{\\mathcal{X}}_{k}^{*}. \\tag{13}\\] See Appendix A. ### _Procrustes Problem_ For vectors \\(x_{i},\\,x_{i}^{*}\\in\\mathbb{R}^{d}\\), \\(i\\in\\mathcal{C}\\subset\\{1,2,\\ldots,n\\}\\), consider the following optimization problem, called a Procrustes problem [36]: \\[\\min_{(\\Phi,\\tau)\\in\\mathrm{SO}_{d}\\times\\mathbb{R}^{d}}\\sum_{i\\in\\mathcal{C} }\\|x_{i}-\\big{(}\\Phi x_{i}^{*}+\\tau\\big{)}\\|^{2}. \\tag{14}\\] Problem (14) appears when evaluating the discrepancy between the current and desired positions of robots in formation control. 
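Equations (15)–(17) below give the closed-form solution of (14) via a singular value decomposition. Purely as a preview, a minimal NumPy sketch of that computation is given here; the helper name and the toy data are illustrative and are not taken from the article.

```python
import numpy as np

def procrustes(x, x_star):
    """Solve (14): min over (Phi, tau) in SO_d x R^d of sum_i ||x_i - (Phi x_i^* + tau)||^2.

    x, x_star: arrays of shape (m, d) stacking x_i and x_i^* for the m robots of a clique.
    """
    xc = x - x.mean(axis=0)                 # x_i - ave(x_C)
    xs = x_star - x_star.mean(axis=0)       # x_i^* - ave(x_C^*)
    H = xs.T @ xc                           # the matrix decomposed in (17): H = P S Q^T
    P, S, Qt = np.linalg.svd(H)
    Q = Qt.T
    D = np.eye(x.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(P @ Q))   # enforce det(Phi_hat) = +1, cf. (15)
    Phi_hat = Q @ D @ P.T
    tau_hat = x.mean(axis=0) - Phi_hat @ x_star.mean(axis=0)   # cf. (16)
    return Phi_hat, tau_hat

# Toy check with a known rotation and translation (values are illustrative only).
rng = np.random.default_rng(0)
x_star = rng.normal(size=(5, 2))
theta = 0.7
Phi_true = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
x = x_star @ Phi_true.T + np.array([1.0, -2.0])
Phi_hat, tau_hat = procrustes(x, x_star)
print(np.allclose(Phi_hat, Phi_true), np.allclose(tau_hat, [1.0, -2.0]))   # True True
```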
The solution \\((\\hat{\\Phi},\\,\\hat{\\tau})\\in\\mathrm{SO}_{d}\\,\\times\\,\\mathbb{R}^{d}\\) of (14) is of the form [37] \\[\\hat{\\Phi} =Q\\mathrm{diag}(\\overline{1,\\ldots,1},\\,\\det(P\\,Q))\\,P^{\\top}\\in \\mathrm{SO}_{d} \\tag{15}\\] \\[\\hat{\\tau} =\\mathrm{ave}(x_{\\mathcal{C}}-\\hat{\\Phi}x_{\\mathcal{C}}^{*})\\in \\mathbb{R}^{d} \\tag{16}\\] where \\(P\\), \\(Q\\in\\mathbb{R}^{d\\times d}\\) are orthogonal matrices satisfying \\[\\sum_{i\\in\\mathcal{C}}\\big{(}x_{i}^{*}-\\mathrm{ave}\\big{(}x_{\\mathcal{C}}^{*} \\big{)}\\big{)}(x_{i}-\\mathrm{ave}(x_{\\mathcal{C}}))^{\\top}=PSQ^{\\top} \\tag{17}\\] with a diagonal matrix \\(S=\\mathrm{diag}(\\sigma_{1},\\,\\sigma_{2},\\,\\ldots,\\,\\sigma_{d})\\) of entries \\(\\sigma_{1},\\sigma_{2},\\ldots,\\,\\sigma_{d}\\) (\\(\\sigma_{1}\\geq\\sigma_{2}\\geq\\cdots\\geq\\sigma_{d}\\geq 0\\)). Note that (17) is the singular value decomposition (SVD) to evaluate the discrepancy between \\(x_{i}\\) and \\(x_{i}^{*}\\) for \\(i\\in\\mathcal{C}\\). Let \\(\\mathrm{Proc}\\,:\\,(\\mathbb{R}^{d})^{|\\mathcal{C}|}\\times(\\mathbb{R}^{d})^{| \\mathcal{C}|}\\to\\mathrm{Pow}(\\mathrm{SO}_{d})\\) be the set-valued function of \\(x_{\\mathcal{C}}\\) and \\(x_{\\mathcal{C}}^{*}\\) consisting of the matrices \\(\\hat{\\Phi}\\in\\mathrm{SO}_{d}\\) of the form (15), where \\(\\mathrm{Pow}(\\cdot)\\) denotes the power set of a set. Its element is regarded as a function of respectively, in \\(\\Sigma_{i}\\). Subsequently, the kinematic model (20) with \\[R_{i}(t)=\\begin{bmatrix}\\cos\\theta_{i}(t)&-\\sin\\theta_{i}(t)\\\\ \\sin\\theta_{i}(t)&\\cos\\theta_{i}(t)\\end{bmatrix},\\;\\;S_{i}(t)=\\begin{bmatrix}0& -\\omega_{i}(t)\\\\ \\omega_{i}(t)&0\\end{bmatrix} \\tag{23}\\] is reduced to the standard model of a rolling coin, i.e., \\[\\begin{cases}\\dot{\\theta}_{i}(t)=\\omega_{i}(t)\\\\ \\dot{x}_{i}(t)=[\\cos\\theta(t)\\;\\;\\sin\\theta(t)]^{\\top}v_{i}(t).\\end{cases}\\] The relative measurement is given from (21) as follows: \\[x_{j}^{[i]}(t)=\\begin{bmatrix}\\cos\\theta_{i}(t)&\\sin\\theta_{i}(t)\\\\ -\\sin\\theta_{i}(t)&\\cos\\theta_{i}(t)\\end{bmatrix}(x_{j}(t)-x_{i}(t)).\\] ## IV Problem Formulation The control objective is that the positions \\(x_{\\mathcal{N}}(t)\\) of the robots converge to a congruent shape with a desired configuration \\(x_{\\mathcal{N}}^{\\star}\\in(\\mathbb{R}^{d})^{n}\\). This objective is described as follows: \\[\\exists(\\Phi,\\tau) \\in\\text{SO}_{d}\\times\\mathbb{R}^{d}\\] (24) s.t. \\[\\lim_{t\\to\\infty}(x_{i}(t)-(\\Phi x_{i}^{\\star}+\\tau))=0\\;\\forall i \\in\\mathcal{N}\\] where \\(\\Phi\\in\\text{SO}_{d}\\) and \\(\\tau\\in\\mathbb{R}^{d}\\) correspond to the rotation and translation freedoms in the congruence. By using the set \\(\\mathcal{X}^{\\star}\\) in (11), (24) is represented as follows: \\[\\lim_{t\\to\\infty}\\text{dist}(x_{\\mathcal{N}}(t),\\,\\mathcal{X}^{\\star})=0. \\tag{25}\\] The control objective is stated in a more formal way as follows. A closed set \\(\\mathcal{X}^{\\star}\\in(\\mathbb{R}^{d})^{n}\\) is said to be an _equilibrium set_ of (20) if the following holds in (20): \\[x_{\\mathcal{N}}(t)\\in\\mathcal{X}^{\\star}\\Rightarrow(\\dot{R}_{\\mathcal{N}}(t),\\,\\dot{x}_{\\mathcal{N}}(t))=(0,0). 
\\tag{26}\\] An equilibrium set \\(\\mathcal{X}^{\\star}\\) of (20) is said to be _attractive_ if there exists an open set \\(\\mathcal{A}\\supset\\mathcal{X}^{\\star}\\), such that the state \\((R_{\\mathcal{N}}(t),\\,x_{\\mathcal{N}}(t))\\) from every initial state \\((R_{\\mathcal{N}}(0),\\,x_{\\mathcal{N}}(0))\\in(\\text{SO}_{d})^{n}\\times\\mathcal{A}\\) satisfies (25). Moreover, \\(\\mathcal{X}^{\\star}\\) is said to be _globally attractive_ if this holds for \\(\\mathcal{A}=(\\mathbb{R}^{d})^{n}\\). We now aim to design a controller for system (20) satisfying the following requirements: (R1) the controller is distributed, relative over \\(G\\); (R2) the set \\(\\mathcal{X}^{\\star}\\) in (11) is an equilibrium set; and (R3) the set \\(\\mathcal{X}^{\\star}\\) in (11) is attractive. Note that because the existence of such a controller depends on the topology of graph \\(G\\), we have to specify a condition on graph \\(G\\). Furthermore, even if a graph \\(G\\) does not satisfy the specified condition, the designed controller has to perform as well as possible. In summary, the main problem tackled in this article is given as follows. **Problem 1**: _For a given graph \\(G\\) and a desired configuration \\(x_{\\mathcal{N}}^{\\star}\\in(\\mathbb{R}^{d})^{n}\\), specify a condition on graph \\(G\\), such that a controller satisfying (R1), (R2), and (R3) exists. Under the condition, design a controller satisfying (R1), (R2), and (R3). Furthermore, even if the condition is not satisfied, the best performance of the designed controller is guaranteed._

## V Gradient-Based Method

As a preliminary, a gradient-based method is developed for the kinematic model (20) with the nonholonomic constraint. The developed method is applied to formation control in Section VI, although it is applicable to many other tasks. Let \\(V:(\\mathbb{R}^{d})^{n}\\to\\mathbb{R}_{+}\\) be an objective function of \\(x_{\\mathcal{N}}\\), taking the minimum zero when \\(x_{\\mathcal{N}}\\) is at a desired point. Because system (20) is not just an integrator but includes the nonholonomic constraint, the gradient flow \\(\\dot{x}_{i}(t)=-\\nabla_{i}V(x_{\\mathcal{N}}(t))\\) cannot be realized. From (20), the direction of the velocity is restricted to \\(R_{i}(t)b_{i}\\). Thus, we consider decomposing the gradient flow into the subspace spanned by \\(R_{i}(t)b_{i}\\) and its complement space by projections in the robot coordinate frame \\(\\Sigma_{i}\\). The elements in the two spaces are used to move the robot according to \\[\\begin{cases}2\\dot{R}_{i}(t)b_{i}=-\\kappa_{R}\\text{proj}_{R_{i}(t)b_{i}}^{\\perp}\\nabla_{i}V(x_{\\mathcal{N}}(t))\\\\ \\dot{x}_{i}(t)=-\\kappa_{x}\\text{proj}_{R_{i}(t)b_{i}}\\nabla_{i}V(x_{\\mathcal{N}}(t))\\end{cases} \\tag{27}\\] where \\(\\kappa_{R},\\,\\kappa_{x}>0\\) are gains to adjust the convergence rates of rotation and position, respectively. See Fig. 2 for the illustration of the decomposition in \\(d=2\\)-D space (Example 1). The two elements sum up to \\[\\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(\\frac{x_{i}(t)}{\\kappa_{x}}+\\frac{2R_{i}(t)b_{i}}{\\kappa_{R}}\\right)=-\\nabla_{i}V(x_{\\mathcal{N}}(t)) \\tag{28}\\] which implies that the forward point \\(x_{i}(t)/\\kappa_{x}+2R_{i}(t)b_{i}/\\kappa_{R}\\) of the robot follows the gradient flow. Consequently, \\(x_{i}(t)\\) is expected to move until \\(\\nabla_{i}V(x_{\\mathcal{N}}(t))\\) converges to zero.
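As a quick numerical illustration of this decomposition (a sketch with made-up numbers, not taken from the article), the two projected components in (27) can be formed directly from the projections in (1) and indeed sum to the full gradient flow as in (28).

```python
import numpy as np

# Decomposition (27) of the gradient flow for one robot in d = 2.
# h = R_i b_i is the unit heading vector; g stands in for grad_i V(x_N).
kappa_R, kappa_x = 10.0, 5.0
theta = np.deg2rad(30.0)
R_i = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
b_i = np.array([1.0, 0.0])
h = R_i @ b_i
g = np.array([0.8, -0.3])          # illustrative gradient value

proj_h = np.outer(h, h)            # proj_{R_i b_i}, cf. (1)
proj_h_perp = np.eye(2) - proj_h   # proj_{R_i b_i}^perp

x_dot = -kappa_x * proj_h @ g              # translational part of (27)
heading_rate = -kappa_R * proj_h_perp @ g  # equals 2 d/dt (R_i b_i) in (27)

# The weighted sum recovers the full (negative) gradient, cf. (28):
print(np.allclose(x_dot / kappa_x + heading_rate / kappa_R, -g))   # True
```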
Fig. 1: Global and robot coordinate frames in 2-D space.

Fig. 2: Decomposition of the gradient flow.

To generate the velocities in (27) for the kinematic model (20), the control input \\((S_{i}(t),v_{i}(t))\\) is designed as follows: \\[\\left\\{\\begin{aligned} & S_{i}(t)=-\\kappa_{R}\\text{proj}_{\\text{Skew}_{d}}\\big{(}R_{i}^{\\top}(t)\\text{proj}_{R_{i}(t)b_{i}}^{\\perp}\\nabla_{i}V(x_{\\mathcal{N}}(t))b_{i}^{\\top}\\big{)}\\\\ & v_{i}(t)=-\\kappa_{x}b_{i}^{\\top}R_{i}^{\\top}(t)\\nabla_{i}V(x_{\\mathcal{N}}(t)).\\end{aligned}\\right. \\tag{29}\\] Equation (29) is generated by projecting the gradient flow in (27) onto the matrix set \\(\\text{Skew}_{d}\\) and the vector space spanned by \\(b_{i}\\). Actually, the following is obtained. **Lemma 4**: _Consider a continuously differentiable function \\(V:(\\mathbb{R}^{d})^{n}\\to\\mathbb{R}_{+}\\) whose minimum is zero. Let \\((R_{i}(t),x_{i}(t))\\) be the solution of the kinematic model (20) under the control input \\((S_{i}(t),v_{i}(t))\\in\\text{Skew}_{d}\\times\\mathbb{R}\\) in (29) for \\(\\kappa_{R},\\kappa_{x}>0\\). Then, the following statements hold._ 1. _Equations (27) and (28) hold._ 2. \\(V(x_{\\mathcal{N}}(t))\\) _is monotonically nonincreasing, that is,_ \\[\\dot{V}(x_{\\mathcal{N}}(t))\\leq 0. \\tag{30}\\] 3. _The zero set_ \\((\\nabla_{\\mathcal{N}}V)^{-1}(0)\\) _of the gradient is an equilibrium set._ 4. _If_ \\(x_{i}(t)\\in\\mathbb{R}^{d}\\) _is bounded for each_ \\(i\\in\\mathcal{N}\\)_,_ \\((\\nabla_{\\mathcal{N}}V)^{-1}(0)\\) _is globally attractive._ 5. _In addition, if_ \\(V\\) _is real-analytic, the zero set_ \\(V^{-1}(0)\\) _is attractive._ The proof of Lemma 4 is briefly explained here; see Appendix B for the complete proof. Under the control input (29), the monotonic nonincrease of \\(V(x_{\\mathcal{N}}(t))\\) in (30) is shown as 2). Then, LaSalle's invariance principle [39] guarantees that the state converges to the largest invariant set of \\(\\dot{V}(x_{\\mathcal{N}})=0\\), which is contained in \\((\\nabla_{\\mathcal{N}}V)^{-1}(0)\\), as shown in 4). Furthermore, \\((\\nabla_{\\mathcal{N}}V)^{-1}(0)\\) is locally equivalent to \\(V^{-1}(0)\\), as shown in 5), and the attractiveness of \\(V^{-1}(0)\\) is guaranteed.

## VI Main Result

### _Distributed Controller With the Best Performance_

As the main result of this article, a solution to Problem 1 is provided. The proofs in this section are given in Section VI-B. Following the gradient-based method developed in Section V, the gradient-based controller (29) is employed with an objective function \\(V\\) to satisfy requirements (R1), (R2), and (R3), given in Section IV. A function satisfying (R1) and (R2) always exists, e.g., \\(V=0\\). However, a function additionally satisfying (R3) does not always exist; its existence strongly depends on graph \\(G\\). For this reason, we take the following strategy to solve Problem 1. Let \\(\\mathcal{F}(G,x_{\\mathcal{N}}^{*})\\) be the set of the continuously differentiable functions taking minimum zero at \\(x_{\\mathcal{N}}^{*}\\), such that the gradient-based controller (29) satisfies (R1) and (R2). First, we find a function with _the best performance_ of all functions \\(V\\in\\mathcal{F}(G,x_{\\mathcal{N}}^{*})\\) in terms of the volume of \\(V^{-1}(0)\\backslash\\mathcal{X}^{*}\\), which is the undesired minimum set to achieve the control objective (25) with the gradient-based controller (29).
Thus, solving the following problem is expected: \\[\\min_{V\\in\\mathcal{F}(G,x_{\\mathcal{N}}^{*})}\\text{vol}(V^{-1}(0)\\backslash \\mathcal{X}^{*}) \\tag{31}\\] where \\(\\text{vol}(\\cdot)\\) denotes the volume of a set. Next, we consider a condition on graph \\(G\\), such that the best function satisfies (R3). First, we derive a solution to (31) for the latter part of Problem 1. The solution is of the form \\[V_{\\star}(x_{\\mathcal{N}})=\\sum_{k\\in\\text{\\rm{Cq}}}\\frac{1}{2}\\big{(}\\text{ dist}\\big{(}x_{\\mathcal{C}_{k}},\\,\\hat{\\mathcal{X}}_{k}^{*}\\big{)}\\big{)}^{2} \\tag{32}\\] where \\[\\hat{\\mathcal{X}}_{k}^{*}=\\{x_{\\mathcal{C}_{k}}\\in(\\mathbb{R}^{d} )^{|\\mathcal{C}_{k}|}:\\exists(\\Phi_{k},\\,\\tau_{k})\\in\\text{SO}_{d}\\times\\mathbb{ R}^{d}\\] \\[\\text{s.t.}\\,\\,x_{i}=\\Phi_{k}x_{i}^{*}+\\tau_{k}\\,\\,\\forall i\\in \\mathcal{C}_{k}\\} \\tag{33}\\] as shown in the following theorem. **Theorem 1**: _For graph \\(G\\) and a desired configuration \\(x_{\\mathcal{N}}^{*}\\in(\\mathbb{R}^{d})^{n}\\), consider the function \\(V_{\\star}\\) in (32). The following relation holds:_ \\[V_{\\star}^{-1}(0)\\subset V^{-1}(0)\\,\\,\\forall V\\in\\mathcal{F}\\big{(}G,x_{ \\mathcal{N}}^{*}\\big{)}. \\tag{34}\\] Theorem 1 guarantees the best performance of \\(V_{\\star}\\) in terms of (31), i.e., generates the least undesired minima, among all functions whose gradient-based controllers are distributed. Actually, if the volume of the set in (31) can be evaluated, (34) directly indicates that \\(V_{\\star}\\) is a solution to (31). Even if the volume cannot be evaluated, (34) is valid for performance evaluation. An explicit form of the designed controller is given as follows. **Theorem 2**: _The gradient-based controller (29) for the function \\(V_{\\star}\\) in (32) is of the form_ \\[\\left\\{\\begin{aligned} & S_{i}(t)=\\kappa_{R}\\text{proj}_{\\text{skew} _{d}}\\Big{(}\\text{proj}_{h_{i}}^{\\perp}g_{i}(x_{\\mathcal{N}_{i}}^{[i]}(t))b_{i} ^{\\top}\\Big{)}\\\\ & v_{i}(t)=\\kappa_{x}b_{i}^{\\top}g_{i}\\Big{(}x_{\\mathcal{N}_{i}}^{ [i]}(t)\\Big{)}\\end{aligned}\\right. \\tag{35}\\] _where the function \\(g_{i}:(\\mathbb{R}^{d})^{|\\mathcal{N}_{i}|}\\to\\mathbb{R}^{d}\\) is defined as follows:_ \\[g_{i}(x_{\\mathcal{N}_{i}})=\\sum_{k\\in\\text{\\rm{Cq}}_{\\text{k}}}\\Big{(}\\text{ \\rm{ave}}(x_{\\mathcal{C}_{k}})+\\hat{\\Phi}_{k}\\big{(}x_{\\mathcal{C}_{k}},\\,x_{ \\mathcal{C}_{k}}^{*}\\big{)}\\big{(}x_{i}^{*}-\\text{\\rm{ave}}\\big{(}x_{\\mathcal{C }_{k}}^{*}\\big{)}\\big{)}\\big{)} \\tag{36}\\] _with a function \\(\\hat{\\Phi}_{k}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}}^{*})\\in\\text{Proc}(x_{ \\mathcal{C}_{k}},x_{\\mathcal{C}}^{*})\\)._ Finally, the solution to the first part of Problem 1 is given in the following theorem. **Theorem 3**: _First, there exists a function \\(V\\), such that the gradient-based controller (29) satisfies (R1), (R2), and (R3) if and only if framework \\((G,x_{\\mathcal{N}}^{*})\\) is rigid. Second, \\(V=V_{\\star}\\) is such a function._ **Remark 1**: _Theorem 1 implies that cliques are the network structure for objective functions to ensure both the best performance and distributedness of the gradient-based controllers. 
In contrast, many conventional objective functions are edge-based, e.g., the objective function for distance-based formation control [29]_ \\[V_{\\text{conv}}(x_{\\mathcal{N}})=\\sum_{(i,j)\\in\\mathcal{E}}\\frac{1}{8}\\big{(} \\|x_{i}-x_{j}\\|^{2}-\\|x_{i}^{*}-x_{j}^{*}\\|^{2}\\big{)}^{2} \\tag{37}\\] _to guarantee the distributedness of the gradient-based controllers. The better convergence performance of the proposed method than the conventional one is shown by simulationsin Section VII. On the other hand, the control objective (25) inspires the objective function \\[V_{\\star}(x_{\\mathcal{N}})=\\frac{1}{2}\\text{dist}(x_{\\mathcal{N}}, \\,\\mathcal{X}_{\\star})^{2}.\\] However, its gradient-based controller is not distributed. **Remark 2**: _Although \\(\\hat{\\Phi}(x_{\\mathcal{C}_{\\mathcal{L}}},x_{\\mathcal{C}_{\\mathcal{L}}}^{*})\\in \\operatorname{Proc}(x_{\\mathcal{C}_{\\mathcal{L}}},x_{\\mathcal{C}_{\\mathcal{L} }}^{*})\\) in (36) is possibly not unique, any component in \\(\\operatorname{Proc}(x_{\\mathcal{C}_{\\mathcal{L}}},x_{\\mathcal{C}_{\\mathcal{L} }}^{*})\\) can be chosen and justified in the following way. Because \\(\\hat{\\Phi}(x_{\\mathcal{C}_{\\mathcal{L}}},x_{\\mathcal{C}_{\\mathcal{L}}}^{*})\\in \\operatorname{Proc}(x_{\\mathcal{C}_{\\mathcal{L}}},x_{\\mathcal{C}_{\\mathcal{L} }}^{*})\\) is unique almost everywhere [38], the Fillipov's solution [40] exists for the system (20) with the controller (35) and (36), equivalently, the gradient-based controller (29) of \\(V_{\\star}(x_{\\mathcal{N}})\\) in (32). Then, the nonsmooth version of LaSalle's invariance principle [40] is available for \\(V_{\\star}(x_{\\mathcal{N}})\\), and the same convergence properties as Lemma 4 are obtained._ ### _Proofs of Theorems_ The rest of this section is devoted to proving Theorems 1-3. The proofs of the lemmas are provided in the Appendix. _Proof of Theorem 1:_ As a preliminary, two sets are defined. First, let \\(\\mathcal{F}_{d}(G,x_{\\mathcal{N}}^{*})\\) be the set of the continuously differentiable functions taking minimum zero at \\(x_{\\mathcal{N}}^{*}\\), such that \\(\ abla_{i}V(x_{\\mathcal{N}})\\) depends only on \\(x_{\\mathcal{N}_{i}}\\) for each \\(i\\in\\mathcal{N}\\), and the following equation is satisfied: \\[\\mathcal{X}^{*}\\subset V^{-1}(0). \\tag{38}\\] Let \\(\\mathcal{F}_{rd}(G,x_{\\mathcal{N}}^{*})\\) be the set of the functions \\(V\\in\\mathcal{F}_{d}(G,x_{\\mathcal{N}}^{*})\\) additionally satisfying the condition that \\(R_{i}^{\\top}\ abla_{i}V(x_{\\mathcal{N}})\\) depends only on \\(x_{\\mathcal{N}_{i}}^{[i]}\\) for any \\(R_{i}\\in\\text{SO}_{d}\\) for each \\(i\\in\\mathcal{N}\\), where \\(x_{j}^{[i]}=R_{i}^{\\top}(x_{j}-x_{i})\\). Some properties of these sets are given as follows. **Lemma 5**: _Equation (34) holds for \\(\\mathcal{F}_{d}(G,x_{\\mathcal{N}}^{*})\\) instead of \\(\\mathcal{F}(G,x_{\\mathcal{N}}^{*})\\)._ See [41, Th. 2]. **Lemma 6**: _The following relation holds:_ \\[\\mathcal{F}\\big{(}G,x_{\\mathcal{N}}^{*}\\big{)}\\subset\\mathcal{F}_ {rd}\\big{(}G,x_{\\mathcal{N}}^{*}\\big{)}. \\tag{39}\\] See Appendix C. The relation \\(\\mathcal{F}_{rd}(G,x_{\\mathcal{N}}^{*})\\subset\\mathcal{F}_{d}(G,x_{\\mathcal{N }}^{*})\\) is obvious, and Lemmas 5 and 6 lead to (34). _Proof of Theorem 2:_ We show that (35) is equivalent to (29) for \\(V=V_{\\star}\\). 
From (3), (18), (32), (33), and (36), we obtain \\[\ abla_{i}V_{\\star}(x_{\\mathcal{N}}) =\ abla_{i}\\sum_{k\\in\\text{Clq}}\\frac{1}{2}\\big{(}\\text{dist} \\big{(}x_{\\mathcal{C}_{k}},\\,\\hat{\\mathcal{X}}_{k}^{*}\\big{)}\\big{)}^{2}\\] \\[=\\sum_{k\\in\\text{Clq}_{k}}\\frac{1}{2}\ abla_{i}\\sum_{j\\in \\mathcal{C}_{k}}\\Big{\\|}x_{j}-\\text{ave}(x_{\\mathcal{C}_{k}})\\] \\[\\qquad\\qquad\\qquad\\qquad-\\hat{\\Phi}_{k}\\big{(}x_{\\mathcal{C}_{k} },x_{\\mathcal{C}_{k}}^{*}\\big{)}\\big{(}x_{j}^{*}-\\text{ave}\\big{(}x_{\\mathcal{ C}_{k}}^{*}\\big{)}\\big{)}\\Big{\\|}^{2}\\] \\[=\\sum_{k\\in\\text{Clq}_{k}}\\big{(}x_{i}-\\text{ave}(x_{\\mathcal{C} _{k}})\\] \\[\\qquad\\qquad\\qquad-\\hat{\\Phi}_{k}\\big{(}x_{\\mathcal{C}_{k}},\\,x_{ \\mathcal{C}_{k}}^{*}\\big{)}\\big{(}x_{i}^{*}-\\text{ave}\\big{(}x_{\\mathcal{C}_{k }}^{*}\\big{)}\\big{)}\\big{)} \\tag{40}\\] where \\(\\hat{\\Phi}_{k}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\in\\operatorname{Proc }(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\). See Appendix F for how to derive the gradient in (40). From (40) and Lemma 3 \\[\ abla_{i}V_{\\star}\\Big{(}x_{\\mathcal{N}}^{[i]}\\Big{)} =\ abla_{i}V_{\\star}\\big{(}R_{i}^{\\top}\\big{(}x_{\\mathcal{N}}-x_ {i}\\mathbf{I}_{n}^{\\top}\\big{)}\\big{)}\\] \\[=R_{i}^{\\top}\ abla_{i}V_{\\star}(x_{\\mathcal{N}}) \\tag{41}\\] holds for \\(x_{j}^{[i]}=R_{i}^{\\top}(x_{j}-x_{i})\\). From (36), (40), and \\(x_{i}^{[i]}=0\\), we obtain \\[g_{i}\\big{(}x_{\\mathcal{N}_{i}}^{[i]}\\big{)}=-\ abla_{i}V_{\\star} \\Big{(}x_{\\mathcal{N}}^{[i]}\\Big{)}. \\tag{42}\\] From (41) and (42), we obtain \\[\\operatorname{proj}_{h_{h}}^{\\perp}g_{i}\\big{(}x_{\\mathcal{N}_{i }}^{[i]}\\big{)} =-\\operatorname{proj}_{h_{h}}^{\\perp}\ abla_{i}V_{\\star}\\Big{(}x_{ \\mathcal{N}}^{[i]}\\Big{)}=-\\operatorname{proj}_{h_{h}}^{\\perp}R_{i}^{\\top} \ abla_{i}V_{\\star}(x_{\\mathcal{N}})\\] \\[=-R_{i}^{\\top}\\operatorname{proj}_{h_{h},\ abla_{i}}^{\\perp}\ abla_ {i}V_{\\star}(x_{\\mathcal{N}_{i}}) \\tag{43}\\] where we use \\(\\operatorname{proj}_{h_{h}}^{\\perp}R_{i}^{\\top}=R_{i}^{\\top}\\operatorname{proj} _{h_{h}}^{\\perp}\\) from (1). From (43), (35) is reduced to (29) for \\(V=V_{\\star}\\). From (8), the right-hand side of (36) is a function of \\(x_{\\mathcal{N}}^{[i]}\\), and hence, the controller (35) is distributed, relative over \\(G\\) according to the definition (22). _Proof of the Necessity of Theorem 3(i):_ Assume that there exists a function \\(V\\), such that the gradient-based controller (29) satisfies (R1), (R2), and (R3), and we show that framework \\((G,x_{\\mathcal{N}}^{*})\\) is rigid. Because \\(\\mathcal{X}^{*}\\) is attractive from (R3), there exists an open set \\(\\mathcal{A}\\supset\\mathcal{X}^{*}\\), such that \\(x_{\\mathcal{N}}(t)\\) from every initial position \\(x_{\\mathcal{N}}(0)\\in\\mathcal{A}\\) satisfies (25). As a result \\[\\mathcal{A}\\cap V^{-1}(0)=\\mathcal{X}^{*} \\tag{44}\\] holds. Otherwise, there exists \\(\\xi_{\\mathcal{N}}\\in\\mathcal{A}\\cap V^{-1}(0)\\), such that \\(\\xi_{\\mathcal{N}}\ ot\\in\\mathcal{X}^{*}\\), because \\(\\mathcal{A}\\cap V^{-1}(0)\\supset\\mathcal{X}^{*}\\) from \\(\\mathcal{A}\\supset\\mathcal{X}^{*}\\) and (38). Then an equilibrium set, because Lemma 4(iii) guarantees that \\((\ abla_{\\mathcal{N}}V_{\\star})^{-1}(0)\\) is an equilibrium set. Finally, (R3), the attractiveness of \\(\\mathcal{X}^{*}\\), is shown after the following lemma, which is given to guarantee that the assumptions in Lemma 4(v) are satisfied. 
**Lemma 7**: _Consider the system (20) under the control input \\((S_{i}(t),v_{i}(t))\\) in (29) with \\(V=V_{\\star}\\) in (32). Then, \\(x_{i}(t)\\) is bounded for every \\(i\\in\\mathcal{N}\\)._ See Appendix D. Furthermore, \\(V_{\\star}\\) is a real-analytic function, because \\(\\hat{\\mathcal{X}}_{k}^{*}\\) in (33) is a real-analytic manifold [42]. Hence, from Lemma 4(v), \\(V_{\\star}^{-1}(0)\\) is attractive; that is, there exists an open set \\(\\mathcal{A}_{\\star}\\supset V_{\\star}^{-1}(0)\\), such that \\[x_{\\mathcal{N}}(0)\\in\\mathcal{A}_{\\star}\\Rightarrow\\lim_{t\\to\\infty}\\text{ dist}\\big{(}x_{\\mathcal{N}}(t),\\,V_{\\star}^{-1}(0)\\big{)}=0. \\tag{47}\\] From the assumption of the rigidity and Lemma 2, (9) and (13) hold for open sets \\(\\mathcal{O},\\tilde{\\mathcal{O}}\\supset\\mathcal{X}^{*}\\). Then, from (9), (13), and (45), we obtain \\[V_{\\star}^{-1}(0)\\cap\\tilde{\\mathcal{O}}=\\bigcap_{k\\in\\text{Cl}_{\\mathcal{Q} }}\\hat{\\mathcal{X}}_{k}^{*}\\cap\\tilde{\\mathcal{O}}=\\mathcal{X}^{*} \\tag{48}\\] where \\(\\tilde{\\mathcal{O}}=\\mathcal{O}\\cap\\tilde{\\mathcal{O}}\\). Let \\(\\mathcal{L}_{c}=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}:V_{\\star}(x_{ \\mathcal{N}})\\leq c\\}\\) be the sublevel set of \\(V_{\\star}\\) for \\(c\\in\\mathbb{R}_{+}\\). As shown in Appendix E, there exists \\(c_{1}>0\\) satisfying \\[x_{\\mathcal{N}}(0)\\in\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\ \\Rightarrow\\ x_{\\mathcal{N}}(t)\\in\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}} \\quad\\forall t\\geq 0 \\tag{49}\\] where denotes the interior of a set. From (47)-(49), for \\(x_{\\mathcal{N}}(0)\\in\\mathcal{A}_{\\star}\\cap L_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\) \\[\\lim_{t\\to\\infty}\\text{dist}(x_{\\mathcal{N}}(t),\\,\\mathcal{X}^{*}) =\\lim_{t\\to\\infty}\\text{dist}(x_{\\mathcal{N}}(t),\\,V_{\\star}^{-1} (0)\\cap\\tilde{\\mathcal{O}})\\] \\[=\\lim_{t\\to\\infty}\\text{dist}(x_{\\mathcal{N}}(t),\\,V_{\\star}^{-1 }(0))=0\\] is obtained. The set \\(\\mathcal{A}=\\mathcal{A}_{\\star}\\cap\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\) is open and satisfies \\(\\mathcal{A}\\supset\\mathcal{X}^{*}\\). Therefore, \\(\\mathcal{X}^{*}\\) is attractive. ## VII Simulation Results The effectiveness of the proposed method is demonstrated through simulations. The system of each robot is governed by the kinematic model (20). The control objective is given by (25) for \\(\\mathcal{X}^{*}\\) in (11). First, we consider \\(n=7\\) robots in \\(d=3\\)-D space with the desired configuration \\(x_{\\mathcal{N}}^{*}\\in\\mathbb{R}^{3\\times 7}\\) depicted in Fig. 3(a). The network topology is fixed and given as Fig. 3(a). We compare the following two controllers: (A) the proposed controller (35) with gains \\(\\kappa_{R}=10\\) and \\(\\kappa_{x}=5\\), which is derived from the gradient-based controller (29) with \\(V=V_{\\star}\\) in (32) and (B) the conventional controller for distance-based formation control, which is derived with \\(V=V_{\\text{conv}}\\) in (37), employed in [29]. To estimate the achievement of the control objective (25), the formation error \\(E(t)=\\text{dist}(x_{\\mathcal{N}}(t),\\,\\mathcal{X}^{*})\\) is evaluated. Fig. 3(b) depicts the time plots of \\(E(t)\\) of a typical simulation result by the solid and dashed lines for (A) the proposed and (B) the conventional controllers, respectively. It is observed that \\(E(t)\\) with (A) converges to almost zero (about \\(0.13\\)), whereas \\(E(t)\\) with (B) converges to a nonzero point (about \\(5.4\\)). 
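The error \\(E(t)\\) is itself a Procrustes-type quantity: by (3) and (11), \\(\\text{dist}(x_{\\mathcal{N}},\\,\\mathcal{X}^{*})\\) is the residual left after optimally rotating and translating the desired configuration onto the current one. A small self-contained NumPy sketch of its evaluation follows; the function name and the test data are illustrative, not taken from the article.

```python
import numpy as np

def formation_error(x, x_star):
    """E = dist(x_N, X^*) per (3) and (11): apply the Procrustes solution (15)-(16)
    to the whole team and return the remaining root-sum-square discrepancy.
    x, x_star: arrays of shape (n, d)."""
    xc = x - x.mean(axis=0)
    xs = x_star - x_star.mean(axis=0)
    P, _, Qt = np.linalg.svd(xs.T @ xc)
    Q = Qt.T
    D = np.eye(x.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(P @ Q))
    Phi = Q @ D @ P.T
    return np.sqrt(np.sum((xc - xs @ Phi.T) ** 2))

# Illustrative check: a rotated and translated copy of x_star has (numerically) zero error.
rng = np.random.default_rng(1)
x_star = rng.normal(size=(7, 3))
ang = 0.4
Phi = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                [np.sin(ang),  np.cos(ang), 0.0],
                [0.0,          0.0,         1.0]])
print(formation_error(x_star @ Phi.T + 2.0, x_star))   # ~0
```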
The trajectories \\(x_{\\mathcal{N}}(t)\\) of the robots from \\(t=0\\) to \\(100\\) s in this simulation with (A) are drawn by the dotted lines in Fig. 3(c), where the initial and terminal positions are described by the numbered circles and squares, respectively. Fig. 3(c) shows that the robots attain a congruent shape with the desired configuration given in Fig. 3(a). In contrast, the simulation result with (B), shown in Fig. 3(d), indicates that the robots do not attain the desired configuration. We carried out simulations for \\(100\\) different initial states in the same setting with (A) the proposed and (B) the conventional controllers. Fig. 4 shows the time plots of \\(E(t)\\) of the \\(100\\) simulations. It is observed that the formation error converges to zero in any case with the proposed method, while the error remains in many cases with the conventional one. This result demonstrates the better performance of (A) the proposed controller than (B) the conventional one, as indicated by Theorem 1. Second, we consider formation guidance with \\(n=36\\) robots in \\(d=2\\)-D space to evaluate the scalability and practicability of the proposed method. As a practical setting, we consider a time-varying graph \\(G(t)=(\\mathcal{N},\\,\\mathcal{E}(t))\\) with the edge set \\(\\mathcal{E}(t)=\\{[i,j]:\\|x_{i}(t)-x_{j}(t)\\|<1\\}\\), where the robots within a distance \\(1\\) are connected, and the additional control input \\(v_{\\text{ref}}=[1\\ 0]^{\\text{T}}\\) is added to a leader. Fig. 5 shows the simulation result, where the robots at \\(t=0,75,150\\), and \\(225\\) are described in cyan, green, red, and purple, respectively, with edges (solid lines) and trajectories (dotted lines). It is seen that the robots form a formation about \\(t=75\\), face in the direction of \\(v_{\\text{ref}}\\), and move toward the direction afterward. This result shows that the proposed method is effective in the practical situation. Fig. 3: Simulation results. (a) Desired configuration and edges. (b) Time plots of \\(E(t)\\). (c) Trajectories with the proposed method. (d) Those with the conventional one. ## VIII Experimental Result An experiment was carried out with the proposed method in \\(d=2\\)-S space using the mobile robots, the Turtlebot 3 Burgers shown in Fig. 6(a). The size of the robot is 138 mm in length, 178 mm in width, 192 mm in height, and the weight is 1.0 kg. The robot is equipped with a single-board computer (the Raspberry Pi 3) and a 2-D LiDAR (the 360 Laser Distance Sensor LDS-01). See [43] for detailed specifications. Each robot is controlled through the single-board computer according to angular velocity and speed commands. The commands are computed as \\(\\omega_{i}(t)\\in\\mathbb{R}\\) in (23) and \\(v_{i}(t)\\in\\mathbb{R}\\) according to the distributed, relative controller (35) with some gains. In addition, collision avoidance is realized through the gradient-based method by adding a repulsive function into the objective function. The positions \\(x_{j}^{[t]}(t)\\in\\mathbb{R}^{2}\\) of the neighbors \\(j\\in\\mathcal{N}_{i}\\) in \\(\\Sigma_{i}\\) are measured by the LiDAR. To make the correspondence between measurements and robot IDs, robots are informed of the positions and the IDs of the neighbors at the initial time. When a neighbor cannot be detected due to occlusion, the measurement is complemented by network communication between the robots. First, \\(n=8\\) mobile robots are used with the desired positions of the robots and the edges shown in Fig. 6(b). 
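To make the in-robot computation concrete, the following is a schematic NumPy sketch, for \\(d=2\\), of how relative measurements \\(x_{j}^{[i]}\\) are turned into the commands \\(\\omega_{i}\\) and \\(v_{i}\\) via (36), (35), and (23). The clique list, gains, and measurement values are made up for illustration, and the actual software stack (LiDAR processing, communication) is not shown in the article.

```python
import numpy as np

def proc_rotation(x, x_star):
    # Phi_hat in Proc(x_C, x_C^*) via (15) and (17), for one clique (rows = robots).
    xc = x - x.mean(axis=0)
    xs = x_star - x_star.mean(axis=0)
    P, _, Qt = np.linalg.svd(xs.T @ xc)
    Q = Qt.T
    D = np.eye(x.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(P @ Q))
    return Q @ D @ P.T

def commands_2d(i, cliques, x_rel, x_star, kappa_R=1.0, kappa_x=1.0):
    """Angular velocity omega_i and speed v_i of robot i from (35)/(36) with (23).

    cliques : maximal cliques C_k containing robot i (lists of robot ids)
    x_rel   : dict id -> measured position x_j^{[i]} in frame Sigma_i (x_rel[i] = [0, 0])
    x_star  : dict id -> desired position x_j^* of the desired configuration
    """
    b = np.array([1.0, 0.0])           # heading direction b_i in Sigma_i
    g = np.zeros(2)
    for C in cliques:                   # sum over k in Clq_i, cf. (36)
        xC = np.array([x_rel[j] for j in C])
        xCs = np.array([x_star[j] for j in C])
        Phi = proc_rotation(xC, xCs)
        g += xC.mean(axis=0) + Phi @ (x_star[i] - xCs.mean(axis=0))
    v_i = kappa_x * b @ g               # v_i = kappa_x b_i^T g_i, cf. (35)
    # In 2-D, S_i = kappa_R proj_Skew(proj_b^perp g_i b^T) has the form (23),
    # whose off-diagonal entry gives omega_i = kappa_R * g[1] / 2.
    omega_i = kappa_R * 0.5 * g[1]
    return omega_i, v_i

# Tiny illustrative call: robot 1 belonging to one maximal clique {1, 2, 3}.
x_star = {1: np.array([0.0, 0.0]), 2: np.array([1.0, 0.0]), 3: np.array([0.0, 1.0])}
x_rel = {1: np.array([0.0, 0.0]), 2: np.array([1.2, 0.1]), 3: np.array([-0.1, 0.9])}
print(commands_2d(1, [[1, 2, 3]], x_rel, x_star))
```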
The trajectories of the robots in the experiment are drawn in Fig. 7 from time \\(t=0\\) to 70 s, where the green (red) polygons describe the initial (terminal) positions and orientations of the robots. The successive photographs of the robots during the experiment are shown in Fig. 8. It is observed that the robots finally formed a congruent shape with the desired configuration in Fig. 6(b). Second, formation guidance is conducted for \\(n=6\\) mobile robots with the additional control input \\(v_{\\text{ref}}=[1\\ 0]^{\\top}\\) to a leader. Fig. 9 gives the snapshots of the experiment, showing that the robots form a formation and face in almost the same direction at \\(t=120\\) s. These results demonstrate the effectiveness of the developed controller in real-world settings.

Fig. 4: Time plots of \\(E(t)\\) of 100 simulations from different initial states with (a) proposed method and (b) conventional one.

Fig. 5: Simulation of formation guidance.

Fig. 6: Experimental setup. (a) Photographs of robots. (b) Desired configuration (numbered squares) and edges (solid lines).

Fig. 7: Trajectories \\(x_{\\mathcal{N}}(t)\\) of the robots in the experiment from \\(t=0\\) to 70 s with the initial (terminal) positions and orientations of the robots by green (red) polygons.

Fig. 8: Successive photographs captured during the experiment. (a) \\(t=0\\) s. (b) \\(t=10\\) s. (c) \\(t=20\\) s. (d) \\(t=30\\) s. (e) \\(t=40\\) s. (f) \\(t=70\\) s.

Fig. 9: Successive photographs captured during the experiment of formation guidance. (a) \\(t=0\\) s. (b) \\(t=10\\) s. (c) \\(t=20\\) s. (d) \\(t=40\\) s. (e) \\(t=70\\) s. (f) \\(t=120\\) s.

## IX Conclusion

This article addressed a formation control problem for multirobot systems with nonholonomic constraints under measurements in robot coordinate frames. First, by using the Lie group theory on the special Euclidean group, the nonholonomic constraint and measurement were introduced in the robot coordinate frame of each robot. Moreover, the control space was defined as a subspace of the tangent space of the special Euclidean group under the nonholonomic constraint. Next, a gradient-based method was developed by using the projection of the gradient flow onto the control space; this method is widely applicable beyond formation control. By employing the clique-based objective function in the gradient-based method, the best distributed formation controller using only relative measurements was designed in terms of minimizing the undesired minimum set. The proposed method is valid regardless of the dimension of the space, which was shown by the simulations in 3-D space and an experiment with mobile robots in 2-D space. The theory is valid for any dimension, but a 3-D experiment cannot be conducted at present for hardware reasons: sensors that cover 360\\({}^{\\circ}\\) in 3-D space are not easily mounted on drones, and such a device must be developed before the method can be applied to drones. In this work, the orientations of the robots are not considered in the control objective (25). As for orientation control, it can be achieved separately from positional formation control by several methods. First, we can control orientation after the formation is achieved; there are many papers dealing with orientation control (e.g., [44, 45]). Second, by moving the robots along appropriate trajectories, the robots naturally face in the same direction, as shown by the simulation and experiment.
In future work, formation control involving both the positions and orientations has to be investigated more rigorously. The proposed method was applied to a time-varying graph in the simulation although the convergence is not theoretically guaranteed. A theoretical convergence condition of time-varying graphs has not been derived. Declaring such a condition is important for future work. This method guarantees the best performance of the steady state in terms of (31), i.e., the state converges to a point as close to the desired points as possible, but does not consider the performance of the transit state. There is a possibility to obtain the best steady and transit state performances simultaneously by combining with other methods, e.g., using the proposed objective function as the cost function of the model predictive control. This is important for future work. ## Appendix A Proof of Lemma 2 Because the subgraph induced by clique \\(\\mathcal{C}_{k}\\) is complete, the vectors \\(x_{\\mathcal{C}_{k}}\\) satisfying \\(\\|x_{i}-x_{j}\\|=\\|x_{i}^{*}-x_{j}^{*}\\|,i,j\\in\\mathcal{C}_{k}\\) are congruent to \\(x_{\\mathcal{C}_{k}}^{*}\\). From this fact and (10) \\[\\bigcap_{\\{i,j\\}\\in\\mathcal{E},i,j\\in\\mathcal{C}_{k}}\\tilde{ \\mathcal{X}}_{ij}^{*} =\\bigcap_{i,j\\in\\mathcal{C}_{k}}\\tilde{\\mathcal{X}}_{ij}^{*}\\] \\[=\\{x_{X}\\in(\\mathbb{R}^{d})^{n}:\\|x_{i}-x_{j}\\|\\] \\[=\\|x_{i}^{*}-x_{j}^{*}\\|,i,\\,j\\in\\mathcal{C}_{k}\\}\\] \\[=\\{x_{X}\\in(\\mathbb{R}^{d})^{n}:\\exists(\\Phi_{k},\\,\\tau_{k})\\in \\mathrm{O}_{d}\\times\\mathbb{R}^{d}\\] \\[\\text{s.t. }x_{i} =\\Phi_{k}x_{i}^{*}+\\tau_{k}\\ \\forall i\\in\\mathcal{C}_{k}\\} \\tag{50}\\] is obtained, where \\(\\mathrm{O}_{d}\\subset\\mathbb{R}^{d\\times d}\\) is the set of the orthogonal matrices of dimension \\(d\\). Because \\(\\mathrm{SO}_{d}\\) and \\(\\mathrm{O}_{d}\\backslash\\mathrm{SO}_{d}\\) are the disconnected components of \\(\\mathrm{O}_{d}\\), and \\(\\mathrm{O}_{d}\\) is bounded, from (12) and (50), there exists an open set \\(\\tilde{\\mathcal{O}}_{k}\\supset\\mathcal{X}^{*}\\), such that \\[\\bigcap_{\\{i,j\\}\\in\\mathcal{E},i,j\\in\\mathcal{C}_{k}}\\tilde{ \\mathcal{X}}_{ij}^{*}\\cap\\tilde{\\mathcal{O}}_{k}=\\tilde{\\mathcal{X}}_{k}^{*}. \\tag{51}\\] Taking the intersection of the sets in (51) for all \\(k\\in\\mathrm{Clq}\\), the following is obtained: \\[\\bigcap_{k\\in\\mathrm{Clq}}\\tilde{\\mathcal{X}}_{k}^{*} =\\bigcap_{k\\in\\mathrm{Clq}}\\left(\\bigcap_{\\{i,j\\}\\in\\mathcal{E},i,j\\in\\mathcal{C}_{k}}\\tilde{\\mathcal{X}}_{ij}^{*}\\cap\\tilde{\\mathcal{O}}_{k}\\right)\\] \\[=\\bigcap_{\\{i,j\\}\\in\\mathcal{E}}\\tilde{\\mathcal{X}}_{ij}^{*}\\cap \\bigcap_{k\\in\\mathrm{Clq}}\\tilde{\\mathcal{O}}_{k}\\] which leads to (13) for \\(\\tilde{\\mathcal{O}}=\\bigcap_{k\\in\\mathrm{Clq}}\\tilde{\\mathcal{O}}_{k}\\supset \\mathcal{X}^{*}\\). ## Appendix B Proof of Lemma 4 We show items 1)-5) in order. 1. From (2), (20), and (29) \\[2\\dot{R}_{i}(t)b_{i}\\] \\[=2R_{i}(t)S_{i}(t)b_{i}\\] \\[=-2\\kappa_{R}R_{i}(t)\\mathrm{proj}^{\\perp}_{k_{i}(t)b_{i}}\ abla _{i}V(x_{X}(t))b_{i}^{\\top})b_{i}\\] \\[=-\\kappa_{R}R_{i}(t)\\big{(}R_{i}^{\\top}(t)\\mathrm{proj}^{\\perp}_{k _{i}(t)b_{i}}\ abla_{i}V(x_{X}(t))b_{i}^{\\top})b_{i}\\] \\[\\quad+\\kappa_{R}R_{i}(t)\\big{(}R_{i}^{\\top}(t)\\mathrm{proj}^{\\perp }_{k_{i}(t)b_{i}}\ abla_{i}V(x_{X}(t))b_{i}^{\\top}\\big{)}^{\\top}b_{i}\\] \\[=-\\kappa_{R}\\mathrm{proj}^{\\perp}_{k_{i}(t)b_{i}}\ abla_{i}V(x_{ X}(t))\\] is achieved, which leads to (27). 
Similarly, \\(\\dot{x}_{i}(t)\\) is reduced to (27). Equation (28) is obvious. 2. From (27), (30) is obtained as follows: \\[\\dot{V}(x_{X}(t)) =\\sum_{i\\in\\mathcal{N}}\\langle\ abla_{i}V(x_{\\mathcal{N}}(t)),\\dot {x}_{i}(t)\\rangle\\] \\[=-\\sum_{i\\in\\mathcal{N}}\\kappa_{s}\\|\\mathrm{proj}_{k_{i}(t)b_{i} }\ abla_{i}V(x_{X}(t))\\|^{2}\\leq 0.\\] 3. According to the definition of the equilibrium set in (26), \\((\ abla_{N}V)^{-1}(0)\\) is an equilibrium set of system (20) for the control input (29), because \\((S_{i}(t),v_{i}(t))=(0,0)\\) holds if \\(x_{X}(t)\\in(\ abla_{N}V)^{-1}(0)\\), that is, \\(\ abla_{N}V(x_{X}(t))=0\\). 4. The global attractiveness to \\((\ abla_{N}V)^{-1}(0)\\) is shown. Under the assumption of the boundedness of \\(x_{i}(t)\\), the state \\((R_{\\mathcal{N}}(t),x_{\\mathcal{N}}(t))\\) is bounded. From (30), LaSalles' invariance principle [39] guarantees that \\[\\lim_{t\\to\\infty}\\mathrm{dist}((R_{\\mathcal{N}}(t),x_{X}(t)),\\,\\hat{\\mathcal{Q} })=0\\] (52) where \\(\\hat{\\mathcal{Q}}\\subset(\\mathrm{SO}_{d})^{n}\\times(\\mathbb{R}^{d})^{n}\\) is the largest invariant set contained with the set \\[\\mathcal{Q}=\\{(R_{\\mathcal{N}},x_{\\mathcal{N}})\\in(\\mathrm{SO}_{d })^{n}\\times(\\mathbb{R}^{d})^{n}:\\] \\[\\mathrm{proj}_{k,b_{i}}\ abla_{i}V(x_{\\mathcal{N}})=0\\ \\forall i\\in \\mathcal{N}\\}.\\] (53) Assume that \\((R_{\\mathcal{N}}(t),x_{\\mathcal{N}}(t))\\in\\mathcal{Q}\\) holds for all \\(t\\geq 0\\), and \\[\\mathrm{proj}_{k_{i}(t)b_{i}}\ abla_{i}V(x_{\\mathcal{N}}(t))=0\\quad\\forall t \\geq 0,\\ i\\in\\mathcal{N}\\] (54) holds from (53), which is equivalent to \\[(R_{i}(t)b_{i},\ abla_{i}V(x_{\\mathcal{N}}(t)))=0\\quad\\forall t\\geq 0,\\ i\\in \\mathcal{N}.\\] (55) From (27) and (54), \\(\\dot{x}_{i}(t)=0\\) holds for all \\(t\\geq 0\\), and thus, \\(x_{i}(t)=\\xi_{i}\\) holds with some constant \\(\\xi_{i}\\in\\mathbb{R}^{d}\\) for each \\(i\\in\\mathcal{N}\\). Substitute \\(x_{i}(t)=\\xi_{i}\\) in (55) and differentiate the resultant equation with respect to \\(t\\), and from (27), we obtain \\[0 =\\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(2R_{i}(t)b_{i},\\,\ abla_{i}V( \\xi_{\\mathcal{N}})\\right)=\\left\\langle 2\\,\\dot{R}_{i}(t)b_{i},\\,\ abla_{i}V(\\xi_{\\mathcal{N}})\\right\\rangle\\] \\[=-\\Big{\\langle}\\kappa_{R}\\mathrm{proj}^{\\perp}_{k_{i}(t)b_{i}} \ abla_{i}V(\\xi_{\\mathcal{N}}),\\,\ abla_{i}V(\\xi_{\\mathcal{N}})\\Big{\\rangle}\\] \\[=-\\kappa_{R}\\Big{\\|}\\mathrm{proj}^{\\perp}_{k_{i}(t)b_{i}}\ abla_ {i}V(\\xi_{\\mathcal{N}})\\Big{\\|}^{2}.\\] From this equation and (53), \\(\ abla_{i}V(\\xi_{\\mathcal{N}})=0\\) holds for each \\(i\\in\\mathcal{N}\\) in the largest invariant set \\(\\hat{\\mathcal{Q}}\\subset\\mathcal{Q}\\), which yields \\(\\hat{\\mathcal{Q}}\\subset(\\text{SO}_{d})^{n}\\times(\ abla_{\\mathcal{N}}V)^{-1}(0)\\). From this inclusion and (52) \\[\\lim_{t\\to\\infty}\\text{dist}(x_{\\mathcal{N}}(t),(\ abla_{ \\mathcal{N}}V)^{-1}(0))\\] \\[=\\lim_{t\\to\\infty}\\text{dist}((R_{\\mathcal{N}}(t),\\,x_{\\mathcal{ N}}(t)),\\,(\\text{SO}_{d})^{n}\\times(\ abla_{\\mathcal{N}}V)^{-1}(0))\\] \\[\\leq\\lim_{t\\to\\infty}\\text{dist}((R_{\\mathcal{N}}(t),\\,x_{ \\mathcal{N}}(t)),\\,\\hat{\\mathcal{Q}})=0\\] is obtained, which implies that \\((\ abla_{\\mathcal{N}}V)^{-1}(0)\\) is globally attractive. 5. 
Under the condition of the real analyticity of \\(V\\), the following inequalities, called Lojasiewicz's inequalities [46, 47], hold: for a bounded open set \\(\\Omega\\subset(\\mathbb{R}^{d})^{n}\\), there exist \\(\\beta(\\Omega)>0\\), \\(\\theta(\\Omega)>0\\), and \\(\\rho(\\Omega)>0\\), such that
\\[V(x_{\\mathcal{N}})\\leq\\beta(\\Omega)\\Biggl(\\sum_{i\\in\\mathcal{N}}\\|\\nabla_{i}V(x_{\\mathcal{N}})\\|^{2}\\Biggr)^{\\theta(\\Omega)}\\quad\\forall x_{\\mathcal{N}}\\in\\Omega\\ \\text{s.t.}\\ V(x_{\\mathcal{N}})\\leq\\rho(\\Omega). \\tag{56}\\]
For a compact subset \\(\\hat{\\Omega}\\subset(\\mathbb{R}^{d})^{n}\\), there exist \\(\\hat{\\beta}(\\hat{\\Omega})>0\\) and \\(\\hat{\\theta}(\\hat{\\Omega})>0\\), such that
\\[V(x_{\\mathcal{N}})\\geq\\hat{\\beta}(\\hat{\\Omega})(\\mathrm{dist}(x_{\\mathcal{N}},\\,V^{-1}(0)))^{\\hat{\\theta}(\\hat{\\Omega})}\\quad\\forall x_{\\mathcal{N}}\\in\\hat{\\Omega}. \\tag{57}\\]
For some \\(\\xi_{\\mathcal{N}}\\in V^{-1}(0)\\), let \\(\\Omega(\\xi_{\\mathcal{N}})\\subset(\\mathbb{R}^{d})^{n}\\) be a bounded open set containing \\(\\xi_{\\mathcal{N}}\\), such that
\\[x_{\\mathcal{N}}(0)\\in\\Omega(\\xi_{\\mathcal{N}})\\Rightarrow x_{\\mathcal{N}}(t)\\in\\Omega(\\xi_{\\mathcal{N}}). \\tag{58}\\]
Such a set \\(\\Omega(\\xi_{\\mathcal{N}})\\) can always be constructed, because if \\(\\Omega(\\xi_{\\mathcal{N}})\\) does not satisfy (58), that is, there exist solutions \\(x_{\\mathcal{N}}(t)\\) which escape from \\(\\Omega(\\xi_{\\mathcal{N}})\\), we just expand \\(\\Omega(\\xi_{\\mathcal{N}})\\) to contain the trajectories of all such \\(x_{\\mathcal{N}}(t)\\) so that (58) is satisfied. The expanded \\(\\Omega(\\xi_{\\mathcal{N}})\\) is still bounded from the assumption on the boundedness of \\(x_{\\mathcal{N}}(t)\\). For a constant \\(c\\in\\mathbb{R}_{+}\\), let \\(\\mathcal{L}_{c}=\\{x_{\\mathcal{N}}\\in(\\mathbb{R}^{d})^{n}:V(x_{\\mathcal{N}})\\leq c\\}\\) be the sublevel set of \\(V\\). Then, \\(\\Omega(\\xi_{\\mathcal{N}})\\cap\\mathcal{L}_{c}\\) is nonempty for any \\(c>0\\), because \\(\\xi_{\\mathcal{N}}\\in V^{-1}(0)\\cap\\Omega(\\xi_{\\mathcal{N}})\\). Consider an initial state \\(x_{\\mathcal{N}}(0)\\in\\Omega(\\xi_{\\mathcal{N}})\\cap\\mathcal{L}_{\\rho(\\Omega(\\xi_{\\mathcal{N}}))}\\); the solution then satisfies \\(x_{\\mathcal{N}}(t)\\in\\Omega(\\xi_{\\mathcal{N}})\\cap\\mathcal{L}_{\\rho(\\Omega(\\xi_{\\mathcal{N}}))}\\) for any \\(t\\in\\mathbb{R}_{+}\\) from (30) and (58). Then, from 3) and (56), \\(V(x_{\\mathcal{N}}(t))\\) converges to zero. By applying (57) with a compact set \\(\\hat{\\Omega}\\supset\\Omega(\\xi_{\\mathcal{N}})\\cap\\mathcal{L}_{\\rho(\\Omega(\\xi_{\\mathcal{N}}))}\\), the following equation is obtained:
\\[\\lim_{t\\to\\infty}\\mathrm{dist}(x_{\\mathcal{N}}(t),\\,V^{-1}(0))=0. \\tag{59}\\]
Let \\(\\mathcal{A}=\\bigcup_{\\xi_{\\mathcal{N}}\\in V^{-1}(0)}\\Omega(\\xi_{\\mathcal{N}})\\cap\\mathcal{L}_{\\rho(\\Omega(\\xi_{\\mathcal{N}}))}\\); then, \\(\\mathcal{A}\\) is an open set containing \\(V^{-1}(0)\\), such that all the solutions \\(x_{\\mathcal{N}}(t)\\) for the initial states \\(x_{\\mathcal{N}}(0)\\in\\mathcal{A}\\) satisfy (59). Hence, \\(V^{-1}(0)\\) is attractive.

## Appendix C Proof of Lemma 6

To show (39), consider a function \\(V\\in\\mathcal{F}(G,x_{\\mathcal{N}}^{*})\\).
Because \\(\\mathcal{X}^{*}\\) is an equilibrium set from the definition of \\(\\mathcal{F}(G,x_{\\mathcal{N}}^{*})\\), according to (26), \\((\\dot{x}_{i}(t),\\,\\dot{R}_{i}(t))=(0,0)\\) holds as long as \\(x_{\\mathcal{N}}(t)\\in\\mathcal{X}^{*}\\) for system (20) with the controller (29). This and (28), which holds from Lemma 4(i), imply that \\(\\nabla_{i}V(x_{\\mathcal{N}})=0\\). Hence, \\(V(x_{\\mathcal{N}})\\) is constant, and \\(V(x_{\\mathcal{N}})=0\\) holds for any \\(x_{\\mathcal{N}}\\in\\mathcal{X}^{*}\\) from \\(V(x_{\\mathcal{N}}^{*})=0\\), and thus, (38) holds. Because (29) is distributed and relative over \\(G\\), it is of the form (22) for some functions \\(U_{i}\\) and \\(u_{i}\\). From (20), (22), and (28)
\\[R_{i}^{\\top}(t)\\nabla_{i}V(x_{\\mathcal{N}}(t)) =-R_{i}^{\\top}(t)\\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(\\frac{x_{i}(t)}{\\kappa_{x}}+\\frac{2R_{i}(t)b_{i}}{\\kappa_{R}}\\right) =-R_{i}^{\\top}(t)\\left(\\frac{R_{i}(t)b_{i}v_{i}(t)}{\\kappa_{x}}+\\frac{2R_{i}(t)S_{i}(t)b_{i}}{\\kappa_{R}}\\right) =-\\frac{b_{i}u_{i}(x_{\\mathcal{N}_{i}}^{[i]}(t))}{\\kappa_{x}}-\\frac{2U_{i}(x_{\\mathcal{N}_{i}}^{[i]}(t))b_{i}}{\\kappa_{R}}\\]
is obtained, which implies that \\(R_{i}^{\\top}\\nabla_{i}V(x_{\\mathcal{N}})\\) depends only on \\(x_{\\mathcal{N}_{i}}^{[i]}\\). Therefore, \\(V\\in F_{rd}(G,x_{\\mathcal{N}}^{*})\\) holds, and (39) is derived.

## Appendix D Proof of Lemma 7

Without loss of generality, assume that \\(G\\) is connected. Otherwise, we can make the same discussion for each connected component of \\(G\\). From (28) and (42)
\\[\\frac{\\mathrm{d}}{\\mathrm{d}t}\\sum_{i\\in\\mathcal{N}}\\Biggl(\\frac{x_{i}(t)}{\\kappa_{x}}+\\frac{2R_{i}(t)b_{i}}{\\kappa_{R}}\\Biggr) =-\\sum_{i\\in\\mathcal{N}}\\nabla_{i}V_{*}(x_{\\mathcal{N}}) =-\\sum_{i\\in\\mathcal{N}}g_{i}(x_{\\mathcal{N}_{i}})=0\\]
holds, which indicates that \\(\\sum_{i\\in\\mathcal{N}}(x_{i}(t)+2R_{i}(t)b_{i})\\) is constant. Thus,
\\[\\left\\|\\sum_{i\\in\\mathcal{N}}\\frac{x_{i}(t)}{\\kappa_{x}}\\right\\| \\leq\\left\\|\\sum_{i\\in\\mathcal{N}}\\frac{2R_{i}(t)b_{i}}{\\kappa_{x}}\\right\\|+\\left\\|\\sum_{i\\in\\mathcal{N}}\\left(\\frac{x_{i}(t)}{\\kappa_{x}}+\\frac{2R_{i}(t)b_{i}}{\\kappa_{x}}\\right)\\right\\| \\leq 2\\sum_{i\\in\\mathcal{N}}\\left\\|\\frac{R_{i}(t)b_{i}}{\\kappa_{x}}\\right\\|+\\left\\|\\sum_{i\\in\\mathcal{N}}\\left(\\frac{x_{i}(0)}{\\kappa_{x}}+\\frac{2R_{i}(0)b_{i}}{\\kappa_{x}}\\right)\\right\\| =2n+\\left\\|\\sum_{i\\in\\mathcal{N}}\\left(\\frac{x_{i}(0)}{\\kappa_{x}}+\\frac{2R_{i}(0)b_{i}}{\\kappa_{x}}\\right)\\right\\|\\]
is obtained, implying that \\(\\|\\sum_{i\\in\\mathcal{N}}x_{i}(t)\\|\\) is bounded. From (30) and (32)
\\[\\big(\\mathrm{dist}(x_{\\mathcal{C}_{k}}(t),\\,\\hat{\\mathcal{X}}_{k}^{*})\\big)^{2}\\leq 2V_{*}(x_{\\mathcal{N}}(t))\\leq 2V_{*}(x_{\\mathcal{N}}(0)) \\tag{60}\\]
holds for any \\(t\\geq 0\\). From this and the definition of \\(\\hat{\\Phi}_{k}\\in\\mathrm{Proc}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\subset\\mathrm{SO}_{d}\\), (61) is obtained. From (60) and (61)
\\[\\|x_{i_{1}}(t)-x_{i_{2}}(t)\\|\\leq \\sqrt{V_{*}(x_{\\mathcal{N}}(0))}+\\|x_{i_{1}}^{*}-x_{i_{2}}^{*}\\| \\tag{62}\\]
holds for any \\(t\\geq 0\\). Because \\(G\\) is connected, any two nodes are connected through a path (sequence of edges) and are connected through a sequence of cliques. From this fact and (62), \\(\\|x_{i}(t)-x_{j}(t)\\|\\) is bounded for any nodes \\(i\\), \\(j\\in\\mathcal{N}\\).
From the boundedness of \\(\\|\\sum_{i\\in\\mathcal{N}}x_{i}(t)\\|\\) and \\(\\|x_{i}(t)-x_{j}(t)\\|\\), the boundedness of \\(x_{i}(t)\\) is shown as follows:
\\[\\|x_{i}(t)\\| \\leq\\left\\|x_{i}(t)-\\frac{1}{n}\\sum_{j\\in\\mathcal{N}}x_{j}(t)\\right\\|+\\left\\|\\frac{1}{n}\\sum_{j\\in\\mathcal{N}}x_{j}(t)\\right\\| \\leq\\frac{1}{n}\\sum_{j\\in\\mathcal{N}}\\left\\|x_{i}(t)-x_{j}(t)\\right\\|+\\frac{1}{n}\\left\\|\\sum_{j\\in\\mathcal{N}}x_{j}(t)\\right\\|.\\]

## Appendix E Proof of (49)

To show (49), assume that (49) does not hold for any \\(c_{1}>0\\), and we derive a contradiction. Because \\(\\mathcal{X}^{*}\\) in (11) is closed, the set \\(\\mathcal{L}_{0}\\cap\\tilde{\\mathcal{O}}\\) is closed from (48) and \\(\\mathcal{L}_{0}=V_{*}^{-1}(0)\\). From this fact, the openness of \\(\\tilde{\\mathcal{O}}\\), and the continuity of \\(V_{*}\\), there exists \\(c_{1}>0\\), such that \\(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\) is closed. From this fact and the inclusion \\(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\subset\\tilde{\\mathcal{O}}\\)
\\[\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\partial\\tilde{\\mathcal{O}}=\\emptyset \\tag{63}\\]
holds, where \\(\\partial\\) denotes the boundary of a set and \\(\\emptyset\\) represents the empty set. From the property of the boundary of the intersection, (63), and the openness of \\(\\tilde{\\mathcal{O}}\\)
\\[\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}) \\subset\\left(\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\tilde{\\mathcal{O}}\\right)\\cup\\left((\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\partial\\tilde{\\mathcal{O}}\\right)\\cup\\left(\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\partial\\tilde{\\mathcal{O}}\\right) =\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\tilde{\\mathcal{O}}\\]
is obtained. The inverse inclusion is obvious, and we obtain
\\[\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})=\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\cap\\tilde{\\mathcal{O}}. \\tag{64}\\]
From the assumption that (49) does not hold, there exists an initial state \\(x_{\\mathcal{N}}(0)\\in\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\), such that \\(x_{\\mathcal{N}}(t_{1})\\in\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\) holds with some time \\(t_{1}>0\\). Because \\(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\) is open, \\(x_{\\mathcal{N}}(t_{1})\\notin\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}}\\) holds. From Lemma 4(ii), \\(x_{\\mathcal{N}}(t_{1})\\in\\mathcal{L}_{c_{1}}\\) holds. Thus, \\(x_{\\mathcal{N}}(t_{1})\\notin\\tilde{\\mathcal{O}}\\) holds. The relations \\(x_{\\mathcal{N}}(t_{1})\\in\\partial(\\mathcal{L}_{c_{1}}\\cap\\tilde{\\mathcal{O}})\\) and \\(x_{\\mathcal{N}}(t_{1})\\notin\\tilde{\\mathcal{O}}\\) contradict (64).

## Appendix F Proof of (40)

Let
\\[f(x_{\\mathcal{C}_{k}}) =\\sum_{j\\in\\mathcal{C}_{k}}\\left\\|x_{j}-\\mathrm{ave}(x_{\\mathcal{C}_{k}})-\\hat{\\Phi}_{k}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\big(x_{j}^{*}-\\mathrm{ave}\\big(x_{\\mathcal{C}_{k}}^{*}\\big)\\big)\\right\\|^{2} \\tag{65}\\]
where \\(\\hat{\\Phi}_{k}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\in\\mathrm{Proc}(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*})\\).
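As a concrete aid, the following minimal numerical sketch computes the rotation-constrained Procrustes alignment \\(\\hat{\\Phi}_{k}\\) appearing in (65) for a single clique, assuming the SVD-based construction used below, i.e., the factorization in (17) together with the determinant correction \\(D\\) of (69); the function and variable names are only illustrative.

```python
import numpy as np

def clique_alignment(X, X_star):
    """Sketch: rotation-constrained Procrustes alignment Phi_hat for one clique.

    X and X_star are d x m matrices whose columns are the current and desired
    positions of the clique members. Following (17) and (69), we factorize
    X_star M (X M)^T = P S Q^T by the SVD and return Phi_hat = Q D P^T, where
    D = diag(1, ..., 1, det(PQ)) keeps Phi_hat in SO_d.
    """
    d, m = X.shape
    M = np.eye(m) - np.ones((m, m)) / m          # centering: removes ave(.)
    A = (X_star @ M) @ (X @ M).T                 # the matrix factorized in (17)
    P, _, Qt = np.linalg.svd(A)                  # A = P diag(S) Q^T
    Q = Qt.T
    D = np.eye(d)
    D[-1, -1] = np.sign(np.linalg.det(P @ Q))    # determinant correction of (69)
    return Q @ D @ P.T

# Quick check: if the clique is an exactly rotated and translated copy of the
# desired configuration, Phi_hat recovers that rotation.
rng = np.random.default_rng(0)
X_star = rng.standard_normal((3, 5))
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
X = R @ X_star + rng.standard_normal((3, 1))     # rotation plus translation
assert np.allclose(clique_alignment(X, X_star), R, atol=1e-6)
```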
To prove (40), we just need to show that
\\[\\nabla_{i}f(x_{\\mathcal{C}_{k}})=2\\big(x_{i}-\\mathrm{ave}(x_{\\mathcal{C}_{k}})-\\hat{\\Phi}_{k}\\big(x_{\\mathcal{C}_{k}},x_{\\mathcal{C}_{k}}^{*}\\big)\\big(x_{i}^{*}-\\mathrm{ave}\\big(x_{\\mathcal{C}_{k}}^{*}\\big)\\big)\\big) \\tag{66}\\]
holds. Let \\(X=x_{\\mathcal{C}_{k}}\\), \\(X^{*}=x_{\\mathcal{C}_{k}}^{*}\\in\\mathbb{R}^{d\\times m}\\), \\(m=|\\mathcal{C}_{k}|\\), and \\(M_{m}=I_{m}-\\mathbf{1}_{m}\\mathbf{1}_{m}^{\\top}/m\\), and remove the subscript \\(k\\); then, (65) is reduced to
\\[f(X)=\\|XM_{m}-\\hat{\\Phi}(X,X^{*})X^{*}M_{m}\\|_{F}^{2} \\tag{67}\\]
where \\(\\|\\cdot\\|_{F}\\) is the Frobenius norm of a matrix, and (66) for all \\(i\\) is reduced to
\\[\\frac{\\partial f}{\\partial X}(X)=2(XM_{m}-\\hat{\\Phi}(X,X^{*})X^{*}M_{m}). \\tag{68}\\]
The rest of the proof is devoted to showing (68). Let
\\[D=\\mathrm{diag}(\\overbrace{1,\\ldots,1}^{d-1},\\det(P\\,Q)). \\tag{69}\\]
From (15), (67) is reduced to
\\[f(X) =\\|XM_{m}-Q\\,D\\,P^{\\top}X^{*}M_{m}\\|_{F}^{2} =\\mathrm{tr}((XM_{m})^{\\top}XM_{m})-2\\mathrm{tr}((XM_{m})^{\\top}Q\\,D\\,P^{\\top}X^{*}M_{m})+\\mathrm{tr}((X^{*}M_{m})^{\\top}X^{*}M_{m}). \\tag{70}\\]
As shown below, the gradients of the terms in (70) are calculated as follows:
\\[\\frac{\\partial\\mathrm{tr}((XM_{m})^{\\top}XM_{m})}{\\partial X} =2XM_{m} \\tag{71}\\]
\\[\\frac{\\partial\\mathrm{tr}((XM_{m})^{\\top}Q\\,D\\,P^{\\top}X^{*}M_{m})}{\\partial X} =\\hat{\\Phi}(X,X^{*})X^{*}M_{m} \\tag{72}\\]
and (68) is achieved. Since (71) is obvious, just (72) is shown. Each component of the left-hand side of (72) is calculated as follows:
\\[\\frac{\\partial\\mathrm{tr}((XM_{m})^{\\top}Q\\,D\\,P^{\\top}X^{*}M_{m})}{\\partial x_{ij}} =\\mathrm{tr}\\bigg(M_{m}\\frac{\\partial X^{\\top}}{\\partial x_{ij}}Q\\,D\\,P^{\\top}X^{*}M_{m}\\bigg) +\\mathrm{tr}\\bigg((XM_{m})^{\\top}\\frac{\\partial\\,Q\\,D\\,P^{\\top}}{\\partial x_{ij}}X^{*}M_{m}\\bigg) \\tag{73}\\]
for \\(X=[x_{ij}]\\), where the matrices \\(P\\), \\(Q\\), and \\(D\\) depend on \\(X\\) as in (17) and (69). The first term of the right-hand side of (73) is reduced to
\\[\\mathrm{tr}\\bigg(M_{m}\\frac{\\partial X^{\\top}}{\\partial x_{ij}}Q\\,D\\,P^{\\top}X^{*}M_{m}\\bigg) =\\mathrm{tr}\\bigg(\\big(e_{di}\\,e_{mj}^{\\top}\\big)^{\\top}\\hat{\\Phi}(X,X^{*})X^{*}M_{m}\\bigg) =\\mathrm{tr}\\big(e_{di}^{\\top}\\hat{\\Phi}(X,X^{*})X^{*}M_{m}e_{mj}\\big)=e_{di}^{\\top}\\hat{\\Phi}(X,X^{*})X^{*}M_{m}e_{mj}\\]
from (15) and (69), where \\(e_{di}\\) is the \\(d\\)-dimensional unit vector whose \\(i\\)th component is one and \\(e_{mj}\\) is the \\(m\\)-dimensional unit vector whose \\(j\\)th component is one. The second term of the right-hand side of (73) is reduced to
\\[\\text{tr}\\bigg((XM_{m})^{\\top}\\frac{\\partial\\,Q\\,D\\,P^{\\top}}{\\partial x_{ij}}X^{*}M_{m}\\bigg) =\\text{tr}\\bigg(X^{*}M_{m}(XM_{m})^{\\top}\\frac{\\partial\\,Q\\,D\\,P^{\\top}}{\\partial x_{ij}}\\bigg) =\\text{tr}\\bigg(PSQ^{\\top}\\bigg(\\frac{\\partial\\,Q}{\\partial x_{ij}}D\\,P^{\\top}+Q\\frac{\\partial\\,D}{\\partial x_{ij}}P^{\\top}+Q\\,D\\frac{\\partial\\,P^{\\top}}{\\partial x_{ij}}\\bigg)\\bigg) =\\text{tr}\\bigg(Q^{\\top}\\frac{\\partial\\,Q}{\\partial x_{ij}}DS\\bigg)+\\text{tr}\\bigg(S\\frac{\\partial\\,D}{\\partial x_{ij}}\\bigg)+\\text{tr}\\bigg(SD\\frac{\\partial\\,P^{\\top}}{\\partial x_{ij}}P\\bigg) \\tag{74}\\]
from (17).
The partial derivative of the constraint of the orthogonal matrix \\(Q\\), say \\(Q^{\\top}Q=I_{d}\\), is derived as follows: \\[\\bigg{(}Q^{\\top}\\frac{\\partial\\,Q}{\\partial x_{ij}}\\bigg{)}^{\\top}+Q^{\\top} \\frac{\\partial\\,Q}{\\partial x_{ij}}=0.\\] Hence, \\(Q^{\\top}(\\partial\\,Q/\\partial x_{ij})\\) is skew-symmetric, and all the diagonal entries of this matrix are zero. Then, all the diagonal entries of the matrix \\(Q^{\\top}(\\partial\\,Q/\\partial x_{ij})DS\\) are zero, because \\(D\\) and \\(S\\) are diagonal. Hence, \\[\\text{tr}\\bigg{(}Q^{\\top}\\frac{\\partial\\,Q}{\\partial x_{ij}} \\bigg{)} =0 \\tag{75}\\] \\[\\text{tr}\\bigg{(}Q^{\\top}\\frac{\\partial\\,Q}{\\partial x_{ij}}DS \\bigg{)} =0 \\tag{76}\\] are achieved. In the same way \\[\\text{tr}\\bigg{(}SD\\frac{\\partial\\,P^{\\top}}{\\partial x_{ij}}P \\bigg{)}=0 \\tag{77}\\] is obtained. From (75) \\[\\frac{\\partial\\,\\det(Q)}{\\partial x_{ij}}=\\det(Q)\\text{tr}\\bigg{(}Q^{\\top} \\frac{\\partial\\,Q}{\\partial x_{ij}}\\bigg{)}=0\\] is derived. Similarly, \\(\\partial\\,\\det(P)/\\partial x_{ij}=0\\) is obtained. Then, \\[\\frac{\\partial\\,(\\det(P\\,Q))}{\\partial x_{ij}} =\\frac{\\partial\\,\\det(P)}{\\partial x_{ij}}\\,\\det(Q)+\\det(P\\,) \\frac{\\partial\\,\\det(Q)}{\\partial x_{ij}}\\] \\[=0 \\tag{78}\\] is derived. From (69) and (78), \\(\\partial\\,D/\\partial x_{ij}=0\\) is achieved. From this, (76), and (77), the right-hand side of (74) is zero. ## References * [1] R. N. Darmanin and M. K. Bugeja, \"A review on multi-robot systems categorised by application domain,\" in _Proc. 25th Mediterranean Conf. Control Automation_, 2017, pp. 701-706. * [2] J. K. Verma and V. Ranga, \"Multi-robot coordination analysis, taxonomy, challenges and future scope,\" _J. Intell. Robotic Syst._, vol. 102, no. 1, pp. 1-12, Apr. 2021. * [3] F. Bullo, J. Cortes, and S. Martinez, _Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms_. Princeton, NJ, USA: Princeton Univ. Press, 2009. * [4] K. Sakurama and T. Sugie, \"Generalized coordination of multi-robot systems,\" _Found. Trends Syst. Control_, vol. 9, no. 1, pp. 1-170, 2021. * [5] S. Martinez, J. Cortes, and F. Bullo, \"Motion coordination with distributed information,\" _IEEE Control Syst. Mag._, vol. 27, no. 2, pp. 75-88, Aug. 2007. * [6] R. Olfati-Saber and R. M. Murray, \"Consensus problems in networks of agents with switching topology and time-delays,\" _IEEE Trans. Autom. Control_, vol. 49, no. 9, pp. 1520-1533, Sep. 2004. * [7] R. Olfati-Saber, J. A. Fax, and R. M. Murray, \"Consensus and cooperation in networked multi-agent systems,\" _Proc. IEEE_, vol. 95, no. 1, pp. 215-233, Jan. 2007. * [8] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, \"Coverage control for mobile sensing networks,\" _IEEE Trans. Robot. Autom._, vol. 20, no. 2, pp. 243-255, Apr. 2004. * [9] Y. Igarashi, T. Hatanaka, M. Fujita, and M. W. Spong, \"Passivity-based attitude synchronization in \\(SE(3)\\),\" _IEEE Trans. Control Syst. Technol._, vol. 17, no. 5, pp. 1119-1134, Sep. 2009. * [10] W. Ren, \"Distributed cooperative attitude synchronization and tracking for multiple rigid bodies,\" _IEEE Trans. Control Syst. Technol._, vol. 18, no. 2, pp. 383-392, Mar. 2010. * [11] H. Ahn, _Formation Control_. Cham, Switzerland: Springer, 2020. * [12] J. A. Fax and R. M. Murray, \"Information flow and cooperative control of vehicle formations,\" _IEEE Trans. Autom. Control_, vol. 49, no. 9, pp. 1465-1476, Sep. 2004. * [13] K.-K. Oh, M.-C. 
Park, and H.-S. Ahn, \"A survey of multi-agent formation control,\" _Automatica_, vol. 53, pp. 424-440, Mar. 2015. * [14] B. D. Anderson, C. Yu, B. Fidan, and J. M. Hendrickx, \"Rigid graph control architectures for autonomous formations,\" _IEEE Control Syst. Mag._, vol. 28, no. 6, pp. 48-63, Dec. 2008. * [15] L. Krick, M. E. Broucke, and B. A. Francis, \"Stabilisation of infinitesimal-mally rigid formations of multi-robot networks,\" _Int. J. Control_, vol. 82, no. 3, pp. 423-439, Mar. 2009. * [16] F. Dorfler and B. Francis, \"Geometric analysis of the formation problem for autonomous robots,\" _IEEE Trans. Autom. Control_, vol. 55, no. 10, pp. 2379-2384, Oct. 2010. * [17] P. Lin and Y. Jia, \"Distributed rotating formation control of multi-agent systems,\" _Syst. Control Lett._, vol. 59, no. 10, pp. 587-595, Oct. 2010. * [18] Z. Sun and B. D. O. Anderson, \"Rigid formation control with prescribed orientation,\" in _Proc. IEEE Int. Symp. Intell. Control (ISIC)_, Sep. 2015, pp. 639-645. * [19] K.-C. Cao, G. Xiang, and H. Yang, \"Formation control of multiple nonholonomic mobile robots,\" in _Proc. Int. Conf. Inf. Sci. Technol._, Mar. 2011, pp. 1038-1042. * [20] D. V. Dimarogonas and K. J. Kyriakopoulos, \"A connection between formation infeasibility and velocity alignment in kinematic multi-agent systems,\" _Automatica_, vol. 44, no. 10, pp. 2648-2654, Oct. 2008. * [21] T. Liu and Z.-P. Jiang, \"Distributed formation control of nonholonomic mobile robots without global position measurements,\" _Automatica_, vol. 49, no. 2, pp. 592-600, 2013. * [22] X. Yu and L. Liu, \"Distributed formation control of nonholonomic vehicles subject to velocity constraints,\" _IEEE Trans. Ind. Electron._, vol. 63, no. 2, pp. 1289-1298, Feb. 2016. * [23] E. D. Ferreira-Vazquez, E. G. Hernandez-Martinez, J. J. Flores-Godoy, G. Fernandez-Anaya, and P. Paniagua-Contro, \"Distance-based formation control using angular information between robots,\" _J. Intell. Robotic Syst._, vol. 83, pp. 543-560, Sep. 2016. * [24] E. Montjano, E. Cristofalo, M. Schwager, and C. Sagues, \"Distributed formation control of non-holonomic robots without a global reference frame,\" in _Proc. IEEE Int. Conf. Robotics Automat._, Jul. 2016, pp. 1-15. * [25] P. Hernandez-Leon, J. Davila, S. Salazar, and X. Ping, \"Distance-based formation maneuvering of non-holonomic wheeled mobile robot multi-agent system,\" in _Proc. IFAC World Conf._, 2020, pp. 5739-5744. * [26] X. Li, C. Wen, and C. Chen, \"Adaptive formation control of networked robotic systems with bearing-only measurements,\" _IEEE Trans. Cybern._, vol. 51, no. 1, pp. 199-209, Jan. 2021. * [27] S. Y. Zhao and D. Zelazo, \"Bearing rigidity theory and its applications for control and estimation of network systems,\" _IEEE Control Syst. Mag._, vol. 39, no. 2, pp. 66-83, Apr. 2019. * [28] Y. Zhao, Y. Hao, Q. Wang, and Q. Wang, \"A rigid formation control approach for multi-agent systems with curvature constraints,\" _IEEE Trans. Circuits Syst. II, Exp. Briefs_, vol. 68, no. 11, pp. 3431-3435, Nov. 2021. * [29] S. Zhao, D. V. Dimarogonas, Z. Sun, and D. Bauso, \"A general approach to coordination control of mobile agents with motion constraints,\" _IEEE Trans. Autom. Control_, vol. 63, no. 5, pp. 1509-1516, May 2018. * [30] S. Zhao, \"Affine formation maneuver control of multiagent systems,\" _IEEE Trans. Autom. Control_, vol. 63, no. 12, pp. 4140-4155, Dec. 2018. * [31] K. Sakurama, \"Formation control of non-holonomic multi-agent systems under relative measurements,\" in _Proc. 
IFAC World Cong._, 2020, pp. 1-20. * [32] J. Lee, _Introduction to Topological Manifolds_, 2nd ed. Cham, Switzerland: Springer, 2010. * [33] T. A. McKee and F. R. McMorris, _Topics in Intersection Graph Theory_. Philadelphia, PA, USA: SIAM, 1999. * [34] K. Sakurama, \"Unified formulation of multiagent coordination with relative measurements,\" _IEEE Trans. Autom. Control_, vol. 66, no. 9, pp. 4101-4116, Sep. 2021. * [35] L. Asimow and B. Roth, \"The rigidity of graphs,\" _Trans. Amer. Math. Soc._, vol. 245, pp. 279-289, Aug. 1978. * [36] J. C. Gower and G. B. Dijksterhuis, _Procrustes Problems_. London, U.K.: Oxford Univ. Press, 2004. * [37] K. Kanatani, \"Analysis of 3-D rotation fitting,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 16, no. 5, pp. 543-549, May 1994. * [38] K. Sakurama, S.-I. Azuma, and T. Sugie, \"Multiagent coordination via distributed pattern matching,\" _IEEE Trans. Autom. Control_, vol. 64, no. 8, pp. 3210-3225, Aug. 2019. * [39] H. K. Khalil, _Nonlinear Systems_. Upper Saddle River, NJ, USA: Prentice-Hall, 2002. * [40] D. Shevitz and B. Paden, \"Lyapunov stability theory of nonsmooth systems,\" _IEEE Trans. Autom. Control_, vol. 39, no. 9, pp. 1910-1914, Sep. 1994. * [41] K. Sakurama, S.-I. Azuma, and T. Sugie, \"Distributed controllers for multi-agent coordination via gradient-flow approach,\" _IEEE Trans. Autom. Control_, vol. 60, no. 6, pp. 1471-1485, Jun. 2015. * [42] S. G. Krantz and H. R. Parks, _A Primer of Real Analytic Functions: Second Edition_. Cham, Switzerland: Springer, 2002. * [43] Open Source Robot. Found., Inc. (2021). _Turtlebot_. Accessed: Apr. 21, 2021. [Online]. Available: [https://www.turtlebot.com/](https://www.turtlebot.com/) * [44] R. Tron, B. Afsari, and R. Vida, \"Intrinsic consensus on SO(3) with almost-global convergence,\" in _Proc. 51st EEE Conf. Decis. Control_, Aug. 2012, pp. 2052-2058. * [45] J. Thunberg, W. Song, E. Montijano, Y. Hong, and X. Hu, \"Distributed attitude synchronization control of multi-agent systems with switching topologies,\" _Automatica_, vol. 50, no. 3, pp. 832-840, Mar. 2014. * [46] S. Lojasiewicz, _Ensembles Semi-analytiques_. Bures-sur-Yvette, France: Institut des Hautes Etudes Estonitiques, 1965. * [47] S. Lojasiewicz and M. Zurro, \"On the gradient inequality,\" _Bull. Polish Acad. Sci. Math._, vol. 47, no. 2, pp. 143-145, 1999. \\begin{tabular}{c c} & Kazunori Sakurama (Member, IEEE) received the bachelor's degree in engineering and the master's and Ph.D. degrees in informatics from Kyoto University, Kyoto, Japan, in 1999, 2001, and 2004, respectively. He was an Assistant Professor at The University of Electro-Communications, Tokyo, Japan, from 2004 to 2011, an Associate Professor at the Graduate School of Engineering, Tottori University, Tortori, Japan, from 2011 to 2018, and an Associate Professor at the Graduate School of Informatics, Kyoto University, from 2018 to 2024. He is currently a Professor at the Graduate School of Engineering Science, Osaka University, Osaka, Japan. His research interests include control of multirobot systems, networked systems, and nonlinear systems. Dr. Sakurama received the Control Division Pioneer Award in 2017 and the Control Division Kimura Research Award in 2022 from the Society of Instrument and Control Engineers (SICE). 
\\\\ \\end{tabular} \\begin{tabular}{c c} & Chunlai Peng received the bachelor's degree in engineering from Guangdong University of Technology, Guangzhou, China, in 2019, and the master's degree in informatics from Kyoto University, Kyoto, Japan, in 2021. His research interests include distributed control of multirobot systems and robotics. \\\\ \\end{tabular} \\begin{tabular}{c c} & Ryo Asai received the bachelor's degree in engineering and the master's degree in informatics from Kyoto University, Kyoto, Japan, in 2021 and 2023, respectively. His research interests include distributed control of multirobot systems and robotics. \\\\ \\end{tabular} \\begin{tabular}{c c} & Hirokazu Sakata received the bachelor's degree in engineering from Kyoto University, Kyoto, Japan, in 2023, where he is currently pursuing the master's degree with the Graduate School of Informatics. His research interests include distributed control of multirobot systems and robotics. \\\\ \\end{tabular} \\begin{tabular}{c c} & Mitsuhiro Yamazumi received the bachelor's, master's, and Ph.D. degrees in engineering from Tokyo Institute of Technology, Tokyo, Japan, in 2008, 2010, and 2013, respectively. Since 2013, he has been with the Advanced Technology Research and Development Center, Mitsubishi Electric Corporation, Amagasaki, Japan, where he is currently the Head Researcher with the Robotics Department. His research interests include the control of multiagent systems, system engineering of space robots, swarm robotics, and spatial intelligence. \\\\ \\end{tabular}
This article addresses a formation control problem for nonholonomic multirobot systems in robot coordinate frames. First, the nonholonomic constraint and the measurement in robot coordinate frames are modeled with the Lie group theory on the special Euclidean group \\(\\mathrm{SE}_{d}\\). The control space under the nonholonomic constraint is defined as a subspace of the tangent space of \\(\\mathrm{SE}_{d}\\), whereas the measurement in the robot coordinate frame is given as the group action of \\(\\mathrm{SE}_{d}\\). Then, a gradient-based method is developed by using the projection of the gradient flow of an objective function onto the control space. By using the method with a clique-based objective function rather than edge-based ones, the designed formation controller is distributed, uses only measurement information in robot coordinate frames, and achieves the best performance among gradient-based distributed controllers. The proposed method is valid regardless of the dimension of the space, and therefore, it is applicable not only to automatic guided vehicles (AGVs) but also to unmanned aerial vehicles (UAVs). Finally, the effectiveness of the method is demonstrated through simulations in 3-D space and an experiment with mobile indoor robots equipped with light detection and ranging (LiDAR) sensors. (c) 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License.
Randomized Histogram Matching: A Simple Augmentation for Unsupervised Domain Adaptation in Overhead Imagery Can Yaras \\({}^{\\text{\\textcircled{C}}}\\), Kaleb Kassaw \\({}^{\\text{\\textcircled{C}}}\\), Bohao Huang \\({}^{\\text{\\textcircled{C}}}\\), Kyle Bradbury \\({}^{\\text{\\textcircled{C}}}\\), Member, IEEE, and Jordan M. Malof \\({}^{\\text{\\textcircled{C}}}\\), Member, IEEE Manuscript received 4 August 2023; revised 9 November 2023; accepted 23 November 2023. Date of publication 7 December 2023; date of current version 28 December 2023. (_Corresponding author: Can Yaras_). Can Yaras is with the Department of Electrical and Computer Engineering, University of Michigan, Ann Arbor, MI 48109 USA (e-mail: [email protected]). Kaleb Kassaw and Bohao Huang are with the Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA (e-mail: [email protected]; [email protected]). Kyle Bradbury is with the Nicholas Institute for Energy, Environment, and Sustainability, Duke University, Durham, NC 27708 USA (e-mail: [email protected]). Jordan M. Malof is with the Department of Computer Science, University of Montana, Missouri, MT 59812 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/ISTARS.2023.3340412 ## I Introduction Modern deep neural networks (DNNs) can now achieve accurate recognition on a variety of tasks involving overhead imagery (e.g., satellite imagery, aerial photography), such as classification, object detection, and semantic segmentation [1, 2, 3]. One emergent limitation of DNNs in remote sensing, however, is their sensitivity to the statistics of their training imagery. Recent research has shown that DNNs often perform unpredictably, and often much more poorly, when they are applied to novel collections with respect to their training data [4, 5, 6, 7]. Furthermore, this performance degradation seems to occur even if DNNs are trained on relatively large and diverse datasets, encompassing large and diverse geographic regions [5, 7]. One cause of the performance degradation of DNNs on new sets of imagery involves visual domain shift (i.e., distribution shift); these are statistical differences between the training imagery and new collections of imagery [4, 5]. Fig. 1 presents images from different collections of imagery where the domain shift is readily visible. These domain shifts are caused by variations in a diverse set of factors that influence the appearance (i.e., statistics) of the overhead imagery including scene geography, the built environment (e.g., building and road styles), imaging hardware, weather, time-of-day, among others. Each of these factors influences the imagery in a manner that is generally complex and unknown in advance and, therefore, challenging to address. Fig. 1: Illustration of the domain shifts between different collections of overhead imagery. These are representative images from two cities from the Inria and DG datasets. Both Inria and DG serve as our experimental datasets in this work. One straightforward solution to these domain shifts is to label a subset of each new collection of imagery and then retrain the DNN; however, this solution is costly and time consuming [4, 5]. Instead, we would ideally have a model that performs well across many different collections of imagery and does so without the need for labels from each one. 
This setting is a special case of a broader problem in machine learning known as _unsupervised domain adaptation_, wherein it is assumed that we are given a \"source domain\" dataset with ground truth labels and that we aim to maximize recognition performance on one (or more) sets of unlabeled \"target domain\" data [8]. ### _Spectral Domain Shift and Adaptation_ The unsupervised domain adaptation problem has been studied extensively in recent years [9, 10], and has also recently received growing attention in the remote sensing community due to the aforementioned challenges of domain shifts [4, 8, 11, 12, 13]. Most recent domain adaptation approaches for overhead imagery attempt to address all sources of domain shift simultaneously. In this work, however, we attempt to simplify the problem by focusing on a subset of domain shifts that can be modeled as purely spectral (single-pixel) transformations. Mathematically, this is given by \\[p^{t}=T(p^{s}) \\tag{1}\\] where \\(p^{s}\\) and \\(p^{t}\\) are the source and target domain pixel intensities, respectively, of an otherwise identical scene. We hypothesize that domain shifts of this type arise from variations across imagery collections in several specific factors: e.g., camera specifications and calibration, time of day, and lighting conditions. Variations in these factors are likely to occur, to varying degrees, between almost any two collections of imagery so that domain shifts of the kind in (1) are common. Our experiments here suggest that this is not only the case, but that spectral domain shifts appear to be responsible for a significant proportion of the performance degradation of DNNs when applied to novel collections of imagery. ### _Contributions of This Work_ In this work, we begin by investigating whether spectral domain shifts of the kind in (1) can be addressed simply through data augmentation during training. We perform a systematic study where we train DNNs of varying capacity (i.e., number of parameters) with image augmentation comprising different classes of spectral augmentations (e.g., gamma, affine, etc). We then test the performance of these networks on collections of imagery that have been augmented with one of these same classes of spectral transformations. We find that modern DNNs with large encoders (e.g., ResNet-18, 50, 100 [14]) can become largely robust to several different classes of spectral transformations if provided with a _matching_ training augmentation strategy. In general, however, we do not know the transformation between any two collections of imagery, or even the class of transformations from which it may be drawn (e.g., affine, gamma), so it is unclear, which augmentation should be adopted. To overcome this problem, we propose a simple augmentation technique, termed _randomized histogram matching_ (RHM) that matches the histogram of each training image to a randomly-chosen (unlabeled) target domain image, as illustrated in Fig. 2. This approach results in a random spectral shift being applied to _each_ training image, and we hypothesize that this occasionally (by chance) approximates the true spectral shift between the source and target domains (see Section IV). Consequently, the training data are occasionally augmented (again, by chance) with the true spectral shift and can thereby become more robust to it. Since RHM only requires the unlabeled testing data, it can be viewed as a simple unsupervised domain adaptation approach. 
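As a simple illustration of the purely spectral shifts in (1) that motivate this work, the sketch below applies a single pixel-wise intensity mapping to an entire image; the particular mapping and its parameter values are only illustrative and are not the transformations used in our experiments.

```python
import numpy as np

def spectral_shift(image, gamma=1.4, gain=0.9, bias=12.0):
    """Apply a purely spectral (single-pixel) transformation p_t = T(p_s).

    T acts on each pixel intensity independently of its spatial location,
    emulating the kind of shift produced by, e.g., differing sensor
    calibration or lighting between two image collections. The parameter
    values here are arbitrary and only for demonstration.
    """
    x = image.astype(np.float64) / 255.0
    y = gain * np.power(x, gamma) + bias / 255.0
    return np.clip(y * 255.0, 0.0, 255.0).astype(np.uint8)

# The same mapping T is applied to every pixel of the source image, so the
# scene content is unchanged while its spectral statistics are shifted.
source = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
shifted = spectral_shift(source)
```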
To demonstrate the efficacy of RHM, we conduct benchmark testing with two large publicly-available datasets for building segmentation in two settings: 1) training on one collection and testing on one collection (one-to-one adaptation, following [13]), and 2) a more real-world scenario where we train on multiple domains and test on multiple domains (many-to-many, also following [13]). We focus on building segmentation because it is a challenging task that has received substantial attention in recent years, with large and diverse benchmark datasets to support our multidomain experiments (e.g., Inria [6] and DG [1]).

Fig. 2: Illustration of the RHM concept. To produce an augmented training dataset, we repeat the following process. For each image \\(I^{s}\\) in the training dataset (source domain), an image \\(I^{t}\\) is drawn randomly from the testing dataset (unlabeled target domain). Then, the histogram of \\(I^{s}\\) is matched to the histogram of \\(I^{t}\\), which yields the modified image \\(I^{m}\\). If the information loss \\(\\Delta H\\) (defined in Section IV-A) between \\(I^{m}\\) and \\(I^{s}\\) is below a set threshold \\(\\gamma\\), \\(I^{m}\\) is added to the augmented training dataset. Otherwise, another random image is drawn from the target domain and the process is repeated. This resampling is only performed at most once.

We now summarize our contributions as follows.

1. _Can augmentation confer spectral robustness in DNNs?_ We provide, to the best of the authors' knowledge, the first systematic empirical evidence that modern DNNs with large encoders are capable of becoming robust to complex spectral transformations via data augmentation during training. We show that the ability of DNNs to become robust depends upon their capacity, especially for more complex classes of transformations. We also find that DNNs only become robust to the precise class of spectral transforms that were used for augmentation, rather than becoming robust to generic spectral transforms. This suggests that augmentation is an effective mechanism to address spectral domain shifts, if a class of spectral transforms is used that includes the particular transforms that are often encountered in real-world imagery (e.g., between independent collections of imagery).
2. _RHM augmentation:_ RHM is a simple, yet highly effective unsupervised domain adaptation approach via spectral augmentation. We show that RHM almost always offers substantial performance benefits in unsupervised cross-domain settings (e.g., when we wish to apply a model in a new geo-location with no labeled data). Our results also indicate that RHM usually offers substantially greater performance benefits than other common types of spectral augmentation [e.g., Affine, Gamma, or hue-saturation-value (HSV)], and additionally performs competitively with (or even better than) two state-of-the-art unsupervised domain adaptation approaches (i.e., CycleGAN or ColorMapGAN [4]), despite being substantially simpler and faster in many cases (e.g., RHM only has one hyperparameter, and does not require training additional models).

Next, in Section II, we provide further details about related work and how our contributions differ from them.

## II Related Work

In this section, we review related work for unsupervised domain adaptation in overhead imagery and how our work relates to and builds upon it. Unsupervised adaptation methods can broadly be divided into two main groups: model adaptation and data adaptation.
### _Unsupervised Model Adaptation_ In many of these approaches, the goal is to obtain features (e.g., through selection or learning) that are invariant across the source and target domains, but still useful to discriminate between the target classes. Some approaches have focused on a feature selection strategy, often using a curriculum learning approach, where \"easier\" (i.e., far-from-margin or high-confidence) samples are progressively given as training data until convergence [15, 16, 17, 18]. Other approaches attempt to learn the desired feature representation in target domains using feature alignment techniques [19] or discriminators to distinguish between domains [20, 21]. These approaches have been widely used for domain adaptation of remote sensing imagery, e.g., curriculum learning in [22, 23], discriminators used in [24], and feature alignment in [25, 26], and [27]. ### _Unsupervised Data Adaptation_ Our work here builds directly upon recently proposed methods of this kind. These methods are designed to modify the source and/or target domain data so that they are statistically more similar to each other. If successful, a recognition model that is trained and evaluated on the modified source and target data should be more accurate. These methods can be subdivided into the following two main categories: (i) domain standardization and (ii) domain matching. In (i), the goal is to map the source and target domains into some common domain. Some well-known examples of such approaches are normalization (or \\(z\\)-scoring) [28]; histogram equalization [28], color invariance approaches (e.g., [29, 30, 31, 32]), and recent approaches using DNNs [11]. In (ii), the goal is to match the source domain to the target domain. Graph matching [33, 34] and especially histogram matching (HM) [11] are common approaches for this. Based on the CycleGAN model [35], a large number of approaches have been proposed to train a DNN to map source domain data to be more similar to the target domain [4, 36, 37, 38, 39, 40]. One challenge with many of these approaches is that they can alter the semantic content of the source domain imagery [4] (e.g., changing object shapes or even their semantic class). More recently, ColorMapGAN [4] was proposed to address this challenge by restricting the DNN to perform pixel-wise intensity transformations, preventing the model from making more complex semantic changes to the imagery. The authors show that ColorMapGAN (along with CycleGAN [35]) outperformed a variety of types of unsupervised domain adaptation approaches for segmentation on overhead imagery. One limitation of ColorMapGAN, however, is that a separate model has to be learned between each _pair_ of source and target domains, which is impractical for a large number of source and target domains (many-to-many testing). In this work, we propose RHM as a simple and fast alternative to recent DNN-based unsupervised domain adaptation approaches. ### _Data Augmentation_ In this approach, the original training dataset is supplemented with transformed yet semantically consistent views of the training data that create variations in the training imagery. Some commonly utilized classes of transformations used for augmentation in remote sensing are Gamma corrections [41] and contrast changes (e.g., via HSV shifts [41]). These approaches are designed to build invariance to different classes of spectral shift. 
For this reason, we will compare RHM to Gamma and HSV augmentation approaches, and investigate whether DNNs can indeed become robust to these transformations, as is implicitly assumed in their application. ## III Experimental Materials and Methods ### _Experimental Datasets_ In our experiments, we employ two large publicly-available datasets for building segmentation: Inria [6] and DeepGlobe (DG) [1]. Both datasets are composed of high-resolution (0.3 m)color overhead imagery and have accompanying pixel-wise building labels. Both datasets include large quantities of imagery from several distant geographic locations, summarized in Table I. Importantly, each collection varies greatly in both their scene content and their spectral characteristics--Fig. 1 presents examples of imagery from DG and Inria illustrating these differences. ### _Segmentation Model and Training_ In recent years, U-Net [42] and its variants (e.g., [43]) have achieved state-of-the-art performance for building segmentation in overhead imagery (e.g., [1, 6]). Following [43], we modify U-Net by using ResNet encoders of varying size that have been pretrained on the ImageNet dataset. In our experiments, we will use a ResNet-50 encoder unless otherwise noted, to balance training speed and performance. We also make the following specific design choices for our models: 1. cross-entropy loss between the pixel-wise ground truth and predictions; 2. the SGD optimizer; 3. 90 epochs of training; 4. a batch size of 8. We also use a learning rate of 0.001 and 0.01, respectively, for the encoder and decoder of the U-Net models. A smaller learning rate is applied to the encoder since it is already pretrained on ImageNet. For both the encoder and decoder, we drop the learning rate by one order of magnitude after 50 and 80 epochs. These settings are chosen to be nearly identical to that those in [44]--the only variation is that we additionally drop the learning rate after 80 epochs to ensure that the validation loss converges by the end of training. ### _Baseline Adaptation Methods_ For baselines, we focused upon spectral adaptation methods, allowing us to compare RHM to methods with similar complexity (e.g., only altering spectral content of the imagery). We include computationally simple spectral augmentations that are widely-used in remote sensing (e.g., Gamma, HSV, Affine intensity augmentations), as well as ColorMapGAN, a recent _data-driven_ spectral adaptation method. We also consider one state-of-the-art method that is not restricted to spectral transforms, CycleGAN, to determine how spectral methods compare with more general adaptation methods. #### Iv-C1 Augmentation We consider three parameterized transformations as baseline augmentations for comparison to RHM: _Affine_, _Gamma_, and _HSV_. These are common transformations for modeling spectral transformations and as such are most relevant for comparison to our proposed method. Table II contains the parameterized functional forms and their respective distributions. The distribution of each parameter is chosen via a standard Bayesian hyperparameter optimization procedure using Gaussian processes, as described in [45]. For the HSV augmentation, we fix the scaling factor of the hue channel \\(\\alpha^{(H)}\\) to be 1, since the hue value corresponds to the angular dimension in the cylindrical geometry of HSV space. During training, we apply these augmentations in real time throughout training, with uniquely sampled augmentation parameters for each mini-batch iteration. 
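For concreteness, a minimal sketch of how such spectral augmentations can be applied on-the-fly during training is given below. The sampling ranges are illustrative placeholders rather than the exact distributions of Table II, and the HSV variant is omitted because it additionally requires a color-space conversion.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_affine(img):
    """Per-channel affine intensity augmentation x -> alpha * x + beta,
    with alpha and beta freshly sampled for each image (ranges illustrative)."""
    alpha = rng.uniform(0.8, 1.2, size=(1, 1, img.shape[2]))
    beta = rng.uniform(-0.1, 0.1, size=(1, 1, img.shape[2]))
    return np.clip(alpha * img + beta, 0.0, 1.0)

def random_gamma(img):
    """Gamma augmentation x -> x ** gamma, with gamma sampled per image."""
    gamma = np.exp(rng.uniform(-0.3, 0.3))
    return np.clip(img, 0.0, 1.0) ** gamma

def augment_batch(batch, transform):
    """Apply a freshly sampled instance of one augmentation class to every
    image in a mini-batch, mirroring the real-time sampling described above."""
    return np.stack([transform(img) for img in batch])

# batch of float images in [0, 1], shape (N, H, W, C)
batch = rng.random((8, 256, 256, 3), dtype=np.float32)
augmented = augment_batch(batch, random_gamma)
```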
#### Iv-C2 Standardization. Histogram equalization [28] One approach to standardizing each domain is to ensure that the contrast of all images are the same. Histogram equalization achieves this by adjusting the histogram of pixel intensities of each image to be uniform. We transform images in both the source and target domain in this manner and use the transformed images for training and testing the U-Net model, respectively. _Gray world [29]:_ This approach attempts to find a standardized domain in which each image's average color is gray, and therefore, invariant to illumination conditions that may affect each color channel independently. By modeling the deviation in color illumination of each channel from gray as a linear scaling, we may remove the scaling factor by normalizing each channel by its average pixel intensity. We transform images in both the source and target domain in this manner and use the transformed images for training and testing of the U-Net model, respectively. #### Iv-C3 Image-to-Image Translation. HM [28] A naive method for matching the distribution of the source domain to the target domain is to match the histogram of each source image to the aggregate histogram of the target domain. This matching is done independently for each channel. We transform only the images in the source domain in this manner and use the transformed images for training the U-Net model. _ColorMapGAN [4]:_ This state-of-the-art approach aims to learn an unconstrained pixel-wise mapping from the source to target domain, modeled as a generator in an unsupervised adversarial setting. As with most GAN set-ups, there is a generator \\(G\\) and a discriminator \\(D\\), where \\(D\\) attempts to differentiate images generated by \\(G\\) from the images in the target domain. On the other hand, \\(G\\) learns a unique pixel-to-pixel mapping for every possible RGB triple. \\(G\\) and \\(D\\) are trained simultaneously with the LSGAN [46] loss. After training, \\(G\\) is used to generate fake images from the source domain that look like the target domain--these fake source images are then used to train the U-Net model. We use the same hyperparameters as given in [4] in our own experiments. _CycleGAN. [35]:_ Similar to ColorMapGAN, this method learns a transformation between domains in an unsupervised adversarial setting. However, we now have two generators \\(G\\) and \\(F\\) where \\(G\\) attempts to transform the source domain \\(S\\) to the target domain \\(T\\) and \\(F\\) attempts to transform the target domain \\(T\\) to the source domain \\(S\\). Unlike ColorMapGAN, both \\(G\\) and \\(F\\) are multilayer networks that can realize more complicated functions than pixel-wise transforms. We also have two domain-specific discriminators \\(D_{S}\\) and \\(D_{T}\\) that attempt to differentiate the real and fake images in their respective domains. \\(G\\) and \\(D\\) are trained simultaneously via an objective that combines the conditional GAN [47] loss in both directions with a cycle-consistency loss term, which forces the compositions \\(F\\circ G\\) and \\(G\\circ F\\) to be the identity mapping. After training, \\(G\\) is used to generate fake images from the source domain that look like the target domain--these fake source images are then used to train the U-Net model. We use the same hyperparameters as given in [35] in our own experiments. ## IV RHM Augmentation RHM is a modification of conventional HM. 
In conventional HM, one matches the histogram of pixel intensities of one set of imagery to the corresponding histogram of another set of imagery. For example, for cross-domain adaptation, we can transform a single source image to match the histogram of created from the full collection of target domain imagery (see Section III-C). This approach works well if the scene content of the source imagery and the target imagery are similar, in which case the histograms of the two image collections _should_ be similar as well; any differences must be due to other factors (e.g., variations in lighting, imaging hardware, etc), which are often modeled as spectral domain shifts. Consequently, matching the histograms removes any existing spectral domain shift between the two image sets, if their scene content is similar. However, we hypothesize that it is unlikely that two random sets of imagery will contain similar scene content. In such cases, the differences in histograms will be due to content differences, in which case the two histograms will not generally be the same, even if other imaging conditions are similar (e.g., lighting, hardware). When such content differences are present, therefore, conventional HM produces undesirable results. For example, our experimental results in Section V-B indicate that conventional HM often works well, but that it is also inconsistent, and can sometimes fail badly. RHM is intended to mitigate this limitation of HM by relying upon matching many random pairs of imagery, making it likely that the content between the pairs will sometimes (by chance) be similar, causing HM to approximate the true underlying spectral shift for those pairs. Specifically, as outlined in Fig. 2, RHM matches the histogram of each training image with the histogram of a randomly-sampled target domain image. We hypothesize that this approach often results in image pairs with dissimilar scene content (like conventional HM), but that it also sometimes creates pairs with similar scene content. Consequently, some training images are augmented in a way that approximates the true underlying spectral shift between the source and target domain imagery. Furthermore, we hypothesize that largely inaccurate augmentations will be effective for training robust DNNs as long as it _periodically_ augments with the correct transformations. This is motivated by the experiments in Section V-A suggesting that a given class of spectral augmentations will work well as long as the true augmentation is a special case of the class (e.g., augmentation with _random_ spectral Affine transforms works well for a given target-domain if that target domain is shifted by _any specific_ Affine transform). ### _RHM Algorithm_ The detailed RHM training procedure is summarized in Algorithm 1, and we describe two of the major components in more detail: HM and entropy-based resampling. _HM:_ Mathematically, the HM is performed as follows: for a source image \\(X\\in\\mathbb{R}^{C\\times H\\times W}\\), let \\(F_{c}:\\mathbb{R}\\to[0,1]\\) be the normalized cumulative histogram of each channel \\(c\\in[C]\\), i.e., \\(F_{c}(x)\\) is the proportion of pixels in channel \\(c\\) with magnitude no more than \\(x\\). Similarly, define \\(G_{c}\\) to be the normalized cumulative histogram of channel \\(c\\in[C]\\) for a randomly selected target image \\(\\widetilde{X}\\in\\mathbb{R}^{C\\times H^{\\prime}\\times W^{\\prime}}\\). 
Then, the RHM augmented version \\(T(X;\\widetilde{X})\\) of source \\(X\\) with target \\(\\widetilde{X}\\) is defined as
\\[T(X;\\widetilde{X})_{c,i,j}=G_{c}^{-1}(F_{c}(X_{c,i,j})) \\tag{2}\\]
where \\(G_{c}^{-1}(y)\\triangleq\\min\\{x:G_{c}(x)\\geq y\\}\\).

_Entropy-based resampling:_ For some pairings of source and target images, the transform in (2) can result in a large loss of image information needed to perform the building segmentation task, which can be detrimental to model training. To limit the amount of image compression in RHM augmentations, we discard augmentations that lead to large compressions of the image intensity values. We measure the compression via the change in Shannon entropy of their histograms, here denoted \\(H(X)\\) for an image \\(X\\) and \\(H(T(X))\\) for a transformed image, where \\(H(X)\\) is defined as
\\[H(X)=-\\frac{1}{C}\\sum_{c\\in[C]}\\int f_{c}(x)\\log(f_{c}(x))\\,dx \\tag{3}\\]
where \\(f_{c}\\) is the normalized histogram of each channel \\(c\\in[C]\\) of \\(X\\). We then define the quantity \\(\\Delta H\\), the change in image information, as
\\[\\Delta H\\triangleq H(X)-H(T(X)). \\tag{4}\\]
The distribution of values of \\(\\Delta H\\) resulting from RHM transforms is shown in Fig. 3. If a source-target pairing would produce \\(\\Delta H>\\gamma\\), the first pairing is discarded, a new randomly selected target image is chosen, and the transformation in (2) is computed once more. To limit overall computation time, we only repeat this resampling step once for each source image. We show in our experiments in Sections V-B and V-C that this filtering step is consistently beneficial.

Fig. 3: Distribution of changes in image information \\(\\Delta H\\) resulting from RHM transforms. Higher values of \\(\\Delta H\\) correspond to greater degrees of loss in image information.

As a result, we are able to utilize variations in the target domain to apply random spectral shifts to the training imagery that we hypothesize periodically coincide with the true spectral shift between the source and target domains. Like the baseline augmentations described in Section III-C, we apply RHM as an online augmentation to each training image, where a new target image is sampled every iteration for matching. See Algorithm 1 for a full description of training using RHM with entropy resampling. RHM is not applied to the target domain during testing.

## V Experiments and Results

### _Does Spectral Augmentation Confer Robustness to Spectral Transformations?_

In this section, we investigate the extent to which modern DNNs with high-capacity feature encoders can become robust to spectral transformations of the form in (1). Many popular augmentation approaches for remote sensing (and elsewhere) apply random intensity transformations (e.g., HSV, Gamma) with the implicit assumption that DNNs will become robust to domain shifts with a similar functional form. To the best of the authors' knowledge, there have been no controlled experiments investigating these assumptions or evaluating how they depend either on the complexity of the spectral transformations or on the capacity of the DNNs involved. We also investigate whether augmentation with one class of spectral functions (e.g., HSV) confers general invariance to spectral transformations (e.g., the DNN learns to ignore spectral shifts of any kind), or whether the DNN only becomes robust to the particular class of functions it was trained upon. To address these questions, we emulate several different kinds of spectral domain shifts on the Inria dataset. We use the Inria test partition to create five different testing datasets.
Each testing dataset is the result of applying just one of five possible types of spectral augmentation to the original Inria testing dataset: Original (no augmentation), Affine, Gamma, HSV, and RHM. Each of these test datasets then represents one _class_ of spectral domain shifts. We then train five different models on the Inria training partition; however, each model is trained with just one of the five aforementioned augmentation strategies (including "Original"). We consider a DNN to be robust to a particular class of spectral domain shift (e.g., HSV) if its performance does not degrade significantly--compared to the unaltered Inria "Original" test dataset--when it is evaluated on that transformed test dataset. In these experiments, we can examine the robustness of models when their augmentation is perfectly matched to the test dataset's spectral domain shift (e.g., evaluate the HSV-augmented model on the HSV-augmented testing dataset)--an ideal scenario. In this setting, performance degradation (relative to testing on the "Original" unaugmented Inria test data) should arise only due to: i) inability of the model to become robust to the spectral transformation, or ii) loss of image information due to the augmentations (e.g., some spectral augmentations compress the imagery, by mapping several pixel intensities into a single intensity). The results of this experiment are presented on the diagonal (bolded) entries in Table III. As we see, the performance degradation for all domain shifts is relatively low, suggesting that the DNNs do achieve relative robustness when trained with a matching augmentation. We also applied each trained model to all of the other test datasets (i.e., those that have different augmentations), and the results of this are presented in the unbolded entries in Table III. As expected, substantial performance degradation is observed when applying the "Original" model to any of the augmented testing datasets. This confirms the importance of spectral augmentation of some kind when training DNNs with overhead imagery. Furthermore, when an augmentation is applied to the test set, we see that the best model for any testing dataset is the model that was trained with a matching augmentation. Interestingly, we find that training with an augmentation that does not match the test set augmentation usually results in further performance degradation (i.e., compared to a matching augmentation strategy), and sometimes a substantial degradation. This has several implications. First, these results suggest that spectral augmentations do not result in _general_ robustness to spectral domain shifts, such that the DNN learns to ignore many or most kinds of spectral shifts. Instead, it appears that they confer robustness only for the class of domain shifts that were presented in training. A corollary of this is that it is important to choose an augmentation strategy that does indeed emulate the domain shifts that can be expected in real-world imagery, and that failing to do so can result in substantial loss of otherwise recoverable performance. We also investigate the extent to which DNN robustness depends upon the capacity of the model (e.g., the number of free parameters it has).
Therefore, we repeated our experiments using segmentation models with three different encoder sizes: ResNet-18, ResNet-50, and ResNet-101. The results of this experiment are presented in Fig. 4, where we only report results when we train and test with the same augmentations. The results indicate that a larger model does seem to enable a greater level of invariance; except for RHM, where ResNet-50 has slightly better performance than ResNet-101, the ResNet-18 and ResNet-101 models consistently perform the worst and best, respectively.

Fig. 4: Percent performance degradation as a function of model size and augmentation type. For each combination of model size and training augmentation type, we measure the percentage performance degradation when testing on the augmented test set compared to the un-augmented test set.

### _One-to-One Domain Adaptation_

In this section, we compare RHM to other unsupervised domain adaptation methods when evaluated in a one-to-one scenario, i.e., we are given a single source domain, and we must maximize performance on a single (unlabeled) target domain [4, 12]. Following the practice of recent work [4], we treat the imagery over a single city as a single domain, and we randomly chose two pairs of cities (i.e., four total cities) for our one-to-one experiments. For each pair, we alternately trained on one of the two cities and tested on the other. The only constraint on the selection of the city pairs is that they must contain one city from the Inria dataset and one city from the DG dataset. These two datasets were produced by different groups and at different times, and therefore, we reason they are more likely to exhibit domain shifts. All experiments were conducted with a U-Net model with a ResNet-50 encoder, as described in Section III-B. All results are reported in terms of intersection-over-union (IoU). Descriptions of our baseline methods can be found in Section III-C. As baselines, we included a variety of methods that are comparable in their simplicity and speed to RHM (e.g., HSV and Gamma augmentation, gray-world standardization, etc.). We also included ColorMapGAN [4], a more sophisticated approach that recently reported superior results to a large number of other state-of-the-art unsupervised domain adaptation methods when evaluated in the one-to-one scenario. As an ablation study, we also test RHM _without_ the entropy-based resampling step described in Section IV-A, which is termed "RHM w/o EBR" in Table IV. The results of the benchmark experiments are presented in Table IV. RHM gives a slightly better IoU than RHM w/o EBR, while both RHM models perform substantially better than all other baselines (on average). RHM achieves the highest IoU on two of the four individual test cities (Vienna \(\rightarrow\) Vegas and Tyrol-w \(\rightarrow\) Shanghai). Although it does not achieve the highest performance on Vegas \(\rightarrow\) Vienna or Shanghai \(\rightarrow\) Tyrol-w, in both cases it is the second best performing model and achieves very similar performance to the top-performing approach.

### _Many-to-Many Domain Adaptation_

In this section, we compare RHM to other unsupervised domain adaptation methods when evaluated in a many-to-many scenario, i.e., we are given multiple source domains, and we must maximize performance on multiple (unlabeled) target domains [4, 12]. In contrast to the one-to-one scenario, the many-to-many setting is more likely to reflect real-world testing conditions in which a model is trained on a large and diverse training set and then tested on multiple new collections of imagery (i.e., multiple target domains).
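All comparisons in this and the following section are reported as IoU on the binary building masks; a minimal sketch of the metric as we assume it is computed is given below (not necessarily the authors' exact evaluation code).

```python
# Minimal sketch of intersection-over-union (IoU) on binary building masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty
    return float(np.logical_and(pred, target).sum()) / float(union)
```

An "Overall" score would aggregate the predictions of all test tiles before applying this formula, whereas a "City Average" averages the per-city IoUs, matching the two summary columns reported in the many-to-many tables.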
For these experiments, we train each model on one of our two multicity benchmark datasets (Inria and DG) and test on the other. Because ColorMapGAN was designed specifically for the one-to-one task and, therefore, may be at a disadvantage [4], for these experiments we utilized an additional benchmark method, CycleGAN [35]. CycleGAN recently achieved comparable performance to ColorMapGAN in [4], while being better suited for the many-to-many testing scenario. As in the previous section, we also test RHM with, and without, the entropy-based resampling step (see Section IV-A for details). The RHM model without resampling is denoted "RHM w/o EBR" in Tables V and VI. Our many-to-many experimental results for the two training/testing directions are reported in Tables V and VI, respectively. In each case, the IoU for each testing city is provided along with an "Overall" IoU (computed after aggregating all test city predictions) and a "City Average" (computed by averaging the IoUs of each test city). As with the one-to-one setting, we find that entropy-based resampling improves both the overall and per-city performance of RHM (in all nine cities except for Chicago). Moreover, RHM outperforms CycleGAN on 5 of the 9 cities and achieves better city average performance than all other baselines, while having a very similar overall IoU to CycleGAN in both training/testing directions. In Fig. 5, we illustrate example predictions for various methods, including RHM, across diverse conditions; we see that RHM yields considerably fewer false positives (encoded in red), which can explain the improvement in IoU. Notably, RHM gives comparable or better performance than CycleGAN without the need to train an auxiliary DNN (e.g., our CycleGAN typically required over three days to train on an NVIDIA Titan RTX) or to tune many hyperparameters. Compared to other simple unsupervised domain adaptation approaches, RHM provides substantially better average performance on both benchmark test sets. Finally, we note that applying other augmentations (e.g., HSV or Gamma) along with RHM does not improve performance, and in fact substantially degrades the performance of RHM--we postpone the results and discussion to the Appendix.

### _Run-Time Analysis_

In this section, we demonstrate that employing RHM during training only incurs modest computational costs compared to similar online augmentation approaches. Following the model and optimizer configurations outlined in Section III-B, we train a U-Net with a ResNet-18 encoder over a single epoch of the Inria dataset and benchmark the wall times of an average single training iteration for each baseline augmentation given in Table II, as well as RHM with entropy-based resampling. In Table VII, we report the increase in wall time compared to no augmentation used during training. We see that the use of RHM only results in a marginal increase in training time that is comparable to common baselines, and is in fact considerably faster than the widely used HSV augmentation. We note that our implementation of RHM computes histograms on-the-fly during training and does not reuse previously computed histograms in future iterations. At the cost of additional memory usage, it is possible to make RHM even faster by precomputing all histograms so that they are readily available for the matching process, but this is beyond the scope of our work.
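A minimal sketch of the matching transform in (2), with per-image channel CDFs precomputed once along the lines suggested above, is shown below; the 8-bit assumption and all function names are ours, not the authors' implementation.

```python
# Sketch of RHM, eq. (2), with precomputed per-channel CDFs (8-bit images assumed).
import numpy as np

def channel_cdfs(img: np.ndarray) -> np.ndarray:
    """Empirical CDF of each channel; shape (C, 256). Can be cached per image."""
    cdfs = []
    for c in range(img.shape[-1]):
        hist, _ = np.histogram(img[..., c], bins=256, range=(0, 256))
        cdf = np.cumsum(hist).astype(np.float64)
        cdfs.append(cdf / cdf[-1])
    return np.stack(cdfs)

def rhm_transform(source: np.ndarray, src_cdfs: np.ndarray,
                  tgt_cdfs: np.ndarray) -> np.ndarray:
    """Map each source intensity x to G_c^{-1}(F_c(x)) as in (2), via a 256-entry LUT."""
    out = np.empty_like(source)
    for c in range(source.shape[-1]):
        # G_c^{-1}(y) = min{x : G_c(x) >= y}, evaluated at y = F_c(0), ..., F_c(255)
        lut = np.searchsorted(tgt_cdfs[c], src_cdfs[c], side="left")
        lut = np.clip(lut, 0, 255).astype(source.dtype)
        out[..., c] = lut[source[..., c]]
    return out
```

With the CDFs cached when each image is first loaded, only the inexpensive table lookup remains in the training loop, which is the speed-for-memory trade-off mentioned above.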
Fig. 5: Visualization of segmentation masks for several domain adaptation methods across various cities in Inria and DG. The masks are colored as follows: black for true negative, blue for false negative, red for false positive, white for true positive.

## VI Conclusion

In this work, we address the problem of unsupervised domain adaptation in overhead imagery. To do so, we model domain shifts caused by variations in imaging hardware, lighting conditions (e.g., due to time-of-day), or atmospheric conditions as nonlinear pixel-wise transformations, and we show that DNNs can become largely robust to these types of transformations if they are provided with the appropriate training augmentation. In general, however, we do not know the transformation between any two sets of imagery. To overcome this problem, we propose RHM, a simple real-time training data augmentation approach. We then conduct experiments with two large benchmark datasets for building segmentation and find that RHM consistently yields comparable performance to recent state-of-the-art unsupervised domain adaptation approaches for overhead imagery, despite being substantially easier and faster to use. RHM also offers substantially better performance than other comparably simple and widely used unsupervised approaches for overhead imagery. This new approach to training augmentation has the ability to expand the efficacy of automated analysis of remote sensing data to more applications while reducing the burden of expensive labeled imagery from target domains.

## Appendix

As briefly mentioned in Section V-C, applying spectral augmentations such as Gamma and HSV in conjunction with RHM does not improve performance over standalone RHM; results for Inria to DG many-to-many domain adaptation are shown in Table VIII. We note that for all target cities, RHM alone achieves a better IoU than either of the combined methods, suggesting that this addition is not beneficial but detrimental for performance.

## Acknowledgment

The authors would like to thank the Energy Initiative at Duke University for supporting this work.

## References

* [1] I. Demir et al., "DeepGlobe 2018: A challenge to parse the Earth through satellite images," in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops_, 2018, pp. 172-181.

[MISSING_PAGE_POST] and D. Lindenbaum (2018) A parallel unsupervised cascade classifier for the analysis of multitemporal remote-sensing images. _Pattern Recognit. Lett._, vol. 23, pp. 1063-1071.

* [37] D. Tuia, C. Persello, and L. Bruzzone, "Domain adaptation for the classification of remote sensing data: An overview of recent advances," _IEEE Geosci. Remote Sens. Mag._, vol. 4, no. 2, pp. 41-57, 2016.
* [38] O. Tasar, S. L. Happy, Y. Tarabalka, and P. Alliez, "SemI2I: Semantically consistent image-to-image translation for domain adaptation of remote sensing data," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2020, pp. 1837-1840.
* [39] O. Tasar, A. Giros, Y. Tarabalka, P. Alliez, and S. Clerc, "DAugNet: Unsupervised, multisource, multitarget, and life-long domain adaptation for semantic segmentation of satellite images," _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 2, pp. 1067-1081, 2021.
* [42] O. Tasar, A. Giros, Y. Tarabalka, P. Alliez, and S. Clerc, "StandardGAN: Multi-source domain adaptation for semantic segmentation of very high resolution satellite images by data standardization," in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops_, 2020, pp. 770-778.
* [27] Y. Qin, L. Bruzzone, and B. Li, "Tensor alignment based domain adaptation for hyperspectral image classification," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 11, pp. 9290-9307, Nov. 2019.
* [28] R. C. Gonzalez and R. E. Woods, _Digital Image Processing_. Hoboken, NJ, USA: Prentice-Hall, 2002.
* [29] G. Buchsbaum, "A spatial processor model for object colour perception," _J. Franklin Inst._, vol. 310, no. 1, pp. 1-26, 1980.
* [30] F. Pacifici, N. Longbotham, and W. J. Emery, "The importance of physical quantities for the analysis of multitemporal and multiangular optical very high spatial resolution images," _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 10, pp. 6241-6256, Oct. 2014.
* [31] D. A. Forsyth, "A novel algorithm for color constancy," _Int. J. Comput. Vis._, vol. 5, no. 1, pp. 5-35, 1990.
* [32] K. I. Itten and P. Meyer, "Geometric and radiometric correction of TM data of mountainous forested areas," _IEEE Trans. Geosci. Remote Sens._, vol. 31, no. 4, pp. 764-770, Jul. 1993.
* [33] D. Tuia, J. Muñoz-Marí, L. Gómez-Chova, and J. Malo, "Graph matching for adaptation in remote sensing," _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 1, pp. 329-341, Jan. 2013.
* [34] D. Das and C. G. Lee, "Unsupervised domain adaptation using regularized hyper-graph matching," in _Proc. IEEE 25th Int. Conf. Image Process._, 2018, pp. 3758-3762.
* [35] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in _Proc. IEEE Int. Conf. Comput. Vis._, 2017, pp. 2242-2251.
* [36] J. Hoffman et al., "CyCADA: Cycle-consistent adversarial domain adaptation," in _Proc. Int. Conf. Mach. Learn._, 2018, pp. 1989-1998.
* [37] M.-Y. Liu, T. Breuel, and J. Kautz, "Unsupervised image-to-image translation networks," in _Proc. Int. Conf. Neural Inf. Process. Syst._, 2017, pp. 700-708.
* [38] X. Huang, M.-Y. Liu, S. Belongie, and J.
Kautz, \"Multimodal unsupervised image-to-image translation,\" in _Proc. Eur. Conf. Comput. Vis._, 2018, pp. 172-189. * [39] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang, \"Diverse image-to-image translation via disentangled representations,\" in _Proc. Eur. Conf. Comput. Vis._, 2018, pp. 35-51. * [40] B. Bendira, Y. Bazi, A. Koubaa, and K. Ouni, \"Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images,\" _Remote Sens._, vol. 11, no. 11, 2019, Art. no. 1369. * [41] A. Buslaev, V. I. Iglovikov, E. Khovcheangya, A. Parinov, M. Druzhinin, and A. A. Kalinin, \"Albumentations: Fast and flexible image augmentations,\" _Information_, vol. 11, no. 2, 2020, Art. no. 125. * Assist. Interv._, 2015, pp. 234-241. * [43] V. Iglovikov and A. Shvets, \"TernasNet: U-Net with VGG11 encoder pre-trained on ImageNet for image segmentation,\" 2018, _arXiv:1801.05746_. * [44] V. Nair, P. Rhee, J. Yang, B. Huang, K. Bradbury, and J. M. Malof, \"Designing synthetic overhead imagery to match a target geographic region: Preliminary results training deep learning models,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2020, pp. 948-951. * [45] J. Snoek, H. Larochelle, and R. P. Adams, \"Practical Bayesian optimization of machine learning algorithms,\" in _Proc. Adv. Neural Inf. Process. Syst._, 2012, pp. 2951-2959. * [46] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley, \"Least squares generative adversarial networks,\" in _Proc. IEEE Int. Conf. Comput. Vis._, 2017, pp. 2813-2821. * [47] M. Mirza and S. Osindero, \"Conditional generative adversarial nets,\" 2014, _arXiv:1411.1784_. \\begin{tabular}{c c} & Can Yaras received the B.S.E. degree in electrical and computer engineering and the B.S. degree in mathematics from Duke University, Durham, NC, USA, in 2021, and the M.S. degree in electrical and computer engineering in 2023 from the University of Michigan, Ann Arbor, Ann Arbor, MI, USA, where he is currently working toward the Ph.D. degree in ECE with the Electrical Engineering and Computer Science Department. His research interests include representation learning, efficient deep learning, and nonconvex optimization. \\\\ \\end{tabular} \\begin{tabular}{c c} & Kaleb Kassaw (Student Member, IEEE) received the B.S.E.E. degree in electrical engineering from the University of Arkansas, Fayetteville, AR, USA, in 2020 and the M.S. degree in electrical and computer engineering in 2023 from Duke University, Durham, NC, USA, where he is currently working toward the Ph.D. degree in electrical and computer engineering, advised by Dr. Leslie Collins and Dr. Jordan Malof. His research focuses on applied machine learning and computer vision. \\\\ \\end{tabular} \\begin{tabular}{c c} & Bohao Huang (Student Member, IEEE) received the B.S. degree in electrical, electronics and communications engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2011 and the M.S. and Ph.D. degrees in electrical and computer engineering from Duke University, Durham, NC, USA, in 2017 and 2021, respectively. His work in Ph.D. focuses on the application of computer vision and machine learning in remote sensing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Kyle Bradbury (Member, IEEE) received the B.S.E. degree in electrical engineering from Tufts University, Medford, MA, USA, in 2007, the M.S. degree in electrical and computer engineering and the Ph.D. 
degree in energy systems modeling from Duke University, Durham, NC, USA, in 2008 and 2013, respectively. He is currently an Assistant Research Professor in Electrical and Computer Engineering with Duke University and the Director of the Energy Data Analytics Lab, Nicholas Institute for Energy, Environment and Sustainability, Durham, NC, USA. His research focuses on developing and applying machine learning techniques to better understand, plan, and manage energy infrastructure, scarce energy resources, and climate impacts. In particular, his work focuses on the use of remotely sensed data including satellite and aerial imagery. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jordan M. Malof (Member, IEEE) received the B.S. degree in electrical and computer engineering (ECE) from the University of Louisville, Louisville, KY, USA, in 2008 and the Ph.D. degree in ECE from Duke University, Durham, NC, USA, in 2015. He is currently an Assistant Professor with the Department of Computer Science, University of Montana, Missoula, MT, USA, where he develops advanced computer vision, machine learning, especially deep learning approaches to solve challenging real-world problems in fields such as materials science, remote sensing, and defense. \\\\ \\end{tabular}
Modern deep neural networks (DNNs) are highly accurate on many recognition tasks for overhead (e.g., satellite) imagery. However, visual domain shifts (e.g., statistical changes due to geography, sensor, or atmospheric conditions) remain a challenge, causing the accuracy of DNNs to degrade substantially and unpredictably when testing on new sets of imagery. In this work, we model domain shifts caused by variations in imaging hardware, lighting, and other conditions as nonlinear pixel-wise transformations, and we perform a systematic study indicating that modern DNNs can become largely robust to these types of transformations, if provided with appropriate training data augmentation. In general, however, we do not know the transformation between two sets of imagery. To overcome this, we propose a fast real-time unsupervised training augmentation technique, termed randomized histogram matching (RHM). We conduct experiments with two large benchmark datasets for building segmentation and find that despite its simplicity, RHM consistently yields similar or superior performance compared to state-of-the-art unsupervised domain adaptation approaches, while being significantly simpler and more computationally efficient. RHM also offers substantially better performance than other comparably simple approaches that are widely used for overhead imagery. Augmentation, domain adaptation, segmentation.
Solid-State Pulsed Time-of-Flight 3-D Range Imaging Using CMOS SPAD Focal Plane Array Receiver and Block-Based Illumination Techniques Juha Kostamovaara \\({}^{\\copyright}\\), Sahba Jahromi \\({}^{\\copyright}\\), Lauri Hallman \\({}^{\\copyright}\\), Guoyong Duan \\({}^{\\copyright}\\), Jussi-Pekka Jansson \\({}^{\\copyright}\\), and Pekka Keranen Manuscript received January 31, 2022; accepted February 17, 2022. Date of publication February 24, 2022; date of current version March 15, 2022. This work was supported by the Academy of Finland under Grant 339997. (_Corresponding author: Juha Kostamovaara.)_The authors are with the University of Oulu, Circuits and Systems Research Unit, 90014 Oulu, Finland (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).Digital Object Identifier 10.1109/JPHOT.2022.3153487 ## I Introduction Three-dimensional range imaging techniques have found applications in surveying, civil engineering (e.g., construction site mapping), inspection and quality control, for example, and recently also in autonomous vehicles [1, 2]. These applications are dominated by lidar systems that use mechanically steered light to obtain the 3-D range image, with drawbacks arising from relatively high costs and mechanical fragility [3, 4]. It is generally recognized that 3-D range imaging could well be used in many other applications such as robotics, small vehicle guidance, (e.g., unmanned aerial vehicles, UAVs), virtual/augmented reality (VR/AR), consumer electronics (games) and machine control (e.g., construction and forestry machines) [5]. Solid-state 3-D range imager realizations, i.e., systems without any mechanically moving parts, are appealing techniques for these potential new applications since they would pave the way for low costs and miniature size realization. One successful approach to solid-state 3-D range imaging (e.g., in gesture control and games) uses a continuous wave (CW) modulated laser beam and deduces the distance information from the phase difference between the emitted and received signal with a CMOS active pixel sensor [6, 7, 8]. This technology gives a high image pixel resolution, but at the cost of a limited measurement range (typically only a few meters) and a relatively high average optical illumination power (up to hundreds of milliwatts) [8]. By using several simultaneous modulation frequencies it is possible to achieve longer unambiguous measurement range without compromising the precision, but the high optical average illumination power needed remains still as an issue [9]. Optical phased arrays are interesting candidates for 3-D range imaging, but these techniques are still in an early development phase [10, 11]. Another potential solid-state technique for 3-D range imaging uses a 2-D CMOS single photon detector (SPAD) array to measure directly the round-trip transit times of the photons from the transmitter to the target and back to the receiver [12, 13]. Typical realizations following this approach use the pulsed time-of-flight (TOF) principle in which the laser diode transmitter sends pulses of length 1 10 ns to the target in the flood illumination configuration, i.e., the whole system field of view (FOV) is illuminated for each transmitted pulse [14, 15, 16, 17, 18, 19]. A 2-D CMOS SPAD array is then positioned at the focal plane of the receiver optics. Thus, this approach resembles the focal plane array (FPA) techniques well known from CMOS image sensors. 
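As a point of reference for the pulsed TOF principle referred to above, the measured round-trip transit time maps directly to range; the sketch below is illustrative only, with one ~78 ps TDC bin of the receiver described later corresponding to roughly 1.2 cm of range.

```python
# Pulsed time-of-flight ranging: range r = c * t / 2 for round-trip transit time t.
C = 299_792_458.0  # speed of light [m/s]

def range_from_transit_time(t_s: float) -> float:
    return 0.5 * C * t_s

print(range_from_transit_time(78e-12))   # one ~78 ps TDC bin  -> ~0.012 m
print(range_from_transit_time(100e-9))   # 100 ns transit time -> ~15 m
```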
The advantages of the SPAD approach are the high sensitivity and low timing jitter inherent in single photon detection techniques. The typical sensitivity and jitter of a CMOS SPAD at NIR wavelength region are \\(\\sim\\)1 5% and \\(<\\)100 ps, respectively [20, 21, 22]. Since the avalanche breakdown of a SPAD element can easily produce a signal in the volt range, no analogue amplifiers are needed which markedly simplifies the receiver and also eliminates the electrical cross-talk. SPADs can be realized in standard CMOS, and thus other electronics such as the time-to-digital converters that record the photon flight time intervals can also be located on the same die, which is an important advantage from the system integration point of view. The focus of research in the latter field has up to now been mostly on the design of versatile receivers realizing a high density 2-D CMOS SPAD array and the related signal processing electronics. Most of the designs assume a flood illumination strategy, and the basic on-chip functionalities typically include a 2-D array of SPADs (e.g., 128 \\(\\times\\) 128 elements), time-to-digital converters (TDC) and some histogram processing to relieve the I/O data rate load [18]. To improve the SPAD performance (e.g., fill factor, photon detection probability) separate chips may also be used for the SPAD array and the other electronics [15]. Radiometric analysis and practical demonstrations show however, that at longer distances (\\(>\\)5 10 m) and with the laser pulse energies readily available from semiconductor laser diodes (\\(\\sim\\)10 nJ), the per pixel photon detection probability in this kind of a measurement system, especially at the limit of the measurement range, is rather low, e.g., around 1% or even lower [16, 23]. Thus, most of the detectors are idle as far as signal detection is concerned. Another illumination strategy tries to overcome this issue and achieve a more efficient use of the available average illumination power by illuminating the system FOV in blocks. In this strategy, the FOV is segmented into to 16 blocks, for example, and each of these is illuminated by a certain number of successive laser pulses (e.g., 5000) so that the required signal-to-noise ratio (SNR) for a successful detection of the target pixels within the illuminated segment is achieved. As described in greater details in Section II, this approach enables one to achieve a higher ratio between the number of signal photons detected and the background photons in a given time frame and with a given average illumination power (with certain assumptions concerning the measurement situation) than what is available with flood illumination [24, 25]. The practical realization of the illuminator can consist of a number of laser diode emitters which are located so that each one produces the illumination for a particular segment in the system FOV [26]. Thus, this system, which resembles mechanical scanning of the laser beam over the system FOV, can be realized in full solid state, which is an important advantage. Also, the synchronization issue known to exist when using a mechanical scanner and a SPAD-based receiver is not experienced here due to the unambiguous knowledge of which of the laser emitters is being driven at any particular time [27]. Interestingly, a similar illumination concept realized with an on-chip thermo-optic switching tree and a focal plane grating-based transmit array was recently applied to FMCW lidar based 3-D ranging with promising results [28]. 
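The benefit of block-based illumination over flood illumination at a fixed average power, as argued above, can be seen from a back-of-the-envelope model in which detection is background-limited and each SPAD is only read out while its own block is illuminated. The probabilities below are illustrative, and the single-detection-per-pulse blocking effect is neglected.

```python
# Toy comparison of flood vs. block illumination at equal average power and frame time.
import math

M = 16                     # number of illumination blocks
pulses_per_frame = 80_000  # laser pulses per 3-D frame (illustrative)
p_sig_flood = 0.01         # per-pixel signal detection probability per pulse (flood)
p_bg = 0.02                # per-pixel background detection probability per pulse window

# Flood illumination: every pixel integrates over all pulses.
sig_flood = pulses_per_frame * p_sig_flood
bg_flood = pulses_per_frame * p_bg

# Block illumination: M-times higher pulse energy on the illuminated block,
# but each pixel is active for only 1/M of the pulses.
sig_block = (pulses_per_frame / M) * (p_sig_flood * M)
bg_block = (pulses_per_frame / M) * p_bg

snr_flood = sig_flood / math.sqrt(bg_flood)
snr_block = sig_block / math.sqrt(bg_block)
print(snr_block / snr_flood, math.sqrt(M))  # both ~4.0 for M = 16
```

In this simple picture the signal counts per pixel are unchanged while the accumulated background counts drop by a factor of M, so the signal-to-background ratio improves by roughly \(\sqrt{M}\), consistent with the square-root-of-blocks improvement quoted in the summary of this work.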
We present here a full realization of a pulsed time-of-flight 3-D range imager which is based on a segmented, block-based illumination transmitter realized in solid state and on CMOS 2-D SPAD/TDC receiver techniques. The design principles are presented in Section II. Section III describes the realizations of the transmitter and receiver; measurements demonstrating the performance of the resulting 3-D range imager are given in Section IV; and finally, the results are discussed and compared with those of other relevant studies in Section V.

## II Measurement Concept and Design Considerations

The concept behind the 3-D range imager developed here is shown in Fig. 1. The transmitter consists of a custom-designed common anode laser diode bar with 16 separately addressable laser diode elements. This laser diode bar is located at the focal plane of the transmitter optics with the result that when one of the laser diode elements is being driven, only the corresponding segment within the field of view of the system is illuminated. On the receiver side, a custom-designed CMOS SPAD IC receiver with 32 \(\times\) 128 SPAD elements is located at the focal plane of the receiver optics. Each of the SPAD elements in the array sees photons from a certain direction only, and by measuring the transit time of these photons the 3-D range coordinates (x, y, z) of the corresponding target point can be determined. The x and y coordinates are determined by the position of the SPAD element under consideration and the z coordinate can be calculated from the measured transit time. In addition to the SPAD array, the receiver IC also contains 2 \(\times\) 128 time-to-digital converters which can be electrically connected to any two 128-element SPAD lines within the receiver array (corresponding to the segment under illumination). Thus, the total number of TDCs needed in this realization is determined by the number of SPAD elements within a single illumination block (i.e., 2 \(\times\) 128 in this case), and not by the total number of SPAD elements in the receiver (32 \(\times\) 128 in this particular design) as would be needed in a flood illumination-based system. This is an important advantage since the design of accurate TDCs typically requires a considerable circuit area. The signal-to-noise ratio in a typical worst case measurement scenario will be discussed next. It is assumed that for a single emitted laser pulse the probability of photon detection is \(<<\)1 (the probability that a particular SPAD is triggered by a signal photon), which is the case in a typical measurement situation especially at the maximum end of the measurement range [23, 25]. It will be seen in the experimental section that for this particular design the signal photon detection probability at the limit of the measurement range is \(\sim\)0.05, which would justify the above assumption even if a markedly higher pulse energy were used. This allows one to deduce that for a valid signal detection a bunch of laser pulses (e.g., 5000) would need to be sent, and that the echo pulse detection probability would then improve proportionally to the number of laser pulses emitted. Any noise in the measurement, especially in the potential outdoor applications suggested above, would typically come from random detections induced by the background radiation, i.e., from the sun. The intensity of the background radiation can be usefully characterized by the mean time interval between the random detections in a SPAD, \(\tau_{\rm BG}\), a parameter which depends on the intensity of the background radiation and on certain system-level parameters that will be described in more detail below.

Fig. 1: Block-based illumination: concept, transmitter PCB and CMOS receiver IC.
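Before turning to the noise analysis, a minimal sketch of how a SPAD pixel index and a measured transit time translate into a 3-D point is given below. It assumes a simple pinhole model for the receiver optics; the pitch and focal length values are taken from the hardware description later in the paper, and the real system additionally calibrates on-chip delays and lens distortion.

```python
# Sketch: SPAD (row, col) plus transit time -> (x, y, z); pinhole receiver model assumed.
import numpy as np

C = 299_792_458.0   # m/s
PITCH = 40e-6       # SPAD pixel pitch (40 um)
F_REC = 6.7e-3      # receiver focal length (6.7 mm)

def spad_to_point(row: int, col: int, transit_time_s: float,
                  rows: int = 32, cols: int = 128) -> np.ndarray:
    r = 0.5 * C * transit_time_s                   # radial distance to the target
    dx = (col - (cols - 1) / 2) * PITCH / F_REC    # tangent of horizontal view angle
    dy = (row - (rows - 1) / 2) * PITCH / F_REC    # tangent of vertical view angle
    direction = np.array([dx, dy, 1.0])
    return r * direction / np.linalg.norm(direction)
```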
The value of \(\tau_{\rm BG}\) is typically from a few ns up to hundreds of nanoseconds, which is markedly shorter than the typical interval between detections caused by the SPAD's internal random thermal noise (dark counts), and thus the dark counts of the detector can be disregarded. In a typical measurement sequence a bunch of laser pulses is sent into the illuminated segment and time histograms of the detections are collected for each SPAD corresponding to the illuminated segment. The illumination is then switched to the next segment by driving the next laser diode in the bar, and this procedure is then repeated for all the laser diode elements. As an example, if 5000 laser pulses were sent per segment, and the system FOV were divided into 16 segments, the frame rate with a laser driver pulsing rate of 250 kHz would be \(\sim\)3 fps. The per-pixel histogram would then consist of random detections due to the background noise, together with the signal hits at the time position corresponding to the transit time of the pulse to the target and back to the receiver. Matched filtering is typically used to maximize the signal-to-noise ratio during detection. In order to be able to detect the signal echo reliably from among the background hits, the SNR should be high enough. In the following analysis, the signal-to-background noise ratio (SN\(_{\rm BG}\)R) will be calculated (instead of SNR) since it clearly shows the effect of the shortening of the laser pulse (while maintaining the total pulse energy at the same level) on the signal detection performance, especially on the false alarm rate. For a precise analysis of the false alarm rate and detection probabilities, complete photon statistics and multiple pulse probabilities should be used, see for example [29, 30, 31]. A rough estimate of the SN\(_{\rm BG}\)R can be given as the ratio between the average number of signal detections and the square root of the number of random noise detections during the laser pulse envelope, see (1). For simplicity, homogeneous illumination is assumed, and the cosine effects of the optics are neglected. The number of signal photon detections can be evaluated using the well-known inverse-of-distance-squared dependent radar equation and system parameters. If we assume that the hit histogram is filtered with a simple integrator with an integration time corresponding to the width of the laser pulse (\(\Delta\rm t_{pulse}\), FWHM), the probability of detecting a noise count within the pulse envelope will be roughly \(\Delta\rm t_{pulse}/\tau_{\rm BG}\).
\[\mathrm{SN_{BG}R}\,(r)=\frac{N_{\mathrm{signal\_hits}}(r)}{\sqrt{N_{\mathrm{BG\_hits}}}}\approx\frac{e^{-\frac{2\,(r-r_{\mathrm{gate}})}{c\,\tau_{\mathrm{BG}}}}\cdot\dfrac{E_{\mathrm{opt}}\,\varepsilon\,\rho\,A_{\mathrm{rec}}\cdot\mathrm{PDP}\cdot\mathrm{FF}\cdot M}{r^{2}\,E_{\mathrm{ph}}\,x\,y}\cdot\dfrac{f_{\mathrm{LD}}}{M\,f_{\mathrm{frame}}}}{\sqrt{e^{-\frac{2\,(r-r_{\mathrm{gate}})}{c\,\tau_{\mathrm{BG}}}}\cdot\dfrac{\Delta t_{\mathrm{pulse}}}{\tau_{\mathrm{BG}}}\cdot\dfrac{f_{\mathrm{LD}}}{M\,f_{\mathrm{frame}}}}} \tag{1}\]
In (1), \(\rm E_{opt}\) is the energy of the laser pulse, \(\varepsilon\) the efficiency of the optics, \(\rho\) the reflection coefficient of the target, \(\rm A_{rec}\) the aperture area of the receiver optics, r the distance to the target, \(\rm E_{ph}\) the photon energy, PDP and FF are the detection efficiency and fill factor of the SPAD pixel, respectively, x and y are the total number of SPAD elements in two orthogonal directions over the receiver surface, M is the number of illumination blocks, \(\rm f_{LD}\) is the pulsing frequency of the laser transmitter, \(\rm P_{av}\) is the average illumination power and \(\rm f_{frame}\) is the desired 3-D range image frame rate. Thus, \(\rm f_{LD}/(M*f_{frame})\) is the number of laser pulses emitted for any of the target points to achieve a valid image result, and the total number of signal and background detections is proportional to this. Since the SPADs in the receiver IC can be triggered only once per emitted laser pulse, the detection of a signal or background photon (within the pulse envelope) is possible only if no background photon is detected during the photon flight time \(\Delta\rm t_{flight}\). The probability of this follows an exponential distribution (assuming Poisson statistics) and is \(\rm e^{-(\Delta\rm t_{flight}/\tau_{\rm BG})}\). This term produces a serious attenuation in the signal (and background) counts and may even block the receiver if the photon flight time is longer than the mean time interval between background radiation induced detections. This attenuation can be relieved, however, if the SPADs are activated only after the emission of the laser pulse, e.g., with a time delay that corresponds to a minimum target distance of \(\rm r_{gate}\). This would shorten the active time window of the SPADs correspondingly (term r-\(\rm r_{gate}\) in (1)) and thus also improve the overall performance, as will be demonstrated in Section IV. The use of segmentation would also reduce the number of TDCs needed in the receiver relative to a flood illumination-based system (by a factor of 16 in the above example), which would markedly reduce the receiver complexity. Due to the electric control maintained over the segments, the illumination can be adaptive in the sense that the illumination sequence can be selected freely depending on the needs of the application and also changed during operation (e.g., the system FOV can be traded off against faster measurement or a longer range). Another interesting design consideration can be discovered by looking in more detail at how the mean time interval between background-illumination-induced detections, \(\tau_{\rm BG}\), depends on the level of background radiation and the target and system parameters. The time interval \(\tau_{\rm BG}\) can be calculated based on the background illumination power \(\rm P_{B}\) as seen from the active area of the SPAD element, the photon energy \(\rm E_{ph}\) (e.g., 2.5\(\cdot\)10\({}^{-19}\) J at \(\sim\)810 nm), and the probability of a photon detection, PDP, in the SPAD (\(\sim\)4% at 810 nm in CMOS) as given in (2)-(4).
\\[\\tau_{\\rm BG} =\\frac{1}{\\rm PDP}\\ \\left(\\frac{\\rm E_{ph}}{\\rm P_{B}}\\right) \\tag{2}\\] \\[\\rm FOV_{\\rm SPAD} =\\frac{\\rm\\Phi_{\\rm SPAD}}{\\rm f_{\\rm rec}}\\] (3) \\[\\rm P_{B} \\approx\\rm I_{S}\\cdot A_{\\rm rec}\\cdot\\rho\\cdot\\left(\\frac{\\rm FOV _{\\rm SPAD}\\cdot\\sqrt{\\rm FF}}{2}\\right)^{2}\\cdot BW_{\\rm opt} \\tag{4}\\] In (2)-(4), \\(\\rm I_{S}\\) is the spectral irradiance of the background radiation on the target surface at \\(\\sim\\)810 nm (maximum \\(\\sim\\)700 mW/(m\\({}^{2}\\)*nm), corresponding to \\(\\sim\\)100 klux), \\(\\rm A_{rec}\\) is the area of the receiver aperture, \\(\\rm\\Phi_{\\rm SPAD}\\) the diameter of the SPAD element, \\(\\rm f_{rec}\\) the effective focal length of the receiver optics, \\(\\rho\\) (e.g., 0.5/\\(\\pi\\) sr\\({}^{-1}\\)) the reflection coefficient of the Lambertian target, \\(\\rm BW_{\\rm opt}\\) the optical bandwidth of the receiver (e.g., 50 nm) and FF the fill factor of the SPAD detector element. \\(\\rm FOV_{\\rm SPAD}\\) is the linear field of view of a single SPAD element given in radians. As can be seen, \\(\\rm P_{B}\\) does not depend on distance, but is strongly dependent on the FOV of the SPAD. To reduce the background power, narrow optical filtering is preferred, but the passband of the spectral filter must be wide enough to accommodate wavelength shifts caused by variations in temperature and the angle of incidence of the optical rays. Interestingly, we now see from (1) that in this kind of system (one detection per laser shot), there is an optimum size for the receiver aperture [29, 30, 24]. While an increase in the receiver aperture would in principle improve the \\(\\rm SN_{\\rm BG}R\\) (proportional to \\(\\sqrt{A_{rec}}\\)), the exponential attenuation effect would increase as well, due to the inverse dependence of the \\(\\tau_{\\rm BG}\\) on the aperture size. At large aperture size the latter effect will dominate and thus there is typically an optimum aperture size to be found under high background illumination conditions which will maximize the measurement range. Optimization can also be achieved, of course, by means of a variable optical attenuator in front of the receiver, or by changing the PDP of the SPAD, for example. ## III System Design ### _Block-Based Illuminator_ The emitter of the transmitter is a custom designed laser diode (LD) bar with 16 individual emitting elements. The laser diodes have a common anode contact and separate cathode contacts allowing n-type driver transistors and thereby simplifying the transmitter design. The pitch of the laser diode bar is 300 \\(\\upmu\\)m. The individual elements have an active layer stripe width and cavity length of 150 \\(\\upmu\\)m and 1.5 mm, respectively. The wavelength of the laser radiation is \\(\\sim\\)810 nm. The active layer of each laser consists of five GaAs/AlGaAs quantum wells each 40 A thick, and thus the total thickness of the active material is \\(\\rm d_{a}\\sim\\)200 A. The active layer is positioned towards the p-cladding so that the equivalent spot size \\(\\rm d_{a}/T\\) (where \\(\\Gamma\\) is the optical confinement factor), is relatively large (\\(>\\sim\\)3 \\(\\upmu\\)m). As shown in our earlier work, a high value for the equivalent spot size will enable the accumulation of a large number of carriers in the active layer before the emission of the gain-switching pulse, leading to high gain-switching pulse energy (enhanced gain switching) [32, 33]. 
This kind of design allows for the generation of sub-ns optical pulses of relatively high energy (nJ regime) even when driven with markedly longer current pulses (ns range) [34]. As an example of this, Fig. 2 shows the optical output of one laser diode element in the bar when driven with current pulses of different amplitudes and a length of \\(\\sim\\)1 ns (FWHM) at 1 kHz. As can be seen, the maximum drive current of \\(\\sim\\)13 A results in a single isolated optical pulse with a length of \\(\\sim\\)100 ps and peak power of \\(\\sim\\)35 W. Thus, the pulse energy is \\(\\sim\\)3 nJ which is considerably more than that achieved from gain-switched laser diodes with a conventional structure [35, 36]. It is also seen that as a result of the design, the threshold current of such a structure is relatively high, i.e., \\(\\sim\\)10 A in this case. For this reason, there is no optical output for the laser with the lowest drive current pulse (peak current \\(<\\)10 A). Fig. 3 shows a schematic diagram of the laser transmitter electronics. An FPGA controller triggers the GaN drivers sequentially with a pulsing rate of 256 kHz, providing pulses of adjustable widths to the gates of the GaN switches, so that the pulsing rate of each of the laser diodes is 16 kHz (256 kHz/16). Of course, other type of pulse sequenc Fig. 2: Optical output power and corresponding drive current pulse shapes for a single laser diode element in the 16-element common anode laser diode bar, gate pulse width \\(\\sim\\)1 ns. easily, just by changing the code of the FPGA. The PCB realization of the driver part of the transmitter is also shown in Fig. 3. The total anode capacitance is estimated to be \\(\\sim\\)700 pF (including the anode stray capacitance in the laser diode bar), and the series resistor R\\({}_{\\rm D}\\) was varied between 2.5 \\(\\Omega\\) (the result of Fig. 2) and 5 \\(\\Omega\\) (the result of Fig. 4). In the system test the laser diodes were driven with longer \\(\\sim\\)3.5 ns current pulses to increase the energy of the resulting optical pulses. Fig. 4 shows the resulting driver current and optical pulses within the HV supply range of 50 to 90 V for a single laser diode element in the bar. The optical pulses now show a tail of length \\(\\sim\\)2 ns, which increases the total maximum pulse energy to \\(\\sim\\)10 nJ. The total power consumption of the transmitter with a driver pulsing rate of 256 kHz is \\(\\sim\\)1 W. It should also be noted that for ns-range optical pulses a conventional double heterostructure laser diode structure (optimizing optical confinement) can well be used in the transmitter. Cylindrical optics are used to shape the optical radiation of the laser diode bar into a system illumination cone of 40\\({}^{\\circ}\\)\\(\\times\\) 10\\({}^{\\circ}\\) (FWHM). When focused to infinity, 16 illumination segments with a separation corresponding to the height of the segment are formed (see Fig. 5 for the measured time-averaged illumination pattern). As explained above, in reality only one of the 16 segments is illuminated at a time. The illumination pattern can be made to be more homogenous by slightly misfocusing the transmitter [26]. ### _Receiver_ The custom-designed receiver is a 6.6 mm \\(\\times\\) 5.5 mm integrated circuit (IC) consisting of a 128 \\(\\times\\) 32 SPAD array and a 257-channel TDC array, realized in 0.35 \\(\\mu\\)m HV CMOS technology. A micrograph of the IC is shown in Fig. 6. 
The light-sensitive part of the IC, which has a fill-factor of \\(\\sim\\)35%, is highlighted. The inset in Fig. 6 shows a zoomed version of the SPAD array with each SPAD having a 40 \\(\\mu\\)m \\(\\times\\) 40 \\(\\mu\\)m pitch (active area of 26 \\(\\mu\\)m \\(\\times\\) 21 \\(\\mu\\)m) and a deep-nwell cathode/p\\(+\\) anode junctions (a shared cathode structure). A block diagram of the receiver as part of the 3-D imager system is shown in Fig. 7. The SPAD array can be regarded as 16 blocks, each consisting of two adjacent rows (2 \\(\\times\\) 128 SPADs), and any of these blocks can be connected to the 256 TDC bank in each measurement through tri-state buffers. The 257th TDC is always reserved for measuring the start transmission of the laser pulse. The TDCs can measure with a resolution of \\(\\sim\\)78 ps. The operation time of the SPADs can be limited during each measurement (gated mode) to suppress dark and background light-induced detections from blocking the SPADs before the Fig. 4: Optical output power and corresponding drive current pulse shapes for a single laser diode element in the 16-element common anode laser diode bar, gate pulse width \\(\\sim\\)3.5 ns. Fig. 5: Photograph of the time-averaged illumination pattern of the transmitter measured at a distance of \\(\\sim\\)1 m. Fig. 3: Schematic diagram of the 16-element laser diode bar driver electronics and details of the printed circuit board realization of the 16-element block-based illuminator. Fig. 6: Micro-photograph of the receiver IC showing the 32 \\(\\times\\) 128 SPAD array, the 257 element TDC array and the switches controlling the active receiver block with 2 \\(\\times\\) 128 SPAD and 256 TDC elements. arrival of the laser pulse photons. The width and delay of the gate windows can be adjusted between 5 ns and 635 ns (with 5 ns resolution) depending on the measurement conditions. Matching scans of the LD array and the SPAD array can be performed in 16 cycles for the 16 blocks (in a block-based illumination scheme) to cover the whole field of view. A timing diagram of receiver's operation during one cycle for one block is shown in Fig. 8. At the beginning of each cycle, the IC is configured for the measurement (i.e., a block is chosen for the measurement and the gate window is set), then the time of the start signal and SPAD detections are measured by the respective TDCs, and finally the measured data are read out from the IC. The cycle for one block can be repeated the desired number of times before moving on to the next block. More details of the design and performance of the receiver circuit are presented in [25]. ## IV 3-D Range Image Demonstrations ### _Indoor Measurements_ Since the basic properties of the SPAD based pulsed time-of-flight imaging, e.g., its precision and the dependence of this on pulse length and the averaging of successive results, have been well covered in prior research, see [37, 38, 23] and references therein, the emphasis in the experimental section of the current study will be on demonstrating the functionality of a 3-D range imager in various measurement situations. In fact, in spite of quite active research in this field, there have been so far only a few full solid-state realizations (including transmitter and receiver) to demonstrate the recording of 3-D range images, especially under high background illumination conditions [16, 17, 25, 39]. 
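The equivalent frame rates quoted for the measurements below follow directly from the block-sequential cycle described above: all 16 blocks are illuminated in turn, each for a chosen number of laser shots, at the 256 kHz driver rate. A minimal sketch:

```python
# Equivalent 3-D frame rate of the block-sequential acquisition.
def frame_rate(pulse_rate_hz: float, shots_per_block: int, n_blocks: int = 16) -> float:
    return pulse_rate_hz / (shots_per_block * n_blocks)

print(frame_rate(256e3, 8192))    # ~2 fps  (indoor, high-averaging case)
print(frame_rate(256e3, 512))     # ~31 fps (indoor, fast case)
print(frame_rate(256e3, 16_000))  # ~1 fps  (outdoor case)
```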
In order to demonstrate the functionality of the 3-D range imager developed here, system test measurements were carried out both indoors and outdoors. The aperture and focal length of the receiver optics were 5.6 mm and 6.7 mm, respectively, and an optical bandpass filter bandwidth of 40 nm (FWHM) was used to reduce the effect of the background illumination. The field of view (FOV) of the receiver was \\(\\sim\\)40\\({}^{\\circ}\\)\\(\\times\\) 10\\({}^{\\circ}\\). A picture of the target scene for the indoors measurements (a classroom) is shown in Fig. 9, and the corresponding 3-D range image (point cloud) recorded at a frame rate of \\(\\sim\\)2 fps is shown in Fig. 10. The laser drive rate was 256 kHz, and the radiation from a single laser diode element fell on two lines of SPAD elements at a time. 8192 laser shots were recorded per pixel so that the equivalent frame rate with 16 laser diode elements was \\(\\sim\\)2 fps. The average laser power used for illumination was \\(\\sim\\)2.6 mW. The signal processing of the hit histograms of the 32 \\(\\times\\) 128 Fig. 8: Timing diagram of receiver operation during the imaging of one block. Fig. 10: 3-D range image of the scene shown in Fig. 9 measured with 8192 laser shots per pixel, equivalent frame rate \\(\\sim\\)2 fps. Fig. 7: Block diagram of the receiver IC as part of the 3-D imager. Fig. 9: Photograph of the target scene in indoors measurements. points included calibration of the static on-chip delay differences within the receiver IC and of the distortion of the receiver optics. The raw histogram was filtered using the shape of the optical laser pulse as the filtering time-domain template (i.e., matched filtering). The distance result was determined by finding the channel with maximum number of detections and then averaging the result around it. Most of the target points were located on the walls at the back and to the left of the scene, while some detector elements were hit from the tables as well. The distance to the back wall was \\(\\sim\\)13 15 m. It is seen that the target shapes in the scene are quite well recorded under normal office lighting. The shape of the black pole to the left of the scene, for example, is clearly seen in the recorded 32 \\(\\times\\) 128 pixel 3-D range image. For comparison, Fig. 11 shows the same scene measured with 512 laser shots per point, in which case the equivalent frame rate is \\(\\sim\\)30 fps. Some point results are now missing, e.g., from the surface of the black pole on the left of the image. The point cloud quality is still relatively good, however. The detection rates of pixels in the 32 \\(\\times\\) 128 SPAD array during the measurement shown in Fig. 10, i.e., with the rate of \\(\\sim\\)2 fps, as shown in Fig. 12, are seen to vary in the range 1 10% over the image area. The lower detection probability at the sides of the pole is clearly visible in the detection rate map (columns 75 80), and it is evident that one of the laser diodes in the bar is emitting considerably lower power than the others (rows 3-4). The lower intensity in every other row is due to the gap between the laser diode emitters (150 \\(\\mu\\)m). The line profile of one of the 32 rows, in this case row no. 15, as measured in the situation shown in Fig. 11 using 512 laser shots per point, is shown in Fig. 13. For this purpose a simple parabolic filter with a FWHM width approximately fitted to the laser pulse width was used to filter the raw histogram data. 
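The per-pixel histogram processing described above (matched filtering with the pulse shape, then locating and averaging around the strongest channel) can be sketched as follows; the bin width matches the ~78 ps TDC resolution, while the template and averaging window are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of per-pixel histogram processing: matched filter + local centroid.
import numpy as np

BIN_S = 78e-12          # TDC bin width (~78 ps)
C = 299_792_458.0

def distance_from_histogram(hist: np.ndarray, template: np.ndarray,
                            half_window: int = 3) -> float:
    filt = np.convolve(hist.astype(float), template[::-1], mode="same")  # matched filter
    peak = int(np.argmax(filt))
    lo, hi = max(0, peak - half_window), min(len(hist), peak + half_window + 1)
    bins = np.arange(lo, hi)
    weights = hist[lo:hi].astype(float)
    t = (bins * weights).sum() / max(weights.sum(), 1.0) * BIN_S
    return 0.5 * C * t
```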
The profiles of the wall on the left side of the figure are well recognizable. It is also seen that the variation in the results, e.g., on the back wall increases towards the right side of the picture, which is due to the lower numbers of detections in that region. As an example of the measured histograms, Fig. 14 shows the raw and filtered data of one of the points from the 32 \\(\\times\\) 128 point cloud. The data shown were recorded from the point marked with a red dot in Fig. 13. The upper curve in Fig. 14 shows the raw data and the lower curve the filtered histogram. The time histogram consists of the random hits caused by the background Figure 11: 3-D range image of the scene shown in Fig. 9 measured with 512 laser shots per pixel, eq. frame rate \\(\\sim\\)30 fps. Figure 14: Raw count (upper curve) and filtered intensity histograms (lower curve) for one of the points in the result depicted in Fig. 13(marked with a red dot in Fig. 13). Figure 12: Detection rate over the 2-D SPAD array with 32 \\(\\times\\) 128 pixels in the measurement situation represented in Fig. 10. Figure 13: Line profile of the row 15 in the point cloud of 32 \\(\\times\\) 128 pixels measured with 512 laser shots in the measurement situation of Figs. 9 and 11. radiation and the signal hits within the pulse envelope of 0.3 m (corresponding to the pulse length \\(\\sim\\)2 ns) at a distance of 13 m. The number of hits within the pulse envelope in this particular point was 29 (out of 512 laser pulses emitted), which conforms to expectations in the light of (1) (within the limits of the statistics and the uncertainty with regard to some parameters). Due to the low background illumination (office lighting), the SNR in the measurement is defined solely by the signal photon statistics. In order to appreciate the available precision, the error distributions for the points along the back wall (right-hand part of the line profile) in Fig. 13 were analyzed for 512 and 8192 laser shots, respectively. The error distributions are shown in Fig. 15. The upper boxes show the point-wise error for 512 laser shots and the lower boxes that for 8192 laser shots. As is seen, more clearly from the results with 8192 laser shots due to the higher averaging, the calibration of the distortion of the receiver optics was not entirely accurate since it shows a quadratic dependence on the horizontal measurement position. This error was then removed in order to focus more clearly on the random error, which was markedly smaller for the higher number of laser pulses. Given an average detection rate over the whole line of \\(\\sim\\)5%, the expected precision for 512 laser shots would be \\(\\Delta\\mathrm{t}_{\\mathrm{pulse}}\\)/SQRT(0.05 \\(\\times\\) 512) \\(\\sim\\) 2000 ps/5 = 400 ps (FWHM) in time. Assuming a relation of \\(\\sigma=\\mathrm{FWHM}\\)/2.35 (accurate for a Gaussian pulse), a sigma value of 170 ps is achieved for the precision. This corresponds to a distance precision of 2.55 cm (170 ps/67 ps) which is quite close to the measured precision of 2.8 cm shown in Fig. 15. For 8192 laser shots the precision should be 4 times better, i.e., 100 ps/1.5 cm (FWHM) in time/distance, which improvement is also seen in the result of Fig. 15. ### _Outdoor Measurements_ In order to demonstrate the effect of gating on the quality of the 3-D range images, some outdoor tests were also performed, the test environment for which is shown in Fig. 16. The weather during the measurements was windy with 50% cloud cover. 
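The single-point precision estimate worked through above for the indoor data can be reproduced in a few lines; the sketch assumes a Gaussian pulse so that \(\sigma=\mathrm{FWHM}/2.35\), and uses the same ~2 ns pulse width and ~5% detection rate.

```python
# Sketch of the single-point precision estimate used in the indoor analysis above.
import math

C = 299_792_458.0

def precision_sigma_m(pulse_fwhm_s: float, det_rate: float, n_shots: int) -> float:
    fwhm_t = pulse_fwhm_s / math.sqrt(det_rate * n_shots)  # FWHM timing precision
    sigma_t = fwhm_t / 2.35                                 # Gaussian pulse assumed
    return 0.5 * C * sigma_t                                # distance precision

print(precision_sigma_m(2e-9, 0.05, 512))   # ~0.025 m, vs. measured ~2.8 cm
print(precision_sigma_m(2e-9, 0.05, 8192))  # ~0.006 m
```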
The background illumination level measured at the surface of the target wall (gray concrete) varied between 10 klux and 60 klux. In the first measurement the distance to the wall was \\(\\sim\\)13 m. As explained in Section III-B, the SPADs of the receiver array can be activated at any time after the laser pulse with a resolution of 5 ns. This technique reduces the exponential attenuation, thus improving the measurement performance. In the measurement situation shown in Fig. 16, for example, with \\(\\sim\\)20 klux background illumination, 3-D range imaging was not successful without gating. When the SPADs were activated about 5 m before the target location, however, the measurement was already operative for most of the image points, especially at the center of the image area. The measured point cloud is shown in Fig. 17. In this measurement 16 000 laser shots were used for each of the image points and thus the equivalent frame rate with a driver rate of 256 kHz was 1 fps. The raw histogram data for a few points on row 16 are shown in Fig. 18. The exponential dependence of the background hits is clearly seen due to the relatively high background illumination level. The time constant \\(\\tau_{\\mathrm{BG}}\\) can be evaluated to be \\(\\sim\\)20 ns which corresponds to background illumination of \\(\\sim\\)20 klux in this particular measurement situation. The position of the SPAD activation is at \\(\\sim\\)8 m. In this raw data the positions of the target points at \\(\\sim\\)13 m are barely noticeable. Fig. 16: Photograph of the target scene in the outdoors measurements. Fig. 17: 3-D range image from the scene at a distance of \\(\\sim\\)13 m (Fig. 16) measured with 16 000 laser shots per pixel, eq. frame rate 1 fps with gating of the SPADs 5 m before the target surface. Background illumination level \\(\\sim\\)20 klux. Fig. 15: Point-wise error and their distributions for the line profile shown in Fig. 13 for the part of the back wall on the right side of the corner. The upper boxes are for 512 laser shots and the lower ones for 8192 laser shots. For outdoor measurements the signal processing included compensation of the exponential attenuation (normalization in Fig. 19, not necessary in indoors measurements, see [40, 41], and matched filtering (presented as post-processed intensity in Fig. 19). As an example of this processing, the corresponding histograms for one of the points in the point cloud image (row 16, column 96) are shown in Fig. 19. The SN\\({}_{\\rm BG}\\)R estimated from Fig. 19 at the target distance is 5 10 which is in line with the prediction based on (1). At a slightly lower background illumination intensity (\\(\\sim\\)12 klux) and with tighter gating (\\(\\sim\\)20 ns before the target surface, equivalent to 3 m), the image quality was improved, as shown in Fig. 20. It is also evident that measurement is not successful for one of the laser diode elements even in this case (rows 3 and 4). ## V Discussion and Summary The full realization of a solid-state 3-D range imager based on the pulsed time-of-flight method has been described above. The system employs block-based segmented illumination in the transmitter and a single chip 2-D SPAD/TDC 0.35 um CMOS array with 32 \\(\\times\\) 128 SPAD pixels and 257 TDCs in the receiver. The illuminator is constructed using a custom-developed common-anode LD bar which is driven with GaN FETs working in such a way that any of them can be separately addressed. 
Thus, the illuminator works effectively as a solid-state scanner, i.e., it scans the system FOV in blocks without any mechanically moving parts. The segmentation of the illumination results in a higher SN\\({}_{\\rm BG}\\)R ratio in the detection and a simpler receiver realization (in terms of the number of TDCs needed) than with the typically used flood-illumination approach, assuming equal average illumination power. A photon detection probability of \\(\\ll 1\\) and a background-noise-limited measurement are assumed in this comparison. Certain design considerations with regard to gating of the SPAD detectors and optimization of the receiver aperture were also discussed. The improvement in performance is proportional to the square root of the number of blocks used.

In this particular realization a custom-designed common-anode laser diode bar based on QW laser diodes working in the enhanced gain switching regime was used as the laser transmitter, although when working with a ns pulse regime for a longer range, an array based on standard double heterostructure (DH) pulsed laser diodes would be a possibility, and in view of the lower threshold current, an even better choice. Another option could be a VCSEL (vertical-cavity surface-emitting laser) diode array with addressable sub-array blocks. This approach might allow for a large number of illumination sub-blocks and simpler realization of the transmitter optics, since the dimensions of the VCSEL array and its sub-blocks can be scaled to correspond to the desired illumination pattern. Another VCSEL transmitter approach would be to use the unit laser elements of the VCSEL array to directly define the spatial resolution of the measurement (rather than the SPAD elements of the receiver array) [42].

It was shown here that the system is capable of producing relatively accurate 3-D range image results (at the level of a few cm) with 32 \\(\\times\\) 128 pixels (FOV 40\\({}^{\\circ}\\times\\) 10\\({}^{\\circ}\\)) up to a range of 15 m in normal office lighting at a frame rate of \\(\\sim\\)30 fps using a low average illumination power of only 2.6 mW. In fact, 3-D range images were still captured at a distance of \\(\\sim\\)13 m in relatively high background illumination of \\(\\sim\\)20 klux using the gated SPAD approach, although at a lower frame rate of \\(\\sim\\)1 fps.

Fig. 18: Raw histogram data from a few points on row 16 in the measurement situation shown in Fig. 16. Fig. 19: Raw counts, normalized counts and post-processed intensity histogram for one of the points (row 16, column 96) in the measurement situation of Fig. 16. Fig. 20: 3-D range image of the scene in Fig. 16 measured with 16 000 laser shots per pixel, eq. frame rate 1 fps and with gating of the SPADs 3 m before the target surface. Background illumination level \\(\\sim\\)12 klux.

These results compare quite well with those of a very recent state-of-the-art study that demonstrates 3-D range images obtained at a frame rate of 4 fps at a distance of \\(\\sim\\)13 m with a VCSEL-based flood illuminator working at an average illumination power of 90 mW in a pulsed time-of-flight setup. The two-tier CMOS receiver (FOV 63\\({}^{\\circ}\\)\\(\\times\\) 41\\({}^{\\circ}\\)) includes 240 \\(\\times\\) 160 SPAD elements in the top tier.
These are divided into sub-groups of 8 \\(\\times\\) 8 SPAD elements which all are served by a time-to-digital converter and digital processor located on the lower tier, thus the total number of on-chip TDCs (and the maximum number of detections per emitted laser pulse) is 600 [39]. The measurement distance achieved in moderately high background illumination of \\(\\sim\\)10 klux was 4 m at a frame rate of 20 fps [39]. A more detailed comparison of this work and the study referred above and another development within this field is given in Table I. Another interesting comparison can be made with a recent study that uses a similar illumination concept realized with an on-chip thermo-optic switching tree and a focal plane grating-based transmission array in an FMCW (frequency modulated continuous wave) lidar configuration using a narrow-band fixed-frequency 1550 nm laser in CW mode frequency modulated with a Mach-Zehnder modulator [28]. The study demonstrates distance measurement with mm-level precision in a measurement range of several tens of meters with 32 \\(\\times\\) 16 pixels, although within a small field of view of \\(\\sim\\)2\\({}^{\\circ}\\)\\(\\times\\) 2\\({}^{\\circ}\\). The added advantage of the FMCW lidar is that in addition to the target distance, its velocity can also be analyzed from the measured beat frequency. The average illumination power used was 4 mW and the (estimated) equivalent frame rate \\(\\sim\\)9 fps. The small FOV allows for a long focal length and thus a relatively large aperture of 25 mm. It is clear from Section II that the use of a much smaller system FOV in the work described in this paper would have improved the performance considerably. Practical applications, however, typically require a FOV in the range considered here or even larger. One obvious way to improve the performance of the technology developed, especially the maximum range in a harsh background environment, is to use short ns-scale or even sub-ns laser pulses with higher peak power and pulse energy, and to increase the number of sub-blocks in the illuminator. The limitations here arise from the challenge in constructing high speed and high current drivers, since each of the blocks should be driven with \\(>\\)10 A ns-range current pulses. With a standard broad-stripe DH laser diode a pulse energy of 50 100 nJ seems feasible with 1 2 ns pulse widths. Some improvement can be achieved by increasing the pulsing rate to the driver, however, by considering the allowed maximum average optical illumination power from the point of view of eye safety, the increase in the power consumption of the transmitter and related cooling issues, and also the increase of the I/O load of the interfaces. For example, a pulsing rate of \\(\\sim\\)1 MHz should be possible for the transmitter used in this work without any severe cooling issues. With a VCSEL-based approach the number of illumination blocks could conveniently be higher than with separate laser diode elements or even with a laser bar structure. From the receiver design point of view, the rate of background illumination induced random hits (and thus noise and blocking) can be effectively reduced by reducing the field of view for the SPAD unit element. This can be achieved either by increasing the focal length of the receiver optics and/or by decreasing the area of the unit element. To retain the system FOV, however, the number of pixels has to be increased, which then partly counteracts the improvement in SN\\({}_{\\rm BG}\\)R. 
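To make the blocking and gating arguments of this and the preceding paragraphs concrete, the sketch below uses the exponential model visible in the outdoor raw histograms (Fig. 18): a SPAD opened a given distance before the target is still armed at the arrival of the signal photon with probability \\(\\exp(-t/\\tau_{\\mathrm{BG}})\\). The scaling of \\(\\tau_{\\mathrm{BG}}\\) with the unit-element FOV is our own simplifying assumption (background rate taken proportional to the collected solid angle), added only for illustration.

```python
import math

C = 3e8  # speed of light (m/s)

def armed_probability(gate_lead_m, tau_bg_ns):
    """Probability that a SPAD activated gate_lead_m before the target is still
    armed when the signal arrives, assuming exponential background triggering
    with time constant tau_bg_ns (as observed in the outdoor histograms)."""
    t_ns = 2 * gate_lead_m / C * 1e9   # two-way flight time over the lead distance
    return math.exp(-t_ns / tau_bg_ns)

tau = 20.0  # ns, the value evaluated in the text for ~20 klux background
for lead_m in (5.0, 3.0):
    print(f"gate {lead_m:.0f} m before target: armed prob. {armed_probability(lead_m, tau):.2f}")
# 5 m lead (~33 ns): ~0.19;  3 m lead (~20 ns): ~0.37 -> tighter gating helps markedly

# Assumed scaling: halving the unit-element FOV halves the background photon rate,
# doubling tau_bg, at the cost of more pixels for a fixed system FOV.
print(f"3 m lead, halved pixel FOV: {armed_probability(3.0, 2 * tau):.2f}")  # ~0.61
```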
One other possible approach for overcoming this is to decouple the system and unit element FOVs by means of a segmented matrix-type transmitter. Thus, the block-based VCSEL illuminator offers an interesting approach and certainly deserves further studies. Another option to reduce the blocking in the receiver is to construct the image pixel from several free-running SPADs (using the same equivalent pixel size), which would work independently from each other and whose results are summed up to get the macro-pixel histogram. Preliminary system level simulations show that a SPAD dead time corresponding to the level of maximum range (e.g., 100 ns in 10 m range) with 4 sub-pixels, for example, would reduce quite efficiently the blocking effect. In this approach, the use of several sub-pixels would allow the longer dead time for the SPAD, which would simplify the design of the receiver. To conclude, the current approach seems to pave the way for the development of fully solid-state 3-D range imagers, which can be realized in relatively small size, e.g., for machine guidance and control in the construction industry, forestry and agriculture. The technology should be able to give a range of up to several tens of meters with \\(\\sim\\)10 k spatial resolution, a relatively wide system FOV, cm-range precision (with ns-scale optical pulses) and a frame rate of 10 30 fps under harsh background illumination conditions. Operation in bright sunlight (\\(\\sim\\)100 klux) is certainly a challenge, as for any other SPAD-based lidar approach for that matter but could at least partly be alleviated by advanced gating or freely running SPAD element approaches, for example. In general, more research in the field of advanced illumination techniques is needed, since this topic is definitely a critical bottleneck in our research into SPAD based pulsed TOF 3-D range imaging. ## Acknowledgment Jari-Pekka Nousiainen and Mika Aikio have contributed to the design of the PCB layout and the optics of the transmitter, respectively. Boris S. Ryvkin and Eugene A. Avrutin have contributed to the design of the laser diode structure. All the contributors are gratefully acknowledged. ## References * [1] P. F. McManamon, _Lidar Technologies and Systems_, USA: SPIE Press, 2019, pp. 504. * [2] G. M. Williams, \"Optimization of eye-safe avalanche photodiode lidar for automobile safety and autonomous navigation systems,\" _Opt. Eng._, vol. 56, no. 3, Mar. 2017, Art. no. 031224. * [3] J. Kidd, \"Performance evaluation of the velodyne VLP-16 system for surface feature surveying,\" MSc thesis, Univ. New Hampshire, 2017. * [4] B. Schwarz, \"Mapping the world in 3D,\" _Nat. Photon._, vol. 4, no. 7, pp. 429-430, Jul. 2010. * [5] V. C. Coffey, \"Imaging in 3-D: Killer apps coming soon to a device near you!,\" _Opt. Photon. News_, vol. 25, no. 6, pp. 36-43, 2014. * [6] R. Lange and P. Seitz, \"Solid-state time-of-flight range camera,\" _IEEE J. Quantum Electron._, vol. 37, no. 3, pp. 390-397, Mar. 2001. * [7] T. Oggier _et al._, \"An all-solid-state optical range camera for 3D real-time imaging with sub-entireter depth resolution (SwissRanger),\" in _Proc. SPIE Opt. Des. Eng._, vol. 5249, St. Etienne, France, 2004, pp. 534-545. * [8] C. S. Bamji _et al._, \"A 0.13 um CMOS system-on-chip for a 512 \\(\\times\\) 424 time-of-flight image sensor with multi-frequency photo-demodulation up to 130 MHz and 2 GS/s ADC,\" _IEEE J. Solid State Circuits_, vol. 50, no. 1, pp. 303-319, Nov. 2015. * [9] A. P. P. Jongenelen, D. A. Carnegie, A. D. 
Payne, and A. A. Dorrington, \"Maximizing precision over extended unambiguous range for TOF range imaging systems,\" in _Proc. IEEE Instrum. Meas. Technol. Conf._, Austin, TX, USA, 2010, pp. 1575-1580. * [10] C. V. Poulton _et al._, \"Long-range LiDAR and free-space data communication with high-performance optical phased arrays,\" _IEEE J. Sel. Top. Quantum Electron._, vol. 25, no. 5, Sep./Oct. 2019, Art. no. 7700108. * [11] S. A. Miller _et al._, \"512-element actively steered silicon phased array for low-power LIDAR,\" in _Proc. Conf. Lasers Electro-Opt._, San Jose, CA, USA, 2018, pp. 1-2. * [12] M. A. Albota _et al._, \"Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser,\" _Appl. Opt._, vol. 41, pp. 7671-7677, Dec. 2002. * [13] C. Niclass, A. Rochas, P. Besse, and E. Charbon, \"Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes,\" _IEEE J. Solid-State Circuits_, vol. 40, no. 9, pp. 1847-1854, Sep. 2005. * [14] M. Perenzoni, D. Perenzoni, and D. Stoppa, \"A 64 x 64-Pixels digital silicon photomultiplier direct TOF sensor with 100-MPtronds/pixel background rejection and imaging/altimer mode with 0.14% precision up to 6 km for spacecraft navigation and landing,\" _IEEE J. Solid-State Circuits_, vol. 52, no. 1, pp. 151-160, Jan. 2017. * [15] S. W. Hutchings _et al._, \"A reconfigurable 3-D-Stacked SPAD imager with in-pixel histogramming for flash LIDAR or high-speed time-of-flight imaging,\" _IEEE J. Solid-State Circuits_, vol. 54, no. 11, pp. 2947-2956, Sep. 2019. * [16] H. Ruckamo, L. W. Hallman, and J. Kostamovaara, \"An 80x25 pixel CMOS single-photon sensor with flexible on-chip time grating of 40 subarrays for solid-state 3-D range imaging,\" _IEEE J. Solid-State Circuits_, vol. 54, no. 2, pp. 501-510, Nov. 2018. * [17] D. Bronzi, Y. Zou, F. A. Villa, S. Tisa, A. Tosi, and F. Zappa, \"Automotive three-dimensional vision through a single-photon counting SPAD camera,\" _IEEE Trans. Intell. Transp. Syst._, vol. 17, no. 3, pp. 782-795, Oct. 2015. * [18] F. Mattioli Della Rocca _et al._, \"A 128 \\(\\times\\) 128 SPAD motion-triggered time-of-flight image sensor with in-pixel histogram and column-parallel vision processor,\" _IEEE J. Solid-State Circuits_, vol. 55, no. 7, pp. 1762-1775, Jul. 2020. * [19] A. Ronchini Ximenes, P. Padmanabhan, M. Lee, Y. Yamashita, D. Yaung, and E. Charbon, \"A modular, direct time-of-flight depth sensor in 45/65-nm 3-D-Stacked CMOS technology,\" _IEEE J. Solid-State Circuits_, vol. 54, no. 11, pp. 3203-3214, Nov. 2019. * [20] A. Rochas _et al._, \"Low-noise silicon avalanche photodiodes fabricated in conventional CMOS technologies,\" _IEEE Trans. Electron. Devices_, vol. 49, no. 3, pp. 387-394, 2002. * 37th Eur. Solid-State Device Res. Conf._, Munich, Germany, 2007, pp. 362-365. * [22] M. Perenzoni, L. Pancheri, and D. Stoppa, \"Compact SPAD-based pixel architectures for time-resolved image sensors,\" _Sensors_, vol. 16, no. 5, May 2016, Art. no. 745, doi: 10.3390/s16050745. * [23] J. Kostamovaara _et al._, \"On laser ranging based on high-speed/energy laser diode pulses and single-photon detection techniques,\" _IEEE Photon. J._, vol. 7, no. 2, Apr. 2015, Art. no. 7800215. * [24] J. Kostamovaara, S. Jahroni, and P. Keranen, \"Temporal and spatial focusing in SPAD-based solid-state pulsed time-of-flight laser range imaging,\" _Sensors_, vol. 20, no. 21, Oct. 2020, Art. no. 5973. * [25] S. 
Jahroni _et al._, \"A 32 \\(\\times\\) 128 SPAD-257 TDC receiver IC for pulsed soft-state 3-D imaging,\" _IEEE J. Solid-State Circuits_, vol. 55, no. 7, pp. 1960-1970, Jul. 2020. * [26] S. Jahroni, J.-P. Jansson, P. Keranen, E. A. Avrutin, B. S. Ryvkin, and J. T. Kostamovaara, \"Solid-state block-based pulsed laser illuminator for single-photon avalanche diode detection based time-of-flight 3D image,\" _Opt. Eng._, vol. 60, no. 5, May 2021, Art. no. 054105. * [27] C. Niclass _et al._, \"Design and characterization of a 256x46-pixel single-photon linear in CMOS for a MEMS-based laser scanning time-of-flight sensor,\" _Opt. Exp._, vol. 20, no. 11, pp. 11863-11881, May 2012. * [28] C. Rogers _et al._, \"A universal 3D imaging sensor on a silicon photonics platform,\" _Nature_, vol. 590, pp. 256-261 Feb. 2021. * [29] G. Fouche, \"Detection and false-alarm probabilities for laser radars that use geiger-mode detectors,\" _Appl. Opt._, vol. 42, no. 27, pp. 5388-5398, 2003. * [30] M. Henriksson, \"Detection probabilities for photon-counting avalanche photodiodes applied to a laser radar system,\" _Appl. Opt._, vol. 44, no. 24, pp. 5140-5147, 2005. * [31] P. Keranen and J. Kostamovaara, \"256\\(\\times\\)8 SPAD array with 256 column TDCs for a line profiling laser radar,\" _IEEE Trans. Circuits Syst. I Regular Papers_, vol. 66, no. 11, pp. 4122-4133, Nov. 2019. * [32] B. S. Ryvkin, E. A. Avrutin, and J. Kostamovaara, \"Asymmetric-waveguide laser diode for high-power optical pulse generation by gain switching,\" _J. Lightw. Technol._, vol. 27, no. 12, pp. 2125-2131, Jun. 2009. * [33] J. Huikari, J. Nissinen, B. Ryvkin, E. Avrutin, and J. T. Kostamovaara, \"High-energy picosecond pulse generation by gain switching in asymmetric waveguide structure multiple quantum well lasers,\" _J. Sel. Top. Quantum Electron._, vol. 21, no. 6, pp. 189-194, Mar. 2015. * [34] L. W. Hallman, J. Huikari, and J. Kostamovaara, \"A high-speed/power laser transmitter for single photon imaging applications,\" _Sensors_, pp. 1157-1160, Dec. 2014, doi: 10.1109/ICSENS.2014.6985213. * [35] D. Bimberg, K. Ketterer, E. H. Bottcher, and E. Scoll, \"Gain modulation of unbiased semiconductor lasers: Ultrashort pulse generation,\" _Int. J. Electron._, vol. 60, no. 23, pp. 23-45, 1986. * [36] P. Vasil'ev, \"Ultrafast diode lasers: Fundamentals and applications. Boston/London,\" 1995, Artech House, Inc. * [37] S. Pellegrini, G. Buller, J. Smith, A. Wallace, and S. Cova, \"Laser-based distance measurement using picosecond resolution time-correlated single-photon counting,\" _Moss. Sci. Technol._, vol. 11, pp. 712-716, 2000. * [38] S. Jahroni, J.-P. Jansson, and J. Kostamovaara, \"Solid-state 3D imaging using a 1 nJ/100 ps laser diode transmitter and a single photon receiver matrix,\" _Opt. Exp._, vol. 24, pp. 21619-21632, Sep. 2016. * [39] C. Zhang _et al._, \"A 240 x 160 3D stacked SPAD dToF image sensor with rolling shutter and in pixel histogram for mobile devices,\" _IEEE Open J. Solid-State Circuits Soc._, early access, doi: 10.1109/OISSSC.2013.181332. * [40]
_Abstract_: Full realization of a solid-state 3-D range imager based on the pulsed time-of-flight method is presented. The system uses block-based segmented illumination in the transmitter realized with a 16-element common-anode laser diode bar. The receiver is based on a single-chip 2-D SPAD/TDC 0.35 \\(\\mu\\)m CMOS array with 32 \\(\\times\\) 128 SPAD pixels and 257 TDCs. Segmentation of the illumination improves the SN\\({}_{\\rm BG}\\)R in the detection and results in a simpler receiver realization than in the commonly used flood-illumination approach. The system is capable of producing cm-accurate 3-D range images within a FOV of 40\\({}^{\\circ}\\)\\(\\times\\) 10\\({}^{\\circ}\\) up to a range of 15 m in normal office lighting at a frame rate of \\(\\sim\\)30 fps using a low average illumination power of only 2.6 mW. In background illumination of \\(\\sim\\)20 klux, 3-D range images were captured at a distance of \\(\\sim\\)13 m using the gated SPAD approach at a frame rate of \\(\\sim\\)1 fps.

_Index Terms_: Laser radar, lidar, range imaging, single photon detection.
# Design of a Low-Crosstalk Sub-Wavelength-Pitch Silicon Waveguide Array for Optical Phased Array Guangzhu Zhou, Shi-Wei Qu\\({}^{\\copyright}\\),, Jieyun Wu, and Shiwen Yang\\({}^{\\copyright}\\), Manuscript received June 16, 2021; revised July 24, 2021; accepted August 4, 2021. Date of publication August 12, 2021; date of current version August 30, 2021. This work was supported in part by the National Natural Science Foundation of China under Grant U20A20165 and in part by the Fundamental Research Funds for the Central Universities under Grant ZYGX2019Z005. (_Corresponding author: Shi-Wei Qu._) Guangzhu Zhou, Shi-Wei Qu, and Shiwen Yang are with the School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: [email protected]; [email protected]; [email protected]). Jieyun Wu is with the School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China (e-mail: [email protected]). Digital Object Identifier 10.1109/JPHOT.2021.3103392 ## I Introduction Due to the rapid development of photonic integrated circuits (PICs), on-chip optical phased arrays (OPAs) have attracted much attention for their rapid and precise beam steering without mechanical actuators. OPAs have seen extensive applications in light detection and ranging (LiDAR) [1, 2], imaging [3, 4], holographic displays [5], and free-space communications [6]. Particularly, integrated silicon photonics platform which is compatible with CMOS fabrication processes enables large-scale chip-based OPAs with significantly low loss and reduced cost [7]. Beam forming and beam steering in chip-based OPAs are achieved by independently controlling the amplitude and phase of elements of an optical antenna array, typically formed by 1-D waveguide grating arrays with uniform emitter spacing. Ideally, aliasing-free beam scanning can be ensured by a half wavelength antenna element pitch [8]. However, strong crosstalk between waveguides occurs when the element pitch in arrays is below one wavelength, preventing the independent phase control over each antenna elements. Although the crosstalk can be mitigated by sacrificing the longitudinal OPA aperture size, i.e., the effective length along the light propagation direction. However, the low resolution caused by the small aperture size is highly unfavorable for many practical applications like LiDAR, automotive systems, in which a fine resolution is always pursued. As a result, the typical element pitch in current OPAs is several micrometers, giving rise to small aliasing-free beam steering ranges [9, 10, 11]. Therefore, the tradeoff between the steering range and the element pitch has posed a limitation to the development of high-performance OPAs. To deal with this circumstance, several methods have been investigated which can be categorized into different types depending on the design strategies used. The first kind is the non-uniform-pitch array designs [12, 13], which is widely adopted in microwave phased arrays. For instance, a sparse optical phased array with a scanning range of 50\\({}^{\\circ}\\) is achieved in [12], where the minimum element pitch is 4\\(\\lambda\\) to avoid crosstalk between waveguides and the position of each element is optimized to suppress the grating lobes caused by the large element spacing. 
Two major drawbacks of the sparse array are the higher side lobe level and the reduced power in main lobe as low as 2%, which greatly reduces the available detection range of the OPAs. Besides, many efforts have been devoted into suppressing the crosstalk between waveguides by introducing phase mismatch between waveguides. In [14, 15], subarrays composed of strip waveguides with different sizes are utilized to suppress the waveguide crosstalk, achieving an ideal half-wavelength element spacing. Also, another type of half-wavelength-pitch silicon waveguide array with low crosstalk is proposed in [16]. Two thin silicon strips with same width are asymmetrically placed between waveguides, thus introducing a strong phase mismatch between two adjacent waveguides. In that design, the crosstalk between waveguides remains below -18 dB even for the array length of 1 mm. Furthermore, recently, a low-crosstalk sinusoidal silicon waveguide array has been put forward [17]. The low crosstalk is achieved by manipulating the difference of super-mode propagation constants of the sinusoidal waveguide. However, there are several drawbacks in the existing designs. Firstly, the phase mismatches are introduced by breaking the symmetry of the structures. It means that wavelength scanning is not available in these designs. In addition, a gap size of 27.5 nm in [16] is very challenging for fabrication. On the other hand, for high-resolution LiDAR applications, OPAs with ultra-small beam divergence is always pursued. However, due to strong radiation of silicon-based waveguide grating antennas, it is very challenging for such silicon waveguide arrays to achieve narrow beam width. It has been demonstrated that silicon waveguide grating antenna with a 10 nm feature size can merely provide 1 mm propagation length [18]. There are several reports on small beam-divergence OPAs. For example, low-contrast silicon nitride gratings are deposited on silicon waveguide to achieve a relatively low emission rate which in turn gives rise to large aperture and small beam divergence [19, 20, 21]. In addition, a surface-emitting silicon waveguide antenna with a radiation length of 3.65 mm has been reported [22], in which periodic radiative segments are placed on both sides of a subwavelength metamaterial waveguide to form weak radiating grating. Nevertheless, the coupling between antennas is not considered in that design. In this paper, we propose a compact sub-wavelength-pitch silicon waveguide array, where one-dimensional periodic blocks are symmetrically arranged along silicon strip waveguides. The periodic structures are the key to achieve low crosstalk simultaneously millimeter-long radiation length. The designs presented in this paper are based on a silicon-on-insulator (SOI) platform and are compatible with CMOS technology. Simulation results show that the crosstalk between waveguides in the proposed design is at least 10 dB lower than the referenced traditional waveguide array with identical sizes within the 1500 \\(\\sim\\) 1590 nm bandwidth. Moreover, the one-dimensional periodic structure is introduced without breaking structure symmetry. Hence, the proposed design is capable of wavelength scanning. For the waveguide array with an element pitch of 1 \\(\\mu\\)m in this design, overall aliasing-free beam steering ranges of 100\\({}^{\\circ}\\) can be achieved with the phase modulation method and 19.3\\({}^{\\circ}\\) beam steering ranges via wavelength tuning by 113 nm is numerically demonstrated. 
Furthermore, the proposed design achieves an effective radiation length up to 1.47 mm, corresponding to a theoretical narrow beam width of 0.052\\({}^{\\circ}\\). Therefore, our design offers a promising platform for realization of two-dimensional scanning optical phased array with a large field of view and a narrow beam width. ## II Design and Result ### _Crosstalk Reduction_ The schematic views of the proposed structure are shown in Figs. 1(a) and 1(c). To illustrate the operating principle, the crosstalk of the proposed two-waveguide system is investigated and compared to that of conventional design shown in Figs. 1(b) and 1(d) as a reference. The insets in (a) and (c) show the cross-section of the proposed two-waveguide system and the referenced one, respectively. In our design, one-dimensional (1-D) periodic nano-blocks with a period \\(\\Lambda\\) are symmetrically placed along the two strip waveguides. The silicon nano-blocks are with a width \\(b\\) and a length \\(a\\). The two strip waveguides, with a center-to-center distance \\(D\\), have the same width \\(W\\) and height \\(H\\). The periodic silicon nano-blocks and the silicon strip waveguides are separated by a gap \\(G\\). According to the coupled mode theory, for the reference conventional two-waveguide system, symmetric and antisymmetric modes with the propagation constants of \\(\\beta_{+}\\) and \\(\\beta_{-}\\) will be simultaneously excited when light is launched from one waveguide. When the two modes propagate along the waveguides, if the accumulated phase difference between them reaches \\(\\pi\\), the energy in the input waveguide will be transferred into the other one to the maximum extent. The crosstalk can be expressed as [16]: \\[\\frac{P_{1\\to 2}}{P_{1}}=F\\mathrm{sin}^{2}\\left(\\frac{\\pi L}{2L_{C}( \\lambda)}\\right) \\tag{1}\\] where \\(P_{t}\\) and \\(P_{t\\to 2}\\) are the input power in the input waveguide and the transferred power from the input waveguide to the other one, respectively. \\(F\\), \\(L\\) and \\(L_{C}\\) represent the maximum crosstalk, the propagation length and the coupling length, respectively. Generally, \\(F=1\\) is reached when the propagating length \\(L\\) equals the coupling length \\(L_{C}\\), and it always holds for the referenced lossless two-waveguide system shown in Figs. 1(c) and 1(d). It means the energy in the input waveguide is completely transferred to the other. In addition, the material dispersion or wavelength dependence may result in the difference in coupling length \\(L_{C}\\) of traditional two-waveguide system, but the maximum crosstalk \\(F\\) will not be changed. On the other hand, according to the total internal reflection mechanism of the optical dielectric waveguide, the dispersion relation can be described as [23]: \\[\\frac{\\left(k_{y}\\right)^{2}}{\\varepsilon_{x}}+\\frac{\\left(k_{x}\\right)^{2}}{ \\varepsilon_{y}}=\\left(k_{0}\\right)^{2} \\tag{2}\\] where \\(k_{0}\\) is free-space wavenumber, \\(k_{y}\\) and \\(k_{x}\\) are the wavenumber along the longitudinal direction and the transverse direction of the waveguide in our case. \\(\\varepsilon_{x}\\) and \\(\\varepsilon_{y}\\) represent the anisotropic dielectric constant along the \\(x\\)- and \\(y\\)-axis, respectively. Particularly, it is the evanescent waves decaying along waveguide-transverse direction that directly contribute to the coupling between waveguides. 
Therefore, it has been demonstrated that the coupling length \\(L_{C}\\) can be effectively extended by utilizing highly anisotropic photonic metamaterials to tailor the evanescent waves [24, 25, 26]. In our design, the 1-D periodic silicon nano-blocks are utilized as high-reflection boundaries to harness the evanescent waves of the silicon strip waveguides. The parameters of the silicon nano-blocks are properly designed to make the 1-D periodic structures work in their resonant regions, so that the energy of the evanescent waves will be truncated by the periodic structures. As a result, low crosstalk between the waveguides can be achieved. Accordingly, the crosstalk of the proposed design is rewritten as follows: \\[\\frac{P_{1\\to 2}}{P_{1}}=F\\left(\\lambda\\right)\\sin^{2}\\left(\\frac{\\pi L}{2L_{C}(\\lambda)}\\right) \\tag{3}\\] It means the maximum crosstalk \\(F\\) becomes a function of wavelength resulting from the resonant characteristics of the 1-D periodic structures.

Fig. 1: 3D schematic views of (a) the proposed structure and (b) the referenced conventional two-waveguide system. Top views of (c) the proposed structure and (d) the referenced conventional two-waveguide system. The insets in (a) and (c) show the cross-sections of the proposed two-waveguide system and the referenced one.

To validate the effectiveness of our design, a pair of low-crosstalk waveguides with \\(D=1\\)\\(\\mu\\)m are designed. The length \\(a\\), width \\(b\\) and period \\(\\Lambda\\) of the silicon nano-blocks are 300 nm, 280 nm, and 800 nm, respectively. Their overall height \\(H\\) is optimized to be 330 nm, and the width \\(W\\) is set as 400 nm for transverse-electric (TE) single-mode operation. Therefore, in this design, a relatively large feature size \\(G=160\\) nm is reserved for ease of fabrication. Although a nonstandard silicon thickness of 330 nm is employed in the calculations to demonstrate the crosstalk reduction, the same approach to minimizing crosstalk can be applied on a standard 300 nm platform and a commercial 340 nm platform (see the related discussion of the overall height \\(H\\) in Fig. 10(c)). The proposed two-waveguide systems with propagation lengths of 100 \\(\\mu\\)m and 200 \\(\\mu\\)m are simulated, respectively. Besides, the referenced two-waveguide system with identical waveguide sizes is also simulated for comparison purposes.

The simulations are carried out using the software package Empire 8.04, which is based on the finite-difference time-domain (FDTD) method. In the simulations, PML absorption boundaries are applied to absorb outgoing waves and simulate the open space. Wave ports are added to excite the fundamental TE mode of the silicon strip waveguide and calculate the transmission or crosstalk in arrays. The refractive indices of SiO\\({}_{2}\\) and Si are 1.444 and 3.48 in these calculations, respectively. The cladding of the arrays presented in this paper is air. In the calculations, a TE-polarized wave is launched into the single-mode waveguide with a width and height of 400 nm and 330 nm, respectively. After traveling through a short transmission region, it propagates into the radiation region, where the 1-D periodic structures are arranged on both sides of the waveguides to radiate the energy into free space and simultaneously to suppress the crosstalk between waveguides.
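Before examining the simulated curves, the role of the wavelength-dependent maximum crosstalk \\(F(\\lambda)\\) in (1) and (3) can be visualized with a minimal sketch. The values of the coupling length and of \\(F\\) below are illustrative placeholders only; they are not fitted to the FDTD results.

```python
import numpy as np

def crosstalk(length_um, coupling_length_um, f_max):
    """Coupled power fraction P_(1->2)/P_1 according to Eq. (1)/(3)."""
    return f_max * np.sin(np.pi * length_um / (2 * coupling_length_um)) ** 2

lengths = np.linspace(0.0, 200.0, 401)        # propagation length (um)
conventional = crosstalk(lengths, 60.0, 1.0)  # F = 1: full power transfer possible
proposed = crosstalk(lengths, 60.0, 0.01)     # resonant blocks cap F (placeholder value)

# Whatever the propagation length, the coupled power never exceeds F:
print(10 * np.log10(conventional.max() / proposed.max()))  # 20 dB suppression here
```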
Fig. 2(a) shows the crosstalk between waveguides for both the referenced two-waveguide system and the proposed design. As indicated by the dashed curves in Fig. 2(a), for the conventional design, the crosstalk between waveguides varies linearly with wavelength. As a comparison, in our design, as indicated by the solid curves in Fig. 2(a), the crosstalk is greatly suppressed when introducing the 1-D periodic structures, even for such a long propagation length of 200 \\(\\mu\\)m (about 129\\(\\lambda\\)). To show the coupling suppression effect more intuitively, the coupling suppression ratio (CSR), defined as the coupled energy ratio between the proposed design and the referenced one under the same propagation length, is shown in Fig. 2(b). It can be seen that within the 1500–1620 nm wavelength range, the crosstalk of the proposed design is lower than that of the referenced one. The maximal CSR reaches up to 25 dB at the wavelength of 1.57 \\(\\mu\\)m. Ideally, the CSR does not vary with the propagating length because of the intrinsic resonant characteristics of the periodic structures. However, the crosstalk suppression becomes better with increasing propagation length due to the weak grating radiation formed by the periodically arranged silicon nano-blocks. This phenomenon is verified in Fig. 2(b), where the CSRs for the two waveguide lengths of 100 \\(\\mu\\)m and 200 \\(\\mu\\)m are illustrated, respectively. The corresponding normalized \\(E\\)-field distributions of the referenced two-waveguide system and the proposed design at the wavelength of 1.55 \\(\\mu\\)m are depicted in Figs. 2(c) and 2(d), respectively.

Fig. 2: With different waveguide lengths of 100 \\(\\mu\\)m and 200 \\(\\mu\\)m, (a) the crosstalk of the proposed design and the referenced two-waveguide system, and (b) the corresponding CSR. The normalized \\(E\\)-field distributions of (c) the referenced two-waveguide system, and (d) the proposed design. Fig. 3: Top views of (a) the proposed waveguide array, and (b) the referenced waveguide array. The input ports are uniformly numbered from A\\({}_{1}\\) to A\\({}_{9}\\), and all output ports in the two arrays are numbered individually. Fig. 4: With the waveguide lengths of 200 \\(\\mu\\)m, (a) the simulated transmission spectra of the conventional waveguide array and the proposed design, and (b) the corresponding CSR. The corresponding normalized \\(E\\)-field distributions of (c) the conventional waveguide array, and (d) the proposed waveguide array, at the wavelength of 1.55 \\(\\mu\\)m. The input ports in both waveguide arrays are set to port A\\({}_{5}\\). The parameters employed in these calculations are: \\(W=400\\) nm, \\(H=330\\) nm, \\(G=160\\) nm, \\(a=300\\) nm, \\(b=280\\) nm, and \\(\\Lambda=800\\) nm, respectively. Fig. 5: The TE-eigenmode field distributions of (a) a conventional strip waveguide, and (b) the evanescent-field-modulated strip waveguide. The normalized \\(E\\)-field intensity along the reference line (red dashed line) of (c) a conventional strip waveguide, and (d) the evanescent-field-modulated strip waveguide. The parameters are: \\(W=400\\) nm, \\(H=330\\) nm, \\(G=160\\) nm.

The above discussions indicate that, for the proposed two-waveguide systems, the crosstalk can be effectively suppressed via evanescent field manipulation, giving rise to a very small coupling coefficient \\(F\\). When extended to the waveguide array case, the coupling paths become more complicated [27]. Taking a three-waveguide system as an example, two dominant coupling paths need to be considered. In the first case, the couplings originate directly from the input waveguide.
When light is launched from one waveguide, the energy will be coupled into the waveguides nearby with the coupling strength decays with the distance from the input waveguide. The second coupling path involves the coupling from the nearest waveguide to the next-nearest waveguide. According to the above analysis about the proposed two-waveguide system, the crosstalk between adjacent waveguides can be effectively suppressed by introducing the 1-D periodic structures. On the other hand, a 2 \\(\\mu\\)m distance between the input waveguide and the next-nearest waveguide in our case is sufficient to suppress the coupling from the input waveguide to the next-nearest waveguide. Therefore, a sub-wavelength pitch silicon waveguide array with low crosstalk can be obtained. Based on the given parameters of the proposed two-waveguide system, a silicon waveguide array is constructed to demonstrate the crosstalk suppression effect of our design. The top view of the proposed waveguide array is shown in Fig. 3(a). Meanwhile for comparison purpose, a reference waveguide array with identical physical parameters is also simulated as shown in Fig. 3(b). The array lengths are 200 \\(\\mu\\)m and number of the waveguides in both arrays is set to be 9, which is sufficient to investigate the couplings of a waveguide array. As shown in Figs. 3(a) and 3(b), the input ports are uniformly numbered from A\\({}_{1}\\) to A\\({}_{9}\\), and each output port on the right side is individually numbered to distinguish from the others. Fig. 4(a) give the simulated coupling coefficients for both the proposed and the referenced waveguide arrays when light is launched from the center waveguide, i.e., Port A\\({}_{5}\\) for both arrays. In Fig. 4(a), \\(S_{C6A5}\\) and \\(S_{C7A5}\\) represent the energy coupling coefficients from Port A\\({}_{5}\\) to Port \\(C_{6}\\) and \\(C_{7}\\) in the referenced waveguide array, respectively. Similar case holds for \\(S_{B6A5}\\) and\\(S_{B7A5}\\) in the proposed design. Because two major coupling paths, i.e., one from the input waveguide to the nearest waveguides and the other one to the next-nearest waveguides, dominate the couplings, while energy coupling via other coupling paths are negligible, the coupling between Port A\\({}_{5}\\) and the nearest two waveguides are our major concerns, i.e., \\(S_{B6A5}\\)/\\(S_{B4A5}\\) and \\(S_{B7A5}\\)/\\(S_{B3A5}\\). In addition, the symmetry of the structures, makes the fields symmetrically distributed Fig. 8: Normalized far-field gain patterns at various wavelengths. Fig. 6: (a) The crosstalk versus the length \\(a\\) in different period of the periodic silicon blocks at the operating wavelength of 1550 nm. (b) Radiation angle of the evanescent-field-modulated grating antennas as a function of the period of the 1-D periodic structure at 1550 nm. (c) The radiation strength as a function of operating wavelength at different gap size \\(G\\). (d) Radiation strength as a function of gap size \\(G\\). The inset in (d) shows the radiation strength versus the perturbed depth of conventional waveguide grating antenna with the same waveguide parameters of width \\(W\\) and height \\(H\\). The parameters are: \\(W\\) = 400 nm, \\(H\\) = 330 nm, \\(G\\) = 160 nm, \\(a\\) = 300 nm, \\(b\\) = 280 nm, and \\(\\Lambda\\) = 800 nm, respectively. Fig. 7: (a) Crosstalk as a function of wavelength at different gap size \\(G\\) for the proposed structure. (b) Crosstalk versus wavelength at different waveguide pitch \\(D\\) for a conventional two-waveguide system. 
with respect to the input waveguide. It means \\(S_{B6A5}=S_{B4A5}\\) and \\(S_{B7A5}=\\)\\(S_{B3A5}\\). Therefore, for the proposed design, only \\(S_{B6A5}\\) and \\(S_{B7A5}\\) are illustrated for the sake of brevity. Similarly, for the referenced waveguide array, the corresponding parameters \\(S_{C6A5}\\) and \\(S_{C7A5}\\) are illustrated in Fig. 4(a) by dash curves for comparison purpose. It can be seen that for the referenced waveguide array, the maximum coupling occurs at the wavelength of 1.62 \\(\\mu\\)m, where up to 22% of the input power is transferred into the two adjacent waveguides. However, in the proposed design, the couplings to adjacent waveguides are effectively suppressed owning to the introduction of the 1-D periodic structures, which are negligible compared to those in the referenced waveguide array. A CSR with respect to the adjacent waveguides of over 10 dB is achieved within the wavelength ranges of 1.5-1.59 \\(\\mu\\)m as shown in Fig. 4(b). Compared with the proposed two-waveguide system results shown in Fig. 2(b) and the waveguide array results shown in Fig. 4(b), the CSR in the waveguide array case is slightly lower than that in the two-waveguide system analyzed above due to more complicated coupling paths and the array environment impacts in the array case. However, the 1-D periodic silicon nano-blocks can be further optimized in the array environment for higher CSR. Furthermore, for the coupling from input waveguide to the next-nearest waveguide, e.g., from Port A\\({}_{5}\\) to Port B\\({}_{7}\\), without special engineering, the structures between the two waveguides have no effect on coupling suppression. Comparatively, the introduced 1-D periodic structure increases the coupling from the input waveguide to the next-nearest waveguide, as indicated by the red curve in Fig. 4(a), resulting in a positive CSR over a large wavelength range. However, in the proposed deign, the crosstalk to the next-nearest waveguide still maintains at a low level, e.g., below -27 dB as shown in Fig. 4(a), due to the relatively large waveguide pitch of 2 \\(\\mu\\)m between the input waveguide and the next-nearest waveguide. Figs. 4(c) and 4(d) depict the normalized \\(E\\)-field distribution of the referenced waveguide array and the proposed design at the wavelength of 1.55 \\(\\mu\\)m. For the referenced waveguide array, the energy launched from Port A\\({}_{5}\\) into the input waveguide is gradually coupled to other waveguides inside the array. Comparatively, in the proposed design, as seen in Fig. 4(d), crosstalk is barely observed when light is launched from the center waveguide. In addition, the electric fields in the input waveguide gradually decays along the wave propagating direction due to the grating radiation caused by the periodically arranged silicon nano-blocks. Similar \\(E\\)-field distributions can be obtained by launching power into other waveguides. Accordingly, low-crosstalk silicon waveguide arrays with a large number of waveguides can also be achieved. Therefore, our design holds a great promise for larger-scale integrated OPAs. ### _Radiation Rates and Frequency-Scanning Characteristics_ In this section, the radiation characteristics of the proposed structure are discussed. Different from conventional sidewall-modulated waveguide grating antennas, in this design, the radiation grating is formed by periodical perturbation of the silicon nano-blocks to the evanescent fields of the silicon strip waveguides. 
For the referenced silicon strip waveguide, the field distribution of the TE eigenmode is shown in Fig. 5(a). When silicon blocks are arranged on both sides of the strip waveguide and separated from the waveguide by a gap \\(G\\), the TE-eigenmode field distribution is shown in Fig. 5(b). In addition, the normalized \\(E\\)-field intensities distributed along the red reference lines are illustrated in Figs. 5(c) and 5(d), respectively. It can be seen that part of the evanescent field is perturbed by the silicon nano-blocks. With properly designed parameters of the silicon nano-blocks and the gap \\(G\\), the evanescent field can be modulated accordingly. When the period of the 1-D periodic structure is comparable with the waveguide wavelength in the silicon strip waveguide, the radiating grating is formed and light will be radiated out. Compared with conventional sidewall-modulated waveguide grating antennas, where a small feature size leads to strong radiation, the evanescent-field-modulated grating antennas allow a relatively large feature size to achieve a weak radiation rate.

The radiation strength as a function of the gap size \\(G\\) is shown in Fig. 6(d). As indicated by Fig. 5(c), the evanescent fields gradually decay with distance from the waveguide. Therefore, the larger the gap size is, the weaker the perturbation of the silicon nano-blocks to the evanescent fields of the silicon strip waveguide will be. As a result, the radiation strength is dramatically reduced with increasing gap size \\(G\\), as indicated in Fig. 6(d). Comparatively, for a conventional waveguide grating antenna with identical physical parameters, the radiation strength as a function of the perturbed depth is shown in the inset of Fig. 6(d). It can be seen that although a relatively large minimal feature size \\(G\\) is adopted in the proposed design, the radiation strength is comparable to that of the referenced conventional waveguide grating antenna with a perturbed depth of a dozen nanometers. Moreover, as seen in Fig. 6(c), the radiation strength shows a smaller fluctuation over wavelength with increasing gap size \\(G\\), which is very favorable for OPAs to achieve two-dimensional scanning with a stable angular resolution in the far-field region. However, a larger gap size \\(G\\) means a larger element pitch, resulting in smaller aliasing-free beam steering ranges of OPAs. Therefore, in the proposed design, the gap size \\(G\\) is designed to be 160 nm as a tradeoff between the radiation strength and the steering range. At the 1.55 \\(\\mu\\)m wavelength, a weak radiation strength \\(\\alpha\\) of about 1.57 mm\\({}^{-1}\\) is obtained at a radiation angle of 31.4\\({}^{\\circ}\\). To radiate 99% of the input power, the corresponding radiation length is about 1.47 mm. According to [29], the 3-dB beamwidth of 1-D leaky-wave antennas is given as follows: \\[\\Delta\\theta_{3dB}\\approx\\frac{2\\alpha}{k_{0}\\cos{(\\theta)}} \\tag{5}\\] where \\(\\alpha\\), \\(k_{0}\\) and \\(\\theta\\) are the radiation strength, free-space wavenumber and radiation angle, respectively. The radiation length of 1.47 mm yields a corresponding 3-dB beamwidth of 0.052\\({}^{\\circ}\\) in the far-field region at the wavelength of 1.55 \\(\\mu\\)m. Such a narrow beam width is very favorable for automotive LiDAR applications, where an angular resolution around 0.1\\({}^{\\circ}\\) is required for distinguishing any potential hazards even at a distance of 200 m [30, 31].
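The figures quoted above can be checked directly from (5) and from the exponential decay of the guided power. A minimal numeric sketch follows, assuming that \\(\\alpha\\) is the field decay rate so that the guided power falls as \\(\\exp(-2\\alpha L)\\), which reproduces the quoted 1.47 mm.

```python
import math

lam = 1.55e-6               # wavelength (m)
alpha = 1.57e3              # radiation strength (1/m), i.e., 1.57 mm^-1
theta = math.radians(31.4)  # radiation angle

# Length needed to radiate 99% of the input power: exp(-2*alpha*L) = 0.01
L_99 = math.log(100.0) / (2.0 * alpha)
print(f"99% radiation length: {L_99 * 1e3:.2f} mm")        # ~1.47 mm

# 3-dB beamwidth of a 1-D leaky-wave antenna, Eq. (5)
k0 = 2.0 * math.pi / lam
dtheta = 2.0 * alpha / (k0 * math.cos(theta))
print(f"3-dB beamwidth: {math.degrees(dtheta):.3f} deg")    # ~0.052 deg
```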
Although the sizes of periodic silicon blocks play a crucial role in coupling suppression, the gap size \\(G\\) will also have significant effects. Therefore, in the proposed two-waveguide system, the crosstalk as a function of wavelength at different \\(G\\) is given in Fig. 7(a). It is worth noting that varying \\(G\\) and keeping \\(b\\) a constant at the same time, the waveguide pitch \\(D\\) will also change. For a fair comparison, in the conventional two-waveguide system, the crosstalk versus wavelength at various \\(D\\) is presented in Fig. 7(b). The corresponding relationships between \\(D\\) and \\(G\\) are shown in Table 1. As shown in Fig. 7(a), with the increasing of \\(G\\), the resonant characteristics gradually emerges. Besides, in the proposed design, when \\(G\\) is larger than 160 nm, no obvious improvement in coupling suppression can be observed. Therefore, \\(G\\) is also optimized to be 160 nm for a tradeoff between the crosstalk suppression effect and element pitch. On the other hand, in the previous reports [14, 15, 16, 17], large phase mismatches are introduced with asymmetric structures to suppress the crosstalk between waveguides. Due to different dispersion relationships of antenna elements, the wavelength scanning will be infeasible in these designs. However, in the proposed design, the 1-D periodic silicon nano-blocks are introduced without breaking the structure symmetry, making the proposed waveguide array available for wavelength scanning besides low crosstalk between waveguides and weak radiation strength. With the waveguide length of 200 \\(\\mu\\)m, the normalized far-field gain patterns at various wavelengths are shown in Fig. 8. For wavelength ranging from 1500 to 1613 nm, the radiation angle varies from 40.3\\({}^{\\circ}\\) to 21\\({}^{\\circ}\\), corresponding to a tuning efficiency of 0.17\\({}^{\\circ}\\)/nm. According to the antenna theories, the scanning angle \\(\\theta\\) can be expressed as follows [32]: \\[\\theta=\\sin^{-1}\\left(\\frac{m\\times 2\\pi+\\Delta\\phi}{2\\pi}\\times\\frac{\\lambda}{D}\\right) \\tag{6}\\] Where \\(\\Delta\\Phi\\), \\(\\lambda\\), \\(D\\) are the phase difference between adjacent antenna element, free-space wavelength and element pitch, respectively and \\(m\\) is an integer. To avoid the grating lobes, a single real solution is required in Eq. (3) for all m \\(\\geq\\) 1: \\[\\left|\\frac{m\\times 2\\pi+\\Delta\\Phi}{2\\pi}\\times\\frac{\\lambda}{D}\\right|>1 \\tag{7}\\] It indicates that aliasing-free beam scanning can be ensured by the antenna element pitch below half a wavelength. However, limited by the crosstalk in optical waveguide arrays, the element pitch is typically larger than half a wavelength. Therefore, the first grating lobe appears at \\(m=\\pm 1\\). In addition, the aliasing-free beam steering ranges can be theoretically calculated according to (8). At the wavelength of 1.55 \\(\\mu\\)m, for the waveguide array with an element pitch of \\(D=1\\)\\(\\mu\\)m in this design, an overall aliasing-free beam steering range of around 100\\({}^{\\circ}\\) can be achieved with the phase modulation method [10]. \\[2\\theta=2\\arcsin{\\left(\\frac{\\lambda}{2D}\\right)} \\tag{8}\\] To verify the beam-steering characteristics by the phase modulation method, a uniform linear array composed of 20 ideal point sources and spacing with an identical element pitch of 1 \\(\\mu\\)m is theoretically calculated. The corresponding far-field radiation patterns at various angles are illustrated in Figs. 9(a)-(d). 
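The aliasing limits discussed above can also be checked numerically: (8) with \\(\\lambda=1.55\\) \\(\\mu\\)m and \\(D=1\\) \\(\\mu\\)m gives \\(2\\arcsin(0.775)\\approx 101.6^{\\circ}\\), i.e., the quoted \\(\\sim\\)100\\({}^{\\circ}\\) field of view. The minimal array-factor sketch below reproduces the behavior of the 20-element, 1-\\(\\mu\\)m-pitch uniform array of Fig. 9, assuming idealized isotropic elements; the variable names are ours.

```python
import numpy as np

lam, d, n = 1.55, 1.0, 20                # wavelength and pitch (um), element count
angles = np.linspace(-90.0, 90.0, 3601)  # observation angles (deg)
u = np.sin(np.radians(angles))

def array_factor_db(steer_deg):
    """Normalized power pattern of a uniform linear array steered to steer_deg."""
    psi = 2.0 * np.pi * d / lam * (u - np.sin(np.radians(steer_deg)))
    af = np.abs(np.exp(1j * np.outer(np.arange(n), psi)).sum(axis=0)) / n
    return 20.0 * np.log10(np.maximum(af, 1e-6))

for steer in (0, 20, 33, 50):
    strong = angles[array_factor_db(steer) > -3.0]  # angles within 3 dB of the peak(s)
    print(steer, strong.min(), strong.max())
# Steering to 33 deg pulls the first grating lobe in at about -90 deg, and at
# 50 deg it sits near -50 deg, consistent with Fig. 9(c) and 9(d).
```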
It can be seen from Fig. 9(c) that, when scanning up to 33\\({}^{\\circ}\\), the first grating lobe appears at \\(-90^{\\circ}\\). Besides, when it scans up to 50\\({}^{\\circ}\\), the first grating lobe appears at \\(-50^{\\circ}\\), indicating an overall field of view of 100\\({}^{\\circ}\\), as illustrated in Fig. 9(d). These results are in good agreement with the theoretical calculations. Table 2 gives a summary and comparison with prior beam-steering OPAs in terms of performance, e.g., the array element pitch, field of view, beam divergence and crosstalk. It should be noted that the beam divergence in the transverse direction (waveguide array direction) depends directly on the number of antenna elements, and more antenna elements lead to a smaller beam divergence angle, while the beam divergence in the longitudinal direction (light propagating direction) depends on the effective radiating length of the grating. The mutual restraints between array element pitch and field of view pose a limitation to the development of high-performance OPAs. Therefore, reducing the element pitch while keeping a low crosstalk between antenna elements can substantially extend the scanning angle of OPAs.

\\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline \\(G\\) (\\(\\mu m\\)) & 0.1 & 0.12 & 0.14 & 0.16 & 0.18 & 0.2 \\\\ \\hline \\(D\\) (\\(\\mu m\\)) & 0.88 & 0.92 & 0.96 & 1 & 1.04 & 1.08 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE 1: The Corresponding Waveguide Pitch \\(D\\) at Different Gap Size \\(G\\)

In the simulations, the beam steering can be achieved by individually controlling the phase of each input port. However, in an experimental OPA system, MMI splitters are required to split the input power into \\(N\\) channels, followed by a phase-modulated region and the designed waveguide grating antennas. The phase in the phase-modulated region can be modulated via the thermo-optic effect or the plasma dispersion effect [36]. Phase shifters based on the thermo-optic effect generally suffer from high power consumption. For example, a heating power of 22 mW is required for a 2\\(\\pi\\) phase shift. Compared with the thermo-optic phase shifters, although the phase shifters based on the plasma dispersion effect can provide smaller power consumption and faster response, optical loss is unavoidable in these devices. As examples, the additional losses for the 2\\(\\pi\\) electro-optic phase shift presented in [37] and [38] are about 2.9 \\(\\sim\\) 3.2 dB and 6 dB, and the tuning powers are about 3.4 mW/2\\(\\pi\\) and 4.6 mW/2\\(\\pi\\), respectively.

## III Fabrication Tolerance

The proposed design with the minimum feature size \\(G=160\\) nm can be fabricated using single-step electron beam lithography and etching processes, which simplifies the fabrication. In this section, the fabrication tolerances of the proposed design are investigated, and the impacts of the fabrication errors on the crosstalk fluctuation are our primary consideration. Figs. 10(a)–10(c) show the CSR within the wavelength range of 1.5 \\(\\sim\\) 1.62 \\(\\mu\\)m at different width \\(b\\), length \\(a\\) and height \\(H\\) of the periodic silicon nano-blocks, respectively. It can be seen that the width \\(b\\) has a significant influence on the resonant frequencies of the 1-D periodic structure. As mentioned above, the evanescent waves of the silicon strip waveguide contribute to the coupling between waveguides.
When the evanescent wave propagates along the \\(x\\)-direction with a wave vector \\(k_{x}\\), the parameter \\(b\\) plays a decisive role in the traveling path along which the energy transfers from the input waveguide to the adjacent ones. Therefore, the resonant frequencies of the periodic structure are directly influenced by the parameter \\(b\\). As seen in Fig. 10(a), with increasing \\(b\\), the two resonances are both shifted downwards to lower frequencies because of the increased traveling path, resulting in a deterioration of the CSR at high frequencies. As indicated by Fig. 6(a), when keeping the period \\(\\Lambda\\) constant, the duty cycle can be properly designed to obtain optimal crosstalk suppression between waveguides. It can be seen from Fig. 10(b) that the CSR is hardly influenced by a reasonable fabrication deviation of the silicon nano-block length \\(a\\). In addition, the resonant frequencies are affected by the overall structure height \\(H\\). With decreasing height \\(H\\), the CSR at low frequencies will slightly deteriorate. Considering that the proposed design can be fabricated with the electron beam lithography process, the size deviations of the structure are generally on the order of a few nanometers to a few tens of nanometers. As shown in Figs. 10(a)–10(c), the crosstalk can be suppressed well over a large wavelength range even when the fabrication error is as large as \\(\\pm\\) 20 nm. Therefore, these results strongly suggest that the proposed design offers favorable fabrication tolerances.

Fig. 9: For a uniform linear array composed of 20 ideal point sources with an identical element pitch of 1 \\(\\mu\\)m as in the proposed structure, the far-field radiation patterns at the scanning angles of (a) 0\\({}^{\\circ}\\), (b) 20\\({}^{\\circ}\\), (c) 33\\({}^{\\circ}\\), and (d) 50\\({}^{\\circ}\\). Fig. 10: The CSR versus wavelength at different (a) width \\(b\\) and (b) length \\(a\\) of the periodic silicon nano-blocks, and (c) overall height \\(H\\) of the proposed design.

## IV Conclusion

In this paper, a compact sub-wavelength-pitch silicon waveguide array for OPAs is proposed. Firstly, a pair of waveguides with an element pitch of 1 \\(\\mu\\)m is analyzed and discussed to illustrate the operating principle of the proposed design. Then, a low-crosstalk waveguide array is constructed based on the given parameters of the proposed two-waveguide system. Simulation results show that within the 1500 \\(\\sim\\) 1590 nm bandwidth, the crosstalk between the waveguides in the proposed design is at least 10 dB lower than that of the referenced traditional waveguide array with identical physical parameters. Furthermore, the silicon blocks periodically perturb the evanescent fields of the silicon strip waveguide, forming a weakly radiating grating. Therefore, the radiation characteristics of the evanescent-field-modulated waveguide grating are investigated. It is demonstrated that the proposed design achieves an effective radiation length of up to about 1.47 mm at the wavelength of 1.55 \\(\\mu\\)m. As a result, a theoretically narrow beam width of 0.052\\({}^{\\circ}\\) is achieved in the far field. For the waveguide array with an element pitch of 1 \\(\\mu\\)m in this design, theoretical aliasing-free beam steering ranges of 100\\({}^{\\circ}\\) can be achieved with the phase modulation method.
Overall 19.3\\({}^{\\circ}\\) beam steering ranges via wavelength tuning by 113 nm, corresponding to the tuning efficiency of 0.17\\({}^{\\circ}\\)/nm, is numerically demonstrated. Finally, the fabrication tolerances of the proposed design are investigated. Our results indicate that the design is robust to reasonable fabrication errors from parameter variations of the 1-D periodic silicon nano-blocks. Therefore, the proposed waveguide array will be very promising in realizing high-performance two-dimensional scanning OPAs of the solid-state LiDAR with a large field of view and a narrow beam width. ## References * [1] J. K. Doylend, M. J. R. Heck, J. T. Bovington, J. D. Peters, L. A. Coldren, and J. E. Bowers, \"Two-dimensional free-space beam steering with an optical phased array on silicon-on-insulator,\" _Opt. Exp._, vol. 19, no. 22, pp. 21595-21604, Oct. 2011. * [2] C. V. Poulton _et al._, \"Coherent solid-state LIDAR with silicon photonic optical phased arrays,\" _Opt. Lett._, vol. 42, no. 20, pp. 4091-4094, Oct. 2017. * [3] Y. Kohno, K. Komatsu, R. Tang, Y. Ozeki, Y. Nakano, and T. Tanemura, \"Ghost imaging using a large-scale silicon photonic phased array chip,\" _Opt. Exp._, vol. 27, no. 3, pp. 3817-3823, Feb. 2019. * [4] J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, \"Large-scale nanophotonic phased array,\" _Nature_, vol. 493, no. 7431, pp. 195-199, Jan. 2013. * [5] G. Yang, W. Han, T. Xie, and H. Xie, \"Electronic holographic three-dimensional display with enlarged viewing angle using non-mechanical scanning technology,\" _OSA Continuum_, vol. 2, no. 6, pp. 1917-1924, Jun. 2019. * [6] W. M. Neubert, K. H. Kudielka, W. R. Leeb, and A. L. Scholtz, \"Experimental demonstration of an optical phased array antenna for laser space communications,\" _Appl. Opt._, vol. 33, no. 18, pp. 3820-3830, Jun. 1994. * [7] C. Hulme _et al._, \"Fully integrated hybrid silicon two-dimensional beam scanner,\" _Opt. Exp._, vol. 23, no. 5, pp. 5861-5874, Mar. 2015. * [8] G. Zhou, S. W. Qu, and J. Wu, \"Grating lobe suppression in optical phased arrays by loading near-wavelength grating,\" _Opt. Lett._, vol. 45, no. 20, pp. 5664-5667, Oct. 2020. * [9] D. Kwong _et al._, \"On-chip silicon optical phased array for two-dimensional beam steering,\" _Opt. Lett._, vol. 39, no. 4, pp. 941-944, Feb. 2014. * [10] T. Kim _et al._, \"A single-chip optical phased array in a water-scale silicon photonics/CMOS 3D-integration platform,\" _IEEE J. Solid-State Circuits_, vol. 54, no. 11, pp. 3061-3074, Nov. 2019. * [11] S. M. Kim, T. H. Park, C. S. Im, S. S. Lee, T. Kim, and T. C. Oh, \"Temporal response of polymer waveguide beam scanner with thermo-optic phased- modulator array,\" _Opt. Exp._, vol. 28, no. 3, pp. 3768-3778, Feb. 2020. * [12] D. N. Hutchison _et al._, \"High-resolution aliasing-free optical beam steering,\" _Optica_, vol. 3, no. 8, pp. 887-890, Aug. 2016. * [13] R. Fatemi, A. Khachaturian, and A. Hajimiri, \"A nonuniform sparse 2-D large-FO optical phased array with a low-power PWM drive,\" _IEEE J. Solid-State Circuits_, vol. 54, no. 5, pp. 1200-1215, May 2019. * [14] W. Song _et al._, \"High-density waveguide superlattices with low crosstalk,\" _Nat. Commun._, vol. 6, no. 1, pp. 7027, May 2015. * [15] R. Gadula, S. Abbaslou, M. Lu, A. Stein, and W. Jiang, \"Guiding light in bent waveguide superlattices with low crosstalk,\" _Optica_, vol. 6, no. 5, pp. 585-591, May 2019. * [16] L. 
In this work, a compact sub-wavelength-pitch silicon waveguide array with low crosstalk is proposed and analyzed. The crosstalk is suppressed by periodic silicon nano-blocks symmetrically arranged along the silicon strip waveguides. The silicon nano-blocks are properly designed to work in the resonant region as a high-reflection boundary so that the evanescent fields of the silicon waveguide, which directly contribute to the coupling between waveguides, can be truncated. Meanwhile, the nano-blocks periodically perturb the evanescent fields to form a weak-radiating grating, leading to a millimeter-long effective radiation length required for highly directive optical phased arrays. Simulation results show that the crosstalk between the waveguides in the proposed design is at least 10 dB lower than that of a traditional waveguide array with identical sizes within the 1500-1590 nm bandwidth. Furthermore, the proposed design achieves an effective radiation length of up to 1.47 mm, resulting in a theoretical narrow beam width of 0.052\\({}^{\\circ}\\). Combining both the low crosstalk and the long effective radiating length, our design offers a promising platform for high-performance two-dimensional scanning optical phased arrays with a large field of view and a narrow beam width.

Index Terms—Optical phased arrays (OPAs), low crosstalk, high-reflection boundary.
# Training Images Generation for CNN Based Automatic Modulation Classification

Wei-Tao Zhang\\({}^{1,2}\\), Dan Cui\\({}^{1}\\), and Shun-Tian Lou\\({}^{1}\\) (Member, IEEE)

\\({}^{1}\\)School of Electronic Engineering, Xidian University, Xi'an 710071, China

\\({}^{2}\\)Research Institute of Advanced Remote Sensing Technology, Xidian University, Xi'an 710071, China

Corresponding author: Wei-Tao Zhang ([email protected])

## I Introduction

In modern wireless communication systems, the modulation type of the transmitted signal is mandatory for a receiver to successfully demodulate the original transmitted message. The conventional way of doing this involves sending a header or pilot signal along with the original message to inform the receiver about the modulation type. However, such an approach incurs a penalty in terms of bandwidth utilization and data throughput. Intelligent receivers can mitigate this drawback by pre-processing the received signal to identify the modulation type of the transmitted signal with no need for prior knowledge. This has led to huge interest in developing automatic modulation classification (AMC) techniques [1], which is actually an intermediate step between signal detection and demodulation. It is usually preferred in adaptive modulation scenarios such as software defined radio (SDR) and cognitive radio (CR) [2], where the transmitted modulation can be dynamically chosen such that the spectral efficiency is constantly optimized. AMC has now found a variety of applications in both commercial and military fields such as spectrum management, surveillance, and threat analysis. Over the past few years, numerous methods for AMC have been proposed in the literature [3]. They can be mainly divided into two categories, namely, Likelihood-Based (LB) methods [4, 5, 6] and Feature-Based (FB) methods [7, 8, 9, 10, 11]. The former treats AMC as a hypothesis testing problem, where the exact or approximated likelihood function of the incoming signal is calculated and compared with a threshold value. Note that if the probability density function (pdf) of the received signal is known and identical to the actual pdf of the parameters, the LB method yields the optimal solution in terms of correct classification rate (CCR) since it minimizes the probability of false classification. However, LB methods usually suffer from model mismatch with respect to carrier frequency and phase offsets [8]. In addition, LB methods always have a high computational complexity. Thus, the LB solution serves as an upper-bound performance benchmark for any classifier, while it is commonly discarded in practical use. FB methods are popular in practical implementations because of the lower complexity involved. Most FB methods consist of two steps: the first step involves extracting features from the received signal; in the second step, a linear or nonlinear classifier is designed to perform the classification. Numerous features with their respective merits and defects have been proposed. Among them, the most used are high-order cumulants [6, 7, 8], wavelet transforms [9], and cyclic statistics [10]. As for the classifier, machine learning algorithms, such as support vector machines (SVM) [11], K-nearest neighbor (KNN) [8], and artificial neural networks [10], have been widely studied for inference.
Whereas these methods were developed and optimized for some environments, they suffer from performance degradation for mismatch between extracted feature and classifier because the feature selection procedure and classification are independent of each other. The quality of the whole AMC relies on both the performance of classification algorithm and the ability of the features to differentiate between the constellations of a given set. Obviously, the features that are insensitive to the inherent parameters of the received signal such as the phase and frequency offsets, the synchronization, and the noise are preferred. Unfortunately, such properties are rarely achieved by manually designed feature under various conditions. More recently, studies have shown that deep neural networks (DNN) can learn from the complex data structures and achieve superior classification accuracy [12]. This makes them an obvious choice in AMC problem because of the much denser modulation schemes used in the modern communication system. Kim used a fully connected model with three hidden layers [13]. To feed the DNN model, twenty-one features are computed from the received data samples based on power spectrum density and cumulants. Ali proposed a fully connected DNN model based on autoencoder with non-negativity constraints [14], where the input features are fourth order cumulants. Note that the fully connected DNN model always involves too many free parameters to be trained, which usually results in high computational load for network learning and inference. In addition, the above DNN model used in AMC only serves as a classifier, which is still independent of feature extraction. Recently, the convolutional neural network is more popular for modulation classification to overcome some obstacles of traditional machine learning algorithms. As for CNN based AMC, the feature extraction procedure is incorporated into CNN model, the model extracts feature from data autonomously, then the challenging task of manual feature selection can be avoided. CNN-based methods can be roughly divided into two categories according to the input of network. One was trained using IQ component signals, while the other was trained using image-based constellation diagrams. For example, a complex-valued network [15] considered the correlation between the real and imaginary parts of signal, is proposed to demonstrate the high potential for AMC and validate the superior performance compared with the real-valued network. However, a complex-model has a higher computational complexity because of the plenty of complex valued multiplications involved. Huynuh-The [16] proposed a cost-efficient convolutional neural network (MCNet) for AMC, whose input is IQ components and network architecture is built with several specific convolutional blocks to concurrently learn the spatiotemporal signal correlations via different asymmetric convolution kernels. Although the accuracy performance was good, thousands of parameters were used in the network, which is still large relative to the small IQ length. Kim _et al._[17] proposed a novel CNN architecture for AMC with low computational complexity compared with MCNet. The proposed model showed good performance in the SNR range from \\(-4\\)dB to \\(20\\)dB. Compared with IQ component-based model, image-based model is more elegant for AMC, because it can provide the visualization of modulation categories. 
Huang _et al._ proposed a compressive CNN (CCNN) for AMC [18], where multiple images (called kcs and cgc) are utilized as the input of the network. The cgc further considered the two-dimensional probability distribution of signal samples on the basis of kcs. However, there are still two limitations. First, the location of each sample within a grid region is not considered. Second, the impact of each sample in a region on its neighboring pixel is ignored. To handle these problems, Doan [19] leverages a bivariate histogram and an exponential decay mechanism to obtain gray-scale constellation image. Meanwhile, a novel CNN model, namely FiF-Net, is introduced for modulation classification. However, the computation burden of generating an image is higher since it must compute the distances between sample points and the center of each pixel. In [20], modulation classifiers are developed based on transfer learning of classical ResNet-50 and Inception V2 deep learning model, where the classifiers are trained with color images generated through the constellation density of the masked signal. The constellation density matrix (CDM) based modulation classification algorithm is proposed to identify the orders of different modulation categories. Despite exploiting more explicitly discriminative features of constellation diagrams, modeling a modulation classifier from color image suffers from poor performance along increment of QAM at lower SNR level. In [21] the transfer learning of classical AlexNet and GoogleNet are adopted for AMC using multiple constellation-like image data sets. As for the CNN based AMC, the training images generation is crucial for model learning. Unfortunately, the constellation diagrams generated from data samples are binary images with limited resolution, or enhanced gray (color) images with high computational load. Moreover, classical large CNN models are actually inappropriate for AMC problem since the constellation diagrams of the incoming signals are relatively simple images with uniform background, training of these models based on constellation-like images probably result in overfitting. In this paper, we focus our attention on the CNN based AMC problem. In order to overcome the aforementioned drawbacks, the constellation-like convolutional gray images are generated for model training, which exhibit better representation than binary images and other existing gray images. Moreover, the multiple-scale convolutional neural network (MSCNN) with dropout is proposed as the classifier. ## II Problem formulation We assume that the radio frequency (RF) signal in the receiver is preprocessed such that the received waveform consists of samples of prefiltered and pulse shaping digital signal in multipath fading channel. 
The oversampled data point reads \\[r(n)=\\alpha e^{i(2\\pi f_{o}nT_{s}+\\varphi_{o})}\\sum_{l=0}^{L-1}s(l)h(n-lT-n^{\\prime})+w(n) \\tag{1}\\] where \\(\\alpha\\) represents the channel attenuation factor, \\(\\{s(l)\\}_{l=0}^{L-1}\\) are \\(L\\) complex transmitted symbols drawn independently from a finite alphabet constellation, \\(h(n)\\) represents the overall effect of the pulse shaping filter and physical channel, \\(w(n)\\) is additive white Gaussian noise with power \\(\\sigma_{w}^{2}\\), \\(f_{o}\\) is the carrier frequency offset due to the impairment between transmitter and receiver, \\(\\varphi_{o}\\) is the phase offset, \\(n^{\\prime}\\) represents the propagation delay, \\(T\\) is the symbol period, and \\(T_{s}\\) is the sampling period; the oversampling rate is then given by \\(\\rho=T/T_{s}\\). In this paper, we assume that the channel is flat fading or properly equalized such that \\(h(n)\\) is negligible, and the parameters \\(T\\) and \\(n^{\\prime}\\) are assumed to be known. The goal of AMC is to identify which modulation scheme has been utilized with the knowledge of \\(N\\) received samples \\(\\mathbf{r}=[r(1),\\,\\ldots,\\,r(N)]\\). A CNN based AMC technique for adaptive modulation systems is shown in Fig. 1, where the imaging method for data conversion and the training of the CNN model are the critical points. In the sequel, the focus will be on these two points.

## III Training images generation

The received signal in equation (1) can be represented by its constellation diagram through mapping signal samples into scattering points on a complex plane. Note that the complex plane is infinite, while the signal samples represented by the scattering points are distributed within a certain area of the complex plane. Moreover, the amplitude of the received signal varies with different channel responses and modulation types, which makes the selection of an appropriate area for the constellation diagram more difficult. If the selected area is too small, some signal samples may be excluded from the image. On the contrary, if the area is too large, the signal samples may crowd into a small region, which makes it difficult to discriminate the higher order modulation types. In order to solve this problem, we compensate for the arbitrary channel attenuation by normalizing the received complex baseband samples as \\((r(n)-\\mu_{r})/\\sigma_{r}\\), where \\(\\mu_{r}\\) and \\(\\sigma_{r}\\) are the mean and standard deviation of the received samples. After that, the signal samples are distributed in a relatively fixed area, which makes it convenient to choose an appropriate complex plane. We select a \\(6\\times 6\\) complex plane, assuming a typical signal-to-noise ratio (SNR) range from 0 to 15 dB.

### Binary image

According to the distribution of signal samples in the constellation diagram, a pixel-limited binary image is straightforward. In this case, the selected complex plane is uniformly divided into grids, which correspond to pixels in the resulting binary image. Naturally, if a grid contains signal sample points, the corresponding pixel is set to 1, otherwise 0. Then the constellation diagram of the signal is converted to a binary image. However, multiple samples might crowd into one pixel; in this case the pixels with one or more sample points are treated identically. So a binary image is unable to provide an accurate representation of the distribution of signal samples.
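As a concrete illustration of this step, a minimal NumPy sketch of the binary image generation is given below. The 6 × 6 complex plane (b = 3) follows the text above, while the image resolution and the exact indexing convention are assumptions for illustration only.

```python
import numpy as np

def binary_constellation_image(r, b=3.0, n_pixels=240):
    """Map complex baseband samples r to a binary constellation image (sketch).

    b        : half-width of the selected complex plane (the 6 x 6 plane -> b = 3)
    n_pixels : assumed image resolution (n_pixels x n_pixels)
    """
    # Compensate for the arbitrary channel attenuation
    r = (r - r.mean()) / r.std()

    delta = 2.0 * b / n_pixels                # grid (pixel) size
    # Row index from the imaginary part, column index from the real part
    i = np.floor((b - r.imag) / delta).astype(int)
    j = np.floor((b + r.real) / delta).astype(int)

    img = np.zeros((n_pixels, n_pixels))
    inside = (i >= 0) & (i < n_pixels) & (j >= 0) & (j < n_pixels)
    img[i[inside], j[inside]] = 1.0           # one or more samples -> pixel set to 1
    return img
```

Pixels hit by one sample and pixels hit by many samples are indistinguishable here, which is exactly the limitation addressed by the gray and enhanced gray images below.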
### Gray image

For a pixel of the binary image, the number of sample points in the corresponding grid has been ignored, which degrades the resulting image quality. In order to improve the representation accuracy of the pixels with multiple sample points, the binary image can be upgraded to a gray image by regarding the number of sample points as the weight coefficient for the pixels with multiple sample points. Multiple pixels with different numbers of sample points are shown in Fig. 2. For a gray image, the pixels 3, 6, 13, 14 will have the weight coefficients 1, 2, 4, 3, respectively, which can be normalized to form the intensity values for these pixels.

### Enhanced Gray image

Although the number of samples in each pixel is considered in the gray image, the impact of each sample in a pixel on its neighboring pixels is neglected. Hence, an enhanced gray image is developed, which takes into consideration the distances between sample points and the centroids of pixels. Concretely, it adopts an exponential decay model, \\(o_{ij}=\\sum_{n=1}^{N}\\theta^{-\\lambda d_{n,ij}}\\), where \\(o_{ij}\\) represents the cumulative impact of all received sample points on the \\((i,j)\\)th pixel, \\(d_{n,ij}\\) is the distance between the centroid of the \\((i,j)\\)th pixel and the sample point \\(r(n)\\), \\(\\theta\\) is the base of the exponential function, and \\(\\lambda\\) is the decay factor. \\(o_{ij}\\) can be normalized to form the intensity values to generate an enhanced gray image.

Figure 1: The architecture of the CNN based AMC technique for an adaptive modulation system.

Figure 2: Sample points and pixels.

### Convolutional Gray Image

The enhanced gray image greatly improves the image quality for the subsequent classification procedure. However, it has two limitations. Firstly, for the pixels located on the boundary between two adjacent constellation points, there are usually few signal sample points available in the corresponding grids; these boundary pixels therefore commonly have very small intensity values relative to the constellation point pixels, which is useful for identifying the different constellation diagrams. However, the enhanced gray imaging model still computes the cumulative impact of all data samples on the boundary pixels. This results in a dim boundary between two adjacent constellation points, and hence it is difficult to identify the higher order modulations in a noisy channel. Secondly, according to the enhanced gray imaging model, the computation of the intensity of each pixel involves the distances from the corresponding grid to all data samples and the subsequent \\(N\\) exponential operations. When the number of signal samples \\(N\\) is large or a higher resolution is preferred, the computational burden of the enhanced gray image will be prohibitive for the generation of a large amount of training images for a deep model. In order to overcome the aforementioned drawbacks, we put forward a convolutional gray image generation method, which is based on a simple convolution operation on a local gray image. First, let us build a gray image \\(R(i,j)\\) using the received complex-valued signal \\(r(n)\\) with \\(N\\) data samples. Let \\(b\\) denote the selected boundary of the complex plane, and \\(\\Delta\\) denote the grid size, which actually represents the resolution of the resulting image. Fig. 3 shows the image in the natural coordinate system and the graphic coordinate system.
For a certain sample point \\(r(n)\\), it contributes to the corresponding pixel by \\[R(i,j)\\longleftarrow R(i,j)+1 \\tag{2}\\] where the graphic indices \\(i\\) and \\(j\\) are related to the data sample \\(r(n)\\) by \\[i =\\left\\lceil\\frac{b-Im[r(n)]}{\\Delta}\\right\\rceil \\tag{3a}\\] \\[j =\\left\\lceil\\frac{b+Re[r(n)]}{\\Delta}\\right\\rceil \\tag{3b}\\] where the notation \\(\\lceil x\\rceil\\) denotes the smallest integer that is greater than or equal to \\(x\\). After that, the gray image can be obtained by the following normalization procedure \\[R(i,j)\\longleftarrow R(i,j)/p \\tag{4}\\] where \\(p\\) is the maximum of \\(R\\). Second, note that the constellation diagram of a modulated signal actually represents the clustering of data samples. So it is appropriate to take into account only the impact of the surrounding data samples on the selected pixel, which represents the local features of the modulated signal. In order to efficiently calculate the locally clustered gray image, we propose to use a convolution kernel \\(W\\), which is shown in Fig. 4. It can be seen that the stride of the convolution filter is simply set to the image resolution \\(\\Delta\\), and the size of the filter is determined by the positive integers \\(A\\) and \\(B\\). The filter weight coefficients can be evaluated by \\[W(a,b)=\\theta^{-\\lambda d_{ab}} \\tag{5}\\] where \\(d_{ab}=\\Delta\\sqrt{a^{2}+b^{2}}\\) is the Euclidean distance between the \\((a,b)\\)th element of the filter and its centroid. The other parameters are similar to those used in the enhanced gray image. The reason why we choose the aforementioned filter is the following. The enhanced gray image is actually obtained by passing the signal samples through a 2D filter with infinite size, because the impact of all data samples on a certain pixel is evaluated. However, it is inappropriate to use such a large filter since only the data samples that belong to a certain constellation point contribute to the corresponding pixel. So a 2D filter with finite size, say [A, B], is preferred, which is expected to solve the dim boundary problem of enhanced gray images. The computation of the filter coefficients in (5) reflects the fact that the data sample points close to a certain pixel have a greater impact than those far away from the pixel. Third, for the complex-valued received signal \\(r(n)\\), the pixels of the convolutional gray image can be computed by \\[I(i,j)=\\sum_{a=-A}^{A}\\sum_{b=-B}^{B}W(a,b)R(i+a,j+b) \\tag{6}\\] where \\(I(i,j)\\) represents the intensity of the pixel \\((i,j)\\). By performing a convolution operation instead, the computational complexity for generating a higher resolution image is greatly decreased compared to the enhanced gray image. Moreover, faster computation of (6) can be implemented via the fast Fourier transform in the frequency domain. The resulting convolutional gray images for different modulations are depicted in Fig. 5.

Figure 3: Resulting image in the natural coordinate system and the graphic coordinate system.

Figure 4: Schematic diagram of the convolution filter.
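Putting equations (2)-(6) together, a minimal NumPy/SciPy sketch of the convolutional gray image generation is given below. The grid size \\(\\Delta\\), the kernel half-sizes \\(A\\) and \\(B\\), and the values of \\(\\theta\\) and \\(\\lambda\\) are illustrative assumptions (\\(\\Delta=0.025\\) with \\(b=3\\) reproduces a 240 × 240 image, the input size used by the network in the next section); the values actually used in the experiments are not specified here.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolutional_gray_image(r, b=3.0, delta=0.025, A=4, B=4, theta=2.0, lam=1.0):
    """Sketch of the convolutional gray image of equations (2)-(6)."""
    r = (r - r.mean()) / r.std()              # channel-attenuation normalization
    n = int(round(2.0 * b / delta))           # image size (240 for b = 3, delta = 0.025)

    # Equations (2)-(4): accumulate samples into a gray image R(i, j) and normalize
    R = np.zeros((n, n))
    i = np.ceil((b - r.imag) / delta).astype(int) - 1
    j = np.ceil((b + r.real) / delta).astype(int) - 1
    inside = (i >= 0) & (i < n) & (j >= 0) & (j < n)
    np.add.at(R, (i[inside], j[inside]), 1.0)
    R /= R.max()

    # Equation (5): finite-size exponential-decay kernel W(a, b)
    aa = np.arange(-A, A + 1)[:, None]
    bb = np.arange(-B, B + 1)[None, :]
    W = theta ** (-lam * delta * np.sqrt(aa**2 + bb**2))

    # Equation (6): local aggregation by a 2-D convolution (FFT-based for speed)
    I = fftconvolve(R, W, mode="same")
    return I / I.max()
```

Because the kernel is computed once and shared by all pixels, the cost is essentially that of a single FFT-based convolution, which is the efficiency argument made above for the convolutional gray image.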
## IV Deep Network for AMC

In modern communication systems, reliability is one of the most important indexes to evaluate the system performance, which demands a well-performing AMC model in terms of classification performance. Hence, the multiple-scale architecture of a modulation classification convolutional neural network, named MSCNN, is proposed to learn modulation patterns from constellation-like uniform-background images. The network architecture is presented in Fig. 6. The deep network is specifically designed with several convolutional blocks associated with skip connections, in which each block comprises several asymmetric process layers, each composed of one convolutional layer followed by one batch normalization layer and a ReLU activation function. The proposed network is capable of analyzing the multi-scale feature map correlations exhaustively to improve the accuracy of modulation classification under poor conditions at a cheaper computational cost. At the beginning of the network, an input layer configured with the size of \\(240\\times 240\\times 1\\), to be compatible with the volume size of the resulting image, is followed by a process block with 64 kernels of size \\(3\\times 3\\) to acquire coarse features. With \\(2\\times 2\\) kernels and a stride of \\((2,2)\\), the first pooling layer reduces the size of the feature map to optimize the extraction of the image characteristics. Subsequently, two process layers organized in parallel, called the pre-block as illustrated in Fig. 7(a), use asymmetric kernels of sizes \\(3\\times 1\\) and \\(1\\times 3\\), corresponding to vertical and horizontal kernels, respectively, instead of \\(3\\times 3\\) to decrease the number of trainable parameters. After that, the network consists of three modules for deeply mining more explicitly discriminative features from multi-scale feature maps. Each module has two sophisticated process blocks, called M-block and M-block-drop, respectively, which are cascaded along the network backbone. In detail, the M-block is configured with three process layers with different kernels, \\(3\\times 1\\), \\(1\\times 3\\), and \\(1\\times 1\\), arranged in parallel, whose feature maps are then merged in the depth dimension at the output of each block via a depth-wise concatenation layer. It is worth noting that the spatial dimension of the feature maps remains unchanged at the output of the M-block because all kernels are applied with stride \\((1,1)\\). Meanwhile, another dimension-reduced version of the M-block, named M-block-drop, is given the same structure as the M-block, except that a dropout layer is carried out following a \\(3\\times 1\\) process layer, as shown in Fig. 7(b). Notably, different from the traditional CNN model, where a maximum pooling operation is applied in convolution blocks, a dropout layer (rather than a pooling operation) follows every convolutional block instead. This modified architecture not only implements the down-sampling of the feature maps but also improves the robustness of the model against various additive noise. Moreover, it helps prevent the network training process from overfitting. The M-block-drop is also applied immediately after the pre-block to quickly diminish the dimension of the feature maps and subsequently reduce the computational burden of the following layers. As a result, the dimension of the feature maps that go through the M-block-drop is halved before reaching the concatenation layer. In order to be compatible with the output of the dropout branch when performing depth concatenation, the two remaining layers are deployed with a stride of \\((2,2)\\). Each block has two \\(1\\times 1\\) convolutional layers: one for feature extraction and another on the top for reduction of the channel dimension. The module is finalized with an M-block-drop. By following this architecture, the spatial size of the output feature volume is halved by every module.
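To make the block structure concrete before turning to the skip connections, a small PyTorch sketch of one possible reading of the M-block is given below. The channel counts are taken from the (partly garbled) Table 1 and should be treated as assumptions rather than the exact configuration; the framework used in the original experiments is not stated in this text.

```python
import torch
import torch.nn as nn

def process(c_in, c_out, kernel, stride=1):
    """Process block: convolution + batch normalization + ReLU."""
    padding = tuple(k // 2 for k in kernel)   # keeps the spatial size for stride 1
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel, stride=stride, padding=padding),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class MBlock(nn.Module):
    """M-block sketch: a 1x1 reduction feeding parallel 3x1, 1x3, and 1x1 process
    layers whose outputs are merged by depth-wise concatenation.  With stride (1,1)
    the spatial size is unchanged, and 48 + 48 + 32 = 128 output channels match the
    assumed 128 input channels."""
    def __init__(self, c_in=128, c_mid=64):
        super().__init__()
        self.reduce = process(c_in, c_mid, (1, 1))
        self.branch_31 = process(c_mid, 48, (3, 1))
        self.branch_13 = process(c_mid, 48, (1, 3))
        self.branch_11 = process(c_mid, 32, (1, 1))

    def forward(self, x):
        y = self.reduce(x)
        return torch.cat([self.branch_31(y), self.branch_13(y), self.branch_11(y)], dim=1)
```

The M-block-drop variant described above would additionally insert a dropout layer after the 3×1 branch and apply stride (2,2) in the branches to halve the spatial size; because the output channel count matches the input, an element-wise addition skip connection, discussed next, is possible.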
To improve the accuracy of the AMC model, to mitigate the negative effect of the vanishing gradient problem caused by the popular ReLU activation function in a relatively deep network, and to maintain the informative identity of previous layers, a skip-connection technique is deployed to associate M-blocks via an element-wise addition layer, as described in Fig. 6. Unlike the traditional network structure, the skip-connection mechanism allows the network to learn the integrated information. At the end of the network, the feature maps of the last M-block are gathered with its input by a depth concatenation layer. It is obvious that the multiple-scale features extracted in each block and the informative identity maintained throughout the network via skip connections are jointly synthesized to enrich the AMC model. MSCNN can thus overcome the problems of vanishing gradients and overfitting during the network training process. The network is finalized with an average pooling layer with a pool size of \\((2,2)\\), a fully connected layer (where the number of hidden nodes is identical to the number of modulation categories considered for classification), and a softmax layer arranged sequentially after the fully connected layer. The detailed configurations of MSCNN are given in Table 1.

## V Results

The simulation settings are as follows: 1) complex baseband modulated signals are obtained from the output of an additive white Gaussian noise (AWGN) channel with four noise levels (the corresponding SNRs are 0 dB, 5 dB, 10 dB, and 15 dB); 2) 1000 data samples are collected to generate the gray image, enhanced gray image, and convolutional gray image, respectively; the binary image is ignored because of its low resolution; 3) we consider the classification of 5 modulation categories, including BPSK, OQPSK, 8PSK, 16QAM, and 64QAM, each of which contains 20000 labeled images for model training and 5000 labeled images for performance testing. Notably, the SNR of the test dataset is different from that of the training dataset (1 dB-14 dB for the test data), which indicates a more difficult scenario for the classification model to predict the modulation categories. Therefore, the test accuracy of a trained network may not be as high as the results in the existing literature.

Figure 5: Convolutional gray images for the five modulation categories.

### Parameters Selection

For the enhanced gray image and the convolutional gray image, the parameters \\(\\theta\\) and \\(\\lambda\\) play an important role in the imaging process. In this section, we discuss the parameter selection. The exponential function (5) with different \\(\\theta\\) and \\(\\lambda\\) is plotted in Fig. 8. As shown in Fig. 8, the exponential function decreases rapidly with a larger \\(\\theta\\) or \\(\\lambda\\), which has an important effect on imaging. With the increment of \\(\\theta\\) or \\(\\lambda\\), the equivalent support (receptive field) of the 2D convolution filter shrinks in the imaging process. Fig. 9 shows the effect of different \\(\\lambda\\) in generating convolutional gray images for a BPSK modulated signal under additive white Gaussian noise. One sees that a smaller \\(\\lambda\\) blurs the edge between two adjacent constellation points. Especially at lower SNR, adjacent constellation points are connected to each other due to the noise interference, which makes it difficult for the classifier to identify the modulation type. Whereas a larger \\(\\lambda\\) will produce a lower resolution gray image that is similar to a binary image.
So \\(\\theta\\) and \\(\\lambda\\) are tradeoff parameters between blurred gray image and binary image. Fortunately, these parameters can be determined using empiric values with a wide range. Fig. 10 shows the resulting images by two different imaging methods. We see that the convolutional gray image produced a sharper change than enhanced gray image between light and dark, where sample density is different. This property enables our method to yield a clearer edge between two adjacent constellations under noisy interference scenario, which is superior in modulation classification. Figure 6: The overall network architecture of MSCNN. Figure 7: Description of convolutional blocks deployed in the MSCNN. (a) the Pre-block; (b) the convolutional M-block-drop; (c) the convolutional M-block and (d) the structure of Process block. ### _Effect of IMAGING Schemes on 64QAM Recognition_ We present some simulation results to show the effect of different imaging schemes. The reason why we show the results of imaging schemes on 64QAM is that it is hard to recognize among the aforementioned modulation categories. In our simulation, the same set of complex samples is used to generate three types of images, including gray image, enhanced gray image, and convolutional gray image. For each imaging method, corresponding images are fed into MSCNN for training. Then 1000 test images are generated for 64QAM modulated signals with SNR=4dB. Table 2 records the accuracy of three imaging methods. As shown in the table, the classification accuracy improves from 73.6% to 91.9% if the convolutional gray image is utilized instead of the gray image. Notably, despite achieving the greatest accuracy of 91.9%, 64QAM suffers the misclassification with 16QAM. ### _Comparison of Computational Load for Different IMAGING Schemes_ In this example, we investigate the computational load of imaging schemes, because this is a very important issue for adaptive demodulation systems applicable in real time scenario. The same set of complex valued data samples is used to generate the gray image, enhanced gray image and convolutional gray image respectively. For each imaging method, we compare the impact of the number of samples and different resolutions on imaging time, where the resolutions we considered include \\(200\\times 200\\), \\(300\\times 300\\), \\(400\\times 400\\), \\(600\\times 600\\). Fig. 11 plots CPU time versus the number of samples for generated images with different resolutions. 
Table 1: The detailed configurations of MSCNN.

| Layer | Output Volume | Detailed Description |
| --- | --- | --- |
| Input | 240 × 240 × 1 | |
| Process | 240 × 240 × 64 | 64 conv 3 × 3 |
| Max-pool | 120 × 120 × 64 | 2 × 2, stride = (2,2) |
| Pre-block | 120 × 120 × 128 | 64 conv 1 × 3, stride = (1,1); 64 conv 3 × 1, stride = (1,1); dropout(0.5) |
| Avg-pool | 60 × 60 × 128 | 2 × 2, stride = (2,2) |
| M-block-drop | 60 × 60 × 128 | 64 conv 1 × 1, stride = (1,1); 48 conv 3 × 1, stride = (2,2); dropout(0.5); 48 conv 1 × 3, stride = (2,2); 32 conv 1 × 1, stride = (2,2); depth concatenation |
| Add | 60 × 60 × 128 | element-wise addition |
| 3 × module | 8 × 8 × 128 | 32 conv 1 × 1, stride = (1,1); 48 conv 1 × 3, stride = (1,1); 32 conv 1 × 1, stride = (1,1); depth concatenation; … |

We see that the imaging time of the enhanced gray image significantly increases when the number of samples grows. In addition, it is significantly higher than that of the other imaging schemes under the same number of samples and resolution, which validates our analysis in Section III. Concretely, when the number of samples is 1000 and the resolution is 200, the imaging time of the enhanced gray image is more than five times that of the convolutional gray image. The reason for its high computational load is that it is obtained by evaluating the impact of all data samples on each pixel with repeated exponential computations. In contrast, the proposed convolutional gray image avoids the repeated exponential operations, and the convolution kernel is calculated just once and shared for each pixel. Moreover, we adopt a fast convolution operation with a convenient implementation. Note that the imaging times of the gray image and the convolutional gray image stay almost constant under different resolutions when the number of samples increases. In addition, when the resolution grows, the imaging times of the gray image and the convolutional gray image slightly increase. We see that the computational load of the convolutional gray image is slightly higher than that of the gray image; the additional computational burden lies in the convolution operation, which is computationally cheap, as indicated by the gap in CPU time between the two imaging schemes.

### Classification Performance of MSCNN for Different Modulations

We report the classification accuracy of MSCNN for the five modulation categories separately, where the numerical results are plotted in Fig. 12. In general, the classification accuracy increases with the SNR level. In our simulation, the classification accuracies of the low order modulation categories, including BPSK, OQPSK, and 8PSK, are 100% under the different imaging methods. Meanwhile, MSCNN on the convolutional gray image recognizes 16QAM and 64QAM signals competently with accuracy rates of 96.4% and 91.9%, respectively, at 4 dB SNR. It is observed that the classification accuracy degrades as the QAM order increases due to the vulnerability of high-order modulation signals.
For instance, the accuracy of the gray image decreases by over 16% when upgrading the QAM order from 16 to 64. As the worst modulation in our simulation, 64QAM suffers from confusion with 16QAM. It is well known that high-order modulations usually achieve high transmission rates in wireless communication systems, but the modulation recognition of the received signal will be less accurate due to the fact that the distance between the scattered points distributed in a constellation map is narrower, and hence close constellation points are vulnerable to noise. As for the proposed convolutional imaging scheme, by using an appropriate kernel the convolution operation produces a gathering effect for each constellation point and creates a clear edge between two adjacent constellation points. Consequently, the convolutional gray image achieves a significant increase in the accuracy of 64QAM compared with the other images. At an SNR of 1 dB, the convolutional gray image achieves 4.9% and 18.3% improvements compared to the enhanced gray image and the gray image, respectively. It is not surprising that the enhanced gray image shows higher performance than the gray image at most SNR levels.

Table 2: Classification results of the three imaging methods for 64QAM with SNR = 4 dB using MSCNN.

| Imaging method | BPSK | OQPSK | 8PSK | 16QAM | 64QAM | Accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| Gray Image | 0 | 0 | 0 | 264 | 736 | 73.6% |
| Enhanced Gray Image | 0 | 0 | 0 | 129 | 870 | 87.0% |
| Convolutional Gray Image | 0 | 0 | 0 | 81 | 919 | 91.9% |

Figure 12: Classification accuracy versus modulation types under different images.

### Comparison of Different Classifiers

In this example, the AMC algorithm using the MSCNN model on convolutional images is compared with that using the SVM on different features extracted from the received signal, particularly SVM-7 and SVM-5 [8], and GoogleNet on three-channel images [15]. The comparison result is plotted in Fig. 13, where SVM-5 includes two sixth order and three fourth order cumulants, and SVM-7 includes three fourth order cumulants and four sixth order cumulants. Observing Fig. 13, the MSCNN model achieves a classification rate of 83.7% at 1 dB SNR, which is better than SVM-7 and SVM-5 by approximately 5.26% and 6.84%, respectively. Of the two machine learning algorithms, the SVM-7 algorithm, with more features employed, performs slightly better than SVM-5. However, SVM-7 has a higher computational burden than SVM-5. In terms of inference time, SVM-7 spends 55% more than SVM-5 because it must compute more features. It is worth noting that the manual selection of features is a critical issue and affects classification performance noticeably in classical machine learning algorithms. However, the CNN-based algorithm performs even better without manual feature selection. Subsequently, we compared our MSCNN with FiFNet [19] for constellation based modulation classification using convolutional gray images. Observing the results in Fig. 13, we see that the classification accuracy of the proposed model is better in the SNR range from 1 dB to 4 dB, where the proposed model improves the classification accuracy by 2.5% at 2 dB compared with that of FiFNet. The network capacity and average inference time are summarized in Table 3, where the inference time is averaged over 5000 trials. It can be seen that MSCNN is cheaper than FiFNet by approximately 34% in capacity (i.e., the number of trainable parameters).
However, the inference times of both networks are almost equivalent. This is because both depth-wise concatenation and addition operations are performed many times by MSCNN. Finally, the performance of MSCNN using convolutional gray images is compared with GoogleNet using three-channel images. We see that MSCNN outperforms GoogleNet at most SNR levels. This is not surprising because GoogleNet is a standard deep network for general purpose applications, such as large scale classification with more than 1000 categories. It is probably not efficient for modulation classification using simple uniform background images; on the contrary, it will cause the problem of vanishing gradients or overfitting. In terms of capacity and inference time, there is no doubt that GoogleNet is the largest one.

Table 3: Comparison of capacity and inference time for different networks.

| Network | Capacity (No. parameters) | Inference time (ms) |
| --- | --- | --- |
| SVM-5 | - | 0.4 |
| SVM-7 | - | 0.9 |
| MSCNN | 274K | 3.0 |
| GoogleNet | 6.8M | 9.6 |
| FiFNet | 416K | 3.2 |

Figure 13: Average classification accuracy of different classifiers versus SNR.

Figure 14: Average accuracy of three types of images versus SNR.

### Classification Accuracy of MSCNN for Different Imaging Schemes

We investigate the performance of MSCNN for the three types of generated images. The classification accuracy of each imaging method as it varies with SNR is presented in Fig. 14. Note that the convolutional image outperforms the gray image and the enhanced gray image at SNRs of less than 8 dB. In lower SNR cases, the data samples belonging to a certain constellation point cannot be gathered together in the gray image due to noise interference, while the enhanced gray image gives rise to the dim boundary problem. However, the convolution kernel with finite size is used to improve the aggregation of interfered data samples and solve the dim boundary problem, and hence improved accuracy is observed. Concretely, at an SNR of 4 dB, the MSCNN model on the convolutional image achieves 1.6% and 3.3% improvements compared with the gray image and the enhanced gray image. Both the convolutional gray image and the enhanced gray image consider the impact of data samples on the selected pixel, but the convolutional gray image performs more accurately and requires fewer computing resources than the enhanced gray image. By deploying an appropriate kernel size, the convolutional gray image achieves a good trade-off between accuracy and computational cost.

## VI Conclusion

In this paper, a multiple-scale convolutional neural network, namely MSCNN, is proposed for constellation-based modulation classification. The network architecture consists of several processing blocks to comprehensively learn more intrinsic characteristics from constellation-like images. Meanwhile, the convolutional gray image is developed, in which a convolution kernel is deployed to overcome the drawbacks of existing imaging schemes. The trained MSCNN on the convolutional gray image dataset achieves an averaged classification accuracy of approximately 97.7% at 4 dB SNR. With a well-designed network and an effective imaging method, MSCNN on the convolutional gray image outperforms the other models in terms of accuracy. For future work, the impacts of interference and frequency selective fading channels will be investigated.

## References

* [1] A. K. Nandi and E. E.
Azzouz, \"Algorithms for automatic modulation recognition of communication signals,\" _IEEE Trans. Commun._, vol. 46, no. 4, pp. 431-436, Apr. 1998. * [2] F. K. Jondral, \"Software-defined radio-basic and evolution to cognitive radio,\" _EURASIP J. Wireless Commun. Netw._, vol. 2005, no. 3, pp. 1-9, Dec. 2005. * [3] O. A. Dobre, A. Abdi, Y. Bar-Ness, and W. Su, \"Survey of automatic modulation classification techniques: Classical approaches and new trends,\" _IET Commun._, vol. 1, no. 2, pp. 137-156, Apr. 2007. * [4] C.-Y. Huan and A. Polydoros, \"Likelihood methods for MPSK modulation classification,\" _IEEE Trans. Commun._, vol. 43, nos. 2-4, pp. 1493-1504, Feb. 1995. * [5] W. Wei and J. M. Mendel, \"Maximum-likelihood classification for digital amplitude-phase modulations,\" _IEEE Trans. Commun._, vol. 48, no. 2, pp. 189-193, Feb. 2000. * [6] S. Huang, Y. Yao, Y. Xiao, and Z. Feng, \"Cumulant based maximum likelihood classification for overlapped signals,\" _Electron. Lett._, vol. 52, no. 21, pp. 1761-1763, Oct. 2016. * [7] A. Swami and B. M. Sadler, \"Hierarchical digital modulation classification using cumulants,\" _IEEE Trans. Commun._, vol. 48, no. 3, pp. 429-461, Mar. 2000. * [8] M. W. Aslam, Z. Zhu, and A. K. Nandi, \"Automatic modulation classification using combination of genetic programming and KNN,\" _IEEE Trans. Wireless Commun._, vol. 11, no. 8, pp. 2742-2750, Aug. 2012. * [9] S. Kumar, V. A. Bohara, and S. J. Darki, \"Automatic modulation classification by exploiting cyclostolators' features in wavelet domain,\" in _Proc. 23rd Nat. Conf. Commun. (NCC)_, Mar. 2017, pp. 1-6. * [10] B. Ramkumar, \"Automatic modulation classification for cognitive radios using cyclic feature detection,\" _IEEE Transit. Mag._, vol. 9, no. 2, pp. 27-45, Jun. 2009. * [11] L. Xie and Q. Wan, \"Automatic modulation recognition for phase shift keying signals with compressive measurements,\" _IEEE Wireless Commun. Lett._, vol. 7, no. 2, pp. 194-197, Apr. 2018. * [12] Y. LeCun, Y. Bengio, and G. Hinton, \"Deep learning,\" _Nature_, vol. 521, pp. 436-444, May 2015. * [13] B. Kim, J. Kim, H. Chae, D. Yoon, and J. W. Choi, \"Deep neural network based automatic modulation classification technique,\" in _Proc. Int. Conf. Int. Commun. Technol. Comerg._, Jeju, South Korea, Oct. 2016, pp. 579-582. * [14] A. Ali and F. Yangyu, \"Automatic modulation classification using deep learning based on sparse autoencoders with nonnegativity constraints,\" _IEEE Signal Process. Lett._, vol. 24, no. 11, pp. 1626-1630, Nov. 2017. * [15] Y. Tu, Y. Lin, C. Hou, and S. Mao, \"Complexed-valued networks for automatic modulation classification,\" _IEEE Trans. Veh. Technol._, vol. 69, no. 9, pp. 10085-10089, Sep. 2020. * [16] T. Huynh-The, C.-H. Hua, Q.-V. Pham, and D.-S. Kim, \"MCNet: An efficient CNN architecture for robust automatic modulation classification,\" _IEEE Commun. Lett._, vol. 24, no. 4, pp. 811-815, Apr. 2020. * [17] S.-H. Kim, J.-W. Kim, V.-S. Doan, and D.-S. Kim, \"Lightweight deep learning model for automatic modulation classification in cognitive radio networks,\" _IEEE Access_, vol. 8, pp. 197532-197541, Nov. 2020. * [18] S. Huang, L. Chai, Z. Li, D. Zhang, Y. Yao, Y. Zhang, and Z. Feng, \"Automatic modulation classification using compressive convolutional neural network,\" _IEEE Access_, vol. 7, pp. 79636-79643, 2019. * [19] V.-S. Doan, T. Huynh-The, C.-H. Hua, Q.-V. Pham, and D.-S. Kim, \"Learning constellation map with deep CNN for accurate modulation recognition,\" in _Proc. GLOBECOM IEEE Global Commun. 
Conf._, Dec. 2020, pp. 1-6. * [20] Y. Kumar, M. Sheoran, G. Jajoo, and S. K. Yadav, \"Automatic modulation classification based on constellation density using deep learning,\" _IEEE Commun. Lett._, vol. 24, no. 6, pp. 1275-1278, Jun. 2020. * [21] S. Peng, H. Jiang, H. Wang, H. Alwaged, Y. Zhou, M. M. Sebdani, and Y.-D. Yao, \"Modulation classification based on signal constellation diagrams and deep learning,\" _IEEE Trans. Neural Netw. Learn. Syst._, vol. 30, no. 3, pp. 718-727, Mar. 2019.

Wei-Tao Zhang received the Ph.D. degree in control science and engineering from Xidian University, Xi'an, China, in 2011. He is currently an Associate Professor with the School of Electronic Engineering, Xidian University. He is also a Faculty Research Fellow with the Research Institute of Advanced Remote Sensing Technology, Xidian University. His research interests include blind signal processing, tensor analysis, and machine learning.

Dan Cui received the B.S. degree in electronic and information engineering from the Xi'an University of Science and Technology, Xi'an, China, in 2019. She is currently pursuing the Ph.D. degree with the Department of Electronic Engineering, Xidian University, Xi'an. Her current research interest includes machine learning.

Shun-Tian Lou (Member, IEEE) was born in Zhejiang, China, in 1962. He received the B.Sc. degree in automatic control and the M.Sc. degree in electronic engineering from Xidian University, Xi'an, China, in 1985 and 1988, respectively, and the Ph.D. degree in navigation guidance and control from Northwest Polytechnical University, Xi'an, in 1999. From 1999 to 2002, he was a Postdoctoral Fellow with the Institute of Electronic Engineering, Xidian University. He is currently a Professor with the School of Electronic Engineering, Xidian University. His research interests include signal processing, pattern recognition, and intelligent control using neural networks and fuzzy systems.
Convolutional neural network (CNN) models have recently demonstrated impressive classification and recognition performance in image and video processing. In this paper, we investigate the application of CNNs to identifying modulation classes for digitally modulated signals. First, the received baseband data samples of the modulated signal are gathered and transformed to generate constellation-like training images for convolutional networks. Among the resulting training images, the proposed convolutional gray image is preferred for network training and inference because of its lower computational burden. Second, we propose to use a multiple-scale convolutional neural network (MSCNN) as the classifier. The skip-connection technique is deployed to mitigate the negative effects of vanishing gradients and overfitting during the network training process. Numerical simulations have been carried out to validate the effectiveness of the proposed scheme; the results show that the proposed scheme outperforms the traditional algorithms in terms of classification accuracy.

This work was supported by the National Natural Science Foundation of China under Grant 62071350.
# Retrieving the 3-D Tropospheric Wet Refractivity Field From a Standalone Ground-Based GNSS Station With A Priori Information: Theory and Simulation

Xianjie Li, Jingna Bai, Jean-Pierre Barriot, Yidong Lou, and Weixing Zhang

Manuscript received 6 February 2024; revised 30 April 2024; accepted 5 June 2024. Date of publication 11 June 2024; date of current version 27 June 2024. This work was supported in part by the University of French Polynesia (UPF) under a Ph.D. grant, in part by the Geodesy Observatory of Tahiti through the French Space Agency (CNES) under the Decision Aide à la Recherche (DAR) grant, in part by the National Natural Science Foundation of China under Grant 42174027, in part by the Fundamental Research Funds for the Central Universities under Grant 2042022xf1198, and in part by the Natural Science Foundation of Hubei Province (Innovative Group Program) under Grant 2024AAFA023. _(Corresponding author: Xianjie Li.)_

Xianjie Li and Jean-Pierre Barriot are with the Geodesy Observatory of Tahiti, University of French Polynesia, 98702 Faa'a, French Polynesia (e-mail: [email protected]; [email protected]). Jingna Bai and Weixing Zhang are with the GNSS Research Center, Wuhan University, Wuhan 430079, China (e-mail: [email protected]; [email protected]). Yidong Lou is with the GNSS Research Center, Wuhan University, Wuhan 430079, China, and also with Hubei Luojia Laboratory, Wuhan University, Wuhan 430079, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TGRS.2024.3412689

## I Introduction

The Earth's atmosphere is rarely static but consists of ubiquitous nonturbulent and turbulent flows, particularly in the troposphere [1, 2]. As a result, variations of atmospheric variables in the troposphere, e.g., temperature, water vapor, and refractivity [3], can be found on various spatiotemporal scales (e.g., the synoptic scale, mesoscale, and microscale) [4, 5]. These variations in water vapor content make it difficult to determine or model the signal delays when radio-wave signals pass through the troposphere, i.e., the tropospheric delays [6]. Moreover, the variability of the water vapor content in the atmosphere is crucial by itself because it is inextricably linked to extreme weather events and climate change [7]. The ground-based Global Navigation Satellite System (GNSS) is nowadays becoming one of the most promising techniques to continuously observe the water vapor content in the troposphere with the advent of GNSS meteorology [8, 9]. Conventionally, 1-D integrated amounts of water vapor (IWVs) can be obtained from the integrated tropospheric delays in GNSS signals, i.e., the slant wet delays (SWDs) and zenith wet delays (ZWDs). A 2-D ZWD/IWV map is also available with a dense ground-based GNSS network (see [10, 11]). In terms of the 3-D distribution of the water vapor field, a lot of attention has been paid to GNSS tomography over the years (see [12, 13, 14, 15, 16, 17, 18, 19, 20]). However, a dense GNSS network is often required by GNSS tomography for collecting sufficient slant IWVs as input, which is too heavy to implement. This prerequisite restricts the application of GNSS tomography on a relatively small island where only a standalone GNSS station is available, e.g., Tahiti Island in French Polynesia, with a diameter of about 30 km.
In this case, one may have other options for retrieving the 3-D water vapor field, such as the numerical weather models (NWMs) and water vapor differential absorption lidar (WVDIAL). Unfortunately, the resolution of the former is too low at present, e.g., the latest ERA5 now provides hourly products with a horizontal resolution of about 30 km [21], and it is still not economically feasible for the latter to build a network for observing the 3-D water vapor field [22]. To fill this gap, here in our study, we propose to apply a new model of the tropospheric delays, or more precisely, the SWDs, to retrieve the 3-D tropospheric wet refractivity field on a local scale overhead a standalone ground-based GNSS station. Here, the local scale is defined as a spatial scale ranging from several kilometers (2\\(\\sim\\)5 km) to hundreds of kilometers (\\(\\sim\\)100 km). The corresponding water vapor field could be inferred from the 3-D wet refractivity field since the refractivity is a function of temperature, pressure, humidity, and electric density in the atmosphere [23]. Hence, we will hereinafter focus on the 3-D wet refractivity field. Our new method follows the works by Barriot and Feng [4] and Barriot et al [24], where the wet refractivity field is represented by a well-recognized exponential decay together with a relatively small term \\(\\varepsilon_{w}\\) reflecting the fluctuation/departure of the refractivity with respect to the exponential decay law. As proposed in [4], the term \\(\\varepsilon_{w}\\) can be represented as a 3-D (or 4-D if the temporal evolution is concerned) series expansion based on a set of predefined orthogonal basis functions. By definition, the SWD is an integration of the wet refractivity along the ray path. Thereby, this integral relationship gives us a Radon transform [25]. Solving the 3-D wet refractivity field with SWDs is essentially a typical Radon inverse problem. Known as a classical ill-posed problem, this Radon inverse problem needs to be solved by applying some regularizations [26]. The truncated singular value decomposition (tSVD) method was used in the previous work by Barriot et al. [24]. However, some important features of the field may be lost due to truncation. A better way to obtain a physically acceptable solution is to introduce additional physical information about the observables or wet refractivity field from diverse sources [24]. Thereafter, this ill-posed problem can be solved with the Tikhonov regularization method as shown in the framework of radar tomography (see [27, 28]). Therefore, in this work, we investigate the availability of the a priori information for reconstructing the 3-D wet refractivity field based on SWDs derived from a single GNSS station. Two kinds of appropriate a priori information are proposed and then adopted for a simulation study. The simulation and reconstruction form a closed-loop test to validate our proposed a priori information and the inverse process, which also reveals an alternative potential method for modeling and retrieving the 3-D wet refractivity field in the troposphere from a single GNSS station. This article is organized as follows. To have a clear picture of the a priori information that is applicable to our study, possible a priori information related to SWDs and wet refractivity field in previous studies is revisited in Section II, leading to two kinds of proposed a priori information. 
The method that is used to model and simulate SWDs is introduced in Section III, where the adaptation of the a priori information proposed in Section II is elucidated. The inverse process is also given in this section with details. In Section IV, the experimental results of the simulation and reconstruction are presented and discussed. Finally, our conclusion is given in Section V. ## II Revisit of the a Priori Information for Regularizations The concept of introducing the a priori information of parameters or observations in data analysis is not new in the space geodesy community, e.g., the a priori empirical elevation-dependent weighted matrix for observations in GNSS data processing. Here, in our work, specifically, by the a priori information, we mean the temporal or spatial correlation/(co)variance function of the quantity that is of concern, i.e., SWDs or wet refractivity. One possible way to obtain such a priori information is from atmospheric turbulence theory [29, 30]. By assuming a statistically homogeneous and isotropic turbulent atmosphere, the covariance function of the wet refractive index can be directly computed from the refractive index structure function that follows the well-known 2/3 Kolmogorov scaling law (see [31, 32]). These turbulence-based covariance functions of wet refractive index were successfully introduced as the a priori information in the Kalman filter of GPS tomography for simulation studies, see [33, 34]. Moreover, the (co)variance function of SWDs can be derived from the (co)variance matrix of phase observations in space geodesy if one assumes that the correlations between them are predominantly introduced by isotropic turbulence occupying a \"slab\" boundary layer (typically below 1-2 km altitude). Pioneering work in terms of correlations in very long baseline interferometry (VLBI) observations can be found in [35] (or the TL model). Similarly, using the spectral representation of atmospheric turbulence, Schon and Brunner [36] proposed a generalized form of the TL model. Both methods were then adopted and verified in many other applications in space geodesy with both real datasets and simulations (see [5, 6, 37, 38, 39, 40, 41, 42]) as it has been shown to improve the quality and precision description of parameter estimations. However, all the above covariance functions are derived based on turbulence models, the parameters of which are given empirically and are only approximations to the truth. One needs to be cautious when applying these covariance functions since the assumptions that these turbulent models underlie may no longer be valid in real cases. On the other hand, according to the conventional model of SWDs in GNSS data processing with mapping functions and horizontal gradients, statistical information on ZWDs or horizontal gradients could also give us indications for the covariance function of SWDs. Indeed, the variations of GNSS-derived ZWD/IWV on different time scales have been analyzed both in meteorological (see [43, 44, 45, 46], and references therein) and climatological studies (see [8, 11, 47, 48], and references therein) over the years. Horizontal gradients were also found to help describe the anisotropy of the water vapor field and detect small-scale convective structures as a valuable indicator (e.g., [49, 50, 51, 52]). Nevertheless, the temporal or spatial variations of ZWD/IWV before and after the onset of severe weather events are the main target of previous literature dealing with severe weather events [46]. 
Only temporal variations of the 1-D IWV are analyzed, which cannot represent the 3-D nature of the water vapor field [44]. A similar problem can be found in climate research with GNSS as the studies so far have been limited to investigating the long-term linear trends in IWVs [45]. Another alternative way is to derive the covariance function from the statistical property of the SWDs or wet refractivity field sampled over NWMs on various spatiotemporal scales. Again, as we mentioned before, the spatial resolution of NWMs at present is too low to be applied to our case. In summary, the covariance function of SWDs or the wet refractivity that we are looking for should describe their correlations within a relatively small spatial scale (tens of kilometers) but with various time scales (ranging from minutes to weeks). This means that both atmospheric turbulent and nonturbulent flows on the microscale, mesoscale, and even the synoptic scale are concerned in our case. The impacts of these atmospheric flows on the covariance function of SWDs or the refractivity field hence should be considered. Fortunately, Zhang et al. [53] recently found a covariance function of SWDs that applies to our case. Based on real datasets collected at one GNSS station for about eight months, Zhang et al. [53] presented the statistical property of an almost stationary process with respect to the elevation angle \\(el\\), i.e., \\(\\mathrm{SE}=\\mathrm{SWD}\\cdot\\sin(el)\\). This approach was first used in previous studies of the SWDs, e.g., [40, 54]. In Zhang et al.'s work [53], this SE series was expanded with spherical harmonics in space and trigonometric functions in time. Thereby, an angular correlation length of about \\(20^{\\circ}\\) and a correlation time of up to four days are reported. Here, in this work, we will adopt this spatial covariance information as the a priori information for SWDs or, strictly speaking, SEs. It is worth noting that an additional a priori information (or constraint) needs to be introduced, which is implicitly suggested in our model of \\(\\varepsilon_{w}\\). Since we assume that the term \\(\\varepsilon_{w}\\) can be modeled by a set of orthogonal, \"well-behaved\" functions, staying within certain bounds, the parameters of functions generally should decrease in magnitude as the degrees of functions increase according to Parseval's theorem [55, 56]. Such a constraint is widely used in modeling the gravitational and geopotential field of the Earth with spherical harmonics (see [57, 58, 59, 60]), which is known as Kaula's rule [55, 61]. This rule of thumb states that the degree variance of normalized harmonic parameters is proportional to the inverse cube of the degree \\(l\\), i.e., \\(\\sim\\)\\(10^{-10}\\)/\\(l^{3}\\)[55]. Considering that spherical harmonics are included in our model (see Section III) and the similarity in inversion studies between the atmospheric refractivity field and the geopotential field, it is natural to come to a hypothesis that a similar Kaula's rule taking the power-law form will provide the a priori constraint for our model parameters. However, this Kaula-type power rule needs to be slightly adapted to our case, which will be discussed in detail in Section III-B. 
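For readers who wish to experiment with this kind of spatial a priori information, the following minimal Python sketch illustrates one simple way to approximate an angular covariance of the SEs from a set of SWD samples: products of SE residuals are binned by angular separation. This is only an illustrative estimator, cruder than the spherical-harmonic expansion used in [53]; all function and variable names are ours.

```python
import numpy as np

def angular_separation_deg(el1, az1, el2, az2):
    """Great-circle separation (degrees) between directions given by
    elevation/azimuth angles in radians."""
    cos_psi = (np.sin(el1) * np.sin(el2)
               + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    return np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))

def empirical_se_covariance(el, az, swd, bin_width=2.0):
    """Crude binned estimate of Cov_SE(psi) from SWD samples.

    el, az : direction of each slant observation (radians)
    swd    : slant wet delays (mm); SE = SWD * sin(el), as in [53]
    """
    se = swd * np.sin(el)
    dse = se - se.mean()                      # residuals about the sample mean
    psi = angular_separation_deg(el[:, None], az[:, None],
                                 el[None, :], az[None, :])
    prod = np.outer(dse, dse)
    edges = np.arange(0.0, 180.0 + bin_width, bin_width)
    idx = np.digitize(psi.ravel(), edges)
    cov = np.array([prod.ravel()[idx == k].mean() if np.any(idx == k) else np.nan
                    for k in range(1, len(edges))])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, cov
```

An estimate of this kind, when fitted with an analytic model, would play the same role as the covariance information adopted from [53] in the remainder of this work.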
## III Methodology ### _Modelization_ A good approximation of the refractive index \\(n(r)\\) in the troposphere can be taken as a twofold exponential formula [4] \\[n(r)=1+N_{h}+N_{w}=1+N_{h}^{0}\\cdot\\exp\\!\\left(-\\frac{r-r_{0}}{H_{h}}\\right)+N_{w}^{0}\\cdot\\exp\\!\\left(-\\frac{r-r_{0}}{H_{w}}\\right) \\tag{1}\\] where \\(N_{h}\\) is the hydrostatic refractivity; \\(N_{w}\\) is the wet refractivity; \\(r\\) is the geocentric radius; \\(r_{0}\\) is the geocentric radius at the GNSS receiver; \\(N_{h}^{0}\\) and \\(N_{w}^{0}\\) are the hydrostatic and wet refractivity at the receiver, respectively; \\(H_{h}\\) is the scale height of \\(N_{h}\\); and \\(H_{w}\\) is the scale height of \\(N_{w}\\), also known as the water vapor scale height. Since delays caused by the hydrostatic component (i.e., \\(N_{h}\\)) can be accurately determined by a model with atmospheric pressure measurements (see [62]), we here only focus on \\(N_{w}\\), which is highly variable both in time and space due to the highly variable water vapor content in the troposphere [63]. Taking into account both the nonturbulent and turbulent flows in the troposphere, the wet refractivity field can be represented as [4, 24] \\[N_{w}=N_{w}^{0}\\cdot\\exp\\!\\left(-\\frac{r-r_{0}}{H_{w}}\\right)\\cdot\\left(1+\\varepsilon_{w}(x,y,z,t)\\right) \\tag{2}\\] where a relatively small term \\(\\varepsilon_{w}\\) (\\(|\\varepsilon_{w}|\\ll 1\\)) is introduced to represent the fluctuation/departure of the refractivity with respect to the general exponential decay law; \\(x\\), \\(y\\), and \\(z\\) are spatial coordinates in a given frame; and \\(t\\) denotes the time. As proposed in [4], the term \\(\\varepsilon_{w}\\) in (2) can be represented as a 4-D series expansion with a set of predefined orthogonal basis functions. Here, in our case, we adopt a set of orthonormal functions with respect to time \\(T_{u}(t)\\) up to degree \\(u_{\\max}\\) to describe the variations of \\(\\varepsilon_{w}\\) in time and 3-D Zernike functions up to degree \\(n_{\\max}\\) to express \\(\\varepsilon_{w}\\) in space in spherical coordinates as \\[\\varepsilon_{w}(x,y,z,t)=\\varepsilon_{w}(r,\\theta,\\lambda,t)=\\sum_{u=0}^{u_{\\max}}\\sum_{n=0}^{n_{\\max}}\\sum_{l=0}^{n}\\sum_{m=0}^{l}T_{u}(t)R_{n}^{l}(r)Y_{l}^{m}(\\theta,\\lambda)p_{nl,u}^{m} \\tag{3}\\] where radial polynomials \\(R_{n}^{l}(r)\\) (\\(0\\leq r\\leq 1\\)) and spherical harmonics \\(Y_{l}^{m}(\\theta,\\lambda)\\) form a set of orthonormal basis functions over a whole ball (i.e., the 3-D Zernike functions), with \\(r\\), \\(\\theta\\), and \\(\\lambda\\) denoting the radial distance, polar angle, and azimuthal angle in the spherical coordinate system, respectively. Here, \\(u\\), \\(n\\), \\(m\\), and \\(l\\) are integers with \\(u\\geq 0\\), \\(0\\leq l\\leq n\\), \\(0\\leq m\\leq l\\), \\(n-l\\) even, and \\(m+l\\) even for orthogonality over the hemisphere. \\(p_{nl,u}^{m}\\) are the parameters in the model to be estimated. The definition of 3-D Zernike functions is highly technical; we therefore refer to [64], [65], and [66] for further details. Although the 4-D wet refractivity field is of more concern, taking the temporal variations into account will introduce a high degree of freedom in the model, which is too complicated to deal with at the moment. 
At this stage, we only consider the 3-D wet refractivity field, i.e., the wet refractivity field overhead a standalone GNSS station at a fixed measurement epoch [i.e., \\(t=t_{F}\\) and \\(T_{u}(t_{F})=1\\)]. The 4-D case with time \\(t\\) involved will be investigated in our future work. Hence, the term \\(t\\) in \\(\\varepsilon_{w}\\) is removed from (2) and (3) in this work as \\[\\varepsilon_{w}(x,y,z)=\\varepsilon_{w}(r,\\theta,\\lambda)=\\sum_{n=0}^{n_{\\max}}\\sum_{l=0}^{n}\\sum_{m=0}^{l}R_{n}^{l}(r)Y_{l}^{m}(\\theta,\\lambda)p_{nl}^{m}. \\tag{4}\\] For simplicity, we hereinafter use a matrix representation of the model \\[\\mathbf{\\varepsilon}_{w}=\\mathbf{G}_{w}\\cdot\\mathbf{p} \\tag{5}\\] with \\(\\mathbf{G}_{w}\\) and \\(\\mathbf{p}\\) representing the matrix of coefficients \\(R_{n}^{l}(r)Y_{l}^{m}(\\theta,\\lambda)\\) and the vector of parameters \\(p_{nl}^{m}\\) in (4), respectively. By definition, the SWD is an integration of the wet refractivity along the ray path \\(S\\) launching at elevation angle \\(el\\) and azimuth angle \\(\\lambda\\), which reads \\[\\text{SWD}(el,\\lambda)=\\int_{S(el,\\lambda)}N_{w}^{0}\\cdot\\exp\\left(-\\frac{r-r_{0}}{H_{w}}\\right)\\cdot(1+\\varepsilon_{w})ds. \\tag{6}\\] Using the ansatz [24] \\[\\int\\exp(-a\\mu)\\cdot\\mu^{c}d\\mu=-\\frac{1}{a^{c+1}}\\Gamma(c+1,a\\mu) \\tag{7}\\] equation (6) can be further expanded as \\[\\text{SWD}(el,\\lambda)=\\overline{\\text{SWD}}(el,\\lambda)+\\delta\\text{SWD}(el,\\lambda) \\tag{8}\\] where \\[\\overline{\\text{SWD}}(el,\\lambda)=-\\frac{N_{w}^{0}H_{w}}{\\sin(el)}\\left(\\exp\\left(-\\frac{S\\cdot\\sin(el)}{H_{w}}\\right)-1\\right) \\tag{9}\\] and \\[\\delta\\text{SWD}(el,\\lambda)=N_{w}^{0}S\\cdot\\sum_{n=0}^{n_{\\max}}\\sum_{l=0}^{n}\\sum_{m=0}^{l}\\left[\\sum_{v=0}^{k}q_{kl}^{v}\\cdot\\gamma(2v+l+1,a)\\cdot a^{-(2v+l+1)}\\right]\\cdot Y_{l}^{m}\\left(\\frac{\\pi}{2}-el,\\lambda\\right)\\cdot p_{nl}^{m}. \\tag{10}\\] We assume that the ray path is limited to a certain azimuthally fixed vertical plane. Here, \\(q_{kl}^{v}\\) are the coefficients in the radial polynomials \\(R_{n}^{l}(r)\\), with integer \\(v\\) taking values from 0 to \\(k=(n-l)/2\\) and \\(a=S\\cdot\\sin(el)/H_{w}\\), where \\(S\\) is the ray path length, \\(H_{w}\\) is the water vapor scale height, \\(\\mu\\) is the normalized variable of integration along the ray path (see the Appendix), and \\(\\gamma\\) denotes the lower incomplete Gamma function. Further derivations of (8)-(10) can be found in the Appendix. By analogy, once SWDs at various elevation and azimuth angles are obtained, one can rewrite (8) into its matrix form \\[\\mathbf{SWD}=\\overline{\\mathbf{SWD}}+\\mathbf{G_{I}}\\cdot\\mathbf{p}. \\tag{11}\\] The so-called SE is simply the product of the SWD and the sine of the corresponding elevation angle \\(el\\), i.e., [53] \\[\\text{SE}(el,\\lambda)=\\text{SWD}(el,\\lambda)\\cdot\\sin(el) \\tag{12}\\] of which the matrix form reads \\[\\mathbf{SE}=\\overline{\\mathbf{SE}}+\\mathbf{G}\\cdot\\mathbf{p}. \\tag{13}\\] ### _A Priori Information_ As aforementioned, the a priori information given by Zhang et al. [53] is adopted in our simulation study. The temporal evolution is not taken into account at this moment, and therefore, only the spatial covariance function of the SEs with an angular correlation length of about \\(20^{\\circ}\\) is used here. 
To put it simply, we approximate this spatial covariance function in an analytical form as \\[\\text{Cov}_{\\text{SE}}(\\Psi)=\\delta_{0}^{2}\\cdot\\exp\\bigl{(}-(\\Psi/20)^{2}\\bigr{)} \\tag{14}\\] where \\(\\Psi\\) is the spherical distance in degrees and \\(\\delta_{0}^{2}\\) is the variance. We will hereinafter use \\(\\mathbf{Cov}_{\\text{SE}}\\) to denote the covariance function in its matrix form. An additional constraint for the parameters \\(\\mathbf{p}\\) needs to be introduced according to Parseval's theorem. Taking Kaula's rule as a prototype, we here propose a Kaula-like rule for the variance of the parameters \\(\\mathbf{p}\\) in our case as \\[\\delta_{nl}^{2}=\\sum_{m}\\bigl{|}p_{nl}^{m}\\bigr{|}^{2}=\\frac{A}{n^{\\alpha}l^{\\beta}} \\tag{15}\\] considering that both degrees \\(n\\) and \\(l\\) in the 3-D Zernike functions have physical significance. Order \\(m\\) is not considered as it has no physical significance in the degree variance of spherical harmonics. Here, \\(A>0\\) is an empirical constant, and \\(\\alpha>0\\) and \\(\\beta>0\\) are the powers of degrees \\(n\\) and \\(l\\), respectively. In practice, following the derivation of Kaula's rule [61], one possible method to determine the unknowns in (15) (i.e., \\(A\\), \\(\\alpha\\), and \\(\\beta\\)) is from a spectral analysis of the wet refractivity field over the spatial and time scales that are of concern, e.g., from NWM products. Unfortunately, this is not applicable in our case due to the low resolution of the present NWM products. Nevertheless, a good indication for the values \\(\\alpha=2\\) and \\(\\beta=2\\) can be found in previous studies related to the spectral behavior of wind speed or other atmospheric variables (see [67, 68, 69], and references therein). The determination of the constant \\(A\\) is case-dependent. In this way, the a priori constraint with a diagonal (co)variance matrix \\(\\mathbf{C_{p}}\\) for the parameters \\(\\mathbf{p}\\) can be obtained from (15). Provided that \\(\\mathbf{Cov}_{\\text{SE}}\\) and \\(\\mathbf{C_{p}}\\) are available, the a priori information of the parameters \\(\\mathbf{p}\\) can be derived as [70, 71] \\[\\mathbf{Cov_{p}}=\\bigl{(}\\mathbf{G}^{\\text{T}}\\mathbf{G}+\\mathbf{C_{p}}^{-1}\\bigr{)}^{-1}\\mathbf{G}^{\\text{T}}\\mathbf{Cov_{\\text{SE}}}\\mathbf{G}\\bigl{(}\\mathbf{G}^{\\text{T}}\\mathbf{G}+\\mathbf{C_{p}}^{-1}\\bigr{)}^{-1} \\tag{16}\\] recalling the model of SEs in (13). ### _Simulation_ The a priori information of the parameters \\(\\mathbf{p}\\), i.e., \\(\\mathbf{Cov_{p}}\\) in (16), allows us to simulate a set of Gaussian random parameters \\(\\mathbf{p}\\), as well as the SE series. This can be done with the spectral decomposition approach [72]. In this approach, the SVD of \\(\\mathbf{Cov_{p}}\\) is first implemented to obtain the orthonormal matrix of eigenvectors \\(\\mathbf{V}\\) and the eigenvalues \\(\\mathbf{\\Lambda}\\) as \\[\\mathbf{Cov_{p}}=\\mathbf{V}\\cdot\\mathbf{\\Lambda}\\cdot\\mathbf{V}^{\\text{T}}. \\tag{17}\\] Thereafter, a set of parameters \\(\\mathbf{p}^{*}\\) can be simulated by \\[\\mathbf{p}^{*}=\\mathbf{V}\\cdot\\sqrt{\\mathbf{\\Lambda}}\\cdot\\mathbf{w} \\tag{18}\\] with a vector of random variables \\(\\mathbf{w}=[w_{1},w_{2},\\ldots]\\) following a Gaussian distribution with mean 0 and variance 1. In this way, the simulated parameters \\(\\mathbf{p}^{*}\\) will have zero mean and (co)variance \\(\\mathbf{Cov_{p}}\\). 
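A small Python sketch of this chain, building the diagonal constraint matrix from the Kaula-like rule (15), forming the a priori parameter covariance with (16), and drawing a random parameter set via the spectral decomposition (17)-(18), may help clarify the steps. How the degree variance in (15) is apportioned among the admissible orders \\(m\\), and how the \\(n=0\\) or \\(l=0\\) terms are handled, are our own illustrative choices; the parameter ordering must of course match the columns of the design matrix.

```python
import numpy as np

def kaula_like_cp(n_max, A, alpha=2.0, beta=2.0):
    """Diagonal a priori (co)variance matrix C_p from the Kaula-like rule (15).
    Each admissible coefficient p_{nl}^m receives A / (n^alpha * l^beta); terms
    with n = 0 or l = 0 are given the value for degree 1 (a choice the text
    does not prescribe). Ordering: loop over n, l, m as in (4)."""
    variances = []
    for n in range(n_max + 1):
        for l in range(n + 1):
            if (n - l) % 2:          # n - l must be even
                continue
            for m in range(l + 1):
                if (m + l) % 2:      # m + l must be even
                    continue
                variances.append(A / (max(n, 1) ** alpha * max(l, 1) ** beta))
    return np.diag(variances)

def a_priori_cov_p(G, C_p, Cov_SE):
    """A priori covariance of the parameters, following (16)."""
    N = np.linalg.inv(G.T @ G + np.linalg.inv(C_p))
    return N @ G.T @ Cov_SE @ G @ N

def draw_parameters(Cov_p, rng):
    """One realization p* with zero mean and covariance Cov_p, per (17)-(18)."""
    lam, V = np.linalg.eigh(Cov_p)   # spectral decomposition of the symmetric Cov_p
    lam = np.clip(lam, 0.0, None)    # guard against round-off negative eigenvalues
    return V @ (np.sqrt(lam) * rng.standard_normal(lam.size))
```

The eigendecomposition route mirrors (17)-(18); `numpy.linalg.eigh` is used here because the covariance matrix is symmetric.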
Recalling the model of SEs in (13), a set of SE series could also be generated based on the simulated parameters \\(\\mathbf{p}^{*}\\) as \\[\\mathbf{SE}^{*}=\\overline{\\mathbf{SE}}^{*}+\\mathbf{G}\\cdot\\mathbf{p}^{*}. \\tag{19}\\] ### _Inverse Process_ Taking the simulated \\(\\mathbf{SE}^{*}\\) as input, let us now consider the linear inverse problem given in (13). The inverse problem is inherently ill-posed because the input signals/observations are integrated values, e.g., SE/SWDs. To obtain a reasonable and stable solution, here in our case, we propose to introduce the derived a priori covariance matrix of the parameters \\(\\mathbf{Cov_{p}}\\) for the regularization of the ill-posed inverse problem. Under the least-squares principle, this ill-posed linear inverse problem can then be solved following the so-called Tikhonov regularization method [71, 73] as \\[\\widehat{\\mathbf{p}}^{*}=\\left(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}+\\chi\\cdot\\mathbf{Cov_{p}^{-1}}\\right)^{-1}\\mathbf{G}^{\\mathsf{T}}\\left(\\mathbf{SE}^{*}-\\overline{\\mathbf{SE}}^{*}\\right) \\tag{20}\\] where \\(\\chi>0\\) is a regularization parameter that indicates the weighting of the additional a priori information in the inverse process. Several different strategies can be applied to choose this regularization parameter \\(\\chi\\) (see [73] and references therein), e.g., making a compromise between the norm of the residuals and the norm of the solution. However, no \"best\" strategy exists as the choice depends on the application, and a pragmatic one is more practical. Thereafter, we can reconstruct the SE series with (13) as \\[\\widehat{\\mathbf{SE}}^{*}=\\overline{\\mathbf{SE}^{*}}+\\mathbf{G}\\cdot\\widehat{\\mathbf{p}}^{*} \\tag{21}\\] and also the wet refractivity field in (2) with \\[\\widehat{\\mathbf{\\varepsilon}}_{w}=\\mathbf{G}_{w}\\cdot\\widehat{\\mathbf{p}}^{*} \\tag{22}\\] based on (5). ## IV Simulations and Experiment Results ### _Simulation Setup_ As a rule of thumb, a single ground-based GNSS station can observe water vapor information within about a 100-km radius in the horizontal direction (with a cutoff elevation angle of 10\\({}^{\\circ}\\)) [8]. Taking a single GNSS station located on Tahiti Island as a prototype, we note that the typical height of the tropopause in this tropical region is 16.5 km. Therefore, in our simulation, a 3-D refractivity field with a horizontal scale of 200 \\(\\times\\) 200 km and a vertical extent of up to 16.5 km is considered. Some simulation parameters are fixed to constants, such as the Earth's radius \\(r_{0}=6371\\) km, \\(N_{w}^{0}=128\\cdot 10^{-6}\\), and the water vapor scale height \\(H_{w}=2.391\\) km according to [24], and the other parameters will be determined in the following case-dependent analysis. To give a good spatial representation of the field, the SWD/SEs are simulated at random elevation and azimuth angles on the hemisphere over the single station. A cutoff elevation angle of 10\\({}^{\\circ}\\) is chosen to avoid the anomalies near the horizon since no observations below the horizon are available in reality to constrain the function behavior. Finally, 4148 SWD/SEs over the hemisphere were sampled and simulated in this experiment, and their distribution is given in Fig. 1. Note that in real cases, the distribution of SWDs overhead a standalone GNSS station is much sparser with periodic GNSS satellite tracks. Based on the spatial distribution in Fig. 1 and using (14), \\(\\mathbf{Cov_{\\mathrm{SE}}}\\) is generated. 
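As an illustration of this last step, a possible construction of \\(\\mathbf{Cov}_{\\mathrm{SE}}\\) from the analytic model (14), given the sampled elevation and azimuth angles, is sketched below. The sample size and elevation cutoff follow the setup described above; the variable names are ours.

```python
import numpy as np

def cov_se_from_model(el, az, var0=10.0, corr_len_deg=20.0):
    """A priori covariance matrix of the SEs from (14).

    el, az : elevation and azimuth angles of the sampled directions (radians)
    var0   : variance delta_0^2 of the SEs
    """
    cos_psi = (np.sin(el[:, None]) * np.sin(el[None, :])
               + np.cos(el[:, None]) * np.cos(el[None, :])
               * np.cos(az[:, None] - az[None, :]))
    psi = np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))  # spherical distance (deg)
    return var0 * np.exp(-(psi / corr_len_deg) ** 2)

# random directions above a 10 deg elevation cutoff, as in the simulation setup
rng = np.random.default_rng(1)
el = np.radians(rng.uniform(10.0, 90.0, size=4148))
az = rng.uniform(0.0, 2.0 * np.pi, size=4148)
Cov_SE = cov_se_from_model(el, az)   # a 4148 x 4148 matrix (~140 MB in float64)
```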
The variance \\(\\delta_{0}^{2}\\) is set to 10 to ensure that the variations of the simulated SEs are in a physically reasonable range, i.e., about 305 \\(\\pm\\) 10 mm in our case. By adopting the model with 3-D Zernike functions up to degree 20 (i.e., \\(n_{\\text{max}}=20\\)), we generate the design matrix \\(\\mathbf{G}\\) for the SEs based on the setup of elevation and azimuth angles above. The number of unknown parameters is 946. After several trials with different values of \\(A\\) in (15) to derive \\(\\mathbf{Cov_{p}}\\), we found that \\(A\\) significantly affects the relatively small singular values of \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}+\\mathbf{C_{p}^{-1}}\\) in (16). We decided to take a value of \\(A\\) by visual inspection of the decrease pattern of the singular values, as shown in Fig. 2.
Fig. 1: Sky plot of the spatial distribution of the simulated SWD/SEs over a single GNSS station with a cutoff elevation angle of 10\\({}^{\\circ}\\).
Fig. 2: Log10 plots of singular values of \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}\\) (blue) and \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}+\\mathbf{C_{p}^{-1}}\\) with \\(A=2.5\\)\\(\\times\\) 10\\({}^{4}\\) (orange).
Taking \\(A=2.5\\)\\(\\times\\) 10\\({}^{4}\\), we are able to keep all the relatively large singular values close to those of \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}\\) and to suppress the sharp decrease in the smaller singular values of \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}\\). However, even with the addition of the diagonal matrix \\(\\mathbf{C_{p}^{-1}}\\), the matrix \\(\\mathbf{G}^{\\mathsf{T}}\\mathbf{G}+\\mathbf{C_{p}^{-1}}\\) is not guaranteed to be numerically invertible. Here, in our case, we use the tSVD method to get an approximation of the inverse of the matrix \\(\\mathbf{G}^{\\mathrm{T}}\\mathbf{G}+\\mathbf{C}_{\\mathbf{p}}^{-1}\\). By truncating the SVD at the 900 leading singular values, the truncated matrix still retains the major characteristics of \\(\\mathbf{G}^{\\mathrm{T}}\\mathbf{G}+\\mathbf{C}_{\\mathbf{p}}^{-1}\\); e.g., the norm of the truncated matrix accounts for over 99.9% of that of the original \\(\\mathbf{G}^{\\mathrm{T}}\\mathbf{G}+\\mathbf{C}_{\\mathbf{p}}^{-1}\\). This is also indicated in Fig. 3, where the statistical histogram distribution of the differences between them is shown. Only a small fraction of information is lost due to the truncation. To further validate the approximated \\((\\mathbf{G}^{\\mathrm{T}}\\mathbf{G}+\\mathbf{C}_{\\mathbf{p}}^{-1})^{-1}\\) from tSVD, a comparison between the generated \\(\\mathbf{Cov}_{\\mathrm{SE}}\\) and the a posteriori \\(\\mathbf{Cov}_{\\mathrm{SE}}^{\\prime}\\) is shown in Fig. 4, where \\(\\mathbf{Cov}_{\\mathrm{SE}}^{\\prime}=\\mathbf{G}\\mathbf{Cov}_{\\mathbf{p}}\\mathbf{G}^{\\mathrm{T}}\\) is derived according to the error propagation law. A good agreement can be found in the figure as the norm of \\(\\mathbf{Cov}_{\\mathrm{SE}}\\) accounts for over 99.9% of the norm of \\(\\mathbf{Cov}_{\\mathrm{SE}}^{\\prime}\\). ### _Experimental Results_ With \\(\\mathbf{Cov_{p}}\\) derived from (16), a set of Gaussian random parameters \\(\\mathbf{p}^{*}\\) is simulated by applying the spectral decomposition with (17) and (18), as shown in Fig. 5. The corresponding \\(\\mathbf{SE}^{*}\\) values are then generated using (19) (see Fig. 6), where the computation of the mean part \\(\\overline{\\mathbf{SE}^{*}}\\) can be found in the Appendix together with (12). 
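To make the forward/inverse pair concrete, the sketch below shows how one entry of the design matrix \\(\\mathbf{G}\\) could be evaluated with the closed form (10) and the definition (12), using the lower incomplete Gamma function, followed by the Tikhonov step (20) and the SE reconstruction (21); the field reconstruction (22) follows analogously with \\(\\mathbf{G}_{w}\\). The radial-polynomial coefficients \\(q_{kl}^{v}\\) are assumed to be given, a real-valued spherical-harmonic convention is assumed, and the function names are ours.

```python
import numpy as np
from scipy.special import gamma, gammainc, sph_harm

def lower_inc_gamma(s, x):
    # scipy's gammainc is the *regularized* lower incomplete gamma; rescale it
    return gammainc(s, x) * gamma(s)

def g_entry(el, az, n, l, m, q, S, Hw, Nw0):
    """One entry of G for SE = SWD*sin(el), following (10) and (12).
    q : coefficients q_{kl}^v of R_n^l, v = 0..(n-l)/2 (assumed given)."""
    a = S * np.sin(el) / Hw
    bracket = sum(q[v] * lower_inc_gamma(2 * v + l + 1, a) / a ** (2 * v + l + 1)
                  for v in range((n - l) // 2 + 1))
    # real part of scipy's complex spherical harmonic is taken here as a stand-in
    # for the real-valued Y_l^m assumed in the text
    Y = sph_harm(m, l, az, np.pi / 2 - el).real
    return np.sin(el) * Nw0 * S * bracket * Y

def tikhonov_solve(G, SE_star, SE_bar, Cov_p, chi=0.01):
    """Regularized least-squares solution (20) and SE reconstruction (21)."""
    A = G.T @ G + chi * np.linalg.inv(Cov_p)
    p_hat = np.linalg.solve(A, G.T @ (SE_star - SE_bar))
    SE_hat = SE_bar + G @ p_hat
    return p_hat, SE_hat
```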
Taking the simulated SEs as input, the ill-posed inverse problem in (13) can be solved with the Tikhonov regularization method as given in (20). Since the \"true\" values of the parameters are known in our simulation, we selected \\(\\chi=0.01\\) to make a compromise between the norm of the residual vector and the norm of the vector difference between the reconstructed solution and the true solution (see Fig. 7). However, in real cases, a better way to determine \\(\\chi\\) is to compare the reconstructed refractivity field with those observed/derived from other techniques, such as the vertical profile of refractivity derived from radiosondes. The reconstructed parameters \\(\\widehat{\\mathbf{p}}^{*}\\) are shown in Fig. 5 together with the simulated \\(\\mathbf{p}^{*}\\). Residuals of the reconstructed SEs are shown in Fig. 8. Overall, the reconstructed parameters agree well with the simulated ones, and a good agreement could also be found between \\(\\widehat{\\mathbf{SE}}^{*}\\) and \\(\\mathbf{SE}^{*}\\).
Fig. 3: Statistical histogram distribution of the differences between the original and reconstructed matrix \\(\\mathbf{G}^{\\mathrm{T}}\\mathbf{G}+\\mathbf{C}_{\\mathbf{p}}^{-1}\\) with tSVD.
Fig. 4: Covariance functions of SEs with respect to the spherical distances \\(\\Psi\\). The generated a priori covariance function is shown in orange dots and the a posteriori covariance function is given in blue dots.
Fig. 5: Series of the simulated parameters (blue circles) and reconstructed parameters (orange dots).
Fig. 6: Sky plot of the simulated SEs over the hemisphere from the viewpoint of a single GNSS station.
Fig. 7: Choice of the regularization parameter \\(\\chi\\) by achieving a compromise between the norm of the residual vector and the norm of the vector difference of the reconstructed solution and the true solution.
These results are validated in the corresponding statistical histograms of the residuals given in Fig. 9. More specifically, the mean of the SE residuals is about \\(-1.4\\times 10^{-4}\\) mm and the root mean square (rms) is about 0.01 mm. The minimum and maximum SE residuals are about \\(-0.037\\) and 0.05 mm, respectively. Most of the relatively large SE residuals are found at low elevation angles, which may be ascribed to the less constrained behavior of our model near the boundary of the simulated observation geometry. Finally, the 3-D wet refractivity field is retrieved according to (22) and (2). The 2-D profiles of the simulated and reconstructed fluctuating component \\(\\varepsilon_{w}\\) in the north–south direction (azimuth angles \\(0^{\\circ}\\) and \\(180^{\\circ}\\)) are shown and compared in Fig. 10. Taking the exponential decay into account, the corresponding 2-D profiles of the simulated and reconstructed 3-D wet refractivity field in the north–south direction are presented in Fig. 11. An overall good agreement between them can be concluded. A full picture of the 3-D distribution of the simulated and reconstructed \\(\\varepsilon_{w}\\), as well as the wet refractivity field, is presented in Figs. 12 and 13 and Figs. 14 and 15, respectively. Note that all the 3-D plots are presented in the topocentric Cartesian coordinate system of the single GNSS station. Turbulent eddies can be found not only in the 2-D profiles but also in the 3-D distribution of the wet refractivity field overhead the single GNSS station (see Figs. 10, 12, and 13). 
However, the impacts of these turbulent eddies are relatively small and decay with altitude, in line with the exponential decrease of the wet refractivity. Therefore, no apparent fluctuations
Fig. 8: Sky plot of the residuals of the reconstructed SEs compared to the simulated ones.
Fig. 9: Statistical histogram distributions of the residuals of (a) the reconstructed parameters \\(\\mathbf{p}^{*}\\) and (b) the reconstructed \\(\\mathbf{SE}^{*}\\).
Fig. 10: Two-dimensional profiles of (a) simulated and (b) reconstructed fluctuating component of the 3-D wet refractivity field in the north–south direction.
Fig. 11: Two-dimensional profiles of (a) simulated and (b) reconstructed 3-D wet refractivity field in the north–south direction.
Fig. 12: Three-dimensional distribution of the fluctuating component \\(\\varepsilon_{w}\\) of the simulated wet refractivity field.
coarse for us as this conventional model cannot represent the atmospheric conditions/processes on relatively smaller scales (e.g., from hundreds of meters to several kilometers in space and several minutes in time). The data coverage overhead a standalone GNSS station is also limited by the nearly repetitive satellite tracks. Nevertheless, high-spatiotemporal-resolution data coverage is coming soon with multiconstellations, especially the low Earth orbit (LEO) constellations. It has been shown in many previous studies that the upcoming LEO satellites will significantly improve the coverage and the geometry of observations overhead a standalone GNSS station with the so-called LEO-augmented GNSS technique (LeGNSS) (see [77, 78]). By then, SWDs with a higher spatiotemporal resolution can be retrieved with LeGNSS. This would be of great help for applying our method to resolve the 3-D wet refractivity field on a local scale as these SWDs could give a better representation of the real complex weather conditions and atmospheric processes. Therefore, our contribution can be considered as a first step to verify that the proposed method is feasible and robust from a mathematical point of view under the ideal conditions of the simulation. It should be pointed out that the results from real datasets may differ from the ones presented here. This is because our simulation is based on a priori covariance functions, which are blind to the means of the considered quantities. We here simply assume that the mean of the fluctuating component \\(\\varepsilon_{w}\\) of the wet refractivity field is zero. However, this is not always true. As a result, the results of our simulation can only be considered as perturbations \\(\\delta\\varepsilon_{w}\\) superimposed on an unknown \"mean\" (or, better said, a background value) \\(\\tilde{\\varepsilon}_{w}\\), which is also a potentially fluctuating component of the wet refractivity field. In practice, this \"hidden\" mean part must be calibrated using observations from other techniques, e.g., radiosondes, when applying our method to real datasets. The results of modeling \\(\\delta\\varepsilon_{w}\\) from real datasets and from our simulation are only comparable if \\(\\tilde{\\varepsilon}_{w}\\) is defined in such a way that the mean of \\(\\delta\\varepsilon_{w}\\) is zero. The derivation of a physically sound \\(\\tilde{\\varepsilon}_{w}\\) is by itself another interesting topic and needs to be further investigated. However, it is out of the scope of this work. In our future work, we will consider simulating the 4-D wet refractivity field, i.e., with the addition of time evolution. 
The inclusion of time evolution allows us to describe critical atmospheric processes such as deep convection and cloud formation. The introduction of time variability is less problematic from a mathematical point of view as most of the ill-posedness lies with the Radon transform in space. The time scale has not yet been considered, but we infer that it should take a resolution of at least 30 min to accumulate sufficient observations for constraining the model. In that case, the correlation time of up to four days for GNSS-derived SWDs (see Section II) should be included as another piece of a priori information for the time-varying parameters. Besides, the Kaula-like rule needs to be reconsidered according to the degree of the orthonormal functions used to represent the temporal variations. However, the number of unknowns to be solved will increase dramatically by introducing the time variability [see (3)]. The corresponding linear systems with a tremendous number of unknowns (up to tens of thousands) will have to be solved with the help of a highly parallel supercomputer with sufficient memory. Taking the 3-D case in this work as a milestone, a fully mature 4-D model with time evolution involved will eventually be accomplished as we move forward in the direction mentioned above. ## Appendix Assuming that the ray path is limited to a certain azimuthally fixed vertical plane, the 2-D geometry of a ray path \\(S\\) launching at an elevation angle \\(el\\) from a single station is shown in Fig. 16.
Fig. 16: Geometry of a ray path launching from a single station at the elevation angle \\(el\\).
The delay \\(\\overline{\\text{SWD}}\\) therefore can be computed as \\[\\overline{\\text{SWD}}=N_{w}^{0}\\cdot\\int_{S}\\exp\\left(-\\frac{r-r_{0}}{H_{w}}\\right)ds=N_{w}^{0}\\cdot\\int_{0}^{S}\\exp\\left(-\\frac{s\\cdot\\sin(el)}{H_{w}}\\right)ds=-\\frac{N_{w}^{0}H_{w}}{\\sin(el)}\\left[\\exp\\left(-\\frac{S\\cdot\\sin(el)}{H_{w}}\\right)-1\\right]. \\tag{23}\\] The ray path \\(S\\) can usually be obtained by implementing so-called ray-tracing; however, here for simplicity, we take the ray path as a straight line. Thereby, \\(S\\) can be derived from a given \\(el\\) based on the geometry illustrated in Fig. 16 as \\[(r_{0}+S\\cdot\\sin(el))^{2}+(S\\cdot\\cos(el))^{2}=\\big{(}r_{0}+H_{\\text{tropo}}\\big{)}^{2} \\tag{24}\\] \\[S=\\frac{-2r_{0}\\sin(el)+\\sqrt{4r_{0}^{2}\\sin^{2}(el)+4\\cdot\\Big{(}2r_{0}\\cdot H_{\\text{tropo}}+H_{\\text{tropo}}^{2}\\Big{)}}}{2} \\tag{25}\\] where \\(H_{\\text{tropo}}=16.5\\) km is the height of the tropopause in our case. The other parameters can be found in the context of Section III. For the other delay \\(\\delta\\)SWD in (8), we have \\[\\delta\\text{SWD}=N_{w}^{0}\\cdot\\int_{0}^{S}\\exp\\left(-\\frac{s\\cdot\\sin(el)}{H_{w}}\\right)\\cdot\\varepsilon_{w}ds. \\tag{26}\\] By changing the variable of integration \\(s=\\mu\\cdot S\\), i.e., \\(ds=S\\cdot d\\mu\\), we have the integral over the interval \\([0,1]\\) that follows the limitation of \\(r\\) (\\(0\\leq r\\leq 1\\)) in the radial polynomials \\(R_{n}^{l}(r)\\) of the 3-D Zernike functions. Thereby, together with 3-D Zernike 
functions in (4), (26) can be rewritten as \\[\\delta\\text{SWD}=N_{w}^{0}S\\cdot\\sum_{n=0}^{n_{\\text{max}}}\\sum_{l=0}^{n}\\sum_{m=0}^{l}\\int_{0}^{1}\\exp\\left(-\\frac{\\mu S\\cdot\\sin(el)}{H_{w}}\\right)R_{n}^{l}(\\mu)d\\mu\\cdot Y_{l}^{m}(\\theta,\\lambda)\\cdot p_{nl}^{m}. \\tag{27}\\] Note that only the radial polynomials \\(R_{n}^{l}(\\mu)\\) and the exponential function in (27) depend on the variable of integration. Considering the ansatz in (7) and that the radial polynomials \\(R_{n}^{l}(\\mu)\\) take the form [64] \\[R_{n}^{l}(\\mu)=\\sum_{v=0}^{k=(n-l)/2}q_{kl}^{v}\\cdot\\mu^{2v+l} \\tag{28}\\] the definite integral in (27) can be derived as \\[\\int_{0}^{1}\\exp\\left(-\\frac{\\mu S\\cdot\\sin(el)}{H_{w}}\\right)R_{n}^{l}(\\mu)d\\mu=\\sum_{v=0}^{k=(n-l)/2}q_{kl}^{v}\\int_{0}^{1}\\exp\\left(-\\frac{\\mu S\\cdot\\sin(el)}{H_{w}}\\right)\\mu^{2v+l}d\\mu=\\sum_{v=0}^{k=(n-l)/2}q_{kl}^{v}\\left[-\\frac{1}{a^{2v+l+1}}\\Gamma(2v+l+1,a\\mu)\\right]\\Bigg{|}_{0}^{1}=\\sum_{v=0}^{k=(n-l)/2}q_{kl}^{v}\\cdot\\frac{1}{a^{2v+l+1}}\\gamma(2v+l+1,a) \\tag{29}\\] where we let \\(a=S\\cdot\\sin(el)/H_{w}\\) as in (7). Here, \\(\\Gamma\\) and \\(\\gamma\\) denote the upper and lower incomplete Gamma functions, respectively. By substituting (29) into (27), (10) is derived. ## Acknowledgment The authors would like to thank the anonymous reviewers and the Editor for their constructive comments and suggestions on the manuscript. ## References * [1] F. K. Brunner, \"The effects of atmospheric turbulence on telescopic observations,\" _Bull. Geodesique_, vol. 56, no. 4, pp. 341-355, Dec. 1982, doi: 10.1007/bf02525733. * [2] V. Raizer, _Remote Sensing of Turbulence_, 1st ed. Boca Raton, FL, USA: CRC Press, 2022. * [3] R. E. Good, R. R. Beland, E. A. Murphy, J. H. Brown, and E. M. Dewan, \"Atmospheric models of optical turbulence,\" in _Modeling of the Atmosphere_, vol. 928. Bellingham, WA, USA: SPIE, 1988, p. 165, doi: 10.1117/12.975626. * [4] J. Barriot and P. Feng, \"Beyond mapping functions and gradients,\" in _Geodetic Sciences--Theory, Applications and Recent Developments_, vol. 32. London, U.K.: IntechOpen, Jul. 2021, pp. 137-144, doi: 10.5772/intechopen.96982. * [5] S. Halsig, T. Artz, A. Iddink, and A. Nothnagel, \"Using an atmospheric turbulence model for the stochastic model of geodetic VLBI data analysis,\" _Earth, Planets Space_, vol. 68, no. 1, pp. 1-14, Dec. 2016, doi: 10.1186/s40623-016-0482-5. * [6] T. Nilsson and R. Haas, \"Impact of atmospheric turbulence on geodetic very long baseline interferometry,\" _J. Geophys. Res., Solid Earth_, vol. 115, no. B3, pp. 1-11, Mar. 2010, doi: 10.1029/2009jb006579. * [7] S. C. Herring, N. Christidis, A. Hoell, J. P. Kossin, C. J. Schreck, and P. A. Stott, \"Explaining extreme events of 2016 from a climate perspective,\" _Bull. Amer. Meteorol. Soc._, vol. 99, no. 1, pp. S1-S157, Jan. 2018, doi: 10.1175/bams-explainingextremeevents2016.1. * [8] J. Vaquero-Martinez and M. Anton, \"Review on the role of GNSS meteorology in monitoring water vapor for atmospheric physics,\" _Remote Sens._, vol. 13, no. 12, p. 2287, Jun. 2021, doi: 10.3390/rs13122287. * [9] M. Bevis, S. Businger, T. A. Herring, C. Rocken, R. A. Anthes, and R. H. 
Ware, \"GPS meteorology: Remote sensing of atmospheric water vapour using the global positioning system,\" _J. Geophys. Res., Atmos._, vol. 97, no. D14, pp. 15787-15801, Oct. 1992, doi: 10.1029/92jd01517. * [10] J. Van Baelen et al., \"On the relationship between water vapour field evolution and the life cycle of precipitation systems,\" _Quart. J. Roy. Meteorol. Soc._, vol. 137, no. S1, pp. 204-223, Jan. 2011, doi: 10.1002/qj.785. * [11] W. Zhang et al., \"Multiscale variations of precipitable water over China based on 1999-2015 ground-based GPS observations and evaluations of reanalysis products,\" _J. Climate_, vol. 31, no. 3, pp. 945-962, Feb. 2018, doi: 10.1175/jcli-d1-07419.1. * [12] M. Troller, A. Geiger, E. Brockmann, J.-M. Bettems, B. Burki, and H.-G. Kahle, \"Tomographic determination of the spatial distribution of water vapor using GPS observations,\" _Adv. Space Res._, vol. 37, no. 12, pp. 2211-2217, Jan. 2006, doi: 10.1016/j.asr.2005.07.002. * [13] Y. Yao and Q. Zhao, \"Maximally using GPS observation for water vapor tomography,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 12, pp. 7185-7196, Dec. 2016, doi: 10.1109/TGRS.2016.2597241. * [14] Z. Adavi and M. Mashhtadi-Hossainali, \"4D tomographic reconstruction of the tropospheric net refractivity using the concept of virtual reference station, case study: Northwest of Iran,\" _Meteorol. Atmos. Phys._, vol. 126, nos. 3-4, pp. 193-205, Nov. 2014, doi: 10.1007/s00703-014-0342-4. * [15] S. Haji-Aghajany, Y. Amerian, and S. Verhagen, \"B-spline function-based approach for GPS tropospheric tomography,\" _GPS Solutions_, vol. 24, no. 3, pp. 1-12, Jul. 2020, doi: 10.1007/s10921-020-01005-x. * [16] A. Flores, G. Ruffini, and A. Rius, \"4D tropospheric tomography using GPS slant wet delays,\" _Annales Geophysicae_, vol. 18, no. 2, pp. 223-234, Feb. 2000, doi: 10.1007/s005850050025. * [17] S. Song, W. Zhu, J. Ding, and J. Peng, \"3D water-vapor tomography with Shanghai GPS network to improve forecasted moisture field,\" _Chin. Sci. Bull._, vol. 51, no. 5, pp. 607-614, Mar. 2006, doi: 10.1007/s11434-006-0607-5. * [18] M. Bender et al., \"Development of a GNSS water vapour tomography system using algebraic reconstruction techniques,\" _Adv. Space Res._, vol. 47, no. 10, pp. 1704-1720, May 2011, doi: 10.1016/j.asr.2010.05.034. * [19] W. Rohm, K. Zhang, and J. Bosy, \"Limited constraint, robust Kalman filtering for GNSS troposphere tomography,\" _Atmos. Meas. Techn._, vol. 7, no. 5, pp. 1475-1486, May 2014, doi: 10.5194/amt-7:1475-2014. * [20] W. Zhang et al., \"Rapid troposphere tomography using adaptive simultaneous iterative reconstruction technique,\" _J. Geodesy_, vol. 94, no. 8, pp. 1-12, Aug. 2020, doi: 10.1007/s00190-020-01386-4. * [21] H. Hersbach et al., \"The ERA5 global reanalysis,\" _Quart. J. Roy. Meteorol. Soc._, vol. 146, no. 730, pp. 1999-2049, Jul. 2020, doi: 10.1002/qj.3803. * [22] T. Foken, _Springer Handbook of Atmospheric Measurements_, 1st ed. Cham, Switzerland: Springer, 2021. * [23] R. A. Anthes, \"Exploring Earth's atmosphere with radio occultation: Contributions to weather, climate and space weather,\" _Atmos. Meas. Techn._, vol. 4, no. 6, pp. 1077-1103, Jun. 2011, doi: 10.5194/amt-4-1077-2011. * [24] J. Barriot, J. Serafini, and L. Sichoix, \"Estimating the 3D time variable water vapor contents of the troposphere from a single GNSS receiver,\" in _Proc. Int. Conf. Earth Obs. Soc. Impacts (ICEO&SI)_, 2013, pp. 3-8. * [25] J. 
Radon, \"On the determination of functions from their integral values along certain manifolds,\" _IEEE Trans. Med. Imag._, vol. 5, no. 4, pp. 170-176, Dec. 1986, doi: 10.1109/TMI.1986.430775. * [26] F. Colonna, G. Easley, K. Guo, and D. Labate, \"Radon transform inversion using the Shearlet representation,\" _Appl. Comput. Harmon. Anal._, vol. 29, no. 2, pp. 232-250, Sep. 2010, doi: 10.1016/j.acha.2009.10.005. * [27] M. Benna, J.-P. Barriot, and W. Kofman, \"A priori information required for a two or three dimensional reconstruction of the internal structure of a comet nucleus (conser experiment),\" _Adv. Space Res._, vol. 29, no. 5, pp. 715-724, Mar * [30] G. I. Taylor, \"The spectrum of turbulence,\" _Proc. Roy. Soc. Lond. A, Math. Phys. Eng. Sci._, vol. 164, no. 919, pp. 476-490, Feb. 1938, doi: 10.1098/rspa.1938.0032. * [31] S. Williams, Y. Bock, and P. Fang, \"Integrated satellite interferometry: Tropospheric noise, GPS estimates and implications for interferometric synthetic aperture radar products,\" _J. Geophys. Res., Solid Earth_, vol. 103, no. 811, pp. 27051-27067, Nov. 1998. * [32] V. I. Tatarskii, _The Effects of the Turbulent Atmosphere on Wave Propagation_. Springfield, VA, USA: Israel Program for Scientific Translations, 1971. * [33] L. P. Gradinarsky and P. Jarlemark, \"Ground-based GPS tomography of water vapor: Analysis of simulated and real data,\" _J. Meteorol. Soc. Japan. Ser. II_, vol. 82, no. 1B, pp. 551-560, 2004, doi: 10.2151/jmsj.2004.551. * [34] T. Nilsson and L. Gradinarsky, \"Water vapor tomography using GPS phase observations: Simulation results,\" _IEEE Trans. Geosci. Remote Sens._, vol. 44, no. 10, pp. 2927-2941, Oct. 2006, doi: 10.1109/TGRS.2006.877755. * [35] R. N. Trewhaft and G. E. Lanyi, \"The effect of the dynamic wet troposphere on radio interferometric measurements,\" _Radio Sci._, vol. 22, no. 2, pp. 251-265, Mar. 1987, doi: 10.1029/RS022002p00251. * [36] S. Schon and F. K. Brunner, \"Atmospheric turbulence theory applied to GPS carrier-phase data,\" _J. Geodesy_, vol. 82, no. 1, pp. 47-57, Jan. 2008, doi: 10.1007/s00190-007-0156-y. * [37] A. Romero-Wolf, C. S. Jacobs, and J. T. Ratcliff, \"Effects of tropospheric spatio-temporal correlated noise on the analysis of space geodetic data,\" in _Proc. IVS Gen. Meeting_, 2012, no. 1941, pp. 231-235. [Online]. Available: [http://ivscc.gsfc.nasa.gov/publications/gm2012/Romero-Wolf.pdf](http://ivscc.gsfc.nasa.gov/publications/gm2012/Romero-Wolf.pdf) * [38] A. Pany, J. Bohm, D. Macmillan, H. Schuh, T. Nilsson, and J. Wresnik, \"Monte Carlo simulations of the impact of troposphere, clock and measurement errors on the repeatability of VLBI positions,\" _J. Geodesy_, vol. 85, no. 1, pp. 39-50, Jan. 2011, doi: 10.1007/s00190-010-0415-1. * [39] T. R. Emardson and P. O. J. Jarlemark, \"Atmospheric modelling in GPS analysis and its effect on the estimated geodetic parameters,\" _J. Geodesy_, vol. 73, no. 6, pp. 322-331, Jul. 1999, doi: 10.1007/s00190050249. * [40] S. Schon and F. K. Brunner, \"A proposal for modelling physical correlations of GPS phase observations,\" _J. Geodesy_, vol. 82, no. 10, pp. 601-612, Oct. 2008, doi: 10.1007/s00190-008-0211-3. * [41] G. Kermarrec and S. Schon, \"On the Miatern covariance family: A proposal for modeling temporal correlations based on turbulence theory,\" _J. Geodesy_, vol. 88, no. 11, pp. 1061-1079, Nov. 2014, doi: 10.1007/s00190-014-0743-7. * [42] S. Halisq, T. Artz, J. Leek, and A. 
Nothnagel, \"VLBI analyses using covariance information from turbulence models,\" in _Proc. Int. VLBI Service Geodesy Astrometry Gen. Meeting_, 2014, pp. 272-276. * [43] C. Faccani, R. Ferretti, R. Pacione, T. Paolucci, F. Vespe, and L. Cucuull, \"Impact of a high density GPS network on the operational forecast,\" _Adv. Geosci._, vol. 2, pp. 73-79, Mar. 2005, doi: 10.5194/adgeo-2-73-2005. * [44] Q. Zhao, Y. Yao, and W. Yao, \"GPS-based PWV for precipitation forecasting and its application to a typhoon event,\" _J. Atmos. Solar-Terrestrial Phys._, vol. 167, pp. 124-133, Jan. 2018, doi: 10.1016/j.jaastp.2017.11.013. * [45] G. Guevova et al., \"Review of the state of the art and future prospects of the ground-based GNSS meteorology in Europe,\" _Atmos. Meas. Techn._, vol. 9, no. 11, pp. 5385-5406, Nov. 2016, doi: 10.5194/amt-9-5385-2016. * [46] S. Bonafoni, R. Biondi, H. Brenot, and R. Anthes, \"Radio occultation and ground-based GNSS products for observing, understanding and predicting extreme events: A review,\" _Atmos. Res._, vol. 230, Dec. 2019, Art. no. 104624, doi: 10.1016/j.atmos.2019.104624. * [47] T. Nilsson and G. Elgered, \"Long-term trends in the atmospheric water vapor content estimated from ground-based GPS data,\" _J. Geophys. Res., Atmos._, vol. 113, no. D19, pp. 1-12, Oct. 2008, doi: 10.1029/2008jd010110. * [48] Z. Baldysz, G. Nykiel, A. Araszkiewicz, M. Figurski, and K. Szafranek, \"Comparison of GPS tropospheric delays derived from two consecutive EPN reprocessing campaigns from the point of view of climate monitoring,\" _Atmos. Meas. Techn._, vol. 9, no. 9, pp. 4861-4877, Sep. 2016, doi: 10.5194/amt-94861-2016. * [49] C. Champollion, F. Masson, J. Van Baelen, A. Walpersdorf, J. Chery, and E. Doerflinger, \"GPS monitoring of the tropospheric water vapor distribution and variation during the 9 September 2002 torcinent precipitation episode in the Cevennus (Southern France),\" _J. Geophys. Res. Atmos._, vol. 109, no. D24, pp. 1-15, Dec. 2004, doi: 10.1029/2004jd004897. * [50] H. Brenot et al., \"Preliminary signs of the initiation of deep convection by GNSS,\" _Atmos. Chem. Phys._, vol. 13, no. 11, pp. 5425-5449, Jun. 2013, doi: 10.5194/acp-13-5425-2013. * [51] V. Graflinga, M. Hernandez-Pajares, F. Azpilicueta, and M. Gende, \"Comprehensive study on the tropospheric wet delay and horizontal gradients during a severe weather event,\" _Remote Sens._, vol. 14, no. 4, p. 888, Feb. 2022. * [52] L. Morel et al., \"Validity and behaviour of tropospheric gradients estimated by GPS in Corsica,\" _Adv. Space Res._, vol. 55, no. 1, pp. 135-149, Jan. 2015, doi: 10.1016/j.asr.2014.10.004. * [53] F. Zhang, J.-P. Barriot, G. Xu, and M. Hopourae, \"Modeling the slant wet delays from one GPS receive as a series expansion with respect to time and space: Theory and an example of application for the thahi island,\" _IEEE Trans. Geosci Remote Sens._, vol. 58, no. 11, pp. 7520-7532, Nov. 2020, doi: 10.1109/TGRS.2020.2975458. * [54] M. Vennebusch, S. Schon, and U. Weinbach, \"Temporal and spatial stochastic behaviour of high-frequency slant tropospheric delays from simulations and real GPS data,\" _Adv. Space Res._, vol. 47, no. 10, pp. 1681-1690, May 2011, doi: 10.1016/j.asr.2010.09.008. * [55] W. M. Kaula and R. E. Street, _Theory of Satellite Geodesy: Applications of Satellites to Geodesy_, vol. 20, 1st ed. Waltham, MA, USA: Blaisdell, 1966. * [56] A. Broman, _Introduction to Partial Differential Equations From Fourier Series to Boundary-Value Problems_, 1st ed. New York, NY, USA: Dover, 1970. 
* [57] W. M. Kaula, \"Determination of the Earth's gravitational field,\" _Rev. Geophysics_, vol. 1, no. 4, pp. 507-551, Nov. 1963, doi: 10.1029/rg01i004p00507. * [58] W. M. Kaula, \"Statistical and harmonic analysis of gravity,\" _J. Geophys. Res._, vol. 64, no. 12, pp. 2401-2421, Dec. 1959, doi: 10.1029/j0e64012p02401. * [59] R. H. Rapp and N. K. Pavlis, \"The development and analysis of geopotential coefficient models to spherical harmonic degree 360,\" _J. Geophys. Res., Solid Earth_, vol. 95, no. B13, pp. 21885-21911, Dec. 1990, doi: 10.1029/jb095ib13p21885. * [60] R. S. Nerem, C. Jekeli, and W. M. Kaula, \"Gravity field determination and characteristics: Retrospective and prospective,\" _J. Geophys. Res., Solid Earth_, vol. 100, no. B8, pp. 15053-15074, Aug. 1995, doi: 10.1029/94jb03257. * [61] W. M. Kaula, \"The investigation of the gravitational fields of the Moon and planets with artificial satellites,\" _Adv. Sp. Sci. Technol._, vol. 5, pp. 210-230, Jul. 1963. * [62] J. Saastamoinen, \"Contributions to the theory of atmospheric refraction: Part II. Reflection corrections in satellite geodesy,\" _Bull. Geodesique_, vol. 107, no. 1, pp. 13-34, Mar. 1973. * [63] F. S. Sohheim, J. Vivekkanandan, R. H. Ware, and C. Rocken, \"Propagation delays induced in GPS signals by dry air, water vapor, hydrometzes, and other particulates,\" _J. Geophys. Res., Atmos._, vol. 104, no. D8, pp. 9663-9670, Apr. 1999, doi: 10.1029/1999949000095. * [64] M. Novotni and R. Klein, \"3D Zernike descriptors for content based shape retrieval,\" in _Proc. 8th ACM Symp. Solid Modeling Appl._, Jun. * [71] J. Bouman, \"Quality assessment of geopotential models by means of redundancy decomposition,\" _DEOS Prog. Lett._, vol. 97, no. 1, pp. 49-54, 1997. * [72] M. Vennebusch and S. Schon, \"Generation of slant tropospheric delay time series based on turbulence theory,\" in _Proc. Int. Assoc. Geodesy Symposia_, vol. 136, 2012, pp. 801-807. * [73] J. P. Barriot and G. Balmino, \"Estimation of local planetary gravity fields using line of sight gravity data and an integral operator,\" _Icarus_, vol. 99, no. 1, pp. 202-224, Sep. 1992, doi: 10.1016/0019-1035(92)90183-8. * [74] C. Hwang, \"Spectral analysis using orthonormal functions with a case study on the sea surface topography,\" _Geophys. J. Int._, vol. 115, no. 3, pp. 1148-1160, Dec. 1993, doi: 10.1111/j.1365-246x.1993.tb01517.x. * [75] N. Fourrie et al., \"AROME-WMED, a real-time mesoscale model designed for the HyRX special observation periods,\" _Geosci. Model Develop._, vol. 8, no. 7, pp. 1919-1941, Jul. 2015, doi: 10.5194/gmd-8-1919-2015. * [76] F. Bouyssel et al., \"The 2020 global operational NWP data assimilation system at Meteo-France,\" in _Data Assimilation for Atmospheric, Oceanic and Hydrologic Applications_, vol. 4, S. K. Park and L. Xu, Eds. Cham, Switzerland: Springer, 2020, pp. 645-664. * [77] X. Li et al., \"LEO constellation-augmented multi-GNSS for rapid PPP convergence,\" _J. Geodesy_, vol. 93, no. 5, pp. 749-764, May 2019, doi: 10.1007/s00190-018-1195-2. * [78] H. Ge et al., \"LEO enhanced global navigation satellite system (LeG-NSS): Progress, opportunities, and challenges,\" _Geo-Spatial Inf. Sci._, vol. 25, no. 1, pp. 1-13, 2022, doi: 10.1080/10095020.2021.1978277. \\begin{tabular}{c c} & Xianjie Li received the M.Sc. degree in geodesy from Wuhan University, Wuhan, China, in 2018. He is currently pursuing the Ph.D. degree in geophysics with the Geodesy Observatory of Tahiti, University of French Polynesia, Faa'a, French Polynesia. 
His research interests include the study of the GNSS precise point positioning technique and its applications to mean sea level changes and meteorology.
Jingna Bai received the master's degree from the GNSS Research Center, Wuhan University, Wuhan, China, in 2022, where she is currently pursuing the Ph.D. degree in geodesy and survey engineering. Her research interests include GNSS data processing and GNSS meteorology.
Jean-Pierre Barriot received the Ph.D. degree in theoretical physics from the University of Montpellier, Montpellier, France, in 1987, and the Habilitation degree in space physics from the University of Toulouse, Toulouse, France, in 1997. Since 2006, he has been a Distinguished Professor of geophysics with the University of French Polynesia, Faa'a, French Polynesia, and the Head of the Geodesy Observatory of Tahiti, a joint geodetic observatory of CNES, NASA, and UPF. He is also an Invited Professor with Wuhan University, Wuhan, China. His research interests include the geophysics of the Earth and planets, Earth and planetary atmospheres, and orbitography.
Yidong Lou received the M.Sc. and Ph.D. degrees in geodesy engineering from Wuhan University, Wuhan, China, in 2004 and 2008, respectively. He is currently a Professor with the GNSS Research Center, Wuhan University. His research findings have been successfully applied to the major project \"National BeiDou Ground-Based Augmentation System Development and Construction,\" which has improved the wide-area real-time positioning accuracy of BeiDou from the meter level to the centimeter level, supporting the innovative applications of a nationwide network service. His research interests include the theoretical methods and software of GNSS real-time high-precision data processing and meteorological applications.
Weixing Zhang received the Ph.D. degree in geodesy and surveying engineering from Wuhan University, Wuhan, China, in 2016. He is currently an Associate Professor with the GNSS Research Center, Wuhan University. His research interests include GNSS data processing and GNSS meteorology.
Sensing water vapor contents in the troposphere with ground-based Global Navigation Satellite System (GNSS) stations has been widely studied and related to extreme weather events and climate changes over the years. Usually, GNSS tomography is the tool of choice to retrieve the 3-D water vapor field. However, a dense GNSS network is required, which means that the GNSS tomography is not applicable everywhere, e.g., in island countries, where only one/a few GNSS stations are available. In this work, we propose a new method to retrieve the 3-D wet refractivity field from the data collected at a standalone ground-based GNSS station. Using 3-D Zernike functions to model the turbulent component of the wet refractivity field and the corresponding perturbations in slant wet delays (SWDs), a typical Radon inverse problem is obtained. Two kinds of a priori information, namely, the spatial covariance of the SWDs and a Kaula-like rule, respectively, are proposed and introduced to regularize this ill-posed inverse problem. The proposed method is validated with a simulation experiment. The simulation results indicate its usefulness for retrieving the 3-D wet refractivity field overhead a single GNSS station with the appropriate a priori information. 3-D Zernike functions, atmospheric turbulence, Global Navigation Satellite System (GNSS), Kaula-like rule, slant wet delay (SWD), Tikhonov regularization, wet refractivity field.
# Hyperspectral Image Classification With Mixed Link Networks Zhe Meng \\({}^{\\text{\\textcircled{C}}}\\), Licheng Jiao \\({}^{\\text{\\textcircled{C}}}\\), Feng Zhao \\({}^{\\text{\\textcircled{C}}}\\), and Miaomiao Liang Manuscript received October 28, 2020; revised December 29, 2020; accepted January 19, 2021. Date of publication January 25, 2021; date of current version February 22, 2021. This work was supported in part by the National Natural Science Foundation of China under Grant 61901198 and Grant 62071379, in part by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2019Q-377, and in part by the New Star Team of Xi'an University of Posts and Telecommunications under Grant xy2016-01. (_Corresponding author: Zhe Meng._) Zhe Meng and Feng Zhao are with the School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China (e-mail: [email protected]; [email protected]). Licheng Jiao is with the School of Artificial Intelligence, Xidian University, Xi'an 710071, China (e-mail: [email protected]). Miaomiao Liang is with the School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China (e-mail: [email protected]). Digital Object Identifier 10.1109/JSTARS.2021.305367 ## I Introduction Remote sensing hyperspectral images (HSIs) usually encompass hundreds of spectral bands, which record abundant and unique information about various objects on the surface of the Earth. Hence, HSIs have been employed in a wide variety of applications, including disaster monitoring [1], anomaly detection [2], and precision agriculture [3]. Recently, the classification of HSIs has gained remarkable attention in the hyperspectral community, since many hyperspectral applications are, in essence, classification tasks with the purpose of categorizing the pixels of HSIs into meaningful classes [4]. Traditional HSI classification algorithms concentrate on exploiting the spectral characteristic of a hyperspectral pixel to determine its class, such as multinomial logistic regression [5], decision trees [6], and support vector machines (SVMs) [7]. However, due to the high intraclass and low interclass spectral variability, using spectral information alone makes the accurate identification of different objects difficult [8]. Considering the strong local spatial consistency in HSIs, methods that incorporate spatial-contextual information were proposed, which allow the joint exploitation of the spatial and spectral information to differentiate each hyperspectral pixel and further enhance the classification accuracy [9]. For instance, Li _et al._[10] proposed a multiple-feature-based HSI classification paradigm, in which local spatial features extracted by the local binary pattern (LBP) operator, global spatial features captured by a Gabor filter, and original spectral features are combined for classification. In addition, multiple kernel learning (MKL) [11], superpixel [12], and sparse representation algorithms [13, 14] have also been explored to integrate spatial-contextual information with spectral signatures to achieve good classification accuracy. In [15], the conventional spectral-spatial feature-based HSI classification methods were systematically reviewed. 
However, the aforementioned approaches, such as LBP, superpixel, and sparse representation, extract fixed pattern features from raw data, which are highly dependent on prior knowledge and appropriate parameter setting, generally resulting in unsatisfactory performance [16]. Nowadays, deep learning techniques, which allow the automatic extraction of robust and hierarchical features in an end-to-end fashion, have made great breakthrough in many computer vision tasks (e.g., image classification [17] and object detection [18]). In the field of remote sensing, Chen _et al._[19] first introduced the stacked autoencoders (SAEs) to learn deep spectral features together with deep spatial-dominated features for HSI classification. After that, deep learning models such as deep belief network (DBN) [20], convolutional neural network (CNN) [21], recurrent neural network (RNN) [22, 23], and capsule network (CapsNet) [24, 25] were also successfully applied to deal with HSI classification. Owing to the capability of automatically discovering spatial-contextual features, CNN models have been attracting more attention from researchers for HSI classification [26, 27, 28]. For instance, work in [26] jointly made use of the balanced local discriminative embedding algorithm and the CNN to conduct spatial-spectral HSI classification. Pan _et al._[29] proposed a multigrained network (MugNet), which takes full advantage of different grains' spectral and spatial relationship for HSI classification. In [30, 31, 32], 3-D CNN models were proposed to directly learn spatial-spectral representations from raw hyperspectral data. Recently, through engineering more powerful CNN architectures, much progress have been achieved in accurate HSI classification [33, 34, 35, 36, 37]. For instance, in [33], Lee _et al._ introduced residual learning to enhance the learning efficiency of conventional CNN model and employed a multiscale convolutional filter bank to exploit local spatial-spectral relationships of HSIs. In [34], Song _et al._ built very deep residual networks (ResNets) to learn discriminative features and then adopted a feature-fusing mechanism to achieve further performance improvement. Paoletti _et al._[38] proposed the use of deep pyramid ResNet for HSI classification. In [39], an HSI classification framework based on the densely convolutional network (DenseNet) [40] was proposed, which introduces dense connections in the network to strengthen feature propagation, enhancing both the feature discriminability and the classification performance. In [41] and [42], some improved deep networks based on DenseNet were proposed, which can make full use of the multiscale information of HSIs. Considering that shortcut connections (also known as residual connections) in ResNets contribute to effective feature reusage and dense connections are effective for new feature exploration, Kang _et al._[43] introduced the dual path network (DPN) that inherits both advantages of ResNet and DenseNet to learn more discriminative features from hyperspectral data [44]. More recently, Wang _et al._[45] discovered and proved that both the ResNet and DenseNet are derived from the same dense topology intrinsically, in which each layer is connected with all the preceding layers. In addition, they demonstrated that the only difference between the path topologies of these two networks lies in the connection form, i.e., addition in ResNet and concatenation in DenseNet. 
In this article, inspired by [45], two novel end-to-end mixed link networks (MLNets) are proposed for HSI classification. In MLNets, the additive links and concatenative links are combined by using mixed link architectures in order to enjoy benefits from both sides. Specifically, concatenative links assembled in the proposed networks could avoid repetitive learning of redundant features but focus on some new and more effective feature exploration, while additive links achieve reasonable feature reuse and avoid unnecessary loss of previous information, all of which help the model to extract discriminative features from HSIs. The blending links improve the information flow throughout the network. Moreover, by introducing shifted additions, the modification of raw features in the proposed mixed link architectures can alleviate feature redundancy to some extent. Experimental results on three hyperspectral benchmark data sets reveal that, compared to several state-of-the-art CNN models, such as ResNet, DenseNet, and DPN, the proposed model can achieve better performance in HSI classification. In particular, the proposed MLNets require fewer parameters than the DPN network whilst achieving better results. Notably, on University of Houston dataset, MLNets surpass DPN while being 3.23 times fewer parameters. The rest of this article is organized as follows. Section II reviews the DenseNet and ResNet briefly and reveals that both of them are derived from the same dense topology. Section III describes the proposed method. Section IV presents the experimental results conducted on three benchmark HSI data sets. Finally, Section V concludes this article. ## II Dense Topology in Both DenseNet and ResNet The network architecture plays a crucial role in the classification performance. For accurate HSI classification, much progress have been achieved by engineering network architectures [46, 47, 48]. In particular, most modern deep neural network-based HSI classification frameworks are built based either on the ResNet or on the DenseNet, both of which have obtained state-of-the-art performance in many computer vision tasks [49, 18]. ResNets can be built by stacking microblocks (also known as residual blocks) sequentially. For each residual block, the input features are element-wisely added to the output ones through identity shortcut connection, which not only helps information propagation but also eases the training of the network [50]. In DenseNet, dense connections enable each layer to receive raw information produced by all preceding layers, drawing representational power through effective new feature exploration. Specifically, the feature-maps learned by previous layers are concatenated and inputted into all subsequent layers, which further strengthens information flow [40]. Consider a network with \\(L\\) layers, each of which implements a nonlinear transformation \\(H_{l}(\\cdot)\\). \\(l\\) refers to the layer index and \\(H_{l}(\\cdot)\\) could be a composite function of several operations including convolution (Conv), linear transformation, batch normalization (BN) [51], activation [52], and pooling [53]. Assume that \\(x_{l}\\) is the immediate output of \\(H_{l}(\\cdot)\\). Fig. 1 (left) illustrates the connection pattern in the DenseNet. For the \\(l\\)th layer in the network, it receives \\(c_{l-1}\\) as input, which is the concatenation result of all the previous outputs (i.e., \\(x_{0},x_{1},\\ldots,x_{l-1}\\)). 
Mathematically, the output of the \\(l\\)th layer can be formulated as \\[x_{l} =H_{l}(c_{l-1})\\] \\[=H_{l}(x_{0}\\parallel x_{1}\\parallel\\cdots\\parallel x_{l-1}) \\tag{1}\\] where \\(\\parallel\\) represents the concatenation operation. Equation (1) indicates that DenseNet belongs to the dense topology clearly, i.e., each layer in the network is connected with all the preceding layers, and the connection function is concatenation, as shown in the right of Fig. 1. Fig. 2 (left) shows the connection pattern in the ResNet, where shortcut connections are introduced to bypass each transformation \\(H(\\cdot)\\). Let \\(r\\) denote the addition result after shortcut connection, and \\(r_{0}\\) equals \\(x_{0}\\). We can formulate the residual learning process as \\[r_{l}=H_{l}(r_{l-1})+r_{l-1}. \\tag{2}\\] Note that \\(H_{l}(\\cdot)\\) takes \\(r_{l-1}\\) as input, and its immediate output is \\(x_{l}\\), i.e., \\(x_{l}=H_{l}(r_{l-1})\\). Considering the recursive property of (2), \\(x_{l}\\) can be rewritten as \\[x_{l} =H_{l}(r_{l-1})\\] \\[=H_{l}(H_{l-1}(r_{l-2})+r_{l-2})\\] \\[=H_{l}(H_{l-1}(r_{l-2})+H_{l-2}(r_{l-3})+r_{l-3})\\] \\[=\\cdots\\]\\[=H_{l}\\left(\\sum_{i=1}^{l-1}H_{i}(r_{i-1})+r_{0}\\right)\\] \\[=H_{l}\\left(\\sum_{i=1}^{l-1}x_{i}+x_{0}\\right)\\] \\[=H_{l}(x_{0}+x_{1}+\\cdots+x_{l-1}). \\tag{3}\\] Equation (3) reveals that \\(r_{l-1}\\) is the element-wise sum of the outputs of the preceding \\(l-1\\) layers, i.e., \\(r_{l-1}=x_{0}+x_{1}+\\cdots+x_{l-1}\\). The graphical view of (3) is illustrated in the right of Fig. 2, and one can see that ResNet also belongs to the dense topology. In addition, by comparing (1) and (3), we can find that the only difference between the topologies of DenseNet and ResNet is in the connection form, i.e., \"\\(\\parallel\\)\" in DenseNet versus \"\\(+\\)\" in ResNet. ## III Methodology The extraordinary success of both DenseNet and ResNet prove the effectiveness of dense topology. However, the additive connection in ResNet makes features from different layers aggregated on the same feature space, which may impede the flow of information throughout the network [40]. As for the DenseNet, concatenative connection allows each layer receiving raw features from all preceding layers, which are effective for the exploration of new features, but there may be the same type of raw features from different layers, resulting in feature redundancy [43]. To combine the advantages of the additive and concatenative connections and overcome their weaknesses, two novel dense topology-based MLNets are proposed for HSI classification. Fig. 3 illustrates the flowchart of the proposed classification approach. Image patches centered at the labeled pixels are extracted and inputted to the MLNet, where a part of patches are used to train the network and the rest of the patches are used to evaluate the classification performance of the trained network. As shown in Fig. 3, the mixed link block (MLB) is the main part of the proposed MLNet. In this article, we propose two different MLBs, i.e., MLB-A and MLB-B, in order to combine the strengths of additive and concatenative connections, which will be detailed as follows. ### _Mixed Link Blocks_ Fig. 4(a) illustrates the architecture of MLB-A. Let us consider \\(X\\) as the input of the MLB-A, which has \\(K\\) channels. The upper additive link first takes \\(X\\) as input and produces \\(k\\) (\\(k<K\\)) feature maps, which are added to the last \\(k\\) channels of the input \\(X\\). 
The computation process can be formulated as \\[\\hat{X}=X+H_{\\text{Add}}(X) \\tag{4}\\] where \\(\\hat{X}\\) refers to the interim learned features and \\(H_{\\text{Add}}(\\cdot)\\) denotes the function of generating feature maps for the additive link. As for the concatenative link, it appends \\(k\\) new feature maps outside the interim learned features \\(\\hat{X}\\) \\[Y =\\hat{X}\\parallel H_{\\text{Concat}}(X)\\] \\[=(X+H_{\\text{Add}}(X))\\parallel H_{\\text{Concat}}(X) \\tag{5}\\] where \\(Y\\) denotes the output of the MLB-A, \\(\\parallel\\) represents the concatenation operation, and \\(H_{\\text{Concat}}(\\cdot)\\) denotes the function to be learned in the concatenative link. Since the additive link does not change the number of feature maps, both the input \\(X\\) and the interim learned features \\(\\hat{X}\\) have \\(K\\) channels, and the output \\(Y\\) contains \\(K+k\\) channels. The mixed link architecture aims to utilize the strengths of both additive and concatenative links. With this motivation, an alternative architecture named MLB-B is proposed, which performs the concatenative link before the additive link, as shown in Fig. 4(b). Specifically, the concatenative link first takes \\(X\\) as input and produces \\(k\\) feature maps, which are appended outside the input \\(X\\). The computation process can be formulated as \\[\\hat{X}=X\\parallel H_{\\text{Concat}}(X). \\tag{6}\\] Then, the feature maps produced by the additive link are added to the last \\(k\\) channels of the interim learned features \\(\\hat{X}\\) \\[Y =\\hat{X}+H_{\\text{Add}}(X)\\] \\[=X\\parallel H_{\\text{Concat}}(X)+H_{\\text{Add}}(X). \\tag{7}\\] Therefore, in MLB-B, the number of channels of the interim features \\(\\hat{X}\\) and the output \\(Y\\) is \\(K+k\\). Note that although there are many additive positions that can be chosen, learning the additive position is currently not feasible because the position arrangement is not directly differentiable. Therefore, we make a compromise: in MLB-A, we choose to align the position of the additive part with the growing boundary of the entire feature embedding, as shown in Fig. 4(a); in MLB-B, the position of the additive part is aligned with the newly added channels caused by the concatenative link, as shown in Fig. 4(b). In addition, as shown in Fig. 4, \\(H_{\\text{Add}}(\\cdot)\\) and \\(H_{\\text{Concat}}(\\cdot)\\) are implemented with a bottleneck composite layer, i.e., BN-ReLU-Conv(\\(1\\times 1\\))-BN-ReLU-Conv(\\(3\\times 3\\)), in order to improve computational efficiency as in [50] and [40]. In our experiments, the numbers of feature maps produced by the \\(1\\times 1\\) and \\(3\\times 3\\) Conv layers are 4 \\(k\\) and \\(k\\), respectively, with \\(k=36\\). In MLB-A, only \\(K-k\\) channels remain unaltered between the input \\(X\\) and output \\(Y\\). The rest of the channels in \\(Y\\) will be either modified or new features, as shown in Fig. 4(a). As for MLB-B, the input \\(X\\) with \\(K\\) channels stays unaltered, resulting in a higher number of unmodified features being passed to subsequent layers, as shown in Fig. 4(b). Therefore, compared with MLB-A, MLB-B has higher feature redundancy. However, in MLB-B, the features produced by the concatenative link also undergo update because of the upcoming additive link, which helps to learn more complex features. Fig. 1: Dense topology in the DenseNet. Fig. 2: Dense topology in the ResNet. Fig. 3: Flowchart of the proposed MLNet-based HSI classification method. 
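A compact PyTorch sketch of the two blocks defined by (4)-(7) is given below. It follows the description in the text (bottleneck BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) with 4k and k feature maps, addition applied to k channels), but the class names, padding choices, and other details are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

def bottleneck(in_ch, k):
    # BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3) composite layer used for both
    # H_Add and H_Concat; 4k intermediate maps, k output maps.
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, 4 * k, kernel_size=1, bias=False),
        nn.BatchNorm2d(4 * k), nn.ReLU(inplace=True),
        nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False),
    )

class MLBA(nn.Module):
    """MLB-A, following (4)-(5): add k maps to the last k input channels,
    then append k new maps; the output has in_ch + k channels."""
    def __init__(self, in_ch, k):
        super().__init__()
        self.k = k
        self.h_add = bottleneck(in_ch, k)
        self.h_concat = bottleneck(in_ch, k)

    def forward(self, x):
        modified = x[:, -self.k:] + self.h_add(x)            # additive link
        return torch.cat([x[:, :-self.k], modified,
                          self.h_concat(x)], dim=1)          # concatenative link

class MLBB(nn.Module):
    """MLB-B, following (6)-(7): append k new maps first, then add k maps
    to the newly appended channels; the output has in_ch + k channels."""
    def __init__(self, in_ch, k):
        super().__init__()
        self.k = k
        self.h_add = bottleneck(in_ch, k)
        self.h_concat = bottleneck(in_ch, k)

    def forward(self, x):
        return torch.cat([x, self.h_concat(x) + self.h_add(x)], dim=1)
```

Because MLB-A targets the last k channels of a feature embedding that grows by k per block, the additive position naturally shifts outward as blocks are stacked, which is the shifted-addition behavior discussed next.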
Hence, compared with MLB-A, MLB-B is better in feature exploration. Overall, both integration ways in our proposed architectures present advantages. This is further confirmed in our subsequent experiments. Note that in ResNet and DPN, too many features are merged together by addition over the same feature space (called fixed additions in this article), which may impede the information flow [40, 43]. As shown in Fig. 5(a), for the residual architecture, all the extracted features are merged together by addition. For the dual path architecture in the DPN [see Fig. 5(b)], the additive features (denoted by purple color) are merged over the same fixed space. However, for the proposed mixed link architectures, the shifting of additive positions (denoted by red color) in subsequent feature spaces along multiple MLBs can alleviate this problem, as shown in Fig. 6. Fig. 4: Architectures of the proposed two mixed link blocks (MLBs). (a) MLB-A. (b) MLB-B. Fig. 5: Illustration of the fixed additions in the (a) residual architecture [50] and (b) dual path architecture [43]. Fig. 6: Illustration of the shifted additions in the proposed two mixed link architectures. ### _MLNets for HSI Classification_ Based on the MLBs, this article designs two networks, namely, MLNet-A and MLNet-B. For instance, the MLNet-A is constructed by stacking several MLB-As. Fig. 3 illustrates the proposed MLNet-based HSI classification framework. Taking the Indian Pines scene as an example, the proposed network aims to classify each hyperspectral pixel into a certain land cover category. As can be seen, it takes the image patch centered at each pixel as input. In this way, for each hyperspectral pixel, in addition to its own unique spectral characteristic, the spectral information of adjacent pixels and the spatial contextual information can be considered simultaneously, which reduces the intraclass variability and label uncertainty [38]. The extracted image patch is first fed into a \\(3\\times 3\\) Conv layer to learn the initial spectral-spatial features. The number of output feature maps of the initial Conv layer is set as 2 \\(k\\). Then, the obtained features are further processed by three MLBs. Finally, a global average pooling (GAP) [54] layer is utilized to transform the extracted spectral-spatial feature (with 5 \\(k\\) channels) into a 1-D vector for classification. Specifically, we employ a fully connected (FC) layer followed by a softmax function to predict the conditional probability of each category, and the category with the maximum probability is the prediction result. The University of Houston dataset was acquired over the University of Houston campus and the neighboring area. It comprises 349\\(\\times\\)1905 pixels and 144 spectral bands covering the range from 380 to 1050 nm. The spatial resolution is 2.5 m per pixel. This dataset contains 15 classes. The false color composite image, the training map, and the test map of the University of Houston are shown in Fig. 9. ### _Experimental Setup_ We set both the batch size and the training epochs to 100 and chose the Adam algorithm [56] to optimize the proposed network. The initial learning rate and the weight decay penalty were set to 0.001 and 0.0001, respectively. In addition, a cosine shape learning rate schedule was adopted, starting from 0.001 and gradually reducing to 0. The proposed network was designed and implemented using the PyTorch framework. Note that before being fed into the network, the input HSI data were standardized to zero mean and unit variance. 
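Putting the pieces together, a minimal PyTorch sketch of MLNet-A with the training settings just described (initial 3×3 Conv with 2k maps, three MLBs, GAP, FC classifier; Adam with learning rate 0.001, weight decay 0.0001, and a cosine schedule) could look as follows. It reuses the `MLBA` and `bottleneck` definitions from the earlier sketch; the band and class counts are illustrative, and the whole block is an assumption-laden reconstruction, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLNetA(nn.Module):
    """Sketch of MLNet-A: 3x3 Conv (2k maps) -> stacked MLB-A blocks ->
    global average pooling -> fully connected classifier (softmax is folded
    into the cross-entropy loss). MLBA is the class from the sketch above."""
    def __init__(self, in_bands, num_classes, k=36, num_blocks=3):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, 2 * k, kernel_size=3, padding=1, bias=False)
        blocks, ch = [], 2 * k
        for _ in range(num_blocks):          # each MLB grows the width by k
            blocks.append(MLBA(ch, k))
            ch += k                          # 2k + 3k = 5k channels before GAP
        self.blocks = nn.Sequential(*blocks)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(ch, num_classes)

    def forward(self, x):                    # x: (batch, bands, 11, 11) patches
        x = self.blocks(self.stem(x))
        return self.fc(self.gap(x).flatten(1))

# Training configuration as described in the text (band/class counts are
# illustrative; input patches are assumed standardized to zero mean, unit variance).
model = MLNetA(in_bands=200, num_classes=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
criterion = nn.CrossEntropyLoss()
```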
All the experiments were carried out on a personal computer equipped with AMD Ryzen 7 2700X CPU and a single Graphical Processing Unit (GPU) of NVIDIA GeForce RTX 2080. To assess the classification performance, the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (\\(\\kappa\\)) were used. All the experiments were repeated five times and the averaged classification accuracies were reported. ### _Classification Results_ To validate the effectiveness of our MLNet-A and MLNet-B for HSI classification, the proposed models were compared with two different kinds of approaches: 1) three classical methods, including SVM [7], extended morphological profiles (EMP) [9], and 3-D discrete wavelet transform (3DDWT) [57]; and 2) eight deep learning based approaches, including 3-D CNN [31], hybrid spectral convolutional neural network (HybridSN) [58], fully convolutional layer fusion network (FCLFN) [59], DenseNet [39], deep pyramid ResNet (pResNet) [38], MixNet [45], DPN [44], and spatial-spectral squeeze-and-excitation based ResNet (SSSERN) [60]. SVM is a spectral classifier that uses the radial basis function (RBF) as the kernel. For EMP and 3DDWT methods, they are used to extract spatial features from HSIs. The extracted features are concatenated with the original spectral features and fed into the SVM classifier for spectral-spatial classification. 3-D CNN uses 3-D convolution to extract the spectral and spatial information from HSIs simultaneously without relying on any preprocessing. HybridSN is a hybrid 3-D and 2-D model, which reduces the model complexity compared to 3-D CNN alone. FCLFN fuses spectral-spatial features extracted by all Conv layers in the CNN for HSI classification. DenseNet introduces concatenative links between layers, in which each layer is connected with every other layer in a feed-forward fashion. pResNet is an improved ResNet, which introduces additive links in plain CNN and gradually increases the feature map dimension at all Conv layers. MixNet contains three stages each of which is made up of a large number of blocks that have the similar architecture as MLB-A. DPN also combines the advantages of the additive link and concatenative link for HSI classification. However, in DPN the additive features are merged together over the same fixed space, which may impede the flow of information. SSSERN uses spatial-spectral squeeze-and-excitation module to adaptively refine features learned by the residual block, extracting more discriminative features of HSIs. For all CNN-based compared methods, the network architectures were set according to the corresponding references. Considering that the spatial size of input HSI patch has a great impact on the classification performance, for the sake of fairness, we fixed the input patch size to \\(11\\times 11\\) when comparing different CNN-based approaches as in [38], [39], and [60]. Table IV and Fig. 10 present the numerical and visual results of different methods on the Indian Pines dataset. The number of training and test samples used for this experiment is summarized in Table I. As shown in Table IV, the values of OA, AA, and \\(\\kappa\\) of the proposed MLNet-A and MLNet-B are higher than that of other compared approaches. Specifically, our MLNet-A is able to reach the best OA (97.27%), AA (98.38%), and \\(\\kappa\\) (0.9687) values. The MLNet-B achieves very similar values, being its OA, AA, and \\(\\kappa\\) only 0.07%, 0.06%, and 0.0008 lower than MLNet-A, respectively. 
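The OA, AA, and kappa values reported throughout this section can be computed from a confusion matrix; a minimal numpy-only helper (our own, not code from the article) is given below for reference.

```python
import numpy as np

def classification_scores(y_true, y_pred, num_classes):
    """Overall accuracy (OA), average accuracy (AA), and kappa coefficient
    from integer class labels; a minimal numpy-only sketch."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                     # confusion matrix
    total = cm.sum()
    oa = np.trace(cm) / total                             # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    aa = per_class.mean()                                 # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)                          # Cohen's kappa
    return oa, aa, kappa
```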
In addition, compared with other methods, the increases of OA scores obtained by the proposed MLNet-A are 22.91% (SVM), 9.15% (EMP), 8.54% (3DDWT), 5.73% (3-D CNN), 6.31% (HybridSN), 3.67% (FCLFN), 3.44% (DenseNet), 2.64% (pResNet), 2.82% (MixNet), 0.87% (DPN), and 0.75% (SSSERN). The enhancements of AA scores are 15.07% (SVM), 5.17% (EMP), 4.99% (3DDWT), 2.55% (3D CNN), 2.96% (HybridSN), 1.63% (FCLFN), 1.15% (DenseNet), 1.14% (pResNet), 1.13% (MixNet), 0.38% (DPN), and 0.36% (SSSERN). The improvements of \\(\\kappa\\) values are 0.2582 (SVM), 0.1040 (EMP), 0.0972 (3DDWT), 0.0653 (3-D CNN), 0.0717 (HybridSN), 0.0420 (FCLFN), 0.0392 (DenseNet), 0.0301 (pResNet), 0.0323 (MixNet), 0.0100 (DPN), and 0.0087 (SSSERN). In addition, as can be observed from Fig. 10, the classification maps obtained by our models are close to the ground truth map. These encouraging results demonstrate the effectiveness of the proposed models for the Indian Pines dataset. Table V and Fig. 11 illustrate the numerical and visual results of different methods on the University of Pavia dataset. The number of training and test samples used for this experiment is summarized in Table II. From the observation of Table V, we can easily find that the proposed MLNet-B and MLNet-A achieve the best and the second best results in terms of the OA, AA, and \\(\\kappa\\) scores. Compared with other approaches, the OA values' improvement obtained by the proposed MLNet-B are 17.34% (SVM), 16.25% (EMP), 7.68% (3DDWT), 15.68% (3-D CNN), 10.32% (HybridSN), 4.34% (FCLFN), 4.52% (DenseNet), 4.06% (pResNet), 4.61% (MixNet), 4.99% (DPN), and 1.74% (SSSERN). The AA values' enhancements are 7.48% (SVM), 8.73% (EMP), 4.36% (3DDWT), 8.76% (3-D CNN), 4.89% (HybridSN), 4.98% (FCLFN), 6.13% (DenseNet), 5.74% (pResNet), 6.19% (MixNet), 5.32% (DPN), and 0.64% (SSSERN). The \\(\\kappa\\) values' increases are 0.2149 (SVM), 0.2049 (EMP), 0.0998 (3DDWT), 0.1964 (3D CNN), 0.1312 (HybridSN), 0.0598 (FCLFN), 0.0631 (DenseNet), 0.0562 (pResNet), 0.0643 (MixNet), 0.0693 (DPN), and 0.0227 (SSSERN). In addition, as shown in Fig. 11, our MLNet-A and MLNet-B reduce misclassified pixels and provide cleaner classification maps (particularly, compared with the 3-D CNN and HybridSN). These results demonstrate that the MLNet-A and MLNet-B models are effective for the University of Pavia dataset. Table VI and Fig. 12 show the numerical and visual results of different approaches on the University of Houston dataset. The number of training and test samples used for this experiment is summarized in Table III. As shown in Table VI, our MLNet-A achieves the best performance from the overall aspect. Fig. 8: University of Pavia dataset. From left to right: False color composite image, the training map, the test map, and the legend. Fig. 9: University of Houston dataset. From top to bottom: False color composite image, the training map, the test map, and the legend. Fig. 10: Classification maps for the Indian Pines dataset. \\begin{table} \\begin{tabular}{c c c c c c c c c c c c c c} \\hline \\hline No. 
& SVM & EMP & 3DDWT & 3D CNN & HybridSN & FCLFN & DenseNet & pResNet & MixNet & DPN & SSSERN & MLNet-A & MLNet-B \\\\ \\hline 1 & 82.34 & 81.77 & 91.55 & 82.89 & 82.45 & 82.05 & 82.34 & 82.03 & 82.77 & 81.56 & 85.58 & 81.77 & 81.79 \\\\ 2 & 83.36 & 83.65 & 96.52 & 84.34 & 84.38 & 84.40 & 85.15 & 84.27 & 85.11 & 84.38 & 85.06 & 85.11 & 85.00 \\\\ 3 & 99.80 & 99.60 & 99.21 & 91.13 & 96.36 & 96.75 & 93.35 & 89.43 & 88.71 & 96.04 & 99.60 & 99.17 & 99.13 \\\\ 4 & 98.96 & 90.06 & 96.31 & 87.16 & 91.93 & 91.78 & 90.64 & 91.53 & 91.31 & 91.95 & 90.93 & 90.74 & 90.06 \\\\ 5 & 99.86 & 99.24 & 99.62 & 94.91 & 99.98 & 99.68 & 99.08 & 99.24 & 99.92 & 99.32 & 99.87 & 99.96 & 100 \\\\ 6 & 99.30 & 99.30 & 97.90 & 92.45 & 97.48 & 94.97 & 93.99 & 96.22 & 93.85 & 93.43 & 96.22 & 95.80 & 95.80 \\\\ 7 & 77.33 & 87.97 & 78.92 & 80.06 & 83.65 & 87.48 & 81.66 & 84.68 & 85.43 & 86.90 & 83.84 & 82.74 & 83.58 \\\\ 8 & 64.48 & 65.81 & 69.42 & 67.12 & 77.57 & 71.00 & 70.29 & 76.14 & 71.66 & 71.55 & 74.61 & 71.43 & 70.85 \\\\ 9 & 69.41 & 80.08 & 70.25 & 83.97 & 82.55 & 78.43 & 71.27 & 78.96 & 73.88 & 78.05 & 79.34 & 81.95 & 82.49 \\\\ 10 & 63.13 & 63.13 & 51.06 & 65.00 & 63.38 & 59.73 & 59.69 & 61.49 & 61.21 & 63.55 & 60.97 & 64.69 & 63.90 \\\\ 11 & 79.13 & 75.24 & 75.62 & 78.90 & 81.76 & 77.50 & 82.07 & 74.71 & 77.53 & 82.60 & 84.76 & 97.95 & 96.53 \\\\ 12 & 77.43 & 76.95 & 81.27 & 87.95 & 98.16 & 92.49 & 91.57 & 90.30 & 91.70 & 91.12 & 91.72 & 95.08 & 93.95 \\\\ 13 & 68.77 & 86.42 & 90.18 & 88.91 & 87.79 & 81.75 & 80.00 & 78.60 & 79.16 & 84.00 & 82.25 & 86.95 & 86.88 \\\\ 14 & 100 & 100 & 90.57 & 97.90 & 98.46 & 97.17 & 93.04 & 92.87 & 93.05 & 100 & 99.84 & 99.92 \\\\ 15 & 97.25 & 100 & 97.04 & 93.87 & 99.96 & 86.72 & 81.01 & 69.98 & 79.66 & 97.72 & 99.28 & 96.62 & 98.22 \\\\ \\hline OA (\\%) & 81.41 & 82.33 & 83.27 & 83.15 & 86.39 & 83.70 & 82.37 & 82.47 & 82.55 & 84.70 & 85.41 & 86.65 & 86.43 \\\\ AA (\\%) & 83.98 & 84.75 & 86.32 & 85.26 & 88.60 & 85.55 & 83.99 & 83.37 & 83.65 & 86.77 & 87.60 & 88.65 & 88.54 \\\\ \\(\\kappa\\times 100\\) & 73.93 & 80.91 & 88.85 & 81.74 & 85.24 & 82.40 & 80.95 & 81.04 & 81.13 & 83.48 & 84.25 & 85.56 & 85.32 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE V CLASSIFICATION ACCuracies of Different Approaches on the University of Pavia Dataset Fig. 11: Classification maps for the University of Pavia dataset. The OA, AA, and \\(\\kappa\\) scores are 86.65%, 88.65%, and 0.8556, respectively. As for MLNet-B, the values of OA, AA, and \\(\\kappa\\) are as high as 86.43%, 88.54%, and 0.8532, respectively, which demonstrates that the architecture of MLB-B can also capture discriminative features. Besides, compared with other methods, the increases of OA scores obtained by the proposed MLNet-A are 5.24% (SVM), 4.32% (EMP), 3.38% (3DDWT), 3.50% (3D CNN), 0.26% (HybridSN), 2.95% (FCLFN), 4.28% (DenseNet), 4.18% (pResNet), 4.10% (MixNet), 1.95% (DPN), and 1.24% (SSSERN). The improvements of AA scores are 4.67% (SVM), 3.90% (EMP), 2.33% (3DDWT), 3.39% (3-D CNN), 0.05% (HybridSN), 3.10% (FCLFN), 4.66% (DenseNet), 5.28% (pResNet), 5.00% (MixNet), 1.88% (DPN), and 1.05% (SSSERN). The enhancements of \\(\\kappa\\) values are 0.0563 (SVM), 0.0465 (EMP), 0.0371 (3DDWT), 0.0382 (3-D CNN), 0.0032 (HybridSN), 0.0316 (FCLFN), 0.0461 (DenseNet), 0.0452 (pResNet), 0.0443 (MixNet), 0.0208 (DPN), and 0.0131 (SSSERN). In addition, as can be seen from Fig. 12, the proposed MLNet-A and MLNet-B can predict most of the categories well. 
For example, the proposed models show better connectivity for the \"Railway\" category, which is consistent with the numerical results in Table VI. Specifically, for the \"Railway\" category (class 11), the highest performance among all of the comparisons is 84.76% (SSSERN). However, our MLNet-A and MLNet-B can achieve performance as high as 97.95% and 96.53%, respectively. These positive results demonstrate that the proposed MLNets are also effective for the University of Houston dataset. Table VII provides the total number of parameters of different networks. From Table VII, it is easy to observe that the number of parameters in the proposed MLNets is significantly smaller than that of the DenseNet, pResNet, and MixNet. In addition, MLNets outperform DPN while having 1.35 times, 2.17 times, and 3.23 times fewer parameters on the Indian Pines, University of Pavia, and University of Houston datasets, respectively. Although our MLNets contain more parameters than SSSERN, they are able to achieve better classification performance. From Tables IV-VI, one can see that the HybridSN, FCLFN, and DPN methods produce unsatisfactory classification accuracies. The main reason is that these networks need a larger input HSI patch for spatial feature extraction. Next, we further compare the proposed MLNet-A and MLNet-B with these three methods. Note that the best results reported in the corresponding references are used for comparison in this experiment. Table VIII reports the classification accuracies obtained by HybridSN, FCLFN, MLNet-A, and MLNet-B on the Indian Pines dataset. Following HybridSN [58] and FCLFN [59], 10% of the available labeled pixels (randomly selected per class) are used as training samples. Table IX illustrates the results obtained by DPN, MLNet-A, and MLNet-B on the University of Houston dataset. Following DPN [44], various numbers of labeled samples (i.e., 30, 40, and 50) are randomly selected from each class as training samples. As we can see in Table VIII, the proposed MLNets achieve better performance than the HybridSN and FCLFN. Specifically, the proposed MLNet-B reaches an OA that is 0.66 percentage points higher than the HybridSN, and 0.49 percentage points higher than the FCLFN. In addition, from Table IX, one can see that our MLNet-A consistently outperforms DPN in terms of the OA, AA, and \\(\\kappa\\) values when using different numbers of training samples. These comparison results again verify the effectiveness of our MLNets. To sum up, MLNet-A and MLNet-B obtain similarly superior performance on the three real HSI datasets, indicating the effectiveness of the proposed two mixed link architectures for HSI classification. Fig. 12: Classification maps for the University of Houston dataset. ### _Comparison With Other Popular Building Blocks_ Modern deep neural networks utilize modular design to reduce the complexity of neural architectures. Layers are generally grouped into blocks, e.g., the residual block in the ResNet. In this section, the proposed MLB-A and MLB-B are compared with three popular building blocks, including the residual block, dense block, and dual path block. This experiment is implemented on the Indian Pines dataset. Specifically, we set the number of blocks to 3 and construct networks on the basis of different building blocks. The number of parameters of different networks is roughly the same for fair comparison. The results are shown in Table X. We find that networks based on the MLB-A and MLB-B can achieve better performance. 
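Parameter counts such as those in Table VII are typically obtained directly from the model object; a one-line PyTorch-style helper (ours, applicable to any `nn.Module`) is shown below for reference.

```python
def count_parameters(model):
    # Total number of trainable parameters of a PyTorch nn.Module,
    # e.g., count_parameters(MLNetA(in_bands=144, num_classes=15)).
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```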
### _Effect of Proportion of Training Data_ Fig. 13 summarizes the OAs of the proposed MLNet-A and MLNet-B with different percent of training data on the Indian Pines and University of Pavia datasets. Specifically, 1%, 3%, 5%, 10%, 15%, and 20% of samples per class are randomly chosen for training. Then, the rest of the samples are used for testing. Here, the DPN and SSSERN models are adopted as the reference, which perform well in the previous experiments. As can be seen, as the proportion of training samples increases, the performance of all approaches improves. Especially, when the training percent changes from 1% to 10%, the OAs of different methods increase dramatically. It is also observed from Fig. 13 that MLNet-A and MLNet-B outperform the other two compared methods in most cases, in particular when the training set is very small. For example, with only 1% of training samples per class, MLNet-A and MLNet-B achieve the best and the second best performance on the two datasets. These results demonstrate the excellent performance of the proposed approach for HSI classification. ### _Effect of Shifted Additions_ In this section, the effectiveness of shifted additions in the proposed MLNet-A and MLNet-B is analyzed on the Indian Pines, University of Pavia, and University of Houston datasets. Specifically, we construct an MLNet with fixed additive positions (denoted by MLNet-F) for further comparison. For MLNet-F, the features learned by the additive link are always added to the first \\(k\\) channels of the input. Therefore, the additive features are always merged over the same fixed space, which may impede the information flow. To ensure a fair comparison, we make different networks contain the same number of parameters. The only difference among these three networks lies in the additive positions. Table XI shows the experiment results, from which we can easily find that shifted additions have positive contributions to the classification task, demonstrating the effectiveness of the proposed mixed link architectures. Fig. 13: OAs of various approaches with different training percent over the (a) Indian Pines and (b) University of Pavia datasets. ### _Effect of Parameter \\(k\\)_ In this section, the parameter \\(k\\) is analyzed by using three datasets. The parameter \\(k\\) controls the number of feature maps generated by each link, which decides the representation capacity of the proposed MLNets. Fig. 14 shows the OAs obtained by MLNet-A and MLNet-B on three datasets under different \\(k=\\{12,24,36,48\\}\\). It can be observed that MLNet-A with \\(k=36\\) has achieved the best performance on the Indian Pines and University of Houston datasets. For the University of Pavia dataset, the best choice for parameter \\(k\\) is 24, which is slightly better than that of 36. As for MLNet-B, the curves reach the best OA values when the parameter \\(k\\) is set to 36 for all the three datasets. For the sake of consistency and the generalization of our models, we choose 36 as the default setting of the parameter \\(k\\). ### _Visualization of Features Extracted by Different Networks_ For more direct comparison of the effectiveness for feature learning, we randomly select 16 feature maps from each network, which all take from the final discriminant features before GAP, and visualize their distribution in Fig. 15. In Fig. 15, we can see that feature maps learned by the DenseNet and MixNet are coarse because of the downsampling of input HSI patch, presenting local fine spatial information loss. 
In addition, compared with the pResNet and DPN, one can see that the feature maps extracted the proposed MLNet-A and MLNet-B present finer local representation and spatial position, which are helpful for distinguishing objects occupying much smaller areas. ### _Effect of the Number of MLBs_ From Fig. 3, one can observe that the proposed MLNets are mainly constructed by stacking several MLBs. The number of MLBs determines the network depth, which has an important impact on the representative capacity of the proposed model. Increasing the number of MLBs can generally improve the classification performance, but more MLBs in the network may suffer from overfitting. Table XII summarizes the OAs of the proposed MLNet-A and MLNet-B with different number of MLBs over the three datasets. This experiment is conducted using the standard training and test sets, and the number of training and test samples are shown in Tables I-III. As can be observed, the best choice of the number of MLBs for the Indian Pines, University of Pavia, and University of Houston datasets are 3, 2, and 1, respectively. In order to find a suboptimal value for the number of MLBs for all datasets, we further carry out extensive experiments. Specifically, we compare the classification accuracy (in terms of OA) of different number of MLBs with different amount of training data. For the Indian Pines dataset, the percentage of training data varies in the set {10%, 20%, 30%}. For the University of Pavia and University of Houston datasets, the percentage of training data varies in the set {1%, 2%, 3%}. The number of MLBs varies in the set {1, 2, 3, 4, 5, 6}. The corresponding OAs obtained by the proposed MLNet-A and MLNet-B are reported in Tables XIII and XIV. As can be observed, when the number of MLBs is larger than 2, the proposed MLNet-A and MLNet-B are able to achieve relatively stable high accuracy for all datasets. Considering that a larger number of MLBs will cause higher computational cost, it is recommended that the number of MLBs should be set to 3 in more general scenarios. However, too large patch size also results in the degradation of performance, especially for the Indian Pines dataset. The reason behind this is that, multiple materials from different categories might be included in an HSI patch with large size, which will harm the classification tasks. ## V Conclusion In this article, by embracing both additive links and concatenative links, the proposed MLNets enable effective feature reusage and new feature exploration, which not only reduce the relearning of redundant features but also help extract informative spatial-spectral features. Furthermore, through shifted additions, the proposed blending connections further enhance the flow of information between layers in the network. To verify the performance of the proposed MLNets, experiments based on three hyperspectral benchmark datasets are conducted. Experimental results demonstrate the superiority of the proposed MLNets over several state-of-the-art methods, such as DenseNet, ResNet, and DPN. In the future works, we will carry out further research and try to figure out the importance of each link by integrating attention mechanism, which may be helpful for extracting more discriminative features from HSIs. ## Acknowledgment The authors would like to thank the anonymous reviewers for their helpful comments and constructive suggestions for this article. ## References * [1] S. 
Veraverbeke _et al._, \"Hyperspectral remote sensing of fire: State-of-the-art and future perspectives,\" _Remote Sens. Environ._, vol. 216, pp. 105-121, Oct. 2018. * [2] J. Lei, W. Xie, J. Yang, Y. Li, and C.-J. Chang, \"Spectral-spatial feature extraction for hyperspectral anomaly detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 10, pp. 8131-8143, Oct. 2019. * [3] S. W. Shivers, D. A. Roberts, and J. P. McFadden, \"Using paired thermal and hyperspectral aerial imagery to quantify land surface temperature variability and assess crops written California orchards,\" _Remote Sens. Environ._, vol. 222, pp. 215-231, Mar. 2019. * [4] P. Ghamisi, J. Plaza, Y. Chen, J. Li, and A. J. Plaza, \"Advanced spectral classifiers for hyperspectral images: A review,\" _IEEE Geosci. Remote Sens. Mag._, vol. 5, no. 1, pp. 8-32, Mar. 2017. * [5] J. Li, J. M. Bioucas-Dias, and A. Plaza, \"Semisupervised hyperspectral image classification using soft sparse multinomial logistic regression,\" _IEEE Geosci. Remote Sens. Lett._, vol. 10, no. 2, pp. 318-322, Mar. 2012. * [6] S. Delalieux, B. Somers, B. Haest, T. Spanhove, J. V. Borre, and C. Mocher, \"Heathland conservation status mapping through integration of hyperspectral mixture analysis and decision tree classifiers,\" _Remote Sens. Environ._, vol. 126, pp. 222-231, Nov. 2012. * [7] F. Melgani and L. Bruzzone, \"Classification of hyperspectral remote sensing images with support vector machines,\" _IEEE Trans. Geosci. Remote Sens._, vol. 42, no. 8, pp. 1778-1790, Aug. 2004. * [8] J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi, and J. Chanussot, \"Hyperspectral remote sensing data analysis and future challenges,\" _IEEE Geosci. Remote Sens. Mag._, vol. 1, no. 2, pp. 6-36, Jun. 2013. * [9] J. A. Benediktsson, J. A. Palmason, and J. R. Sveinsson, \"Classification of hyperspectral data from urban areas based on extended morphological profiles,\" _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 3, pp. 480-491, Mar. 2005. * [10] W. Li, C. Chen, H. Su, and Q. Du, \"Local binary patterns and extreme learning machine for hyperspectral imagery classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 7, pp. 3681-3693, Jul. 2015. * [11] Y. Gu, J. Chanussot, X. Jia, and J. A. Benediktsson, \"Multiple kernel learning for hyperspectral image classification: A review,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 11, pp. 6547-6565, Nov. 2017. * [12] T. Lu, S. Li, L. Fang, X. Jia, and J. A. Benediktsson, \"From subpixel to superpixel: A novel fusion framework for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 8, pp. 4398-4411, Aug. 2017. * [13] Y. Chen, N. M. Nasrabadi, and T. D. Tran, \"Hyperspectral image classification using dictionary-based sparse representation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 10, pp. 3973-3985, Oct. 2011. * [14] Q. Gao, S. Lim, and X. Jia, \"Spectral-spatial hyperspectral image classification using a multiscale conservative smoothing scheme and adaptive sparse representation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 10, pp. 7718-7730, Oct. 2019. * [15] L. He, J. Li, C. Liu, and S. Li, \"Recent advances on spectral-spatial hyperspectral image classification: An overview and new guidelines,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 3, pp. 1579-1597, Mar. 2018. * [16] X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. 
Paisley, \"Hyperspectral image classification with Markov random fields and a convolutional neural network,\" _IEEE Trans. Image Process._, vol. 27, no. 5, pp. 2354-2367, May 2018. * [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"Imagenet classification with deep convolutional neural networks,\" in _Proc. Adv. Neural Inf. Process. Syst._, Dec. 2012, pp. 1097-1105. * [18] Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, \"Object detection with deep learning: A review,\" _IEEE Trans. Neural Netwc. Learn. Syst._, vol. 30, no. 11, pp. 3212-3232, Nov. 2019. * [19] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, \"Deep learning-based classification of hyperspectral data,\" _IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens._, vol. 7, no. 6, pp. 2094-2107, Jun. 2014. * [20] P. Zhong, Z. Gong, S. Li, and C.-B. Schonlieb, \"Learning to diversify deep belief networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 6, pp. 3516-3530, Jun. 2017. * [21] L. Jiao, M. Liang, H. Chen, S. Yang, H. Liu, and X. Cao, \"Deep fully convolutional network-based spatial distribution prediction for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 10, pp. 5585-5599, Oct. 2017. * [22] L. Mou, P. Ghamisi, and X. X. Zhu, \"Deep recurrent neural networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 7, pp. 3639-3655, Jul. 2017. * [23] W. Qi, X. Zhang, N. Wang, M. Zhang, and Y. Cen, \"A spectral-spatial cascaded 3D convolutional neural network with a convolutional long short-term memory network for hyperspectral image classification,\" _Remote Sens._, vol. 11, no. 20, Oct. 2019, Art. no. 2363. * [24] M. E. Paoletti _et al._, \"Capsule networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 4, pp. 2145-2160, Apr. 2018. * [25] K. Zhu, Y. Chen, P. Ghamisi, X. Jia, and J. A. Benediktsson, \"Deep convolutional capsule network for hyperspectral image spectral and spectral-spatial classification,\" _Remote Sens._, vol. 11, no. 3, Jan. 2019, Art. no. 223. * [26] W. Zhao and S. Du, \"Spectral-spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 8, pp. 4544-4554, Aug. 2016. * [27] M. Zhang, W. Li, and Q. Du, \"Diverse region-based CNN for hyperspectral image classification,\" _IEEE Trans. Image Process._, vol. 27, no. 6, pp. 2623-2634, Jun. 2018. * [28] Q. Gao and S. Lim, \"Classification of hyperspectral images with convolutional neural networks and probabilistic relaxation,\" _Comput. Vis. Image Underst._, vol. 188, Nov. 2019, Art. no. 102801. * [29] B. Pan, Z. Shi, andX. Xu, \"MugNet: Deep learning for hyperspectral image classification using limited samples,\" _ISPRS J. Photogramm. Remote Sens._, vol. 145, pp. 108-119, Nov. 2018. * [30] Y. Chen, H. Jiang, C. Li, X. Jia, and P. Ghamisi, \"Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 10, pp. 6232-6251, Oct. 2016. * [31] Y. Li, H. Zhang, and Q. Shen, \"Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network,\" _Remote Sens._, vol. 9, no. 1, Jan. 2017, Art. no. 67. * [32] Z. Zhong, J. Li, Z. Luo, and M. Chapman, \"Spectral-spatial residual network for hyperspectral image classification: A 3-D deep learning framework,\" _IEEE Trans. Geosci. 
Remote Sens._, vol. 56, no. 2, pp. 847-858, Feb. 2018. * [33] H. Lee and H. Kwon, \"Going deeper with contextual CNN for hyperspectral image classification,\" _IEEE Trans. Image Process._, vol. 26, no. 10, pp. 4843-4855, Oct. 2017. * [34] W. Song, S. Li, L. Fang, and T. Lu, \"Hyperspectral image classification with deep feature fusion network,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 6, pp. 3173-3184, Jun. 2018. * [35] X. Ma, A. Fu, J. Wang, H. Wang, and B. Yin, \"Hyperspectral image classification based on deep deconvolution network with skip architecture,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 8, pp. 4781-4791, Aug. 2018. * [36] N. Audebert, B. Le Saux, and S. Lefevre, \"Deep learning for classification of hyperspectral data: A comparative review,\" _IEEE Geosci. Remote Sens. Mag._, vol. 7, no. 2, pp. 159-173, Jun. 2019. * [37] Z. Gong, P. Zhong, Y. Yu, W. Hu, and S. Li, \"A CNN with multiscale convolution and diversified metric for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, Jun. 2019. * [38] M. E. Paoletti, J. M. Haut, R. Fernandez-Beltran, J. Plaza, A. J. Plaza, and F. Pla, \"Deep pyramidal residual networks for spectral-spatial hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 2, pp. 740-754, Feb. 2019. * [39] M. Paoletti, J. Haut, J. Plaza, and A. Plaza, \"DeepRedense convolutional neural network for hyperspectral image classification,\" _Remote Sens._, vol. 10, no. 9, Sep. 2018, Art. no. 1454. * [40] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, \"Densely connected convolutional networks,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, Jul. 2017, pp. 4700-4708. * [41] Z. Meng, L. Li, L. Jiao, Z. Feng, X. Tang, and M. Liang, \"Fully dense multiscale fusion network for hyperspectral image classification,\" _Remote Sens._, vol. 11, no. 22, Nov. 2019, Art. no. 2718. * [42] C. Zhang, G. Li, and S. Du, \"Multi-scale dense networks for hyperspectral remote sensing image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 11, pp. 9201-9222, Nov. 2019. * [43] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng, \"Dual path networks,\" in _Proc. Adv. Neural Inf. Process. Syst._, Dec. 2017, pp. 4467-4475. * [44] X. Kang, B. Zhuo, and P. Duan, \"Dual-path network-based hyperspectral image classification,\" _IEEE Geosci. Remote Sens. Lett._, vol. 16, no. 3, pp. 447-451, Mar. 2018. * [45] W. Wang, X. Li, J. Yang, and T. Lu, \"Mixed link networks,\" in _Proc. Int. Joint Conf. Artif. Intell._, Jul. 2018, pp. 2819-2825. * [46] W. Wang, S. Dou, Z. Jiang, and L. Sun, \"A fast dense spectral-spatial convolution network framework for hyperspectral images classification,\" _Remote Sens._, vol. 10, no. 7, Jul. 2018, Art. no. 1068. * [47] Z. Meng, L. Li, X. Tang, Z. Feng, L. Jiao, and M. Liang, \"Multipath residual network for spectral-spatial hyperspectral image classification,\" _Remote Sens._, vol. 11, no. 16, Aug. 2019, Art. no. 1896. * [48] S. K. Roy, S. Chatterjee, S. Bhattacharyya, B. B. Chaudhuri, and J. Platos, \"Lightweight spectral-spatial squeeze-and-excitation residual bag-of-features learning for hyperspectral classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 8, pp. 5277-5290, Aug. 2020. * [49] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, \"Residual dense network for image super-resolution,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, Jun. 2018, pp. 2472-2481. * [50] K. He, X. Zhang, S. Ren, and J. 
Sun, \"Deep residual learning for image recognition,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, Jun. 2016, pp. 770-778. * [51] S. Ioffe and C. Szegedy, \"Batch normalization: Accelerating deep network training by reducing internal covariate shift,\" in _Proc. IEEE Int. Conf. Mach. Learn._, Jul 2015, pp. 448-456. * [52] X. Glorot, A. Bordes, and Y. Bengio, \"Deep sparse rectifier neural networks,\" in _Proc. Int. Conf. Artif. Intell. Statist._, Apr. 2011, pp. 315-323. * [53] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, \"Gradient-based learning applied to document recognition,\" _Proc. IEEE_, vol. 86, no. 11, pp. 2278-2324, Nov. 1998. * [54] M. Lin, Q. Chen, and S. Yan, \"Network in network,\" in _Proc. IEEE Int. Conf. Learn. Represent._, Apr. 2014, pp. 1-8. * [55] P. Ghamsiier _et al._, \"New frontiers in spectral-spatial hyperspectral image classification: The latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning,\" _IEEE Geosci. Remote Sens. Mag._, vol. 6, no. 3, pp. 10-43, Sep. 2018. * [56] D. P. Kingma and J. L. Ba, \"Adam: A method for stochastic optimization,\" 2014, _arXiv:1412.6980_. * [57] X. Cao, L. Xu, D. Meng, Q. Zhao, and Z. Xu, \"Integration of 3-dimensional discrete wavelet transform and Markov random field for hyperspectral image classification,\" _Neurocomputing_, vol. 226, pp. 90-100, Feb. 2017. * [58] S. K. Roy, G. Krishna, S. R. Dubey, and B. B. Chaudhuri, \"HybridSEN: Exploring 3-D-2-D CNN feature hierarchy for hyperspectral image classification,\" _IEEE Geosci. Remote Sens. Lett._, vol. 17, no. 2, pp. 277-281, Jun. 2019. * [59] G. Zhao, G. Liu, L. Fang, B. Tu, and P. Ghamsi, \"Multiple convolutional layers fusion framework for hyperspectral image classification,\" _Neurocomputing_, vol. 339, no. 28, pp. 149-160, Apr. 2019. * [60] L. Wang, J. Peng, and W. Sun, \"Spatial-spectral squeeze-and-excitation residual network for hyperspectral image classification,\" _Remote Sens._, vol. 11, no. 7, Apr. 2019, Art. no. 884. \\begin{tabular}{c c} & Zhe Meng (Member, IEEE) received the B.S. and Ph.D. degrees from Xidian University, Xi'an, China, in 2014 and 2020, respectively. He is currently a Lecturer with the School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications. His research interests include deep learning and hyperspectral image classification. \\\\ \\end{tabular} \\begin{tabular}{c c} & Licheng Jiao (Fellow, IEEE) received the B.S. degree from Shanghai Jiaotong University, Shanghai, China, in 1982, and the M.S. and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 1984 and 1990, respectively. Since 1992, he has been a Professor with Xidian University, Xi'an, China, where he is currently the Director of Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education of China. His research interests include image processing, natural computation, machine learning, and intelligent information processing. Dr. Jiao is the Chairman of the Awards and Recognition Committee, the Vice Board Chairperson of the Chinese Association of Artificial Intelligence, the Fellow of IET/CAAI/CIE/CCF/CAA, a Council of the Chinese Institute of Electronics, a Committee Member of the Chinese Committee of Neural Networks, and an Expert of the Academic Degrees Committee of the State Council. \\\\ \\end{tabular} \\begin{tabular}{c c} & Miaomiao Liang received the Ph.D. 
degree in pattern recognition and intelligent systems from Xidian University, Xi'an, China, in 2018. She is currently a Lecturer with the School of Information Engineering, Jiangxi University of Science and Technology. Her research interests include computer vision, machine learning, and remote sensing images processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Feng Zhao (Member, IEEE) received the B.S. degree in computer science and technology from Heilongjiang University, Heilongjiang, China, in 2004, the M.S. degree in signal and information processing from the Xi'an University of Posts and Telecommunications, Xi'an, China, in 2007, and the Ph.D. degree in pattern recognition and intelligent system from Xidian University, Xi'an, China, in 2010. She has been a Professor with the School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, since 2015. She has authored or coauthored more than 30 articles and two books. Her research interests include fuzzy information processing, pattern recognition, and image processing. Dr. Zhao was a recipient of New-Star of Young Science and Technology Award supported by Shaanxi, in 2014, and the IET International Conference on Ubi-media Computing Best Paper Award, in 2012. \\\\ \\end{tabular}
Convolutional neural networks (CNNs) have significantly improved the accuracy of hyperspectral image (HSI) classification. However, CNN models usually generate a large number of feature maps, which leads to high redundancy and does not guarantee that discriminative features characterizing the complex structures of HSIs are effectively extracted. In this article, two novel mixed link networks (MLNets) are proposed to enhance the representational ability of CNNs for HSI classification. Specifically, the proposed mixed link architectures integrate the feature reusage property of the residual network and the capability of effective new feature exploration of the densely convolutional network, extracting more discriminative features from HSIs. Compared with the dual path architecture, the proposed mixed link architectures can further improve the information flow throughout the network. Experimental results on three hyperspectral benchmark datasets demonstrate that our MLNets achieve competitive results compared with other state-of-the-art HSI classification approaches.

Index Terms—Convolutional neural network (CNN), deep learning, hyperspectral image (HSI) classification, mixed link network (MLNet).
# A New Taxonomy for Distributed Spacecraft Missions Jacqueline Le Moigne, John Carl Adams, and Sreeja Nag Manuscript received May 1, 2019; revised September 23, 2019 and December 3, 2019; accepted December 18, 2019. Date of publication January 22, 2020; date of current version March 17, 2020. This work was supported in part by the NASA Goddard Space Flight Center Internal Research and Development Program and in part by the NASA Earth Science Technology Office Advanced Information Systems Technology Program. _(Corresponding author: Jacqueline Le Moigne.)_ J. Le Moigne was with the NASA Goddard Space Flight Center, Greenbelt, MD 20771 USA. She is now with NASA Earth Science Technology Office, Greenbelt, MD 20771 USA (e-mail: [email protected]). J. C. Adams is with NASA Science Mission Directorate, Washington, DC 20546 USA (e-mail: [email protected]). S. Nag is with Bay Area Environmental Research Institute, NASA Ames Research Center, Moffett Field, CA 94035 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TSARS.2020.2964248 ## I Introduction Although space- and ground-segment technologies have advanced significantly over the years, the evolution of our observing systems has been quite linear. We continue to use stove-piped spacecraft missions that collect more data and downlink it at ever faster bit rates, without applying potentially useful and timely information that may be available from other observing system assets or ground systems. 
This motivated Goddard's study of spacecraft constellations in 1999, NASA's "Earth Science Vision 2025" [1] in 1999-2002, as well as more recent internal studies at several NASA Centers. The cornerstone of the 2025 Vision was to improve prediction, specifically including daily and even hourly measurements. The Vision described a new paradigm in which holistic, integrated insight, foresight, and discovery replaced point monitoring and exploration. In order to reach this Vision, different new technologies were proposed, including the exploration of new vantage points, such as L1, L2, and Molniya orbits; real-time, adaptive, remote, and _in situ_ sensor swarms; and SensorWebs and on-demand virtual instruments. In particular, the concept of SensorWeb was extensively studied and was later the topic of several NASA Earth Science Technology Office (ESTO) solicitations and awards; for example, the weather prediction technology study to identify the science applications and technology improvements needed to aim for weather forecasts of 10-14 days in the 2025 timeframe. It was followed by the development of an "Architecture for Advanced Weather Prediction Technologies, in 2008, using a two-way interactive SensorWeb and modeling system" [2]. Other projects dealt with the application of SensorWebs to disaster management [3, 4, 5]. In the Earth Science Vision, a SensorWeb was seen as creating _On-Demand Virtual Instruments_ in which _any user could dynamically reconfigure the SensorWeb or its components and mine the digital libraries/metadata warehouses to provide products that are uniquely tailored for the desired measurement_. It would provide _the ability to rapidly carry out scientific "experiments" without waiting for the selection_, _development, and launch of a new mission_ [1]. Similarly described by Torres-Martinez _et al._ [6], the SensorWeb concept proposed by NASA defined _a virtual organization of multiple numbers and types of sensors combined into an intelligent "macroinstrument" in which information collected by any one sensor could be used by any other sensor in the web, as necessary, to accomplish a coordinated observing mission_. Overall, SensorWebs were proposed to do the following:

1. acquire simultaneously multiple observation types;
2. use multiple vantage points and multiple resolutions simultaneously in a constellation or formation flying configuration;
3. use low-cost micro- and nanosatellites, e.g., utilizing sensorcraft with deployable apertures;
4. acquire overlapping measurements for calibration and validation;
5. utilize reprogrammable and reconfigurable sensor systems; and
6. increase the autonomy of space systems.

Another study performed by Barrett [7] identified two types of motivations for multiple spacecraft missions: first, _scientific motivation_, i.e., get better resolution to either isolate the signal when it is a microphenomenon or to cover the entire signal space when it is a fast or a macrophenomenon; and second, _engineering motivation_, i.e., provide extensibility, be able to add and/or replace sensors in the future, potentially incrementally, or provide redundancy using spares to respond to failures. The main recommendation coming out of these studies was for NASA to have a strategy defining an incremental deployment of ground and space assets across a full range of sensor-web-capable earth science missions. 
This would necessitate the development of specific standards and capabilities to ensure scalability, homogeneity, and operability of such missions [6]. Another historical program of interest is the DARPA F6 Fractionated Spacecraft program [8]. F6 was started in 2008 with the goal of developing and demonstrating on orbit key capabilities for spacecraft fractionation. This was envisioned using a \"cluster of wirelessly interconnected modules that could share their resources.\" The main goal was to demonstrate adaptability and survivability of space systems. The program relied on the development of open interface standards [9] that would ensure the sustainment and development of future fractionated systems and low-cost associated commercial hardware. Cellularized spacecraft, which is related to fractionated spacecraft, has been developed under the more recent DARPA Phoenix Program [10, 11], with the goal of changing the paradigm by which space systems are engineered, first designed, then developed, then built, and finally deployed. With that purpose, that program aims at reaching the terrestrial paradigm of \"assemble, repair, upgrade, and reuse\" [9]. This includes developing the following technologies: _advanced GEO space robotics_, including on-orbit assembly, repair, life extension, and refueling; _satlets_, i.e., small independent modules that incorporate essential satellite functionality but share data, power, and thermal management capabilities to provide a low-cost, modular satellite architecture; and _a standardized payload orbital delivery (POD) system_. The Phoenix concept could improve satellite usefulness and lifespan and could lower their development and deployment costs. But, although the SensorWeb and many related distributed missions' concepts and technologies were extensively studied and matured before 2008, it is only recently that national space organizations, industry, and academia have been proposing and developing distributed spacecraft missions (DSMs)and constellations; some examples are the recent NASA-funded Cyclone Global Navigation Satellite System (CYGNSS) [12] and Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of SmallSats (TROPICS) [13] earth science missions, ESA QB50 [14] and Proba-3 [15], and new commercial ventures, such as Planet Labs [16], OneWeb [17], and Capella Space [18]. Additional information about missions and research performed between 2000 and 2017 can be found in [26] and [27]. Due to this renewed interest in distributed concepts, starting in 2013, NASA Goddard conducted several internal consecutive studies, during which the general concept of \"DSMs\" and its related terminology was defined. The main objectives of these studies have been first to summarize what has been explored and developed previously in the domain of distributed missions, what is the state-of-the-art, who are the main players, and what are the main challenges, and then to provide a preliminary characterization of the tradeoffs that link science return and mission architectures. The outcomes of the study included a full terminology, some preliminary mission taxonomies, a survey of past, current, and future DSMs, examples of potential Science applications, a list of technology challenges, and some preliminary results in developing DSMs' cost and risk analysis tools. 
This article summarizes our results from terminology and taxonomy points of view, with the goal of facilitating the development of such future concepts (particularly when several organizations are involved), as well as the trade analysis and the actual design of future DSMs. ## II Distributed Spacecraft Missions _Main Definition:_ A DSM is a mission that involves multiple spacecraft to achieve one or more common goals. This general definition of a DSM, defined as the incept of our studies, purposefully, does not specify if the multiple spacecraft are launched together, achieve common goals by design or in an _ad hoc_ fashion, or if the common goals are scientific. \"Multiple\" in this case refers to \"two or more\" and can refer to tethered or nontethered satellites, although very few tethered concepts have been proposed so far. The various levels of details that describe a DSM are embedded in the specific terms defined in Section IV. For example, a DSM can be defined from inception and we call it a \"constellation,\" or it can become a DSM after the fact, in which case it is an \"ad hoc\" DSM or a \"virtual\" mission. For all the various types of DSMs, we do not assume the spacecraft to be of any specific sizes, i.e., the studies were not restricted to CubeSats or SmallSats (these sizes are defined in Section III), although lowering costs often involve choosing smaller spacecraft. As described in Section I, for the past 20 years, the concept of distributed observations has not been systematically traded when designing main stream missions, e.g., decadal survey missions, although it has been considered when it was the only solution capable to satisfy some given science requirements (e.g., in earth science for the GRACE mission or in heliophysics for the magnetospheric multiscale mission, MMS, designs). Nevertheless, this concept is being found again in more recent studies, such as in the 2017 Earth Science Decadal Survey [38] where ideas, such as \"advanced cost-effective observation methodologies such as ad hoc and _distributed observations_,\" \"given cost considerations, miniaturization using CubeSats, SmallSats, and _satellite constellations_ could be an efficient pathway to technological development,\" and \"rapid capture and delivery of synoptic data by space-borne assets following a disaster can directly mitigate the loss of life and infrastructure. These data can be obtained by rapidly _retasking existing satellites_, deploying new satellites dedicated to a specific measurement objective, or by deploying a _constellation_ of future satellites that provide the temporal fidelity required,\" being proposed for science as well as for disaster monitoring. Similar ideas can also be found in heliophysics, and even planetary science and astrophysics. Additionally, some flagship missions, such as future Landsat missions are currently being redesigned considering this concept. This gap of more than 10 years in a systematic interest given to a DSM is probably explained by the cost and, potentially, the complexity associated with such missions. 
The high costs that were estimated for a potential DSM were often the consequence of constellation designs based on repeating \\(n\\) times the design and the building of one spacecraft, therefore leading to costs being \\(n\\) times the cost of a monolithic mission; this is explained not only by the mission design itself but also by cost models that have been designed for monolithic missions and do not take into account cost savings associated with economies of scale and with risk minimization when dealing with a DSM. On the other hand, it is true that building a distributed mission adds to the complexity of the mission, not only in the development phase, but also in the operational phase, and this complexity translates into additional costs and risk to the mission. Therefore, it is only now that new technologies and capabilities, such as SmallSats, CubeSats, hosted payloads, instrument miniaturization, onboard computing, better space communications, and ground systems automation, have appeared and became mature, that a DSM seems to be feasible for a reasonable and potentially lower cost and risk than monolithic missions. At the same time that these new technologies have been developed, a new economic environment is developing, with lower or flat space budgets, a greater international competition, and a steady growth of the private sector in space ventures. As stated in a series of articles published in Space News by Wertz [39], space needs to be reinvented, and having a mix of traditional, large programs with some much lower cost, more rapid, more responsive programs is a way to respond to this new environment. Among the more responsive programs, distributed and disaggregated assets offer solutions that significantly reduce risk. In particular, as we heard in many of the science interviews that we conducted, apart from science goals that can only be attained with a DSM, distributed missions are usually motivated by several goals, among which, increasing data resolution in one or several dimensions (e.g., temporal, spatial, or spectral), decreasing launch costs, increasing data bandwidths, as well as ensuring data continuity and intermission validation and complementarity. Therefore, our goal in developing the proposed terminology and taxonomy was to capture these science goals and turn them into trades that will be used to design the future DSMs; the characteristics defined in the remainder of this article represent a preliminary characterization of the tradeoffs that link science return and mission architectures. An example of the utility of this characterization is illustrated by our design of the Trade-space Analysis Tool for Constellations (TAT-C) [25], which provides a framework to facilitate DSM prepphase A investigations and optimize DSM designs with respect to _a priori_ science goals. TAT-C was designed based on these principles with the following: 1. TAT-C inputs that include: _mission concept_ (e.g., area of interest, mission duration, and launch options); _satellite specifications_ (e.g., existing satellites, altitude/inclination ranges, specific orbit needs, and communication bands); _payload specifications_ (e.g., concept of operations, number and the type of instruments, mass, volume, and optical characteristics); and _constraints_ on the range of output values; and 2. 
TAT-C science outputs that include: all _metrics_ computed for each architecture (e.g., average of spatial and temporal metrics); _spatial information_ (e.g., spatial resolution, swath overlap percentage, occultation positions, and coverage); _temporal information_ (e.g., revisit, access, and repeat times); _angular information_ (e.g., view zenith, solar illumination); and _radiometric information_ (e.g., signal-to-noise fall-off). This terminology and taxonomy allowed us to clarify the variables that were essential to trade when designing DSM concepts. The remainder of this article is organized in the following way. A nomenclature of spacecraft size and mass is given in Section III. The full DSM terminology taxonomy is described in Section IV and a preliminary use of this taxonomy for DSM design is given in Section V. ## III Small Satellite Nomenclature Because DSMs are often designed and flown using small satellites as individual elements to make the system cost feasible, this section attempts to provide a nomenclature of what is a \"small\" spacecraft. The term \"SmallSat\" has been used with various meanings, and some of these discrepancies have been captured in 2010, as they relate to European missions [19]. A formal small satellite classification was first performed in 1991 by Sweeting [20], and then refined by Kramer _et al._ in 2008 [21]. In 2004, Konecny [22] extended the range of minisatellites from 100 to 1000 kg, abolishing the medium satellite class. The new classification was then reviewed by Xue _et al._ in 2008 [23]. Another definition is given in the FY13 SmallSat Technology Partnerships solicitation from the NASA Space Technology Mission Directorate (STMD) [24]: \"Small spacecraft, for the purpose of this notice, are defined as those with a mass of 180 kg or less and capable of being launched into space as an auxiliary or secondary payload.\" In this last nomenclature, minisatellites start at 100 kg but the upper mass is limited to 180 kg instead of 500 kg, and the threshold between femto- and picosatellites is slightly different. Nag _et al._[40] discussed small satellite classes in detail, with examples from international missions, and its impact on cost and risk. For the purpose of our study and the remainder of this article, we will adopt the nomenclature shown in Table I, utilizing the general term of _SmallSats for spacecraft of less than 180 kg_ and _minisatellites for spacecraft of mass in the range of 100-180 kg_. Note that CubeSats usually fall in the nano- to microsatellite range. ## IV DSM Terminology Taxonomy Generally, a _taxonomy_ can be defined as the description, identification, nomenclature, and classification of a group of things or concepts. Although the main terms related to DSMs have been defined in the past, for example in [7, 26], and [29], many other terms have not been defined or not defined consistently. Additionally, the meaning of some of these terms has also evolved with new technologies being developed, e.g., CubeSats, and it is important to define them, being grouped together under the same umbrella and for the purpose of collaborative mission design, development, and operations. _Main definition:_ A DSM is a mission that involves multiple spacecraft to achieve one or more common goals. 
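Before walking through the individual TABs, the small-satellite nomenclature adopted in Section III (Table I) can be made concrete with a short sketch. Only the 100 kg and 180 kg boundaries are stated explicitly in the text above; the finer femto/pico/nano/micro boundaries used below are commonly used conventions and should be read as assumptions rather than values quoted from Table I.

```python
def spacecraft_class(mass_kg: float) -> str:
    """Classify a spacecraft by wet mass.

    Only the 100 kg and 180 kg boundaries come from the adopted nomenclature;
    the smaller class boundaries are assumed conventional values.
    """
    if mass_kg <= 0:
        raise ValueError("mass must be positive")
    bins = [
        (0.1, "femtosatellite"),    # assumed boundary
        (1.0, "picosatellite"),     # assumed boundary
        (10.0, "nanosatellite"),    # assumed boundary
        (100.0, "microsatellite"),  # assumed boundary
        (180.0, "minisatellite"),   # 100-180 kg per the adopted nomenclature
    ]
    for upper, name in bins:
        if mass_kg < upper:
            return name
    return "larger than a SmallSat (>180 kg)"

def is_smallsat(mass_kg: float) -> bool:
    """SmallSat, per the adopted usage in this article: less than 180 kg."""
    return mass_kg < 180.0

# Example: a 4-kg CubeSat falls in the nanosatellite class and is a SmallSat.
print(spacecraft_class(4.0), is_smallsat(4.0))
```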
This general definition of DSM (given in Section II and repeated above), purposefully, does not specify if the multiple spacecraft are launched together, achieve common goals by design, or in an _ad hoc_ fashion (i.e., application-driven), or if the common goals are scientific. These different levels of details are embedded in the following definitions. In order to derive this terminology, various DSM characteristics were considered and their different instantiations were classified in the taxonomy described in Table II. All the terms shown in this taxonomy that need to be defined accurately are described in the remainder of this section; note that the terms are listed according to Table II, with each box referred to as "TAB." The three main characteristics that have been considered are: TAB 1) Organization; TAB 2) Physical Configuration; and TAB 3) Functional Configuration. Under these three main TABs, a DSM can be defined by a certain number of characteristics.

_Under TAB 1_, "Organization," two characteristics define a DSM: "Appearance" and "Inter-Spacecraft Relationship."

**TAB 1.1 Appearance** Under this TAB, three different types of appearance have been defined for all types of DSMs that will be defined in TAB 2.1. These are "Homogeneous," "Heterogeneous," and "Reconfigurable."

_TAB 1.1.1 Homogeneous Constellation or Formation_ A DSM whose member spacecraft employ functionally identical bus, payload, and operational characteristics (e.g., MMS and Iridium).

_TAB 1.1.2 Heterogeneous Constellation or Formation or Fractionated Spacecraft_ A DSM whose member spacecraft employ different bus, payload, or operational characteristics. Note that a fractionated spacecraft is always heterogeneous.

_TAB 1.1.3 Reconfigurable Constellation or Formation or Fractionated Spacecraft_ A DSM that possesses the ability to change one or more intrinsic characteristics while on orbit. Some of these characteristics may include any or all of the following changes: orbit, attitude, relative spacing, observing activity coordination with other spacecraft, number of spacecraft, and other TBD characteristics. Iridium is an example of a nonreconfigurable, but homogeneous constellation. MMS is reconfigurable and homogeneous, and F6 would have been a reconfigurable and heterogeneous mission.

**TAB 1.2 Inter-Spacecraft Relationships**

_TAB 1.2.1 None_ This describes a DSM with no, or no specific, interspacecraft relationships.

_TAB 1.2.2 Hierarchical Relationship_ A constellation system in which one (called mothership) or several of the distributed spacecraft has a higher degree of capability and serves as the central focal point for the constellation communication, control, and command, and/or general coordinator of all constellation activities.

_TAB 1.2.3 Peer-to-Peer Relationship_ A system in which all the distributed spacecraft can interact with every other spacecraft with equivalent control, capabilities, and responsibilities, assuming appropriate communication and a predetermined routing protocol between nodes (e.g., disruption-tolerant networking for low earth orbit (LEO) constellations [28]).

_TAB 1.2.4 Rendezvous Mission_ A rendezvous mission is a mission in which two spacecraft perform an orbital maneuver such that they approach each other at a very close distance and come to within actual or visual contact.

_TAB 1.2.4.1 Cooperative Rendezvous Missions_ A cooperative rendezvous mission is a mission in which two spacecraft cooperate with each other to achieve a rendezvous maneuver. 
The two spacecraft arrive at the same orbit and approach at a very close distance, in a cooperative manner; this can be followed or not followed by docking, during which the two spacecraft come into contact. One example is the rendezvous and docking performed between a spacecraft (or a space shuttle) and the International Space Station.

_TAB 1.2.4.2 Uncooperative Rendezvous Missions_ This is a mission performing a type of space maneuver during which one spacecraft under known control arrives at the same orbit and approaches to a very close distance of another uncontrolled spacecraft or space object; this can be followed or not followed by docking or landing. This is the case, for example, when one spacecraft is servicing another nonfunctioning satellite or a spacecraft tumbling out of control (e.g., DARPA Phoenix Satellite Servicing). Another example is a rendezvous mission between a spacecraft and a natural object such as an asteroid (e.g., OSIRIS-REx mission).

_Under TAB 2_, "Physical Configuration," four characteristics define a DSM: "Spatial Relationship," "Spatial Control," "Temporal Relationship," and "Temporal Control."

**TAB 2.1 Spatial Relationship** Under "Spatial Relationship," the two main types of DSM are the general type called "Constellations" and "Virtual or _Ad Hoc_ Missions." In other words, a Constellation is the most general term defining a DSM. Then, under a Constellation, some specific types can be defined, e.g., "Formations," "Fractionated," and "Clusters." Note that some DSMs may comprise one or more of the listed relationships. For example, multiangular observations may be done by clusters and the temporal resolution could be improved by a constellation of clusters (also called "clustellation"). In fact, this mix and match of different types of DSMs to make a coalition is possible under the proposed taxonomy.

_TAB 2.1.1 Constellation_ A reference to a space mission that, beginning with its inception, is composed of two or more spacecraft that are placed into specific orbit(s) for the purpose of serving a common objective (e.g., MMS or Iridium).

_TAB 2.1.1.1 General Constellation_ This refers to the most general type of constellations and might have various attributes; for example, a constellation may be called _uniform_ when the spacecraft are uniformly distributed in multiple orbital planes and uniformly distributed in each orbital plane. The "Walker Delta" constellation (GPS, Galileo) and "Walker Star" constellation (Iridium) are examples of uniform constellations.

_TAB 2.1.1.2 Formation_ Two or more spacecraft that conduct a mission such that the relative distances and three-dimensional spatial relationships (i.e., distances and angular relationships between all spacecraft) are tightly controlled, usually through direct sensing by one spacecraft of at least one other spacecraft's state (e.g., GRACE and PRISMA). A special case of Formations is a String of Pearls Formation, defined in the following manner.

_TAB 2.1.1.2.1 String of Pearls_ A String of Pearls orbital configuration is a type of formation flying in which all the spacecraft are flying in the same orbit separated in the along-track direction by fixed distances (e.g., Terra, SAC-C, EO1, and Landsat-7). 
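As an illustration of the string-of-pearls geometry just defined, the fixed along-track separation corresponding to a chosen time offset can be estimated from the orbital velocity. The sketch below uses a two-body, circular-orbit approximation; the 705-km altitude and 60-s offset are arbitrary example values, not parameters of any mission named above.

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_378_137.0       # m, Earth's equatorial radius

def circular_orbit_velocity(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude (two-body approximation)."""
    r = R_EARTH + altitude_m
    return math.sqrt(MU_EARTH / r)

def along_track_separation(altitude_m: float, time_offset_s: float) -> float:
    """Along-track distance between two 'pearls' separated by a time offset,
    assuming both fly the same circular orbit."""
    return circular_orbit_velocity(altitude_m) * time_offset_s

# Example (assumed values): two spacecraft 60 s apart at 705 km altitude.
v = circular_orbit_velocity(705e3)          # ~7.5 km/s
d = along_track_separation(705e3, 60.0)     # ~450 km
print(f"orbital speed ~{v/1e3:.2f} km/s, separation ~{d/1e3:.0f} km")
```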
_TAB 2.1.1.3 Fractionated Spacecraft_ A fractionated spacecraft is a satellite architecture where the functional capabilities of a conventional monolithic spacecraft are distributed across multiple modules that are not structurally connected and that interact through wireless links. These modules are capable of sharing their resources and utilizing resources found elsewhere in the cluster. Unlike general constellations and formations, the modules of a fractionated spacecraft are always largely heterogeneous and perform distinct functions corresponding, for instance, to the various subsystem elements of a traditional satellite (e.g., DARPA F6 System).

_TAB 2.1.1.4 Cluster_ A collection of spacecraft that is not uniformly distributed over a particular spatial region, in contrast to a Walker constellation, e.g., clusters aggregate in certain orbital regions (e.g., MMS and COSMIC). A cluster may be, subjectively, considered "tight" or "loose" depending on the relative proximity of the member spacecraft.

_TAB 2.1.2 Virtual or "Ad Hoc" Mission_ A virtual mission is a DSM that exploits observations made from multiple missions that were designed independently, but the output can be considered in a coordinated fashion as if they were acquired from a single mission. A virtual mission exploits the coordinated positions and the complementarity of the observations to add value to each of the individual measurements. An example of such a virtual mission is the A-Train. The original A-Train DSM included the Aqua, Aura, and PARASOL satellites that were later joined by the CloudSat, CALIPSO, GCOM-W1, and OCO-2 satellites. PARASOL has now ceased operations, whereas CloudSat/CALIPSO have lowered their orbit and also left the A-Train (see [https://atrain.nasa.gov/](https://atrain.nasa.gov/) for more information).

_TAB 2.2 Spatial Control_ Under spatial control, missions' characteristics are defined in terms of the end results as well as how this particular type of spatial control has been obtained.

_TAB 2.2.1 Pre-Determined_ This describes missions that do not have any specific spatial control, except the one predetermined before launch. This is often the case of low-budget CubeSat missions.

_TAB 2.2.2 Ground Controlled_ This type of constellation is spatially controlled from the ground. An example is the MMS mission.

_TAB 2.2.3 On-Orbit Controlled_ This type of constellation is spatially controlled in orbit with some level of autonomy (described in TAB 3.2). A special case of this type of constellation is a _swarm_, described below.

_TAB 2.2.3.1 Swarm_ A reference to a space mission that is composed of a high number of micro- or nanospacecraft that serve a common objective and that are uncontrolled or loosely coordinated from the ground but with some sort of onboard autonomous control.

_TAB 2.2.4 Mix of Ground and On-Orbit Controlled_

_TAB 2.2.4.1 Formation Flyers (FFs)_ FFs were defined in TAB 2.1.1.2; FFs can either be controlled from the ground or controlled onboard or a combination of both. In this TAB, the specific spatial control patterns associated with FF are defined.

_TAB 2.2.4.1.1 "Tight" or Precision Formation Flying_ This represents a subjective characteristic referring to the preciseness required of a particular formation. 
There does not appear to be any particular set of objective metrology standards regarding the degree of precision, which is determined entirely by the application; such applications include distributed "virtual" apertures, often associated with interferometry or distributed spacecraft optics, for which a very precise formation is required. The terms "tight" and "precision" are sometimes used with different meanings depending on the degree of precision that is required. PRISMA and PROBA-3 are examples of Precision Formation Flying.

_TAB 2.2.4.1.1.1 Tandem Flyers_ Tandem Flyers represent a specific case of Precision FF. These are two or more spacecraft that follow one another in the same orbital plane (e.g., GRACE and GRAIL). It represents a special case of precision formation flying, with lesser requirements on control, owing to the two spacecraft being in the same orbit.

_TAB 2.2.4.1.2 "Loose" Formation Flying_ This represents a subjective description of a smaller degree of precision and accuracy needed to be maintained between the spacecraft that comprise the FF. The degree of precision required in a loose FF is not as strict as the one required by a Precision FF.

_TAB 2.3 Temporal Relationship_ This TAB mainly considers the temporal deployment of the multiple spacecraft in the constellation.

_TAB 2.3.1 Deployment_ Deployment includes two main types of temporal deployment, "All at Once" and "Phased."

_TAB 2.3.1.1 All At Once Deployment_ In this type of mission, all constellation spacecraft are launched at the same time. They can be deployed from the same or different launchers as long as they become operational at the same time. This is the case of missions such as MMS, GRACE, and CYGNSS.

_TAB 2.3.1.2 Phased Deployment_ A phased deployment of a constellation is often employed for very large constellations or for megaconstellations. In this case, individual or groups of spacecraft are launched incrementally by design. This deployment strategy is also used for heterogeneous constellations with spacecraft launched in different orbits or at different altitudes. Examples of such constellations are QB50 or the Planet Labs series of spacecraft. A special case of phased deployment is an _accretionary or incremental deployment by reaction_. This is the case when new spacecraft are placed into specific orbits based on evolutionary mission circumstances. This was the case of the _ad hoc_ DSM A-Train, for which CloudSat, CALIPSO, GCOM-W1, and OCO-2 were added to the A-Train to achieve additional requirements based on the observations made by the first satellites.

_TAB 2.4 Temporal Control_ Just as for Spatial Control, Temporal Control considers both the end result and the means by which the DSM temporal control is obtained.

_TAB 2.4.1 Pre-Determined_ This term characterizes missions for which the measurement acquisition is predetermined, and no specific temporal control is required after launch.

_TAB 2.4.2 Precise Observation Timing_ Precise observation timing is required when the DSM mission goals require measurements to be very precisely intercorrelated; the position and the orientation of each spacecraft and their payloads need to be closely controlled to optimize the measurement acquisition. This is usually something designed as part of the overall mission. CYGNSS is an example of a DSM demonstrating precise observation timing. 
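At this point, most of the organizational and physical-configuration TABs have been introduced, and a mission can be described by picking one value per characteristic, much as the article later does in Table III. The sketch below is one possible encoding; the enum and field names are shorthand introduced here, and the CYGNSS values for appearance and spatial relationship are assumptions (only its all-at-once deployment and precise observation timing are stated explicitly above).

```python
from dataclasses import dataclass
from enum import Enum

class Appearance(Enum):           # TAB 1.1
    HOMOGENEOUS = "homogeneous"
    HETEROGENEOUS = "heterogeneous"
    RECONFIGURABLE = "reconfigurable"

class SpatialRelationship(Enum):  # TAB 2.1
    GENERAL_CONSTELLATION = "general constellation"
    FORMATION = "formation"
    FRACTIONATED = "fractionated"
    CLUSTER = "cluster"
    VIRTUAL_AD_HOC = "virtual / ad hoc"

class Deployment(Enum):           # TAB 2.3.1
    ALL_AT_ONCE = "all at once"
    PHASED = "phased"

class TemporalControl(Enum):      # TAB 2.4
    PRE_DETERMINED = "pre-determined"
    PRECISE_OBSERVATION_TIMING = "precise observation timing"

@dataclass
class DSMRecord:
    name: str
    appearance: Appearance
    spatial_relationship: SpatialRelationship
    deployment: Deployment
    temporal_control: TemporalControl

# Example entry; appearance and spatial relationship are assumptions, the
# other two fields follow the examples given in the text.
cygnss = DSMRecord(
    name="CYGNSS",
    appearance=Appearance.HOMOGENEOUS,                               # assumption
    spatial_relationship=SpatialRelationship.GENERAL_CONSTELLATION,  # assumption
    deployment=Deployment.ALL_AT_ONCE,
    temporal_control=TemporalControl.PRECISE_OBSERVATION_TIMING,
)
print(cygnss)
```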
_TAB 2.4.3 "Flash Mob"_ The "flash mob" concept is also related to intercorrelated measurements but corresponds to a more agile DSM, e.g., a swarm, that reacts in real time to transient or real-time events and phenomena. Heliophysics mission concepts of this type have been proposed, but no actual missions have exhibited this behavior.

_Under TAB 3_, the "Functional Configuration" of DSMs or constellations is being considered. This covers the mechanisms by which specific functionalities are being achieved.

_TAB 3.1 Functional Distribution_ Under Functional Configuration, this first TAB looks at functionality distribution between spacecraft. The two following TABs give some examples of such types of distribution, although these do not represent an exhaustive list of potential configurations.

_TAB 3.1.1 Cooperative Maneuvering_ Missions whose spacecraft have functionalities that are compatible and can be used together to create a virtual DSM.

_TAB 3.1.2 Collaborative Missions_ These are missions that are designed to create coordinated observations. Among those are missions with reconfigurability or targeting capabilities. A special case is missions that can create a "virtual instrument," but also DSMs that react to a transient event or phenomenon.

_TAB 3.2 Autonomy_ The general concept of "Autonomy" has been recently defined by the NASA Autonomous Systems Capability Leadership Team [30], and this is the definition that we will adopt in this article: "Autonomy is the ability of a system to achieve goals while operating independently of external control." Here, a system can refer to either a monolithic or a highly complex distributed system. Autonomy is not equivalent to artificial intelligence (AI), but may use AI to achieve the specified goals; autonomy is also not equivalent to "automation," which is the automatically controlled operation of a system but is not "self-directed." Therefore, a system may be automated without being autonomous, and autonomy may rely on automation for some of the tasks required to achieve its goals. Autonomy involves many functions, including plan validation, planning/scheduling [28], [32], diagnostics, state estimation, onboard processing, and onboard decision making; each of these functions can be performed by humans or by software. "Autonomy" implies a system's capability for realizing "self-governance" and "self-direction," as well as "self-management." Autonomy is self-governing and self-directive in the sense that it requires the delegation of responsibility to the system to meet its prescribed operational goals. The self-management aspect provides for the self-configuring, self-healing, self-optimizing, and self-protecting properties required for a fully autonomous system. As described in [35], a space system may have four levels of mission execution autonomy (according to the ECSS-E-ST-70C standard), spanning from [low] ground-based, preplanned control to [high] goal-oriented, onboard mission replanning. It may have two levels of data management autonomy and two levels of fault detection, isolation, and recovery (FDIR) autonomy as well. In TAB 3.2, we adopted these four basic levels of Autonomy as they relate to DSMs. Many other characteristics could describe the term "autonomy," but they are not limited to the concept of DSM and therefore are not included in this taxonomy.

_TAB 3.2.1 Ground-Based Controlled Mission Execution_ In this case, the DSM execution is entirely performed under ground control, with no onboard autonomy. 
There is real-time control from the ground for nominal operations and it may only include some execution of time-tagged commands for safety issues. _TAB 3.2.2 Onboard Execution of Pre-Planned Mission Goals_ The DSM includes onboard execution of preplanned, ground-defined, mission operations. Again, there is real-time control from the ground for nominal operations and it may only include some execution of time-tagged commands for safety issues. _TAB 3.2.3 Semi-Autonomy_ A semiautonomous DSM represents a combination of system autonomy and ground control. It includes onboard execution of adaptive mission operations, particularly event-based autonomous operations and execution of onboard operations' control procedures. _TAB 3.2.4 Full Autonomy_ In order to achieve and maintain full autonomy (i.e., execution of goal-oriented mission operations onboard including goal-oriented mission replanning), the DSM needs the following enabling properties: it needs to be self-aware of the internal capabilities and state of the managed component; it needs to be self-situated in the sense that it is aware of its environment and context; and, finally, it needs to be able to monitor and adjust itself through the use of such things as sensors, effectors, and control loops. A special case of autonomous DSM is an _intelligent and collaborative constellation (ICC)_: this is a specific type of constellation that uses onboard intelligence to perceive its environment and takes actions that maximizes its chances of success in creating coordinated observations. An ICC can also potentially learn from its experiences. To achieve these capabilities, an ICC involves the combination of real-time data understanding,situational awareness, problem solving, planning and learning from experience, all of them combined with communications, and cooperation between the multiple spacecraft, in order to take full advantage of various sensors distributed on multiple platforms. Table III shows a few of current and past DSMs and their characteristics as identified in Table II. Of course, Table III is not exhaustive; there are many more such missions in the U.S. and in the world, and only a few representative missions have been listed in Table III, mainly from the earth science and communications domains (with a few heliophysics and global positioning missions), to illustrate the definitions that were proposed in this section. ## V Designing Earth Science Missions Using the Distributed Spacecraft Mission Taxonomy As the general characteristics of a DSM have been defined and categorized in the taxonomy defined in Section IV, this section investigates how these concepts can be utilized to design future earth science missions. This design can be informed by other factors, such as those described in two previous taxonomies [7, 26]. In 2001, Barrett [7] characterized distributed missions (in the heliophysics domain) in terms of the phenomena that needed to be observed and of the information that needed to be gathered. They first categorized DSMs by the type of phenomena to be measured, for example, _slow/predictable_ or _fast/intermittent phenomena_, occurring on a microscale (few points) or on a macroscale (many/all points). 
This is equivalent to characterizing the missions in terms of the mission science goals, i.e., the information that needs to be collected, with a signal space (e.g., spatial, angular, and temporal resolution), a symbol space (e.g., spectral and radiometric resolution), and a behavior (e.g., global coverage and revisit times). When dealing with multiple platform missions, an orbit and aggregation taxonomy is required in addition to the information taxonomy. Each orbit corresponds to a different phenomenon location and each different type of aggregation corresponds to a different motivation. Table IV summarizes the considerations from [7] for a selected number of examples, particularly in the heliophysics domain. In 2017, Selva _et al._[26] also provided a taxonomy of \"DSM concepts demonstrated in flight or proposed, based on morphological analysis\"; although the paper does not describe a precise taxonomy and terminology such as the one described in Section IV and Table II, it provides a comprehensive assessment of DSM concepts and technologies and some conclusions about the current barriers to DSM implementation. In particular, it considers the maturity of key enabling technologies: subsystem level technologies, such as high-precision attitude determination and control, high-precision thrusting, high-bandwidth communications, high throughput onboard data processing; as well as some other higher system-level capabilities. These novel technologies and capabilities need to be considered when designing a DSM. Examples of some other specific trades that need to be considered when characterizing distributed missions and that are not included in the previous taxonomy are the manufacturing approach of the multiple spacecraft, the launch options and opportunities, the deployment time, the operational complexity on the ground and on orbit, the cost, the risks associated with development schedule, mission costs, and the on-orbit operations, just to name a few. The sensitivities associated with each DSM characteristic represent the tradeoffs that will need to be considered when designing the distributed mission, which are, for example: * _The mass of each spacecraft_ will be chosen as a function of the manufacturing capabilities, the launch options (and costs), and the size of the sensors. Some mass categories are: \\(<\\)1 kg, \\(<\\)10 kg, \\(<\\)500 kg, \\(<\\)5000 kg, and \\(>\\)5000 kg. The number of spacecraft (e.g., 2-10, 10-50, or \\(>\\)50) will bring trades in terms of manufacturing approaches and ground complexity. * _The spacecraft variability_ corresponds to the spacecraft being all identical/homogeneous or heterogeneous either through different instruments, different buses, or with fractionated spacecraft. * _Launches can be approached_ through multiple launches, hosted payloads, rideshares, or dispensers. * _On-orbit plans_ include mothership and slaves model, swarm, formation, constellation, or _ad hoc_. * _Spacecraft interactions_ can be modeled as independent, ground coordinated, cross communicating, or fractionated. * _The coverage goal_ considers temporal coverage, spatial coverage, repeatable tracks, and redundancy. * _The orbit selection_ is a function of the type of information to collect but also of the launch options that are available. These can be LEO inclined, LEO polar, geosynchronous, or other. 
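Enumerating these trade variables combinatorially, in the spirit of a pre-Phase A trade-space search such as the one TAT-C performs, can be sketched as follows. The option lists mirror the bullets above, while the function names and the single pruning rule are illustrative assumptions rather than TAT-C's actual implementation.

```python
from itertools import product

# Option lists copied from the trade variables enumerated above.
SPACECRAFT_MASS = ["<1 kg", "<10 kg", "<500 kg", "<5000 kg", ">5000 kg"]
NUMBER_OF_SPACECRAFT = ["2-10", "10-50", ">50"]
VARIABILITY = ["homogeneous", "heterogeneous (instruments)",
               "heterogeneous (buses)", "fractionated"]
LAUNCH = ["multiple launches", "hosted payloads", "rideshares", "dispensers"]
ON_ORBIT_PLAN = ["mothership and slaves", "swarm", "formation",
                 "constellation", "ad hoc"]
INTERACTIONS = ["independent", "ground coordinated",
                "cross communicating", "fractionated"]
ORBIT = ["LEO inclined", "LEO polar", "geosynchronous", "other"]

def is_consistent(variability: str, interactions: str) -> bool:
    """Illustrative pruning rule (an assumption, not taken from the article):
    fractionated spacecraft are paired with fractionated interactions."""
    return (variability == "fractionated") == (interactions == "fractionated")

def enumerate_architectures():
    """Yield every consistent combination of the trade variables."""
    for combo in product(SPACECRAFT_MASS, NUMBER_OF_SPACECRAFT, VARIABILITY,
                         LAUNCH, ON_ORBIT_PLAN, INTERACTIONS, ORBIT):
        _, _, variability, _, _, interactions, _ = combo
        if is_consistent(variability, interactions):
            yield combo

if __name__ == "__main__":
    archs = list(enumerate_architectures())
    print(f"{len(archs)} candidate architectures before any science-goal scoring")
```

Scoring each surviving combination against the science metrics listed earlier (spatial, temporal, angular, and radiometric information) is where a tool such as TAT-C adds its value; the enumeration itself is the easy part.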
Based on the mission science goals and the other trade considerations described earlier in this section, the taxonomy defined in Section IV can be utilized to design future earth science missions. Assuming that the mission is either monolithic or distributed and that distributed missions fall into one of four main categories, i.e., constellations, formation flying, fractionated, or _ad hoc_/virtual missions, the following attributes are traded to design the mission: _appearance and functionality_, _spatial relationship of the DSM, interspacecraft relationship and functional configuration, spatial control, temporal deployment, temporal control, autonomy, number of spacecraft, spacecraft mass, launch approach, and launcher approach._ Table V summarizes this design process, with the mission categories shown on the horizontal axis and the mission attributes shown on the vertical axis. Orbital parameters are not considered here to stay general but should be included for specific science domains. Each characteristic may have several values, but its range depends on the mission category. For example, a constellation may have a homogeneous or a heterogeneous appearance and functionality, but a fractionated mission can only be heterogeneous. Similarly, a constellation can be temporally deployed all at once or incrementally (by design or by reaction). To illustrate this design process, Table VI shows specific values for one monolithic mission (Landsat-7) and four reference DSMs, corresponding to the four types of DSM categories: ST-5 for general Constellation (shown in light green), GRACE for Formation Flying (shown in light blue), F6 for Fractionated (shown in light yellow), and the A-Train for Ad hoc/Virtual mission (shown in purple). As described in Section II, the taxonomy defined in Section IV and the design process highlighted in this section have already been used to design TAT-C [25]. TAT-C is a pre-phase A mission design tool that facilitates DSM pre-phase A investigations and optimizes DSM designs with respect to _a priori_ science goals. TAT-C, through a modular architecture including a knowledge base, a cost and risk module, an orbit and coverage module, an instrument module, a launch module, and a carefully designed trade-space search iterator and user interface, enables users to quickly assess, visualize, and validate a very large number of potential DSM constellation architectures in response to input and output science requirements.

## VI Conclusion

This article has presented various concepts related to the design of DSMs, first by introducing the definitions of various terms defining the various characteristics of a DSM, then relating these characteristics to the choices or sensitivities that need to be considered when designing such missions. Based on these considerations, a DSM taxonomy has been proposed; this taxonomy has already been utilized in developing a trade-space tool for designing constellations. Over time and with the development of future DSMs and related capabilities, this taxonomy will be refined. For example, another concept that is important in relation to DSMs is the concept of _SensorWeb_ [33, 34]. Although a SensorWeb does not fit the current DSM taxonomy, it is a concept that will be useful to trade when designing future earth science missions. According to [36], "A SensorWeb is a distributed system of sensing nodes (space, air, or ground) that are interconnected by a communications fabric and they function as a single, highly coordinated, virtual instrument. 
It semi- or -autonomously detects and dynamically reacts to events, measurements, and other information from constituent sensing nodes and from external nodes (e.g., predictive models) by modifying its observing state, so as to optimize mission information return.\" The concept of SensorWeb is now being considered in earth science for new observing strategies (NOS) [37] in which the concepts of DSMs and SensorWebs will be traded to optimize the acquisition of measurements, such as those defined in the 2017-2027 Earth Science Decadal Survey [38]. By extending the DSM concepts to SensorWebs, NOS will take advantage of multisensor nodes producing measurements integrated from multiple vantage points and in multiple dimensions (spatial, spectral, temporal, and radiometric) to provide a unified picture of earth science physical processes or natural phenomena. ## Acknowledgment The authors would like to acknowledge the support of the NASA Goddard Space Flight Center Internal Research and Development and of the ESTO Advanced Information Systems Technology Programs. They would also like to acknowledge all inputs that were provided by a large team of experts during the preparation of the DSM terminology and taxonomy, especially P. Campbell, R. Connerton, P. Dabney, T. Doiron, J. Ferrara, K. Huemmrich, S. Hunter, M. Johnson, G. Karpati, L. Kepko, B. Lakew, D. Leisawitz, F. Lemoine, J. Livas, K. Luu, R. Lyon, D. Mandl, J. Masek, J. Mather, H. Moseley, J. Moses, N. Paschalidis, W. Powell, J. Rodriguez-Ruiz, A. Seas, N. Shah, D. Smith, T. Swanson, K. Thome, S. Tompkins, W. Truszkowski, and W. Wiscombe. ## References * [1] M. R. Schoeberl, D. Paules, D. Andrucyk, and J. L. Duda, \"The earth science vision: An intelligent web of sensors,\" in _Proc. Int. Geosc. Remote Sens. Symp._, vol. 7, 2001, pp. 126-128. * [2] M. S. Seabloom, S. J. Talabac, J. Ardizzone, and J. Terry, \"A sensor web simulator for design of new earth science observing systems,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Boston, MA, USA, 2008, pp. V-298-V-301. * [3] D. Mandl _et al._, \"Sensor Web 2.0: Connecting earth's sensors via the internet,\" in _Proc. Earth Sci. Technol. Conf._, College Park, MD, USA, Jun. 2008. [Online]. Available: [https://etson.nasa.gov/conferences/estc2008/presentations/Mandal1P.pdf](https://etson.nasa.gov/conferences/estc2008/presentations/Mandal1P.pdf), Accessed on: Feb. 7, 2020. * [4] D. Mandl _et al._, \"A space-based sensor web for disaster management,\" in _Proc. Int. Geosci. Remote Sens. Symp._, Boston, MA, USA, 2008, pp. V-294-V-297. * [5] A. Donnellan _et al._, \"Understanding earthquake fault systems using quakesem analysis and data assimilation tools,\" in _Proc. IEEE Aerosp. Conf._, Big Sky, MT, USA, 2009, pp. 1-9. * [6] E. Torres-Martinez, G. Paules, M. Schoeberl, and M. W. Kalb, \"A web of sensors: Enabling the earth science vision,\" _Acta Astronaut._, vol. 53, pp. 423-428, 2003. * [7] A. Barrett, \"Multiple platform mission taxonomy,\" JPL CaTech Internal Presentation, Section 367, Jan. 2001. * [8] O. Brown, \"Application of value-centric design to space architectures: The case of fractionated spacecraft,\" in _Proc. AIAA Space 2008 Conf. Expo._, San Diego, CA, USA, 2008, Paper AIAA-2008-7869. * [9] DARPA Tactical Technology Office, The Phoenix Program, [Online]. Available: [https://www.darpa.mil/news-events/2014-04-02](https://www.darpa.mil/news-events/2014-04-02) * [10] D. Barnhart, L. Hill, M. Turnbull, and P. Will, \"Changing satellite morphology through cellularization,\" in _Proc. 
AIAA SPACE 2012 Conf. Expo._, 2012, Paper AIAA-2012-5262. * [11] A. A. Kerzhner _et al._, \"Architecting cellularized space systems using model-based design exploration,\" in _Proc. AIAA SPACE 2013 Conf. Expo._, 2013, Paper AIAA 2013-5371. * [12] C. Ruf _et al._, \"CYGNSS: Enabling the future of hurricane prediction [remote sensing satellites],\" _IEEE Geosci. Remote Sens. Mag._, vol. 1, no. 2, pp. 52-67, Jun. 2013. * [13] W. J. Blackwell _et al._, \"An overview of the TROPICS NASA earth venture mission,\" _Quart. J. Roy. Meteorol. Soc._, vol. 144, pp. 16-26, Mar. 2018. * [14] D. Masutti _et al._, \"The QB50 mission for the investigation of the mid-lower thermosphere: Preliminary results and lessons learned,\" in _Proc. 15th Int. Planetary Probe Workshop_, Budder, CO, USA, Jun. 2018. [Online]. Available: [https://www.colorado.edu/event/ipw2018/sites/default/files/attached-files/21_](https://www.colorado.edu/event/ipw2018/sites/default/files/attached-files/21_)\"_qb50_ippw_2018_v1.pdf, Accessed on: Feb. 7, 2020. * [15] J. S. Llorente _et al._, \"PROBA-3: Precise formation flying demonstration mission,\" _Acta Astronaut._, vol. 82, no. 1, pp. 38-46, Jan. 2013. * [16] Planet Labs. [Online]. Available: [https://www.planet.com/](https://www.planet.com/) * [17] OneWeb. [Online]. Available: [https://www.oenweb.world/](https://www.oenweb.world/) * [18] Capella Space. [Online]. Available: [https://www.capellaspace.com/](https://www.capellaspace.com/) * [19] R. Sandau, \"Status and trends of small satellite missions for earth observation,\" _Acta Astronaut._ vol. 66, no. 1, pp. 1-12, 2010. * [20] M. N. Sweeting, \"Why satellites are scaling down,\" _Space Technol. Int._, vol. 7, pp. 55-59, 1991. * [21] H. J. Kramer and A. P. Cracknell, \"An overview of small satellites in remote sensing,\" _Int. J. Remote Sens._, vol. 29, no. 15, pp. 4285-437, 2008. * [22] G. Konecny, \"Small satellites-A tool for earth observation?\" in _Proc. XXth ISPRS Congr.-Commission_, 2004, vol. 4, pp. 580-582. * [23] Y. Xue, Y. Li, J. Guang, X. Zhang, and J. Guo, \"Small satellite remote sensing and applications--History, current and future,\" _Int. J. Remote Sens._, vol. 29, no. 15, 2008, pp. 4339-54372. * [24] NASA Space Technology Mission Directorate, \"NASA ARC Solicitation: Small Test Technology Partnerships, NASA Cooperative Agreement Notice, Solicitation Number: NNA13ZUA00IC, 2013. * [25] J. Le Mogione _et al._, \"Tradespace analysis tool for designing constellations (TAT-C),\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Fort Worth, TX, USA, Jul. 2017, pp. 1181-1184. * [26] D. Selva _et al._, \"Distributed earth satellite systems: What is needed to move forward?\" _J. Aerosp. Inf. Syst._, vol. 14, no. 8, Aug. 2017, pp. 412-438. * [27] V. L. Foreman, \"Emergence of second-generation low earth orbit satellite constellations: A prospective technical, economic and policy analysis,\" S.M. thesis, Dept. Aeronaut. Astronaut., Massachusetts Inst. Technol., Cambridge, MA, USA, 2018. * [28] S. Nag _et al._, \"Autonomous scheduling of agile spacecraft constellations with delay tolerant networking for reactive imaging,\" in _Proc. Int. Conf. Autom. Planning Scheduling SPARK Workshop_, Berkeley, CA, USA, Jul. 2019. [Online]. Available: [https://appliedsciences.nasa.gov/system/files/sites/default/files/3_4_Nag_TFRSAC_2019.pdf](https://appliedsciences.nasa.gov/system/files/sites/default/files/3_4_Nag_TFRSAC_2019.pdf), Accessed on: Feb. 7, 2020. * [29] W. J. Larson, J. R. Wertz, and B. 
D'Souza, Eds., _Space Mission Analysis and Design_, vol. 8, 3rd ed. New York, NY, USA: Space Technol. Library, 1999. * [30] T. W. Fong, J. D. Frank, J. M. Badger, I. A. Nesnas, and M. S. Fears, "Autonomous systems taxonomy," NASA Ames Res. Center, Moffett Field, CA, USA, NASA Tech. Rep. ARC-E-DAA-TN56290, May 2018. * [31] J. Starek, B. Acikmese, I. A. Nesnas, and M. Pavone, "Spacecraft autonomy challenges for next-generation space missions," in _Challenges in Aerospace Decision and Control: Air Transportation Systems (Springer Lecture Notes in Control and Information Sciences)_, vol. 460, E. Feron, Ed. Berlin, Germany: Springer, 2016, pp. 1-48. * [32] S. Chien, "Using autonomy flight software to improve science return on earth observing one," _J. Aerosp. Comput., Inf., Commun._, vol. 2, no. 4, pp. 196-216, 2005. * [33] S. Chien _et al._, "An autonomous earth observing SensorWeb," _IEEE Intell. Syst._, vol. 20, no. 3, pp. 16-24, May/Jun. 2005. * [34] D. Mandl _et al._, "Use of the earth observing one (EO-1) satellite for the Namibia sensorweb flood early warning pilot," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 6, no. 2, pp. 298-308, Apr. 2013. * [35] J. Eickhoff, _Onboard Computers, Onboard Software and Satellite Operations: An Introduction_. New York, NY, USA: Springer, 2011, p. 201. * [36] S. J. Talabac, M. S. Seabloom, G. J. Higgins, and B. T. Womack, "A sensor web observing system simulator," Infotech@Aerospace, Arlington, VA, USA, Sep. 2005, Paper AIAA 2005-6945. * [37] J. Le Moigne, M. Little, and M. Cole, "New observing strategy (NOS) for future earth science missions," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Yokohama, Japan, Jul. 2019, pp. 5285-5288. * [38] National Academies of Science, Engineering, and Medicine, "2017-2027 Decadal Survey for Earth Science and Applications from Space." [Online]. Available: [https://nas-sites.org/americastimetechoes/2017-2027-decadal-survey-for-earth-science-and-applications-from-space/](https://nas-sites.org/americastimetechoes/2017-2027-decadal-survey-for-earth-science-and-applications-from-space/). * [39] J. Wertz, "Reinventing space: Dramatically reducing space mission cost," _Space News_, 12 articles from Feb. 4-Apr. 29, 2013. * [40] S. Nag, J. Le Moigne, and O. L. de Weck, "Cost and risk analysis of small satellite constellations for earth observation," _IEEE Aerosp. Conf._, Big Sky, MT, USA, pp. 1-16, Mar. 2014, doi: 10.1109/AERO.2014.6836396.

Jacqueline Le Moigne (Senior Member, IEEE) received a B.S. and an M.S. in Mathematics, and a Ph.D. in Computer Science, all from the University Pierre and Marie Curie, Paris, France. She has been with NASA since 1998, currently Manager of the Advanced Information Systems Technology (AIST) Program of the NASA Earth Science Technology Office (ESTO). Previously, she was Assistant Chief for Technology in the Software Engineering Division at NASA Goddard, also working with the NASA Space Technology Research Grants Program. She has published over 130 journal, conference, and book chapter articles, including more than 20 journal papers; she co-authored an edited book on "Image Registration for Remote Sensing" and holds a patent on this topic. She has been PI of several projects focused on Distributed Spacecraft Missions. Dr. Le Moigne has been Associate Editor for the IEEE Transactions on Geoscience and Remote Sensing and for the journal _Pattern Recognition_. 
She is a NASA Goddard Senior Fellow, was a Program Evaluator for the Accreditation Board in Engineering and Technology and a NATO Science for Peace and Security Committee Panel Member. She has been the recipient of a NASA Exceptional Service Medal and of the Goddard Information Science and Technology Award.

John Carl Adams received a B.S. and an M.S. in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 1984 and 1986, respectively; he then completed a Ph.D. in Aeronautics and Astronautics at Stanford University in 1999. He has 34 years of experience in the design, analysis, and simulation of spacecraft guidance, navigation, and control systems. This includes 2 years working on space station attitude dynamics and control at the Charles Stark Draper Laboratory, 12 years working in the Automation and Robotics and Pointing Control Laboratories at the Lockheed Martin Space Systems, Co., Advanced Technology Center, and 20 years working for NASA, first at the Johnson Space Center, next at the Goddard Space Flight Center, and now at NASA Headquarters. His dissertation research examined the use of GPS attitude determination for spacecraft. His work in the areas of spacecraft GPS and formation flying has been documented in numerous IEEE, AIAA, and ION conference and journal publications, including the paper "Experimental Demonstration of GPS as a Relative Sensor for Formation Flying Spacecraft" in _Navigation_, the journal of the Institute of Navigation. He has previously been involved in the design and development of GPS receivers for HEO spacecraft applications. He served for three years as the Associate Branch Head in the GN&C Component Hardware and Systems Branch, spent four years in the role of Chief Technologist for the Mission Engineering and Systems Analysis Division, and five years as the Branch Head for the Navigation and Mission Design branch at NASA Goddard.

Sreeja Nag (Member, IEEE) received the Ph.D. degree in Space Systems Engineering from the Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA, in 2015. She is a Senior Research Scientist at NASA Goddard Space Flight Center, Greenbelt, MD, USA, and NASA Ames Research Center, Moffett Field, CA, USA (contracted by Bay Area Environmental Research Institute). Her research interests include distributed space systems, automated planning and scheduling of constellations, swarm decision making in space, and space traffic management.
Due to many technical and programmatic changes, distributed spacecraft missions (DSMs) and constellations are becoming more common, in national space agencies as well as in industry and academia. These changes are the result of various driving factors, such as maturing technologies, minimizing costs, and new science requirements. But they are also made possible by the availability of easier and more frequent launches and the capability to handle increased requirements in terms of scalable mission operations and "big" data analytics on the ground and onboard. With the increase in this type of mission and with the need to connect and interrelate all the data that will be generated by these various missions, as well as the data acquired from ground and airborne sensors, there is a need to define more accurately all the terms used in relation to DSMs. This article presents a terminology including various definitions that describe DSMs and related concepts, their organization, physical configuration, and functional configuration, as well as a taxonomy from which DSMs can be designed.

_Index Terms_—Distributed spacecraft mission (DSM), nomenclature, taxonomy, terminology.
# System Design for Geosynchronous Synthetic Aperture Radar Missions

Stephen Hobbs, Cathryn Mitchell, Biagio Forte, Rachel Holley, Boris Snapir, and Philip Whittaker

Manuscript received November 25, 2013; revised January 28, 2014; accepted April 2, 2014. Date of publication May 8, 2014; date of current version June 12, 2014. This work was supported in part by the U.K.'s Centre for Earth Observation Instrumentation and in part by the Engineering and Physical Sciences Research Council under Grant EP/H003304/1. S. Hobbs and B. Snapir are with the Space Research Centre, Cranfield University, Bedford MK43 0AL, U.K. (e-mail: [email protected]). C. Mitchell and B. Forte are with the University of Bath, Bath BA2 7AY, U.K. R. Holley is with NPA Satellite Mapping, Kent TN8 6SR, U.K. P. Whittaker is with Surrey Satellite Technology Ltd., Guildford GU2 7YE, U.K. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TGRS.2014.2318171

## I Introduction

Geosynchronous synthetic aperture radar (GEO SAR) offers significant advantages compared with low-Earth-orbit (LEO) systems. The concept also raises significant technical challenges. This paper provides an overview of mission concepts and identifies the principal system design choices and constraints.

Tomiyasu and Pacelli [1] first discussed a GEO SAR mission. The proposed orbit inclination was 50\\({}^{\\circ}\\) to provide coverage of North and South America, the antenna diameter was 15-30 m, and a mean transmitter RF power of 0.1-1 kW gave a spatial resolution of 100 m. Madsen _et al._ [2] adapted the concept and improved the ground resolution to 10-45 m (varying with position) at the cost of increased power (20 kW electrical), using L-band. Applications included disaster response, tectonic mapping, and soil moisture. Similar studies from the United States include [3] and [4]. All these studies recognize the much improved temporal sampling, which is possible from GEO compared with LEO, and the new measurement opportunities this creates.

In Europe, one of the first published GEO SAR concepts was by Prati _et al._ [5] in 1998. They described a bistatic passive radar reusing L-band broadcast signals. Such a system could achieve 120-m spatial resolution using an antenna with a diameter of 4.8 m. The orbit inclination is small (satellite motion of only 25 km from the geostationary position is assumed). However, a long integration time of up to 8 h is required to form a satisfactory image. Imaging effects of clutter and partially stable targets, as well as measuring the atmospheric phase screen (APS), are noted. Research on other GEO SAR concepts (mainly conventional monostatic) has continued with contributions from Cranfield [6, 7, 8], Milan [9, 10, 11], and Barcelona [12, 13] in particular. These recent studies have made significant contributions in the areas of system design and APS estimation/phase compensation. For the low inclination orbits and modest antenna sizes, which these authors have assumed, integration times are relatively long, and thus, atmospheric phase compensation is needed. There has been particular interest again in applications for short repeat period interferometry related to geohazards.

A third and very active GEO SAR research community exists in China. The main concepts discussed relate to systems using high-inclination orbits with large antennas and high power to achieve fine resolution.
These systems provide excellent coverage of continental areas, such as the Chinese mainland. Particular attention has been given to methods of adapting frequency-domain focusing algorithms to cope with the curved trajectories typical of GEO SAR, e.g., [14, 15, 16, 17]. Other topics studied include aspects of system design [18] and atmospheric perturbations [19]. Reference [20] described two indicative mission concepts currently being evaluated, with inclinations of 16\\({}^{\\circ}\\) and 53\\({}^{\\circ}\\).

GEO implies longer integration times \\(t_{\\rm int}\\) than for LEO. The atmosphere may change significantly during \\(t_{\\rm int}\\), affecting the phase of the received signals. SAR depends on accurate phase compensation, and thus, an important classification of mission types is into: a) those that require atmospheric phase compensation to achieve their design spatial resolution; and b) those that achieve full spatial resolution without phase compensation. The U.S. and Chinese missions tend to fall into the second group, and the European ones into the first. This bifurcation of concepts is discussed below.

We focus here on the engineering design of monostatic concepts. Bi- and multistatic concepts are also under consideration [11]. Much of the system design is common to all types or can be extended in obvious ways. We do not discuss polarimetry, but note that the BIOMASS P-band mission, which is likely to suffer more severe Faraday polarization rotations than any GEO SAR concept so far considered, expects to provide useful polarimetric data.

The aims of the initial system design outlined here are to assess the feasibility of a mission concept and to identify the main technical challenges. System design is iterative: later iterations add the realism needed to improve the design, starting with the most significant challenges. It is more important that the initial system design be complete than that it incorporates comprehensive detail from the outset.

The paper has two main sections. Section II reviews the main physical constraints on GEO SAR system design. Section III proposes an outline system design methodology, which addresses these constraints and identifies feasible sets of system parameters. Example designs are shown, and we briefly discuss the information available for estimating the APS. A short discussion closes the paper.

## II Mission Constraints

Before discussing system design, it is important to understand the relevant constraints. The factors discussed here are:

* atmosphere;
* orbit;
* SAR image focusing;
* signal averaging in time and space;
* radar performance.

### _Atmosphere_

Refractive index fluctuations in the atmosphere affect the signal phase. This is mainly due to changes in the ionospheric electron content and the tropospheric humidity. System design is an iterative process, and thus, initial models of atmospheric perturbations are usually simple. For this initial design, we start with simple, even simplistic, representations of atmospheric perturbations. As the system design develops, increasingly sophisticated models are used to assess system performance in a wider range of conditions and to resolve design challenges. Useful overviews of the effects of the atmosphere on SAR imaging from space are provided by [21, 22, 23].

#### II-A1 Ionosphere

Ionization of Earth's atmosphere (from heights of 50 km to over 500 km) by short wavelength solar radiation changes its refractive index enough to affect radio propagation.
Changes in the level of this ionization in space and time affect radar imaging from Earth orbit. Ionization is measured in terms of the total electron content (TEC), expressed generally as a column density, i.e., the number of free electrons per unit area of the Earth's surface for a vertical column to the "top" of the atmosphere. The column density is usually expressed in TEC units (TECUs), i.e., units of \\(10^{16}\\) electrons per square meter. Ionospheric plasma density and its variability increase near the peaks of the 11-year solar cycle. The ionosphere has a regular diurnal pattern of behavior, driven by the Sun, in addition to which it varies with space and time on a wide range of scales. The most active areas are near the equator from sunset to midnight and at high latitudes. In mid-latitudes, where much GEO SAR imaging is likely to be done, some of the most important features are traveling ionospheric disturbances (TIDs). Reference [24] reported observations of medium-scale TIDs over Europe, where occurrence is below 15% for most of the year, but in winter around midday (UT), the rate can reach 70%; there are also peaks up to 45% during nighttime. Reference [25] reported that typical TID amplitudes are 0.2-1 TECU (peak-to-peak, solar minimum) to 1-2 TECU (solar maximum). Large-scale TIDs are much rarer, although amplitudes well over 10 TECU are sometimes seen. Periods range from 0.5 to 3 h, with a typical value of 1.5 h. Other observations of medium-scale TIDs [26] give velocities of 150-250 m \\(\\cdot\\) s\\({}^{-1}\\) and wavelengths of 100-300 km. Although published values differ, representative TID speeds, amplitudes, and wavelengths are given in Table I. A simple approach is to model these as waves propagating from the poles to the equator: this is used here. TEC values can be converted to an equivalent range error using (1) [27, p. 211], with \\(K=-40.28\\) m\\({}^{3}\\cdot\\) s\\({}^{-2}\\). Increasing TEC reduces the path phase delay, and the process is dispersive, i.e., the effect varies with frequency. Thus \\[\\delta=\\frac{K}{f^{2}}\\,\\text{TEC}. \\tag{1}\\]

#### II-A2 Troposphere

Most of the mass of Earth's atmosphere is in the troposphere (the lowest layer of the atmosphere, from the ground to 8-14 km). The troposphere's components are relatively stable except for the amount of water (which is mainly present as vapor). The variable water content causes fluctuations in refractive index that affect radio waves. The total vertical path delay due to water varies geographically, is up to 0.8 m, and is independent of frequency [28, p. 524]. Weather and turbulence on a wide range of spatial and temporal scales cause fluctuations in the delay. The most demanding conditions for radar imaging are rapid changes over short length scales, usually associated with severe weather. Some representative values of fluctuations that have typically caused problems for radar interferometry at mid-latitudes are given in Table II.

### _Orbit_

For our purposes, GEO orbits have the same period as Earth's rotation (rather than some other multiple of the period). This means that the semi-major axis \\(a\\) is 42 164 km. Other orbit parameters that can be chosen are inclination \\(i\\), eccentricity \\(e\\), right ascension of the ascending node (RAAN) \\(\\Omega\\), and argument of perigee \\(\\omega\\). The GEO region is regulated by the International Telecommunications Union (ITU) because of its commercial value [29].
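Before turning to the orbit constraints in detail, a quick numeric check of the ionospheric delay relation (1) above is sketched below; the radar bands and the 1-TECU perturbation used are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of Eq. (1): equivalent range error from ionospheric TEC.
# The band/frequency choices below are illustrative assumptions.
K = -40.28          # m^3 s^-2 (constant quoted with Eq. (1))
TECU = 1.0e16       # electrons per m^2 in one TEC unit

def range_error_m(tec_tecu, freq_hz):
    """One-way equivalent range error delta = K * TEC / f^2 (metres)."""
    return K * tec_tecu * TECU / freq_hz ** 2

for band, f_hz in [("L-band 1.25 GHz", 1.25e9),
                   ("C-band 6 GHz", 6.0e9),
                   ("X-band 10 GHz", 10.0e9)]:
    print(f"{band}: {range_error_m(1.0, f_hz) * 100:+.2f} cm per TECU")
```

A 1-TECU disturbance therefore perturbs the path by tens of centimetres at L-band but only millimetres at X-band, which is why the ionospheric regions of the later design charts matter most at long wavelengths.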
Communication satellites are allocated specific bands in the radio spectrum together with an orbit location specified by its longitude around the equatorial ring. The satellite is required to station-keep within a tolerance of \\(\\pm 0.1^{\\circ}\\) (\\(\\pm\\)73.6 km) in longitude. Limits on eccentricity and inclination are not currently specified, but both are usually close to zero for operational communication satellites so that displacements from the nominal position are only a few times 10 km at most. Regular station-keeping maneuvers are necessary to counteract perturbations: these are typically done a few times a month. Some GEO SAR concepts assume orbits, which significantly exceed the standard ITU allocation. It is often possible to make appropriate changes to eccentricity so that the satellite does not cross too close to the GEO ring (within about 200 km of the geostationary height at the equator). For SAR, motion is needed to synthesize the aperture. Synthetic apertures compatible with the current ITU guidelines can therefore have a maximum size of around 100 km. The orbit inclination and eccentricity and their relative phasing (i.e., \\(e\\), \\(i\\), \\(\\Omega\\), \\(\\omega\\)) can be chosen to create various shapes and sizes of relative orbit. A convenient model for these small displacement orbits about a nominal geostationary point is defined by the Hill's equations [30, p. 393]. Expressions for orbit speed relative to Earth for circular GEO orbits with inclination \\(i\\) at equator crossing and the north or south extremes can be written in terms of the inertial orbit velocity \\(v_{G}=3075\\) m \\(\\cdot\\) s\\({}^{-1}\\). Thus \\[v= \\,2v_{G}\\sin i/2\\hskip 28.452756pt(\\text{equator crossing}) \\tag{2}\\] \\[v= \\,v_{G}(1-\\cos i)\\hskip 28.452756pt(\\text{N and S extremes}). \\tag{3}\\] An orbit only slightly displaced from geostationary with a relative orbit diameter of \\(d\\) has a maximum azimuthal speed of \\(v=\\pi d/T_{\\text{day}}\\) (\\(T_{\\text{day}}\\) is one sidereal day).

#### II-B1 Maneuvers

A further practical constraint on satellite orbits is that "large" maneuvers are expensive: satellites do not significantly change orbit once their initial orbit is established. The cost is quantified in terms of the velocity change \\(\\Delta V\\) required for the maneuver since this directly relates to the change in orbit and can be converted to required propellant mass simply. In the GEO region, maneuvers equivalent to about 50 m \\(\\cdot\\) s\\({}^{-1}\\) are needed each year to counteract perturbations [31, p. 138] (which are primarily due to the gravity fields of the Sun and Moon); over a typical comsat lifetime of 15 years, this amounts to 750 m \\(\\cdot\\) s\\({}^{-1}\\), which is a significant cost to the mission. Modern satellites increasingly use low-thrust electric propulsion for station-keeping because of its mass efficiency. High-inclination orbits will also be subject to orbit perturbations and will require an appropriate propulsion system and fuel load. However, some moderate inclination orbits (\\(i\\simeq 7.5^{\\circ}\\)) are quasi-stable [32, p. 219] and require much less orbit maintenance. These orbits offer interesting possibilities for long lifetime GEO SAR missions.

### _SAR Image Focusing_

SAR image focusing is the process of forming the radar image from the raw signal time series that contains responses for targets at all azimuth positions within a given range gate.
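Returning briefly to the orbit-speed relations above, the short sketch below evaluates (2), (3), and the small-orbit speed \\(v=\\pi d/T_{\\text{day}}\\); the inclinations and the 50-km relative-orbit diameter are illustrative choices (the 50-km case is the one used for Figs. 4 and 5 later in the paper).

```python
# Minimal sketch of Eqs. (2)-(3) and the small-relative-orbit speed v = pi*d/T_day.
# The inclinations and orbit diameter below are illustrative choices.
import math

V_G = 3075.0              # inertial GEO orbital speed, m/s (value quoted in the text)
T_SIDEREAL = 86164.1      # one sidereal day, s

def v_equator_crossing(incl_deg):
    """Eq. (2): Earth-relative speed at the equator crossing."""
    return 2.0 * V_G * math.sin(math.radians(incl_deg) / 2.0)

def v_ns_extreme(incl_deg):
    """Eq. (3): Earth-relative speed at the north/south extremes of the track."""
    return V_G * (1.0 - math.cos(math.radians(incl_deg)))

def v_max_small_orbit(diameter_m):
    """Maximum azimuthal speed of a small relative orbit of diameter d."""
    return math.pi * diameter_m / T_SIDEREAL

for i in (7.5, 60.0):
    print(f"i = {i:4.1f} deg: {v_equator_crossing(i):6.0f} m/s at crossing, "
          f"{v_ns_extreme(i):6.0f} m/s at N/S extremes")
print(f"d = 50 km relative orbit: {v_max_small_orbit(50e3):.2f} m/s max azimuthal speed")
```

The spread between the crossing and extreme values for a given inclination is why the effective azimuthal speed, and hence the achievable resolution, varies so strongly around the orbit.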
Signals for a particular azimuth position have a unique phase history: the image focusing process allocates the response for targets with this specific phase history to the (complex) backscatter value for that particular azimuth position. Standard SAR focusing algorithms assume that targets are static and that the atmosphere above them does not change. However, coherent changes in signal phase during signal integration (e.g., due to target motion in the slant range direction) result in image artifacts. A LEO SAR example is the along-track displacement of moving targets, such as ships. For GEO SAR, the effects are more pronounced because of the increased range. Motion of individual scatterer results in azimuth shifts as for LEO SAR, but in addition, phase changes common to a group of pixels can cause an appreciable azimuth shift of that part of the image in GEO SAR (perhaps due to atmospheric changes). #### Iii-C1 Target Motion, Clutter Fig. 1 shows the geometry used to derive the azimuth shift due to target motion (based on Rees [33, p. 305]). The satellite crosses the \\(Oxz\\) plane at \\(t=0\\) moving parallel to the \\(y\\)-axis. At \\(t=0\\), the target is at the origin with velocity u. For the broadside geometry assumed, a static target's \\(y\\) position is the satellite position when the Doppler shift is zero; this condition also gives the apparent position of a moving target. The zero Doppler condition is \\(\\mathbf{r}^{\\prime}\\cdot\\mathbf{v}^{\\prime}=0\\), where Fig. 1: Satellite and target geometry for calculating apparent azimuth shift due to target motion. At \\(t=0\\), the target is at the origin and the satellite at \\(\\mathbf{r}_{\\pm 0}\\). \\(\\mathbf{r}^{\\prime}\\) and \\(\\mathbf{v}^{\\prime}\\) are the relative position and velocity of satellite and target. Thus \\[\\mathbf{r}^{\\prime} = (\\mathbf{r}_{s0}+\\mathbf{v}t)-\\mathbf{u}t,\\quad\\mathbf{v}^{\\prime} =\\mathbf{v}-\\mathbf{u}\\] \\[\\mathbf{r}^{\\prime}\\cdot\\mathbf{v}^{\\prime} = (\\mathbf{r}_{s0}+(\\mathbf{v}-\\mathbf{u})t)\\cdot(\\mathbf{v}- \\mathbf{u}). \\tag{4}\\] Hence, the apparent target azimuth offset \\(\\delta y\\) to first order (noting \\(\\mathbf{r}_{s0}\\cdot\\mathbf{v}=0\\)) is its position at time \\(t_{0}\\) given by \\[0 = (\\mathbf{r}_{s0}+(\\mathbf{v}-\\mathbf{u})t_{0})\\cdot(\\mathbf{v}- \\mathbf{u})\\] \\[t_{0} = -\\frac{\\mathbf{r}_{s0}\\cdot(\\mathbf{v}-\\mathbf{u})}{(\\mathbf{v} -\\mathbf{u})\\cdot(\\mathbf{v}-\\mathbf{u})}=\\frac{\\mathbf{r}_{s0}\\cdot\\mathbf{u }}{|\\mathbf{v}-\\mathbf{u}|^{2}}\\] \\[\\delta y = vt_{0}=v\\frac{\\mathbf{r}_{s0}\\cdot\\mathbf{u}}{|\\mathbf{v}- \\mathbf{u}|^{2}}. \\tag{5}\\] Equation (5) gives much larger azimuth offsets for GEO SAR than for LEO SAR because the slant range is larger and the relative velocity may be far smaller. Moving targets and clutter can therefore have large apparent azimuth displacements. The following are the two qualifications that apply. 1. The motion should be coherent for the full integration time, which may be several minutes or longer. 2. Pulse compression and azimuth presumming can filter out returns above a critical speed \\(v_{c}\\), which may be very low [11]. The minimum pulse-repetition frequency (PRF) is set to avoid azimuth ambiguities. If the actual PRF is greater than this, then presumming can be used to filter out high Doppler frequencies due to clutter and thus reduce the image degradation due to clutter. This is discussed in more detail by [11, 34]. 
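A small numeric sketch of Eq. (5) follows, comparing the apparent azimuth offset of a slowly moving target as seen from LEO and from GEO; the slant ranges, platform speeds, and target speed are illustrative assumptions only.

```python
# Minimal sketch of Eq. (5): apparent azimuth offset of a moving target,
# delta_y = v * (r_s0 . u) / |v - u|^2, for the broadside geometry of Fig. 1.
# All numerical values below are illustrative assumptions.
import numpy as np

def azimuth_offset(r_s0, v_sat, u_tgt):
    """Apparent azimuth shift (m) of a target with velocity u_tgt."""
    r_s0, v_sat, u_tgt = map(np.asarray, (r_s0, v_sat, u_tgt))
    rel = v_sat - u_tgt
    return np.linalg.norm(v_sat) * np.dot(r_s0, u_tgt) / np.dot(rel, rel)

u = np.array([1e-4, 0.0, 0.0])     # 0.1 mm/s motion along the slant direction

# LEO-like case: ~850 km slant range, ~7.5 km/s platform speed (assumed).
leo = azimuth_offset([850e3, 0.0, 0.0], [0.0, 7500.0, 0.0], u)
# GEO-like case: ~38,000 km slant range, ~10 m/s azimuthal speed (assumed).
geo = azimuth_offset([38e6, 0.0, 0.0], [0.0, 10.0, 0.0], u)

print(f"LEO: {leo * 1e3:.1f} mm   GEO: {geo:.0f} m")
```

With these assumed numbers, a 0.1 mm \\(\\cdot\\) s\\({}^{-1}\\) drift produces a shift of about a centimetre in the LEO case but several hundred metres in the GEO case, which is the effect described at the start of the next paragraph.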
Slow steady motion during image formation can still give appreciable azimuth shifts (e.g., 0.1 mm \\(\\cdot\\) s\\({}^{-1}\\) can lead to shifts of several \\(\\times\\) 100 m). Such motion might be due to thermal expansion of buildings or other structures. #### Iii-B2 Temporal Change in Refractive Index Changes of the refractive index along the slant path from the radar to the target can cause image artifacts or defocusing. The change may be due to temporal or spatial variation of refractive index: for GEO SAR, the temporal changes become important. The rate of change of phase at the intersection between the slant path from radar to target and the phase screen is due to the temporal change of the phase screen at that point \\((\\partial\\phi/\\partial t)\\) plus the scalar product between the APS spatial gradient and the intersection point velocity. This ensures, for example, that if the intersection point moves at the advection velocity of a \"frozen\" phase screen then no phase change occurs. If \\(\\mathbf{v}_{\\mathbf{i}}\\) is the velocity of the intersection point, the total rate of change \\(d\\phi/dt\\) is \\[\\frac{d\\phi}{dt}=\\frac{\\partial\\phi}{\\partial t}+\\mathbf{v}_{\\mathbf{i}}\\cdot \ abla\\phi. \\tag{6}\\] Assuming a simple sinusoidal phase disturbance (7), the fractional rate of change of phase is given by (8). Typical values of these terms for LEO and GEO are shown in Table 3. In LEO, the high satellite velocity means that spatial variation \\((\\mathbf{v}_{\\mathbf{i}}\\cdot\ abla\\phi)\\) of refractive index is important. However, in GEO satellites tend to have lower speeds and then the temporal variation \\((\\partial\\phi/\\partial t)\\) dominates. Thus \\[\\phi = \\phi_{1}e^{i(\\mathbf{k}\\cdot\\mathbf{r}-\\omega t)} \\tag{7}\\] \\[\\frac{1}{\\phi}\\frac{d\\phi}{dt} = -i\\omega+i\\mathbf{v}_{\\mathbf{i}}\\cdot\\mathbf{k}. \\tag{8}\\] The phase rate causes an azimuth shift. Appendix -A shows that this shift \\(\\delta y\\) depends on wavelength, azimuthal velocity, slant range and rate of phase change. Using (20), the shift can be expressed in terms of the azimuth resolution \\(\\Delta y\\) (10) \\[\\delta y = \\frac{r\\dot{\\phi}\\lambda}{2\\pi v\\cos\\theta(\\mathbf{e}_{2}\\cdot \\mathbf{e}_{a})} \\tag{9}\\] \\[= \\frac{\\Delta yt_{\\mathrm{int}}\\dot{\\phi}}{\\pi\\cos\\theta}. \\tag{10}\\] Azimuth shifts in SAR images due to atmospheric perturbations have been previously reported by several authors over the last 50 years (e.g., [19, 21, 22, 23]). However, the shift has not been explicitly related to the phase rate, nor used to measure phase rate from azimuth displacement. A suitable image sequence from GEO SAR provides an opportunity to make this measurement of \\(\\overline{\\phi}\\). It should be possible to track both strong point targets (giving \\(\\overline{\\phi}\\) at pixel scale) and image features (giving \\(\\overline{\\phi}\\) at the scale of a group of pixels), depending on the image properties and the scale of atmospheric perturbations. Since in some circumstances, the azimuth shift is several times the azimuth resolution, it should be easily measurable. This azimuth shift may be significant for GEO SAR since it allows the phase screen to be estimated without needing persistent scatterers. 
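The sketch below evaluates the azimuth shift of Eq. (9) for assumed LEO-like and GEO-like geometries; the phase rate, slant ranges, speeds, and incidence angle are illustrative assumptions, and the unit-vector factor \\(\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a}\\) is simply set to 1.

```python
# Minimal sketch of Eq. (9): azimuth shift caused by a linear APS phase rate,
# delta_y = r * phidot * lambda / (2*pi * v * cos(theta) * (e2.ea)).
# All parameter values are illustrative assumptions; e2.ea is taken as 1.
import math

def aps_azimuth_shift(r, phidot, wavelength, v, theta_deg, e2_dot_ea=1.0):
    return (r * phidot * wavelength
            / (2 * math.pi * v * math.cos(math.radians(theta_deg)) * e2_dot_ea))

PHIDOT = 6e-4        # rad/s, a representative tropospheric phase rate (see next paragraph)
WAVELENGTH = 0.05    # C-band, m
THETA = 30.0         # incidence angle, deg (assumed)

leo = aps_azimuth_shift(r=850e3, phidot=PHIDOT, wavelength=WAVELENGTH, v=7500.0, theta_deg=THETA)
geo = aps_azimuth_shift(r=38e6, phidot=PHIDOT, wavelength=WAVELENGTH, v=1.8, theta_deg=THETA)
print(f"LEO: {leo * 1e3:.2f} mm   GEO: {geo:.0f} m")
```

Under these assumptions the LEO shift is sub-millimetre while the GEO shift is of order 100 m, i.e., many resolution cells, which is why tracking the shift offers a way to estimate the APS rate of change.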
To estimate typical magnitudes, the phase rate can be taken to be due to a change of \\(\\delta z=10\\) mm of one-way zenith optical path length due to tropospheric humidity over \\(l=20\\) km horizontally, this pattern being advected over the target at \\(w=10\\) m \\(\\cdot\\) s\\({}^{-1}\\). This gives a phase rate at C-band \\((\\lambda=5\\) cm) of \\(\\dot{\\phi}=2\\pi w\\delta z/(l\\lambda)=6\\times 10^{-4}\\) rad \\(\\cdot\\) s\\({}^{-1}\\) approximately. Table 4 shows the estimated azimuth shifts due to this phase rate in LEO and in GEO. The shift in LEO is negligible but for GEO SAR, it becomes appreciable (and therefore allows the APS rate of change to be measured in principle). Results from a GEO SAR simulator are consistent with this model of the azimuth shift due to the APS temporal change [8]. ### _Spatial and Temporal Averaging_ SAR imaging inherently averages in space and time. Spatial averaging is within the point target response, and temporal averaging is during the integration period. Quantifying the effects of temporal averaging is important to understand, which temporal changes, particularly clutter and the APS, may affect image focusing. The APS at any point changes with time. Linear phase changes within the integration time cause azimuth shifts but do not otherwise (to first order) corrupt the image. However, deviations from linearity cause loss of focus. The effects of the nonlinear phase change on image quality need to be quantified to ensure that the system design does not exceed acceptable limits. We model temporal phase screen changes using sinusoidal components. The sinusoid is analyzed as a linear best fit \\((\\hat{m}t+\\hat{c})\\) plus nonlinear deviations \\(\\delta\\phi(t)\\) from this (11). It is assumed that the focusing algorithm will displace the target according to the linear component and that the nonlinear deviation from the linear change causes loss of focus. The loss of focus is quantified by the amplitude reduction of the phasor integral (\\(y\\)), (12)). Thus \\[\\Delta\\phi(t)=a\\sin(\\omega t+\\psi_{0})=(\\hat{m}t+\\hat{c})+\\delta\\phi(t) \\tag{11}\\] \\[y(\\psi_{0},t_{1},\\omega\\Delta t,a)=\\frac{1}{\\Delta t}\\int\\limits_{t_{1}-\\Delta t /2}^{t_{1}+\\Delta t/2}\\exp\\left(i\\delta\\phi(t)\\right)dt. \\tag{12}\\] In the limit of small sinusoid amplitudes \\(a\\) and time intervals \\(\\Delta t\\) (expressed as phase interval \\(\\omega\\Delta t\\)) \\(y\\) is 1. As \\(a\\) and \\(\\omega\\Delta t\\) increase, \\(y\\) decreases. There is modest dependence on the initial phase offsets \\(\\psi_{0}\\) and \\(t_{1}\\), thus \\(y()\\) has been numerically evaluated for all initial phases to give gain \\(\\overline{y}(\\omega\\Delta t,a)\\): the lowest gain values over all phase offsets are plotted in Fig. 2, where \\(a\\) is the screen amplitude, and \\(\\omega\\Delta t\\) is the phase interval. For system design, it is useful to quantify the limits within which temporal averaging can be ignored. We choose the contour \\(\\overline{y}=0.95\\) (contours for \\(\\overline{y}=0.9\\) or \\(0.8\\), for example, might also have been chosen; note that this is the gain for amplitude, not intensity). 
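The gain integral (12) can be evaluated numerically as in the sketch below; the implementation details (least-squares removal of the linear component, a grid over phase offsets) are assumptions rather than the exact procedure behind Fig. 2.

```python
# Minimal numerical sketch of Eq. (12): residual phasor amplitude after the
# linear part of a sinusoidal phase perturbation has been removed.
# Grid sizes and the least-squares detrend are assumed implementation details.
import numpy as np

def gain(a, omega_dt, psi0, centre, n=2001):
    """|y| for amplitude a (rad), phase interval omega_dt (rad), and offsets psi0, centre."""
    theta = np.linspace(centre - omega_dt / 2, centre + omega_dt / 2, n)
    phi = a * np.sin(theta + psi0)
    dphi = phi - np.polyval(np.polyfit(theta, phi, 1), theta)   # remove linear best fit
    return np.abs(np.mean(np.exp(1j * dphi)))

def worst_gain(a, omega_dt, n_offsets=24):
    """Lowest gain over initial phase offsets, as plotted in Fig. 2."""
    offs = np.linspace(0.0, 2 * np.pi, n_offsets, endpoint=False)
    return min(gain(a, omega_dt, p, c) for p in offs for c in offs)

# For long phase intervals the 0.95 contour sits near a ~ 0.45 rad (cf. Eq. (13)).
print(f"a = 0.45 rad, omega*dt = 4*pi: worst gain = {worst_gain(0.45, 4 * np.pi):.3f}")
```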
This can be approximated (see Appendix B) by the fitting functions \\[a= a_{95}\\;(a_{95}=0.45\\text{ rad})\\qquad\\text{for }\\omega\\Delta t\\geq\\psi_{0:95} \\tag{13}\\] \\[\\omega\\Delta t= \\frac{c_{95}}{\\sqrt{a}}\\;\\left(c_{95}=2.9\\text{ rad}^{\\frac{3}{2}}\\right)\\qquad\\text{for }a\\geq a_{95}\\] \\[\\psi_{0:95}= \\frac{c_{95}}{\\sqrt{a_{95}}}\\simeq 4.32\\text{ rad}. \\tag{14}\\] If \\(a\\) or \\(\\omega\\Delta t\\) are smaller than these values, then the gain is greater than 0.95, and temporal averaging does not cause significant degradation. The phase amplitude for two-way propagation at incidence angle \\(\\theta\\) is estimated using (15) and (16), where \\(\\delta_{i}\\) and \\(\\delta_{t}\\) are the vertical delay amplitudes (mean to peak) due to the ionosphere and troposphere, respectively. The ionospheric phase amplitude increases with wavelength, whereas the tropospheric phase amplitude decreases \\[\\phi_{i}=\\frac{4\\pi}{\\lambda\\cos\\theta}\\,\\delta_{i}=\\frac{4\\pi K\\text{TEC}\\lambda}{c^{2}\\cos\\theta} \\tag{15}\\] \\[\\phi_{t}=\\frac{4\\pi}{\\lambda\\cos\\theta}\\,\\delta_{t}. \\tag{16}\\] Equations (14)-(16) and (20) are used to give the change in azimuth resolution \\(\\Delta y\\) with integration time \\(t_{\\rm int}\\) along the contour of averaging gain (for \\(a\\geq a_{95}\\)). Equations (17) and (18) give these expressions for ionospheric and tropospheric perturbations, respectively (by substituting for the wavelength using (20)). Thus \\[t_{\\rm int}^{\\frac{3}{2}} =\\frac{c_{95}c}{\\omega}\\sqrt{\\frac{r\\cos\\theta}{8\\pi K\\text{TEC}\\,v\\Delta y}} \\tag{17}\\] \\[t_{\\rm int} =\\left(\\frac{c_{95}}{\\omega}\\right)^{2}\\,\\frac{\\Delta y\\,v\\cos\\theta}{2\\pi r\\delta_{t}}. \\tag{18}\\]

Fig. 2: Signal attenuation due to nonlinearity of APS time variation: contours are of worst integration gain over all phase offsets [\\(a\\) is the screen amplitude and \\(\\omega\\Delta t\\) is the phase interval of (12)].

### _Radar Design Constraints_

Radar system design is complex because so many parameters are interrelated. For initial system design, however, three main constraints should be accounted for.

#### II-E1 Spatial Resolution

Spatial resolution is a primary user requirement. The natural radar coordinates are range and azimuth. Range resolution is determined by the bandwidth \\(\\Delta f\\) of the transmitted pulse. For a conventional monostatic configuration (transmitter and receiver in the same place), slant range resolution \\(\\Delta r\\) is equal to half the pulse length \\(c\\tau/2=c/(2\\Delta f)\\) because the radiation travels out and back. \\(\\Delta r\\) projected on Earth's surface (incidence angle \\(\\theta\\)) gives the across-track resolution \\(\\Delta x\\) \\[\\Delta x=\\frac{\\Delta r}{\\sin\\theta}=\\frac{c\\tau}{2\\sin\\theta}=\\frac{c}{2\\Delta f\\sin\\theta}. \\tag{19}\\] Azimuth resolution is determined by the aperture size parallel to Earth's surface and perpendicular to the range direction. For SAR, the effective aperture is synthesized by moving a real aperture during the signal integration time; full resolution is achieved through numerical processing of the received signals. The angular resolution for an aperture of length \\(d\\) at wavelength \\(\\lambda\\) is \\(\\delta\\alpha=\\lambda/d\\) if the radiation passes once through the aperture.
For a monostatic radar, the radiation passes out and back through the same aperture, and the angular resolution improves to \\(\\delta\\alpha=\\lambda/2d\\) (many texts ignore or fudge the extra factor \\(1/2\\); [23] includes it correctly). For GEO SAR imaging, spotlight mode may be used and the synthesized antenna length is the integral of satellite velocity relative to Earth in the azimuth direction \\(\\int vdt\\) (or velocity multiplied by integration time \\(t_{\\rm int}\\) for short periods; this may be less than the full beamwidth). The azimuth resolution is \\(r\\) multiplied by angular resolution [ignoring orbit curvature, (20)] \\[\\Delta y=r\\delta\\alpha=\\frac{r\\lambda}{2vt_{\\rm int}}. \\tag{20}\\] Choosing spatial resolution thus implies constraints on transmitted bandwidth, slant range, wavelength, integration time, and satellite velocity. It is important to note that (azimuthal) satellite velocity changes during the orbit. It typically sinusoidally varies and thus falls to zero at the extremes of the motion. This degrades azimuthal resolution from that possibility when speed is higher, and for motion over a significant portion of the 24-h period, the sinusoidal variation should be accounted for. Fig. 3 assumes sinusoidal azimuthal motion and shows how the azimuthal resolution degrades relative to the best value achievable for a given integration time as a function of the end time of the image acquisition. For example, \\(t_{\\rm int}=0.2\\) h ending at 4 h has resolution 20% worse than if it were to end near 6 h, whereas if \\(t_{\\rm int}=3\\) h ending at 2 h, azimuth resolution is about five times worse. Best resolution is achieved when the integration time is centered on 6 h, since speed is highest then. Integration periods, which include times of very low speed, are of little use since resolution is badly degraded. This daily variation (1 sidereal day) has significant operational implications. The variation in azimuthal velocity also affects signal-to-noise ratio (SNR) \\(S\\) and imaging ambiguities: in general, a lower speed allows more time for signal integration and thus improves \\(S\\) and reduces azimuth ambiguities. #### Iii-B2 SNR A fundamental radar requirement concerns image quality. This is conventionally described by the \\(\\mbox{SNR}=S\\) achieved for a given spatial resolution. Equation (21) shows how \\(S\\) depends on other system parameters (effective mean transmitted RF power \\(P_{t}f_{t}\\), spatial resolution \\(l\\) assumed equal in range and azimuth, surface backscatter \\(\\sigma^{0}\\) and incidence angle \\(\\theta\\), antenna area \\(A\\), receiver noise factor \\(F_{n}\\) and surface temperature \\(T_{s}\\); \\(k\\) is Boltzmann's constant). The equation can be derived from equation 11 of [5] (apart from the factor \\(\\cos\\theta\\), accounting for the local incidence angle) and assumes coherent integration of signals during \\(t_{\\rm int}\\). Pulse compression is parameterized by the duty cycle factor \\(f_{t}\\). The equation ignores RF signal losses and therefore should be interpreted to give the _effective_ transmitter power \\(P_{t,{\\rm eff}}\\), where the _actual_ RF power needed \\(P_{t,{\\rm act}}=P_{t,{\\rm eff}}/\\eta\\) and \\(\\eta\\) is the RF efficiency factor \\[S=\\frac{P_{t}f_{t}t_{\\rm int}l^{2}\\sigma^{0}A^{2}\\cos\\theta}{4\\pi\\lambda^{2}r ^{4}F_{n}kT_{s}}. \\tag{21}\\] Equations (21) and (23)-(25) should be interpreted with caution. 
They assume that range and azimuth resolution are equal: in practice, this may not be the case. An appropriate choice of \\(S\\) and \\(l\\) requires careful evaluation of the system requirements and of the APS compensation method. For high-resolution backscatter images, the optimal design will emphasize spatial resolution and accept a low \\(S\\) since the backscatter image quality can be improved with multilooking. If the user requires high-quality phase information (e.g., for interferometry) then high \\(S\\) is needed that tends to compromise spatial resolution. APS compensation brings additional requirements and is an area of active research. Several APS estimation methods have been suggested: good spatial resolution and signal quality help all of them, but optimal solutions have not yet been clearly identified. Good relevant work in this area is provided by [9, 10, 12, 13, 36]. Since APS compensation may start with coarse resolution, short \\(t_{\\rm int}\\) images during which the atmosphere is assumed quasi-static, the azimuth resolution may be severely degraded relative to the final image. However, the product of integration time and azimuth resolution is determined by velocity and does not change significantly between the coarse and fine images: \\(S\\) therefore does not degrade for the coarse resolution images, and in fact can be improved by averaging pixels in the range direction to equalize range and azimuth resolution in the coarse images. #### Iii-B3 Image Ambiguities--Antenna Size Range and azimuth ambiguities occur if the radar pulses transmitted are too frequent or too sparse. To derive the limits, we assume a Fig. 3: Contours of azimuth resolution degradation factor \\(f\\) as a function of integration time \\(t_{\\rm int}\\) and end time of the image \\(t_{\\rm end}\\) during the orbit assuming sinusoidal motion (the satellite is at the limits of the azimuth motion at 0 and 12 h). For a given integration time \\(\\Delta y(t_{\\rm int},t_{\\rm end})=f\\Delta y_{0}(t_{\\rm int})\\), where \\(\\Delta y_{0}(t_{\\rm int})\\) is the best resolution that can be achieved for that integration time. rectangular aperture \\(d_{1}\\times d_{2}\\), with \\(d_{1}\\) the dimension across-track and \\(d_{2}\\) along-track. To avoid range ambiguities (only one pulse's return from the illuminated area received at any moment) the maximum pulse-repetition frequency \\(n_{\\mathrm{PRF}}\\) is \\(cd_{1}/(4R\\lambda\\tan\\theta)\\). To avoid azimuth ambiguities (ambiguous directions must lie outside the illuminated footprint) requires a minimum \\(n_{\\mathrm{PRF}}\\) of \\(2v/d_{2}\\). The requirement that the minimum value must be less than the maximum defines a minimum for the product \\(d_{1}d_{2}\\), i.e., a minimum antenna area \\(A_{\\mathrm{min}}\\). This antenna size ensures that imaging ambiguities fall outside the antenna footprint and therefore can be ignored. In some cases, this requirement is excessive, e.g., if the beam footprint exceeds the Earth disk, and then, a smaller antenna can be used. Thus \\[\\frac{2v}{d_{2}}<n_{\\mathrm{PRF}}<\\frac{cd_{1}}{4r\\lambda\\tan\\theta},\\quad A_{ \\mathrm{min}}=\\frac{8vr\\lambda\\tan\\theta}{c}. \\tag{22}\\] If an area larger than \\(A_{\\mathrm{min}}\\) is used, then there is some freedom to choose \\(n_{\\mathrm{PRF}}\\), and azimuth presumming can be used to reduce the data rate. The antenna size depends on (azimuthal) velocity \\(v\\). 
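The ambiguity constraint (22) can be checked with the short sketch below, which returns the PRF window and minimum antenna area for an assumed geometry; the antenna dimensions, speed, slant range, band, and incidence angle are illustrative assumptions only.

```python
# Minimal sketch of Eq. (22): PRF window and minimum antenna area that keep
# range and azimuth ambiguities outside the illuminated footprint.
# All numerical inputs are illustrative assumptions.
import math

C = 299_792_458.0   # m/s

def prf_window(d1, d2, v, r, wavelength, theta_deg):
    """(min PRF, max PRF) in Hz for an antenna d1 (across-track) x d2 (along-track)."""
    prf_min = 2.0 * v / d2
    prf_max = C * d1 / (4.0 * r * wavelength * math.tan(math.radians(theta_deg)))
    return prf_min, prf_max

def min_antenna_area(v, r, wavelength, theta_deg):
    """A_min = 8 * v * r * lambda * tan(theta) / c."""
    return 8.0 * v * r * wavelength * math.tan(math.radians(theta_deg)) / C

# Assumed GEO case: 100 m/s azimuthal speed, 38,000 km slant range, L-band, 30 deg.
v, r, lam, theta = 100.0, 38e6, 0.24, 30.0
print(f"A_min = {min_antenna_area(v, r, lam, theta):.0f} m^2")
lo, hi = prf_window(6.0, 6.0, v, r, lam, theta)
print(f"PRF window for a 6 m x 6 m antenna: {lo:.0f}-{hi:.0f} Hz")
```

With these assumed numbers, a 36 m\\({}^{2}\\) aperture comfortably exceeds \\(A_{\\mathrm{min}}\\approx 14\\) m\\({}^{2}\\) and leaves a usable PRF window, illustrating the freedom for presumming mentioned above. Note that \\(A_{\\mathrm{min}}\\) scales directly with the azimuthal velocity \\(v\\).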
Since this varies during an orbit, the required antenna size is a function of orbit position in principle: system design must generally accept the worst case sizing. High-inclination orbits can result in speeds over 1 km s\\({}^{-1}\\), which require very large antennas; this can be ameliorated using squint imaging to reduce the azimuthal velocity component. Many other factors also affect SAR system design, but those listed here quantify the primary requirements. ## III System Design Process The aim of system design is to identify a set of parameters, which define a feasible GEO SAR system meeting given requirements. For GEO orbits, the key parameters, which the designer can choose, are \\(v\\) (the azimuthal velocity component, i.e., the choice of orbit), wavelength \\(\\lambda\\), spatial resolution \\(l\\), and integration time \\(t_{\\mathrm{int}}\\) [these are themselves interrelated, (20)]. The principal requirement is usually spatial resolution, although wavelength and integration time may be also important. Equation (20) relates integration time \\(t_{\\mathrm{int}}\\), spatial resolution \\(l\\), wavelength \\(\\lambda\\) and (azimuthal) orbit speed \\(v\\). Equation (21) can be therefore rewritten to give antenna area \\(A\\) in terms of any three of these parameters (23)-(25). As above, the equations assume equal resolution in range and azimuth and are in terms of _effective_ rather than _actual_ transmitted RF power. Thus \\[A^{2} = \\frac{4\\pi r^{4}SF_{n}kT_{s}}{P_{t}f_{t}\\sigma^{0}\\cos\\theta} \\cdot\\frac{\\lambda^{2}}{t_{\\mathrm{int}}l^{2}} \\tag{23}\\] \\[= \\frac{16\\pi r^{2}SF_{n}kT_{s}}{P_{t}f_{t}\\sigma^{0}\\cos\\theta} \\cdot v^{2}t_{\\mathrm{int}}\\] (24) \\[= \\frac{8\\pi rSF_{n}kT_{s}}{P_{t}f_{t}\\sigma^{0}\\cos\\theta}\\cdot \\frac{\\lambda v}{l}. \\tag{25}\\] Equations (23)-(25) show how antenna size scales with system parameters, such as \\(v\\), \\(P_{t}f_{t}\\), \\(\\lambda\\), resolution \\(l\\), and integration time. In particular, the required diameter is proportional to \\((P_{t}f_{t})^{-1/4}\\), to \\((l/\\lambda)^{-1/4}\\) and to \\(v^{1/4}\\) (these parameters then determine \\(t_{\\mathrm{int}}\\)). Thus, increasing mean transmitted power by a factor of 10 reduces the required antenna diameter to 56% of its original size. A two-step process for initial system design is presented here. The first step considers the tradeoff between wavelength, integration time, and spatial resolution for a given orbit (Fig. 4). This step addresses the orbit, atmospheric perturbation, and averaging constraints. The second step then calculates the antenna size needed for a given mean transmitter power and integration time, which ensures the SNR and antenna area constraints are satisfied. The first step is illustrated in Fig. 4. The significant atmospheric length and timescales are plotted in Fig. 4(a). The dark shading shows the scales defined in Tables I and II. Choosing an orbit defines the (maximum) azimuthal velocity component: for a 50-km diameter relative orbit, this is \\(1.8\\cdot\\mathrm{s}^{-1}\\) (as used for Figs. 4 and 5). Once the speed is defined, the spatial resolution as a function of integration time for a given wavelength is known (20). Fig. 4(b) adds this information. For initial design, the figure can be redrawn for various values of velocity to represent different points on the orbit--a more sophisticated dynamic model should be used for later design stages. 
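Equations (20) and (24) can be combined into a quick sizing estimate, sketched below. The fixed parameters (noise factor, backscatter, system temperature, incidence angle, slant range) are stand-ins for the paper's Table VI, which is not reproduced here, so the absolute numbers are indicative only.

```python
# Minimal sketch of Eqs. (20) and (24): integration time for a requested azimuth
# resolution, and the antenna area/diameter needed for a target SNR.
# Parameter values marked as assumed stand in for the paper's Table VI.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K

def integration_time(r, wavelength, v, delta_y):
    """Eq. (20) rearranged: t_int = r * lambda / (2 * v * delta_y)."""
    return r * wavelength / (2.0 * v * delta_y)

def antenna_diameter(r, v, t_int, snr_db, p_eff,
                     sigma0=0.01, f_n=2.0, t_s=290.0, theta_deg=30.0):
    """Eq. (24): A^2 = 16*pi*r^2*S*F_n*k*T_s / (P_t*f_t*sigma0*cos(theta)) * v^2 * t_int.
    p_eff is the effective mean transmitted RF power P_t*f_t; sigma0, f_n, t_s,
    theta_deg are assumed values."""
    s = 10.0 ** (snr_db / 10.0)
    a_sq = (16.0 * math.pi * r**2 * s * f_n * K_B * t_s
            / (p_eff * sigma0 * math.cos(math.radians(theta_deg)))) * v**2 * t_int
    area = math.sqrt(a_sq)
    return 2.0 * math.sqrt(area / math.pi)        # diameter of a circular aperture

# Assumed case: C-band, 100-km relative orbit (v ~ 3.6 m/s), 100-m resolution,
# 38,000-km slant range, 1-kW effective mean power, 20-dB SNR.
r, lam, v, dy = 38e6, 0.05, 3.6, 100.0
t_int = integration_time(r, lam, v, dy)
d = antenna_diameter(r, v, t_int, snr_db=20.0, p_eff=1000.0)
print(f"t_int ~ {t_int / 60:.0f} min, antenna diameter ~ {d:.1f} m")
```

With these stand-in parameters the result is an aperture of a few metres and an integration time of roughly three quarters of an hour, the same order as the worked C-band example later in the paper (which uses the Table VI values and arrives at about 45 min and 5.5 m).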
Once the orbit is chosen (defining azimuthal speed), the length and timescales that effectively plot has coordinates of \\(t_{\\mathrm{int}}\\) and wavelength \\(\\lambda\\). The wavelength determines the perturbation phase amplitude [e.g., 5-mm zenith path variation in the troposphere corresponds to 1.26 rad for a two-way vertical path with \\(\\lambda=5\\) cm, (16)]. The averaging constraint functions that approximate the gain contour (14) can be therefore mapped onto the length and timescale plot, see Fig. 4(c). Table V shows the perturbation cases used and the wavelengths beyond which the perturbations can be ignored (longer wavelengths for tropospheric perturbations, shorter ones for the ionosphere). Shading indicates regions where averaging gain is 0.95 or less (blue for ionospheric perturbations, green for the troposphere). Two depths of shading are used for each: the ionospheric conditions represent medium and large-scale TIDs. Large-scale TIDs are rare but restrict integration times significantly. Medium-scale TIDs are more frequent and less restrictive. Two scales of tropospheric disturbance are represented: the most difficult imaging conditions are due to short wavelength structures. In Fig. 4(c), system designs that do not need atmospheric phase compensation for focusing are in the unshaded region. Fig. 5 shows the SNR and antenna area constraints. Equation (21) is rewritten to give antenna area, and thus, diameter of a circular antenna, as a function of transmitted power, integration time and azimuthal speed [(24), nonvarying parameter values are given in Table VI]. A high value of SNR is assumed (20 dB) since accurate backscatter phase measurements are wanted for the APS retrievals (20 dB in power corresponds to SNR \\(=10\\) for the electric field phasors, i.e., a phase error \\(\\simeq\\)0.1 rad). If the system were designed primarily to create backscatter images then a better design solution would be to reduce the SNR, perhaps as low as a few dB, and to use the extra capability to achieve finer spatial resolution. Multilooking then provides images with good spatial resolution and reduced noise (the uncertainties due to speckle and measurement noise are more balanced using this approach). Antenna diameter is plotted in Fig. 5(a) [the same function applies for all wavelengths, (24)]. Fig. 5(b) adds lines showing the minimum antenna size [which depends on speed and wavelength, (22)] for frequencies between 0.75 and 24 GHz (as in Fig. 5(b)). Fig. 4: Development of the length and timescale plot summarizing system design options for a given orbit (relative diameter 50 km, \\(v_{\\rm max}=1.8\\) m s\\({}^{-1}\\)). (a) Atmospheric perturbation length and timescales. (b) Atmospheric perturbations and azimuth resolution for the chosen orbit and frequencies of 0.75, 1.5, 3, 6, 12, and 24 GHz. (c) Atmospheric perturbations, azimuth resolution, and averaging constraints (images formed using \\(t_{\\rm int}\\) from unshaded regions do not need atmospheric phase correction). Fig. 5: Antenna diameter as a function of integration time for an effective mean transmitted power of 500 W, \\(S=15\\) dB and constant orbit speed of 1.8 m s\\({}^{-1}\\). (a) Antenna diameter as a function of integration time. (b) Antenna diameter as a function of integration time with shading showing limits of minimum antenna size for \\(n=0.75,1.5,3,6,12,\\) and \\(24\\) GHz (shaded, and using the same line styles as Fig. 4). Fig. 4). 
Longer wavelengths need larger antennas: the shading indicates diameters smaller than that needed at 0.75 GHz, i.e., the most demanding case. It seems anomalous that a longer integration time requires a larger antenna; however, the increase in \\(t_{\\rm int}\\) implies improved spatial resolution, and it is this that drives the increase in size. An example alternative presentation of the antenna sizing is given in Fig. 6. This accounts for the sinusoidal orbit motion and shows the tradeoff between transmitter power and orbit (for low inclination orbits) for a given spatial resolution and frequency (which are often set by user requirements), and shows the antenna size and integration time required. In this case, orbit diameters below 77 km do not create a synthetic aperture large enough to give 50-m spatial resolution with \\(\\lambda=0.2\\) m, and so, no solutions are shown. (It is assumed that the integration time is chosen optimally, cf. Fig. 3.) These or similar diagrams can be used to identify feasible system designs. In particular, they identify systems that can achieve the desired azimuth resolution without needing atmospheric phase corrections to focus the image (i.e., in the unshaded region). Better resolution is possible, but only with phase compensation (the shading indicates which of the corrections--ionosphere and/or troposphere--are needed). Figs. 4 and 5 assume constant velocity. Over short periods, this is reasonable, but for \\(t_{\\rm int}\\) of several hours or near the extremes of the orbital motion it becomes important to account for the varying velocity. Fig. 7 shows results for three orbits (100-km relative diameter, azimuthal speed 3.6 m \\(\\cdot\\) s\\({}^{-1}\\); 7.5\\({}^{\\circ}\\) inclination, azimuthal speed \\(\\sim\\)\\(100\\) m \\(\\cdot\\) s\\({}^{-1}\\); 60\\({}^{\\circ}\\) inclination, azimuthal speed \\(\\sim\\)\\(1500\\) m \\(\\cdot\\) s\\({}^{-1}\\); the effective azimuthal speed can be controlled to an extent using squint imaging). As the orbit speed increases, the azimuth resolution achieved for a given integration times improves. The perturbations depend on wavelength: long wavelengths are most affected by ionospheric perturbations, tropospheric humidity affects short wavelengths most. As the orbit speed increases, the required antenna sizes increase. For low speeds, the antenna is easily sized to avoid imaging ambiguities; however, as the speed increases this constraint becomes more demanding. It is important to note that the validity of the results depends on the accuracy of the input assumptions (e.g., the scales of the significant atmospheric perturbations). This system design framework encompasses all the main GEO SAR concepts using near circular orbits; examples include: * High-inclination orbits: azimuthal speed is high, thus fine resolution is possible, but this requires a large antenna and high power. * Low inclination orbits: long integration times are needed to achieve fine resolution (and therefore, atmospheric phase corrections are needed); systems are feasible with modest power and antenna area. Fig. 8 summarizes the key system design decision of whether or not atmospheric phase compensation will be needed to achieve the final desired spatial resolution. The threshold of 2-3 min is approximate (although consistent with other estimates, e.g., [9]); forming an image quickly enough to avoid the need for phase compensation tends to require large antennas and high power. 
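The sketch below applies the averaging limit (14) with the phase amplitudes (15) and (16) to estimate how long an image can be integrated before atmospheric phase compensation becomes necessary; the TID and humidity parameters are representative values of the kind listed in Tables I and II, not the exact table entries, and the incidence angle and band are assumed.

```python
# Minimal sketch of the averaging limit (14) applied with the phase amplitudes
# (15)-(16): the longest integration time for which no atmospheric phase
# compensation is needed (gain >= 0.95). Perturbation parameters are assumed,
# representative values only, not the entries of Tables I and II.
import math

C = 299_792_458.0
K = 40.28             # |K|, m^3 s^-2
A95, C95 = 0.45, 2.9  # rad, rad^(3/2)

def iono_phase_amp(d_tec_tecu, wavelength, theta_deg):
    """Eq. (15) evaluated for a TEC variation amplitude (two-way phase, rad)."""
    return (4 * math.pi * K * d_tec_tecu * 1e16 * wavelength
            / (C**2 * math.cos(math.radians(theta_deg))))

def tropo_phase_amp(delta_t_m, wavelength, theta_deg):
    """Eq. (16) evaluated for a zenith delay amplitude (two-way phase, rad)."""
    return 4 * math.pi * delta_t_m / (wavelength * math.cos(math.radians(theta_deg)))

def t_int_limit(amp_rad, period_s):
    """Eq. (14): integration time below which the averaging gain stays above 0.95."""
    if amp_rad <= A95:
        return float("inf")            # perturbation too weak to degrade the gain
    return C95 / (2 * math.pi / period_s * math.sqrt(amp_rad))

wl, theta = 0.05, 30.0                 # C-band, 30-deg incidence (assumed)
tid = t_int_limit(iono_phase_amp(0.5, wl, theta), period_s=1.5 * 3600)   # medium-scale TID
wx = t_int_limit(tropo_phase_amp(0.010, wl, theta), period_s=200.0)      # severe weather
print(f"medium-scale TID : t_int < {tid / 60:.0f} min")
print(f"severe weather   : t_int < {wx / 60:.1f} min")
```

Under these assumed conditions the severe-weather case limits the uncompensated C-band integration time to about a minute, of the same order as the 2-3-min threshold quoted above, whereas a medium-scale TID alone would allow tens of minutes.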
### _Example System Designs_ An example outline system design uses Fig. 7(a) and (b). To achieve 100-m spatial resolution at C-band (\\(f=6\\) GHz, dash-dot line) using a GEO SAR with relative orbit diameter 100 km, an integration time of about 45 min is needed [Fig. 7(a)]. This will require phase correction for both ionospheric and tropospheric perturbations. Every minute, the system can form an unperturbed image (with resolution of 4 km): the atmospheric phase corrections should be ideally derived from this time series. Fig. 7(b) shows that for \\(t_{\\rm int}=45\\) min an antenna diameter of around 5.5 m will be required (\\(P_{t}f_{t}=1\\) kW), this is well above the minimum aperture diameter. Thus most of the key system parameters have been defined, and a design is achieved that satisfies all the main constraints. The advantage of a graphical method of the outline system design as proposed here is that the designer can see easily whether design parameters are close to constraints or not. Further design iterations will use increasingly detailed quantitative methods. ### _Atmospheric Phase Corrections_ Atmospheric phase corrections or measurements are an important aspect of GEO SAR design and applications. Phase correction is needed if a sequence of coarse resolution images is used to estimate atmospheric phase so that the fine azimuth resolution image can be focused. In principle, the atmospheric phase is measurable in two ways. * \\(\\Delta\\phi\\): The phase due to the atmosphere (averaged over \\(t_{\\rm int}\\)) adds to the backscatter phase: changes in this should therefore be directly measurable for suitable targets. * \\(\\overline{\\phi}\\): Linear rates of change of atmospheric phase will cause an azimuth shift, which itself is measurable. Targets must remain stable, at least for the coarse resolution integration time, for \\(\\Delta\\phi\\) and \\(\\dot{\\phi}\\) to be measurable: unstable targets contribute to clutter. Since the atmospheric phase Fig. 6: Antenna diameter (meters, solid lines) and integration time (minutes, dashed lines) as a function of (low inclination) orbit diameter and effective transmitter power (\\(\\lambda=0.2\\) m, 50-m resolution, 15-dB SNR; allowing for sinusoidal variation in azimuthal speed). represents physical processes that can be modeled, data assimilation is an appropriate method for phase estimation. Two cases are likely to be encountered: i) targets that remain coherent throughout the integration time required to achieve fine azimuth resolution; or ii) incoherent targets. The sequence of coarse resolution images of a natural surface, even a static one, will, in general, not be coherent with each other since they are formed using nonoverlapping segments of the satellite orbit. Areas that remain coherent are therefore likely to be ones dominated by a single persistent Fig. 7: System design charts for three candidate orbits illustrating the impact of atmospheric perturbations and the antenna sizing for a given mean effective power and SNR (using constant orbit speed approximation). (a) Resolution versus integration time (100-km orbit). (b) Antenna sizing (100-km orbit). (c) Resolution versus integration time (7.5\\({}^{\\circ}\\) orbit). (d) Antenna sizing (7.5\\({}^{\\circ}\\) orbit). (e) Resolution versus integration time (60\\({}^{\\circ}\\) orbit). (f) Antenna sizing (60\\({}^{\\circ}\\) orbit). scatterer (at the coarse resolution). 
The other way, in which the coarse images can be coherent with each other, is for the images to be taken using the same orbit segment: this requires a delay of 1 day or a constellation of satellites (perhaps with non-Keplerian orbits). Unstable surfaces represent an important fraction of many scenes. These might be water surfaces or dense vegetation. At long wavelengths, even quite dense vegetation may be sufficiently stable (particularly in favorable weather conditions). Atmospheric structures strong enough to affect image focusing are typically kilometer or more in size, and thus, only a few stable areas every few km may be sufficient to estimate the APS adequately. The following comments discuss the two target types and how the estimated APS can be used to form the fine resolution image. #### Iii-B1 Coherent Targets If a dominant point target remains coherent through the fine resolution integration time, then phase and phase rate can be measured almost at pixel scale. If there is an azimuth offset between this target and the reference position assumed for SAR image focusing, then a phase due to this offset has to be corrected for. #### Iii-B2 Incoherent Scenes For natural surfaces, the phase change will not be directly measurable since images in the sequence are not coherent. However, the phase rate will be sometimes measurable by tracking the azimuth shift of recognizable features in the image. The azimuth shift is most apparent for systems with low azimuthal velocity. #### Iii-B3 Using the Atmospheric Phase Correction The atmospheric phase correction required is a function of (2-D) space and time--similar to the real atmosphere: \\(\\phi(\\mathbf{r},t)\\). This can be easily used by time-domain SAR focusing algorithms. It is less clear how it will be used in frequency-domain algorithms. The APS information is significant information in its own right, and may well be one of the primary products from a GEO SAR. ### _Further Design Iterations_ This paper describes only an initial system design method. Further iterations should be used to test significant assumptions and to examine system design features that pose important challenges for mission feasibility. Areas for further study are likely to include: * more realistic atmospheric perturbation scenarios; * clutter effects representative of the surfaces to be imaged; * initial quantification of the data handling architecture, e.g., radar PRF selection, data bandwidths, opportunity for onboard presumming of the raw data; * orbit control and tracking; * initial system sizing (particularly mass and power budgets). ## IV Discussion and Conclusion An initial system design method for GEO radar imaging has been proposed. The method accounts for important system design constraints, and is a general framework that includes all the principal GEO SAR concepts under discussion and all radar wavebands. GEO SAR applications include both surface monitoring, e.g., ground motion and geohazards, and atmosphere (ionospheric electrons and tropospheric moisture). The system design presented here focuses on engineering constraints: user applications have not been discussed in detail but will be a major factor in any complete system design. Example design solutions are suggested here to illustrate the design method. The solutions depend on the assumed atmospheric properties, as well as other system parameters, and thus should be reviewed based on a range of likely atmospheric conditions for the region and applications of interest. 
GEO SAR is versatile in terms of operations, since viewing can be directed anywhere within the field of view at any time. However, imaging performance is best when the azimuthal motion is large. Periods around the two times each day when the azimuthal component is near zero are less useful for imaging. Ionospheric disturbances cycle over a solar day, whereas the orbit repeats on a sidereal day. A GEO orbit will be therefore favorably aligned for imaging a particular region at different solar times through the year. Given the range of potential applications, their differing needs for temporal coverage and resolution, and varying atmospheric constraints, it will be a significant operational challenge to develop the imaging schedule. Several areas of further work are suggested by this paper. Some of the most important for system design are to extend the range of atmospheric perturbations included (e.g., to include ionospheric scintillations [35]) and to quantify the impact of actual surface properties on imaging. In addition, it is important to assess potential applications that might justify investment in a GEO SAR mission. Finally, a development roadmap is required. This may include technology demonstrators and should mitigate technology risks early on. Studies so far suggest that GEO SAR has great potential. It could provide radically new data products with temporal resolution, which single LEO satellites cannot match. Its ability to measure ground properties and dynamic atmospheric structure simultaneously is unrivalled, and the GEO viewpoint enables highly versatile imaging modes. It cannot provide complete coverage (water surfaces and other unstable targets are not measurable), however its potential contribution to the global EO system--including complementing LEO SAR--is significant. Fig. 8: GEO SAR imaging overview: whether atmospheric phase compensation is needed determines system design options. ## Appendix A Azimuth Shift Due to APS Change The azimuth shift due to changes in atmospheric path delay is derived here. It is in principle equivalent to the azimuth shift due to motion of the target in the slant direction. Assume the rate of change of the phase screen (One-way, zenith) over the target is \\(\\dot{\\phi}\\) during integration time \\(t\\). At wavelength \\(\\lambda\\), this phase change can be converted to an equivalent change in (one-way) optical path length \\(\\delta l\\) [(26), allowing for the local incidence angle \\(\\theta\\)]. This extra path increases steadily during integration time \\(t\\), and has the same effect as a slight rotation of the satellite trajectory by an angle \\(\\alpha=\\delta l/vt\\) (Fig. 9) in the plane of the satellite velocity and slant range. Since SAR focusing assumes the actual trajectory and not the rotated trajectory, points appear to be displaced in azimuth opposite to the satellite velocity by a distance \\(\\delta y\\), which subtends the angle \\(\\alpha\\). The geometrical factor \\(\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a}\\) (derived below) accounts for velocity in general not being parallel to the range gate. Thus \\[\\delta l =\\dot{\\phi}t\\frac{\\lambda}{2\\pi}\\frac{1}{\\cos\\theta} \\tag{26}\\] \\[\\delta y =-\\frac{r\\alpha}{\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a}}=-\\frac{r \\delta l}{vt(\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a})}=\\frac{r\\dot{\\phi}\\lambda}{2 \\pi v\\cos\\theta(\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a})}. 
\\tag{27}\\] The geometry of the azimuth shift due to changes in the APS is defined by three vectors: * the velocity vector (unit vector \\(\\mathbf{e}_{v}\\)); * the slant range vector from target to satellite (unit vector \\(\\mathbf{e}_{r}\\)); * the vector normal to the target plane (unit vector \\(\\mathbf{e}_{N}\\)). The rotation due to the changing phase delay occurs in the velocity-range plane; the azimuth offset occurs in the target plane. Vectors defining orthogonal coordinate directions in either the velocity-range or target planes are: * unit vector \\(\\mathbf{e}_{a}\\) in the velocity-range plane, normal to \\(\\mathbf{e}_{r}\\); * unit vector \\(\\mathbf{e}_{1}\\) in the target plane, parallel to the projection of \\(\\mathbf{e}_{r}\\) onto the target plane; * unit vector \\(\\mathbf{e}_{2}\\) in the target plane, perpendicular to \\(\\mathbf{e}_{1}\\) and parallel to the azimuth direction in the range gates. Fig. 10 shows the geometry assumed. These vectors are defined using the following relationships. \\[\\mathbf{e}_{a} =a\\left[\\mathbf{e}_{v}-\\mathbf{e}_{r}(\\mathbf{e}_{v}\\cdot \\mathbf{e}_{r})\\right],\\quad a=\\frac{1}{\\sqrt{1-(\\mathbf{e}_{v}\\cdot\\mathbf{e }_{r})^{2}}}\\] \\[\\mathbf{e}_{1} =b\\left[\\mathbf{e}_{r}-\\mathbf{e}_{N}(\\mathbf{e}_{r}\\cdot \\mathbf{e}_{N})\\right],\\quad b=\\frac{1}{\\sqrt{1-(\\mathbf{e}_{r}\\cdot\\mathbf{e }_{N})^{2}}}\\] \\[\\mathbf{e}_{2} =\\mathbf{e}_{1}\\times\\mathbf{e}_{N}.\\] The azimuth offset within the range gate is such that when projected onto \\(\\mathbf{e}_{a}\\) it has magnitude \\(r\\alpha\\) (slant range multiplied by the rotation angle). \\[r\\alpha=-\\delta y\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a},\\qquad\\delta y=-\\frac{r \\alpha}{\\mathbf{e}_{2}\\cdot\\mathbf{e}_{a}}. \\tag{28}\\] ## Appendix B Averaging Gain Limit The limit for small phase intervals \\(\\omega\\Delta t\\) can be approximated using a Taylor expansion around \\(t_{0}\\) ignoring terms higher than second order. Linear time dependence causes an azimuth shift and thus is ignored as a source of defocusing (we write \\(\\theta=\\omega\\delta t\\) for the phase interval variable, and \\(\\Delta\\theta=\\omega\\Delta t\\)). \\[\\phi(t)= a_{0}\\sin(\\omega t+\\psi_{0}) \\tag{29}\\] \\[=\\phi(t_{0})+\\dot{\\phi}(t_{0})(t-t_{0})+\\ddot{\\phi}(t_{0})\\frac{ (t-t_{0})^{2}}{2}+\\ldots\\] \\[\\phi_{1}(t_{0}+\\delta t)= \\phi(t_{0})+\\ddot{\\phi}(t_{0})\\frac{\\delta t^{2}}{2}=\\phi(t_{0}) \\left(1-\\frac{\\theta^{2}}{2}\\right). \\tag{30}\\] The average value of \\((\\theta^{2}/2)\\) over the interval \\((-\\Delta\\theta/2)\\), \\(\\Delta\\theta/2)\\) represents a phase offset that can be subtracted so that Fig. 10: Geometry of APS influence on azimuth shift. Fig. 9: Azimuth shift due to phase screen change during aperture synthesis. (a) Phase change during aperture synthesis, equivalent to a rotation of the trajectory. (b) Azimuth shift due to effective rotation of satellite trajectory. the remainder has zero mean \\[\\overline{\\theta^{2}}=\\frac{1}{\\Delta\\theta}\\int\\limits_{-\\Delta\\theta/2}^{\\frac{ \\Delta\\theta}{2}}\\theta^{2}\\,d\\theta=\\frac{\\Delta\\theta^{2}}{12}. 
\\tag{31}\\] To second order, the nonlinear part of the phase perturbation due to a sinusoidal phase variation can be written as a constant part \\(\\phi_{1a}\\) and a variable part \\(\\phi_{1b}\\) with zero mean \\[\\phi_{1}(t_{0}+\\delta t)= \\,\\phi(t_{0})\\left(1-\\frac{\\theta^{2}}{2}\\right)=\\phi_{1a}+\\phi_{1b}\\] \\[\\phi_{1b}= \\,\\phi(t_{0})\\left(\\frac{\\Delta\\theta^{2}}{24}-\\frac{\\theta^{2}}{2}\\right). \\tag{32}\\] The variable part causes loss of phasor amplitude, which can be quantified (the constant part is only a phase offset) \\[g = \\frac{1}{\\Delta\\theta}\\int\\limits_{-\\Delta\\theta/2}^{\\frac{\\Delta\\theta}{2}}e^{i(\\phi_{1a}+\\phi_{1b})}\\,d\\theta\\] \\[= \\,e^{i\\phi_{1a}}\\times\\frac{1}{\\Delta\\theta}\\int\\limits_{-\\Delta\\theta/2}^{\\frac{\\Delta\\theta}{2}}e^{i\\phi_{1b}}\\,d\\theta=g_{1}\\times g_{2}\\] \\[g_{2} = \\frac{1}{\\Delta\\theta}\\int\\limits_{-\\Delta\\theta/2}^{\\frac{\\Delta\\theta}{2}}e^{i\\phi_{1b}}\\,d\\theta. \\tag{34}\\] By symmetry the imaginary part of \\(g_{2}\\) is zero, and thus only the real part is required. For small \\(x=a_{0}(\\Delta\\theta^{2}/12)\\), \\(\\cos x\\) can be usefully expanded as \\((1-x^{2}/2)\\) \\[g_{2} = \\frac{1}{\\Delta\\theta}\\int\\limits_{-\\Delta\\theta/2}^{\\frac{\\Delta\\theta}{2}}\\cos\\left(\\phi(t_{0})\\left[\\frac{\\Delta\\theta^{2}}{24}-\\frac{\\theta^{2}}{2}\\right]\\right)d\\theta \\tag{35}\\] \\[\\simeq \\,1-\\frac{\\sin^{2}(\\omega t_{0}+\\psi_{0})}{10}\\left(\\frac{a_{0}\\Delta\\theta^{2}}{12}\\right)^{2}. \\tag{36}\\] Contours of constant gain are thus lines with \\(a_{0}\\Delta\\theta^{2}=c\\) (constant), or \\(\\Delta\\theta=c/\\sqrt{a_{0}}\\) [cf. (14)]. ## Acknowledgment Discussions with colleagues, particularly D. Hall (EADS Astrium), Prof. F. Rocca and Prof. A. Monti-Guarnieri (Politecnico di Milano), Prof. M. Rycroft, and postgraduate students at Cranfield University, have contributed significantly to the research. The author would like to thank the Royal Society Wolfson Award and EPSRC Grant EP/H003304/1. The reviewers' thoughtful and constructive comments are much appreciated. ## References * [1] K. Tomiyasu and J. L. Pacelli, \"Synthetic aperture radar imaging from an inclined geosynchronous orbit,\" _IEEE Trans. Geosci. Remote Sens._, vol. GE-21, no. 3, pp. 324-329, Jul. 1983. * [2] S. Madsen, W. Edelstein, L. DiDomenico, and J. LaBrecque, \"A geosynchronous synthetic aperture radar; for tectonic mapping, disaster management and measurements of vegetation and soil moisture,\" in _Proc. IEEE IGARSS_, 2001, vol. 1, pp. 447-449. * [3] E. Im, S. L. Durden, Y. Rahmat-Samii, M. Lou, and J. Huang, \"Conceptual design of a geostationary radar for monitoring hurricanes,\" presented at the Earth Sci. Technol. Conf., Univ. of Maryland, College Park, MD, USA, Jun. 24-26, 2003. * [4] W. Edelstein, S. Madsen, A. Moussessian, and C. Chen, \"Concepts and technologies for synthetic aperture radar from MEO and geosynchronous orbits,\" in _Enabling Sensor and Platform Technologies for Spaceborne Remote Sensing_, vol. 5659, G. Komar, J. Wang, and T. Kimura, Eds. Bellingham, WA, USA: SPIE, 2005, pp. 195-203. * [5] C. Prati, F. Rocca, D. Giancola, and A. Monti Guarnieri, \"Passive geosynchronous SAR system reusing backscattered digital audio broadcasting signals,\" _IEEE Trans. Geosci. Remote Sens._, vol. 36, no. 6, pp. 1973-1976, Nov. 1998. * [6] S.
Hobbs, \"GeoSAR: Summary of the group design project, MSc in astronautics and space engineering 2005/06,\" Cranfield University, Bedford, U.K., College of Aeronautics Rep. 0509, Aug. 2006. * [7] D. Bruno and S. E. Hobbs, \"Radar imaging from geosynchronous orbit: Temporal decorrelation aspects,\" _IEEE Trans. Geosci. Remote Sens._, vol. 48, no. 7, pp. 2924-2929, Jul. 2010. * [8] S. Hobbs _et al._, \"Simulation of geosynchronous radar and atmospheric phase compensation constraints,\" in _Proc. IET Int. Radar Conf._, Xi'an, China, 2013, pp. 1-6. * [9] A. Monti Guarnieri, F. Rocca, and A. B. Ibars, \"Impact of atmospheric water vapor on the design of a Ku band geosynchronous SAR system,\" in _Proc. IEEE IGARSS_, Cape Town, Republic of South Africa, Jul. 12-17, 2009, vol. 2, pp. II-945-II-948. * [10] A. Monti Guarnieri _et al._, \"Design of a geosynchronous SAR system for water-vapour maps and deformation estimation,\" presented at the Fringe, Paris, France, 2011, ESA SP-697. * [11] A. Monti Guarnieri _et al._, \"Wide coverage, fine resolution, geosynchronous SAR for atmospheric and terrain observations,\" presented at the _Living Planet Symp._, paper no. ESA SP-72, 2013. * [12] J. Ruiz Rodon, A. Broquetas, E. Makhoul, A. Monti Guarnieri, and F. Rocca, \"Results on spatio-temporal atmospheric phase screen retrieval from long-term GEOSAR acquisition,\" in _Proc. IEEE IGARSS_, Munich, Germany, Jul. 22-27, 2012, pp. 3289-3292. * [13] J. Ruiz Rodon, A. Broquetas, A. Monti Guarnieri, and F. Rocca, \"Geosynchronous SAR focussing with atmospheric phase screen retrieval and compensation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 8, pp. 4397-4404, Aug. 2013. * [14] M. Bao, Y. Liao, Z. J. Tian, M. D. Xing, and Y. C. Li, \"Imaging algorithm for GEO SAR based on series reversion,\" in _Proc. IEEE CIE Int. Conf. Radar_, Chengdu, China, Oct. 24-27, 2011, vol. 2, pp. 1493-1496. * [15] X. Dong, Y. Gao, C. Hu, T. Zeng, and C. Dong, \"Effects of Earth rotation on GEO SAR characteristics analysis,\" in _Proc. IEEE CIE Int. Conf. Radar_, Chengdu, China, Oct. 24-27, 2011, vol. 1, pp. 34-37. * [16] C. Hu, T. Long, T. Zeng, F. Liu, and Z. Liu, \"The accurate focusing and resolution analysis method in geosynchronous SAR,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 10, pp. 3548-3563, Oct. 2011. * [17] C. Hu, Z. Liu, and T. Long, \"An improved CS algorithm based on the curved trajectory in geosynchronous SAR,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 5, no. 3, pp. 795-808, Jun. 2012. * [18] X. Dong, C. Hu, and T. Zeng, \"Antenna area constraint in GEO SAR,\" in _Proc. IET Int. Radar Conf._, Xi'an, China, Apr. 14-16, 2013, pp. 1-4. * [19] L. Kou, M. Xiang, X. Wang, and M. Zhu, \"Ionospheric effects on L-band geosynchronous circular SAR imaging,\" _IET Radar, Sonar Navigat._, vol. 7, no. 6, pp. 683-701, Jul. 2013. * [20] C. Hu, X. Li, T. Long, and Y. Gao, \"GEO SAR interferometry: Theory and feasibility study,\" in _Proc. IET Int. Radar Conf._, Xi'an, China, Apr. 14-16, 2013, pp. 1-5. * [21] R. S. Lawrence, C. G. Little, and H. J. Chivers, \"A survey of ionospheric effects upon earth-space radio propagation,\" _Proc. IEEE_, vol. 52, no. 1, pp. 4-27, Jan. 1964. * [22] S. Quegan and J. Lamont, \"Ionospheric and tropospheric effects on synthetic aperture radar performance,\" _Int. J. Remote Sens._, vol. 7, no. 4, pp. 525-539, 1986. * [23] Z.-W. Lu, J. Wu, and Z.-S. Wu, \"A survey of ionospheric effects on space-based radar,\" _Waves Random Media_, vol. 14, no. 2, pp.
S189-S273, 2004. * [24] Y. Otsuka _et al._, \"GPS observations of medium-scale traveling ionospheric disturbances over europe,\" _Annales Geophys._, vol. 31, no. 2, pp. 163-172, 2013. * [25] Z. Katamzi, N. Smith, C. Mitchell, P. Spalla, and M. Materassi, \"Statistical analysis of travelling ionospheric disturbances using TEC observations from geostationally satellites,\" _J. Atmosp. Solar-Terrestrial Phys._, vol. 74, pp. 64-80, 2012. * [26] M. Hernandez-Pajares, J. Juan, J. Sanz, and A. Aragon-Angel, \"Propagation of medium scale traveling ionospheric disturbances at different latitudes and solar cycle conditions,\" _Radio Sci._, vol. 47, no. 6, pp. 1-22, 2012. * [27] R. Hanssen, _Radar Interferometry, Data Interpretation and Error Analysis_. Norwell, MA, USA: Kluwer, 2001. * [28] B. Parkinson and J. Spilker, Eds., _Global Positioning System: Theory and Applications_, vol. 1. Reston, VA, USA: AIAA, 1996, ser. Progress in Astronautics and Aeronautics, vol. 163. * [29]_Station-Keeping in Longitude of Geostationary Satellites in the Fixed-Satellite Service_, Int. Telecommun. Union Std. ITU-R S.484-3, 1992, 2000. * [30] D. Vallado, _Fundamentals of Astrodynamics and Applications_, 4th ed. Portland, OR, USA: Microcosm Press, 2013. * [31] P. Fortescue, J. Stark, and G. Swinred, _Spacecraft Systems Engineering_, 4th ed. New York, NY, USA: Wiley, 2011. * [32] J. Wertz, D. F. Everett, and J. J. Puschell, _Space Mission Engineering: The New SMAD_. Portland, OR, USA: Microcosm Press, 2011, ser. Space Technology Library. * [33] W. Rees, _Physical Principles of Remote Sensing_, 3rd ed. Cambridge, U.K.: Cambridge Univ. Press, 2013. * [34] J. Ruiz Rodon, A. Broquetas, A. Monti Guarnieri, and F. Rocca, \"A ku-band geosynchronous synthetic aperture radar mission analysis with medium transmitted power and medium-sized antenna,\" in _Proc. IEEE IGARSS_, Vancouver, BC, Canada, Jul. 24-29, 2011, pp. 2456-2459, no. WE2.T04.3. * [35] N. C. Rogers, S. Quegan, J. S. Kim, and K. P. Papathanassiou, \"Impacts of ionospheric scintillation on the BIOMASS P-band satellite SAR,\" _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 3, pp. 1856-1868, Mar. 2014. * [36] D. Belcher, \"Theoretical limits on SAR imposed by the ionosphere,\" _IET Radar, Sonar Navigat._, vol. 2, no. 6, pp. 435-448, 2008. \\begin{tabular}{c c} & Stephen Hobbs received the B.S. degree in mathematics and physics from Trinity College, Cambridge University, Cambridge, U.K., in 1980, and the Ph.D. degree in ecological physics from Cranfield Institute of Technology, Bedford, U.K. for work on kite anemometry. Since 2004, he has been the Director with the Cranfield Space Research Centre, Cranfield University, Bedford, U.K. Since 1992, he has been involved with the School of Engineering's space engineering research and teaching. Prior to that, he worked on radar remote sensing and instrumentation in Cranfield's Ecological Physics Research Group and College of Aeronautics. In 2001, he was second to Astrum U.K. Ltd., Hertfordshire, U.K., to work on the European Space Agency's GAIA mission and a small radar satellite. His current research includes sustainability of space activities and measurement physics aspects of geosynchronous radar remote sensing. Dr. Hobbs is a member of the Royal Meteorological Society, the Institute of Physics, and the Remote Sensing and Photogrammetry Society (for which he is convenor of the Synthetic Aperture Radar Special Interest Group). \\\\ \\end{tabular} \\begin{tabular}{c c} & Cathryn Mitchell received the B.Sc. and Ph.D. 
degrees in physics from the University of Wales Aberystwyth, Dyfed, U.K. In 1999, she joined the University of Bath, Bath, U.K. Her research interests include the effects of atmospheric scintillation and multipath propagation on GPS navigation signals, tomography (medical and geophysical), and the influence of the Sun upon the magnetosphere and ionosphere. Her research interests include radio propagation, signal processing, and the inversion of multidirectional signals to reveal interesting information about the natural world. \\\\ \\end{tabular} \\begin{tabular}{c c} & Biagio Forte received the B.S. degree in physics from the University of Trieste, Trieste, Italy, and the Ph.D. degree in geophysics from Karl-Franzens University of Graz, Graz, Austria. In 2012, he joined the University of Bath, Bath, U.K., as a Prize Fellow in Space Weather. His fellowship aims at devising countermeasures to mitigate space weather vulnerabilities affecting satellite navigation, for example, in support to civil aviation. His research interests include physics and chemistry of the upper ionized atmosphere, propagation of electromagnetic waves, RF engineering, space weather phenomena and their effects, remote sensing of atmospheres, and low-frequency radio astronomy. \\\\ \\end{tabular} \\begin{tabular}{c c} & Rachel Holley (M'12) received the MESci degree in earth sciences from the University College, Oxford University, Oxford, U.K., in 2004, and the Ph.D. degree from the University of Reading, Berkshire, U.K., for work on mitigation of atmospheric path delays in interferometric synthetic aperture radar (InSAR). Since 2009, she has been with the InSAR Surveying Department, NPA Satellite Mapping, Kent, U.K. As a Senior Technical Lead within the department, she has worked on a wide variety of commercial, publicly funded and internal research projects. She has particular experience using Persistent Scatterer Interferometry and differential interferometric SAR techniques in challenging terrain for natural hazard applications, mining, oil, and gas field deformation and geothermal areas. Dr. Holley is a member of IEEE Geoscience and Remote Sensing Society, the European Geoscience Union, the Remote Sensing and Photogrammetry Society, and the Geological Remote Sensing Group. \\\\ \\end{tabular} \\begin{tabular}{c c} & Boris Snapir received the M.Sc. degree in astrononautics and space engineering. He is a Researcher with Cranfield University's Space Research Centre, Bedford, U.K., studying radar remote sensing for applications in agriculture and meteorology. He is developing methods of data assimilation to couple models of surface processes, such as soil moisture with Earth observation data, and is also applying these techniques to aspects of the geosynchronous synthetic aperture radar mission design. \\\\ \\end{tabular} \\begin{tabular}{c c} & Philip Whittaker received the Ph.D. degree from the Surrey Space Centre, University of Surrey, U.K., investigating payload architectures for in-orbit monitoring of the RF spectrum. He is a Senior RF Systems Engineer with Surrey Satellite Technology Ltd. (SSTL), Guildford, U.K. and a Technical Lead for synthetic aperture radar (SAR) activities. Since 2001 he has worked with SSTL on telemetry, telecommand and control subsystems and RF payloads for navigation and remote sensing. For the past three years, he was responsible for developing mission and payload concepts for low-cost spaceborne SAR solutions. \\\\ \\end{tabular}
Geosynchronous synthetic aperture radar (GEO SAR) has been studied for several decades but has not yet been implemented. This paper provides an overview of mission design, describing significant constraints (atmosphere, orbit, temporal stability of the surface and atmosphere, measurement physics, and radar performance) and then uses these to propose an approach to initial system design. The methodology encompasses all GEO SAR mission concepts proposed to date. Important classifications of missions are: 1) those that require atmospheric phase compensation to achieve their design spatial resolution; and 2) those that achieve full spatial resolution without phase compensation. Means of estimating the atmospheric phase screen are noted, including a novel measurement of the mean rate of change of the atmospheric phase delay, which GEO SAR enables. Candidate mission concepts are described. It seems likely that GEO SAR will be feasible in a wide range of situations, although extreme weather and unstable surfaces (e.g., water, tall vegetation) prevent 100% coverage. GEO SAR offers an exciting imaging capability that powerfully complements existing systems. _Index Terms_--Atmosphere, geosynchronous (GEO), mission, synthetic aperture radar (SAR), system.
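As a numerical companion to the azimuth-shift derivation in Appendix A, the short sketch below evaluates (26)-(28) for an assumed GEO SAR geometry; the wavelength, slant range, equivalent velocity, phase rate, and unit vectors are illustrative assumptions rather than parameters taken from this paper.

```python
import numpy as np

def azimuth_shift(phi_dot, wavelength, theta_inc, r, v, t, e_v, e_r, e_N):
    """Azimuth shift delta_y caused by a linearly changing APS, cf. (26)-(28)."""
    # (26): equivalent one-way path-length change accumulated over integration time t
    delta_l = phi_dot * t * wavelength / (2.0 * np.pi * np.cos(theta_inc))
    # unit vectors spanning the velocity-range and target planes (Appendix A)
    a = 1.0 / np.sqrt(1.0 - np.dot(e_v, e_r) ** 2)
    e_a = a * (e_v - e_r * np.dot(e_v, e_r))
    b = 1.0 / np.sqrt(1.0 - np.dot(e_r, e_N) ** 2)
    e_1 = b * (e_r - e_N * np.dot(e_r, e_N))
    e_2 = np.cross(e_1, e_N)
    # effective trajectory rotation and resulting azimuth displacement, (27)-(28)
    alpha = delta_l / (v * t)
    return -r * alpha / np.dot(e_2, e_a)

# illustrative numbers only: L-band, moderate incidence, slow APS drift
e_v = np.array([1.0, 0.0, 0.0])                    # along-track direction
e_r = np.array([0.0, np.sin(0.4), np.cos(0.4)])    # target-to-satellite direction
e_N = np.array([0.0, 0.0, 1.0])                    # local vertical
dy = azimuth_shift(phi_dot=1e-3, wavelength=0.24, theta_inc=0.4,
                   r=3.8e7, v=150.0, t=600.0, e_v=e_v, e_r=e_r, e_N=e_N)
print(f"azimuth shift: {dy:.1f} m")
```

With these assumed values the phase rate of 1 mrad/s over a 10 min aperture displaces the target by roughly 10 m in azimuth, which is the order of magnitude at which APS compensation becomes a design driver.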
# CroMoDa: Unsupervised Oriented SAR Ship Detection via Cross-Modality Distribution Alignment Xi Chen, Zhirui Wang, Wenhao Wang, Xinyi Xie, Jian Kang, and Ruben Fernandez-Beltran, Manuscript received 29 April 2024; revised 7 June 2024; accepted 26 June 2024. Date of publication 1 July 2024; date of current version 12 July 2024. This work was supported in part by the National Natural Science Foundation of China under Grant 6231027 and Grant 62101371, in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant XDA0360300, and in part by the Jiangsu Province Science Foundation for Youths under Grant BK20120707. _(Corresponding authors: Zhirui Wang; Jian Kang.)_Xi Chen, Wenhao Wang, and Jian Kang are with the School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China (e-mail: [email protected]). Zhirui Wang is with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China, and also with the Key Laboratory of Network Information System Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]). Xinyi Xie is with the College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China. Ruben Fernandez-Beltran is with the Department of Computer Science and Systems, University of Murcia, 30100 Murcia, Spain (e-mail: [email protected]). Digital Object Identifier 10.1109/ISTARS.2024.3420901 ## I Introduction Synthetic aperture radar (SAR) is widely used in earth observation, military reconnaissance, and ocean monitoring due to its ability to provide high-resolution remote sensing images under all-weather and all-day conditions. SAR images, due to their unique imaging mechanism, have the capability to acquire images under adverse weather conditions compared to optical imaging. This makes SAR images show great potential in applications such as ship detection and tracking [1, 2, 3, 4, 5]. Traditional SAR ship detection methods mainly include constant false alarm rate (CFAR) [6, 7, 8] and extended fractal (EF) [9]. CFAR is essentially based on a segmentation idea, where the detection threshold is compared with the grayscale value of pixels, treating pixels above the threshold as ships and those below it as backgrounds. The detection threshold is determined by the statistical characteristics of the background clutter, and calculating this threshold often requires significant computational resources, resulting in poor real-time performance and low detection accuracy in complex background. EF fully integrates the grayscale information of the image and the spatial distribution of grayscale values, using the spatial difference in reflection energy between the ship and clutter for ship detection. However, EF requires high image quality, and the presence of noise, blur, or distortion in the image can affect the extraction and analysis of fractal features, thereby impacting the accuracy and stability of ship detection. In recent years, with the rapid development of deep learning methods, a large number of approaches based on convolutional neural networks (CNNs) have achieved significant results in the field of SAR ship detection. Currently, detectors based on CNNs are mainly divided into single-stage detection algorithms [10, 11, 12, 13] and two-stage detection algorithms [14, 15, 16, 17]. 
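Before continuing with the CNN-based detectors, the classical CFAR idea described above (a threshold derived from local clutter statistics, with pixels above it treated as ship candidates) can be made concrete with a minimal cell-averaging CFAR sketch; the window sizes and the scaling factor are illustrative assumptions and not the configuration of any method evaluated later.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=4, train=8, scale=8.0):
    """Minimal cell-averaging CFAR: flag pixels whose intensity exceeds a
    multiple of the local clutter mean estimated from a training ring."""
    outer = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    # local sums over the full window and over the guard window
    sum_outer = uniform_filter(intensity, size=outer) * outer ** 2
    sum_inner = uniform_filter(intensity, size=inner) * inner ** 2
    n_train = outer ** 2 - inner ** 2
    clutter_mean = (sum_outer - sum_inner) / n_train
    return intensity > scale * clutter_mean

# toy usage: a bright "ship" patch on a speckle-like exponential background
rng = np.random.default_rng(0)
img = rng.exponential(scale=1.0, size=(256, 256))
img[100:108, 120:126] += 30.0
print("detected pixels:", int(ca_cfar(img).sum()))
```

In practice the scaling factor is derived from the desired false-alarm probability and the assumed clutter distribution, which is exactly the step that becomes expensive and fragile in complex backgrounds.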
Single-stage detection algorithms have advantages in computational speed compared with two-stage detection ones, but are inferior in terms of detection accuracy [18]. Faster region-based convolutional neural network (Faster R-CNN) is a typical two-stage detector that generates candidate boxes through a region proposal network (RPN) and then processes these boxes with classification and regression branches, achieving superior performance [16]. Although SAR ship detection methods based on CNNs have achieved significant improvements in performance, they inevitably require a large amount of labeled data for network training. Currently, obtaining sufficient annotated data for supervised training still faces several challenges. On the one hand, there are few instance-level annotated datasets available for SAR ship detection tasks, and even fewer datasets adopt oriented annotations. Moreover, most of the labeled datasets are small,and training networks on them can lead to an overfitting problem, reducing generalization performance. On the other hand, the interpretation of SAR images requires annotators with relevant professional knowledge, further increasing the manpower and time costs. In contrast, the annotation of optical images is relatively easier, making the use of optical images to train networks for SAR ship detection a new perspective. However, in traditional supervised deep learning, a network can only achieve good performance if both the training and testing data follow the same probability distribution. The distribution difference between optical and SAR images implies that networks trained on optical images cannot achieve satisfactory results on SAR images for inference [19, 20, 21]. This performance degradation caused by such difference is referred to as the domain shift problem [22]. The goal of this article is to achieve SAR ship detection by exploiting the labeled optical images and unlabeled SAR images themselves, where the unsupervised domain shift problem should be tackled. Chen et al. [22] are the first to achieve cross-domain detection by aligning image-level and instance-level features. Recently, many subsequent works have been proposed on the basis of such a framework [23, 24, 25, 26, 27]. However, the aforementioned studies on domain-adaptive object detection are designed for natural images, which deal with domain-shift problems such as weather change, day and night change, etc. Since the imaging mechanisms of optical and SAR images are totally different, such an image modality change will make the direct applications of those methods struggle with the detection performances of SAR ships. Building on adversarial domain adaptive learning, Zhao et al. [28] consider the significant differences in scattering intensities of SAR images and use entropy vectors to distinguish between high-entropy and low-entropy areas, to which corresponding weights are assigned. This method is used for cross-domain detection tasks with only SAR imagery and relies on a large amount of annotated SAR images. Xu et al. [29] attempt to gradually reduce the distribution differences between optical and SAR images through a multilevel alignment network at the image, convolution, and instance levels. Such feature alignment in cases of large distribution differences is prone to negative transfer, which limits the performance improvement. Shi et al. 
[30] propose an unsupervised domain adaptation detection method for the transition from optical to SAR images, transferring knowledge progressively at the pixel level, feature level, and inference level. At the pixel level, generative adversarial networks (GANs) are used to convert optical images into SAR-like images in the transition domain. At the feature level, domain-invariant features are acquired through adversarial learning. At the inference level, self-training is conducted using pseudo-labels generated by the feature-aligned detector. Although this method can achieve good detection results, the three independent stages can increase the training cost of the network, and the non-end-to-end framework might introduce cumulative errors. Pan et al. [31] note that two distributions with low correlation are more likely to encounter negative transfer, and introduce the prediction consistency loss [22] to solve such an issue. Although the abovementioned methods can mitigate the detection degradation problem caused by the significant modality change from optical to SAR imagery, they ignore the impact of speckle noise and other interference in SAR images on the detection performance. Moreover, without the supervised information from SAR images, situations where object features are aligned with the ones from the background may occur in existing methods, which reduces the discrimination capability of networks between objects and background areas. In addition, the oriented annotation of SAR ships is more suitable for applications in real scenarios due to the fewer effects produced by the sea clutter background than the horizontal annotation. Taking these into account, as shown in Fig. 1, we aim to achieve unsupervised oriented SAR ship detection given labeled optical images and SAR images without any supervised information, and propose a cross-modality distribution alignment method, named CroMoDa. Based on the Faster R-CNN structure, we propose four components to mitigate the impact caused by the modality change on the detection performance and achieve cross-modality-oriented SAR ship detection in an unsupervised manner. Specifically, they are composed of: 1) image-level feature alignment; 2) low-level feature despeckling; 3) cross-modality pseudo-label self-training; and 4) cross-modality object alignment. The image-level feature alignment is exploited for learning modality-invariant features at multiscales. By introducing low-level feature despeckling loss, the impact of speckle noise on low-level features can be prevented, as well as other interference produced by sea clutter and strong sidelobes. To learn more SAR-specific features, the cross-modality pseudo-label self-training strategy generates high-quality pseudo-labels to provide supervised information for SAR ships, which can reduce the occurrence of false alarms. The proposed cross-modality object alignment aims to precisely narrow the distribution distances between ship instances extracted from both modalities. To this end, the contributions of this article can be summarized as follows. Fig. 1: Unsupervised oriented SAR ship detection network is learned by using labeled optical and unlabeled SAR images. 1. We propose an end-to-end unsupervised oriented SAR ship detection framework based on cross-modality distribution alignment. To the best of our knowledge, this is the first end-to-end unsupervised oriented SAR ship detection framework by knowledge transfer from optical to SAR images in this field. 2. 
Compared to other state of the arts (SOTAs), we consider the impact of speckle noise and other interference inherent in SAR images on cross-modality detection. The proposed low-level feature despeckling loss can mitigate noise and interference at the feature level while retaining the characteristics of SAR ships. 3. To enable the network to better capture ship features from SAR images, we introduce a cross-modality pseudo-label self-training strategy. Through the proposed forward twice backward once (FTBO) mechanism, we achieve end-to-end pseudo-label self-training. Furthermore, by introducing a pseudo-label bank, we enhance the network's training speed and avoid the impact of cumulative errors, which are commonly encountered in the pseudo-label learning stage. 4. The effectiveness of CroMoDa is evaluated on two dataset configurations by comparing it to the other SOTAs. On average, the detection improvement can be reached around 10% mean average precision (mAP) compared to the second-best method. Notably, since most current cross-modality unsupervised object detection methods are based on Faster R-CNN, our method is also designed based on Faster R-CNN for ease of comparison. However, the method proposed in this article can be easily applied to other object detectors such as SSD [10], YOLO [11], RetinaNet [12], etc. The rest of this article is organized as follows. Section II presents some related work from the perspective of SAR ship detection and transfer learning. Section III introduces the proposed CroMoDa in detail. Extensive experiments are conducted in Section IV with the analysis. Finally, Section V concludes this article and provides some hints on future research. ## II Related Work ### _SAR Ship Detection_ Deep learning, with its powerful feature extraction and pattern recognition capabilities, has propelled SAR ship detection into a new era. Yang et al. [32] design a receptive field enhancement module that captures multiscale contextual information, improving the detection performance for ships of different sizes. Niu et al. [33] consider the high computational cost of feature pyramid networks in current detectors, innovatively design an encoder-decoder detection model that can extract multilevel features. They also exploit masks to estimate the orientation information of ships, using this information for weak supervision of the network. Considering the speckle noise in SAR images, Zhao et al. [34] employ a dual feature fusion attention mechanism to suppress noise background and integrate shallow features with denoised features, enhancing the network's robustness against speckle noise. Zhang et al. [35] propose a frequency attention module to adaptively process the frequency information of SAR images, which makes the network focus more on ships than sea clutter. To improve the detection speed, Wang et al. [36] encode the convolutional kernels of well-performed networks into a list of probabilities and use the firefly optimization algorithm to search for a set of lightweight network encodings. Through pruning techniques, they transform the network encoding into a lightweight ship detection network, achieving a balance between accuracy and speed. Zhang et al. [37] propose a multiscale global scattering feature association network for SAR ship detection. In SAR images, ships often have large aspect ratios and arbitrary orientations. 
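A quick geometric check makes the practical consequence concrete (the ship size and orientation below are arbitrary example values): the minimal horizontal box around a rotated, elongated rectangle can be several times larger than the rectangle itself.

```python
import numpy as np

def hbb_overhead(w, h, angle_deg):
    """Area of the minimal axis-aligned box enclosing a w x h rectangle rotated
    by angle_deg, divided by the area of the oriented rectangle itself."""
    t = np.deg2rad(angle_deg)
    W = w * abs(np.cos(t)) + h * abs(np.sin(t))
    H = w * abs(np.sin(t)) + h * abs(np.cos(t))
    return (W * H) / (w * h)

# a 100 m x 20 m ship-like rectangle at 45 degrees: the horizontal box is ~3.6x larger
print(round(hbb_overhead(100, 20, 45), 2))
```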
The horizontal bounding box detection algorithms lead to redundant background areas, making them difficult to accurately locate targets in complex scenes. Chen et al. [38] propose an adaptive recalibration network for detecting multiscale arbitrarily oriented ships. Pan et al. [39] propose a multistage oriented detection network using an angle correlation strategy to generate multiangle anchor boxes. By progressively increasing the intersection over union (IoU) threshold for assigning positive and negative candidate boxes across three stages, they ensure accuracy in localization and classification. Yang et al. [40] introduce a dual feature alignment module for capturing ship orientation and shape information and design an adaptive IoU calculation method that incorporates the aspect ratio of ships. To enhance the efficiency of detection in large SAR images, Jia et al. [41] identify areas that may contain ships in very large images through sea-land segmentation and CFAR, then use detectors to identify ships within these areas, and finally develop a false alarm discrimination network to further eliminate false alarms, achieving rapid and high-accuracy SAR ship detection. Wu et al. [42] propose an instance-segmentation-assisted ship detection network, by thresholding segmentation results to obtain approximate ship contours and integrating the instance segmentation results into the detection pipeline to achieve high-accuracy ship detection. The aforementioned deep learning methods have achieved good results in SAR ship detection, while they all require the training and test datasets to have the same distribution. In real scenarios, it is challenging to require the test dataset to follow the same distribution, and the cost of reannotation is high. ### _Domain Adaptation for Object Detection_ Domain adaptation has been widely applied in object detection. For similar images, changes in viewing angles, lighting conditions, and imaging methods can cause variations in feature distributions, making the development of a cross-domain detection model very important [43, 44, 45]. Currently, methods of unsupervised domain adaptation in the detection field generally employ three techniques: style transfer, feature alignment, and self-training. Among these, style transfer often serves as a preliminary task for feature alignment and self-training, transforming source-domain images into images with a style similar to the target domain to mitigate domain distribution differences. Style transfer is typically based on GANs [46, 47, 48]. Chen et al. [22] align the feature distributions between the source and target domains at the image and instance feature levels using adversarial learning. This is the first time adversarial training used for cross-domain object detection tasks. Saito et al. [23] propose a strong-weak alignment model that tightly aligns local features such as texture and color, while applying weak alignment to global features to reduce the domain gap without compromising the model's performance. Chen et al. [26] point out that not all features have equal transferability and enhance global discriminability by reweighting interpolated image features. Wu et al. [27] design two disentangling layers to decompose features into domain-specific and domain-invariant features, and extract instance-invariant features for object detection. Cai et al. 
[49] propose a background-focused distribution alignment for pedestrian detection, which avoids the issue of misalignment between foreground features in source-domain images and background features in target domain images. Li et al. [50] integrate self-supervised learning tasks, such as rotation angle prediction and strong/weak data augmentation into cross-domain object detection, to facilitate knowledge transfer between domains. Zhou et al. [51] introduce a new class-level discriminator that achieves the discriminability of instances with different categories while also maintaining category consistency between source and target domains. For self-training methods, pseudo-labels in the target domain are generated and exploited for the supervised training of detection networks. Khodabandeh et al. [52] propose a robust object detection framework to address issues with incorrect category labels and inaccurate bounding box locations in pseudo-labels. Kim et al. [24] present a domain adaptive detection method based on single-stage object detectors, using weak self-training to reduce the impact of inaccurate pseudo-labels and leveraging adversarial background score regularization to extract discriminative features of backgrounds, thereby reducing domain shift. Li et al. [53] propose an adaptive teacher framework, where the student network uses domain adversarial training to generate domain-invariant features, and exponential moving average is applied to the teacher network for generating high-quality pseudo-labels. The aforementioned methods all focus on domain adaptive object detection for natural images, where the imaging mechanisms of the source and target domains are the same. Directly applying these methods to cross-modality ship detection from optical to SAR images cannot yield satisfactory results. ## III Methodology Let \\(\\mathcal{D}_{o}=\\{(x_{i}^{o},y_{i}^{o},b_{i}^{o})\\}_{i=1}^{N_{o}}\\) denote the optical training dataset consisted of \\(N_{o}\\) images \\(x_{i}^{o}\\) with the ship class indices \\(y_{i}^{o}\\) and their oriented bounding boxes \\(b_{i}^{o}\\), and \\(\\bar{\\mathcal{D}}_{s}=\\{(x_{i}^{s})\\}_{i=1}^{N_{s}}\\) are the \\(N_{s}\\) training SAR images without any labeled information. Although they share the same label space, the joint probability distributions \\(p(\\mathcal{D}_{o})\\) and \\(p(\\bar{\\mathcal{D}}_{s},y_{i}^{s},b_{i}^{s})\\) are not identical. Thus, when a ship detector trained on optical images is directly applied to SAR images in the inference stage, there will be a significant decline in detection performance. For mitigating such a cross-modality distribution gap, the proposed method trains a CNN with \\(\\mathcal{D}_{o}\\) and \\(\\bar{\\mathcal{D}}_{s}\\) that can align the decision boundaries defined by \\(p(y^{o},b^{o}|x^{o})\\) and \\(p(y^{s},b^{s}|x^{s})\\). To achieve such a goal, CroMoDa, built upon by oriented Faster R-CNN, is mainly composed of four parts: 1) image-level feature alignment; 2) low-level feature despeckling; 3) cross-modality pseudo-label self-training; and 4) cross-modality object alignment. The overall framework of CroMoDa is illustrated in Fig. 2. ### _Image-Level Feature Alignment_ Following the principle idea in [23], the image-level feature alignment module progressively aligns low-level and high-level features so that the backbone network can output feature maps with similar distributions when the input image modality changes. It includes two parts: low-level feature alignment and high-level feature alignment. 
Specifically, the low-level and high-level features of optical and SAR images extracted by the backbone network are fed into the discrimination network after passing through a gradient reversal layer. The discrimination network distinguishes whether the features belong to the optical or the SAR modality. Due to the gradient inversion, the feature extraction network can generate features with distributions that are as similar as possible. Through the adversarial interaction between the feature extraction network and the discrimination network, the backbone network is able to map the input optical images and SAR images to the same feature space, thereby obtaining intermediate-level features for the final ship detection. #### Iii-A1 Low-Level Feature Alignment Image \\(x_{i}\\) and its modality label \\(d_{i}\\) are first fed into the backbone network, where they pass through the feature extraction network \\(E_{1}\\) to obtain low-level features \\(\\mathbf{F}_{1}\\), i.e., \\(\\mathbf{F}_{1}=E_{1}(x_{i})\\). Then, the obtained low-level features \\(\\mathbf{F}_{1}\\) are input into the low-level feature discrimination network \\(D_{1}\\), resulting in the modality category probability maps \\(\\mathbf{P}_{i}^{\\mathrm{L}}\\in\\mathbb{R}^{C\\times H\\times W}\\). To align the low-level features obtained from optical and SAR images, we use the following loss function: \\[L_{\\mathrm{img-low}}=\\frac{1}{HW}\\sum_{h=1}^{H}\\sum_{w=1}^{W}\\left[d_{i}\\left( \\mathbf{p}_{i}^{\\mathrm{L}}\\right)^{2}+\\left(1-d_{i}\\right)\\left(1-\\mathbf{p }_{i}^{\\mathrm{L}}\\right)^{2}\\right] \\tag{1}\\] where \\(H\\) and \\(W\\) denote the height and width of the feature maps, respectively, and \\(\\mathbf{p}_{i}^{\\mathrm{L}}\\) denote the channel features extracted from \\(\\mathbf{P}_{i}^{\\mathrm{L}}\\) with the spatial indices \\((h,w)\\). #### Iii-A2 High-Level Feature Alignment High-level features contain rich semantic information and the modality style of images. By aligning high-level features, the backbone network can learn common semantic representations on different imagery modalities. We use the feature extraction network \\(E_{2}\\) to obtain high-level features \\(\\mathbf{F}_{2}\\) from \\(\\mathbf{F}_{1}\\), i.e., \\(\\mathbf{F}_{2}=E_{2}(\\mathbf{F}_{1})\\). Furthermore, the discrimination network outputs the modality category probabilities \\(\\mathbf{p}_{i}^{\\mathrm{H}}\\in\\mathbb{R}^{C\\times 1\\times 1}\\) of \\(\\mathbf{F}_{2}\\). To achieve the feature alignment, we exploit the following loss: \\[L_{\\mathrm{img-high}}=d_{i}\\,L_{\\mathrm{foc}}(\\mathbf{p}_{i}^{\\mathrm{H}})+ \\left(1-d_{i}\\right)L_{\\mathrm{foc}}(1-\\mathbf{p}_{i}^{\\mathrm{H}}). \\tag{2}\\] \\(L_{\\mathrm{foc}}(\\cdot)\\) represents the focal loss function, which assigns larger weights to the images that are difficult to be classified, thereby enabling the network to better align high-level features [12]. To this end, we apply adversarial learning at the low and high levels of the backbone network, enabling us to align the optical and SAR images in terms of edges, texture features, and high-level semantic features. Accordingly, the loss for image-levelfeature alignment is \\[L_{\\text{img}}=L_{\\text{img-low}}+L_{\\text{img-high}}. \\tag{3}\\] ### _Low-Level Feature Despeckling for SAR Images_ Due to the coherent processing of SAR images, sea waves often appear as clutters or speckles, which can strongly interfere with the ship interpretation. 
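Before detailing this interference further, the image-level alignment just described can be summarized in code. The sketch below implements a gradient reversal layer and the alignment losses (1)-(3); the discriminator architectures, feature shapes, and the convention that the modality label \\(d_{i}\\) equals 1 for SAR images are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)

class PixelDiscriminator(nn.Module):
    """1x1-conv discriminator giving a per-location modality probability map."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(channels, 256, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(256, 1, 1), nn.Sigmoid())

    def forward(self, feat):
        return self.net(grl(feat))

def img_low_loss(p_map, d):
    # least-squares form of (1); d is the modality label of the image
    return (d * p_map ** 2 + (1 - d) * (1 - p_map) ** 2).mean()

def focal(p, gamma=2.0, eps=1e-6):
    return -((1 - p) ** gamma) * torch.log(p + eps)

def img_high_loss(p_img, d, gamma=2.0):
    # focal-loss form of (2) on the image-wise modality probability
    return (d * focal(p_img, gamma) + (1 - d) * focal(1 - p_img, gamma)).mean()

# L_img = img_low_loss(...) + img_high_loss(...), cf. (3)
```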
Moreover, sidelobe effects can make SAR ships appear different from their optical counterparts by introducing additional noise. Fig. 3(a) and (b) illustrates examples of SAR ships affected by sea clutter and sidelobe effects. Although the above image-level feature alignment can effectively align the overall features of images, it fails to achieve good alignment of ships between the two modalities at the object level. The trained optical ship detector cannot be easily adapted to identify SAR ships due to the interference unique to SAR images. To avoid the potential false alarms produced by such noise, we propose a low-level feature despeckling loss term described by \\[L_{\\text{desp}}\\!=\\!\\frac{1}{HW}\\sum_{h=1}^{H}\\sum_{w=1}^{W}\\left(\\left|\\mathbf{F}_{1}^{h+1,w}-\\mathbf{F}_{1}^{h,w}\\right|+\\left|\\mathbf{F}_{1}^{h,w+1}-\\mathbf{F}_{1}^{h,w}\\right|\\right) \\tag{4}\\] which constrains the total variation of the low-level features for SAR images. By minimizing \\(L_{\\text{desp}}\\), we encourage the learned low-level features of SAR images to be insensitive to such interference. It is worth noting that the despeckling loss is only applied to low-level features rather than high-level ones: the interference takes effect at the early stage of feature extraction, whereas penalizing the spatial gradients of high-level features may weaken the global semantic information. Fig. 2: Overall framework for CroMoDa. Fig. 3: (a) and (b) Speckle noise, sea clutter, and strong sidelobe effects in SAR images. ### _Cross-Modality Pseudo-Label Self-Training for SAR Ships_ The image-level feature distributions of the two modalities can be aligned as closely as possible by exploiting the above-mentioned strategies. As a result, the network can utilize the knowledge trained from optical images with labels and predict the ship locations in SAR images as well. However, due to different imaging principles, ships are characterized differently in the two imagery modalities. In SAR images, ships often appear to be composed of bright spots or strong scatterers, contrasting with the smooth and continuous objects typically observed in optical images. As shown in Fig. 4, directly exploiting the image-level aligned network to predict the ship locations in SAR images produces lots of false alarms, since many background regions share features with the objects observed in the optical modality. To solve such an issue, we seek to generate some supervised information in SAR images to make the network better characterize the SAR ships. Pseudo-label self-training technology generates pseudo-labels through the prediction results of unlabeled data and uses the data containing pseudo-labels as additional samples for network training. Such a strategy further improves the generalization capability of the network and is widely applied in semisupervised or unsupervised learning tasks [52, 53, 54]. We integrate such technology into our model to enable it to capture the unique characteristics that distinguish SAR ships from optical ones. Currently, pseudo-label self-training methods in the field of object detection are mostly based on a multistage teacher-student framework. This involves initially training the model using only source-domain data to obtain initial weights, which are then used to initialize both the teacher and student networks.
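Before continuing with the self-training strategy, note that the despeckling term (4) introduced above amounts to an anisotropic total-variation penalty on the low-level SAR feature map; a short sketch follows (the (N, C, H, W) tensor layout is an assumption).

```python
import torch

def despeckle_loss(feat):
    """Total-variation-style penalty on a low-level feature map, cf. (4).

    feat: tensor of shape (N, C, H, W); the mean absolute difference between
    neighbouring positions along height and width is returned.
    """
    dh = (feat[..., 1:, :] - feat[..., :-1, :]).abs().mean()
    dw = (feat[..., :, 1:] - feat[..., :, :-1]).abs().mean()
    return dh + dw

# usage sketch: a random tensor standing in for F1 of a SAR image
f1_sar = torch.randn(2, 64, 128, 128)
print(float(despeckle_loss(f1_sar)))
```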
During training, the teacher network is responsible for generating pseudo-labels on the target domain, and the student network utilizes these pseudo-labels for training. Such a framework has some disadvantages. 1. Due to the modality differences between optical and SAR images, the teacher network, when trained solely on optical images, generates pseudo-labels of low quality on SAR images. Consequently, it is challenging for the student network to learn modality-specific knowledge through those labels, and there is even a risk of reinforcing erroneous knowledge through training. 2. Most of the current methods are not end-to-end, and the overall performance is limited by the performance of individual learning stages. To address the aforementioned issues, we propose an online pseudo-label self-training strategy, termed FTBO. As illustrated in Fig. 5, to obtain the losses supervised by pseudo-labels, we achieve two forward propagation processes and one backward propagation process executed on the RPN and the ROI module. During the first forward propagation, high-quality pseudo-labels are obtained through two levels of filtering, at the instance and image levels, and are stored in an offline _pseudo-label bank_. The second forward propagation retrieves pseudo-labels from the pseudo-label bank to guide the positive and negative sample allocation for candidate boxes and to calculate the loss for both RPN and the ROI module. After two forward propagations, network parameters are updated through backward propagation based on the calculated loss. Notably, the first forward propagation occurs at each training iteration, whereas the second forward propagation and backward propagation only occur when the pseudo-label bank contains labels for the input images. In detail, the proposed FTBO is introduced as follows. #### Iii-B1 First Forward The SAR image features \\(\\mathbf{F}_{2}\\) extracted by the backbone are passed through the RPN to output candidate boxes, which then yield the predicted boxes with category probabilities and regression coordinates through the detection head and nonmaximum suppression (NMS). The category probability is used as the confidence score for the associated prediction. Generally, predicted boxes with higher confidence scores are more likely to be correctly identified. The boxes with lower ones are more likely to be background regions. Therefore, by setting a threshold \\(\\theta_{\\text{ins}}\\), most predicted boxes of background regions can be filtered out. However, when there are a large number of ships in the image with low confidence, instance-level filtering may remove these low-confidence ships. Utilizing the images with only a few pseudo-labels for training can diminish the network's detection performance. To avoid such an issue, an image-level confidence \\(c_{i}\\) is introduced to assess the overall quality of pseudo-labels within an image: \\[c_{i}=\\frac{1}{M}\\sum_{m=1}^{M}c_{i,m}\\mathds{1}_{c_{i,m}>\\theta_{m}} \\tag{5}\\] where \\(c_{i,m}\\) denotes the confidence score of the \\(m\\)th predicted box in \\(x_{i}^{s}\\), \\(M\\) is the number of predicted boxes, and \\(\\mathds{1}\\) is the indicator function. Importantly, we exploit a pseudo-label bank \\(\\mathbb{B}=\\{x_{i}^{s},y_{i}^{s},b_{i}^{s}\\}\\) to store pseudo-labels with relatively high overall quality and the corresponding SAR images \\(x_{i}^{s}\\). 
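A compact sketch of the two-level filtering and the pseudo-label bank described here is given below; the thresholds mirror \\(\\theta_{\\rm ins}\\) and \\(\\theta_{\\rm img}\\) from the text, while the oriented-box format and the keying of the bank by image identifier are illustrative assumptions.

```python
import torch

def image_confidence(scores, theta_ins=0.5):
    """Image-level confidence (5): mean of the instance scores above theta_ins,
    averaged over all M predicted boxes of the image."""
    if scores.numel() == 0:
        return 0.0
    kept = scores * (scores > theta_ins).float()
    return float(kept.sum() / scores.numel())

class PseudoLabelBank:
    """Offline store of high-quality pseudo-labels, keyed by SAR image id."""
    def __init__(self, theta_img=0.97, theta_ins=0.5):
        self.theta_img, self.theta_ins = theta_img, theta_ins
        self.bank = {}

    def maybe_add(self, img_id, boxes, scores):
        # store at most once per image, and only when the overall quality is high
        if img_id in self.bank:
            return False
        if image_confidence(scores, self.theta_ins) <= self.theta_img:
            return False
        keep = scores > self.theta_ins
        self.bank[img_id] = (boxes[keep].detach().cpu(), scores[keep].detach().cpu())
        return True

    def get(self, img_id):
        return self.bank.get(img_id)
```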
By setting an image-level confidence threshold \\(\\theta_{\\text{img}}\\), if the image-level confidence \\(c_{i}\\) exceeds \\(\\theta_{\\text{img}}\\) and \\(\\mathbb{B}\\) does not contain pseudo-labels for that image, then the corresponding pseudo-labels for the image are stored in \\(\\mathbb{B}\\) for subsequent loss calculation. It is noteworthy that we use \\(\\mathbb{B}\\) for the following reasons. 1. The adversarial training process is not always stable. For some images, reliable pseudo-labels can be generated in the early training stage, while they may not satisfy the above criteria in the later stage. In that case, historical pseudo-labels can be retrieved from \\(\\mathbb{B}\\) to participate in training, thereby enhancing the network's training stability and convergence speed. Fig. 4: (a) and (b) Lots of false alarms present in the detection results by only considering image-level feature alignment. Fig. 5: Proposed FTBO strategy for cross-modality pseudo-label self-training of SAR ships. 2. For images that cannot generate pseudo-labels, weak data augmentation can be applied during the training process. Pseudo-labels obtained from the augmented images are also stored in \\(\\mathbb{B}\\) for subsequent training of the network. 3. _Second Forward and First Backward:_ The second forward propagation, leading to loss computation and subsequent backward propagation, commences only when the pseudo-label bank contains pseudo-labels for the current image. In that case, \\(\\mathbf{F}_{2}\\) is passed through the RPN to output candidate boxes. Under the guidance of pseudo-labels, these candidates are classified into positive and negative samples. By exploiting the detection head and NMS, each predicted box is associated with category probabilities and regression coordinates. For positive samples, both classification and regression losses are calculated, whereas for negative samples, only classification losses are computed. The following loss terms are exploited: \\[L_{\\mathrm{RPN-s}}=L_{\\mathrm{cls}}(p_{i,m}^{\\text{RPN}},y_{i,m}^{*})+L_{ \\mathrm{loc}}(b_{i,m}^{\\text{RPN}},b_{i,m}^{*})\\] (6) \\[L_{\\mathrm{ROI-s}}=L_{\\mathrm{cls}}(p_{i,m}^{\\text{ROI}},y_{i,m}^ {*})+L_{\\mathrm{loc}}(b_{i,m}^{\\text{ROI}},b_{i,m}^{*}).\\] (7) Here, \\(p_{i,m}\\) and \\(b_{i,m}\\), respectively, denote the class probability and regression parameters of the \\(m\\)th candidate or predicted box corresponding to the \\(i\\)th image, while \\(y_{i,m}^{*}\\) and \\(b_{i,m}^{*}\\) represent the generated pseudo-labels. \\(L_{\\mathrm{cls}}\\) is the cross-entropy classification loss and \\(L_{\\mathrm{loc}}\\) denotes the smooth \\(L_{1}\\) loss exploited for the regression. Based on the abovementioned terms, the overall loss for SAR ship detection supervised by the generated pseudo-labels is \\[L_{\\mathrm{det-s}}=L_{\\mathrm{RPN-s}}+L_{\\mathrm{ROI-s}}. \\tag{8}\\] ### _Cross-Modality Object Alignment_ Considering that the basic visual appearance and structural features of objects should remain relatively consistent across different domains, existing works improve the network's generalization capability on the target domain by adopting instance-level alignment to narrow the gap between the feature representations of instances in the source and target domains [22, 26, 27, 31]. 
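Before turning to how instances are aligned, the FTBO mechanism of the previous subsection can be summarized by the control flow below; `detector.predict` and `detector.supervised_losses` are placeholder interfaces standing in for the RPN/ROI forward passes and the losses (6)-(8), not the authors' actual API, and `bank` refers to the `PseudoLabelBank` sketched earlier.

```python
import torch

def ftbo_step(detector, bank, sar_img, img_id):
    """One SAR-side FTBO iteration: forward twice, backward once."""
    # first forward: predict without gradients, filter, and possibly store pseudo-labels
    with torch.no_grad():
        boxes, scores = detector.predict(sar_img)        # placeholder interface
    bank.maybe_add(img_id, boxes, scores)

    # second forward (and the single backward) only if the bank holds labels for this image
    entry = bank.get(img_id)
    if entry is None:
        return None
    pseudo_boxes, _ = entry
    losses = detector.supervised_losses(sar_img, pseudo_boxes)  # L_RPN-s, L_ROI-s, cf. (6)-(7)
    total = sum(losses.values())                                # L_det-s, cf. (8)
    total.backward()
    return float(total.detach())
```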
Although these methods can alleviate the instance-level feature discrepancy across domains to some extent, they face an issue where the instance-level features, extracted by the ROI module from \\(\\mathbf{F}_{2}^{o}\\) based on candidate boxes, encompass both object and background features. When aligning instance features between optical and SAR images, object instances are partly aligned with background instances, causing the distributions of object and background features to converge. This process dilutes the unique features of ships, weakening the effectiveness of instance feature alignment. This kind of alignment is referred to as \"arbitrary alignment,\" as shown in Fig. 6. Fig. 6: Difference between arbitrary alignment of instances exploited in the other SOTAs and the proposed cross-modality object alignment. To address the aforementioned issue, we propose a novel instance-level alignment strategy, named cross-modality object alignment. It aims to align only the instance features of ship regions in optical and SAR images. This strategy enables the network to more effectively extract shared features between optical and SAR ships, facilitating the transfer of knowledge learned from optical images to SAR ones at a high semantic level. Specifically, for optical images, instance features of optical ships are extracted from the features \\(\\mathbf{F}_{2}^{o}\\) through the ROI module with accessible labels. Likewise, for SAR images, pseudo-labels from \\(\\mathbb{B}\\) are exploited to obtain instance features of SAR ships. Consistent with the instance feature fusion method proposed in [23], we obtain the low-level and high-level features of images from the two discriminator networks \\(D_{1}\\) and \\(D_{2}\\) through the image feature alignment module. These features are then concatenated with the instance ones extracted by the ROI on \\(\\mathbf{F}_{2}\\) to obtain final instance features with global information. Since instance features are extracted based on ground-truth labels from optical images and high-quality pseudo-labels from SAR images, the extracted features represent the ship characteristics with a high probability, so that the alignment between the object and background regions can be avoided. The loss calculation for the proposed cross-modality object alignment is similar to the image-level feature alignment. The obtained instance features are passed through a gradient reversal layer and then input into an instance feature discriminator network \\(D_{3}\\), resulting in modality category probabilities \\(\\mathbf{p}_{i,j}^{\\text{I}}\\), where \\(i\\) denotes the image index and \\(j\\) is the object index. Then, the loss term for the cross-modality object alignment is calculated by \\[L_{\\text{obj}}=d_{i}\\,L_{\\text{foc}}(\\mathbf{p}_{i,j}^{\\text{I}})+(1-d_{i})\\,L_{\\text{foc}}(1-\\mathbf{p}_{i,j}^{\\text{I}}). \\tag{9}\\] It is worth noting that \\(L_{\\text{obj}}\\) is obtained only when pseudo-labels for the current image exist in \\(\\mathbb{B}\\). ### _End-to-End Unsupervised Cross-Modality SAR Ship Detection_ Based on the above proposed parts, the overall training for unsupervised cross-modality SAR ship detection is conducted in an end-to-end manner. The overall loss for training the network is \\[L=L_{\\mathrm{det-o}}+\\lambda_{1}L_{\\text{img}}+\\lambda_{2}L_{\\text{desp}}+\\lambda_{3}L_{\\text{obj}}+\\lambda_{4}L_{\\mathrm{det-s}} \\tag{10}\\]
where \\(L_{\\rm det-o}\\) is the detection loss for optical images supervised by ground-truth labels, and \\(\\lambda_{1},\\lambda_{2},\\lambda_{3},\\text{ and }\\lambda_{4}\\) are the hyperparameters for balancing different loss terms. Algorithm 1 summarizes the training scheme for the proposed CroMoDa.
```
Input: Optical dataset \\(\\mathcal{D}_{o}\\) and SAR dataset \\(\\bar{\\mathcal{D}}_{s}\\), \\(\\theta_{\\rm img}\\), \\(\\theta_{\\rm ins}\\), \\(\\mathbb{B}\\), \\(\\lambda_{1}\\), \\(\\lambda_{2}\\), \\(\\lambda_{3}\\), \\(\\lambda_{4}\\)
Output: CroMoDa network
for each epoch do
  for each iteration do
    Sample mini-batch \\(\\mathbf{X}^{o}\\), \\(\\mathbf{X}^{s}\\) from \\(\\mathcal{D}_{o}\\), \\(\\bar{\\mathcal{D}}_{s}\\);
    Calculate the ship detection loss of optical images \\(L_{\\rm det-o}\\);
    Calculate the image-level feature alignment loss \\(L_{\\rm img}\\);
    Calculate the low-level feature despeckling loss \\(L_{\\rm desp}\\);
    for each SAR image \\(x_{i}^{s}\\) do
      Calculate the image-level confidence \\(c_{i}\\) for \\(x_{i}^{s}\\);
      if \\(c_{i}>\\theta_{\\rm img}\\) and \\(b_{i}^{*},y_{i}^{*}\\) not in \\(\\mathbb{B}\\) then
        Save \\(b_{i}^{*},y_{i}^{*}\\) to the pseudo-label bank \\(\\mathbb{B}\\);
      end if
      if \\(b_{i}^{*},y_{i}^{*}\\) in \\(\\mathbb{B}\\) then
        Calculate the pseudo-label self-training loss \\(L_{\\rm det-s}\\);
        Calculate the cross-modality object alignment loss \\(L_{\\rm obj}\\);
      end if
    end for
    Update network parameters by backpropagation.
  end for
end for
```
**Algorithm 1** Training Scheme for the Proposed CroMoDa. ## IV Experiments In this section, we first introduce the datasets used to evaluate the detection performance, the experimental setup, and the evaluation metrics adopted. We then compare our method against several current SOTA unsupervised domain adaptation detection methods on two types of optical-SAR dataset combinations. Furthermore, we conduct a series of ablation experiments on the proposed method to verify its effectiveness. ### _Experimental Setup_ #### Iv-A1 Dataset Configuration We select DOTA [55] as the optical training dataset. SSDD [1] and RSDD [56] are the two SAR datasets used for unsupervised ship detection. The DOTA dataset is currently one of the mainstream optical remote sensing datasets for object detection, with images sourced from Google Earth and two Chinese satellites (Gaofen-2 and Jilin-1). This dataset contains a total of 2806 images, with sizes ranging from \\(800\\times 800\\) to \\(4000\\times 4000\\) pixels, and includes 188 282 instances across 15 categories, such as airplanes and ships, using oriented annotations. We first obtain 31 879 images of size 1024 \\(\\times\\) 1024 through slicing and then select 1869 images containing ships as the training dataset. The SSDD dataset is composed of 1160 SAR images acquired from three SAR satellites: Radarsat-2, TerraSAR-X, and Sentinel-1. Those images have an average size of \\(500\\times 500\\), containing a total of 2456 oriented ships. The image resolution ranges from 1 to 15 m per pixel, featuring ships of various sizes and materials in both nearshore and offshore scenes. We divide the SSDD dataset into 928 training images and 232 test images with an 8:2 split ratio without overlap and repeat the division process three times to obtain three train-test datasets for reliable evaluation. The RSDD-SAR dataset is composed of 127 scenes and 7000 SAR images of size 512 \\(\\times\\) 512, which contains a total of 10263 oriented ships.
We select 1600 and 400 images for training and testing with the same split ratio, and repeat three times as above. #### Iv-A2 Implementation Details During the training phase, images from both modalities are resized to a dimension where the longer side is 1024 pixels, maintaining the aspect ratio of the images. In addition, random horizontal flipping is applied. In the testing phase, SAR images are resized in the same manner without any data augmentation. We utilize the stochastic gradient descent optimizer to train the network, with an initial learning rate of 0.005, a weight decay of 0.0001, and a momentum of 0.9. The learning rate is reduced by a factor of 0.1 at the 32nd and 44th epochs. The network is trained for a total of 48 epochs, with a batch size of four training images, including two optical and two SAR images. All the implementations are based on MMRate [57] and carried out on a server with an NVIDIA RTX3090 GPU. The hyperparameter settings are \\(\\theta_{\\text{ins}}=0.5\\), \\(\\theta_{\\text{img}}=0.97\\), \\(\\lambda_{1}=0.5\\), \\(\\lambda_{2}=1.0\\), \\(\\lambda_{3}=0.2\\), and \\(\\lambda_{4}=1.0\\). During the testing phase, detected boxes with an IoU greater than 0.5 compared with the ground-truth boxes are considered to be correct detection. To quantitatively assess our method, we use the following four metrics: \\[\\text{Precision} =\\frac{\\text{TP}}{\\text{TP}+\\text{FP}} \\tag{11}\\] \\[\\text{Recall} =\\frac{\\text{TP}}{\\text{TP}+\\text{FN}}\\] (12) \\[\\text{F}_{1} =\\frac{2\\times\\text{Precision}\\times\\text{Recall}}{\\text{Precision} +\\text{Recall}}\\] (13) \\[\\text{mAP} =\\frac{1}{m}\\sum_{i=1}^{m}\\int P(r)dr \\tag{14}\\] where TP, FP, and FN represent true positive, false positive, and false negative, respectively. Precision is the proportion of Fig. 7: PR curves of all the considered methods on (a) DOTA \\(\\rightarrow\\) SSDD and (b) DOTA \\(\\rightarrow\\) RSDD. actual positives in samples predicted as positive by the network, while Recall is the proportion of actual positives that the network correctly predicts as positive among all true positives. The F1 score is the harmonic mean of Precision and Recall, offering a composite measure of the model's performance. \\(P(\\cdot)\\) represents Precision as a function of Recall, and integrating this over the range gives the average precision (AP) for a category. Averaging the AP values across multiple categories yields the mAP, which is used to comprehensively evaluate the network's performance. It is worth noting that each of our experiments is conducted on three different sets of train-test splits, with each run being randomized three times for each set. The final experimental results are obtained by taking the average of these nine runs. #### Vi-A3 Baselines To evaluate the effectiveness of the proposed method, we compare it with several SOTA object detection methods: 1) _R-Faster RCNN [16]_; in the experiments, we exploit _R-Faster RCNN (optical)_ to represent the approach of using only labeled optical images for training the network and testing it on SAR images, while _R-Faster RCNN (supervised)_ represents training and testing the network on SAR images in a fully supervised manner; 2) _DAF [22]_; 3) _SWDA [23]_; 4) _HTCN [26]_; 5) _IIOD [27]_; 6) _UDPT [30]_; and 7) _IDAA [31]_. 
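The quantities in (11)-(14) can be computed from score-ranked detections as in the sketch below; the matching of detections to ground truth (rotated IoU above 0.5) is assumed to have been performed already, and the toy numbers at the end are illustrative only.

```python
import numpy as np

def pr_f1_ap(scores, is_tp, num_gt):
    """Precision, Recall, best F1 over score thresholds, and AP, cf. (11)-(14).

    scores: confidence of each detection; is_tp: 1 if the detection matched a
    ground-truth box (IoU > 0.5), else 0; num_gt: number of ground-truth ships.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.cumsum(np.asarray(is_tp, dtype=float)[order])
    fp = np.cumsum(1.0 - np.asarray(is_tp, dtype=float)[order])
    precision = tp / np.maximum(tp + fp, 1e-12)
    recall = tp / max(num_gt, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    # AP as the area under the precision-recall curve (all-point interpolation)
    prec_interp = np.maximum.accumulate(precision[::-1])[::-1]
    ap = np.sum(np.diff(np.concatenate(([0.0], recall))) * prec_interp)
    return precision[-1], recall[-1], f1.max(), ap

# toy example: five detections against three ground-truth ships
p, r, f1, ap = pr_f1_ap([0.9, 0.8, 0.7, 0.6, 0.3], [1, 1, 0, 1, 0], num_gt=3)
print(f"P={p:.2f} R={r:.2f} F1={f1:.2f} AP={ap:.2f}")
```

With a single ship category, the mAP in (14) reduces to this AP value; averaging over categories recovers the general definition.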
Fig. 7: PR curves of all the considered methods on (a) DOTA \(\rightarrow\) SSDD and (b) DOTA \(\rightarrow\) RSDD.

#### IV-A3 Baselines

To evaluate the effectiveness of the proposed method, we compare it with several SOTA object detection methods: 1) _R-Faster RCNN [16]_; in the experiments, we use _R-Faster RCNN (optical)_ to represent the approach of using only labeled optical images for training the network and testing it on SAR images, while _R-Faster RCNN (supervised)_ represents training and testing the network on SAR images in a fully supervised manner; 2) _DAF [22]_; 3) _SWDA [23]_; 4) _HTCN [26]_; 5) _IIOD [27]_; 6) _UDPT [30]_; and 7) _IDAA [31]_.

### _Experimental Results_

#### IV-B1 Comparison to SOTA Methods

To validate the effectiveness of the proposed method, we conduct quantitative and qualitative experiments against the abovementioned SOTAs on two dataset configurations: DOTA \(\rightarrow\) SSDD and DOTA \(\rightarrow\) RSDD. Table I reports the quantitative results of all the considered methods, where the mean and standard deviation values are calculated on the basis of nine runs. It can be observed that the proposed method achieves the best detection performance of all. For DOTA \(\rightarrow\) SSDD, CroMoDa's \(F_{1}\) and mAP scores improve by around 3% and 9%, respectively, compared with the second-best method. For DOTA \(\rightarrow\) RSDD, the proposed method significantly increases the detection accuracy, by around 13% in \(F_{1}\) score and 14% in mAP score compared with the second best. In addition, our method maintains high precision while achieving the highest recall. Since pseudo-labels of SAR ships are used to supervise the network learning in both UDPT and CroMoDa, these two methods show a clear improvement in detection performance on both datasets compared to the others. Such improvement is attributed to the suppression of false alarms by the pseudo-label self-training strategy. Table I shows a considerable standard deviation in some results, which can be attributed to two primary factors. First, both the proposed method and the comparative methods employ adversarial training within the network. The adversarial interplay between the feature extraction network and the discriminator network can lead to instability during training, resulting in significant variations in the outcomes of each model training session. Second, we perform three random splits of the training and testing sets, and different splits lead to substantial differences in the accuracy of the trained models.

Table II presents a comparative analysis of our proposed method and the other methods in terms of FPS, parameter count, and floating-point operations. It is evident that our method aligns closely with most methods in terms of inference speed, parameter count, and floating-point operations, with notable differences observed only with respect to DAF, UDPT, and IDAA. However, our method significantly outperforms the others in detection accuracy. It is noteworthy that UDPT exhibits identical metrics to R-Faster RCNN because UDPT is a three-stage algorithm whose third-stage model is identical to R-Faster RCNN and is exclusively used for inference. IDAA introduces an imbalanced prediction consistency loss on top of DAF, resulting in identical parameter counts. The number of floating-point operations for IDAA, UDPT, DAF, and R-Faster RCNN is the same because we remove part of the network and the loss calculation during inference.
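Efficiency figures of the kind reported in Table II can be measured generically for any PyTorch model, as sketched below on a toy convolutional network; the block is not the CroMoDa architecture, and FLOP counting, which typically relies on external profilers, is omitted.

```python
import time
import torch
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of learnable parameters (the 'Params' column of a Table II-style comparison)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def measure_fps(model: nn.Module, input_shape=(1, 3, 1024, 1024), runs: int = 20) -> float:
    """Average frames per second over `runs` forward passes on dummy input."""
    model.eval()
    x = torch.randn(*input_shape)
    model(x)                          # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return runs / (time.perf_counter() - start)

if __name__ == "__main__":
    # toy stand-in for a detector backbone; not the CroMoDa architecture
    toy = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5),
    )
    print(f"params: {count_parameters(toy):,}")
    print(f"fps   : {measure_fps(toy, runs=5):.1f}")
```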
Fig. 7 displays the precision-recall (PR) curves of all the considered methods, with Fig. 7(a) and (b) corresponding to the results on the SSDD and RSDD datasets, respectively. The area under the PR curve, enclosed by the curve and the \(x\)-axis, represents the AP value and provides an intuitive indication of performance: the closer the curve is to the upper right corner of the coordinate axes, the better the detection performance. It can be observed that our curve is positioned above almost all the others, fully demonstrating the superiority of our method. Our method's curve is almost a horizontal line in the upper left corner, while the others show a steep decline followed by a rise. This indicates that the predicted boxes of our method with high confidence are very accurate, whereas the other methods tend to predict backgrounds as ships and assign high confidence to these incorrect predictions.

In terms of the qualitative experiments, Fig. 8 shows the visualized detection results of our method compared with the other SOTA methods on the SSDD dataset, including both nearshore and offshore scenes. In the nearshore scene, although DAF, IIOD, and IDAA can correctly detect all the ships, they inevitably produce a large number of false alarms due to the presence of many areas on land similar to ships. In addition, while HTCN, SWDA, and UDPT provide predicted boxes for all ship areas, some of them have a small IoU with respect to the ground truths and are deemed false alarms. Compared with the others, our method successfully predicts all the ships and suppresses the occurrence of false alarms, achieving the best detection result. In the offshore scene, strong sidelobe interference creates a significant difference in the appearance of some ships, leading to false negatives for DAF, HTCN, IIOD, and UDPT. Although SWDA successfully detects ships with sidelobe interference, its performance is affected by sea clutter, resulting in false alarms. Our method benefits from the use of despeckling on low-level features, which filters out a significant amount of sea clutter, weakens the impact of sidelobe interference on detection, and outperforms the others.

Fig. 8: Visualized detection results of 1) nearshore scene and 2) offshore scene selected from SSDD test images. (a1) and (a2) DAF. (b1) and (b2) HTCN. (c1) and (c2) SWDA. (d1) and (d2) IIOD. (e1) and (e2) IDAA. (f1) and (f2) UDPT. (g1) and (g2) CroMoDa. (h1) and (h2) Ground truth. TP, FP, and FN are annotated in different colors.

Fig. 9 shows the results for the RSDD dataset. In Scene 1, DAF, HTCN, SWDA, and IIOD all produce a large number of false alarms, and DAF's predicted boxes almost cover the entire land areas on the left and right of the image. In addition, DAF, HTCN, IDAA, and UDPT all fail to predict the ship located at the bottom of the scene. Although DAF, HTCN, and UDPT can correctly identify the locations of the ships, inaccuracies in the size and orientation of their predicted boxes lead to false negatives. In Scene 2, DAF, HTCN, SWDA, IIOD, and IDAA, to some degree, predict sea clutter as ships. Due to the influence of land, the predicted boxes from DAF, HTCN, IIOD, IDAA, and UDPT deviate from the ground truths, leading to false negatives. In contrast, our method achieves the best detection performance in both scenes.

Fig. 10 illustrates the pseudo-labels generated during the training process by our proposed method and UDPT. Fig. 10(a), (b) and (c), (d) represent nearshore and offshore scenes, respectively. It can be observed that the proposed method generates high-quality pseudo-labels in both scenes. This is attributed to the fact that we incorporate image-level filtering on top of instance-level filtering, which helps to avoid the model being misguided by noisy labels during the early stages of training. In addition, the introduced pseudo-label bank helps to mitigate the cumulative errors caused by noisy pseudo-labels during the training process.
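The two-stage filtering mentioned above can be sketched as follows. The thresholds \(\theta_{\rm ins}=0.5\) and \(\theta_{\rm img}=0.97\) are the values given in Section IV-A2, but the way the image-level confidence is computed here (the mean score of the kept instances) is only an illustrative assumption, since the exact definition is not reproduced in this excerpt.

```python
import numpy as np

THETA_INS = 0.5    # instance-level confidence threshold (from Sec. IV-A2)
THETA_IMG = 0.97   # image-level confidence threshold (from Sec. IV-A2)

def filter_pseudo_labels(boxes, scores):
    """Two-stage pseudo-label filtering for one SAR image.

    boxes  : (N, 5) oriented boxes (cx, cy, w, h, angle) predicted by the detector
    scores : (N,) classification confidences
    Returns the kept boxes, or None if the whole image is rejected.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)

    # stage 1: instance-level filtering
    keep = scores > THETA_INS
    if not keep.any():
        return None
    kept_boxes, kept_scores = boxes[keep], scores[keep]

    # stage 2: image-level filtering -- the image confidence is assumed here to
    # be the mean score of the kept instances (illustrative choice)
    image_confidence = kept_scores.mean()
    if image_confidence <= THETA_IMG:
        return None
    return kept_boxes

if __name__ == "__main__":
    demo_boxes = [[100, 120, 30, 10, 0.3], [400, 80, 25, 9, 1.2], [55, 60, 20, 8, 0.0]]
    demo_scores = [0.99, 0.98, 0.40]
    print(filter_pseudo_labels(demo_boxes, demo_scores))
```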
#### IV-B2 Ablation Study

Tables III and IV correspond to the results of the ablation studies on the two dataset configurations, respectively. In the tables, "Baseline" refers to R-Faster RCNN with image-level feature alignment. It can be observed that each component of our method contributes to enhancing the network's performance, and their combined effects enable the network to achieve the best result. By adding \(L_{\text{desp}}\), we observe an increase in both precision and recall values on the two datasets. This improvement is attributed to \(L_{\text{desp}}\) reducing background noise and suppressing sidelobe interference, which decreases false alarms and enhances the detection rate of SAR ships. The pseudo-label self-training technique significantly boosts accuracy because it treats a large number of low-confidence ships as background, with their confidence levels gradually approaching zero during training. Cross-modality object alignment, by aligning the features of the ship regions, makes their features in optical and SAR images more similar, effectively improving the network's accuracy and recall rates.

Fig. 9: Visualized detection results of two nearshore scenes selected from RSDD test images. (a1) and (a2) DAF. (b1) and (b2) HTCN. (c1) and (c2) SWDA. (d1) and (d2) IIOD. (e1) and (e2) IDAA. (f1) and (f2) UDPT. (g1) and (g2) CroMoDa. (h1) and (h2) Ground truth. TP, FP, and FN are annotated in different colors.

_Effect of \(L_{\text{desp}}\):_ In Table V, we investigate the detection performance when \(L_{\text{desp}}\) is imposed at different stages, i.e., image despeckling, low-level feature despeckling, and high-level feature despeckling. Moreover, the detection result without using \(L_{\text{desp}}\) is also reported. It can be observed that \(L_{\text{desp}}\) applied to low-level features achieves the best detection accuracy compared with the other options. Since the speckle effect is an image-level characteristic of SAR images, the high-level features extracted by the backbone are mainly semantic and are less affected by speckle. Thus, \(L_{\text{desp}}\) imposed on high-level features does not exhibit any performance improvement. Furthermore, Fig. 11 demonstrates two detection results and the associated low-level features without and with \(L_{\text{desp}}\), where Fig. 11(a) and (b) correspond to the effects of sea clutter and sidelobe interference, respectively. On the left side of the subfigures, it can be observed that both sea clutter and sidelobe interference lead to considerable noise over the background areas of the low-level features. This results in false alarms in the detection results, and the predicted boxes for ships also exhibit deviations. In comparison, minimizing \(L_{\text{desp}}\) on low-level features leads to much smoother background features, so that the ship responses dominate the feature maps. In return, the locations of ships can be more precisely predicted.
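The exact form of \(L_{\rm desp}\) is not given in this excerpt; as a hedged illustration, the sketch below applies a simple total-variation-style smoothness penalty to a low-level feature map, under the assumption that the despeckling loss penalizes noisy, high-frequency background responses.

```python
import numpy as np

def tv_despeckle_loss(feature_map: np.ndarray) -> float:
    """Total-variation-style smoothness penalty on a (C, H, W) feature map.

    This is an illustrative stand-in for L_desp: it penalizes high-frequency
    (speckle-like) variation by summing absolute differences between
    neighboring positions along both spatial axes.
    """
    dh = np.abs(np.diff(feature_map, axis=1))   # vertical differences
    dw = np.abs(np.diff(feature_map, axis=2))   # horizontal differences
    return float(dh.mean() + dw.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.ones((8, 64, 64))                        # clean, flat background
    speckled = smooth + 0.5 * rng.standard_normal(smooth.shape)
    print(f"loss on smooth features  : {tv_despeckle_loss(smooth):.4f}")
    print(f"loss on speckled features: {tv_despeckle_loss(speckled):.4f}")
```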
_Effect of object alignment:_ In order to analyze the effect of the proposed object alignment method, we compare the instance-level similarities for the inter- and intra-modality cases with respect to the method proposed in SWDA. Fig. 12(a) illustrates the instance similarities across the DOTA dataset and the SSDD dataset, while Fig. 12(b) shows the self-similarity of instance features within the SSDD dataset. Likewise, Fig. 12(c) and (d) presents the same results for the RSDD dataset. It can be observed from the results that our method increases the similarity between instance features in optical images and SAR images compared with SWDA. This improvement is attributed to our proposed cross-modality object alignment strategy, which aligns the ship features, thereby narrowing the gap in feature distribution between ship areas in optical and SAR images. Moreover, according to Fig. 12(b) and (d), our method increases the instance similarity among ships in SAR images, which is due to the use of the pseudo-label self-training strategy during the training process. It enables the network to learn more distinctive features specific to SAR ships.

Fig. 10: Visualization results of pseudo-labels. (a) and (c) UDPT. (b) and (d) Ours.

_Effect of \(\mathbb{B}\):_ Fig. 13 demonstrates the impact of the proposed pseudo-label bank on the network's performance on the test set of DOTA \(\rightarrow\) SSDD. In addition, we compare the performance with two other cases: 1) without using a bank, denoted as w/o Bank; and 2) Bank updating. Compared with the proposed strategy, Bank updating refers to the strategy in which \(\mathbb{B}\) is updated in each iteration. By comparing ours with w/o Bank, we find that using a pseudo-label bank in self-training can accelerate the network's convergence and significantly improve the network's precision. This improvement arises because w/o Bank directly uses the pseudo-labels generated in the current batch for network training. However, the adversarial training of the network is usually unstable: images that can generate pseudo-labels in one training iteration may not be able to produce suitable pseudo-labels in the next, resulting in fewer pseudo-labels being involved in training. By contrast, our pseudo-label bank can store suitable pseudo-labels, ensuring that training can proceed with these labels even when new suitable pseudo-labels cannot be generated. It can also be seen that the performance of w/o Bank continues to decline during the later stages of training. A possible reason is that w/o Bank directly uses pseudo-labels generated in the current batch to update the network, and the next batch uses the updated network to generate pseudo-labels for training. In cases where the pseudo-labels contain noise, the error may gradually amplify as training progresses, leading to a continuous decline in network performance [58]. To verify this speculation, we compare Bank updating with our method. It can be observed that in the early stages of training, the performance of Bank updating is almost identical to that of our method. However, in the middle and later stages, as the pseudo-labels are updated, the errors in the pseudo-labels gradually lead to a decline in performance. Its learning curve falls between ours and w/o Bank, indicating that while Bank updating helps mitigate some of the instability, it still suffers from the accumulation of errors over time, unlike our method, which maintains higher stability and accuracy throughout the training process.
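A minimal sketch of the write-once pseudo-label bank, together with the "Bank updating" alternative compared in Fig. 13, is given below. The store-once interface follows the description above, while the stored payload and method names are simplified assumptions.

```python
class PseudoLabelBank:
    """Write-once pseudo-label bank B: labels are stored the first time an
    image passes the filtering and are never overwritten afterwards."""

    def __init__(self):
        self._bank = {}

    def maybe_store(self, image_id, boxes, labels):
        if image_id not in self._bank:           # keep the first accepted labels
            self._bank[image_id] = (boxes, labels)

    def get(self, image_id):
        return self._bank.get(image_id)          # None if the image has no entry

    def __len__(self):
        return len(self._bank)


class UpdatingBank(PseudoLabelBank):
    """'Bank updating' variant from Fig. 13: entries are refreshed every iteration,
    which is simpler but lets pseudo-label noise accumulate over training."""

    def maybe_store(self, image_id, boxes, labels):
        self._bank[image_id] = (boxes, labels)   # always overwrite


if __name__ == "__main__":
    bank = PseudoLabelBank()
    bank.maybe_store("sar_001", boxes=[[10, 20, 30, 8, 0.1]], labels=["ship"])
    bank.maybe_store("sar_001", boxes=[[99, 99, 1, 1, 0.0]], labels=["ship"])
    print(len(bank), bank.get("sar_001")[0])     # the first entry is preserved
```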
#### IV-B3 Discussion

As mentioned earlier, we conduct extensive experiments from different perspectives to verify the effectiveness of CroMoDa. Many existing cross-modality object detection methods have shown excellent performance on data within the same modality, but they generally perform poorly on the highly challenging task of cross-modality transfer from optical to SAR. Compared with optical images, SAR images contain speckle noise and sidelobe interference, which can affect the intensity consistency of ship objects, resulting in a significant number of missed detections. In addition, the speckle noise in land areas resembles ship objects in appearance, leading to numerous false alarms. To the best of our knowledge, current cross-modality object detection algorithms have not considered the impact of noise on cross-modality results. Therefore, we add a denoising loss to the backbone network to mitigate the influence of noise on features and conduct experiments to evaluate the effect of adding the denoising loss at different stages of the backbone network. Based on the obtained results, we believe that using the denoising loss on low-level features is optimal. A reasonable explanation is that speckle noise is a local characteristic of the underlying image, and denoising at lower levels can prevent interference with higher level features. Moreover, high-level features represent global semantic features, and using the denoising loss on them can weaken the global semantic information.

Fig. 11: Impact of \(L_{\text{desp}}\) on the detection results and low-level features, where (a) and (b) correspond to the effects of sea clutter and sidelobe interference.

Fig. 12: Ship feature similarity distributions of (a) DOTA \(\leftrightarrow\) SSDD, (b) SSDD \(\leftrightarrow\) SSDD, (c) DOTA \(\leftrightarrow\) RSDD, and (d) RSDD \(\leftrightarrow\) RSDD.

Fig. 13: Impact of applying different pseudo-label bank strategies on the network's detection performance during self-training.

Most current cross-modality object detection methods incorporate adversarial training in the model to learn modality-invariant feature representations, known as transferability. However, achieving transferability comes at a cost: adversarial training may compromise the discriminability of the features. Discriminability refers to the model's ability to locate and distinguish different instances [26]. In the cross-modality ship detection task from optical to SAR, this manifests as a reduced ability of the model to differentiate between ships and the background, as well as decreased localization accuracy. We introduce pseudo-label self-training into the model, generating pseudo-labels for SAR images to perform supervised training, thereby enhancing the discriminability of features on SAR images. Furthermore, we find in the experiments that the quality of the pseudo-labels directly affects the model's performance. Simple instance-level filtering can introduce a large number of noisy labels, whereas adding image-level filtering can remove noisy pseudo-labels. In addition, generating pseudo-labels based on classification scores does not guarantee localization accuracy, leading to localization errors. Current pseudo-label self-training methods update pseudo-labels during training, causing cumulative errors as the model iterates. We introduce a pseudo-label bank to avoid the cumulative errors caused by multiple updates of pseudo-labels.

## V Conclusion

In this article, we propose a cross-modality SAR ship detection method that applies the knowledge learned from labeled optical images in an unsupervised manner. Experimental results on two optical-SAR dataset configurations demonstrate that our method significantly outperforms the other SOTA methods in detection accuracy, with mAP improvements of around 9% and 14% over the second-best method, proving its effectiveness and superiority in cross-modality SAR ship detection. In the future, we plan to explore lightweight cross-modality SAR ship detection methods to achieve real-time applications.

## References

* [1] J. Li, C. Qu, and J. Shao, "Ship detection in SAR images based on an improved faster R-CNN," in _Proc.
SAR Big Data Era: Models, Methods Appl._, 2017, pp. 1-6. * [2] Z. Cui, Q. Li, Z. Cao, and N. Liu, \"Dense attention pyramid networks for multi-scale ship detection in SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 11, pp. 8983-8997, Nov. 2019. * [3] T. Zhang, L. Jiang, D. Xiang, Y. Ban, L. Pei, and H. Xiong, \"Ship detection from PoiSAR imagery using the ambiguity removal polarimetric notch filter,\" _ISPRS J. Photogrammetry Remote Sens._, vol. 157, pp. 41-58, 2019. * [4] Y. Zhao, L. Zhao, B. Xiong, and G. Kuang, \"Attention receptive pyramid network for ship detection in SAR images,\" _IEEEJ. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 13, pp. 2738-2756, 2020. * [5] J. Kang, Z. Wang, R. Zhu, and X. Sun, \"Supervised contrastive learning regularized high-resolution synthetic aperture radar building footprint generation,\" _J. Radars_, vol. 11, pp. 157-167, 2022. * [6] G. Gao, L. Liu, L. Zhao, G. Shi, and G. Kuang, \"An adaptive and fast CFAR algorithm based automatic censoring for target detection in high-resolution SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 47, no. 6, pp. 1685-1697, Jun. 2009. * [7] X. Wang, G. Li, X.-P. Zhang, and Y. He, \"A fast CFAR algorithm based on density-censoring operation for ship detection in SAR images,\" _IEEE Signal Process. Lett._, vol. 28, pp. 1085-1089, 2021. * [8] H. Yang et al., \"GPU-oriented designs of constant false alarm rate detectors for fast target detection in radar images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5231214. * [9] D. Charalampidis and G. W. Stein, \"Target detection based on multiresolution fractal analysis,\" _Proc. SPIE_, vol. 6567, 2007, Art. no. 65671B. * [10] W. Liu et al., \"SSD: Single shot multibox detector,\" in _Proc. 14th Eur. Conf. Comput. Vis._, 2016, pp. 21-37. * [11] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, \"You only look once: Unified, real-time object detection,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2016, pp. 779-788. * [12] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, \"Focal loss for dense object detection,\" in _Proc. IEEE Int. Conf. Comput. Vis._, 2017, pp. 2999-3007. * [13] M. Tan, R. Pang, and Q. V. Le, \"EfficientDet: Scalable and efficient object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2020, pp. 10781-10790. * [14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, \"Rich feature hierarchies for accurate object detection and semantic segmentation,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2014, pp. 580-587. * [15] R. Girshick, \"Fast R-CNN,\" in _Proc. IEEE Int. Conf. Comput. Vis._, 2015, pp. 1440-1448. * [16] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster R-CNN: Towards real-time object detection with region proposal networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 39, no. 6, pp. 1137-1149, Jun. 2017. * [17] Z. Cai and N. Vasconcelos, \"Cascade R-CNN: Delving into high quality object detection,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 6154-6162. * [18] K. He, X. Zhang, S. Ren, and J. Sun, \"Spatial pyramid pooling in deep convolutional networks for visual recognition,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 37, no. 9, pp. 1904-1916, Sep. 2015. * [19] M. Wang and W. Deng, \"Deep visual domain adaptation: A survey,\" _Neurocomputing_, vol. 312, pp. 135-153, 2018. * [20] Y. Sun et al., \"Multisensor fusion and explicit semantic preserving-based deep hashing for cross-modal remote sensing image retrieval,\" _IEEE Trans. 
Geosci. Remote Sens._, vol. 60, 2021, Art. no. 5216164. * [21] Y. Sun et al., \"Consistency center-based deep cross-modal hashing for multi-source remote sensing image retrieval,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5217616. * [22] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool, \"Domain adaptive faster R-CNN for object detection in the wild,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 3339-3348. * [23] K. Saito, Y. Ushiku, T. Harada, and K. Saenko, \"Strong-weak distribution alignment for adaptive object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2019, pp. 6949-6958. * [24] S. Kim, J. Choi, T. Kim, and C. Kim, \"Self-training and adversarial background regularization for unsupervised domain adaptive one-stage object detection,\" in _Proc. IEEE/CVF Int. Conf. Comput. Vis._, 2019, pp. 6092-6101. * [25] Z. He and L. Zhang, \"Multi-adversarial faster-RCNN for unrestricted object detection,\" in _Proc. IEEE/CVF Int. Conf. Comput. Vis._, 2019, pp. 6668-6677. * [26] C. Chen, Z. Zheng, X. Ding, Y. Huang, and Q. Dou, \"Harmonizing transferability and discriminability for adapting object detectors,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2020, pp. 8866-8875. * [27] A. Wu, Y. Han, L. Zhu, and Y. Yang, \"Instance-invariant domain adaptive object detection via progressive disentanglement,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 44, no. 8, pp. 4178-4193, Aug. 2022. * [28] S. Zhao, Z. Zhang, W. Guo, and Y. Luo, \"An automatic ship detection method adapting to different satellites SAR images with feature alignment and compensation loss,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5225217. * [29] C. Xu, X. Zheng, and X. Lu, \"Multi-level alignment network for cross-domain ship detection,\" _Remote Sens._, vol. 14, no. 10, 2022, Art. no. 2389. * [30] Y. Shi, L. Du, Y. Guo, and Y. Du, \"Unsupervised domain adaptation based on progressive transfer for ship detection: From optical to SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5230317. * [31] B. Pan, Z. Xu, T. Shi, T. Li, and Z. Shi, \"An imbalanced discriminant alignment approach for domain adaptive SAR ship detection,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5108111. * [32] X. Yang, X. Zhang, N. Wang, and X. Gao, \"A robust one-stage detector for multiscale ship detection with complex background in massive SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2021, Art. no. 5217712. * [33] Y. Niu, Y. Li, J. Huang, and Y. Chen, \"Efficient encoder-decoder network with estimated direction for SAR ship detection,\" _IEEE Geosci. Remote Sens. Lett._, vol. 19, 2022, Art. no. 4504405. * [34] M. Zhao, X. Zhang, and A. Kauy, \"Multitask learning for SAR ship detection with Gaussian-mask joint segmentation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5214516. * [35] L. Zhang, Y. Liu, W. Zhao, X. Wang, G. Li, and Y. He, \"Frequency-adaptive learning for SAR ship detection in clutter scenes,\" _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5215514. * [36] J. Wang, Z. Cui, T. Jiang, C. Cao, and Z. Cao, \"Lightweight deep neural networks for ship target detection in SAR imagery,\" _IEEE Trans. Image Process._, vol. 32, pp. 565-579, 2023. * [37] X. Zhang, S. Feng, C. Zhao, Z. Sun, S. Zhang, and K. Ji, \"MGSA-Net: Multi-scale global scattering feature association network for SAR ship target recognition,\" _IEEE J. Sel. Topics Appl. Earth Observ. 
Remote Sens._, vol. 17, pp. 4611-4625, 2024. * [38] C. Chen, C. He, C. Hu, H. Pei, and L. Jiao, \"MSARN: A deep neural network based on an adaptive recalibration mechanism for multi-scale and arbitrary-oriented SAR ship detection,\" _IEEE Access_, vol. 7, pp. 159262-159283, 2019. * [39] Z. Pan, R. Yang, and Z. Zhang, \"MSR2N: Multi-stage rotational region based network for arbitrary-oriented ship detection in SAR images,\" _Sensors_, vol. 20, no. 8, 2020, Art. no. 2340. * [40] X. Yang, Q. Zhang, Q. Dong, Z. Han, X. Luo, and D. Wei, \"Ship instance segmentation based on rotated bounding boxes for SAR images,\" _Remote Sens._, vol. 15, no. 5, 2023, Art. no. 1324. * [41] H. Jia, X. Pu, Q. Liu, H. Wang, and F. Xu, \"A fast progressive ship detection method for very large full-scene SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 62, 2024, Art. no. 5206615. * [42] Z. Wu, B. Hou, B. Ren, Z. Ren, S. Wang, and L. Jiao, \"A deep detection network based on interaction of instance segmentation and object detection for SAR images,\" _Remote Sens._, vol. 13, no. 13, 2021, Art. no. 2582. * [43] B. Wang, Y. Xu, Z. Wu, T. Zhan, and Z. Wei, \"Spatial-spectral local domain adaption for cross domain few shot hyperspectral images classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5539515. * [44] Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, \"Hyperspectral images super-resolution via learning high-order coupled tensor ring representation,\" _IEEE Trans. Neural Netw. Learn. Syst._, vol. 31, no. 11, pp. 4747-4760, Nov. 2020. * [45] Y. Sun et al., \"Unsupervised deep hashing through learning soft pseudo label for remote sensing image retrieval,\" _Knowl.-Based Syst._, vol. 239, 2022, Art. no. 107807. * [46] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, \"Image-to-image translation with conditional adversarial networks,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2017, pp. 1125-1134. * [47] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, \"Unpaired image-to-image translation using cycle-consistent adversarial networks,\" in _Proc. IEEE Int. Conf. Comput. Vis._, 2017, pp. 2223-2232. * [48] Z. Yi, H. Zhang, P. Tan, and M. Gong, \"DualGAN: Unsupervised dual learning for image-to-image translation,\" in _Proc. IEEE Int. Conf. Comput. Vis._, 2017, pp. 2849-2857. * [49] Y. Cai, B. Zhang, B. Li, T. Chen, H. Yan, and J. Zhang, \"Rethinking cross-domain pedestrian detection: A background-focused distribution alignment framework for instance-free one-stage detectors,\" _IEEE Trans. Image Process._, vol. 32, pp. 4935-4950, 2023. * [50] K. Li et al., \"Improving cross-domain detection with self-supervised learning,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2023, pp. 4745-4754. * [51] W. Zhou, D. Du, L. Zhang, T. Luo, and Y. Wu, \"Multi-granularity alignment domain adaptation for object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2022, pp. 9581-9590. * [52] M. Khodabandeh, A. Vahdat, M. Ranjbar, and W. Macready, \"A robust learning approach to domain adaptive object detection,\" in _Proc. IEEE/CVF Int. Conf. Comput. Vis._, 2019, pp. 480-490. * [53] Y.-J. Li et al., \"Cross-domain adaptive teacher for object detection,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2022, pp. 7571-7580. * [54] A. RoyChowdhury et al., \"Automatic adaptation of object detectors to new domains using self-training,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2019, pp. 780-790. * [55] G.-S. 
Xia et al., \"DOTA: A large-scale dataset for object detection in aerial images,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2018, pp. 3974-3983. * [56] X. U. Congan et al., \"RSDD-SAR: Rotated ship detection dataset in SAR images,\" _J. Radars_, vol. 11, pp. 581-599, 2022. * [57] Y. Zhou et al., \"MMRotate: A rotated object detection benchmark using PyTorch,\" in _Proc. 30th ACM Int. Conf. Multimedia_, 2022, pp. 7331-7334. * [58] C. Chen et al., \"Progressive feature alignment for unsupervised domain adaptation,\" in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit._, 2019, pp. 627-636. \\begin{tabular}{c c} & Xi Chen received the B.E. degree in communication engineering from Anhui Normal University, Wuhu, China, in 2022. He is currently working toward the M.E. degree with the School of Electronic and Information Engineering, Soochow University, Suzhou, China. His research interests include cross-domain object detection in remote sensing images based on deep learning techniques. \\\\ \\end{tabular} \\begin{tabular}{c c} & Zhirui Wang received the B.Sc. degree from the Harbin Institute of Technology, Harbin, China, in 2013, and the Ph.D. degree from Tsinghua University, Beijing, China, in 2018. He is currently an Assistant Researcher with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing. His research interests include the intelligent interpretation of remote sensing images, such as terrain classification, and target detection and recognition. \\\\ \\end{tabular} \\begin{tabular}{c c} & Wenhao Wang received the B.E. degree in electronic information engineering from the Jinling University of Science and Technology, Nanjing, China, in 2023. He is currently working toward the M.E. degree in electronic information engineering with the School of Electronic Information, Soochow University, Suzhou, China. His research interests include deep-learning-based remote sensing image object detection. \\\\ \\end{tabular} Xinyi Xie is working toward the bachelor's degree with the College of Information Engineering, Zhejiang University of Technology, Hangzhou, China. Her research interests include deep-learning-based multimodal target detection. Jian Kang (Member, IEEE) received the B.S. and M.E. degrees in electronic engineering from the Harbin Institute of Technology, Harbin, China, in 2013 and 2015, respectively, and the Dr.-Ing. degree from the Signal Processing in Earth Observation Group, Technical University of Munich, Munich, Germany, in 2019. In 2018, he was a Guest Researcher with the Institute of Computer Graphics and Vision, TU Graz, Graz, Austria. From 2019 to 2020, he was with the Faculty of Electrical Engineering and Computer Science, Technische Universitat Berlin, Berlin, Germany. He is currently with the School of Electronic and Information Engineering, Soochow University, Suzhou, China. His research interests include signal processing and machine learning techniques, and their applications in remote sensing, as well as intelligent synthetic aperture radar (SAR)/interferometric SAR data processing, and deep-learning-based techniques for remote sensing image analysis. Dr. Kang was the recipient of the Best Student Paper Award at 2018 European Conference on Synthetic Aperture Radar, Aachen, Germany. His joint work was selected as one of the ten Student Paper Competition Finalists at 2020 IEEE International Geoscience and Remote Sensing Symposium. He was selected as one of the 2022 IEEE Geoscience and Remote Sensing Letters Best Reviewers. 
He was a Guest Editor for IEEE Journal of Selected Topics in Applied Earth Observations and _Remote Sensing_. Ruben Fernandez-Beltran (Senior Member, IEEE) received the B.Sc. degree in computer science, the M.Sc. degree in intelligent systems, and the Ph.D. degree in computer science from Jaume I University, Castellon de la Plana, Spain, in 2007, 2011, and 2016, respectively. He is currently an Assistant Professor with the Department of Computer Science and Systems, University of Murcia, Murcia, Spain, and a collaborating Member of the Institute of New Imaging Technologies, Jaume I University. He was a visiting Researcher with the University of Bristol, Bristol, U.K.; the University of Caceres, Caceres, Spain; Technische Universitat Berlin, Berlin, Germany; and the Autonomous University of Mexico State, Toluca, Mexico. His research interests include multimedia retrieval, spatiospectral image analysis, pattern recognition techniques applied to image processing, and remote sensing. Dr. Fernandez-Beltran was the recipient of the Outstanding Ph.D. Dissertation Award at Jaume I University in 2017.
Most state-of-the-art synthetic aperture radar (SAR) ship detection methods based on deep learning require large amounts of labeled data for network training. However, the annotation process requires significant manpower and resources, especially for SAR images, since relevant background knowledge is necessary for the annotators. Considering the available optical imagery datasets with labels, we propose an unsupervised oriented SAR ship detection method based on cross-modality distribution alignment, termed CroMoDa. It consists of four components: 1) image-level feature alignment; 2) low-level feature despeckling; 3) cross-modality pseudo-label self-training; and 4) cross-modality object alignment. By aligning the multilevel feature distributions, modality-invariant features across the two imagery modalities can be learned. Considering speckle noise and other interferences in SAR images, the proposed loss term applied to low-level features can enhance the object information and improve the detection accuracy. Moreover, the proposed pseudo-label self-training can generate oriented SAR ship annotations better than the other methods, which facilitates learning more discriminative features for SAR ship instances. On two optical-SAR dataset configurations, the proposed method is evaluated against other state-of-the-art methods, which demonstrates its great potential for SAR ship detection in real applications. Cross modality, synthetic aperture radar (SAR), synthetic aperture radar ship detection, transfer learning, unsupervised object detection.
# Demonstration of Single-Pass Millimeterwave SAR Tomography for Forest Volumes

Michael Schmitt and Xiao Xiang Zhu

Manuscript received August 7, 2015; revised October 13, 2015 and November 17, 2015; accepted December 2, 2015. Date of publication December 29, 2015; date of current version January 19, 2016. This work was supported by the Helmholtz Association under the framework of the Young Investigators Group "SiPEO" (VH-NG-1018, www.sipeo.bgu.tum.de). The authors are with Signal Processing in Earth Observation, Technical University of Munich, 80333 Munich, Germany, and also with the Remote Sensing Technology Institute, German Aerospace Center (DLR), 82234 Wessling, Germany (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LGRS.2015.2506150

## I Introduction

Since the first practical demonstration of synthetic aperture radar (SAR) tomography (TomoSAR), the volumetric analysis of forested areas by this technique has been an important research topic [1]. In this context, most of the literature has focused on long-wavelength radar, such as L- or P-band [2, 3, 4]. Only few experiments have investigated the potential of shorter wavelength SAR using X-band sensors [5]. However, recently, a TomoSAR inversion method aiming at the reconstruction of discrete scattering profiles [6] has been proposed, which has already been used to generate detailed 3-D point clouds of forested areas using Ka-band data with a wavelength in the millimeterwave domain [7]. Based on these point clouds, even the 3-D reconstruction of individual trees could be demonstrated [8]. The results of these studies indicated that millimeterwave SAR provides the advantage of showing almost no canopy penetration and therefore providing accurate height estimates almost comparable to LiDAR remote sensing. In contrast, it is still an open question whether millimeterwave signals do provide any canopy penetration at all, and whether they could potentially be employed for a TomoSAR analysis of the whole forest volume. This letter provides the first-ever demonstration of volume SAR tomography using airborne multiantenna millimeterwave SAR data.

## II Millimeterwave SAR Characteristics

Since millimeterwave SAR systems are not as common in the remote sensing community as, e.g., X- or L-band systems, the characteristics of this particular microwave domain are shortly sketched here. In this context, a German experimental airborne millimeterwave interferometric SAR (InSAR) sensor is used for demonstration. The peculiarities of millimeterwave SAR have already been discussed in, e.g., [9] or [10]. However, a short recapitulation with respect to very high resolution InSAR and TomoSAR applications certainly is within the scope of this letter.

### _Some Millimeterwave Peculiarities_

Typical wavelengths of millimeterwave frequencies differ from the common radar bands (L, C, X) by about one order of magnitude. This leads to two main advantages of millimeterwave systems. First, they enable a significant miniaturization of the sensor hardware, which makes them particularly feasible for use on unmanned aerial vehicles (UAVs). Second, it is possible to achieve very high resolutions with comparably short synthetic apertures.
This eventually means that images of vegetated areas can be well focused because the blurring effect caused by wind-induced movements of leaves, branches, etc., is reduced. This already provides a significant benefit when it comes to a detailed analysis of forested areas aiming at the single-tree level. Concerning the signal propagation through the atmosphere, Skolnik [11] has already discussed that millimeterwaves can provide an interesting alternative to X-band sensors. For example, Danklmayer and Chandra [12] have shown that Ka-band imaging capabilities are available more than 95% of the time, even in rain-prone regions of the world, although attenuation caused by precipitation is, of course, significantly less for longer wavelength radars. In general, rough surfaces cause diffuse scattering, whereas smooth surfaces result in specular reflections. At millimeterwave frequencies, most surfaces appear rough, and diffuse scattering dominates the images, leading to coherent averaging within the resolution cells. Since this is an effect similar to multilook processing, the inherent speckle effect appears less severe than in the common radar bands. In addition, the high sensitivity with respect to surface roughness certainly provides a benefit when analysis techniques based on distributed scatterers rather than point scatterers are used. In the context of InSAR processing, one of the main differences with respect to longer wavelengths is the different amount of volume penetration. While L- or P-band SAR is expected to penetrate most (vegetation) volumes down to the ground, the X- or C-band is usually expected to exhibit phase centers somewhere within the volume. In the millimeterwave domain (Ka-band to W-band), in contrast, canopy penetration is expected to be much less likely. For tomographic inversion, however, the question arises whether millimeterwaves can be used for volume analysis at all. A first answer to this question based on real-data experiments is the main scope of this letter.

### _Experimental System Description_

For the demonstrations in this letter, data acquired during a 2013 campaign of the German experimental sensor MEMPHIS are analyzed. The sensor was developed by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR) in 1998 [13]. Although it can operate both in the Ka-band (35 GHz) and in the W-band (94 GHz) and offers a fully polarimetric configuration, only the HH data of the Ka-band system are considered for the investigations presented in this letter. In this interferometric configuration, MEMPHIS provides four receiving antennas, thus being a multibaseline sensor with an overall baseline span (or elevation aperture) of 27.5 cm, which leads to a Rayleigh resolution of \(\rho_{\eta}\approx 42\) m in the elevation direction. The relevant system parameters can be found in Table I.

## III Single-Pass Millimeterwave SAR Tomography

### _TomoSAR Inversion for Discrete and Continuous Reflectivity Profiles_

For the sake of simplicity, two of the best known and most simple to implement spectral analysis techniques are used for the evaluation of the millimeterwave test data. MUltiple SIgnal Classification (MUSIC) is a parametric spatial frequency estimator for signals affected by white noise with superresolution capabilities [14]. It is specifically designed for point scatterers, i.e., for discrete scattering profiles.
Being a subspace-based technique, it aims at the separation of the signal and noise subspaces by eigendecomposition of the covariance matrix of the investigated resolution cell, as follows:

\[\mathbf{C}=[\mathbf{E}_{s}\quad\mathbf{E}_{n}]\begin{bmatrix}\mathbf{D}_{s}&\mathbf{0}\\ \mathbf{0}&\mathbf{D}_{n}\end{bmatrix}\begin{bmatrix}\mathbf{E}_{s}^{H}\\ \mathbf{E}_{n}^{H}\end{bmatrix} \tag{1}\]

where \(\mathbf{D}=\begin{bmatrix}\mathbf{D}_{s}&\mathbf{0}\\ \mathbf{0}&\mathbf{D}_{n}\end{bmatrix}\) is the matrix containing the eigenvalues of \(\mathbf{C}\) in descending order, and \(\mathbf{E}=[\mathbf{E}_{s}\quad\mathbf{E}_{n}]\) is the matrix containing the corresponding eigenvectors. For TomoSAR inversion, then, the so-called MUSIC pseudospectrum

\[P_{\mathrm{MUSIC}}(\eta)=\frac{1}{\mathbf{a}^{H}(\eta)\mathbf{E}_{n}\mathbf{E}_{n}^{H}\mathbf{a}(\eta)} \tag{2}\]

is calculated, where \(\mathbf{a}(\eta)\) is the steering vector corresponding to a scattering contribution expected at elevation \(\eta\). Although only the noise-related eigenvectors are used for spectral analysis, it has been shown that MUSIC exhibits significant superresolution capabilities and generally provides better results than classic beamforming or adaptive beamforming using Capon's method [15]. Nevertheless, the Capon estimator [16] is used as an example of a nonparametric estimator aiming at continuous reflectivity profiles. In contrast to MUSIC, it is not based on eigenvector analysis. Instead, it uses the inverse of the covariance matrix in order to weight the individual elevations adaptively according to their estimated power. The Capon spectrum is therefore simply calculated by

\[P_{\mathrm{CAPON}}(\eta)=\frac{1}{\mathbf{a}^{H}(\eta)\mathbf{C}^{-1}\mathbf{a}(\eta)}. \tag{3}\]

A schematic sketch of the TomoSAR configuration for forested areas, comparing both the continuous and the discrete reflectivity hypotheses, is presented in Fig. 1.

Fig. 1: Sketch of the TomoSAR acquisition geometry for forested areas. If the continuous reflectivity hypothesis is used, the whole reflectivity profile of the resolution cell is reconstructed and will probably show stronger reflectivities at or inside tree structures. In contrast, if the discrete reflectivity hypothesis is used, then only discrete scattering contributions at the sensor-facing tree structures are expected.

In the remainder of this letter, the amplitudes of the (pseudo)reflectivities are shown in decibels, for display purposes, as follows:

\[\hat{P}(\eta)\,[\text{dB}]=10\cdot\log_{10}\left|P(\eta)\right|. \tag{4}\]
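The estimators in (1)-(3) translate directly into a few lines of NumPy, as sketched below for a four-antenna configuration. The linear steering model, the vertical wavenumbers, and the simulated data are illustrative assumptions and do not reproduce the actual MEMPHIS geometry or calibration.

```python
import numpy as np

def steering_vector(eta, kz):
    """Steering vector a(eta) for elevation eta and vertical wavenumbers kz (N,)."""
    return np.exp(1j * kz * eta)

def music_pseudospectrum(C, kz, etas, K):
    """MUSIC pseudospectrum (2): K signal eigenvectors, N-K noise eigenvectors."""
    w, E = np.linalg.eigh(C)              # eigenvalues in ascending order
    En = E[:, : C.shape[0] - K]           # noise subspace (smallest eigenvalues)
    P = []
    for eta in etas:
        a = steering_vector(eta, kz)
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

def capon_spectrum(C, kz, etas, load=1e-3):
    """Capon spectrum (3); small diagonal loading keeps the inverse stable."""
    Cinv = np.linalg.inv(C + load * np.trace(C) / C.shape[0] * np.eye(C.shape[0]))
    return np.array([1.0 / np.real(steering_vector(e, kz).conj() @ Cinv
                                   @ steering_vector(e, kz)) for e in etas])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    kz = np.array([0.0, 0.05, 0.10, 0.15])          # assumed vertical wavenumbers, N = 4
    true_etas, L = [5.0, 25.0], 64                   # two scatterers, 64 looks
    y = sum(steering_vector(e, kz)[:, None] * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
            for e in true_etas) + 0.1 * (rng.standard_normal((4, L)) + 1j * rng.standard_normal((4, L)))
    C = (y @ y.conj().T) / L                         # sample covariance matrix
    etas = np.linspace(-10, 40, 201)
    pm = music_pseudospectrum(C, kz, etas, K=2)
    pc = capon_spectrum(C, kz, etas)
    print("MUSIC peak at eta =", etas[np.argmax(pm)])
    print("Capon peak at eta =", etas[np.argmax(pc)])
```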
### _Test Data_

The available experimental MEMPHIS data were acquired during a campaign over Munich, Germany, in 2013. The test scene contains the "Alter Nordfriedhof," an abandoned cemetery, which is used as a public park today. As shown in Fig. 2(a), it is mainly characterized by a light planting of trees, resembling a grove or little wood. A corresponding SAR intensity image is shown in Fig. 2(b). The model order map displayed in Fig. 2(c) was calculated by the method described in [6]. Details about the test data are listed in Table I. The ground truth used for the following evaluations was acquired during a terrestrial laser scanning (TLS) campaign in July 2014. From three scanning locations, three point clouds with approximately 35 million points each (point density approximately 2500 pts/m\({}^{2}\)) were created and coregistered afterward. Details about the resulting data set are summarized in Table II.

Fig. 2: Test scene. (a) Optical image. (b) SAR intensity image (SAR range direction from left to right). (c) Model order map. The red marking indicates the test profile used in the experimental section: the solid line indicates the tomograms shown in Fig. 3, whereas the dashed line indicates the extension displayed in Fig. 5.

### _Experimental Results_

Fig. 3 shows the tomographic slices corresponding to the solid line in Fig. 2, which were processed with both Capon and MUSIC for different model orders. In addition, the LiDAR ground truth projected into radar geometry (green points) is shown for comparison in Fig. 4. While Fig. 3(a) shows the classical nonparametric Capon tomogram, Fig. 3(b)-(d) shows the MUSIC tomograms for fixed model orders of \(K=\{1,2,3\}\) (i.e., \(K\) eigenvectors of each resolution cell's sample covariance matrix are used for spanning the signal subspace and \(4-K\) for spanning the noise subspace). In order to provide more material for further analysis, a MUSIC tomogram with automatic model order selection [corresponding to the model order map shown in Fig. 2(c)] is additionally provided in Fig. 5. It is extended by some range bins toward the sensor (marked by the dashed line in Fig. 2), discretized, and geocoded as a point cloud, with the points being colorized by the pseudointensities of the respective range-elevation cell. This way, the tomographic data can be overlaid with the LiDAR ground truth in world geometry. In all tomograms, an overall agreement between the forest structure and the tomogram reflectivities can be seen, whereas Fig. 5 also contains the shadowed resolution cells, where no scattering contribution (or no signal-related eigenvalue-eigenvector pair, that is) was detected. From these results, three interesting features can be discovered:

* There are scene parts which should actually be covered by tree canopy but are still part of the tomographic reconstruction (as an example, see the rectangle in Fig. 5).
* Although MUSIC pseudoreflectivities and Capon reflectivities are not directly related, in both results the strongest values are found at the tree crowns facing the sensor. In addition, strong (pseudo)reflectivities are also found within the tree crowns and, for superresolving MUSIC, at the approximate ground level.
* While the model order map in Fig. 2(c) generally looks quite reasonable, with seemingly proper shadow detection and a large part of the scene affected by layover \((K=2)\), higher model orders obviously contain more information, which should not be neglected.

All three phenomena indicate that there needs to be a certain penetration of the canopy by the millimeterwave signals. Particularly noteworthy is the situation at the marked gravestone located in the center of the scene (cf. Fig. 5): Here, quite strong scattering occurs, although the monument is actually in the radar shadow of several trees and bushes in front of it. Obviously, a combination of low leaf density and strong backscattering of the man-made object allows for a certain amount of subcanopy imaging in this case.
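Automatic model order selection in this work follows the maximum-likelihood-based method of [6], which is not reproduced here; purely as an illustration of the idea, the sketch below uses a simple eigenvalue-threshold heuristic to choose \(K\) (with \(K=0\) corresponding to a shadowed cell).

```python
import numpy as np

def estimate_model_order(C, noise_floor_ratio=0.1, k_max=3):
    """Heuristic model order selection from the eigenvalue spectrum of C.

    Counts eigenvalues that exceed a fixed fraction of the largest one.
    This is only a simplified stand-in for the maximum-likelihood-based
    selection of [6]; K = 0 corresponds to a shadowed resolution cell.
    """
    w = np.linalg.eigvalsh(C)[::-1]          # eigenvalues, descending
    k = int(np.sum(w > noise_floor_ratio * w[0]))
    return min(k, k_max)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, L = 4, 64                              # four antennas, 64 looks
    a1 = np.exp(1j * np.linspace(0.0, 1.5, N))   # two arbitrary steering vectors
    a2 = np.exp(1j * np.linspace(0.0, 4.5, N))
    y = (a1[:, None] * rng.standard_normal(L) + a2[:, None] * rng.standard_normal(L)
         + 0.2 * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))))
    C = y @ y.conj().T / L
    print("estimated model order K =", estimate_model_order(C))
```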
## IV Discussion

The experimental results summarized in Section III-C show that there is indeed a certain amount of canopy penetration for millimeterwave SAR signals. Using MUSIC-based inversion in conjunction with a continuous TomoSAR model, strong pseudoreflectivities can be seen not only within tree crowns but also in underfoliage scene parts, where radar shadowing from higher trees would be expected. As the _in situ_ photograph displayed in Fig. 6 shows, this is probably related to the low leaf density in the tree canopy. In addition, it seems that subcanopy backscattering is supported by strong reflections provided by man-made structures made of concrete. It also has to be noted that a wrong model order selection can lead to a severe underestimation of the amount of present signal information. Looking, e.g., at range bins 1425 to 1435, strong backscattering from both the tree canopy and the undergrowth is revealed for \(K=2\) and \(K=3\), while Capon and MUSIC with \(K=1\) only show a weak response at intermediate elevation. Particularly for single-pass systems with a very low number of available acquisitions (\(N=4\) in the MEMPHIS case), the relevant signal content of volumetric media is spread throughout the few available eigenvalues, such that it is advisable to always choose \(K\) as high as possible in order to retrieve as much information as possible.

Fig. 3: (a) Capon tomogram. MUSIC tomograms with (b) \(K=1\), (c) \(K=2\), and (d) \(K=3\). The (pseudo)intensities [in decibels] range from red (low) to yellow (high).

Fig. 4: LiDAR ground truth projected in the slant range–elevation plane of the radar geometry.

## V Summary and Conclusion

In this letter, the very first experimental results for millimeterwave SAR tomography of forest volumes have been shown. Using test data acquired by the single-pass multibaseline interferometer MEMPHIS and a ground truth data set acquired by high-precision TLS, it could be shown that there is a certain amount of canopy penetration if the leaf density is not too high. This asks for more investigations with respect to subcanopy target detection, particularly if the object of interest provides strong enough backscattering. However, the main reflectivity contributions occur at the tree crowns facing the sensor. Therefore, a discrete scattering model can also be employed for reconstruction aiming at the canopy only, making this model a promising perspective for canopy height model generation or even individual tree reconstruction, as proposed in [8].

## Acknowledgment

The authors would like to thank T. Brehm and Dr. S. Stanko from the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR), for providing the MEMPHIS test data; C. Magnard and Dr. E. Meier from the Remote Sensing Laboratories at the University of Zurich, for focusing the raw SAR data; and geodesists P. Schreiner and W. Wiedemann from the Technical University of Munich, for the efforts concerning the acquisition and processing of the LiDAR ground truth data.

## References

* [1] A. Reigber and A. Moreira, "First demonstration of airborne SAR tomography using multibaseline L-band data," _IEEE Trans. Geosci. Remote Sens._, vol. 38, no. 5, pp. 2142-2152, Sep. 2000.
* [2] O. Frey, F. Morsdorf, and E. Meier, "Tomographic imaging of a forested area by airborne multi-baseline P-band SAR," _Sensors_, vol. 8, no. 9, pp. 5884-5896, Sep. 2007.
* [3] O. Frey and E. Meier, "Analyzing tomographic SAR data of a forest with respect to frequency, polarization, and focusing technique," _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 10, pp. 3648-3659, Oct. 2011.
* [4] S. Tebaldini and F.
Rocca, \"Multibaseline polarimetric SAR tomography of a boreal forest at \\(r\\)- and L-bands,\" _IEEE Trans. Geosci. Remote Sens._, vol. 50, no. 1, pp. 232-246, Jan. 2012. * [5] J. Praks, F. Kugler, J. Hyyppa, K. Papathanassiou, and M. Hallikainen, \"SAR coherence tomography for boreal forest with aid of laser measurements,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2008, pp. 469-472. * [6] M. Schmitt and U. Stilla, \"Maximum-likelihood-based approach for single-pass synthetic aperture radar tomography over urban areas,\" _IET Radar, Sonar Navig._, vol. 8, no. 9, pp. 1145-1153, Dec. 2014. * [7] M. Schmitt and U. Stilla, \"Generating point clouds of forested areas from airborne millimeter wave InSAR data,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Jul. 2014, pp. 1-4. * [8] M. Schmitt, M. Shaltzad, and X. Zhu, \"Reconstruction of individual trees from multi-aspect TomoSAR data,\" _Remote Sens. Environ._, vol. 165, pp.175-185, Aug. 2015. * [9] H. Essen, \"Airborne remote sensing at millimeter wave frequencies,\" in _Radar Remote Sensing of Urban Areas_, U. Soergel, Ed. Dordrecht, The Netherlands: Springer-Verlag, 2010. * [10] M. Schmitt, C. Magnard, S. Stanko, C. Ackermann, and U. Stilla, \"Advanced high resolution SAR interferometry of urban areas with airborne millimetrewave radar,\" _Photogramm. Fernerbundung Geinf._, vol. 2013, no. 6, pp. 603-617, 2013. * [11] M. Skolnik, _Introduction to Radar Systems_. New York, NY, USA: McGraw-Hill, 1980. * [12] A. Dunklmayer and M. Chandra, \"Precipitation effects for Ka-band SAR,\" in _Proc. Adv. RF Sensors Earth Observ._, Noordwijk, The Netherlands, 2009, pp. 1-8. * [13] H. Schimpf, H. Essen, S. Boehmdsdorff, and T. Brehm, \"MEMPHIS--A fully polarimetric experimental radar,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2002, pp. 1714-1716. * [14] R. O. Schmidt, \"Multiple emitter location and signal parameter estimation,\" _IEEE Trans. Antennas Propag._, vol. AP-34, no. 3, pp. 276-280, Mar. 1986. * [15] F. Gini and F. Lombardini, \"Multibaseline cross-track SAR interferometry: A signal processing perspective,\" _IEEE Aerosp. Electron. Syst. Mag._, vol. 20, no. 8, pp. 71-93, Aug. 2005. * [16] J. Capon, \"High-resolution frequency-wavenumber spectrum analysis,\" _Proc. IEEE_, vol. 57, no. 8, pp. 1408-1418, Aug. 1969. Fig. 5: Geocoded MUSIC tomogram with automatic model order selection overlayed to the TLS point cloud in world geometry. SAR viewing direction is from the upper left. The (pseudo)intensities range from blue (low) to red (high), and the white box indicates the zoomed detail view on the right. Fig. 6: _In situ_ photograph of the test area showing the gravestone and an exemplary section of the canopy.
In this letter, for the first time, the potential of millimeterwave synthetic aperture radar (SAR) is investigated with respect to a tomographic analysis of forest volumes. Exploiting both parametric and nonparametric SAR tomography (TomoSAR) methods designed for both discrete and continuous reflectivity profiles, it is shown that even Ka-band signals with a wavelength of only 8.55 mm can penetrate the tree canopy to a certain extent and allow a separation of ground and tree crowns. First experimental results exploiting airborne multiantenna data are evaluated with respect to LiDAR ground truth and indicate a promising perspective. Forested areas, multibaseline, synthetic aperture radar (SAR) tomography, volume tomography.
# Shallow-Water Bathymetry Retrieval Based on an Improved Deep Learning Method Using GF-6 Multispectral Imagery in Nanshan Port Waters

Wei Shen, Muyin Chen, Zhongqiang Wu, and Jiaqi Wang

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/

## I Introduction

Accurate bathymetric mapping plays a crucial role in environmental conservation, resource utilization, and efficient port management. Shallow-water depth information is essential for informed decision making and supports various human activities, contributing to both environmental sustainability and economic prosperity in seaport regions [1]. Mapping shallow-water depth in port areas is of utmost importance for various maritime activities. Traditionally, sound navigation and ranging (SONAR) and light detection and ranging (LiDAR) have been employed for this purpose. However, these methods have limitations in terms of cost and spatial coverage [2]. SONAR technology exhibits high operational efficiency but is limited in its applicability for large-scale usage due to its high cost. On the other hand, LiDAR technology offers a cost-effective solution for bathymetric applications, albeit with relatively lower operational efficiency. The implementation of satellite-derived bathymetry (SDB) revolutionizes the mapping of shallow-water depths in port areas, facilitating improved navigation safety, port management, and coastal zone planning. Its cost effectiveness and spatially extensive nature make it an attractive choice for monitoring and managing shallow-water areas. SDB has made great technological advances since the launch of artificial satellites and the rapid development of computer science, and it has become one of the main complementary means for shallow-water depth measurement due to the advantages of repeatable observation, wide view, and low cost. Different wavelengths of light have different reflectances when penetrating water; based on this principle, remote-sensing satellite data can be utilized to retrieve the water depth. The traditional SDB methods are generally categorized as physics-based methods and empirical methods. The first type focuses on the interaction of light in water from a theoretical perspective [3, 4], whereas the second type investigates the empirical relationship between the spectral radiation patterns and the optical parameters. A logarithmic band-ratio model has been used to retrieve water depth and has been found to reduce the impact of different bottom sediments in shallow seas to a certain extent [5]. As we have mentioned, most traditional sounding algorithms do not consider the spatial correlation between sounding points and surrounding pixels, and a simple linear relationship is not sufficient to investigate and extend the mathematical and physical relationship between the features and the labels. Although building a universal algorithm to explain the relationship between multidimensional spectral values and in situ data is challenging, statistical methods can be employed to investigate the numerical model and search for the optimal solution [6].
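In its commonly used form, the logarithmic band-ratio model cited above [5] is a linear regression of water depth against the ratio of log-transformed band reflectances. The sketch below follows that generic form on synthetic calibration points; the scaling constant, band choice, and data are illustrative assumptions rather than values from this study.

```python
import numpy as np

def log_ratio_feature(r_blue, r_green, n=1000.0):
    """Ratio of log-transformed reflectances; the constant n keeps the logs positive."""
    return np.log(n * np.asarray(r_blue)) / np.log(n * np.asarray(r_green))

def fit_log_ratio_model(r_blue, r_green, depth):
    """Least-squares fit of depth = m1 * ratio + m0 at the calibration points."""
    x = log_ratio_feature(r_blue, r_green)
    A = np.column_stack([x, np.ones_like(x)])
    (m1, m0), *_ = np.linalg.lstsq(A, np.asarray(depth), rcond=None)
    return m1, m0

def predict_depth(r_blue, r_green, m1, m0):
    return m1 * log_ratio_feature(r_blue, r_green) + m0

if __name__ == "__main__":
    # synthetic calibration points (reflectances and SONAR depths) for illustration only
    rng = np.random.default_rng(0)
    depth = rng.uniform(1.0, 15.0, 50)
    r_green = 0.05 * np.exp(-0.08 * depth) + 0.01
    r_blue = 0.05 * np.exp(-0.05 * depth) + 0.01
    m1, m0 = fit_log_ratio_model(r_blue, r_green, depth)
    est = predict_depth(r_blue, r_green, m1, m0)
    print(f"m1={m1:.2f}, m0={m0:.2f}, RMSE={np.sqrt(np.mean((est - depth) ** 2)):.2f} m")
```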
A support vector machine was employed to estimate water depth at two different ports in the cities of Luarca and Candas in the Principality of Asturias (Spain) [7]. Furthermore, there have been instances where these technologies have been applied to the port of China [8]. Through these studies, it is shown that, at this time, the empirical-based method is more applicable than the physic-based method in water areas with turbid and complex water environments, such as a port. Deep learning frameworks have also been employed to improve the accuracy of inversion. Owing to their capabilities in image processing and feature analysis [9, 10, 11], deep learning techniques have gained significant attention in various Earth observation and remote-sensing applications. Chen et al. [12, 13] developed a deep learning model that integrated local aggregation and global attention mechanisms, which could effectively extract the spectral and spatial features of hyperspectral images by using a spectral-induced aligned superpixel segmentation technique and achieve high-accuracy classification results. The authors in [14] and [15] proposed a graph convolutional network for hyperspectral image classification, which can model the relations between samples using graph structures and improve the spatial-spectral feature representations. A cross-channel reconstruction module was introduced for multimodal remote-sensing data classification, which can exchange information between modalities by reconstruction strategy and learn more compact fusion representations [16]. A novel deep learning called grid network was proposed to rethink the feature extraction of hyperspectral images from anisotropic perspectives and to fully explore the spectral and spatial features in multistage and multipath processes [17]. In the context of port surveys, the wavelet neural network model was developed utilizing the spectral reflectance values obtained from top-of-atmosphere computations. The utilization of this neural network model demonstrated its effectiveness as a powerful tool in the field [18]. In the study, a convolutional neural network (CNN) was utilized to perform depth estimation in the Devil's Lake area (North Dakota, USA). The task of depth estimation was approached as a classification problem, leveraging the capabilities of the CNN architecture [19]. Furthermore, the accurate estimation of coastal water depth in the turbid water was achieved using Sentinel-2 Level 2A imagery, particularly in regions characterized by clear seawater [8]. Al Najar et al. [20] demonstrated a promising direction for the applicability of deep learning models in the field of marine surveying. Although they both use the deep learning framework, they are limited by the spectral information as the training data, which does not give full play to the huge potential of deep learning, and the results obtained still have room for improvement. Introducing water factors to improve the results was less studied before. Many existing studies indicate that inherent optical properties (IOPs) can be referred as another candidate factor to improve SDB accuracy [21]. The value of IOPs exhibits significant variations with water depth, establishing a strong correlation between the spatial distribution of water quality and the distribution of water depth. Yang et al. [22] introduced IOPs in turbid waters to retrieve the euphotic zone depth in inland waters using a modified quasi-analytical algorithm (QAA). Zhang et al. 
[23] proposed a linear model known as the inherent optical parameter linear model (IOPIM) to estimate shallow-water depth using high-spatial-resolution multispectral images. The findings from the IOPIM study demonstrated the potential of using inherent optical parameters (IOPs) to enhance the accuracy of water depth estimation. Huang et al. [24] proposed an updated quasi-analytical algorithm (UQAA) and verified its feasibility. Wu et al. [25] employed the UQAA to calculate the phytoplankton pigment absorption coefficient and the chlorophyll-a concentration, which served as the characteristic factors of water depth estimation. In this study, a novel framework was proposed that combines the UQAA with a CNN. The authors aim to improve the accuracy and efficiency of the water depth inversion by combining the UQAA with a CNN-based deep learning framework. The UQAA can calculate the water quality factors that affect the water depth, and the CNN can automatically extract the underwater terrain features from the satellite images. The proposed model was compared with four other classical ML methods (i.e., the back-propagation neural network (BP-NN) [26], random forest (RF) [27], eXtreme Gradient Boosting (XGBoost) [28], and support vector regression (SVR) [7]) to verify the bathymetric ability. The results show that the proposed framework outperforms the other baseline cases. The concept of incorporating additional features beyond optical channels as supplementary training data can be extended to other geographical regions or scenarios that demand precise bathymetric mapping, including applications in coastal erosion monitoring, coral reef preservation, marine habitat mapping, and underwater archaeology. This approach holds the potential for enhancing the accuracy and applicability of bathymetric estimation in various environmental settings. ## II Data and Methods The main body of the study consists of two steps: Data preprocessing and model training (see Fig. 1). In the data preprocessing part, the remote-sensing data used in this study were acquired from the GF-6 wide-field-view (WFV) multispectral optical satellite, while the in situ measurement data were obtained from SONAR measurements. To enhance the accuracy of bathymetry estimation, the remote-sensing data underwent preprocessing steps, including radiation correction and flare removal. The estimation of the chlorophyll-a concentration (C) and the absorption coefficient of colored dissolved organic matter (CDOM) at 440 nm [\\(a_{g}\\)(440)] was conducted using the UQAA. Subsequently, bathymetric points were extracted from in situ SONAR data, serving as a priori information. All data were stored in floating-point format. Through resampling, the \\(R_{rs}\\) values for the blue, green, and red bands, along with the two water quality factors, were matched with the control points within a specified window size in the GF-6 images at each point. In the model training part, the collected data were utilized in three distinct training methods, depending on specific conditions. Some SONAR data that were not incorporated into the network were reserved for validation purposes, enabling the evaluation of accuracy. Evaluation metrics, such as root-mean-square error (RMSE), mean relative error (MRE), and \\(R^{2}\\), were employed to assess the accuracy of the bathymetry estimation in comparison with other baselines.
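The window-matching step just described can be sketched as follows. The snippet assumes, purely for illustration, that the three resampled \\(R_{rs}\\) bands and the two UQAA-derived factors have already been stacked into a single array and that the control points are expressed as row and column indices in that array; neither assumption reflects the exact data layout used by the authors.

```python
import numpy as np

def extract_patches(stack, rows, cols, depths, win=7):
    """stack: (channels, H, W) array holding [R_blue, R_green, R_red, C, a_g(440)];
    rows, cols: pixel indices of the a priori depth points; depths: their labels.
    Returns (N, channels, win, win) patches and the matching depth labels."""
    half = win // 2
    H, W = stack.shape[1:]
    patches, labels = [], []
    for r, c, d in zip(rows, cols, depths):
        if half <= r < H - half and half <= c < W - half:   # skip border points
            patches.append(stack[:, r - half:r + half + 1,
                                    c - half:c + half + 1].astype(np.float32))
            labels.append(d)
    return np.stack(patches), np.asarray(labels, dtype=np.float32)
```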
### _Data Preprocessing_ #### Iv-A1 Study Area The article selects optical shallow-water areas of Nanshan Port within Sanya, Hainan province as the study area. The geolocation of the study area is shown in Fig. 2. The water quality in this area is clear, the seabed topography changes gently, and a small variety of bottom sediments. The water environmental conditions of Nanshan Port are typically characterized by turbidity. However, incorporating water quality factors can enhance the accuracy of bathymetric inversion. Therefore, the sea area of Nanshan Port was carefully selected to test the feasibility and accuracy of our proposed model. We conducted a qualitative evaluation of the variable bathymetry inversion model specifically for this region. #### Iv-A2 GF-6 WFV Data We obtained GF-6 WFV images from the land observation satellite service [[http://www.sasclouds.com/chinese/normal/](http://www.sasclouds.com/chinese/normal/) (accessed on July 17, 2021)]. Information on the GF-6 satellite is shown in Table I. A resampling of the input GF image is done to match the same image dimension of the in situ data. The geographical location of each point is defined by its longitude and latitude. From the longitude and latitude, a coordinate projection is made using the WGS84 ellipsoid using geospatial data abstraction library package in Python. We reduce the spatial positioning error generated by projecting the measured points onto the image by minimizing \\(E\\) as much as possible \\[\\mathbf{E}=\\sum_{\\boldsymbol{n}=1}\\left(\\boldsymbol{lon}-\\boldsymbol{x}\\right) ^{2}+\\sum_{\\boldsymbol{n}=1}\\left(\\boldsymbol{lat}-\\boldsymbol{y}\\right)^{2} \\tag{1}\\] Image preprocessingThe GF-6 image was radiometric calibrated using the FLAASH algorithm, while the glint effects were eliminated using Hedley's method [29, 30]. The state of the water surface will seriously affect the bathymetry of the shallow sea terrain. When there are wind waves in the scene, the solar flare on the water surface turns out to be very serious. It is necessary to eliminate the flare in the image such that the accuracy of water depth inversion can be improved. In this study, we employ Hedley's method to exploit the linear correlation between the reflectance values of the near-infrared band and other bands, aiming to mitigate the flare effects. The image was resampled to a spatial resolution of 5 m by bilinear interpolation [31], the same as the in situ data. Finally, the corresponding reflectance data were taken as a part of the features. To reduce information redundancy and the dimension of data, the optimum index factor (OIF) is often used to derive the optimal band feature combination [32]. The expression is given as follows: \\[R_{\\text{OIF}}\\ =\\ \\frac{\\boldsymbol{S}_{1}+\\boldsymbol{S}_{2}+\\boldsymbol{S}_{3 }}{\\boldsymbol{R}_{12}+\\boldsymbol{R}_{13}+\\boldsymbol{R}_{23}} \\tag{2}\\] where \\(S_{1}\\), \\(S_{2}\\), and \\(S_{3}\\) represent the standard deviations of any three wavebands, and \\(R_{12}\\), \\(R_{13}\\), and \\(R_{23}\\) represent the correlation coefficients between any three selected bands. The basic principle behind this method is that the amount of information contained in an image is directly proportional to its standard deviation. A higher standard deviation indicates a larger amount of information. Conversely, the independence Fig. 1: General workflow of the proposed system by using different approaches. Fig. 2: Study area at Nanshan port waters area. 
of an image is inversely proportional to the correlation coefficient between its spectral bands. A lower correlation coefficient indicates a lower degree of information redundancy and better independence. This method combines the interband correlations and the information content of individual band images to achieve widespread application. The fifth and sixth bands of the GF-6 image are the red edge bands, which are mainly used for the agricultural survey. They are not suitable for underwater information detection. We calculated the OIFs of the remaining six bands in Fig. 3. The first, second, and third bands of the GF-6 image are blue, green, and red bands, which were input into the training model. Moreover, these bands are like other satellites, which are better suited to verify the applicability of the UQAA. Updated QAAThe QAA is a semianalytical model based on the bio-optical model proposed by Zhao et al. [26]. Regarding multispectral satellites on the market that only have three-four wavebands of valid ocean data, it is meaningful to limit the number of unknown parameters of the QAA as it reduces the impact of incorrect data on the model. It is shown by Mateo-Perez et al. [27] that the absorption coefficient of chlorophyll at 440 nm and the backscattering coefficients at 550 nm can be presented in terms of the chlorophyll-a concentration \\(C\\) as follows: \\[b_{bp}\\left(\\lambda_{0}=550\\right)=0.0111*C^{0.62} \\tag{3}\\] \\[a_{\\text{phy}}(\\lambda_{1}=440)=0.06*C^{0.65}. \\tag{4}\\] In clear water, we have the following: \\[b_{bp}\\left(\\lambda\\right) =b_{bp}\\left(\\lambda_{0}\\right)\\left(\\lambda_{0}/\\lambda\\right)^ {Y}\\left(Y=0.67875\\right) \\tag{5}\\] \\[b\\left(\\lambda\\right) =b_{w}\\left(\\lambda\\right)+b_{bp}\\left(\\lambda\\right)\\] (6) \\[a_{g}\\left(\\lambda\\right) =a_{g}\\left(\\lambda_{1}\\right)*\\exp\\left(-0.015*\\left(\\lambda- \\lambda_{1}\\right)\\right)\\] (7) \\[a_{\\text{phy}}\\left(\\lambda\\right) =\\left[a_{0}\\left(\\lambda\\right)+a_{1}\\left(\\lambda\\right)\\ln \\left(a_{\\text{phy}}(\\lambda_{1})\\right)\\right]\\,a_{\\text{phy}}(\\lambda_{1})\\] (8) \\[a\\left(\\lambda\\right) =a_{w}\\left(\\lambda\\right)+a_{\\text{phy}}\\left(\\lambda\\right)+a_ {g}\\left(\\lambda\\right)\\] (9) \\[u\\left(\\lambda\\right) =\\frac{b\\left(\\lambda\\right)}{a\\left(\\lambda\\right)+b\\left( \\lambda\\right)}\\] (10) \\[\\lambda_{0} =550,\\;\\lambda_{1}=440. \\tag{11}\\] Therefore, we can construct an equation equal to \\(u(\\lambda)\\) by using \\(a_{g}(440)\\) and chlorophyll-a concentration \\(C\\). Note that in the QAA, we have \\[r_{rs}\\left(\\lambda\\right) =\\frac{R_{rs}\\left(\\lambda\\right)}{0.52+1.7R_{rs}\\left(\\lambda \\right)} \\tag{12}\\] \\[u\\left(\\lambda\\right) =\\frac{-g_{0}+\\left[g_{0}^{2}+4g_{1}r_{rs}\\left(\\lambda\\right) \\right]^{1/2}}{2g_{1}}. \\tag{13}\\] Introducing _R\\({}_{rs}\\)_ from multispectral images, we can represent the below-surface remote-sensing reflectance (_rrs_) and construct an equation set. In this set, \\(g_{0}\\) and \\(g_{1}\\) are the constants given by 0.08945 and 0.1247, respectively. The values \\(a_{0}\\) and \\(a_{1}\\) were from [33] by employing the interpolation function in MATLAB. 
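A minimal sketch of the inversion implied by (3)-(13) is given below: the modelled \\(u(\\lambda)\\), built from the unknowns \\(C\\) and \\(a_{g}(440)\\), is matched against the \\(u(\\lambda)\\) obtained from the observed \\(R_{rs}\\) through (12)-(13), and the mismatch is minimized with a Levenberg-Marquardt fit via SciPy. The central wavelengths and the per-band constants \\(a_{w}\\), \\(b_{w}\\), \\(a_{0}\\), and \\(a_{1}\\) are placeholders rather than the values used in the article, and the symbol definitions are completed in the paragraph that follows.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical per-band constants (blue, green, red); placeholders only.
LAMBDA = np.array([485.0, 555.0, 660.0])      # assumed central wavelengths (nm)
A_W    = np.array([0.0145, 0.0596, 0.4100])   # pure-water absorption (placeholder)
B_W    = np.array([0.0024, 0.0014, 0.0007])   # pure-water backscatter (placeholder)
A0     = np.array([0.0287, 0.0132, 0.0152])   # a0(lambda) (placeholder)
A1     = np.array([0.0918, 0.0730, 0.1322])   # a1(lambda) (placeholder)
G0, G1, Y = 0.08945, 0.1247, 0.67875

def u_from_rrs(Rrs):
    """Eqs. (12)-(13): observed u(lambda) from above-surface R_rs."""
    rrs = Rrs / (0.52 + 1.7 * Rrs)
    return (-G0 + np.sqrt(G0 ** 2 + 4.0 * G1 * rrs)) / (2.0 * G1)

def u_from_model(C, ag440):
    """Eqs. (3)-(11): modelled u(lambda) from C and a_g(440)."""
    bbp = 0.0111 * C ** 0.62 * (550.0 / LAMBDA) ** Y
    b = B_W + bbp
    ag = ag440 * np.exp(-0.015 * (LAMBDA - 440.0))
    aphy440 = 0.06 * C ** 0.65
    aphy = (A0 + A1 * np.log(aphy440)) * aphy440
    a = A_W + aphy + ag
    return b / (a + b)

def retrieve_uqaa(Rrs, x0=(1.0, 0.1)):
    """Solve for (C, a_g(440)) with the Levenberg-Marquardt method."""
    resid = lambda p: u_from_model(p[0], p[1]) - u_from_rrs(Rrs)
    return least_squares(resid, x0=np.asarray(x0), method="lm").x
```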
The quantity \\(a\\) represents the total absorption coefficient of the water body, \\(a_{\\text{phy}}\\) represents the absorption coefficient of chlorophyll, \\(a_{g}\\) represents the absorption coefficient of CDOM, which is represented by the \\(a_{g}(440)\\), \\(a_{w}\\) represents the absorption coefficient of pure water, which is obtained directly, \\(b_{w}\\) represents the backscattering coefficient of pure water, which is obtained directly as well, \\(b_{bp}\\) represents the backscattering coefficient, which is represented by \\(b_{bp}\\) (\\(\\lambda_{0}=550\\)), and \\(\\lambda\\) represents the central wavelength of input bands. Hypothetically, the difference between the actual value of subsurface _R\\({}_{rs}\\)_ of optical in deep water and the predicted value of the semianalytical method tends to be fairly slight. Hence, the optimum values of \\(a_{g}(440)\\) and \\(C\\) can be solved by using the Levenberg-Marquardt method (see Fig. 4). Fig. 5(a) and (b) shows that the water quality factors of Nanshan Port strongly correlate with the trend of water depth distribution. The nearshore, especially the port, is a high-value area of absorption coefficient, which is reflected in Fig. 5. When the mixing dilution effect of seawater is more obvious, the absorption coefficient value is smaller, so the absorption coefficient of deep water will be smaller [34]. The relative relationship between the absorption coefficient and the measured water depth is depicted in Fig. 5(c) and (d). The regression line, represented by a thick red line, is shown to align with the theoretically inferred distribution pattern. The results of UQAA contain obvious data information, which may be introduced into the detection of underwater topography. Fig. 4: Derivation process of UQAA. Fig. 3: OIFs of the GF-6 image. #### Iv-A3 SONAR Data From July 11, 2021 to July 13, 2021, we used the wide band multibeam system (WBMS) produced by NORBIT of Norway to survey the water depth in the experimental area in a shipborne way. The acquired water depth data exhibit a substantial quantity and a high level of precision. The plane positioning accuracy is better than 0.5 m, and the depth measurement accuracy is better than 0.2 m. We extracted a total of approximately 200 000 measurement depth point cloud data with a resolution of 5 m to generate the training and testing sets required for the study. Nanshan Port contains a vast water area and complex boundaries. To enhance the robustness of the experiment, another set of 50 control points is selected from the nautical chart ([http://webapp.navionics.com](http://webapp.navionics.com)) as a supplement to the whole dataset. We took 5000 water depth points from the measured water depth dataset as samples. Among those samples, 4500 points were selected as training set and the rest 500 points were selected as test set. In Fig. 6, we show the distribution of the measured sample points used in this experiment. Each point in Fig. 6 represents the mean value of the water depth control points within a range. The datum used for the water depth here is the theoretical lowest tidal level in the experimental area. This is in line with the local chart datum. To provide input images in deep learning networks, the entire dataset should have the same dimensions and spatial resolution as the in situ dataset. ### _Model Training_ The subsequent model is constructed based on the flowchart, as depicted in Fig. 
1, and it is categorized into two types of comparison approaches and the proposed approach. #### Iv-B1 Train Model Proposed approach: CNN-based deep learning framework CNN models have advantages in image processing, as well as statistical regression analysis. These networks can minimize an objective function and are trained to approximate a mapping between inputs and outputs. Here, we designed a two-dimensional (2-D) CNN-based deep learning framework to retrieve the water depth. To train the model, we registered the in situ control points with the pixel values in the multispectral images. The center pixel is viewed as a weighted average of nearby known pixels. This method considered the impact of adjacent pixels on water depth retrieval through the 2-D CNN to better realize underwater terrain extraction. We show in Fig. 7 the structure of the CNN-based deep learning framework. The convolutional kernels of our proposed model are used to extend the channels to extract more features. Pooling is used to reduce the width and height of the input data to extract important information. The rectified linear unit (ReLU) function serves as the activation function between the convolutional layer and the pooling layer, which is plotted in Fig. 7 with a yellow dashed line. Although it exhibits a linear appearance and behavior, this activation function is in fact nonlinear. Neural networks utilizing this activation function effectively mitigate the issue of vanishing gradients during the training process. The expression is given as follows: \\[f(x)=\\max(0,x) \\tag{14}\\] where \\(x\\) represents the independent input variable and \\(f\\) (\\(x\\)) represents the function value. Unlike the conventional sigmoid activation function, the ReLU function is less likely to cause gradient explosion or gradient vanishing. The ReLU function also reduces the computational cost of the network, optimizing the whole training process. We used a conventional CNN-based framework with five input channels with UQAA and three input channels without UQAA. We chose the architecture from ResNet20 [35], which is a small version of a residual architecture achieving state-of-the-art performance on many computer vision tasks. The water depth points extracted from in situ measurements were used as a priori bathymetric points. A 7 \\(\\times\\) 7 subimage was extracted from the multispectral image with the a priori bathymetric point as the center. We introduced the input feature of size 5 (or 3) \\(\\times\\) 7 \\(\\times\\) 7 (bands \\(\\times\\) width \\(\\times\\) height) into the model. The prior water depth was used as a label for model learning. The 2-D CNN model has five convolutional layers, which contain 64, 128, 256, 256, and 512 channels, respectively. To mitigate vanishing gradients, the ReLU function was used as the activation function of the hidden layers. Fig. 5: Distribution and correlation regression plots of water quality factors in the port. Fig. 6: Distribution of measured water depth in Nanshan port. Finally, we predicted the water depth by using the linear activation function. In this study, we adopted the minibatch strategy to iteratively train the model [36], which improves the computational efficiency of the proposed model. The training set is randomly divided into subsets of equal size, which are referred to as minibatches. This method not only alleviates the efficiency issue but also improves the convergence speed of the network.
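A compact PyTorch sketch of such a 2-D CNN regressor is given below. The five stages with 64/128/256/256/512 channels, the 5-channel (or 3-channel) 7 x 7 input, the ReLU activations, and the linear output head follow the description above, while PyTorch itself, the 3 x 3 kernel sizes, the position of the pooling layer, and the global average pooling before the head are assumptions made to keep the example self-contained rather than details taken from the article.

```python
import torch
import torch.nn as nn

class BathyCNN(nn.Module):
    """2-D CNN regressor over 5-channel (or 3-channel) 7 x 7 patches with five
    convolutional stages of 64/128/256/256/512 channels and a linear output.
    Kernel sizes, pooling position, and global average pooling are assumptions."""
    def __init__(self, in_channels=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 7 x 7 -> 3 x 3
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(512, 1)                      # linear activation for depth

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

# Example: a batch of twenty 5 x 7 x 7 patches yields twenty depth predictions.
model = BathyCNN(in_channels=5)
print(model(torch.randn(20, 5, 7, 7)).shape)               # torch.Size([20])
```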
In the proposed network model, a for loop is adopted to traverse the minibatches, performing a gradient-descent step for each batch. We then update the weight and bias parameters through backpropagation. We calculated the network's prediction error via the loss function over every batch of the training set. Following the calculation of the gradient of the loss function, an optimizer was introduced for updating the different parameters of the network to reduce the loss of the model. We trained the network by using Adam [37] (\\(\\beta_{1}=0.9\\) and \\(\\beta_{2}=0.999\\)), which is a standard optimization method based on stochastic gradient descent. Training iterates until the stopping criterion (epoch \\(=60\\), minibatch size \\(=20\\)) is reached, by which point the loss function has converged and the training of the model is finished. Baseline simple BP-NN: The simple BP-NN, as a traditional artificial neural network, mimics the learning process of neurons from feedback. The simple network includes only one hidden layer. The interconnections between adjacent layers are weighted, with weights and biases tuned during backpropagation. We implemented the model by using Python. In the simple BP-NN, the only hidden layer was constructed with 300 nodes, and the transfer function was the hyperbolic tangent (tanh) function. The optimizer was Adam, with a learning rate of 0.05, and StepLR was selected as the training scheduler. Moreover, the maximum number of training epochs, the learning rate, and the momentum coefficient were 1000, 0.05, and 0.9, respectively. Following the preprocessing procedures, the data were input into the model for bathymetric retrieval. Baseline RF: In this ensemble supervised learning model, multiple predicted outcomes are calculated, and the prediction results are combined to improve the prediction accuracy. Bootstrap sampling is performed for each decision tree, and error estimation is performed by using the out-of-bag samples. The variables are randomly chosen when each decision tree is generated. In the RF regressor, the training data contained five features (\\(R_{rs}\\) of the red, green, and blue bands plus the two UQAA-derived water quality factors) with UQAA and three features (\\(R_{rs}\\) of the red, green, and blue bands) without UQAA. We describe the setting of the RF algorithm as follows. The detailed configurations included 1000 decision trees (n_tree), where the max depth of each tree was 50. Baseline XGBoost: XGBoost is based on gradient-boosted decision trees. It carries out a second-order Taylor expansion of the loss function within the boosting framework. To avoid overfitting, we used the GridSearchCV function from the sklearn package's model_selection module in Python. By giving a value interval, we performed the search in sequence until the best combination of parameters (eta, gamma, max_depth, and n_estimators) was achieved. For this ensemble learning model, the learning rate was 0.05. The maximum depth of the regressors was set as 3, and the number of estimators was given by 500. Baseline SVR: SVR is derived from the support vector machine, a generalized linear classifier for supervised binary classification, and it can also be applied to nonlinear regression problems. Regression is realized by constructing decision functions in a high-dimensional space. It is often employed for multidimensional regression with small samples.
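Returning to the proposed CNN, the minibatch loop described at the start of this section can be sketched as follows, reusing the BathyCNN class and the patch and label arrays from the earlier snippets. The Adam settings, the 60 epochs, and the minibatch size of 20 follow the text; the mean-squared-error loss is an assumed choice, since the article does not name its loss function explicitly.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_cnn(model, patches, depths, epochs=60, batch_size=20):
    """Minibatch training with Adam (beta1 = 0.9, beta2 = 0.999), 60 epochs,
    minibatches of 20.  The MSE loss is an assumed choice for this sketch."""
    data = TensorDataset(torch.as_tensor(patches), torch.as_tensor(depths))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), betas=(0.9, 0.999))
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x_batch, y_batch in loader:        # traverse every minibatch
            optimizer.zero_grad()
            loss = loss_fn(model(x_batch), y_batch)
            loss.backward()                    # backpropagate the loss gradient
            optimizer.step()                   # update the network parameters
    return model
```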
In this article, the radial basis function (RBF) was selected as the kernel for regression, and the optimal solution was searched with this kernel function. We imported the StandardScaler function from the sklearn package's preprocessing module in Python to address the problem that the input variables have different scales and cannot be compared directly. #### Iii-B2 Model Evaluation The depth estimation accuracy of all models can be calculated by using the following errors: \\[\\mathrm{MRE}=\\left(\\frac{1}{n}\\sum_{i=1}^{n}\\left|h_{i}-\\hat{h}_{i}\\right|/h_{i}\\right)\\ast 100\\% \\tag{15}\\] Fig. 7: CNN model structure. \\[\\mathrm{RMSE} =\\left(\\sum_{i=1}^{n}{\\left(h_{i}-\\hat{h}_{i}\\right)^{2}}/n\\right)^{1/2} \\tag{16}\\] \\[R^{2} =1-\\frac{\\sum_{i=1}^{n}{\\left(h_{i}-\\hat{h}_{i}\\right)^{2}}}{\\sum_{i=1}^{n}{\\left(\\overline{h_{i}}-\\hat{h}_{i}\\right)^{2}}} \\tag{17}\\] where \\(h_{i}\\) represents the measured depth, \\(\\hat{h}_{i}\\) represents the estimated depth, and \\(n\\) represents the number of input data. We performed the accuracy evaluation based on three statistical parameters, including MRE and RMSE. Lower MRE and RMSE values indicate a higher accuracy for bathymetric retrieval. RMSE quantifies the absolute error between predicted and observed values, while MRE captures the relative deviation or bias. Both metrics provide valuable insights into the performance and quality of a model's predictions. \\(R^{2}\\) was also used to describe the model-fitting effect. A high \\(R^{2}\\) value reflects a good fitting effect of the model. The relative bathymetric error (RBE) is adopted to capture the error at a specific position. The absolute value \\(|\\mathrm{RBE}|\\) is given by \\[|\\mathrm{RBE}|=\\left(\\left|h_{i}-\\hat{h}_{i}\\right|/h_{i}\\right)*100\\%. \\tag{18}\\] ## III Results To evaluate the prediction results, the assessment was carried out through two parallel experiments, and the bathymetry mapping was conducted in Section IV. These two experiments are titled "Water Depth Estimation Accuracy in Different Sizes of Dataset" and "Water Depth Estimation Accuracy in Different Datasets." The first experiment aims to validate the optimization capability of the proposed UQAA across various dataset sizes and ensure that it does not result in adverse effects. The second experiment aims to demonstrate the superiority of the authors' proposed method over other approaches, establishing its potential for further investigation and broader application. ### _Water Depth Estimation Accuracy in Different Sizes of Dataset_ In this article, the water depth control points collected by acoustic-sounding instruments were selected as the a priori dataset. They are more accurate and more timely compared with spaceborne laser bathymetry, because sound waves travel from the transducer to the seabed and back quickly and are not easily lost. We can also use SONAR data to test the consistency of the water depth. Using traditional acoustic instruments for data validation makes underwater terrain extraction more reliable. In order to deeply explore the influence of UQAA on water depth estimation accuracy, different training set sizes are selected, which are displayed on the \\(X\\)-axis of Fig. 8. We observed from Fig. 8 that the accuracy with UQAA was greater than that without it in almost every model.
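For reference, the error metrics defined in (15)-(18) above translate directly into NumPy. The only liberty taken in this sketch is in the \\(R^{2}\\) denominator, where the conventional sum of squared deviations of the measured depths from their mean is used.

```python
import numpy as np

def mre(h, h_hat):
    """Eq. (15): mean relative error, in percent."""
    return np.mean(np.abs(h - h_hat) / h) * 100.0

def rmse(h, h_hat):
    """Eq. (16): root-mean-square error, in meters."""
    return np.sqrt(np.mean((h - h_hat) ** 2))

def r2(h, h_hat):
    """Eq. (17): coefficient of determination (conventional denominator)."""
    return 1.0 - np.sum((h - h_hat) ** 2) / np.sum((h - np.mean(h)) ** 2)

def abs_rbe(h, h_hat):
    """Eq. (18): per-point absolute relative bathymetric error, in percent."""
    return np.abs(h - h_hat) / h * 100.0
```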
The incorporation of the updated quasi-analytical algorithm (UQAA) improves the obtained outcomes while avoiding adverse impacts. We found that the RMSEs and MREs of the BP-NN and the RF algorithm changed slightly. Along with UQAA, the RMSEs and MREs of the XGBoost algorithm and SVR algorithm declined significantly overall. This is especially true when the number of points exceeds 6000. The accuracy of the CNN was obviously boosted given the input of the UQAA results. The RMSE and MRE of the CNN model decreased dramatically, where the RMSE decreased by 23 cm and the MRE decreased by 5%. We may draw a preliminary conclusion that the UQAA results could be employed as training data to optimize the model and improve depth inversion accuracy. To simulate what happens when the model encounters deep water anomalies, we added 50 deep water points extracted from the chart to the training set of 5000 measurement points (see Section III-C). When compared with other models that exhibited fluctuations in accuracy with a dataset of 5000 points, the CNN model demonstrated superior accuracy. This suggests that the model possesses robust outlier handling capabilities and effective predictive abilities. Fig. 8: Bathymetric retrieval accuracies of different algorithms are compared. ### _Water Depth Estimation Accuracy With UQAA_ We utilized 500 test points with normal distribution for the test in Table II and Fig. 9. In Table II, general differences were observed between RMSE without and with UQAA for different machine learning models. They are 0.12 m (CNN), 0.03 m (BP), 0.01 m (RF), 0.03 m (XGBoost), and 0.23 m (SVR). As in the previous results, the CNN and the RF algorithm performed better over the whole set of test points. The XGBoost model came next. The simple BP-NN and the SVR algorithm turned out to have poor performance in this set of comparisons. Except for the section of 3-6 m, the CNN model had lower RMSE and MRE than the RF algorithm. Since the training points are normally distributed, the accuracy at both ends of the depth range is generally poorer owing to the lack of training samples there. Remarkably, at the end of the validation dataset, where water depth was deeper than 9 m, the CNN model error was still controlled below 1 m. We note in Fig. 9 that the red regression line is plotted on the scatter plot of correlation, compared with the black reference line. Clearly, Fig. 9(c) and (d) indicates the result of the BP-NN, and Fig. 9(i) and (j) indicates the result of the SVR algorithm. There appeared to be a large gap between the regression line and the reference line. By comparison, there was also a small but nonnegligible gap between the regression line and the reference line in Fig. 9(g) and (h), corresponding to the XGBoost algorithm. The gaps in Fig. 9(a) and (b), and (e) and (f) are small enough to be ignored. Their \\(R^{2}\\) values were both over 0.9, showing good fitting ability. ## IV Discussion The environmental conditions of Nanshan Port are characterized by turbid water. While direct application of UQAA may not be feasible, the derived water quality factors have the potential to enhance the accuracy of bathymetric inversion. This empirical model serves as a means to verify the practical applicability of the computed water quality factors for optimizing bathymetric inversion. Many factors could influence the inversion results of the different depth retrieval algorithms.
The obtained water quality factors demonstrated their reliability for the study, as they exhibited strong spatial distribution characteristics that were closely correlated with the trend of water depth distribution. In our two previous experiments, we have demonstrated the robustness of the UQAA method after undergoing specific optimizations, which ensures that it does not yield negative impacts on the overall results. Additionally, our observations have clearly indicated that CNN-based deep learning statistical Fig. 9: Correlation between the in situ depths and the water depth estimation results based on different algorithms. models possess enhanced prediction and inversion capabilities, particularly when they incorporate more features compared with the traditional machine learning methods. Moreover, it is crucial to acknowledge that the selection of model hyperparameters and the potential errors introduced during the experiment have a discernible impact on the accuracy of water depth retrieval. Furthermore, we have explored how the proposed method can be effectively applied in practical scenarios. In this section, we will delve into the specific factors that contribute to variations in inversion results, shedding light on their implications and potential implications for future research and applications. Additionally, we will visualize the bathymetry results to demonstrate the practical feasibility of our proposed approach. Our research aims to provide a comprehensive understanding of the factors influencing water depth retrieval accuracy, guiding the development of more robust and reliable methodologies. By addressing these challenges and refining the approach, we seek to enhance the application of our proposed method in diverse geographic regions and scenarios. Through collaboration with fellow researchers and experts, we aspire to advance the field of water depth estimation and contribute to the sustainable management of coastal environments and marine resources. ### _Design of Model Superparameters_ In our comparative experiment, the superparameters of the machine model, including the BP-NN, RF, XGBoost, and SVR algorithm, were represented by the optimal parameters retrieved through the grid search approach [28]. The best results were taken as the comparison group. It is essential to select the convolution kernel size before model training. In Table III, we show different sizes relevant to the relationship between kernel size and model precision. Except for the size of \\(1\\times 1,R^{2}\\) of other models maintained a relatively high level for the training set or validation set. The result shows that the CNN model can achieve the lowest RMSE and MRE with a kernel size of 5 \\(\\times\\) 5 pixels. Usually, there would be some error in convolution computation when the kernel size is too small or too large [38]. _Comparative Analysis of Water Depth Estimation Algorithms With the Introduction of Water Quality Factors_ Similarly, following the BOOSTING integration concept [39], the XGBoost algorithm's overall RMSE was comparable to the RF algorithm. However, the MRE was higher in the XGBoost algorithm. In contrast, when considering the RMSE and MRE of the CNN model, the accuracy of the RF and XGBoost algorithms without UQAA appeared to be higher. Nevertheless, when incorporating UQAA results, the CNN model demonstrated marked improvement in terms of numerical values, suggesting enhanced accuracy. 
Interestingly, both the RF and XGBoost algorithms exhibited consistent RMSE and MRE values, with or without the input of UQAA results. The RMSE initially decreased and then stabilized within the range of 0.5-0.8 m. The MRE fluctuated between 6% and 10%. When dealing with a limited number of calibration points in practical applications, the results obtained for water depth retrieval were also noteworthy. The accuracy of the BP-NN and CNN models started to increase in the dataset containing 1000-2000 points, and the improvement became more evident after introducing UQAA. The CNN model showed better accuracy, particularly with less than 1500 input points, and this was further enhanced with the inclusion of UQAA results. The influence of different training sets on the overall results is expected, and it is worth noting that the variable training algorithm with UQAA demonstrated higher accuracy compared with the algorithm without UQAA. We demonstrated the superiority of the CNN model with UQAA results across different training set sizes, highlighting its potential and efficacy for water depth estimation. ### _Error Analysis of SDB and In Situ Measurement_ The bathymetry error comes from the GF-6 WFV image and the field data provided by the multibeam sonar. First, the impact of water surface anomalies on the overall inversion results cannot be completely removed, although preprocessing operations, including the sunlight correction, have been performed for the GF-6 WFV image. Second, the spatial resolution of the GF-6 satellite is 16 m, whereas the spatial resolution of the multibeam sonar measurement is only 0.5 m. This yields errors in matching water depth points with image pixel points without considering positioning errors [40]. In addition, the quality factors of a water body are derived by analyzing the \\(R_{rs}\\) of the water surface combined with the empirical coefficient. This indicates that there could be some errors in areas under complex water conditions. Third, it is difficult to synchronize field measurements with other remote-sensing data. It is conducive to model construction and accuracy evaluation by taking the in situ measurement data as prior data. However, since it is difficult to synchronize in the time dimension, inevitably, there will be accuracy errors between the inversion value and true value. To address these challenges, Hong et al. [41] proposed a novel spectral mixture model to address spectral variability for hyperspectral unmixing, which is the problem of estimating the abundance maps of different materials from hyperspectral imagery. Spectral variability refers to the variations of spectral signatures of the same material due to various factors. The proposed model overcomes the limitations of the classical linear mixingmodel, which assumes that the spectral signatures are fixed and known. The model models the main spectral variability (i.e., scaling factors) separately by using an endmember dictionary, which contains the spectral signatures of different materials. It then models the other spectral variabilities by using a spectral variability dictionary, which contains the deviations from the endmember dictionary. The model applies a data-driven learning strategy to learn both dictionaries from the hyperspectral data, by using a low-coherence prior knowledge, which assumes that the atoms of the two dictionaries are not similar to each other. 
The model also estimates the abundance maps simultaneously by using a reconstruction strategy, which minimizes the difference between the observed data and the model output. The article claims that the model can achieve higher accuracy and lower errors than the state-of-the-art methods on synthetic and real datasets by effectively reducing the spectral variability and capturing the spatial-spectral features of the hyperspectral data. We believe that the ALMM method is relevant and effective for our research, and we plan to explore the possibility of incorporating it into our framework to further optimize the data quality and obtain better inversion results in the future. We expect that by using the ALMM method, we can reduce the errors caused by spectral variability and improve the accuracy and robustness of our method. ### _Spatial Distribution of RBEs_ The spatial distribution of RBEs reveals the direction of improving inversion accuracy and the difference in retrieval results under different algorithms. We show in Fig. 10(a), (c), and (e) the red symbols for the absolute RBEs, which are greater than 30%, yellow symbols for the absolute RBEs, which are in 20%-30%, light blue symbols for the absolute RBEs, which are in 10%-20% interval, and dark blue symbols for the absolute RBEs, which are less than 10%. In Fig. 10(a) and (c), we calculated the distribution of absolute RBEs by the CNN model with UQAA and without UQAA. Without UQAA, the number of points in absolute RBEs over 10% (color light blue, red, and yellow) area increased. We observed from Fig. 10(b) and (d) that there was a correlation distribution map between absolute RBEs and real water depth. In the trained models, the high point density area is between 5 and 7 m, whereas absolute RBEs were mostly lower than 10%. Moreover, the CNN model with UQAA results has more concentrated areas of high point density between 5 and 7 m. The corresponding absolute RBE was less than 5%. In Fig. 10(a) and (c), we observed that the distribution of both maps was similar, especially in absolute RBEs above 10% region. The accuracy of northeast part was lower than that of the other parts. Regarding remote-sensing sounding, the overestimation often occurred in the shallow area near the port. The underestimation always occurred in the deep area away from the port [8]. This phenomenon was in line with the result of Fig. 10(a) and (c). However, it was relatively improved in favor of UQAA results. To verify the universality of the model, another dataset of 5000 points was introduced into the CNN model for validation. The result is shown in Fig. 10(e). Although the spatial distribution of Fig. 10(e) from the new 5000 points showed more points of high absolute RBE, they had the same overall trends of distribution. Given that a part of the points was input to train the model in Fig. 10(a), it was expected that Fig. 10(a) had lower errors than Fig. 10(e), and there were many points with high point density over the line of 10% RBE, as shown in Fig. 10(f). In Fig. 10(e), we observed little difference between the results produced by the two datasets since the error points were mainly concentrated in shallow-water areas. Different sample points influenced the result, but they had a small impact on the result of space distribution. Multidimensional data input with spatial information, such as UQAA results, showed a greater impact on the model inversion effect compared with similar data from different sets. 
### _Bathymetry Maps of Models_ To figure out how well this method performs in practical applications, we introduced large-scale images with no a priori data into three previously trained models in Fig. 11. We observed significant visual differences between the results of the CNN and the other two models. The CNN model showed a relatively shallow depth distribution, which is close to the actual situation. #### Iv-E1 Bathymetry With Priori Data From priori data, despite the actual presence of relatively deep water near the port, the depth estimation obtained from the CNN model showed an unexpectedly higher value. This discrepancy highlights a drawback associated with the CNN model. However, it is important to note that both the RF and XGBoost algorithms exhibited similar Fig. 10: Spatial distribution of RBEs and correlation distribution with different algorithms and different dataset. (a) and (b) were with UQAA. (c) and (d) were without UQAA input. (e) and (f) was with another data set of 5000 points. spatial patterns, with a tendency toward deeper water depths near the port as well. The average water depth of the CNN model in a deep water area was about 7 m, which was close to the chart data, as displayed in Fig. 11(d) and (e). We show in Fig. 11(d), acquired from the nautical chart, the gentle variation of water depth distribution, which is more like the map of the CNN model. In Fig. 11(d), there is almost no water area near Nanshan Port that exceeds 10 m, except for an artificial channel in the middle. Machine learning models based on the mathematical and statistical principles have poor learning of outlier water depth. This phenomenon becomes apparent in the maps generated by the RF and XGBoost algorithms, leading to an overestimation of the overall water depths, surpassing the actual values. In contrast, the performance of the CNN model reveals the presence of the waterway, although its depth is underestimated compared with the measured values. However, it is worth noting that the overall depth of the waterway aligns more closely with the actual value at the same time. #### Iv-D2 Bathymetry Without Priori Data To further validate the predictive power of the model for no-prior data, it was found that there was an estuary outside the measured data area in the north part of Fig. 11(d). It can be set as a standard to measure the prediction ability of the model in an unknown area. In Fig. 11(a), the water depth in this area was incorrect. The estuary can be displayed in Fig. 11(b) and (c). RMSE is an important quantitative index to measure the accuracy of water depth inversion. However, RMSE is greatly impacted by the distribution of validation points. In areas lacking accurate validation data, it can only be verified by comparing the nautical chart. Although the CNN model has some deviation in the inversion of localized high values, it still shows a strong prediction ability and can be used in the actual thematic products. ## V Conclusion In this article, we proposed the idea of combining a UQAA with a CNN-based deep learning framework to estimate the water depth and extract underwater terrain data automatically. By using the UQAA, a map of water quality factors was drawn in Fig. 6. This was strongly related to the bottom brightness implying the water depth information. We extracted the bathymetric points from the in situ measured data by the WBMS system to train the CNN model with preprocessed GF-6 images and derived water quality factors. We considered four classic methods as baselines. 
We also discussed and evaluated the accuracy of bathymetry for different training sets and different algorithms. To verify the performance of the proposed method, we compared the inversion data results and image results with the validation data. They had been divided before and had no input into model training. Our comparison indicated that the CNN model with UQAA can outperform all the baselines with or without UQAA input, with the RMSE being 0.55, the MRE being 6.63%, and the \\(R^{2}\\) being 0.93 when the number of training set was 5000. In general, when the number of training points was between 1000 and 10 000, the results of bathymetry with UQAA results were better than those without UQAA results. The accuracy of water depth retrieval, especially in our proposed CNN model, can be improved by considering the spatial distribution and numerical analysis of the water quality parameters. Moreover, we analyzed the spatial distribution of errors between the estimated depth and the measured depth. We found that introducing UQAA results as feature data can decrease the errors in the shallow area near the port. The inversion feasibility of the CNN model with UQAA was tested in areas lacking accurate verification data as well. Generally speaking, in actual surveying and mapping, we pay more attention to the accuracy of the results. Considering the accuracy advantages and growth potential brought by deep learning, applying deep learning models to bathymetry is of great research value. Moreover, due to experimental conditions, we can only use the GPU of our laptop for inversion calculations. If we use a server cluster, we can further reduce the time cost of deep learning. For future research, more optical parameters obtained through QAA could be inputted into deep learning neural network model and applied to remote-sensing images with higher spatial resolution. These retrieval results could be applied to port management and underwater terrain acquisition. Moreover, the following directions could be explored for further improvement. 1. Other types of remote-sensing data, such as thermal infrared, microwave, or LiDAR, could be investigated to complement the optical data and provide more information for water depth estimation and underwater terrain extraction. 2. The effects of different environmental factors, such as water turbidity, sun glint, cloud cover, or wave height, could be examined on the accuracy and robustness of the proposed method, and adaptive strategies could be developed to cope with these challenges. 3. The CNN-based deep learning framework and the UQAA could be optimized by using more advanced network architectures, loss functions, regularization techniques, or data augmentation methods to enhance the feature extraction and fusion capabilities of the method. Fig. 11: Bathymetry maps at a large spatial scale. (a) Was the RF algorithm with UQAA. (b) Was the XGBoost algorithm with UQAA. (c) Was the CNN model with UQAA. (d) Was from nautical chart. (e) Was from NORBIT system. 4. The proposed method could be applied to other regions or scenarios that require accurate bathymetric maps, such as coastal erosion monitoring, coral reef conservation, marine habitat mapping, or underwater archaeology. ## References * [1] S. J. Purkis et al., \"High-resolution habitat and bathymetry maps for 65,000 sq. km of Earth's remotest coral reefs,\" _Cooral Refs_, vol. 38, pp. 467-488, 2019, doi: 10.1007/s00338-019-01802-y. * [2] J. Horta, A. Pacheco, D. Moura, and G. 
Ferreira, \"Can recreational echosounder-Camptlotter systems be used to perform accurate neashore bathymetric surveys?,\" _Ocean Dyn._, vol. 64, no. 11, pp. 1555-1567, 2014, doi: 10.1007/s10236-014-0773-y. * [3] J. Hedley, C. Roofsema, and S. Phinn, \"Efficient radiative transfer model inversion for remote sensing applications,\" _Remote Sens. Environ._, vol. 113, no. 11, pp. 2527-2532, 2009, doi: 10.1016/j.rese.2009.07.008. * [4] A. Dekker et al., \"Intercomparison of shallow water bathymetry, hydrogoptics, and benthyng techniques in Australian and Caribbean coastal environments,\" _Limmod. Oceanogr. Methods_, vol. 9, no. 9, pp. 396-425, 2011, doi: 10.4319/dom.2011.9.396. * [5] I. Caballero, R. Stumpf, and A. Meredith, \"Preliminary assessment of turbidity and chlorophyll impact on bathymetry derived from Sentinel-2A and Sentinel-3A satellites in South Florida,\" _Remote Sens._, vol. 11, no. 6, 2019, Art. no. 645, doi: 10.3390/rs11060645. * [6] T. Sagawa, Y. Yamashita, T. Okumura, and T. Yamanokuchi, \"Satellite derived bathymetry using machine learning and multi-temporal satellite images,\" _Remote Sens._, vol. 11, no. 10, 2019, Art. no. 1155, doi: 10.3390/rs11101155. * [7] V. Mateo-Perez, M. Corral-Bobadilla, F. Ortega-Fernandez, and E. P. Vergara-Gonzalez, \"Port bathymetry mapping using support vector machine technique and sentinel-2 satellite imagery,\" _Remote Sens._, vol. 12, no. 13, Jul. 2020, Art. no. 2069, doi: 10.3390/rs12132069. * [8] J. Zhong, J. Sun, Z. L. Lai, and Y. Song, \"Nearshore bathymetry from ICESA-2 LiDAR and sentinel-2 imagery datasets using deep learning approach,\" _Remote Sens._, vol. 14, no. 17, Sep. 2022, Art. no. 4229, doi: 10.3390/rs14174229. * [9] T. Hoeser, F. Bachorfer, and C. Kuenzer, \"Object detection and image segmentation with deep learning on Earth observation data: A review--Part II: Applications,\" _Remote Sens._, vol. 12, no. 18, Sep. 2020, Art. no. 3053, doi: 10.3390/rs12183053. * [10] T. Hoeser and C. Kuenzer, \"Object detection and image segmentation with deep learning on Earth observation data: A review--Part I: Evolution and recent trends,\" _Remote Sens._, vol. 12, no. 10, May 2020, Art. no. 1667, doi: 10.3390/rs12101667. * [11] L. Ma, Y. Liu, X. Zhang, Y. Ye, G. Yin, and B. A. Johnson, \"Deep learning in remote sensing applications: A meta-analysis and review,\" _ISPRS J. Photogramum. Remote Sens._, vol. 152, pp. 166-177, 2019, doi: 10.1016/j.ISPRS.2019.04.015. * [12] Z. Chen, G. Wu, H. Gao, Y. Ding, D. Hong, and B. Zhang, \"Local aggregation and global attention network for hyperspectral image classification with spectral-induced aligned superpixel segmentation,\" _Expert Syst. Appl._, vol. 232, 2023, Art. no. 120828. * [13] Z. Chen, D. Hong, and H. Gao, \"Grid network: Feature extraction in anisotropic perspective for hyperspectral image classification,\" _IEEE Geosci. Remote Sens. Lett._, vol. 20, Jul. 2023, Art. no. 5507105, doi: 10.1109/LGRS.2023.3297612. * [14] D. Hong, L. Gao, J. Yao, B. Zhang, A. Plaza, and J. Chanussot, \"Graph convolutional networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 7, pp. 5966-5978, Jul. 2021. * [15] X. Cao, X. Fu, C. Xu, and D. Meng, \"Deep spatial-spectral global reasoning network for hyperspectral image denoising,\" _IEEE Trans. Geosci. Remote Sens._, vol. 60, Apr. 2021, Art. no. 5504714, doi: 10.1109/TGRS.2021.3069241. * [16] X. Wu, D. Hong, and J. 
Chanussot, "Convolutional neural networks for multimodal remote sensing data classification," _IEEE Trans. Geosci. Remote Sens._, vol. 60, Nov. 2021, Art. no. 5517010, doi: 10.1109/TGRS.2021.3124913. * [17] Z. Chen et al., "Global to local: A hierarchical detection algorithm for hyperspectral image target detection," _IEEE Trans. Geosci. Remote Sens._, vol. 60, Dec. 2022, Art. no. 5544915, doi: 10.1109/TGRS.2022.3225902. * [18] M. El-Disary, "Satellite-based bathymetric modeling using a wavelet network mode," _ISPRS Int. J. Geo-Inf._, vol. 8, no. 9, Sep. 2019, Art. no. 405, doi: 10.3390/ijig

Wei Shen received the Ph.D. degree in cartography and geographic information system from Beijing Normal University, Beijing, China, in 2007. He is currently a Professor and master's supervisor with the College of Marine Science, Shanghai Ocean University, Shanghai, China, the Head of the Marine Surveying and Mapping major, and a member of the Marine Surveying and Mapping Professional Committee of the Chinese Society of Surveying and Mapping. His research interests include marine surveying and mapping, GIS, RS, LIDAR, underwater information detection and processing, and virtual reality and simulation. Over the past five years, he has authored or coauthored 25 papers in domestic and foreign core journals (including 7 international indexed papers).

Muyin Chen is currently working toward the M.Sc. degree in marine sciences with Shanghai Ocean University, Shanghai, China. His research interests include the application of deep learning to marine remote sensing optical data.

Zhongqiang Wu (Member, IEEE) received the M.Sc. degree in marine sciences from Shanghai Ocean University, Shanghai, China, in 2016, and the Ph.D. degree in geography from Nanjing University, Nanjing, China, in 2022. He is currently a Lecturer with the School of Information Science and Technology, Hainan Normal University, Haikou, China. His research interests include ocean remote sensing and remote-sensing-based bathymetry.

Jiaqi Wang is currently working toward the M.Sc. degree in marine sciences with Shanghai Ocean University, Shanghai, China. His research interests include the application of machine learning to marine remote sensing optical data.
In a seaport, accurate bathymetric maps are valuable for both environmental and economic reasons. One of the main complementary methods for measuring shallow-water depth is the retrieval of the water depth by satellite. The results of the water depth inversion are greatly influenced by factors related to water quality. The proposed updated quasi-analytical algorithm (UQAA) allows for the calculation of water quality factors, and their spatial distribution characteristic strongly correlates with the trend in water depth distribution. By using satellite-derived bathymetry, these parameters can be used in the model training to extract the underwater terrain. This article proposes the idea of combining the UQAA with a convolutional neural network (CNN) based deep learning framework to retrieve the depth of the water and automatically extract the underwater terrain. We compare four different existing machine learning algorithms as baselines, using GF-6 multispectral remote-sensing images and in situ depth data in Nanshan Port as a priori and validation data. We find that the result of the CNN model using the UQAA is better than other baselines, where the root-mean-square error was down to 0.55 m, the mean relative error was 6.63%, and the \\(R^{2}\\) was 0.92. The developed method, which introduces the water quality factors containing geographic information as feature quantities, provides a new direction for further improvement.
Index Terms: Bathymetry, convolutional neural networks (CNNs), deep learning, GF-6, inherent optical properties (IOPs).
# Kernel Approximation on a Quantum Annealer for Remote Sensing Regression Tasks Edoardo Pasetto, Morris Riedel, Kristel Michielsen, and Gabriele Cavallaro, Manuscript received 31 January 2023; revised 27 August 2023; accepted 31 December 2023. Date of publication 5 January 2024; date of current version 19 January 2024. This work was supported in part by the project JUNIQ that has received funding from the German Federal Ministry of Education and Research (BMRF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia, in part by the European High-Performance Computing Joint Undertaking (JU) under Grant EUROCCE project, in part by EU/EEA states under Grant 10110903, and in part by the Center of Excellence (CoE) Research on AI- and Simulation-Based Engineering at Exascale (RAISE) receiving funding from EU's Horizon 2020 Research and Innovation Framework Programme under Grant H2020-INFRAEDI-2019-1 and Grant 951733. _Corresponding author: Gabriele Cavallaro._ Edoardo Pasetto and Kristel Michielsen are with Julich Supercomputing Centre, Wilhelm-Johnen-Strasse, 52428 Julich, Germany, also with RWTH Aachen University, D-52056 Aachen, Germany, and also with ADAS, 52425 Julich, Germany (e-mail: [email protected]; [email protected]). Morris Riedel and Gabriele Cavallaro are with the University of Iceland, 107 Reykjavik, Iceland, also with Julich Supercomputing Centre, Wilhelm-Johnen-Strasse, 52428 Julich, Germany, and also with ADAS, 52425 Julich, Germany (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/ISTARS.2024.3350385 ## I Introduction The task of estimating biophysical quantities from remote sensing (RS) measurement data is a well-studied problem in the research community, covering a range of applications such as water chlorophyll concentration estimation [1, 2, 3], ozone concentration estimation [4], and crop yield prediction [5]. The task can be interpreted as an inverse modeling problem whose objective is to find a relationship between acquired measurements of some specific physical quantities and a value of interest [1]. From a formal point of view, the objective is to determine a function \\(y=f(\\mathbf{x}):\\,\\mathbb{R}^{d}\\rightarrow\\mathbb{R}\\), where \\(\\mathbf{x}\\in\\mathbb{R}^{d}\\) is the input feature vector containing the data of the optical measurements and the scalar \\(y\\in\\mathbb{R}\\) is the quantity of interest to be determined. The learning process of the function \\(f(\\cdot)\\) is carried out by observing a training set, i.e., a set of \\(N\\) pairs of measurement vectors and their corresponding target values \\(\\{(\\mathbf{x}_{i},y_{i}),\\ i=1,\\ldots,N\\}\\). Regression tasks in RS have been studied by applying different supervised learning algorithms, and among the most popular are support vector regression (SVR) [6, 7], kernel ridge regression (KRR) [8], and Gaussian process regression (GPR) [9]. A common feature of these methods is the usage of a _kernel function_ \\(k(\\mathbf{x},\\mathbf{x}^{\\prime})\\), which allows the dot product between nonlinear maps of the input vectors in a transformed feature space to be calculated directly from the original input vectors, i.e., \\(k(\\mathbf{x},\\mathbf{x}^{\\prime})=\\phi(\\mathbf{x})^{T}\\phi(\\mathbf{x}^{\\prime})\\), where \\(\\phi(\\cdot)\\) is a nonlinear feature map.
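A small numerical check makes the definition \\(k(\\mathbf{x},\\mathbf{x}^{\\prime})=\\phi(\\mathbf{x})^{T}\\phi(\\mathbf{x}^{\\prime})\\) concrete. The quadratic feature map below is a textbook example chosen purely for illustration; it is not the map employed in this work.

```python
import numpy as np

def phi(x):
    """Explicit quadratic feature map for 2-D inputs:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def k_poly2(x, xp):
    """Homogeneous polynomial kernel of degree 2: k(x, x') = (x . x')^2."""
    return np.dot(x, xp) ** 2

x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])
# The kernel evaluation equals the dot product in the transformed feature space.
print(np.isclose(np.dot(phi(x), phi(xp)), k_poly2(x, xp)))   # True
```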
One of the advantages of using kernel methods comes from the so-called _kernel trick:_ if, in the mathematical formulation of a learning algorithm, feature vectors appear only as dot products between them, it is possible to \"kernelize\" the algorithm by substituting such products with the kernel function calculated on the same feature vectors [10, 11]. The main characteristic of this procedure is that it is not necessary to know the nonlinear feature mapping \\(\\phi(\\cdot)\\) nor the transformed vectors themselves, since the only information needed can be obtained implicitly by the evaluation of the kernel function. Kernel methods, however, tend to scale badly as the size of the training set increases [12]. Starting from this observation, Rahimi and Recht [12, 13] proposed the random kitchen sinks (RKS) kernel approximation algorithm, which approximates the kernel function by using randomized features. This procedure, also known as random Fourier features, therefore does not employ a kernel function but instead explicitly generates transformed feature vectors through randomization. Quantum computing (QC) [14, 15] is a computational model based upon the properties of quantum mechanics that was theoretically proven to have the potential to outperform classical computers in terms of computational complexity on some specific tasks [16, 17]. However, the availability of a reliable large-scale quantum computer might still be a distant goal [18]. The growing interest in the application of different QC algorithms to enhance machine learning (ML) frameworks laid the foundations for the development of the research field of quantum machine learning (QML) [19, 20, 21, 22, 23]. In the context of RS, QML has been applied to image classification through the use of a hybrid quantum-classical neural network whose quantum layer was implemented with a parametrized quantum circuit [24, 25, 26]. A QML-based implementation of the random Fourier features has recently been proposed with gate-based quantum computing [27] and quantum annealing (QA) [28]. In the QA-based implementation, also referred to as adiabatic quantum kitchen sinks (AQKS), data are linearly encoded in the Hamiltonian of a quantum system which is then evolved, and the measurement values taken at the end of the process are used to generate the transformed feature vectors, which are then used to train a support vector machine (SVM) for binary classification tasks. In this work, the AQKS kernel approximation algorithm is applied to two different kernel-based regression algorithms, SVR and GPR, on two real RS datasets related to chlorophyll concentration estimation. The obtained results are then compared with those obtained by the corresponding traditional kernel-based versions and by the same algorithms trained using the classical RKS kernel approximation. The implementation of the AQKS kernel approximation algorithm is done using a D-Wave Advantage system quantum annealer, whereas the work in [28] simulated the quantum system through Trotterization. Moreover, since the workflow of AQKS requires solving many small problems with the quantum annealer, the concept of _parallel quantum annealing_ [29] was used in order to reduce the computational time during the learning process by running multiple problem instances in the same annealing cycle.
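For reference, the classical RKS idea mentioned above can be made concrete with a short sketch. The following NumPy example is ours and purely illustrative (the function names, the value of gamma, and the number of random components are assumptions, not material from the article): it approximates an RBF kernel with random Fourier features and compares the approximate Gram matrix with the exact one.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Exact RBF kernel: k(x, x') = exp(-||x - x'||^2 / gamma).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / gamma)

def random_fourier_features(X, n_components=2000, gamma=1.0, seed=0):
    # Rahimi-Recht randomized features: z(x) = sqrt(2/D) * cos(W x + b),
    # with W drawn from the Fourier transform of the kernel and b uniform,
    # so that z(x)^T z(x') approximates k(x, x').
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 / gamma), size=(d, n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(50, 5))
    K_exact = rbf_kernel(X, X, gamma=2.0)
    Z = random_fourier_features(X, gamma=2.0)
    print("max abs deviation:", np.abs(K_exact - Z @ Z.T).max())
```

The deviation shrinks as the number of random components grows, which is the property that randomized-feature approximations such as RKS (and, in spirit, AQKS) rely on.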
Solving several problems in a single annealing cycle is possible because, if two or more problems are independent, they can be solved in the same annealing cycle by solving the optimization problem obtained by summing them together. For the sake of clarity in the notation, the algorithms implemented with a traditional kernel, the AQKS kernel approximation, and the RKS kernel approximation are referred to as _classical_, _quantum_, and _RKS-based_, respectively. Our contributions in this work can be summarized as follows: implementation of AQKS on a real QA device, application of such a scheme to regression problems with two different algorithms, integration of AQKS with parallel QA to reduce the computational time, and, to the best of our knowledge, the first application of such a scheme to a real RS use-case. ## II Quantum Annealing and QUBO Problem Formulation To solve a problem with a quantum annealer it is necessary to reformulate it as a quadratic unconstrained binary optimization (QUBO) problem, which corresponds to the optimization of the following energy function: \\[\\min_{a_{1},\\ldots,a_{N}}\\ \\sum_{i=1}^{N}\\sum_{j=i}^{N}a_{i}Q_{ij}a_{j} \\tag{1}\\] where \\(a_{i}\\in\\{0,1\\}\\) and \\(Q\\) is an upper-triangular matrix containing the coefficients of the problem, referred to as the QUBO weight matrix. By defining \\(\\mathbf{a}\\ \\in\\{0,1\\}^{N}\\triangleq[a_{1},\\ldots a_{N}]\\) it is possible to rewrite (1) in matrix product form as \\[\\min_{\\mathbf{a}}\\ \\mathbf{a}^{T}Q\\mathbf{a}. \\tag{2}\\] Alternatively, it is also possible to reformulate the problem as an Ising spin model [30], which is a binary model whose variables take values in the set \\(\\{-1,+1\\}\\). For QA purposes both problem formulations can be used. ## III Kernel Regression Methods In this section a description of the classical kernel-based regression methods is provided. In principle any symmetric and positive semidefinite function \\(k(\\mathbf{x},\\mathbf{x}^{\\prime})\\) can be used as a kernel function [12]. In ML, one of the most popular choices of kernel function is the radial basis function (RBF) kernel, which has the property of depending only on the distance between the inputs, i.e., \\(k(\\mathbf{x},\\mathbf{x}^{\\prime})=k(||\\mathbf{x}-\\mathbf{x}^{\\prime}||)\\). The formula of the RBF kernel is as follows: \\[k(\\mathbf{x},\\mathbf{x}^{\\prime})=\\exp\\bigg{(}-\\frac{||\\mathbf{x}-\\mathbf{x}^{\\prime}||^{2}}{\\gamma}\\bigg{)}. \\tag{3}\\] The prediction function of the kernel-based algorithms used in this work can be formulated as a weighted sum of kernel function evaluations between the \\(N\\) training data points and the input vector \\(\\mathbf{x}\\) \\[f(\\mathbf{x})=\\sum_{i=1}^{N}\\alpha_{i}k(\\mathbf{x}_{i},\\mathbf{x})+b \\tag{4}\\] where \\(\\alpha_{1},\\ldots,\\alpha_{N}\\) are a set of scalars whose values are determined in the learning phase on the training set. The prediction function is linear with respect to the kernel function evaluations, so the nonlinear modeling in the original feature space is achieved by applying a linear model in the transformed feature space. In the following, \\(\\mathbf{X}\\) denotes the \\(N\\times d\\) _design matrix_ in which each row corresponds to a training sample, i.e., \\(\\mathbf{X}[i,:]=\\mathbf{x}_{i},\\ i=1,\\ldots,N\\), and \\(\\mathbf{y}\\ \\in\\mathbb{R}^{N}\\) denotes the corresponding target vector.
Let us also define as \\(\\mathbf{K}\\) the \\(N\\times N\\) symmetric matrix, referred to as the _Gram_ matrix, that stores the kernel function evaluation between every pair of training samples \\(\\mathbf{x}_{i}\\) and \\(\\mathbf{x}_{j}\\), i.e., \\(\\mathbf{K}_{ij}=\\mathbf{K}_{ji}=k(\\mathbf{x}_{i},\\mathbf{x}_{j})\\). ### _Support Vector Regression_ The formulation of the SVR can be obtained by considering the optimization of a regularized regression problem where the considered loss function is an \\(\\epsilon\\)-insensitive loss function [31], i.e., a function that gives an error only if the absolute difference between the actual value and the predicted one is greater than a value \\(\\epsilon>0\\) [10] \\[\\mathcal{L}_{\\epsilon}(f(\\mathbf{x})-y)=\\begin{cases}0,&\\text{if}\\ |f(\\mathbf{x})-y|<\\epsilon;\\\\ |f(\\mathbf{x})-y|-\\epsilon,&\\text{otherwise}.\\end{cases} \\tag{5}\\] The loss function to be minimized is then \\[C\\sum_{n=1}^{N}\\mathcal{L}_{\\epsilon}(f(\\mathbf{x}_{n})-y_{n})+\\frac{1}{2}||\\mathbf{w}||^{2}. \\tag{6}\\] In the formula, \\(C\\) is a parameter that controls the overfitting; by convention it multiplies the error term and can therefore be thought of as an (inverse) regularization parameter [10]. The vector \\(\\mathbf{w}\\) is associated with the linear coefficients in the transformed feature space. It can be shown that the training of the SVR amounts to maximizing the following constrained objective [10]: \\[L(\\boldsymbol{\\alpha},\\hat{\\boldsymbol{\\alpha}})=-\\frac{1}{2}\\sum_{n=1}^{N}\\sum_{m=1}^{N}(\\alpha_{n}-\\hat{\\alpha}_{n})(\\alpha_{m}-\\hat{\\alpha}_{m})k(\\mathbf{x}_{n},\\mathbf{x}_{m})-\\epsilon\\sum_{n=1}^{N}(\\alpha_{n}+\\hat{\\alpha}_{n})+\\sum_{n=1}^{N}(\\alpha_{n}-\\hat{\\alpha}_{n})y_{n} \\tag{7}\\] subject to the constraints \\[\\sum_{n=1}^{N}(\\alpha_{n}-\\hat{\\alpha}_{n})=0 \\tag{8a}\\] \\[0\\leq\\alpha_{n}\\leq C\\] (8b) \\[0\\leq\\hat{\\alpha}_{n}\\leq C \\tag{8c}\\] with respect to the variables \\(\\alpha_{i}\\) and \\(\\hat{\\alpha}_{i}\\) with \\(i\\in\\{1,\\ldots,N\\}\\). Once the values of \\(\\alpha_{1},\\ldots,\\alpha_{N}\\) and \\(\\hat{\\alpha}_{1},\\ldots,\\hat{\\alpha}_{N}\\) have been determined, a prediction on an input sample \\(\\mathbf{x}\\) can be made through the formula \\[f(\\mathbf{x})=\\sum_{n=1}^{N}(\\alpha_{n}-\\hat{\\alpha}_{n})k(\\mathbf{x},\\mathbf{x}_{n})+b. \\tag{9}\\] The value of \\(b\\) can be obtained from any point for which \\(0<\\alpha_{n}<C\\) or \\(0<\\hat{\\alpha}_{n}<C\\) through the formula \\[b=y_{n}-\\epsilon-\\sum_{m=1}^{N}(\\alpha_{m}-\\hat{\\alpha}_{m})k(\\mathbf{x}_{n},\\mathbf{x}_{m}). \\tag{10}\\] It is preferable, however, to average over multiple data points in order to get a more stable estimate [10]. ### _Gaussian Process Regression_ The regression approach of GPR is different from that of SVR because it provides an output distribution of the target \\(\\mathbf{y}\\) instead of a point estimate. Such a probability distribution is Gaussian and is therefore completely determined by the values of the mean \\(\\mu^{*}\\) and variance \\(\\sigma^{*}\\). In GPR, the relationship between the input vectors stored in \\(\\mathbf{X}\\) and the target values is modeled as a sum of a Gaussian multivariate function \\(\\mathcal{N}(\\mathbf{0},\\mathbf{K})\\) and an independent noise component \\(\\mathcal{N}(\\mathbf{0},\\beta\\mathbf{I}_{N})\\).
The Gram matrix is used to construct the covariance matrix that models the generation process of the training set. By the properties of the Gaussian function [10], the target values assume the following probability distribution: \\[\\mathbf{y}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{K}+\\beta\\mathbf{I}_{N}). \\tag{11}\\] To make a prediction on an unseen input \\(\\mathbf{x}\\), let us consider \\(\\mathbf{X}^{*}\\), the \\((N+1)\\times d\\) matrix obtained by vertically concatenating the vector \\(\\mathbf{x}\\) to the matrix \\(\\mathbf{X}\\), i.e., the last row of \\(\\mathbf{X}^{*}\\) is equal to the investigated input vector \\(\\mathbf{x}\\) while the other rows are equal to the rows of the design matrix \\(\\mathbf{X}\\). The probability distribution of the associated output vector \\(\\mathbf{y}^{*}\\in\\mathbb{R}^{N+1}\\), according to the GPR framework, is \\[\\mathbf{y}^{*}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{K}^{*}+\\beta\\mathbf{I}_{N+1}). \\tag{12}\\] The \\((N+1)\\times(N+1)\\) matrix \\(\\mathbf{K}^{*}\\) is the Gram matrix calculated on the design matrix \\(\\mathbf{X}^{*}\\). In the prediction phase the first \\(N\\) elements of the vector \\(\\mathbf{y}_{i}^{*},\\ i\\in\\{1,\\ldots,N\\}\\) are fixed to the values of the training samples \\(y_{i}\\). The last element of \\(\\mathbf{y}^{*}\\), which is the value of interest in the regression problem, will have a probability distribution that depends on the values taken by the first \\(N\\) entries of the vector and on the kernel function evaluations stored in the Gram matrix \\(\\mathbf{K}^{*}\\). Because of the properties of the Gaussian multivariate function, such a conditional posterior probability is still Gaussian and its parameters are given by \\[\\mu^{*}=\\boldsymbol{\\kappa}^{T}(\\mathbf{K}+\\beta\\mathbf{I}_{N})^{-1}\\mathbf{y} \\tag{13}\\] \\[\\sigma^{*}=k(\\mathbf{x},\\mathbf{x})-\\boldsymbol{\\kappa}^{T}(\\mathbf{K}+\\beta\\mathbf{I}_{N})^{-1}\\boldsymbol{\\kappa} \\tag{14}\\] where \\(\\mu^{*}\\) and \\(\\sigma^{*}\\) denote the mean and variance, respectively, and \\(\\boldsymbol{\\kappa}\\) is defined as \\(\\boldsymbol{\\kappa}\\in\\mathbb{R}^{N}\\triangleq[k(\\mathbf{x}_{1},\\mathbf{x}),\\ldots,k(\\mathbf{x}_{N},\\mathbf{x})]\\). By defining \\(\\boldsymbol{\\alpha}\\in\\mathbb{R}^{N}\\triangleq(\\mathbf{K}+\\beta\\mathbf{I}_{N})^{-1}\\mathbf{y}\\), (13) can be expressed in the form of (4) as \\(\\boldsymbol{\\alpha}^{T}\\boldsymbol{\\kappa}\\). Since in this work we were interested in a point estimate of the target values, the value of the mean was taken as the prediction output of the GPR. ## IV Adiabatic Quantum Kitchen Sinks An implementation of RKS employing parametric quantum circuits as random feature generators has recently been proposed [28]. In such a procedure, data are encoded in the parameters of a quantum circuit, i.e., the rotation angles of the quantum gates that make up the circuit, and the randomization in the feature generation process is obtained by carrying out the measurement on the quantum state after the application of the quantum circuit. The key aspect of this method is that the data encoding is done by a linear function; therefore, the nonlinear modeling achieved in the feature transformation is attributable to quantum computation effects. In the QA-based AQKS implementation, data are encoded in a QUBO problem that is then solved with QA. The resulting solution after the Hamiltonian evolution is then used to construct the transformed feature vectors.
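Before turning to the details of this encoding, the GPR prediction just derived can be written out in a few lines. The following NumPy sketch is ours and illustrative only (the kernel choice, synthetic data, and the values of beta and gamma are assumptions); it implements the posterior mean and variance of (13) and (14) directly.

```python
import numpy as np

def rbf(x1, x2, gamma=1.0):
    # RBF kernel value between two feature vectors, k(x, x') = exp(-||x - x'||^2 / gamma).
    return np.exp(-np.sum((x1 - x2) ** 2) / gamma)

def gpr_predict(X, y, x_star, beta=1e-2, gamma=1.0):
    # Posterior mean and variance of GPR at x_star, following (13)-(14):
    #   mu*    = kappa^T (K + beta I)^(-1) y
    #   sigma* = k(x*, x*) - kappa^T (K + beta I)^(-1) kappa
    N = X.shape[0]
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(N)] for i in range(N)])
    kappa = np.array([rbf(X[i], x_star, gamma) for i in range(N)])
    A_inv = np.linalg.inv(K + beta * np.eye(N))
    mu = kappa @ A_inv @ y
    var = rbf(x_star, x_star, gamma) - kappa @ A_inv @ kappa
    return mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(30, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=30)
    mu, var = gpr_predict(X, y, np.array([0.5]))
    print(f"mean={mu:.3f}, variance={var:.3f}")
```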
The AQKS encoding is determined by \\(E\\) random matrices \\(\\mathbf{A}_{i},\\ i=1,\\ldots,E\\) of size \\(q\\times d\\) and \\(E\\) random vectors \\(\\mathbf{b}_{i},\\ i=1,\\ldots,E\\) of size \\(q\\), where \\(q\\) is a hyperparameter that controls the dimension of the resulting QUBO problem and \\(d\\) is the dimension of the input feature space. For each training sample \\(\\mathbf{x}_{i}\\), \\(E\\) random vectors \\(\\mathbf{h}_{i}^{e}\\) are generated with the formula \\[\\mathbf{h}_{i}^{e}=\\mathbf{A}_{e}\\mathbf{x}_{i}+\\mathbf{b}_{e} \\tag{15}\\] where the subscript \\(i\\) and the superscript \\(e\\) are used to denote the random vector \\(\\mathbf{h}\\) generated from training sample \\(i\\) at episode \\(e\\). Each vector \\(\\mathbf{h}_{i}^{e}\\) is then encoded in a QUBO problem of size \\(q\\) with the following rule: \\[Q_{l} =\\mathbf{h}_{i,l}^{e} \\tag{16}\\] \\[Q_{l,m} =\\mathbf{h}_{i,l}^{e}\\mathbf{h}_{i,m}^{e} \\tag{17}\\] with \\(l,m\\in\\{1,\\ldots,q\\}\\). At the end of the annealing evolution the vector \\(\\phi(\\mathbf{x}_{i},\\mathbf{A}_{e},\\mathbf{b}_{e})\\) of length \\(q\\) is obtained by performing a measurement process and by normalizing by a factor \\(1/E\\). The transformed feature vector \\(\\mathbf{z}_{i}\\) of size \\(E\\times q\\) is then obtained by concatenating the \\(E\\) vectors \\(\\{\\phi(\\mathbf{x}_{i},\\mathbf{A}_{e},\\mathbf{b}_{e}),\\ e=1,\\ldots,E\\}\\). The encoding procedure is again linear and therefore any nonlinearity in the data transformation comes from the QA process. The complete algorithmic workflow for generating the transformed AQKS features, defined by Noori et al. [28], is outlined in Algorithm 1 for convenience.
```
Input parameters: training samples {x_1, ..., x_N}, p(A), p(b), E, q
Output: transformed feature vectors z_1, ..., z_N
Sample A_1, ..., A_E and b_1, ..., b_E from p(A) and p(b)
for i = 1, ..., N do
    for e = 1, ..., E do
        apply the encoding h_i^e = A_e x_i + b_e
        encode h_i^e in a QUBO weight matrix
        obtain phi(x_i, A_e, b_e) by performing measurement and
        normalization after the annealing evolution
    end for
    concatenate the vectors phi(x_i, A_e, b_e) to get
    z_i = [phi(x_i, A_1, b_1), ..., phi(x_i, A_E, b_E)]
end for
```
**Algorithm 1** AQKS Feature Vectors Generation. The distribution p(\\(\\mathbf{A}\\)) is generally a multivariate Gaussian where each element of \\(\\mathbf{A}\\) follows a normal distribution \\(\\mathcal{N}(\\mu_{a},\\sigma_{a})\\), while p(\\(\\mathbf{b}\\)) is a uniform distribution. In our experiments, for each annealing cycle a total of 1000 readouts were considered by setting the parameter _num_reads_ in the sampling function from the D-Wave software accordingly. The final value was obtained by taking a weighted average over the obtained samples, using the relative occurrence of each vector as the weighting factor. The workflow of AQKS requires the solving of \\(N\\times E\\) QUBO problems of size \\(q\\) to generate the transformed feature vectors.
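To illustrate the workflow of Algorithm 1 end to end, the following sketch is a classical stand-in that we wrote for illustration: it follows the same encoding of (15)-(17) but replaces the quantum annealer with an exhaustive QUBO minimizer, which is feasible only because \\(q\\) is small. On real hardware, the annealer readouts (and their weighted average) would take the place of the brute-force solver; the choice of p(\\(\\mathbf{b}\\)) below and all function names are our assumptions.

```python
import itertools
import numpy as np

def solve_qubo(Q):
    # Brute-force stand-in for the annealer: minimize a^T Q a over {0,1}^q.
    q = Q.shape[0]
    best, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=q):
        a = np.array(bits)
        e = a @ Q @ a
        if e < best_e:
            best, best_e = a, e
    return best

def aqks_features(X, E=50, q=4, mu_a=0.0, sigma_a=0.01, seed=0):
    # Generate AQKS-style transformed feature vectors following Algorithm 1.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.normal(mu_a, sigma_a, size=(E, q, d))   # episode encoding matrices
    b = rng.uniform(-1.0, 1.0, size=(E, q))         # episode offset vectors (assumed range)
    Z = []
    for x in X:
        phis = []
        for e in range(E):
            h = A[e] @ x + b[e]                      # linear encoding, as in (15)
            Q = np.outer(h, h)                       # quadratic couplings, as in (17)
            np.fill_diagonal(Q, h)                   # linear terms on the diagonal, as in (16)
            phis.append(solve_qubo(np.triu(Q)) / E)  # "measurement" followed by 1/E scaling
        Z.append(np.concatenate(phis))
    return np.array(Z)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(5, 5))
    Z = aqks_features(X, E=10, q=4)
    print(Z.shape)  # (5, 40): E*q features per sample
```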
The values of the parameters used in the experiments in this work were \\(E\\)=50 and \\(q\\)=4 for the NOMAD dataset and \\(E\\)=100 and \\(q\\)=2 for the SeaBAM dataset, whereas for both cases \\(\\mu_{a}\\)=0, \\(\\sigma_{a}\\)=0.01. The vector \\(\\mathbf{b}\\) was ignored in the encoding phase. Since the value of \\(q\\) is generally small, then the annealer will be used to solve many problems of small size in which the vast majority of the available physical qubits will remain unused. In this work, therefore we integrate AQKS with parallel QA to run multiple problem instances together to reduce the computational time. The implementation of AQKS with parallel QA will be referred to as parallel AQKS. ## V Parallel QA When solving a QUBO problem with a D-Wave quantum annealer the problem graph must be _minor-embedded_[32] in the quantum processing unit. This is done because the hardware topology, which is a Chimera topology for the D-Wave 2000Q and a Pegasus topology for Advantage, does not provide a full connectivity on the hardware graph and therefore, it is often necessary to represent a logical qubit with multiple physical qubits. During this process each logical qubit, which corresponds to a binary variable in the QUBO model, is mapped to a group of connected qubits, which are referred to as a _chain_. The first step in the minor embedding process is the construction of the problem graph \\(G(V,E)\\), in which each of the nodes in \\(V\\) represent a binary variable in the QUBO problem and for each quadratic term in the QUBO a weighted edge with weight equal to the corresponding quadratic coefficient is added. The problem graph is then minor-embedded in the graph defined by the hardware topology. After that, a subgraph of the quantum hardware topology will be then assigned to the problem and the solver will start the annealing procedure on the qubits of such subgraph. In some cases, especially if the problem is of small dimension, it will happen that many of the available qubits will remain unused during the annealing process. Starting from this observation, parallel QA [29] was proposed in order to make better use of the available quantum hardware, considering that two or more independent QUBO problem can be solved together in the same annealing cycle. Let us in fact consider two QUBO problems \\(Q^{1}\\) and \\(Q^{2}\\), of size \\(m\\) and \\(n\\), respectively. For the sake of convenience in the notation, let us also denote the variables of \\(Q^{1}\\) as \\(\\{a_{1},\\ldots,a_{m}\\}\\ \\in\\{0,1\\}^{m}\\), and those of \\(Q^{2}\\) as \\(\\{a_{m+1},\\ldots,a_{m+n}\\}\\ \\in\\{0,1\\}^{n}\\). Now let us consider the QUBO problem \\(Q^{*}\\triangleq Q^{1}+Q^{2}\\), whose variables will then be \\(a_{1},\\ldots,a_{m+n}\\in\\{0,1\\}^{m+n}\\). It is easy to verify from the problem definition that the minimum of \\(Q^{*}\\) is equal to the sum of the minimum of \\(Q^{1}\\) and \\(Q^{2}\\). Moreover, the optimal solution of \\(Q^{*}\\) will preserve the optimal solutions of \\(Q^{1}\\) and \\(Q^{2}\\), i.e., the first \\(m\\) variables of the optimal solution of \\(Q^{*}\\) will be equal to the optimal solution of \\(Q^{1}\\) whereas the remaining \\(n\\) variables will be equal to the optimal solution of \\(Q^{2}\\). 
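This decomposition property can be checked numerically with a small illustrative sketch (ours; the random coefficients are arbitrary): summing two independent QUBOs corresponds to placing their weight matrices on the block diagonal of a larger matrix, and the optimum of the combined problem splits into the optima of its parts.

```python
import itertools
import numpy as np

def solve_qubo(Q):
    # Exhaustive minimization of a^T Q a over binary vectors (small problems only).
    n = Q.shape[0]
    return min((np.array(b) for b in itertools.product([0, 1], repeat=n)),
               key=lambda a: a @ Q @ a)

rng = np.random.default_rng(0)
Q1 = np.triu(rng.normal(size=(3, 3)))   # first independent QUBO
Q2 = np.triu(rng.normal(size=(4, 4)))   # second independent QUBO

# Combined problem Q* = Q1 + Q2: block-diagonal, no couplings between the parts.
Q_star = np.zeros((7, 7))
Q_star[:3, :3] = Q1
Q_star[3:, 3:] = Q2

a_star = solve_qubo(Q_star)
print("combined solution:", a_star)
print("matches Q1 solution:", np.array_equal(a_star[:3], solve_qubo(Q1)))
print("matches Q2 solution:", np.array_equal(a_star[3:], solve_qubo(Q2)))
```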
The problem graph related to \\(Q^{*}\\), since there are no edges between \\(a_{i}\\) and \\(a_{j}\\) with \\(i\\ \\in\\{1,\\ldots,m\\}\\) and \\(j\\ \\in\\{m+1,\\ldots,m+n\\}\\), will be composed of two independent subgraphs that are identical to the problem graphs of \\(Q^{1}\\) and \\(Q^{2}\\). This reasoning can be extended to more than two problems, thus setting the theoretical basis for solving multiple QUBO problems together. The structure of the encoding problem defined in Section IV is a fully connected graph of size \\(q\\). Each of the \\(N\\times E\\) problems that are needed to generate the feature vectors has the same graph structure; therefore, the same embedding scheme can be used when solving that many problems together. By solving multiple QUBO problems in parallel we therefore managed to obtain the feature transformation for 20 samples in each annealing cycle. The complete workflow for the proposed parallel implementation of AQKS is outlined in Algorithm 2. In the pseudocode of Algorithm 2, it was assumed that the number of training samples \\(N\\) is a multiple of the number of samples processed in each annealing cycle, _samples_per_run_. If this is not the case, i.e., \\(N=p*samples\\_per\\_run+r,\\text{with}\\ p,r\\ \\in\\mathbb{N}\\) and \\(0<r<samples\\_per\\_run\\), the algorithm will run with \\(num\\_iteration=p+1\\): the first \\(p\\) iterations will follow the procedure described by Algorithm 2, while the last one will iterate the for loop over the variable \\(n\\) over \\(1,\\ldots,r\\) instead of \\(1,\\ldots,samples\\_per\\_run\\). ## VI Experimental Validation ### _Datasets_ The experimental validation in this work has been carried out on two real RS datasets related to water chlorophyll concentration [33]. 1. _SEABAM_ [34] (SeaWiFS Bio-optical Algorithm Mini-Workshop): The first dataset contains 919 in situ measurements of chlorophyll concentration in water taken from several locations in the U.S. and Europe. However, due to some missing data values, only 793 samples were used in the experiments. The measurements were carried out with the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) at five different wavelengths (412, 443, 490, 510, and 555 nm), and the chlorophyll concentration takes values between 0.019 and 32.787 _mg/m\\({}^{3}\\)_. 2. _NOMAD_ [35] (NASA bio-Optical Marine Algorithm Dataset): The second dataset is also an in situ dataset and contains several types of bio-optical information, such as surface irradiances, water-leaving radiances, diffuse downwelling attenuation coefficients, and chlorophyll concentration values. In this work, data taken at five different wavelengths (411, 443, 489, 510, and 555 nm) were used as input feature vectors for the regression algorithms. Specifically, for each spectral band the corresponding feature value was taken as the ratio between the corresponding spectral water-leaving radiance and the spectral surface irradiance [2]. For the experimental part of this work, a total of 1210 measurements were used, and the chlorophyll concentration values ranged between 0.017 and 70.21 \\(mg/m^{3}\\). For the training phase in both datasets, the values of both the feature vectors and the target value were converted to the logarithmic domain. The reason for this is that the values of the biophysical quantities were assumed to be log-normally distributed [36].
### _Implementation Details_ For each dataset, the two regression methods (SVR, GPR) implemented with the parallel AQKS kernel approximation were tested on ten different randomly sampled training and test sets of size 200 each. On each of these runs, a classical implementation of the regression algorithm using an RBF kernel and an RKS kernel approximation were also tested as benchmarks, and the results achieved in terms of R2 score and mean squared error (MSE) by the three different kernel implementations were then compared. In each run the hyperparameters of the regression algorithms were tuned by running an exhaustive grid search defined over a discrete hyperparameter space with fivefold cross-validation on the training set. Specifically, the training set was divided into five different subsets (folds), and each hyperparameter configuration was tested on each fold after being trained on the remaining four. The configuration that achieved the highest average R2 score over the five folds was selected. Since the parameters of the parallel AQKS kernel approximation were not optimized empirically because of the computational burden, no optimization of the kernel parameter \\(\\gamma\\) was performed for the classical and RKS-based algorithms either. Its value was set to 1 for the SVR and 2 for the GPR in the classical case, whereas it was set to 1 for both SVR and GPR in the RKS implementation. The number of components in the RKS algorithm was set to 50. All the classical algorithms have been implemented using the Python library scikit-learn [37]. The hyperparameter spaces for the learning algorithms are as follows. 1. _SVR:_ \\(C:[2^{-8},2^{-7},2^{-6},2^{-5},2^{-4},2^{-3},2^{-2},2^{-1},1,2,2^{2},2^{3},2^{4},2^{5},2^{6},2^{7},2^{8}]\\), \\(\\epsilon:[10^{-3},10^{-2},10^{-1}]\\) 2. _GPR:_ Noise parameter \\(\\beta\\): \\([10^{-10},10^{-9},10^{-8},10^{-7},10^{-6},10^{-5},10^{-4},10^{-3},10^{-2}]\\) As indicated in Section VI-A, the training phase has been conducted by considering the logarithms of both the input vector and the target value. The trained prediction function then provides a target value estimate in the logarithmic domain. For the evaluation of the chosen performance metrics, two different settings were considered: in the first one, the comparison between the predicted and the actual values was carried out by comparing the value provided by the prediction function and the logarithm of the target value, whereas in the second setting the evaluation was conducted by considering the original target value and the prediction value in the original domain (obtained by exponentiation). In the following, these two settings will be referred to as the _logarithm_ setting and the _original_ setting, respectively.
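A compact sketch of the described model selection, using scikit-learn's grid search with fivefold cross-validation, might look as follows. The data below are synthetic placeholders and the fixed gamma is only indicative; note also that scikit-learn parameterizes the RBF kernel multiplicatively, i.e., exp(-gamma * ||x - x'||^2), which is the inverse of the convention used in (3).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))          # placeholder for log-domain features
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)

# Hyperparameter grid for the SVR as listed above: C in 2^-8 ... 2^8,
# epsilon in {1e-3, 1e-2, 1e-1}; the kernel width is kept fixed.
param_grid = {
    "C": [2.0 ** k for k in range(-8, 9)],
    "epsilon": [1e-3, 1e-2, 1e-1],
}
search = GridSearchCV(SVR(kernel="rbf", gamma=1.0), param_grid,
                      scoring="r2", cv=5)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("best cross-validated R2:", search.best_score_)
```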
## VII Results The results on the NOMAD dataset in the logarithm and original settings are reported in Tables I and II, respectively. Tables III and IV show the results for the SEABAM dataset (logarithm and original setting, respectively). In the logarithm domain the three kernel implementations performed similarly in terms of R2 score and MSE on both datasets, with the classical GPR implementation obtaining slightly better results overall. Interesting insights emerge when analyzing the results in the original domain: for the NOMAD dataset the parallel AQKS implementation achieved the best average results on both R2 score and MSE. In the SEABAM dataset, the situation was more diverse: the classical SVR implementation achieved the best R2 score, whereas the classical GPR obtained the worst performance on the same evaluation metric. The parallel AQKS GPR performed slightly better than the RKS implementation, while for the SVR the latter kernel approximation method performed slightly better. Regarding the MSE, the results were also similar, with the classical SVR and GPR obtaining the best and worst results, respectively. It is also worth noting that the proposed parallel AQKS implementation never obtained a negative value for the R2 score across the various experimental runs, while the RKS-based implementation obtained a negative score once with the GPR algorithm (experimental run 9 on the SEABAM in the original setting) and the classical GPR twice (experimental run 7 for the SEABAM and experimental run 8 for the NOMAD, both in the original setting). Another interesting fact can be observed by analyzing the best R2 score achieved across the various experimental runs. In the original setting for the SEABAM dataset, both the RKS-based and classical algorithms always obtained a higher best R2 across the different runs with respect to the AQKS, even when the AQKS achieved a higher average score. This fact might indicate a better robustness of the AQKS in terms of generalization with respect to new dataset sampling; however, further research is needed to verify this hypothesis. ## VIII Conclusion The objective of this work was to develop an AQKS kernel approximation implementation on a quantum annealer using parallel QA for regression applications. The choice of a parallel implementation was motivated by the high number of QUBO problems that are needed in the workflow. The proposed implementation managed to achieve results comparable to those obtained by classical kernel methods and the traditional RKS kernel approximation algorithm, which could be indicative of its potential. The maximum number of samples obtained in each annealing cycle, given the number of episodes \\(E\\) and the number of qubits \\(q\\), is limited by the size of the quantum hardware. In our work we managed to obtain 20 transformed feature vectors in each annealing cycle, which makes the process unfeasible for large datasets. The problem graph for the parallel annealing, since it is composed of many independent smaller subgraphs, is sparsely connected and therefore might scale well with an increased availability of physical qubits in future QA hardware. Further research could also be conducted to improve upon the proposed implementation. For instance, the samples that are selected in each annealing cycle were chosen in a sequential manner based on their sample index in the dataset; further research could investigate how to select the samples to be considered in the same annealing cycle so as to increase the performance. The code associated with this work can be found at this GitHub repository.1 Footnote 1: GitHub repository: [https://gitlab.jsc.fr-juelich.de/sdrs/quantum-kernel-estimation-parallel-random-kitchen-sinks](https://gitlab.jsc.fr-juelich.de/sdrs/quantum-kernel-estimation-parallel-random-kitchen-sinks) ## References * [1] Y. Bazi and F. Melgani, \"Semisupervised PSO-SVM regression for biophysical parameter estimation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 45, no. 6, pp. 1887-1895, Jun. 2007. * [2] Y. Bazi, N. Alajlan, and F.
Melgani, \"Improved estimation of water chlorophyll concentration with semisupervised Gaussian process regression,\" _IEEE Trans. Geosci. Remote Sens._, vol. 50, no. 7, pp. 2733-2743, Jul. 2012. * [3] H. Zhang, P. Shi, and C. Chen, \"Retrieval of oceanic chlorophyll concentration using support vector machines,\" _IEEE Trans. Geosci. Remote Sens._, vol. 41, no. 12, pp. 2947-2951, Dec. 2003. * [4] F. DelFrate, A. Ortenzi, S. Casadio, and C. Zehner, \"Application of neural algorithms for a real time estimation of ozone profiles from gome measurements,\" in _Proc. Scanning Present Resolving Future, IEEE Int. Geosci. Remote Sens. Symp._, 2001, vol. 6, pp. 2668-2670. * [5] H. Aghighi, M. Azadbakht, D. Ashouroftro, H. S. Shahrabi, and S. Radiom, \"Machine learning regression techniques for the slice image yield prediction using time-series images of landsat 8 oil,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 12, pp. 4563-4577, Dec. 2018. * [6] X. Wang, L. Ma, and X. Wang, \"Apply semi-supervised support vector regression for remote sensing water quality retrieving,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2010, pp. 2757-2760. * [7] A. Rabe, S. van der Linden, and P. Hostert, \"Simplifying support vector machines for regression analysis of hyperspectral imagery,\" in _Proc. 1st Workshop Hyperspectral Image Signal Process.: Evol. Remote Sens._, 2009, pp. 1-4. * [8] G. Mateo-Garcia, V. Laparra, and L. Gomez-Chova, \"Optimizing kernel ridge regression for remote sensing problems,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2018, pp. 4007-4010. * [9] Y. Bazi, N. Alajlan, F. Melgani, H. AlHichri, and R. R. Yager, \"Robust estimation of water chlorophyll concentrations with Gaussian process regression and iowa aggregation operators,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 7, no. 7, pp. 3019-3028, Jul. 2014. * [10] C. M. Bishop, _Pattern Recognition and Machine Learning_. Berlin, Germany: Springer, 2006. * [11] K. P. Murphy, _Machine Learning: A Probabilistic Perspective_. Cambridge, MA, USA: MIT Press, 2013. [Online]. Available: [https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/02620180207refes-1_225ireUrr188ddd313685774748rs-2](https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/02620180207refes-1_225ireUrr188ddd313685774748rs-2) * [12] A. Rahimi and B. Recht, \"Random features for large-scale kernel machines,\" in _Proc. 20th Int. Conf. Neural Inf. Process. Syst._, 2007, pp. 1177-1184. * [13] A. Rahimi and B. Recht, \"Uniform approximation of functions with random bases,\" in _Proc. 46th Annu. Allerton Conf. Commun., Control. Comput._, 2008, pp. 555-561. * [14] M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_, 10th ed. Cambridge, U.K.: Cambridge Univ. Press, Dec. 2010. * [15] A. Montanaro, \"Quantum algorithms: An overview,\" _NPJ Quantum Inf._, vol. 2, no. 1, Jan. 2016, Art. no. 10523, doi: 10.1038/njpji.2015.23. * [16] P. Shor, \"Algorithms for quantum computation: Discrete logarithms and factoring,\" in _Proc. 35th Annu. Symp. Found. Comput. Sci._, 1994, pp. 124-134. * [17] L. K. Grover, \"A fast quantum mechanical algorithm for database search,\" in _Proc. 28th Annu. ACM Symp. Theory Comput._, New York, NY, USA, 1996, pp. 212-219, doi: 10.1145/237814.237866. * [18] J. Preskill, \"Quantum computing in the NISQ era and beyond,\" _Quantum_, vol. 2, Aug. 2018, Art. no. 79, Oct. 20233149-2018-08-06-79. * [19] J. Biamonte, P. Wittek, N. Pancotti, P. 
Rebentrost, N. Wiebe, and S. Lloyd, \"Quantum machine learning,\" _Nature_, vol. 549, no. 7671, pp. 195-202, 2017. * [20] N. Mishra et al., \"Quantum machine learning: A review and current status,\" in _Proc. Data Manage., Anal. Innov._, 2021, pp. 101-145. * [21] M. Schuld and F. Petruccione, \"Quantum machine learning,\" in _Encyclopedia of Machine Learning and Data Mining_. Berlin, Germany: Springer, 2017, pp. 1034-1043, doi: 10.1007/978-1-4899-7687-1, 913. * [22] V. Dunjko and H. J. Briegel, \"Machine learning and artificial intelligence in the quantum domain: A review of recent progress,\" _Rep. Prog. Phys._, vol. 81, no. 7, Jun. 2018, Art. no. 074001, doi: 10.1088/1361-663/aab406. * [23] M. Schuld and N. Killoran, \"Quantum machine learning in feature hilbert spaces,\" _Phys. Rev. Lett._, vol. 122, no. 4, Feb. 2019, Art. no. 040504, * [24] A. Sebastianielli, D. A. Zaidenberg, D. Spiller, B. L. Saux, and S. L. Ullo, \"On circuit-based hybrid quantum neural networks for remote sensing imagery classification,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 15, pp. 565-580, 2022. * [25] S. Ostobankar and M. Ductu, \"Classification of remote sensing images with parameterized quantum gates,\" _IEEE Geosci. Remote Sens. Lett._, vol. 19, 2022, Art. no. 8020105. * [26] P. Gawron and S. Lewinski, \"Multi-spectral image classification with quantum neural network,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2020, pp. 3513-3516. * [27] C. M. Wilson et al., \"Quantum kitchen sinks: An algorithm for machine learning on near-term quantum computers,\" 2018. [Online]. Available: [https://arxiv.org/abs/1806.08321](https://arxiv.org/abs/1806.08321) * [28] M. Noori et al., \"Analog-quantum feature mapping for machine-learning applications,\" _Phys. Rev. Appl._, vol. 14, no. 3, Sep. 2020, Art. no. 034034, doi: 10.103/physrepubled.14.034034. * [29] E. Pelofke, G. Hahn, and H. N. Djidey, \"Parallel quantum annealing,\" _Sci. Rep._, vol. 12, no. 1, Mar. 2022, Art. no. 4499, doi: 10.1038/2Fst45198-022-08394-8. * [30] F. Barahona, \"On the computational complexity of ising spin glass models,\" _J. Phys. A: Math. Gen._, vol. 15, no. 10, pp. 3241-3253, Oct. 1982, doi: 10.1088/0305-4470/15/10/028. * [31] V. N. Vapnik, _The Nature of Statistical Learning Theory_. New York, NY, USA, Springer2000, doi: 10.1007/978-1-4757-3264-1. * [32] V. Choi, \"Minor-embedding in adiabatic quantum computation: I. the parameter setting problem,\" _Quantum Inf. Process._, vol. 7, pp. 193-209, 2008. [Online]. Available: [https://arxiv.org/abs/0804.4884v1](https://arxiv.org/abs/0804.4884v1) * [33] D. Pastorello, _Concise Guide to Quantum Machine Learning_. Singapore: Springer, 2023, doi: 10.1007/978-981-19-6897-6. * [34] J. E. O'Reilly et al., \"Ocean color chlorophyll algorithms for seawits,\" _J. Geophys. Res.: Oceans_, vol. 103, no. C11, pp. 24937-24953, 1998, doi: 10.1029/98JC02160. * [35] P. J. Werdell and S. W. Bailey, \"An improved in-situ bio-optical data set for ocean color algorithm development and satellite data product validation,\" _Remote Sens. Environ._, vol. 98, pp. 122-140, 2005. * [36] J. W. Campbell, \"The lognormal distribution as a model for the bio-optical versatility of the sea,\" _J. Geophys. Res._, vol. 100, pp. 13237-13254, 1995. * [37] F. Pedregosa et al., \"Scikit-learn: Machine learning in python,\" _J. Mach. Learn. Res._, vol. 12, pp. 2825-2830, 2011. \\begin{tabular}{c c} & Edaardo Pasetto received the B.Sc. and M.Sc. 
degrees in information and communication engineering from the University of Trento, Trento, Italy, in 2019 and 2021, respectively. He is currently working toward the Ph.D. degree in physics with Forschungszentrum Julich, Germany, and RWTH Aachen University, Aachen, Germany. His main research interest include the application of hybrid quantum-classical machine learning frameworks to RS applications. \\\\ \\end{tabular} \\begin{tabular}{c c} & Morris Riedel (Member, IEEE) received the Ph.D. degree in parallel and distributed systems from the Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, in 2012. He is currently a Full Professor of high-performance computing with an emphasis on parallel and scalable machine learning with the School of Natural Sciences and Engineering, the University of Iceland, Reykjavik, Iceland. He has worked in data-intensive parallel and distributed systems since 2004, and since then, he has held various positions at the Juelich Supercomputing Centre, Forschungszentrum Julich, Julich, Germany. In addition, he is the Head of the joint High Productivity Data Processing research group between the Juelich Supercomputing Centre and the University of Iceland. Since 2020, he is also the EuroHPC Joint Undertaking governing board member for Iceland. His online YouTube and university lectures include High-Performance Computing - Advanced Scientific Computing, Cloud Computing and Big Data - Parallel and Scalable Machine and Deep Learning, as well as Statistical Data Mining. In addition, he has performed numerous hands-on training events in parallel and scalable machine and deep learning techniques on cutting-edge HPC systems. He has authored or coaauthored extensively in the areas of his interest. His research interests include high-performance computing, remote sensing applications, medicine and health applications, pattern recognition, image processing, and data sciences. \\\\ \\end{tabular} \\begin{tabular}{c c} & Kristel Michielsen received the Ph.D. degree in physics for her work on the simulation of strongly correlated electron systems from the University of Groningen, Groningen, the Netherlands, in 1993. Since 2009, she is Group Leader of the research group Quantum Information Processing, Julich Supercomputing Centre (JSC), Forschungszentrum Julich, Julich, Germany, and Professor of quantum information processing with RWTH Aachen University, Aachen, Germany. She and her group have ample experience in performing large-scale simulations of quantum systems. With her group and a team of international collaborators, she set the world record in simulating a quantum computer with 48 qubits. In 2019, she participated in a research collaboration that proved Google's quantum supremacy. She is leading and building up the Julich UNified Infrastructure for Quantum computing (JUNIQ), JSC. Her research interests include classical simulations of electrodynamics, quantum mechanics, quantum computing, quantum computing architectures, quantum algorithms, quantum benchmarking, and modular quantum-HPC hybrid computing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Gabriele Cavallaro (Senior Member, IEEE) received the B.Sc. and M.Sc. degrees in telecommunications engineering from the University of Trento, Trento, Italy, in 2011 and 2013, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Iceland, Reykjavik, Iceland, in 2016. 
From 2016 to 2021, he has been the Deputy Head of the \"High Productivity Data Processing\" (HPDP) research group, Julich Supercomputing Centre (USC), Forschungszentrum Julich, Julich, Germany. Since 2022, he is the Head of the \"AI and ML for Remote Sensing\" Simulation and Data Lab, JSC, and an Adjunct Associate Professor with the School of Natural Sciences and Engineering, University of Iceland. From 2020 to 2023, he held the position of Chair for the High-Performance and Disruptive Computing in Remote Sensing (HDCRS) Working Group under the IEEE GRSS Earth Science Informatics Technical Committee (ESI TC). In 2023, he took on the role of Co-chair for the ESI TC. Concurrently, he serves as Visiting Professor with the \\(\\Phi\\)-lab, European Space Agency (ESA), where he contributes to the Quantum Computing for Earth Observation (QC4EO) initiative. His research interests include remote sensing data processing with parallel machine learning algorithms that scale on distributed computing systems and cutting-edge computing technologies, including quantum computers. Dr. Cavallaro was the recipient of the IEEE GRSS Third Prize in the Student Paper Competition of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2015 (Milan - Italy). He has been serving as an Associate Editor for IEEE Transactions on Image Processing (TIP) since October 2022. \\\\ \\end{tabular}
The increased development of quantum computing hardware in recent years has led to growing interest in its application to various areas. Finding effective ways to apply this technology to real-world use-cases is a current area of research in the remote sensing community. This article proposes an adiabatic quantum kitchen sinks (AQKS) kernel approximation algorithm with parallel quantum annealing on the D-Wave Advantage quantum annealer. The proposed implementation is applied to support vector regression and Gaussian process regression algorithms. To evaluate its performance, a regression problem related to estimating chlorophyll concentration in water is considered. The proposed algorithm was tested on two real-world datasets, and its results were compared with those obtained by a classical implementation of kernel-based algorithms and a random kitchen sinks implementation. On average, the parallel AQKS achieved comparable results to the benchmark methods, indicating its potential for future applications. Index Terms: Parallel quantum annealing, quantum annealing (QA), quantum computing (QC), regression, remote sensing (RS).
# A Practical Plateau Lake Extraction Algorithm Combining Novel Statistical Features and Kullback-Leibler Distance Using Synthetic Aperture Radar Imagery Xin Zhou, Zhengjia Zhang, Qihao Chen, and Xiuguo Liu. Manuscript received March 21, 2020; revised May 7, 2020; accepted August 10, 2020. Date of publication August 14, 2020; date of current version August 26, 2020. This work was supported by the National Natural Science Foundation of China under Grant 41471355 and Grant 41801348. _(Corresponding author: Xiuguo Liu.)_ The authors are with the School of Geography and Information Engineering, China University of Geosciences, Wuhan 430000, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/ISTARS.2020.3016349 ## I Introduction The Qinghai-Tibet plateau (QTP), including the Pamirs-Hindu Kush-Karakorum-Himalayas, has an average elevation of more than 4000 m and an area of more than 3 \\(\\times\\) 10\\({}^{6}\\) km\\({}^{2}\\) in the center of Asia [1]. This broad region holds the largest reservoir of perennial ice outside of the earth's polar ice sheets, which is the source of several of Asia's great rivers, including the Yellow, Yangtze, Indus, Ganges, Brahmaputra, Irrawaddy, Salween, and Mekong, supplying water for more than 1.5 billion people downstream [2]. Therefore, the QTP is also known as \"Asia's water tower.\" As one of the most significant features of the QTP [3], some plateau lakes have expanded significantly and even collapsed in recent years due to climate change [4], which might cause dramatic damage to the permafrost ecology and plateau infrastructure, such as the Qinghai-Tibet railway [5]. Therefore, methods of plateau lake extraction, which are the basis of lake monitoring, have received critical attention. Whereas manual measurement consumes abundant resources and is unable to monitor plateau lakes continuously, the remote sensing technique offers feasible alternative approaches that overcome these constraints [6]. Passive optical sensors, including Landsat and MODIS, are the most common data sources used for surface-water extraction. Landsat data, which have a long archive, are free of charge, and offer a medium resolution of about 30 m [8], have been available since 1972 for mapping surface-water extent [7]. Although the spatial resolution of MODIS products is coarse, their huge advantages in extensive coverage and frequent observation provide more opportunities for surface-water mapping [9]. The normalized difference water index (NDWI) and the modified NDWI are commonly used to delineate water from nonwater with optical imagery [10, 11]. However, because optical sensors suffer from illumination and weather conditions, obtaining lake areas with high temporal resolution remains a great challenge for them. Synthetic aperture radar (SAR), which is an active microwave sensor with broad coverage, high resolution, and continuous imaging capability, has been applied to plateau lake extraction [12, 13, 14]. Currently, there are three types of methodologies for water-land separation using SAR images: threshold-based, segmentation-based, and classification-based. Because the water surface is easily distinguishable in SAR images owing to its specular reflectance, threshold-based methods, such as Otsu [15] and valley-emphasis [16], obtain the optimal threshold from the difference between land and water in the histogram and are frequently used due to their simplicity and efficiency.
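As a concrete illustration of this threshold-based idea (a sketch of ours, not part of the method proposed in this article; the synthetic amplitude image and all function names are assumptions), Otsu's method chooses the threshold that maximizes the between-class variance of the image histogram:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    # Otsu's method: pick the threshold maximizing the between-class variance.
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # probability of the low (water-like) class
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers)            # cumulative first moment
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    between[np.isnan(between)] = 0.0
    return centers[np.argmax(between)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic amplitude image: a dark water patch inside brighter land clutter.
    img = rng.gamma(shape=4.0, scale=0.05, size=(128, 128))
    img[40:90, 30:100] = rng.gamma(shape=4.0, scale=0.01, size=(50, 70))
    t = otsu_threshold(img)
    water_mask = img < t                    # water is the low-backscatter class
    print("threshold:", round(t, 4), "water fraction:", water_mask.mean())
```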
Nonetheless, many factors, such as wind-induced waves [17], dry sand and wet snow [18], and terrain shadows [19], have an impact on extraction, resulting in a high false-alarm rate. Recently, by using context information, active contour models (ACMs), including parametric ACMs [20] and geometric ACMs [21, 22, 23], have been widely applied in water surface delineation, providing smooth and closed boundary contours. However, complicated calculations and heavy time consumption limit the applicability of segmentation-based algorithms in large-scale lake extraction. The classification-based algorithms, combining feature sets and machine learning classifiers, identify the category of each pixel or object. Although effective classifiers such as the classification and regression tree [24] and random forest (RF) [25] help achieve better results, a practical feature set is equally crucial. Texture features such as gray-level co-occurrence matrix (GLCM) features [26, 27] and the local Moran's index (LMI) feature [28] have been used in water extraction, combined with an SVM and an unsupervised algorithm, respectively. However, the features mentioned above only achieve a small-scale description of SAR images and are susceptible to mountain shadows or wind-induced water waves, leading to misclassifications and false alarms. On the other hand, postprocessing approaches are commonly used in lake extraction to deal with the sources of misclassifications and false alarms, including terrain shadows [29], false edges related to isolated targets [22], and seasonal rivers [30]. Due to the complex environment of the QTP, a comprehensive and effective solution to false alarms is needed. One possible solution is to postprocess the results of the initial classification using a statistical distance or metric, which can describe the distance between two distributions without being affected by a small amount of noise. Several metrics, such as the posterior probability of the region's mean [31], the minimum stochastic distance [32], and the Kullback-Leibler distance (KLD) [33], have been reported. For two regions modeled by the generalized gamma distribution (GTD), the KLD is more suitable than the others for measuring the similarity between them. In this article, a practical plateau lake extraction algorithm combining novel statistical features, named object-based GTD (OGTD) features, and the KLD using SAR imagery is proposed, addressing the high false-alarm rate from two aspects. Aiming at achieving a large-scale description of the lake surface using spatial context information, the newly proposed OGTD features, which are based on scale-adaptive segmentation and two parameters of the GTD, are combined with conventional texture features. Meanwhile, a postprocessing step using the KLD between initial label regions is introduced to deal with false alarms and misclassifications in the initial classification results. Unlike the features that participate in the initial classification, the KLD postprocessing considers spatial context information using the initial classification results. ## II Study Area and Data The Hoh Xil region, the so-called \"no man's land,\" is located in the hinterland of the QTP, between 33\\({}^{\\circ}\\)30'-36\\({}^{\\circ}\\)29' N and 81\\({}^{\\circ}\\)56'-94\\({}^{\\circ}\\)06' E, with an area of about 2.35 \\(\\times\\) 10\\({}^{5}\\) km\\({}^{2}\\). In this region, lakes are densely distributed, including 107 lakes with an area above 1.0 km\\({}^{2}\\), and most of them are endorheic lakes.
Eight lakes, including Zhuonai lake, Yanhu lake, Haidingmuore lake, Tealshi lake, Keekexiili lake, Cuodarima, Duoregiaucu, and Kekao lake, are chosen in this study. These lakes have expanded rapidly due to climate change, and some have even collapsed in recent years. For instance, Zhuonai lake experienced an outburst on September 15, 2011, and a large amount of discharged water flowed into the downstream Yanhu lake, causing a significant expansion of the lake area. Yanhu lake increased by more than 30 km\\({}^{2}\\) in the following year and continued to increase from 2012 to 2015 [34]. Recently, Liu _et al._ [35] showed that, according to the change trends of area and water level during 2016-2018, Yanhu lake will overflow in the next 1 to 2 years. Fig. 1 is the map of the study area in the Hoh Xil region. The Sentinel-1 Level-1 ground range detected (GRD) SAR image in Interferometric Wide Swath (IW) mode was used for verification and analysis in this article. The resolution in both the azimuth and range directions is 10 m. The image was acquired on August 2, 2019, and downloaded from the Scientific Data Hub supported by the European Space Agency (ESA), with a size of 25723 \\(\\times\\) 16736 pixels. Both copolarized (VV) and cross-polarized (VH) data were available, as shown in Fig. 2. The eight lakes mentioned previously were covered by one scene image, in different swaths. ## III Methods The proposed method consists of 1) preprocessing, 2) feature calculation and classification, and 3) postprocessing using a modified KLD. The complete workflow of the proposed approach is shown in Fig. 3. The high false-alarm rate of plateau lake extraction is suppressed from two aspects, namely multifeatures and KLD postprocessing. Fig. 1: Map of the study area in the Hoh Xil region. Fig. 2: Sentinel-1 GRD image acquired on August 2, 2019. (a) VH polarization. (b) VV polarization. Fig. 3: Workflow of the proposed approach. ### _Preprocessing_ The preprocessing includes calibration, speckle filtering, and image cropping, using the SNAP, ENVI 5.3.1, and SARscape 5.2.1 software packages. First, the digital number (DN) value is converted to \\(\\sigma_{0}\\) using the following equation in the SNAP software: \\[\\sigma_{0}=\\mathrm{CalFactor}\\times\\mathrm{DN}^{2}\\times\\sin\\theta_{i} \\tag{1}\\] where \\(\\theta_{i}\\) is the incidence angle. After calibration, the SAR image needs to be converted from the logarithmic domain to the linear domain for subsequent statistical modeling by the GTD, i.e., from the decibel image to the amplitude image \\[\\mathrm{I}_{A}=\\sqrt{10^{\\sigma_{0}/10}}. \\tag{2}\\] In order to suppress the inherent speckle, a Lee filter with a 3 \\(\\times\\) 3 pixel sliding window is applied, which is a general step in SAR image preprocessing. Usually, a geocoding step is used to convert the radar coordinate system to the geodetic coordinate system. Nevertheless, geocoding introduces geometric errors, affecting the accuracy of subsequent SAR image interpretation [22]. After filtering, the eight lakes mentioned in Section II are cropped into six images, of which one covers Yanhu lake and Haidingmuore lake, one covers Duoregiaucu and Cuodarima, and each of the remaining lakes occupies one image. The ground-truth map of each lake is manually digitized by visual interpretation of the amplitude image before the quantitative evaluation of the extraction results.
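A minimal sketch of the two conversions in (1) and (2) is given below; it is ours and purely illustrative (the calibration factor, incidence angle, and DN array are placeholders), and it assumes that the \\(\\sigma_{0}\\) entering (2) is expressed in decibels.

```python
import numpy as np

def calibrate_sigma0(dn, cal_factor, incidence_angle_deg):
    # Radiometric calibration, following (1): sigma0 = CalFactor * DN^2 * sin(theta_i).
    theta = np.deg2rad(incidence_angle_deg)
    return cal_factor * dn.astype(np.float64) ** 2 * np.sin(theta)

def db_to_amplitude(sigma0_db):
    # Decibel to linear-amplitude conversion, per (2):
    # intensity = 10^(sigma0/10), amplitude = sqrt(intensity).
    return np.sqrt(10.0 ** (sigma0_db / 10.0))

if __name__ == "__main__":
    dn = np.random.default_rng(0).integers(1, 400, size=(4, 4))
    sigma0 = calibrate_sigma0(dn, cal_factor=1e-4, incidence_angle_deg=39.0)
    amplitude = db_to_amplitude(10.0 * np.log10(sigma0))  # express sigma0 in dB, then invert
    print(amplitude)
```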
### _OGTD Features and Multifeature Set_ The GTD, a state-of-the-art empirical model, provides a novel way to express the SAR amplitude image in this study. Despite the lack of a strict physical basis, empirical models, as purely mathematical theories, such as the log-normal, Weibull, and Fisher distributions, have been proven to perform excellently in well-known cases. In particular, the GTD has been used for modeling many types of clutter, not only the extremely heterogeneous state, such as urban areas, but also the homogeneous state, such as farmland and water surfaces. The probability density function (pdf) of the GTD revised in [36] is expressed as \\[p\\left(z\\right)=\\frac{\\left|v\\right|k^{k}}{\\sigma\\Gamma\\left(k\\right)}\\Big{(}\\frac{z}{\\sigma}\\Big{)}^{kv-1}\\mathrm{exp}\\left\\{-k\\Big{(}\\frac{z}{\\sigma}\\Big{)}^{v}\\right\\}\\quad\\sigma,\\left|v\\right|,k>0,\\ z\\geq 0 \\tag{3}\\] where \\(\\sigma\\), \\(k\\), and \\(v\\) represent the scale, shape, and power parameters of the GTD, respectively, and \\(\\Gamma(\\cdot)\\) denotes the gamma function. The estimators based on the method of logarithmic cumulants are given by \\[\\frac{\\Psi^{3}\\left(1,\\hat{k}\\right)}{\\Psi^{2}\\left(2,\\hat{k}\\right)}=\\frac{\\hat{c}_{2}^{3}}{\\hat{c}_{3}^{2}} \\tag{4}\\] \\[\\hat{v}=\\mathrm{sgn}\\left(-\\hat{c}_{3}\\right)\\sqrt{\\Psi\\left(1,\\hat{k}\\right)/\\hat{c}_{2}} \\tag{5}\\] \\[\\hat{\\sigma}=\\mathrm{exp}\\left\\{\\hat{c}_{1}-\\left(\\Psi\\left(\\hat{k}\\right)-\\mathrm{ln}\\hat{k}\\right)/\\hat{v}\\right\\} \\tag{6}\\] where \\(\\Psi(\\cdot)\\) and \\(\\Psi(m,\\cdot)\\) denote the digamma function and the _m_th-order polygamma function, respectively; \\(\\mathrm{sgn}(\\cdot)\\) represents the sign function; and \\(\\hat{c}_{1}\\), \\(\\hat{c}_{2}\\), \\(\\hat{c}_{3}\\) represent the first three-order sample log-cumulants \\[\\left\\{\\begin{array}{l}\\hat{c}_{1}=\\frac{1}{N}\\sum_{i=1}^{N}\\ln x_{i}\\\\ \\hat{c}_{2}=\\frac{1}{N}\\sum_{i=1}^{N}\\left(\\ln x_{i}-\\hat{c}_{1}\\right)^{2}\\\\ \\hat{c}_{3}=\\frac{1}{N}\\sum_{i=1}^{N}\\left(\\ln x_{i}-\\hat{c}_{1}\\right)^{3}\\end{array}\\right. \\tag{7}\\] where \\(N\\) represents the number of samples participating in the estimation. Based on the estimated \\(\\hat{k}\\), which is related to the ratio \\(\\hat{c}_{2}^{3}/\\hat{c}_{3}^{2}\\), the corresponding \\(\\hat{v}\\) and \\(\\hat{\\sigma}\\) of the GTD are derived from (5) and (6), respectively. In (4), since \\(\\Psi^{3}(1,\\hat{k})/\\Psi^{2}(2,\\hat{k})\\) is a continuous monotonically increasing function, a unique minimum of 0.25 is obtained when \\(\\hat{k}\\) approaches zero. Therefore, the ratio of the second-order and third-order log-cumulants must satisfy \\(\\hat{c}_{2}^{3}/\\hat{c}_{3}^{2}\\geq 0.25\\); otherwise, the estimation of \\(\\hat{k}\\) would fail. An approximation method for solving the problem when \\(\\hat{c}_{2}^{3}/\\hat{c}_{3}^{2}<0.25\\) is written as follows: \\[\\frac{\\hat{k}^{2}}{\\hat{k}+\\frac{1}{2}}=\\frac{\\hat{c}_{2}^{3}}{\\hat{c}_{3}^{2}}. \\tag{8}\\]
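The estimators in (4)-(8) can be sketched directly from the sample log-cumulants. The following SciPy/NumPy example is ours and illustrative only: the simulated gamma-distributed sample (the special case \\(v=1\\)) and the numerical root-finding bracket are assumptions, not part of the article.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

def estimate_gtd_parameters(z):
    # Method-of-log-cumulants estimation for the generalized gamma distribution,
    # following (4)-(8). Returns (k_hat, v_hat, sigma_hat).
    logz = np.log(z)
    c1 = logz.mean()
    c2 = ((logz - c1) ** 2).mean()
    c3 = ((logz - c1) ** 3).mean()

    ratio = c2 ** 3 / c3 ** 2
    if ratio >= 0.25:
        # Solve Psi(1,k)^3 / Psi(2,k)^2 = c2^3 / c3^2 for k, per (4).
        f = lambda k: polygamma(1, k) ** 3 / polygamma(2, k) ** 2 - ratio
        k_hat = brentq(f, 1e-6, 1e6)
    else:
        # Approximate fallback (8): k^2 / (k + 1/2) = c2^3 / c3^2.
        k_hat = (ratio + np.sqrt(ratio ** 2 + 2.0 * ratio)) / 2.0
    v_hat = np.sign(-c3) * np.sqrt(polygamma(1, k_hat) / c2)
    sigma_hat = np.exp(c1 - (digamma(k_hat) - np.log(k_hat)) / v_hat)
    return k_hat, v_hat, sigma_hat

if __name__ == "__main__":
    # A plain gamma-distributed amplitude sample is a GTD special case with v = 1.
    z = np.random.default_rng(0).gamma(shape=3.0, scale=1.0, size=20000)
    print(estimate_gtd_parameters(z))
```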
In order to obtain the OGTD features, the graph-based segmentation [37] and the stepwise evolution analysis (SEA) framework [38] are employed to perform the optimal scale segmentation. In recent years, the object-based image analysis technique has been widely applied to the interpretation of SAR images based on segmentation, showing a positive effect on speckle suppression in classification [39]. Generally, the segmentation process has two steps. First, the image is divided into nonoverlapping, separated homogeneous regions by an oversegmentation process. Then, an optimal scale parameter, which determines the maximum allowed variation of heterogeneity, is selected for merging adjacent regions. Unfortunately, choosing the optimal scale parameter with a trial-and-error strategy is a significant challenge, as it depends on subjective intuition. Furthermore, the meaning of the scale parameter is ambiguous across segmentation approaches, which makes it difficult to select the optimal one. To resolve this ambiguity, the SEA framework is introduced for automated scale parameter estimation, analyzing the local variance and Moran's index step by step.

For each object, two GTD parameters, the power (\\(\\hat{v}\\)) and the scale (\\(\\hat{\\sigma}\\)), are calculated. The shape parameter \\(k\\) is excluded because a previous study has shown that \\(k\\) becomes ineffective and unable to describe the real data when the third-order sample log-cumulant is close to zero [40]. In the OGTD feature extraction process, almost all parameters are self-adaptive; therefore, the problem of parameter setting and selection is avoided.

Two types of texture features, the GLCM features and the LMI feature, which have been widely adopted for inland water separation in previous papers, construct the multifeature set together with the OGTD features. The GLCM features are computed from the similarity of different pixels over a given distance using the gray level co-occurrence matrix. In this study, six GLCM features are chosen according to previous studies [26, 27], including homogeneity, contrast, dissimilarity, entropy, angular second moment (ASM), and correlation. The GLCM and the GLCM features are calculated as
\\[S_{d}\\left(i,j\\right)=\\frac{P_{d}\\left(i,j\\right)}{\\sum_{i=1}^{K}\\sum_{j=1}^{K}P_{d}\\left(i,j\\right)} \\tag{9}\\]
\\[\\text{Homogeneity}=\\sum_{i=1}^{K}\\sum_{j=1}^{K}\\frac{S_{d}\\left(i,j\\right)}{1+\\left(i-j\\right)^{2}} \\tag{10}\\]
\\[\\text{Contrast}=\\sum_{i=1}^{K}\\sum_{j=1}^{K}\\left(i-j\\right)^{2}S_{d}\\left(i,j\\right) \\tag{11}\\]
\\[\\text{Dissimilarity}=\\sum_{i=1}^{K}\\sum_{j=1}^{K}\\left|i-j\\right|S_{d}\\left(i,j\\right) \\tag{12}\\]
\\[\\text{Entropy}=-\\sum_{i=1}^{K}\\sum_{j=1}^{K}S_{d}\\left(i,j\\right)\\ln\\left(S_{d}\\left(i,j\\right)\\right) \\tag{13}\\]
\\[\\text{ASM}=\\sum_{i=1}^{K}\\sum_{j=1}^{K}\\left[S_{d}\\left(i,j\\right)\\right]^{2} \\tag{14}\\]
\\[\\text{Correlation}=\\sum_{i=1}^{K}\\sum_{j=1}^{K}\\frac{\\left(i-\\mu_{x}\\right)\\left(j-\\mu_{y}\\right)S_{d}\\left(i,j\\right)}{\\sigma_{x}\\sigma_{y}} \\tag{15}\\]
where \\(d\\) is the separation distance, \\(K\\) is the number of gray levels, \\(P_{d}(i,j)\\) is the second-order co-occurrence statistic of gray levels \\(i\\) and \\(j\\), \\(S_{d}(i,j)\\) is the normalized GLCM, \\(\\mu_{x}\\) and \\(\\sigma_{x}\\) represent the mean value and standard deviation of the rows, and \\(\\mu_{y}\\) and \\(\\sigma_{y}\\) those of the columns.
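A compact illustration of (9)-(15) with scikit-image is given below. The 15 \\(\\times\\) 15 window and 64 gray levels follow the settings reported in Section IV, whereas the single 0-degree offset and unit distance are illustrative assumptions; entropy is computed directly from (13) for clarity. This is a sketch, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=64, distance=1):
    """Compute the six GLCM texture features (9)-(15) for one image window.
    `window` is a float amplitude patch, e.g., 15 x 15 pixels; it is first
    quantized to `levels` gray levels."""
    edges = np.linspace(window.min(), window.max(), levels + 1)[1:-1]
    q = np.digitize(window, edges).astype(np.uint8)        # values in [0, levels-1]
    glcm = graycomatrix(q, distances=[distance], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    s = glcm[:, :, 0, 0]                                   # normalized GLCM S_d
    feats = {name: graycoprops(glcm, name)[0, 0]
             for name in ("homogeneity", "contrast", "dissimilarity", "ASM", "correlation")}
    feats["entropy"] = -np.sum(s[s > 0] * np.log(s[s > 0]))  # (13)
    return feats
```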
A local Moran statistic for an observation \\(i\\) is defined as [28]
\\[\\mathrm{F}_{\\mathrm{mor}}=\\frac{x_{i}-\\mu}{\\sigma^{2}}\\sum_{j=1}^{N}w_{ij}\\left(x_{j}-\\mu\\right) \\tag{16}\\]
where \\(N\\) is the number of pixels in one observation, \\(\\mu\\) is the mean value, \\(\\sigma\\) is the standard deviation, and \\(w_{ij}\\) describes the neighborhood relationship between pixels \\(i\\) and \\(j\\). In this study, \\(w_{ij}\\) is set as a rook case, given as
\\[\\mathrm{W}_{\\mathrm{rook}}=\\begin{pmatrix}0&1&0\\\\ 1&0&1\\\\ 0&1&0\\end{pmatrix}. \\tag{17}\\]
Finally, the LMI feature is calculated according to the expression
\\[\\mathrm{LMI}=\\frac{\\mathrm{F}_{\\mathrm{mor}}-\\mathrm{F}_{\\mathrm{close}}}{\\mathrm{F}_{\\mathrm{mor}}+\\mathrm{F}_{\\mathrm{close}}} \\tag{18}\\]
where \\(\\mathrm{F}_{\\mathrm{close}}\\) represents the result of the morphological closing operation.

### _Postprocessing by Modified KLD_

Generally, the postprocessing step has two purposes. The first is to eliminate isolated targets in the initial classification, and the second is to eliminate false alarms or misclassified targets. For the first purpose, isolated regions are removed according to their pixel count. In this article, the elimination threshold is set to 100, which means that any region containing fewer than 100 pixels is removed. This is done not only because isolated targets affect the interpretation, but also because too small a sample size makes the GTD parameter estimation used in the KLD calculation extremely inaccurate. For the second purpose, pixels with a slope greater than 15\\({}^{\\circ}\\), derived from the DEM, are masked to eliminate the impact of terrain shadows on water extraction. After that, the KLD postprocessing is used to deal with the remaining false alarms and misclassifications.

The KLD is one of the most widely used metrics for measuring the distance between two distributions \\(p_{1}(x)\\) and \\(p_{2}(x)\\) and is defined as
\\[J_{D}\\left(p_{1}(x),p_{2}\\left(x\\right)\\right)=\\mathrm{KL}\\left(p_{1}\\left(x\\right),p_{2}\\left(x\\right)\\right)+\\mathrm{KL}\\left(p_{2}\\left(x\\right),p_{1}(x)\\right) \\tag{19}\\]
where \\(\\mathrm{KL}(\\cdot,\\cdot)\\) represents the KL divergence. In this study, the two distributions are modeled by the GTD. Thus, for two given GTDs with parameters \\(\\{k_{1},v_{1},\\sigma_{1}\\}\\) and \\(\\{k_{2},v_{2},\\sigma_{2}\\}\\), the KL divergence between \\(p_{1}(x)\\) and \\(p_{2}(x)\\) can be derived as
\\[\\mathrm{KL}\\left(p_{1}\\left(x\\right),p_{2}\\left(x\\right)\\right)=A_{1}+A_{2}+A_{3}+A_{4} \\tag{20}\\]
where \\(A_{1}=\\ln(C_{1}/C_{2})\\), with \\(C_{m}=|v_{m}|k_{m}{}^{k_{m}}/[\\sigma_{m}^{k_{m}v_{m}}\\Gamma(k_{m})],m=1,2\\), \\(A_{2}=-k_{1}\\), \\(A_{3}=(k_{1}v_{1}-k_{2}v_{2})(\\ln(\\sigma_{1}^{v_{1}}/k_{1})+\\Psi(k_{1}))/v_{1}\\), and
\\[A_{4}=\\begin{cases}\\left(\\frac{\\sigma_{1}}{\\sigma_{2}}\\right)^{v_{2}}\\frac{k_{2}\\Gamma(k_{1}+v_{2}/v_{1})}{k_{1}{}^{v_{2}/v_{1}}\\Gamma(k_{1})},&\\frac{v_{2}}{v_{1}}>-k_{1}\\\\ \\infty,&\\text{otherwise}\\end{cases}. \\tag{21}\\]
Similarly, the KL divergence between \\(p_{2}(x)\\) and \\(p_{1}(x)\\) can be derived as
\\[\\mathrm{KL}\\left(p_{2}\\left(x\\right),p_{1}\\left(x\\right)\\right)=B_{1}+B_{2}+B_{3}+B_{4} \\tag{22}\\]
where \\(B_{1}=\\ln(C_{2}/C_{1})\\), \\(B_{2}=-k_{2}\\), \\(B_{3}=(k_{2}v_{2}-k_{1}v_{1})(\\ln(\\sigma_{2}^{v_{2}}/k_{2})+\\Psi(k_{2}))/v_{2}\\), and
\\[B_{4}=\\begin{cases}\\left(\\frac{\\sigma_{2}}{\\sigma_{1}}\\right)^{v_{1}}\\frac{k_{1}\\Gamma(k_{2}+v_{1}/v_{2})}{k_{2}{}^{v_{1}/v_{2}}\\Gamma(k_{2})},&\\frac{v_{1}}{v_{2}}>-k_{2}\\\\ \\infty,&\\text{otherwise}\\end{cases}. \\tag{23}\\]
For two GTDs, the KLD expresses their degree of dissimilarity: the larger the KLD, the larger the difference between the two distributions, and the smaller the distance, the more similar they are. When the category of one distribution is known, i.e., land or water, the KLD can be used to measure how similar an unknown region is to the known one.
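The following sketch evaluates (19)-(23) numerically for two fitted parameter tuples (e.g., those returned by the estimator sketch given earlier). It is an illustration rather than the authors' implementation, and it simply returns infinity when the finiteness condition is violated.

```python
import numpy as np
from scipy.special import gammaln, digamma

def gtd_kl(p, q):
    """KL divergence KL(p || q) between two GTDs, following (20)-(21).
    Each argument is a (sigma, k, v) tuple."""
    s1, k1, v1 = p
    s2, k2, v2 = q
    logC = lambda s, k, v: np.log(abs(v)) + k * np.log(k) - k * v * np.log(s) - gammaln(k)
    a1 = logC(s1, k1, v1) - logC(s2, k2, v2)
    a2 = -k1
    a3 = (k1 * v1 - k2 * v2) * (v1 * np.log(s1) - np.log(k1) + digamma(k1)) / v1
    if v2 / v1 <= -k1:
        return np.inf
    a4 = (s1 / s2) ** v2 * k2 * np.exp(gammaln(k1 + v2 / v1) - gammaln(k1)) / k1 ** (v2 / v1)
    return a1 + a2 + a3 + a4

def gtd_kld(p, q):
    """Symmetric Kullback-Leibler distance (19) used for region comparison."""
    return gtd_kl(p, q) + gtd_kl(q, p)
```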
For example, for an individual region in the initial labeling results, if its KLD to the water reference is smaller than that to the land reference, it is more likely to be a water region. However, there are cases where a region is labeled as water in the initial classification result while the KLD discrimination indicates that it is closer to land. The category information of the region is already included in the initial classification results, and this information should be used in the postprocessing. A parameter for modifying the KLD, called the KL penalty factor, is therefore introduced in the postprocessing in order to exploit both the initial classification results and the spatial information. If a region of unknown category is surrounded by water, then the region is more likely to be water and less likely to be an island. For initial classification results with only two categories, if a region is an isolated water region, its surroundings must be land. Even if, for some reason, this region is judged to be water in the initial classification, it is still more likely to be land from the perspective of spatial relationships. Therefore, when calculating the KLD from this region to land and to water, the constraint of the spatial relationship is introduced by the KL penalty factor. This makes the region less likely to be water unless the region and the land are sufficiently different. Based on the initial classification results, the largest water and land regions are chosen as the default references. For an individual region in the initial labeling results, the KLD from the land reference (\\(\\mathrm{KLD_{L}}\\)) and from the water reference (\\(\\mathrm{KLD_{W}}\\)) can be calculated separately. The KL penalty factor \\(P_{\\mathrm{KL}}\\) is given as
\\[P_{\\mathrm{KL}}=\\frac{\\mathrm{KLD_{L}}+\\mathrm{KLD_{W}}}{2} \\tag{24}\\]
which is the mean value of \\(\\mathrm{KLD_{L}}\\) and \\(\\mathrm{KLD_{W}}\\). The modified KLD (MKLD) between a region and the water reference is defined as
\\[\\mathrm{MKLD_{W}}=\\mathrm{KLD_{W}}+P_{\\mathrm{KL}}. \\tag{25}\\]
Similarly, the MKLD between a region and the land reference is defined as
\\[\\mathrm{MKLD_{L}}=\\mathrm{KLD_{L}}+P_{\\mathrm{KL}}. \\tag{26}\\]
For a region judged to be water in the initial classification, \\(\\mathrm{MKLD_{W}}\\) and \\(\\mathrm{KLD_{L}}\\) are compared. If \\(\\mathrm{MKLD_{W}}>\\mathrm{KLD_{L}}\\), the region is considered to be closer to the land, and the category of this region is changed to land. In contrast, if \\(\\mathrm{MKLD_{W}}<\\mathrm{KLD_{L}}\\), the category of the region is not changed. Similarly, for regions labeled as land in the initial classification, \\(\\mathrm{MKLD_{L}}\\) and \\(\\mathrm{KLD_{W}}\\) are compared to determine whether to update the category.
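This decision rule can be summarized by the short sketch below, which reuses the `gtd_kld` sketch above; the function name and the string labels are our own illustrative choices, not the authors' code.

```python
def refine_label(region_params, initial_label, land_ref, water_ref):
    """Update one region's label with the modified KLD rule (24)-(26).
    `region_params`, `land_ref`, `water_ref` are GTD parameter tuples."""
    kld_l = gtd_kld(region_params, land_ref)
    kld_w = gtd_kld(region_params, water_ref)
    p_kl = 0.5 * (kld_l + kld_w)                  # KL penalty factor (24)

    if initial_label == "water":
        # (25): penalize the water hypothesis and re-check against land
        return "land" if kld_w + p_kl > kld_l else "water"
    else:
        # (26): penalize the land hypothesis and re-check against water
        return "water" if kld_l + p_kl > kld_w else "land"
```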
## IV Experiments and Results

In order to verify the false-alarm suppression ability of the OGTD features and of the KLD postprocessing in plateau lake extraction, six SAR images containing the lakes were preprocessed, as shown in Fig. 4.

Fig. 4: Amplitude images and ground-truth graphs of the lakes. (a) Amplitude images in VV polarization. (b) Ground-truth graphs digitized by visual interpretation of the amplitude images.

### _Feature Importance Evaluation_

Here, four types of features were calculated for three polarization modes (VH, VV, and dual-polarization), including the amplitude, the GLCM features (homogeneity, contrast, dissimilarity, entropy, ASM, and correlation), the LMI feature, and the OGTD features (v and \\(\\sigma\\)). For the GLCM features, the window size is 15 \\(\\times\\) 15 pixels, which will be discussed in Section V, and the gray level \\(K\\) is 64. For the OGTD features and the LMI feature, the parameter settings are the defaults. The VV polarization image covering Zhuonai lake was chosen to show all the features mentioned previously (see Fig. 5). After the feature calculations, all features were stacked into three types of feature images, namely VH polarization (10 bands), VV polarization (10 bands), and dual-polarization (20 bands). The dual-polarization feature image includes all features of both VH polarization and VV polarization.

Fig. 5: Multifeatures calculated from the VV polarization amplitude image of Zhuonai lake. (a) Homogeneity. (b) Contrast. (c) Dissimilarity. (d) Entropy. (e) ASM. (f) Correlation. (g) LMI. (h) OGTD v. (i) OGTD \\(\\sigma\\).

In order to further understand the function of different features in plateau lake extraction, the RF was used for feature evaluation, since it can provide the contribution of each feature to the classification. The number of trees in the RF classifier was set to 20, which will be discussed in Section V; the splitting criterion is the Gini impurity; and the maximum depth was left at its default value, which means that nodes are expanded until all leaves are pure or contain fewer samples than the minimum required for a split.

Fig. 6: Results of feature importance evaluation by the RF algorithm.

Fig. 6 shows the results of the feature importance evaluation by the RF algorithm. The feature importance ranks differ among the polarizations. For the VH polarization, the newly proposed OGTD \\(\\sigma\\) and OGTD v rank in the top two, together contributing more than 45% of the total. The next two, ASM and entropy, which are GLCM features, contribute 13.48% and 9.72%, respectively. The remaining feature contributions are all under 10%. For the VV polarization, OGTD \\(\\sigma\\) provides an enormous contribution of approximately 40%, and the next three features (ASM, entropy, and dissimilarity) contribute more than 15%. Except for OGTD v, the remaining features provide little contribution. For the dual-polarization, the features calculated from VV polarization generally have a higher contribution than those from VH polarization. Among them, OGTD \\(\\sigma\\) from VV polarization provides 25% and ranks first, followed in decreasing order by the GLCM features from VV polarization, such as homogeneity, ASM, entropy, and dissimilarity. OGTD \\(\\sigma\\) from VH polarization exceeds two VV polarization features, namely correlation and amplitude, and is the most important feature from VH polarization. In summary, among the ten features used in this article, OGTD \\(\\sigma\\) is essential for the extraction of plateau lakes in all three polarization modes, while in VH polarization OGTD v contributes more to the extraction of lakes than any other feature except OGTD \\(\\sigma\\).

### _Classification by Combining Multifeatures and KLD_

Three commonly used classifiers, namely the SVM, the multilayer perceptron (MLP), and the decision tree (DT), together with their combinations with KLD postprocessing, were chosen for comparison with the proposed method, which combines the RF and the KLD. All of the algorithms were compared in three polarization modes, namely VH polarization, VV polarization, and dual-polarization. The SVM classifier used a radial basis function kernel, with gamma set to the inverse of the number of features, i.e., 0.1 for single polarization and 0.05 for dual polarization. The MLP classifier has two hidden layers of size 20, and the activation function of the hidden layers is ReLU. For all six lake images, the training samples are 20 000 points randomly generated from each ground-truth graph.
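As a rough illustration, the classifier settings described above map onto scikit-learn as follows. This is not the authors' code; the variable names and the assumption that land/water samples are encoded as 0/1 are ours.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

n_features = 20   # dual-polarization feature stack; 10 for a single channel

# X_train: (n_samples, n_features) feature vectors, y_train: 0 = land, 1 = water
classifiers = {
    "RF":  RandomForestClassifier(n_estimators=20, criterion="gini", max_depth=None),
    "SVM": SVC(kernel="rbf", gamma=1.0 / n_features),   # 0.05 dual-pol, 0.1 single-pol
    "MLP": MLPClassifier(hidden_layer_sizes=(20, 20), activation="relu"),
    "DT":  DecisionTreeClassifier(criterion="gini"),
}

# for name, clf in classifiers.items():
#     clf.fit(X_train, y_train)
```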
All of the lake images were classified by the four chosen methods, and the results were compared with those obtained after adding the KLD postprocessing. The results for Zhuonai lake in VH polarization are shown in Fig. 7. For SVM and MLP, there are a large number of isolated speckles in the classification results, which are the primary source of false alarms; these are largely suppressed in the results of DT and RF. When the KLD postprocessing step was added to these methods, the speckles were suppressed further and the resulting images retained complete boundaries. Regions smaller than 100 pixels were eliminated in the postprocessing, and the remaining black spots may be thermokarst ponds slightly larger than 100 pixels. However, due to the influence of the initial classification results based on SVM and MLP, parts of the Zhuonai lake boundaries are missing, although the false alarms in them have been suppressed.

Fig. 7: Results of Zhuonai lake in VH polarization. (a) SVM. (b) MLP. (c) DT. (d) RF. (e) SVM+KLD. (f) MLP+KLD. (g) DT+KLD. (h) RF+KLD.

For VV polarization, the classification results are generally better than those of VH polarization. The results for Yanhu lake (and Haidingnuoer lake) are shown in Fig. 8. However, in the initial classification results of the four methods, there are still speckled misclassifications in the interior of Yanhu lake. Compared with the initial classification, the results after adding the KLD postprocessing have fewer speckles and purer interiors. It is hard to judge which of the DT\\(+\\)KLD and RF\\(+\\)KLD results is better, because they both have smooth boundaries and low false alarms.

Fig. 8: Results of Yanhu lake in VV polarization. (a) SVM. (b) MLP. (c) DT. (d) RF. (e) SVM+KLD. (f) MLP+KLD. (g) DT+KLD. (h) RF+KLD.

The dual-polarization results are better than the single-polarization results, including both VH and VV. Fig. 9 shows the results for Duoergaicuo (and Cuodarima) in dual-polarization. In the initial classification results, there are a few speckles in Duoergaicuo and false alarms in the bottom left corner of the image. The KLD postprocessing eliminated part of the false alarms in the initial classification results, especially in the RF\\(+\\)KLD results.

Fig. 9: Results of Duoergaicuo in dual-polarization. (a) SVM. (b) MLP. (c) DT. (d) RF. (e) SVM+KLD. (f) MLP+KLD. (g) DT+KLD. (h) RF+KLD.

In order to evaluate the extraction results of all lakes quantitatively, two metrics, the overall accuracy (OA) and the false-alarm rate (FR), were introduced; the results are given in Tables I-VI, with the best values in bold. The OA indicates the correct rate, and the FR indicates the false alarms. For VH polarization, the OAs of Zhuonai lake based on SVM and MLP are about 80%. The corresponding FRs are as high as 12.48% and 15.36%, respectively, suffering from speckles. Also for Zhuonai lake, DT and RF achieved better results, with FRs of 3.57% and 2.31%, respectively. Among the four methods without KLD postprocessing, RF achieved the highest OA and the lowest FR. After KLD postprocessing, the results improved, with higher OA and better-controlled FR. From Tables I and II, it can be seen that, in terms of OA, RF\\(+\\)KLD is optimal for more than half of the lakes. As for the FR, DT\\(+\\)KLD achieves the lowest value for more than half of the lakes, which is due to the effective suppression of false alarms by the KLD postprocessing. For VV polarization, the RF results are better than those of the other three methods in the initial classification, while DT\\(+\\)KLD is better than RF\\(+\\)KLD when the KLD postprocessing is added.
When the features of the two polarization channels are used simultaneously, RF\\(+\\)KLD achieves the best OA for five lakes, the exception being Zhuonai lake. Meanwhile, the suppression of false alarms by RF\\(+\\)KLD is excellent, with the FR below 1% for five lakes.

## V Discussion

### _Noise Analysis in Different Polarizations_

For dark targets in SAR images, such as water and oil, the backscattering intensity may be lower than the instrument noise floor [41, 42], i.e., the noise equivalent sigma zero (NESZ), so their extraction accuracy is easily affected. In such a situation, features such as the OGTD features no longer describe the ground objects accurately, but rather the background thermal noise. For the Sentinel-1 IW mode dual-polarization product, the image may suffer from crosstalk between channels and exhibit a lower signal-to-noise ratio (SNR) [43, 44]. Hence, it is necessary to analyze the signal level in plateau lake extraction tasks based on Sentinel-1 IW images. For the Sentinel-1 GRD product, the NESZ can be calculated from the metadata XML file contained in each product and the equations in the Sentinel-1 product specification provided by ESA. Nevertheless, both the noise metadata and the equations are in the log domain, so the amplitude image first needs to be converted back to the log domain, i.e., to decibels. Eleven sampling points, each with a 7 \\(\\times\\) 7 neighborhood, were selected on the lake and on the land, respectively, and their mean values, together with the noise floor curve, are plotted against the incidence angle in Fig. 10. As shown in Fig. 10, for both the VH channel and the VV channel, the discontinuities of the noise floor curve at the boundaries between different swaths are evident. For each swath, the noise level is low in the middle and high at both edges. For VH polarization, the backscatter values of the lake sampling points are equal to or less than the noise floor, while the backscatter values of the land sampling points are higher than the noise floor. Therefore, the scattering characteristics of the lake have been contaminated by noise, leading to lower accuracy and less robust results when extracting lakes. In VV polarization, the backscattering values of both the lake and the land are significantly higher than the noise floor, and the higher SNR provides the lake with sufficient signal to be distinguished from the noise. However, it can be seen from the previous experiments that the results of the VH images obtained with the proposed method also show good performance. Furthermore, when the two polarization channels are used jointly, the accuracy is further improved, which indicates that VH polarization can provide information not contained in VV polarization for the classification.

Fig. 10: Signal-to-noise analysis of each polarization channel; the vertical bars show the mean and standard deviation of the backscatter values \\(\\sigma_{0}\\) in (1) at the sample points. (a) VH polarization. (b) VV polarization.
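As a small illustration of the comparison carried out in Fig. 10, the following sketch interpolates a NESZ curve at the sample incidence angles and flags samples at or below the noise floor. The NESZ values themselves must be taken from the product annotation as described above; all variable and function names here are our own assumptions.

```python
import numpy as np

def below_noise_floor(sigma0_db, incidence_deg, nesz_angle_deg, nesz_db):
    """Flag samples whose backscatter (dB) is at or below the noise floor,
    interpolating the NESZ curve (assumed sorted by angle) at each sample."""
    nesz_at_sample = np.interp(incidence_deg, nesz_angle_deg, nesz_db)
    return sigma0_db <= nesz_at_sample

# e.g., lake samples in VH typically sit at or under the NESZ curve,
# while land samples stay above it:
# flags = below_noise_floor(lake_sigma0_db, lake_angles, nesz_angles, nesz_values)
```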
### _Estimated Window Size Selection of GLCM Features_

For the GLCM features, the size of the estimation window is an important parameter because it may affect the performance of the GLCM features in the importance evaluation and the accuracy of the lake extraction. A smaller estimation window only expresses texture features at a small scale, while a larger one easily blurs the edges of the feature image and requires a much higher computational cost. It is therefore necessary to obtain a practical and reliable estimation window size through experiments. The window size starts at 3 and increases by two up to 21. The normalized values of the GLCM features were plotted for the two categories, as shown in Fig. 11. The black dashed line is the difference in the GLCM features between the two categories. By observing the trend of the black dashed line, the optimal estimation window size can be determined. From Fig. 11(a), for VH polarization, the difference between the two categories initially increased with increasing window size. When the estimation window reached 15, this rate of increase slowed down. For some features in VV polarization, such as homogeneity and dissimilarity, the trend is relatively stable at the beginning. The other features behave the same as in VH polarization: the rising speed slows down when the window size reaches 15. Although the difference still increases as the window size grows, the computation also consumes more resources. Therefore, 15 is considered the optimal window size.

Fig. 11: Normalized GLCM feature values of lake and land, and their value differences. (a) VH polarization. (b) VV polarization.

### _Number of Trees in the RF Classifier_

The performance of the RF algorithm is not easily affected by its parameters, and there is generally little discussion of the RF parameters. However, for data with much noise, such as SAR images, some RF parameters, such as the number of trees, may affect the results. A higher number of trees can provide better performance but slows down the computation. Thus, a proper number of trees makes the predictions stronger and more stable. A two-step optimal tree number estimation scheme with a cross-validation score was used, comprising a rough estimation and a precise estimation. In the rough estimation step, the tree numbers were set to 1 and 10, and then increased by ten up to 100, as shown in Fig. 12(a). The scores rise significantly between 1 and 25 trees, and this interval was used for the precise estimation. In the precise estimation, the tree number started from 1 and increased by one up to 25, as shown in Fig. 12(b). In the VH polarization result, the score of the RF model is basically stable once the number of trees reaches 20. For VV polarization and dual-polarization, the fluctuation of the score is not obvious. Combining the results of the different polarization channels, 20 is a suitable choice for the number of trees.

Fig. 12: Two-step optimal tree number estimation scheme. (a) Rough estimation. (b) Precise estimation.

### _Band Selection Analysis_

In most cases, dual-polarization gives better lake extraction results than VV polarization. However, for Kekexili lake and Duoergaicuo, VV polarization with the SVM and SVM\\(+\\)KLD methods gives better results than dual-polarization. An experiment was therefore added to verify whether the accuracy can be improved by reducing the number of feature bands. According to the results of the RF feature importance evaluation, the least important features were removed one by one, including the amplitude from VH, the amplitude from VV, the correlation from VH, the dissimilarity from VH, and the LMI from VH. Then, the OAs and FRs were calculated, as shown in Fig. 13.
For Kekexili lake, from 15 to 20 features, the OAs of the four methods changed only slightly, although their FRs fluctuated a little when 16 feature bands were combined with RF. Moreover, in the results for Kekexili lake, SVM and SVM\\(+\\)KLD are indeed able to achieve lower FRs than RF and RF\\(+\\)KLD, which differs from the other five lakes. For Duoergaicuo, the OAs of RF decrease as the number of feature bands decreases, while the OAs of SVM are unchanged. Meanwhile, the fluctuation of the FRs of SVM and SVM\\(+\\)KLD is slight, while the FRs of RF and RF\\(+\\)KLD increase a little. Overall, when the number of feature bands is changed, the OAs and FRs of the results generated by the four selected algorithms change only slightly. The features ranked low in the feature importance evaluation may not provide further information for the classification, but they also do not hinder the performance of the algorithms. Although VV polarization with the SVM and SVM\\(+\\)KLD methods gives better results than dual-polarization for Kekexili lake and Duoergaicuo, it is preferable to use more features together with RF\\(+\\)KLD, which is more robust, when performing plateau lake extraction in the QTP.

Fig. 13: OAs and FRs after band selection. (a) Kekexili lake. (b) Duoergaicuo.

## VI Conclusion

Aiming at a large-scale description of plateau lakes and at reducing the high false-alarm rate in plateau lake extraction, a practical algorithm combining novel statistical features, named OGTD features, with the KLD has been proposed to extract the shorelines of plateau lakes in the QTP using SAR imagery. The OGTD features, together with two types of conventional texture features, namely the GLCM features and the LMI feature, and the amplitude, were used to refine the description of the lake surface. A postprocessing step using the modified KLD between regions after the initial classification was used to suppress the false alarms in the initial classification and to improve the accuracy. The proposed approach was applied to extract the lakes in the Hoh Xil region. Some conclusions can be summarized as follows.

1. The proposed OGTD features, which use a large-scale description of the spatial context to suppress false alarms, were compared with the conventional texture features, including the GLCM features, the LMI feature, and the amplitude, using the RF importance evaluation. For the VH polarization, OGTD \\(\\sigma\\) and OGTD v are the two most important features, contributing a total of more than 45%. For the VV polarization and the dual-polarization, OGTD \\(\\sigma\\) provides approximately 40% and 25% of the contribution, respectively. The results showed that the OGTD features are suitable not only for high-SNR data, such as VV polarization, but also for VH polarization, which is susceptible to thermal noise. At the same time, the OGTD features provide an accurate description for the extraction of plateau lakes, which improves the accuracy while suppressing false alarms.

2. The four classification methods, namely SVM, MLP, DT, and RF, and their combinations with KLD postprocessing were compared. Among the methods without KLD postprocessing, RF performs best in terms of OA and FR. After combining with KLD postprocessing, DT\\(+\\)KLD is significantly improved and achieves results comparable to RF\\(+\\)KLD in the single-polarization channels, while the latter achieves the highest OAs and the lowest FRs in five lakes.
The KLD postprocessing improves the accuracy of the four methods tested in this article and can suppress false alarms at the same time, reaching the OA of 99.54% while maintaining the FR of 0.32%. More long-term studies involving spatial analysis of plateau lakes are needed to better understand the response of permafrost to climate warming and steadily increasing human activities. Future work to be carried out includes 1) using the polarimetric features to improve the mapping accuracy and 2) using multi-temporal data to monitor seasonal changes in the area of the lakes, compare lake expansion rates and landscape dynamics, and link the results to the meteorological records to predict the effects of future climate change in the QTP. ## Acknowledgment The authors would like to thank the European Space Agency for providing Sentinel-1 data. ## References * [1] G. Zhang, T. Yao, H. Xie, S. Kang, and Y. Lei, \"Increased mass over the Tibetan Plateau: From lakes or glaciers?\" _Geophys. Res. Lett._, vol. 40, no. 10, pp. 2125-2130, 2013. * [2] W. W. Immezrezel, L. P. H. Van Beek, and M. F. P. Bierkens, \"Climate change will affect the Asian water towers,\" _Science_, vol. 328, no. 5984, pp. 1382-1385, 2010. * [3] X. Zhou, X. Liu, and Z. Zhang, \"Automatic extraction of lakes on the Qinghai-Tibet Plateau from Sentinel-1 SAR images,\" in _Proc. SAR Big Data Era_, 2019, pp. 1-4. * [4] J. Pei _et al._, \"Recovered Tibetan antelope at risk again,\" _PLoS One_, vol. 14, 2019, Art. no. e0211798. * [5] Z. Zhang, C. Wang, H. Zhang, Y. Tang, and X. Liu, \"Analysis of permafrost region coherence variation in the Qinghai-Tibet Plateau with a high-resolution TerraSAR-X image,\" _Remote Sens._, vol. 10, no. 2, 298, 2018. * [6] X. Li, D. Long, Q. Huang, P. Han, F. Zhao, and Y. Wada, \"High-temporal-resolution water level and storage change data sets for lakes on the Tibetan Plateau during 2000-2017 using multiple altimetric missions and Landsat-derived lake shortening,\" _Earth Syst. Sci. Data Discuss._, vol. 11, no. 4, pp. 1603-1627, 2019. * [7] Y. Yang _et al._, \"Landsat 8 OLI image based terrestrial water extraction from heterogeneous backgrounds using a reflectance homogenization approach,\" _Remote Sens. Environ._, vol. 171, pp. 14-32, 2015. * [8] M. G. Tulbure, M. Broich, S. V. Stehman, and A. Kommareddy, \"Surface water extent dynamics from three decades of seasonally continuous Land-satine series at subcontinental scale in a semi-arid region,\" _Remote Sens. Environ._, vol. 178, pp. 142-157, 2016. * [9] L. Feng, X. Hou, and Y. Zheng, \"Monitoring and understanding the water transparency changes of fifty large lakes on the Yangtze Plain based on long-term MODIS observations,\" _Remote Sens. Environ._, vol. 221, pp. 675-686, 2019. * [10] C. S. Watson, O. King, E. S. Miles, and D. J. Quiucey, \"Optimising NDWI surgical pond classification on Himalayan debris-covered glaciers,\" _Remote Sens. Environ._, vol. 217, pp. 414-425, 2018. * [11] K. V. Singh, R. Setia, S. Sahoo, A. Prasad, and B. Pateriya, \"Evaluation of NDWI and MNDWI for assessment of waterlogging by integrating digital elevation model and groundwater level,\" _Geocarto Int._, vol. 30, no. 6, pp. 650-661, 2015. * [12] S. Martinis _et al._, \"Comparing four operational SAR-based water and flood detection approaches,\" _Int. J. Remote Sens._, vol. 36, no. 13, pp. 3519-3543, 2015. * [13] H. Vickers, E. Malnes, and K.-A. 
Hogda, \"Long-term water surface area monitoring and derived water level using synthetic aperture radar (SAR) at Altevain, a medium-sized Arctic lake,\" _Remote Sens._, vol. 11, no. 23, p. 2780, 2019. * [14] B. Pham-Duc, C. Prigent, and F. Aires, \"Surface water monitoring within Cambodia and the Vietnamese Mekong Delta over a year, with Sentinel-1 SAR observations,\" _Water_, vol. 9, no. 6, p. 366, 2017. * [15] A. Pham, D. N Ha, C. D Man, T. T Nguyen, H. Q Bui, and T. TN Nguyen, \"Rapid assessment of flood innovation and damaged rice area in red river delta from sentinel 1A imagery,\" _Remote Sens._, vol. 11, no. 17, p. 2034, 2019. * [16] P. Nakmuenai, F. Yamazaki, and W. Liu, \"Automated extraction of inundated areas from multi-temporal dual-polarization RADARSAT-2 images of the 2011 central Thailand flood,\" _Remote Sens._, vol. 9, no. 1, p. 78, 2017. * [17] S. Martinis, J. Kersten, and A. Twele, \"A fully automated TerraSAR-X based flood service,\" _ISPRS J. Photogram. Remote Sens._, vol. 104, pp. 203-212, 2015. * [18] M. Santoro and U. Wegmmuller, \"Multi-temporal synthetic aperture radar metrics applied to map open water bodies,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 7, no. 8, pp. 3225-3238, Aug. 2014. * [19] Y.-S. Song, H.-G. Sohn, and C.-H. Park, \"Efficient water area classification using Radarast-1 SAR imagery in a high relief mountainous environment,\" _Photogram. Eng. Remote Sens._, vol. 73, no. 3, pp. 285-296, 2007. * [20] A. Niedermeier, E. Romanaccessen, and S. Lehner, \"Detection of coastlines in SAR images using wavelet methods,\" _IEEE Trans. Geosci. Remote Sens._, vol. 38, no. 5, pp. 2270-2281, Sep. 2000. * [21] M. Silveria and S. Heleno, \"Separation between water and land in SAR images using region-based level sets,\" _IEEE Geosci. Remote Sens. Lett._, vol. 6, no. 3, pp. 471-475, Jul. 2009. * [22] B. Tian, Z. Li, P. Tang, P. Zou, M. Zhang, and F. Niu, \"Use of intensity and coherence of X-band SAR data to map Thermostlast lakes on the northern Tibetan Plateau,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 9, no. 7, pp. 3164-3176, Apr. 2016. * [23] Q. Meng, X. Wen, L. Yuan, and H. Xu, \"Factorization-based active contour for water-land SAR image segmentation via the fusion of features,\" _IEEE Access_, vol. 7, pp. 4047-40358, 2019. * [24] L. Wang, P. Marzahn, M. Bernier, and R. Ludwig, \"Mapping permafrost landscape features using object-based image classification of multi-temporal SAR images,\" _ISPRSJ. Photogram. Remote Sens._, vol. 141, pp. 10-29, 2018. * [25] W. Huang _et al._, \"Automated extraction of surface water extent from Sentinel-1 data,\" _Remote Sens._, vol. 10, no. 5, p. 797, 2017. * [26] W. Lv, Q. Yu, and W. Yu, \"Water extraction in SAR images using GLCM and support vector machine,\" in _Proc. IEEE 10th Int. Conf. Signal Process. Proc._, 2010, pp. 740-743. * [27] Y. Zhang, G. Zhang, and T. Zhu, \"Seasonal cycles of lakes on the Tibetan Plateau detected by Sentinel-1 SAR data,\" _Sci. Total Environ._, vol. 703, 2020, Art. no. 135563. * [28] J. C. Valdiverio-Navarro, A. Salazar-Garibay, A. Tellez-Quinones, M. Orozco-del-Castillo, and A. A. Lopez-Caloca, \"Inland water body extraction in complex reliefs from Sentinel-1 satellite data,\" _J. Appl. Remote Sens._, vol. 13, no. 1, 2019, Art. no. 16524. * [29] S. Martinis, A. Twele, and S. Voigt, \"Towards operational near real-time flood detection using a split-based automatic thresholding procedure on high resolution TerraSAR-X data,\" _Nat. Hazards Earth Syst. 
Sci._, vol. 9, no. 2, pp. 303-314, 2009. * [30] L. Wang, M. Jolivel, P. Marzahn, M. Bernier, and R. Ludwig, \"Thermokarst pond dynamics in subarctic environment monitoring with radar remote sensing,\" _Permafrost. Perglical Process._, vol. 29, no. 4, pp. 231-245, 2018. * [31] Y. Wu, K. Ji, W. Yu, and Y. Su, \"Region-based classification of polarimetric SAR images using Vishart MRF,\" _IEEE Geosci. Remote Sens. Lett._, vol. 5, no. 4, pp. 668-672, Oct. 2008. * [32] W. B. Silva, C. C. Freitas, S. J. S. Sant'Anna, and A. C. Frery, \"Classification of segments in PoISAR imagery by minimum stochastic distances between Wishart distributions,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 6, no. 3, pp. 1263-1273, Jun. 2013. * [33] X. Qin, H. Zou, S. Zhou, and K. Ji, \"Region-based classification of SAR images using Kullback-Leibler distance between generalized gamma distributions,\" _IEEE Geosci. Remote Sens. Lett._, vol. 12, no. 8, pp. 1655-1659, Aug. 2015. * [34] B. Liu _et al._, \"Outburst flooding of the moraine-dammed Zhuonai Lake on Tibetan plateau: Cause and impacts,\" _IEEE Geosci. Remote Sens. Lett._, vol. 13, no. 4, pp. 570-574, Apr. 2016. * [35] W. H. Liu _et al._, \"Analysis on expansion trend and outburst risk of the Yanhu Lake in Holm Xil region, Qinghai-Tibet Plateau,\" _J. Glaciology Geocrovol._, vol. 41, no. 1, pp. 1-12, 2019. * [36] H. Li, W. Hong, Y. Wu, P. Fan, and S. Member, \"On the empirical-statistical modeling of SAR images with generalized gamma distribution,\" _IEEE J. Sel. Topics Signal Process._, vol. 5, no. 3, pp. 386-397, Jun. 2011. * [37] P. F. Felzenszwald and D. P. Huttenlocher, \"Efficient graph-based image segmentation,\" _Int. J. Comput. Vis._, vol. 59, no. 2, pp. 167-181, 2004. * [38] Z. Hu, Q. Zhang, Q. Zou, Q. Li, and G. Wu, \"Stepwise evolution analysis of the region-merging segmentation for scale parameterization,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 7, pp. 2461-2472, Jul. 2018. * [39] Q. Xu, Q. Chen, S. Yang, and X. Liu, \"Superpixel-based classification using K distribution and spatial context for polarimetric SAR images,\" _Remote Sens._, vol. 8, no. 8, p. 619, 2016. * [40] G. Gao, S. Gao, K. Ouyang, J. He, and G. Li, \"Scheme for characterizing clutter statistics in SAR amplitude images by combining two parametric models,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 10, pp. 5636-5646, Oct. 2018. * [41] S. Skrunes, C. Brekke, C. E. Jones, and B. Holt, \"A multisensor comparison of experimental oil spills in polarimetric SAR for high wind conditions,\" _IEEE J. Sel. Topics Appl. Earth Observ Remote Sens._, vol. 9, no. 11, pp. 4948-4961, Nov. 2016. * [42] L. Huang, B. Liu, X. Li, Z. Zhang, and W. Yu, \"Technical evaluation of Sentinel-1 IW mode cross-pol radar backscattering from the ocean surface in moderate wind condition,\" _Remote Sens._, vol. 9, no. 8, p. 854, 2017. * [43] P. W. Vachon and J. Wolfe, \"C-band cross-polarization wind speed retrieval,\" _IEEE Geosci. Remote Sens. Lett._, vol. 8, no. 3, pp. 456-459, May 2010. * [44] H. Shen, W. Perrie, Y. He, and G. Liu, \"Wind speed retrieval from VH dual-polarization RADARSAT-2 SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 9, pp. 5820-5826, Sep. 2014.
_Abstract_—Due to wind-induced waves, dry sand, wet snow, and terrain shadows, the lake extraction from synthetic aperture radar (SAR) imagery in the Qinghai-Tibet plateau is accompanied by false alarms. In this article, a practical plateau lake extraction algorithm combining novel statistical features and the Kullback-Leibler distance (KLD) using SAR imagery has been proposed. First, a mathematical description for the plateau lake surface called object-based generalized gamma distribution (OGTD) features has been proposed, which is able to suppress the false alarms by using spatial context information as the large-scale descriptor. Second, the random forest classifier is used to train a multifeature set, including conventional texture features and OGTD features, and output an initial labeling result. Finally, to suppress the false alarms in the initial lake extraction results, automatic postprocessing based on KLD has been used. The algorithm is tested by several experiments using Sentinel-1 SAR data, performing better than the state-of-the-art algorithms, achieving an overall accuracy of 99.54% while maintaining a false-alarm rate of 0.32%.

_Index Terms_—Kullback-Leibler distance (KLD), lake extraction, object-based generalized gamma distribution (OGTD), Qinghai-Tibet plateau (QTP), synthetic aperture radar (SAR).
# TKP-Net: A Three Keypoint Detection Network for Ships Using SAR Imagery

Xiunan Li, Peng Chen, Jingsong Yang, Wentao An, _Member, IEEE_, Gang Zheng, _Senior Member, IEEE_, Dan Luo, Aiying Lu, and Zimu Wang

Manuscript received 28 August 2023; revised 16 October 2023; accepted 28 October 2023. Date of publication 1 November 2023; date of current version 23 November 2023. This work was supported in part by the National Key R&D Program of China under Grant 2027Fk3902400 and in part by the China High Resolution Earth Observation System Program under Grant 41-Y30F07-9001-20/22. _(Corresponding authors: Peng Chen; Jingsong Yang.)_

Xiunan Li is with the Ocean College, Zhejiang University, Zhoushan 316021, China, and also with the State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou 310012, China (e-mail: [email protected]).

Peng Chen, Gang Zheng, Dan Luo, Aiying Lu, and Zimu Wang are with the State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou 310012, China (e-mail: [email protected]).

## I Introduction

Monitoring ships is of considerable importance in military and civilian fields [1], including maintaining national marine security, supervising water traffic, maritime fishery management, and maritime rescue [2]. Optical remote-sensing images have clear details and textures, which are key advantages for identifying ship types. However, optical imaging is easily impeded by weather conditions, including clouds, rain, and fog, and can only be conducted during daylight hours. In contrast, synthetic aperture radar (SAR) can be used at night and has fewer limitations imposed by weather conditions; therefore, it is widely used in ship detection [3]. With the development of high-resolution SAR satellites, finer scale ship information [4], such as the length, width [5], and direction, can be extracted from high-resolution SAR images.

Traditional SAR image detection algorithms first need to perform sea-land segmentation to remove land interference and then perform ship detection. The ship detection results are often affected by the accuracy of the sea-land segmentation [6]. The classic algorithms used in this context are the constant false alarm rate (CFAR) algorithms, such as the cell-averaging CFAR detector [7], the two-parameter CFAR detector [8], and the order-statistics CFAR detector [9]. The CFAR-based detection algorithm adaptively selects an appropriate detection threshold based on a statistical distribution model of the background clutter (such as the K distribution [10] and the gamma distribution [11]) [12]. This method works well when the sea surface clutter model is consistent with the distribution of the image being processed. However, this type of algorithm requires manual analysis of the target and background; the parameter design is complex, lacks versatility, and makes it difficult to deal with complex sea conditions, ship motion blur, sidelobe effects, and interference generated by near-shore ships [12].
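To illustrate the adaptive-threshold idea behind such detectors, a minimal one-dimensional cell-averaging CFAR sketch is given below; the training/guard window sizes and the fixed threshold scale are illustrative assumptions and do not reproduce the exact detectors cited above.

```python
import numpy as np

def ca_cfar_1d(power, num_train=16, num_guard=4, scale=3.0):
    """Cell-averaging CFAR on a 1-D power profile: estimate the local clutter
    level from training cells around each cell (excluding guard cells) and
    flag cells whose power exceeds `scale` times that estimate."""
    n = len(power)
    half = num_train // 2 + num_guard
    detections = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        train = np.r_[power[i - half:i - num_guard],       # leading training cells
                      power[i + num_guard + 1:i + half + 1]]  # trailing training cells
        detections[i] = power[i] > scale * train.mean()
    return detections
```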
It is also difficult to extract the accurate length, width, and bow information using the CFAR method. Recently, target detection algorithms based on deep convolutional neural networks (DCNN) have made considerable progress, and their powerful feature extraction and learning capabilities have strongly surpassed those of the traditional methods in terms of accuracy and robustness, and have received considerable attention from researchers. Depending on whether an anchor exists, this method can be anchor-based or anchor-free. Anchor-based methods predominantly include R-CNN [13], fast R-CNN [14], faster R-CNN [15], R-FCN [16], SSD [17], YOLOv2 [18], YOLOv3 [19], and YOLOv4 [20]. The anchor-based method needs to manually set many anchor boxes and introduce an excess number of hyperparameters, such as the scale, ratio, and quantity of anchors. A relatively small number of anchor boxes can match the target, causing an imbalance between the target and background that affects the convergence speed. The anchor-free method does not need to set the anchor box parameters. It is simple to use and its accuracy is comparable with that of the anchor-based algorithm and is now being used by an increasing number of researchers. CornerNet [21], ExtremeNet [22], and CenterNet [23] are representative anchor-free methods based on keypoints. CornerNet uses two keypoint in the upper left and lower right corners to locate the upper left and lower right corners of the object bounding box. It then uses the embedding vector to determine whether the two keypoints belong to the same object. ExtremeNet detects objects by predicting extreme and center points and grouping keypoints according to the geometric structure. CenterNet only uses the central keypoint to locate the target. The width and height parameters of the target are obtained through regression, which substantially improves the detection speed. Based on a DCNN, the current SAR ship detection algorithm omits the sea-land segmentation strategy of the traditional algorithm and uses an end-to-end method to send SAR images to the DCNN for detection. It then directly outputs ship detection results, which improves the efficiency and accuracy of detection [24]. Based on CenterNet, Guo et al. [25] proposed CenterNet\\(++\\), which introduced a feature refinement module to extract contextual information for detecting small ships. Considering the multiscale and multiangle characteristics of SAR image scenes and ship targets, Hu et al. [6] proposed a BANet-based anchor-free method for multiscale ship detection in SAR images. By introducing a deformable convolution, local ship features can be extracted effectively. Deng et al. [26] proposed a novel and effective method for learning deep ship detection from scratch, which achieved a high level of detection performance using Sentinel-1 data. The aforementioned algorithms all employ horizontal bounding boxes (HBBs) for object detection. However, in the case of remote-sensing images captured from an aerial perspective, objects in the image can have arbitrary orientations, and the use of HBBs can result in poor detection performance for densely arranged, arbitrarily oriented, and variably scaled objects. This is partly due to the inclusion of a significant amount of irrelevant background information within the HBBs, which results in imprecise object representations, as well as the nonmaximum suppression (NMS) algorithm's tendency to remove highly overlapping ships, resulting in missed detections. 
Recently, detection methods based on an oriented bounding box (OBB) have been proposed to solve these problems. Xia et al. [27] used a method of returning four vertex coordinates to generate an OBB based on a faster R-CNN algorithm. Gliding vertex [28] works on the assumption that the order of coordinates generated by this method of returning four vertices is easy to confuse. Therefore, they choose to return four ratios, which represent the relative sliding offsets on each side of the HBB. Yang and Yan [29] represented the OBB as a center point, length, width, and angle, and the angle was predicted by classification rather than regression. He et al. [30] proposed a keypoint-based anchor-free algorithm to generate SAR ship OBBs by learning polar coordinate encoding. Although the OBB algorithm obtained a compact bounding box and had certain direction information, it had 180\\({}^{\\circ}\\) ambiguity and could not be used to obtain the precise direction of each ship [see Fig. 1(a)]. Estimating the direction of motion of a ship from SAR images is usually performed indirectly using the ship-wake method [see Fig. 1(b)]. Ship wakes typically occur in many forms. The most common are turbulent wakes, Kelvin wakes, and narrow V-wakes [31]. Turbulent wakes occurring as dark lines along the longitudinal axis of a ship are the most commonly observed wakes in SAR images [32]. A narrow V-tail typically appears as a bright edge next to a darker trail [33]. The Kelvin wake is composed of shear, divergent, and sharp waves, and its V-shaped angle is approximately 39\\({}^{\\circ}\\)[34], which is usually smaller than the angle in SAR images. Ship wakes are predominantly shown as linear structures in the image [35]. The most common method is to use the Radon or Hough transform to detect lines [36]. The lines and ships detected are combined to obtain the ship's navigation direction [37]. However, the visibility of wake information in SAR images is affected by radar parameters, ship parameters, and background sea conditions [38] and is difficult to observe in many cases. We proposed a keypoint-based OBB detection algorithm that could automatically extract high-resolution SAR position and bow information using DCNN and keypoints without wakes [see Fig. 1(c)]. The proposed dataset reflected a strong performance. The contributions of this study were given as follows. 1. We proposed an end-to-end ship detection method based on three keypoints for arbitrary direction SAR. The rotation angle-prediction problem was transformed into an estimation and matching problem for keypoints, which avoided the angle periodicity problem based on the regression method. 2. Aiming to address the problem of 180\\({}^{\\circ}\\) ambiguity in the current OBB detection method for determining ship direction, we automatically obtained the shape and scattering information of the SAR bow and stern through two keypoints. We then assessed whether the two keypoints were the bow and obtained the accurate bow classification. 3. We exploited the difference in information extracted from two keypoints to construct a simple and practical analytical model to evaluate the bow classification results. Fig. 1: Ship direction detection method. (a) Rotating box detection. (b) Vessel-wake detection. (c) Rotating box detection \\(+\\) Head point extraction. The rest of this article is organized as follows. In Section II, the proposed method is described in detail. Section III presents the experimental datasets and comparisons. 
Section IV discusses the limitations of our method. Finally, Section V concludes this article. ## II Proposed Method Our proposed method is shown in Fig. 2 and has been divided into three main parts, that is, feature extraction and fusion, target detection head, and prediction module. The feature extraction and fusion module was used to extract the features of multiscale ships. The target detection head module was used to generate keypoint heatmaps, keypoint offsets, two side lengths of OBB boxes, direction vectors, and bow classification results. The prediction module combines the information from the detection head module to form the OBB and the head point position. ### _Feature Extraction and Fusion Module_ The feature extraction and fusion modules comprised two parts. The first part used the classic ResNet50 [39] structure for feature extraction. The second part used feature maps from different scales of the ResNet residual module for layer-by-layer fusion. We employ skip connections to fuse multiscale features at different layers, aiming to capture both fine-grained details and high-level contextual information. Unlike feature pyramid network [40], our method does not adopt the multibranch prediction head because the multibranch prediction head is more computationally intensive. The resolution of the feature map of the last layer of the ResNet was 32 times lower than that of the original image. For small-scale objects, the loss of information after repeated downsampling is relatively high, and small objects are easily missed. The semantic information was combined with shallow, high-resolution morphological information. As shown in Fig. 3, _C4_ was fused with _C5_ to obtain _P4_; _P4_ and _C3_ were fused to obtain _P3_; and _P3_ and _C2_ were fused to obtain _P2_. Taking the fusion of _C4_ and _C5_ as an example, assuming that the input image size is 512 \\(\\times\\) 512 \\(\\times\\) 3, the size of the _C5_ feature map is 16 \\(\\times\\) 16 \\(\\times\\) 2048, and the size of the feature map obtained by double upsampling through bilinear interpolation was 32 \\(\\times\\) 32 \\(\\times\\) 2048. Then, 3 \\(\\times\\) 3 convolution became 32 \\(\\times\\) 32 \\(\\times\\) 1024, spliced with _C4_ to 32 \\(\\times\\) 32 \\(\\times\\) 2048, and finally, 1 \\(\\times\\) 1 convolution becomes 32 \\(\\times\\) 32 \\(\\times\\) 1024. _P4_, _C3_, _P3_, and _C2_ exhibited similar fusion processes. The detailed structural parameters of the feature extraction and fusion modules are listed in Table I. ### _Detection Head_ #### Ii-B1 Keypoints Estimation Heatmaps are often used to locate keypoints in human bones. We use the same keypoint extraction method as CenterNet and CornerNet. In order to avoid the problem of close proximity of multiple keypoints, we set the parameters of the Gaussian function according to the size of each ship, and the maximum Gaussian kernel scale set does not exceed the width of the ship. Here, we use the center Keypoint \\(C\\), upper Keypoint \\(T\\), and lower Keypoint \\(B\\) to locate the key positions of the ship. The keypoints were divided, as shown in Fig. 4. The coordinate axis was established with the center Keypoint \\(C\\) as the origin. When the coordinates of keypoints \\(B\\) and \\(T\\) are on the _X_-axis, the left side of the _Y_-axis is Keypoint \\(B\\), and the right side is Keypoint \\(T\\). When Keypoints \\(B\\) and \\(T\\) are not on the _X_-axis, Fig. 3: Feature extraction and fusion module. Fig. 2: Overall architecture of our method. 
The network structure can be divided into three parts, that is, feature extraction and fusion module, detection head module, and prediction part. After the data had passed through the feature extraction and fusion module, the feature map, offset, size parameter, vector, and ship head layers had five parameter parts. The feature map and the offset were combined to generate keypoint coordinates. The size parameters and vectors were used to match the keypoint coordinates to generate OBB, and the ship head layers were used to determine which keypoint the bow was on. the keypoints above the \\(X\\)-axis are \\(T\\) and the keypoints below the \\(X\\)-axis are \\(B\\). The heat map of the keypoint labels is generated using a two-dimensional (2-D) Gaussian function \\(\\,e^{-\\frac{c_{s}^{2}+c_{s}^{2}}{2\\sigma^{2}}}\\,\\). The loss function uses a Gaussian focal loss [41], as shown in the following equation: \\[\\begin{split}& L_{\\text{hm}}=\\frac{-1}{N}\\sum_{h\\,=\\,1}^{H}\\sum_{w\\,=\\,1}^{W }\\\\ &\\times\\!\\begin{cases}(1\\!-\\!\\mathrm{hm}_{hw})^{\\alpha}\\text{log} \\,(\\mathrm{hm}_{hw})&\\text{if}\\,\\,\\,\\widehat{\\mathrm{hm}}_{hw}\\!=\\!1\\\\ &\\left(1\\!-\\!\\widehat{\\mathrm{hm}}_{hw}\\right)^{\\beta}\\!(\\mathrm{hm}_{hw})^ {\\alpha}\\text{log}\\,(1-\\mathrm{hm}_{hw})&\\text{otherwise}\\end{cases}\\end{split} \\tag{1}\\] where \\(N\\) is the number of objects in the image, and \\(H\\) and \\(W\\) are the height and width, respectively. After downsampling the image four times, \\(\\alpha\\) and \\(\\beta\\) are the hyperparameters (\\(\\alpha=2\\) and \\(\\beta=4\\)), \\(\\mathrm{hm}_{hw}\\) is the predicted heatmap value, and \\(\\widehat{\\mathrm{hm}}_{hw}\\) is the heatmap value of the real label. The coordinate position of the original image to the coordinate position of the heatmap is \\(d\\) times smaller than that of the original image and is rounded down. Therefore, there is a deviation between the coordinates of the keypoints extracted from the heatmap and those of the original image. An offset parameter was used to quantify this error, as shown in Fig. 5. The deviation at any keypoint is given as follows: \\[\\begin{split} o\\,=\\left(\\frac{x}{d}-\\left\\lfloor\\frac{x}{d} \\right\\rfloor,\\frac{y}{d}-\\left\\lfloor\\frac{y}{d}\\right\\rfloor\\right)\\end{split} \\tag{2}\\] where \\(x\\) and \\(y\\) are the real coordinates of the keypoint, and \\(d\\) is the scaling factor. The loss function uses the \\(L1\\) smoothing loss function as follows: \\[\\begin{split} L_{\\text{off}}=\\frac{1}{N}\\sum_{k=1}^{N}\\text{ Smooth}L1\\text{Loss}\\,(o_{k},\\hat{o}_{k})\\end{split} \\tag{3}\\] where \\(o_{k}\\) is the predicted bias and \\(\\hat{o}_{k}\\) is the true bias. #### Iii-B2 Scale Parameters Regression The scale parameters of the rotating box are represented by the long side \\(l\\) and the short side \\(s\\). The loss function uses a smooth \\(L1\\) loss function, as shown in the following equations: \\[\\begin{split} L_{l}=\\frac{1}{N}\\sum_{k=1}^{N}\\text{ Smooth}L1\\text{Loss}\\,\\Big{(}l_{k},\\widehat{l_{k}}\\Big{)}\\end{split} \\tag{4}\\] \\[\\begin{split} L_{s}=\\frac{1}{N}\\sum_{k=1}^{N}\\text{ Smooth}L1\\text{Loss}\\,(s_{k},\\widehat{s_{k}}).\\end{split} \\tag{5}\\] #### Iii-B3 Keypoints Matching When two or more keypoints are used to locate the target, a keypoint matching problem occurs. 
#### 3) Keypoints Matching

When two or more keypoints are used to locate a target, a keypoint matching problem occurs. The proposed method regresses a vector from the center point to each endpoint; if the distance between the endpoint predicted by this vector and an endpoint extracted from the heatmap is below a certain threshold, the keypoints are considered to belong to the same target. For numerical stability of the vector regression, we map the predicted vector \\(\\overrightarrow{v_{k}}\\) to the range between 0 and 1. The loss function of the vectors is given as follows:

\\[L_{v}=\\frac{1}{N}\\sum_{k=1}^{N}\\text{Smooth}L1\\text{Loss}\\left(\\left(2\\overrightarrow{v_{k}}-1\\right)\\frac{l_{k}}{2},\\ \\widehat{\\overrightarrow{v_{k}}}\\right) \\tag{6}\\]

where \\(\\widehat{\\overrightarrow{v_{k}}}\\) is the ground-truth vector from the center point to the endpoint.

Fig. 4: Division of keypoints. (a) When the coordinates of Keypoints \\(B\\) and \\(T\\) are not on the \\(X\\)-axis, the keypoint above the \\(X\\)-axis is \\(T\\), and the keypoint below the \\(X\\)-axis is \\(B\\). (b) When the coordinates of Keypoints \\(B\\) and \\(T\\) are on the \\(X\\)-axis, the keypoint on the left side of the \\(Y\\)-axis is \\(B\\), and the keypoint on the right side is \\(T\\).

Fig. 5: Offset of the feature map to the original image.

#### 4) Bow Classification

To facilitate the extraction of information related to the bow and stern of the ship, we use two independent classification branch modules in the architecture. These classification modules are designed to extract information from the positions corresponding to the upper and lower keypoints. The purpose of introducing these two independent classification modules is to capture features specifically associated with the ship's bow and stern regions. The upper and lower keypoints serve as reference points, allowing the classification modules to focus on extracting relevant information in the proximity of these keypoints. There are two keypoints at the bow and stern of the ship, and we use the BCE loss at these two endpoints to determine the probability that a point is the bow. The \\(L_{hc}\\) loss function is designed as follows:

\\[L_{hc}=-\\frac{1}{N}\\sum_{k=1}^{N}\\left(\\widehat{hc_{k}}\\log\\left(hc_{k}\\right)+\\left(1-\\widehat{hc_{k}}\\right)\\log\\left(1-hc_{k}\\right)\\right) \\tag{7}\\]

where \\(hc_{k}\\) is the predicted result and \\(\\widehat{hc_{k}}\\) is the real label.
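As an illustration of the two targets above, the short sketch below rescales a normalized direction vector back to pixel units, as in (6), and evaluates the bow BCE term of (7); the helper names and the toy numbers are our own assumptions.

```python
import torch
import torch.nn.functional as F

def decode_vector(v_norm, long_side):
    """Map a vector predicted in [0, 1] back to pixel units: (2v - 1) * l / 2."""
    return (2.0 * v_norm - 1.0) * long_side / 2.0

def bow_bce(pred_prob, is_bow):
    """BCE term of (7) at the two endpoints; pred_prob in (0, 1), is_bow in {0, 1}."""
    return F.binary_cross_entropy(pred_prob, is_bow)

# Toy example: a ship with long side 80 px and predicted normalized vector (0.9, 0.3).
v = torch.tensor([0.9, 0.3])
print(decode_vector(v, long_side=80.0))                               # tensor([ 32., -16.])
print(bow_bce(torch.tensor([0.8, 0.1]), torch.tensor([1.0, 0.0])))    # small loss: confident bow/stern
```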
### _Prediction in Test Stage_

We extract the top-\\(k\\) coordinates from the predicted heatmaps and then add the corresponding offsets, as shown in (8), to obtain the \\(k\\) keypoint coordinates. We use a relatively large value of \\(k\\), such as 500, to perform an initial selection in each patch. This ensures comprehensive coverage of ship candidates, even in scenarios where the number of ships is high. Next, we refine the selection by considering the confidence scores associated with the detected ships. As shown in Fig. 6(a), the keypoint heatmap is superimposed on the original image, and the positions of the keypoint heatmap correspond closely to the positions of the ships

\\[\\left\\{\\left(x_{i}+x_{i\\,\\text{off}},\\ y_{i}+y_{i\\,\\text{off}}\\right)\\mid i=1,2,3,\\ldots,k\\right\\}. \\tag{8}\\]

An image contains multiple targets, and many keypoints are generated simultaneously, so it is necessary to confirm which keypoints belong to the same target. Taking the center keypoint as a reference, the predicted vector is added to the coordinates of the center point to obtain the coordinates of the two endpoints, which are then compared with the coordinates of the two endpoints extracted from the heatmap. If the Euclidean distance between them is less than a certain threshold, they are considered to belong to the same target, as in (9)-(11), where \\((Cx,Cy)\\) are the coordinates of the central Keypoint \\(C\\), \\((Tx,Ty)\\) are the coordinates of the upper Keypoint \\(T\\), and \\((Bx,By)\\) are the coordinates of the lower Keypoint \\(B\\). The vector from the central keypoint to the upper endpoint is \\(Vct\\), the vector to the lower endpoint is \\(Vcb\\), and \\(s_{k}\\) is the threshold. The keypoint matching is shown in Fig. 6(b)

\\[Vct=(2v_{tk}-1)\\frac{l_{k}}{2} \\tag{9}\\]

\\[Vcb=(2v_{bk}-1)\\frac{l_{k}}{2} \\tag{10}\\]

\\[\\begin{cases}d\\left(C+Vct,T\\right)\\leq s_{k}\\\\ d\\left(C+Vcb,B\\right)\\leq s_{k}.\\end{cases} \\tag{11}\\]

Only the coordinates of the center point and one endpoint, plus the length of the short side of the ship, are required to obtain the coordinates of the OBB. If a center point matches the upper and lower keypoints simultaneously, two OBBs appear at the same time. The OBB score is the average of the center point and endpoint scores. Finally, NMS is used to remove boxes with an excessive overlap rate. Fig. 6(c) shows the OBB calculated using the keypoint coordinates and the length of the short side. The upper and lower keypoints set by this algorithm are only related to the spatial position of the center point and do not contain category information, so the method decouples target detection from bow classification. The two Keypoints \\(T\\) and \\(B\\) match the spatial positions of the bow and stern, which is beneficial for extracting the features of the head and tail. Two classifiers are used at the two endpoints to predict whether a point is the bow. When the center point matches only one endpoint and the classification score of this point is greater than 0.5, the point is the bow; otherwise, it is the stern. When both endpoints are matched simultaneously, the scores of the two keypoints are compared, and the classification score of the point with the larger score is used to determine the position of the bow. In Fig. 6(d), the yellow circle represents the ship's bow.

Fig. 6: Prediction in the test stage. (a) Keypoint heatmap. (b) Keypoint matching. (c) Rotating box generation. (When two boxes are generated, the one with the lower score can be removed using NMS.) (d) Bow classification.

### _Bow Classification Analysis Model_

We establish a simple bow classification analysis model, as shown in (12). \\(h_{t}\\) represents the bow classification score at Keypoint \\(T\\), \\(h_{b}\\) represents the bow classification score at Keypoint \\(B\\), and \\(h_{u}\\) represents the uncertainty of the bow classification; a value closer to zero indicates greater uncertainty

\\[h_{u}=\\left|h_{t}-h_{b}\\right|. \\tag{12}\\]
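To make the test-stage decoding concrete, here is a compact sketch that applies the distance test of (11) to match a center keypoint with an endpoint, builds the OBB corners from the center, the matched endpoint, and the short side, and computes the uncertainty measure of (12). It is a simplified single-ship illustration with our own helper names, not the authors' released code.

```python
import numpy as np

def match_endpoint(center, vec, endpoint, s_k):
    """Distance test of (11): does center + vec land within s_k of the heatmap endpoint?"""
    return np.linalg.norm(center + vec - endpoint) <= s_k

def obb_corners(center, endpoint, short_side):
    """Four corners of the OBB from the center, one matched endpoint, and the short side."""
    axis = endpoint - center                      # half of the long side, oriented
    normal = np.array([-axis[1], axis[0]])
    normal = normal / (np.linalg.norm(normal) + 1e-9) * (short_side / 2.0)
    return np.stack([endpoint + normal, endpoint - normal,
                     2 * center - endpoint - normal, 2 * center - endpoint + normal])

def bow_uncertainty(h_t, h_b):
    """Uncertainty of (12); values near zero indicate an ambiguous bow/stern decision."""
    return abs(h_t - h_b)

# Toy example: center at (100, 100), predicted vector (0, -40), heatmap endpoint (100, 61).
c = np.array([100.0, 100.0])
t = np.array([100.0, 61.0])
if match_endpoint(c, np.array([0.0, -40.0]), t, s_k=5.0):
    print(obb_corners(c, t, short_side=20.0))
print(bow_uncertainty(0.9, 0.2))                  # 0.7 -> relatively confident bow decision
```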
## III Experiment

### _Data_

The data used in this study were obtained from the RSDD-SAR [42] and FUSAR-ship [43] datasets. RSDD-SAR, which includes Gaofen-3 and TerraSAR-X satellite data, covers multiple imaging modes, polarization modes, and resolutions. The FUSAR-ship high-resolution ship dataset contains 15 main ship classes. Its data slices were taken from 126 GF-3 remote-sensing images; the polarization modes include HH and VV, the resolution is 1.124 m \\(\\times\\) 1.728 m, and the imaging mode is the UFS mode, covering various sea, land, coast, river, and island scenes. A total of 1237 data slices were selected from the RSDD-SAR dataset, and 533 data slices were selected from the FUSAR-ship dataset. We selected 80% of the data from the two subsets as the training set and 20% as the test set. We relabeled the selected data using four corner points and four head points for labeling purposes (see Fig. 7).

### _Experimental Details_

Our deep-learning framework was based on PyTorch 1.6, and the GPU was an NVIDIA GTX 1080Ti. The training and testing image sizes were 512 \\(\\times\\) 512 pixels. The images were randomly rotated and flipped during training. The initial learning rate was set to 0.000125, the optimizer was Adam, the learning rate decay strategy was exponential decay, and training was performed for 200 epochs.

### _Comparison of OBB Ship Detection Algorithms_

In this section, the OBB detection performance of the proposed algorithm is compared with that of five other algorithms: gliding vertex [28], oriented RCNN [45], R3Det [46], YOLOv5 \\(+\\) CSL [29], and faster R-CNN (OBB) [15]. For a fair comparison, all algorithms except the YOLOv5 \\(+\\) CSL model, including the algorithm proposed in this study, used ResNet50 as the backbone network for feature extraction. The evaluation indicators were precision (Precision), recall (Recall), \\(F1\\)-score, frames per second (FPS), and average precision (AP), where AP50 denotes the average precision when the intersection over union (IoU) threshold is 0.5. As shown in Table III, the proposed algorithm obtained the best results for the \\(F1\\)-score and AP, which were 0.979 and 0.908, respectively. The R3Det algorithm had the highest recall but a lower precision, resulting in the lowest \\(F1\\)-score. The AP values of the R3Det and oriented RCNN algorithms were second only to that of the proposed algorithm. The fastest algorithm was YOLOv5 \\(+\\) CSL, which reached 50 FPS, and the speeds of the other algorithms did not differ significantly. The slowest algorithm was R3Det because it uses multiple refinement stages to improve accuracy. It can be observed in Fig. 12 that the gliding vertex, oriented RCNN, and faster R-CNN (OBB) algorithms produced more false alarms near the shore, whereas R3Det and YOLOv5 \\(+\\) CSL produced more missed detections. Our proposed method showed better performance near the shore. In the third row, the oriented RCNN algorithm missed small targets, while the other algorithms detected them. In the fourth row, oriented RCNN and YOLOv5 \\(+\\) CSL missed the target, and the rotated box obtained using the gliding vertex algorithm did not correctly surround the target. Although the faster R-CNN (OBB) algorithm detected the correct target, it also detected the noise generated by the ship's motion. R3Det and the proposed algorithm correctly detected the objects, and the proposed algorithm obtained bounding boxes that were more consistent with the ground truth.
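As a reminder of how the scalar indicators quoted in this comparison relate to each other, the snippet below computes precision, recall, and \\(F1\\)-score from true-positive, false-positive, and false-negative counts at a fixed IoU threshold; the counts are made-up numbers for illustration, not results from the paper.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1-score from detection counts at a fixed IoU threshold."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative counts only (not taken from the paper's experiments).
print(precision_recall_f1(tp=980, fp=15, fn=25))
```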
### _Bow Discriminant Analysis_

Table IV compares three methods that rely on keypoints for bow differentiation. The \\(C+\\) head method, such as CHPDet [47], is an anchor-free method that uses the center point and head point to determine the OBB and the bow. \\(C+T+B\\) uses the center point and the upper and lower points to determine the OBB; it then compares the scores of the upper and lower keypoints, and the bow classifier at the point with the larger score determines whether that point is the bow. The \\(T+B\\) method uses only the upper and lower endpoints for detection, and its bow classification strategy is consistent with that of \\(C+T+B\\). The \\(C+T+B\\) method achieved the highest accuracy of bow differentiation, where the bow classification accuracy is the ratio of the number of objects correctly classified as bows to the total number of objects. The \\(T+B\\) scheme was lower than the \\(C+T+B\\) scheme for every accuracy index. The \\(C+\\) head scheme had the highest accuracy rate but a low recall rate, and its \\(F1\\)-score was not as high as that of the \\(C+T+B\\) scheme. The main defect of the \\(C+\\) head scheme is that when the model cannot identify the bow of a ship, the target itself is not recognized, resulting in missed detections. The proposed method divides bow differentiation and ship detection into two stages so that the bow discrimination result does not affect the detection result. Fig. 13 shows the OBB detection results of SAR ships with bow marks.

Fig. 9: Key components needed for differentiating between the bow and stern.

Fig. 10: Distribution of bow angles in the training set and test set. (a) Number distribution of ships from different angles in the training set. (b) Number distribution of ships from different angles in the test set.

Fig. 11: OBB width and height distribution.

Fig. 12: Visualization comparison of different methods.

Fig. 13: TKP-net detection results. The yellow circle represents the bow.

Deep learning models that rely on a single categorical output to make decisions may suffer from overconfidence [48]. In many remote-sensing applications, it is critical to estimate prediction uncertainty. In most cases, the proposed method predicts two bow classification results simultaneously, and two results were obtained for 82% of the test samples. The two classification results are used jointly to determine whether a point is the bow of the ship. Owing to the influence of aleatoric and epistemic uncertainties, the two classification results sometimes contradict each other, and this contradiction can be exploited to reduce unreliable decisions to a certain extent. Table V presents the proportions of incorrectly and correctly classified samples in the bow classification test set under different thresholds. As the threshold increases, the proportion of misclassified samples that are flagged far exceeds the proportion of correctly classified samples that are flagged. If higher reliability of the bow classification is required, a large threshold can be set. With a threshold of 0.9, our method labels 61.5% of the wrong samples as low-confidence samples, while only 4.6% of the correctly classified samples are affected. This approach provides an analytical tool for decision reliability.
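The reliability screening described around Table V can be summarized in a few lines: samples whose two bow scores are too close (small \\(h_{u}\\)) are flagged as low confidence. The sketch below is an illustration with synthetic scores, not the paper's evaluation script.

```python
import numpy as np

def flag_low_confidence(h_t, h_b, threshold):
    """Flag samples whose bow scores at T and B disagree by less than `threshold` (h_u = |h_t - h_b|)."""
    h_u = np.abs(np.asarray(h_t) - np.asarray(h_b))
    return h_u < threshold

# Synthetic bow scores for six ships (score at T, score at B).
h_t = np.array([0.98, 0.60, 0.55, 0.97, 0.48, 0.05])
h_b = np.array([0.03, 0.55, 0.50, 0.02, 0.52, 0.99])
print(flag_low_confidence(h_t, h_b, threshold=0.9))
# -> [False  True  True False  True False]: the ambiguous decisions are screened out
```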
### _Ship Detection in Large-Scale SAR Images_

Large-scale SAR image testing can be used to validate and evaluate the performance of ship detection algorithms in real-world scenarios. Utilizing large-scale SAR images for testing enables a more accurate assessment of algorithm performance in handling complex backgrounds, multiple targets, occlusions, and other challenging situations. We tested on two large-scale SAR images. As shown in Fig. 14(a), ship detection produced some false alarms on land or islands. As shown in Fig. 14(b), the detection result for large-scale SAR ships on the sea surface was better when there was no interference from land or islands. We selected ships with wakes in the image to verify the accuracy of our bow classification. As shown in Fig. 14(c), the direction of most ships was consistent with the direction verified by the ship wakes. This proves to a certain extent that our method performs well in extracting ship direction from large-scale SAR images.

Fig. 14: Results in large-scale SAR images. (a) Large-scale SAR image with islands. (b) Sea surface SAR image without islands. (c) Bow extraction result with wake verification. The red circle represents the wrong classification result.

### _Ship Detection and Bow Discrimination in Dense Ports_

This study addresses the task of ship detection and bow discrimination in dense port environments as one aspect of our broader research focus. We conducted an experiment to evaluate the proposed method, and the obtained performance metrics are listed in Table VI. It can be seen from Table VI that the performance of ship detection and bow discrimination in dense ports declines. Ship detection in dense port areas faces challenges such as high ship density, background interference, and ship diversity. A large number of ships are usually berthing in or passing through the port area, and the distances between ships are relatively small, which increases overlapping and occlusion. In addition, various objects and structures in the port area, such as docks, buildings, cranes, and stacked cargo, generate considerable interference and occlusion, making ship detection more difficult. Moreover, ships of various types and sizes, including cargo ships, passenger ships, and fishing boats, frequently moor in and pass through the port area, which adds to the complexity of ship classification and identification. As shown in Fig. 15(a), when the ships docked at the port are not densely arranged, the detection effect is better. As shown in Fig. 15(b), missed detections occur when ships are densely arranged. Therefore, our proposed method suffers from some limitations in dense port areas. Our proposed method exhibits an average inference time of 73.6 s for large-scale SAR images; the image dimensions in question are 15 668 \\(\\times\\) 21 725 pixels. These values represent the computational time our model requires when processing such large-scale SAR images.

Fig. 15: Ship detection and bow extraction in dense port areas. (a) Ships docked on the shore alone. (b) Ships densely packed together.

### _Accuracy of Direction for Different Types of Ships_

We selected data with class labels from the test set to test the accuracy of bow classification. It can be seen from Table VII that the classification accuracy of the bow is the highest for oil tankers and the lowest for fishing boats. It can be seen from Fig. 16 that the objects whose bows are classified incorrectly are generally relatively small. Because the scattering structure of a small target is not clear enough, the amount of information it provides is also small.
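The experiments above use 512 \\(\\times\\) 512 patches for training and testing but whole scenes of roughly 15 668 \\(\\times\\) 21 725 pixels for large-scale inference. One plausible way to bridge the two, sketched below, is to slide a window over the scene and shift the per-patch detections back to scene coordinates; the tiling scheme, overlap value, and function names are our assumptions, not details given by the authors.

```python
import numpy as np

def sliding_windows(height, width, patch=512, overlap=64):
    """Yield top-left corners of overlapping patches covering a large SAR scene."""
    step = patch - overlap
    ys = list(range(0, max(height - patch, 0) + 1, step))
    xs = list(range(0, max(width - patch, 0) + 1, step))
    if ys[-1] != height - patch:       # make sure the bottom border is covered
        ys.append(max(height - patch, 0))
    if xs[-1] != width - patch:        # make sure the right border is covered
        xs.append(max(width - patch, 0))
    for y in ys:
        for x in xs:
            yield y, x

def detect_large_scene(image, detector, patch=512, overlap=64):
    """Run a patch-level detector over the scene and shift detections to scene coordinates."""
    detections = []
    for y, x in sliding_windows(*image.shape[:2], patch, overlap):
        tile = image[y:y + patch, x:x + patch]
        for det in detector(tile):                       # det: dict with OBB corner points
            corners = np.asarray(det["corners"]) + np.array([x, y])
            detections.append({**det, "corners": corners})
    return detections                                     # a global NMS pass would follow here

# Toy usage with a dummy detector that never fires.
scene = np.zeros((2048, 2048), dtype=np.float32)
print(len(detect_large_scene(scene, detector=lambda tile: [])))
```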
## IV Discussion

The method we proposed for ship detection and bow classification has numerous advantages and innovations in the field. However, we also acknowledge certain limitations that need to be considered and addressed in further research and applications.

First, our algorithm requires high-resolution SAR data. Lower resolution SAR data may result in decreased algorithm performance, affecting the accuracy of ship detection and recognition. Therefore, our algorithm may be constrained by data resolution limitations in certain data sources or application scenarios. Second, our algorithm has certain requirements regarding the clarity of ship scattering structures. If the scattering structures of ships are too blurry or indistinct, the accuracy of the algorithm may be compromised. Polarization information provides valuable insights into the scattering behavior of objects, including ships, and helps enhance the detection and characterization of ships in SAR images [49], [50]. In the future, we can consider incorporating polarization information into our method to improve its performance. In addition, our algorithm's performance may decline when dealing with densely packed port areas. This is due to potential occlusion, overlap, or interference between ships in such congested port regions. In these cases, our algorithm may produce false detections or missed detections, requiring further improvements to enhance its capability in handling dense port areas. To address the aforementioned limitations, we can explore the following avenues for improvement.

1. Investigating techniques to adapt to low-resolution SAR data, such as image enhancement or multiscale processing methods, to enhance the algorithm's robustness.

2. Conducting further research on feature extraction and model design for ship scattering structures to address the challenges posed by blurry scattering structures.

3. Exploring the utilization of prior knowledge or other sensor data to assist ship detection and recognition in dense port areas.

It is important to emphasize that our algorithm demonstrates good performance and robustness in most cases. However, further research and improvements are required to address the aforementioned limitations. An honest and comprehensive description of these limitations is crucial for readers to gain an accurate understanding of our research work and to provide guidance and inspiration for future studies. In future research, we will strive to overcome these limitations and propose corresponding improvement strategies to further enhance the performance and applicability of our algorithm.

## V Conclusion

In this study, a method for obtaining various kinds of ship information, including position, length, width, and direction, from high-resolution SAR images was proposed. A ship detection method for arbitrary directions based on three keypoints and without an anchor box was presented. In the first step of the method, the angle-prediction problem of the rotated box was converted into an estimation and matching problem for the keypoint positions to determine the rotated box. In the second step, bow discrimination was performed using classifiers placed at two keypoints. The experimental results show that our method achieves good performance, with an AP of 90.8%, an \\(F1\\)-score of 97.9%, and an accuracy of 92.5% for bow classification in nondense port area ship detection. We also used the difference between the information extracted from the two keypoints to establish a simple and effective uncertainty analysis method.

Fig. 16: Bow extraction results of different types of ships. The green circle is the ground truth label, and the yellow circle is the predicted bow coordinates.

[MISSING_PAGE_POST]

* [48] K. Ristovski, S. Vucetic, and Z. Obradovic, "Uncertainty analysis of neural-network-based aerosol retrieval," _IEEE Trans. Geosci. Remote Sens._, vol. 50, no. 2, pp. 409-414, Feb. 2012.
* [49] R. L. Paes, F. Nunziata, and M. Migliaccio, "On the capability of hybrid-polarity features to observe metallic targets at sea," _IEEE J. Ocean. Eng._, vol. 41, no. 2, pp. 346-361, Apr. 2016.
* [50] M. Adil, F. Nunziata, A. Buono, D. Velotto, and M. Migliaccio, "Polarimetric scattering by a vessel at different incidence angles," _IEEE Geosci. Remote Sens. Lett._, vol. 20, Aug. 2023, Art. no. 4008605.

Xinnan Li received the B.S. degree in geographic information system from Ludong University, Yantai, China, in 2016, and the M.S. degree in physical oceanography from the Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou, China, in 2020. He is currently working toward the Ph.D. degree in multi-source remote sensing ship detection and recognition with Ocean College, Zhejiang University, Zhoushan, China. His research interests include ocean microwave remote sensing, deep learning, and image processing.

Peng Chen was born in Hunan, China. He received the Ph.D. degree in geographic information system from Zhejiang University, Hangzhou, China, in 2011. He is a Senior Engineer in marine remote sensing with the State Key Laboratory of Satellite Ocean Environment Dynamics, Hangzhou, China, working on the development of algorithms for detecting marine targets (ship, oil rig, and oil slick).

Jingsong Yang received the B.S. degree in physics and the M.S. degree in theoretical physics from Zhejiang University, Hangzhou, China, in 1990 and 1996, respectively, and the Ph.D. degree in physical oceanography from the Ocean University of China, Qingdao, China, in 2001. Since 1996, he has been with the Second Institute of Oceanography (SIO), Ministry of Natural Resources, Hangzhou, China, where he is the Head of Microwave Marine Remote Sensing, State Key Laboratory of Satellite Ocean Environment Dynamics. Since 2002, he has been a Supervisor of graduate students with SIO and, since 2011, an Adjunct Professor and a Doctoral Supervisor with Zhejiang University. He has more than 20 years of experience in microwave marine remote sensing. He has been a Principal Investigator and a participant of more than 20 research projects and has authored or coauthored more than 100 scientific articles in peer-reviewed journals and international conference proceedings. His research interests include microwave marine remote sensing, data fusion, image processing, and satellite oceanography.

Wentao An (Member, IEEE) received the B.S. degree in communication engineering from Nankai University, Tianjin, China, in 2003, and the Ph.D. degree in electronic engineering from Tsinghua University, Beijing, China, in 2010. He is currently an Associate Researcher with the Department of Systematic Engineering, National Satellite Ocean Application Service, Beijing, China. His research interests include polarimetric synthetic aperture radar data processing and target detection with SAR imagery.

Gang Zheng (Senior Member, IEEE) received the B.Eng. degree in electronic information engineering from Zhejiang University, Hangzhou, China, in 2003, and the M.S. and Ph.D. degrees in radio physics from the University of Electronic Science and Technology of China, Chengdu, China, in 2006 and 2010, respectively. From 2010 to 2013, he was an Assistant Researcher with the State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou, China, where he was an Associate Researcher from 2013 to 2020 and has been a Researcher since 2020.
His research interests include ocean microwave remote sensing, artificial intelligence (AI) applications, image processing, and electromagnetic numerical modeling. Dr. Zheng is an Editorial Board Member of the ocean section of _Remote Sensing_ and a Topic Editor for _Big Earth Data_. From 2018 to 2020, he also served as a Guest Editor for the _Remote Sensing_ special issues on AI-Based Remote Sensing Oceanography, Synergy of Remote Sensing and Modelling Techniques for Ocean Studies, and Tropical Cyclones Remote Sensing and Data Assimilation.

Dan Luo is currently working toward the Ph.D. degree in AIS anomaly trajectory detection and classification with the College of Oceanology, Zhejiang University, Hangzhou, China. His research interests include remote sensing image processing and AIS data analysis.

Aiying Lu received the B.S. degree in geoinformation science and technology from the Ocean University of China, Qingdao, China, in 2021. She is currently working toward the M.S. degree in physical oceanography with the Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou, China. Her research interests include hyperspectral remote sensing and maritime target recognition.

Zimu Wang received the B.S. degree in remote sensing from the School of Remote Sensing Information Engineering, Wuhan University, Wuhan, China, in 2022. He is currently working toward the M.S. degree in multi-source remote sensing maritime target and environment sensing technology with the Second Institute of Oceanography, Ministry of Natural Resources, Hangzhou, China. His research interests include ocean microwave imaging and object detection.
Remote-sensing ship monitoring is a crucial area of research with key applications in military and civilian fields. The ability to extract information such as ship length, width, and heading from remote-sensing data, particularly from synthetic aperture radar (SAR) images, is of paramount importance. Current state-of-the-art SAR image ship monitoring focuses primarily on ship detection. Assessing the direction of ships usually relies on the observability of wake features. However, the observability of these wake features is often affected by factors such as the SAR system parameters, ship attributes, and dynamic marine environments, which can make accurate direction assessment a challenging task. In response to these challenges, this study presents a novel and effective algorithm for ship monitoring from SAR images based on an anchor-free framework and the powerful feature extraction capabilities of convolutional neural networks. The proposed method learns the scattering and morphological information of a ship's bow and stern from high-resolution SAR images to determine the ship's direction with a high level of accuracy using a rotated bounding box. The algorithm was tested on a dataset, achieving an average precision of 90.8% and a bow classification accuracy of 92.5%, demonstrating its potential contribution to the advancement of remote sensing.

Index Terms: Convolutional neural network (CNN), keypoint, ship detection, ship head classification, synthetic aperture radar (SAR).
A Hyperspectral Image Classification Method Based on Weight Wavelet Kernel Joint Sparse Representation Ensemble and \\(\\beta\\)-Whale Optimization Algorithm Mingwei Wang\\({}^{\\text{\\textcircled{C}}}\\), Zitong Jia, Jianwei Luo\\({}^{\\text{\\textcircled{C}}}\\), Maolin Chen\\({}^{\\text{\\textcircled{C}}}\\), Shuping Wang, and Zhiwei Ye This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/) ## I Introduction In recent years, hyperspectral remote sensing sensors have been applied to collect images with enough spectral resolution that contains hundreds of bands and allows the discrimination of objects with similar attributes [1]. A hyperspectral image (HSI) has been considered as an applicable tool for Earth observation because of its ability to obtain independent and continuous bands, analyze information from visible to near-infrared wavelength ranges, and supply multiple features from the fixed wavelength. It provides abundant spectral information and has a huge potential for the interpretation of different ground objects [2, 3]. As a result, the analysis of HSI has become a subject of research interest in remote sensing, which has been applied in a series of fields such as quantitative analysis [4], environmental monitoring [5], and land-cover mapping [6]. In addition, image classification is a significant step in identifying object types on the Earth's surface, and HSI classification aims to distinguish each sample into a discrete group of specific category labels [7, 8]. Existing HSI classification techniques are separated into two scopes: unsupervised and supervised [9]. For unsupervised techniques, fuzzy clustering [10], rough set [11], and iterative self-organizing data analysis technique algorithm [12] have been utilized to classify HSI samples. In these techniques, the process of classification is only based on the characteristics of feature values, and the misclassification is obvious as spectral characteristics are similar for different objects. For supervised techniques, active learning [13], random forest [14], and support vector machine (SVM) [15] have been utilized to obtain the category label of each pixel. Although these classifiers make full use of spectrum difference, the category label of the current pixel is usually impacted by the feature values on the neighbor. Therefore, several ideas are presented to synthesize the spatial and spectral characteristics of HSIs, and they are based on the hypothesis that samples within a local space have approximate spectral characteristics and express the same objects [16, 17]. In addition, HSI classification based on a deep learning model has been proposed to sufficiently synthesize spatial and spectral information, thus obtaining the category label of each pixel, but it is supported by the sufficient number of training samples and the sufficient amount of iterations, which is time-consuming as the data dimension increases [18]. As a well-behaved supervised classification model, sparse representation (SR) is used to recover the original data and report class discriminative information, which has been widely used in the field of pattern recognition [19]. In addition, joint representation is presented to promote the stability of SR and boost its capability [20]. 
For HSI classification, samples with the same category are theoretically located in a low-dimensional subspace, and joint SR (JSR) makes associative decision on neighbor pixels as to which are feasible and particularly suitable for HSIs [21]. For the classification process, a testing sample is similarly expressed by a certain number of rules from the training dictionary, and the reconstructed matrix is utilized to determine the category label by searching for the minimum [22]. For instance, Peng _et al._[23] designed a local adaptive JSR (LAJSR) technique for HSI classification; the dictionary construction and SR phases were improved by choosing representative rules from an additional dictionary. Tu _et al._[24] proposed an HSI classification approach based on the balance of JSR and correlation coefficient (JSR-CC), which synthetically considered both local spatial and spectral similarities. Furthermore, the reconstructed matrix is usually computed by a linear kernel, making it difficult to reflect the inner product of nonlinear mapping between input spectral features and output category labels. Hence, Zhang _et al._[25] proposed a novel HSI classification technique using JSR and nonlinear kernel extension, which mapped the input into a high-dimensional space to separate different objects and reflect better performance than that using the linear kernel. However, the category label is determined by kernel computing of higher order polynomial; the misclassification for specific categories will be enlarged if the order is uncertain within effective time. The category label is obtained by the probability of kernel computing for JSR, and it is the same as other nonlinear classifiers in mechanism, such as k-nearest neighbor (KNN) and SVM. Moreover, the wavelet function is a series of formulations that are based on wavelet analysis and adequately keeps regularity and orthogonality; it has been employed in the field of HSI classification as the kernel of KNN and SVM to substitute for a linear kernel [26, 27]. As a result, the wavelet function is able to act as the kernel of JSR in theory. Ensemble learning is a machine learning paradigm that synthesizes multiple subclassifies to solve the same problem; better discrimination ability is obtained than the single classifier according to different emphases of subclassifiers especially for indeterminate objects and has been applied for HSI classification [28, 29]. However, the category label is usually obtained by the voting strategy for ensemble learning; the discrimination is confused if the votes are similar for two categories. As for JSR, the category label is assigned by searching for the minimum of reconstructed error for each sample, and the reconstructed matrix of ensemble learning can be updated by that of subclassifiers with weight setting. A higher weight means that the subclassifier produces more contribution for classification, and a suitable weight setting is able to balance the reconstructed error of subclassifiers [30]. In general, how to obtain the optimal weight of subclassifiers is seen as a combination optimization problem, and it can be solved by the swarm intelligence algorithm with heuristic search guiding strategies [31]. Among them, the whale optimization algorithm (WOA) is a newly proposed swarm intelligence algorithm and has been widely used in diverse applications especially for weight optimization [32, 33]. 
However, the convergence rate is not fast enough because of the fixed population updating equation and the small probability of local search. Recently, the factorial function with a single parameter has been combined with swarm intelligence algorithms to enhance the exploration phase, but it is not adapted to various population updating conditions, such as the WOA with multiple parameters [34, 35]. Here, the \\(\\beta\\) function is combined with the WOA, its two parameters correspond to the two evolution processes, and the weight is adaptively located in the range of [0,1]. Therefore, an HSI classification technique based on the weight wavelet kernel JSR ensemble (W\\({}^{2}\\) JSRE) model and the \\(\\beta\\)-WOA is proposed to conduct pixel-level classification of HSIs. Because the spectral features are output with 16-bit quantization, the discrimination between different categories is not significant, and misclassification is obvious when the dataset is mapped with a linear kernel. The classification accuracy is improved when the wavelet function acts as the kernel of KNN and SVM; the dataset is then mapped into quadratic, exponential, and trigonometric functions of different types, which has been utilized in the field of HSI classification, but the wavelet function has not been used as the kernel of JSR in previous work. In addition, a series of subclassifiers based on JSR with wavelet kernels is integrated by ensemble learning; the wavelet kernel of each JSR concerns the homogeneity of that subclassifier, whereas the ensemble with multiple wavelet kernels emphasizes the heterogeneity. Furthermore, swarm intelligence algorithms are widely used to solve nonpolynomial hard problems such as weight optimization, so the \\(\\beta\\)-WOA is designed to obtain the optimal weights of the subclassifiers, and the category label is output by minimizing the total reconstructed error of the ensemble. The main contributions of this article are summarized as follows.

1. To improve the scale of mapping, the wavelet function is used as the kernel of JSR, and the HSI dataset is mapped into quadratic, exponential, and trigonometric functions of different types.

2. To synthesize the homogeneity and heterogeneity of the JSR ensemble, the W\\({}^{2}\\) JSRE model is proposed by using different types of wavelet function as the kernel, and the classification map is output at the pixel level.

3. To balance the reconstructed errors of the subclassifiers, weight setting is conducted for ensemble learning, and the category label is obtained by total reconstructed error minimization.

4. To enhance the exploration phase of the WOA, the \\(\\beta\\)-WOA is designed by fusing the \\(\\beta\\) function into the two evolution processes of the WOA, and the optimal weights of the subclassifiers are obtained.

The overall construction of this article is as follows. Section II describes the related work on JSR and the WOA. Section III illustrates the principle of the proposed W\\({}^{2}\\) JSRE model and the \\(\\beta\\)-WOA and the fundamental process of HSI classification. Section IV analyzes the experimental results and extends the discussion with data statistics and visual analysis. Finally, Section V concludes this article.

## II Related Work

### _Basic Theory of JSR_

JSR is devoted to minimizing the reconstructed error of several independent SRs, and the inner correlations between different SRs are synthetically considered.
In the HSI, spectral characteristics of a pixel are strongly correlated with its neighbor pixels, which means that they belong to the same object with large probability, and the spatial correlations are ensured by supposing that neighbor pixels within a local space are jointly indicated by some common-sense rules from a training dictionary [36]. In particular, the size of local space at center pixel \\(y_{t}\\) is signed by \\(l\\times l\\), and pixels within such a space are marked by \\(y_{i}\\), where \\(i=1,2,\\ldots,l\\times l\\). All of the above pixels are stacked into a matrix \\(Y=[y_{1},y_{2},\\ldots,y_{t},\\ldots,y_{l\\times l}]\\in R^{b\\times l^{2}}\\). The matrix is succinctly represented as follows: \\[Y =[y_{1},y_{2},\\ldots y_{t},\\ldots,y_{l\\times l}]\\!=\\![D\\alpha_{1 },D\\alpha_{2},\\ldots D\\alpha_{t},\\ldots,D\\alpha_{l\\times l}]\\] \\[=D[\\alpha_{1},\\alpha_{2},\\ldots\\alpha_{t},\\ldots,\\alpha_{l \\times l}]=DA \\tag{1}\\] where \\(A=[\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{t},\\ldots,\\alpha_{l\\times l}]\\in R^{b \\times l^{2}}\\) is the recovered data with regard to \\(Y\\). The selected rules in \\(D\\) are assigned by rows and columns of elements that are not equal to 0 in \\([\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{t},\\ldots,\\alpha_{l\\times l}]\\), by setting part of rows as the value of 0 on the reconstructed matrix \\(A\\). The neighbor pixel \\(Y\\) is expressed by a subset of common-sense rules. Afterward, the matrix is recovered by seeking the equation to represent the following optimization problem: \\[\\hat{A}=\\text{arg}\\min_{A}\\|Y-DA\\|_{F}\\ \\ \\ \\text{s.t.}\\|A\\|_{\\text{row},0}\\leq K \\tag{2}\\] where \\(\\|A\\|_{\\text{row},0}\\) is the joint sparse norm that finds the most representative nonzero rows in \\(A\\), and \\(|\\cdot|_{F}\\) is the Frobenius norm. As \\(\\hat{A}\\) is recovered, the category label at the center pixel \\(y_{t}\\) is judged by the reconstructed error that is defined as follows: \\[\\text{label}(y_{t})=\\text{arg}\\min r(y)=\\text{arg}\\min_{i=1,2,\\ldots,c}\\|Y-D_ {i}\\hat{A}_{i}\\|_{2} \\tag{3}\\] where \\(\\hat{A}_{i}\\) indicates the rows in \\(\\hat{A}\\) associated with the category index of \\(i\\). ### _Mathematical Model of WOA_ In 2016, Mirjalili designed a swarm intelligence algorithm called WOA that is based on the predatory strategy of humpback whales. Humpback whales tend to catch crowd of krill or small fishes near the surface. The process is conducted by producing specific bubbles with a ring path, and the operator is separated into three parts: encircling prey, spiral bubble-net attacking, and searching for prey. The main procedure for the WOA is depicted as follows [37]: _Encircling prey_: Humpback whales have the ability to search for the position of prey and surround them, and the mechanism of global search is represented by the process. It is assumed that the position of optimal solution is the objective prey or it is the proximate solution moving close to the optimum in theory, and others should endeavor to motivate their positions toward to it. The process is written as follows: \\[\\vec{S}=|\\vec{C}\\cdot X^{*}(t)-X(t)| \\tag{4}\\] \\[X(t+1)=X^{*}(t)-\\vec{A}\\cdot\\vec{S} \\tag{5}\\] where \\(t\\) is the number of current iterations, \\(X^{*}(t)\\) is the position of prey, and \\(X(t)\\) and \\(X(t+1)\\), respectively, represent the position of humpback whales in the current and the next procedure. 
\\(\\vec{A}\\) and \\(\\vec{C}\\) are the variable vectors that are expressed as \\(\\vec{A}=2\\vec{a}\\cdot\\vec{r}-\\vec{a}\\) and \\(\\vec{C}=2\\cdot\\vec{r}\\), \\(\\vec{a}=2-2*t/T\\) is gradually decreased within the scope of [2,0], \\(T\\) is the maximum number of iteration, and \\(\\vec{r}\\) is a random number on the range of [0,1]. _Bubble-net attacking:_ Each humpback whale moves close to the prey within a compact ring and follows a spiral-shaped path in the meantime, and the mechanism of local search is represented by the process. A probability of 0.5 is set to choose whether following the compact ring or spiral mechanism, the position of humpback whale is renewed. The formulation of the process is expressed as follows: \\[X(t+1)=\\begin{cases}X^{*}(t)-\\vec{A}\\cdot\\vec{S},&\\text{if}\\ \\ \\ p<0.5\\\\ \\vec{S}^{\\prime}\\cdot e^{bl}\\cdot\\text{cos}(2\\pi l)+X^{*}(t),&\\text{if}\\ \\ \\ p \\geq 0.5\\end{cases} \\tag{6}\\] where \\(\\vec{S}^{\\prime}\\) is the distance of current humpback whale to prey, which is expressed as \\(\\vec{S}^{\\prime}=|X^{*}(t)-X(t)|\\), \\(b=1\\) represents a constant number that is the situation of logarithmic spiral, and \\(l\\) and \\(p\\) are two random numbers, respectively, within the scope of [\\(-\\)1,1] and [0,1]. _Searching for prey:_ The position of current humpback whale is updated according to the random walk strategy rather than the best humpback whale, the strategy of random search is reflected by the process, and the details are expressed as follows: \\[\\vec{S}=|\\vec{C}\\cdot X_{\\text{rand}}-X(t)| \\tag{7}\\] \\[X(t+1)=X_{\\text{rand}}-\\vec{A}\\cdot\\vec{S} \\tag{8}\\] where \\(X_{\\text{rand}}\\) indicates the position of a random humpback whale selected from the population. ## III Proposed Methodology ### _Classification Process With W\\({}^{2}\\) JSRE_ As for (3), the category label is determined by reconstructed error minimization, and it is computed on the same scale with the linear kernel, which makes it difficult to express the difference of feature values on multiple scales and emerge the relationship of nonlinear mapping in detail. The basic theory of wavelet analysis is to combine wavelet basis that builds an arbitrary function following the time series \\(t\\), there are five types of wavelet function that are proposed by analytical expressions with compactly supported and can be decomposed to different scales, and they are defined as follows [38, 39]: \\[f_{1}(t) =\\text{exp}(-t^{2}/2) \\tag{9}\\] \\[f_{2}(t) =(1-t^{2})\\cdot\\text{exp}(-t^{2}/2)\\] (10) \\[f_{3}(t) =\\text{cos}(1.75\\cdot t)\\cdot\\text{exp}(-t^{2}/2) \\tag{11}\\]\\[f_{4}(t) =\\frac{\\text{sin}(0.5\\pi\\cdot t)}{0.5\\pi\\cdot t}\\cdot\\text{cos}(1. 5\\pi\\cdot t) \\tag{12}\\] \\[f_{5}(t) =\\frac{e^{i4\\pi t}-e^{i2\\pi t}}{i2\\pi\\cdot t}. \\tag{13}\\] The wavelet function contents the fixed condition of shift-invariant form, it is based on the inner product of nonlinear mapping on different scales, and the difference between the original and recovered data can be represented by shift-invariant form [41]. Nowadays, the wavelet function is acted as the kernel of wavelet kernel SVM (WSVM) and wavelet kernel KNN (WKNN), and the classification result is improved as the dataset is mapped into different scales. More importantly, the dataset with ten thousands of samples is difficultly expressed by a linear kernel mapping. 
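To make the five wavelet kernels concrete, the following is a small NumPy sketch of the functions \\(f_{1}\\)-\\(f_{5}\\) in (9)-(13). The function names are our own, and \\(f_{5}\\) is complex-valued, as in the original definition.

```python
import numpy as np

def f1(t):  # Gaussian: exp(-t^2 / 2)
    return np.exp(-t ** 2 / 2)

def f2(t):  # Mexican-hat form: (1 - t^2) * exp(-t^2 / 2)
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def f3(t):  # Morlet form: cos(1.75 t) * exp(-t^2 / 2)
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2)

def f4(t):  # sin(0.5 pi t) / (0.5 pi t) * cos(1.5 pi t); np.sinc(x) = sin(pi x) / (pi x)
    return np.sinc(0.5 * t) * np.cos(1.5 * np.pi * t)

def f5(t):  # (e^{i 4 pi t} - e^{i 2 pi t}) / (i 2 pi t); undefined at t = 0
    t = np.asarray(t, dtype=np.complex128)
    return (np.exp(1j * 4 * np.pi * t) - np.exp(1j * 2 * np.pi * t)) / (1j * 2 * np.pi * t)

t = np.linspace(-3, 3, 7)
print(f1(t), f3(t), sep="\n")
```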
For JSR, the learning mechanism is the same with SVM and KNN, and the wavelet function can be acted as the kernel of JSR; the reconstructed error is defined on the basis of (3) and is expressed as follows: \\[r_{1}(y) =\\text{exp}(-\\|Y-D_{i}\\hat{A}_{i}\\|_{2}/2) \\tag{14}\\] \\[r_{2}(y) =(1-\\|Y-D_{i}\\hat{A}_{i}\\|_{2})\\cdot\\text{exp}(-\\|Y-D_{i}\\hat{A} _{i}\\|_{2}/2)\\] (15) \\[r_{3}(y) =\\text{cos}(1.75\\times\\|Y-D_{i}\\hat{A}_{i}\\|_{1})\\cdot\\text{exp} (-\\|Y-D_{i}\\hat{A}_{i}\\|_{2}/2)\\] (16) \\[r_{4}(y) =\\frac{\\text{sin}(0.5\\pi\\times\\|Y-D_{i}\\hat{A}_{i}\\|_{1})}{0.5 \\pi\\times\\|Y-D_{i}\\hat{A}_{i}\\|_{1}}\\cdot\\text{cos}(1.5\\pi\\times\\|Y-D_{i}\\hat{ A}_{i}\\|_{1})\\] (17) \\[r_{5}(y) =\\frac{e^{i4\\pi\\|Y-D_{i}\\hat{A}_{i}\\|_{1}}-e^{i2\\pi\\|Y-D_{i}\\hat{ A}_{i}\\|_{1}}}{i2\\pi\\|Y-D_{i}\\hat{A}_{i}\\|_{1}} \\tag{18}\\] where \"\\(\\cdot\\)\" represents the inner product between the vectors of reconstructed error with two different scales, and the original dataset is mapped into quadratic, exponential, and trigonometric functions with different types. Experimental results demonstrate that a scale parameter is involved in the dilation and, thus, can be naturally used to accommodate the multiscale phenomenon [40]. The category label of a sample is determined by five subclassifiers (JSRs) with different wavelet kernels at the same time, which is able to improve the discrimination ability compared with single JSR and linear kernel. The significance of subclassifiers is decided by weight setting, and the reconstructed error of the proposed W\\({}^{2}\\) JSRE model is computed as follows: \\[\\text{label}(y_{t})=\\text{arg}\\min\\sum_{j=1}^{5}\\omega_{j}\\times r_{j}(y) \\tag{19}\\] where \\(\\omega_{j}\\) is the weight of the \\(j\\)th subclassifier, which is directly multiplied with the reconstructed matrix, and weight represents the significance of subclassifiers. It is seen as a fuzzy quantitative analysis for the ensemble learning of JSRs, and the performance is better than the traditional voting strategy with fixed category analysis. ### _Weight Optimization With \\(\\beta\\)-WoA_ The exploration phase is represented by searching for prey to conduct random walk, which is computed by the position of a random humpback whale, but the operation efficiency is decreased by random number generation and the evolution trend is uncollected for the enlarge of \\(\\vec{S}\\) in (8). The \\(\\beta\\) function is a factorial function with analytic continuation in the complex plane; two parameters \\(\\gamma\\) and \\(\\eta\\) are defined to adjust the value. For the improvement of the swarm intelligence algorithm, it is necessary to weaken the random process and synthesize multiple parameters updating the individuals. The value range of the \\(\\beta\\) function is [0,1], which is adapted to the weight \\(\\omega_{j}\\) of subclassifiers. As for the proposed \\(\\beta\\)-WOA, the exploration phase is based on the \\(\\beta\\) function instead of searching for prey, and, respectively, acting on encircling prey and bubble-net attacking, which is defined as follows: \\[X(t+1)=\\int_{0}^{1}t^{\\gamma-1}(1-t)^{\\eta-1}dt \\tag{20}\\] where \\[\\gamma =(X^{*}(t)-\\vec{A}\\cdot\\vec{S})^{-1}\\] \\[\\eta =(\\vec{S}^{\\prime}\\cdot e^{bl}\\cdot\\text{cos}(2\\pi l)+X^{*}(t))^{ -1}. 
\\tag{21}\\] There is no random humpback whale that needs to be extracted, all of individuals are, respectively, computed by two processes of encircling prey and bubble-net attacking, they are corresponding to \\(\\gamma\\) and \\(\\eta\\) of \\(\\beta\\) function, and the population is updated by (20) afterward. As a result, the global and local processes are integrated for each individual and iteration, and time complexity is decreased by no random sample generation. Moreover, the coding length of the \\(\\beta\\)-WOA is equal to 5, which is the same as the number of subclassifiers, and directly represents the weight of subclassifiers. ### _Definition of the Objective Function_ The key issue of HSI classification based on the W\\({}^{2}\\) JSRE model is how to establish a reasonable mapping between the solution and the \\(\\beta\\)-WOA. As for weight setting, it is expressed by a constant on the range of [0,1] for subclassifiers and corresponding to a bit of \\(\\beta\\)-WOA. Each individual of \\(\\beta\\)-WOA includes 5 bits: the first bit represents the weight of the first JSR (subclassifier), the second bit is the weight of the second JSR (subclassifier), and so on. The entire code indicates the solution about the optimal weight of the W\\({}^{2}\\) JSRE model, and the fitness value is computed according to the average entropy of the reconstructed matrix, which is defined as follows: \\[F(i)=-\\sum_{i=1}^{s}\\min_{j}\\hat{A_{ij}}\\text{log}_{2}(\\hat{A_{ij}})/s \\tag{22}\\] where \\(s\\) is the scale of testing samples, and \\(j\\) is the category index that takes on the minimum for the \\(i\\)th testing sample. A larger fitness value means that the reconstructed error is smaller, and the category label is more likely to obey the true distribution. ### _Implementation of the Proposed Method_ The proposed HSI classification technique is easy to be fulfilled. The W\\({}^{2}\\) JSRE model is used for pixel-level classification of HSIs and the category label is obtained for each sample, the \\(\\beta\\)-WOA is used to search for the optimal weight of subclassifiers (JSRs), and the exact flow is listed as follows. ## IV Experimental Results and Discussion ### _Data Description_ To evaluate the performance of the proposed HSI classification technique based on the W\\({}^{2}\\) JSRE model and the \\(\\beta\\)-WOA, three public collected HSIs and two measured airborne HSIs are used in the experiments. The first HSI was acquired by the ROSIS sensor during a flight campaign over Pavia University, Italy, and the geometric resolution was 1.3 m [42]. The image was composed of \\(610\\times 340\\) pixels with 103 spectral bands. Fig. 1 displays the ground truth of PaviaU scene. The number and names of corresponding categories that were used are shown in Table I. The second HSI was collected by the AVIRIS sensor and covered the agricultural region of Indian Pines, India, in 1992 [42]. The spectral range was 0.4-2.5 \\(\\mu\\)m with a spectral resolution about 10 nm, and the image was composed of \\(145\\times 145\\) pixels and 220 spectral bands with a spatial resolution of 20 m. Fig. 2 displays the ground truth of Indian scene. The number and names of corresponding categories that were used are shown in Table II. The third HSI was collected by the 224-band AVIRIS sensor over Salinas Valley, California, and it was characterized by high spatial resolution. 
The image was composed of 512 \\(\\times\\) 217 pixels and was available only as sensor radiance data, and 20 water absorption bands were discarded [42]. Fig. 3 displays the ground truth of the Salinas scene. The number and names of the corresponding categories that were used are shown in Table III.

Fig. 1: Original image and reference map of PaviaU.

Fig. 2: Original image and reference map of Indian.

Fig. 3: Original image and reference map of Salinas.

The fourth HSI was collected by the CASI sensor over the suburban area of Xiongan, China, in the summer of 2017. The spectral range was 0.36-1.05 \\(\\mu\\)m with a spectral resolution of 7.2 nm, and the image was composed of 160 \\(\\times\\) 190 pixels with 96 spectral bands. Fig. 4 shows the ground truth of the XionganS scene. The number and names of the corresponding categories that were used are shown in Table IV. The fifth HSI was acquired by the SASI sensor over the urban area of Xiongan, China, in the spring of 2018. The spectral range was 1.0-2.5 \\(\\mu\\)m with a spectral resolution of 15 nm, and the image was composed of 270 \\(\\times\\) 232 pixels with 100 spectral bands. Fig. 5 shows the ground truth of the XionganU scene. The number and names of the corresponding categories that were used are shown in Table V.

Fig. 4: Original image and reference map of XionganS.

Fig. 5: Original image and reference map of XionganU.

### _Parameters Setting of Different Algorithms_

As for the \\(\\beta\\)-WOA, there is one parameter that needs to be set according to the corresponding reference [32]. Moreover, some commonly used swarm intelligence algorithms are also assessed for weight optimization. As illustrated in Section III, the \\(\\beta\\)-WOA is utilized here, whereas particle swarm optimization (PSO) [43], differential evolution (DE) [44], cuckoo search (CS) [45], grey wolf optimizer (GWO) [46], ant lion optimizer (ALO) [47], and the standard WOA are utilized for intuitive comparison. All of the above algorithms are terminated when the number of evaluations reaches 300. Thirty independent runs are conducted because of the randomness of the initial population. Although the computational complexity is \\(O(n\\mathrm{log}n)\\) for the algorithms above [48], no random humpback whale needs to be extracted for the \\(\\beta\\)-WOA, and each bit is adaptively located in [0,1] by the range of the \\(\\beta\\) function, which costs less CPU time than the standard WOA. The parameters of these algorithms are set as constants based on the empirical values of the corresponding references, and they are listed in Table VI.

### _Experimental Results on Swarm Intelligence Algorithms_

In this subsection, the evaluation of training samples with the weights optimized by different swarm intelligence algorithms is investigated. For the five HSIs in Section IV-A, 10% of the pixels of each category are randomly extracted as training samples to obtain the weights of the subclassifiers. Table VII shows the experimental results with different swarm intelligence algorithms, where Fiv and Std represent the average and standard deviation of the fitness value, respectively, and Time is the CPU time after 30 independent runs.
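For readers who want to reproduce the weight search behind these experiments, the sketch below evaluates candidate weight vectors with a simplified reading of the entropy fitness in (22), assuming that the minimum weighted reconstructed error of each test sample is the quantity entering the \\(-x\\log_{2}x\\) term, and uses a crude random search in place of the \\(\\beta\\)-WOA population update. The residuals are random stand-ins, and all names are our own, so this is only an illustration of the optimization loop, not the authors' implementation.

```python
import numpy as np

def fitness(weights, residuals):
    """Simplified reading of (22): average of -r*log2(r) over each test sample's
    minimum weighted reconstructed error r (larger is better)."""
    # residuals: array of shape (n_kernels, n_samples, n_classes), entries in (0, 1]
    weighted = np.tensordot(weights, residuals, axes=1)   # (n_samples, n_classes)
    r_min = weighted.min(axis=1).clip(1e-12, 1.0)
    return float(np.mean(-r_min * np.log2(r_min)))

# Random stand-in for the five subclassifiers' reconstructed errors (100 samples, 3 classes).
rng = np.random.default_rng(0)
residuals = rng.uniform(0.05, 1.0, size=(5, 100, 3))

# Crude random search standing in for the beta-WOA; 300 evaluations, as stated above.
best_w, best_f = None, -np.inf
for _ in range(300):
    w = rng.uniform(0.0, 1.0, size=5)                     # one weight per subclassifier
    f = fitness(w, residuals)
    if f > best_f:
        best_w, best_f = w, f
print(best_w.round(3), round(best_f, 4))
```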
As for the data in Table VII, the optimization ability of the WOA is obviously better than that of PSO, DE, CS, GWO, and ALO, and its fitness value is higher than 0.30 for the five datasets. In addition, the \\(\\beta\\) function is applied to the encircling prey and bubble-net attacking processes of the basic WOA, acting as the heuristic information of the exploration phase. More importantly, the fitness value is further improved compared with the basic WOA, which illustrates that the reconstructed error between the original and recovered datasets remains within a small interval. With regard to operating efficiency, the convergence speed of the WOA is better than that of the other algorithms because of fewer multiplications; moreover, no random humpback whale needs to be extracted for the \\(\\beta\\)-WOA, so the CPU time is further decreased to some extent. Meanwhile, the weights optimized by the \\(\\beta\\)-WOA are suitably assigned to the five subclassifiers; they are 0.2242, 0.1101, 0.6585, 0.2343, and 0.3887 for the Indian dataset, and all subclassifiers make a certain contribution to training. By contrast, with the other algorithms the category label may depend on only one or two subclassifiers: the weight of a single subclassifier exceeds 0.9, and the potential of ensemble learning is not sufficiently exploited. In brief, the optimization ability of the \\(\\beta\\)-WOA is superior, and its convergence speed is fast enough to obtain satisfactory weights, which makes it applicable to the practical work of sample training for HSI classification.

### _Experimental Results About HSI Classification on Pixel Level_

In this subsection, five HSIs, named PaviaU, Indian, Salinas, XionganS, and XionganU, are utilized to conduct pixel-level classification and verify the performance of the W\\({}^{2}\\) JSRE model and the \\(\\beta\\)-WOA. Moreover, some corresponding and newly proposed HSI classification techniques, such as JSR [22], LAJSR [23], JSR-CC [24], wavelet kernel JSR (WJSR), WKNN [26], and WSVM [27], and deep learning models, such as fully convolutional networks (FCN) [49] and the discriminative stacked autoencoder (DSAE) [50], are also used to make an overall comparison. In addition, the classification results with different percentages of training samples (the Indian image is not used because of the small number of samples in the Alfalfa and Oats categories) and with three subclassifiers of ensemble learning are also exhibited for further verification; these experiments are not conducted for LAJSR, JSR-CC, and WJSR because of the correlation among JSR-based techniques. The classification maps of the different techniques are shown in Figs. 6-10, and Tables VIII-XII outline the overall classification accuracy (OA), Kappa coefficient, and CPU time for each HSI.

Fig. 6: Classification results of PaviaU image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).

Fig. 7: Classification results of Indian image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).

Fig. 8: Classification results of Salinas image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).
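Before turning to the detailed results, the decision rule behind these classification maps, i.e., the weighted reconstructed-error minimization of (19), can be summarized in a few lines. The snippet below assumes the five subclassifiers' per-class reconstructed errors are already available for one test pixel and uses made-up error values purely for illustration; only the quoted Indian-dataset weights come from the text above.

```python
import numpy as np

def ensemble_label(errors, weights):
    """Label by (19): argmin over classes of the weighted sum of per-kernel errors.

    errors  -- array (n_kernels, n_classes): r_j(y) for one test pixel
    weights -- array (n_kernels,): subclassifier weights found by the beta-WOA
    """
    total = weights @ errors              # weighted reconstructed error per class
    return int(np.argmin(total))

# Illustrative numbers: 5 wavelet-kernel JSR subclassifiers, 3 candidate classes.
errors = np.array([[0.42, 0.15, 0.61],
                   [0.50, 0.22, 0.58],
                   [0.47, 0.18, 0.66],
                   [0.39, 0.25, 0.70],
                   [0.44, 0.12, 0.55]])
weights = np.array([0.2242, 0.1101, 0.6585, 0.2343, 0.3887])   # Indian-dataset weights quoted above
print(ensemble_label(errors, weights))                          # -> 1 (the minimum-error class)
```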
Compared with the linear kernel, the wavelet kernel improves the scale of the mapping, and the Kappa coefficient reaches 0.91 for the five images. As for the W\\({}^{2}\\) JSRE model, the OA is higher than 95% for the five images, and the Kappa coefficient exceeds 0.95. In particular, the OA reaches 99% for the PaviaU and Salinas images, and it is higher than 95% for all categories of the above two images. The experimental results illustrate that almost all samples are correctly classified and that the discrimination ability for samples with similar feature values is enhanced. Although the OA of the deep learning models is close to that of the proposed W\\({}^{2}\\) JSRE model, classification of the XionganS image takes more than 2000 s, which makes real-time processing difficult.

Fig. 9: Classification results of XionganS image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).

As shown in Figs. 6-10, classification noise clearly appears with WKNN and WSVM, which makes it difficult to recognize different objects in the images; the Grass and Vegetation categories are confused because of their similar spectral characteristics. The JSR-based techniques obtain better classification performance, and the classification noise is eliminated to some extent, but misclassification still exists in edge regions. The classification maps of WJSR clearly reflect the different objects and correspond to the reference maps. In addition, ensemble learning efficiently judges the category label by combining a series of subclassifiers, and the objects of each category are presented continuously when five subclassifiers are used. However, the learning ability is insufficient when training samples are lacking or the subclassifiers are inadequate, and scattered noise appears on the classification maps.

Fig. 10: Classification results of XionganU image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).

As for the curves in Fig. 11, the OA improves as the percentage of training samples increases and remains stable at a high level once most of the noise is eliminated; however, little further improvement is observed once the percentage reaches 10%, and the gain is only 0.4% when more than 10% of the pixels are used as training samples. In short, the proposed W\\({}^{2}\\) JSRE model is suitable for practical HSI classification work, and the classification maps coincide well with the reference maps.

## V Conclusion

In this article, an HSI classification technique based on the W\\({}^{2}\\) JSRE model and the \\(\\beta\\)-WOA is proposed. The category label of each pixel is obtained by reconstruction error minimization of JSR, and the wavelet function acts as the kernel of JSR. Moreover, ensemble learning is used to conduct a detailed analysis of independent features, and the \\(\\beta\\)-WOA is utilized to obtain the optimal weights of the subclassifiers. In general, it is observed that the swarm intelligence algorithm is able to achieve suitable weights and represent the contribution of each subclassifier. In particular, the \\(\\beta\\)-WOA has the highest fitness value among the algorithms, which is appropriate for synthesizing the discrimination ability of the five subclassifiers.
Furthermore, the optimal weights are employed to obtain the category labels of the HSIs, and the OA is compared with some newly proposed and corresponding HSI classification techniques. In all, the proposed W\\({}^{2}\\) JSRE model recognizes the different objects in the image and is sufficient to distinguish most similar objects, with the OA reaching 95% for pixel-level classification. In summary, JSR combined with the wavelet kernel has a good property for solving the classification problem in most cases, the misclassification is apparently weakened by ensemble learning, and the weights optimized by the \\(\\beta\\)-WOA are reasonable and improve the OA to some extent. In the future, it is preferable to combine spatial and spectral features with different types of subclassifiers for HSI classification.

Fig. 11: OA with different percentages of training samples. (a) PaviaU image. (b) Salinas image. (c) XionganS image. (d) XionganU image.

## References

* [1] S. Arjovsky, M. G. Unsal, and H. H. Orcku, "Use of the heuristic optimization in the parameter estimation of generalized gamma distribution: Comparison of GA, DE, PSO and SA methods," _Comput. Statist._, vol. 37, no. 4, pp. 1-31, 2020.
* [2] S. Chen, C. Guo, and J. Lai, "Deep ranking for person re-identification via joint representation learning," _IEEE Trans. Image Process._, vol. 25, no. 5, pp. 2353-2367, May 2016.
* [43] J. Kennedy and R. Eberhart, "Particle swarm optimization," in _Proc. Int. Conf. Neural Netw._, 1995, vol. 4, pp. 1942-1948.
* [44] K. V. Price, "Differential evolution: A fast and simple numerical optimizer," in _Proc. North Amer. Fuzzy Inf. Process._, 1996, pp. 524-527.
* [45] X. S. Yang and S. Deb, "Cuckoo search via Levy flights," in _Proc. World Congr. Nat. Biol. Inspired Comput._, 2009, pp. 210-214.
* [46] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," _Adv. Eng. Softw._, vol. 69, pp. 46-61, 2014.
* [47] S. Mirjalili, "The ant lion optimizer," _Adv. Eng. Softw._, vol. 83, pp. 80-98, 2015.
* [48] C. Witt, "Tight bounds on the optimization time of a randomized search heuristic on linear functions," _Combinatorics, Probab. Comput._, vol. 22, pp. 294-318, 2013.
* [49] L. Zou, X. Zhu, C. Wu, Y. Liu, and L. Qu, "Spectral-spatial exploration for hyperspectral image classification via the fusion of fully convolutional networks," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 13, pp. 659-674, 2020.
* [50] P. Zhou, J. Han, G. Cheng, and B. Zhang, "Learning compact and discriminative stacked autoencoder for hyperspectral image classification," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 7, pp. 4823-4833, Jul. 2019.

**Mingwei Wang** received the B.S. degree in electronic information science and technology from Hubei Normal University, Huangshi, China, in 2011, the M.S. degree in software engineering from the Hubei University of Technology, Wuhan, China, in 2015, and the Ph.D. degree in photogrammetry and remote sensing from Wuhan University, Wuhan, in 2018. Since 2018, he has been a Professional Researcher with the Institute of Geological Survey, China University of Geosciences, Wuhan. His major research interests include hyperspectral image processing, swarm intelligence algorithm, and deep learning models.

**Zitong Jia** received the B.S.
degree in hydrology and water resources engineering from Northwest Agriculture and Forest University, Kaiyang, China, in 2020. She is currently working toward the M.S. degree with the China University of Geosciences, Wuhan, China. Her major research interests include hydrologic model research based on geographic information systems, remote sensing image processing, and classification of land-cover change.

**Jianwei Luo** received the B.S. degree in computer science and technology from the Wuhan University of Technology, Wuhan, China, in 2005. Since 2012, he has been a Senior Engineer with the Hubei Cancer Hospital, Huazhong University of Science and Technology, Wuhan. His major research interests include hyperspectral imaging theory, machine learning models, and computer application technology.

**Maolin Chen** received the B.S., M.S., and Ph.D. degrees from Wuhan University, Wuhan, China, in 2012, 2014, and 2018, respectively, all in photogrammetry and remote sensing. He is currently an Associate Professor with the School of Civil Engineering, Chongqing Jiaotong University, Chongqing, China. His research interests include feature extraction, image interpretation, and object recognition of laser scanning data.

**Shuping Wang** received the B.S. and M.S. degrees from the Hubei University of Technology, Wuhan, China, in 2011 and 2014, respectively, both in computer science and technology. Since 2015, he has been an Engineer with the Hubei Cancer Hospital, Huazhong University of Science and Technology, Wuhan. His major research interests include medical image processing, swarm intelligence algorithm, and computer application technology.

**Zhiwei Ye** received the Ph.D. degree in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2006. He is currently a Professor with the School of Computer Science, Hubei University of Technology, Wuhan. He has authored or coauthored more than 30 papers in the area of digital image processing and swarm intelligence algorithm. His research interests include image analysis, pattern recognition, and data mining.
Joint sparse representation (JSR) is a commonly used classifier that recognizes different objects with core features extracted from images. However, the generalization ability of the traditional linear kernel is weak, and objects with similar feature values that belong to different categories are not sufficiently distinguished, especially in a hyperspectral image (HSI). In this article, an HSI classification technique based on the weight wavelet kernel JSR ensemble model and the \\(\\beta\\)-whale optimization algorithm is proposed to conduct pixel-level classification, where the wavelet function acts as the kernel of JSR. Moreover, ensemble learning is used to determine the category label of each sample by the comprehensive decision of several subclassifiers, and the \\(\\beta\\) function is utilized to enhance the exploration phase of the whale optimization algorithm and obtain the optimal weights of the subclassifiers. Experimental results indicate that the performance of the proposed HSI classification method is better than that of other newly proposed and corresponding approaches, the misclassification and classification noise are eliminated to some extent, and the overall classification accuracy reaches 95% for all HSIs.

_Index Terms_—\\(\\beta\\) function, ensemble learning, hyperspectral image (HSI) classification, joint sparse representation (JSR), wavelet kernel, weight setting.
L-Hypersurface Based Parameters Selection in Composite Regularization Models With Application to SAR and TomoSAR Imaging Yizhe Fan \\({}^{\\copyright}\\), Kun Wang \\({}^{\\copyright}\\), Jie Li \\({}^{\\copyright}\\), Guoru Zhou \\({}^{\\copyright}\\), Bingchen Zhang, and Yirong Wu Manuscript received 31 May 2023; revised 22 July 2023 and 21 August 2023; accepted 2 September 2023. Date of publication 6 September 2023; date of current version 14 September 2023. This work was supported by the National Natural Science Foundation of China under Grant 61991421. _(Corresponding author: Yizhe Fan.)_ Yizhe Fan, Kun Wang, Jie Li, and Guoru Zhou are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China, also with the Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Bingchen Zhang and Yirong Wu are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 10094, China, and also with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/JSTARS.2023.3312510 ## I Introduction Sparse signal processing focuses on representing the signal in a sparse way so as to make the processing faster and simpler with useful information stored in few coefficients [1]. Regularization is a basic method of sparse signal processing. Synthetic aperture radar (SAR) is the major technology of modern microwave imaging in remote sensing and has the all-time and all-weather observing ability. The combination of sparse signal processing and SAR imaging can potentially improve the performance and reduce calculation complexity. With the development of sparse signal processing from point targets [1] to phase targets, regularization models are no longer limited in single penalty term, but combined penalty terms, including \\(\\ell_{1}\\) and total variation (TV) norm [2], nonconvex (NC) and TV-norm [3], NC and nonlocal TV norm [4], \\(\\ell_{1}\\) and \\(\\ell_{2}\\) norm [5], NC and \\(\\ell_{2,1}\\) norm [6], combined dictionaries [7], and morphology regularization [8]. Each penalty term of these composite regularization models is multiplied by a regularization parameter. The selection of regularization parameters directly controls the quality of reconstruction results. Tomographic SAR (TomoSAR) is an essential technique for retrieving spatial information from multibaseline interferometric SAR images acquired with different view angles, which has been intensively developed in the past two decades and shows promising results [9, 10, 11, 12]. Recently, sparse signal processing has been widely used in SAR tomography reconstruction because of the advantages of superresolution and limited number of baselines. Introducing Spatial regularization, which is a composite regularization model, to TomoSAR [11] can retain the sparsity of targets and, at the same time, enhance spatial smoothness. Regularization parameters also have a great effect on the reconstruction quality of this method. 
Composite regularization models are used extensively not only in sparse signal processing but also in a variety of fields involving computational imaging [13, 14, 15]. In all application scenarios, the regularization parameters control the effect of the corresponding penalty terms and can greatly influence the performance of regularization models. Past studies have provided a series of parameter selection methods for regularization models with a single penalty term, such as the L-curve [16, 17, 18], Stein's unbiased risk estimate (SURE) [17, 18, 19, 20], and generalized cross-validation (GCV) [17, 18, 20, 21, 22]. However, the existing multiparameter selection methods summarized by Grasmair and Naumova [23] mainly deal with penalty terms in seminorm (\\(\\|\\mathbf{T}\\mathbf{x}\\|_{p}^{p}\\)) form, and little attention has been paid in recent years to parameter selection for composite regularization models with arbitrary penalty terms. Thus, given the increasingly wide use of combined penalty terms, an adaptive multiparameter selection method is needed. Although deep learning approaches [24, 25, 26, 27] can deal with the selection of multiple regularization parameters, access to the ground-truth image cannot be granted in real-life applications, such as SAR imaging. In addition, deep learning approaches can be very computationally demanding, and the trained models sometimes apply only to limited situations, such as a fixed signal-to-noise ratio. The main contributions of this article are listed as follows.

1. We extend the L-curve method to multiparameter selection in composite regularization models and propose an L-hypersurface method, which can adaptively select multiple regularization parameters without limitation on the form of the penalty terms.
2. We establish a corner location method for the hypersurface based on the inner product and oriented area. This method can locate the point of largest curvature on a nonmonotonic L-hypersurface.
3. We apply the L-hypersurface method to composite regularization models for SAR and TomoSAR imaging. The effectiveness of the proposed method is verified via simulation and real data experiments.

The rest of this article is organized as follows. Section II introduces the regularization models in SAR and TomoSAR image reconstruction. The fundamental theory of the L-hypersurface method, as well as the specific algorithm, is elaborated in Section III. Section IV analyzes the experimental results and evaluates the performance of the proposed method. Finally, Section V concludes this article.

## II Regularization Model in SAR and TomoSAR Image Reconstruction

The sparse microwave imaging model can be expressed as [1]

\\[\\mathbf{y}=\\mathbf{A}\\mathbf{x}+\\mathbf{n} \\tag{1}\\]

where \\(\\mathbf{y}\\in\\mathbb{C}^{M\\times 1}\\) denotes the vector form of the echo wave, \\(\\mathbf{x}\\in\\mathbb{C}^{N\\times 1}\\) denotes the vector form of the image, \\(\\mathbf{A}\\in\\mathbb{C}^{M\\times N}\\) denotes the measurement matrix of the imaging system, and \\(\\mathbf{n}\\in\\mathbb{C}^{M\\times 1}\\) denotes the additive Gaussian white noise vector.
The sparse microwave imaging process solves a regularization problem, that is, it minimizes the objective function

\\[\\hat{\\mathbf{x}}=\\arg\\min_{\\mathbf{x}}\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\sum_{i=1}^{K}\\lambda_{i}p_{i}(\\mathbf{x}) \\tag{2}\\]

where \\(\\hat{\\mathbf{x}}\\) denotes the imaging result, \\(\\|\\cdot\\|_{2}\\) denotes the \\(\\ell_{2}\\)-norm, and \\(\\lambda_{1},\\lambda_{2},\\ldots,\\lambda_{K}\\) denote the regularization parameters controlling the penalty terms \\(p_{1}(\\cdot),p_{2}(\\cdot),\\ldots,p_{K}(\\cdot)\\), respectively.

### _SAR Regularization Model_

Various composite regularization models exist for SAR image reconstruction [2, 3, 4, 5, 6, 7]. Here, we introduce the commonly used NC and TV regularization model [3]

\\[\\hat{\\mathbf{x}}=\\arg\\min_{\\mathbf{x}}\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+\\lambda_{1}p_{\\text{NC}}(\\mathbf{x})+\\lambda_{2}p_{\\text{TV}}(\\mathbf{x}). \\tag{3}\\]

The NC penalty and TV penalty are defined as

\\[p_{\\text{NC}}(\\mathbf{x})=\\text{MC}(\\mathbf{x})=\\sum_{i=1}^{N}\\begin{cases}|x_{i}|-\\dfrac{|x_{i}|^{2}}{2\\theta},&|x_{i}|\\leq\\theta\\\\ \\theta/2,&|x_{i}|>\\theta\\end{cases} \\tag{4}\\]

\\[p_{\\text{TV}}(\\mathbf{x})=\\text{TV}(|\\mathbf{x}|)=\\sum_{i,j}\\left\\|\\nabla(|\\mathbf{X}|)_{i,j}\\right\\|_{2} \\tag{5}\\]

where \\(x_{i}\\) denotes the \\(i\\)th element of the image vector \\(\\mathbf{x}\\), \\(\\mathbf{X}\\) is the 2-D complex-valued matrix form of the vector \\(\\mathbf{x}\\), the operator \\(|\\cdot|\\) represents magnitude calculation, and \\(\\nabla(|\\mathbf{X}|)_{i,j}\\) is the gradient vector of the pixel in the \\(i\\)th row and \\(j\\)th column, which is defined as

\\[\\nabla(|\\mathbf{X}|)_{i,j}=\\left(D_{h}\\left|\\mathbf{X}\\right|_{i,j},D_{v}\\left|\\mathbf{X}\\right|_{i,j}\\right) \\tag{6}\\]
\\[D_{h}\\left|\\mathbf{X}\\right|_{i,j}=\\left|\\mathbf{X}_{i+1,j}\\right|-\\left|\\mathbf{X}_{i,j}\\right| \\tag{7}\\]
\\[D_{v}\\left|\\mathbf{X}\\right|_{i,j}=\\left|\\mathbf{X}_{i,j+1}\\right|-\\left|\\mathbf{X}_{i,j}\\right| \\tag{8}\\]

\\(\\lambda_{1}\\) controls the effect of the NC penalty and \\(\\lambda_{2}\\) controls the effect of the TV penalty. In the NC-TV regularization-based sparse SAR imaging model, the NC-norm penalty, playing the role of a sparsity-inducing regularizer, can enhance point-based features, and the TV-norm penalty enhances region-based features, maintaining the continuity of the backscattering coefficient of distributed targets within a certain area. Therefore, the NC-TV model can maintain the reconstruction accuracy of targets as well as protect the reconstruction result from speckle. The distinct functions of the NC and TV penalties make the tradeoff between them evident. When \\(\\lambda_{1}\\) and \\(\\lambda_{2}\\) are relatively small, the effect of the penalty terms is not visible. When \\(\\lambda_{1}\\) is relatively large, the sparsity of the image is overly enhanced, which can lead to target loss. When \\(\\lambda_{2}\\) is relatively large, oversmoothing causes other issues, including targets being buried in the smoothed background and the loss of texture details. In general, the image quality is controlled by \\(\\lambda_{1}\\) and \\(\\lambda_{2}\\); thus, it is significant to select the parameters in NC-TV regularization properly.
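For reference, a minimal sketch of how the two penalties in (3)-(8) might be evaluated for a candidate image is given below; the array names and the threshold variable are assumptions, and this is an illustration rather than the implementation used in this work.

```python
# Minimal sketch: NC (minimax-concave) penalty (4) and TV penalty (5)-(8).
import numpy as np

def mc_penalty(X, theta):
    a = np.abs(X)
    return np.sum(np.where(a <= theta, a - a**2 / (2.0 * theta), theta / 2.0))

def tv_penalty(X):
    a = np.abs(X)
    dh = np.zeros_like(a)
    dv = np.zeros_like(a)
    dh[:-1, :] = a[1:, :] - a[:-1, :]        # D_h|X|_{i,j} = |X_{i+1,j}| - |X_{i,j}|
    dv[:, :-1] = a[:, 1:] - a[:, :-1]        # D_v|X|_{i,j} = |X_{i,j+1}| - |X_{i,j}|
    return np.sum(np.sqrt(dh**2 + dv**2))    # sum of gradient-vector lengths, eq. (5)

# For a given reconstruction x (with 2-D form X) and parameters (lam1, lam2),
# the composite objective (3) would then be
#   np.linalg.norm(A @ x - y) ** 2 + lam1 * mc_penalty(X, theta) + lam2 * tv_penalty(X)
```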
### _TomoSAR Regularization Model_ The spatial regularization model in TomoSAR reconstruction [11] is given as \\[\\hat{\\mathbf{x}}= \\arg\\min_{\\mathbf{x}}\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+ \\lambda_{1}\\|\\mathbf{x}\\|_{1}+p_{s}(\\mathbf{x})\\] \\[= \\arg\\min_{\\mathbf{x}}\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}+ \\lambda_{1}\\|\\mathbf{x}\\|_{1}\\] \\[+\\lambda_{2}\\|\\mathbf{D}_{x}\\left|\\mathbf{x}\\right|\\|_{2}^{2}+\\lambda_{3} \\|\\mathbf{D}_{y}\\left|\\mathbf{x}\\right|\\|_{2}^{2}+\\lambda_{4}\\|\\mathbf{D}_{z}\\left|\\mathbf{x} \\right|\\|_{2}^{2} \\tag{9}\\] where \\(p_{s}(\\mathbf{x})\\) is the spatial smoothness penalty \\[p_{s}(\\mathbf{x})=\\lambda_{2}\\|\\mathbf{D}_{x}\\left|\\mathbf{x}\\right|\\|_{2}^{2}+\\lambda_{3} \\|\\mathbf{D}_{y}\\left|\\mathbf{x}\\right|\\|_{2}^{2}+\\lambda_{4}\\|\\mathbf{D}_{z}\\left|\\mathbf{x} \\right|\\|_{2}^{2}. \\tag{10}\\] \\(\\|\\cdot\\|_{1}\\) denotes the \\(\\ell_{1}\\) norm, matrices \\(D_{x},D_{y}\\), and \\(D_{z}\\) stand for the finite difference operators in the \\(x\\)-, \\(y\\)- and \\(z\\)-directions. \\(\\lambda_{1}\\), \\(\\lambda_{2}\\), \\(\\lambda_{3}\\), and \\(\\lambda_{4}\\) control the weight of each term. \\(\\lambda_{1}\\), associated with the sparsity constraint, has greatest influence on imaging result. When \\(\\lambda_{1}\\) is too large, there are some holes in the reconstruction, while a value of \\(\\lambda_{1}\\) that is too small leaves sidelobes and outliers. The effect of a too large value of \\(\\lambda_{2}\\), \\(\\lambda_{3}\\), or \\(\\lambda_{4}\\) is the extension of structures in the corresponding direction of the spatial smoothing. When the spatial regularization is too weak, outliers located far from the actual surfaces can be observed, according to [11]. ## III L-Hypersurface Method In this section, a multiparameter selection method named L-hypersurface is proposed. The kernel idea of L-curve method involves plotting the residual term \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\) and the penalty term \\(p_{k}(\\mathbf{x})\\), with the optimization parameter selected based on the corner of the L-shape curve. This allows for the best possible parameter selection result. We introduce the idea to multiple regularization parameters selection. ### _L-Curve Method_ The L-curve is a classic method for selecting regularization parameter \\(\\lambda\\) in regularization model with single penalty term, as (11) shows \\[\\hat{\\mathbf{x}}=\\arg\\min_{\\mathbf{x}}\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}+\\lambda p(\\mathbf{x }). \\tag{11}\\] The basic idea of L-curve method is to plot the residual term \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\) and the penalty term \\(p(\\mathbf{x})\\). The relationship curve is called an L-curve, because its shape contains a steep part and a horizontal part. The junction point of the steep and horizontal part on the L-curve is commonly named as corner, which represents the optimal parameter for the regularization model. Compared with statistical methods, such as GCV and SURE, the L-curve is a graphical method, which can be more intuitive and efficient for multiparameter regularization problems. The L-curve method also has its ability to select parameters according to the needs of practical situation through defining corner in different ways. 
Regarding the low complexity and high applicability of L-curve, we decided to extend its ideas to multiparameter selection and establish an _L-hypersurface_ method. Although Belge et al. [28] has the similar idea in establishing their own multiparameter selection method named by L-hypersurface, the performance of their method still depends on the form of penalty terms. In this article, we are going to establish a specific method for locating the corner on L-hypersurface regardless of the form of penalty terms. ### _Generation of L-Hypersurface_ First, we generate an L-hypersurface in light of L-curve, expanding the 2-D curve to a high-dimensional hypersurface. The L-hypersurface for composite regularization model is given as \\[\\mathcal{L}\\!\\!:=\\!\\!\\left\\{\\left[\\lg\\|\\mathbf{A}\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}}-\\mathbf{ y}\\|_{2}^{2},\\lg p_{1}(\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}}),\\ldots,\\lg p_{K}(\\hat{\\mathbf{x}}_{ \\mathbf{\\lambda}})\\right]\\bigg{|}\\mathbf{\\lambda}\\in\\mathbb{R}^{K}\\right\\} \\tag{12}\\] where \\(\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}}\\) denotes the solution to (2) with \\(\\mathbf{\\lambda}=(\\lambda_{1},\\lambda_{2},\\ldots,\\lambda_{K})\\) as regularization parameters. Each of the residual term and penalty terms associates with one dimension of the coordinate system, all of which formulate a \\(K\\) dimensional hypersurface on \\((K+1)\\) dimensional hyperspace. The \\(\\lg(\\cdot)\\) form is used to enhance the turning point of the hypersurface [16]. Each point on the hypersurface \\(\\mathcal{L}\\) represents a solution to optimization problem (2) with a specific set of parameters. For parameters \\(\\mathbf{\\lambda}_{0}\\), the corresponding point on \\(\\mathcal{L}\\) can be described as \\[\\mathcal{L}(\\mathbf{\\lambda}_{0}) = \\left[\\lg(\\delta_{0}),\\lg(\\delta_{1}),\\ldots,\\lg(\\delta_{K})\\right]\\] \\[\\delta_{0} = \\left\\|\\mathbf{A}\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}_{0}}-\\mathbf{y}\\right\\|_{2}^{2},\\] \\[\\delta_{k} = p_{k}(\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}_{0}}),\\,(k=1,2,\\ldots,K). \\tag{13}\\] Next, we make the following statement: Each point on the hypersurface \\(\\mathcal{L}\\) is a solution to the following set of equations: \\[\\delta_{0}= \\min_{\\mathbf{x}}\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}\\text{ subject to}\\] \\[p_{k}(\\mathbf{x})\\leq\\delta_{i}(i=1,2,\\ldots,K)\\] \\[\\delta_{1}= \\min_{\\mathbf{x}}p_{1}(\\mathbf{x})\\text{ subject to}\\] \\[\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\leq\\delta_{0},p_{k}(\\mathbf{x})\\leq \\delta_{k}(i=2,3\\ldots,K)\\] \\[\\delta_{2}= \\min_{\\mathbf{x}}p_{2}(\\mathbf{x})\\text{ subject to}\\] \\[\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\leq\\delta_{0},p_{k}(\\mathbf{x})\\leq \\delta_{k}(i=1,3\\ldots,K)\\] \\[\\vdots\\] \\[\\delta_{K}= \\min_{\\mathbf{x}}p_{k}(\\mathbf{x})\\text{ subject to}\\] \\[\\left\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\right\\|_{2}^{2}\\leq\\delta_{0},p_{k}( \\mathbf{x})\\leq\\delta_{k}(i=1,2\\ldots,K-1). \\tag{14}\\] The statement can be proved by contradiction. 
Suppose that \\(\\delta_{0}\\neq\\min_{\\mathbf{x}}\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\,s.t.\\,p_{k}(\\mathbf{x})\\leq\\delta_{k}(k=1,2,\\ldots,K)\\), which means that there exists \\(\\hat{\\mathbf{x}}^{\\prime}\\) such that \\(p_{k}(\\hat{\\mathbf{x}}^{\\prime})\\leq\\delta_{k}(k=1,2,\\ldots,K)\\) and \\(\\|\\mathbf{A}\\hat{\\mathbf{x}}^{\\prime}-\\mathbf{y}\\|_{2}^{2}<\\|\\mathbf{A}\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}_{0}}-\\mathbf{y}\\|_{2}^{2}=\\delta_{0}\\). Then, we get \\(\\|\\mathbf{A}\\hat{\\mathbf{x}}^{\\prime}-\\mathbf{y}\\|_{2}^{2}+\\sum_{k=1}^{K}\\lambda_{0,k}p_{k}(\\hat{\\mathbf{x}}^{\\prime})<\\delta_{0}+\\sum_{k=1}^{K}\\lambda_{0,k}\\delta_{k}\\), where \\(\\lambda_{0,k}\\) denotes the \\(k\\)th component of \\(\\mathbf{\\lambda}_{0}\\), which contradicts the fact that \\(\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}_{0}}\\) is the optimal solution to (2). Similarly, each equation in (14) can be proved. The statement guarantees that the L-hypersurface divides the hyperspace into two parts and that any reconstruction result \\(\\hat{\\mathbf{x}}\\) must correspond to a point above or on the hypersurface. (Strictly speaking, there is no above/below in a high-dimensional space, but the idea can easily be imagined and understood.) Once a set of parameters is given, the corresponding result can never correspond to a point beneath the L-hypersurface. Hence, the tradeoff must be made at a certain point on the hypersurface, which is regarded as the corner. Determining the location of the corner has a significant impact on this tradeoff; therefore, a comprehensive discussion of this subject follows in the next subsection.

### _Determination of Corner_

Locating the corner is the most significant issue for the L-hypersurface method. A variety of methods exist for locating the corner of a single-parameter L-curve [18, 24, 29, 30, 31]. As the number of parameters increases, however, the complexity and uncertainty in locating the corner on the L-hypersurface also escalate, which leads to challenges including the definition of the corner, the loss of monotonicity between the residual and penalty terms, and the intricate patterns of the L-hypersurface. To deal with these obstacles, we establish a new corner determination method based on the inner product and oriented area. Detailed analyses are as follows.

#### III-C1 Definition of Corner

There are many ways to define the corner of a single-parameter L-curve, which are given as follows:

1. the point of maximum curvature;
2. the point closest to a reference point;
3. the point of tangency with a decided negative slope.

Here, we take 1) as the definition. Since there exist various kinds of curvature for a hypersurface, such as normal curvature, principal curvature, draw curvature, and Gaussian curvature, the meaning of curvature should be clarified. Thus, giving a reasonable interpretation of curvature is one of the main difficulties for corner determination. Here, we propose a definition of the corner based on the inner product. In application, an L-hypersurface can be generated from discrete sampling points. Let \\(J_{k}\\) be the number of sampling values of \\(\\lambda_{k}\\); then we have

\\[\\lambda_{k}\\in\\left\\{\\lambda_{k}^{1},\\lambda_{k}^{2},\\ldots,\\lambda_{k}^{J_{k}}\\right\\},\\ 0<\\lambda_{k}^{1}<\\lambda_{k}^{2}<\\ldots<\\lambda_{k}^{J_{k}} \\tag{15}\\]

where \\(\\lambda_{k}^{j}(j=1,2,\\ldots,J_{k})\\) denotes the \\(j\\)th sampling value of parameter \\(\\lambda_{k}\\), arranged from smallest to largest.
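In practice, every sampled point of (12)-(13) requires solving (2) once and recording the logarithms of the residual and penalty terms over the grid (15). A minimal sketch of this tabulation is given below; `solve` is a hypothetical stand-in for any solver of (2) (for example, an ADMM or FISTA routine) and is not specified here.

```python
# Minimal sketch: tabulating the L-hypersurface points (12)-(13) on the grid (15).
import itertools
import numpy as np

def build_l_hypersurface(A, y, penalties, grids, solve):
    """penalties: list of callables p_k(x); grids: list of ascending lambda_k samples."""
    points = {}
    for idx in itertools.product(*[range(len(g)) for g in grids]):
        lam = tuple(g[j] for g, j in zip(grids, idx))     # (lambda_1^{j_1}, ..., lambda_K^{j_K})
        x_hat = solve(A, y, lam)                          # solution of (2) for this parameter set
        residual = np.linalg.norm(A @ x_hat - y) ** 2
        coords = [np.log10(residual)] + [np.log10(p(x_hat)) for p in penalties]
        points[idx] = np.array(coords)                    # one sampled point of the hypersurface
    return points
```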
Through interpolating and smoothing operations, we can generate an approximate L-hypersurface of the tested regularization model from the \\(J_{1}\\times J_{2}\\times\\cdots\\times J_{K}\\) sampling points. We define the curvature of the L-hypersurface \\(\\mathcal{L}\\) at point \\(A\\) as the average of the curvature on \\(K\\) projection planes

\\[\\rho(A)=\\frac{1}{K}\\sum_{k=1}^{K}\\rho_{k}(A) \\tag{16}\\]

where \\(\\rho_{k}(A)\\) denotes the curvature of the projected curve on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\) plane at point \\(A\\). Specifically, given a certain point on the L-hypersurface

\\[A=\\mathcal{L}(\\lambda_{1}^{j_{1}},\\lambda_{2}^{j_{2}},\\ldots,\\lambda_{K}^{j_{K}}) \\tag{17}\\]

the curvature \\(\\rho_{k}\\) on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\) plane is defined with the assistance of another two points \\(B_{k}\\) and \\(C_{k}\\)

\\[B_{k}=\\mathcal{L}(\\lambda_{1}^{j_{1}},\\lambda_{2}^{j_{2}},\\ldots,\\lambda_{k}^{j_{k}^{\\prime}},\\ldots,\\lambda_{K}^{j_{K}})\\]
\\[C_{k}=\\mathcal{L}(\\lambda_{1}^{j_{1}},\\lambda_{2}^{j_{2}},\\ldots,\\lambda_{k}^{1},\\ldots,\\lambda_{K}^{j_{K}}) \\tag{18}\\]

where \\(j_{k}^{\\prime}\\), which will be decided later, satisfies \\(1<j_{k}<j_{k}^{\\prime}<J_{k}\\). \\(B_{k}\\) and \\(C_{k}\\) correspond to the same \\(j_{1},\\ldots,j_{K}\\) (except \\(j_{k}\\)) as \\(A\\). \\(C_{k}\\) corresponds to the minimum value of parameter \\(\\lambda_{k}\\), which means that the minimum value of \\(\\|\\mathbf{A}\\hat{\\mathbf{x}}_{\\mathbf{\\lambda}}-\\mathbf{y}\\|_{2}^{2}\\) on the subcurve is achieved at point \\(C_{k}\\). Fig. 1(a) shows the projection of a subcurve of the L-hypersurface, which can be described as \\(\\mathcal{L}(\\lambda_{1}^{j_{1}},\\lambda_{2}^{j_{2}},\\ldots,\\lambda_{k}^{\\hat{j}_{k}},\\ldots,\\lambda_{K}^{j_{K}})\\) (\\(\\hat{j}_{k}\\) is a variable), on the plane \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\), where \\(A\\), \\(B_{k}\\), and \\(C_{k}\\) are marked on the curve. The curvature of point \\(A\\) on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\) plane is defined by the inner product

\\[\\rho_{k}(A,B_{k},C_{k})=\\pi-\\arccos\\frac{\\langle\\overrightarrow{A^{\\prime}B_{k}^{\\prime}},\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\rangle}{\\left|\\overrightarrow{A^{\\prime}B_{k}^{\\prime}}\\right|\\left|\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\right|} \\tag{19}\\]

where \\(Q^{\\prime}\\) denotes the projected point of \\(Q\\) on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\) plane, \\(\\langle\\cdot,\\cdot\\rangle\\) is the inner product operator, and \\(|\\cdot|\\) calculates the length of a vector.

#### III-C2 Loss of Monotonicity Between Regularization Terms and Residual Term

Returning to the L-curve for a single parameter, the residual term monotonically decreases as the regularization term increases. For composite regularization problems, the expectation that the regularization terms are a decreasing function of the residual term usually fails, especially when the penalty term is not of Tikhonov form (\\(\\|\\mathbf{\\Gamma}\\mathbf{x}\\|_{2}\\)) but is, for example, the TV-norm. In addition, the solution algorithms, such as ADMM [32], ISTA [33], and FISTA [34], might also lead to nonmonotonicity of the results. Most prior algorithms for locating the corner rely on the monotonicity of the curve and discard those points where monotonicity is not fulfilled.
For the L-hypersurface, however, there might be so many discarded points that the true corner of the curve is missed if we ignore all the nonmonotonic points. Thus, we should deal with those nonmonotonic points rather than discard them. Here, we resolve the interference of nonmonotonicity through the selection of point \\(B_{k}\\). More specifically, for all possible \\(B_{k}\\)s, we calculate the curvature \\(\\rho_{k}(A,B_{k},C_{k})\\) through (19) and take the maximum value as \\(\\rho_{k}(A)\\). Geometrically, while \\(B_{k}\\) moves from \\(A\\) to the end of the curve, the maximum curvature decided by \\(\\triangle AB_{k}C_{k}\\) gives the projected curvature of \\(A\\) on the plane \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\)

\\[\\rho_{k}(A)=\\max_{B_{k}}\\rho_{k}(A,B_{k},C_{k})\\]
\\[=\\max_{j_{k}^{\\prime}\\,(1<j_{k}<j_{k}^{\\prime}<J_{k})}\\pi-\\arccos\\frac{\\langle\\overrightarrow{A^{\\prime}B_{k}^{\\prime}},\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\rangle}{\\left|\\overrightarrow{A^{\\prime}B_{k}^{\\prime}}\\right|\\left|\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\right|}\\]
\\[=\\pi-\\min_{j_{k}^{\\prime}\\,(1<j_{k}<j_{k}^{\\prime}<J_{k})}\\arccos\\frac{\\langle\\overrightarrow{A^{\\prime}B_{k}^{\\prime}},\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\rangle}{\\left|\\overrightarrow{A^{\\prime}B_{k}^{\\prime}}\\right|\\left|\\overrightarrow{A^{\\prime}C_{k}^{\\prime}}\\right|}. \\tag{20}\\]

Fig. 1: Determination of corner: a subcurve of the L-hypersurface projected on the plane \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{k}(\\mathbf{x})\\). (a) Convex pattern of L-shape. (b) Concave pattern of L-shape.

Fig. 2 shows a nonmonotonic situation: the value of \\(p_{k}(\\mathbf{x})\\) increases as \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}\\) increases in the circled part. For points \\(A^{1}\\) and \\(A^{2}\\), we suppose \\(B_{k}^{1}\\) and \\(B_{k}^{2}\\) are the points corresponding to the maximum curvature, respectively, so that the circled part has no effect on calculating \\(\\rho_{k}(A^{1})\\) and \\(\\rho_{k}(A^{2})\\). For point \\(A^{3}\\), \\(B^{3}_{k}\\) is the point corresponding to the maximum curvature, which is on the nonmonotonically decreasing part. According to our method, the corner would be located near \\(A^{3}\\), since \\(\\triangle A^{3}B^{3}_{k}C_{k}\\) gives the maximum curvature on the projected curve. If we ignored the circled part, however, \\(B^{3}_{k}\\) would be ignored, the curvature of \\(A^{3}\\) would be calculated differently, and the corner might move to \\(A^{2}\\). This is called corner mismatching and is avoided by our method.

#### III-C3 Convex and Concave Pattern of L-Shape

Whether the curve is an L-shape [see the convex shape in Fig. 1(a)] or an inverted L-shape [see the concave shape in Fig. 1(b)], formula (19) gives the same result. The shape of the curve is determined by the form of the regularizers and the solution algorithm. Therefore, we need to distinguish the convex or concave pattern of the curves and select the maximum-curvature point on an L-shaped curve. We introduce the concept of the oriented area to separate the two patterns [30]. The oriented area of the triangle \\(\\triangle AB_{k}C_{k}\\) is defined as

\\[\\text{area}(AB_{k}C_{k})=\\frac{1}{2}\\det(\\overrightarrow{C_{k}^{\\prime}A^{\\prime}},\\overrightarrow{A^{\\prime}B^{\\prime}_{k}}). \\tag{21}\\]

For a given triangle with vertex \\(A\\), the sign of the area identifies the shape of this part of the curve.
The curve is an L-shape if \\(\\text{area}(AB_{k}C_{k})>0\\) and an inverted L-shape if \\(\\text{area}(AB_{k}C_{k})<0\\). Finally, we determine the corner of the L-hypersurface as the point with the maximum curvature whose corresponding triangles have positive area

\\[A_{\\text{corner}}:=\\arg\\min_{A}\\frac{1}{K}\\sum_{k=1}^{K}\\min_{B_{k}}\\arccos\\frac{\\langle\\overrightarrow{A^{\\prime}B^{\\prime}_{k}},\\overrightarrow{A^{\\prime}C^{\\prime}_{k}}\\rangle}{\\left|\\overrightarrow{A^{\\prime}B^{\\prime}_{k}}\\right|\\left|\\overrightarrow{A^{\\prime}C^{\\prime}_{k}}\\right|}\\]
\\[\\text{subject to area}(AB_{k}C_{k})>0. \\tag{22}\\]

The result can be reformulated as

\\[A_{\\text{corner}}=\\arg\\min_{A,j_{k}^{\\prime}}\\frac{1}{K}\\sum_{k=1}^{K}\\arccos\\frac{\\langle\\overrightarrow{A^{\\prime}B^{\\prime}_{k}},\\overrightarrow{A^{\\prime}C^{\\prime}_{k}}\\rangle}{\\left|\\overrightarrow{A^{\\prime}B^{\\prime}_{k}}\\right|\\left|\\overrightarrow{A^{\\prime}C^{\\prime}_{k}}\\right|}\\]
\\[\\text{subject to }1<j_{k}<j^{\\prime}_{k}<J_{k}\\]
\\[\\text{area}(AB_{k}C_{k})>0\\]
\\[(k=1,\\ldots,K). \\tag{23}\\]

### _Algorithm_

```
Input: A, y, {λ_k^1, λ_k^2, ..., λ_k^{J_k}} (k = 1, 2, ..., K), ρ_max = 0, area_k = 0, A_corner = 0
1:  for all possible λ do
2:      A = L(λ)
3:      C_k = L(λ_1^{j_1}, λ_2^{j_2}, ..., λ_k^{1}, ..., λ_K^{j_K})
4:      for all possible j'_1, ..., j'_K do
5:          B_k = L(λ_1^{j_1}, λ_2^{j_2}, ..., λ_k^{j'_k}, ..., λ_K^{j_K})
6:          ρ_k = ρ_k(A, B_k, C_k)
7:          area_k = area(A, B_k, C_k)
8:          ρ = (1/K) Σ_{k=1}^{K} ρ_k
9:          if area_k > 0 & ρ > ρ_max then
10:             ρ_max = ρ, A_corner = A
11:         end if
12:     end for
13: end for
Output: A_corner
```

Fig. 2: Determination of corner: the nonmonotonicity of the curve.

Fig. 3: SAR simulation experiment scene. (a) Real value. (b) Simulated value and selected point target (P1) with neighborhood (D1), distributed targets (D2, D3, D4), and edges (E1, E2).

### _SAR Imaging_

#### IV-A1 Simulation Experiments

In the simulation experiments, we generate a \\(128\\times 128\\) pixel scene with distributed targets that occupy \\(50\\times 50\\) pixels each and point targets in the blank area, as Fig. 3(a) shows. According to [36], the complex distributed targets should be Rayleigh distributed in amplitude and uniformly distributed in phase for simulation, as Fig. 3(b) shows. A \\(1024\\times 1024\\) Fourier matrix is chosen as the measurement matrix. A 20 dB Gaussian white noise is added to the raw echo wave, according to (1). The L-hypersurface is shown in Fig. 4 and the selected corner \\(A_{\\text{corner}}\\) is marked on the figure. Fig. 5(a) shows the projection of the L-hypersurface on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{\\text{NC}}(\\mathbf{x})\\) space, which means that each curve in (a) corresponds to the same \\(\\lambda_{2}\\). Fig. 5(b) shows the projection of the L-hypersurface on the \\(\\|\\mathbf{A}\\mathbf{x}-\\mathbf{y}\\|_{2}^{2}-p_{\\text{TV}}(\\mathbf{x})\\) space, which means that each curve in (b) corresponds to the same \\(\\lambda_{1}\\). The projection points of the corner \\(A_{\\text{corner}}\\) are also marked on the curves.
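For reference, a minimal sketch of the corner-location rule in (19)-(23) and the Algorithm box is given below. It assumes the `points` dictionary produced by the earlier sampling sketch (indexed by grid positions), treats the index bounds of (23) approximately, and the sign taken as "positive area" may need to be flipped depending on the axis orientation of the projected plane; it is an illustration, not the authors' implementation.

```python
# Minimal sketch: locating the corner of a sampled L-hypersurface, eqs. (19)-(23).
import numpy as np

def angle_and_area(a, b, c):
    """Angle at vertex a (radians) and oriented area of the 2-D triangle (a, b, c)."""
    ab, ac = b - a, c - a
    cosang = np.dot(ab, ac) / (np.linalg.norm(ab) * np.linalg.norm(ac) + 1e-30)
    area = 0.5 * (ac[0] * ab[1] - ac[1] * ab[0])          # sign convention is an assumption
    return np.arccos(np.clip(cosang, -1.0, 1.0)), area

def locate_corner(points, grid_sizes):
    """points: {(j_1, ..., j_K): [lg residual, lg p_1, ..., lg p_K]}."""
    K = len(grid_sizes)
    best_idx, best_score = None, np.inf
    for idx in points:                                     # candidate corner A
        if any(j == 0 or j >= grid_sizes[k] - 1 for k, j in enumerate(idx)):
            continue                                       # roughly enforce 1 < j_k < j'_k < J_k
        score, valid = 0.0, True
        for k in range(K):
            A = points[idx][[0, k + 1]]                    # projection on residual-p_k plane
            c_idx = idx[:k] + (0,) + idx[k + 1:]           # C_k: smallest lambda_k sample
            C = points[c_idx][[0, k + 1]]
            angles = []
            for jp in range(idx[k] + 1, grid_sizes[k]):    # candidate B_k beyond A
                b_idx = idx[:k] + (jp,) + idx[k + 1:]
                ang, area = angle_and_area(A, points[b_idx][[0, k + 1]], C)
                if area > 0:                               # keep only convex (L-shaped) triangles
                    angles.append(ang)
            if not angles:
                valid = False
                break
            score += min(angles) / K                       # smaller angle = larger curvature
        if valid and score < best_score:
            best_idx, best_score = idx, score
    return best_idx                                        # grid indices of the selected corner
```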
From visual results, the corner is located at the turning points of L-shape curves on each projection space and at the point of relative large curvature on the L-hypersurface. Thus, our proposed corner determination method works. Next, the reconstruction quality of the selected corner is tested to demonstrate the effectiveness of our L-hypersurface method. The performance of selected parameters is evaluated by visual results and objective indicators. The optimized regularization Fig. 4: L-hypersurface of simulation experiments. (a) Main view. (b) Another view. parameters are decided by the corner \\[A_{\\text{corner}}=\\mathcal{L}(\\lambda_{1}^{c},\\lambda_{2}^{c})=\\mathcal{L}(1.77,6. 00). \\tag{24}\\] Fig. 6 presents reconstruction images via different pairs of regularization parameters \\((\\lambda_{1},\\lambda_{2})\\in\\{\\lambda_{1}^{c}/2,\\lambda_{1}^{c},2\\lambda_{1}^{c }\\}\\times\\{\\lambda_{2}^{c}/2,\\lambda_{2}^{c},2\\lambda_{2}^{c}\\}\\). The results indicate that \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\) give the best visual impression. Since the NC-TV regularization model can simultaneously enhance the point-based and region-based features, the speckle suppression and texture preserving performance should both be taken into consideration. In objective evaluation, amplitude of target background ratio (TBR) is used to evaluate point targets. A higher TBR value indicates a lower background noise. Equivalent number of looks (ENL) is used to evaluate distributed targets. When the ENL value is bigger, it indicates the image is smoothed well. Edge preservation index (EPI) are used to evaluate the texture preserving feature. The definitions of TBR, ENL, and EPI are as follows: \\[\\text{TBR}=10\\lg\\left(\\frac{I_{\\text{target}}}{(\\sum_{(i,j)\\in\\mathcal{B}}I_{ i,j}/N_{\\mathcal{B}})}\\right) \\tag{25}\\] where \\(\\mathbf{I}=\\{I_{i,j}\\}=\\{|\\mathbf{X}|_{i,j}^{2}\\}\\) denotes the intensity matrix of image, \\(I_{\\text{target}}\\) denotes the intensity of the evaluating point target, \\(\\mathcal{B}\\) denotes the neighborhood of the target which has totally \\(N_{\\mathcal{B}}\\) pixels \\[\\text{ENL}=\\frac{\\mu(\\mathbf{I})^{2}}{\\sigma(\\mathbf{I})} \\tag{26}\\] where \\(\\mu(\\cdot)\\) calculates the mean and \\(\\sigma(\\cdot)\\) calculates the standard deviation \\[EPI=\\frac{\\sum_{l=1}^{L}|\\hat{\\mathbf{x}}|_{l,1}-|\\hat{\\mathbf{x}}|_{l,2}}{\\sum_{l=1}^ {L}|\\mathbf{x}|_{l,1}-|\\mathbf{x}|_{l,2}} \\tag{27}\\] where \\(|\\hat{\\mathbf{x}}|_{l,1}\\) and \\(|\\hat{\\mathbf{x}}|_{l,2}\\) denote the amplitude of reconstructed image on both sides of the edge, \\(|\\mathbf{x}|_{l,1}\\) and \\(|\\mathbf{x}|_{l,2}\\) denote the amplitude of reference image on both sides of the edge. Let Fig. 3 be the reference image in simulation experiment. Fig. 8: Experimental scene (CSA) and selected targets. Fig. 6: SAR reconstruction images of simulation data via different pairs of regularization parameters. Fig. 7: Comparison between Belge’s and our L-hypersurface method. As Fig. 3(a) shows, the TBR of point target P1 in neighborhood D1, the ENL of distributed targets D3, D4, the EPI of edges E1 and E2 are listed in Table I. The results corresponding to \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\) are in bold. According to the quantitative results, we draw to the following conclusions. 1. The TBR of our selected parameters' reconstructed image is relatively high, which means a satisfactory denoising result. 
When \\(\\lambda_{1}=\\lambda_{1}^{c}/2,\\lambda_{2}=\\lambda_{2}^{c}\\), the TBR becomes lower than with the selected parameters, so the image quality deteriorates because of the higher background noise. When \\(\\lambda_{2}=2\\lambda_{2}^{c}\\), the point targets might be lost in the reconstruction results.
2. The theoretical ENL of targets D3 and D4 in the original experiment scene, the amplitude of which is Rayleigh distributed, is 1. Since the ENL of the image reconstructed with \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\) is significantly improved, the selected parameters have good speckle suppression performance. When \\(\\lambda_{2}=\\lambda_{2}^{c}/2\\), the ENL of targets D3 and D4 is still low. Although \\(\\lambda_{2}=2\\lambda_{2}^{c}\\) has better smoothing performance than \\(\\lambda_{2}^{c}\\), the noise is also excessively smoothed, so detailed features might be lost. More holes appear in the distributed target when \\(\\lambda_{1}=2\\lambda_{1}^{c},\\lambda_{2}=\\lambda_{2}^{c}\\).
3. The EPI of the image reconstructed with \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\) is close to 1, which means a superb texture-preserving ability.

We also apply Belge's L-hypersurface method [28] to the simulation data. The parameters selected by Belge's method are (3.42, 3.42), and those of our method are (1.77, 6.00). According to Fig. 7 and the analysis above, the optimum parameters of our method apparently have better reconstruction performance. Fewer holes and smoother regions can be observed in the reconstruction result of (1.77, 6.00). To sum up, the simulation results successfully demonstrate the effectiveness of our L-hypersurface method.

#### IV-A2 Real Data Experiments

In order to show the performance of our parameter selection method, we conduct the experiment using the complex-valued image of the Gaofen-3 C-band SAR satellite. The main parameters of these data are as follows: the bandwidth of the signal is 60 MHz, the sampling frequency in the range direction is 66.66 MHz, the azimuth resolution is 3 m, and the pulse repetition rate is 1484.64 Hz. First, we use the chirp scaling algorithm (CSA) to reconstruct the target scene. As shown in Fig. 8, there are many speckles in the image, which cause the land to no longer be continuous and uniform. Then, we use the NC-TV regularization model for reconstruction and apply our L-hypersurface multiparameter selection method. The optimized regularization parameters are \\((\\lambda_{1}^{c},\\lambda_{2}^{c})=(70,150)\\). Fig. 9 presents reconstruction images via different pairs of regularization parameters \\((\\lambda_{1},\\lambda_{2})\\in\\{\\lambda_{1}^{c}/4,\\lambda_{1}^{c}/2,\\lambda_{1}^{c},2\\lambda_{1}^{c},4\\lambda_{1}^{c}\\}\\times\\{\\lambda_{2}^{c}/4,\\lambda_{2}^{c}/2,\\lambda_{2}^{c},2\\lambda_{2}^{c},4\\lambda_{2}^{c}\\}\\).

Fig. 9: SAR reconstruction images of real data via different pairs of regularization parameters.

The highlighted reconstruction result generated by \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\), which has the best visual performance, achieves an ideal tradeoff between the NC and TV penalties. When \\(\\lambda_{1}=2\\lambda_{1}^{c}\\), a loss of detailed features occurs in the field region R1 (marked on Fig. 8). When \\(\\lambda_{1}=\\lambda_{1}^{c}/2\\), strong noise remains in the scene, resulting in weak edge preservation performance. When \\(\\lambda_{2}=2\\lambda_{2}^{c}\\), the scene might be excessively smoothed, while \\(\\lambda_{2}=\\lambda_{2}^{c}/2\\) has weak speckle suppression ability and leaves the distributed targets unsmoothed.
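The quantitative indicators used in Tables I and II can be evaluated directly from the reconstructed images. The sketch below follows the definitions (25)-(27) as printed (in particular, the ENL of (26) uses the standard deviation in the denominator and the EPI of (27) uses signed differences); the index arguments are hypothetical, and this is not the authors' evaluation code.

```python
# Minimal sketch: TBR (25), ENL (26), and EPI (27) for reconstructed SAR images.
import numpy as np

def tbr(I, target_idx, background_idx):
    """Target-to-background ratio (dB) of one point target; I is the intensity image |X|^2."""
    bg = np.mean([I[i, j] for (i, j) in background_idx])
    return 10.0 * np.log10(I[target_idx] / bg)

def enl(I_region):
    """Equivalent number of looks of a homogeneous region, as defined in (26)."""
    return np.mean(I_region) ** 2 / np.std(I_region)

def epi(amp_rec, amp_ref, edge_pairs):
    """Edge preservation index (27); edge_pairs are pixel pairs on both sides of an edge."""
    num = sum(amp_rec[p1] - amp_rec[p2] for (p1, p2) in edge_pairs)
    den = sum(amp_ref[p1] - amp_ref[p2] for (p1, p2) in edge_pairs)
    return num / den
```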
When the value of \\(\\lambda_{1}\\) or \\(\\lambda_{2}\\) becomes even larger or smaller, the effect of the corresponding penalty term is further strengthened or weakened, leading to even worse reconstruction results. Table II gives the ENL of the distributed targets marked as region R2 in Fig. 8. It can be inferred from the results that the selected parameters can simultaneously enhance region-based features while suppressing noise, fully demonstrating the effect of the NC-TV regularization model.

### _TomoSAR Imaging_

In this section, we apply the L-hypersurface method to the spatial regularization model (9) for SAR tomography reconstruction.

#### IV-B1 Simulation Experiments

The simulated data are generated corresponding to 40-channel complex SAR images and the system parameters of the Yuncheng real data. A 0 dB Gaussian white noise is added to the raw echo wave, and a uniformly distributed random phase noise between \\(-\\pi\\) and \\(\\pi\\) is added to the scene according to [10]. Fig. 10 shows the theoretical distribution of the scatterers (ground truth). The scene is composed of a 64 \\(\\times\\) 64 pixel ground at altitude \\(h=0\\,\\mathrm{m}\\), a wall, and a 44 \\(\\times\\) 24 pixel roof at \\(h=20\\,\\mathrm{m}\\). The simulated building is higher than the estimated elevation resolution (as given by the Fourier inversion). The optimized regularization parameters are \\((\\lambda_{1}^{c},\\lambda_{2}^{c})=(2.4,0.05)\\), where \\((\\lambda_{1}^{c},\\lambda_{2}^{c})\\) represent the optimized regularization parameters in model (9). Fig. 11 presents reconstruction results via different pairs of regularization parameters \\((\\lambda_{1},\\lambda_{2})\\in\\{\\lambda_{1}^{c}/4,\\lambda_{1}^{c},4\\lambda_{1}^{c}\\}\\times\\{\\lambda_{2}^{c}/4,\\lambda_{2}^{c},4\\lambda_{2}^{c}\\}\\), where Fig. 11(a) shows the 3-D scatterers and Fig. 11(b) shows the elevation-range sections of the reconstruction results. The performance of the selected parameters is objectively evaluated by the _accuracy_ and _completeness_ criteria introduced in [11] and [37]. For a discrete 3-D reconstruction result \\(\\hat{\\mathbf{P}}\\), the accuracy and completeness are respectively defined as

\\[A(\\hat{\\mathbf{P}},\\mathbf{P})=\\frac{1}{N_{\\hat{\\mathbf{P}}}}\\sum_{j=1}^{N_{\\hat{\\mathbf{P}}}}\\min_{k}\\left\\|\\hat{p}_{j}-p_{k}\\right\\|_{2} \\tag{28}\\]

\\[C(\\hat{\\mathbf{P}},\\mathbf{P})=\\frac{1}{N_{\\mathbf{P}}}\\sum_{k=1}^{N_{\\mathbf{P}}}\\min_{j}\\left\\|\\hat{p}_{j}-p_{k}\\right\\|_{2} \\tag{29}\\]

where \\(N_{\\hat{\\mathbf{P}}}\\) denotes the number of points in the reconstruction \\(\\hat{\\mathbf{P}}\\), \\(N_{\\mathbf{P}}\\) denotes the number of points in the ground truth \\(\\mathbf{P}\\), \\(\\hat{p}_{j}\\) denotes the \\(j\\)th point of \\(\\hat{\\mathbf{P}}\\), and \\(p_{k}\\) denotes the \\(k\\)th point of \\(\\mathbf{P}\\). _Accuracy_ represents the mean distance from each point in \\(\\hat{\\mathbf{P}}\\) to the ground truth \\(\\mathbf{P}\\), indicating whether the reconstructed points are correctly located. _Completeness_ represents the mean distance from each point in \\(\\mathbf{P}\\) to the points in the reconstruction result \\(\\hat{\\mathbf{P}}\\), indicating whether the ground truth is well represented by the set of points in the reconstruction. In Figs. 12 and 13, we plot accuracy as a function of completeness for different pairs of parameters, where Fig. 12 compares the performance of different \\(\\lambda_{1}\\) in model (9) as \\(\\lambda_{2}=\\lambda_{2}^{c}(=\\lambda_{3}=\\lambda_{4})\\) and Fig.
13 compares the performance of different \\(\\lambda_{2}\\) as \\(\\lambda_{1}=\\lambda_{1}^{c}\\). Based on the quantitative results, we draw the following conclusions.

Fig. 10: TomoSAR simulation experiment scene.

1. When \\(\\lambda_{1}=\\lambda_{1}^{c}/4\\), numerous sidelobes and outliers are left in the scene. When \\(\\lambda_{1}=4\\lambda_{1}^{c}\\), the reconstruction of the roof and ground fails and the wall is overly sparsified. The accuracy versus completeness curve gradually moves away from the axes' origin as \\(\\lambda_{1}\\) becomes increasingly lower or higher than the optimized value \\(\\lambda_{1}^{c}\\).
2. When \\(\\lambda_{2}=\\lambda_{2}^{c}/4\\), the roof and ground are insufficiently smoothed and more holes appear in the wall. Relatively more outliers located far from the actual surfaces can be observed when the spatial regularization is too weak. When \\(\\lambda_{2}=4\\lambda_{2}^{c}\\), the extension of structures in the elevation direction can be observed. The accuracy versus completeness curve shows the same trend as in \\(\\lambda_{1}\\)'s case as \\(\\lambda_{2}\\) becomes increasingly lower or higher than the optimized value \\(\\lambda_{2}^{c}\\).

By comparison, the best tradeoff between accuracy and completeness (the point of the curve closest to the origin of the axes) is reached by the selected parameters, indicating the effectiveness of the proposed parameter selection method.

#### IV-B2 Real Data Experiments

Furthermore, we present the 3-D reconstruction results for one building. We conduct the experiment using the Aerospace Information Research Institute, Chinese Academy of Sciences array data acquired over the city of Yuncheng in Shanxi province. The radar system operates at 15 GHz and has 8 channels in the cross-track direction. The distance between adjacent channels is 0.08 m. The height of the radar platform is 972 m and the local incidence angle is \\(30^{\\circ}\\). The corresponding intensity maps of the areas are shown in Fig. 14. As we know, the height of the highlighted building is 50 m. For this scene, the regularization parameters are computed from the simulation experiments. Fig. 15(a) presents the reconstruction result of the \\(\\ell_{1}\\) regularization model. Fig. 15(b) presents the result of the spatial regularization model, where the regularization parameters are selected by the L-hypersurface method. Compared with the \\(\\ell_{1}\\) regularization, most of the outliers are suppressed and the building is reconstructed with higher completeness when the spatial regularization is used. The decrease in the number of outliers and the increase in the number of reconstructed scatterers of the building show the effect of the spatial smoothing. The results reflect a satisfactory performance of the L-hypersurface multiparameter selection method in practical applications.

Fig. 14: Experimental area. (a) Optical image (copyright Google). (b) Intensity map.

Fig. 15: TomoSAR reconstruction results of real data. (a) \\(\\ell_{1}\\) regularization. (b) Spatial regularization.

### _Computational Efficiency_

Since the result of our L-hypersurface method must be obtained under different combinations of regularization parameters, the process can be very time consuming when high-precision optimal parameters are required. To improve the computational efficiency, we divide the corner determination process into two steps, rough calculation and precise calculation. Here are the following two strategies. 1. _Strategy No.
1:_ Directly calculate the combinations of regularization parameters in sets \\[\\{\\lambda_{1}^{1},\\lambda_{1}^{2},\\lambda_{1}^{3},\\ldots,\\lambda_{1}^{J_{1}} \\}\\times\\{\\lambda_{2}^{1},\\lambda_{2}^{2},\\lambda_{2}^{3},\\ldots,\\lambda_{2}^ {J_{2}}\\}.\\] (30) 2. _Strategy No. 2:_ First, rough calculation step. Calculate the combinations of regularization parameters in sets \\[\\left\\{\\lambda_{1}^{1},\\lambda_{1}^{U},\\lambda_{1}^{2U},\\lambda_{1}^{3U},\\ldots,\\lambda_{1}^{J_{1}}\\right\\}\\] \\[\\times\\left\\{\\lambda_{2}^{1},\\lambda_{2}^{U},\\lambda_{2}^{2U}, \\lambda_{2}^{3U},\\ldots,\\lambda_{2}^{J_{2}}\\right\\}\\] (31) where \\(U\\) is a user selectable parameter that controls the sampling interval of rough calculation step. Second, precise calculation step. Let \\((\\lambda_{1}^{t_{1}\\cdot U},\\lambda_{2}^{t_{2}\\cdot U})\\) denote the selected regularization parameters of the rough calculation step. Then, calculate the combinations of regularization parameters in sets \\[\\left\\{\\lambda_{1}^{1},\\lambda_{1}^{(t_{1}-2)U},\\lambda_{1}^{(t_ {1}-2)U+1},\\ldots,\\lambda_{1}^{(t_{1}+2)U},\\lambda_{1}^{J_{1}}\\right\\}\\] \\[\\times\\left\\{\\lambda_{2}^{1},\\lambda_{2}^{(t_{2}-2)U},\\lambda_{2} ^{(t_{2}-2)U+1},\\ldots,\\lambda_{2}^{(t_{2}+2)U},\\lambda_{2}^{J_{1}}\\right\\}.\\] (32) Suppose that the calculation time of each pair \\((\\lambda_{1},\\lambda_{2})\\) is the same, denoted as \\(T\\), the total time cost is approximately \\(J_{1}\\times J_{2}\\times T\\) for strategy No. 1 and \\(\\frac{J_{1}}{U}\\times\\frac{J_{2}}{U}\\times T+(4U+3)^{2}\\times T\\) for strategy No. 2. We take the SAR simulation data in Section IV-A1) as an example. Table III gives the comparison of different strategies. The results indicate that a proper sampling interval \\(U\\) can significantly accelerate parameters selection process. An excessively large value of \\(U\\) will not only increase the time cost, but also lead to positioning deviation of the corner. We empirically suggest selecting \\(U\\) around \\(\\lfloor\\sqrt[4]{J_{1}J_{2}}/2\\rfloor\\), where \\(\\lfloor\\cdot\\rfloor\\) denotes the floor operator, due to \\[\\arg\\min_{U}\\frac{J_{1}}{U}\\times\\frac{J_{2}}{U}\\times T+(4U+3)^{ 2}\\times T\\] \\[\\approx\\arg\\min_{U}\\frac{J_{1}}{U}\\times\\frac{J_{2}}{U}+16\\,U^{2}\\] \\[=\\frac{\\sqrt[4]{J_{1}J_{2}}}{2}. \\tag{33}\\] Based on the rough-and-precise strategy, the time cost of our L-hypersurface method is affordable. ## V Conclusion An L-hypersurface parameters selection method for composite regularization is proposed. The proposed method is no longer limited to penalty terms of seminorm form, but can be Fig. 14: Experimental area. (a) Optical image (copyright Google). (b) Intensity map. Fig. 15: TomoSAR reconstruction results of real data. (a) \\(\\ell_{1}\\) regularization. (b) Spatial regularization. applied to regularization models with arbitrary penalty terms. Since reconstruction results based on the selected corner are better than other test points, both visually and numerically, the effectiveness of the proposed method has been verified by the simulation experiments. Experiments based on Gaofen-3 SAR satellites real data show that the optimized parameters have satisfactory reconstruction results, which implies the value of proposed method in practical application. 
## V Conclusion

An L-hypersurface parameter selection method for composite regularization is proposed. The proposed method is no longer limited to penalty terms of seminorm form but can be applied to regularization models with arbitrary penalty terms. Since the reconstruction results based on the selected corner are better than those of the other test points, both visually and numerically, the effectiveness of the proposed method has been verified by the simulation experiments. Experiments based on Gaofen-3 SAR satellite real data show that the optimized parameters yield satisfactory reconstruction results, which implies the value of the proposed method in practical applications. The parameter selection performance of the L-hypersurface in NC-TV regularization and spatial regularization indicates the strong potential of applying the L-hypersurface method to other composite regularization models. The L-hypersurface method can be directly extended to more than two regularization parameters; however, this will aggravate the time consumption issue. Although an accelerating approach has been provided in this article, further research on improving the computational efficiency of locating the corner will be considered. A strict mathematical analysis of the rationality of the proposed method is also expected.
**Yizhe Fan** received the bachelor's degree in electronic information engineering in 2022 from the University of Chinese Academy of Sciences, Beijing, China, where he is currently working toward the Ph.D. degree in signal and information processing. His research interests include radar signal processing, compressed sensing, sparse SAR imaging, and deep learning.

**Kun Wang** received the bachelor's degree in electronic information engineering in 2022 from the University of Chinese Academy of Sciences, Beijing, China, where he is currently working toward the M.S. degree in signal and information processing. His research interests include TomoSAR imaging and deep learning.

**Jie Li** received the bachelor's degree in electronic information engineering from the Beijing Institute of Technology, Beijing, China, in 2019. She is currently working toward the Ph.D. degree in signal and information processing with the University of Chinese Academy of Sciences, Beijing, China. Her research interests include TomoSAR imaging and deep learning.

**Guoru Zhou** received the bachelor's degree in electronic information engineering from the Beijing Institute of Technology, Beijing, China, in 2020. She is currently working toward the Ph.D. degree in signal and information processing with the University of Chinese Academy of Sciences, Beijing, China. Her research interests include radar signal processing, compressed sensing, and sparse SAR imaging.

**Bingchen Zhang** received the bachelor's degree in electronic engineering and information science from the University of Science and Technology of China, Hefei, China, in 1996 and the M.S. degree and the Ph.D. degree in signal and information processing from the Institute of Electronics, Chinese Academy of Sciences (IECAS), Beijing, China, in 1999 and 2017, respectively. Since 1999, he has been a Scientist with IECAS. His research interests include synthetic aperture radar (SAR) signal processing and airborne SAR system design, implementation, and data processing.
**Yirong Wu** received the M.S. degree in microwave electromagnetic field from the Beijing Institute of Technology, Beijing, China, in 1988 and the Ph.D. degree in signal and information processing from the Institute of Electronics, Chinese Academy of Sciences (IECAS), Beijing, China, in 2001. Since 1988, he has been with IECAS, where he currently serves as the Director. He has over 20 years of experience in remote-sensing processing system design. His research interests include microwave imaging, signal and information processing, and related applications.
Composite regularization models are widely used in sparse signal processing, making the selection of multiple regularization parameters a significant problem to be solved. Various kinds of composite regularization models are used in sparse microwave imaging, including the \\(\\ell_{1}\\) and \\(\\ell_{2}\\) penalty, the nonconvex and total variation penalty, combined dictionaries, etc. In this article, a new adaptive multiple regularization parameter selection method named the L-hypersurface is proposed. The effectiveness of the proposed method is verified by experiments. Simulation experiments indicate that the selected optimal regularization parameters yield satisfactory reconstruction results, both visually and numerically. Furthermore, experiments on Gaofen-3 synthetic aperture radar satellite data are also exploited to show the performance of the proposed method.

Index Terms: Composite regularization, L-hypersurface, regularization parameter selection, sparse signal processing, synthetic aperture radar (SAR), tomographic SAR (TomoSAR).
# Secure Cooperative Localization for Connected Automated Vehicles Based on Consensus

Xin Xia, Runsheng Xu, and Jiaqi Ma

Manuscript received 10 July 2023; accepted 4 September 2023. Date of publication 11 September 2023; date of current version 16 October 2023. This work was supported in part by the United States Department of Transportation (USDOT) Connected Automated Vehicles (CAV) Performance Data Project, in part by the USDOT Automated Driving System Demonstration Program, and in part by the California Resilient and Innovative Mobility Initiative (RIMI) Program. The associate editor coordinating the review of this article and approving it for publication was Dr. Geethu Joseph. _(Corresponding author: Jiaqi Ma.)_ The authors are with the UCLA Mobility Laboratory, Department of Civil and Environmental Engineering, University of California at Los Angeles, Los Angeles, CA 90095 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/JSEN.2023.3312610

## I Introduction

Cyber-physical vehicular and transportation systems, enabled by Internet of Things (IoT) sensing, edge and cloud computing, 5G communication, advanced control, and drive-by-wire systems in the vehicles and infrastructure, offer opportunities to improve the performance of individual vehicles and traffic. With the development of connected vehicle (CV) and automated vehicle (AV) technologies [1, 2, 3, 4, 5, 6], cooperative driving automation (CDA) [7], as standardized by SAE J3216 [8], aims at combining both technologies in connected automated vehicles (CAVs) to enable real-time cooperation of equipped vehicles, other road users, and infrastructure. As outlined in the pioneering review of AV and CAV control [9], CDA technology will further improve safety, mobility, environmental sustainability, situational awareness, and operational efficiency of traffic flow. Cooperative localization is one of the critical components. It enables the downstream modules such as planning and control of CDA by leveraging shared sensory information from the CVs and infrastructure through vehicle-to-everything (V2X) communication. For example, aided by the localization information, driving safety is effectively ensured through the sliding mode control algorithm proposed in [10]. The cooperation between different vehicles brings the potential to fuse diverse information to improve the localization accuracy of CAVs. However, the multimodality sensors and communication channels used for the cooperation also make CAVs vulnerable to cyberattacks. This raises security issues for the localization system [11]. Aiming at leveraging the shared multisensor information from the CAVs in a secure manner, this article proposes a secure cooperative localization method for the CAVs using a consensus estimation framework with considerations of cyberattacks on the sensory information.

### _State-of-the-Art_

Localization is one of the most basic modules of any automated driving platform. It has been extensively studied in the past decades [12, 13, 14, 15, 16]. Based on diverse sensors such as an inertial measurement unit (IMU), magnetometer, global navigation satellite system (GNSS), camera, radar, or light detection and ranging (LiDAR) equipped on individual vehicles, the multisensor-fusion-based methods are typically represented by the GNSS/inertial navigation system (INS) integration system [13], GNSS/INS/LiDAR fusion [17], camera/LiDAR-based simultaneous localization and mapping (SLAM) [18], and map-matching-based localization [19].
These topics have been explored extensively with significant progress. With the fast development of CAVs and intelligent transportation systems (ITSs), the cooperation between the system elements (vehicles and infrastructure) is enabling possibilities beyond individual-vehicle-based localization (e.g., sharing and using diverse sensory information between vehicles) to improve the localization accuracy [20]. Also, in the event of cyberattacks, the cooperation provides more flexibility to detect and defend against the attacks that are injected into the sensors or the V2X communication to improve the system security [11]. To address the issue where GNSS fails at places such as city canyons and indoor parking lots for a short period of time, the relative distance between CAVs is integrated with the GNSS position through an extended Kalman filter (KF) in [21] and [22]. The algorithm relies more on the relative distance to constrain the localization error when the GNSS fails. When using the interdistance from radars, due to the different update rates between the GNSS, radars, and communication units, a track-matching approach using the chi-square statistic test method is used to associate the information from multiple sensors [23]. In [24], in addition to the relative distance between different vehicles from radar, the relative azimuth from the camera is also used to supplement GNSS in a Bayesian framework. To fill the gap where GNSS is unavailable, in [25], the distance between the ego vehicle and a roadside unit (RSU) with a known position is integrated with the onboard sensors. Then, the algorithm localizes the ego vehicle based on a weighted linear least-squares algorithm. With cooperation between CAVs, the issues coming from the GNSS's sensitivity to environments have been addressed to some extent [26]. In terms of the sensor fusion framework, a multisensor multivehicle framework is proposed based on global/centralized filtering and local filtering using the onboard sensors, GNSS, and relative distance from other vehicles [26]. Compared with the centralized estimation in [24] and [25], distributed estimation is implemented on each vehicle. It shows greater resilience to the vulnerabilities from the failure of the sensors and communications, requires less energy-consuming communication, and allows parallel processing [27]. In this sense, it is straightforward to take advantage of distributed estimation to design the localization algorithm for the CAVs [28]. Among distributed estimation methodologies, the consensus KF (CKF) [27], which has been utilized to localize unmanned aerial vehicles in a formation and has shown promising performance [29], is one of the appropriate methods for the localization of CAVs. Another merit of the CKF is its capability to incorporate shared information from different CAVs. In addition to the benefits stated above, it also allows the neighboring estimators to reach a consensus on the localization [27]. To the best of the authors' knowledge, the CKF [27] has not been investigated in the literature for localizing the CAVs. This article aims to bridge this gap while also considering potential cyberattacks on sensors and communication data for security. As mentioned above, the information sharing between CAVs enables them to cooperate with each other for localization. However, in the meantime, this cooperation makes sensors and communication vulnerable to attacks [30].
Before fusing sensors from different sources, i.e., vehicles or infrastructure, the information should be inspected to detect attacks or other types of faults [31]. In [28], the velocity and the position of the CAVs in a platoon are estimated by an unknown input observer (UIO). A threshold method is adopted to diagnose the faults/attacks based on the output of the UIO. Once a fault/attack is declared, the shared information from the remaining CAVs is used for estimation. In [30], a piecewise-constant attack injected in the GNSS position measurement is detected by a scheme based on a modified unbiased finite impulse response estimator. The scheme is able to generate an intermediate value related only to the attack, so that the attack can be detected conveniently. In [31], given multiple redundant sensors measuring the same physical variable of the CAV, the attacks are detected directly if the difference between the measurement from a specific sensor and the averaged measurement over all the sensors is larger than a threshold. In [32], a drift attack caused by GNSS spoofing is added to an optimizer as a variable to be solved. Then, the attack is diagnosed based on the estimated value of the attack. For the different attack detection methods in [31], stochastic variables are generated, and then one sample or multiple samples are used to decide whether an attack has occurred [33]. The detection accuracy relies heavily on the thresholds if only one sample of the generated stochastic variable is used; that is, the performance of the attack detector depends heavily on the threshold selection. If a set of samples from the generated stochastic variable is used, the detection accuracy is usually higher. The generalized likelihood ratio test (GLRT) is a well-developed detection method that is based on a set of samples [34]. However, a relatively large set (large window size) can lead to a time delay in the attack detection, as is the case with the GLRT in [34]. This time delay possibly leaves the system under attack for a short time before the detection is done. To address this kind of time delay, this article proposes a delay-prediction framework to enhance attack detection. Once the attack is detected, the failed sensory information can be discarded in the multisensor-fusion localization algorithms. Specifically, the corresponding node in the CKF can be adjusted to isolate the attacks in the sensory measurements.

### _Main Contributions_

In this article, to achieve the multisensor-fusion localization given the shared information of CAVs, a consensus Kalman information filter (CKIF) is applied and a delay-prediction GLRT-based attack detection method is presented for improving the security of the localization system. Specifically, the two main contributions of this article are summarized as follows.

1. To leverage the shared position information from the CAVs, this article applies a CKIF to fuse the measurements from the ego vehicle and adjacent vehicle(s) with different communication topologies. Inspired by [24] and [26] but differing from them, which only consider how to fuse the shared sensory measurements when the measurements are normal, our consensus estimation framework also offers the flexibility to accommodate possible attacks properly. To the best of the authors' knowledge, no research using the CKIF has been reported for cooperatively localizing the CAVs while considering the attacks in sensory measurements.

2.
For detecting cyberattacks in the sensory measurements, a GLRT-based method is designed. Compared with the GLRT-based detection algorithm in [34], our proposed delay-prediction framework is not only able to detect the attack but also to address the induced temporal latency of the decision made by the GLRT-based method [33]. It is worth noting that this framework can be generally integrated with any multiple-sample-based attack/fault detection algorithm, regardless of the specific form of the detection algorithm, to address the temporal delay induced by the detection algorithm. Then, based on the attack indicator (AI) from the GLRT-based algorithm, a rule-based attack isolation method is integrated with the CKIF for isolating the attacked data samples. The secure cooperative localization is validated via numerical simulation.

The remainder of this article is organized as follows. The problem studied in this article is formulated in Section II. The secure cooperative localization method is designed in Section III. Section IV provides test results in different communication topology and attack settings and discusses the findings and performance, and finally, Section V concludes this article.

## II Problem Formulation

In this section, the communication topology for CAV information sharing, the node kinematic model, and the node measurement model are presented. Based on these models, the cooperative localization algorithm is designed in the following.

### _Communication Topology_

The scenario shown in Fig. 1 is CAV platooning [35], and without loss of generality, this article will focus on such scenarios, i.e., longitudinal scenarios. Each vehicle in Fig. 1 is equipped with an IMU, a GNSS receiver, and a sensor such as radar, LiDAR, or camera, which can measure the relative distance between the ego vehicle and adjacent vehicles. The IMU can measure the longitudinal acceleration of the ego vehicle. The GNSS receiver provides the position of the ego vehicle, and the radar, LiDAR, or camera measures the relative distance between the ego vehicle and its neighbor. Vehicles are also equipped with V2X transceivers [such as cellular V2X or dedicated short-range communication (DSRC)] to establish vehicle-to-vehicle (V2V) communication and enable sharing of sensory information among adjacent vehicles through a directed graph \\(\\mathcal{G}_{d}=\\{V,\\ E\\}\\) or an undirected graph \\(\\mathcal{G}_{u}=\\{V,\\ E\\}\\) [22]. In the graph, \\(V=\\{1,2,\\ldots,N\\}\\) is the set of nodes and \\(E\\subseteq V\\times V\\) is the set of edges in connection. The adjacency matrix \\(\\mathcal{A}\\), the degree matrix \\(\\mathcal{D}\\), and the Laplacian matrix \\(\\mathcal{L}\\) are adopted to show the properties of the graph \\(\\mathcal{G}\\) [36].
The entry \\(a_{ij}\\) of the adjacency matrix \\(\\mathcal{A}\\in\\mathbb{R}^{N\\times N}\\) is given as \\[\\left\\{\\begin{aligned} a_{ij}=1,&\\quad\\text{if}\\,\\{j,i\\}\\in E\\\\ a_{ij}=0,&\\quad\\text{if}\\,\\{j,i\\}\\notin E,\\end{aligned}\\right.\\qquad i,\\,j=\\{1,2,\\ldots,N\\} \\tag{1}\\] where, for the directed graph, \\(\\{j,i\\}\\in E\\) denotes that there is a directed edge from node \\(j\\) to node \\(i\\), meaning that node \\(i\\) has access to the sensory information of node \\(j\\) through V2V communication for \\(\\mathcal{G}_{d}=\\{V,\\ E\\}\\); for the undirected graph, \\(\\{j,i\\}\\in E\\) denotes that there is an undirected edge between nodes \\(j\\) and \\(i\\), meaning that nodes \\(i\\) and \\(j\\) have access to each other's sensory information through V2V communication for \\(\\mathcal{G}_{u}=\\{V,\\ E\\}\\). Besides, there are no self-loops, and thus, \\(a_{ii}=0,\\ i=1,\\ldots,N\\). Node \\(j\\) is the neighbor of node \\(i\\) when \\(a_{ij}=1\\), and the neighbor set of node \\(i\\) is denoted as \\(\\mathcal{N}_{i}=\\{j|a_{ij}=1\\}\\). Then, the entry of the degree matrix \\(\\mathcal{D}\\) for this graph is given as \\[\\beta_{ij}=\\left\\{\\begin{aligned} 0,&\\quad\\text{if}\\,\\,i\\neq j\\\\ \\sum\\nolimits_{k=1}^{N}a_{ik},&\\quad\\text{if}\\,\\,i=j,\\end{aligned}\\right.\\qquad i,\\,j=\\{1,2,\\ldots,N\\}\\,. \\tag{2}\\] Accordingly, the Laplacian matrix \\(\\mathcal{L}\\in\\mathbb{R}^{N\\times N}\\) is given as \\[\\mathcal{L}=\\mathcal{D}-\\mathcal{A}. \\tag{3}\\]

Fig. 1: Example scenarios for cooperative localization. Each vehicle is equipped with an IMU, a GNSS receiver, and a sensor such as radar, LiDAR, or camera, which can measure the relative distance/velocity between the ego vehicle and adjacent vehicles. \\(d_{12}\\) denotes the measured relative distance between Vehicles 1 and 2, and \\(v_{x}\\) is the velocity of the traffic flow. Vehicles are able to share sensory information through V2X communication (such as cellular V2X or DSRC). For the directed communication case shown in (a), only the following vehicle has access to its front neighbor; for the undirected communication case shown in (b), the front and following vehicles have access to each other. Other communication topologies are also possible. The GNSS will be exposed to the attacks.

In this article, the secure localization algorithm is tested based on communication with both the directed and undirected graph topologies as two representative scenarios. There can be, however, more complicated topological scenarios. In the case of CAV platooning, there are multiple possible ways of communication, such as all-predecessor and leader-predecessor [37]. The proposed methodology can be applied to analyze any type of communication topology since it is feasible to add any nodes into the consensus estimation algorithm as long as the types of measurements, possibly from different types of sensors, for each node are homogeneous. A homogeneous node means that the node has the same capability to sense its neighbor vehicles and communicate with them, such that the node is able to provide the relative position between itself and its neighbor vehicles and then publish the relative position information to its neighbor vehicles. For example, the interdistance might come from a camera, LiDAR, or radar sensor. The details are discussed in Section III.
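As a minimal sketch of (1)-(3), the following Python snippet builds the adjacency, degree, and Laplacian matrices for the three-vehicle platoon of Fig. 1; the specific predecessor-following pattern of the directed case and the use of NumPy are illustrative assumptions, not part of the article.

```python
import numpy as np

def degree_and_laplacian(A):
    """Build the degree matrix D of (2) and the Laplacian L = D - A of (3)."""
    D = np.diag(A.sum(axis=1))
    return D, D - A

# Directed topology G_d (Fig. 1(a)): a_ij = 1 iff node i receives the sensory
# information of node j, so each follower only listens to its front neighbor.
A_d = np.array([[0, 0, 0],
                [1, 0, 0],
                [0, 1, 0]])

# Undirected topology G_u (Fig. 1(b)): adjacent vehicles share with each other.
A_u = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])

for name, A in (("directed", A_d), ("undirected", A_u)):
    D, L = degree_and_laplacian(A)
    neighbor_sets = [np.flatnonzero(A[i]).tolist() for i in range(A.shape[0])]
    print(name, "neighbor sets:", neighbor_sets)
    print(L)
```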
### _Node Kinematic Model_

In this article, since the main contribution is the cooperative localization method design for the CAVs, without loss of generality, we simplify the kinematic model with the assumption that the vehicle is mainly maneuvered in the longitudinal direction, and therefore, a longitudinal vehicle kinematic model is used. Another reason for this assumption is that platooning and similar safety-critical applications are among the main application scenarios of this localization algorithm, since the vehicles in a platoon follow each other very closely and any faults or attacks can result in serious consequences [30]. Note that, although only the longitudinal vehicle kinematics is considered in this article, the lateral vehicle kinematics can also be incorporated into the overall secure localization framework by only changing the node kinematic model to address more comprehensive driving maneuvers. The longitudinal vehicle kinematic model is presented in (4) and (5) \\[\\dot{v}=a+\\omega_{a} \\tag{4}\\] where \\(v\\) denotes the longitudinal velocity, \\(a\\) denotes the longitudinal acceleration, and \\(\\omega_{a}\\) is the random noise of the accelerometer \\[\\dot{p}=v \\tag{5}\\] where \\(p\\) denotes the position. By choosing the velocity and position as the states \\(\\mathbf{x}=[p\\ v]^{\\top}\\), we have the standard state equation \\[\\dot{\\mathbf{x}}=\\mathbf{A}\\mathbf{x}+\\mathbf{B}\\mathbf{u}+\\mathbf{\\Gamma}\\mathbf{\\omega} \\tag{6}\\] where \\(\\mathbf{A}=\\begin{bmatrix}0&1\\\\ 0&0\\end{bmatrix}\\) is the state matrix, \\(\\mathbf{B}=\\begin{bmatrix}0\\\\ 1\\end{bmatrix}\\) is the input matrix of the vehicle, \\(\\mathbf{u}=a\\) is the input, \\(\\mathbf{\\omega}=\\omega_{a}\\) is the noise, and \\(\\mathbf{\\Gamma}=\\begin{bmatrix}0\\\\ 1\\end{bmatrix}\\) is the input matrix of the noise. When estimating the states such as the velocity or the position by an estimator such as the KF, the model described by (4) and (5) needs to be discretized as in the following equation: \\[\\mathbf{x}_{k+1}=\\mathbf{\\Phi}_{k}\\mathbf{x}_{k}+\\mathbf{\\Xi}_{k}\\mathbf{u}_{k}+\\mathbf{\\Lambda}_{k}\\mathbf{\\omega}_{k} \\tag{7}\\] where \\(\\mathbf{\\Phi}_{k}=e^{\\mathbf{A}\\Delta T}\\approx\\mathbf{I}+\\mathbf{A}\\Delta T\\) is the state transition matrix of the system (6) with the discrete-time realization, \\(\\mathbf{\\Xi}_{k}=\\int_{0}^{\\Delta T}e^{\\mathbf{A}t}\\,dt\\,\\mathbf{B}\\approx\\mathbf{B}\\Delta T\\) is the input matrix, and \\(\\mathbf{\\Lambda}_{k}=\\int_{0}^{\\Delta T}e^{\\mathbf{A}t}\\,dt\\,\\mathbf{\\Gamma}\\).

**Remark 1**: _In the longitudinal vehicle kinematic model, we made some simplifications such as ignoring the bias error and the gravity component in the longitudinal accelerometer caused by the nonzero pitch angle of the vehicle body. It is worth noting that, although these factors are significant for the localization algorithm development, their estimation has been well addressed in the literature, such as in [38], and off-the-shelf algorithms can be leveraged to tackle the issues caused by these errors. Another note is that the model in (6) is based on the vehicle kinematics, which is robust against vehicle dynamic model uncertainties, meaning that the method in this article will not be affected by the vehicle dynamic model uncertainties._
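A minimal sketch of the first-order discretization in (7), \\(\\mathbf{\\Phi}_{k}\\approx\\mathbf{I}+\\mathbf{A}\\Delta T\\) and \\(\\mathbf{\\Xi}_{k}\\approx\\mathbf{B}\\Delta T\\), is given below; the sampling time, initial state, and noise level are placeholder values.

```python
import numpy as np

def discretize(dt):
    """First-order discretization of the longitudinal kinematic model (6)-(7)."""
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Phi = np.eye(2) + A * dt   # state transition matrix, Phi_k ~ I + A*dt
    Xi = B * dt                # input matrix for the acceleration input
    return Phi, Xi

def propagate(x, a_meas, dt, accel_noise_std=0.0):
    """One prediction step x_{k+1} = Phi x_k + Xi u_k (+ noise), with x = [p, v]^T."""
    Phi, Xi = discretize(dt)
    w = np.random.randn() * accel_noise_std
    return Phi @ x + Xi.flatten() * (a_meas + w)

x = np.array([0.0, 20.0])                 # position 0 m, velocity 20 m/s
print(propagate(x, a_meas=0.5, dt=0.1))   # roughly [2.0, 20.05]
```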
### _Node Measurement Model_

Given the sensor configuration discussed in Section II-A, the information from the GNSS, which provides the global position, and from the radar (or sensors that generate the same types of measurements), which provides the interdistance between vehicles, is adopted to develop the CAV measurement model. For the yellow vehicle in Fig. 1, the node measurement model is derived to implement an estimator to estimate the state of the vehicle. Through the instrumented GNSS receiver and radar, the yellow vehicle has access to the position measurement and the relative distance from itself to the green vehicle in Fig. 1. Through the V2V communication, the ego yellow vehicle can also request the sensory information of its adjacent vehicles (the green vehicle in \\(\\mathcal{G}_{d}\\), the green and red vehicles in \\(\\mathcal{G}_{u}\\), or possibly other vehicles depending on the communication topology) to enrich the measurements. These measurements have the potential to improve both the accuracy and the robustness against the attacks on the ego yellow vehicle via the proposed consensus-based estimation. Specifically, for instance, in \\(\\mathcal{G}_{u}\\), the relative distances \\(d_{12}\\) and \\(d_{23}\\) and the GNSS positions of the green and red vehicles are available to the yellow vehicle for enhanced localization. The measurement model of the ego vehicle's GNSS position is given as \\[p_{\\text{G}e}=p_{e}+\\eta_{\\text{G}e}+f_{\\text{G}e} \\tag{8}\\] where the subscript \\(e\\) means the ego vehicle, \\(p_{\\text{G}}\\) is the GNSS position, \\(p\\) is the true position, \\(\\eta_{\\text{G}}\\) is the Gaussian white noise of the GNSS measurement, and \\(f_{\\text{G}}\\) denotes the injected attack. Along with the measurements from the ego vehicle, the GNSS positions of the adjacent vehicles \\(i\\in\\mathcal{N}_{i}\\) can be transformed to position measurements of the ego vehicle with the relative distance measurements. The GNSS position of vehicle \\(i\\) obtained through wireless communication has the same measurement model as (8) and is given as \\[p_{\\text{G}i}=p_{i}+\\eta_{\\text{G}i}+f_{\\text{G}i} \\tag{9}\\] where the subscript \\(i\\) means the adjacent vehicle \\(i\\). Then, (9) is combined with the relative distance measurement given as \\[d_{\\text{R}ie}=d_{ie}+\\eta_{\\text{R}ie}+f_{\\text{R}ie} \\tag{10}\\] where \\(d_{\\text{R}ie}\\) means the relative distance between the adjacent vehicle \\(i\\) and the ego vehicle \\(e\\) measured by sensors such as radar, LiDAR, or camera, \\(d_{ie}\\) means the true relative distance, \\(f_{\\text{R}ie}\\) denotes the injected attack in the measurement \\(d_{\\text{R}ie}\\), and \\(\\eta_{\\text{R}ie}\\) means the Gaussian white noise of the relative distance. Then, the measurement of the ego vehicle position can be derived as \\[p_{ei}=p_{i}+d_{ie}+\\underbrace{\\eta_{\\text{R}ie}+\\eta_{\\text{G}i}}_{\\eta}+\\underbrace{f_{\\text{R}ie}+f_{\\text{G}i}}_{f} \\tag{11}\\] where \\(p_{ei}\\) means the measurement of the ego vehicle \\(e\\) obtained through its adjacent vehicle \\(i\\). From (11), it can be seen that both the attack from the GNSS, \\(f_{\\text{G}i}\\), and that from the relative distance, \\(f_{\\text{R}ie}\\), will be propagated to the position measurement \\(p_{ei}\\) of the ego vehicle's position. In other words, though cooperative localization can potentially improve accuracy, leveraging the information from immediately adjacent vehicles, or even from vehicles further away such as the leader of the platoon under a certain communication topology (via chaining and adding up consecutive sensor data; see Remark 2), will also incur more risk: the propagation makes the ego vehicle localization more vulnerable to any attack on the surrounding traffic. This necessitates continuous monitoring of the sensory measurements used for the cooperative localization and detection of faults and attacks. This in turn allows us to fuse those measurements from adjacent vehicles cooperatively and improve the localization accuracy in a consensus framework if those attacks are detected and isolated properly. This process of attack isolation will be discussed in Section III-C.
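The construction of the neighbor-derived position measurement in (9)-(11) can be illustrated numerically as follows; the sign convention for \\(d_{ie}\\), the noise levels, and the drift value are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ego_position_from_neighbor(p_gnss_i, d_radar_ie):
    """Pseudo-measurement of the ego position following (11): p_ei = p_Gi + d_Rie."""
    return p_gnss_i + d_radar_ie

# Assumed ground truth (the neighbor drives 30 m ahead of the ego vehicle) and
# sign convention d_ie = p_e - p_i, so that p_i + d_ie recovers p_e.
p_e_true, p_i_true = 100.0, 130.0
d_ie_true = p_e_true - p_i_true

f_gnss_i = 2.0                                      # drift attack on the neighbor GNSS (m)
p_gnss_i = p_i_true + rng.normal(0.0, 0.5) + f_gnss_i
d_radar = d_ie_true + rng.normal(0.0, 0.1)

p_ei = ego_position_from_neighbor(p_gnss_i, d_radar)
# The drift on the neighbor's GNSS propagates one-to-one into p_ei, which is
# why the shared measurements must be monitored before they are fused.
print("derived ego position:", p_ei, "error:", p_ei - p_e_true)
```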
**Remark 2**: _Although, in (11), only the position of the ego vehicle \\(e\\) and the position of its adjacent vehicle \\(i\\) are associated in \\(\\mathcal{G}_{d}\\) or \\(\\mathcal{G}_{u}\\), more position measurements for the ego vehicle can be derived from the CVs through the transformation formulated in (10) as long as the communication topology is able to provide the link between the corresponding CV and the ego vehicle. In this way, not only can the information from the adjacent vehicle(s) and the ego vehicle be fused, but information from CVs further away can also be incorporated into our proposed consensus localization framework. The key to enabling this fusion is having bridged communication between the ego vehicle and the other CVs to transfer the sensory information._

Based on (8) or (11), the standard measurement model is given as \\[z=\\mathbf{H}\\mathbf{x}+\\eta+f \\tag{12}\\] where \\(z\\) denotes the position measurement in (8) or (11), \\(\\mathbf{H}=[1,\\ 0]\\), \\(\\eta\\) denotes the Gaussian white noise in (8) or (11), and \\(f\\) denotes the attack in the GNSS position in (8) or the relative distance in (11). For the ego vehicle, the measurements given in (8) and (11) are homogeneous, and thus, a consensus estimation technique such as the CKF is adopted to fuse the pieces of information from different vehicles to improve both the position accuracy and the resilience against cyberattacks.

## III Secure Localization Method

Based on the models developed in Section II, in this section, the framework of the secure cooperative localization algorithm is first presented, the CKF for fusing the sensory information is introduced, the GLRT-based attack detection algorithm is developed, and the attack defense method is given.

### _Framework_

The framework of the proposed algorithm is shown in Fig. 2. A consensus Kalman information filter (CKIF) is adopted to fuse the sensory information (i.e., the GNSS position and the relative distance from the radar) from the ego vehicle \\(i\\) and the adjacent vehicles in \\(\\mathcal{N}_{i}\\) with the communication topology \\(\\mathcal{G}_{d}\\) or \\(\\mathcal{G}_{u}\\). For each node in the CKIF, the GLRT-based attack detection is performed to diagnose whether the sensory measurement from the ego vehicle or its neighbor is attacked. Then, based on the attack detection results, a rule-based method is used to isolate the attack.

### _Consensus Kalman Information Filter_

In this section, a CKIF is applied to achieve a consensus estimation of the states in (7) by fusing the homogeneous measurements from different vehicles in a platoon [27].
Specifically, for the communication topology \\(\\mathcal{G}_{d}\\), based on the adjacency matrix \\(\\mathcal{A}\\), the ego vehicle \\(i\\) only has access to its front vehicle's sensory measurements such as the GNSS position, and therefore, two nodes are included in the CKIF. For the communication topology \\(\\mathcal{G}_{u}\\), the ego vehicle has access to sensory measurements (i.e., GNSS and radar for measuring the global position and interdistance, respectively) from its front and rear vehicles, and thus, three nodes are included. In other words, the difference when applying the CKIF to estimate the states of the ego vehicle between the communication topologies \\(\\mathcal{G}_{d}\\) and \\(\\mathcal{G}_{u}\\) is the number of nodes included in the CKIF. When more vehicles are connected, i.e., homogeneous sensory information can be shared between the CAVs, more nodes can be incorporated in the CKIF. Therefore, the CKIF framework can be feasibly applied to both \\(\\mathcal{G}_{d}\\) and \\(\\mathcal{G}_{u}\\), as well as to other communication topologies. The two different nodes based on the ego vehicle's or its adjacent vehicle(s)' information are discussed correspondingly in the following.

Fig. 2: Framework of the secure cooperative localization. The sensory information input to the CKIF includes the IMU information of the ego vehicle (yellow vehicle), the GNSS position of the ego vehicle and adjacent vehicle(s), and the relative distance from the ego vehicle to the adjacent vehicle(s). Each of the positions from the ego vehicle's GNSS or the derivation given by (11) from the adjacent vehicle(s) drives a node in the CKIF. All the sensory measurements are monitored by the GLRT-based attack detection method. Once an attack is detected, the corresponding position measurements are isolated by a rule-based method.

#### III-B1 Node Based on the Measurement From the Ego Vehicle

Based on (7), the GNSS position given in (12) can drive one node in the CKIF to estimate the position. In this node, the predicted variables from (7) provide the prior information, and the GNSS position from the ego vehicle is exploited to provide the measurement for the posterior estimation.

#### III-B2 Node(s) Based on the Measurement From the Adjacent Vehicle(s)

Besides the measurement from the ego vehicle's GNSS position, the position measurements from the adjacent vehicles given as (11) can be leveraged to drive other node(s) in the CKIF. Based on the node kinematic model (7) for the ego vehicle and the measurement model (11) from the adjacent vehicles, for the directed communication topology \\(\\mathcal{G}_{d}\\), one node beside the node in Section III-B1 can be formulated, and for the undirected communication topology \\(\\mathcal{G}_{u}\\), two more nodes beside the node in Section III-B1 are added.

**Remark 3**: _Due to the different number of nodes in the CKIF for \\(\\mathcal{G}_{d}\\) and \\(\\mathcal{G}_{u}\\), both the localization accuracy and the robustness against the attack will be different. The intuitive speculation for the difference is that, to some extent, with more nodes in the CKIF coming to a consensus estimation, the position accuracy is higher and the robustness is also better due to the greater redundancy of sensory measurements. This, however, comes at the cost of additional data communication and computational loads.
In Section IV, this speculation will be exemplified and discussed._

With the nodes given in Sections III-B1 and III-B2, the CKF shown in (13) is adopted [39] \\[\\begin{aligned} \\mathbf{\\hat{x}}_{k|k}^{i}&=\\mathbf{\\hat{x}}_{k|k-1}^{i}+\\mathbf{K}_{k}^{i}\\left(\\mathbf{z}_{k}^{i}-\\mathbf{H}_{k}^{i}\\mathbf{\\hat{x}}_{k|k-1}^{i}\\right)+\\mathbf{C}_{k}^{i}\\sum_{j\\in\\mathcal{N}_{i}}\\left(\\mathbf{\\hat{x}}_{k|k-1}^{j}-\\mathbf{\\hat{x}}_{k|k-1}^{i}\\right)\\\\ \\mathbf{K}_{k}^{i}&=\\mathbf{P}_{k}^{i}\\left(\\mathbf{H}_{k}^{i}\\right)^{\\top}\\left(\\mathbf{R}^{i}+\\mathbf{H}_{k}^{i}\\mathbf{P}_{k}^{i}\\left(\\mathbf{H}_{k}^{i}\\right)^{\\top}\\right)^{-1}\\\\ \\mathbf{M}_{k}^{i}&=\\mathbf{F}_{k}^{i}\\mathbf{P}_{k}^{i}\\left(\\mathbf{F}_{k}^{i}\\right)^{\\top}+\\mathbf{K}_{k}^{i}\\mathbf{R}^{i}\\left(\\mathbf{K}_{k}^{i}\\right)^{\\top}\\\\ \\mathbf{F}_{k}^{i}&=\\mathbf{I}-\\mathbf{K}_{k}^{i}\\mathbf{H}_{k}^{i},\\quad\\mathbf{C}_{k}^{i}=\\gamma\\mathbf{F}_{k}^{i}\\mathbf{G}_{k}^{i}\\\\ \\mathbf{G}_{k}^{i}&=\\mathbf{\\Phi}_{k}^{i}\\mathbf{M}_{k}^{i}\\left(\\mathbf{\\Phi}_{k}^{i}\\right)^{\\top}+\\mathbf{Q}_{k}^{i}+\\mathbf{P}_{k}^{i}\\mathbf{S}_{k}^{i}\\mathbf{P}_{k}^{i}\\\\ \\mathbf{P}_{k+1}^{i}&=\\mathbf{\\Phi}_{k}^{i}\\mathbf{M}_{k}^{i}\\left(\\mathbf{\\Phi}_{k}^{i}\\right)^{\\top}+\\mathbf{Q}^{i},\\quad\\mathbf{\\hat{x}}_{k+1|k}^{i}=\\mathbf{\\Phi}_{k}^{i}\\mathbf{\\hat{x}}_{k|k}^{i}\\end{aligned} \\tag{13}\\] where the superscript \\(i\\) denotes node \\(i\\), \\(\\mathbf{\\hat{x}}_{k|k}^{i}\\) and \\(\\mathbf{\\hat{x}}_{k+1|k}^{i}\\) are the estimate and prediction of the state \\(\\mathbf{x}_{k}^{i}\\), and the matrix inversion lemma \\((\\mathbf{A}+\\mathbf{B}\\mathbf{C}\\mathbf{D})^{-1}=\\mathbf{A}^{-1}-\\mathbf{A}^{-1}\\mathbf{B}(\\mathbf{C}^{-1}+\\mathbf{D}\\mathbf{A}^{-1}\\mathbf{B})^{-1}\\mathbf{D}\\mathbf{A}^{-1}\\) is utilized for the computation of \\(\\mathbf{K}_{k}^{i}\\). \\(\\mathbf{H}^{i}\\) is the measurement matrix and \\(\\mathbf{I}\\) is an identity matrix. \\(\\mathbf{R}^{i}\\) and \\(\\mathbf{Q}^{i}\\) denote the measurement and process noise covariance matrices, respectively. \\(\\mathbf{P}^{i}\\) denotes the state covariance matrix, and \\(\\mathbf{\\Phi}^{i}\\) denotes the state transition matrix. For the convenience of implementation, by defining the weighted measurement \\(\\mathbf{y}_{k}^{i}=(\\mathbf{H}_{k}^{i})^{\\top}(\\mathbf{R}^{i})^{-1}\\mathbf{z}_{k}^{i}\\) for node \\(i\\) and the information matrix \\(\\mathbf{S}_{k}^{i}=(\\mathbf{H}_{k}^{i})^{\\top}(\\mathbf{R}^{i})^{-1}\\mathbf{H}_{k}^{i}\\), the information form of the CKF shown in (14), i.e., the CKIF, is applied \\[\\begin{aligned} \\mathbf{\\hat{x}}_{k|k}^{i}&=\\mathbf{\\hat{x}}_{k|k-1}^{i}+\\mathbf{M}_{k}^{i}\\left(\\mathbf{y}_{k}^{i}-\\mathbf{S}_{k}^{i}\\mathbf{\\hat{x}}_{k|k-1}^{i}\\right)+\\mathbf{C}_{k}^{i}\\sum_{j\\in\\mathcal{N}_{i}}\\left(\\mathbf{\\hat{x}}_{k|k-1}^{j}-\\mathbf{\\hat{x}}_{k|k-1}^{i}\\right)\\\\ \\mathbf{M}_{k}^{i}&=\\left(\\left(\\mathbf{P}_{k}^{i}\\right)^{-1}+\\mathbf{S}_{k}^{i}\\right)^{-1}\\\\ \\mathbf{C}_{k}^{i}&=\\gamma\\mathbf{F}_{k}^{i}\\mathbf{G}_{k}^{i}\\\\ \\mathbf{P}_{k+1}^{i}&=\\mathbf{\\Phi}_{k}^{i}\\mathbf{M}_{k}^{i}\\left(\\mathbf{\\Phi}_{k}^{i}\\right)^{\\top}+\\mathbf{Q}^{i}\\\\ \\mathbf{\\hat{x}}_{k+1|k}^{i}&=\\mathbf{\\Phi}_{k}^{i}\\mathbf{\\hat{x}}_{k|k}^{i}\\end{aligned} \\tag{14}\\]
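As an illustration of one consensus update of (14) for a single node, the following sketch fuses the ego GNSS position and a neighbor-derived position while exchanging prior estimates with neighboring nodes. It is a simplified illustration rather than the article's implementation: a small constant consensus gain stands in for \\(\\mathbf{C}_{k}^{i}=\\gamma\\mathbf{F}_{k}^{i}\\mathbf{G}_{k}^{i}\\), and all numerical values are placeholders.

```python
import numpy as np

def ckif_step(x_prior, P, z_list, R_list, neighbor_priors, Phi, Q, H,
              consensus_gain=0.05):
    """One measurement/consensus update of a CKIF node in the spirit of (14).

    z_list / R_list : position measurements (ego GNSS and neighbor-derived)
                      and their noise variances.
    neighbor_priors : predicted states received from the neighboring nodes.
    consensus_gain  : simplified stand-in for C_k^i = gamma * F_k^i * G_k^i.
    """
    # Information vector y and information matrix S (Algorithm 1, steps 2-3).
    y = sum(H.T * (z / r) for z, r in zip(z_list, R_list))
    S = sum(H.T @ H / r for r in R_list)

    # Consensus Kalman state estimation (Algorithm 1, step 4).
    M = np.linalg.inv(np.linalg.inv(P) + S)
    consensus = sum(xj - x_prior for xj in neighbor_priors)
    x_post = x_prior + M @ (y.flatten() - S @ x_prior) + consensus_gain * consensus

    # Covariance update and state prediction for the next step.
    P_next = Phi @ M @ Phi.T + Q
    x_pred = Phi @ x_post
    return x_post, x_pred, P_next

H = np.array([[1.0, 0.0]])
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
Q = np.diag([0.01, 0.05])
x_post, x_pred, P_next = ckif_step(
    x_prior=np.array([99.5, 20.0]), P=np.eye(2),
    z_list=[100.2, 99.8], R_list=[1.0, 0.5],
    neighbor_priors=[np.array([99.8, 20.1])],
    Phi=Phi, Q=Q, H=H)
print(x_post)
```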
Exchanging the prediction of the states, the node based on the measurement from the ego vehicle and the node(s) based on the measurement from the adjacent vehicle(s) attempt to reach a consensus on the (estimated) states from (7). Algorithm 1 represents the CKIF strategy, where nodes \\(i\\) and \\(j\\) denote the node in Section III-B1 and the node(s) in Section III-B2.

```
Input : \\(\\mathbf{\\Phi}_{k}^{i}\\); \\(\\mathbf{z}_{\\text{G}i}\\) (position measurement from ego vehicle \\(i\\)) and \\(\\mathbf{z}_{ij}\\) (position measurements of the ego vehicle derived from adjacent vehicles \\(j\\)); \\(\\mathbf{H}\\); \\(\\mathbf{Q}^{i}\\); \\(\\mathbf{R}^{i}\\) and \\(\\mathbf{R}^{j}\\); \\(\\mathbf{\\hat{x}}_{k|k-1}^{j}\\) (predicted states of node \\(j\\)); initial state \\(\\mathbf{x}^{i}(0)\\); initial state covariance \\(\\mathbf{P}^{i}(0)\\)
Output : \\(\\mathbf{\\hat{x}}_{k|k}^{i}\\)
1  while GNSS updated do
2      Compute the information vector \\(\\mathbf{y}_{k}^{i}=\\mathbf{H}^{\\top}(\\mathbf{R}^{i})^{-1}\\mathbf{z}_{\\text{G}i}+\\sum_{j\\in\\mathcal{N}_{i}}(\\mathbf{H}^{\\top}(\\mathbf{R}^{j})^{-1}\\mathbf{z}_{ij})\\);
3      Compute the information matrix \\(\\mathbf{S}_{k}^{i}=\\mathbf{H}^{\\top}(\\mathbf{R}^{i})^{-1}\\mathbf{H}+\\sum_{j\\in\\mathcal{N}_{i}}(\\mathbf{H}^{\\top}(\\mathbf{R}^{j})^{-1}\\mathbf{H})\\);
4      Compute the consensus Kalman state estimation \\(\\mathbf{\\hat{x}}_{k|k}^{i}=\\mathbf{\\hat{x}}_{k|k-1}^{i}+\\mathbf{M}_{k}^{i}(\\mathbf{y}_{k}^{i}-\\mathbf{S}_{k}^{i}\\mathbf{\\hat{x}}_{k|k-1}^{i})+\\mathbf{C}_{k}^{i}\\sum_{j\\in\\mathcal{N}_{i}}(\\mathbf{\\hat{x}}_{k|k-1}^{j}-\\mathbf{\\hat{x}}_{k|k-1}^{i})\\) with \\(\\mathbf{M}_{k}^{i}=((\\mathbf{P}_{k}^{i})^{-1}+\\mathbf{S}_{k}^{i})^{-1}\\);
5      Update the state covariance \\(\\mathbf{P}_{k+1}^{i}=\\mathbf{\\Phi}_{k}^{i}\\mathbf{M}_{k}^{i}(\\mathbf{\\Phi}_{k}^{i})^{\\top}+\\mathbf{Q}^{i}\\) following (14);
6      Predict the state \\(\\mathbf{\\hat{x}}_{k+1|k}^{i}=\\mathbf{\\Phi}_{k}^{i}\\mathbf{\\hat{x}}_{k|k}^{i}\\);
7  end
```

### _Attack Detection and Isolation_

A false data injection attack that appears as a large noise error can be handled by methods such as the chi-square test and an adaptive KF [42]. Also, it is more complex to address the drift attack [33]. Thus, in this section, the drift attack model similar to [30] is introduced, and then, its detection and isolation method shown in Fig. 2 is presented.

#### III-C1 Attack Model

For our study, the false data injection attack, which is a drift error, is considered. The drift error does not change frequently and is injected into the original GNSS position or the interdistance [30], i.e., there exist \\(k_{1},k_{2}\\in\\mathbb{N}^{+}\\) such that the attack signal \\(\\varepsilon_{k}\\) satisfies \\[\\varepsilon_{k}=c_{k},\\quad\\forall k\\in\\left[k_{1},k_{2}\\right],\\ k_{2}-k_{1}\\geq n-1,\\ n\\in\\mathbb{N}^{+} \\tag{15}\\] where \\(c_{k}\\) is a constant. This kind of error in the GNSS position or the interdistance is commonly encountered [30] and is investigated in this article. Note that another common kind of false data injection attack, which is a large noise error, will not be discussed because it can be addressed by methods such as the chi-square method and the adaptive KF in [42].

#### III-C2 Delay-Prediction Framework

For attack detection in a KF framework, usually, the innovation or residual is used to determine whether an attack has occurred [33]. In our case, the residual \\(\\boldsymbol{e}_{k}\\) of the CKIF given in (16) is generated for the attack detector \\[\\boldsymbol{e}_{k}=\\boldsymbol{z}_{k}-\\boldsymbol{H}\\boldsymbol{\\hat{x}}_{k|k} \\tag{16}\\] where \\(\\boldsymbol{z}_{k}\\) is from (12) and \\(\\boldsymbol{\\hat{x}}_{k|k}\\) is the estimated state from the CKIF. As stated in Section I-A, attack detection can be done based on only one sample [31] or multiple samples of the residual or innovation [33].
In this work, the GLRT-based method in [43], which is a multiple-sample-based method, is adopted to detect the attack due to its high detection accuracy. However, the tradeoff is that several samples of the residual or innovation are required. Fig. 3 shows the delay-prediction-based attack detection framework. The input \\(u_{k}\\) and measurement \\(z_{k}\\) for the KF are first delayed by \\(\\tau=\\upsilon\\Delta T\\). The delayed \\(u_{k-\\upsilon}\\) and \\(z_{k-\\upsilon}\\) are used to perform the time update and measurement update of the KF to obtain the estimated states \\(\\hat{x}_{k-\\upsilon}\\). Furthermore, \\(\\hat{x}_{k}\\) is predicted. \\(\\hat{x}_{k-\\upsilon}\\) and \\(\\hat{x}_{k}\\) are then used to generate the residual \\(e_{k-\\upsilon}\\) and the pseudo innovation \\(\\xi_{k}\\) of the KF. These two variables are used as input to the GLRT-based attack detection algorithm. The detailed process of attack detection is discussed in the following.

In the real-time implementation, a buffer, such as Buffer 2 in Fig. 3, is used to save the current and historical information, but this operation will induce a time delay for the decision due to the moving-average effect. This means that, given Buffer 2 of the residual or innovation at time \\(t\\), the decision made belongs to time \\(t_{2}\\) instead of time \\(t\\). There will be a lag \\(\\varrho=t-t_{2}\\), which is related to the number of samples of the residual or innovation. This lag will prevent us from instantly isolating the attack, leaving the estimated states influenced by the attacks during this delay. Although smaller sample sets will reduce the time delay, they will degrade the attack detection accuracy. To explicitly account for the time delay of the GLRT-based attack detection algorithm, a delay-prediction framework is proposed. Given the lag \\(\\varrho\\) of the GLRT-based method no matter how we choose the window size of the samples, we delay the IMU information, the GNSS position of the ego vehicle, and the position derived from the adjacent vehicles by a time \\(\\tau\\) to estimate the states \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\) in (7) of the CAVs by the CKIF at time \\(t-\\tau\\). This active delay allows us to save the fresh sensory information from \\(t-\\tau\\) to \\(t\\) to diagnose the attack in the measurements, as long as we can predict the current estimated states \\(\\boldsymbol{\\hat{x}}(t)\\) based on \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\) and the fresh IMU information free from the attacks. Then, using the predicted states and the fresh measurements from the ego vehicle's GNSS and the adjacent vehicles, we can generate a set of pseudo innovations and save it in Buffer 2 of Fig. 3 for GLRT-based attack detection. The decision made at \\(t_{2}\\) by the GLRT-based method based on Buffer 2 is ahead of the time \\(t-\\tau\\) of the states \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\), and using this decision, the attack can be isolated instantly by tuning the measurement covariance matrix in the corresponding nodes of the CKIF. Then, the time delay issue of the multiple-sample-based attack detection algorithm can be addressed. However, this mechanism works when the measurement transitions from a normal status to an attacked status, but it has a deficiency when the measurement transitions from an attacked status back to a normal status. To tackle this, based on \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\), Buffer 1 in Fig. 3 with the residual generated by (16) is also reserved for another GLRT-based attack decision.
The decision from this GLRT-based detector, tagged to \\(t_{1}\\), is able to reflect the status of the measurement at the time \\(t-\\tau\\) to some extent. Note that although there is also a latency between \\(t-\\tau\\) and \\(t_{1}\\), this latency will not cause a large impact because it only delays, for a short time, the switch back to using the normal measurements instead of the attacked ones. Thereby, based on the decisions from the GLRT-based method using Buffers 1 and 2 in Fig. 3, the attack model studied in this article can be detected properly. In the following, the state prediction and pseudo innovation generation are presented. Based on the estimated states \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\) and the IMU information, the states \\(\\boldsymbol{\\hat{x}}(t)\\) are predicted by the following equation [44]: \\[\\begin{aligned} \\dot{\\boldsymbol{\\delta}}\\left(t\\right)&=\\boldsymbol{A}\\left(t\\right)\\left(\\boldsymbol{\\hat{x}}\\left(t-\\tau\\right)+\\boldsymbol{\\delta}\\left(t\\right)-\\boldsymbol{\\delta}\\left(t-\\tau\\right)\\right)+\\boldsymbol{B}\\left(t\\right)\\boldsymbol{u}\\left(t\\right)\\\\ \\boldsymbol{\\hat{x}}\\left(t\\right)&=\\boldsymbol{\\hat{x}}\\left(t-\\tau\\right)+\\boldsymbol{\\delta}\\left(t\\right)-\\boldsymbol{\\delta}\\left(t-\\tau\\right)\\end{aligned} \\tag{17}\\] where \\(\\tau=\\upsilon\\Delta T\\) is the actual delay time, \\(\\upsilon\\) is the window size of the innovation required for the GLRT-based attack detection algorithm, \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\) is the delayed state estimate, \\(\\boldsymbol{u}\\) is the input in (7), and \\(\\boldsymbol{\\delta}\\) is the intermediate state. From (17), it can be seen that, given the delayed state estimate \\(\\boldsymbol{\\hat{x}}(t-\\tau)\\) and \\(\\boldsymbol{u}(t)\\), the states at the current timestamp \\(t\\) can be predicted. For the real implementation, (17) is discretized and we have \\[\\begin{aligned} \\boldsymbol{\\delta}_{k+1}&=\\boldsymbol{\\Phi}_{k}\\boldsymbol{\\delta}_{k}+\\boldsymbol{A}\\,\\Delta T\\left(\\boldsymbol{\\hat{x}}_{k-\\upsilon}-\\boldsymbol{\\delta}_{k-\\upsilon}\\right)+\\boldsymbol{\\Xi}_{k}\\boldsymbol{u}_{k}\\\\ \\boldsymbol{\\hat{x}}_{k}&=\\boldsymbol{\\hat{x}}_{k-\\upsilon}+\\boldsymbol{\\delta}_{k+1}-\\boldsymbol{\\delta}_{k+1-\\upsilon}\\end{aligned} \\tag{18}\\] where \\(k\\geq\\upsilon\\) and \\(\\boldsymbol{\\hat{x}}_{k}\\) is the predicted state for the current timestamp \\(k\\). Once we have the predicted states, a buffer is used to save the predicted states from \\(k+1-\\upsilon\\) to \\(k+1\\).

Fig. 3: Delay-prediction-based attack detection framework.

Then, the pseudo innovation \\(\\mathbf{\\xi}_{k}\\) in (19) is computed and tested by the GLRT-based method to determine whether an attack has occurred, as shown in the blue blocks in the lower branch of Fig. 3 \\[\\mathbf{\\xi}_{k}=\\mathbf{z}_{k}-\\mathbf{H}\\mathbf{\\hat{x}}_{k|k-\\upsilon}. \\tag{19}\\] The term pseudo is used here because, rigorously, the predicted state used for computing the innovation should be the one-step prediction from the KF, which is the condition under which the innovation satisfies the Gaussian noise distribution [45]. The prediction step size \\(\\upsilon\\) used in (18) depends on the buffer size used in the GLRT-based algorithm and is obviously larger than one. On the one hand, the prediction errors have been proved to be stable and bounded in [44]. On the other hand, from our experience, only ten samples of the innovation (around 1 s in the time domain) are enough for the GLRT-based algorithm to detect the attack. In this regard, this short-time prediction based on the IMU information will not generate a large cumulative error in the position [19]. Compared with the attack, this cumulative error within a short time is negligible. In Section III-C3, based on the residual and pseudo innovation, the GLRT-based attack detection algorithm is introduced.
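The forward prediction across the delay window and the pseudo innovation of (19) can be sketched as follows; propagating the delayed estimate through the buffered IMU inputs is used here as a simplified stand-in for the recursion in (17)-(18), and the window size and signal values are illustrative.

```python
import numpy as np

def predict_current_state(x_delayed, u_buffer, dt):
    """Propagate the delayed estimate x_hat(t - tau) through the buffered IMU
    inputs to approximate x_hat(t) (a simplified stand-in for (17)-(18))."""
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([0.0, 1.0])
    Phi = np.eye(2) + A * dt
    x = x_delayed.copy()
    for u in u_buffer:            # u_buffer holds the last upsilon accelerations
        x = Phi @ x + B * dt * u
    return x

def pseudo_innovation(z_now, x_pred_now, H=np.array([[1.0, 0.0]])):
    """Pseudo innovation xi_k = z_k - H x_hat_{k|k-upsilon} from (19)."""
    return z_now - float(H @ x_pred_now)

dt, upsilon = 0.1, 10                  # 10-sample window, about 1 s at 10 Hz
x_delayed = np.array([100.0, 20.0])    # estimate at time t - tau
u_buf = [0.3] * upsilon                # buffered accelerations over the window
x_now = predict_current_state(x_delayed, u_buf, dt)
print(pseudo_innovation(z_now=122.5, x_pred_now=x_now))
```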
**Remark 4**: _In this proposed delay-prediction framework, the benefit is to maintain the detection accuracy and real-time performance of the multiple-sample-based method. Although we combine this framework with the GLRT-based method in [34] and [43], note that this framework can be generalized to any multiple-sample-based attack/fault detection method when a temporal delay is induced, which is meaningful to the community._

#### III-C3 GLRT-Based Attack Detection

Given the sets of the residual and innovation from Buffers 1 and 2 in Fig. 3 and motivated by [34], the attack detection is formulated as a binary hypothesis testing problem, where the detector can choose between the two hypotheses \\(\\mathcal{H}_{0}\\) and \\(\\mathcal{H}_{1}\\) defined as \\[\\begin{aligned}&\\mathcal{H}_{0}:\\text{The attack has occurred}\\\\ &\\mathcal{H}_{1}:\\text{There is no attack}.\\end{aligned} \\tag{20}\\] Since the residual in (16) or the pseudo innovation in (19) will be used to detect the attack, in order to derive the probability density function (pdf) for each hypothesis, the attack and noise models in the residual or pseudo innovation are specified. The model \\(y_{k}\\), which represents the pseudo innovation \\(\\mathbf{\\xi}_{k}\\) or the residual \\(\\mathbf{e}_{k}\\) at the time instance \\(k\\), is defined as [34] \\[\\begin{aligned} y_{k}&=s_{k}\\left(\\theta\\right)+\\varpi_{k}\\\\ s_{k}\\left(\\theta\\right)&=s_{k}^{\\lambda}\\left(\\theta\\right),\\quad\\varpi_{k}=\\varpi_{k}^{\\lambda}.\\end{aligned} \\tag{21}\\] Here, \\(s_{k}^{\\lambda}(\\theta)\\) represents the attack signal in \\(\\lambda\\), \\(\\theta\\) denotes the set of unknown parameters of the signal, and \\(\\varpi_{k}^{\\lambda}\\) is the noise in \\(\\lambda\\). Without the attack signal, the residual becomes noise, which is assumed to be zero-mean Gaussian distributed [45]. The pseudo innovation is also assumed to approximately satisfy this condition [45]. Then, for the two hypotheses, the following holds: \\[\\begin{aligned}\\mathcal{H}_{0}:\\exists k\\in\\Omega_{n}&\\quad\\text{s.t. }s_{k}^{\\lambda}\\left(\\theta\\right)\\neq 0\\\\ \\mathcal{H}_{1}:\\forall k\\in\\Omega_{n}&\\quad\\text{s.t. }s_{k}^{\\lambda}\\left(\\theta\\right)=0\\end{aligned} \\tag{22}\\] where \\(\\Omega_{n}=\\{l\\in\\mathbb{N}:n\\leq l\\leq n+N-1\\}\\) and \\(N\\) is the window size of the residual or the pseudo innovation, which is related to the lag issue mentioned in Section III-C2. From (22), it can be seen that if there is no attack in the position from the ego vehicle's GNSS or adjacent vehicle(s), the signal component except the noise should be zero. Otherwise, it should be nonzero and can be detected.
With (21), the residual or the pseudo innovation originates from a family of pdfs as in (23) (with \\(i\\in\\{0,1\\}\\))

\\[p\\left(z_{n};\\theta,\\mathcal{H}_{i}\\right)=\\prod_{k\\in\\Omega_{n}}p\\left(y_{k}^{\\lambda};\\theta,\\mathcal{H}_{i}\\right) \\tag{23}\\]

where \\(z_{n}\\triangleq\\{y_{k}\\}_{k=n}^{n+N-1}\\) denotes the residual or the pseudo innovation sequence from time instant \\(n\\) to \\(n+N-1\\), \\(p(*;\\theta)\\) denotes a pdf depending on the parameter \\(\\theta\\), i.e., \\(p(z_{n};\\theta,\\mathcal{H}_{i})\\) is the pdf for the two hypotheses based on the parameter \\(\\theta\\) given the residual or the pseudo innovation sequence \\(z_{n}\\), and the details of the pdfs are defined as

\\[p\\left(y_{k}^{\\lambda};\\theta,\\mathcal{H}_{i}\\right)=\\frac{1}{\\left(2\\pi\\sigma_{\\lambda}^{2}\\right)^{3/2}}\\exp\\left(-\\frac{1}{2\\sigma_{\\lambda}^{2}}\\|y_{k}^{\\lambda}-s_{k}^{\\lambda}\\left(\\theta\\right)\\|^{2}\\right) \\tag{24}\\]

where \\(\\sigma_{\\lambda}^{2}\\) denotes the noise variance of the residual or the pseudo innovation [the noise variance of the position from the ego vehicle's GNSS or adjacent vehicle(s)]. Then, the GLRT [34] decides that \\(\\mathcal{H}_{1}\\) [there is no attack in the position from the ego vehicle's GNSS or adjacent vehicle(s)] holds when

\\[L_{G}\\left(z_{n}\\right)=\\frac{p\\left(z_{n};\\hat{\\theta}^{1},\\mathcal{H}_{1}\\right)}{p\\left(z_{n};\\hat{\\theta}^{0},\\mathcal{H}_{0}\\right)}>\\gamma \\tag{25}\\]

where \\(\\hat{\\theta}^{1}\\) and \\(\\hat{\\theta}^{0}\\) are the maximum likelihood estimates of the unknown parameters when \\(\\mathcal{H}_{1}\\) is true and when \\(\\mathcal{H}_{0}\\) is true, respectively, and \\(\\gamma\\) is a threshold. In the real implementation, \\(\\gamma\\) is a tuning parameter given the tolerable false alarm probability, i.e., the probability of deciding on the hypothesis \\(\\mathcal{H}_{1}\\) when hypothesis \\(\\mathcal{H}_{0}\\) is true. In the real-world application, the false alarm probability is set to 0.1, which gives a detection probability of 0.95 and is sufficient for practical use [34]. In this case, under \\(\\mathcal{H}_{0}\\) (there is an attack), the signal is completely unknown since the CAVs have no prior information about the attack signal, and then, \\(\\hat{\\theta}^{0}=\\{y_{k}\\}_{k=n}^{n+N-1}\\) and

\\[p\\left(z_{n};\\hat{\\theta}^{0},\\mathcal{H}_{0}\\right)=\\frac{1}{\\left(2\\pi\\sigma_{\\lambda}^{2}\\right)^{3N/2}}. \\tag{26}\\]

Under the \\(\\mathcal{H}_{1}\\) hypothesis, we have

\\[p\\left(z_{n};\\hat{\\theta}^{1},\\mathcal{H}_{1}\\right)=\\frac{1}{\\left(2\\pi\\sigma_{\\lambda}^{2}\\right)^{3N/2}}\\exp\\left(-\\frac{1}{2\\sigma_{\\lambda}^{2}}\\sum_{k\\in\\Omega_{n}}\\|y_{k}^{\\lambda}\\|^{2}\\right). \\tag{27}\\]

With (25)-(27), the attack-free hypothesis \\(\\mathcal{H}_{1}\\) [there is no attack in the position from the ego vehicle's GNSS or adjacent vehicle(s)] is accepted if

\\[T\\left(z_{n}\\right)=\\frac{1}{N}\\sum_{k\\in\\Omega_{n}}\\left(\\frac{1}{2\\sigma_{\\lambda}^{2}}\\|y_{k}^{\\lambda}\\|^{2}\\right)<\\gamma^{\\prime} \\tag{28}\\]

where \\(\\gamma^{\\prime}=-(2/N)\\ln(\\gamma)\\). This means that if the energy of the residual or the pseudo innovation is less than a certain threshold \\(\\gamma^{\\prime}\\), no attack is considered in the position measurement. Otherwise, an attack has occurred and an AI is set.
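The test in (28) reduces to a simple windowed energy check. Below is a minimal NumPy sketch of this decision rule; the window length, noise variance, and threshold value are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the energy test in (28): the window is declared attack-free (H1)
# when the normalized energy of the residuals / pseudo innovations stays below gamma'.
def glrt_no_attack(y_window, sigma2, gamma_prime):
    """y_window: (N, dim) residuals or pseudo innovations; sigma2: noise variance."""
    N = y_window.shape[0]
    T = np.sum(np.linalg.norm(y_window, axis=1) ** 2) / (2.0 * sigma2 * N)
    return T < gamma_prime            # True: no attack; False: attack indicator (AI) set

rng = np.random.default_rng(0)
clean = rng.normal(0.0, np.sqrt(3.0), size=(10, 1))   # GNSS-like residuals, variance 3 m^2
attacked = clean + 6.0                                 # drift larger than 3 sigma
print(glrt_no_attack(clean, 3.0, gamma_prime=1.5))     # expected: True
print(glrt_no_attack(attacked, 3.0, gamma_prime=1.5))  # expected: False
```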
As can be seen in Fig. 3, corresponding to Buffers 1 and 2, there will be two indicators AI\\({}_{1}\\) and AI\\({}_{2}\\), respectively. Then, combining AI\\({}_{1}\\) and AI\\({}_{2}\\), we obtain \\(AI\\), which can handle the cases when the position measurement transitions from a normal status to an attacked status or from an attacked status back to a normal status. The holistic attack detection process is shown in Fig. 3. After having the AI, a rule-based strategy to defend against the attack in the CKIF is designed and is discussed in Section III-C4.

**Remark 5**: _In this work, GLRT-based attack detection is selected for our application due to its high detection accuracy and conciseness [34]. Other similar multiple-sample-based methods, such as sequential probability ratio tests, are also applicable and can be integrated with the proposed delay-prediction framework [33]. Another point to note is that the inputs to the detection algorithm are the residual and the innovation. These pieces of information are the difference between the prior information provided by the IMU and the actual sensory measurements. These kinds of inputs are chosen because the prior IMU information is free from attacks. In other words, due to the possible attacks in the measurements, it is challenging to directly design the attack detection method based on the redundant measurements._

#### III-C4 Attack Defense Method

In this section, based on the AI, a rule-based attack isolation method is designed to prevent the localization results from being affected by the attack. Once an AI is declared for a certain sensory measurement, the corresponding measurement update will be isolated in the CKIF. Specifically, the isolation is executed by increasing the corresponding element in the measurement noise covariance matrix \\(\\boldsymbol{R}\\) to an infinite value. This operation prevents the corresponding node from being affected by the attack. In the meantime, however, the measurements in the nodes without the attack will be continuously leveraged for the measurement update. It can be seen that, as long as not all the measurements are attacked, there always exists a measurement update in the nodes of the CKIF. The worst case is that all the measurements are attacked, and then, the CKIF runs in a time update mode, meaning that the states are estimated consecutively by integrating the acceleration from the IMU. When the attacks disappear, the temporary changes to the measurement noise covariance matrix \\(\\boldsymbol{R}\\) are canceled and the CKIF runs normally.
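A minimal sketch of this rule-based isolation is given below; the finite stand-in for the "infinite" covariance value and the channel layout are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the rule-based isolation: when the attack indicator is set for a
# measurement channel, the corresponding diagonal entry of the measurement noise
# covariance R is inflated so that the update effectively ignores that channel.
LARGE = 1e12            # finite stand-in for the "infinite" value in the text

def apply_isolation(R_nominal, attack_indicators):
    """attack_indicators[i] is True if measurement channel i is flagged as attacked."""
    R = R_nominal.copy()
    for i, attacked in enumerate(attack_indicators):
        if attacked:
            R[i, i] = LARGE          # isolate this channel from the measurement update
    return R

R_nominal = np.diag([3.0, 3.0, 1.0])     # e.g., ego GNSS, neighbor-derived position, LiDAR
print(apply_isolation(R_nominal, [False, True, False]))
```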
The details of the algorithm of the secure cooperative localization method are given in Algorithm 2.

```
Input: \\(\\boldsymbol{\\Phi}_{k}^{i}\\); \\(\\boldsymbol{z}_{Gi}\\) (position measurement from ego vehicle \\(i\\)) and \\(\\boldsymbol{z}_{ij}\\) (position measurement from the ego vehicle's adjacent vehicle(s) \\(j\\)); \\(\\boldsymbol{H}\\); \\(\\boldsymbol{Q}^{i}\\); \\(\\boldsymbol{R}^{i}\\) and \\(\\boldsymbol{R}^{j}\\); \\(\\boldsymbol{\\hat{x}}_{k|k-1}^{j}\\) (predicted states of node \\(j\\)); initial state \\(\\boldsymbol{x}^{i}(0)\\); initial state covariance \\(\\boldsymbol{P}^{i}(0)\\)
Output: \\(\\boldsymbol{\\hat{x}}_{k|k}^{i}\\)
 1  while GNSS updated do
 2      Run the GLRT-based attack detection for the ego vehicle measurement based on (28);
 3      if \\(\\boldsymbol{z}_{Gi}\\) is attacked then
 4          Isolate it from the CKIF by enlarging the corresponding element in the covariance matrix;
 5      else
 6          for \\(j\\in\\mathcal{N}_{i}\\) do
 7              Run the GLRT-based algorithm to test the adjacent vehicle(s) measurement;
 8              if \\(\\boldsymbol{z}_{ij}\\) is attacked then
 9                  Isolate it from the CKIF by enlarging the corresponding element in the covariance matrix;
10              end if
11          end for
12      end if
13      Run Algorithm 1;
14  end while
```
**Algorithm 2** Secure Cooperative Localization Method

**Remark 6**: _It can be seen that the CKIF fits the secure cooperative localization problem well from the sensor fusion and attack defense perspectives: not only can it handle the measurements from different vehicles conveniently, i.e., when vehicles are connected to or disconnected from the ego vehicle, the added or deleted measurement can be accommodated by adding or removing the corresponding node in the CKIF, but it can also deal with the attacked sensors by adapting the measurement covariance matrix. In addition, the proposed attack detection and defense method improves the resilience of the CKIF to the attack._

## IV Results and Discussion

In this section, the proposed secure cooperative localization method is validated by numerical simulations. The localization results with the directed or undirected communication and attack settings are exemplified and discussed.

### _System Requirements and Case Study Settings_

In the case study, the CAV platooning application with four CAVs is considered to show the detailed localization results. Also, for the statistical analysis, ten vehicles in the platoon are included to implement the proposed algorithm. On each vehicle, sensors, including an IMU to obtain the longitudinal accelerations, a normal GNSS receiver without differential corrections to obtain the positions, and a LiDAR sensor to calculate the relative distance between the ego vehicle and its neighbor(s), are required. For exchanging the sensor information, an onboard unit (Cellular-V2X or DSRC) is necessary to transmit and receive the shared information from the CAVs wirelessly with a directed or undirected communication topology. In addition, each vehicle should be equipped with a computer that can process the sensory data and run the proposed algorithms. The requirement for the computer is similar to our previous work in [46]. To simulate the real sensory measurement, noise is added to these sensors. For the acceleration measurement, it is assumed that the noise satisfies \\(\\alpha_{a}\\sim\\mathcal{N}(0,\\,(1\\;\\mathrm{m/s^{2}})^{2})\\) [47]. \\(\\mathcal{N}(\\mu,\\,\\sigma^{2})\\) denotes the Gaussian distribution with mean \\(\\mu\\) and variance \\(\\sigma^{2}\\).
In order to simulate the acceleration zero-bias instability of the IMU, a constant bias error between \\(-0.1\\) and \\(0.1\\;\\mathrm{m/s^{2}}\\) is added to the acceleration (\\(0.05\\;\\mathrm{m/s^{2}}\\) in our case). The noise of the GNSS position satisfies \\(\\eta_{\\mathrm{G}}\\sim\\mathcal{N}(0,\\,3\\;\\mathrm{m}^{2})\\) [48]. During the platooning operation, the GNSS of the vehicles is compromised by a drift attack, which is larger than \\(3\\sigma\\) of the noise, at certain times. The relative distance obtained from the LiDAR sensor has the noise \\(\\eta_{R}\\sim\\mathcal{N}(0,\\,1\\;\\mathrm{m}^{2})\\) [49]. The following distance of the CAVs in the platoon is \\(30\\) m and it is assumed that the controllers in the CAVs can make the vehicles keep the same longitudinal acceleration. Note that although, in a real platoon application, there will be transient responses between vehicles, as long as the sensory information between CAVs can be measured and shared, the control behavior will not affect our cooperative localization algorithm. The acceleration of the leading vehicle (green vehicle) in Fig. 1 is shown in Fig. 4. The red line represents the actual acceleration of the leading vehicle and the blue line shows the noisy acceleration measured by the IMU. First, in \\(t=0\\)-\\(4\\) s, the formation accelerates at \\(3\\;\\mathrm{m/s^{2}}\\) and the velocity reaches \\(12\\;\\mathrm{m/s}\\), as shown in Fig. 5. After that, the formation keeps this velocity until \\(t=20\\) s. Then, between \\(t=20\\) and \\(23\\) s, the formation decelerates at \\(-4\\;\\mathrm{m/s^{2}}\\) and the CAVs stop at \\(t=23\\) s.

### _Results_

#### IV-B1 Localization With Directed Communication Topology

The secure cooperative localization results of the CAVs with the directed communication are discussed in this section in terms of the attack detection and the position estimation results. First, in order to simulate the drift attacks in the measured position, an offset position error is added to the GNSS position measurement at certain intervals during the platooning. Note that, since, in (11), the attacks from the GNSS position of other adjacent vehicles or from the relative distance both result in the attack \\(f\\), the contributions from attacks with different sources to the position measurement are the same. For attack detection, as long as there is an attack in \\(f\\) from the GNSS position or the relative distance, it can be detected without needing to know whether it comes from the GNSS position or the relative distance. Thus, to make the validation concise, only the attacks in the GNSS position are used in the case study. The attacked GNSS position measurements are shown in Fig. 6. The curves of Vehicles 1-4 represent the position measurements from the GNSS of all vehicles in Fig. 1. The GNSS in Vehicles 1-3 is attacked and the injected attacks are shown in Table I. The attacks from communication and sensors are considered. For simulating attacks such as data tampering in the communication, the GNSS position of Vehicles 1-3 in the platoon is attacked by adding a drift error. Regarding the attacks on the sensors, the GNSS in a certain region is spoofed, e.g., in \\(t=10\\)-\\(13\\) s, both Vehicles 1 and 2 are attacked. The actual position of Vehicles 1-4 is represented by the center black lines in each colored curve in Fig. 6.
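As an illustration of how such a simulation can be set up, the following sketch generates noisy IMU, GNSS, and LiDAR measurements with a drift attack injected on the GNSS position. The trajectory follows the acceleration profile described above, while the attack interval and magnitude are illustrative assumptions rather than the values listed in Table I.

```python
import numpy as np

# Minimal sketch of the measurement simulation: IMU (noise + constant bias), GNSS
# (noise + drift attack in a given interval), and LiDAR relative distance.
rng = np.random.default_rng(42)
dt = 0.1
t = np.arange(0.0, 23.0, dt)
true_acc = np.where(t < 4.0, 3.0, np.where(t < 20.0, 0.0, -4.0))   # leading vehicle
true_vel = np.cumsum(true_acc) * dt
true_pos = np.cumsum(true_vel) * dt

def simulate_measurements(attack_interval=(8.0, 14.0), drift=8.0):
    acc = true_acc + rng.normal(0.0, 1.0, t.shape) + 0.05           # IMU noise + bias
    gnss = true_pos + rng.normal(0.0, np.sqrt(3.0), t.shape)        # GNSS noise, var 3 m^2
    lidar = 30.0 + rng.normal(0.0, 1.0, t.shape)                    # following distance
    in_attack = (t >= attack_interval[0]) & (t < attack_interval[1])
    gnss = gnss + drift * in_attack          # drift attack, larger than 3 sigma of noise
    return acc, gnss, lidar

acc_meas, gnss_meas, lidar_meas = simulate_measurements()
```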
The attack detection results are shown in Fig. 7, where the black dashed line gives the actual status of whether an attack occurs, and the red line shows the detected AI \\(\\alpha\\) for Vehicles 1-3. To be specific, the partial enlargement views in Fig. 7 provide the exact moments when the attacks happen or disappear for Vehicle 1. Since the time delay \\(\\tau\\), i.e., the prediction horizon in our delay-prediction framework, is set as 0.1 s, for our implementation, we can see that \\(\\alpha\\) for Vehicle 1, for both the reference and the detected result, is behind the actual moment when the first attack occurs at \\(t=8\\) s given in Table I. However, \\(\\alpha\\) for Vehicle 1 (red line) can be set prior to the reference (black dashed line), meaning that the delayed estimator (KCIF) can be notified in advance when the attack is going to occur. This is because, based on Buffer 2 in Fig. 3, the attack can be detected. Then, the delayed estimator KCIF is able to isolate the attack ahead of time. Also, from Fig. 7, it can be seen that the attack can be detected accurately without any false positives. When the GNSS measurement transitions from an attacked status to a normal status (\\(t=14\\) s and \\(t=19\\) s in Fig. 7), it can be seen that the detection is behind the reference for a short term (less than 0.1 s) due to using Buffer 1 in Fig. 3 for the recovery process. The cost of this delay is that the KCIF remains in the attack isolation mode for a short term without leveraging the valid GNSS measurement during this delay. However, the cost is negligible since, in the isolation mode, the corresponding node in the KCIF runs in the time update mode based on the IMU information and the cumulative error is small. Similar attack detection results can be seen for Vehicle 2 (\\(\\alpha\\) Vehicle 2) and Vehicle 3 (\\(\\alpha\\) Vehicle 3) with drift errors of different magnitudes. In addition, during \\(t=10\\)-\\(13\\) s, when both the GNSS measurements in Vehicles 1 and 2 are attacked, the attack detection algorithm can still provide accurate detection results. Then, based on these estimated states in the KCIF, through (18), the states at the current moment can be predicted by using the information from the IMU. Thereby, the attack detection results demonstrate that the latency issue of the GLRT attack detection has been addressed and that our proposed delay-prediction framework can detect the attack accurately. After having the attack detection results, the KCIF is able to defend against the attacks by the rule-based attack isolation approach in Section III-C4 and estimate the position of the CAVs. The localization results and the partial enlargement for Vehicles 2 and 3 are given in Fig. 8 to show the performance of the consensus estimation framework. For comparison purposes, the position results from a normal KF, which is based on the sensors in an individual vehicle, and from a state-of-the-art multi-sensor multi-vehicle (MSMV) approach in [26] are also presented. To implement the KF and MSMV, both of them are integrated with the proposed attack detection algorithm.

Fig. 4: Acceleration of the leading vehicle. The red line shows the actual acceleration and the blue line represents the acceleration measurement from the IMU.

Fig. 5: Velocity of the leading vehicle.

Fig. 6: Position of CAVs. Vehicles 1–4 mean the CAVs from the green vehicle to the blue vehicle in Fig. 1 driven with the acceleration in Fig. 4.
Note that the normal KF can only fuse the information from an individual vehicle, whereas the CKIF and MSMV can leverage the information from both the ego vehicle and other CAVs. This is the major difference between the normal KF and the other two methods. From Fig. 8(a), it can be seen that the positions from the KF, KCIF, and MSMV all follow the reference well to some extent because all of the methods have been integrated with our proposed attack detection method, which prevents them from being affected by the attacks. From another aspect, in the partial enlargement shown in Fig. 8(b), we can see that the attack in the raw position does not affect the estimated position of the KF, KCIF, and MSMV. From the position error in Table II and Fig. 9, differences can be identified between the KF and the cooperative localization approaches, including the KCIF and MSMV. Vehicle 2 can access the sensory measurements of its front vehicle (Vehicle 1), which include the position from the GNSS in Vehicle 2 and the position derived in (11) from the relative distance obtained from the onboard sensor and the GNSS in Vehicle 1. These can be leveraged in the KCIF and MSMV to estimate the position when a certain sensor is attacked. Thus, the absolute mean error (AME) and RMSE of the position from both the KCIF and MSMV are smaller than those from the KF. In \\(t=\\) 10-13 s, both the GNSS in Vehicles 1 and 2 are attacked, and therefore, the position errors for the KF, KCIF, and MSMV are similar and drift a bit because there are no measurements to correct the errors coming from the IMU. However, in \\(t=\\) 20-23 s, due to the attack in Vehicle 2, the KF runs in a time update mode without measurement updates and the position error starts to drift due to the acceleration bias error in the IMU. For the KCIF and MSMV, since the GNSS in Vehicle 1 is not attacked in \\(t=\\) 20-23 s, this information can still be leveraged in the KCIF to correct the errors from the prediction process. For Vehicle 3, the position error drifts for the KF when there is an attack in \\(t=\\) 2-5 s and \\(t=\\) 13-16 s; however, for the KCIF and MSMV, since there is no overlap between the attack intervals of Vehicles 2 and 3, which can be inferred from Table I, the measurement updates continue all the time to correct the errors from the prediction process. Based on the comparison between the KF and the cooperative localization methods (KCIF and MSMV), it can be seen that exploiting the information cooperatively from the ego vehicle and its adjacent vehicle in a directed communication topology in the KCIF improves the redundancy of the localization algorithm and makes it more secure against the attacks. Comparing the KCIF and MSMV, we can see that the KCIF method shows superior performance regarding AME and RMSE. This is because the KCIF is a suboptimal version of the Kalman-consensus filter in [27], which has shown better performance than existing distributed KFs such as the MSMV in [26].

Fig. 7: Attack detection results. AI denotes the attack indicator and veh means vehicle.

Fig. 8: Position error of Vehicles 2 and 3 with directed communication topology. MSMV, KF, and KCIF mean the results of the state-of-the-art cooperative localization method MSMV in [26], the normal KF, and the KCIF, respectively. Veh means vehicle.

#### IV-B2 Localization With Undirected Communication Topology

To further investigate the performance of our method in cases where more vehicles cooperate with each other, localization results for the CAVs with undirected communication are presented.
Under the undirected communication topology, the CAV can share information cooperatively not only with its front CAV but also with its rear CAV, meaning that more information can be fed into the KCIF and MSMV. From the position errors shown in Fig. 10, it can be seen that in \\(t=\\) 10-13 s, although the GNSS in Vehicles 1 and 2 is attacked, the derived position from Vehicle 3 is leveraged in the KCIF and MSMV and the drift error in Fig. 9 has been compensated. With more vehicles connected, the security of our localization algorithm is improved because there is more redundant information that can be fused to compensate for the influence of the attacks. From another aspect, for the localization accuracy, it can be seen from Table II that, from the KF to the KCIF with undirected communication \\(\\mathcal{G}_{tt}\\), the accuracy increases in terms of AME and RMSE. With more information from the vehicles being used, from Table II, the AME and RMSE of both the KCIF and MSMV with \\(\\mathcal{G}_{tt}\\) decrease to some extent compared with those of the KF. In addition, the proposed secure cooperative localization in cases with more CAVs fully connected in \\(\\mathcal{G}_{t}\\) (4-10 CAVs) is tested. From Table II, the accuracy in terms of the AME and RMSE in \\(\\mathcal{G}_{t}\\), where the sensory information from all ten vehicles is used in our estimation algorithm, improves by 35.4% and 36.6%, respectively, compared with that in \\(\\mathcal{G}_{d}\\), where sensory information from only two vehicles is used. It is worth noting that, with the number of vehicles increasing from four to ten, the difference between the KCIF and MSMV decreases. This is possibly because the random noise in the position can be compensated not only by the filter algorithm but also by the averaging effect. Therefore, when more vehicles share information cooperatively, both localization accuracy and security can be advanced further. Please note that, in this article, it is assumed that the shared information can be provided by radar, camera, or LiDAR through V2V/V2X communication. However, in a real application, due to the limitation of sensor range or occlusion issues of these sensors in some scenarios, such as roads with high traffic density, the object cannot be detected, and thus, the interdistance between the vehicles is not accessible if a traditional object detection algorithm based on the sensors in an individual vehicle is used [49]. Accordingly, in this case, the CAV may not be able to share the interdistance between itself and its neighbor CAVs, and the communication edge in the cooperative localization algorithm needs to be disconnected. The consequence of this issue is similar to a measurement that has been attacked. However, as shown in our recent paper [49], if a V2V-based object detection algorithm is used, the limitation of sensor range and the occlusion issues can be resolved, and thus, the robustness of our cooperative localization can be enhanced further.

Fig. 9: Position error of Vehicles 2 and 3 with directed communication topology. Veh means vehicle.

Fig. 10: Position error of Vehicles 2 and 3 with undirected and fully connected communication topology. KF, MSMV Un, and KCIF Un mean the results of the normal KF, the state-of-the-art cooperative localization method MSMV in [26] with the undirected communication topology, and the KCIF with the undirected communication topology, respectively.
## V Conclusion In this article, a secure cooperative localization method for the CAVs is proposed and validated by numerical simulations. It can be concluded from the following results. 1. The sensory information from the ego vehicle and the cooperation with its adjacent vehicle(s) in a directed or undirected communication topology can be well leveraged by the consensus estimation. To some extent, more nodes in the CKIF enable the algorithm to have higher localization accuracy and better security. 2. The injected attacks in the sensory measurement can be detected accurately by the GLRT-based method and the temporal lag for the decision from the GLRT-based method has been resolved. With the detection results, the proposed secure cooperative localization has shown resilient performance to the attacks. Both the security and the localization accuracy have been improved compared with a normal centralized KF. ## Acknowledgment This project belongs to OpenCDA ecosystem. ## References * [1] Y. Li, Z. He, Y. Li, Z. Gao, R. Chen, and N. El-Sheimy, \"Enhanced wireless localization based on orientation-compensation model and differential received signal strength,\" _IEEE Sensors J._, vol. 19, no. 11, pp. 4201-4210, Jun. 2019. * [2] J. Betz et al., \"Autonomous vehicles on the edge: A survey on autonomous vehicle racing,\" _IEEE Open J. Intell. Transp. Syst._, vol. 3, pp. 458-488, 2022. * [3] G. Chen et al., \"Planning and tracking control of full drive-by-wire electric vehicles in unstructured scenario,\" _Proc. Inst. Mech. Eng. D. Automobile Eng._, 2023. * [4] G. Elghazaly, R. Frank, S. Harvey, and S. Safko, \"High-definition maps: Comprehensive survey, challenges, and future perspectives,\" _IEEE Open J. Intell. Transp. Syst._, vol. 4, pp. 527-550, 2023. * [5] A. Gholamboseinian and J. Seitz, \"Vehicle classification in intelligent transport systems: An overview, methods and software perspective,\" _IEEE Open J. Intell. Transp. Syst._, vol. 2, pp. 173-194, 2021. * [6] E. Thonhofer et al., \"Infrastructure-based digital twins for cooperative, connected, automated driving and smart road services,\" _IEEE Open J. Intell. Transp. Syst._, vol. 4, pp. 311-324, 2023. * [7] R. Xu et al., \"The OpenCDA open-source ecosystem for cooperative driving automation research,\" _IEEE Trans. Intell. Vehicles_, vol. 8, no. 4, pp. 2698-2711, Apr. 2023. * [8] S. Nallamothu et al., \"Detailed concept of operations: Transportation systems management and operations/cooperative driving automation use cases and scenarios,\" United States. Federal Highway Admin., Tech. Rep. FHWA-HRT-20-064, 2020. * [9] W. Liu et al., \"A systematic survey of control techniques and applications in connected and automated vehicles,\" _IEEE Internet Things J._, early access, Aug. 21, 2023, doi: 10.1109/JIOT.2023.3307002. * [10] M. Hua, G. Chen, B. Zhang, and Y. Huang, \"A hierarchical energy efficiency optimization control strategy for distributed drive electric vehicles,\" _Proc. Inst. Mech. Eng. D. J. Automobile Eng._, vol. 233, no. 3, pp. 605-621, Feb. 2019. * [11] M. T. Arafin and K. Kornegay, \"Attack detection and countermeasures for autonomous navigation,\" in _Proc. 55th Annu. Conf. Inf. Sci. Syst. (CISS)_, Mar. 2021, pp. 1-6. * [12] S. Kuntti, S. Fallah, K. Katsarros, M. Diamati, F. Mccullough, and A. Mouazukitis, \"A survey of the state-of-the-art i localization techniques and their potentials for autonomous vehicle applications,\" _IEEE Internet Things J._, vol. 5, no. 2, pp. 829-846, Apr. 2018. * [13] Y. 
Li et al., \"Toward location-enabled IoT (LE-IoT?): IoT positioning techniques, error sources, and error mitigation,\" _IEEE Internet Things J._, vol. 8, no. 6, pp. 4035-4062, Mar. 2021. * [14] W. Liu, X. Xia, L. Xiong, Y. Lu, L. Gao, and Z. Yu, \"Automated vehicle-sligible angle estimation considering signal measurement characteristic,\" _IEEE Sensors J._, vol. 21, no. 19, pp. 21675-21687, Oct. 2021. * [15] W. Liu, L. Xiong, X. Xia, Y. Lu, L. Gao, and S. Song, \"Vision-aided intelligent vehicle sideship angle estimation based on dynamic model,\" _IET Intell. Transp. Syst._, vol. 14, no. 10, pp. 1183-1189, Oct. 2020. * [16] L. Gao, L. Xiong, X. Xia, Y. Lu, Z. Yu, and A. Khajepour, \"Improved vehicle localization using on-board sensors and vehicle lateral velocity,\" _IEEE Sensors J._, vol. 22, no. 7, pp. 6818-6831, Oct. 2022. * [17] K.-W. Chiang, G.-J. Tsai, Y.-H. Li, Y. Li, and N. El-Sheimy, \"Navigation engine design for automated driving using INS/GNSS/3D LiDAR-SLAM and integrity assessment,\" _Remote Sens._, vol. 12, no. 10, p. 1564, May 2020. * [18] X. Xia, N. P. Bhatt, A. Khajepour, and E. Hashemi, \"Integrated inertial-L1DAR-based map matching localization for varying environments,\" _IEEE Trans. Intell. Vehicles_, early access, Jul. 26, 2023, doi: 10.1109/TIV.2023.3298892. * [19] Y. Li, Z. He, Z. Gao, Y. Zhuang, C. Shi, and N. El-Sheimy, \"Toward robust crowdsourcing-based localization: A fingerprinting accuracy indicator enhanced wireless/magenize/inertial integration approach,\" _IEEE Internet Things J._, vol. 6, no. 2, pp. 3585-3600, Apr. 2019. * [20] F. B. Gunay, E. Ozturk, T. Cavdar, Y. S. Hanay, and A. U. R. Khan, \"Vehicular ad hoc network (VANET) localization techniques: A survey,\" _Arch. Comput. Methods Eng._, vol. 28, no. 4, pp. 3001-3033, Jun. 2021. * [21] F. Lobo, D. Grael, H. Oliveira, L. Villas, A. Almehmadi, and K. El-Khatib, \"Cooperative localization improvement using distance information in vehicular ad hoc networks,\" _Sensors_, vol. 19, no. 23, p. 5231, Nov. 2019. * [22] M. Elazab, A. Noureldin, and H. S. Hassanein, \"Integrated cooperative localization for vehicular networks with partial GPS access in urban canyons,\" _Veh. Commun._, vol. 9, pp. 242-253, Jul. 2017. * [23] M. A. Hossain, I. Elshafey, and A. Al-Sanie, \"Cooperative vehicle positioning with multi-sensor data fusion and vehicular communications,\" _Wireless Netw._, vol. 25, no. 3, pp. 1403-1413, Apr. 2019. * [24] G. Xiao, X. Song, H. Cao, S. Zhao, H. Dai, and M. Li, \"Augmented extended Kalman filter with cooperative Bayesian filtering and multi-models fusion for precise vehicle localisations,\" _IET Radar, Sonar Navigat._, vol. 14, no. 11, pp. 1815-1826, Nov. 2020. * [25] S. Ma, F. Wen, X. Zhao, Z.-M. Wang, and D. Yang, \"An efficient V2X based vehicle localization using single RSU and single receiver,\" _IEEE Access_, vol. 7, pp. 46114-46121, 2019. * [26] P. Yang, D. Duan, C. Chen, X. Cheng, and L. Yang, \"Multi-sensor multi-vehicle (MSMV) localization and mobility tracking for autonomous driving,\" _IEEE Trans. Veh. Technol._, vol. 69, no. 12, pp. 14355-14364, Dec. 2020. * [27] R. Olfati-Saber, \"Kalman-consensus filter: Optimality, stability, and performance,\" in _Proc. 48th IEEE Conf. Decis. Control (CDC), 28th Chin. Control Conf._, Dec. 2009, pp. 7036-7042. * [28] M. Pirani et al., \"Cooperative vehicle speed fault diagnosis and correction,\" _IEEE Trans. Intell. Transp. Syst._, vol. 20, no. 2, pp. 783-789, Feb. 2019. * [29] B. Gong, S. Wang, M. Hao, X. Guan, and S. 
Li, \"Range-based collaborative relative navigation for multiple unmanned aerial vehicles using consensus extended Kalman filter,\" _Aerosp. Sci. Technol._, vol. 112, May 2021, Art. no. 106647. * [30] Z. Ju, H. Zhang, and Y. Tan, \"Deception attack detection and estimation for a local vehicle in vehicle platooning based on a modified UFIR estimator,\" _IEEE Internet Things J._, vol. 7, no. 5, pp. 3693-3705, May 2020. * [31] T. Yang and C. Lv, \"A secure sensor fusion framework for connected and automated vehicles under sensor attacks,\" 2021, _arXiv:2103.00883_. * [32] J. Shen, J. Y. Won, Z. Chen, and Q. A. Chen, \"Drift with devil: Security of multi-sensor fusion based localization in high-level autonomous driving under GPS spoofing,\" in _Proc. 29th USENIX Secur. Symp. (USENIX Security)_, 2020, pp. 931-948. * [33] I. Hwang, S. Kim, Y. Kim, and C. E. Seah, \"A survey of fault detection, isolation, and reconfiguration methods,\" _IEEE Trans. Control Syst. Technol._, vol. 18, no. 3, pp. 636-653, May 2010. * [34] I. Skog, P. Handel, J. O. Nilsson, and J. Rantakokko, \"Zero-velocity detection--An algorithm evaluation,\" _IEEE Trans. Biomed. Eng._, vol. 57, no. 11, pp. 2657-2666, Nov. 2010. * [35] Y. Guo and J. Ma, \"SCoPTO: Signalized corridor management with vehicle platooning and trajectory control under connected and automated traffic environment,\" _Transportmetrica B, Transp. Dyn._, vol. 9, no. 1, pp. 673-692, Jan. 2021. * [36] Y. Zheng, S. E. Li, K. Li, and L.-Y. Wang, \"Stability margin improvement of vehicular platoon considering undirected topology and asymmetric control,\" _IEEE Trans. Control Syst. Technol._, vol. 24, no. 4, pp. 1253-1265, Jul. 2016. * [37] Z. Wang, Y. Bian, S. E. Shladover, G. Wu, S. E. Li, and M. J. Barth, \"A survey on cooperative longitudinal motion control of multiple connected and automated vehicles,\" _IEEE Intell. Transp. Syst. Mag._, vol. 12, no. 1, pp. 4-24, Spring. 2020. * [38] Y. Li, X. Niu, Y. Cheng, C. Shi, and N. El-Sheimy, \"The impact of vehicle maneuvers on the attitude estimation of GNSS/INS for mobile mapping,\" _J. Appl. Geodesy_, vol. 9, no. 3, pp. 183-197, Jan. 2015. * [39] X. Xia, E. Hashemi, L. Xiong, and A. Khajepour, \"Autonomous vehicle kinematics and dynamics synthesis for sideslip angle estimation based on consensus Kalman filter,\" _IEEE Trans. Control Syst. Technol._, vol. 31, no. 1, pp. 179-192, Jan. 2023. * [40] C. Kwon, W. Liu, and I. Hwang, \"Security analysis for cyber-physical systems against stealthy deception attacks,\" in _Proc. Amer. Control Conf._, Jun. 2013, pp. 3344-3349. * [41] R. Wang, Z. Xiong, J. Liu, J. Xu, and L. Shi, \"Chi-square and SPRT combined fault detection for multisensor navigation,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 52, no. 3, pp. 1352-1365, Jun. 2016. * [42] A. Almaghile, J. Wang, and W. Ding, \"Evaluation of the performances of adaptive Kalman filter methods in GPS/INS integration,\" _J. Global Positioning Syst._, vol. 9, no. 1, pp. 33-40, Jun. 2010. * [43] X. Xia, E. Hashemi, L. Xiong, A. Khajepour, and N. Xu, \"Autonomous vehicles sideslip angle estimation: Single antenna GNSS/IMU fusion with observability analysis,\" _IEEE Internet Things J._, vol. 8, no. 19, pp. 14845-14859, Oct. 2021. * [44] L. Khosravian, J. Trumpf, and R. E. Mahony, \"State estimation for nonlinear systems with delayed output measurements,\" in _Proc. CDC_, Dec. 2015, pp. 6330-6335. * [45] W. Ding, J. Wang, C. Rizos, and D. Kinlyside, \"Improving adaptive Kalman estimation in GPS/INS integration,\" _J. Navigat._, vol. 
60, no. 3, pp. 517-529, Sep. 2007. * [46] Z. Meng, X. Xia, R. Xu, W. Liu, and J. Ma, \"HYDRO-3D: Hybrid object detection and tracking for cooperative perception using 3D LiDAR,\" _IEEE Trans. Intell. Vehicles_, early access, Jun. 12, 2023, doi: 10.1109/TIV.2023.3282567. * [47]STMicroelectronics. _ASM330LLHB Specification_. Accessed: Apr. 26, 2023. [Online]. Available: [https://www.st.com/resource/en/datasheet/stm330lh.pdf](https://www.st.com/resource/en/datasheet/stm330lh.pdf) * [48] Ublox. _Ublox ZED-F9T-10B Specification_. [Online]. Available: [https://content-u-blox.com/sites/default/files/ZED-F9T-10B_DataSheet_UBVX-2003365.pdf](https://content-u-blox.com/sites/default/files/ZED-F9T-10B_DataSheet_UBVX-2003365.pdf) * [49] X. Xia et al., \"An automated driving systems data acquisition and analytics platform,\" _Transp. Res. C, Emerg. Technol._, vol. 151, Jun. 2023, Art. no. 104120.
In this article, we present secure cooperative localization for connected automated vehicles (CAVs) based on consensus estimation through leveraging shared but possibly attacked sensory information from multiple adjacent vehicles. First, the communication topology between the CAVs, the node kinematic model, and the node measurement model for each vehicle are introduced. Then, a consensus Kalman information filter (CKIF) is applied to fuse the shared information from connected vehicles. Since the sensory information might be attacked, an attack detection algorithm based on the generalized likelihood ratio test (GLRT) is adopted. A delay-prediction framework is proposed to maintain the accuracy and real-time performance of the detection algorithm. Next, a rule-based attack isolation method is used to defend against the attack. Finally, the proposed secure cooperative localization algorithm is validated in extensive numerical simulation experiments. The results confirm that leveraging information from multiple vehicles in a cooperative manner leads to better accuracy and resilience for vehicle localization under attacks.

Attack detection and defense, connected automated vehicles (CAVs), consensus estimation, secure cooperative localization.
# Global Prototypical Network for Few-Shot Hyperspectral Image Classification Chengye Zhang, Jun Yue, and Qiming Qin Manuscript received June 8, 2020; revised July 16, 2020, August 4, 2020, and August 10, 2020; accepted August 13, 2020. Date of publication August 18, 2020; date of current version August 28, 2020. This work was supported in part by the National Natural Science Foundation of China under Grant 41901291, in part by the Aeronautical Observation System in the High Resolution Earth Observation System of the National Science and Technology Major Project of China under Grant 30-H30C01-9004-19/21, in part by the National Key Research and Development Program under Grant 2018YFC1800102, in part by the Open Fund of State Key Laboratory of Coal Resources and Safe Mining under Grant SKLCRSM19KFA04, in part by the National Facilities and Information Infrastructure for Science and Technology under Grant Y719H71006, and in part by the Fundamental Research Funds for the Central Universities under Grant 20190P006. (_Corresponding author: Jun Yue._) Chengye Zhang is with the State Key Laboratory of Coal Resources and Safe Mining and the College of Geoscience and Surveying Engineering, China University of Mining and Technology, Beijing 100083, China (e-mail: [email protected]). Jun Yue is with the School of Traffic and Transportation Engineering, Changsha University of Science and Technology, Changsha 410114, China (e-mail: [email protected]). Qiming Qin is with the School of Earth and Space Sciences, Peking University, Beijing 100871, China (e-mail: [email protected]). Digital Object Identifier 10.1109/JSTARS.2020.3017544

## I Introduction

Combining both spatial and spectral information, hyperspectral remote sensing has been widely used in many areas, e.g., agriculture, geology, environment, and ecology [1, 2]. Hyperspectral image classification is to identify the target type of each pixel, which is usually a key step in the applications of hyperspectral remote sensing in many areas. Machine learning provides an important way to perform hyperspectral image classification automatically. In the past several decades, many machine learning approaches were proposed to solve the problem of hyperspectral image classification, such as the support vector machine (SVM) [3, 4], neural network [5], and random forest (RF) [6, 7]. Since deep learning was proposed [8], it has achieved great success in solving many important problems, such as face recognition, speech recognition, image identification, and automatic translation. After the application of deep learning to hyperspectral image classification [9], methods for hyperspectral image classification based on deep learning attracted the attention of many scholars. A series of deep networks were successively proposed and the classification accuracy of hyperspectral images was gradually improved [10, 11, 12, 13, 14, 15, 16]. To make full use of the spatial-spectral information provided by the hyperspectral image, some deep-learning models fused both the spatial and spectral features and extracted comprehensive features, which also improved the classification accuracy [17, 18, 19, 20, 21, 22, 23]. However, hyperspectral image classification has not been solved very well using deep learning. One of the most important reasons is that a large number of labeled samples is required by deep learning to achieve satisfactory accuracy.
In fact, it is usually very difficult to acquire enough labeled samples to meet the requirement of hyperspectral image classification using deep learning. In other words, the limited number of training samples hinders scholars from improving the accuracy of hyperspectral image classification. In this situation, a class of methods called few-shot learning was introduced to solve the problem of hyperspectral image classification with limited samples, and obtained better accuracy compared with previous machine learning methods [24, 25]. Few-shot learning is an important branch of machine learning, which is designed for solving problems with only a few training samples, such as object identification, face recognition, image classification, etc. [26, 27, 28, 29, 30]. For hyperspectral image classification, several methods have been proposed to solve the problem of limited supervised samples. A pixel-pair method was proposed to construct a new data pair combination, which increased the amount of input data for training [31]. Constrained by the number of training samples, an unsupervised method called self-taught feature learning was proposed to accomplish hyperspectral image classification [32]. Sensor-specific models were trained and directly applied on target datasets when tuned with only a few training samples [33]. A semisupervised convolutional neural network (CNN) and a supervised deep feature extraction method were proposed to realize hyperspectral image classification using 200 training samples in each class [34, 35]. In few-shot image classification, there are a group of base classes and a group of new classes. The common strategy of few-shot learning is to train the model based on enough labeled samples in the base classes. Then, the trained model is generalized to the new classes and is used to realize the classification based on limited supervised samples in the new classes. In other words, the classification of the "new classes" with limited supervised samples is the task, whereas the "base classes" with enough labeled samples are utilized to help train the model. A deep few-shot learning (DFSL) method and a spatial-spectral prototypical network were proposed in [24] and [36], respectively, for few-shot hyperspectral image classification. The common basic idea of these two methods is to learn a generalized feature space using a large amount of labeled samples and then apply it to new datasets with limited supervised samples, which is similar to this article. However, the differences from this article lie in the method of feature extraction. This article utilizes global prototypical learning with hallucinated new samples in the new classes and realizes classification based on global prototypical representations, and it also designs the dense CNN and the spectral-spatial attention network (SSAN) to alleviate the vanishing of gradient information and to adaptively adjust the receptive field size. In fact, there are two kinds of few-shot learning: first, the model is trained using the base classes and then generalized to the new classes; second, the model is trained using both the base classes and new classes, and then the trained model is used to realize the classification of the new classes. However, a serious problem is present in these methods, i.e., the disproportion in the number of labeled samples between the base classes and the new classes. In this situation, the classification method is prone to overfitting to the base classes [37].
To alleviate this problem, one strategy is to hallucinate new samples in the new classes and then to learn a global representation for each class. This article proposes a global prototypical network (GPN) to solve the problem of hyperspectral image classification with limited supervised samples. The proposed method combines the learning strategy for global prototypical representations and a novel deep network. The main contributions of this article are listed as follows.

1. A method of global representation learning is proposed to train a network, which is to learn a global prototypical representation for each class in a new feature space. The learning procedure includes hallucinating new samples, generating episodic representations, and updating global prototypical representations.
2. A novel architecture of deep CNN is designed, including two branches: a dense convolutional network and a spectral-spatial attention network. The dense convolutional network is to alleviate the vanishing of gradient information after passing through many layers, whereas the SSAN is to adaptively adjust the receptive field size.
3. The experiments show that the overall accuracy (OA) of the proposed method is better than that of existing methods. The ablation study demonstrates the effectiveness of the global representation learning, the dense convolutional network, and the spectral-spatial attention network for improving the classification accuracy.

This article is organized as follows. In Section II, the proposed method is explained in detail, including the global representation learning and the two-branch deep CNN. In Section III, the experimental data and parameters are described. In Section IV, the results are shown and discussed, and the ablation study and time consumption are analyzed. In Section V, the conclusions of this article are summarized.

## II Method

### _Procedure of the Proposed GPN_

The procedure of the proposed GPN is shown in Fig. 1, including training the network and testing the network. In training, the base classes and new classes (after hallucinating new samples) are used for training a deep network to learn a feature space and also to learn the global prototypical representation in this space for each class. In testing, the unclassified samples are transformed to the embedding-feature space and are classified according to the similarities with the global prototypical representations based on the nearest neighbor (NN) classifier.

### _Global Representation Learning_

The global representation learning is to learn a function (i.e., network) \\(f_{\\theta}\\): R\\({}^{F}\\)\\(\\rightarrow\\)R\\({}^{D}\\) to transfer the samples from the original data space R\\({}^{F}\\) to an embedding-feature space R\\({}^{D}\\). In the embedding-feature space, each sample is a high-dimensional vector, and a vector called the global prototypical representation is learned for each class during training. The number of dimensions of the embedding-feature space is denoted as \\(d\\). Before training, ten new samples need to be hallucinated from each new class. For the new class \\(c_{j}\\), \\(k_{r}\\) samples are randomly selected from all the \\(k_{t}\\) samples in \\(c_{j}\\). For the selected \\(i\\)th sample, an embedding feature is extracted by the network, which is denoted as \\(f_{i}\\). The new sample \\(t_{cj}\\) hallucinated from the new class \\(c_{j}\\) is defined as (1). ceil (\\(\\cdot\\)) is the function to calculate the rounded-up integer. U(0, \\(u\\)) is a uniform distribution ranging from 0 to \\(u\\)

\\[t_{cj}=\\sum_{i=1}^{k_{r}}\\frac{\\tau_{i}}{\\sum_{j}\\tau_{j}}\\cdot f_{i} \\tag{1}\\]

\\(k_{r}=\\) ceil (\\(k_{r}^{{}^{\\prime}}\\)), \\(k_{r}^{{}^{\\prime}}\\sim\\) U(0, \\(k_{t}\\)), \\(\\tau_{i}\\sim\\) U(0, 1).
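The following is a minimal NumPy sketch of this hallucination step in (1); the embedding network is replaced by precomputed feature vectors, and the feature dimension and sample counts are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of hallucinating new samples for a new class as in (1): each new
# sample is a normalized random convex combination of the embedding features of
# k_r randomly selected labeled samples of that class.
rng = np.random.default_rng(0)

def hallucinate_sample(class_features):
    """class_features: (k_t, d) embedding features f_i of one new class."""
    k_t = class_features.shape[0]
    k_r = max(1, int(np.ceil(rng.uniform(0.0, k_t))))   # k_r = ceil(k_r'), k_r' ~ U(0, k_t)
    idx = rng.choice(k_t, size=k_r, replace=False)      # randomly select k_r samples
    tau = rng.uniform(0.0, 1.0, size=k_r)               # tau_i ~ U(0, 1)
    return (tau / tau.sum()) @ class_features[idx]      # weighted sum in (1)

features = rng.normal(size=(5, 64))                     # e.g., 5 labeled samples, d = 64
new_samples = np.stack([hallucinate_sample(features) for _ in range(10)])  # 10 per class
```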
The procedure for training a batch is shown in Table I. \\(C_{\\rm all}=\\{c_{1}\\), \\(c_{2}\\), \\(c_{3}\\), \\(\\ldots\\), \\(c_{n}\\}\\) is the label set of all the classes and \\(n\\) is the number of all the classes, consisting of the label set of the base classes \\(C_{\\rm base}=\\{c_{1}\\), \\(c_{2}\\), \\(c_{3}\\), \\(\\ldots\\), \\(c_{s}\\}\\) and the label set of the new classes \\(C_{\\rm new}=\\{c_{s+1},c_{s+2},c_{s+3},\\ldots,c_{s+t}\\}\\). \\(s\\) is the number of base classes and \\(t\\) is the number of new classes (\\(n=s+t\\)). \\(\\textbf{g}\\left(c_{i}\\right)\\) is the global prototypical representation of the class \\(c_{i}\\). \\(\\textbf{G}=\\{\\textbf{g}\\left(c_{1}\\right)\\), \\(\\textbf{g}\\left(c_{2}\\right)\\), \\(\\textbf{g}\\left(c_{3}\\right)\\), \\(\\ldots\\), \\(\\textbf{g}\\left(c_{n}\\right)\\}\\) is a \\(d\\times n\\) matrix consisting of the global prototypical representations of all the classes. The initialization of \\(\\textbf{g}\\) (\\(c_{i}\\)) is the mean of the embedding features (extracted by the initialized network) of all the labeled samples in class \\(c_{i}\\) (2). \\(T=\\{t_{1},\\,t_{2},\\,t_{3},\\,\\ldots,\\,t_{m}\\}\\) is the labeled sample set of all the classes and \\(m\\) is the number of all the labeled samples in both the base classes and new classes. \\(\\textbf{x}_{i}=f_{\\theta}\\) (\\(t_{i}\\)) is a vector representing the sample \\(t_{i}\\) in the new space \\(\\textbf{R}^{D}\\) and its dimension is \\(d\\times 1\\)

\\[\\textbf{g}_{\\rm initial}(c_{i})=\\frac{\\sum f_{\\theta}(s_{\\rm g})}{\\rm Num_{i}} \\tag{2}\\]

where \\(s_{\\rm g}\\) represents the labeled samples in class \\(c_{i}\\) and \\(\\rm Num_{i}\\) represents the number of all the labeled samples in class \\(c_{i}\\). In a batch, some classes are randomly selected from \\(C_{\\rm all}\\) to form \\(C_{\\rm batch}\\), and the number of selected classes is denoted as \\(n_{\\rm batch}\\). Then, a part of the samples are randomly selected from each class in \\(C_{\\rm batch}\\) to form a support dataset \\(S=\\{s_{1},s_{2},s_{3},\\,\\ldots,s_{a}\\}\\), and another part of the samples are randomly selected from each class in \\(C_{\\rm batch}\\) to form a query dataset \\(Q=\\{q_{1},q_{2},q_{3},\\,\\ldots,\\,q_{b}\\}\\). The number of samples per class in the support dataset \\(S\\) and the query dataset \\(Q\\) is denoted as \\(u\\) and \\(v\\), respectively. Thus, the total number of samples in \\(S\\) and \\(Q\\) is \\(a=u\\times n_{\\rm batch}\\) and \\(b=v\\times n_{\\rm batch}\\), respectively. The embedding spatial-spectral feature (a \\(d\\times 1\\) vector) of each sample in the support dataset \\(S\\) is extracted by the deep network \\(f_{\\theta}\\): R\\({}^{F}\\)\\(\\rightarrow\\)R\\({}^{D}\\). The mean of the embedding features of the samples in class \\(c_{i}\\) is computed and regarded as the episodic representation \\(\\mathbf{e}(c_{i})\\) of class \\(c_{i}\\) in this batch (3).

Fig. 1: Procedure of the proposed GPN in this article.
The episodic representations of all the class in this batch form the matrix \\(\\mathbf{E}=\\{\\mathbf{e}(c_{i})\\), \\(c_{i}\\)\\(\\in\\)\\(C_{\\mathrm{batch}}\\}\\), which is a \\(d\\times n_{\\mathrm{batch}}\\) matrix \\[\\mathbf{e}(c_{i})=\\frac{\\sum f_{\\theta}(s_{\\mathrm{e}})}{u} \\tag{3}\\] where \\(s_{\\mathrm{e}}\\) represents the labeled samples in class \\(c_{i}\\) in a batch. According to (4), the similarity score \\(\\mathbf{H}_{i}\\) = \\([h_{i}^{1},h_{i}^{2},h_{i}^{3},\\cdots,h_{i}^{n}]^{\\mathrm{T}}\\) between the episodic representation of class \\(c_{i}\\) and each global prototypical representation in \\(\\mathbf{G}=\\{\\mathbf{g}\\) (\\(c_{1}\\)), \\(\\mathbf{g}\\) (\\(c_{2}\\)), \\(\\mathbf{g}\\) (\\(c_{3}\\)), , \\(\\mathbf{g}\\) (\\(c_{n}\\))} is calculated \\[h_{i}^{j}=-||\\delta(\\mathbf{e}(c_{i}))-\\varphi(\\mathbf{g}(c_{j}))||_{2},c_{j} \\in C_{\\mathrm{all}} \\tag{4}\\] where \\(\\delta\\) (-) and \\(\\varphi\\) (-) are embeddings for the episodic representation and the global prototypical representation, respectively. In this article, a loss for class \\(c_{i}\\) in support dataset (called support loss) is defined as \\[L_{s}^{i}=\\mathrm{CE}(c_{i},\\mathbf{H}_{i}) \\tag{5}\\] where \\(\\mathrm{CE}(\\cdot)\\) is a cross entropy loss. The similarity score \\(\\mathbf{H}_{i}=[h_{i}^{1},h_{i}^{2},h_{i}^{3},\\cdots,h_{i}^{n}]^{\\mathrm{T}}\\) is normalized to a probability distribution \\(\\mathbf{P}_{i}=[p_{i}^{1},p_{i}^{2},p_{i}^{3},\\cdots,p_{i}^{n}]^{\\mathrm{T}}\\) by \\[p_{i}^{j}=\\frac{e^{h_{i}^{j}}}{\\sum_{j=1}^{n}e^{h_{i}^{j}}}. \\tag{6}\\] The global prototypical representation of each class is then updated according to \\[\\mathbf{g}_{\\mathrm{update}}(c_{i})=\\mathbf{G}\\mathbf{P}_{i} \\tag{7}\\] where \\(\\mathbf{G}\\) is a \\(d\\times n\\) matrix consisting of the global prototypical representations before being updated, and \\(\\mathbf{P}_{i}\\) is a \\(n\\times 1\\) vector. The similarity score \\(\\mathbf{W}_{k}=[w_{k}^{1},w_{k}^{2},w_{k}^{3},\\cdots,w_{k}^{n_{\\mathrm{batch} }}]^{\\mathrm{T}}\\) between the updated global prototypical representation and the sample \\(q_{k}\\) in the query dataset \\(Q\\) is defined as \\[w_{k}^{i}=-||f_{\\theta}(q_{k})-\\mathbf{g}_{\\mathrm{update}}(c_{i})||_{2},c_{i} \\in C_{\\mathrm{batch}}. \\tag{8}\\] A loss for the sample \\(q_{k}\\) in the query dataset \\(Q\\) (called query loss) is defined as \\[L_{q}^{k}=\\mathrm{CE}(C(q_{k}),\\mathbf{W}_{k}) \\tag{9}\\] where \\(\\mathrm{CE}(\\cdot)\\) is a cross entropy loss. _C_(\\(q_{k}\\)) belongs to the set \\(C_{\\mathrm{batch}}\\) and is the class type of the sample \\(q_{k}\\). Based on the support loss and the query loss, the total loss in this article is defined as \\[L_{\\mathrm{total}}=\\sum_{i=1}^{n_{\\mathrm{batch}}}L_{s}^{i}+\\sum_{k=1}^{v}L_{q }^{k}. \\tag{10}\\] At the last of training a batch, the parameters of the network \\(\\theta\\) are updated based on the total loss. The learning rate \\(\\alpha\\) of the network is set as 1 \\(\\times\\) 10\\({}^{-3}\\), whereas the weight decay and momentum of the proposed network are set as 1 \\(\\times\\) 10\\({}^{-4}\\) and 9 \\(\\times\\) 10\\({}^{-1}\\), respectively. ### _Deep Networks_ The architecture of the deep network (\\(f_{\\theta}\\): R\\({}^{F}\\)\\(\\rightarrow\\)R\\({}^{D}\\)) is shown in Fig. 2, which consists of two parts: a dense convolutional network and an SSAN. 
#### II-C1 Dense Convolution

For a normal convolutional network, a connection is present only between two neighboring layers. In other words, there are \\(n-1\\) connections in a normal convolutional network with \\(n\\) layers. However, in a normal convolutional network, the information about the gradient can vanish after passing through many layers [38]. In this article, a dense convolutional network is designed with five convolutional layers (see Fig. 2). The difference from a normal convolutional network is that more connections are present, i.e., ten connections are in a five-layer network (see Fig. 2): the first layer is connected to the second, third, fourth, and fifth layer, respectively; the second layer is connected to the third, fourth, and fifth layer, respectively; the third layer is connected to the fourth and fifth layer, respectively; and the fourth layer is connected to the fifth layer.

#### II-C2 Spectral-Spatial Attention Network

In a normal convolutional network, the size of the convolutional kernel is fixed, resulting in a constant size of the receptive field for a layer. However, from the view of optic nerve science, the size of the receptive field should be different for different nerve cells in the same layer, to respond to different stimulations. In this article, an SSAN is proposed to realize different sizes of receptive fields in the same layer. The strategy of the SSAN is based on the selective kernel network [39]. Kernels with different sizes are applied on a layer, respectively, and the size of the receptive field is adjusted adaptively. The SSAN in this article consists of three sections: split, fuse, and select (see Fig. 2).

Split: In this section, the SSAN is split into two branches. One is a convolutional block using a 3 \\(\\times\\) 3 \\(\\times\\) 3 kernel, and the other uses a 5 \\(\\times\\) 5 \\(\\times\\) 5 kernel. To improve the operational efficiency, the 5 \\(\\times\\) 5 \\(\\times\\) 5 convolution is realized by dilated convolution [40].

Fuse: The number of spectral bands of the hyperspectral image is denoted as \\(N\\). The convolutional results from the two branches are fused by the following ("Fuse 1" module in Fig. 2):

\\[\\mathbf{Y}=\\mathbf{Y}^{\\mathrm{k1}}+\\mathbf{Y}^{\\mathrm{k2}} \\tag{11}\\]

where \\(\\mathbf{Y}^{\\mathrm{k1}}\\) and \\(\\mathbf{Y}^{\\mathrm{k2}}\\) are the convolutional results from the two branches, respectively. \\(\\mathbf{Y}\\)\\(\\in\\)R\\({}^{5\\times 5\\times N}\\) is the fused result. The global information is embedded by the "global average pooling" module (see Fig. 2) to generate the spectralwise statistics \\(\\mathbf{U}=[u_{1},u_{2},u_{3},\\ldots,u_{N}]^{\\mathrm{T}}\\)\\(\\in\\)R\\({}^{N\\times 1}\\) by

\\[u_{c}=\\frac{1}{5\\times 5}\\sum_{i=1}^{5}\\sum_{j=1}^{5}\\mathbf{Y}_{c}(i,j) \\tag{12}\\]

where \\(u_{c}\\) is the \\(c\\)th component of the spectralwise statistics \\(\\mathbf{U}\\), corresponding to the \\(c\\)th spectral band. \\(\\mathbf{Y}_{c}\\) is the \\(c\\)th component of \\(\\mathbf{Y}\\), which is a 5 \\(\\times\\) 5 matrix (\\(\\mathbf{Y}_{c}\\)\\(\\in\\)R\\({}^{5\\times 5}\\)). The compact feature \\(\\mathbf{Z}\\) is computed by the "full connected 1" module (see Fig. 2) according to (13). ceil (\\(\\cdot\\)) is an operator to acquire the rounded-up integer. \\(\\mathbf{Z}\\) is a ceil (\\(N/r\\)) vector (i.e., \\(\\mathbf{Z}\\)\\(\\in\\)\\(\\mathbf{R}^{\\mathrm{ceil}\\ (N/r)\\times 1}\\)). \\(r\\) is the reduction ratio

\\[\\mathbf{Z}=\\psi(\\mathbf{M}\\mathbf{U}) \\tag{13}\\]

where \\(\\psi\\) is the ReLU function. \\(\\mathbf{M}\\)\\(\\in\\)\\(\\mathbf{R}^{\\mathrm{ceil}\\ (N/r)\\times N}\\) is a parameter matrix that needs to be trained in the network. The purpose of \\(\\mathbf{Z}\\) is to guide the adaptive selections.

Select: "Spectral-spatial attention 1" and "Spectral-spatial attention 2" are two branches with different receptive scales of spectral-spatial information (see Fig. 2). For the \\(c\\)th spectral band, the weights (\\(a_{c}\\), \\(b_{c}\\)) of the two branches are calculated by

\\[a_{c}=\\frac{e^{\\mathbf{A}c\\mathbf{Z}}}{e^{\\mathbf{A}c\\mathbf{Z}}+e^{\\mathbf{B}c\\mathbf{Z}}}\\]
\\[b_{c}=\\frac{e^{\\mathbf{B}c\\mathbf{Z}}}{e^{\\mathbf{A}c\\mathbf{Z}}+e^{\\mathbf{B}c\\mathbf{Z}}} \\tag{14}\\]

where \\(\\mathbf{A}c\\) and \\(\\mathbf{B}c\\) are the \\(c\\)th components of the parameter matrixes \\(\\mathbf{A}\\) and \\(\\mathbf{B}\\), which need to be trained in the network. \\(\\mathbf{A}c\\)\\(\\in\\)\\(\\mathbf{R}^{1\\times\\ \\mathrm{ceil}\\ (N/r)}\\), \\(\\mathbf{B}c\\)\\(\\in\\)\\(\\mathbf{R}^{1\\times\\ \\mathrm{ceil}\\ (N/r)}\\), \\(\\mathbf{A}\\)\\(\\in\\)\\(\\mathbf{R}^{N\\times\\ \\mathrm{ceil}\\ (N/r)}\\), and \\(\\mathbf{B}\\)\\(\\in\\)\\(\\mathbf{R}^{N\\times\\ \\mathrm{ceil}\\ (N/r)}\\). "Fuse 2" is to adaptively select different scales of spectral-spatial information (see Fig. 2). For the \\(c\\)th spectral band, the output (\\(\\mathbf{O}_{c}\\)) of the "Fuse 2" module is realized by (15). \\(\\mathbf{O}=[\\mathbf{O}_{1}\\), \\(\\mathbf{O}_{2}\\), \\(\\mathbf{O}_{3}\\), \\(\\ldots\\), \\(\\mathbf{O}_{N}]\\) is the output of the SSAN for all the spectral bands (\\(\\mathbf{O}_{c}\\)\\(\\in\\)\\(\\mathbf{R}^{5\\times 5}\\) and \\(\\mathbf{O}\\)\\(\\in\\)\\(\\mathbf{R}^{5\\times 5\\times N}\\))

\\[\\mathbf{O}_{c}=a_{c}\\cdot\\mathbf{Y}_{c}^{k1}+b_{c}\\cdot\\mathbf{Y}_{c}^{k2}. \\tag{15}\\]
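The following is a minimal NumPy sketch of the SSAN computations in (11)-(15); the band count, reduction ratio, and randomly initialized parameter matrices M, A, and B are illustrative stand-ins for the learned quantities.

```python
import numpy as np

# Minimal sketch of the SSAN: fuse the two kernel branches (11), pool globally (12),
# form the compact feature Z (13), compute per-band selection weights (14), and
# produce the band-wise selective fusion (15).
rng = np.random.default_rng(0)
N, r = 100, 4                                   # number of bands, reduction ratio
d_red = int(np.ceil(N / r))

def ssan(Y_k1, Y_k2, M, A, B):
    """Y_k1, Y_k2: (5, 5, N) branch outputs; M: (d_red, N); A, B: (N, d_red)."""
    Y = Y_k1 + Y_k2                             # (11): fuse the two branches
    U = Y.mean(axis=(0, 1))                     # (12): global average pooling, shape (N,)
    Z = np.maximum(M @ U, 0.0)                  # (13): ReLU(M U), shape (d_red,)
    logits = np.stack([A @ Z, B @ Z])           # branch scores per band, shape (2, N)
    w = np.exp(logits - logits.max(axis=0))
    w /= w.sum(axis=0)                          # (14): per-band weights a_c, b_c
    return w[0] * Y_k1 + w[1] * Y_k2            # (15): selective fusion, shape (5, 5, N)

Y1, Y2 = rng.normal(size=(5, 5, N)), rng.normal(size=(5, 5, N))
O = ssan(Y1, Y2, rng.normal(size=(d_red, N)),
         rng.normal(size=(N, d_red)), rng.normal(size=(N, d_red)))
```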
### _NN for Classification_

In this article, all the testing samples are transferred to the embedding-feature space, and the embedded features are extracted by the trained deep network. The similarity scores between the final global prototypical representations and each testing sample are calculated according to (8). The maximum similarity score determines the class type of each testing sample. In fact, (8) calculates the negative of the Euclidean distance between a global prototypical representation and the embedded feature of a sample. Thus, the maximum similarity score corresponds to the minimum Euclidean distance. In other words, the classification of the testing samples is finished based on the Euclidean distance using the NN classifier.

## III Experiments

### _Experimental Data_

#### III-A1 Training Data

There are four hyperspectral datasets that are used as training data in this article. A brief introduction of the hyperspectral datasets for training the network is shown in Table II [24].

#### III-A2 Testing Data

There are three hyperspectral datasets that are used as testing data in this article. Both the training datasets and the testing datasets are popular and well known in the academic community of hyperspectral remote sensing. A brief introduction of the testing datasets is shown in Table III [24]. The Salinas dataset is located in California. IP is short for Indian Pines, located in Indiana, whereas UP is short for the University of Pavia, located in Pavia.

### _Architecture Details of the Proposed Deep CNN_

The architecture and parameters of the proposed network are shown in Table IV.
## IV Experiments

### _Experimental Data_

#### IV-A1 Training Data

Four hyperspectral datasets are used as training data in this article. A brief introduction to the hyperspectral datasets used for training the network is given in Table II [24].

#### IV-A2 Testing Data

Three hyperspectral datasets are used as testing data in this article. Both the training datasets and the testing datasets are popular and well known in the academic community of hyperspectral remote sensing. A brief introduction to the testing datasets is given in Table III [24]. The Salinas dataset is located in California, IP is short for Indian Pines located in Indiana, and UP is short for the University of Pavia located in Pavia.

### _Architecture Details of the Proposed Deep CNN_

The architecture and parameters of the proposed network are shown in Table IV. The input size of the GPN is 9 \\(\\times\\) 9 \\(\\times\\) _N_BAND_, where _N_BAND_ is the number of spectral bands. Hence, both the spectral curve of a pixel and a 9 \\(\\times\\) 9 neighborhood of the pixel (spatial information) are input to the network. The layers described in Table IV can be found in Fig. 2. An experiment on parameter sensitivity was conducted to set the parameter \\(d\\).

| Layer Name | Input Layer | Filter Size / Operation | Padding | Output Size |
| --- | --- | --- | --- | --- |
| Input layer | / | / | / | 9×9×N_BAND |
| 3-D Conv-layer 1 | Input layer | 3×3×3×2 | Yes | 9×9×N_BAND×2 |
| 3-D Conv-layer 2 | 3-D Conv-layer 1 | 3×3×3×2 | Yes | 9×9×N_BAND×2 |
| 3-D Conv-layer 3 | 3-D Conv-layer 2 | 3×3×3×2 | Yes | 9×9×N_BAND×2 |
| Short-connection 1 | 3-D Conv-layer 1 & 3 | 3-D Conv-layer 1 + 3-D Conv-layer 3 | / | 9×9×N_BAND×2 |
| 3-D Conv-layer 4 | Short-connection 1 | 3×3×3×2 | Yes | 9×9×N_BAND×2 |
| Short-connection 2 | 3-D Conv-layer 1 & 2 & 4 | 3-D Conv-layer 1 + 3-D Conv-layer 2 + 3-D Conv-layer 4 | / | 9×9×N_BAND×2 |
| 3-D Conv-layer 5 | Short-connection 2 | 3×3×3×2 | Yes | 9×9×N_BAND×2 |
| Short-connection 3 | 3-D Conv-layer 1 & 2 & 3 & 5 | 3-D Conv-layer 1 + 3-D Conv-layer 2 + 3-D Conv-layer 3 + 3-D Conv-layer 5 | / | 9×9×N_BAND×2 |
| Max Pooling | Short-connection 3 | 2×2×1 | No | 5×5×N_BAND×1 |
| Convolution with kernel 1 | Max Pooling | 3×3×3×1 | Yes | 5×5×N_BAND×1 |
| Convolution with kernel 2 | Max Pooling | 5×5×5×1 | Yes | 5×5×N_BAND×1 |
| Fuse 1 | Convolution with kernel 1 & 2 | Convolution with kernel 1 + Convolution with kernel 2 | / | 5×5×N_BAND×1 |
| Global Average Pooling | Fuse 1 | 5×5×1×1 | No | 1×1×N_BAND×1 |
| Full Connected 1 | Global Average Pooling | / | / | 1×1×1×1 |
| Spectral-spatial Attention 1 | Full Connected 1 | / | / | 1×1×N_BAND×1 |
| Spectral-spatial Attention 2 | Full Connected 1 | / | / | 1×1×N_BAND×1 |
| Selective Kernel 1 | Convolution with kernel 1 & Spectral-spatial Attention 1 | Convolution with kernel 1 weighted by Spectral-spatial Attention 1 | / | 5×5×N_BAND×1 |
| Selective Kernel 2 | Convolution with kernel 2 & Spectral-spatial Attention 2 | Convolution with kernel 2 weighted by Spectral-spatial Attention 2 | / | 5×5×N_BAND×1 |
| Fuse 2 | Selective Kernel 1 & 2 | Selective Kernel 1 + Selective Kernel 2 | / | 5×5×N_BAND×1 |
| 2-D Convolution | Fuse 2 | 3×3×1×1 | No | 3×3×N_BAND×1 |
| Full Connected 2 | 2-D Convolution | / | / | λ×1 |

Fig. 7: Classification results of the Salinas dataset. (a)–(e) Number (\\(L\\)) of supervised samples is 5, 10, 15, 20, and 25 in each class, respectively.

### _Ablation Study_

The proposed method consists of three main modules, i.e., global representation learning, the dense CNN, and the SSAN. An ablation study was performed to demonstrate that every module is effective for improving accuracy. In other words, the accuracy of the method was tested when each of the three modules was replaced in turn. In the ablation study, the strategy of global representation learning was replaced by traditional triplet learning [49], and the dense branch and the SSAN branch were each replaced by a normal CNN module. When a module was replaced, the other modules were kept the same as in the proposed method (GPN). The results of the ablation study are shown in Tables VIII–X. When global representation learning, the dense CNN, and the SSAN were replaced, respectively, the classification accuracy declined. In other words, the accuracy of the proposed method is better than that of the cases where the three modules were replaced, respectively. Thus, it is demonstrated that all three modules contribute to improving the accuracy of the proposed method.

Fig. 8: Classification results of the IP dataset. (a)–(c) Number (\\(L\\)) of supervised samples is 5, 10, and 15 in each class, respectively.

Fig. 9: Classification results of the UP dataset. (a)–(e) Number (\\(L\\)) of supervised samples is 5, 10, 15, 20, and 25 in each class, respectively.

| Number (\\(L\\)) of supervised samples | 5 | 10 | 15 |
| --- | --- | --- | --- |
| Support Vector Machine (SVM) | 50.23 ± 1.74 | 55.56 ± 2.04 | 58.58 ± 0.80 |
| DFSL+SVM | 64.58 ± 2.78 | 75.53 ± 1.89 | 79.98 ± 2.23 |
| SC\\({}^{3}\\)SVM | 55.42 ± 0.35 | 60.86 ± 5.08 | 67.24 ± 0.47 |
| DFSL+NN | 67.84 ± 1.29 | 76.49 ± 1.44 | 78.62 ± 1.59 |
| SS-LPSVM | 56.95 ± 0.95 | 64.74 ± 0.39 | 78.76 ± 0.04 |
| DBMA | 69.76 ± 2.32 | 79.42 ± 0.25 | 82.28 ± 1.84 |
| Laplacian SVM | 52.31 ± 0.67 | 56.36 ± 0.71 | 59.99 ± 0.65 |
| KNN+SNI | 56.39 ± 1.03 | 74.88 ± 0.54 | 78.92 ± 0.61 |
| MSDN | 69.15 ± 2.24 | 78.96 ± 2.16 | 81.79 ± 2.05 |
| MLR+RS | 55.38 ± 3.98 | 69.28 ± 2.63 | 75.15 ± 1.43 |
| Transductive SVM | 62.57 ± 0.23 | 63.45 ± 0.17 | 65.42 ± 0.02 |
| 3D-CNN | 63.54 ± 2.72 | 71.25 ± 1.64 | 76.25 ± 2.17 |
| MSDN-SA | 69.87 ± 2.13 | 79.54 ± 2.01 | 82.36 ± 1.96 |
| SVM+Siamese-CNN | 10.02 ± 1.48 | 17.71 ± 4.90 | 44.00 ± 5.73 |
| GPN | **70.45 ± 1.84** | **80.01 ± 1.72** | **82.97 ± 1.63** |

| Number (\\(L\\)) of supervised samples | 5 | 10 | 15 | 20 | 25 |
| --- | --- | --- | --- | --- | --- |
| Support Vector Machine (SVM) | 53.73 ± 1.30 | 61.53 ± 1.14 | 60.43 ± 0.94 | 64.89 ± 1.14 | 68.01 ± 2.62 |
| DFSL+SVM | 72.57 ± 3.93 | 84.56 ± 1.83 | 87.23 ± 1.38 | 90.69 ± 1.29 | 93.08 ± 0.92 |
| SC\\({}^{3}\\)SVM | 56.76 ± 2.28 | 64.25 ± 0.40 | 66.87 ± 0.37 | 68.24 ± 1.18 | 69.45 ± 2.19 |
| DFSL+NN | 80.81 ± 3.12 | 84.79 ± 2.27 | 86.68 ± 2.61 | 89.59 ± 1.05 | 91.11 ± 0.83 |
| SS-LPSVM | 69.60 ± 2.30 | 75.88 ± 0.22 | 80.67 ± 1.21 | 78.41 ± 0.26 | 85.56 ± 0.09 |
| DBMA | 80.89 ± 1.77 | 85.76 ± 1.46 | 89.07 ± 1.39 | 92.71 ± 1.61 | 94.28 ± 1.31 |
| Laplacian SVM | 65.72 ± 0.34 | 68.26 ± 2.20 | 68.34 ± 0.29 | 65.91 ± 0.45 | 68.88 ± 1.34 |
| KNN+SNI | 70.21 ± 1.29 | 78.97 ± 2.33 | 82.56 ± 0.51 | 85.18 ± 0.65 | 86.26 ± 0.37 |
| MSDN | 79.59 ± 1.95 | 84.63 ± 1.64 | 88.16 ± 1.48 | 92.89 ± 1.58 | 93.56 ± 1.37 |
| MLR+RS | 69.73 ± 3.15 | 80.30 ± 2.54 | 84.10 ± 1.94 | 83.52 ± 2.13 | 87.97 ± 1.69 |
| Transductive SVM | 63.43 ± 1.22 | 63.73 ± 0.45 | 68.45 ± 1.07 | 73.72 ± 0.27 | 69.96 ± 1.39 |
| 3D-CNN | 71.58 ± 3.58 | 79.63 ± 1.75 | 83.89 ± 2.93 | 85.98 ± 1.76 | 89.56 ± 1.20 |
| MSDN-SA | 80.94 ± 1.86 | 85.84 ± 1.59 | 89.18 ± 1.36 | 92.76 ± 1.45 | 94.35 ± 1.26 |
| SVM+Siamese-CNN | 23.68 ± 6.34 | 66.64 ± 2.37 | 68.35 ± 4.70 | 78.43 ± 1.93 | 72.87 ± 7.36 |
| GPN | **81.31 ± 1.60** | **86.65 ± 1.33** | **89.81 ± 1.16** | **93.48 ± 1.21** | **94.94 ± 1.13** |

TABLE: Average ± STD (OA, %) of Ten Runs for the IP Dataset in the Ablation Study

| Number (\\(L\\)) of supervised samples | 5 | 10 | 15 |
| --- | --- | --- | --- |
| The proposed GPN | 70.45 ± 1.84 | 80.01 ± 1.72 | 82.97 ± 1.63 |
| Traditional triplet learning in place of global representation learning | 69.49 ± 2.45 | 79.08 ± 2.23 | 82.06 ± 1.95 |
| A normal CNN in place of the dense CNN | 70.04 ± 2.08 | 79.82 ± 1.89 | 82.63 ± 1.74 |
| A normal CNN in place of the SSAN | 69.87 ± 2.24 | 79.63 ± 2.03 | 82.44 ± 1.86 |
| The dense CNN used alone | 63.62 ± 2.84 | 76.11 ± 2.56 | 79.12 ± 2.19 |

Specifically, when the dense branch was replaced by a normal CNN, the accuracy declined slightly, whereas the accuracy declined the most when global representation learning was replaced. In other words, the accuracy was worst when global representation learning was replaced, and the accuracy when replacing the dense CNN was better than when replacing the SSAN or global representation learning. Hence, it can be inferred that the strategy of global representation learning contributes the most to improving the accuracy, whereas the SSAN and the dense CNN take the second and third places, respectively. It should be emphasized, however, that all three modules are effective for improving the classification accuracy. In particular, the experiments that directly applied the dense CNN and the SSAN were conducted without global representation learning, and the accuracy decline in Tables VIII–X when global representation learning was replaced demonstrates the effectiveness of global representation learning. It is worth noting that, in the ablation study, the proposed method is not compared to existing methods but to the method with one module replaced by a normal module. For example, when the dense CNN was replaced by a normal CNN, the method used for comparison still included global representation learning and the SSAN, so the difference in accuracy was due solely to the absence of the dense CNN, which is less obvious than a comparison to existing methods (e.g., SVM and SC\\({}^{3}\\)SVM). In addition, the dense CNN was used alone for supervised classification. The results are shown in Tables VIII–X. The accuracy when using the dense CNN alone is lower than when using two or three modules simultaneously, which confirms the effectiveness of the proposed GPN.

### _Time Consumption_

The time consumption of the proposed method was tested and compared with that of some existing methods using the IP dataset.
The computer environment for the time tests was the same for all methods in this article. The time consumption results of the different methods are shown in Table XI. They suggest that the time consumption of the GPN is similar to that of several popular methods.

## V Conclusion

This study proposed a GPN for hyperspectral image classification using limited supervised samples (i.e., few-shot hyperspectral image classification). Experiments were conducted for verification, and the main conclusions are as follows.

1. The accuracy of the GPN is better than that of existing popular methods under the condition of small samples. The comparative analysis suggests that the GPN is state-of-the-art for solving the problem of hyperspectral image classification using limited supervised samples.
2. The ablation study demonstrates that all three modules (global representation learning, the dense branch, and the SSAN branch) are effective for improving the accuracy, and the strategy of global representation learning contributes the most.
3. The time expenditure of the proposed method (GPN) and of several existing popular methods is similar in the same operational environment.

In follow-up studies, more hyperspectral datasets should be used to test the effectiveness of the proposed GPN for few-shot hyperspectral image classification.

## Acknowledgment

The authors would like to thank the National Center for Airborne Laser Mapping for providing the "Houston" dataset, the Space Application Laboratory, Department of Advanced Interdisciplinary Studies, University of Tokyo for providing the "Chikusei" dataset, and Grupo de Inteligencia Computacional for providing other datasets.

## References

* [1] H. Su, B. Zhao, Q. Du, P. Du, and Z. Xue, "Multi-feature dictionary learning for collaborative representation classification of hyperspectral imagery," _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 4, pp. 2467-2484, Apr. 2018. * [2] H. Su, B. Yong, and Q. Du, "Hyperspectral band selection using improved firefly algorithm," _IEEE Geosci. Remote Sens. Lett._, vol. 13, no. 1, pp. 68-72, Jan. 2016. * [3] F. Melgani and L. Bruzzone, "Classification of hyperspectral remote sensing images with support vector machines," _IEEE Trans. Geosci. Remote Sens._, vol. 42, no. 8, pp. 1778-1790, Aug. 2004. * [4] L. Gao _et al._, "Subspace-based support vector machines for hyperspectral image classification," _IEEE Geosci. Remote Sens. Lett._, vol. 12, no. 2, pp. 349-353, Feb. 2015. * [5] J. A. Benediktsson, P. H. Swain, and O. K. Ersoy, "Neural network approaches versus statistical methods in classification of multisource remote sensing data," in _Proc. 12th Can. Symp. Remote Sens. Geosci. Remote Sens. Symp._, Jul. 1989, vol. 2, pp. 489-492. * [6] P. O. Gislason, J. A. Benediktsson, and J. R. Sveinsson, "Random forests for land cover classification," _Pattern Recognit. Lett._, vol. 27, no. 4, pp. 294-300, 2006. * [7] J. Ham, Y. C. Chen, M. M. Crawford, and J. Ghosh, "Investigation of the random forest framework for classification of hyperspectral data," _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 3, pp. 492-501, Mar. 2005. * [8] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," _Science_, vol. 313, no. 5786, pp. 504-507, 2006, doi: 10.1126/science.1127647. * [9] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, "Deep learning-based classification of hyperspectral data," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 7, no. 6, pp.
2094-2107, Jun. 2014. * [10] P. Du _et al._, \"Advances of four machine learning methods for spatial data handling: A review,\" _J. Geovis. Spatial Anal._, vol. 4, 2020, Art. no. 13. [Online]. Available: [https://doi.org/10.1007/s41651-020-00048-5](https://doi.org/10.1007/s41651-020-00048-5) * [11] H. Su, Y. Yu, Q. Du, and P. Du, \"Ensemble learning for hyperspectral image classification using tangent collaborative representation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 6, pp. 3778-3790, Jun. 2020. * [12] H. Su, B. Zhao, Q. Du, and P. Du, \"Kernel collaborative representation with local correlation features for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 2, pp. 1230-1241, Feb. 2019. * [13] W. Zhao, Z. Guo, J. Yue, X. Zhang, and L. Luo, \"On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery,\" _Int. J. Remote Sens._, vol. 36, no. 13, pp. 3368-3379, 2015, doi: 10.1080/2150704X.215.1062157. * [14] J. Yue, S. Mao, and M. Li, \"A deep learning framework for hyperspectral image classification using spatial pyramid pooling,\" _Remote Sens. Lett._, vol. 7, no. 9, pp. 875-884, 2016, doi: 10.1080/2150704X.216.1193793. * [15] V. Singhal and A. Majumdar, \"Row-sparse discriminative deep dictionary learning for hyperspectral image classification,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 12, pp. 5019-5028, Dec. 2018. * [16] M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, \"A new deep convolutional neural network for fast hyperspectral image classification,\" _ISPRS J. Photogram. Remote Sens._, vol. 145, no. A, pp. 120-147, 2018, doi: 10.1016/j.isprisps.2017.11.021. * [17] W. Zhao and S. Du, \"Spectral-spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 8, pp. 4544-4554, Aug. 2016. * [18] P. Ghamisi _et al._, \"New frontiers in spectral-spatial hyperspectral image classification the latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning,\" _IEEE Geosci. Remote Sens. Mag._, vol. 6, no. 3, pp. 10-43, Sep. 2018. * [19] J. Li, B. Xi, Q. Du, R. Song, Y. Li, and G. Ren, \"Deep kernel extreme-learning machine for the spectral-spatial classification of hyperspectral imagery,\" _Remote Sens._, vol. 10, no. 12, 2018, Art. no. 2036, doi: 10.3390/rs10122036. * [20] A. Song, J. Choi, Y. Han, and Y. Kim, \"Change detection in hyperspectral images using recurrent 3D fully convolutional networks,\" _Remote Sens._, vol. 10, no. 11, 2018, Art. no. 1827, doi: 10.3390/rs1011827. * [21] A. Ben Hamida, A. Benoit, P. Lambert, and C. Ben Amar, \"3-D deep learning approach for remote sensing image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 8, pp. 4420-4434, Aug. 2018. * [22] Y. Xu, L. Zhang, B. Du, and F. Zhang, \"Spectral-spatial unified networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 10, pp. 5893-5909, Oct. 2018. * [23] H. Shen, M. Jiang, I. Li, Q. Yuan, Y. Wei, and L. Zhang, \"Spatial-spectral fusion by combining deep learning and variational model,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 8, pp. 6169-6181, Aug. 2019. * [24] B. Liu, X. Yu, A. Yu, P. Zhang, G. Wan, and R. Wang, \"Deep few-shot learning for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 4, pp. 2299-2304, Apr. 
2019. * [25] S. Xu, J. Li, M. Khodadadzadeh, A. Marinoni, P. Gamba, and B. Li, \"Abundance-indicated subspace for hyperspectral classification with limited training samples,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 12, no. 4, pp. 1265-1278, Apr. 2019. * [26] J. Choe, S. Park, K. Kim, J. H. Park, D. Kim, and H. Shim, \"Face generation for low-shot learning using generative adversarial networks,\" in _Proc. IEEE Int. Conf. Comput. Vis. Workshops_, 2017, pp. 1940-1948. * [27] X. Dong, L. Zhu, D. Zhang, Y. Yang, and F. Wu, \"Fast parameter adaptation for few-shot image captioning and visual question answering,\" in _Proc. ACM Multimedia Conf._, Seoul, South Korea, 2018, pp. 54-62. * [28] A. Shaban, S. Bansal, Z. Liu, I. Essa, and B. Boots, \"One-shot learning for semantic segmentation,\" in _Proc. Brit. Mach. Vis. Conf._, London, U.K., Sep. 2017. Accessed:Feb. 22, 2020. [Online]. Available: [https://kopernio.com/viewer4Moi](https://kopernio.com/viewer4Moi) = arXiv:1709.03410v1kroute = 6 * [29] E. Schwartz _et al._, \"Delta-encoder: An effective sample synthesis method for few-shot object recognition,\" in _Proc. Neural Inf. Process. Syst._, Montreal, QC, Canada, Dec. 2018, pp. 2850-2860. * [30] Y. Wang _et al._, \"Generalizing from a few examples: A survey on few-shot learning,\" 2019. Accessed: Feb. 1, 2020. [Online]. Available: [https://arxiv.org/pdf/1904.05046.pdf](https://arxiv.org/pdf/1904.05046.pdf) * [31] W. Li, G. Wu, F. Zhang, and Q. Du, \"Hyperspectral image classification using deep pixel-pair features,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 2, pp. 844-853, Feb. 2017. * [32] R. Kemker and C. Kanan, \"Self-taught feature learning for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 5, pp. 2693-2705, May 2017. * [33] S. Mei, J. Ji, J. Hou, X. Li, and Q. Du, \"Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 8, pp. 4520-4533, Aug. 2017. * [34] B. Liu, X. Yu, P. Zhang, X. Tan, A. Yu, and Z. Xue, \"A semi-supervised convolutional neural network for hyperspectral image classification,\" _Remote Sens. Lett._, vol. 8, no. 9, pp. 839-848, Sep. 2017. * [35] B. Liu, X. Yu, P. Zhang, A. Yu, Q. Fu, and X. Wei, \"Supervised deep feature extraction for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 4, pp. 1909-1921, Apr. 2018. * [36] H. Tang, Y. Li, X. Han, Q. Huang, and W. Xie, \"A Spatial-spectral prototypical network for hyperspectral remote sensing image,\" _IEEE Geosci. Remote Sens. Lett._, vol. 17, no. 1, pp. 167-171, Jan. 2020. * [37] A. Li, T. Luo, T. Xiang, W. Huang, and L. Wang, \"Few-shot learning with global class representations,\" in _Proc. IEEE Int. Conf. Comput. Vis. Workshops_, 2019, pp. 9715-9724. * [38] G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger, \"Densely connected convolutional networks,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._, 2017, pp. 2261-2269. * [39] X. Li, W. Wang, X. Hu, and J. Yang, \"Selective kernel networks,\" 2019. Accessed: Dec. 22, 2019. [Online]. Available: [https://arxiv.org/pdf/1903.06586.pdf](https://arxiv.org/pdf/1903.06586.pdf) * [40] F. Yu* [45] M. Belkin, P. Niyogi and V. Sindhwani, \"Manifold regularization: A geometric framework for learning from labeled and unlabeled examples,\" _J. Mach. Learn. Res._, vol. 7, pp. 2399-2434, 2006. * [46] K. Tan, J. Hu, J. Li, and P. 
Du, \"A novel semi-supervised hyperspectral image classification approach based on spatial neighborhood information and classifier combination,\" _ISPRS J. Photogramm. Remote Sens._, vol. 105, pp. 19-29, 2015, doi: 10.1016/j.ISPRS.2015.03.006. * [47] I. Dopido, J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas Dias, and J. A. Benediktsson, \"Semisupervised self-learning for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 71, pp. 4032-4044, Jul. 2013. * [48] T. Joachims, \"Transductive inference for text classification using support vector machines,\" in _Proc. 16th Int. Conf. Mach. Learn._, Bled, Slovenia, Jun. 1999, pp. 200-209. * [49] E. Hoffer and N. Ailon, \"Deep metric learning using triplet network,\" in _Lecture Notes in Computer Science_, A. Feragen _et al._, Eds. Berlin, Germany: Springer, 2015, pp. 84-92. \\begin{tabular}{c c} & Chengye Zhang received the B.Eng. degree in remote sensing from Beihang University, Beijing, China, in 2013, and the Ph.D. degree in GIS from Peking University, Beijing, China, in 2018. Since 2018, he has been an Assistant Professor with the China University of Mining and Technology, Beijing, China. He has been conducting research in the area of hyperspectral remote sensing. Dr. Zhang is a Reviewer of the IEEE J-STARS, _ISPRS Journal of Photogrammetry and Remote Sensing, International Journal of Remote Sensing, Remote Sensing Letters_, and _Journal of Applied Remote Sensing_. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jun Yue received the B.Eng. degree in geodesy from Wuhan University, Wuhan, China, in 2013, and the Ph.D. degree in GIS from Peking University, Beijing, China, in 2018. He is currently an Assistant Professor with the Changsha University of Science and Technology, Changsha, China. His current research interests include remote sensing, image processing, and object detection. In particular, his interests include few-shot learning, reinforcement learning, neural architecture search, multimodal learning, and hyperspectral image understanding. Dr. Yue is a Reviewer of the IEEE Transactions on Geoscience and Remote Sensing, ISPRS Journal of Photogrammetry and Remote Sensing, Information Sciences, International Journal of Remote Sensing, Remote Sensing Letters, IEEE Access, IEEE Transactions on Biomedical Engineering, IEEE Journal of Biomedical and Health Informatics, _Plos One, Journal of Supercomputing, Infrared Physics and Technology, Journal of Supercomputing, Current Medical Imaging Reviews, Energy Conversion and Management, Transactions of the ASABE, and Acta Oceanologica Sinica_. \\\\ \\end{tabular} \\begin{tabular}{c c} & Qiming Qin received the B.S. degree in geography from Nanjing Normal University, Nanjing, China, in 1982, the M.S. degree from Shanavil Normal University, Xi'an, China, in 1987, and the Ph.D. degree from Peking University, Beijing, China, in 1990. He is currently a Professor with the School of Earth and Space Sciences, Peking University, and conducting research in remote sensing and GIS. He has authored or coauthored more than 100 peer-reviewed articles. \\\\ \\end{tabular}
This article proposes a global prototypical network (GPN) to solve the problem of hyperspectral image classification using limited supervised samples (i.e., the few-shot problem). In the proposed method, a strategy of global representation learning is adopted to train a network (\\(f_{\\theta}\\)) that transfers the samples from the original data space to an embedding-feature space. In the new feature space, a vector called the global prototypical representation is learned for each class. For the network (\\(f_{\\theta}\\)), we designed a deep architecture consisting of a dense convolutional network and a spectral-spatial attention network. For classification, the similarities between the unclassified samples and the global prototypical representation of each class are evaluated, and the classification is completed by a nearest neighbor classifier. Several public hyperspectral images were used to verify the proposed GPN. The results showed that the proposed GPN obtained better overall accuracy than existing methods. In addition, the time expenditure of the proposed GPN was similar to that of several existing popular methods. In conclusion, the proposed GPN is state-of-the-art for solving the problem of hyperspectral image classification using limited supervised samples.

Index Terms: Deep learning, dense convolution, global representations, hyperspectral image classification, small number of samples, spectral-spatial attention.
# Subspace Structure Regularized Nonnegative Matrix Factorization for Hyperspectral Unmixing Lei Zhou \\({}^{\\copyright}\\), Xueni Zhang, Jianbo Wang, Xiao Bai\\({}^{\\copyright}\\), Lei Tong \\({}^{\\copyright}\\), Liang Zhang, Jun Zhou \\({}^{\\copyright}\\), and Edwin Hancock \\({}^{\\copyright}\\) Manuscript received April 27, 2020; revised June 16, 2020 and July 10, 2020; accepted July 14, 2020. Date of publication July 22, 2020; date of current version August 7, 2020. The code for this article is available on [https://github.com/z/thuaa/Subspace-Regularized-Unmixing](https://github.com/z/thuaa/Subspace-Regularized-Unmixing). _(Lei Zhou, Xueni Zhang, and Jianbo Wang are co-first authors.) (Corresponding authors: Xiao Bai; Liang Zhang.)_ Lei Zhou, Xueni Zhang, Xiao Bai, and Liang Zhang are with the School of Computer Science and Engineering, Beijing Advanced Innovation Center for Big Data and Brain Computing, State Key Laboratory of Software Development Environment, Jiangxi Research Institute, Beihang University, Beijing 100191, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Jianbo Wang is with the First Clinical Medical College of Nanchang University, Nanchang 330006, China (e-mail: [email protected]). Lei Tong is with the Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China (e-mail: [email protected]). Jun Zhou is with the School of Information and Communication Technology, Griffith University, Nathan, Qld 4111, Australia (e-mail: [email protected]). Edwin Hancock is with the Department of Computer Science, University of York, YO10 5DD York, U.K. (e-mail: [email protected]). Digital Object Identifier 10.1109/ISTARS.2020.3011257 ## I Introduction Hyperspectral image (HSI) analysis [1, 2, 3, 4, 5] is one of the fastest-growing technologies in recent years. However, due to low spatial resolution or specific imaging mechanism, the acquired HSIs often contain mixed pixels which span surface areas containing several types of materials. To effectively exploit hyperspectral data, hyperspectral unmixing (HU) [6, 7, 8, 9] has become a basic preprocessing for effective HSI analysis. The objective of HU is to decompose mixed pixels into components with the reference spectral signatures of each of the materials present (endmembers), and to determine their corresponding fractions (abundances). Existing unmixing algorithms mainly exploit one of two mixture models--namely, a linear model or a nonlinear model. Nonlinear mixing models [10, 11] assume that the observed pixel is mixed by a nonlinear function of the component spectral signatures of the endmembers which are weighted by the corresponding abundances. However, the process of nonlinear combination is usually difficult to model physically and to recover in real-world applications. In recent years, linear mixing model (LMM) [12] has therefore been more widely adopted in most works on HU. The reason for this is the balance between model accuracy and tractability. LMM is based on the assumption that different endmembers are mutually independent, so that the observed HSI is a linear combination of the endmembers and their corresponding abundances. Abundant LMM unmixing algorithms have been proposed. 
Some of these focus on endmember extraction from statistical and geometrical aspects, such as the pixel purity index [13], N-FINDR [14], alternating projected subgradients [15], vertex component analysis [16], independent component analysis [17], and minimum-volume-based unmixing algorithms [18], etc. Other methods address the problem of abundance estimation under the assumption that the endmembers are available [19]. With the almost universal success of deep learning, there are also examples of deep neural network-based HU methods [20, 21, 22]. However, these methods depend on the availability of a large amount of training data with groundtruth. In this article, we focus on blind unmixing, which learns the endmembers as well as their abundances simultaneously. Nonnegative matrix factorization (NMF) [23, 24] is the most commonly used method for blind source separation. It aims to decompose mixed data through the product of two nonnegative matrices. This is done by minimizing the reconstruction error as measured by the Euclidean distance. However, the solution of NMF is usually not unique if there are no further constraints [25]. To alleviate this problem, two kinds of constraints are commonly imposed on the abundance matrix. The first is the sparsity constraint on the abundance matrix. This is based on the fact that the pixels of an HSI are mostly mixed by a relatively small number of endmembers. Therefore, [26, 27] presented a sparse coding method on the abundance matrix for HU. In this article, \\(L_{p}\\) denotes the \\(p\\) norm. In fact, an \\(L_{p}\\) \\((0\\leq p\\leq 1)\\) regularizer has the effect of leading to a sparse solution. Moreover, the sparsity of the \\(L_{p}\\) \\((\\frac{1}{2}\\leq p\\leq 1)\\) solution is negatively correlated with \\(p\\), whereas the sparsity of the solution for \\(L_{p}\\) \\((0\\leq p\\leq\\frac{1}{2})\\) is not sensitive to changes in \\(p\\). Therefore, Qian _et al._ [28] utilized the \\(L_{1/2}\\) regularizer on the abundance matrix to constrain the sparseness. It has been proved that the \\(L_{1/2}\\) regularizer is more efficient in computation than the \\(L_{1}\\) regularizer, and its solution is also closer to the groundtruth. In addition, to avoid the influence of noise, many norm-based robust NMF methods have been proposed. The \\(L_{2,1}\\) norm is commonly integrated into sparse NMF to achieve robustness to pixel noise and outlier rejection since it is rotationally invariant [19, 29, 30]. Additionally, the \\(L_{1,2}\\) norm is also effective for solving band noise problems [31, 32]. The second type of constraint incorporates information concerning the spatial distribution into abundance estimation, and it has proved useful in improving the unmixing results. This is due to the fact that endmembers are distributed to form coherent geometric structures, and two correlated pixels usually have similar fractional abundances for the same endmembers. Therefore, the total variation (TV) regularizer [33, 34, 35] was incorporated to promote piece-wise smooth transitions in the abundance matrix for neighboring pixels of the same endmember category. In [36], abundance separation and smoothness constrained NMF was proposed for HU. The abundance separation acts on the spectral domain, and the abundance smoothness constraint is used in the spatial domain to exploit the spatial information. Due to the spatial structure learning ability of manifold methods, [37] incorporated manifold structure learning into the NMF model to separate similar neighboring pixels.
Inspired by the denoising method [38], Lu _et al._ [39] proposed a structure constrained sparse NMF method that exploits a clustering-based approach to find the potential structure information. In [40], a clustered multitask network was proposed to solve the unmixing problem, which also uses a clustering method to explore the distribution. Recently, spatial group sparsity regularized NMF (SGSNMF) [41] utilized superpixels obtained from image segmentation as a spatial prior to promote HU. Although the above methods try to exploit the spatial distribution of pixels, all of them explore the correlations of pixels within a local neighborhood, and most of these neighborhoods are defined manually. However, each material usually occurs in many different regions of the same HSI. Thus, the spatial distribution of a particular material is not limited to local structures. Moreover, it is obvious that the distributions of materials may be quite diverse in different images. According to [42], each kind of land-cover material in a remotely sensed HSI can be treated as a different subspace. The materials might have different spectra because of varying illumination, topography, and other imaging conditions. Therefore, the spatial distribution information can be captured by the subspace structure [43]. This not only represents the global distribution of the materials but can also be learned from the corresponding image. Motivated by this fact, we propose a new method aimed at incorporating subspace structure regularization into the sparse NMF-based unmixing process. In contrast to deep subspace learning methods [44, 45], here we utilize a low-rank representation (LRR) method [46, 47] to learn the similarity graph that represents the subspace structures of all materials and contains the correlations of all pixel pairs. Since the LRR constraint can be incorporated alongside the NMF constraints, this offers the advantage that we can optimize the subspace learning and the HU simultaneously. As a result, the spatial prior is integrated through regularization into sparse NMF and can be used to perform the HU. Furthermore, based on the assumption that the abundance matrix can be seen as a set of denoised feature vectors of the original image, the learned abundance matrix can be used to better learn the latent subspace structure. Hence, we introduce a novel joint framework to simultaneously optimize HU and subspace structure learning in a manner that leads to mutual enhancement. The main contributions of this article are summarized as follows.

1. We propose a new HU method that learns the subspace structure of material reflectance to capture the global correlation of all pixels. The global similarity graph of the materials is then used as a robust spatial prior to improve the quality of HU.
2. We design an objective function that integrates the spectral-spatial-based unmixing and subspace structure learning into a single unified framework, in which they can be jointly optimized by an iterative algorithm. The joint framework can not only enhance the unmixing performance but also provide better subspace clustering results.
3. Experiments on both simulated and real-world HSI datasets indicate the superiority of the proposed method, which achieves comparable performance to state-of-the-art methods for HU.

The remainder of this article is structured as follows. Section II describes the background of the LMM and NMF algorithms. Section III presents our proposed method and demonstrates the implementation details.
The experimental results on simulated data and real-world HSI data are presented in Section IV. Finally, we conclude this article in Section V.

## II Background

### _NMF for HU_

The classic LMM for HU is based on the assumption that the observed HSI is a linear mixture of several endmembers. Consider an HSI \\(\\mathbf{Y}\\in\\mathbb{R}^{L\\times N}\\), where the number of wavelength-indexed bands is \\(L\\) and the number of pixels is \\(N\\). The original data \\(\\mathbf{Y}\\) can then be reconstructed by a linear combination of endmembers as follows:

\\[\\mathbf{Y}=\\mathbf{A}\\mathbf{S}+\\mathbf{E} \\tag{1}\\]

where \\(\\mathbf{A}\\in\\mathbb{R}^{L\\times P}\\) denotes the endmember matrix, in which each column represents the spectral signature of the corresponding endmember and \\(P\\) is the number of endmembers; \\(\\mathbf{S}\\in\\mathbb{R}^{P\\times N}\\) denotes the abundance matrix, in which each column contains the fractions of all endmembers in the corresponding pixel; and \\(\\mathbf{E}\\) is additive Gaussian white noise. Since the goal of HU is to estimate the endmember and abundance matrices simultaneously, only the matrix \\(\\mathbf{Y}\\) is known in this task, and the matrices \\(\\mathbf{A}\\) and \\(\\mathbf{S}\\) are the unknown targets of unmixing. To avoid a large solution space, two commonly adopted constraints can be imposed on the matrices \\(\\mathbf{A}\\) and \\(\\mathbf{S}\\) [48]. The first is the so-called abundance sum-to-one constraint, which requires the proportions of all endmembers in each pixel to sum to one. The other is the nonnegativity constraint, which requires that the elements in both the endmember and abundance matrices be greater than or equal to zero. With the nonnegativity constraint, NMF is a good way to decompose the original image into the endmember and abundance matrices simultaneously. By reconstructing the original image \\(\\mathbf{Y}\\) through the endmember matrix \\(\\mathbf{A}\\) and abundance matrix \\(\\mathbf{S}\\), the optimization target can be defined as

\\[C(\\mathbf{A},\\mathbf{S})=\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}\\quad\\mathrm{s.t.}\\ \\mathbf{A}>0,\\mathbf{S}>0 \\tag{2}\\]

where \\(\\|\\cdot\\|_{F}\\) represents the Frobenius norm. The multiplicative iterative algorithm is commonly used to solve this objective function. When applied to (2), the multiplicative rule leads to the following two interleaved equations:

\\[\\mathbf{A}=\\mathbf{A}.*\\mathbf{Y}\\mathbf{S}^{T}./\\mathbf{A}\\mathbf{S}\\mathbf{S}^{T} \\tag{3}\\]

\\[\\mathbf{S}=\\mathbf{S}.*\\mathbf{A}^{T}\\mathbf{Y}./\\mathbf{A}^{T}\\mathbf{A}\\mathbf{S} \\tag{4}\\]

where \\((\\cdot)^{T}\\) denotes matrix transposition, \\(.*\\) denotes element-wise multiplication, and \\(./\\) denotes element-wise division.
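As an illustration of the plain multiplicative rules (3) and (4), before any sparsity or spatial constraint is added, the following is a minimal NumPy sketch on synthetic data. The data sizes, the small epsilon guarding against division by zero, and the fixed iteration count are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, N = 50, 4, 200            # bands, endmembers, pixels (toy sizes)

# Synthetic nonnegative data generated from a ground-truth A and S.
A_true = rng.random((L, P))
S_true = rng.dirichlet(np.ones(P), size=N).T          # columns sum to one
Y = A_true @ S_true

# Nonnegative random initialization of A and S.
A = rng.random((L, P))
S = rng.random((P, N))
eps = 1e-9                                            # avoids division by zero

for _ in range(500):
    A *= (Y @ S.T) / (A @ S @ S.T + eps)              # update rule (3)
    S *= (A.T @ Y) / (A.T @ A @ S + eps)              # update rule (4)

print("relative reconstruction error:",
      np.linalg.norm(Y - A @ S) / np.linalg.norm(Y))
```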
### _NMF With Sparsity Constraints_

There are several drawbacks of the traditional NMF model (2). First, it is nonconvex, which means it is hard to obtain the globally optimal solution. Second, the solution of this objective function is not unique, because \\(\\mathbf{A}\\mathbf{S}\\) can be replaced by \\((\\mathbf{A}\\mathbf{D})(\\mathbf{D}^{-1}\\mathbf{S})\\) for any nonnegative invertible matrix \\(\\mathbf{D}\\). Therefore, the classical NMF model makes the unmixing process unstable. To solve this problem, more computationally tractable constraints are incorporated into NMF. Since each endmember does not occur over the entire image, in most cases the abundance map is sparse. When NMF is subject to a sparsity constraint, the objective function, consisting of the reconstruction error and a sparsity term, can be defined as follows:

\\[C(\\mathbf{A},\\mathbf{S})=\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda f(\\mathbf{S}) \\tag{5}\\]

where \\(\\lambda\\) is a regularization parameter. Many varieties of the regularizer \\(f(\\cdot)\\) exist that encourage sparsity. In this article, we choose to use the \\(L_{1/2}\\) regularizer, which is an alternative to the \\(L_{1}\\) regularizer. It has been proved in [28] that the \\(L_{1/2}\\) regularizer is more efficient in computation than the \\(L_{1}\\) regularizer, and its solution is closer to the groundtruth. The \\(L_{1/2}\\) regularized NMF model is defined as

\\[C(\\mathbf{A},\\mathbf{S})=\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}. \\tag{6}\\]

## III Approach

In this section, we propose a new method that utilizes both the sparsity constraint and spatial information. First, we describe the spatial information used, which is obtained by learning the subspace structure from the original image. Then a joint framework is proposed to simultaneously perform HU and subspace structure learning.

### _Proposed Method_

The traditional spectral-based NMF methods for HU usually process the HSI pixels independently, ignoring the spatial correlation of pixels. However, as mentioned in Section I, spatial autocorrelation is important prior knowledge for boosting the performance of HU. In previous works, several spatial regularization terms have been introduced. They are based on the assumption that pixels distributed in a local group are more likely to have the same mixed pattern in the abundance matrix. By benefiting from the spatial structure constraints, the performance of HU has been greatly improved. However, these methods only utilize the local similarity of image pixels to achieve good performance while ignoring the global similarity over the entire image. In most cases, specific materials are distributed in different regions of the HSI. Hence, the global structure similarity should be considered in the unmixing task. Fig. 1 is an illustration of HU models that take different spatial regularization into consideration. By reshaping the original 3-D HSI cube into a 2-D matrix where each column denotes the spectral signature of a pixel, the observed image is expected to be approximated by two matrices: the endmember matrix and the abundance matrix. Since the endmembers are distributed in certain structures in the original images, such structure information is expected to be kept in the abundance matrix. Several spatial structures used in recent works are compared on the right of Fig. 1. In this figure, the pixels that consist of the same set of endmembers are represented in one color. We can see that there are three materials in the observed HSI, represented as "blue," "yellow," and "green," respectively, and they occur in different regions of the whole image. Considering the blue pixel marked with a black box in the original image, different methods capture different spatial information with different spatial structures. Fig. 1(a) shows the spatial information used by the TV regularizer. It only correlates the four neighbors of a pixel to promote piece-wise smoothness. Instead of using Euclidean distance to measure the spatial structure, the manifold regularizer in Fig.
1(b) tries to exploit the latent manifold structure of the data using a heat kernel. As for the spatial group sparsity regularizer shown in Fig. 1(c), superpixels obtained by segmentation are used to represent the spatial neighborhood. However, as mentioned before, our proposed subspace structure regularizer considers the correlation of the pixels over the entire image. It aims to explore the global structure of the data to enhance the HU process, as shown in Fig. 1(d). Subspace structure learning methods are based on the self-representation property that data points lying in the same subspace can be approximated as a linear combination of the data points from the same subspace. Therefore, the subspace structure of an HSI can capture the global correlation of similar pixels, which can be used as a robust spatial prior for unmixing. In our research, we make the assumption that each type of endmember forms a subspace, and all variations of endmembers of the same type form the data points in that subspace. To exploit the expected global subspace structure, we first introduce LRR, which is a classic subspace learning method. Consider the dataset \\(\\mathbf{Y}=[\\mathbf{y}_{1},\\mathbf{y}_{2},\\ldots,\\mathbf{y}_{N}]\\) in \\(\\mathbb{R}^{L}\\). According to the self-representation property, each data point can be represented by the data themselves

\\[\\mathbf{Y}=\\mathbf{Y}\\mathbf{Z}\\]

where \\(\\mathbf{Z}=[\\mathbf{z}_{1},\\mathbf{z}_{2},\\ldots,\\mathbf{z}_{N}]\\) is the self-representation matrix and each \\(\\mathbf{z}_{i}\\) is the representation coefficient of \\(\\mathbf{y}_{i}\\). By looking for a low-rank representation of \\(\\mathbf{Z}\\), the global structure of the data \\(\\mathbf{Y}\\) can be obtained

\\[\\min_{\\mathbf{Z}}\\ \\mathrm{rank}(\\mathbf{Z})\\quad\\mathrm{s.t.}\\quad\\mathbf{Y}=\\mathbf{Y}\\mathbf{Z} \\tag{7}\\]

whose optimal solution \\(\\mathbf{Z}^{*}\\) is called the lowest-rank representation of the data \\(\\mathbf{Y}\\). However, it is difficult to solve this optimization problem, since the rank function is discrete. As the nuclear norm is a good convex approximation of the matrix rank, the optimization problem can be transformed as follows:

\\[\\min_{\\mathbf{Z}}\\left\\|\\mathbf{Z}\\right\\|_{*}\\quad\\mathrm{s.t.}\\quad\\mathbf{Y}=\\mathbf{Y}\\mathbf{Z}. \\tag{8}\\]

Here, \\(\\left\\|\\mathbf{Z}\\right\\|_{*}\\) is the nuclear norm, which is the sum of the singular values of the matrix. Since the self-representation matrix \\(\\mathbf{Z}\\) contains the correlations of all pixels, it is natural to preserve this similarity in the abundance matrix. In other words, the pixels in the same subspace in the original image should lie in the same subspace in the abundance matrix. Based on the fact that there are many mixed pixels in HSIs, HU is widely used as a crucial preprocessing step for HSI analysis [49], since the obtained abundance can be seen as a denoised feature representation. Therefore, it is better to preserve the latent subspace structure from the unmixed abundance map instead of from the original image.
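To give a feel for how a low-rank self-representation can be computed, the sketch below solves a relaxed variant of (8) in which the equality constraint is replaced by a least-squares penalty, using proximal gradient steps with singular value thresholding as the proximal operator of the nuclear norm. This relaxation, the step size, and the toy data are illustrative assumptions; it is not the exact LRR solver referred to in the text.

```python
import numpy as np

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def relaxed_lrr(Y: np.ndarray, tau: float = 0.5, n_iter: int = 200) -> np.ndarray:
    """Minimize 0.5*||Y - Y Z||_F^2 + tau*||Z||_* by proximal gradient."""
    N = Y.shape[1]
    Z = np.zeros((N, N))
    step = 1.0 / (np.linalg.norm(Y, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = Y.T @ (Y @ Z - Y)                       # gradient of the smooth part
        Z = svt(Z - step * grad, step * tau)           # proximal (SVT) step
    return Z

# Toy data drawn from two low-dimensional subspaces of R^30.
rng = np.random.default_rng(1)
B1, B2 = rng.normal(size=(30, 3)), rng.normal(size=(30, 3))
Y = np.hstack([B1 @ rng.normal(size=(3, 40)), B2 @ rng.normal(size=(3, 40))])
Z = relaxed_lrr(Y)
print("numerical rank of Z:", np.linalg.matrix_rank(Z, tol=1e-3))
```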
By incorporating the subspace regularizer into the sparse NMF model, the optimization problem can be formulated as

\\[J(\\mathbf{A},\\mathbf{S},\\mathbf{Z})=\\min_{\\mathbf{A},\\mathbf{S}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}+\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}\\quad\\mathrm{s.t.}\\quad\\mathbf{A}\\geq 0,\\mathbf{S}\\geq 0,\\mathbf{1}_{K}^{T}\\mathbf{S}=\\mathbf{1}_{N}^{T} \\tag{9}\\]

where the first two terms are the reconstruction error and the sparsity constraint, and the third term constrains the subspace structure of the abundance matrix. Note that we also want to simultaneously learn and optimize the subspace structure. Therefore, a joint framework for HU and subspace learning can be represented as follows:

\\[J(\\mathbf{A},\\mathbf{S},\\mathbf{Z})=\\min_{\\mathbf{A},\\mathbf{S}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}+\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}+\\tau\\left\\|\\mathbf{Z}\\right\\|_{*}\\quad\\mathrm{s.t.}\\quad\\mathbf{A}\\geq 0,\\mathbf{S}\\geq 0,\\mathbf{1}_{K}^{T}\\mathbf{S}=\\mathbf{1}_{N}^{T} \\tag{10}\\]

where the first three terms are the objective of spectral-spatial HU and the last two terms learn the latent subspace structures of the materials.

Fig. 1: Illustration of the concept of our method and several alternative methods. The original images are decomposed into two matrices: the endmember matrix and the abundance matrix. When maintaining the spatial structure for each pixel in the abundance matrix, different methods utilize different strategies. Take the blue pixel marked with a black box in the original image as an example. (a) TV regularizer considers its four-neighborhood as the local structure. (b) Manifold regularizer uses a heat kernel to capture the local structure. (c) Segmentation-based regularizer learns a local neighborhood. (d) Our proposed method learns a subspace structure that represents the global distribution of each material.

### _Optimization_

Obviously, the presented optimization problem is nonconvex. To solve it iteratively, we first define an auxiliary variable \\(\\mathbf{L}\\); then the optimization problem (10) can be transformed into the following problem:

\\[J(\\mathbf{A},\\mathbf{S},\\mathbf{Z})=\\min_{\\mathbf{A},\\mathbf{S}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}+\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}+\\tau\\left\\|\\mathbf{L}\\right\\|_{*}\\quad\\mathrm{s.t.}\\quad\\mathbf{A}\\geq 0,\\mathbf{S}\\geq 0,\\mathbf{L}=\\mathbf{Z},\\mathbf{1}_{K}^{T}\\mathbf{S}=\\mathbf{1}_{N}^{T}. \\tag{11}\\]
Here, we consider the auxiliary variable \\(\\mathbf{L}\\) as a denoised version of \\(\\mathbf{Z}\\); the constraint \\(\\mathbf{L}=\\mathbf{Z}\\) can then be moved into the objective function as a penalty, and the problem can be relaxed as

\\[J(\\mathbf{A},\\mathbf{S},\\mathbf{Z})=\\min_{\\mathbf{A},\\mathbf{S}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}+\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}+\\frac{1}{2}\\left\\|\\mathbf{L}-\\mathbf{Z}\\right\\|_{F}^{2}+\\tau\\left\\|\\mathbf{L}\\right\\|_{*}\\quad\\mathrm{s.t.}\\quad\\mathbf{A}\\geq 0,\\mathbf{S}\\geq 0,\\mathbf{1}_{K}^{T}\\mathbf{S}=\\mathbf{1}_{N}^{T}. \\tag{12}\\]

Subsequently, we utilize the multiplicative iterative method [24] to solve problem (12). Four steps are iteratively updated, each with the other variables fixed: 1) endmember matrix estimation; 2) abundance matrix estimation; 3) reconstruction; and 4) low-rank self-representation learning. The details of each step are as follows.

#### III-B1 Endmember Estimation

In this step, we use the Lagrange multiplier method to estimate the endmember matrix with the other variables fixed. The objective function is then reformulated as

\\[J(\\mathbf{A})=\\min_{\\mathbf{A}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+Tr(\\mathbf{\\Psi}\\mathbf{A})\\quad\\mathrm{s.t.}\\quad\\mathbf{A}\\geq 0 \\tag{13}\\]

where \\(\\mathbf{\\Psi}\\) is the Lagrange multiplier. To solve problem (13), a common method is to take the derivative of this equation and set the multiplier term to zero. We can obtain the following equations with the Karush-Kuhn-Tucker (KKT) conditions:

\\[\\nabla_{\\mathbf{A}}J(\\mathbf{A})=\\mathbf{A}\\mathbf{S}\\mathbf{S}^{T}-\\mathbf{Y}\\mathbf{S}^{T}+\\mathbf{\\Psi}=\\mathbf{0} \\tag{14}\\]

\\[\\mathbf{A}.*\\mathbf{\\Psi}=\\mathbf{0}. \\tag{15}\\]

By multiplying both sides of (14) element-wise by \\(\\mathbf{A}\\) and then substituting (15) into (14), the endmember matrix \\(\\mathbf{A}\\) can be updated as

\\[\\mathbf{A}\\longleftarrow\\mathbf{A}.*\\mathbf{Y}\\mathbf{S}^{T}./\\mathbf{A}\\mathbf{S}\\mathbf{S}^{T}. \\tag{16}\\]

#### III-B2 Abundance Estimation

Once the endmember matrix is updated, we fix matrix \\(\\mathbf{A}\\). The objective function for abundance matrix estimation can then be written as

\\[J(\\mathbf{S})=\\min_{\\mathbf{S}}\\frac{1}{2}\\left\\|\\mathbf{Y}-\\mathbf{A}\\mathbf{S}\\right\\|_{F}^{2}+\\lambda\\left\\|\\mathbf{S}\\right\\|_{1/2}+\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}+Tr(\\mathbf{\\Gamma}\\mathbf{S})\\quad\\mathrm{s.t.}\\quad\\mathbf{S}\\geq 0,\\mathbf{1}_{K}^{T}\\mathbf{S}=\\mathbf{1}_{N}^{T} \\tag{17}\\]

where \\(\\mathbf{\\Gamma}\\) is the Lagrange multiplier of size \\(K\\times N\\). As with the endmember estimation, the Lagrange multiplier method is adopted to solve problem (17). In the same manner, the following is obtained from the KKT conditions:

\\[\\nabla_{\\mathbf{S}}J(\\mathbf{S})=\\mathbf{A}^{T}\\mathbf{A}\\mathbf{S}-\\mathbf{A}^{T}\\mathbf{Y}+\\frac{\\lambda}{2}\\mathbf{S}^{-1/2}+2\\mu\\mathbf{S}(\\mathbf{I}-\\mathbf{Z})(\\mathbf{I}-\\mathbf{Z})^{T}+\\mathbf{\\Gamma}=0 \\tag{18}\\]

\\[\\mathbf{S}.*\\mathbf{\\Gamma}=0. \\tag{19}\\]
Similarly, multiplying both sides of (18) element-wise by \\(\\mathbf{S}\\) and substituting (19) into (18), the abundance matrix \\(\\mathbf{S}\\) can be updated as

\\[\\mathbf{S}\\longleftarrow\\mathbf{S}.*\\mathbf{A}^{T}\\mathbf{Y}./\\left(\\mathbf{A}^{T}\\mathbf{A}\\mathbf{S}+\\frac{\\lambda}{2}\\mathbf{S}^{-1/2}+2\\mu\\mathbf{S}(\\mathbf{I}-\\mathbf{Z})(\\mathbf{I}-\\mathbf{Z})^{T}\\right). \\tag{20}\\]

#### III-B3 Reconstruction

In this step, we solve the reconstruction problem with the endmember matrix \\(\\mathbf{A}\\) and abundance matrix \\(\\mathbf{S}\\) fixed. The objective function is as follows:

\\[J(\\mathbf{Z})=\\min\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{Z}\\right\\|_{F}^{2}+\\frac{1}{2}\\left\\|\\mathbf{L}-\\mathbf{Z}\\right\\|_{F}^{2}. \\tag{21}\\]

By solving the above, we obtain the following updating rule:

\\[\\mathbf{Z}\\longleftarrow\\mathbf{Z}.*\\left(\\mathbf{S}^{T}\\mathbf{S}+\\frac{2}{\\mu}\\mathbf{L}\\right)./\\left(\\mathbf{S}^{T}\\mathbf{S}\\mathbf{Z}+\\frac{2}{\\mu}\\mathbf{Z}\\right). \\tag{22}\\]

#### III-B4 Low-Rank Self-Representation Learning

In the fourth step, the low-rank self-representation matrix is optimized via the following objective function:

\\[J(\\mathbf{L})=\\tau\\left\\|\\mathbf{L}\\right\\|_{*}+\\frac{1}{2}\\left\\|\\mathbf{L}-\\mathbf{Z}\\right\\|_{F}^{2}. \\tag{23}\\]

This problem has a closed-form solution and can be solved via the singular value thresholding operator [50]. In this way, we solve the objective function (12) with a multiplicative iterative method. The entire process is summarized in Algorithm 1.

Finally, we analyze the computational complexity of the proposed method. Compared with standard NMF, there are two more steps, which compute the self-representation matrix \\(\\mathbf{Z}\\) and the auxiliary variable \\(\\mathbf{L}\\). Since the dimension of \\(\\mathbf{Z}\\) and \\(\\mathbf{L}\\) is \\(N\\times N\\), the additional computational cost for \\(\\mathbf{Z}\\) and \\(\\mathbf{L}\\) is \\(O(PN^{2})\\), caused by the singular value decomposition (SVD) operator. The computational complexity of standard NMF is known to be \\(O(LPN)\\). Therefore, the overall computational complexity of our method is \\(O(LPN+PN^{2})\\), which is similar to that of standard NMF, and our method is faster than \\(L_{1/2}\\)-NMF, whose computational complexity is \\(O(LPN+P^{2}N^{2})\\) [28].

### _Convergence Analysis_

In this section, we analyze the convergence of the proposed updating algorithm. Since we solve the optimization problem by an iterative strategy, to guarantee the convergence of the update rules, we need to prove the nonincreasing property of the objective function in each update step. To formulate this problem, we use \\(\\mathbf{A}^{k}\\), \\(\\mathbf{S}^{k}\\), \\(\\mathbf{Z}^{k}\\), \\(\\mathbf{L}^{k}\\) to denote the values at the \\(k\\)th iteration and \\(\\mathbf{A}^{k+1}\\), \\(\\mathbf{S}^{k+1}\\), \\(\\mathbf{Z}^{k+1}\\), \\(\\mathbf{L}^{k+1}\\) to denote the values at the \\((k+1)\\)th iteration.
### _Convergence Analysis_ In this section, we analyze the convergence of the proposed updating algorithm. Since we solve the optimization problem by an iterative strategy, to guarantee the convergence of the update rules, we need to prove the nonincreasing property of the objective function in each update step. To formulate this problem, we use \\(\\mathbf{A}^{k}\\), \\(\\mathbf{S}^{k}\\), \\(\\mathbf{Z}^{k}\\), \\(\\mathbf{L}^{k}\\) to denote the values at the \\(k\\)th iteration and \\(\\mathbf{A}^{k+1}\\), \\(\\mathbf{S}^{k+1}\\), \\(\\mathbf{Z}^{k+1}\\), \\(\\mathbf{L}^{k+1}\\) to denote the values at the \\((k+1)\\)th iteration. Then, the proof problem can be written as \\[J(\\mathbf{A}^{k+1},\\mathbf{S}^{k},\\mathbf{Z}^{k},\\mathbf{L}^{k}) \\leq J(\\mathbf{A}^{k},\\mathbf{S}^{k},\\mathbf{Z}^{k},\\mathbf{L}^{k}) \\tag{24}\\] \\[J(\\mathbf{A}^{k+1},\\mathbf{S}^{k+1},\\mathbf{Z}^{k},\\mathbf{L}^{k}) \\leq J(\\mathbf{A}^{k+1},\\mathbf{S}^{k},\\mathbf{Z}^{k},\\mathbf{L}^{k})\\] (25) \\[J(\\mathbf{A}^{k+1},\\mathbf{S}^{k+1},\\mathbf{Z}^{k+1},\\mathbf{L}^ {k}) \\leq J(\\mathbf{A}^{k+1},\\mathbf{S}^{k+1},\\mathbf{Z}^{k},\\mathbf{L}^{k})\\] (26) \\[J(\\mathbf{A}^{k+1},\\mathbf{S}^{k+1},\\mathbf{Z}^{k+1},\\mathbf{L}^ {k+1}) \\leq J(\\mathbf{A}^{k+1},\\mathbf{S}^{k+1},\\mathbf{Z}^{k+1}, \\mathbf{L}^{k}). \\tag{27}\\] Since problems (24), (25), and (27) have been proved in [28] and [34], here we only give the proof for problem (26). Similar to [34], we consider each column of \\(\\mathbf{Z}\\) independently to prove this problem, owing to the column separability of the objective function (21). Let \\(\\mathbf{z}\\), \\(\\mathbf{l}\\) denote the same column of \\(\\mathbf{Z}\\), \\(\\mathbf{L}\\), respectively. Then the objective function becomes \\[J(\\mathbf{z})=\\min\\mu\\left\\|\\mathbf{S}-\\mathbf{S}\\mathbf{z}\\right\\|_{F}^{2}+ \\frac{1}{2}\\left\\|\\mathbf{l}-\\mathbf{z}\\right\\|_{F}^{2}. \\tag{28}\\] To prove the nonincreasing property of the objective function, we first introduce an auxiliary function \\(G(\\mathbf{z},\\mathbf{z}^{k})\\) which satisfies the conditions \\(G(\\mathbf{z},\\mathbf{z})=J(\\mathbf{z})\\) and \\(G(\\mathbf{z},\\mathbf{z}^{k})\\geq J(\\mathbf{z})\\). Then \\(J(\\mathbf{z})\\) is nonincreasing when using the following updating rule: \\[\\mathbf{z}^{k+1}=arg\\min_{\\mathbf{z}}G(\\mathbf{z},\\mathbf{z}^{k}) \\tag{29}\\] since \\[J(\\mathbf{z}^{k+1})\\leq G(\\mathbf{z}^{k+1},\\mathbf{z}^{k})\\leq G(\\mathbf{z}^{ k},\\mathbf{z}^{k})=J(\\mathbf{z}^{k}). \\tag{30}\\] Following [28], \\(G\\) can be defined as \\[G(\\mathbf{z},\\mathbf{z}^{k}) =J(\\mathbf{z}^{k})+(\\mathbf{z}-\\mathbf{z}^{k})(\\nabla J(\\mathbf{ z}^{k}))^{T}\\] \\[+\\frac{1}{2}(\\mathbf{z}-\\mathbf{z}^{k})K(\\mathbf{z}^{k})(\\mathbf{ z}-\\mathbf{z}^{k})^{T} \\tag{31}\\] where \\(K(\\mathbf{z}^{k})\\) is a diagonal matrix which is defined as \\[K(\\mathbf{z}^{k})=diag\\left(\\left(\\mathbf{S}^{T}\\mathbf{S}\\mathbf{z}^{k}+ \\frac{2}{\\mu}\\right)./\\mathbf{z}^{k}\\right). \\tag{32}\\] Since \\(G(\\mathbf{z},\\mathbf{z})=J(\\mathbf{z})\\), the Taylor expansion of \\(J(\\mathbf{z})\\) is \\[J(\\mathbf{z})=J(\\mathbf{z}^{k})+(\\mathbf{z}-\\mathbf{z}^{k})( \\nabla J(\\mathbf{z}^{k}))^{T}\\\\ +\\ \\frac{1}{2}\\left(\\mathbf{z}-\\mathbf{z}^{k}\\right)\\left(\\mathbf{S }^{T}\\mathbf{S}+\\frac{2}{\\mu}\\mathbf{I}\\right)(\\mathbf{z}-\\mathbf{z}^{k})^{T} +O(\\mathbf{z}) \\tag{33}\\] where \\(O(\\mathbf{z})\\) denotes the higher-order terms of the Taylor expansion. Then the condition \\(G(\\mathbf{z},\\mathbf{z}^{k})\\geq J(\\mathbf{z})\\) is satisfied if \\[\\frac{1}{2}(\\mathbf{z}-\\mathbf{z}^{k})\\left(K(\\mathbf{z}^{k})-\\mathbf{S}^{T} \\mathbf{S}-\\frac{2}{\\mu}\\mathbf{I}\\right)(\\mathbf{z}-\\mathbf{z}^{k})^{T}\\geq 0. \\tag{34}\\] According to [27], \\(K(\\mathbf{z}^{k})-\\mathbf{S}^{T}\\mathbf{S}-\\frac{2}{\\mu}\\mathbf{I}\\) is a positive semidefinite matrix for nonnegative \\(\\mathbf{z}\\). As mentioned above, we then only need to show that the update rule (22) coincides with selecting the minimum of \\(G(\\mathbf{z},\\mathbf{z}^{k})\\).
This can be solved by setting the gradient to zero \\[\\nabla_{\\mathbf{z}}G(\\mathbf{z},\\mathbf{z}^{k}) =\\mathbf{S}^{T}(\\mathbf{S}\\mathbf{z}^{k}-\\mathbf{S})+\\frac{2}{\\mu }(\\mathbf{z}^{k}-\\mathbf{l})\\] \\[\\quad+K(\\mathbf{z}^{k})(\\mathbf{z}-\\mathbf{z}^{k})=0 \\tag{35}\\] from which it follows that \\[\\mathbf{z} =\\mathbf{z}^{k}-K^{-1}\\left(\\mathbf{S}^{T}(\\mathbf{S}\\mathbf{z}^{ k}-\\mathbf{S})+\\frac{2}{\\mu}(\\mathbf{z}^{k}-\\mathbf{l})\\right)\\] \\[=\\mathbf{z}^{k}-\\mathbf{z}^{k}./\\left(\\mathbf{S}^{T}\\mathbf{S} \\mathbf{z}^{k}+\\frac{2}{\\mu}\\mathbf{z}^{k}\\right).*\\left(\\mathbf{S}^{T}( \\mathbf{S}\\mathbf{z}^{k}-\\mathbf{S})\\right.\\] \\[\\quad+\\ \\frac{2}{\\mu}\\left(\\mathbf{z}^{k}-\\mathbf{l}\\right)\\Bigg{)}\\] \\[=\\mathbf{z}^{k}-\\mathbf{z}^{k}./\\left(\\mathbf{S}^{T}\\mathbf{S} \\mathbf{z}^{k}+\\frac{2}{\\mu}\\mathbf{z}^{k}\\right).*\\left(\\mathbf{S}^{T} \\mathbf{S}\\mathbf{z}^{k}\\right.\\] \\[\\quad+\\ \\frac{2}{\\mu}\\mathbf{z}^{k}-\\mathbf{S}^{T}\\mathbf{S}-\\frac{2}{ \\mu}\\mathbf{I}\\right)\\] \\[=\\mathbf{z}^{k}./\\left(\\mathbf{S}^{T}\\mathbf{S}\\mathbf{z}^{k}+ \\frac{2}{\\mu}\\mathbf{z}^{k}\\right).*\\left(\\mathbf{S}^{T}\\mathbf{S}+\\frac{2}{ \\mu}\\mathbf{I}\\right) \\tag{36}\\] which coincides with the update rule (22). That is to say, the proposed update algorithm makes the objective function decrease monotonically at each iteration until convergence is reached. ### _Implementation Issues_ We now discuss several issues concerning the implementation of the algorithm. As mentioned above, the optimization problem is not jointly convex in \\(\\mathbf{A}\\) and \\(\\mathbf{S}\\), and an iterative optimization strategy with the above updating rules is proposed to solve it. Therefore, the initialization of the matrices is crucial. Two initialization methods are frequently used--random initialization and vertex component analysis-fully constrained least squares (VCA-FCLS) initialization. Compared with random initialization, which sets the elements to random values in \\([0,1]\\), the latter, which uses VCA [16] to identify endmembers as the input of \\(\\mathbf{A}\\) and then utilizes FCLS [51] to obtain the initial \\(\\mathbf{S}\\), is more effective. In this article, we use VCA-FCLS initialization in all the experiments. For the self-representation matrix \\(\\mathbf{Z}\\), we initialize it using LRR on the original image \\(\\mathbf{Y}\\). Another important issue is how to satisfy the basic full nonnegativity and full additivity constraints. Since the updating rules maintain the sign of the matrix values, the former constraint is satisfied as long as the initial matrices are nonnegative. For the full additivity constraint, we adopt a method similar to [28]. We augment the original data matrix \\(\\mathbf{Y}\\) and the endmember matrix \\(\\mathbf{A}\\) by a row of constants \\[\\mathbf{Y}_{f} =[\\mathbf{Y};\\delta\\mathbf{1}_{N}^{T}]\\] \\[\\mathbf{A}_{f} =[\\mathbf{A};\\delta\\mathbf{1}_{K}^{T}] \\tag{37}\\] where \\(\\delta\\) is a weight parameter that determines the impact of the additivity constraint. The larger \\(\\delta\\) is, the more accurately the constraint is enforced; however, the convergence becomes less uniform. In practice, \\(\\delta=15\\) is a good choice.
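Before turning to the stopping criteria, the augmentation in (37) can be illustrated with a short sketch; the function and array names are ours, and \\(\\delta=15\\) follows the choice suggested above.

```python
import numpy as np

def augment_for_sum_to_one(Y, A, delta=15.0):
    # Eq. (37): append a constant row to both the data and endmember matrices so
    # that the least-squares fit softly enforces 1_K^T S = 1_N^T.
    N = Y.shape[1]
    K = A.shape[1]
    Y_f = np.vstack([Y, delta * np.ones((1, N))])
    A_f = np.vstack([A, delta * np.ones((1, K))])
    return Y_f, A_f
```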
Two stopping criteria are adopted for our iterative optimization. The first is a maximum number of iterations, which we set to 3000, in common with most iterative NMF methods. The second stopping criterion is the difference in the gradients of the objective function between successive iterations \\[\\|\\nabla C(\\mathbf{A}_{i},\\mathbf{S}_{i})\\|_{2}^{2}\\leq\\epsilon\\|\\nabla C( \\mathbf{A}_{1},\\mathbf{S}_{1})\\|_{2}^{2} \\tag{38}\\] where \\(\\epsilon\\) is set to \\(10^{-3}\\). If the gradient difference is small enough, the iteration is considered to have converged. ## IV Experimental Results and Discussion To verify the effectiveness of our proposed method, we conducted experiments on both simulated and real-world datasets. The compared HU methods include the baseline methods VCA-FCLS [16] and NMF [23], the sparsity-based methods \\(L_{1/2}\\)-NMF [28] and graph-regularized \\(L_{1/2}\\)-NMF (GLNMF) [37], the spatial information-based methods SGSNMF [41] and TV-RSNMF [34], the multilayer NMF method MLNMF [52], and the sparsity-constrained deep NMF with TV (SDNMF-TV) [35]. The results were evaluated with two commonly used measures of quantitative unmixing performance--spectral angle distance (SAD) and root-mean-square error (RMSE). The SAD compares the similarity of the estimated signature \\(\\hat{\\mathbf{A}}_{k}\\) and the groundtruth endmember \\(\\mathbf{A}_{k}\\), and is defined as \\[SAD_{k}=\\arccos\\left(\\frac{\\mathbf{A}_{k}^{T}\\hat{\\mathbf{A}}_{k}}{\\left\\| \\mathbf{A}_{k}\\right\\|\\left\\|\\hat{\\mathbf{A}}_{k}\\right\\|}\\right). \\tag{39}\\] The RMSE is defined as \\[RMSE_{k}=\\left(\\frac{1}{N}\\left\\|\\mathbf{S}_{k}-\\hat{\\mathbf{S}}_{k}\\right\\| ^{2}\\right)^{1/2} \\tag{40}\\] where \\(\\hat{\\mathbf{S}}_{k}\\) is the groundtruth abundance matrix for the \\(k\\)th endmember. As stated above, in general, a smaller SAD or RMSE corresponds to a better result.
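For reference, the two measures in (39) and (40) can be computed with a few lines of NumPy. This is only a sketch; the function and array names are ours, and estimated endmembers are assumed to have been matched to the groundtruth beforehand.

```python
import numpy as np

def sad(a_true, a_est):
    # Eq. (39): spectral angle distance between a groundtruth and an estimated signature.
    c = (a_true @ a_est) / (np.linalg.norm(a_true) * np.linalg.norm(a_est))
    return np.arccos(np.clip(c, -1.0, 1.0))

def rmse(s_true, s_est):
    # Eq. (40): root-mean-square error between two abundance rows of length N.
    return np.sqrt(np.mean((s_true - s_est) ** 2))
```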
### _Experiments on Simulated Data_ #### Iv-A1 Simulated Data The simulated dataset in this experiment was generated with the hyperspectral imagery synthesis (EIAs) toolbox [53]. It is free software that allows users to generate simulated HSIs flexibly by controlling several parameters, such as the number of groundtruth endmembers, the size of the abundance map, the spatial distribution of materials, and different kinds of noise. We randomly selected the endmembers from the U.S. Geological Survey (USGS)1 mineral spectra library and generated the corresponding abundance maps according to a Gaussian field. To demonstrate the effectiveness of utilizing the global spatial information, we designed the abundance map by mosaicking four smaller abundance matrices together so that each material occurs in different regions of the entire HSI. Fig. 2 shows the nine selected endmembers and Fig. 3 shows the groundtruth abundance maps built from the nine endmembers. Here, the simulated dataset has a size of \\(100\\times 100\\) pixels and 224 spectral bands. Footnote 1: [Online]. Available: [https://www.usgs.gov/labs/spec-lab](https://www.usgs.gov/labs/spec-lab) #### Iv-A2 Parameter Analysis There are two key parameters \\(\\lambda\\) and \\(\\mu\\) in our proposed method, where \\(\\lambda\\) controls the sparsity constraint and \\(\\mu\\) controls the subspace structure regularization. First, we discuss the influence of these two parameters on the simulated dataset with SNR\\(=\\)20 dB. In this experiment, we varied \\(\\lambda\\) over the set \\(\\{0.0005,0.001,0.003,0.01,0.05,0.1,0.2,0.3\\}\\) and \\(\\mu\\) over the set \\(\\{0.0001,0.001,0.01,0.1\\}\\) to test our proposed method. We set the parameter \\(\\tau\\) to 0.001, the same as in [34]. The performance of our method for different parameters \\(\\lambda\\) and \\(\\mu\\) is shown in Fig. 4, where (a) displays the SAD results and (b) displays the RMSE results. In general, the SAD and RMSE results with respect to \\(\\lambda\\) and \\(\\mu\\) reveal the same trend. When \\(\\lambda\\) and \\(\\mu\\) are both near zero, the results are stable. It should be noted that when \\(\\lambda\\) and \\(\\mu\\) are both zero, the results correspond to classic NMF. As \\(\\lambda\\) increases, the algorithm gradually converges to local minima; when \\(\\lambda\\) is too large, the results become worse than those of NMF. A similar trend can be seen for the parameter \\(\\mu\\). This indicates the effectiveness of the sparsity constraint as well as the subspace structure constraint. With a proper choice of parameter values, low SAD and RMSE can be obtained.

Fig. 2: Spectral curves of the nine endmembers selected from the USGS mineral spectra library on the simulated dataset.

The learned subspace structure closely approximates the abundance map. Therefore, the learned subspace structure can be used as a robust global spatial prior for unmixing. ### _Experiments on Real Data_ In this section, we validate our method on real-world HSIs. We conducted unmixing experiments on three public hyperspectral datasets--the hyperspectral digital imagery collection experiment (HYDICE) Jasper Ridge dataset, the HYDICE Urban dataset, and the AVIRIS Cuprite dataset. Specifically, we obtain the groundtruth following [16]. For the Cuprite dataset, the reference endmember signatures were chosen from the USGS digital spectral library. #### Iv-B1 HYDICE Jasper Ridge Dataset Jasper Ridge is a widely used hyperspectral dataset for evaluating unmixing methods, which contains \\(512\\times 614\\) pixels. There are 224 spectral bands from 380 to 2500 nm. Since the groundtruth of this HSI is difficult to obtain, we only used a part of the image with \\(100\\times 100\\) pixels. Specifically, the first pixel of the chosen part is (105, 269). To avoid atmospheric effects and dense water vapor problems, we removed the related bands (1-3, 108-112, 154-166, 220-224), leaving an image with 198 bands, the same as in other HU methods. As shown in Fig. 6(a), the endmembers of Jasper Ridge are \"Tree,\" \"Soil,\" \"Water,\" and \"Road.\" Quantitative evaluation is presented in Table III, which shows the mean SAD and RMSE values of different HU methods. As a representative solution, NMF balances the estimation of endmembers and the abundance matrix compared with VCA-FCLS.

Fig. 6: Three real-world HSIs. (a) HYDICE Jasper Ridge dataset. (b) HYDICE Urban dataset. (c) AVIRIS Cuprite dataset.

They both only use nonnegative constraints. \\(L_{1/2}\\)-NMF and GLNMF add different kinds of sparsity constraints and obtain better results. This may be because sparsity constraints are more effective for the unmixing problem and can detect expressive endmembers [54]. However, these methods often have poor RMSE performance since they only focus on endmembers. The utilization of spatial information solves this problem to a certain degree. The neighbor-based TV-RSNMF and the deep NMF with TV (SDNMF-TV) both perform well, and SDNMF-TV is slightly better than the other compared methods. It can be seen from Table III that our proposed method, which learns the spatial information from the original image rather than designing it manually, achieves better performance for real-world HU. In general, our proposed method achieves the lowest mean SAD as well as the lowest mean RMSE compared with the other methods.
This validates the superiority of the proposed subspace regularizer. The qualitative unmixing results are shown in Figs. 7 and 8. From Fig. 7, we can see that the endmember signatures extracted by our method are almost coincident with the reference signatures obtained from the spectral library. Fig. 8 displays the abundance maps obtained by our method, where the corresponding endmember is illustrated with dark pixels. From Fig. 8, we can see that the results agree well with the four targets, \"Water,\" \"Soil,\" \"Road,\" and \"Tree,\" respectively. Simultaneously, we obtained the clustering results. Since our method jointly learns the subspace structure of the dataset, the clustering result can be obtained by a standard spectral clustering algorithm. Fig. 9 shows the results when the number of clusters is set as 2, 3, and 4, respectively. It can be seen that the clustering results conform to the real image intuitively.

Fig. 9: Clustering results on the Jasper Ridge dataset when the number of clusters is set as 2, 3, and 4, respectively.

#### Iv-B3 AVIRIS Cuprite Dataset The Cuprite dataset contains 224 spectral bands covering the range of 400-2500 nm. A total of 188 bands remained after removing the noisy bands (1-2 and 221-224) and the water-vapor absorption bands (104-113 and 148-167). In this experiment, a region with a spatial size of \\(250\\times 191\\) pixels was cropped, which contains 14 kinds of minerals [16]. Since there are only tiny differences between the signatures of several minerals, the estimated number of endmembers was reduced to 12 for unmixing. The scene is shown in Fig. 6(c). Table V compares the SAD results of different HU methods. We use bold to indicate the best and underline to indicate the second-best performance for each endmember. As shown in Table V, our method outperforms the compared methods in terms of the mean SAD value. Different methods are good at estimating different endmembers; this might be because most of the endmembers in this dataset are tiny and fragmented, so the spatial structure is not obvious or unified. For endmembers like \"Buddingtonite,\" \"Pyrope,\" and \"Chalcedony,\" our proposed method has great advantages. Since the Cuprite dataset has no groundtruth, we only show the grayscale abundance maps obtained by our method in Fig. 10. Compared with the original image shown in Fig. 6(c), the results can be verified intuitively. ## V Conclusion In this article, we have proposed a spatial information-based NMF for blind HU that learns the subspace structure from the original image. The presented model effectively exploits the subspace structure of the abundance map to constrain the NMF method. We first incorporate the subspace structure regularizer into the sparse NMF model as a spatial prior to improve the unmixing performance. The learned subspace structure can capture the global distribution of materials in different image regions. Then, we integrated the spectral-spatial-based unmixing and subspace structure learning into a single unified framework and presented a multiplicative iterative method to optimize it. We compared our method with a number of classical and state-of-the-art NMF-based HU methods on both simulated and real-world HSI datasets. Both quantitative and qualitative results demonstrate the effectiveness of our method. ## References * [1] X. Bai, H. Zhang, and J. Zhou, \"VHR object detection based on structural feature extraction and query expansion,\" _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 10, pp. 6508-6520, Oct. 2014. * [2] J. Liang, J. Zhou, Y. Qian, L. Wen, X. Bai, and Y.
Gao, \"On the sampling strategy for evaluation of spectral-spatial methods in hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 2, pp. 862-880, Feb. 2017. Fig. 10: Abundance maps of 12 different endmembers obtained using our method on the Cuprite dataset. From left to right and from top to bottom are Sphene, Andradite, Muscovite, Montmorillonite, Buddingtonite, Kaolinite-2, Alunite, Dumortierite, Kaolinite-1, Pyrope, Chalcogenony, and Nontronite, respectively. * [3] S. Mei, J. Hou, J. Chen, L.-P. Chau, and Q. Du, \"Simultaneous spatial and spectral low-rank representation of hyperspectral images for classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 5, pp. 2872-2886, May 2018. * [4] M. Zhang, W. Li, and Q. Du, \"Diverse region-based CNN for hyperspectral image classification,\" _IEEE Trans. Image Process._, vol. 27, no. 6, pp. 2623-2634, Jun. 2018. * [5] X. Bai, F. Xu, L. Zhou, Y. Xing, L. Bai, and J. Zhou, \"Nonlocal similarity based nonnegative tucker decomposition for hyperspectral image denoising,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 3, pp. 701-712, Mar. 2018. * [6] J. Li, Y. Li, R. Song, S. Mei, and Q. Du, \"Local spectral similarity preserving regularized robust sparse hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 10, pp. 7756-7769, Oct. 2019. * [7] D. Wang, Z. Shi, and X. Cui, \"Robust sparse unmixing for hyperspectral imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 3, pp. 1348-1359, Mar. 2018. * [8] Y. E. Salehani, S. Gazor, and M. Cheriet, \"Sparse hyperspectral unmixing via heuristic \\(l_{p}\\)-norm approach,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 11, no. 4, pp. 1191-1202, Apr. 2018. * [9] L. Tong, J. Zhou, Y. Qian, X. Bai, and Y. Gao, \"Nonnegative-matrix-factorization-based hyperspectral unmixing with partially known endmembers,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 11, pp. 6531-6544, Nov. 2016. * [10] R. Heylen, M. Parente, and P. Gader, \"A review of nonlinear hyperspectral unmixing methods,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 7, no. 6, pp. 1844-1868, Jun. 2014. * [11] B. Yang, B. Wang, and Z. Wu, \"Nonlinear hyperspectral unmixing based on geometric characteristics of bilinear mixture models,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 2, pp. 1-21, Feb. 2018. * [12] F. J. Garcia-Haro, M. A. Gilabert, and J. Meli, \"Linear spectral mixture modelling to estimate vegetation amount from optical spectra data,\" _Int. J. Remote Sens._, vol. 17, no. 17, pp. 3373-3400, 1996. * [13] J. W. Boardman, \"Automating spectral unmixing of AVIRIS data using convex geometry concepts,\" in _Proc. Ann. JPL Airborne Geoscience Workshop_ 1993, vol. 1, pp. 11-14 * [14] M. E. Winter, \"N-FIND: An algorithm for fast autonomous spectral endmember determination in hyperspectral data,\" _Proc. SPIE Int. Soc. Opt. Eng._, vol. 3753, pp. 266-275, 1999. * [15] A. Zymnis, S. J. Kim, J. Skaf, M. Parente, and S. Boyd, \"Hyperspectral image unmixing via alternating projected subgradients,\" in _Proc. Asilomar Conf._, 2008, pp. 1164-1168. * [16] J. M. P. Nascimento and J. M. B. Dias, \"Vertex component analysis: A fast algorithm to unmix hyperspectral data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 4, pp. 898-910, Apr. 2005. * [17] J. Wang and C. I. Chang, \"Applications of independent component analysis in endmember extraction and abundance quantification for hyperspectral imagery,\" _IEEE Trans. Geosci. 
Remote Sens._, vol. 44, no. 9, pp. 2601-2616, Sep. 2006. * [18] M. Craig, \"Minimum volume transforms for remotely sensed data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 32, no. 3, pp. 542-552, May 1994. * [19] J. Huang, T.-Z. Huang, L.-J. Deng, and X.-L. Zhao, \"Joint-sparse-blocks and low-rank representation for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 4, pp. 2419-2438, Apr. 2019. * [20] Y. Su, J. Li, A. Plaza, A. Marionion, P. Gamba, and S. Chakravarty, \"DAEN: Deep autoencoder networks for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 7, pp. 4309-4321, Jul. 2019. * [21] F. Khajehrayeni and H. Ghassemian, \"Hyperspectral unmixing using deep convolutional autoencoders in a supervised scenario,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 13, pp. 567-576, Feb. 2020. * [22] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez, \"Deep generative endmember modeling: An application to unsupervised spectral unmixing,\" _IEEE Trans. Comput. Imag._, vol. 6, pp. 374-384, 2020. * [23] D. D. Lee and H. S. Seung, \"Learning the parts of objects by non-negative matrix factorization,\" _Nature_, vol. 401, no. 6755, pp. 788-791, 1999. * [24] D. Lee and S. Seung, \"Algorithms for non-negative matrix factorization,\" in _Proc. Adv. Neural Inf. Process. Syst._, 2001, pp. 556-562. * [25] A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari, _Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation_. Hoboken, NJ, USA: Wiley, 2009. * [26] P. Hoyer, \"Non-negative matrix factorization with sparseness constraints,\" _J. Mach. Learn. Res._, vol. 5, no. 1, pp. 1457-1469, 2004. * [27] P. O. Hoyer, \"Non-negative sparse coding,\" in _Proc. 12th IEEE Workshop Neural Netw. Signal Process._, 2002, pp. 557-565. * [28] Y. Qian, S. Jia, J. Zhou, and A. Robles-Kelly, \"Hyperspectral unmixing via \\(l_{1/2}\\) sparsity-constrained nonnegative matrix factorization,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 11, pp. 4282-4297, Nov. 2011. * [29] Y. Ma, C. Li, X. Mei, C. Liu, and J. Ma, \"Robust sparse hyperspectral unmixing with \\(l_{2,1}\\) norm,\" _Remote Sens._, vol. 55, no. 3, pp. 1227-1239, Mar. 2017. * [30] D. Kong, C. Ding, and H. Huang, \"Robust nonnegative matrix factorization using \\(l_{2,1}\\)-norm,\" in _Proc. 20th ACM Int. Conf. Inf. Knowl. Manage._, 2011, pp. 673-682. * [31] W. He, H. Zhang, and L. Zhang, \"Sparsity-regularized robust non-negative matrix factorization for hyperspectral unmixing,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 9, no. 9, pp. 4267-4279, Sep. 2016. * [32] R. Huang, X. Li, and L. Zhao, \"Spectral-spatial robust nonnegative matrix factorization for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 10, pp. 8235-8254, Oct. 2019. * [33] M. D. Iordache, J. M. Bioucas-Dias, and A. Plaza, \"Total variation spatial regularization for sparse hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 50, no. 11, pp. 4484-4502, Nov. 2012. * [34] W. He, H. Zhang, and L. Zhang, \"Total variation regularized reweighted sparse nonnegative matrix factorization for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 7, pp. 3909-3921, Jul. 2017. * [35] X.-R. Feng, H.-C. Li, J. Li, Q. Du, A. Plaza, and W. J. Emery, \"Hyperspectral unmixing using sparsity-constrained deep nonnegative matrix factorization with total variation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 
10, pp. 6245-6257, Oct. 2018. * [36] X. Liu, X. Wei, B. Wang, and L. Zhang, \"An approach based on constrained nonnegative matrix factorization to unmix hyperspectral data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 2, pp. 757-772, Feb 2011. * [37] X. Lu, H. Wu, Y. Yuan, P. Yan, and X. Li, \"Manifold regularized sparse NMF for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 5, pp. 2815-2826, May 2013. * [38] W. Dong, X. Li, L. Zhang, and G. Shi, \"Sparsity-based image denoising via dictionary learning and structural clustering,\" in _Proc. IEEE CVPR_, 2011, pp. 457-464. * [39] X. Lu, H. Wu, and Y. Yuan, \"Double constrained NMF for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 52, no. 5, pp. 2746-2758, May 2014. * [40] S. Khoshoshkhan, R. Rajabi, and H. Zayyani, \"Clustered multitask non-negative matrix factorization for spectral unmixing of hyperspectral data,\" _J. Appl. Remote Sens._, vol. 13, no. 2, 2019, Art. no. 026509. * [41] X. Wang, Y. Zhong, L. Zhang, and Y. Xu, \"Spatial group sparsity regularized nonnegative matrix factorization for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 11, pp. 6287-6304, Nov. 2017. * [42] H. Zhang, Z. Han, L. Zhang, and P. Li, \"Spectral spatial sparse subspace clustering for hyperspectral remote sensing images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 6, pp. 3672-3684, Jun. 2016. * [43] L. Zhou, X. Bai, X. Liu, J. Zhou, and E. R. Hancock, \"Learning binary code for fast nearest subspace search,\" _Pattern Recognit._, vol. 98, 2020, Art. no. 107040. * [44] P. Ji, T. Zhang, H. Li, M. Salzmann, and I. Reid, \"Deep subspace clustering networks,\" in _Proc. Adv. Neural Inf. Process. Syst._, 2017, pp. 24-33. * [45] L. Zhou _et al._, \"Latent distribution preserving deep subspace clustering,\" in _Proc. 28th Int. Joint Conf. Artif. Intell._, 2019, pp. 44* [53] S. Grupo de Inteligencia Computacional, Universidad del Pas Vasco / Euskal Herinko Unibertsitate (UPV/EHU). \"Hyperspectral Imagery Synthesis (ELAs) Toolbox.\" [Online]. Available: [http://www.ehu.es/ccwintco/index.php/Hyperspectral](http://www.ehu.es/ccwintco/index.php/Hyperspectral) Imagery Synthesis tools for MATLAB * [54] S. Z. Li, X. W. Hou, H. J. Zhang, and Q. S. Cheng, \"Learning spatially localized, parts-based representation,\" in _Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit._, 2001, vol. 1, pp. I-I. \\begin{tabular}{c c} & Lei Zhou received the bachelor's degree, in 2016 from the School of Mathematics and Systems Science, Beihang University, Beijing, China, where he is currently working toward the Ph.D. degree with the School of Computer Science and Engineering. His current research interests include machine learning, computer vision, and remote sensing image processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Xueni Zhang received the bachelor's degree, in 2017 from the College of Computer Science and Technology, Jilin University, Jilin, China, and received the master's of engineering degree, in 2020 from the School of Computer Science and Engineering, Beihang University, Beijing, China. Her research interests include computer vision and remote sensing image processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jianbo Wang is currently an undergraduate from the First Clinical Medical College of Nanchang University, Jiangxi, China. His research interests include machine learning and image processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Xiao Bai received the B.Eng. 
degree in computer science from Beihang University, Beijing, China, in 2001, and the Ph.D. degree in computer science from the University of York, York, U.K., in 2006. He was a Research Officer (Fellow, Scientist) with the Computer Science Department, University of Bath, Bath, U.K. until 2008. He is currently a Full Professor with the School of Computer Science and Engineering, Beihang University. He has authored or coauthored more than 60 papers in journals and refereed conferences. His current research interests include pattern recognition, image processing, and remote sensing image analysis. \\\\ \\end{tabular} \\begin{tabular}{c c} & Lei Tong received the B.E. degree in measurement and control technology and instrumentation, the M.E. degree in measurement technology and automation devices from Beijing Jiaotong University, Beijing, China, in 2010 and 2012, respectively, and the Ph.D. degree in engineering from Griffith University, Brisbane, Australia, in 2016. Currently, he is a Lecturer with the Faculty of Information Technology, Beijing University of Technology, Beijing, China. His current research interests include signal and image processing, pattern recognition, and remote sensing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Liang Zhang received the B.Eng. degree in computer science and the Ph.D. degree in computer science from Beihang University, Beijing, China, in 2001 and 2007, respectively. His current research interests include machine learning, computer vision, and image processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Jun Zhou received the B.S. degree in computer science and the B.E. degree in international business from the Nanjing University of Science and Technology, Nanjing, China, in 1996 and 1998, respectively; the M.S. degree in computer science from Concordia University, Montreal, QC, Canada, in 2002, and the Ph.D. degree in computing science from the University of Alberta, Edmonton, AB, Canada, in 2006. He was a Research Fellow with the Research School of Computer Science, Australian National University, Canberra, ACT, Australia, and a Researcher with the Canberra Research Laboratory, NICTA, Canberra, ACT, Australia. In June 2012, he joined the School of Information and Communication Technology, Griffith University, Nathan, QLD, Australia, where he is currently an Associate Professor. His research interests include pattern recognition, computer vision, and spectral imaging with their applications in remote sensing and environmental informatics. \\\\ \\end{tabular} \\begin{tabular}{c c} & Edwin Haneock (Fellow, IEEE) received the B.Sc. degree in physics, in 1977, the Ph.D. degree in high-energy physics, in 1981, and the D.Sc. degree, in 2008 from the University of Durham, Durham, U.K., and a doctorate Honoris Causa from the University of Alicante, Alicante, Spain, in 2015. He is a Professor with the Department of Computer Science, where he leads a group of some faculty, research staff, and Ph.D. students working in the areas of computer vision and pattern recognition. His main research interests are in the use of optimization and probabilistic methods for high and intermediate level vision. He is a Fellow of the International Association for Pattern Recognition. He is currently the Editor-in-Chief of the journal _Pattern Recognition_, and was the Founding Editor-in-Chief of _IET Computer Vision_ from 2006 until 2012. 
He has also been a member of the editorial boards of the journals IEEE Transactions on Pattern Analysis and Machine Intelligence, _Pattern Recognition, Computer Vision and Image Understanding_, _Image and Vision Computing_, and the _International Journal of Complex Networks_.
Hyperspectral unmixing is a crucial task in hyperspectral image (HSI) processing, which estimates the proportions of the constituent materials of a mixed pixel. Usually, the mixed pixels can be approximated using a linear mixing model. Since each material only occurs in a few pixels in a real HSI, sparse nonnegative matrix factorization (NMF) and its extensions are widely used as solutions. Some recent works assume that materials are distributed in certain structures, which can be added as constraints to the sparse NMF model. However, they only consider the spatial distribution within a local neighborhood and define the distribution structure manually, while ignoring the real distribution of materials, which is diverse in different images. In this article, we propose a new unmixing method that learns a subspace structure from the original image and incorporates it into the sparse NMF framework to promote unmixing performance. Based on the self-representation property of data points lying in the same subspace, the learned subspace structure can indicate the global similarity graph of pixels that represents the real distribution of materials. The similarity graph is then used as a robust global spatial prior which is expected to be maintained in the decomposed abundance matrix. The experiments conducted on both simulated and real-world HSI datasets demonstrate the superior performance of our proposed method.

Hyperspectral unmixing (HU), linear mixing model (LMM), nonnegative matrix factorization (NMF), subspace structure, similarity graph.
# SAR Target Recognition via Joint Sparse Representation of Monogenic Components With 2D Canonical Correlation Analysis

Yapeng Zhou, Yaheng Chen, Ranan Gao, Jinxiong Feng, and Pengfei Zhao

School of Mathematics and Statistics, University of California, Los Angeles, CA 90095-1105, USA ([email protected])

###### SAR target recognition, monogenic signal, monogenic component.

have been designed and tested. Another essential part is the concrete classification algorithm, which generally comprises two steps. The first step extracts discriminative representations, or so-called features, from the original SAR images to describe the geometrical properties [7, 8, 9, 10, 11, 12], intensity distributions [13, 14, 15, 16, 17], or scattering characteristics [18, 19, 20, 21, 22, 23] of the targets. Afterwards, in the second step, classifiers are designed to classify these features, thus determining the target labels. In the past three decades, progress in SAR ATR has followed the developments in the pattern recognition and machine learning fields [24, 25, 26, 27, 28, 29, 30, 31]. Different kinds of feature extraction and classification techniques have been applied to SAR ATR. Mishra [13] adopted principal component analysis (PCA) for SAR image feature extraction, with a \\(K\\)-nearest neighbor (KNN) classifier for classification.
Zhao and Principe [25] introduced support vector machines (SVM) to SAR ATR and validated their superior performance over traditional template matching algorithms. Since then, SVM has been a very common classifier in SAR ATR. In [9] and [11], SVM was employed to classify geometrical features, i.e., Zernike moments and outline descriptors. Cui _et al._[14] and Liu and Li [26] applied SVM to the classification of projection features extracted by PCA and NMF, respectively. Sparse representation-based classification (SRC) was employed as the classifier in [27] to classify random projection features. Later, Song _et al._[28] examined SRC on different kinds of features extracted by random projection, PCA, and down-sampling, respectively. With the significant developments in deep learning techniques, the convolutional neural network (CNN) was demonstrated to be notably effective for image interpretation [32, 33] and has also been introduced to SAR ATR [29, 30, 31]. The excellent classification performance of CNN benefits from the powerful feature learning capability of multi-layer networks. Therefore, deep features can better convey the discriminability of the original SAR images compared with traditional hand-crafted features. As an organic whole, the performance of SAR ATR methods is tightly related to both the features and the classifiers used. On one hand, the features should be able to distinguish different kinds of targets. On the other hand, the classifiers should effectively exploit the discrimination ability of the features to make correct decisions. In this work, a SAR ATR method is developed by exploiting the monogenic components, including the amplitude, phase, and orientation [34]. Dong and Kuang [35] first introduced the monogenic signal to SAR image feature extraction and target recognition. They designed several classification schemes to improve the ATR performance [35, 36, 37, 38]. In [35], a score-level fusion strategy was used, where the three monogenic components were treated independently and their individual decisions were fused in parallel. Considering the possible correlations between different components, joint sparse representation (JSR) was employed to jointly classify the three components in [36]. In addition, some manifold learning algorithms were adopted to enhance the monogenic features, e.g., Grassmann [37] and Riemannian manifolds [38]. Ning _et al._[39] integrated the monogenic components with polar mapping to generate a novel feature called \"monogenic polar mapping\" for SAR ATR. Zhou _et al._[40] conducted the selection of the multi-scale monogenic components before weighted multi-task joint sparse representation. These studies demonstrated the discrimination capability of monogenic components for SAR target recognition. However, the discrimination contained in the multiscale monogenic components is not fully exploited in these methods because the correlations between different scales or different types of monogenic components are not comprehensively considered. In [35], the monogenic components were down-sampled independently and then concatenated. Afterwards, SRC was employed to classify the fused vectors at each scale, whose results were fused in parallel with a score-level fusion strategy. In fact, the same type of monogenic components at different scales share some correlations.
In addition, different types of monogenic components also share inner correlations because they are actually from the same SAR image. However, for the method in [35], neither of the two kinds of correlations was specifically considered. As a remedy, the method in [36] applied JSR to the joint classification of different types of monogenic components, thus exploiting their inner correlations. However, the correlations between the same type of monogenic components at different scales were still not considered. Some follow-up works [37, 38, 39, 40] made modifications to [35] and [36], but the problem was not properly addressed either. So, in this study, we intend to fully consider the correlations contained in the multiscale monogenic components. First, the 3-scale monogenic components are generated to represent the original SAR images according to the previous works. Afterwards, 2D canonical correlation analysis (2DCCA) [41] is employed to capture the inner correlations of each type of monogenic component across different scales. Thus, three feature matrices are formed, corresponding to the three types of monogenic components. On one hand, 2DCCA is adopted to analyze the inner correlations between the same type of monogenic components at different scales. In comparison with the simple down-sampling and concatenation strategies, the resulting features fused by 2DCCA can better keep the structural and intensity correlations between different scales. On the other hand, JSR [42, 43, 44, 45, 46] is employed to jointly classify the generated features from different types of monogenic components. As validated in previous works, JSR is a useful tool for multi-task learning that exploits the inner correlations between different tasks; it was successfully used to jointly classify multi-view SAR images [44, 45] or multiple features from SAR images [46]. So, it can properly be used to consider the correlations between different types of monogenic components. In summary, the proposed method comprehensively considers the inner correlations contained in the multiscale monogenic components. Therefore, it is promising that more discrimination can be employed to classify different kinds of targets, thus improving the ATR performance. The main contributions of this paper can be summarized as follows. (i) A novel way of generating features from multiscale monogenic components is designed via 2DCCA. To the best of our knowledge, this is the first time 2DCCA has been introduced to the feature fusion of multiscale monogenic components or to the general topic of SAR ATR. Compared with the conventional ways of applying monogenic components in SAR ATR, the proposed feature fusion can better exploit the inner correlations between different scales of monogenic components and generate compact feature matrices for the following classification. (ii) JSR is adopted to classify the features generated from different types of monogenic components. JSR is a multitask learning algorithm, which considers the inner correlations between different tasks simultaneously during the sparse representation. For the generated features from different types of monogenic components, they share some correlations because they are actually from the same SAR images. So, JSR is a proper way to perform the joint classification. (iii) Both the inner correlations between different scales and different types of monogenic components can be exploited to enhance the ATR performance.
As analyzed above, 2DCCA considers the correlations between different scales of the same type of monogenic components, while JSR makes use of the correlations between different types of monogenic components. Therefore, more of the discrimination contained in the multiscale monogenic components can be used to improve the ATR performance. The remainder of this paper is organized into four sections. In Section 2, the application of the 2D monogenic signal to SAR image feature extraction is introduced. Section 3 explains the feature fusion of monogenic components using 2DCCA. In Section 4, the moving and stationary target acquisition and recognition (MSTAR) dataset is used for experimental evaluation to test the proposed approach. Section 5 summarizes this study with some discussions. ## II 2D Monogenic Signal for Feature Extraction of SAR Images The monogenic signal is the generalization of the analytic signal to higher dimensions [34]. Specifically, for image data, the 2D monogenic signal, which is the combination of a 2D signal and its Riesz transform, can be used to analyze its properties. Denote the 2D signal by \\(f(z)\\) and its Riesz transform by \\(f_{R}(z)\\), where \\(z=(x,y)^{T}\\) represents the 2D spatial domain coordinate. Then, the monogenic signal \\(f_{M}(z)\\) is obtained as follows. \\[f_{M}(z)=f(z)-(i,j)f_{R}(z) \\tag{1}\\] where \\(i\\) and \\(j\\) are the imaginary units, and the real and imaginary parts are the original signal and its Riesz transform, respectively. Then, three monogenic components can be obtained as in equation (2). \\[\\text{amplitude: }A(z) =\\sqrt{f(z)^{2}+|f_{R}(z)|^{2}}\\] \\[\\text{phase: }\\varphi(z) =\\mathrm{atan2}(|f_{R}(z)|\\,,f(z))\\in(-\\pi,\\pi]\\] \\[\\text{orientation: }\\theta(z) =\\mathrm{atan}(f_{y}(z)/f_{x}(z))\\in(-\\frac{\\pi}{2},\\frac{\\pi}{2}] \\tag{2}\\] where \\(f_{x}(z)\\) and \\(f_{y}(z)\\) correspond to the \\(i\\)-imaginary and \\(j\\)-imaginary components of the monogenic signal, respectively. By analyzing SAR images using the 2D monogenic signal, the generated monogenic components are capable of describing the original image from different aspects. The local amplitude \\(A(z)\\) describes the intensity distribution or energy. The local phase \\(\\varphi(z)\\) and local orientation \\(\\theta(z)\\) reflect the structural and geometric information, respectively. A practical signal of finite length has an infinite spectrum in the frequency domain. As a remedy, a log-Gabor filter bank is usually utilized to band-pass filter the original signal before the monogenic components are computed. To fully exploit the spectral information of the original image, the log-Gabor filtering is performed at different scales. In this study, 3-scale monogenic components are generated according to the parameter setting in [35]. Figure 1 presents an illustration of the 3-scale monogenic components from an MSTAR SAR image. As shown, the monogenic components at different scales reflect the spectral information of the original image from different aspects, and their joint use is expected to provide more discrimination for correct target recognition.
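To make the feature extraction described above concrete, a minimal single-scale sketch is given below: the Riesz transform is computed in the frequency domain and the three components of (2) are formed. The log-Gabor band-pass filtering that produces the three scales in [35] is omitted, and all function and variable names are our own.

```python
import numpy as np

def monogenic_components(img):
    # Riesz transform of a real 2D image computed via the FFT.
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]          # horizontal frequency grid
    v = np.fft.fftfreq(rows)[:, None]          # vertical frequency grid
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                         # avoid division by zero at DC
    F = np.fft.fft2(img)
    fx = np.real(np.fft.ifft2(F * (1j * u / radius)))   # i-imaginary part f_x(z)
    fy = np.real(np.fft.ifft2(F * (1j * v / radius)))   # j-imaginary part f_y(z)
    # Eq. (2): local amplitude, phase, and orientation
    amplitude = np.sqrt(img ** 2 + fx ** 2 + fy ** 2)
    phase = np.arctan2(np.sqrt(fx ** 2 + fy ** 2), img)
    orientation = np.arctan(fy / (fx + 1e-12))
    return amplitude, phase, orientation
```

In the proposed method, this decomposition is applied to the log-Gabor filtered image at each of the three scales. The 2DCCA-based fusion of the resulting components is described next.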
## III 2DCCA for Feature Fusion 2DCCA is the generalization of CCA [47] to the 2D space, which can better exploit the correlation between two 2D variables [41]. For two image sets \\(\\left\\{X_{t}\\in\\mathbb{R}^{m_{x}\\times n_{x}},\\,t=1,\\,\\cdots,\\,N\\right\\}\\) and \\(\\left\\{Y_{t}\\in\\mathbb{R}^{m_{y}\\times n_{y}},\\,t=1,\\,\\cdots,\\,N\\right\\}\\), they can be seen as realizations of the matrix-valued random variables \\(X\\) and \\(Y\\), respectively. In traditional CCA [47], the 2D images are first reshaped into 1D vectors and then the canonical analysis is conducted. However, the vectorization operation loses the 2D structural information of the images. As a remedy, 2DCCA was proposed to directly analyze the correlations between the two image sets. First, the mean matrices of \\(X_{t}\\) and \\(Y_{t}\\) are calculated as: \\[M_{x}=\\frac{1}{N}\\sum_{t=1}^{N}X_{t},\\quad M_{y}=\\frac{1}{N}\\sum_{t=1}^{N}Y_{t} \\tag{3}\\]

Figure 1: **Illustration of feature extraction of SAR images based on monogenic signal.**

Then, the original images are centered as: \\[\\tilde{X}_{t}=X_{t}-M_{x},\\quad\\tilde{Y}_{t}=Y_{t}-M_{y} \\tag{4}\\] The objective of 2DCCA is to seek left transforms (\\(l_{x}\\) and \\(l_{y}\\)) and right transforms (\\(r_{x}\\) and \\(r_{y}\\)) that maximize the correlation between \\(l_{x}^{\\rm T}Xr_{x}\\) and \\(l_{y}^{\\rm T}Yr_{y}\\). Accordingly, 2DCCA is solved as: \\[\\arg\\max~{}\\operatorname{cov}(l_{x}^{\\rm T}Xr_{x},\\,l_{y}^{\\rm T}Yr_{y})\\] \\[\\text{s.t.}~{}\\operatorname{var}(l_{x}^{\\rm T}Xr_{x})=1,\\quad\\operatorname{var}(l_{y}^{\\rm T}Yr_{y})=1 \\tag{5}\\] The detailed solution of 2DCCA can be found in the original work [41]. Based on the resulting left and right transforms, the corresponding images from the two image sets can be combined into a unified matrix, which maintains their inner correlations. \\[\\chi_{t}=l_{x}^{\\rm T}X_{t}r_{x}+l_{y}^{\\rm T}Y_{t}r_{y} \\tag{6}\\] In this study, 2DCCA is used for the feature fusion of multi-scale monogenic components. In detail, for each component, its corresponding 3-scale representations are treated as 2D random variables. Denote the 3-scale amplitude components as \\(A_{1}\\), \\(A_{2}\\) and \\(A_{3}\\), respectively. First, \\(A_{1}\\) and \\(A_{2}\\) are fused according to equation (6). Afterwards, their fused matrix is further combined with \\(A_{3}\\) to get the final feature matrix. The same procedure is performed on the remaining two components. In this way, three feature matrices are generated to convey the information in the 3-scale monogenic components. Figure 2 shows the fused feature matrices with sizes of 20 \\(\\times\\) 20 generated from the 3-scale monogenic components in Figure 1. For each type of monogenic component, its multiscale representations are fused into a compact feature matrix. Although it is hard to observe intuitive properties from these feature matrices, the fused features actually reflect the inner correlations contained in the multiscale monogenic components. In previous works, Dong and Kuang [35] concatenated the corresponding monogenic components at different scales to form a feature vector. Some other works using the monogenic signal for SAR target recognition also adopted a similar idea during feature generation [39, 40]. Although highly efficient, these methods cannot exploit the inner correlations between different scales. Moreover, the structural information of the 2D signal is also neglected to a large extent.
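A minimal sketch of the cascaded fusion described above is given below. Here `learn_2dcca_transforms` is only a placeholder for the iterative solver of (5) in [41] and is not spelled out; in the actual method, the transforms are learned once over the training set and then reused, which the sketch collapses for brevity. All names are ours.

```python
import numpy as np

def fuse_pair(X, Y, lx, rx, ly, ry):
    # Eq. (6): combine two images with the learned left/right transforms.
    return lx.T @ X @ rx + ly.T @ Y @ ry

def fuse_three_scales(A1, A2, A3, learn_2dcca_transforms, d=20):
    # One type of monogenic component at three scales is fused into a d x d matrix.
    lx, rx, ly, ry = learn_2dcca_transforms(A1, A2, d)      # placeholder solver of Eq. (5)
    fused12 = fuse_pair(A1, A2, lx, rx, ly, ry)
    lx2, rx2, ly2, ry2 = learn_2dcca_transforms(fused12, A3, d)
    return fuse_pair(fused12, A3, lx2, rx2, ly2, ry2)       # final d x d feature matrix
```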
This study uses 2DCCA to capture the inner correlations of each type of monogenic component across different scales. Therefore, the fused features can better convey the original discrimination capability of SAR images. ## IV Joint Sparse Representation of Fused Features for Target Recognition ### Joint Sparse Representation 2DCCA is capable of exploiting the inner correlations of each type of monogenic component across different scales. In addition, the three types of monogenic components also share some inner correlations. Therefore, we use JSR as the basic classification scheme for the fused features from the three monogenic components, following [36]. JSR is a natural generalization of single-task sparse representation (or SRC) that considers the inner correlations between different tasks [42, 43, 44, 45, 46]. Denote the three generated features as \\(Y=[\\chi_{1},\\,\\chi_{2},\\,\\chi_{3}]\\), in which the three elements correspond to the amplitude, phase, and orientation, respectively. The sparse representation problems of the three features can be jointly formulated as: \\[\\min_{\\Xi}\\left\\{g(\\Xi)=\\sum_{m=1}^{3}\\left\\|\\chi_{m}-X_{m}\\alpha_{m}\\right\\| \\right\\} \\tag{7}\\] In equation (7), \\(X_{m}\\) represents the global dictionary formed by the corresponding monogenic features of all the training samples; \\(\\Xi=[\\alpha_{1},\\,\\alpha_{2},\\,\\alpha_{3}]\\) denotes the sparse coefficient matrix, and each column in \\(\\Xi\\) contains the sparse coefficients of the corresponding task. The objective in equation (7) actually considers the different tasks independently. For related tasks, it is preferable that their inner correlations be considered to enhance the precision and robustness of JSR. As a remedy, the \\(\\ell_{1,2}\\) mixed norm is introduced into the joint optimization problem as follows. \\[\\min_{\\Xi}g(\\Xi)+\\lambda\\left\\|\\,\\Xi\\right\\|_{1,2} \\tag{8}\\] In equation (8), the mixed norm first calculates the \\(\\ell_{2}\\) norm of each row in \\(\\Xi\\). Afterwards, the \\(\\ell_{1}\\) norm of the resulting vector is adopted as the final result. Therefore, the objective in equation (8) imposes constraints on the distribution of the non-zero elements in \\(\\Xi\\); in detail, the sparse coefficient vectors of the different tasks should share similar non-zero patterns. To solve the JSR problem in equation (8), some existing signal processing algorithms can be directly employed, e.g., the simultaneous orthogonal matching pursuit (SOMP) [42] and the multi-task compressive sensing algorithm [43]. Then, based on the estimated coefficient matrix, the total reconstruction errors of the three tasks over the different training classes are calculated to decide the target label as in equation (9). \\[\\operatorname{identity}(Y)=\\min_{k=1,\\cdots,C}\\sum_{m=1}^{3}\\left\\|\\chi_{m}- X_{m}^{k}\\hat{\\alpha}_{m}^{k}\\right\\|_{2} \\tag{9}\\] where \\(X_{m}^{k}\\) and \\(\\hat{\\alpha}_{m}^{k}\\) correspond to the sub-dictionary and coefficient vector of the _m_th monogenic feature for the _k_th class.
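Assuming a joint-sparse solver such as SOMP [42] is available, the decision rule (9) can be sketched as follows; `somp` is a placeholder for that solver, and the variable names are ours.

```python
import numpy as np

def jsr_classify(features, dictionaries, labels, somp, sparsity=10):
    # features:     three fused feature vectors (amplitude, phase, orientation)
    # dictionaries: three global dictionaries whose columns are training samples
    # labels:       class label of each training sample (shared by all dictionaries)
    # somp:         placeholder joint-sparse solver returning one coefficient vector
    #               per task with a shared support, i.e., an approximate solution of (8)
    coeffs = somp(features, dictionaries, sparsity)
    classes = np.unique(labels)
    errors = []
    for k in classes:
        idx = np.where(labels == k)[0]
        # Eq. (9): accumulate the reconstruction errors of the three tasks over class k
        err_k = sum(np.linalg.norm(chi - D[:, idx] @ alpha[idx])
                    for chi, D, alpha in zip(features, dictionaries, coeffs))
        errors.append(err_k)
    return classes[int(np.argmin(errors))]
```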
### Target Recognition According to the aforementioned analysis, the multiscale monogenic components are first fused using 2DCCA to generate compact and discriminative monogenic features. Afterwards, the generated features are jointly classified by JSR for target classification. Figure 3 shows the framework of the proposed approach. In detail, the whole procedure of target recognition can be summarized as the following five steps: _Step 1:_ For each training sample, obtain its 3-scale monogenic components using the 2D monogenic signal; _Step 2:_ Perform 2DCCA on the 3-scale monogenic components of all the training samples, respectively, and obtain the transform matrices; _Step 3:_ Fuse the monogenic components of each training sample to build the corresponding dictionary; _Step 4:_ For the test sample, extract its 3-scale monogenic components and generate the fused features in the same way as for the training samples; _Step 5:_ Perform JSR to obtain the target label of the test sample based on the three fused features. In practical application, the resulting feature matrix of each component is further reshaped into a vector for the convenience of the following joint classification. This study uses SOMP to solve the JSR problem owing to its high effectiveness and efficiency. ## V Experiments ### _MSTAR Data Set_ We use the MSTAR data set for the performance evaluation of the proposed method, which has long been the most prevalent benchmark for the validation of SAR ATR methods. In this data set, there are SAR images of 10 classes of military targets, which cover the full 360\\({}^{\\circ}\\) aspect angle and several depression angles, e.g., 15\\({}^{\\circ}\\), 17\\({}^{\\circ}\\), 30\\({}^{\\circ}\\), and 45\\({}^{\\circ}\\). Figure 4 shows exemplar optical and SAR images of the ten targets. The original SAR images in the dataset are collected with a resolution of 0.3 m at high signal-to-noise ratios (SNR). In addition, most images contain intact targets with very small occlusions. For comparison, some state-of-the-art SAR ATR methods are used, as listed in Table 1, i.e., SVM [25], SRC [27], CNN [29], MSRC [35], TJSR [36], and RFJSR. SVM and SRC are used to classify 100-dimensional PCA features. CNN works on the raw image intensities. MSRC and TJSR are two methods proposed by Dong et al., which also use the multiscale monogenic components as the basic features for SAR ATR. In detail, MSRC performs parallel decision fusion of the independent classification results of the three components. TJSR uses JSR to jointly classify the three components. Specifically, we design an RFJSR method to be compared with the proposed one. In this case, we simulate the idea of feature generation in 2DCCA, i.e., using four transformation matrices (two left ones and two right ones). Differently, these matrices in RFJSR are randomly generated with no correlation analysis. In this method, the following classification stage is kept consistent with the proposed one. So, by comparing the proposed method and RFJSR, the effectiveness and necessity of 2DCCA can be better explained.

Figure 3: The framework of the proposed method.

### 10-Class Recognition Under SOC 1) Preliminary Test A preliminary verification is performed under SOC based on the 10-class samples listed in Table 2. Images collected at 17\\({}^{\\circ}\\) and 15\\({}^{\\circ}\\) depression angles are used for training and testing, respectively. Specifically, the test configurations of BMP2 and T72 are not fully covered by their training samples. In this experiment, we first set the dimensions of the fused feature matrix from 2DCCA to be 20 \\(\\times\\) 20. For fair comparison, the monogenic features are down-sampled to 400-dimensional vectors in MSRC and TJSR. Figure 5 presents the confusion matrix of the proposed method on the ten classes of targets.
In this figure, the X and Y coordinates record the original and predicted target labels, respectively, and the diagonal elements correspond to the recognition rates of the different classes. Each class can be correctly classified with a recognition rate over 96%, and the average recognition rate equals 97.88%. BMP2 and T72 suffer relatively lower recognition rates among the ten targets, caused by the configuration variants between the training and test sets. Table 3 compares the average recognition rates of the different methods. Although its recognition rate is marginally lower than that of CNN, the proposed method outperforms all the remaining methods. In comparison with MSRC and TJSR, the superior performance of the proposed method indicates that it could better make use of the monogenic components to enhance the recognition performance. Compared with TJSR, the higher recognition rate of the proposed method validates that 2DCCA is capable of generating more discriminative features based on the monogenic components. Notably, the proposed method significantly outperforms RFJSR with a recognition rate margin of 2.04%. It reveals that 2DCCA is an effective way to fuse the multiscale monogenic components into compact and discriminative features for the following classification.

\\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Method** & **Feature** & **Classifier** & **Ref.** \\\\ \\hline SVM & PCA features & SVM & [25] \\\\ SRC & PCA features & SRC & [27] \\\\ CNN & Raw image intensities & CNN & [29] \\\\ MSRC & Monogenic components & SRC with Bayesian decision fusion & [35] \\\\ TJSR & Monogenic components & JSR & [36] \\\\ RFJSR & Monogenic components & JSR & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: State-of-the-art methods for comparison.

\\begin{table} \\begin{tabular}{c c c c c c c c c c c c} \\hline \\hline & Depr (\\({}^{\\circ}\\)) & BMP2 & BTR70 & T72 & T62 & BRDM2 & BTR60 & ZSU234 & D7 & ZIL131 & 2S1 \\\\ \\hline Training & 17 & 233(Sn.9563) & 233 & 232(Sn.132) & 299 & 298 & 256 & 299 & 299 & 299 \\\\ \\hline \\multirow{3}{*}{Test} & \\multirow{3}{*}{15} & 195(Sn.9563) & \\multirow{3}{*}{196} & 196(Sn.132) & \\multirow{3}{*}{273} & \\multirow{3}{*}{274} & \\multirow{3}{*}{195} & \\multirow{3}{*}{274} & \\multirow{3}{*}{274} & \\multirow{3}{*}{274} & \\multirow{3}{*}{274} \\\\ & & 196(Sn.9566) & & & & & & & & \\\\ \\cline{1-1} \\cline{6-12} & & 196(Sn.521) & & & & & & & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Training and test samples under SOC.

Figure 4: The optical (top) and SAR (bottom) images of the ten targets in the MSTAR data set.

2) Performance at Different Feature Dimensions

A further validation is conducted for the methods based on the monogenic components, including MSRC, TJSR, RFJSR, and the proposal, at different feature dimensions. The dimensions of 2DCCA are set to be 10 \\(\\times\\) 10, 20 \\(\\times\\) 20, 30 \\(\\times\\) 30, and 40 \\(\\times\\) 40, respectively. Accordingly, the dimensions of the down-sampled monogenic features in MSRC and TJSR are determined as 100, 400, 900, and 1600, correspondingly. Figure 6 plots the average recognition rates of these methods at different feature dimensions. Clearly, the proposed method achieves the highest recognition rate at each feature dimension, which keeps at a high level over 94%.
Compared with the down-sampling strategy in MSRC and TJSR and the random transformation matrices in RFJSR, 2DCCA is capable of considering the inner correlations of the monogenic components at different scales. Specifically, the proposed method notably outperforms RFJSR at each feature dimension, which reflects the importance of 2DCCA in the generation of the transformation matrices. Therefore, it is reasonable that more information is contained in the monogenic features generated by 2DCCA. Also, we can observe that when the feature dimension goes above 400, the average recognition rate of the proposed method surpasses that achieved by CNN, i.e., 98.02%, which further validates the high effectiveness of the proposed method under SOC. In the following experiments, the dimensions of 2DCCA are kept as 20 \\(\\times\\) 20 as a tradeoff between recognition performance and feature complexity.

### Configuration Variants

The condition of configuration variants is a common extended operating condition (EOC) in SAR ATR, in which the test samples have configurations different from those in the training set and thus present challenges to target recognition methods. Table 4 presents the experimental setup for the test of configuration variants. The test samples from BMP2 and T72 have totally different configurations from their corresponding training samples. The detailed recognition results are listed in Table 5, and the average recognition rate is calculated to be 94.23%, which demonstrates the high effectiveness of the proposed method under configuration variants. The average recognition rates achieved by the different methods are compared in Table 6. The robustness of the proposed method to possible configuration variants is further validated by its highest recognition rate. Although CNN has powerful classification ability, its performance highly relates to the sufficiency of available training samples. Under EOCs, the test samples share different extents of divergence with the training set. Consequently, the recognition performance of CNN may degrade to some degree. Compared with MSRC, TJSR, and RFJSR, the superiority of the proposed method shows that 2DCCA can generate more discriminative features for the recognition task under configuration variants.

Figure 5: Confusion matrix on the ten classes of targets under SOC. The row represents the ground-truth label while the column records the predicted label. The entries on the diagonal show the recognition rates of the corresponding class.

\\begin{table} \\begin{tabular}{c c c c c c c c c} \\hline \\hline & Method & Proposed & SVM & SRC & CNN & MSRC & TJSR & RFJSR \\\\ \\hline \\multicolumn{1}{c}{Recognition rate (\\%)} & 97.88 & 95.12 & 94.88 & 98.02 & 96.44 & 96.92 & 95.84 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Average recognition rates of all the methods on ten classes of targets.

Figure 6: Recognition rates of the methods based on monogenic components at different feature dimensions.

### Depression Angle Variance

Another typical EOC included in the original MSTAR data set is depression angle variance, indicating that the test samples are measured at notably different depression angles from the stored training ones. In this experiment, SAR images of three targets, i.e., 2S1, BRDM2, and ZSU23/4, at three available depression angles are used as shown in Table 7. The training samples are collected at the 17\\({}^{\\circ}\\) depression angle whereas those to be classified are from the 30\\({}^{\\circ}\\) or 45\\({}^{\\circ}\\) depression angle.
Figure 7 presents an intuitive comparison between SAR images of the 2S1 target with depression angle variance. According to the SAR imaging mechanism [48], the projected target region and radar shadow on the image plane are closely related to the depression angle [49]. The results achieved by the proposed method are shown in Table 8. The average recognition rates at the 30\\({}^{\\circ}\\) and 45\\({}^{\\circ}\\) depression angles are 97.10% and 72.61%, respectively. Table 9 compares the performance of all the methods at the different depression angles. At the 30\\({}^{\\circ}\\) depression angle, all the methods keep relatively high recognition rates because the test samples still share many similarities with the training ones at the 17\\({}^{\\circ}\\) depression angle. However, the test samples at the 45\\({}^{\\circ}\\) depression angle are classified with much lower recognition rates because they have many differences from the training samples. At each depression angle, the highest recognition rate is achieved by the proposed approach, showing its better robustness to depression angle variance. The results demonstrate that the fused multiscale monogenic components can better handle the possible depression angle variance.

\\begin{table} \\begin{tabular}{c c c c c c} \\hline & Depr (\\({}^{\\circ}\\)) & BMP2 & T72 & BTR60 & T62 \\\\ \\hline Training & 17 & 233 (Sn 9563) & 232 (Sn 132) & 256 & 299 \\\\ \\hline Test & 15 & 196 (Sn 9566) & 195 (Sn 812) & 195 & 273 \\\\ \\hline \\end{tabular} \\end{table} Table 4: Training and test samples with configuration differences.

\\begin{table} \\begin{tabular}{c c c c c c c} \\hline Class & Serial No. & BMP2 & T72 & BTR60 & T62 & Recognition rate (\\%) \\\\ \\hline BMP2 & Sn 9566 & 184 & 3 & 4 & 5 & 93.88 \\\\ & Sn 21 & 179 & 8 & 3 & 6 & 91.33 \\\\ \\hline T72 & Sn 812 & 185 & 2 & 6 & 2 & 94.87 \\\\ & Sn s7 & 177 & 6 & 5 & 3 & 92.67 \\\\ \\hline BTR60 & - & 186 & 5 & 1 & 3 & 95.38 \\\\ \\hline T62 & - & 256 & 3 & 4 & 10 & 93.43 \\\\ \\hline Average & & & & & & 93.66 \\\\ \\hline \\end{tabular} \\end{table} Table 5: Recognition results under configuration variants.

\\begin{table} \\begin{tabular}{c c c c c c c} \\hline Method & Proposed & SVM & SRC & CNN & MSRC & TJSR & RFJSR \\\\ \\hline Recognition rate (\\%) & 93.66 & 88.64 & 87.12 & 91.46 & 90.52 & 92.04 & 86.93 \\\\ \\hline \\end{tabular} \\end{table} Table 6: Average recognition rates of all the methods under configuration variants.

Table 7: Training and test samples with different depression angles.

Figure 7: Examples of SAR images at different depression angles. (a) 17\\({}^{\\circ}\\); (b) 30\\({}^{\\circ}\\); (c) 45\\({}^{\\circ}\\).

### Random Noise Corruption

Real measured SAR images may contain various types of noise, e.g., additive Gaussian noise [50] and speckle [51]. In this study, random noise is added to the original 10-class test samples according to [29] and [35]. We randomly select some pixels of the original image and replace them with spikes of high intensity according to the preset noise level. Some noisy SAR images corrupted by different levels of random noise are shown in Figure 8. These noisy samples are then classified by the different methods to examine their robustness.
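The noise-corruption procedure described above is straightforward to reproduce. The sketch below is one plausible implementation under stated assumptions: the paper only specifies that randomly chosen pixels are replaced by high-intensity values at a preset percentage, so the exact spike distribution used here is an illustrative choice and the function name is hypothetical.

```python
import numpy as np

def corrupt_with_random_noise(image, noise_level, rng=None):
    """Replace a fraction of pixels in a SAR intensity image with bright spikes.

    image       : 2-D array of image intensities
    noise_level : fraction of pixels to corrupt, e.g. 0.05 for the 5% level
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    n_bad = int(round(noise_level * image.size))
    # choose pixel positions uniformly at random, without replacement
    flat_idx = rng.choice(image.size, size=n_bad, replace=False)
    rows, cols = np.unravel_index(flat_idx, image.shape)
    # assumed spike model: values drawn above the current image maximum
    noisy[rows, cols] = image.max() * (1.0 + rng.random(n_bad))
    return noisy

# example: generate the 5%-20% corrupted test images of the kind shown in Figure 8
# noisy_05 = corrupt_with_random_noise(sar_image, 0.05)
```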
The average recognition rates of the different methods at different noise levels are plotted in Figure 9. It shows that the proposed method outperforms the remaining methods at every noise level. Noticeably, the methods based on multiscale monogenic components, including MSRC, TJSR, and the proposal, achieve better performance than SVM, SRC, and CNN, which indicates the good robustness of monogenic features to random noise corruption. RFJSR could not compete with the other three monogenic-component-based methods, mainly because the randomly selected transformation matrices can hardly capture the correlations needed for feature fusion. With the best performance, 2DCCA is validated to be more effective than the conventional down-sampling in exploiting the discriminability of the multiscale monogenic components.

\\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline \\multirow{2}{*}{Depr. (\\({}^{\\circ}\\))} & \\multirow{2}{*}{Class} & \\multicolumn{3}{c}{Results} & \\multirow{2}{*}{Recognition rate (\\%)} & \\multirow{2}{*}{Average (\\%)} \\\\ \\cline{3-5} & & 2S1 & BRDM2 & ZSU23/4 & & \\\\ \\hline \\multirow{3}{*}{30} & 2S1 & 281 & 2 & 5 & 97.57 & \\multirow{3}{*}{97.10} \\\\ & BRDM2 & & & & 97.21 & \\\\ & ZSU23/4 & 3 & 7 & 278 & 96.53 & \\\\ \\hline \\multirow{3}{*}{45} & 2S1 & 228 & 24 & 51 & 75.25 & \\multirow{3}{*}{72.61} \\\\ & BRDM2 & 37 & 207 & 59 & 68.32 & \\\\ & ZSU23/4 & 36 & 42 & 225 & 74.26 & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: Recognition results under depression angle variance.

Figure 8: Noisy images with different levels of random noise. (a) 0% (b) 5% (c) 10% (d) 15% (e) 20%.

\\begin{table} \\begin{tabular}{c c c} \\hline \\hline \\multirow{2}{*}{Method} & \\multicolumn{2}{c}{Recognition rate (\\%)} \\\\ \\cline{2-3} & 30\\({}^{\\circ}\\) & 45\\({}^{\\circ}\\) \\\\ \\hline Proposed & 97.10 & 72.61 \\\\ SVM & 94.08 & 64.01 \\\\ SRC & 92.70 & 62.74 \\\\ CNN & 96.74 & 63.68 \\\\ MSRC & 95.24 & 66.24 \\\\ TJSR & 96.02 & 71.12 \\\\ RFJSR & 93.12 & 64.28 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 9: Average recognition rates of all the methods at different depression angles.

Figure 9: Average recognition rates of different methods at different levels of random noise.

### Partial Occlusion

Although SAR has some penetrating capability, targets in the real-world environment are still likely to be occluded by neighboring obstacles such as trees or walls. For experimental evaluation, the partially occluded SAR images are first simulated based on the original 10-class test samples according to the occlusion model reported in [19]. In detail, some pixels in the target region of the original image are replaced by the background intensities. Figure 10 shows the simulated SAR images from four different directions at the 20% occlusion level. Afterwards, the occluded test samples are classified by the different methods to examine their performance. Figure 11 plots the average recognition rates of the different methods at occlusion levels from 10% to 50%. At each level, the recognition rate denotes the average over all 8 directions. The performance comparison validates the best robustness of the proposed method to partial occlusion. Similar to the condition of random noise corruption, the methods based on monogenic components achieve superior performance over the remaining ones. As analyzed in Section II, the monogenic components actually reflect the local characteristics of the target.
Partial occlusion is closely related to the local variations of the target. Therefore, the local features can better embody the possible variations of the target caused by partial occlusions. Compared with MSRC, TJSR, and RFJSR, the higher recognition rate of the proposed method validates the effectiveness of 2DCCA in fusing the multiscale monogenic components.

Figure 10: Occluded SAR images from different directions: (a) original (b) direction 1 (c) direction 3 (d) direction 5 (e) direction 7.

Figure 11: Average recognition rates of the different methods under partial occlusion.

## VI Conclusion

This study jointly classifies the multiscale monogenic components for SAR ATR with 2DCCA. The multiscale monogenic components describe the spectral information of the original SAR image in detail. Several previous works have already demonstrated the high effectiveness of monogenic features for SAR target recognition, and it is a promising direction to develop new ways to further exploit their potential. To capture the inner correlations of the same monogenic components at different scales, 2DCCA is employed to fuse them into a unified feature matrix. To classify the fused monogenic features, the multi-task learning algorithm, i.e., JSR, is adopted, which simultaneously considers the correlations of different types of monogenic components during the implementation of the single tasks. In this way, the multiscale monogenic components can be better exploited to improve the recognition performance. According to the experimental results on the MSTAR data set, we draw three conclusions as follows: (1) The proposed method achieves a high average recognition rate of 97.88% for the 10-class classification problem under SOC, which is higher than those of the remaining methods, so the effectiveness of the proposed approach under SOC is quantitatively validated. (2) The robustness of the proposed approach under several usual EOCs, including configuration variants, depression angle variance, noise corruption, and partial occlusion, is demonstrated to be much superior to that of the compared methods. (3) Owing to its good effectiveness and robustness, the practical value of our method in SAR ATR tends to be greater than that of the compared ones. There are some promising directions to be researched in the future. First, it is possible to extend the conventional 2DCCA to directly fuse multiple 2D random matrices, so that the fused feature may better reflect the inner correlations of all the participating variables. Second, the proposed strategy can be generalized to the classification of more scales of monogenic components to further exploit the 2D monogenic signal for SAR image feature extraction.

## Conflicts of interest

The authors declare no conflict of interest.

## References

* [1] Y.-D. Zhang, L. Wu, and C. Wei, \"A new classifier for polarimetric SAR images,\" _Prog. Electromagn. Res._, vol. 94, pp. 83-104, May 2009. * [2] K. El-Darymli, E. W. Gill, P. Mcguire, D. Power, and C. Moloney, \"Automatic target recognition in synthetic aperture radar imagery: A state-of-the-art review,\" _IEEE Access_, vol. 4, pp. 6014-6058, 2016. * [3] J. R. Diemushu and J. Wissinger, \"Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR,\" _Proc. SPIE_, vol. 3370, pp. 481-493, Sep. 1998. * [4] T. D. Ross, J. J. Bradley, L. J. Hudson, and M. P. O'Connor, \"SAR ATR: So what's the problem?
An MSTAR perspective,\" _Proc. SPIE_, vol. 3721, pp. 662-673, Apr. 1999. * [5] B. Ding and G. Wen, \"Target reconstruction based on 3-D scattering center model for robust SAR ATR,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 7, pp. 3772-3785, Jul. 2018. * [6] Z. Jianxiong, S. Zhiguang, C. Xiao, and F. Qiang, \"Automatic target recognition of SAR images based on global scattering center model,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 10, pp. 3713-3729, Oct. 2011. * [7] The _Air Force Moving and Stationary Target Recognition Database_. Accessed: May 5, 2016. [Online]. Available: [http://www.sdms.afi.afaf.mil/datasets/mstat/](http://www.sdms.afi.afaf.mil/datasets/mstat/) * [8] B. Ding, G. Wen, C. Ma, and X. Yang, \"Target recognition in synthetic aperture radar images using binary morphological operations,\" _Proc. SPIE_, vol. 10, no. 4, 2016, Art. no. 046006. * [9] M. Amoon and G.-A. Rezai-Rad, \"Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zemike moments features,\" _IET Comput. Vis._, vol. 8, no. 2, pp. 77-85, Apr. 2014. * [10] J.-L. Park, S.-H. Park, and K.-T. Kim, \"New discrimination features for SAR automatic target recognition,\" _IEEE Geosci. Remote Sens. Lett._, vol. 10, no. 3, pp. 46-480, May 2013. * [11] G. C. Angnostopoulos, \"SVM-based target recognition from synthetic aperture radar images using target region outline descriptors,\" _Nonlinear Anal. Theory, Methods Appl._, vol. 71, no. 12, pp. e2934-e2939, 2009. * [12] S. Papson and R. M. Narayanan, \"Classification via the shadow region in SAR imagery,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 48, no. 2, pp. 969-980, Apr. 2012. * [13] A. K. Mishra, \"Validation of PCA and LDA for SAR ATR,\" in _Proc. IEEE Region Conf._, Hyderabad, India, Nov. 2008, pp. 1-6. * [14] Z. Cui, Z. Cao, J. Yang, J. Feng, and H. Ren, \"Target recognition in synthetic aperture radar images via non-negative matrix factorisation,\" _IET Radar, Sonar Navigat_, vol. 9, no. 9, pp. 1376-1385, Dec. 2015. * [15] Y. Huang, J. Peia, Y. Yang, B. Wang, and X. Liu, \"Neighborhood geometric center scaling embedding for SAR ATR,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 50, no. 1, pp. 180-192, Jan. 2014. * [16] X. Liu, Y. Huang, J. Pei, and J. Yang, \"Sample discriminant analysis for SAR ATR,\" _IEEE Geosci. Remote Sens. Lett._, vol. 11, no. 12, pp. 2120-2124, Dec. 2014. * [17] M. Yu, G. Dong, H. Fan, and G. Kuang, \"SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation,\" _Remo Sens._, vol. 10, no. 2, p. 211, 2018. * [18] L. C. Potter and R. L. Moses, \"Arbitrated scattering centers for SAR ATR,\" _IEEE Trans. Image Process._, vol. 6, no. 1, pp. 79-91, Jan. 1997. * [19] B. Bham and Y. Lin, \"Stochastic models for recognition of occluded targets,\" _Pattern Recognit._, vol. 36, no. 12, pp. 2855-2873, 2003. * [20] H.-C. Chiang, R. L. Moses, and L. C. Potter, \"Model-based classification of radar images,\" _IEEE Trans. Inf. Theory_, vol. 46, no. 5, pp. 1842-1854, Aug. 2000. * [21] B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, \"Target recognition in synthetic aperture radar images via matching of attributed scattering centers,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 10, no. 7, pp. 3334-3347, Jul. 2017. * [22] B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, \"A robust similarity measure for attributed scattering centers sets with application to SAR ATR,\" _Neurocomputing_, vol. 219, pp. 130-143, Jan. 2017. * [23] B. 
Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, \"Robust method for the matching of attributed scattering centers with application to synthetic aperture radar automatic target recognition,\" _Proc. SPIE_, vol. 10, no. 1, pp. 0160101. * [24] Y. Sun, Z. Liu, S. Todorovic, and J. Li, \"Adaptive boosting for SAR automatic target recognition,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 43, no. 1, pp. 112-125, Jan. 2007. * [25] Q. Zhao and J. C. Principe, \"Support vector machines for SAR automatic target recognition,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 37, no. 2, pp. 643-654, Apr. 2001. * [26] H. Li and S. Li, \"Decision fusion of sparse representation and support vector machine for SAR image target recognition,\" _Neurocomputing_, vol. 113, pp. 97-104, Aug. 2013. * [27] J. J. Thiagarajan, K. N. Ramamurthy, P. Knee, A. Spanias, and V. Berisha, \"Sparse representations for automatic target classification in SAR images,\" in _Proc. 4th Int. Symp. Commun. Control Signal Process.(ICSCSP)_, Limassol, Cyprus, Mar. 2010, pp. 1-4. * [28] H. Song, K. Ji, Y. Zhang, X. Xing, and H. Zou, \"Sparse representation-based SAR image target classification on the 10-class MSTAR data set,\" _Appl. Sci._, vol. 6, no. 1, p. 26, 2016. * [29] S. Chen, H. Wang, F. Xu, and Y.-Q. Jin, \"Target classification using the deep convolutional networks for SAR images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 8, pp. 4806-4817, Aug. 2016. * [30] J. Ding, B. Chen, H. Liu, and M. Huang, \"Convolutional neural network with data augmentation for SAR target recognition,\" _IEEE Geosci. Remote Sens. Lett._, vol. 13, no. 3, pp. 364-368, Mar. 2016. * [31] K. Du, Y. Deng, R. Wang, T. Zhao, and N. Li, \"SAR ATR based on displacement- and rotation-insensitive CNN,\" _Remote Sens. Lett._, vol. 7, no. 9, pp. 895-904, 2016. * [32] A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"ImageNet classification with deep convolutional neural networks,\" in _Proc. Neural Inf. Process. Syst. (NIPS)_, Harnalls, NV, USA, Dec. 2012, pp. 1097-1105. * [33] C. Szegedy _et al._, \"Going deeper with convolutions,\" in _Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR)_, Boston, MA, USA, Jun. 2015, pp. 1-9. * [34] M. Felsberg and G. Sommer, \"The monogenic signal,\" _IEEE Trans. Signal Process._, vol. 49, no. 12, pp. 3136-3144, Dec. 2001. * [35] G. Dong and G. Kuang, \"Classification on the monogenic scale space: Application to target recognition in SAR image,\" _IEEE Trans. Image Process._, vol. 24, no. 8, pp. 2527-2539, Aug. 2015. * [36] G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, \"SAR target recognition via joint sparse representation of monogenic signal,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 8, no. 7, pp. 3316-3328, Jul. 2015. * [37] G. Dong and G. Kuang, \"Target recognition in SAR images via classification on Riemannian manifolds,\" _IEEE Geosci. Remote Sens. Lett._, vol. 12, no. 1, pp. 199-203, Jan. 2015. * [38] G. Dong and G. Kuang, \"SAR target recognition via sparse representation of monogenic signal on Grassmann manifolds,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 9, no. 3, pp. 1308-1319, Mar. 2016. * [39] C. Ning, W. Liu, G. Zhang, J. Yin, and X. Ji, \"Enhanced synthetic aperture radar automatic target recognition method based on novel features,\" _Appl. Opt._, vol. 55, no. 31, pp. 8893-8904, 2016. * [40] Z. Zhou, M. Wang, Z. Cao, and Y. Pi, \"SAR image recognition with monogenic scale selection-based weighted multi-task joint sparse representation,\" _Remote Sens._, vol. 10, no. 
4, p. 504, 2018. * [41] S. H. Lee and S. Choi, \"Two-dimensional canonical correlation analysis,\" _IEEE Signal Process. Lett._, vol. 14, no. 10, pp. 735-738, Oct. 2007. * [42] J. A. Tropp _et al._, \"Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,\" _Signal Process._, vol. 86, no. 3, pp. 589-602, 2006. * [43] S. Ji, D. Dunson, and L. Carin, \"Multitask compressive sensing,\" _IEEE Trans. Signal Process._, vol. 57, no. 1, pp. 92-106, Jan. 2009. * [44] H. Zhang, N. M. Nasrabak, Y. Zhang, and T. S. Huang, \"Multi-view automatic target recognition using joint sparse representation,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 48, no. 3, pp. 2481-2497, Jul. 2012. * [45] B. Ding and G. Wen, \"Exploiting multi-view SAR images for robust target recognition,\" _Remote Sens._, vol. 9, no. 11, pp. 1150, 2017. * [46] S. Liu and J. Yang, \"Target recognition in synthetic aperture radar images via joint multifeature decision fusion,\" _Proc. SPIE_, vol. 12, no. 1, 2018, Art. no. 016012. * [47] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, \"Canonical correlation analysis: An overview with application to learning methods,\" _Neural Comput._, vol. 16, no. 12, pp. 2639-2664, 2004. * [48] B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, \"Target recognition in SAR images by exploiting the azimuth sensitivity,\" _Remote Sens. Lett._, vol. 8, no. 9, pp. 821-830, 2017. * [49] B. Ravichandran, A. Gandhe, R. Smith, and R. Mehra, \"Robust automatic target recognition using learning classifier systems,\" _Inf. Fusion_, vol. 8, no. 3, pp. 252-265, 2007. * [50] S. H. Doo, G. Smith, and C. Baker, \"Target classification performance as a function of measurement uncertainty,\" in _Proc. 5th Asia-Pacific Conf. Symth. Aperture Radar_, Singapore, Sep. 2015, pp. 587-590. * [51] R. Touzi, \"A review of speckle filtering in the context of estimation theory,\" _IEEE Trans. Geosci. Remote Sens._, vol. 40, no. 11, pp. 2392-2404, Nov. 2002.
A synthetic aperture radar (SAR) target recognition approach is developed in this paper by exploiting the multiscale monogenic components, which are extracted from SAR images based on the 2D monogenic signal. The 2D canonical correlation analysis is then employed to analyze the correlations of the same monogenic components at different scales. Afterwards, the three monogenic components, i.e., local amplitude, local phase, and local orientation, at different scales are fused as three feature matrices, respectively. In order to further capture the correlations between different types of monogenic components, the joint sparse representation is used for target classification. Therefore, both the correlations of the same monogenic components at multiple scales and the relatedness among different types of monogenic components can be exploited in the proposed scheme. The real measured SAR images from the moving and stationary target acquisition and recognition dataset are classified to examine the validity of the proposal. Compared with some state-of-the-art SAR target recognition methods, the proposed approach is validated to be superior under both standard operating condition and several usual extended operating conditions according to the experimental results. In comparison with some other methods, which also use monogenic components as the basic features, the superiority of the proposed method demonstrates that it could better make use of the monogenic components to improve the classification performance.
# Parametric modeling and applications of target scattering centers: a review

YIN Hongcheng and YAN Hua

Manuscript received August 21, 2023. Beijing Institute of Environmental Features, Beijing 100854, China

## 1 Introduction

In the high frequency region, the total electromagnetic scattering field of a target can generally be considered as the coherent superposition of contributions from a series of discrete point scattering sources distributed at different positions, also known as scattering centers (SCs) [1]. The target parametric SC model represents the response of each SC as an analytic formula related to the radar frequency, aspect angles, and the SC's position, amplitude, phase, and other parameters. It has the advantages of simplicity, sparsity, and mechanism/structure correlation, and is widely used in the compression, prediction, simulation, diagnosis, imaging, feature extraction, and recognition of radar target signals. It has become a powerful tool for radar signal processing. Since the 1950s, research on the representation, construction, and application of the target parametric SC model has attracted the attention of many academic and engineering teams. From the perspective of technology development, the research history of the parametric SC model can be divided into four stages. The first is the "theoretical exploration period" (1950-1980), when the concept of the target SC and its physical mechanism was gradually established with the development of electromagnetic theory. The second is the "technological breakthrough period" (1980-2000), when SC modeling techniques, including backward and forward methods, were fully developed to form a relatively complete technical system. The third is the "extended application period" (2000-2010), when various SC models and methods were widely extended and applied in different fields. The fourth is the "technology upgrading period" (2010 until now), which is characterized both by the combination of parametric modeling with sparse signal processing, compressed sensing, deep learning, and other emerging technologies, and by the further in-depth development of forward modeling technology. In addition to the technology-driven perspective mentioned above, the development of the parametric SC model is also influenced by practical engineering applications. On the one hand, with the continuous improvement of the ability of radar to acquire information, people have higher requirements on the target SC model, namely wide-band, large-aspect, full-polarization, and other wide multi-domain coverage capabilities, so as to support high-fidelity simulation of radar signals, fine structure feature extraction, and other applications. On the other hand, the increasingly widespread use of complex electromagnetic materials and stealthy structures on radar targets also poses challenges to accurate SC modeling, which requires more accurate and efficient parametric modeling techniques to support engineering applications. The parametric SC modeling study of a target mainly includes two aspects. One is the parametric representation of the target SC, that is, the mathematical expression of the change of the amplitude and phase of each SC with the frequency, aspect, polarization, and other parameters needs to be established from the scattering mechanism. The second is the determination of model parameters, which mainly solves the problem of how to estimate or calculate model parameters under a specific parameterized representation form.
In addition, the application of the target SC model in practical engineering has also received much attention. At present, the theory and application of parametric SC modeling remain active research topics. Recently, both Guo et al. [2] and Yin et al. [3] gave reviews on the model forms and parameter extraction of SCs. However, little work has been concerned with the bistatic parametric representation, forward parametric modeling, and the application of the parametric model. In order to compensate for these shortcomings, this paper comprehensively summarizes and analyzes the domestic and foreign research status and development trends from three aspects, namely the representation form of the parametric SC model, the parametric modeling method, and the application, in a bid to provide a research reference and directional guidance for interested researchers and for relevant researchers with application needs.

## 2 Parametric representation of target SCs

### Representation of frequency domain characteristics

The early development of the SC model is mainly reflected in the improvement of the frequency dependence of the SC amplitude. The traditional "ideal point" model (also known as the undamped exponential summation (UE) model) does not consider the frequency dependence of the SC. In 1987, Hurst et al. [4] first proposed to use the Prony model (also known as the damped exponential summation (DE) model) to describe the frequency-dependent behavior of radar target SCs, and pointed out the relationship between the damping exponential factor and the scattering mechanism. But the model is only effective in a narrow frequency range. Subsequently, Carriere et al. [5] and Potter et al. [6] from Ohio State University in the United States respectively proposed the geometric theory of diffraction (GTD) model of target SCs based on the approximate scattering solutions of several typical bodies and the GTD. The model considers that the frequency dependence of the SC satisfies a power function with half-integer exponent, where the value of the exponential factor (called the frequency-dependent factor or type factor) corresponds to a specific geometric structure. The GTD model has attracted wide attention. One of the reasons is that it can accurately characterize the scattering characteristics of targets over a wide frequency range. The other reason is that it can also give the scattering mechanism or structure type of each SC, which has a clear physical meaning. Although the GTD model has been widely used, its scope of application had not been fully and strictly discussed until [7, 8, 9] conducted in-depth explorations of it in recent years. Based on geometrical optics (GO), physical optics (PO), and GTD, it is proved that the GTD model is applicable not only to single scattering structures such as the sphere, cylinder, and plate, and multiple scattering structures such as the top-hat, dihedral corner, and trihedral corner, but also to arbitrary double specular reflection and edge diffraction mechanisms. Furthermore, the explicit relation between the frequency-dependent factor and the target geometry is given. Subsequently, [10] further extended this conclusion to the case of arbitrary multiple specular reflection, edge diffraction, and tip/corner diffraction mechanisms by rigorous derivation, derived the formula for calculating the frequency-dependent factor of an arbitrary multiple scattering mechanism, and pointed out that the value of the frequency-dependent factor is related to the number of reflections, the geometric type of each reflection, and the caustic condition.
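For orientation, the frequency-domain models discussed above can be written compactly. The expressions below are a standard formulation from the GTD-model literature rather than formulas reproduced from this review; here \\(A_{i}\\) denotes the amplitude of the \\(i\\)th SC, \\(r_{i}\\) its down-range position, \\(f_{c}\\) the center frequency, \\(c\\) the speed of light, and \\(\\alpha_{i}\\) the frequency-dependent (type) factor:

\\[E_{\\mathrm{UE}}(f)=\\sum_{i=1}^{N}A_{i}\\exp\\left(-j\\frac{4\\pi f}{c}r_{i}\\right),\\qquad E_{\\mathrm{GTD}}(f)=\\sum_{i=1}^{N}A_{i}\\left(j\\frac{f}{f_{c}}\\right)^{\\alpha_{i}}\\exp\\left(-j\\frac{4\\pi f}{c}r_{i}\\right),\\qquad\\alpha_{i}\\in\\left\\{0,\\pm\\frac{1}{2},\\pm 1,\\ldots\\right\\}\\]

In the usual convention, \\(\\alpha_{i}=1\\) is associated with flat-plate and dihedral/trihedral specular returns, \\(\\frac{1}{2}\\) with singly curved surfaces, \\(0\\) with doubly curved surfaces, and negative half-integer values with edge and tip diffraction, which is the structure-type information referred to above.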
Their work makes the SC representation of frequency-domain characteristics more systematic and rigorous. Although the GTD model is accurate in most cases, for actual complex targets there may be large errors near some specific angles, especially when processing two-dimensional (2D) frequency-angle scattering data obtained by, e.g., synthetic aperture radar (SAR) and inverse SAR (ISAR). The problem mainly stems from the inability of the GTD model to describe the behavior of non-point scatterers. In this case, it is necessary to combine the angular-domain characteristics with the frequency-domain parametric representation to construct the SC parametric representation model.

### Representation of angle domain characteristics

High-resolution 2D radar imaging such as SAR and ISAR requires synthetic aperture processing of multi-angle echo signals, so the angular scattering characteristics of targets become very important. Therefore, the SC frequency-domain representation models described above are not sufficient to meet the requirements of SAR/ISAR signal processing, and they need to be extended to the angle domain. At first, the two-dimensional UE model [11], Prony model [12], and GTD model [13] were proposed based on a purely mathematical extension of the corresponding one-dimensional (1D) models. Although these models can properly represent the angular scattering characteristics of radar targets in many cases, their accuracy still decreases significantly at certain particular aspect angles, mainly because they do not fully consider some electromagnetic scattering mechanisms of the target. Sacchini et al. [12] investigated the frequency-domain and angular-domain dependence behaviors of eight canonical scattering structures, and analyzed the relationship between the two damping factor parameters of the 2D Prony model and the scattering structures, but did not give a new representation form of the SC model. Gerry et al. [14] proposed the attributed SC (ASC) model based on the analytic scattering forms of PO, GTD, and several canonical objects, which remedies the inaccurate description of the angle-dependent behavior of some scattering mechanisms by previous models. The accuracy of the model is significantly improved. The ASC model extends the traditional concept of the point SC, and can express both point scattering sources, called local SCs (LSCs), and line scattering sources, called distributed SCs (DSCs). In particular, it should be pointed out that, besides retaining the frequency-dependent factor of the GTD model, the ASC model adds new parameters related to the length and orientation of the extended scattering structure. Therefore, it can be used to extract richer target attribute information from 2D radar data, which is conducive to improving target recognition performance. Although the ASC model can accurately express the angular characteristics of the SC, sometimes there is a certain deviation from the actual data. Therefore, some Chinese scholars have been devoted to improving the ASC model. For example, Ai et al. [15] modified the sinc function form describing the angular-domain characteristics of the extended SC in the ASC model, and proposed a parametric model of the extended SC with non-uniform amplitude distribution. Feng et al. [16] added a Gaussian function multiplier into the angle term of the sinc function. Li et al.
[17] replaced the exponential function form describing the LSC with a cosine function form, and meanwhile introduced the sliding radius of the SC position and a window function for the SC's visible range. All these works have improved the accuracy of the angular-domain representation of SCs to some extent. However, the ASC model and its improved versions only give the dependence on the azimuth angle, and have not yet characterized the behavior in the elevation direction. Therefore, the ASC model is a 2D model and belongs to the narrow-angle models, which are only applicable to a small angle range. However, emerging radar systems such as interferometric SAR, tomographic SAR, and wide-angle SAR can receive target scattered signals in a wide-angle three-dimensional (3D) wave-number domain, and significantly broaden the dimension and width of the target signal parametric domain acquired by radar. It is therefore necessary to establish high-dimensional and wide-angle models for 3D target imaging and to extract more abundant target attribute information. In this regard, Jackson et al. [18; 19] proposed the canonical scattering feature (CSF) model, in which the scattering responses of complex objects are approximately expressed as the superposition of the scattering responses of six 3D canonical objects described by different analytic expressions. Although the CSF model can express the scattering characteristics in the wide-angle 3D wave-number domain, it has some problems such as poor model universality and difficult parameter estimation due to its complex mathematical form and idealized description of geometric structures, which has prevented the model from receiving wide attention and application. As a matter of fact, the complexity of the target's scattering characteristics in the wide-angle 3D wave-number domain makes it difficult to establish a unified and simple parametric expression, which is why no effective model was put forward for a long time. Recently, Lu et al. [20] proposed the multi-manifolds representation (MMR) of target wide-angle scattering to give a parameterized representation of wide-angle SCs applicable to single specular reflection, edge/corner diffraction, and other mechanisms, which provides a new way to solve this difficult problem.

### Representation of bistatic polarized domain characteristics

As SC models such as GTD and ASC have received wide attention, some researchers have tried to extend them to the full-polarization and bistatic cases, and there have been a number of studies combining various polarization representations with the above SC models. For example, Fuller [21] combined the Krogager decomposition and the GTD model for classification and identification of various typical structures in complex targets. Xu et al. [22] proposed the coherent polarization GTD (CP-GTD) model and gave the polarization expression of the GTD SC model. However, these studies only consider the representation of 1D SCs in the frequency domain, which is not suitable for 2D/3D signal processing problems such as polarimetric SAR and polarimetric tomographic SAR. Therefore, Jackson et al. [19] combined the 3D CSF model with the odd-even order polarization decomposition representation to build a bistatic full-polarization parametric SC model of radar targets. Saville et al.
[23] combined the Krogager polarization decomposition model with the 2D GTD model, and utilized the frequency-dependent factor and the Krogager decomposition factor to achieve fine classification of SCs through analysis of the joint frequency-polarization domain. In China, Duan et al. [24] combined the Cameron polarization decomposition representation with the ASC model, while Li et al. [25] combined the Pauli decomposition with the ASC model. All these works have achieved a more precise classification of SCs. However, the above works simply combined a polarization model with an SC model, and did not consider the internal correlation between the polarization domain and the frequency-angle domain. As a result, the mechanism types described by these models are incomplete, and the models are not suitable for the case of large bistatic angles. Therefore, Yan et al. [26] revealed the intrinsic relationship between the polarization response and the principal scattering direction of the target scattering mechanism, proposed a 3D rotation transformation representation combining the characteristics of the angular domain and the polarization domain, and established a bistatic polarization scattering parametric model of arbitrary multi-plate structures, which solved the problem that existing models could not parameterize the scattering response at large bistatic angles. Subsequently, Xing et al. [27] further proposed a bistatic attributed SC model and parameter estimation method on this basis, which solved the problem of target fine structure feature inversion under the condition of large bistatic angles, and provided a new method for accurate identification of bistatic radar targets. In addition, Qu et al. [28] studied the expression forms of the bistatic SC model formed by specular reflection, edge diffraction, creeping waves, and other mechanisms for objects composed of spheres, cylinders, and cones. These works extend the applicable mechanism and structure types of the existing models to some extent.

### Representation of time-frequency domain and angle-frequency domain characteristics

In addition to representing SCs in the frequency domain or angle domain, a parametric model can also be established in the time-frequency domain and angle-frequency domain, which is called the time-frequency representation (TFR) model. Moore et al. [29] and Trintinalia et al. [30] successively used the time-domain Prony model and the adaptive Gaussian representation (AGR) model to express the time-frequency domain characteristics of targets with both SC and resonance mechanisms, and realized the separation and extraction of features from the SC and resonance mechanisms. Subsequently, Trintinalia et al. [31] proposed an angle-frequency domain AGR model for describing wide-angle scattering data. Chen et al. [32] and Li et al. [33] used the Chirplet expression model for representing the scattering characteristics of objects with rotating components. Guo et al. [34] proposed that SCs can be divided into three classes, i.e., fixed SCs, distributed SCs, and sliding SCs, and achieved accurate diagnosis and extraction of the various types of SCs according to their different angle-frequency domain characteristics.
The advantages of TFR models are their concise form, strong universality, and ability to represent various scattering mechanisms; the disadvantages are that the models are oversimplified, which reduces the model accuracy, and that detailed target attribute information cannot be involved in these TFR models.

## 3 Determination of the model parameters

With the UE, Prony, GTD, ASC, CSF, and other parametric model forms, the parameters of the selected parametric model need to be determined in order to complete the parametric modeling. Generally speaking, parametric modeling methods can be divided into narrow-angle methods and wide-angle methods according to the angle range of the target scattering data to be modeled. There are several technical routes to determine the model parameters, which can be classified into inverse modeling methods, forward modeling methods, and forward-inverse combination modeling methods.

### Narrow-angle method

#### 3.1.1 Inverse method

Inverse methods, also known as parameter estimation or parameter inversion methods, are a class of methods that estimate or invert the model parameters under the condition of known scattering data and a known model form. There are three categories of inverse methods for parametric SC modeling, as follows. The first category is based on the Fourier transform or time-domain response, including non-parametric processing methods such as local peak search [35], the windowing method [36; 37], and matched filtering [38], as well as CLEAN-type spectrum estimation methods such as CLEAN [39; 40; 41], several improved CLEANs [42; 43; 44; 45; 46], and RELAX [47; 48; 49]. This kind of method has the advantages of model-form independence, high computational efficiency, strong robustness, and the ability to process long data records, but the disadvantages are the low accuracy of parameter estimation and the low spectral resolution. The second category is superresolution spectrum estimation methods, including the autoregression (AR) method [50; 51], the Burg maximum entropy method [52; 53], the extended Prony method [4; 12; 53; 54], and other linear prediction techniques, as well as subspace techniques such as multiple signal classification (MUSIC) [55; 56; 57; 58; 59; 60; 61], estimation of signal parameters via rotational invariance techniques (ESPRIT) [59; 60; 61; 62; 63; 64; 65; 66; 67; 68], the matrix pencil method [11; 69], and so on. Compared with the methods based on the Fourier transform, such methods have the advantages of high estimation performance and high spectral resolution when the signal-to-noise ratio is high and the data length is small. But the disadvantages are that they are only applicable to the UE model and Prony model, and that they have problems such as low computational efficiency, large storage requirements, and sensitivity to noise. However, combined with model approximation [13; 70], parameter decoupling [6; 14; 40; 41], and subband/subaperture decomposition [71; 72], the superresolution spectral estimation methods can still be used for parameter estimation of the GTD, ASC, and other models with more complex forms and more parameters. The third category is iterative optimization methods, including iterative quadratic maximum likelihood (IQML) estimation [73; 74], the method of direction estimation (MODE) [75], approximate maximum likelihood (AML) estimation [6; 13; 41; 76; 77; 78; 79], etc. These methods usually transform the maximum likelihood (ML) estimation problem into a nonlinear least squares optimization problem.
Local optimization techniques such as the steepest descent method [6] and Newton method [76], or global optimization techniques such as particle swarm optimization (PSO) [77; 78] and the genetic algorithm (GA) [79], are used to solve the problem. In recent years, with the rapid development of regularization, sparse signal representation, compressed sensing, and other technologies, many methods such as \\(L_{p}\\) regularization or basis pursuit (BP) [80; 81; 82; 83; 84], orthogonal matching pursuit (OMP) [85; 86], atomic norm minimization (ANM) [87], and sparse Bayesian learning (SBL) [17; 25; 88] have been used to model the SCs. Very recently, a convolutional neural network (CNN) was also applied to estimate the parameters of the GTD model, showing competitive advantages [89]. Generally speaking, the iterative optimization methods have the advantages of superresolution capability and applicability to sparsely sampled data. However, compared with the previous two kinds of inverse modeling methods, this kind of method has the highest computational complexity and low efficiency, and the algorithms easily fall into local optima as the complexity of the model form increases. The three kinds of methods listed above are all based on the frequency-angle domain representation model and are processed in the frequency-angle domain (mainly the last two kinds) or the image domain (mainly the first kind). In addition, there is also a kind of method based on the TFR model and time-frequency analysis, which usually uses time-frequency analysis tools such as the Wigner-Ville distribution (WVD) for non-parametric estimation, or methods based on time-frequency domain basis functions such as the Gabor basis and Chirplet basis, using the CLEAN algorithm [30; 90], superresolution spectral estimation [91; 92], or sparse time-frequency analysis [93] to realize the parametric modeling for the time-frequency domain representation. Compared with the frequency-angle domain methods and image domain methods, the time-frequency domain methods can recognize the mechanism types more finely. However, separating and extracting those mechanism components accurately and efficiently is more challenging.

#### 3.1.2 Forward or forward-inverse combination method

The forward modeling method directly calculates the parametric model parameters by using exact knowledge of the target geometry and electromagnetic modeling technology. In 1994, Bhalla et al. [94; 95; 96; 97] from the University of Texas proposed a time-domain/image-domain ray-tube integration technique to realize the rapid calculation of 1D/2D/3D ISAR images of a target, and to complete the rapid construction of 3D SC models of monostatic and bistatic targets with the CLEAN algorithm [98; 99; 100; 101; 102]. Compared with the inverse method, Bhalla's method [98; 99; 100; 101; 102] uses prior knowledge of the target geometry and electromagnetic scattering mechanisms to calculate the 3D image of the target and the SC parameters at a single frequency and single view, avoiding the large number of swept-frequency and swept-angle radar cross section (RCS) calculations and the complex parameter estimation required by the inverse method, which significantly improves the computational efficiency. Moreover, the accuracy of the SC model is less limited by the bandwidth and aperture width. This is the earliest work that reflects the idea of forward parametric modeling. The algorithm has been directly integrated into Xpatch, a well-known code for electromagnetic modeling and prediction in the United States.
However, since the method proposed by Bhalla needs to calculate the image first and then extract the SCs from the image, it is more exactly a forward-inverse combination method. In 1997, Wang et al. [103] proposed a deterministic generation method for the SC model of complex targets, which is essentially equivalent to image generation without considering the point spread effect caused by the limited bandwidth. Therefore, like Bhalla's method, it is not a purely forward modeling method. Both of the above algorithms need to divide the image into grids. The denser the image grid division, or the higher the image resolution, the higher the accuracy of the model, but also the more computing and storage space is occupied. In order to solve this problem, Bhalla et al. [99] proposed to generate 1D images in three orthogonal directions and extract 1D SCs to reconstruct the 3D SCs, which effectively alleviated the contradiction between image pixel resolution and computational/storage cost, and realized 3D SC modeling with higher accuracy. However, this method cannot use Sullivan's scheme [104] to solve the fast convolution of non-uniform sampling, and leads to a great increase in the calculation time when the SC order is high. In 2016, Yun et al. [105] first improved the CLEAN algorithm to suppress the error of the SC position by maximizing the correlation between the point spread function and the residual image. More importantly, in 2017, Yun et al. [106] still extracted the SCs directly from the 3D image, but interpolated the non-uniform spatial data onto a uniform grid and accumulated the complex coefficients in each grid cell, which to some extent avoided the high sampling rate required by the Sullivan scheme. In 2019, Bunger [107] succeeded in extending Bhalla's method to full-wave computational electromagnetic techniques. In addition, Buddenick et al. [108, 109] applied the bistatic 3D SC modeling technique based on ray-tube integration to the rapid simulation of vehicular radar signals in road scenes, and proposed the concept of equivalent SCs, merging adjacent SCs to reduce the SC order significantly. In China, the Electromagnetic Engineering Laboratory of Wuhan University, the Institute of Applied Electromagnetism of Beijing Institute of Technology, the National Key Laboratory of Scattering and Radiation, and other units have done a lot of work on forward parametric modeling by combining electromagnetic modeling technology with parametric representation. Wuhan University mainly realizes parametric modeling based on high-frequency electromagnetic computing technology. In 2014, He et al. [110] first proposed a forward calculation method for ASC model parameters based on component decomposition and ray clustering. Different from the method proposed by the University of Texas, this method does not require imaging and belongs to the pure forward modeling techniques, with unrestricted computational accuracy and scale. However, the target geometry needs to be decomposed into components in advance, and the decomposition of geometric components has to be performed manually. Therefore, there are problems such as low efficiency and dependence on human experience. From 2016 to 2018, Zhang et al. [111, 112, 113, 114, 115, 116] presented solutions for the accurate correction of 3D SC positions and the automatic processing of forward parametric modeling, and constantly improved the forward parametric modeling technology based on the high-frequency method. In addition, Zhang et al.
Zhang et al. [117] further extended their forward modeling technique to complex medium targets. The Beijing Institute of Technology mainly uses accurate numerical methods to realize parametric modeling. For example, Qu et al. [118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131] proposed a method for forward calculation of SC parameters from the surface current distribution obtained with accurate numerical techniques. The method calculates the SCs' locations and amplitudes entirely from the target geometry and the induced surface currents, although the components must still be decomposed before the calculation. They also proposed an SC modeling method based on time-frequency analysis, a forward-backward combination method in which the SC positions are extracted directly from the target geometric model while the SC amplitude parameters are extracted from the time-frequency image. In addition, Guo et al. [132] studied SC modeling for dielectric spheroid targets. The National Key Laboratory of Scattering and Radiation mainly uses the forward-inverse combination method to realize parametric modeling. In 2017, Yan et al. [133] proposed a rapid 3D SC modeling method based on a ship-sea "four-path" scattering model to solve the parametric modeling problem of target-background coupling SCs. In 2021, Yan et al. [134, 135, 136] proposed a high-precision and rapid modeling method for the target's wideband 3D SCs based on the forward-inverse combination. The method reaches a high level of efficiency, universality, and accuracy, and therefore has strong engineering practicability. In general, compared with the inverse modeling method, the forward modeling method and the forward-inverse combination method make full use of electromagnetic theoretical modeling, which avoids the complex parameter inversion process and the large amount of computation time needed to generate scattering data at a high sampling rate, so they have clear advantages in calculation accuracy and efficiency. However, the existing purely forward modeling methods usually require a large amount of manual component decomposition of the target geometric model before calculation, which not only affects the overall efficiency of parametric modeling but also depends on human experience. In contrast, some of the forward-inverse combination methods mentioned above generally do not require human involvement, so they have better engineering practicability. However, the forward-inverse combination method usually relies on radar images (ISAR images or time-frequency images): the accuracy of SC extraction depends on the resolution of those images, and generating higher-resolution images requires more memory, so this kind of method is limited by the available computing resources.

### Wide-angle method

The wide-angle scattering behavior of the target is related not only to the local scattering mechanisms but also to the types and sizes of the target's local structures. Compared with the behavior in the frequency domain, the scattering behavior of a target over a wide angular range is much more complex; it is not only difficult to represent, as mentioned above, but also difficult to model, and it has become one of the tough and hot issues in the field. At present, most of the work on wide-angle parametric modeling is based on the inverse method.
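To make this contrast concrete, a schematic way of writing the two situations is shown below; the symbols are generic (a GTD-type frequency dependence and a 2D geometry) and are meant only to illustrate why the wide-angle case is harder to represent, not to reproduce any particular model discussed above.

```latex
% Narrow angular range: aspect-independent amplitudes and positions
E_{\mathrm{narrow}}(f,\varphi) \approx \sum_{i=1}^{N} A_i
  \left(\mathrm{j}\,\frac{f}{f_c}\right)^{\alpha_i}
  \exp\!\left[-\mathrm{j}\,\frac{4\pi f}{c}\,(x_i\cos\varphi + y_i\sin\varphi)\right]

% Wide angular range: amplitudes (and, for sliding SCs, positions) vary with aspect
E_{\mathrm{wide}}(f,\varphi) \approx \sum_{i=1}^{N} A_i(f,\varphi)\,
  \exp\!\left[-\mathrm{j}\,\frac{4\pi f}{c}\,\bigl(x_i(\varphi)\cos\varphi + y_i(\varphi)\sin\varphi\bigr)\right]
```

Estimating the aspect-dependent amplitude and position functions, rather than a few scalars per SC, is what makes the wide-angle inverse problem substantially harder.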
Wide-angle inverse modeling methods fall mainly into the following three categories. The first is the direct parameter inversion method based on the physical model, in which the parameters are inverted directly from the form of the wide-angle parametric model. Such methods usually need to construct a maximum likelihood (ML) estimation or nonlinear least squares (NLS) problem based on a specific parametric model (such as the CSF model), and then select an appropriate parameter inversion technique to solve it. Because the CSF model has a complex form, a high-dimensional parameter space, and a high-order, nonlinear, non-convex objective function, an image-domain parameter decoupling approach is usually adopted to reduce the complexity of the inverse problem. For example, for the problems of establishing the monostatic and bistatic CSF models, Jackson et al. [18, 137, 138] proposed frequency-domain CLEAN-type inversion methods to estimate the location, orientation, size, and other parameters of the local scattering structures. Subsequently, Hammond [139] and Crosser [140] estimated the CSF model parameters based on dictionary construction and the basis pursuit (BP) method, and Rademacher [141] realized classification and parameter estimation of the CSF model using Bayesian techniques. Unfortunately, these methods have only been tested on simple combined objects, not on complex real targets. The second type is the polynomial fitting method, in which complete or overcomplete basis functions are used to fit the SC position and amplitude. Trintinalia et al. [30] used an OMP-like method to fit the variation of the SC amplitude with the radar view based on the AGR model. Jonsson et al. [142] used the RELAX algorithm to fit the variation of the SC amplitude and position parameters with the radar view based on Legendre and quadratic polynomials. Varshney et al. [143], Stojanovic et al. [144], Austin et al. [145], and Cetin et al. [146] used sparse signal methods to construct anisotropic SC models and obtain high-resolution 2D/3D target images. Hu et al. [147] used 2D polynomial fitting to describe the variation of each SC's amplitude over the 2D angle domain. In 2015, Gao et al. [148] established an anisotropic SC model by using a dictionary constructed from rectangular window functions and solving a total-variation-penalized nonlinear least squares problem with the RELAX algorithm. In general, the complexity of this type of method is lower than that of the first, and it has been successfully applied to wide-angle parametric modeling of complex targets. However, such methods cannot extract the orientation, size, and other attributes of the target's local geometric structures, because they use non-physical mathematical fitting functions to express the amplitude variation of the SCs. The third type is the multi-angle association method, also known as the sub-aperture method. This method divides the wide-angle domain into a series of angular subdomains (sub-apertures) and assumes that the narrow-angle parametric model is still valid within each sub-aperture, so that narrow-angle parametric modeling technology can be used to extract the SC parameters in each subdomain. The SCs observed at different views are then associated to construct the wide-angle parametric model.
Bhalla et al. [100] created spatial grids in the target coordinate system and clustered the multi-angle 3D SCs falling into the same grid cell to realize wide-angle association of the SCs. Zhou et al. [149] adopted the k-means clustering algorithm to associate multi-angle 3D SCs in the five-dimensional (5D) vector space formed by the SC position and the visible angle, to address the fact that Bhalla's method is no longer effective for associating sliding-type SCs; this worked to some extent on the SLICY model, a simple combined target. Raynal [150] used a KNN clustering algorithm to carry out wide-angle association of 3D SCs in the 5D joint position-angle space, which improved the performance. In the same year, Zhou et al. [151] used 1D ESPRIT and Hough transform techniques to extract and associate 3D SCs from measured high-resolution range profile (HRRP) history data under a single grazing angle, but this method was only effective for fixed SCs. In 2014, Bai et al. [152] used Kalman filtering to realize multi-angle association of 1D SCs from HRRP history data. In 2015, Zhou et al. [153] used 2D ESPRIT and the Hough transform to extract and associate 3D SCs from 2D images at various views under a single elevation, which is also only suitable for fixed SCs. From 2014 to 2015, Cui et al. [154, 155] used 1D ESPRIT and a double Hough transform to extract and associate both fixed and sliding 3D SCs from HRRP history data under two elevations. In 2016, Hu et al. [156] proposed a wide-angle SC association method using 1D ESPRIT and RANSAC techniques based on HRRP history data. Cui et al. [157] used the basis pursuit de-noising (BPDN) algorithm and Hough transform to associate 2D SCs from omnidirectional 2D images under two elevations, and realized the extraction and association of both fixed and sliding 3D SCs at the same time; however, their algorithm was only suitable for associating circularly sliding SCs. Zhou [158] used a density-based clustering algorithm, OPTICS, to cluster the SCs in the joint position-angle space and realized wide-angle association of all types of 3D SCs. Based on MMR, Yan et al. [20, 159] further used sparse multi-manifold clustering (SMMC) to improve the performance of wide-angle SC association, and realized the angle-domain description of the SCs via polynomial fitting combined with the established sparse MMR theory. The above wide-angle parametric modeling methods are all inverse methods. It is relatively simple to construct a wide-angle parametric model using a purely forward modeling method, because no matter how the radar view changes, each SC component of the parametric model corresponds to an exact part or structure of the target, so the SCs at different views can be naturally associated through that component or structure. For example, Zhang [160] correlated the wide-angle SCs using the pre-decomposed target components and the corresponding ray cluster information, and then completed the construction of the wide-angle parametric model of the target scattering through polynomial fitting. However, in this method there is no definite criterion for component decomposition, which makes complete automation difficult and leaves the result dependent on personal experience.
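As a deliberately simplified illustration of the sub-aperture association step that several of the inverse methods above rely on, the sketch below clusters per-aspect 3D SC estimates in a joint position-angle feature space with k-means. The synthetic data, the known number of clusters, and the scalar angle weighting are all assumptions made for the example; sliding SCs in particular require the more careful treatments cited above.

```python
import numpy as np
from sklearn.cluster import KMeans

def associate_scatterers(per_aspect_scs, n_centers, angle_weight=0.1):
    """Associate 3D SC estimates obtained at different aspect angles by
    clustering in a joint position-angle space.

    per_aspect_scs: list of (aspect_deg, positions) pairs, where `positions`
    is an (M, 3) array of SC position estimates for that aspect.
    Returns the stacked feature samples and their cluster labels."""
    samples = []
    for aspect_deg, positions in per_aspect_scs:
        for p in np.atleast_2d(positions):
            # 5D in the cited works; here position (3D) plus one angle for brevity
            samples.append([p[0], p[1], p[2], angle_weight * aspect_deg])
    samples = np.asarray(samples)
    labels = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit_predict(samples)
    return samples, labels

# Tiny synthetic example: two fixed SCs observed (with noise) at three aspects.
rng = np.random.default_rng(0)
truth = np.array([[1.0, 0.0, 0.5], [3.0, 1.0, 0.0]])
obs = [(a, truth + 0.05 * rng.standard_normal(truth.shape)) for a in (0.0, 5.0, 10.0)]
pts, lab = associate_scatterers(obs, n_centers=2)
print(lab.reshape(3, 2))   # same label for the same physical SC across aspects
```

Samples that receive the same label across aspects are treated as observations of one and the same scattering center, whose aspect-dependent amplitude can then be fitted, for example, by polynomials.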
## 4 Application of parametric model

Generally speaking, most work related to parametric modeling has a specific application background. According to the published literature, applications of parametric models are mainly focused on super-resolution imaging and target feature extraction and recognition. In addition, there are applications such as data compression, rapid signal generation, 3D visualization and reconstruction, scattering source diagnosis, target geometry model correction, angular scintillation (glint) analysis, miss distance analysis, and target tracking; although these account for a smaller proportion of the literature, they span a wider range of fields.

### Super resolution imaging

Traditional radar target imaging algorithms are based on the Fourier transform. Although this type of method is efficient, it has high sidelobes and its resolution is restricted by the Rayleigh limit. Model-based superresolution imaging algorithms, which exploit the prior knowledge embedded in parametric models, have attracted great attention because of their low sidelobes and their ability to break the Rayleigh resolution limit. In fact, all of the parametric spectral estimation methods discussed above can be used for high-resolution imaging of radar targets. As with parametric spectral estimation, the proposed parametric superresolution imaging methods mainly include linear prediction [53, 161, 162, 163, 164], eigen-decomposition [54, 56, 57, 58, 59, 60, 61, 62, 63, 67, 68], and iterative ML estimation [47, 74, 75, 85, 87]. There are usually two ways to generate images from a parametric model. One is to calculate the power spectral density directly to obtain a high-resolution image, although the peak values of such an image may not be directly related to the SC amplitudes. The other is the spectrum extrapolation method, that is, extrapolating the data in frequency/angle based on the parametric model and the estimated parameters, and then performing traditional Fourier-transform imaging on the extended frequency/angle range to obtain the super-resolution image. In any case, parametric super-resolution imaging and parametric feature extraction are essentially the same: both utilize the prior knowledge of parametric models to obtain high-resolution scattering source distribution information under the same data conditions. However, when the model mismatches the data or the signal-to-noise ratio is low, the performance of parametric super-resolution imaging algorithms degrades significantly.

### Target recognition

Target feature extraction and recognition based on parametric models is another application of parametric modeling of electromagnetic scattering. On the one hand, the parametric model carries more detailed attribute information about the target, which helps improve target recognition performance. On the other hand, compared with template-based target recognition, the data stored for parametric-model-based recognition is highly compressed and low-dimensional, which avoids the problem of massive image data storage under extended operating conditions (EOCs) and greatly improves the computational efficiency of feature matching. Therefore, in recent years, research on target recognition methods based on parametric models has received wide attention from scholars at home and abroad and has gradually become a research hotspot. In 1998, in DARPA's well-known moving and stationary target acquisition and recognition (MSTAR) program, the fast 3D SC parametric modeling technique [97, 98, 99, 100, 101] proposed by Bhalla et al. of the University of Texas was successfully applied to the online generation of template data, providing important support for SAR image interpretation and automatic target recognition research [165].
Chiang et al. [166, 167, 168] from The Ohio State University proposed a SAR target recognition method based on the ASC model; their results show that using the attribute parameter features can significantly improve target recognition accuracy. Hammond [139] implemented automatic recognition of canonical bodies based on the CSF model, but no corresponding solution was available for complex real targets. From the end of the 1990s to the present, Chinese scholars have also carried out a great deal of research on this topic. Zhang [169], Jiang et al. [170], Wang et al. [171, 172], and Wang [173] studied target recognition based on the GTD model. Ji [174], Liu et al. [175], and Lin et al. [176] studied SAR target recognition based on the ASC model. Zhou et al. [177] proposed a target recognition method based on a full-azimuth 3D SC parametric model. Wen et al. [8], Ma et al. [178], and Ding et al. [179, 180] carried out research on SAR target component detection and target recognition based on the ASC, improved CSF, and other models. All of these works achieved useful results for their specific problems.

### Data compression and fast generation

Through the frequency and angle dependence relations in the parametric model, scattering data can be compressed and reconstructed over a certain bandwidth and angular range, and the radar echo signal can be generated quickly. Abroad, Tseng [37] and Chang et al. [181] extracted the target's scattering mechanism components by combining a windowing technique with the SC model to compress and reconstruct measured data such as wideband RCS or imaging data. The 3D SC modeling based on the ray-tube integration technique proposed by Bhalla [99] is capable of rapid, high-precision reconstruction of radar data such as RCS, HRRP, SAR/ISAR images, or time-domain echo signals, and has been applied to the rapid generation of radar data in large-scale scenarios and to the online prediction of target recognition templates. Xiong et al. [182, 183, 184, 185, 186, 187] carried out a series of in-depth studies on the compression, interpolation, and extrapolation of RCS data based on the GTD model, and proposed frequency-domain RCS extrapolation based on genetic-algorithm parameter estimation and frequency-angle domain RCS interpolation based on fractional polynomial fitting. Wang [173] applied the 1D GTD model to fit and compress swept-frequency RCS data at different angles. In 2016, Qiu [188] applied the SC model to the compression, interpolation, and extrapolation of RCS data. However, the parametric models used in the above work are narrowband, small-angle models, and the data compression rate could be further improved if wideband, wide-angle models were adopted. In 2021, Lu et al. [135, 20], Zhang et al. [136], and Yan et al. [159] proposed a wideband, wide-angle SC parametric modeling technique based on ray-tube integration. Through the GTD model, correction of the 3D SC positions, forward calculation of the frequency dependence factor, wide-angle SC association, and wide-angle SC parameter fitting, the modeling accuracy and frequency extrapolation capability are greatly improved.
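As a toy illustration of how a fitted parametric model supports compression, reconstruction, and frequency extrapolation, the sketch below regenerates a swept-frequency response from a handful of stored GTD-type parameters. The 1D form, the parameter values, and the band edges are assumptions made for illustration only.

```python
import numpy as np

# Illustrative, stored GTD-type parameters for a few scattering centers:
# (complex amplitude A, frequency-dependence exponent alpha, down-range position r in meters)
sc_params = [(1.0 + 0.0j, 0.5, 1.2),
             (0.6 - 0.2j, -0.5, 3.4),
             (0.3 + 0.1j, 1.0, 5.0)]

C = 3e8          # speed of light, m/s
FC = 10e9        # reference (center) frequency, Hz

def gtd_response(freqs_hz, params, fc=FC):
    """Reconstruct a 1D swept-frequency response from GTD-type parameters:
    E(f) = sum_i A_i (j f/fc)^alpha_i exp(-j 4 pi f r_i / c)."""
    f = np.asarray(freqs_hz, dtype=float)
    resp = np.zeros_like(f, dtype=complex)
    for amp, alpha, r_i in params:
        resp += amp * (1j * f / fc) ** alpha * np.exp(-1j * 4 * np.pi * f * r_i / C)
    return resp

# Reconstruct the measured band, then extrapolate beyond it from the same parameters.
band = np.linspace(9e9, 11e9, 201)        # "measured" band
extrap = np.linspace(11e9, 12e9, 101)     # extrapolated band
recon = gtd_response(band, sc_params)
extra = gtd_response(extrap, sc_params)
print(abs(recon[:3]), abs(extra[:3]))
```

Only the small set of (amplitude, frequency-dependence exponent, range) triples needs to be stored or transmitted, which is the sense in which the parametric model compresses and extrapolates the swept-frequency data.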
In the same year, Hu et al. [189] applied the 3D SC parametric model to fast SAR echo and image simulation of complex ship targets, significantly improving the simulation efficiency.

### Other aspects

In addition to the applications in imaging, recognition, and fast simulation, parametric models are also applied to 3D visualization of radar targets, target tracking, and the processing of scattering measurement data. In terms of 3D visualization, Jackson et al. [18, 138] proposed a 3D structure reconstruction technique based on the CSF model and realized 3D parametric modeling and visual reconstruction of the geometric structures of canonical bodies, combined bodies, and complex vehicle targets. Li et al. [190] and Xu et al. [191] proposed geometric visualization and reconstruction schemes for typical vehicles based on 3D imaging, image segmentation, and the CSF model. In the same year, Guo et al. [34] proposed a geometric reconstruction method for complex targets based on the SC model for typical aircraft targets. In the aspect of target tracking, Peters et al. [192] used the SC model to analyze the tracking of complex targets as early as the 1960s, and Ross et al. [193] analyzed the glint problem of radar targets via the SC model. In recent years, Qu et al. [118] and Guo et al. [124] studied miss distance estimation based on the parametric SC model, and Guo et al. [127, 128, 130] studied the prediction of glint characteristics and monopulse angle measurement of radar targets based on the ASC model. In addition, parametric models are also used in measurement data processing and geometric model verification. For example, in 1993, Chang [194] applied the SC modeling method to eliminate scattering sources from non-target objects in measurements of target scattering characteristics. In 2008, Raynal [150] proposed applying SC model information to CAD model verification, such as locating and correcting missing or redundant parts of the target geometric model. In 2016, Liu et al. [113] studied geometric model modification based on limited observed data and the parametric SC modeling method.

## 5 Conclusions

Since the 1950s, a relatively complete system has been formed for the parametric representation and modeling of target SCs, and it has been widely applied in many engineering fields such as radar imaging, feature extraction and recognition, data compression, and fast simulation. It has become a very powerful tool in target characteristics analysis, radar signal processing, radar signal simulation, and other fields. Its development can be summarized in the following three aspects.

(i) In terms of parametric representation, the form of the model has become more and more complex, and the applicable range and dimension in the frequency, angle, and polarization domains have been continuously widened, that is, from narrowband, small-angle, single-polarization, monostatic models to wideband, wide-angle, fully polarimetric, bistatic models, and from single-domain, low-dimensional models to multi-domain, high-dimensional joint models. With this increase in model complexity, the description of the target has become more and more refined, from a basic description of the SC location and amplitude to a description of the type, size, and other information of the local structures or components of the target.
In fact, the development of the model form benefits the mining and interpretation of fine attribute information about the target, as well as the real-time or quasi-real-time simulation of the target signal.

(ii) In the aspect of model parameter determination, the increase in model complexity poses a great challenge to parameter determination methods. To overcome this problem, on the one hand, more advanced signal processing technologies are introduced or developed for parameter estimation, such as wavelet analysis, sparse representation, compressed sensing, dictionary learning, regularization, clustering analysis, and deep learning. On the other hand, the mechanism-knowledge-based SC representation is combined with computational electromagnetics techniques to realize efficient and accurate forward construction of parametric models.

(iii) In terms of model application, more and more fields have become involved, from super-resolution imaging and target feature extraction and recognition to data/signal compression and rapid generation, data interpolation and extrapolation, mechanism or scattering source diagnosis and separation, and 3D geometric reconstruction and visualization. Parametric models can support digital scenario simulation, live, virtual, and constructive simulation, radar performance testing and assessment, virtual proving grounds, target recognition and interpretation, and the generation of large sample data sets for training intelligent algorithms.

However, current research on parametric modeling of target electromagnetic scattering still faces many challenges and new demands, mainly as follows.

(i) Most current research focuses on single targets that are perfectly conducting and smooth; for targets with rough surfaces, coated surfaces, absorbing structures, and other complex materials, as well as group targets such as unmanned aerial vehicle (UAV) swarms, parametric representation forms and modeling methods are still lacking.

(ii) Research on the parametric representation and modeling of target-background composite scattering, land/sea background scattering, and passive jamming such as chaff clouds is insufficient.

(iii) Existing work mainly addresses the monostatic, far-field case; effective modeling methods for bistatic/multistatic configurations are lacking, and near-field parametric modeling has not yet been carried out.

(iv) The accuracy and efficiency of existing wide-angle parametric modeling methods are limited and still cannot meet actual engineering requirements.

(v) Previous parametric modeling work has mostly been limited to the static case, without considering the dynamic case. However, an actual target (including one on a dynamic sea surface, in wind, etc.) is usually in motion, so it is necessary to study parametric modeling of dynamic SCs.

(vi) Under radar observation, how can fine target information such as geometric shape, material parameters, surface roughness, and motion parameters be measured rapidly? Indirect inversion of these target parameters based on the parametric SC model of the actual target is a feasible approach and is worth studying.
(vii) The practical application of parametric models is still insufficient: publicly reported research mainly focuses on super-resolution imaging and feature extraction and recognition, while fields such as data compression, fast simulation, signature control, and characteristics measurement have not been fully explored.

(viii) How to combine advanced artificial intelligence (AI) technology to improve the adaptability and efficiency of parametric modeling is also an issue of considerable current interest.

In view of the above unsolved and challenging problems, the authors hope that this review, by providing the necessary literature and some guidance on directions, will start a round of lively discussion and encourage more interested scholars to join this line of research.

## References

* [1] HUANG P K, YIN H C, XU X J. Radar target characteristics. Beijing: Publishing House of Electronics Industry, 2004. (in Chinese) * [2] GUO K Y, YIN H C, SHENG X Q. Research on scattering center modeling for radar target. Chinese Journal of Radio Science, 2020, 35(1): 106-115. (in Chinese) * [3] YIN H C, GUO K Y. Hot-topics and difficult problems in the research field of electromagnetic scattering characteristics of targets. Chinese Journal of Radio Science, 2020, 35(1): 128-134. (in Chinese) * [4] HURST M P, MITTRA R. Scattering center analysis via Prony's method. IEEE Trans. on Antennas and Propagation, 1987, 35(8): 986-988. * [5] CARRIERE R, MOSES R L. High-resolution parametric modeling of canonical radar scatterers with application to radar target identification. Proc. of the IEEE International Conference on Systems Engineering, 1991. DOI: 10.1109/ICSYSE.1991.161070. * [6] POTTER L C, CHIANG D M, CARRIERE R. A GTD-based parametric model for radar scattering. IEEE Trans. on Antennas and Propagation, 1995, 43(10): 1058-1067. * [7] XING X Y, YAN H, YIN H C, et al. Analysis of frequency-dependent characteristics of mirror-mirror coupled scattering centers. Guidance and Fuze, 2014, 35(2): 39-43. (in Chinese) * [8] WEN G J, ZHU G Q, YIN H C, et al. SAR ATR based on 3D electromagnetic scattering model. Journal of Radars, 2017, 6(2): 115-135. (in Chinese) * [9] YAN H, LI S, LI H M, et al. Monostatic GTD model for doubly scattering due to specular reflections or edge diffractions. Proc. of the IEEE International Conference on Computational Electromagnetics, 2018. DOI: 10.1109/COMPEM.2018.8496539. * [10] YAN H, ZHANG L, LU J W, et al. Frequency dependent factor expression of GTD scattering center model for arbitrary multiple scattering mechanism. Journal of Radars, 2021, 10(3): 370-381. (in Chinese) * [11] HUA Y. Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Trans. on Signal Processing, 1992, 40(9): 2267-2280. * [12] SACCHINI J J, STEEDLY W M, MOSES R L. Two-dimensional Prony modeling and parameter estimation. IEEE Trans. on Signal Processing, 1993, 41(11): 3127-3137. * [13] POTTER L C, MOSES R L. Attributed scattering centers for SAR ATR. IEEE Trans. on Image Processing, 1997, 6(1): 79-91. * [14] GERRY M J, POTTER L C, GUPTA I J, et al. A parametric model for synthetic aperture radar measurements. IEEE Trans. on Antennas and Propagation, 1999, 47(7): 1179-1188. * [15] AI F Z, ZHOU J X, HU L, et al. The parametric model of non-uniformly distributed scattering centers. Proc. of the IET International Conference on Radar Systems, 2012. DOI: 10.1049/cp.2012.1712. * [16] FENG A Q, GUO K Y, SHENG X Q.
Modification and parameter estimation of attributed scattering center model for flat-based warhead without wings. Transactions of Beijing Institute of Technology, 2015, 35(9): 961-967. (in Chinese) * [17] LI Z H, JIN K, XU B, et al. An improved attributed scattering model optimized by incremental sparse Bayesian learning. IEEE Trans. on Geoscience and Remote Sensing, 2016, 54(5): 2973-2987. * [18] JACKSON J A, MOSES R L. Feature extraction algorithm for 3D scene modeling and visualization using monostatic SAR. Proc. of the SPIE, Algorithms for Synthetic Aperture Radar Imagery XIII, 2006. DOI: 10.1117/12.666558. * [19] JACKSON J A, RIGLING B D, MOSES R L. Canonical scattering feature models for 3D and bistatic SAR. IEEE Trans. on Aerospace and Electronic Systems, 2010, 46(2): 525-541. * [20] LU J W, ZHANG Y J, YAN H, et al. Global scattering center representation of target wide-angle single reflection/diffraction mechanisms based on the multiple manifold concept. Electronics, 2022, 11(24): 4209-4228. * [21] FULLER D F. Phase history decomposition for efficient scatterer classification in SAR imagery. Wright Patterson AFB: Air Force Institute of Technology, 2011. * [22] XU S K, LIU J H, WEI X Z, et al. Parameter estimation of 3D scattering centers based on CP-GTD model. Acta Electronics Sinica, 2011, 39(12): 2755-2760. (in Chinese) * [23] SAVILLE M A, JACKSON J A, FULLER D F. Rethinking vehicle classification with wide-angle polarimetric SAR. IEEE Aerospace and Electronic Systems Magazine, 2014, 29(1): 41-49. * [24] DUAN J, ZHANG L, XING M D, et al. Polarimetric target decomposition based on attributed scattering center model for synthetic aperture radar targets. IEEE Geoscience and Remote Sensing Letters, 2014, 11(12): 2095-2099. * [25] LI Z H, XU B, YANG J. Polarimetric inverse scattering via incremental sparse Bayesian multitask learning. IEEE Geoscience and Remote Sensing Letters, 2016, 13(5): 691-695. * [26] YAN H, YIN H C, LI S, et al. 3D rotation representation of multiple reflections and parametric model for bistatic scattering from arbitrary multiple structure. IEEE Trans. on Antennas and Propagation, 2019, 67(7): 4777-4791. * [27] XING X Y, YAN H, YIN H C, et al. A bistatic attributed scattering center model for SAR ATR. IEEE Trans. on Antennas and Propagation, 2021, 69(11): 7855-7866. * [28] QU Q Y, GUO K Y, SHENG X Q. An accurate bistatic scattering center model for extended cone-shaped targets. IEEE Trans. on Antennas and Propagation, 2014, 62(10): 5209-5218. * [29] MOORE J, LING H. Time-frequency analysis of the scattering phenomenology in finite dielectric gratings. Microwave and Optical Technology Letters, 1993, 6(10): 597-600. * [30] TRINTINALIA L C, LING H. Joint time-frequency ISAR using adaptive processing. IEEE Trans. on Antennas and Propagation, 1997, 45(2): 221-227. * [31] TRINTINALIA L C, BHALLA R, LING H. Scattering center parameterization of wide-angle backscattered data using adaptive Gaussian representation. IEEE Trans. on Antennas and Propagation, 1997, 45(11): 1664-1668. * [32] CHEN V C, LING H. Time-frequency transforms for radar imaging and signal analysis. Norwood: Artech House, 2001. * [33] LI J, LING H. Application of adaptive chirplet representation for ISAR feature extraction from targets with rotating parts. IEE Proceedings-Radar, Sonar and Navigation, 2003, 150(4): 284-291. * [34] GUO K Y, QU Q Y, SHENG X Q. Geometry reconstruction based on attributes of scattering centers by using time-frequency representations. IEEE Trans. 
on Antennas and Propagation, 2016, 64(2): 708-720. * [35] MENSA D L. High resolution radar imaging. Dedham: MA Artech House, 1981. * [36] DOMINEK A, PETERS L, BURNSIDE W. A time domain technique for mechanism extraction. IEEE Trans. on Antennas and Propagation, 1987, 35(3): 305-312. * [37] TSENG N. A very efficient RCS data compression and reconstruction technique. Columbus: The Ohio State University, 1992. * [38] ATLAS R A. Sonar for generalized target description and its similarity to animal echolocation systems. Journal of the Acoustical Society of America, 1976, 59(1): 97-105. * [39] TSAO J, STEINBERG B D. Reduction of sidelobe and speckle artifacts in microwave imaging: the CLEAN technique. IEEE Trans. on Antennas and Propagation, 1988, 36(4): 543-556. * [40] KOETS A, MOSES R, L. Feature extraction using attributed scattering center models on SAR imagery. Proc. of the SPIE-Algorithms for Synthetic Aperture Radar Imagery VI, 1999. DOI: 10.1117/12.357628. * [41] AKYILDIZ Y, MOSES R L. Scattering center model for SAR imagery. Proc. of the Conference on SAR Image Analysis, Modeling and Techniques II, 1999. DOI: 10.1117/12.373151. * [42] DE GRAAF S R. Parametric estimation of complex 2-D sinusoids. Proc. of the 4th Annual ASSP Workshop on Spectrum Estimation and Modeling, 1988. DOI: 10.1109/SPECT.1988.206228. * [43] KIM K T, KIM H T. One-dimensional scattering centre extraction for efficient radar target classification. IEE Proceedings-Radar Sonar and Navigation, 1999, 146(3): 147-158. * [44] BOSE R, FEEDMAN A, STEINBERG B D. Sequence CLEAN: a modified deconvolution technique for microwave images of contiguous targets. IEEE Trans. on Aerospace and Electronic Systems, 2002, 38(1): 89-97. * [45] CHOI I S, KIM H T. Two-dimensional evolutionary programming-based CLEAN. IEEE Trans. on Aerospace and Electronic Systems, 2003, 39(1): 373-382. * [46] MARTORELLA M, ACITO N, BERIZZI F. Statistical CLEAN technique for ISAR imaging. IEEE Trans. on Geoscience and Remote Sensing, 2007, 45(11): 3552-3560. * [47] PEPIN M P, CLARK M P, LI J. On the applicability of 2-D damped exponential models to synthetic aperture radar. Proc. of the International Conference on Acoustics, Speech, and Signal Processing, 1995. DOI: 10.1109/ICASSP.1995.479900. * [48] LI J, STOICA P. Efficient mixed-spectrum estimation with applications to target feature extraction. IEEE Trans. on Signal Processing, 1996, 44(2): 281-295. * [49] LIU Z, LI J. Feature extraction of SAR targets consisting of trihedral and dihedral corner reflectors. IEE Proceedings-Radar, Sonar, and Navigation, 1998, 145(3): 161-172. * [50] CARRIERE R, MOSES R L. Autoregressive moving average modeling of radar target signatures. Proc. of the IEEE National Radar Conference, 1988. DOI: 10.1109/NRC.1988.10962. * [51] STEEDLY W M, MOSES R L. High resolution exponential modeling of fully polarized radar returns. IEEE Trans. on Aerospace and Electronic Systems, 1991, 27(3): 459-469. * [52] WALTON E K. Far-field measurements and maximum entropy analysis of lossy material on a conducting plate. IEEE Trans. on Antennas and Propagation, 1989, 37(8): 1042-1047. * [53] PIERSON JR W E, YING C J, MOSES R L, et al. Accuracy and computational comparisons of TLS-Prony, Burg, and FFT-based scattering center extraction algorithms. Proc. of the SPIE, Automatic Object Recognition III, 1993. DOI: 10.1117/12.160587. * [54] CARRIERE R, MOSES R L. High resolution radar target modeling using a modified Prony estimator. IEEE Trans. on Antennas and Propagation, 1992, 40(1): 13-18. 
* [55] YAMADA H, OHMIYA M, OGAWA Y, et al. Superresolution techniques for time-domain measurements with a network analyzer. IEEE Trans. on Antennas and Propagation, 1991, 39(2): 177-183. * [56] MOGHADDAR A, OGAWA Y, WALTON E K. Estimating the time-delay and frequency decay parameter of scattering components using a modified MUSIC algorithm. IEEE Trans. on Antennas and Propagation, 1994, 42(10): 1412-1418. * [57] ODENDAAL J W, BARNARD E, PISTORIUS C W I. Two-dimensional super resolution radar imaging using the MUSIC algorithm. IEEE Trans. on Antennas and Propagation, 1994, 42(10): 1386-1391. * [58] KIM K, KIM S, KIM H. Two-dimensional ISAR imaging using full polarization and super-resolution processing techniques. IEE Proceedings-Radar, Sonar and Navigation, 1998, 145(4): 240-246. * [59] QUINQUIS A, DEMETER S, RADOI E. Enhancing the resolution of radar range profiles using a class of subspace eigenanalysis based techniques. Digital Signal Processing, 2001, 11(4): 288-303. * [60] QUINQUIS A, RADOI E, TOTIR F C. Some radar imagery results using superresolution techniques. IEEE Trans. on Antennas and Propagation, 2004, 52(5): 1230-1244. * [61] ZHANG Y H, GU X. Effects of amplitude and phase errors on 2D MUSIC and 2D ESPRIT algorithms in ISAR imaging. Proc. of the Asian-Pacific Conference on Synthetic Aperture Radar, 2009. DOI: 10.1109/APSAR.2009.5374274. * [62] JIN D X, FANG D G, FU J S. Application of PR cosine-modulated filter bank to multichannel superresolution radar imaging. Proc. of the International Conference on Information, Communications and Signal Processing, 1997. DOI: 10.1109/ICICS.1997.647143. * [63] DOBRE O A, RADOI E. Advances in subspace eigenanalysis based algorithms: from 1D toward 3D superresolution techniques. Proc. of the 5th International Conference on Telecommunications in Modern Satellite, Cable and Broadcasting Service, 2001. DOI: 10.1109/TELSKS.2001.955836. * [64] BURROWS M L. Two-dimensional esprit with tracking for radar imaging and feature extraction. IEEE Trans. on Antennas and Propagation, 2004, 52(2): 524-532. * [65] DAI D H, WANG X S, XING S Q, et al. Full-polarization scattering center extraction and parameter estimation: P-ESPRIT algorithm. Journal of Electronics and Information Technology, 2008, 30(8): 1963-1967. (in Chinese) * [66] LI S Y, SUN H J, LV X, et al. Near-field scattering centers estimation using a far-field 3D ESPRIT type method. Signal Processing, 2012, 92(10): 2519-2524. * [67] WANG X, ZHANG M, ZHAO J. Efficient cross-range scaling method via two-dimensional unitary ESPRIT scattering center extraction algorithm. IEEE Geoscience and Remote Sensing Letters, 2015, 12(5): 928-932. * [68] ZHAO J, ZHANG M, WANG X, et al. Three-dimensional super resolution ISAR imaging based on 2D unitary ESPRIT scattering centre extraction technique. IET Radar, Sonar and Navigation, 2017, 11(1): 98-106. * [69] HUA Y, BAQAI F A, ZHU Y, et al. Imaging of point scatterers from step frequency ISAR data. IEEE Trans. on Aerospace and Electronic Systems, 1993, 29(1): 195-205. * [70] MCCLURE M, QIU R C, CARIN L. On the superresolution identification of observables from swept-frequency scattering data. IEEE Trans. on Antennas and Propagation, 1997, 45(4): 631-641. * [71] FULLER D F, SAVILLE M A. The spectrum parted linked image test (SPLIT) algorithm for estimating the frequency dependence of scattering center amplitudes. SPIE Proceedings, Algorithms. for Synthetic Aperture Radar Imagery XVI, 2009. DOI: doi:org/10.1117/12.819329. * [72] FULLER D F, SAVILLE M A. 
Classification of canonical scattering through sub-band analysis. Proc. of the SPIE, Algorithms for Synthetic Aperture Radar Imagery XVI, 2010. DOI: 10.1117/12.850558. * [73] CLARK M P, SCHARF L L. Two-dimensional modal analysis based on maximum likelihood. IEEE Trans. on Signal Processing, 1994, 42(6): 1443-1452. * [74] YING C J, CHANG H C, MOSES R L, et al. Complex SAR phase history modeling using two dimensional parametric estimation techniques. Proc. of the SPIE, Algorithms for Synthetic Aperture Radar Imagery III, 1996. DOI: 10.1117/12.242046. * [75] LIJ, STOICA P, ZHANG D. An efficient algorithm for two-dimensional frequency estimation. Multidimensional Systems and Signal Processing, 1996, 7(2): 151-178. * [76] TU M W, GUPTA I J, WALTON E K. Application of maximum likelihood estimation to radar imaging. IEEE Trans. on Antennas and Propagation, 1997, 45(1): 20-27. * [77] SHI Z G, ZHOU J X, ZHAO H Z, et al. A GTD scattering center model parameter estimation method based on CPSO. Acta Electronica Sinica, 2007, 35(6): 1102-1107. (in Chinese) * [78] WANG X, DONG C Z, YIN H C. Parameter estimation of GTD model combining RELAX and PSO. Systems Engineering and Electronics, 2011, 33(6): 1221-1225. (in Chinese) * [79] YANG Z L, FANG D G, SHENG W X, et al. Frequency extrapolation by genetic algorithm based on GTD model for radar cross section. Proc. of the International Symposium on Antennas, Propagation and EM Theory, 2000. DOI: 10.1109/ISAPE.2000.894849. * [80] CETIN M, KARL W C. Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. on Image Processing, 2001, 10(4): 623-631. * [81] ZWEIG G. Super-resolution Fourier transforms by optimization and ISAR imaging. IEE Proceedings-Radar, Sonar and Navigation, 2003, 150: 247-253. * [82] CETIN M, LANTERMAN A D. Region-enhanced passive radar imaging. IEE Proceedings-Radar, Sonar and Navigation, 2005, 152: 185-194. * [83] WANG X L, WANG Z M. A super-resolution SAR imaging method based on basis pursuit. Proc. of the International Symposium on Multi-Spectral Image Processing and Pattern Recognition, 2005. DOI: 10.1117/12.655038. * [84] WANG Z M, WANG W W W. Fast and adaptive method for SAR super resolution imaging based on point scattering model and optimal basis selection. IEEE Trans. on Image Processing, 2009, 18(7): 1477-1486. * [85] WANG X L, WANG C L, WANG Z M. Super-resolution processing of SAR images by match pursuit method based on Fourier dictionary. Proc. of the International Conference on Computer Graphics, Imaging and Visualisation, 2007. DOI: 10.1109/CISP.2010.5646695. * [86] WU M, XING M D, ZHANG L, et al. Super-resolution imaging algorithm based on attributed scattering center model. Proc. of the International Conference on Signal and Information Processing, 2014. DOI: 10.1109/chimasip.2014.688924. * [87] WANG Y, JJANG Y, WANG Y H, et al. Scattering center estimation of HRRP via atomic norm minimization. Proc. of the IEEE Radar Conference, 2017. DOI: 10.1109/RADAR.2017.7944185. * [88] LIU H C, JIU B, LIU H W, et al. Super resolution ISAR imaging based on sparse Bayesian learning. IEEE Trans. on Geoscience and Remote Sensing, 2014, 52(8): 5005-5013. * [89] XING X Y, YAN H, YIN H C, et al. A convolutional neural network for parameter estimation of the Bi-GTD model. IEEE Trans. on Antennas and Propagation, 2023, 71(6): 5378-5391. * [90] CHEN V C, LING H. Joint time-frequency analysis for radar signal and image processing. IEEE Signal Processing Magazine, 1999, 16(2): 81-93. 
* [91] MOORE J, TRINITNALIA L, LING H, et al. Super-resolved time-frequency processing of wideband radar echo using ESPRIT. Microwave and Optical Technology Letters, 1995, 9(1): 17-19. * [92] MOORE J, LING H. Super-resolved time-frequency analysis of wideband backscattered data. IEEE Trans. on Antennas and Propagation, 1997, 43(6): 221-227. * [93] WHITELONIS N. Radar signature analysis using a joint time-frequency distribution based on compressed sensing. IEEE Trans. on Antennas and Propagation, 2014, 62(2): 755-763. * [94] BHALLA R, LING H. ISAR image simulation of targets with moving parts using the shooting and bouncing ray technique. Proc. of the Antennas and Propagation Society International Symposium, 1994. * [95] BHALLA R, LING H. Fast inverse synthetic aperture radar image simulation of complex targets using ray shooting. Proc. of the IEEE International Conference on Image processing, 1994. DOI: 10.1109/ICIP.1994.413356. * [96] BHALLA R, LING H. Image domain ray tube integration formula for the shooting and bouncing ray technique. Radio Science, 1995, 30(5): 1435-1446. * [97] BHALLA R, LING H. A fast algorithm for signature prediction and image formation using the shooting and bouncing ray technique. IEEE Trans. on Antennas and Propagation, 1995, 43(7): 727-731. * [98] BHALLA R, LING H. 3D scattering center extraction from Xpatch. Proc. of the IEEE Antennas and Propagation Society International Symposium, 1995. DOI: 10.1109/APS.1995.530962. * [99] BHALLA R, LING H. Three-dimensional scattering center extraction using the shooting and bouncing ray technique. IEEE Trans. on Antennas and Propagation, 1996, 44(11): 1445-1453. * [100] BHALLA R. Fast algorithms for signature prediction, image formation and scattering center extraction using the shooting and bouncing ray technique. Austin: The University of Texas at Austin, 1996. * [101] BHALLA R, MOORE J, LING H. A global scattering center representation of complex targets using the shooting and bouncing ray technique. IEEE Trans. on Antennas and Propagation, 1997, 45(12): 1850-1856. * [102] BHALLA R, LING H, MOORE J, et al. 3D scattering center representation of complex targets using the shooting and bouncing ray technique: a review. IEEE Antennas and Propagation Magazine, 1998, 40(5): 30-39. * [103] WANG S Y, JENG S K. A deterministic method for generating a scattering-center model to reconstruct the RCS pattern of complex radar targets. IEEE Trans. on Electromagnetic Compatibility, 1997, 39(4): 315-323. * [104] SULLIVAN T D. A technique of convolving unequally spaced samples using fast Fourier transforms. Albuquerque: Sandia National Laboratories, 1990. * [105] YUN D J, LEE J I, BAE K U, et al. Precise scattering center extraction for ISAR image using the shooting and bouncing ray. Proc. of the IEEE International Symposium on Antennas and Propagation, 2016. [https://ieeexplore.ieee.org/document/7821439](https://ieeexplore.ieee.org/document/7821439). * [106] YUN D J, LEE J I, BAE K U, et al. Improvement in computation time of 3-D scattering center extraction using the shooting and bouncing ray technique. IEEE Trans. on Antennas and Propagation, 2017, 65(8): 4191-4199. * [107] BUNGER R. Fast imaging and scattering center model extraction with full-wave computational electromagnetics formations. Progress in Electromagnetics Research M, 2019, 81: 21-30. * [108] BUDENDICK H, EIBERT T F. Application of a fast equivalent currents based algorithm for scattering center visualization of vehicles. Proc. 
of the IEEE Antennas and Propagation Society International Symposium, 2010. DOI: 10.1109/APS.2010.5561056. * [109] BUDENDICK H, EIBERT T F. Bistatic image formation from shooting and bouncing rays simulated current distributions. Progress in Electromagnetics Research, 2011, 119: 1-18. * [110] HE Y, HE S Y, ZHANG Y H, et al. A forward approach to establish parametric scattering center models for known complex radar targets applied to SAR ATR. IEEE Trans. on Antennas and Propagation, 2014, 62(12): 6192-6205. * [111] HE Y, ZHU G Q, HE S Y, et al. Range profile analysis of complex targets based on ray clustering. Proc. of the IEEE International Conference on Computational Electromagnetics, 2015. DOI: 10.1109/COMPEM.2015.7052605. * [112] ZHANG L, ZHU G Q, HE S Y. Research on the position correction of component-level parametric scattering center models established in a forward approach. Proc. of the Progress in Electromagnetic Research Symposium, 2016. DOI: 10.1109/PIERS.2016.7735024. * [113] LIU J, HE S Y, ZHANG Y H, et al. Scattering centers diagnosis and parameters modification of the complex targets' geometry model based on the limited observed data. Proc. of the Progress in Electromagnetic Research Symposium, 2016. DOI: 10.1109/PIERS.2016.7735170. * [114] ZHANG L, HE S Y, ZHU G Q. Forward calculation of three-dimensional position of scattering center from double scattering. Proc. of the National Conference on Microwave Millimeter Waves, 2017, 2: 33-36. (in Chinese) * [115] ZHANG L, HE S Y, ZHU G Q, et al. Forward derivation and analysis for 3D scattering center position of radar target. Journal of Electronics and Information Technology, 2018, 40(12): 2854-2860. (in Chinese) * [116] CHA W, ZHANG L, HE S Y, et al. Analysis of forward modeling of complex target scattering center. Journal of Microwaves, 2018, 34(2): 20-24. (in Chinese) * [117] ZHANG L L, ZHANG Y H, HE S Y, et al. Forward method for parametric modeling of scattering centers of complex medium targets. Proc. of the National Conference on Microwave and Millimeter Waves, 2017, 2: 515-518. (in Chinese) * [118] QU Q Y, GUO K Y, MU H J, et al. Miss distance measurement based on stable scattering centers of extended targets. Systems Engineering and Electronics, 2013, 35(4): 692-699. (in Chinese) * [119] GUO K Y, LI Q F, SHENG X Q, et al. Sliding scattering center model for extended streamlined targets. Progress in Electromagnetics Research, 2013, 139(3): 499-516. * [120] QU Q Y, GUO K Y, SHENG X Q. Applications of sliding scattering centers in feature extraction. Proc. of the IEEE International Conference on Computational Electromagnetics, 2015. DOI: 10.1109/COMPEM.2015.7052628. * [121] QU Q Y, GUO K Y, SHENG X Q. Scattering centers induced by creeping waves on streamlined cone-shaped targets in bistatic mode. IEEE Antennas and Wireless Propagation Letters, 2015, 14(7): 462-465. * [122] QU Q Y, GUO K Y, SHENG X Q, et al. On scattering centers of cone-shaped targets in bistatic mode. Proc. of the IEEE International Symposium on Antennas and Propagation, 2015. DOI: 10.1109/APS.2015.7304635. * [123] GUO K Y, NIU T Y, QU Q Y, et al. Study on time-frequency image characteristics of scattering center. Journal of Electronics and Information Technology, 2016, 38(2): 478-485. (in Chinese) * [124] GUO K Y, QU Q Y, FENG A, et al. Miss distance estimation based on scattering center model using time-frequency analysis. IEEE Antennas and Wireless Propagation Letters, 2016, 15: 1012-1015. * [125] LI Q F, GUO K Y, TANG B, et al. 
Scattering center modelling based on compressed sensing principle from undersampling scattering field data. Proc. of the IEEE International Geoscience and Remote Sensing Symposium, 2016. DOI: 10.1109/IGARSS.2016.7729690. * [126] ZHAO X, GUO K Y, SHENG X Q. Scattering center model for edge diffraction based on EEC formula. Proc. of the Progress in Electromagnetic Research Symposium, 2016. DOI: 10.1109/PIERS.2016.7734321. * [127] LI Q F, GUO K Y, SHENG X Q. Angular glint simulation based on scattering center model. Proc of the IEEE Geoscience and Remote Sensing Symposium, 2016. DOI: 10.1109/IGARSS.2016.7729683. * [128] GUO K Y, NIU T Y, SHENG X Q. Influence of multiple scattering centers with various attributes on radar angular measurements. Journal of Electronics and Information Technology, 2017, 39(9): 2238-2244. (in Chinese) * [129] LI Q F, GUO K Y, WANG J, et al. Scattering center modeling using adaptive segmental compressive sampling. Journal of Transactions of Beijing Institute of Technology, 2017, 26(4): 484-493. (in Chinese) * [130] GUO K Y, WANG J X, SHENG X Q. Modification of complex target scattering center modeling. Systems Engineering and Electronics, 2018, 40(8): 1679-1685. (in Chinese) * [131] GUO K Y, NIU T Y, SHENG X Q. Location reconstructions of attributed scattering centers by monopulse radar. IET Radar, Sonar & Navigation. 2018, 12(9): 1005-1011. * [132] GUO K Y, HAN X X, SHENG X Q. Scattering center models of backscattering waves by dielectric spheroid objects. Optical Express, 2018, 26(4): 5060-5074. * [133] YAN H, CHEN Y, LI S, et al. A fast algorithm for establishing 3D scattering center model for ship targets over sea surface using the shooting and bouncing ray technique. Journal of Radar, 2019, 8(1): 107-116. (in Chinese) * [134] LU J W, YAN H, YIN H C, et al. Edge diffraction correction for 3D scattering center modeling based on the shooting and bouncing ray technique. Journal of Xidian University, 2021, 48(2): 117-124, 189. (in Chinese) * [135] LU J W, YAN H, ZHANG L, et al. 3D-GTD model construction method using the shooting and bouncing ray technique. Systems Engineering and Electronics, 2021, 43(8): 2028-2036. (in Chinese) * [136] ZHANG L, YAN H, LU J W, et al. An improved SBR-based 3-D scattering center modeling algorithm for wideband RCS reconstruction. Proc. of the International Applied Computational Electromagnetics Society Symposium, 2021. DOI: 10.23919/ACES-China52398.2021.9581583. * [137] JACKSON J A, MOSES R L. 3D feature estimation for sparse, nonlinear bistatic SAR apertures. Proc. of the IEEE Radar Conference, 2010. DOI: 10.1109/RADAR.2010.5494608. * [138] JACKSON J A, MOSES R L. Synthetic aperture radar 3D feature extraction for arbitrary flight paths. IEEE Trans. on Aerospace and Electronic Systems, 2012, 48(3): 2065-2084. * [139] HAMMOND G B. Target classification of canonical scatterers using classical estimation and dictionary based techniques. Wright Patterson AFB: Air Force Institute of Technology, 2012. * [140] CROSSER M P. Improved dictionary formation and search for synthetic aperture radar canonical shape feature extraction. Wright Patterson AFB: Air Force Institute of Technology, 2014. * [141] RADEMACHER R W. Bayesian methods and confidence intervals for automatic target recognition of SAR canonical shapes. Wright Patterson AFB: Air Force Institute of Technology, 2014. * [142] JONSSON R, GENELL A, LOSAUS D, et al. Scattering center parameter estimation using a polynomial model for the amplitude aspect dependence. 
The parametric scattering center model of a radar target has the advantages of simplicity, sparsity, and direct relevance to the underlying scattering mechanisms, which has led to its wide application in fields such as radar data compression and rapid data generation, radar imaging, and feature extraction and recognition. This paper summarizes and analyzes the state of research, development trends, and open problems in scattering center (SC) parametric modeling from three aspects: parametric representation, methods for determining model parameters, and applications. Keywords: radar target, scattering center (SC), parametric model, radar target imaging, radar target simulation, radar target recognition. DOI: 10.23919/JSEE.2024.000032
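As background to the parametric representations surveyed in that work, a widely cited example is the geometrical theory of diffraction (GTD) based scattering center model, in which the backscattered frequency response of a target is written as a coherent sum of point-like scattering centers. The expression below is the commonly used textbook form and is given here for orientation only; it is not quoted from the paper itself.

$$E(f)=\sum_{i=1}^{N} A_i\left(\frac{jf}{f_c}\right)^{\alpha_i}\exp\left(-j\,\frac{4\pi f\,r_i}{c}\right),\qquad \alpha_i\in\left\{-1,-\tfrac{1}{2},0,\tfrac{1}{2},1\right\}$$

Here $A_i$, $\alpha_i$, and $r_i$ denote the complex amplitude, frequency-dependence exponent, and down-range position of the $i$th scattering center, $f_c$ is the radar center frequency, and $c$ is the speed of light.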
# The SPECCHIO Spectral Information System Andreas Hueni\\({}^{\\textcircled{\\tiny\\textregistered}}\\), Laure A. Chisholm\\({}^{\\textcircled{\\tiny\\textregistered}}\\), Cindy Ong\\({}^{\\textcircled{\\tiny\\textregistered}}\\), Tim J. Malthus\\({}^{\\textcircled{\\tiny\\textregistered}}\\), Mathew Wyatt, Simon A. Trim, Michael E. Schaepman\\({}^{\\textcircled{\\tiny\\textregistered}}\\), and Medhavy Thankappan Manuscript received April 29, 2020; revised July 28, 2020 and August 23, 2020; accepted September 14, 2020. Date of publication September 21, 2020; date of current version October 2, 2020. This work was supported in part by the COST Action Eurospec, in part by the Australian National Data Service project DC-10; in part by the APEX Airborne Prism Experiment, in part by the Swiss Commission on Remote Sensing, and in part by MeIEOC projects through the EMRP and EMPIR Programmes co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. The work of Andreas Hueni and Michael E. Schaepman was supported by the University of Zurich Priority Programme on Global Change and Biodiversity. _(Corresponding author: Andreas Hueni., Simon A. Trim, and Michael E. Schaepman are with Remote Sensing Laboratories, University of Zurich, 8057 Zurich, Switzerland (e-mail: [email protected]; [email protected]; [email protected])._ Laurie A. Chisholm is with the Faculty of Science, University of Wollongong, Wollongong, NSW 2522, Australia (e-mail: [email protected]). Cindy Ong is with Mineral Resources, CSIRO, Clayton, VIC 3169, Australia (e-mail: [email protected]). Tim J. Malthus is with Coasts Program, CSIRO Oceans and Atmosphere, Brisbane, QLD 4102, Australia (e-mail: [email protected]). Mathew Wyatt is with the Indian Ocean Marine Research Centre, Australian Institute of Marine Science, Crawley WA, Australia (e-mail: [email protected]). Medhavy Thankappan is with the Environmental Geoscience Division, Geoscience Australia, Canberra, ACT 2609, Australia (e-mail: [email protected]). Digital Object Identifier 10.1109/ISTARS.2020.3025117 ## I Introduction Spectral signatures, acquired by spectroradiometers measuring emitted or reflected electromagnetic radiation, are used for a wide range of Earth System science purposes [1]. The quality and interpretation of air- or satellite-borne, remotely sensed spectral signatures relies essentially on calibration [2], validation, comparisons, and models [3, 4], all of which, in turn, often rely on _in situ_ spectral data. Consequently, field and laboratory spectroscopy are indispensable tools to provide the required reference and training data, but they also represent a research method in their own right [5]. The value of spectral data is strongly linked to information about the measurement context [6], i.e., the description of the target and its sampling environment at the time of measurement. Proximal sensing methods offer generally a higher degree of control over explanatory variables and the statistical sampling used in the experiment than airborne or space-based acquisitions. The target and its extent, the time of day and the illumination conditions may be chosen more freely (and repeatedly), while the measurement context can be defined by auxiliary _in situ_ measurements and protocols. 
In many cases, datasets obtained in such a manner are viewed to be of veridical, i.e., truthful, nature, colloquially referred to as \"ground truth.\" This may be linked to the belief that proximity and perceived control of the sampling process result in correct data, with many newer users of field spectroscopy underestimating the involved complexities [7]. It is however a fact that all measured data are uncertain and thus there may be no such thing as \"ground truth\" [8]. Furthermore, comparisons with datasets acquired by other sensors at different spatial resolutions, instantaneous fields-of-view, and viewing/illumination angles are hampered by scaling and BRDF issues [3, 8, 9, 10]. This once more corroborates the need for precise documentation of measurement conditions [7], in particular if datasets are to be made fit for long-term use and applicable for a variety of purposes by a wider community. We argue here that the term \"ground truth\" refers to a more advanced set of metadata available of the target measured _in situ_, as well as more intrinsic knowledge of the target, rather than to a superior physical measurement on the ground. The technical solution to enable such long-term usability and data sharing is the spectral database [11, 12, 13], acting as a repository for spectral data and their metadata, where the metadata provide the alluded measurement context, essentially giving meaning to the data [14]. A number of spectral databases have appeared over the past decade since the second version of the SPECCHO spectral database system [11] was designed and implemented. Examples of such systems are the Ahvaz Spectral Geodatabase Platform [15], a workflow for spectroradiometric field surveys including a spectral database [16], a landcover database in Egypt [17], a multispectral material signature database [18], a spectral library for outcrop characterization [19], and the generic EcoSIS solution [20], amongst many others. All of these works are based to a large extent on the metadata schemas introduced by SPECCHO versions 1 and 2 [11, 13], but add their individual flavours to accomplish applicationspecific services, such as geographic information system, spectral processing or analysis functionality. This indicates a paradigm shift towards more informed systems, which we term Spectral Information Systems (SIS) and define as follows: _SIS are systems for building and providing spectral information, utilizing spectral databases as repositories for spectral data and associated metadata._ SIS support the spectroscopy data life cycle [21] by giving metadata-specific guidance during data acquisition, providing automated data ingestion, functions for metadata augmentation (i.e., annotating spectral data with metadata), and spectral data and metadata processing, thus enabling the information retrieval to build knowledge and new conclusions leading to improved experimental planning (see Fig. 1). Information is inferred from data [22] by both metadata augmentation and data processing. Our experiences with designing and using SPECCHIO V2 as well as the review of the implementations of other spectral libraries alluded to above have helped to shape the requirements for the next generation of spectral information systems. This requirement analysis was significantly supported by the Australian National Data Service (ANDS) data capture project DC-10, aimed at establishing an Australian spectral database system. The most essential findings are summarized as follows. 1. 
Metadata requirements are a function of the different user groups and their application domain, with each group tending to use a set of general meta attributes plus domain specific ones [23, 24]. 2. Native sensor file format support by the data ingestion process is an ongoing task as industry continues to develop spectral sensors to meet scientific requirements, e.g., the measurement of fluorescence [25]. The SIS must allow generic spectral data storage, i.e., provide multi-instrument support. Essentially, while the storage is generic, the file reading is sensor or company specific. 3. Sharing data within research groups requires a more detailed management of user rights to allow collaborative research. 4. The demand for increased visibility of data requires the feeding of data discovery portals, where a portal is a website that gives users unified access to content [26, 27]. 5. Monolithic systems with built-in scientific processing can never provide the analytical flexibility required by the broad range of disciplines and in particular by the per se individualistic nature of scientists. 6. The scalability of the SIS with number of spectra and related metadata quickly becomes a relevant issue with the deployment of automated sensors [4] and the aggregation of data on a continental scale, such as in the framework of Digital Earth Australia [28]. 7. Access to the system should include a web browser-based option to enable easy, interactive data exploration without the need of installing specialized software. Version 3 of SPECCHIO was designed to further meet these requirements by offering a flexible metadata system, enhancing the support of new sensors by automating the sensor definition in the database, supporting higher-level languages to allow scientists writing their own algorithms, and redesigning the storage system to enable scalability. Furthermore, the system was updated to a modern client-server architecture with increased system security to accommodate the hosting constraints of many institutions. This article introduces the concepts chosen for the implementation of SPECCHIO V3, documents the achieved results in terms of system capability and availability, demonstrates the system use in a case study, presents lessons learned, and discusses future system capabilities. It furthermore provides the required knowledge background for SPECCHIO end-users to customize their individual SPECCHIO instances by leveraging in particular the new, flexible, and powerful Entity-Attribute-Value based metadata storage, and optimize their system usage. ## II Concepts The concepts described in this section address the latest requirements for spectral information systems and reflect the solutions chosen for the SPECCHIO V3 system. ### _SPECCHIO V3 System Architecture_ SPECCHIO V3 is based on a client-server-based architecture (see Fig. 2) using the open-source Glassfish application server1 and the open-source Jersey RESTful web services framework.2 All communication of the SPECCHIO Java client with the spectral database on the server side including user authentication is handled via the Glassfish server in the SPECCHIO application service, effectively shielding the database from direct user access via Structured Query Language (SQL) calls. Java objects are passed between client and server encoded as XML via Hypertext Transfer Protocol Secure (HTTPS), but communication may also use the unencrypted Hypertext Transfer Protocol (HTTP). Fig. 
1: Spectroscopy data life cycle, supported by a spectral information system. Higher-level languages also rely on the SPECCHIO Java client for communication with the SPECCHIO application service. The web browser interface is supported through the Glassfish server by the SPECCHIO web service. This web service itself uses the SPECCHIO application programming interface (API) to communicate with the SPECCHIO application service. ### _Support of Higher-Level Processing Languages_ The number of applications of spectroscopy is enormous [29], [30] and consequently an ever-growing plethora of analysis techniques exist. Algorithms to process spectral data are developed by scientists using various programming languages and must invariably deal with spectral data selection, input and output. These basic functions are made available via the SPECCHIO API and thus allow the development of code that can operate on a common data pool, namely the SPECCHIO database run by the MySQL Relational Database Management System (see Fig. 3). The SPECCHIO API provides a large number of functions to interact with the SPECCHIO database server, which are as follows: 1. spectral data selection via metadata space queries; 2. grouping of selected spectral data by metadata attributes; 3. extraction of metadata vectors for a given spectral dataset; 4. insert and update of spectral data and metadata; and 5. linking of new spectral information with existing metadata. The API allows the writing of code that supports the data life cycle stages of data ingestion, augmentation, information building, and retrieval [31]. A simple example of information building for a given set of spectra would be the determination of solar angles based on the UTC and latitude/longitude metadata parameters, which in turn would contribute to metadata augmentation. Thus, the generic SPECCHIO API supports the implementation of application or domain specific workflows by end users. ### _Flexible Metadata Storage and Redundancy Reduction_ Metadata are of prime importance within spectral information systems as they define the context of the spectral data and enable their retrieval. There are no metadata standards of spectral data collections yet, although work toward such a goal is underway [23], [32]. It is expected that a standard would define a minimal set of mandatory attributes and allow for optional attributes. The applicability of spectroscopy to many fields, and in particular its ability to estimate bio-geophysical parameters has led to an ever-increasing demand to store application specific metadata. A static, traditional relational database model, such as adopted for SPECCHIO version 2, offers no solution to such dynamic requirements. Hence, the Entity-Attribute-Value (EAV) paradigm [33] was chosen as the new data storage concept. By doing so, we took advantage of our previous experience of the EAV approach in the APEX Calibration and Information System [34], which is used to handle and process laboratory calibration data of the APEX airborne imaging spectroradiometer [35]. Within the EAV approach, attributes are defined in a meta-layer. Entities, i.e., the spectral data, refer to these attributes and actual attribute values are stored in a generic storage container [36]. The SPECCHIO system uses a generic value table that can store attribute values as integer, double, date/time, string, categorical, spatial field, or binary. The default storage field as well as the cardinality per spectrum are part of the attribute definition. 
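To make the entity-attribute-value idea concrete, the following minimal Java sketch shows an attribute defined once in a meta-layer, together with typed metaparameter values that refer to it; the default storage field and the cardinality are part of the attribute definition, as described above. All class, attribute, and value names are hypothetical and do not reproduce the actual SPECCHIO database schema or API.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative EAV sketch: attributes live in a meta-layer, each value row stores an
// attribute id plus one typed payload. Names do not reproduce the SPECCHIO schema.
public class EavSketch {

    // Default storage field of an attribute, mirroring the typed columns of a generic value table.
    enum StorageField { INT_VAL, DOUBLE_VAL, DATETIME_VAL, STRING_VAL, TAXONOMY_ID, SPATIAL_VAL, BINARY_VAL }

    static class Attribute {
        final int id; final String name; final StorageField field; final int cardinality;
        Attribute(int id, String name, StorageField field, int cardinality) {
            this.id = id; this.name = name; this.field = field; this.cardinality = cardinality;
        }
    }

    // One metaparameter value attached to a spectrum (or, as discussed below, to a hierarchy).
    static class MetaValue {
        final int attributeId; final Object value;
        MetaValue(int attributeId, Object value) { this.attributeId = attributeId; this.value = value; }
    }

    public static void main(String[] args) {
        Map<Integer, Attribute> metaLayer = new HashMap<>();
        metaLayer.put(1, new Attribute(1, "Acquisition Time (UTC)", StorageField.DATETIME_VAL, 1));
        metaLayer.put(2, new Attribute(2, "Keyword", StorageField.STRING_VAL, Integer.MAX_VALUE));

        // Cardinality: one capture time per spectrum, but any number of keywords.
        MetaValue time = new MetaValue(1, "2020-04-27T10:15:30Z");
        MetaValue kw1  = new MetaValue(2, "grassland");
        MetaValue kw2  = new MetaValue(2, "calibration target");

        for (MetaValue v : Arrays.asList(time, kw1, kw2)) {
            System.out.println(metaLayer.get(v.attributeId).name + " -> " + v.value);
        }
    }
}
```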
For any given attribute, the cardinality defines the number of permitted metadata values per spectrum, e.g., a capture time can occur only once per spectrum, while the latter may be associated with several keywords. Categorical values are linked to defined vocabularies that are implemented as taxonomies. The taxonomy approach was based on the one used in the Australian Ecological Knowledge and Observation System [37]. Fig. 3: Tiers of the SPECCHIO system for higher-level language support, utilizing Java Bridge to interface scientific higher-level code with the SPECCHIO API. Fig. 2: SPECCHIO V3 system architecture showing the encapsulation of the MySQL-based spectral database by using a Glassfish application server for all communication. Binary values can hold items such as pictures or PDF files encoded as binary streams. The interpretation of the content itself is a task of the system software and as such is irrelevant to the metadata storage concept. Attributes are grouped into metadata categories to allow configurable, application/domain specific graphical user interfaces. Metadata of spectral data collections are highly redundant. Typically, a statistically relevant number of measurements will be acquired for the same target, also known as the measurand. The resulting spectra will usually have common attributes such as integration time, spatial location, and target description. The metadata storage model is normalized such that several spectra can refer to the same attribute value. Data normalization is carried out during data ingestion by using an attribute-value lookup table (LUT) containing already inserted values per database user to maintain system integrity. The data insert process checks the LUT for an identical attribute value, and, if existing, inserts a cross reference to the spectrum entity. For new values both the value and a cross-reference are inserted and the new value added to the LUT. ### _Metadata Storage Levels_ Metadata are generally associated with a spectrum. This may seem obvious in first instance, but a more thorough analysis shows that many metaparameters are often shared by several spectra, as pointed out above. The SPECCHIO system has always supported the structuring of spectral data by hierarchies. This is in effect a grouping function and is exploited in the SPECCHIO system to carry out easy selections and updates via the hierarchical tree structure. Linking metadata at the spectrum level, however, imposes some limitations once the sizes of spectral collections grow. The APEX spectral ground control point campaign, as more comprehensively introduced later in the case study, comprises some 84'000 spectra and serves to illustrate the issue at this point. Two problems present themselves when annotating such a large dataset: 1) metaparameters that apply to all spectra, like a document describing a sampling approach common to all data acquisitions, will be stored once as a value, but will be linked to all spectra, creating \\(\\sim\\)84 000 entries in the spectrum to value cross-relational table, and 2) new datasets added to this campaign need to have these common metaparameters redefined explicitly. Adding the hierarchy as a further storage level solves both issues: a single link is created between the value and the hierarchy, and new data inserted below this hierarchy will automatically inherit metadata defined at the hierarchy level. Fig. 4 illustrates the storage levels within the database. 
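The two storage levels can be sketched as follows: a value linked once at the hierarchy level is inherited by every spectrum stored below that node, while spectrum-level values remain specific to individual spectra. The Java sketch is purely illustrative, with hypothetical names and structures that are not taken from the SPECCHIO implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the two metadata storage levels (spectrum level and hierarchy level).
public class StorageLevelSketch {

    static class HierarchyNode {
        final String name;
        final Map<String, String> metadata = new HashMap<>();    // values linked once at hierarchy level
        final List<Integer> spectrumIds = new ArrayList<>();     // spectra stored below this node
        HierarchyNode(String name) { this.name = name; }
    }

    /** Effective metadata of a spectrum: inherited hierarchy values plus its own spectrum-level values. */
    static Map<String, String> effectiveMetadata(HierarchyNode node, Map<String, String> spectrumLevel) {
        Map<String, String> merged = new HashMap<>(node.metadata);   // inherited, stored only once
        merged.putAll(spectrumLevel);                                 // spectrum-specific values
        return merged;
    }

    public static void main(String[] args) {
        HierarchyNode campaign = new HierarchyNode("APEX Spectral Ground Control");
        campaign.metadata.put("Sampling Protocol", "field protocol document, linked a single time");
        campaign.spectrumIds.add(1);
        campaign.spectrumIds.add(2);

        Map<String, String> spectrum1 = new HashMap<>();
        spectrum1.put("Acquisition Time (UTC)", "2020-04-27T10:15:30Z");

        System.out.println(effectiveMetadata(campaign, spectrum1));
    }
}
```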
It must be noted that the table _hierarchy_x_spectrum_ is filled in all cases to speed up data selections via hierarchies, and hence no storage penalty is paid when linking metaparameter values at the hierarchy level. ### _Campaign Handling_ Data storage in SPECCHIO is organized by campaigns, where a campaign is a high-level container for data collected, for example, a particular purpose or within a certain project. Actual sampling campaigns can be constrained both spatially and temporally, but SPECCHIO applies no such restrictions, i.e., the campaign is a conceptual container grouping data that are in some manner related to each other. A fundamental concept of the campaign is its relation to file system hierarchies holding spectral input files.3 A campaign can be related to several directory structures, acting as data sources during data ingestion. Footnote 3: A list of supported input files can be found online. Available: [https://specchio.ch/faq/#what-file-formats-are-supported](https://specchio.ch/faq/#what-file-formats-are-supported) Campaigns can be built in the system over time by adding new data sources, all contributing to the same campaign. These sources can even be spread over different computers that may be situated in separate networks. Each data source is essentially an entry point into a file system hierarchy. The data ingestion process parses the underlying folders and files by using these entry points. Data loading replicates the hierarchy structure of each source within the database. Re-invocations of the data loader lead to the identification of additional files and folders and a consecutive loading. We term this feature the \"delta-loading\" capability. It supports the gradual building of campaigns, e.g., from data generated by a regular source of spectra, such as spectrometers mounted on flux towers [38] or flown on unmanned aerial vehicles [39]. ### _Research Groups_ The concept of the research group allows the collaboration of researchers within the SPECCHIO system, working on a particular campaign. Quite often, remote sensing campaigns involve participants from different institutions, each team handling a different aspect of the measurement process. In such cases, the resulting data can also be spread across the participating institutions. A research group is automatically created for each campaign. Initially, the user creating the campaign will be the only group member. Additional members can be added at any time to an existing campaign, which in turn lets them add their own data sources as well as add other team members. Fig. 4: Illustration of the storage levels by linking metaparameter values to spectra using the EAV paradigm. (a) Linking at spectrum level. (b) Linking at hierarchy level. ### _Sensors, Instruments, and Calibrations_ Sensors, instruments, and calibrations are part of the SPECCHIO relational database model. A sensor refers to the blueprint specification of a spectrometer, i.e., it is a theoretical concept. An instance of a sensor is called an instrument, i.e., it relates to an actual device that is usually identified by a serial number. Instruments tend to be wavelength calibrated, specifying an average wavelength per spectral band. The associated calibration file cannot be made to substitute the serial number as a means for identifying a specific device, as an instrument can be recalibrated over time, resulting in a different calibration file, while the serial number naturally remains constant. 
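A minimal sketch of the delta-loading behaviour is given below: each invocation parses the data source entry point and ingests only files that were not seen in earlier runs. File parsing and database inserts are omitted, and the in-memory bookkeeping shown here stands in for state that would normally be held in the spectral database.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative delta-loading sketch: re-invocations only pick up files added since the last run.
public class DeltaLoadSketch {

    private final Set<Path> alreadyLoaded = new HashSet<>();

    /** Parses the entry point and returns the number of newly ingested files. */
    int load(Path entryPoint) throws IOException {
        List<Path> regularFiles;
        try (Stream<Path> walk = Files.walk(entryPoint)) {
            regularFiles = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        int newFiles = 0;
        for (Path file : regularFiles) {
            Path relative = entryPoint.relativize(file);
            if (alreadyLoaded.add(relative)) {   // true only if this file has not been loaded before
                // ...parse the spectral input file and replicate its hierarchy in the database...
                newFiles++;
            }
        }
        return newFiles;
    }

    public static void main(String[] args) throws IOException {
        DeltaLoadSketch loader = new DeltaLoadSketch();
        Path source = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println("First pass:  " + loader.load(source) + " new files");
        System.out.println("Second pass: " + loader.load(source) + " new files (delta only)");
    }
}
```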
Depending on the manufacturer, instruments resample their calibrated wavelengths to the sensor blueprint specification, while many others deliver the instrument and calibration specific center wavelength per band with each measured spectrum. Furthermore, instruments can also relate to radiometric calibration coefficients. Instrument calibrations are handled via the calibration entity in the database. Each calibration holds the parameters that define the radiometric and spectral performance of a calibrated instrument and every spectrum captured by an instrument refers to the appropriate calibration in the database. Consequently, instrument coefficients such as wavelengths for a particular calibration are only stored once within the database. The generation of sensors, instruments, and calibrations yet unknown to the system is automated upon data loading and calibration specific metadata are parsed from the input files where provided. The update of these database system tables requires administrator rights [40] to maintain the integrity of the system. The file loading process however allows such inserts by encapsulating them in a process on the server side, hence shielded from direct user interaction. ### _Generic Spectral Data Storage and Handling_ Generic spectral vector storage in the SPECCHIO spectral database is based on binary large objects. Spectral vectors are stored as floating-point vectors represented as binary strings. This approach allows the storage of spectra irrespective of the number of spectral bands and also increases the retrieval flexibility and speed as spectra can be subset within SQL queries, e.g., allowing the selection of single spectral bands without the need to load the full spectrum into memory. The system must also generically handle spectral data as the database can hold spectra acquired by different instruments. The concept is based on the spectral spaces paradigm [41], where a spectral space holds spectral vectors that share common characteristics: same number of spectral bands, identical center wavelengths and physical unit of measurement. Spaces are used throughout the system for processing, visualization, and file output. A space is a Java class comprising a Java array to hold the spectral vectors and information about the center wavelengths and physical unit. To deal with the handling of spaces, we introduce the Space Factory. The Space Factory is a conceptual, central component of the SPECCHIO system. It creates new spaces based on given inputs and contains the logic to form \"non-mixed\" spaces. As an example, assume the use case of creating spectral plots of a number of spectra that were acquired by different instruments. To do so requires that spectral vectors are plotted versus their related wavelengths. Thus, spectra must be compiled into their spectral spaces first before any processing or plotting can be done. In a first step, the user will select the spectra to be plotted by defining query conditions that are passed to the SPECCHIO EAV query engine. The query engine affects a subspace projection [42]. This yields a number of spectrum IDs that are matching the user's selection. These IDs are then handed to the Space Factory. The Space Factory creates spaces for all existing combinations of the sensors, instruments, calibrations, and measurement units associated with the selected spectra (see Fig. 5). Utilizing the Space Factory ensures that all spectra contained by a space have a common wavelength per band and the same measurement unit. 
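The grouping logic of the Space Factory can be sketched in Java as below: spectra returned by a query are partitioned by the combination of sensor, instrument, calibration, and measurement unit, so that each resulting space is "non-mixed" and all of its vectors share the same band centre wavelengths and unit. Class names and example values are illustrative and do not correspond to the actual SPECCHIO classes.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the Space Factory grouping step: one space per combination of
// sensor, instrument, calibration and measurement unit.
public class SpaceFactorySketch {

    static class SpectrumRef {
        final int id; final String sensor; final String instrument; final int calibrationId; final String unit;
        SpectrumRef(int id, String sensor, String instrument, int calibrationId, String unit) {
            this.id = id; this.sensor = sensor; this.instrument = instrument;
            this.calibrationId = calibrationId; this.unit = unit;
        }
        String spaceKey() { return sensor + "|" + instrument + "|" + calibrationId + "|" + unit; }
    }

    /** Partitions a query result into "non-mixed" spaces, keyed by the shared characteristics. */
    static Map<String, List<Integer>> buildSpaces(List<SpectrumRef> selection) {
        Map<String, List<Integer>> spaces = new LinkedHashMap<>();
        for (SpectrumRef s : selection) {
            spaces.computeIfAbsent(s.spaceKey(), k -> new ArrayList<>()).add(s.id);
        }
        return spaces;
    }

    public static void main(String[] args) {
        List<SpectrumRef> selected = new ArrayList<>();
        selected.add(new SpectrumRef(1, "ASD FieldSpec", "SN18140", 3, "Radiance"));
        selected.add(new SpectrumRef(2, "ASD FieldSpec", "SN18140", 3, "Radiance"));
        selected.add(new SpectrumRef(3, "ASD FieldSpec", "SN18140", 3, "Reflectance"));
        buildSpaces(selected).forEach((key, ids) -> System.out.println(key + " -> spectra " + ids));
    }
}
```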
Spectral spaces and the Space Factory are being used extensively when implementing any spectral processing based on the SPECCHIO API. ## III Results ### _Comparison of SPECCHO Versions 2 and 3_ This section highlights the changes that were made in the upgrade from SPECCHIO V2 to V3. Each of the following table blocks Table I lists the capability or quantity for V2 and V3 and the specific update (\\(>\\)) that was applied. ### _Open Source_ The new SPECCHIO version has been moved to open source as per ANDS regulations. The source code of version 3 was initially deposited on an ANDS project related github4 account, but merged consecutively with the version curated by the Remote Sensing Laboratories (RSL) at the University Zurich. This federated SPECCHIO code is available via github [43]. Footnote 4: [Online]. Available: [https://github.com/IntersectAustralia/dc10](https://github.com/IntersectAustralia/dc10) ### _System Availability_ Most end users prefer to either connect to an existing SPECCHIO instance, where data can be shared with other existing Fig. 5: Building of spaces by the Space Factory based on user defined query conditions. users, or to setup their own local instance while avoiding the complexities of an installation at the server end from scratch. ### _Clients_ The SPECCHO client software is able to connect to any SPECCHO server instance. It is compiled in two versions supporting generic platforms and MacOS X specifically. The installation package is available for download from the SPECCHO webpage.5 At the time of writing, SPECCHO runs seamlessly on Java version 8, build 212 or lower. Users with higher Java build numbers should install the latest version of the SPECCHO client or refer to further information given in the SPECCHO FAQ6 to avoid certification problems caused by more recent versions of Java. Footnote 5: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/) Footnote 6: Online. Available: [https://specchio.ch/faq/](https://specchio.ch/faq/) Footnote 7: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/) ### _SPECCHO Virtual Machine_ The complete SPECCHO system including database, Glassfish application server and client has been setup in a CentOS 7 system within an Oracle Virtual Machine. Users can download7 this readymade solution and run it on their own machines. Footnote 7: Online. Available: [https://specchio.ch/downloads/](https://specchio.ch/downloads/) ### _Australian SPECCHO Instance_ The new SPECCHO version was made available to the Australian community in mid-2013 and operated by the University of Wollongong. This instance is planned to transition to Geoscience Australia (GA) to provide operational hosting and long-term custodianship of SPECCHO. GA expects to operate this Australian instance as a continental-wide data source within the framework of Digital Earth Australia Program [28], where it is expected to be used routinely for calibration and validation of multisource satellite data [44]. A metadata feed has been implemented for the Research Data Australia service of the ANDS portal. Any SPECCHO server can be configured to support publishing of information to ANDS. A similar data feed has been conceptualized for the Terrestrial Ecosystem Research Network (TERN)8 as well, but has not been implemented at the time of writing. 
A spectral dataset may be published on the ANDS portal by carrying out a data selection in the SPECCHO user interface, choose a principal investigator and hitting the \"Publish Collection\" button, which in turn will autogenerate an RIF-CS XML file that is sent to the ANDS server and ingested on a periodic basis. An ANDS Collection Key will be generated upon publishing and added as new metadata value to all exported spectra, allowing their identification within the SPECCHO system. Footnote 8: Online. Available: www.tern.org.au ### _Worldwide SPECCHO Online Instance_ The University of Zurich maintains an online instance of the SPECCHO system, available to users worldwide for testing and productive purposes. The productive database contains some 154 700 spectra (Date: 27.04.2020). ### _Metadata Attributes_ The metadata supported by SPECCHIO has been considerably updated, utilizing the EAV paradigm. The attribute table is prefilled with 380 entries of eight different data types (see Table II). A detailed list of all available attributes can be displayed via a function within the SPECCHIO client application. The large number of floating-point data type attributes is mainly related to the support of bio- and geophysical variables from the domains vegetation, soil, and geochemistry. New attributes can be added to the system by administrators using MySQL insert statements. Once added, they become immediately available to all clients after the SPECCHIO application service has been restarted. ### _Metadata Entry Methods and Redundancy Reduction_ Entering metadata has been made easier and faster by supporting metadata augmentation from tabular data held in Microsoft Excel files. Existing spectral data can be updated with new metadata by using matching between metaparameters existing in both the database and the input file, e.g., sample plot numbers encoded within the spectral file names may be matched with corresponding numbers in the Excel file using wildcard9 definitions. Footnote 9: Wildcard: a symbol such as an asterisk which can be used to represent any character or range of characters in certain commands. The efficiency of the automated metadata redundancy reduction is essentially a function of the redundancy of the input data as only existing redundancies can be minimized. Reductions for, e.g., Analytical Spectral Devices spectrometer binary files amount to an average of 70% with a standard deviation of 10%. ### _Supported Input File Formats_ The number of supported input files has been enhanced to 19 different formats. Native file loading is the preferred option as metadata can be automatically extracted and ingested into the SPECCHIO metaparameter table. The SPECCHIO webpage features a collection of spectral file formats with example files provided to help the user community checking on file format compliance.10 Footnote 10: [Online]. Available: [https://specechio.ch/faq/](https://specechio.ch/faq/) ### _Speecchio API_ The SPECCHIO API is implemented in a Java class and documented online [47]. Any programming language supporting Java either natively such as MATLAB [48] or via bridging technologies, e.g., R via the Java package [49] or Python via JPype [50], can therefore be used to interface SPECCHIO (see Fig. 3). All other SPECCHIO classes available in the client may also be used to interact with the system to maximum effect. Use cases of the SPECCHIO API can be found online.11 Footnote 11: [Online]. 
Available: [https://specechio.ch/guides/](https://specechio.ch/guides/) ### _SPECCHIO Web Interface_ The building of dynamic interactive web pages for spectral data exploration was first prototyped using the VAADIN framework.12 The concept was greatly refined in collaboration with the University of Applied Sciences of Northwestern Switzerland (FHNW), leading to an appealing solution,13 where data can be queried by dynamic metadata restrictions [51]. This implementation uses Java and Java Script and relies on the SPECCHIO Java API, thus greatly reducing the required implementation and updating efforts. Footnote 12: [Online]. Available: [http://vx22.geo.uzh.ch:8080/SPECCHIO_Web_Interface/](http://vx22.geo.uzh.ch:8080/SPECCHIO_Web_Interface/) ### _SPECCHIO Graphical User Interface_ Most of SPECCHIO's graphical user interfaces (GUI) were redesigned due to the change to the EAV based metaparameter storage. As a consequence, no software updates are required when new metadata attributes are added to the system. The building of GUIs like the Metadata Editor (see Fig. 6) is purely generic and dependent on the metadata configuration of the SPECCHIO server the client is connected to. The introduction of an attribute called the Application Domain allows the control of the metadata categories shown by default. The Application Domain is a taxonomy that can be extended or modified by the system administrator via MySQL statements. It thus enables end users to be presented with categories tuned according to their research domain. Fig. 6 shows the default categories for the Spectral Ground Control Point (SGCP) domain [8]. ## IV Case Study This section exemplifies the practical application of SPECCHIO. We selected the spectral ground control point (SGCP)campaign carried out in the framework of calibration and validation for the APEX airborne imaging spectrometer [8], [35], [52] to serve as an example. This campaign comprises some 101'300 spectra (Date: 28.04.2020) at various processing levels (digital numbers, radiances, and reflectance factors), collected over ten years of APEX operation. A fair amount of labor has been invested in annotating these data with spatial location and elevation, target classification, UTC time stamp, solar angles, cloud cover, photographs, field protocol scans, processing algorithm notes serving as provenance information, spatial sampling scheme, beam geometry [46], sensor to target distance, measurement support definition [10], and corresponding airborne mission identifier (see also Fig. 6 for an example of an SGCP reflectance set displayed in the Metadata Editor). The life cycle steps applying to this SGCP campaign are shown in Fig. 7. Data are imported from ASD binary files and augmented with most of their metadata using the SPECCHIO Metadata Editor (see Fig. 6). Additional metadata are inserted by algorithms written in MATLAB as described below. These import and processing steps can be carried out by all researchers added as collaborators to the SGCP campaign. This allows that each field team can individually upload their SGCP data into the database. Each field mission gets an airborne mission designator in its top folder to allow easy identification. This can be observed in Fig. 6, where the hierarchy names under the campaign \"APEX Spectral Ground Control\" all start with APEX mission designators, like M0150. 
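As an example of the algorithmic metadata augmentation mentioned above, the sketch below derives an approximate solar zenith angle from the UTC time stamp and latitude/longitude metaparameters of a spectrum; the result would then be written back to the database as a new metaparameter. The operational SGCP tools are written in MATLAB, so this Java sketch is illustrative only, and it uses a simple declination/hour-angle approximation that ignores the equation of time.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

// Illustrative metadata augmentation: approximate solar zenith angle from UTC time and position.
public class SolarAngleSketch {

    /** Approximate solar zenith angle in degrees (equation of time neglected). */
    static double solarZenithDeg(ZonedDateTime utc, double latDeg, double lonDeg) {
        int dayOfYear = utc.getDayOfYear();
        double hours = utc.getHour() + utc.getMinute() / 60.0 + utc.getSecond() / 3600.0;

        // Solar declination in degrees, simple cosine approximation.
        double decl = -23.44 * Math.cos(Math.toRadians(360.0 / 365.0 * (dayOfYear + 10)));

        // Local solar time from UTC and longitude (15 degrees per hour), then the hour angle.
        double solarTime = hours + lonDeg / 15.0;
        double hourAngleDeg = 15.0 * (solarTime - 12.0);

        double lat = Math.toRadians(latDeg), d = Math.toRadians(decl), h = Math.toRadians(hourAngleDeg);
        double cosZenith = Math.sin(lat) * Math.sin(d) + Math.cos(lat) * Math.cos(d) * Math.cos(h);
        return Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, cosZenith))));
    }

    public static void main(String[] args) {
        ZonedDateTime utc = ZonedDateTime.of(2020, 6, 21, 10, 30, 0, 0, ZoneOffset.UTC);
        double zenith = solarZenithDeg(utc, 47.4, 8.5);   // roughly the Zurich area, made-up sample point
        System.out.printf("Approximate solar zenith angle: %.1f deg%n", zenith);
    }
}
```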
This arrangement, combined with a guideline on how to load and augment SGCP data, enables the loading of data into SPECCHIO from various machines and operating systems and by different people at their own time. Radiance data are processed in a purpose-built, interactive MATLAB [48] software tool, utilizing the SPECCHIO API, to produce reflectance factors, involving the following steps: 1. automated flagging of white reference and target spectra in the metadata; 2. correction of radiometric steps between detectors [53] and storage of corrected radiances as intermediate products in the database; 3. interpolation of reference panel radiances over time, re-sampled to the time stamps of the target spectra; and 4. storage of the computed reflectance factors in the database. Reflectance data are used to validate and quality control APEX surface reflectance data and APEX at-sensor radiances, the latter by employing radiative transfer modeling [8]. These validation processes can be largely automated by combining the metadata of both _in situ_ and airborne datasets, as originally conceptualized for the APEX processing and archiving facility [52] and recently implemented operationally [54]. In essence, the SPECCHIO system is queried for each flight line to identify spectra matching the airborne acquisition in both space and time. The spectrum metadata is sufficiently detailed to produce validation products with automated, target-specific annotations. An example of such an automated validation is shown in Fig. 8, indicating some remaining calibration problems, such as a loss of energy in the blue wavelengths below 450 nm or interpolation artifacts in water vapor absorption regions. An analysis of the UZH RSL in-house database, hosting the APEX SGCP campaign among others, shows that the average number of metaparameters per spectrum is 15, while a carefully curated dataset like the APEX SGCP campaign reaches a mean of 36 (see Fig. 10). Specific information about instruments, including their spectral and radiometric calibration, is not part of the metaparameter count mentioned above, but is regarded a system information which can only be changed by administrators or server processes having administrator rights. Any user can however inspect these data using the Instrumentation Metadata Editor (see Fig. 9), such as the individual components that make up a radiometric calibration of an ASD instrument. ## V Discussion The development of SPECCHIO version 3 has been a major effort as the whole architecture has largely been redesigned. The use of the EAV paradigm for the storage of metadata is one of the most eminent changes as it allows for the quick adaptation Fig. 6: SPECCHIO Metadata Editor graphical user interface illustrating the hierarchical data browser for data selection (left side), metadata fields grouped by categories (middle) and category selection panel showing the default configuration for SGCPs (right side). Fig. 7: Spectroscopy data life cycle as applied for SGCP campaign data. of new metadata attributes within the system. This is in sharp contrast to previous versions where a database model update and software upgrade had been required. New metaparameters are instantly available to the users after being added to the system, the only exception are new binary contents where both the server and client software would need upgrading as the interpretation is done in software. 
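Steps 3 and 4 of the radiance-to-reflectance processing listed above can be illustrated with the following simplified sketch: white-reference panel radiances measured before and after a target sequence are linearly interpolated to the target time stamp, and the reflectance factor is the band-wise ratio of target radiance to interpolated panel radiance. The operational implementation is the MATLAB tool described above; the three-band vectors and numbers here are invented purely for illustration.

```java
// Simplified reflectance factor computation: interpolate the white reference in time, then ratio.
public class ReflectanceFactorSketch {

    /** Linear interpolation of two reference spectra (band-wise) to time t. */
    static double[] interpolateReference(double[] refBefore, double tBefore,
                                         double[] refAfter, double tAfter, double t) {
        double w = (t - tBefore) / (tAfter - tBefore);
        double[] ref = new double[refBefore.length];
        for (int b = 0; b < ref.length; b++) {
            ref[b] = (1.0 - w) * refBefore[b] + w * refAfter[b];
        }
        return ref;
    }

    /** Band-wise reflectance factor: target radiance divided by interpolated panel radiance. */
    static double[] reflectanceFactor(double[] targetRadiance, double[] panelRadiance) {
        double[] r = new double[targetRadiance.length];
        for (int b = 0; b < r.length; b++) {
            r[b] = targetRadiance[b] / panelRadiance[b];
        }
        return r;
    }

    public static void main(String[] args) {
        double[] panelAt0s   = {105.0, 210.0, 330.0};   // panel radiance at t = 0 s (three bands only)
        double[] panelAt60s  = {100.0, 200.0, 320.0};   // panel radiance at t = 60 s
        double[] targetAt30s = { 21.0,  90.0, 250.0};   // target radiance at t = 30 s

        double[] panelAt30s = interpolateReference(panelAt0s, 0.0, panelAt60s, 60.0, 30.0);
        double[] reflectance = reflectanceFactor(targetAt30s, panelAt30s);
        for (int b = 0; b < reflectance.length; b++) {
            System.out.printf("band %d: reflectance factor = %.3f%n", b, reflectance[b]);
        }
    }
}
```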
The paradigm change from spectral database to spectral information system is reflected in the new software by the EAV based metadata storage but also in the new API, offering many functions to select, group, and reinsert data, essentially allowing the building of information from algorithms implemented in higher level programming languages. The support of such languages is one key step toward the use of the SPECCHO data pool for dynamic applications, e.g., continuous data insert from tower mounted instruments, and to involve more researchers by allowing them the use of their development environment of choice. In combination with the new research group functionality, a team of researchers may work on the same data source while writing their algorithms in different programming languages. One focus of current research is the definition of mandatory and optional metaparameters [23, 32]. The previous version of SPECCHO supported a preliminary data quality scheme prescribing optional and mandatory metaparameters. This has been dropped in the new version, as it had never been used by any SPECCHO end user and research by Rasaiah _et al._[23] indicates that requirements differ between applications and user groups. Future versions may again include such a feature, which at that point will allow more flexibility due to the underlying EAV based storage supporting the definition of application-specific metadata requirements. While data quality is obviously very important, there are currently no data quality indicators implemented in the system. Again, there is no technical limitation in doing so, but a missing scientific approach on how to best estimate the quality of a data set, where quality ideally is defined as \"fit for purpose.\" Thus, in the current version, data are imported \"as is\" and not assigned any automatic quality flag. A future extension of SPECCHO in the framework of MetEOC-314 will introduce the storage and propagation of spectroradiometric uncertainties, at which point the notion of data quality will no longer only be qualitative but quantitative. Footnote 14: [Online]. Available: [http://empir.npl.co.uk/meteoc](http://empir.npl.co.uk/meteoc) One measurement of data quality is the metadata space density [40], based on the assumption that more metadata relates to a higher descriptive power of the metadata space, enabling the interpretation of the scientific data [55]. The metadata analysis of the RSL in-house database, as presented in the case study, demonstrates that carefully curated datasets reach a mean of 36 metaparameters per spectrum of a maximum 380 possible entries (see Fig. 10). This statistical analysis also demonstrates that spectral metadata spaces are essentia Fig. 8: Example of an automated validation result of an APEX HDRF cube, showing a comparison of spectra with SGCP data showing the target variation as grey envelope (top left), a true-color image of the scene zoomed into SGCP neighborhood with the SGCP indicated in the middle by a red circle (top right), the ratio between APEX and ASD (bottom left), and absolute differences in reflectance (bottom right). Fig. 10: Histograms of number of metaparameters per spectrum for all campaigns and for the APEX SGCP campaign, showing a bimodality with the distribution around a mean of 36 associated with the well-curated APEX SGCP campaign. Fig. 9: Instrumentation Metadata Editor showing the digital number spectrum being part of the radiometric calibration coefficients for the 3’FOV fore optic of ASD instrument 18140. 
thus confirming our flexible EAV storage choice where only the available metaparameters take up storage space. It must be noted that augmenting and processing a spectral dataset still requires manual labor, dedication, and attention to the detail, despite streamlined interfaces, group update functions, and automated calculation algorithms. A certain amount of development time has been spent on implementing new file format readers. It is an irksome duty of the maintainers of the code, as almost every new sensor becoming available appears to adopt another flavor of file format. We advocate that these proprietary formats should be dropped in favor of a standardized file format, such as the combination of ISO 19156 standard and Sensor Model Language proposed by Jimenez _et al._[32] or the SpectroML standard extended for field spectroscopy data and metadata [56]. ## VI Conclusion SPECCCHIO version 3 represents a major release of the SPECCHIO system, upgrading it to a spectral information system. The key improvements are a flexible metadata storage system that is easily extended to cater for the needs of different science domains, and a rich API that allows the automation of all SPECCHIO system functions. Scientific end-users can thus integrate direct SPECCHIO database access in their processing algorithms written in a programming language of their choice by using common Java bridging technologies. Moving to open source opens the opportunity to involve more developers worldwide and further improve the system. ## Acknowledgment This project has greatly benefitted through interactions with EcoSIS, the information technology services at the University of Wollongong, and SpecNet, by work carried out in the framework of the COST Actions OPTIMISE and SENSECO, and by programming support through student semester projects at FHNW, Switzerland. ## References * An assessment,\" _Remote Sens. Environ._, vol. 113, pp. 123-137, 2009. * [2] K. J. Thome, \"In-flight intersensor radiometric calibration using vicious approaches,\" in _Post-Launch Calibration of Satellite Sensors_, S. A. Morain, Ed. London, U.K.: Taylor Francis, 2004, pp. 95-102. * What is it and why do we need it?\" _Remote Sens. Environ._, vol. 103, pp. 227-235, 2006. * [4] A. Porcar-Castell _et al._, \"EUROSPEC: At the interface between remote-sensing and ecosystem CO2 flux measurements in Europe,\" _B biogeoscciences_, vol. 12, pp. 6103-6124, 2015. * [5] E. J. Milton, \"Field spectroscopy,\" in _Geoinformatics_, vol. 1, P. Atkinson, Ed. Oxford, U.K.: EOLSS Publishers/UNESCO, 2009, pp. 209-239. * [6] E. J. Milton, N. P. Fox, and M. Schapemman, \"Progress in field spectroscopy,\" in _Proc. Geosci. Remote Sens. Symp._, 2006, pp. 1966-1968. * [7] E. J. Milton, M. E. Schaepman, K. Anderson, M. Kneubuhler, and N. Fox, \"Progress in field spectroscopy,\" _Remote Sens. Environ._, vol. 113, pp. 92-109, 2009. * some considerations,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 10, no. 3, pp. 1117-1135, Mar. 2017. * [9] K. Pfutzner, R. E. Bartolo, B. Ryan, and A. Bollhofer, \"Issues to consider when designing a spectral library database,\" in _Proc. Spatial Intell., Innovation Praxis, Nat. Biennial Conf. Spatial Sci. Inst._, 2005, pp. 416-425. * [10] K. Anderson _et al._, \"Inter-comparison of hemispherical conical reflectance factors (HCRF) measured with four fibre-based spectrometers,\" _Opt. Express_, vol. 21, pp. 605-617, 2013. * [11] A. Hueni, J. Nieke, J. Schopfer, M. Kneubuhler, and K. 
Itten, \"The spectral database SPECCHIO for improved long term usability and data sharing,\" _Comput. Geosci._, vol. 35, pp. 557-565, 2009. * [12] S. Bojinski, M. Schaepman, D. Schlaepfer, and K. Itten, \"SPECCCHIO: A Web-accessible database for the administration and storage of heterogeneous spectral data,\" _Photogrammetry Remote Sens._, vol. 57, pp. 204-211, 2002. * [13] S. Bojinski, M. Schaepman, D. Schlaepfer, and K. Itten, \"SPECCCHIO: a spectrum database for remote sensing applications,\" _Comput. Geosci._, vol. 29, pp. 27-38, 2003. * [14] L. Floridi, \"Is information meaningful data?\" _Philosophy Phenomenological Res._, vol. 70, pp. 351-370, 2005. * [15] M. Karami, K. Rangaran, and A. Saberi, \"Using GIS servers and interactive maps in spectral data sharing and administration: Case study of Ahvaz Spectral Geodatabase Platform (ASGP),\" _Comput. Geosci._, vol. 60, pp. 23-33, 2013. * [16] L. Pompilio, P. Villa, M. Boschetti, and M. Pepe, \"Spectroradiometric field surveys in remote sensing practice: A workflow proposal, from planning to analysis,\" _IEEE Geosci. Remote Sens. Mag._, vol. 1, no. 2, pp. 37-51, Jun. 2013. * [17] S. Arafat, E. Farg, M. Shokr, and G. Al-Kazaz, \"Internet-based spectral database for different land covers in Egypt,\" _Adv. Remote Sens._, vol. 2, pp. 85-92, 2013. * [18] S. Iregnfried and J. Hock, _Acquisition and Storage of Multispectral Material Signatures-Workflow Design and Implementation_. Karlsruhe, Germany: KIT Scientific Publishing, 2015, pp. 123-135. * [19] L. Colini _et al._, \"Mit Etna (Italy) and Sahara desert (Algeria) sites: CAL/VAL activities for hyperspectral data and development of spectral libraries for outcropting surfaces characterization,\" in _Proc. EARSEL SIG Imag. Spectroscopy Workshop Zurich_, 2017, pp. 129-130. * [20] EcoSIS Executive Team, \"Ecological spectral information system (EcosISIS).\" [Online]. Available: [https://ecosis.org](https://ecosis.org), Accessed: 2020. * [21] L. Chisholm and A. Hueni, \"The spectroscopy dataset lifecycle: Best practice for exchange and dissemination,\" in _AuaCover Good Practice Guidelines: A Technical Handbook Supporting Calibration and Validation Activities of Remotely Sensed Data Product_, A. Held, S. Phinn, M. Soto-Berelov, and S. D. Jones, Eds. Canberra, ACT, Australia: TERN AusCover, 2015, pp. 234-248. * [22] J. Rowley, \"The wisdom hierarchy: Representations of the DIKW hierarchy,\" _J. Inf. Sci._, vol. 33, pp. 163-180, 2007. * [23] B. Rassiah, S. Jones, C. Bellman, and T. Malthus, \"Critical metadata for spectroscopy field campaigns,\" _Remote Sens._, vol. 6, pp. 3662-3680, 2014. * [24] B. Rassiah, S. Jones, C. Bellman, T. Malthus, and A. Hueni, \"Assessing field spectroscopy metadata quality,\" _Remote Sens._, vol. 7, 2015, Art. no. 4499. * [25] A. Burkart _et al._, \"A method for uncertainty assessment of passive sum-induced chlorophyll fluorescence retrieval using an infrared reference light,\" _IEEE Sens. J._, vol. 15, no. 8, pp. 4603-4611, Aug. 2015. * [26] W. Tang and J. Selwood, _Spatial Portals. Gateways to Geographic Information_. Redlands, CA, USA: ESI Press, 2005. * [27] B. Vokcher, A. Richter, and M. Mittubick, \"From geoportals to geographic knowledge portals,\" _ISPRR Int. J. Geo-Inf._, vol. 2, pp. 256-275, 2013. * foundations and lessons learned,\" _Remote Sens. Environ._, vol. 202, pp. 276-292, 2017. * [29] G. Shaw and D. Manolakis, \"Signal processing for hyperspectral image exploitation,\" _IEEE Signal Process. Mag._, vol. 19, no. 1, pp. 12-16, Jan. 2002. 
* prospective technologies and applications,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2006, pp. 2005-2008. * [31] A. Hueni, L. Suarez, L. Chisholm, and A. Held, \"The use of spectral databases for remote sensing of agricultural crops,\" in _Hyperspectral Remote Sensing of Vegetation: Fundamentals, Sensor Systems, Spectral Libraries, and Data Mining for Vegetation_, 2nd ed. vol. 1, P. S. Thenkhauli, G. J. Lyon, and A. Hueuele, Eds. Boca Raton, FL, USA: CRC Press, 2018, p. 449. * [32] M. Jimenez, M. Gonzalez, A. Amaro, and A. Fernandez-Renau, \"Field spectroscopy metadata system based on ISO and OGC standards,\" _ISPRS Int. J. Geo-Inf._, vol. 3, pp. 1003-1022, 2014. * [33] P. Nadkarni, L. Marenco, R. Chen, E. Skoufos, G. Shepherd, and P. Miller, \"Organization of heterogeneous scientific data using the EAV/CR representation,\" _J. Amer. Med. Informat. Assoc._, vol. 6, pp. 478-493, 1999. * imaging spectrometer) calibration information system,\" _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 11, pp. 5169-5180, Nov. 2013. * [35] M. Schaepman _et al._, \"Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX),\" _Remote Sens. Environ._, vol. 158, pp. 207-219, 2015. * [36] V. Dinu and P. Nadkarni, \"Guidelines for the effective use of entity-attribute-value modeling for biomedical databases,\" _Int. J. Med. Informat._, vol. 76, pp. 769-779, 2007. * [37] D. J. Turner, A. K. Smyth, C. M. Walker, and A. J. Lowe, \"AEKOS: Next-generation online data and information infrastructure for the ecological science community,\" in _Terrestrial Ecosystem Research Infrastructures: Challenges and Opportunities_, A. Chabbi and H. W. Loescher, Eds. Boca Raton, FL, USA: CRC Press, 2017. * [38] M. Balzarodo _et al._, \"Ground-based optical measurements at European flux sites: A review of methods, instruments and current controversies,\" _Sensors_, vol. 11, pp. 7954-7981, 2011. * [39] A. Burkart, S. Cogliati, A. Schickling, and U. Rascher, \"A novel UAV-based ultra-light weight spectrometer for field spectroscopy,\" _IEEE Sens. J._, vol. 99, 2013. * [40] A. Hueni, T. Malthus, M. Kneubuehler, and M. Schaepman, \"Data exchange between distributed spectral databases,\" _Comput. Geosci._, vol. 37, pp. 861-873, 2011. * [41] D. Landgrebe, _On Information Extraction Principles for Hyperspectral Data_. West Lafayette, IN, USA: Purdue Univ., 1997. * [42] A. Hueni, J. Nieke, J. Schopfer, M. Kneubuhler, and K. Itten, \"Metadata of spectral data collections,\" in _Proc. 5th EARSeL Workshop Imag. Spectroscopy_, 2007, p. 14. * [43] A. Hueni, \"SPECCHO source code.\" [Online]. Available: [https://github.com/SPECCCHIODB/SPECCCHIO](https://github.com/SPECCCHIODB/SPECCCHIO), Accessed: 2020. * [44] C. Ong, T. Malthus, I. C. Lau, M. Thankappan, and G. Byrne, \"The development of a standardised validation approach for surface reflectance data,\" in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2018, pp. 6456-6459. * [45] European Commission DG XI, CORINE land cover, European Commission Directorate-General Environment, Nuclear Safety and Civil Protection, Office for Official Publications Eur. Communities, Luxembourg city, Luxembourg, 1993. * [46] G. Schaepman-Strub, M. Schaepman, T. H. Painter, S. Dangel, and J. V. Martonchuk, \"Reflectance quantities in optical remote sensing: definitions and case studies,\" _Remote Sens. Environ._, vol. 103, pp. 27-42, 2006. * [47] SPECCHO, \"SPECCHO API.\" [Online]. 
Available: [https://specchio.ch/javadoc/](https://specchio.ch/javadoc/), Accessed: 2020. * [48] The MathWorks Inc., Natick, MA, USA, _Matlab_, 2017. * [49] S. Urbanek, \"Lfava: Low-level R to Java interface,\" Jun. 7, 2020. [Online]. Available: [http://CRAN.R-project.org/package](http://CRAN.R-project.org/package) = Java * [50] JPype, \"Jype documentation.\" [Online]. Available: [https://jyppe.readthedocs.io/en/latest/index.html](https://jyppe.readthedocs.io/en/latest/index.html), Accessed: 2020. * [51] A. Hueni, C. Schibli, R. Rossi, and M. Gwerder, \"SPECCHO spectral information system web interface,\" in _Proc. EARSeL SIG IS Zurich_, 2017, pp. 123-124. * [52] A. Hueni _et al._, \"Structure, components and interfaces of the airborne prism experiment (APEX) processing and archiving facility,\" _IEEE Trans. Geosci. Remote Sens._, vol. 47, no. 1, pp. 29-43, Jan. 2009. * [53] A. Hueni and A. Bialek, \"Cause, effect and correction of field spectroradiometer inter-channel radiometric steps,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 10, no. 4, pp. 1542-1551, Apr. 2017. * [54] C. Meiller, H. Kuchelle, M. Werfeli, and A. Hueni, \"A calibration and validation tool for data quality analysis of airborne imaging spectroscopy data,\" in _Proc. Int. Geosci. Remote Sens. Symp._, 2020, pp. 6234-6237. * [55] W. K. Michener, \"Metadata,\" in _Ecological Data: Design, Management and Processing_, W. K. Michener and J. W. Brunt, Eds. Oxford, U.K.: Blackwell Sci., 2000, pp. 92-116. * [56] T. Malthus and A. Hueni, \"An XML-based format for the exchange of spectroradiometric data,\" in _Proc. EARSeL SIG IS_, 2009.
Spectral Information Systems provide a framework to assemble, curate, and serve spectral data and their associated metadata. This article documents the evolution of the SPECCHIO system, devised to enable long-term usability and data-sharing of field spectroradiometer data. The new capabilities include a modern, web-based client-server architecture, a flexible metadata storage scheme for generic metadata handling, and a rich application programming interface, enabling scientists to directly access spectral data and metadata from their programming environment of choice. The SPECCHIO system source code has been moved into the open source domain to stimulate contributions from the spectroscopy community while binary distributions, including the SPECCHIO virtual machine, simplify the installation and use of the system for the end-users. Information systems, metadata, relational databases, spectroradiometers, spectroscopy.
# Design, Implementation, and Control of a Ball-Balancing Robot

Sangsin Park, Department of Mechanical Engineering, Korea National University of Transportation, Chungju-si 27469, South Korea e-mail: [email protected] This work was supported by Korea National University of Transportation Industry-Academic Cooperation Foundation, in 2022.

## I Introduction

The field of robotics has seen significant strides since its early days, with innovations transforming how robots are utilized across various industries. One area that has particularly benefited from these innovations is the service and entertainment industry. From the simple, repetitive motions of early mechanical automatons to today's sophisticated robots capable of dynamic interactions with humans, the progress has been extraordinary. Robots have become integral to theme parks, restaurants, and even live performances, offering new forms of engagement and immersive experiences. Innovations in artificial intelligence and sensor technology have enabled robots to perform complex tasks, respond to human emotions, and navigate through intricate environments, significantly enhancing their role in the service and entertainment industry. One of the robots in this field is the ball-balancing robot (BBR), a marvel of modern engineering and control systems. These robots, which balance on a single spherical wheel, provide unique advantages in maneuverability and interaction. Their ability to move smoothly in any direction without turning makes them ideal for dynamic and crowded environments such as theme parks, museums, and live events. BBRs can perform various entertaining tasks, from guiding visitors and delivering interactive performances to serving as mobile information kiosks. Their agility and responsiveness make them not only functional but also engaging and entertaining, capturing the imagination of audiences and pushing the boundaries of robotic entertainment. As these robots continue to advance, their role in the service and entertainment industry is expected to grow, offering even more innovative and captivating experiences. An example of research relating to a BBR is the study by Lauwers et al. [1], which presents a dynamically stable single-wheeled robot designed for omnidirectional movement, balancing on a spherical wheel using an inverse mouse-ball drive mechanism. The control system, combining a Proportional-Integral (PI) inner loop and a Linear Quadratic Regulator (LQR) outer loop, effectively maintains balance but struggles with precise trajectory tracking due to unmodeled frictional forces. A continuation of this research is the work by Nagarajan et al. [2], which focuses on a BBR with a four-motor inverse mouse-ball drive and a yaw drive mechanism for unlimited rotation about its vertical axis. They introduce a trajectory planning algorithm for smooth motion, validated through extensive experiments that showcase the robot's ability to handle disturbances and perform precise movements. The robot's design and control innovations highlight its potential for human interaction and dynamic navigation, making it a valuable asset in entertainment environments. Moreover, Nagarajan and Hollis [3] introduce shape-accelerated balancing systems, a type of underactuated system where shape configurations map to accelerations in position space, exemplified by balancing mobile robots like the ballbot. Additionally, Seyfarth et al.
[4] present SIMbot, a ballbot driven by a spherical induction motor (SIM), which simplifies the mechanical complexity of traditional ballbots by reducing the number of active moving parts to just two--the body and the ball. Experimental results demonstrate SIMbot's effective balancing, station-keeping, precise point-to-point motion, and recovery from initial lean angles. The study by Kumagai and Ochiai [5, 6] presents the development and control of a BBR. This research highlights the innovative design and engineering solutions implemented to achieve dynamic stability and precise maneuverability. Another study is the work by Hertig et al. [7]. For a comprehensive approach to state estimation for a BBR, they propose a method that employs an Extended Kalman Filter (EKF) to fuse sensory information from incremental encoders, gyroscopes, and accelerometers. Unlike previous methods that separate attitude and position estimation, this unified approach allows for information flow in both directions, enhancing the overall accuracy of state estimates. The study by Pham et al. [8] delves into the innovative design and control mechanisms of a human-ridable BBR. They design the robot with a dynamic stability mechanism and implement a control strategy based on a double-loop approach, combining a PI inner loop for immediate control and an LQR outer loop for overall stability and trajectory tracking. The paper highlights the practical applications of the ball segway in environments requiring high agility and stability, such as human riding scenarios. This research significantly contributes to the field by presenting a comprehensive control strategy and validating it with real-world experiments. Moreover, the research by Pham et al. [9] addresses the complex challenge of synchronizing the motion between the ball and the body of the BBR to maintain equilibrium and achieve precise trajectory tracking. They propose a synchronization controller (SC) design that incorporates synchronization and coupling errors, and they apply this method to a dynamic model of the BBR. They validate their approach through extensive simulations and real-world experiments. The results demonstrate that the SC method offers stabilization accuracy and robust performance in both balancing and tracking tasks, even in the presence of external disturbances. Jo and Oh [10] address the challenge of balancing and tracking control for a BBR by considering the contact forces between the robot and the ball. They propose a control framework that utilizes a projected task space dynamics approach with quadratic programming to handle inequality constraints such as friction and unilateral constraints. By dividing the task space dynamics into separate dynamics for the robot and the ball, the control input can be derived more effectively through the ball task dynamics. This decomposition allows for optimal contact force computation, which is crucial for maintaining balance and accurate trajectory tracking. Lee and Park [11] introduce a virtual angle-based sliding mode control method to address the challenges of underactuation and nonlinear dynamics inherent in a BBR. This method uses a single-loop controller to achieve both trajectory tracking and balancing, simplifying the control system and enhancing robustness against external disturbances and model uncertainties. Xiao et al. [12] design a BBR capable of carrying heavy loads. They develop a cascaded LQR-PI controller to address the drivetrain's nonlinear dynamics and high stiction issues.
This controller combines an outer LQR loop for optimal trajectory generation with an inner PI loop for precise tracking and friction compensation. The work by Jang et al. [13] presents a control method for a BBR. This method combines a virtual angle-based control strategy with an adaptive observer using radial basis function neural networks to estimate velocity information, thereby eliminating the need for direct velocity measurements. The proposed control system addresses the challenges of underactuation and nonlinearities in a BBR, ensuring the convergence of tracking and balancing errors without the local minimum problems associated with hierarchical control methods. Fankhauser and Gwerder [14] propose a comprehensive three-dimensional model to improve upon the limitations of current planar decomposition methods. Real-time simulations with a gain-scheduled controller demonstrate significant improvements, enabling the ballbot to follow complex trajectories with enhanced performance and robustness. Lal et al. [15] introduce the development of a BBR and the design of its controller. The modular design facilitates future expansions and adjustments. Real-time control tests using a linear quadratic regulator show effective performance. The study by Inal et al. [16] introduces a fully-coupled 3D dynamic model for spherical wheeled self-balancing robots, which addresses limitations of previous 2.5D models by capturing important aspects such as yaw dynamics and coupled inertial effects. The simulations show that the 3D model can accurately track desired trajectories and handle complex motions, highlighting its advantages over simpler models. Endo and Nakamura [17] develop the \"B.B. Rider\" (Basketball Rider) robot, which a human can ride. The robot uses a basketball as a low-cost and effective spherical tire, incorporating orthogonal wheels and a six-axis force-torque sensor to enable movement and balance. My contributions to the BBR include the development of a unique thin and narrow body design to enable functional expansion for future BBRs, the design of a customized stepped planetary gear for the actuators, and the experimental validation of the new design using a compensator composed of an observer and full-state feedback. The paper is organized as follows. Section II describes the development of the BBR, including details of the actuator and electronic parts. The mathematical model of the BBR is described in Section III, and a balancing control scheme is presented in Section IV. In Section V, experimental results are presented. Finally, discussion and conclusions are described in Sections VI and VII, respectively.

## II Development of a Ball-Balancing Robot

The aims of the robot's design are to interact with humans in service areas and to assist at live events. Thus, I have developed a ball-balancing robot (BBR) with a mass of 8.9 kg, a height of 90 cm, and a thin body with a diameter of 14.5 cm. A rendering of the BBR is presented in Fig. 1. The following provides a detailed description.

### _An Actuator_

Three actuators are needed to rotate the ball. Each actuator consists of an omniwheel, a customized stepped planetary gear, a brushless DC (BLDC) motor, and a magnetic encoder. An exploded view of the actuator is shown in Fig. 2. The diameter of the omniwheel is 100 mm. The BLDC motor used is the Maxon EC-i 40 100W motor, and the motor driver is the Elmo Gold Solo Whistle. A magnetic encoder by RENISHAW is used to measure the omniwheel's angular velocity.
Regarding the customized planetary gear, a sectional view of the gear is presented in Fig. 3. This design can be used to achieve high gear ratios while maintaining a compact and efficient form factor. The features of this gear are as follows. The sun gear, which serves as the input, is connected to the motor shaft and drives the planetary gears. The ring gear is fixed in place, while the carrier, which serves as the output, is connected to the omniwheel. Each planet gear is composed of two different gears that rotate around the sun gear, and provides two stages of gear reduction. Thus, its primary advantage is the ability to provide higher gear ratios within a smaller space compared to a single-stage planetary gear. To calculate the gear ratio of this planetary gear system, where the sun gear is the input and the carrier is the output with the ring gear fixed, the relationship is derived as follows. \\[\\left|\\frac{\\omega_{r}-\\omega_{c}}{\\omega_{s}-\\omega_{c}}\\right|=\\frac{N_{driving}}{N_{driven}}, \\tag{1}\\] where \\(\\omega_{s}\\), \\(\\omega_{r}\\), \\(\\omega_{c}\\), \\(N_{driving}\\), and \\(N_{driven}\\) are the angular velocities of the sun gear, ring gear, and carrier, the product of the number of teeth on the driving gears, and the product of the number of teeth on the driven gears, respectively. Rewriting (1), \\[\\frac{\\omega_{r}-\\omega_{c}}{\\omega_{s}-\\omega_{c}}=-\\frac{N_{p_{2}}N_{s}}{N_{r}N_{p_{1}}}=-K, \\tag{2}\\] where \\(N_{s}\\), \\(N_{r}\\), \\(N_{p_{1}}\\), and \\(N_{p_{2}}\\) are the numbers of teeth on the sun, ring, first stage planet, and second stage planet gears, respectively.

Figure 1: A rendering of a designed ball-balancing robot. Figure 2: An exploded view of the actuator. Figure 3: A sectional view of a customized stepped planetary gear.

Rearranging (2), \\[\\omega_{r}=(1+K)\\omega_{c}-K\\omega_{s}. \\tag{3}\\] The left side of (3) is \\(\\omega_{r}=0\\) because the ring gear is fixed. Thus, \\[\\omega_{c}=\\frac{K}{1+K}\\omega_{s}=\\frac{N_{p_{2}}N_{s}}{N_{r}N_{p_{1}}+N_{p_{2}}N_{s}}\\omega_{s}. \\tag{4}\\] The sun, ring, and planet gears are designed with a module of 1.0. In addition, the numbers of teeth on the sun and ring gears are 7 and 30, respectively. For the planet gears, the numbers of teeth on the first and second stages are 15 and 8, respectively. From (4), the gear ratio of the customized stepped planetary gear is 9.0357:1 (a short numerical check of this ratio is given at the end of this article).

### Battery Packs and An Interface Module

To separate the PC power source from the power source for the motor drivers, I fabricated two battery packs using 18650 Li-Ion batteries with battery management systems, which are shown in Fig. 4. The 13S1P battery pack (33.15Ah) consists of 13 cells connected in series, with a nominal voltage of 48.1V and a charging voltage of 54.6V. This battery pack is used as the power source for driving the BLDC motor and operating the microcontroller and an inertial measurement unit (IMU). The 4S2P battery pack (28Ah) consists of two sets of 4 cells connected in series, which are then connected in parallel. It has a nominal voltage of 14.8V and a charging voltage of 16.8V. This battery pack is used as the power source for operating the mini PC. The capacity of the two batteries is sufficient to operate the robot hardware and PC for around 1 hour. The interface module consists of three stacked PCBs. The bottom PCB has two DC-DC converters that convert 48V to 12V and 5V. The second layer PCB is an STM32F413VGT microcontroller PCB used for acquiring IMU sensor values.
The top PCB is where the IMU sensor and connectors are mounted. ### Fabrication In Fig. 6, the actual assembled BBR is shown and the hardware specifications of the BBR are summarized in Table 1. Aluminum profiles are used as the body frame to support the load and maintain its shape. The plates for placing parts in the body frame and the rim structure for holding a ball are made using a 3D printer. I design the unique robot's body to be thin and narrow to enable functional expansion for future ball-balancing robots. For example, equipment for interacting with people or cases for use in performances can be attached to the body frame. ## III The Mathmatical Model of the Ball-Balancing Robot The mathematical models for the ball-balancing robot (BBR) are described by dividing it into three planes. Each model has a virtual wheel that rotates the ball. Since the model for the frontal plane is the same as that for the longitudinal plane, it is not described separately. Therefore, only the models for the longitudinal plane and the horizontal plane are detailed. ### Equations of Motion in the Longitudinal Plane The model in the longitudinal plane is shown in Fig. 7. Regarding the body, \\(m_{B},J_{By},L_{B}\\), and \\(\\vartheta_{y}\\) are the mass, moment of inertia, length between the center of the ball and the mass center of the body, and the orientation of the body, respectively. For the virtual wheel, \\(m_{W}\\), \\(J_{W}\\), \\(r_{W}\\), and \\(\\dot{\\vartheta}_{y}\\) are Figure 4: (a) A 1351P battery pack (rated 48.1V, charged 54.6V) for motor drivers and an interface module and (b) a 4S2P battery (rated 14.8V, charged 16.8V) for a mini PC, respectively. Figure 5: An interface module (1st layer: DC-DC converter PCB, 2nd layer: microcontroller PCB, 3rd layer: IMU and connectors PCB). the mass, moment of inertia, radius, and angular velocity of the wheel, respectively. Finally, for the ball, \\(m_{A}\\), \\(J_{Ay}\\), \\(r_{A}\\), \\(x_{A}\\), and \\(\\dot{\\phi}_{y}\\) are the mass, moment of inertia, radius, displacement, and angular velocity of the ball, respectively. To derive the relationship with the ball and the wheel, we consider that the velocity of the ball is same as that of the wheel at \\(c_{y}\\). First, the velocity of the ball at \\(c_{y}\\) is \\[v_{B,c_{y}} =\\begin{bmatrix}\\dot{x}_{A}\\\\ 0\\\\ 0\\end{bmatrix}+\\begin{bmatrix}0\\\\ \\dot{\\phi}_{y}\\\\ 0\\end{bmatrix}\\times\\begin{bmatrix}r_{A}s\\theta_{y}\\\\ 0\\\\ r_{A}c\\theta_{y}\\end{bmatrix}\\] \\[=\\begin{bmatrix}r_{A}\\dot{\\phi}_{y}+r_{A}c\\theta_{y}\\dot{\\phi}_{ y}\\\\ 0\\\\ -r_{A}s\\theta_{y}\\dot{\\phi}_{y}\\end{bmatrix}. \\tag{5}\\] Second, the velocity of the wheel at \\(c_{y}\\) is \\[v_{W,c_{y}} =\\begin{bmatrix}\\dot{x}_{A}\\\\ 0\\\\ 0\\end{bmatrix}+\\begin{bmatrix}0\\\\ \\dot{\\phi}_{y}\\\\ 0\\end{bmatrix}\\times\\begin{bmatrix}(r_{A}+r_{W})s\\theta_{y}\\\\ 0\\\\ (r_{A}+r_{W})c\\theta_{y}\\end{bmatrix}\\] \\[+\\begin{bmatrix}0\\\\ -\\dot{\\psi}_{y}\\\\ 0\\end{bmatrix}\\times\\begin{bmatrix}-r_{W}s\\theta_{y}\\\\ 0\\\\ -r_{W}c\\theta_{y}\\end{bmatrix}\\] \\[=\\begin{bmatrix}r_{A}\\dot{\\phi}_{y}+(r_{A}+r_{W})c\\theta_{y}\\dot {\\phi}_{y}+r_{W}c\\theta_{y}\\dot{\\psi}_{y}\\\\ 0\\\\ -(r_{A}+r_{W})s\\theta_{y}\\dot{\\phi}_{y}-r_{W}s\\theta_{y}\\dot{\\psi}_{y}\\end{bmatrix}. \\tag{6}\\] With the no-slip condition, (5) is equal to (6). Thus, \\[r_{A}\\dot{\\phi}_{y}=(r_{A}+r_{W})\\dot{\\phi}_{y}+r_{W}\\dot{\\psi}_{y}. \\tag{7}\\] Rearranging (7), we get following. 
\\[\\dot{\\psi}_{y}=\\frac{r_{A}}{rw}(\\dot{\\phi}_{y}-\\dot{\\phi}_{y})-\\dot{\\phi}_{y} \\tag{8}\\] To derive the equations of motion, the Lagrangian method is used. For the ball, the potential energy is \\(v_{A\\psi}=0\\), and the kinetic energy is \\[T_{Ay}=\\frac{1}{2}m_{A}r_{A}^{2}\\dot{\\phi}_{y}^{2}+\\frac{1}{2}J_{Ay}\\dot{\\phi} _{y}^{2}. \\tag{9}\\] For the wheel, the kinetic and potential energy are as follows. \\[T_{wy} =\\frac{1}{2}m_{W}\\left[\\{r_{A}\\dot{\\phi}_{y}+(r_{A}+r_{W})c\\theta _{y}\\dot{\\phi}_{y}\\}^{2}\\right.\\] \\[\\left.+\\{(r_{A}+r_{W})s\\theta_{y}\\dot{\\phi}_{y}\\}^{2}\\right]+ \\frac{1}{2}J_{Wy}\\dot{\\psi}_{y}^{2}, \\tag{10}\\] \\[V_{wy} =m_{W}g(r_{A}+r_{W})c\\theta_{y}. \\tag{11}\\] \\begin{table} \\begin{tabular}{c c} \\hline \\hline Mass & 8.9 kg \\\\ \\hline Height & 0.9 m \\\\ \\hline Degrees of freedom & 3 \\\\ \\hline Motor & Maxon EC-14 40, 487, 100W \\(\\times\\) 3 \\\\ \\hline Speed reducer & Customized stepped planetary gear, ratio 9.0357-1 \\\\ \\hline Motor driver & ELMO G-SOLWHIZ0V100S \\(\\times\\) 3 \\\\ \\hline Sensor & RENISHAW Orbis \\({}^{*}\\) true, absolute rotary encoder \\(\\times\\) 3 \\\\ \\hline ANALGO DEVICES 6-axis IMU \\(\\times\\) 1 \\\\ \\hline Communication & CAN 2.0A \\\\ \\hline PC & INTEB \\(\\times\\) NC1GANK3, \\\\ & CPU(3-13)150, 1.2GHz, \\\\ & RAM 32GB, SSD 256GB \\\\ \\hline OS & Ubuntn 20.04 based on \\\\ & a Xenom-patched Linux kernel \\\\ \\hline Supply voltages & 48.1V for motor drivers and an interface module \\\\ & 14.8V for PC \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Hardware specifications of the ball-balancing robot. Figure 6: The developed ball-balancing robot. Figure 7: The model of the longitudinal plane. Substituting (8) into (10) and then simplifying (10) yields \\[T_{wy} = \\frac{1}{2}m_{W}\\left\\{r_{A}^{2}\\dot{\\sigma}_{y}^{2}+(r_{A}+r_{W})^ {2}\\dot{\\sigma}_{y}^{2}\\right. \\tag{12}\\] \\[\\left.+2\\ r_{A}(r_{A}+r_{W})c\\theta_{y}\\dot{\\phi}_{y}\\dot{\\phi}_{ y}\\right\\}\\] \\[+\\frac{1}{2}J_{W}\\left\\{\\frac{r_{A}}{r_{W}}(\\dot{\\phi}_{y}-\\dot{ \\theta}_{y})-\\dot{\\theta}_{y}\\right\\}^{2}.\\] For the body, the kinetic and potential energy are as follows. \\[T_{By} = \\frac{1}{2}m_{B}\\left(r_{A}^{2}\\dot{\\phi}_{y}^{2}+L_{B}^{2}\\dot{ \\theta}_{y}^{2}\\right. \\tag{13}\\] \\[\\left.+2\\ r_{A}L_{B}c\\dot{\\phi}_{y}\\dot{\\phi}_{y}\\right)+\\frac{1 }{2}J_{B}\\dot{\\phi}_{y}^{2},\\] \\[V_{By} = m_{B}L_{B}c\\dot{\\phi}_{y}. \\tag{14}\\] Thus, the total kinetic energy is \\(T_{y}=T_{Ay}+T_{Wy}+T_{By}\\), and the total potential energy is \\(V_{y}=V_{Ay}+V_{Wy}+V_{By}\\). Define the generalized coordinates \\(q_{y}=[\\phi_{y}~{}\\dot{\\theta}_{y}]^{T}\\). The Lagrangian is defined to be \\(L(q_{y},~{}\\dot{q}_{y})=T_{y}-V_{y}\\). The Euler-Lagrange equations of motion in the longitudinal plane are \\[\\frac{d}{dt}\\frac{\\partial L}{\\partial\\dot{q}_{y}}-\\frac{\\partial L}{\\partial q _{y}}=\\left[\\begin{array}{c}r_{A}/r_{W}\\\\ -r_{A}/r_{W}\\end{array}\\right]\\tau_{y} \\tag{15}\\] After calculating the derivatives in the Euler-Lagrange equations and reorganizing the terms, the equations of motion can be expressed as follows. \\[M(q_{y})\\ddot{q}_{y}+C(q,~{}\\dot{q}_{y})+G(q)=\\left[\\begin{array}{c}r_{A}/r _{W}\\\\ -r_{A}/r_{W}\\end{array}\\right]\\tau_{y}, \\tag{16}\\] where \\(\\tau_{y}\\) is the torque of the virtual wheel. 
The mass matrix, \\(M(q_{y})\\), is \\[M(q_{y})=\\left[\\begin{array}{cc}M_{y,11}&M_{y,12}\\\\ M_{y,12}&M_{y,22}\\end{array}\\right]\\] where \\[M_{y,11} = (m_{A}+m_{W}+m_{B})r_{A}^{2}+J_{Ay}+J_{Wy}\\frac{r_{A}^{2}}{r_{W}^ {2}}\\] \\[M_{y,12} = (m_{W}(r_{A}+r_{W})+m_{B}L_{B})\\ r_{A}c\\theta_{y}\\] \\[-J_{Wy}\\frac{r_{A}}{r_{W}}(\\frac{r_{A}}{r_{W}}+1)\\] \\[M_{y,22} = m_{W}(r_{A}+r_{W})^{2}+m_{B}L_{B}^{2}+J_{By}\\] \\[+J_{Wy}(\\frac{r_{A}}{r_{W}}+1)^{2}.\\] The vector of Coriolis and centrifugal force, \\(C(q,~{}\\dot{q}_{y})\\), and the vector of gravitational force, \\(G(q)\\), are \\[C(q,~{}\\dot{q}_{y})=\\left[\\begin{array}{c}-\\left\\{m_{W}(r_{A}+r_{W})+m_{B}L_ {B}\\right\\}r_{A}s\\theta_{y}\\dot{\\theta}_{y}^{2}\\\\ 0\\end{array}\\right],\\] and \\[G(q)=\\left[\\begin{array}{c}0\\\\ -\\left\\{m_{W}(r_{A}+r_{W})+m_{B}L_{B}\\right\\}gs\\theta_{y}\\end{array}\\right],\\] respectively. ### Equations of motion in the horizontal plane The model in the horizontal plane is shown in Fig. 8. (a). The notation rules for the horizontal plane are the same as those for the longitudinal plane. In the model of the horizontal plane, the \\(\\alpha\\) is the angle from the vertical axis of the body to the point where the ball and the wheel meet and is shown in Fig. 8. (b). The relationship with the ball and the wheel is derived from the condition that the velocities at \\(c_{z}\\) are equal. Thus, \\[r_{A}s\\alpha\\dot{\\phi}_{z}=(r_{A}s\\alpha+r_{W})\\dot{\\theta}_{z}+r_{W}\\dot{ \\psi}_{z} \\tag{17}\\] Rearranging (17), we get following. \\[\\dot{\\psi}_{z}=\\frac{r_{A}s\\alpha}{r_{W}}(\\dot{\\phi}_{z}-\\dot{\\theta}_{z})- \\dot{\\theta}_{z} \\tag{18}\\] The total potential energy in the horizontal plane is \\(V_{z}=0\\), and the total kinetic energy is as follows. \\[T_{z}=\\frac{1}{2}J_{Az}\\dot{\\phi}_{z}^{2}+\\frac{1}{2}J_{Wz}\\left\\{\\frac{r_{A} s\\alpha}{r_{W}}(\\dot{\\phi}_{z}-\\dot{\\theta}_{z})-\\dot{\\theta}_{z}\\right\\}^{2}+ \\frac{1}{2}J_{B}\\dot{\\phi}_{z}^{2} \\tag{19}\\] Define the generalized coordinates \\(q_{z}=[\\phi_{z}~{}\\theta_{z}]^{T}\\). The Lagrangian is defined to be \\(L(q_{z},~{}\\dot{q}_{z})=T_{z}\\). From the Euler-Lagrange equations, the equations of motion in the horizontal plane is derived as follows. \\[M(q_{z})\\ddot{q}_{z}=\\left[\\begin{array}{cc}\\frac{r_{A}s\\alpha}{r_{W}}\\tau_ {z}-\\tau_{y}\\\\ -\\frac{r_{A}s\\alpha}{r_{W}}\\tau_{z}\\end{array}\\right], \\tag{20}\\] where \\(\\tau_{z}\\) and \\(\\tau_{f}\\) are the torque of the virtual wheel and the friction torque between the ball and the ground, respectively. The mass matrix, \\(M(q_{z})\\), is \\[M(q_{z})=\\left[\\begin{array}{cc}M_{z,11}&M_{z,12}\\\\ M_{z,12}&M_{z,22}\\end{array}\\right]\\] where \\[M_{z,11} = J_{Az}+J_{Wz}\\frac{r_{A}^{2}s^{2}\\alpha}{r_{W}^{2}}\\] \\[M_{z,12} = -\\left(1+\\frac{r_{W}}{r_{A}s\\alpha}\\right)J_{Wz}\\frac{r_{A}^{2}s^{ 2}\\alpha}{r_{W}^{2}}\\] \\[M_{z,22} = J_{Bz}+\\left(1+\\frac{r_{W}}{r_{A}s\\alpha}\\right)^{2}J_{Wz}\\frac{r_ {A}^{2}s^{2}\\alpha}{r_{W}^{2}}.\\] Figure 8: (a) The model of the horizontal plane. (b) The \\(\\alpha\\) is the zenith angle of the wheel. From the no-rotation condition of the ball against the ground, \\(\\vec{\\phi}_{z}=0\\), then (20) can be simplified as follows. 
\\[\\vec{\\theta}_{z}=C_{z}\\tau_{z}\\, \\tag{21}\\] where \\[C_{z}=-\\frac{r_{A}r_{W}s\\alpha}{J_{Wz}r_{A}^{2}s^{2}\\alpha+(J_{Bz}+J_{Wz})r_{W}^ {2}+2J_{Wz}r_{A}r_{W}s\\alpha}.\\] ### Torque Conversion The torques on the virtual wheels have to be converted into the torques for the real omnirwheels. To derive the relationship between the virtual and real ones, we use the property that the sum of the torques acting on the ball is the same in both the virtual and real models. The diagram of each model is shown in Fig. 9. First, referring to Fig. 9. (a), the tangential forces exerted by the real omnirwheels on the ball and moment arms are \\[F_{W1} =\\frac{\\tau_{1}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 0\\end{bmatrix},F_{W2}=\\frac{\\tau_{2}}{r_{W}}\\begin{bmatrix}-\\frac{\\sqrt{3}}{1} \\\\ -\\frac{1}{2}\\\\ 0\\end{bmatrix},\\] \\[F_{W3} =\\frac{\\tau_{3}}{r_{W}}\\begin{bmatrix}\\frac{\\sqrt{3}}{2}\\\\ -\\frac{1}{2}\\\\ 0\\end{bmatrix},\\] \\[r_{W1} =\\begin{bmatrix}r_{A}s\\alpha\\\\ 0\\\\ r_{A}\\alpha\\end{bmatrix},r_{W2}=\\begin{bmatrix}-\\frac{1}{2}r_{A}s\\alpha\\\\ \\frac{\\sqrt{3}}{2}r_{A}s\\alpha\\\\ r_{A}\\alpha\\end{bmatrix},\\] \\[r_{W3} =\\begin{bmatrix}-\\frac{1}{2}r_{A}s\\alpha\\\\ -\\frac{\\sqrt{3}}{2}r_{A}s\\alpha\\\\ r_{A}\\alpha\\end{bmatrix}.\\] The torques are \\(T_{i}=r_{W\\bar{\\textsc{i}}}\\times F_{W\\bar{\\textsc{ii}}}\\quad(i=1,\\ 2,\\ 3)\\), and the sum of the torques is \\[T_{OW} =T_{1}+T_{2}+T_{3}\\] \\[=\\frac{r_{A}\\tau_{1}}{r_{W}}\\begin{bmatrix}-c\\alpha\\\\ 0\\\\ s\\alpha\\end{bmatrix}\\] \\[\\quad+\\frac{r_{A}\\tau_{2}}{r_{W}}\\begin{bmatrix}\\frac{1}{2}c\\alpha \\\\ -\\frac{\\sqrt{3}}{2}c\\alpha\\\\ s\\alpha\\end{bmatrix}+\\frac{r_{A}\\tau_{3}}{r_{W}}\\begin{bmatrix}\\frac{1}{2}c \\alpha\\\\ \\frac{\\sqrt{3}}{2}c\\alpha\\\\ s\\alpha\\end{bmatrix}. \\tag{22}\\] Second, referring to Fig. 9. (b), the tangential forces exerted by the virtual wheels on the ball and moment arms are \\[F_{Wx} =\\frac{\\tau_{x}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 0\\end{bmatrix},F_{Wy}=\\frac{\\tau_{y}}{r_{W}}\\begin{bmatrix}1\\\\ 0\\\\ 0\\end{bmatrix},F_{Wz}=\\frac{\\tau_{z}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 0\\end{bmatrix},F_{Wx}=\\frac{\\tau_{z}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 0\\end{bmatrix},\\] \\[r_{Wx} =\\begin{bmatrix}0\\\\ 0\\\\ r_{A}\\end{bmatrix},r_{Wy}=\\begin{bmatrix}0\\\\ 0\\\\ r_{A}\\end{bmatrix},r_{Wz}=\\begin{bmatrix}r_{A}s\\alpha\\\\ 0\\\\ 0\\end{bmatrix}.\\] The torques are \\(T_{i}=r_{W\\bar{\\textsc{ii}}}\\times F_{W\\bar{\\textsc{ii}}}\\quad(i=x,\\ y,\\ z)\\), and the sum of the torques is \\[T_{VW} =T_{x}+T_{y}+T_{z}\\] \\[=\\frac{r_{A}\\tau_{x}}{r_{W}}\\begin{bmatrix}-1\\\\ 0\\\\ 0\\end{bmatrix}+\\frac{r_{A}\\tau_{y}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 0\\end{bmatrix}+\\frac{r_{A}s\\alpha\\tau_{z}}{r_{W}}\\begin{bmatrix}0\\\\ 1\\\\ 1\\end{bmatrix}. \\tag{23}\\] Thus, from the torque conservation, (22) is equal to (23), \\[T_{OW}=T_{VW}. \\tag{24}\\] Rearranging (24), the relationship between the virtual torques and the real ones can be represented in matrix form as follows. \\[\\begin{bmatrix}\\tau_{1}\\\\ \\tau_{2}\\\\ \\tau_{3}\\end{bmatrix}=\\begin{bmatrix}\\frac{2}{3c\\alpha}&0&\\frac{1}{3}\\\\ -\\frac{1}{3c\\alpha}&-\\frac{\\sqrt{3}}{2\\alpha}&\\frac{1}{3}\\\\ -\\frac{1}{3c\\alpha}&\\frac{\\sqrt{3}}{3c\\alpha}&\\frac{1}{3}\\\\ \\end{bmatrix}\\begin{bmatrix}\\tau_{x}\\\\ \\tau_{y}\\\\ \\tau_{z}\\end{bmatrix}. 
\\tag{25}\\] ## IV Balancing Control Scheme A ball-balancing robot (BBR) must inherently maintain its posture upright, necessitating balancing control. Since the equilibrium point of the BBR is the upright position, the equations of motion in the longitudinal plane, (16), can be linearized at that point. If the state vector is defined as \\(X_{y}=\\left[\\phi_{y}\\ \\theta_{y}\\ \\dot{\\phi}_{y}\\ \\dot{\\phi}_{y}\\right]^{T}\\), the equilibrium point is \\(X_{y}=0\\). Linearizing (16) is \\[\\begin{bmatrix}\\dot{\\phi}_{y}\\\\ \\dot{\\phi}_{y}\\end{bmatrix}=M_{L}^{-1}\\begin{cases}-G_{L}+\\begin{bmatrix}r_{A }/r_{W}\\\\ -r_{A}/r_{W}\\end{bmatrix}r_{y}\\end{cases}. \\tag{26}\\] Assume that \\[M_{L}^{-1}=\\begin{bmatrix}\\mu_{1}&\\mu_{2}\\\\ \\mu_{3}&\\mu_{4}\\end{bmatrix}, \\tag{27}\\] Figure 9. (a) The diagram of the torques, tangential forces and moment arms of the three omnirwheels. (b) The diagram of the torques, tangential forces and moment arms of the virtual wheels in the longitudinal and horizontal planes, respectively. where \\(\\mu_{i}(i=1,2,3,4)\\) is each element of the inverse matrix of the linearized mass matrix, (26) is expressed in the state-space form as follows. \\[\\begin{bmatrix}\\dot{\\phi}_{y}\\\\ \\dot{\\theta}_{y}\\\\ \\dot{\\phi}_{y}\\\\ \\dot{\\phi}_{y}\\\\ \\end{bmatrix}=\\begin{bmatrix}0&0&1&0\\\\ 0&0&0&1\\\\ 0&\\mu_{2}\\left\\{m_{W}(r_{A}+r_{W})-m_{B}L_{B}\\right\\}g&0&0\\\\ 0&\\mu_{4}\\left\\{m_{W}(r_{A}+r_{W})-m_{B}L_{B}\\right\\}g&0&0\\\\ \\end{bmatrix}\\\\ \\times\\begin{bmatrix}\\dot{\\phi}_{y}\\\\ \\dot{\\phi}_{y}\\\\ \\dot{\\phi}_{y}\\\\ \\end{bmatrix}+\\begin{bmatrix}0\\\\ \\frac{r_{A}}{r_{W}}(\\mu_{1}-\\mu_{2})\\\\ \\frac{r_{A}}{r_{W}}(\\mu_{3}-\\mu_{4})\\\\ \\end{bmatrix}\\tau_{y}. \\tag{28}\\] In case of the horizontal plane, define the state vector as \\(X_{z}=\\begin{bmatrix}\\theta_{z}\\,\\dot{\\theta}_{z}\\end{bmatrix}^{T}\\). Equation (21) is expressed in the state-space form as follows. \\[\\begin{bmatrix}\\dot{\\theta}_{z}\\\\ \\dot{\\theta}_{z}\\\\ \\end{bmatrix}=\\begin{bmatrix}0&1\\\\ 0&0\\\\ \\end{bmatrix}\\begin{bmatrix}\\dot{\\theta}_{z}\\\\ \\dot{\\theta}_{z}\\\\ \\end{bmatrix}+\\begin{bmatrix}0\\\\ C_{z}\\\\ \\end{bmatrix}\\tau_{z} \\tag{29}\\] For balancing control, the compensator shown in Fig. 10 is applied. The compensator consists of a closed-loop observer and full-state feedback, and it works to keep the states at zero. The matrix \\(\\hat{A}\\) and vector \\(\\hat{B}\\) of the closed-loop observer are determined from (28) and (29). Also, the matrix \\(\\hat{C}\\) is the identity matrix. The observer receives two inputs: wheel torques and measured states. It adjusts the model states through the state errors to reduce the modeling error. To measure the states of the BBR, including the angles and angular velocity of the body posture and those of the ball, the IMU and encoder data are used. Specifically, the angles of the body posture, except for the yaw angle, are estimated through a Kalman Filter using IMU data because the IMU is not integrated with a magnetometer. ## V Experimental results An experiment is conducted to verify whether the developed ball-balancing robot (BBR) can maintain an upright position using the applied balancing controller. In Fig. 11, the balancing experiment is shown, illustrating the robot's ability to maintain equilibrium. The blue dashed line serves as a reference for the upright position. The robot moved backward slightly to maintain its balance from Fig. 11. 
(1) to (5), but it started to fall from (6). The orange dashed line of the (6), compared to the reference line, shows the robot is tilted to Figure 11: The snapshots of the balancing experiment. Figure 10: A schematic of the compensator for balancing control. can effectively balance, it faces challenges such as vibrations caused by the omniwheels passing over the grooves of a basketball. To address this, future iterations should consider designing a smoother ball and incorporating additional controllers to eliminate disturbances. Additionally, a LiDAR (Light Detection and Ranging) will be attached on top of the BBR for autonomous movements, and speakers and LEDs will be installed on the sides for entertainment. Despite these challenges, the innovative design and control strategies developed in this study lay a solid foundation for the future development of more stable and efficient ball-balancing robots. These advancements will enhance the robot's applicability in dynamic environments, making it a valuable tool for various service and entertainment applications. ## References * [1] T. B. Lauwers, G. A. Kantor, and R. L. Hollis, \"A dynamically stable single-wheeled mobile robot with intensive mouse-ball drive,\" in _Proc. IEEE Int. Conf. Robot. Autom. (ICRA)_, May 2006, pp. 2884-2889. * [2] U. Nagarajan, G. Kantor, and R. Hollis, \"The ballbot: An omnidirectional balancing mobile robot,\" _Int. J. Robot. Res._, vol. 33, no. 6, pp. 917-930, May 2014. * [3] U. Nagarajan and R. Hollis, \"Shape space planner for shape-accelerated balancing mobile robots,\" _Int. J. Robot. Res._, vol. 32, no. 11, pp. 1323-1341, Sep. 2013. * [4] G. Seyfarth, A. Bhatia, O. Sassnick, M. Shomin, M. Kumagai, and R. Hollis, \"Initial results for a ballbot driven with a spherical induction motor,\" in _Proc. IEEE Int. Conf. Robot. Autom. (ICRA)_, May 2016, pp. 3771-3776. * [5] M. Kumagai and T. Ochiai, \"Development of a robot balancing on a ball,\" in _Proc. Int. Conf. Control. Autom. Syst._, Oct. 2008, pp. 433-438. * [6] M. Kumagai and T. Ochiai, \"Development of a robot balanced on a ball--Application of passive motion to transport,\" in _Proc. IEEE Int. Conf. Robot. Autom._, May 2009, pp. 4106-4111. * [7] L. Hertig, D. Schindler, M. Bloesch, C. D. Remy, and R. Siegwart, \"Unified state estimation for a ballbot,\" in _Proc. IEEE Int. Conf. Robot. Autom._, May 2013, pp. 2471-2476. * [8] D. Pham, H. Kim, J. Kim, and S. Lee, \"Balancing and transferring control of a ball seysup using a double-loop approach,\" _IEEE Control Syst._, vol. 38, no. 2, pp. 15-37, Jul. 2018. * [9] D. B. Pham, X. Q. Duong, D. S. Nguyen, M. C. Hoang, D. Phan, E. Asadi, and H. Khayayram, \"Balancing and tracking control of ballbot mobile robots using a novel synchronization controller along with online system identification,\" _IEEE Trans. Ind. Electron._, vol. 70, no. 1, pp. 657-668, Jan. 2023. * [10] J. Jo and Y. Oh, \"Contact force based balancing and tracking control of a ballbot using projected task space dynamics with inequality constraints,\" in _Proc. 17th Int. Conf. Ubiquitous Robots (UR)_, Jun. 2020, pp. 118-123. * [11] S.-M. Lee and B. S. Park, \"Robust control for trajectory tracking and balancing of a ballbot,\" _IEEE Access_, vol. 8, pp. 159324-159330, 2020. * [12] C. Xiao, M. Mansouri, D. Lam, J. Ramos, and E. T. Hsiao-Wecksler, \"Design and control of a ballbot driven with high agility, minimal footprint, and high payload,\" in _Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS)_, vol. 32, Oct. 2023, pp. 376-383. * [13] H. 
Jang, C. Hyun, and B. S. Park, \"Virtual angle-based adaptive control for trajectory tracking and balancing of ball-balancing robots without velocity measurements,\" _Int. J. Adapt. Control Signal Process._, vol. 37, no. 8, pp. 2204-2215, Aug. 2023. * [14] P. Fankhauser and C. Gwerder, \"Modeling and control of a ballbot,\" Bachelor's thesis, Dept. Mech. Process Eng., ETH Zurich, Zurich, Switzerland, 2010. * [15] I. Lal, M. Nicoura, A. Codrean, and L. Busoniu, \"Hardware and control design of a ball balancing robot,\" in _Proc. IEEE 22nd Int. Symp. Design Diag. Electron. Circuits Syst. (DDECS)_, Apr. 2019, pp. 1-6. * [16] A. N. Inal, O. Morgul, and U. Saranli, \"A 3D dynamic model of a spherical wheeled self-balancing robot,\" in _Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst._, Oct. 2012, pp. 5381-5386. * [17] T. Endo and Y. Nakamura, \"An omnidirectional vehicle on a basketball,\" in _Proc. 12th Int. Conf. Adv. Robot. (ICAR)_, 2005, pp. 573-578. \\begin{tabular}{c c} & SANGSIN PARK received the B.S. degree in mechanical engineering from Inha University, Incheon, South Korea, in 2005, and the M.S. and Ph.D. degrees in mechanical engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2007 and 2017, respectively. He was a Postdoctoral Researcher with the Department of Mechanical Engineering, University of Nevada, Las Vegas, from 2017 to 2019, a Principal Researcher with Rainbow Robotics, Daejeon, in 2019, and the Team Leader of Gentle Monster, Seoul, South Korea. Since 2022, he has been an Assistant Professor with the Mechanical Engineering Department, Korea National University of Transportation (KNUT), Chungju-si, South Korea. His research interests include robot mechanism design, robot system integration, real-world application, and legged robots. \\\\ \\end{tabular}
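As a brief appendix-style note, the reduction ratio of the customized stepped planetary gear derived in (1)-(4) can be checked numerically from the quoted teeth counts (sun 7, ring 30, first-stage planet 15, second-stage planet 8). The sketch below is illustrative only and is not part of the original article.

```python
# Numerical check of the stepped planetary gear ratio in (4), ring gear fixed:
# omega_c / omega_s = (N_p2 * N_s) / (N_r * N_p1 + N_p2 * N_s)
N_s, N_r, N_p1, N_p2 = 7, 30, 15, 8      # teeth counts quoted in Section II

carrier_per_sun = (N_p2 * N_s) / (N_r * N_p1 + N_p2 * N_s)
reduction = 1.0 / carrier_per_sun         # sun revolutions per carrier revolution

print(f"carrier/sun speed ratio: {carrier_per_sun:.5f}")   # ~0.11067
print(f"reduction ratio: {reduction:.4f}:1")                # ~9.0357:1, as stated
```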
This paper presents the design, implementation, and control of a ball-balancing robot developed for interactive applications in service and entertainment environments. The robot features a unique thin and narrow body design to facilitate future functional expansions and a customized stepped planetary gear for the actuators, improving torque and precision. A compensator incorporating an observer and full-state feedback is developed and experimentally validated, ensuring accurate and stable control of the robot. Future work will focus on enhancing the stability and control mechanisms to improve the robot's performance in various applications. Ball-balancing robot, customized stepped planetary gear, compensator. + Footnote †: The associate editor coordinating the review of this manuscript and approving it for publication was Agustin Leobardo Herrera-May\\({}^{\\copyright}\\).
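The compensator summarized above couples a closed-loop observer with full-state feedback (Section IV). Below is a minimal, generic sketch of that structure applied to the horizontal-plane model \\(\\ddot{\\theta}_{z}=C_{z}\\tau_{z}\\) of (21); the plant gain, pole locations, time step, and initial state are illustrative assumptions rather than values taken from this article.

```python
# Generic observer + full-state feedback loop (compensator sketch).
# Plant: theta_z_ddot = C_z * tau_z, states x = [theta_z, theta_z_dot].
# All numerical values below are assumed for illustration only.
import numpy as np
from scipy.signal import place_poles

C_z = -5.0                                   # assumed plant gain
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [C_z]])
C = np.eye(2)                                # full state measured (C_hat = I)

K = place_poles(A, B, [-3.0, -4.0]).gain_matrix            # state-feedback gain
L = place_poles(A.T, C.T, [-12.0, -15.0]).gain_matrix.T    # observer gain (duality)

dt, steps = 0.001, 3000
x = np.array([[0.2], [0.0]])                 # true state: 0.2 rad initial offset
x_hat = np.zeros((2, 1))                     # observer starts from zero

for _ in range(steps):
    u = -K @ x_hat                           # torque command from the estimated state
    y = C @ x                                # measured states (IMU/encoder data in the paper)
    x = x + dt * (A @ x + B @ u)             # plant propagation (Euler step)
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))   # observer correction

print("final |theta_z| [rad]:", float(abs(x[0, 0])))       # driven toward zero
```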
# A GNSS/Barometric Altimeter Tightly Coupled Integration for Three-Dimensional Semi-Indoor Mapping With Android Smartphones

Jeonghyeon Yun\\({}^{\\copyright}\\) and Byungwoon Park\\({}^{\\copyright}\\) Manuscript received 1 September 2023; revised 11 January 2024; accepted 8 February 2024. Date of publication 13 February 2024; date of current version 22 February 2024. This work was supported in part by the Future Space Navigation and Satellite Research Center through the National Research Foundation, Ministry of Science and ICT, the Republic of Korea under Grant 2022MIA3C207440; and in part by the National Research and Development Project Development of Ground-Based Centimeter-Level Maritime Precise PNT Technologies through the Ministry of Oceans and Fisheries under Grant 1525012253. _Corresponding author: Byungwoon Park._ The authors are with the Department of Aerospace Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea (e-mail: [email protected], [email protected]). Digital Object Identifier 10.1109/LGRS.2024.365610

## I Introduction

Three-dimensional mapping techniques within a global navigation satellite system (GNSS)-challenged environment offer valuable data for a range of applications [1], including urban navigation, seamless integration of indoor and outdoor positioning [2], as well as indoor disaster search and rescue (SAR). While outdoor positions mainly rely on GNSS, indoor localization techniques require the utilization of feature matching through vision-based or LiDAR-based methods [3]. Despite various advantages of the feature matching techniques, indoor mapping is limited to providing relative 2-D positioning rather than furnishing the direct latitude and longitude coordinates of the current position. To implement the seamless integration of indoor and outdoor positioning, it is essential to develop an appropriate algorithm that facilitates the transition from absolute coordinates to relative ones. In addition, this algorithm should effectively prevent the GNSS results from becoming distorted at the boundary. Computing accurate absolute coordinates near a window that connects indoor and outdoor areas, even in a challenging environment with a small number of visible satellites, would pave the way for seamlessly assigning 3-D coordinates to each 2-D relative indoor location by matching indoor and outdoor data. Notably, the integration of the algorithm into smartphones will empower efficient indoor SAR operations. GNSS positioning methods typically require a minimum of four or more satellites: three for determining the user's 3-D location and the fourth to correct the clock bias errors. Even though a window serves as the only architectural connection between indoor and outdoor spaces, affording relatively unobstructed access to GNSS signals, the reception is allowed solely in the window's direction. The consequence of this obstruction is an increased satellite dilution of precision (DOP), which directly degrades the accuracy of user positioning and leads to extensive error propagation along the window orientation. Furthermore, the limited visibility in a single direction frequently fails to meet the minimum number of satellites required for positioning [4, 5, 6]. Consequently, relying solely on GNSS satellite signals in these circumstances presents a significant challenge, demanding supplementary measurements or techniques to enhance the positioning process.
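To make the geometry argument above concrete, DOP can be computed from the unit line-of-sight vectors of the visible satellites. The short sketch below contrasts a well-spread constellation with one confined to a narrow northeast sector, as seen through a window; the satellite azimuth/elevation values are illustrative assumptions, not measurements from this letter.

```python
# Illustrative DOP comparison: well-distributed satellites vs. satellites
# confined to a narrow azimuth sector (e.g., seen through a window).
# Geometry matrix rows are [unit LOS east, north, up, 1]; DOP values come
# from the diagonal of (G^T G)^-1.  All satellite directions are assumed.
import numpy as np

def dop(az_el_deg):
    az, el = np.radians(np.asarray(az_el_deg)).T
    los = np.column_stack([np.cos(el) * np.sin(az),   # east component
                           np.cos(el) * np.cos(az),   # north component
                           np.sin(el),                # up component
                           np.ones_like(az)])         # clock column
    q = np.linalg.inv(los.T @ los)
    hdop = np.sqrt(q[0, 0] + q[1, 1])
    vdop = np.sqrt(q[2, 2])
    return hdop, vdop

open_sky = [(0, 60), (90, 40), (180, 50), (270, 45), (45, 20), (225, 25)]
window   = [(40, 30), (55, 45), (60, 20), (70, 35), (50, 60), (65, 25)]  # NE sector only

print("open sky HDOP/VDOP: %.2f / %.2f" % dop(open_sky))
print("window   HDOP/VDOP: %.2f / %.2f" % dop(window))   # much larger values
```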
Additional sensors integrated into smartphones, such as an inertial navigation system (INS), magnetometer, and barometer, can offer a potent solution for complementing the low visibility of GNSS indoors and enabling the integration of innovative technologies. Unlike other sensors that measure directions or spatiotemporally relative observables, the barometer has a unique capability of providing a measurement equivalent to range in 3-D space: altitude. By detecting variations in atmospheric pressure, the barometer enables altitude calculation independently of external infrastructure. Even though the barometer holds the potential to address the lack of visible GNSS satellites, the current approach primarily involves utilizing altitude variations obtained from the barometer in a similar way to how the INS is integrated with GNSS. The prevailing approach involves leveraging altitudinal variations derived from barometers to partially bound the divergence of the INS for both loosely [7, 8] and tightly coupled (TC) [9, 10] integration of GNSS and INS. However, these techniques based on integration with the prior positions may result in significant deviation if the previous GNSS position was corrupted. Moreover, since this technique relies exclusively on GNSS for initial position determination, it fails to provide any absolute position if fewer than four GNSS satellites are available. While the previous studies mainly employed the barometer to prevent damage to the positioning engine rather than for direct positioning purposes, Gaglione et al. [11] introduced a system for GNSS/Barometer loosely coupled (LC) integration, enhancing vertical accuracy by combining GNSS-derived altitude values with those from the barometer. Nevertheless, these LC positioning methods are capable of operating only when 3-D GNSS positions are attainable. Rao et al. [12] introduced a method of obtaining altitude based on information gathered from nearby peers. However, this approach has the disadvantage of increasing errors in cases of biased peer placement relative to the user on different floors within a building. Nunes et al. [13] utilized a calibrated altimeter after completely removing the inherent bias, prior to proceeding with the process of calculating latitude and longitude. However, as the altitude derived from the altimeter corresponds to an orthometric height above sea level, not an ellipsoidal height, a bias equal to the geoid height would inevitably impact the accuracy of latitude and longitude estimation, particularly in the absence of a priori information. Consequently, potential horizontal errors might arise due to the estimation of biased altitude data. This letter proposes an approach that directly incorporates barometer measurements with GNSS observations, introducing a GNSS/barometer TC integration algorithm. Unlike existing methodologies, this approach incorporates barometer measurements not only as a measurement value but also as a state within the GNSS observation equation. This enables the direct calculation of a 3-D position in the Earth-centered Earth-fixed (ECEF) frame using only three satellites, even when the initial position is unknown. Furthermore, even when the initial position is initialized at the Earth's center, the final position can be iteratively calculated by substituting the estimated position's geoid height, enhancing accuracy.

## II Barometer Height Information

There is a natural correlation between an increase in altitude and a decrease in air pressure.
This inherent relationship forms the groundwork for calculating altitude by analyzing the noticeable pressure difference between sea level and the user's current height. One of the representative formulas for obtaining altitude \\(H_{r}\\) from a barometer is \\[H_{r}=\\frac{10^{\\frac{\\log_{10}\\left(\\frac{\\mathrm{PF}}{\\mathrm{PS}}\\right)}{5.2558797}}-1}{-6.8755856\\times 10^{-6}}\\div 3.281 \\tag{1}\\] where PF and PS are the pressures at the user's height and at sea level, respectively [14]. The altitude calculation on smartphones typically employs a fixed standard PS of 1013.25 hPa, leading to a significant bias in the absolute altitude computed with (1) [14]. Instead of relying on the fixed standard PS, we adjusted the altitude using real-time PS data measured at the nearby Seoul Meteorological Station [15], located 9.76 km away from Sejong University. This correction was based on the relative pressure difference observed at the measurement site. Since the height calculated by the barometer is the orthometric height (\\(H_{r}\\)), the introduction of the geoid height (\\(N_{r}\\)), referencing the height from sea level, becomes essential to align with the ellipsoidal height (\\(h_{r}\\)) in the GNSS observation equation, as depicted by \\[h_{r}=H_{r}+N_{r}. \\tag{2}\\] For the Seoul area, a geoid height of 23.294 m was applied. Fig. 1 presents a comparison between the altitude obtained from GNSS and the barometer (BARO) within a semi-indoor environment. As evident from the bottom of Fig. 1, there was frequent visibility of more than ten satellites, which was still far fewer than that seen in open sky. Nonetheless, the sky plot in Fig. 2 demonstrates that all the observed satellites were concentrated in the northeast direction. As summarized in Table I, the skewed satellite geometry resulted in high DOP values, causing the vertical root mean square error (RMSE) to increase up to 39.16 m, approximately ten times higher than the standard value in open sky environments [16, 17, 18].

Fig. 1: Comparison between GNSS and barometer-derived altitudes (top) and number of visible satellites (bottom) within a semi-indoor test environment. Fig. 2: Snapshot of GNSS satellite sky plot in open sky (left) and semi-indoor (right) with blocked signals (indicated by red-shaded area).

On the other hand, the barometer consistently provides stable altitude measurements regardless of indoor or outdoor conditions, featuring a modeling error of 3.0 m and noise of 0.3 m. Based on these statistics, when combining GNSS and barometric measurements, the weights are set so that the ratio of the sigma values \\(\\sigma_{G}\\) to \\(\\sigma_{B}\\) is 10:1, as depicted in the following: \\[W=\\left[\\begin{array}{ccccc}\\left(\\sigma_{\\rho_{r}^{1}}\\right)^{2}&&&&\\\\ &\\left(\\sigma_{\\rho_{r}^{2}}\\right)^{2}&&&\\\\ &&\\ddots&&\\\\ &&&\\left(\\sigma_{\\rho_{r}^{n}}\\right)^{2}&\\\\ &&&&\\left(\\sigma_{h_{r}}\\right)^{2}\\end{array}\\right]. \\tag{3}\\]

## III GNSS/Barometer TC Integration

Traditional GNSS positioning methods require at least four visible satellites to calculate a 3-D receiver position (\\(X_{r}\\), \\(Y_{r}\\), \\(Z_{r}\\)) and clock error (\\(B_{r}\\)) through triangulation [19, 20, 21, 22].
The variable \\(\\delta\\rho_{r}^{i}\\) denotes the pseudorange (\\(\\rho\\)) residual for the \\(i\\)th satellite, while \\(\\tilde{e}\\) is the line-of-sight (LOS) vector between the receiver and satellites, which is given by \\[\\left[\\begin{array}{c}\\delta\\rho_{r}^{1}\\\\ \\delta\\rho_{r}^{2}\\\\ \\vdots\\\\ \\delta\\rho_{r}^{n}\\end{array}\\right]=\\left[\\begin{array}{cccc}\\tilde{e}_{x }^{1}&\\tilde{e}_{y}^{1}&\\tilde{e}_{z}^{1}&-1\\\\ \\tilde{e}_{x}^{2}&\\tilde{e}_{y}^{2}&\\tilde{e}_{z}^{2}&-1\\\\ \\vdots&\\vdots&\\vdots&\\vdots\\\\ \\tilde{e}_{x}^{n}&\\tilde{e}_{x}^{n}&\\tilde{e}_{x}^{n}&-1\\end{array}\\right] \\left[\\begin{array}{c}X_{r}\\\\ Y_{r}\\\\ Z_{r}\\\\ B_{r}\\end{array}\\right]. \\tag{4}\\] Here, the pseudorange residual \\(\\delta\\rho_{r}^{i}\\) is calculated as \\(\\delta\\rho_{r}^{i}=d_{r}^{i}+B_{r}-\\rho_{r}^{i}\\), where \\(d\\) signifies a distance computed using a temporally updated position for each iterative step. GNSS signals can be blocked inside a building, leading to reduced accuracy or even the potential for unavailability of positioning with only three or fewer signals. The lower plot in Fig. 1 shows the variation in the number of satellites over time. These variations demonstrate a notably erratic pattern, frequently dipping below the three-satellite threshold. One way to address this issue is to add altitude information obtained from a barometer, which measures atmospheric pressure [23], to the existing three GNSS measurements. This method enables aligning the number of measurements to the solution state count. However, integrating these two different types of measurements presents a complexity because they are measured in different domains: GNSS observables in range and barometric altitude in position. For this reason, previous studies have predominantly employed an LC approach to enhance the vertical accuracy by integrating the barometric altitude within the position domain. This enhancement is unfeasible when restricted to only three GNSS measurements [14]. This letter proposes a GNSS/BARO TC integration method to calculate a 3-D position using GNSS measurements and barometric altitude. The 3-D position (\\(X_{r}\\), \\(Y_{r}\\), \\(Z_{r}\\)) obtained from GNSS measurements exists in the ECEF coordinate, while the altitude is a singular value in the latitude-longitude-height (LLH) coordinate system. Consequently, transformations from LLH to ECEF [14] are required to the altitude (\\(h_{r}\\)) as described in the following: \\[h_{r}=\\cos\\phi\\cos\\lambda\\cdot X_{r}+\\cos\\phi\\sin\\lambda\\cdot Y_{r}+\\sin\\phi \\cdot Z_{r} \\tag{5}\\] where \\(\\phi\\) and \\(\\lambda\\) are latitude and longitude, respectively. By converting altitude into a 3-D position and subsequently substituting it into (4), a novel TC integration equation for GNSS/BARO fusion can be derived for a greater than or equal to 3, as presented in the following: \\[\\left[\\begin{array}{c}\\delta\\rho_{r}^{1}\\\\ \\delta\\rho_{r}^{2}\\\\ \\vdots\\\\ \\delta\\rho_{r}^{n}\\\\ h_{r}\\end{array}\\right]=\\left[\\begin{array}{cccc}\\tilde{e}_{x}^{1}&\\tilde{e }_{y}^{1}&\\tilde{e}_{z}^{1}&-1\\\\ \\tilde{e}_{x}^{2}&\\tilde{e}_{y}^{2}&\\tilde{e}_{z}^{2}&-1\\\\ \\vdots&\\vdots&\\vdots&\\vdots\\\\ \\tilde{e}_{x}^{n}&\\tilde{e}_{y}^{n}&\\tilde{e}_{z}^{n}&-1\\\\ \\cos\\phi\\cos\\lambda&\\cos\\phi\\sin\\lambda&\\sin\\phi&0\\end{array}\\right]\\left[ \\begin{array}{c}X_{r}\\\\ Y_{r}\\\\ Z_{r}\\\\ B_{r}\\end{array}\\right]. 
\\tag{6}\\] Given that (6) is a generalized formula, the accurate altitude estimation, adjusted by relative pressure, enhances the overall accuracy of TC positioning in 3-D for the overdetermined (6). Furthermore, position availability is improved as the position calculation is possible even with only three visible satellites (i.e., \\(n=3\\)), whereas conventional GNSS positioning techniques, such as LS and LC, are not available. Combining (3) and (6), the GNSS/BARO TC integration can determine the position through a weighted least-square process as shown in the following: \\[X=\\left(H^{T}\\cdot W^{-1}\\cdot H\\right)^{-1}\\cdot H^{T}\\cdot W^{-1}\\cdot Z. \\tag{7}\\] To validate the feasibility of (6), an iterative positioning process was executed for the GNSS/BARO TC integration within scenarios involving only three satellites available. The initial values for (\\(X_{r}\\), \\(Y_{r}\\), \\(Z_{r}\\)) were set to nearly (0, 0, 0), and then, iterative computations of positions were conducted for each step of the iteration. As shown in Fig. 3, the GNSS/BARO TC integrated position consistently converged toward the target coordinates (\\(-\\)3052145.3420, 4039491.9398, 3866241.0827) in just six iterations. ## IV Semi-Indoor GNSS/BARO TC Test and Analysis An experiment was carried out in the Navigation System Laboratory, located on the tenth floor of Chungmu-gwan at Sejong University. The objective of the experiment was to simulate a scenario using the location determined by the Fig. 3: GNSS/BARO TC positioning results as per the iteration steps. GNSS signals received semi-indoor, as previously described. As shown in Fig. 4, Android smartphones were installed near the window. GnssLogger application was used for logging raw GNSS and barometer measurements [24] for 30 min from 2023/05/09 07:20 to 07:50 UTC. The satellite geometry, as previously illustrated in Fig. 2, represents a challenging environment where only satellites in the northeast direction are observable due to the building structure. To evaluate the performances of GNSS/BARO TC integration, it was compared against conventional positioning methods and the National Marine Electronics Association (NMEA) location [25], directly provided by smartphone chipsets. As depicted in Fig. 5, the error distributions of all positioning techniques align along the northeast axis due to the skewed satellite geometry. Positioning solely relying on GNSS satellites yielded LS GNSS [26] results with horizontal and vertical rms errors of 33.75 and 70.18 m, respectively, posing significant challenges to accurate semi-indoor localization. In the case of the GNSS/BARO LC technique, the incorporation of barometer altitude and Kalman filter gradually reduced the vertical error; however, the horizontal error still stood at 38 m for the 95th percentile. Beyond the error statistics, a more concerning aspect is the limited availability of these methods [27]. Both methods were unable to solve the position when three satellites were visible, spanning a 37-s interval within the time period of 07:27:32 to 07:40:42, resulting in a 4.7% unavailability. The NMEA positioning result significantly diminished the error variance observed in the GNSS LS and GNSS/BARO LC outcomes, indicating potential fusion with other smartphone sensors such as INS, followed by the application of static mode filtering techniques. Despite the reduced error variance, the horizontal error of the NMEA positioning showed a significant bias, with a mean value of 12.92 m, as summarized in Fig. 
6 and Table II. The NMEA location was erroneously marked outside the building on the map, even though the smartphone was located indoor. The vertical mean error was 22.45 m, presenting a challenge in determining the exact floor level of the person, as it indicated a position two levels above the true floor. In contrast, the GNSS/BARO TC with Kalman filter results exhibited a mean error of 4.74 m horizontally and 2.43 m vertically, which improved the NMEA results by 63.31% and 89.18%, respectively. As mentioned previously, although the errors were distributed in the northeast direction toward the window, it was still possible to distinguish which side of the building the device was located on. In addition, the position was continuously determined even when an insufficient number of satellites were visible. The most impressive contribution of this integration was that GNSS/BARO TC positioning correctly identified the person's location on the tenth floor, while the NMEA positioning displayed the location on the 12th floor. Fig. 4: Semi-indoor signal reception environment at a window side on the tenth floor of Sejong University Chung-mu building. Fig. 5: Comparison between GNSS/barometer TC results and NMEA using Galaxy S23 (left: horizontal result and right: calculated building floor). Fig. 6: Position error comparison of GNSS/barometer TC to NMEA positioning using Galaxy S23 (left: horizontal error and right: vertical error). ## V Conclusion This letter introduces an innovative method aimed at enhancing positioning accuracy and availability by integrating barometer altitude measurements into GNSS raw measurements in smartphones. In contrast to conventional integrating positioning methods, the proposed approach supplements the GNSS observation matrix with barometer altitude measurements, enabling positioning even in scenarios where the number of visible GNSS satellites is insufficient. This approach is expected to serve as a vital bridge in achieving seamless mapping between indoor and outdoor environments. The Federal Communications Commission (FCC) and the Public Safety and Homeland Security Bureau (PSHSB) mandate that 80% of all wireless 911 calls and emergency request for assistance (RFA) must meet a tolerance of 50 m horizontally and \\(\\pm\\)3 m vertically as of 2021. In addition, FCC has announced plans to significantly improve indoor positioning horizontal/vertical accuracy by 2026 through the development of \\(z\\)-axis accuracy technology [28]. The GNSS/BARO TC integration method presented in this study achieved the required accuracy level mandated by the FCC, with a horizontal rms of 5.31 m and a vertical rms of 2.52 m. In contrast, the conventional approach yielded a location with a vertical error of 22.46 m surpassing the requirement of 3 m. The proposed technique has the potential to enhance the availability and accuracy of wireless 911 calls and indoor SAR, aligning with the requirements of indoor positioning. ## References * [1] C. Wen et al., \"Toward efficient 3-D colored mapping in GPS-G GNSS-denied environments,\" _IEEE Geosci. Remote Sens. Lett._, vol. 17, no. 1, pp. 147-151, Jan. 2020, doi: 10.1109/LGRS.2019.2916844. * [2] N. Li et al., \"Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LiDAR system,\" _Remote Sens._, vol. 12, no. 19, p. 3271, Oct. 2020, doi: 10.3390rs1293271. * [3] S. Zou et al., \"Edge-preserving stereo matching using LiDAR points and image line features,\" _IEEE Geosci. Remote Sens. Lett._, vol. 
20, pp. 1-5, 2023, doi: 10.1109/LGRS.2023.3239030. * [4] Y. Lee, Y. Hwang, J. Y. Ahn, J. Seo, and B. Park, \"Seamless accurate positioning in deep urban area based on mode switching between DGNSS and multipath mitigation positioning,\" _IEEE Trans. Intell. Transg. Syst._, vol. 24, no. 6, pp. 5856-5870, Jun. 2023, doi: 10.1109/TITS.2023.3256040. * [5] Y. Lee, P. Wang, and B. Park, \"Nonlinear regression-based GNSS multipath dynamic map construction and its application in deep urban areas,\" _IEEE Trans. Intell. Transg. Syst._, vol. 24, no. 5, pp. 5082-5093, May 2023, doi: 10.1109/TITS.2023.3246493. * [6] Y. Lee and B. Park, \"Nonlinear regression-based GNSS multipath modelling in deep urban area,\" _Mathematics_, vol. 10, no. 3, p. 412, Jan. 2022, doi: 10.3390/math1030412. * [7] J. Park, D. Lee, and C. Park, \"Implementation of vehicle navigation system using GNSS, INS, odometer and barometer,\" _J. Positioning, Navigat._, _Timing_, vol. 4, no. 3, pp. 141-150, Sep. 2015, doi: 10.1103/jspm.2015.43.141. * [8] V. Sokolovic, G. Dikic, and R. Stancic, \"Integration of INS, GPS, magnetometer and barometer for improving accuracy navigation of the vehicle,\" _Deferica Sci. J._, vol. 63, no. 5, pp. 451-455, Oct. 2013, doi: 10.1429/36.63.4534. * [9] Y. C. Tien, Y. L. Chen, and K. W. Chiang, \"Adaptive strategy-based tightly-coupled INS/GNSS integration system aided by odometer and barometer,\" _Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci._, vol. 42, pp. 881-888, Jun. 2019, doi: 10.5194/isprs:archives-xli-2-w13-881-2019. * [10] K.-W. Chiang, H.-W. Chang, Y.-H. Li, G.-J. Tsai, and P.-C. Hsu, \"Assessment for INS/GNSS/odometer/barometer integration in loosely-coupled and tightly-coupled scheme in a GNSS-degraded environment,\" _IEEE Sensors J._, vol. 20, no. 6, pp. 3057-3069, Mar. 2020, doi: 10.1109/JSEN.2019.2954532. * [11] S. Gaglione et al., \"GPS/barometer augmented navigation system: Integration and integrity monitoring,\" in _Proc. IEEE Metrology for Aerosp._, Jun. 2015, pp. 166-171, doi: 10.1109/METROAEROSPACE.2015.7180647. * [12] M. Rao, L. Lo Presti, and J. Samson, \"Iterative altitude-aiding algorithm for improved GNSS positioning,\" _IET Radar, Sonar Navigat._, vol. 5, no. 7, p. 788, 2011, doi: 10.1049/iet-rsn.2010.0187. * [13] F. D. Nunes, F. M. G. Sousa, and J. M. N. Leitao, \"Performance analysis of altimeter-aided GNSS receiver for indoor scenarios,\" in _Proc. 7th Conf. Telecommun._, Santa Maria de Faria, Portugal, Aug. 2009, pp. 1-4. * [14] J. Yun and B. Park, \"A technique for finding indoor rescuer using GNSS and barometer sensors on Android smartphones in building disaster situations,\" in _Proc. 34th Int. Tech. Meeting Satellite Division Inst. Navigat._, Oct. 2021, pp. 1961-1980, doi: 10.33012/2021.18007. * [15] Korea Meteorological Administration National Climate Data Center. (May 2023). _The National Climate and Weather Database_. Korea Meteorological Administration National Climate Data Center. [Online]. Available: [https://data.kma.go.kr/data/grnd/selectAosRItmlist.d0?pgmN0=36](https://data.kma.go.kr/data/grnd/selectAosRItmlist.d0?pgmN0=36) * [16] D.-K. Lee, Y. Lee, and B. Park, \"Carrier phase residual modeling and fault monitoring using short-baseline double difference and machine learning,\" _Mathematics_, vol. 11, no. 12, p. 2696, Jun. 2023, doi: 10.3390/math11122696. * [17] C. Lim, B. Park, and Y. Yun, \"L1 SFMC SBAS message for service expansion of multi-constellation GNSS support,\" _IEEE Access_, vol. 11, pp. 81690-81710, 2023, doi: 10.1109/ACCESS.2023.3300580. 
* [18] H. Yoon, H. Seok, C. Lim, and B. Park, \"An online SBAS service to improve drone navigation performance in high-elevation masked areas,\" _Sensors_, vol. 20, no. 11, p. 3047, May 2020, doi: 10.3390/s20113047. * [19] J. Yun, C. Lim, and B. Park, \"Inherent limitations of smartphone GNSS positioning and effective methods to increase the accuracy utilizing dual-frequency measurements,\" _Sensors_, vol. 22, no. 24, p. 9879, Dec. 2022, doi: 10.3390/s22249879. * [20] J. Kim et al., \"Accuracy improvement of DGPS for low-cost single-frequency receiver using modified Flachen korrektur parameter correction,\" _ISPRS Int. J. Geo-Inf._, vol. 6, no. 7, p. 222, Jul. 2017, doi: 10.3390/ijgi607022. * [21] K. Park and J. Seo, \"Single-antenna-based GPS antijamming method exploiting polarization diversity,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 57, no. 2, pp. 919-934, Apr. 2021, doi: 10.1109/TAES.2020.3034025. * [22] J. Lee, S. Pullen, and P. Enge, \"Sigma overwotounding using a position domain method for the local area augmentation of GPS,\" _IEEE Trans. Aerosp. Electron. Syst._, vol. 45, no. 4, pp. 1262-1274, Oct. 2009, doi: 10.1109/TAES.2009.5310297. * [23] S. Kim, J. Yun, and B. Park, \"Methodology of correcting barometer using moving drone and RTK receiver,\" _J. Adv. Navigat. Technol._, vol. 26, no. 2, pp. 63-71, 2022, doi: 10.12673/jant.2022.26.2.63. * [24] G. Fu, M. Khider, and F. van Diggelen, \"Android raw GNSS measurement datasets for precise positioning,\" in _Proc. 33rd Int. Tech. Meeting Satell. Division Inst. Navigat. (ION GNSS+)_, Oct. 2020, pp. 1925-1937, doi: 10.33012/2020.17628. * [25] B. Park, J. Lee, Y. Kim, H. Yun, and C. Kee, \"DGPS enhancement to GPS NMEA output data: DGPS by correction projection to positioning,\" _J. Navigat._, vol. 66, no. 2, pp. 249-264, Mar. 2013, doi: 10.1017/s0373463312000471. * [26] C. Jiang, S. Chen, Y. Chen, J. Shen, D. Liu, and Y. Bo, \"Superfor position estimation based on optimization in GNSS,\" _IEEE Commun. Lett._, vol. 25, no. 2, pp. 479-483, Feb. 2021, doi: 10.1109/LCOMM.2020.3024791. * [27] A. Al-Hourani and I. Guvenc, \"On modeling satellite-to-ground path-loss in urban environments,\" _IEEE Commun. Lett._, vol. 25, no. 3, pp. 696-700, Mar. 2021, doi: 10.1109/LCOMM.2020.3037351. * [28] Federal Communications Commission. (Jan. 16, 2020). _Wireless E911 Location Accuracy Requirements_. [Online]. Available: [https://www.federalregister.gov/documents/2020/01/16/2019-28483/wireless-e911-location-accuracy-requirements](https://www.federalregister.gov/documents/2020/01/16/2019-28483/wireless-e911-location-accuracy-requirements)
Abstract: This letter introduces a GNSS/altimeter tightly coupled (TC) integration algorithm that combines barometric altitude measurements with GNSS observations to determine 3-D positions. By transforming barometric altitude measurements into Earth-centered Earth-fixed (ECEF) coordinates, a novel TC integration equation is derived. This equation enables accurate positioning with only three GNSS satellites and one altitude measurement, overcoming the limitations of conventional GNSS positioning techniques. A semi-indoor experiment with various smartphones was conducted to validate the proposed method, comparing it against conventional positioning techniques and the National Marine Electronics Association (NMEA) location provided by smartphone chipsets. The results demonstrated that the GNSS/BARO TC integration significantly reduced errors and improved accuracy. Notably, the mean errors of 4.74 m horizontally and 2.43 m vertically improved on the NMEA results by 63.31% and 89.18%, respectively, even under challenging indoor conditions. The proposed method met FCC-required accuracy levels, making it suitable for enhancing the accuracy and availability of wireless 911 calls, in line with indoor positioning mandates.

Index Terms: Android smartphone, barometer, disaster relief, enhanced-911, GNSS, integrated positioning.
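As a concrete illustration of the weighted least-squares form in (7), the following minimal Python/NumPy sketch (not taken from the letter) iteratively solves an augmented system with three GNSS pseudoranges plus one barometric altitude constraint. The satellite coordinates, noise-free measurements, measurement weights, spherical-Earth altitude model, and the coarse initial guess a few kilometres from the truth are all illustrative assumptions rather than the authors' actual processing.

```python
import numpy as np

# Illustrative (made-up) satellite ECEF positions in metres.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3]])
R_E = 6371e3  # mean Earth radius used by the simple spherical altitude model

# "True" receiver state used only to simulate the measurements.
p_true = np.array([-3052145.342, 4039491.940, 3866241.083])
clk_true = 30.0                                            # clock bias in metres
rho = np.linalg.norm(sats - p_true, axis=1) + clk_true     # three pseudoranges
h_baro = np.linalg.norm(p_true) - R_E                      # barometric altitude

W_inv = np.diag([1 / 5.0**2] * 3 + [1 / 2.0**2])           # weights: 5 m ranges, 2 m baro

x = np.append(p_true + [5e3, -3e3, 4e3], 0.0)              # coarse initial guess [x, y, z, b]
for it in range(10):
    p, b = x[:3], x[3]
    r = np.linalg.norm(sats - p, axis=1)
    # residual vector Z: three pseudorange residuals plus the altitude constraint
    z = np.append(rho - (r + b), h_baro - (np.linalg.norm(p) - R_E))
    # Jacobian H of the predicted measurements w.r.t. [x, y, z, b]
    H = np.vstack([np.hstack([(p - sats) / r[:, None], np.ones((3, 1))]),
                   np.append(p / np.linalg.norm(p), 0.0)])
    dx = np.linalg.solve(H.T @ W_inv @ H, H.T @ W_inv @ z)  # weighted LS step, cf. (7)
    x += dx
    if np.linalg.norm(dx) < 1e-4:
        break
print(f"converged in {it + 1} iterations to", np.round(x[:3], 3))
```

With three pseudoranges and one altitude row, the linearized system is exactly determined, which mirrors the point that a position remains computable when only three satellites are visible.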
# Landslide Susceptibility Mapping Using Ant Colony Optimization Strategy and Deep Belief Network in Jiuzhaigou Region

Yibing Xiong, Yi Zhou, Futao Wang, Shixin Wang, Jingming Wang, Jianwan Ji, and Zhenqing Wang

Manuscript received June 17, 2021; revised September 2, 2021 and October 2, 2021; accepted October 20, 2021. Date of publication October 27, 2021; date of current version November 10, 2021. This work was supported in part by the National Key R&D Program of China under Grant 2017YFB0504011 and Grant 2019YFC1510202 and in part by the Subproject of Strategic Priority Science and Technology Special Program of Chinese Academy of Sciences under Grant XDA19090123. _(Corresponding authors: Yi Zhou; Futao Wang.)_ Yibing Xiong, Futao Wang, Jingming Wang, Jianwan Ji, and Zhenqing Wang are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Yi Zhou and Shixin Wang are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/ISTARS.2021.3122825

## I Introduction

As one of the most common geological hazards worldwide, landslides cause huge human casualties and economic losses every year. In China especially, landslides are the most widespread and costly geological disasters [1]. According to the statistics yearbook of the Geological Disaster Guidance Center of the Ministry of Natural Resources, there were 7840 geological disasters in China in 2020, of which 4810 were landslides, accounting for 60.4% [2]. Compared with 2019, both the number of dead and missing persons and the direct economic loss caused by geological disasters showed an upward trend [3]. Landslide susceptibility mapping (LSM) refers to assessing the possibility of landslide occurrence in a study area after a comprehensive analysis of its geological and environmental conditions [4]. Relevant research can be traced back to the end of the 1960s, when the United States, France, and other countries combined geographical and environmental characteristics to analyze landslide risk by zoning [5]. The results of LSM can 1) provide application services for organizing and implementing disaster prevention and mitigation measures scientifically and economically; 2) provide scientific and technological support for the formulation of emergency programs in areas threatened by geological hazards; 3) provide a decision-making reference for assessing the suitability of sites for infrastructure and other construction projects and the rationality of their spatial layout; and 4) provide a scientific basis for medium- and long-term construction planning of the region [6, 7, 8, 9]. The LSM model is the pivot that reveals the potentially complex connection between landslide causative factors and samples. Its applicability to specific regional characteristics also determines its predictive ability, so model selection is a crucial part of the LSM process [10].
With the booming development of computer, geographical information science (GIS) and remote sensing (RS) technologies, more and more algorithms, models and methods were successfully used in LSM studies [11]. For instance, Carrara _et al._[12] grouped existing methods into two main categories: deterministic and nondeterministic methods. Among them, the deterministic methods are usually based on physical process of landslide [13]. These models [14, 15, 16] have the advantages of clear physical significance, high prediction accuracy, and timely feedback, but they are often difficult or too costly to obtain for large-scale monitoring data. Nondeterministic methods can be divided into heuristic methods, such as fuzzy logic [17] and analytic hierarchy process [18], which often rely on the empirical knowledge of experts and are subjective and uncertain. The other category is based on data-driven approaches, which can be subdivided into statistical and machine learning (ML) models such as frequency ratio models [19], support vector machines (SVM) [20], random forest (RF) [21], and deep neural networks (DNNs) [22]. Among them, statistical models refer to reveal the relationship between variables in data through mathematical equations; ML models refer to learn from data without relying on rule functions. As for logistic regression and bootstrap models, they borrowed statistical models into ML models to dilute the distinction between the two [23]. However, these models have a shallow structure with only one or zero hidden layers, which have disadvantages such as limited training time, easy to fall into local optimum, and unstable convergence [24]. Deep learning (DL) as a novel ML model provides new ideas and methods for LSM studies. Due to powerful learning and abstraction capabilities, it is extremely effective in solving problems such as classification, regression, and dimensionality reduction [25]. DL is able to learn more complex and advanced hidden features through hierarchical analysis of features [26]. Therefore, this method is beneficial to reflect the complex nonlinear relationship between causative factors and samples in LSM [27]. Deep belief network (DBN), convolutional neural network (CNN), and improved methods as a novel DL algorithm have been successfully used in LSM [28, 29, 30, 31]. Multiple restricted Boltzmann machine (RBM) in DBN had extreme variability, which corresponds to the translational invariance of image. This feature is highly relevant for studying LSM in regions with similar geological backgrounds [32]. There exist several uncertainty elements in LSM, such as landslide inventory, causative factors, and the quality of ML models [33]. However, due to the complex network structure, DL model often requires multitudinous parameter adjustments in different LSM applications. There is no single or optimum DL model for all LSM. It is essential to make corresponding parameter adjustments according to the features of different regions. At present, the selection of DL parameters often uses trial-and-error or grid search methods to experimentally test the parameters [34, 35, 36], which is often random and lacks a theoretical basis. Furthermore, Bui _et al._[37] in 2019 proposed swarm intelligence algorithms and DL neural network for flood susceptibility mapping. Neurons were optimized resulting in better performance than the benchmark model. In 2020, Bui and Shahabi [38] proposed a method of relevance vector machine optimized by the imperialist competitive algorithm. 
Their results show that the proposed algorithm can improve performance by selecting the optimal width of the radial basis function. Similarly, Pham _et al._[39] in 2020 proposed 3-D CNN with moth flame algorithm as the optimizer. Their results show that the moth flame algorithm outperforms AdaGrad optimizer in all statistical measurements. Although there exist a few excellent studies applied to LSM using optimization algorithms, the existing optimization methods applied to LSM have a shortcoming: these algorithms only optimize for a single variable or optimize multiple variables separately, failing to consider the relationship between variables. The goal of existing optimization is to find the maximum or minimum value of the objective function. However, taking the maximum or minimum value of a single variable can easily lead to a local optimal solution. There is little research on a significant acceptable linear combination solution. In other words, although one variable becomes worse, the other variable becomes better than it. In this study, ant colony optimization (ACO), as a heuristic swarm intelligence algorithm, is used to optimize multiple parameters simultaneously of DBN and benchmark algorithm by setting multitype ant. It has the following advantages. 1. This algorithm makes decisions using local heuristic information and artificial pheromone trajectories, which can obtain more combinations of solutions. 2. Pheromone evaporation causes its tracking strength to decrease with time, which helps prevent the algorithm from falling into a local optimum. 3. ACO can perform flexible and robust combination search by setting multiple types of ants [40]. With the expansion of ACO research to the continuous domain, ACO has been applied to ML parameter optimization [41]. This article addresses the problems of the above existing LSM studies and discusses the applicability of ant colony optimization strategy and deep belief network (ACO-DBN) in LSM of Jiuzhaigou earthquake region. Then, landslide causative factors were preferably selected in combination with the geographic environment characteristics of Jiuzhaigou. Furthermore, this article built the original benchmark ML models (RF, SVM, and DBN) and the corresponding multitype ACO optimized models (ACO-ML). The performance of ACO-DBN and benchmark models were compared using performance evaluation indexes, landslide density, and Wilcoxon signed rank test. ## II Study Area and Data ### _Description of the Study Area_ Jiuzhaigou County is located in the northeast of Aba Tibetan and Qiang Autonomous Prefecture in Sichuan Province, the ditch has many lakes and river valleys. The high mountain and deep valley landform lead to poor slope stability, which is a typical area with a high incidence of landslide geological hazards [42, 43]. At 21:00 on August 8, 2017, a sudden M\\({}_{\\text{S}}\\) 7.0 magnitude earthquake occurred in Jiuzhaigou County with an epicenter at 33.22 \\({}^{\\circ}\\)N, 103.83 \\({}^{\\circ}\\)E and a source depth of 20 km. There were numerous active fault zones near the earthquake intersection. Earthquake is one of the three main triggering factors of landslide. Under the action of powerful earthquake force, the inertia force borne on the slope body changed abruptly, whereas the surface deformation changes and cracks increase. This phenomenon triggered a large number of seismic landslides, causing huge casualties, property losses, and environmental damages. 
LSM studies for Jiuzhaigou earthquake region are beneficial to the postdisaster reconstruction, land use planning, engineering site selection, and secondary disaster prevention and control in earthquake area [44]. Based on the existing research base and data in the Jiuzhaigou earthquake area, this article selected the key area where the Jiuzhaigou earthquake landslide occurred as the study area, with longitude ranging from 103.71 \\({}^{\\circ}\\)E to 103.90 \\({}^{\\circ}\\)E and latitude ranging from 33.11 \\({}^{\\circ}\\)N to 33.33 \\({}^{\\circ}\\)N, whose total area is about 208 km\\({}^{2}\\). ### _Landslide Inventory_ Landslide inventory, as the first step in susceptibility mapping, is necessary for modeling using ML methods. In this study, landslide inventory was constructed mainly by combining two methods: regional landslide survey and visual interpretation based on satellite images, which complement and validate each other. Regional landslide survey data comes from the \"Geological Cloud\" platform ([http://geocloud.cgs.gov.cn/](http://geocloud.cgs.gov.cn/)) developed by the China Geological Survey of the Ministry of Natural Resources, which can obtain information on geological hazards in different regions and locations corresponding to different types of disasters; data for visual interpretation based on satellite images mainly use GF-1 (acquisition time was 2017/08/14), GF-2 (2017/08/09), GeoEye-1 (2017/08/16), and Google Earth images as data sources for landslide inventory, and rely on the typical spectral, shape, texture, location, and other features of landslides to establish interpretation flags for interpretation. The above mentioned were selection methods of landslide points, whereas the different selection methods of nonlandslide points also affected results of LSM. Therefore, this article adopted the fractal theory in [45] for selection of nonlandslide points and obtained the low susceptibility area (\\(<\\)0.3). The fractal method was used as a candidate for nonlandslide points. At present, in studies using ML for LSM, researchers usually focus more on the location as well as the number of landslide occurrences. Therefore, some of the planar landslide units could all be transformed into uniform point elements by GIS software. A total of 7184 landslide inventory were finally selected, with a landslide to nonlandslide points ratio of 1:1, of which 60% of the inventory were used as the training set and the remaining 40% were used as the testing set, and the distribution of the landslide samples is shown in Fig. 1. ### _Landslide Causative Factors_ LSM studies are primarily based on the following assumptions: areas where landslides occurred in the past are more likely to experience landslides in the future; landslides occur as a result of the combined effect of related geographic environments, and these factors can be analyzed empirically or statistically; and the location of landslides can be predicted through environmental analysis and surveys [46]. Therefore, geographic environmental as landslide causative factors in the study area is also necessary for LSM studies. Reichenbach _et al._[47] combined the frequency of factors to classify them into geological, topographic, hydrological, land cover, and other types. However, due to the differences in the geographical environment of the study area, the types of selected factors are often different. 
Combining previous LSM studies of the Jiuzhaigou earthquake and the frequency of use of factors with the characteristics of the geographical environment, a total of 16 landslide causative factors were selected as experimental study factors [48, 49, 50]. They were distance from the epicenter, slope direction, curvature, distance from the fracture, gully density, land use, distance from the water system, vertical fracture distance, slope position, NDVI, local terrain relief, distance from the road, rainfall, slope, lithology, and elevation. The primary data sources of the factors include: the DEM, which was downloaded from the Geospatial Data Cloud Platform ([http://www.gscloud.cn/](http://www.gscloud.cn/)); land use, which was obtained from the GlobalLand30 product ([http://www.globallandcover.com/](http://www.globallandcover.com/)); the geological database, which was provided by the China University of Geosciences (Beijing) National Mineral Resource Potential Evaluation Project Team; and the rainfall data, which were provided by the Resource and Environment Data Cloud Platform of the Chinese Academy of Sciences ([http://www.resdc.cn/](http://www.resdc.cn/)). All data were processed and resampled to 30 m\\(\\times\\)30 m resolution raster data using ArcGIS.

Fig. 1: Study area and landslide inventory.

## III Methodology

### _Deep Belief Network_

As a typical DL method, DBN is a DNN ML method with connectionism as the leading idea. The structure of a DBN mainly consists of several RBM layers stacked together. The DBN training process is divided into two steps: pretraining and fine-tuning. First, a multilayer perceptron model is obtained by pretraining the RBMs layer by layer. Then, the BP algorithm is used to fine-tune the connection weights. A DBN with input signal vector \\(x\\) and a total of \\(L\\) hidden layers forms a forward neural network. Assuming that the state of hidden layer \\(\\tau\\) is \\(a^{(\\tau)}\\), the joint distribution can be expressed as

\\[P\\left(x,a^{(1)},\\ldots,a^{(L)}\\right)=\\left(\\prod_{\\tau=0}^{L-2}P\\left(a^{(\\tau)}\\mid a^{(\\tau+1)}\\right)\\right)P\\left(a^{(L-1)},a^{(L)}\\right) \\tag{1}\\]

where \\(a^{(0)}=x\\), \\(P(a^{(\\tau)}\\mid a^{(\\tau+1)})\\) stands for the conditional distribution of layer \\(\\tau\\), treated as a visible layer, given layer \\(\\tau+1\\), and \\(P(a^{(L-1)},a^{(L)})\\) stands for the joint distribution of the visible and hidden layers of the uppermost RBM. The specific learning process is as follows.

1. The first RBM is trained by using the landslide causative factors as the input layer (\\(a^{(0)}=x\\)).
2. The state of each training sample in the hidden layer of the first RBM can be regarded as a feature representation of the sample. It can be obtained by averaging the activation probabilities (\\(P(a^{(1)}=1\\mid a^{(0)})\\)) or by sampling the hidden layer (\\(P(a^{(1)}\\mid a^{(0)})\\)).
3. The second RBM is trained by using the hidden layer of the first RBM as its visible-layer input. This procedure is repeated until all RBMs have been traversed, at which point pretraining is complete.
4. Finally, the network is fine-tuned by connecting the last RBM hidden layer to the labeled sample set. The BP algorithm is used to fine-tune the weights of the network, and the landslide susceptibility probability values are obtained by the softmax regression layer.

Although DL has many advantages in solving complex nonlinear problems, it also suffers from instabilities such as model overfitting and low convergence efficiency.
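The layer-wise procedure above can be illustrated with a compact scikit-learn sketch. This is not the authors' TensorFlow implementation: the synthetic nine-factor data, layer sizes, and learning rates are assumptions, and the supervised BP fine-tuning of all RBM weights is simplified here to a logistic-regression output layer trained on the pretrained features.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 9))                       # stand-in for the nine causative factors
y = (X[:, 1] + X[:, 4] + 0.1 * rng.standard_normal(1000) > 1.0).astype(int)  # toy labels

# Greedy layer-wise pretraining: each RBM is trained on the hidden activations of the
# previous one; a logistic output layer then plays the role of the supervised stage
# (the paper instead fine-tunes all layer weights with BP).
dbn_like = Pipeline([
    ("scale", MinMaxScaler()),                  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```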
This experiment mainly used the addition of BN layer and Adam optimizer adaptive learning rate to solve the overfitting as well as to improve the slow convergence problem. Among them, BN was a strategy proposed by Ioffe and Szegedy [51] that took a normalization process at one layer of the network's input before moving to the next layer of the network. The formula can be expressed as \\[y^{(k)}=\\gamma^{(k)}\\ \\bar{x}^{(k)}+\\beta^{(k)} \\tag{2}\\] where \\(y^{(k)}\\) stands for BN result of the \\(k\\)th layer and \\(\\bar{x}^{(k)}\\) stands for standard deviation normalized result. \\(\\gamma^{(k)}\\) and \\(\\beta^{(k)}\\) are learning parameters. In 2015, Kingma and Ba [52] proposed Adam optimizer, which combines the advantages of both AdaGrad and RMSProp optimization algorithms. It has advantages of simple implement, computationally efficient and calculates different adaptive learning rates for different parameters [53]. The structure of DBN applied to LSM is shown in Fig. 2. ### _Ant Colony Optimization Strategy_ As a typical heuristic swarm intelligence algorithm, ACO, proposed by Marco Dorigo in 1992, is to find the optimal planning path by probabilistic calculation. ACO is a positive feedback mechanism, where ant's choice of path is related to the pheromone concentration, and eventually the ant will choose the path with high pheromone concentration to reach the optimal solution under the regulation mechanism [54]. This study used ACO to optimize ML models such as RF, SVM, and DBN. At the same time, we modified the traditional ACO by adopting multitype ants to optimize multiple parameters simultaneously. In ACO-ML, the number of ant types is the same as the number of parameters to be optimized. So ACO-ML could optimize multiple parameters of the model simultaneously to achieve the optimal solution of the objective function. For the DBN model, we used ACO to optimize the batch size \\((B)\\) in BN and the initial learning rate \\((L)\\) in Adam. Prediction accuracy was chosen as the objective function \\((F(B,L))\\). And the values of \\(B\\) and \\(L\\) were solved for a given interval \\[F_{\\max}\\left(B,L\\right) \\tag{3}\\] \\[\\mathrm{s.t.}\\left\\{\\begin{array}{l}B\\in\\left[B_{\\min},B_{\\max} \\right]\\\\ L\\in\\left[L_{\\min},L_{\\max}\\right]\\end{array}\\right.. \\tag{4}\\] The basic idea is to find the optimal solution of the objective function in the shortest path through iteration. Meanwhile, to ensure the efficiency of the optimization algorithm, we set the following two termination conditions: 1) no significant improvement in accuracy; 2) the number of iterations reaches the maximum. The flowchart of the entire experiment is shown in Fig. 3. ### _Comparison Methods_ The reason for choosing multiple models for comparison is that, on the one hand, the accuracy of the models can be verified with each other, and on the other hand, the optimal model can be selected for regional landslide susceptibility modeling [55]. As mentioned above, there were various classification criteria for LSM models. By focusing on the model of this study and statistics of methods that have been used frequently in recent years, in this article, SVM and RF were selected as comparison models for accuracy evaluation and analysis. And these model parameters were also optimized by ACO. SVM is a nonparametric supervised maximum likelihood algorithm that uses statistical theory to solve classification or regression problems [56]. 
SVM is a generalized linear classifier, which essentially solves for the maximum-margin hyperplane used for discrimination. It tackles complex nonlinear feature problems by converting the data to a higher dimensional feature space through kernel functions, in which the training data become linearly separable [57]. Therefore, in LSM studies, the kernel functions and other hyperparameters are crucial to the experimental results. The robustness and sparsity of SVM models, as well as their ability to handle high-dimensional data and solve nonlinear problems, have led to increasing interest in them in the field of LSM [58].

RF is an integrated learning method that combines multiple independent decision trees, and it is also a supervised classification algorithm [59]. Applying the RF model to LSM means randomly selecting \\(n\\) of all the factors at each node of the basic decision tree to form a subset and training the model on each subset separately. The votes of the different decision trees are then tallied, and the proportion of votes in each classification category gives the final landslide susceptibility. There are many hyperparameters in the RF model, such as the number of combined trees, the depth of the trees, and the number of features selected each time. Reasonable parameter settings can effectively avoid overfitting. With its simplicity, ease of implementation, high efficiency, and low sensitivity to hyperparameters, RF has demonstrated the powerful performance of the integrated learning method in many applications. The generalization ability of this kind of integration has attracted increasing attention in LSM studies [60].

Fig. 2: Landslide susceptibility modeling of DBN.

### _Performance Evaluation_

To compare and analyze the predictive performance of multiple models, this article selected five single-threshold statistical indicators, namely accuracy, precision, specificity, sensitivity, and F1, as well as the multithreshold ROC curve and AUC value, to comprehensively evaluate the predictive ability of the models. Accuracy is the most common evaluation indicator and reflects the probability of being correctly predicted over all samples. Precision indicates the percentage of examples classified as positive cases that are actually positive cases. Specificity and sensitivity stand for the probability of negative and positive samples being correctly predicted, respectively. F1 combines the results of precision and sensitivity; the higher the F1 value, the more accurate the model prediction. The ROC curve can effectively evaluate binary classifier performance, and the AUC value quantifies it. The main calculation formulas are as follows:

\\[\\mathrm{Accuracy}=\\frac{\\mathrm{TP}+\\mathrm{TN}}{\\mathrm{TP}+\\mathrm{TN}+\\mathrm{FP}+\\mathrm{FN}} \\tag{5}\\]
\\[\\mathrm{Precision}=\\frac{\\mathrm{TP}}{\\mathrm{TP}+\\mathrm{FP}} \\tag{6}\\]
\\[\\mathrm{Specificity}=\\frac{\\mathrm{TN}}{\\mathrm{TN}+\\mathrm{FP}} \\tag{7}\\]
\\[\\mathrm{Sensitivity}=\\frac{\\mathrm{TP}}{\\mathrm{TP}+\\mathrm{FN}} \\tag{8}\\]
\\[F1=\\frac{2\\,\\mathrm{TP}}{2\\,\\mathrm{TP}+\\mathrm{FP}+\\mathrm{FN}} \\tag{9}\\]

where TP and TN stand for positive and negative samples being correctly predicted, respectively. Similarly, FN and FP represent positive and negative samples being incorrectly predicted, respectively.
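For reference, the single-threshold indicators (5)-(9) and the AUC can be computed from test labels and predicted landslide probabilities as in the following short sketch; the toy labels and probabilities are synthetic and only illustrate the calculation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def lsm_metrics(y_true, y_prob, threshold=0.5):
    """Single-threshold metrics of (5)-(9) plus the multithreshold AUC."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
        "f1":          2 * tp / (2 * tp + fp + fn),
        "auc":         roc_auc_score(y_true, y_prob),
    }

# Toy check with made-up test labels and predicted landslide probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_prob = 0.7 * y_true + 0.3 * rng.random(200)
print(lsm_metrics(y_true, y_prob))
```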
## IV Experimental Results

### _Optimization of Landslide Causative Factors_

Through the description of the causative factors above, it can be seen that many factors affect LSM and that these factors are themselves strongly correlated because of the same or similar data sources. At the same time, in a DL model, an overly large feature dimension can easily cause the model to overfit and reduce execution efficiency. Learning the abnormal noise features of too many samples would also reduce the generalization ability of the model, so the factors need to be optimized. The correlation coefficient matrix of the factors was obtained by calculation, as presented in Table I. The larger the absolute value of the correlation coefficient between two factors, the stronger the correlation and the more redundant the data. Therefore, some of the factors need to be eliminated. Factor selection is mainly based on the following two principles.

1. A specific factor has a moderate or strong correlation with many other factors. When the absolute value of the correlation coefficient is greater than 0.8, the two factors are highly correlated; similarly, an absolute value between 0.5 and 0.8 indicates a moderate correlation. For example, the correlation coefficients between vertical fracture distance and distance from the epicenter, distance from the road, rainfall, and elevation were 0.95, -0.81, -0.66, and -0.55, respectively. Therefore, we concluded that it had a moderate or strong correlation with the other factors, and the vertical fracture distance, distance from the epicenter, rainfall, and elevation factors could be screened out on this basis.

2. Only two factors have a moderate or strong correlation. For example, the correlation coefficient between slope and local terrain relief was 0.89, whereas their correlations with other factors were weak. In this case, factor selection can be performed by evaluating the importance of the features through RF. As shown in Fig. 4, the relative importance values of slope and local terrain relief are 0.111 and 0.091. Comparatively speaking, we chose the slope factor because of its higher importance. The local terrain relief, gully density, and distance from the fracture factors could be removed by this method.

Through the correlation and importance evaluation of the above landslide causative factors, a total of nine factors, namely lithology, slope, distance from the road, NDVI, slope position, distance from the water system, land use, curvature, and slope direction, were finally selected to participate in the construction of the Jiuzhaigou earthquake LSM (see Fig. 5).

Fig. 3: ACO-ML model flowchart.

Fig. 4: RF importance ranking of causative factors. F1-F16 represent distance from the epicenter, slope direction, curvature, distance from the fracture, gully density, land use, distance from the water system, vertical fracture distance, slope position, NDVI, local terrain relief, distance from the road, rainfall, slope, lithology, and elevation, respectively.

Fig. 5: Landslide causative factors.

### _ACO-ML Models Building_

Since LSM using DL requires processing a large amount of sample data, the algorithmic and time complexity placed high demands on the hardware and software used for the experiments. Python 3.6, TensorFlow 1.4.0, pandas, scikit-learn, and scikit-opt were used to implement the ACO-ML models. The DBN structure in this article had a total of six layers, comprising a factor input layer, a four-layer RBM structure, and an output layer, with a 9-400-200-100-50-2 connection between layers, where 9 is the number of input landslide causative factors, 400, 200, 100, and 50 are the numbers of RBM neurons, and 2 is the final output of the softmax regression layer that predicts the probability of landslide occurrence. Since this experiment was to solve a regression problem, the hyperbolic tangent (tanh) activation function was selected in the network parameter settings, and L2 weight-decay regularization and the early stopping method were used at the same time. The loss function combined cross-entropy with an L2 adjustment term. The initial learning rate and batch size were optimized by the ACO strategy: the parameter intervals were set and searched by the ant colony to reach the optimal solution of the objective function. The algorithm parameters and optimized results are presented in Table II. LSM using RF and SVM was based on scikit-learn, and the data structure and modeling process were kept consistent with the DBN to facilitate comparative analysis. To explore the generality of the ACO strategy, the parameters of the traditional ML models were also optimized by this method. The number of decision trees, the maximum depth, and the split criterion were selected as the main parameters of RF, and the kernel function, penalty factor C, and gamma value were selected as the main parameters of SVM. The accuracy-priority criterion was adopted, and these parameters were likewise optimized by the ACO strategy.

### _Evaluation and Comparison of LSM Models_

In order to verify the performance and stability of the models, this article used 2874 test samples (the ratio of landslide to nonlandslide samples was 1:1). The indicators described above were used to evaluate the predictive performance of the six models, which can be divided into two categories: unoptimized and optimized by ACO. As presented in Table III, overall, the three models optimized by ACO outperformed their respective original models in all metrics. As a DL model, DBN had a clear advantage over the traditional ML methods in the comparison. Moreover, the accuracy of the ACO-DBN model was 93.11%, better than that of the other models, indicating that the overall performance of the ACO-DBN classifier was superior. Precision reflects the probability that the predicted positive samples (landslide samples) are correctly classified; in LSM, the expectation is that more landslide samples are correctly predicted. The precision of RF was higher than that of the SVM model, which was inconsistent with the accuracy ranking, making the models difficult to optimize. The ACO-DBN model was better than the comparison models in both aspects, showing better consistency. On the other hand, the specificity and sensitivity indicators reflect the probability of nonlandslide and landslide samples being correctly predicted, and the F1 index is an overall evaluation combining precision and sensitivity. The ACO-DBN model had the largest F1 value and the most robust prediction ability for actual samples, with both indicators performing well. Fig. 6 shows the ROC curves of the six models and the corresponding AUC values; the curve of the ACO-DBN model lies at the top overall, and its AUC value is the largest, at 0.973.
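The ACO-based tuning of the batch size \\(B\\) and initial learning rate \\(L\\) can be sketched as follows. This is a simplified continuous-domain ACO in the spirit of the strategy described above, not the authors' scikit-opt multitype-ant implementation: the archive size, number of ants, decay schedule, parameter bounds, and the stand-in cross-validated MLP objective are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in data and objective: 3-fold CV accuracy of a small network as a
# function of batch size B and initial learning rate L, i.e., F(B, L) in (3)-(4).
X, y = make_classification(n_samples=400, n_features=9, random_state=0)

def objective(B, L):
    clf = MLPClassifier(hidden_layer_sizes=(32,), batch_size=int(B),
                        learning_rate_init=L, max_iter=150, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Simplified continuous-domain ACO: an archive of (B, L) candidates acts as the
# pheromone trail; ants sample near good candidates, and the sampling spread
# shrinks each iteration, mimicking pheromone evaporation.
rng = np.random.default_rng(0)
bounds = np.array([[16, 256],        # B_min, B_max
                   [1e-4, 1e-1]])    # L_min, L_max
archive = rng.uniform(bounds[:, 0], bounds[:, 1], size=(6, 2))
fitness = np.array([objective(b, l) for b, l in archive])

for it in range(5):
    sigma = (bounds[:, 1] - bounds[:, 0]) * 0.3 * 0.7 ** it   # evaporation-like decay
    weights = np.exp(fitness - fitness.max())
    weights /= weights.sum()
    for _ in range(4):                                        # four ants per iteration
        guide = archive[rng.choice(len(archive), p=weights)]
        cand = np.clip(guide + rng.normal(0, sigma), bounds[:, 0], bounds[:, 1])
        f = objective(*cand)
        worst = fitness.argmin()
        if f > fitness[worst]:                                # keep the better trail
            archive[worst], fitness[worst] = cand, f

best = archive[fitness.argmax()]
print(f"best batch size ~ {best[0]:.0f}, best learning rate ~ {best[1]:.4f}, "
      f"CV accuracy = {fitness.max():.3f}")
```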
Both landslide susceptibility prediction and zoning are performed by inputting predicted data from the entire study area into the model for LSM by a trained model. The prediction emphasizes the probability of landslide occurrence, and the precise value of probability of each unit can be obtained. However, actual decision-makers tend to focus only on the level of landslide occurrence. Landslide susceptibility zoning, which is based on the prediction, will use a quantitative statistical method to divide the successive predicted probability values into certain intervals. This experiment used the natural interruption point grading method to classify the prediction into five categories: very low, low, moderate, high, and very high susceptibility zones. Fig. 7 corresponds to LSM results of the six models, respectively, and overall consistent results verified the applicability of ACO-DBN model in Jiuzhaigou region. The distribution areas of high and very high susceptibility areas were mainly concentrated in the northwest and southeast regions, which were closer to the epicenter. And predicted results were consistent with the marked landslide sample locations. ### _Validation of Landslide Distribution_ The landslide distribution characteristics test is a statistical partitioning of causative factors such as slope, distance from the water system and other factors with the prediction results of model. Statistical results could verify the accuracy and reasonableness of model predictions. As shown in Fig. 8, landslide proportions were calculated separately in different nine causative factors. The proportions could reflect the distribution of predicted landslide units located in different causative factors more intuitively. In general, landslides in Jiuzhaigou region had a clear distribution characteristic in each factor. It can be seen from Fig. 8 that as slope increases, the proportion of landslide units gradually increases. Combined with the number of landslides, landslides mainly occurred in the 30\\({}^{\\circ}\\)-45\\({}^{\\circ}\\) slope area. This was consistent with the fact that earthquake triggered unstable slope units in Jiuzhaigou region. The distribution of landslides was related to distance from the water system. The closer the distance to the water system, the higher the water content of the rock and soil in the slope. The erosion of river makes the viscosity of the rock and soil coefficient smaller. It also makes the slope more prone to instability and sliding. As shown in Fig. 8, landslides are mainly distributed in the range of 0-700 m from the water system, and the result is consistent with the above reasons. Land use from woodland and lithology from lower and middle carboniferous have a high susceptibility to landslide. Similar to the results of Pham _et al._[61], the probability of landslides increases as the distance from the road decreases. The result demonstrates that landslide frequency and associated damage is highly likely to increase with road construction and expansion [62]. ## V Discussion ### _Effectiveness of ACO Strategy_ The performance of a certain model depends not only on the quality of the algorithm itself but also on the detailed tuning of the parameters for certain application. As a swarm intelligence algorithm, ACO is an excellent solution to fine-tuning parameters in a specified range. As previously mentioned, the results of ACO-ML were improved compared with those before adjustment. 
Moreover, we needed to know whether this difference was statistically significant. Therefore, we performed a Wilcoxon signed rank test on the LSM test samples. The null hypothesis was that there was no difference in predictive power between the two models. To test this hypothesis, we conducted statistical experiments. The results in Table IV show that all \\(p\\)-values are less than the significance level of 0.05. Therefore, the hypothesis was rejected; in other words, the difference was statistically significant.

Fig. 6: Comparison of ROC curves and AUC values in multiple models.

As presented in Table III, the traditional ML methods show a significant improvement in accuracy and other metrics after applying the ACO strategy, which shows that RF and SVM are sensitive to the choice of parameters. Compared to the traditional ML methods, the two DL methods showed great advantages in model performance, such as an improvement of more than 3% in accuracy and precision. This shows that DL methods are more effective in exploring the complex and rich relationships between landslide causative factors. Compared to the unoptimized DBN model, the performance of the ACO-DBN model, such as its specificity and sensitivity, was improved to various degrees. The improvement from ACO was relatively small compared to that for the traditional ML models, which also indicates that DBN is relatively insensitive to the selection of parameters. This characteristic is advantageous for LSM under different complicated environments and factors. In summary, the proposed ACO strategy is also essential if we need to further refine the accuracy and efficiency.

Fig. 7: LSM results of Jiuzhaigou. (a) RF. (b) SVM. (c) DBN. (d) ACO-RF. (e) ACO-SVM. (f) ACO-DBN.

### _Landslide Density Rationalization Analysis_

Landslide susceptibility studies require not only the highest possible accuracy and efficiency but also a reasonable spatial distribution of landslides. The spatial distribution of landslides needs to follow the following principle: the number of landslides increases with the increase of the landslide susceptibility index. Landslide density reflects the proportion of landslides in the total number of landslides within a certain susceptibility index range. The calculation formula is as follows:

\\[\\mathrm{LD}_{i}=\\frac{L_{i}/L}{R_{i}/R} \\tag{10}\\]

where \\(\\mathrm{LD}_{i}\\) stands for the landslide density in the specified susceptibility index interval, \\(L_{i}\\) and \\(R_{i}\\) represent the number of landslides and the total number of mapping units in that interval, respectively, and, similarly, \\(L\\) and \\(R\\) represent the number of landslides and the total number of mapping units in the whole study area, respectively.

Fig. 9 shows the landslide densities and trends for the three optimized models. The landslide density of ACO-RF shows an overall increasing trend. However, it locally exhibits stochasticity, meaning that as the index increases, the landslide density noticeably decreases within certain ranges (0.3-0.4 and 0.65-0.75). The landslide density of ACO-SVM shows an imbalance in the distribution: there is no significant increase in the interval from 0 to 0.95, whereas the interval from 0.95 to 1 has an excess of predicted results. By comparison, the landslide density of ACO-DBN shows a smooth increasing trend. Therefore, it is more scientific and reasonable to use the proposed model to apply LSM over the whole study area, which is more advantageous for disaster prevention and mitigation decisions.
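Equation (10) can be evaluated per susceptibility-index bin with a short helper such as the following; the bin edges and the synthetic mapping units are illustrative only.

```python
import numpy as np

def landslide_density(susceptibility, is_landslide, bins=np.linspace(0, 1, 11)):
    """Landslide density LD_i = (L_i / L) / (R_i / R) per susceptibility bin, cf. (10)."""
    susceptibility = np.asarray(susceptibility)
    is_landslide = np.asarray(is_landslide, dtype=bool)
    L, R = is_landslide.sum(), len(susceptibility)
    idx = np.clip(np.digitize(susceptibility, bins) - 1, 0, len(bins) - 2)
    ld = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        in_bin = idx == i
        R_i, L_i = in_bin.sum(), (in_bin & is_landslide).sum()
        if R_i > 0:
            ld[i] = (L_i / L) / (R_i / R)
    return ld

# Toy mapping units: densities should rise with the susceptibility index.
rng = np.random.default_rng(0)
susc = rng.random(5000)
slides = rng.random(5000) < susc * 0.4          # higher index -> more landslides
print(np.round(landslide_density(susc, slides), 2))
```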
### _Effectiveness of Factors Optimization_

Feature optimization is a controversial issue in landslide susceptibility studies. On the one hand, feature selection diminishes the dimensionality of the dataset, reducing model complexity and computation time. A high feature dimensionality also tends to cause overfitting and convergence difficulties for DL models, and if some selected factors have too high a correlation and importance, the model will over-rely on them, weakening the influence of the other factors in LSM. On the other hand, dimensionality reduction can also cause problems in ML modeling: excluding factors may degrade performance because of the complex nonlinear relationships between factors, and a factor may not be directly related to the occurrence of landslides but may be related to other factors that influence their occurrence [63].

Fig. 8: Landslide distribution in different factors.

Overall, if model efficiency and generalization ability are a higher priority than ease of implementation, then factor optimization can be adopted, and several studies have used factor optimization when the number of factors was sufficient (16 factors in this article). We verified the importance of feature selection by comparing some metrics before and after optimization. As shown in Table V, the prediction time is reduced by about half by feature selection, which indicates an increase in model efficiency. The loss of the model becomes smaller after optimization, which indicates that the generalization ability of the model has been improved and the risk of overfitting is reduced. Although using all the original factors achieves a slight advantage in precision, there is no improvement in the F1 value. Therefore, weakening or eliminating the correlation between factors has more advantages than disadvantages for LSM.

### _Landslide-Prone Areas in Jiuzhaigou Region_

As previously noted, it can be seen in Table III and Fig. 7 that ACO-ML differs from the traditional ML models in the distribution of the landslide susceptibility index. Combined with the landslide density analysis, the susceptibility localization predicted by ACO-DBN is the more scientific and reasonable result. Therefore, we chose three Google Earth images spanning an interval of two years, acquired before the earthquake, six days after the earthquake, and two years after the earthquake, and we located seven sites where new landslides occurred after the earthquake and two years after the earthquake. It is worth noting that these images were not within the scope of the visual interpretation of RS images described in the previous section. Taking ACO-DBN as an example (see Fig. 10), the locations of the new landslide geological hazards lie in areas with high and very high landslide susceptibility. The fact that the results successfully predicted numerous new landslides that occurred two years later demonstrates the correctness of the LSM. Based on the above analysis, the ACO-DBN model showed good predictive performance and correctness for LSM. Therefore, the proposed model has significant advantages and research value in dealing with complex nonlinear factors.

Fig. 9: Landslide density in different models.

## VI Conclusion

This article took the earthquake-induced landslides in the Jiuzhaigou region as an example and addressed the existing problem of insufficient parameter optimization of LSM models. The ACO-DBN model was proposed to solve the above problems.
At the same time, this article carried out mapping and validation work by optimizing the causative factors and by multimodel comparison and analysis. According to the experimental results and analysis, the following conclusions can be drawn.

1. ACO-ML is superior to unoptimized ML in LSM because of its robust search and its ability to combine parameters. Meanwhile, the improved multitype ACO algorithm can optimize multiple parameters simultaneously, effectively avoiding the decrease in prediction accuracy caused by improper optimization.
2. Comparing the evaluation indexes and landslide density across multiple models, the accuracy of the ACO-DBN model is 93.11%, and its landslide density is more consistent with the basic assumptions. The proposed improved model is more scientific, explainable, and reasonable.
3. The final LSM shows that the northwest and southeast of the Jiuzhaigou earthquake region are high-susceptibility areas with a high probability of future landslides. Verification against newly occurred landslide sites shows that the ACO-DBN model has the advantage of an accurate prediction range.
4. The correlation coefficient matrix of the causative factors and the RF feature importance evaluation can effectively remove moderately or strongly correlated and low-contribution characteristics among the factors. Statistics of the distribution of landslides under different causative factors show that the prediction results are consistent with the actual situation.

The result reflects the advantages of heuristically optimized DL in LSM and other disaster prevention and mitigation tasks under large-scale, multisample conditions.

Fig. 10: Verification of landslide-prone areas in Jiuzhaigou region.

## Acknowledgment

The authors thank Q. Hu for providing part of the data. The authors would like to sincerely thank the editors and reviewers for their valuable comments.

## References

* [1] S. Xu, \"Study on dynamic landslide susceptibility mapping based on multi-source remote sensing imagery,\" Ph.D. dissertation, China Univ. Geosci., Wuhan, China, 2018. * [2] L. Zhao, \"Forecast of geological disasters in China in 2020 and its trend in 2021,\" 2021. Accessed: Jan. 14, 2021. [Online]. Available: [http://www.yidianizu.com/article/039cvLm3?COLLC=6518193728x=op398&appli=dsd_op398](http://www.yidianizu.com/article/039cvLm3?COLLC=6518193728x=op398&appli=dsd_op398) * [3] Z. Fang, \"National geological hazard bulletin (2019),\" 2020. Accessed: Apr. 12, 2020. [Online]. Available: [https://www.cgs.gov.cn/gzd/rzd/rzdw/202003/](https://www.cgs.gov.cn/gzd/rzd/rzdw/202003/)(20200331_004559.html). * [4] E. E. Brabb, \"Innovative approaches to landslide hazard mapping,\" in _Proc 4th Int. Symp. Landslides_, Toronto, ON, Canada, 1984, pp. 307-324. * [5] Z. Chen, \"Landslide susceptibility mapping in Baxie River basin,\" M.S. Thesis, Lanzhou Univ., Lanzhou, China, 2015. * [6] H. R. Pourghasemi, Z. T. Yansari, P. Panagos, and B. Pradhan, \"Analysis and evaluation of landslide susceptibility: A review on articles published during 2005-2016 (periods of 2005-2012 and 2013-2016),\" _Arab. J. Geosci._, vol. 11, no. 9, pp. 1-12, 2018. * [7] W. Liang, \"Study on assessment methodology of landslide and debris flow geological hazards,\" Ph.D. dissertation, Nanjing Agricultural Univ., Nanjing, China, 2012. * [8] Y. Yi, Z. Zhang, W. Zhang, H. Jia, and J. Zhang, \"Landslide susceptibility mapping using multiscale sampling strategy and convolutional neural network: A case study in Jiuzhaigou region,\" _Catena_, vol. 195, no. 8, pp. 1-13, 2020. * [9] K.-T. Chang, A.
# Statistical Regularization for TomoSAR Imaging With Multiple Polarimetric Observations

Gustavo Daniel Martin-del-Campo-Becerra, Eduardo Torres-Garcia, Deni Librado Torres-Roman, Sergio Alejandro Serafin-Garcia, and Andreas Reigber

###### Index Terms--Polarimetric synthetic aperture radar (PolSAR), SAR tomography (TomoSAR), statistical regularization, weighted covariance fitting (WCF).

## Nomenclature

### _A. List of Acronyms_

AIC Akaike information criterion.
BIC Bayesian information criterion.
DR Detection rate.
EDC Efficient detection criterion.
KL Kullback-Leibler.
InSAR Interferometric SAR.
JPL Jet Propulsion Laboratory.

## I Introduction

Two main limitations arise when analyzing conventional synthetic aperture radar (SAR) imagery. On one hand, the superposition of different scattering contributions sharing the same resolution cell cannot be determined, since only the total backscattering is measured. On the other hand, the height of reflectors is unknown, since the elevation angle cannot be retrieved, as the SAR geometry is symmetric in elevation. These limitations are tackled via two extensions of conventional SAR: polarimetric SAR (PolSAR) and interferometric SAR (InSAR), respectively. Taking advantage of contributions from different polarimetric channels, PolSAR [1, 2] permits discriminating the different types of scattering mechanisms and examining the physics of reflection processes. InSAR [3, 4], in turn, is capable of generating high-resolution digital elevation models by analyzing the phase difference between two SAR images acquired at slightly different positions. More recently, aimed at producing three-dimensional imagery of the illuminated scene and solving the layover problem, SAR tomography (TomoSAR) was introduced [5]. As seen in Fig. 1, by acquiring several SAR images at different lines of sight (LOS), an aperture in elevation is generated (the so-called tomographic aperture). Then, exploiting antenna array theory, as done for SAR systems, a resolution along the direction perpendicular to the LOS (PLOS) is defined. While fusing InSAR and PolSAR permits determining the vertical location of scattering mechanisms [6], combining TomoSAR and PolSAR allows determining how energy distributes over space, locating the radiating (backscattering) sources and extracting the associated scattering mechanisms. Moreover, the height of reflectors is recovered from the local maxima along the estimated power spectrum pattern (PSP).

By generalizing the TomoSAR signal model to the fully polarimetric configuration, conventional focusing techniques like matched spatial filtering (MSF), Capon, and multiple signal classification (MUSIC) can be adapted to the polarimetric case [7]. Making use of the same signal model, this article extends the super-resolution method weighted covariance fitting (WCF) based iterative spectral estimator (WISE) [8] to the polarimetric configuration, called hereafter PolWISE. The main goal is easing both the discrimination of the different types of scattering mechanisms and the analysis of the scattering processes occurring in the illuminated scene. PolWISE attains super-resolution, performing suppression of artefacts and ambiguity reduction. It can be understood as a postprocessing step for the enhancement of TomoSAR imagery recovered using conventional focusing techniques. The retrievals of methods like PolMSF, PolCapon, or PolMUSIC act as first input (zero iteration) to the PolWISE iterative procedure, which seeks to converge to a unique solution.
PolWISE poses two particular challenges as follows.

1. Being a statistical regularization approach, PolWISE reduces the TomoSAR inverse problem to the selection of a regularization parameter, whose correct setting assures retrieving good-fitted solutions. Aimed at properly selecting such a regularization parameter, the L-Curve method [9, 10] is refactored to work exclusively with data covariance matrices, since the scattering vector may not always be available, especially when not working at full resolution. Any manipulation previously applied to the data covariance matrix (e.g., presumming) would have to be replicated on the scattering vector, which is not always feasible.

2. Being an iterative technique, PolWISE prevents over/under regularization by terminating the procedure at an appropriate iteration. Specifically, the iterative approach is terminated after reaching a solution that maximizes the Kullback-Leibler (KL) information function [11, 12] via minimizing the Akaike information criterion (AIC), the Bayesian information criterion (BIC), or the efficient detection criterion (EDC).

The novel strategy is assessed through simulations and on fully polarimetric TomoSAR airborne data at L-band, acquired from an urban scenario. Scattering patterns are examined in detail by using the alpha mean angle [2] as polarimetric indicator. Scattering patterns in urban scenarios are very complex [7], since a single resolution cell may gather several contributions. Particularly, the following phenomena could occur: single-bounce reflections from surfaces such as building roofs and structures made out of glass, double-bounce reflections from wall-ground and tree-trunk-ground interactions, and volume reflections from trees' canopy.

The main contributions of this article are recapitulated as follows.

1. PolWISE is introduced as the polarimetric version of WISE, a super-resolution statistical regularization technique for TomoSAR imaging.
2. The L-Curve method is refactored to work exclusively with data covariance matrices, aimed at properly selecting PolWISE's regularization parameter.
3. A stopping rule based on the KL information criterion is employed to terminate the PolWISE iterative procedure, preventing under/over regularization and avoiding unnecessary iterations.
4. The capabilities of PolWISE are assessed for the single, dual, and full channel cases via simulations. Afterwards, tomograms are produced and scattering mechanisms are analyzed in complex urban scenes, using fully polarimetric TomoSAR airborne data.

The rest of this article is organized as follows. Section II addresses the classical TomoSAR signal model. Section III briefly summarizes WISE. Section IV presents the extended polarimetric TomoSAR signal model. Section V introduces PolWISE. Section VI addresses the simulations performed. Section VII shows the experimental results. Finally, Section VIII concludes this article.

Fig. 1: TomoSAR acquisition geometry using parallel passes (not to scale).

## II TomoSAR Signal Model

As observed in Fig. 1, the TomoSAR acquisition geometry comprises \(L\) flight tracks (passes) with different LOS. Each pass produces one SAR image. After all images are collected, the ensemble of SAR imagery is coherently combined using InSAR techniques. Coregistration is assumed independent of height. Subsequently, the spatial spectral estimation problem [13, Chapter 6] is solved by means of the \(L\) processed signals, considered from now on as a sensor array.
Recall that the spatial spectral estimation problem consists on determining how energy distributes over space and where the radiating (backscattering) sources are located. The TomoSAR inverse problem is represented via the linear equation [8, 9, 12] \\[\\mathbf{y}=\\mathbf{A}\\mathbf{s}+\\mathbf{n}, \\tag{1}\\] where vector \\(\\mathbf{y}\\) gathers the processed signals of the \\(L\\) passes for certain azimuth-range position. Vector \\(\\mathbf{s}\\) of size \\(M\\) holds the corresponding reflectivity values at the PLOS elevation positions \\(\\{z_{m}\\}_{m=1}^{M}\\). Vector \\(\\mathbf{n}\\) accounts for the additive noise. Finally, matrix \\(\\mathbf{A}\\) contains the interferometric phase information of the backscattering sources located along the PLOS elevation positions \\(\\{z_{m}\\}_{m=1}^{M}\\), above the reference focusing plane. Matrix \\(\\mathbf{A}\\) (so-called steering matrix) comprises the steering vectors \\(\\{\\mathbf{a}(z_{m})\\}_{m=1}^{M}\\), as defined in [8]. Vector \\(\\mathbf{n}\\) is characterized by the correlation matrix \\[\\mathbf{R}_{\\mathbf{n}}=\\mathrm{E}\\big{(}\\mathbf{n}\\mathbf{n}^{+}\\big{)}=N_{0 }\\;\\mathbf{I}, \\tag{2}\\] where \\(N_{0}\\) is the power spectral density of the white noise power [14]. The correlation matrix of vector \\(\\mathbf{s}\\) is defined as \\[\\mathbf{R}_{\\mathbf{s}}=\\;\\mathrm{E}\\big{(}\\mathbf{s}\\mathbf{s}^{+}\\big{)}= \\mathbf{D}\\left(\\mathbf{b}\\right), \\tag{3}\\] where vector \\(\\mathbf{b}=\\{|s_{m}|^{2}\\}_{m=1}^{M}\\). Aimed at easing the mathematical developments that led to WISE [8], entries of vector \\(\\mathbf{s}\\) are assumed uncorrelated. Finally, the correlation matrix of vector \\(\\mathbf{y}\\) is given by \\[\\mathbf{R}_{\\mathbf{y}}=\\;\\mathrm{E}\\big{(}\\mathbf{y}\\mathbf{y}^{+}\\big{)}= \\mathbf{A}\\mathbf{R}_{\\mathbf{s}}\\mathbf{A}^{+}+\\mathbf{R}_{\\mathbf{n}}. \\tag{4}\\] The sampled (measured) data covariance matrix is expressed as \\[\\mathbf{Y}=\\frac{1}{J}\\;\\sum_{j=1}^{J}\\mathbf{y}_{(j)}\\mathbf{y}_{(j)}^{+}, \\tag{5}\\] where \\(J\\) indicates the number of independent realizations (looks) of the signal acquisitions. In practice, TomoSAR is customarily treated as an ergodic process, meaning that its statistical properties are deduced from a single random realization. Multilooking is achieved through the averaging of adjacent values among the set of data covariance matrices \\(\\mathbf{Y}\\), i.e., using, by instance, a Boxcar filter. Given the data covariance matrix \\(\\mathbf{Y}\\) and the steering matrix \\(\\mathbf{A}\\), together with some prior knowledge on the problem (e.g., about the statistics of the signal and noise), the nonlinear TomoSAR inverse problem consists in estimating the second-order statistics of the reflectivity vector \\(\\mathbf{s}\\), i.e., the PSP vector \\(\\mathbf{b}\\) at the principal diagonal of matrix \\(\\mathbf{R}_{\\mathbf{s}}\\). The TomoSAR problem is ill-conditioned, as it does not accomplish the _uniqueness_ Hadamard condition [15, Chapter 15]. The usage of \\(\\mathbf{Y}\\) for focusing, instead of utilizing vector \\(\\mathbf{y}\\), is aimed at increasing accuracy in presence of signal-dependent (multiplicative) noise and handling the multiple nondeterministic sources [15, Chapter 18]. Several nonparametric and parametric techniques are available for retrieving the PSP as part of the direction-of-arrival estimation framework [13, Chapter 6]. 
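To make the signal model in (1)-(5) concrete, the following minimal NumPy sketch simulates a stack of \(L\) single-look observations for a simple PLOS reflectivity profile and accumulates the sample covariance matrix of (5). The geometry values, the standard two-way interferometric phase model assumed for the steering vectors, and all variable names are illustrative assumptions of this sketch rather than details taken from the processing chain of [8].

```python
import numpy as np

# Illustrative TomoSAR geometry (values are assumptions for this sketch)
wavelength = 0.23                          # carrier wavelength [m]
r0 = 5000.0                                # slant range to the scene [m]
L = 15                                     # number of passes (tracks)
baselines = np.linspace(-60.0, 60.0, L)    # PLOS baselines spanning ~120 m
M = 290                                    # PLOS height samples
z = np.linspace(-7.0, 21.0, M)             # PLOS height grid [m]

# Steering matrix A (L x M), assuming the usual two-way phase model
kz = 4.0 * np.pi * baselines / (wavelength * r0)
A = np.exp(1j * np.outer(kz, z))

# Simulate J looks of y = A s + n (eq. (1)) for two point-like sources
rng = np.random.default_rng(0)
J = 300
b_true = np.zeros(M)
b_true[np.argmin(np.abs(z - 2.0))] = 1.0   # source at about 2 m
b_true[np.argmin(np.abs(z - 10.0))] = 0.8  # source at about 10 m
N0 = 0.01                                  # white-noise power (eq. (2))

Y = np.zeros((L, L), dtype=complex)
for _ in range(J):
    s = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) * np.sqrt(b_true / 2.0)
    n = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(N0 / 2.0)
    y = A @ s + n
    Y += np.outer(y, y.conj())
Y /= J                                     # sample covariance matrix (eq. (5))
```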
Imposing some form of constraints and/or making appropriate assumptions, the different focusing techniques must guarantee retrieving well-conditioned solutions to the nonlinear TomoSAR inverse problem. On one hand, nonparametric methods do not make any assumption on the covariance structure of the data. They only assume that matrix \\(\\mathbf{A}\\) is calibrated, as the functional form of vectors \\(\\{\\mathbf{a}_{m}\\}_{m=1}^{M}\\)is known [13, Chapter 6]. On the other hand, parametric methods assume a PSP composed of point-type like backscattering sources [13, Chapter 5], reducing the spatial spectral estimation problem to the problem of selecting an integer-valued parameter [so-called model order (MO)], which describes the number of source signals impinging on the sensors array [13, Appendix C]. Parametric methods include techniques like MUSIC (summarized in [12]), whereas nonparametric ones include techniques like MSF and Capon. These three approaches are the most common TomoSAR focusing techniques, widely used (among other reasons) because of their simple implementation. MSF, by instance, is defined via [8] \\[\\mathbf{\\hat{b}}_{\\text{MSF}}=\\mathbf{A}^{+}\\mathbf{Y}\\mathbf{A}, \\tag{6}\\] whilst Capon is given by [8] \\[\\left\\{\\hat{b}_{\\text{Capon}_{m}}=\\frac{1}{\\mathbf{a}_{m}^{+}\\mathbf{R}_{ \\mathbf{y}}^{-}\\mathbf{1}\\mathbf{a}_{m}}\\right\\}_{m=1}^{M}, \\tag{7}\\] with \\(\\mathbf{R}_{\\mathbf{y}}\\) approximated via \\(\\mathbf{Y}\\) in (5). MSF attains a resolution \\(\\rho_{\\text{PLOS}}=\\lambda r_{1}/2D_{\\text{PLOS}}\\;\\) (see Fig. 1) constrained to the acquisition geometry [5, 16], with \\(\\lambda\\) as the carrier wavelength. Finer resolution is attained with larger tomographic aperture \\(D_{\\text{PLOS}}\\). Furthermore, assuming evenly distributed passes, \\(d=\\lambda r_{1}/2Z_{\\text{PLOS}}\\;\\) defines the maximum allowed baseline to avoid ambiguities along the PLOS height range of interest \\(Z_{\\text{PLOS}}\\)[16]. In general, Capon and MUSIC attain finer resolution and ambiguity suppression than MSF, easing the acquisition geometry constrain, as they involve an alternative inversion of the spectrum [17]. Conversely, MSF achieves better radiometric accuracy [18]. Moreover, Capon must be provided with full-rank (invertible) covariance matrices, whereas MUSIC depends on a proper MO selection [11]. ## III WCF-Based Iterative Spectral Estimator WISE is a statistical regularization approach that guarantees retrieving well-conditioned solutions by incorporating known properties of the solution into the solver. The WCF optimization problem [8, 19, 20] \\[\\mathbf{\\hat{b}}_{\\text{WCF}}=\\underset{\\mathbf{b}}{\\text{argmin}}\\left\\{\\left\\| \\mathbf{y}^{+}\\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{y}+\\frac{\\text{tr}\\left\\{ \\mathbf{R}_{\\mathbf{y}}\\right\\}}{\\text{tr}\\left\\{\\mathbf{Y}\\right\\}}\\right\\|^ {2}\\right\\} \\tag{8}\\] is solved with the standard gradient method. Accordingly, the derivative of the objective function is set to \\(0\\), yielding \\[\\left\\{\\mathbf{a}_{m}^{+}\\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{Y}\\mathbf{R}_{ \\mathbf{y}}^{-1}\\;\\mathbf{a}_{m}=\\frac{1}{\\text{tr}\\left\\{\\mathbf{Y}\\right\\}} \\;\\mathbf{a}_{m}^{+}\\mathbf{a}_{m}\\right\\}_{m=1}^{M}. \\tag{9}\\] Subsequently, the asymptotic of the filter output is defined via Capon. 
Multiplying both sides in (9) by the expression in (7) and performing a sequence of simple manipulations produces the solver [8]
\[\left\{\hat{b}_{\text{WCF}m}=\frac{\text{tr}\left\{\mathbf{Y}\right\}}{\mathbf{a}_{m}^{+}\mathbf{a}_{m}}\left(\frac{\mathbf{a}_{m}^{+}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{Y}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{a}_{m}}{\mathbf{a}_{m}^{+}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{a}_{m}}\right)\right\}_{m=1}^{M}. \tag{10}\]
Note the dependency of (10) on matrix \(\mathbf{R}_{\mathbf{y}}\), which in turn depends on two parameters, vector \(\mathbf{b}\) in (3) and factor \(N_{0}\) in (2). Aimed at easing the computation of (10), subtle assumptions and modifications are done [8]: matrix \(\mathbf{R}_{\mathbf{y}}\) is replaced with an estimate \(\hat{\mathbf{R}}_{\mathbf{y}}\), built as in (4) from the current PSP estimate and factor \(N_{0}\), and the Capon-like term \(1/(\mathbf{a}_{m}^{+}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{a}_{m})\) is recognized as the PSP itself. This yields the iterative estimator
\[\left\{\hat{b}_{\text{WISE}_{m}}^{[i+1]}=\frac{\text{tr}\left\{\mathbf{Y}\right\}}{\mathbf{a}_{m}^{+}\mathbf{a}_{m}}\;\hat{b}_{\text{WISE}_{m}}^{[i]}\;\mathbf{a}_{m}^{+}\hat{\mathbf{R}}_{\mathbf{y}}^{-1}\mathbf{Y}\hat{\mathbf{R}}_{\mathbf{y}}^{-1}\mathbf{a}_{m}\right\}_{m=1}^{M},\quad i=0,\,\ldots,\,I, \tag{11}\]
whose polarimetric counterpart is derived in Section V.

Afterwards, maximizing (17) in terms of the polarization state \(\mathbf{q}\) by
\[\max_{\left\|\mathbf{q}\right\|=1}\left\{\frac{\text{tr}\left\{\mathbf{Y}_{\text{Pol}}\right\}}{\mathbf{q}^{+}\mathbf{B}_{m}^{+}\mathbf{B}_{m}\mathbf{q}}\left(\frac{\mathbf{q}^{+}\mathbf{B}_{m}^{+}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{Y}_{\text{Pol}}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{B}_{m}\mathbf{q}}{\mathbf{q}^{+}\mathbf{B}_{m}^{+}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{B}_{m}\mathbf{q}}\right)\right\} \tag{18}\]
produces
\[\left\{\hat{b}_{\text{WCF}m}=\frac{\text{tr}\left\{\mathbf{Y}_{\text{Pol}}\right\}}{\mathbf{B}_{m}^{+}\mathbf{B}_{m}}\,w_{m_{\text{max}}}\right\}_{m=1}^{M}, \tag{19}\]
with
\[\left\{w_{m_{p}}\right\}_{p=1}^{P},\;\;\left\{\mathbf{v}_{m_{p}}\right\}_{p=1}^{P}=\text{eigen}\left\{\frac{\mathbf{B}_{m}^{+}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{Y}_{\text{Pol}}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{B}_{m}}{\mathbf{B}_{m}^{+}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{B}_{m}}\right\}. \tag{20}\]
Value \(w_{m_{\text{max}}}\) refers to the maximum eigenvalue, whereas \(\mathbf{v}_{m_{\text{max}}}\) is the associated eigenvector. Vector \(\mathbf{v}_{m_{\text{max}}}\) is regarded as a polarimetric reflection mechanism, which specifies optimum polarimetric combinations in the sense of WCF. PolWISE performs subtle assumptions and modifications to (20), aimed at easing its computation. Contrary to PolCapon, PolWISE does not replace \(\mathbf{R}_{\mathbf{y}_{\text{Pol}}}\) with \(\mathbf{Y}_{\text{Pol}}\), but with the block matrix
\[\mathbf{C}=\begin{bmatrix}\mathbf{R}_{\mathbf{y}_{p=1}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}_{\mathbf{y}_{p=2}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{R}_{\mathbf{y}_{p=P}}\end{bmatrix}. \tag{21}\]
Matrix \(\mathbf{C}\) contains matrices \(\left\{\mathbf{R}_{\mathbf{y}_{p}}\right\}_{p=1}^{P}\) on its main diagonal, each modeled as in (4). The usage of matrix \(\mathbf{C}\) to approximate \(\mathbf{R}_{\mathbf{y}_{\text{Pol}}}\) implies a dependency on vectors \(\left\{\mathbf{b}_{p}\right\}_{p=1}^{P}\), i.e., the PSP of each polarization channel. The true PSP is unknown; nonetheless, it can be estimated through focusing techniques like MSF, Capon, or MUSIC.
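Continuing the sketch above, the single-channel MSF and Capon estimators in (6) and (7) can be written as follows. The small diagonal loading used to keep the sample covariance matrix invertible, and the function names, are assumptions of this illustration and are not part of the original formulations.

```python
import numpy as np

def msf_psp(A, Y):
    # MSF, eq. (6): diagonal entries of A^+ Y A
    return np.real(np.einsum('lm,lk,km->m', A.conj(), Y, A))

def capon_psp(A, Y, loading=1e-3):
    # Capon, eq. (7), with R_y approximated by Y; diagonal loading is an assumption
    L = Y.shape[0]
    Y_inv = np.linalg.inv(Y + loading * (np.trace(Y).real / L) * np.eye(L))
    denom = np.real(np.einsum('lm,lk,km->m', A.conj(), Y_inv, A))
    return 1.0 / denom

# Example usage with the A and Y built in the previous sketch:
# b_msf = msf_psp(A, Y)
# b_capon = capon_psp(A, Y)
```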
Moreover, instead of focusing one polarization channel at a time, the polarimetric versions of such techniques [7] recover the contributions of each polarization channel at once. Utilizing PolCapon, for instance, yields
\[\left\{\mathbf{e}_{m}=\left\{\hat{b}_{\text{PolCapon}_{m_{p}}}\right\}_{p=1}^{P}=w_{m_{\text{min}}}^{-1}\left\{v_{m_{\text{min}_{p}}}\right\}_{p=1}^{P}\right\}_{m=1}^{M}, \tag{22}\]
with \(w_{m_{\text{min}}}\) and \(\mathbf{v}_{m_{\text{min}}}\) retrieved from (16).

PolWISE recognizes that \(1/\mathbf{u}_{m}^{+}\mathbf{R}_{\mathbf{y}_{\text{Pol}}}^{-1}\mathbf{u}_{m}\) in (17) is actually the polarimetric PSP vector \(\mathbf{b}_{\text{Pol}}\) [see (14)]. Yet, the true polarimetric PSP is not known and needs to be estimated. Along these lines, vectors \(\left\{\mathbf{e}_{m}\right\}_{m=1}^{M}\) in (22) are utilized both as first input and to estimate matrix \(\mathbf{C}\) in (21). Consequently, (20) can be reduced to
\[\left\{w_{m_{p}}\right\}_{p=1}^{P},\;\;\left\{\mathbf{v}_{m_{p}}\right\}_{p=1}^{P}=\text{eigen}\left\{\left(\mathbf{B}_{m}^{+}\hat{\mathbf{C}}^{-1}\mathbf{Y}_{\text{Pol}}\hat{\mathbf{C}}^{-1}\mathbf{B}_{m}\right)\odot\mathbf{E}_{m}\right\}, \tag{23}\]
with matrix \(\mathbf{E}_{m}=\left[\mathbf{e}_{m}^{\text{T}},\;\ldots,\;\mathbf{e}_{m}^{\text{T}}\right]\) of size \(P\times P\). Vector \(\mathbf{e}_{m}\) can be retrieved by means of different polarimetric focusing techniques (e.g., PolMSF, PolCapon, or PolMUSIC [7]); thus, seeking to converge to a unique solution, PolWISE is introduced as an iterative procedure via
\[\left\{\left\{\hat{b}_{\text{PolWISE}_{m_{p}}}^{[i+1]}\right\}_{p=1}^{P}=\frac{\text{tr}\left\{\mathbf{Y}_{\text{Pol}}\right\}}{\mathbf{B}_{m}^{+}\mathbf{B}_{m}}\;w_{m_{\text{max}}}^{[i]}\left\{v_{m_{\text{max}_{p}}}^{[i]}\right\}_{p=1}^{P}\right\}_{m=1}^{M},\quad i=0,\;\ldots,\;I; \tag{24}\]
with \(w_{m_{\text{max}}}\) and \(\mathbf{v}_{m_{\text{max}}}\) as in (23),
\[\left\{\mathbf{e}_{m}^{[i+1]}=\left\{\hat{b}_{\text{PolWISE}_{m_{p}}}^{[i+1]}\right\}_{p=1}^{P}\right\}_{m=1}^{M}, \tag{25}\]
and with \(\{\mathbf{e}_{m}^{[i=0]}\}_{m=1}^{M}\) recovered using a conventional polarimetric focusing technique like PolCapon in (22).

### _Regularization Parameter Selection_

As discussed previously, factor \(N_{0}\) in (2) plays the role of a regularization parameter, key to avoiding under/over regularization. This parameter is required to build the block matrix \(\hat{\mathbf{C}}\), an estimate of (21). All matrices \(\{\hat{\mathbf{R}}_{\mathbf{y}_{p}}\}_{p=1}^{P}\) inside \(\hat{\mathbf{C}}\) are assumed to share a common value \(N_{0}\). Aimed at properly selecting such a regularization parameter, previous related studies [9] recommend using the L-Curve method. Described next for the single-channel case, the L-Curve method is straightforward to extend to the polarimetric configuration.
Given a set of candidates \(\{N_{0n}\}_{n=1}^{N}\), this technique consists in forming a smooth curve (with the shape of a letter "L") by plotting a collection of points
\[L_{C}\left(N_{0n}\right)=\left[\ln\left\{\left\|\mathbf{A}\hat{\mathbf{s}}\left(N_{0n}\right)-\mathbf{y}\right\|\right\},\ln\left\{\left\|\hat{\mathbf{s}}\left(N_{0n}\right)\right\|\right\}\right] \tag{26}\]
with \(\{\hat{\mathbf{s}}\left(N_{0n}\right)=\mathbf{F}(N_{0n})\mathbf{y}\}_{n=1}^{N}\). The proper value for \(N_{0}\) corresponds to the point at the corner of the L-curve. Note that expression (26) requires a solution operator \(\mathbf{F}\) to estimate the complex reflectivity vector \(\mathbf{s}\). At the same time, this operator must be utilized to recover the PSP, such that \(\hat{\mathbf{b}}=\left\{\mathbf{F}\mathbf{y}\mathbf{y}^{+}\mathbf{F}^{+}\right\}_{\text{diag}}=\left\{\mathbf{F}\mathbf{Y}\mathbf{F}^{+}\right\}_{\text{diag}}\). For example, Tikhonov's regularization employs \(\mathbf{F}=(\mathbf{A}^{+}\mathbf{A}+N_{0}\mathbf{I})^{-1}\mathbf{A}^{+}\) [8].

The approach described previously has two main disadvantages. On one hand, the scattering vector \(\mathbf{y}\) needs to be provided; on the other hand, the employed solver (e.g., WISE) must be represented with a solution operator \(\mathbf{F}\). These two conditions might not always be satisfied. In order to avoid these issues, (26) is modified to work exclusively with data covariance matrices
\[L_{C}\left(N_{0n}\right)=\left[\ln\left\{\left\|\left\{\hat{\mathbf{R}}_{\mathbf{y}}\right\}_{\text{diag}}-\left\{\mathbf{Y}\right\}_{\text{diag}}\right\|\right\},\ln\left\{\left\|\hat{\mathbf{b}}\left(N_{0n}\right)\right\|\right\}\right]. \tag{27}\]
The polarimetric version of (27) replaces matrix \(\mathbf{Y}\) with \(\mathbf{Y}_{\text{Pol}}\) and matrix \(\hat{\mathbf{R}}_{\mathbf{y}}\) with matrix \(\hat{\mathbf{C}}\).

We refer to the algorithm in [10] to find the corner of the L-curve. Using the Menger curvature of a circumcircle, local curvatures within the L-curve are computed from three sampled points. The value assigned to \(N_{0}\) corresponds to the maximum positive curvature of the L-curve. Since this is a maximization problem, a search method (e.g., golden section) can be utilized to select \(N_{0}\) more efficiently.

Although the L-Curve method offers satisfactory approximations, the setting of \(N_{0}\) is not considered optimal. In fact, there is no known technique for the optimal selection of regularization parameters [21]. Furthermore, factor \(N_{0}\) should ideally be selected at each iteration; however, this is impractical for real scenarios because of the high computational complexity. Therefore, factor \(N_{0}\) is chosen only once, before the PolWISE iterative procedure starts.

### _Stopping Rule_

The PolWISE iterative procedure aims to converge to a unique solution, overcoming the usage of different methods to compute a first estimate of the PSP. Moreover, this strategy also aids in easing inaccuracies in the regularization parameter \(N_{0}\). Yet, a wrong number of iterations may result in under/over regularization.
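Returning briefly to the corner selection of the previous subsection, the sketch below evaluates the Menger curvature over a precomputed set of L-curve points and returns the candidate with maximum curvature. The plain grid evaluation (instead of the golden-section search mentioned above), the sign convention, and the argument names are assumptions of this illustration; the two norm arrays are expected to hold the residual and solution norms of (27) for each candidate \(N_{0n}\).

```python
import numpy as np

def menger_curvature(p1, p2, p3):
    # Signed Menger curvature of the circle through three 2-D points;
    # the sign depends on the traversal order of the candidates (assumption)
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    d12 = np.hypot(*(p2 - p1))
    d23 = np.hypot(*(p3 - p2))
    d13 = np.hypot(*(p3 - p1))
    return 2.0 * cross / (d12 * d23 * d13 + 1e-30)

def lcurve_select_n0(candidates, residual_norms, solution_norms):
    # L-curve points in log-log coordinates, as in (27)
    pts = np.column_stack((np.log(residual_norms), np.log(solution_norms)))
    curvature = np.full(len(candidates), -np.inf)
    for n in range(1, len(candidates) - 1):
        curvature[n] = menger_curvature(pts[n - 1], pts[n], pts[n + 1])
    # Corner = candidate with maximum positive curvature
    return candidates[int(np.argmax(curvature))]
```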
Perturbation errors (i.e., \\(\\mathbf{y}+\\Delta\\mathbf{y}\\), \\(\\mathbf{Y}+\\Delta\\mathbf{Y}\\)) might not be appropriately suppressed or/and errors due to regularization might be introduced Taking the latter into account, a stopping rule is employed to detect convergence, avoiding unnecessary iterations. Correspondingly, the PolWISE iterative procedure terminates after reaching a solution that maximizes the KL information function [11, 13, Appendix C] \\[\\int p\\left(\\mathbf{y}\\right)\\ln\\left\\{\\frac{p\\left(\\mathbf{y}\\right)}{\\hat{p} \\left(\\mathbf{y}\\right)}\\right\\}d\\mathbf{y}, \\tag{28}\\] which measures the discrepancy between the true probability density function (pdf) \\(p(\\mathbf{y})\\) and the pdf of the data model \\(\\hat{p}(\\mathbf{y})\\). The maximum likelihood (ML) solution to the TomoSAR problem is given via [8, 9] \\[\\hat{\\mathbf{b}}_{\\text{ML}}=\\underset{\\mathbf{b}}{\\text{argmin}}\\left\\{\\ln \\left\\{p\\left(\\mathbf{y}\\left|\\mathbf{b}\\right.\\right)\\right\\}\\right\\}, \\tag{29}\\] with the log-likelihood function \\[\\ln\\,\\left\\{p\\left(\\mathbf{y}\\left|\\mathbf{b}\\right.\\right)\\right\\}=\\ln\\left\\{ \\det\\left\\{\\mathbf{R}_{\\mathbf{y}}\\right\\}\\right\\}+\\mathbf{y}^{+}\\mathbf{R}_{ \\mathbf{y}}^{-1}\\mathbf{y}. \\tag{30}\\] Using the property \\(\\mathbf{y}^{+}\\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{y}=\\operatorname{tr}\\left\\{ \\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{y}\\mathbf{y}^{+}\\right\\}=\\operatorname{tr }\\left\\{\\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{Y}\\right\\}\\), the usage of the scattering vector \\(\\mathbf{y}\\) is avoided, such that \\[\\ln\\,\\left\\{p\\left(\\mathbf{y}\\left|\\mathbf{b}\\right.\\right)\\right\\}=\\ln\\left\\{ \\det\\left\\{\\mathbf{R}_{\\mathbf{y}}\\right\\}\\right\\}+\\operatorname{tr}\\left\\{ \\mathbf{R}_{\\mathbf{y}}^{-1}\\mathbf{Y}\\right\\}. \\tag{31}\\] Note that the true correlation matrix \\(\\mathbf{R}_{\\mathbf{y}}\\) is unknown and must be estimated. Finally, the polarimetric version of (31) replaces matrix \\(\\mathbf{Y}\\) with \\(\\mathbf{Y}_{\\text{Pol}}\\) and matrix \\(\\hat{\\mathbf{R}}_{\\mathbf{y}}\\) with matrix \\(\\hat{\\mathbf{C}}\\). Making use of (31), AIC [11, 13, Appendix C] is employed to maximize the KL information function in (28) by minimizing \\[\\mathrm{AIC}\\,\\left(\\hat{\\mathbf{b}}^{[i]}\\right)=-\\ln\\left\\{p \\left(\\mathbf{y}\\left|\\hat{\\mathbf{b}}^{[i]}\\right.\\right)\\right\\}+i,\\] \\[i=1,\\;\\ldots,\\;I\\;. \\tag{32}\\] In a similar manner, BIC [11, 13, Appendix C] is defined as the minimizer of \\[\\mathrm{BIC}\\,\\left(\\hat{\\mathbf{b}}^{[i]}\\right) = -\\ln\\left\\{p\\left(\\mathbf{y}\\left|\\hat{\\mathbf{b}}^{[i]}\\right. \\right)\\right\\}+0.5\\cdot i\\cdot\\ln\\left\\{L\\right\\}, \\tag{33}\\] \\[i = 1,\\;\\ldots,\\;I;\\] whereas EDC [22] as the minimizer of \\[\\mathrm{EDC}\\,\\left(\\hat{\\mathbf{b}}^{[i]}\\right) = -\\ln\\left\\{p\\left(\\mathbf{y}\\left|\\hat{\\mathbf{b}}^{[i]}\\right. \\right)\\right\\}+i\\cdot\\sqrt{L\\cdot\\ln\\left\\{L\\right\\}},\\] \\[i = 1,\\;\\ldots,\\;I; \\tag{34}\\] where \\(L\\) is the number of flight tracks. ### _Summary_ The implementation of PolWISE is summarized as follows. _Step 1:_: Recover \\(\\{\\mathbf{e}_{m}^{[i=0]}\\}_{m=1}^{M}\\) by means of a conventional polarimetric technique like PolMSF, PolCapon, or PolMUSIC. By instance, the usage of PolCapon is exemplified in (22). 
_Step 2:_: Given \\(\\{\\mathbf{e}_{m}^{[i=0]}\\}_{m=1}^{M}\\), a set of candidates \\(\\{N_{0}\\}_{n=1}^{N}\\) in (2) and the corresponding set of estimates \\(\\{\\hat{\\mathbf{C}}_{n}\\}_{n=1}^{N}\\) in (21), select a suitable value \\(N_{0}\\) using the polarimetric version of the L-Curve method in (27). Matrix \\(\\mathbf{Y}_{\\text{Pol}}\\) is constructed similar to (5), while the set \\(\\{\\hat{\\mathbf{b}}(N_{0})\\}_{n=1}^{N}\\) is calculated as in (19) with \\(w_{m_{\\text{max}}}\\) from (23). We recommend utilizing the algorithm in [10] to find the corner of the L-curve. _Step 3:_: Once \\(N_{0}\\) has been selected, perform PolWISE in (24) using the previously computed \\(\\{\\mathbf{e}_{m}^{[i=0]}\\}_{m=1}^{M}\\)as first input. _Step 4:_: Compute AIC in (32), BIC in (33), or EDC in (34) after each PolWISE iteration. Terminate the iterative procedure when the obtained values increase steadily. The minimum obtained value indicates the most appropriate PolWISE estimate \\(\\{\\hat{\\mathbf{b}}_{\\text{PolWISE}_{p}}\\}_{p=1}^{P}\\) in (24). We recommend setting a maximum number of iterations. ## VI Simulations This section assesses the capabilities of PolWISE for the single, dual and full channel cases via simulations. Sample covariance matrices \\(\\{\\mathbf{Y}_{\\text{Pol}}\\}_{m=1}^{M}\\) are constructed using \\(J=300\\) independent realizations. They gather the echoes from the scatterers displaced along the PLOS height direction. The simulated scenes comprise different number of targets, each target gathers 100 scatterers with equal reflectivity, following a Gaussian distribution. In this way, statistical uncertainty is introduced with each independent realization and we do not rely on additive noise to introduce decorrelation. Moreover, the location of the phase-centers corresponds to the mean values. PolWISE requires a first estimate of the polarimetric PSP as zero iteration. Thus, we refer to PolCapon in (22) for such a purpose. The results presented down below (see Figs. 2-8) include the retrievals from PolCapon, PolMUSIC [7], and PolWISE (as an enhancement of PolCapon). PolMUSIC is computed utilizing a MO equal to the number of targets; whereas, PolWISE is computed using BIC as stopping rule. The quality of attained results is quantified with two metrics as follows. 1. _Root mean square error (RMSE):_ When all phase-centers are detected, the RMSE between the true locations \\(\\{\\hat{z}_{h}\\}_{h=1}^{H}\\) and the ones found \\(\\{\\widehat{z}_{h}\\}_{h=1}^{H}\\) is calculated via \\[\\text{RMSE}\\,\\left(\\hat{\\mathbf{z}},\\,\\,\\widehat{\\hat{\\mathbf{z}}}\\right)=\\sqrt{ \\sum_{h=1}^{H}\\frac{\\left(\\hat{z}_{h}-\\widehat{z}_{h}\\right)^{2}}{H}}.\\] (35)Phase-centers are recovered from the local maxima along the normalized (0 to 1) estimated pseudopower. 2. _Detection rate (DR):_ Given as a percentage, represents the number of times that all phase centers are detected. Those trials with a RMSE larger than 1.5 m are ignored. We consider a L-band SAR sensor (0.23 m wavelength) at a nominal altitude of 3000 m. The TomoSAR geometry accounts for 15 evenly distributed flight tracks, spanning a tomographic aperture of 120 m. The slant range distance from the targets to the master track is about 5000 m, with a Fourier resolution \\(\\rho_{\\text{PLOS}}\\) of approximately 4.8 m. The height range of interest \\(Z_{\\text{PLOS}}\\) (see Fig. 
1) is discretized with steps \(\Delta z=\rho_{\text{PLOS}}/\varsigma\), where \(\varsigma\) is an oversampling factor greater than or equal to one. For \(\varsigma=50\), we make use of \(M=290\) samples within a \(Z_{\text{PLOS}}\) from \(-7\) m to 21 m. Table I summarizes the case study; it includes the number of simulated targets and the PLOS location of their phase-centers. Spread refers to the standard deviation of the considered Gaussian distributions.

### _Single Channel (Channel 1)_

For a signal-to-noise ratio (SNR) of 10 dB and 500 Monte Carlo trials, PolMUSIC attains an average RMSE of 0.08 m and a DR of 100%, whereas PolWISE achieves an average RMSE of 0.62 m and a DR of 97%. Conversely, PolCapon is not able to discriminate all phase-centers; therefore, its RMSE and DR cannot be computed. Fig. 2 shows a single trial out of the 500, for PolCapon, PolMUSIC, and PolWISE.

### _Dual Channel (Channels 1 and 3)_

For an SNR of 15 dB and 500 Monte Carlo trials, PolMUSIC attains an average RMSE of 0.04 m and a DR of 100%, whereas PolWISE achieves an average RMSE of 0.20 m and a DR of 100%. On the other hand, PolCapon is not able to discriminate all phase-centers; therefore, its RMSE and DR cannot be calculated. Figs. 3-5 show a single trial out of the 500, for PolCapon, PolMUSIC, and PolWISE, respectively.

### _Full Channel_

For an SNR of 20 dB and 500 Monte Carlo trials, PolWISE achieves an average RMSE of 0.24 m and a DR of 100%. The RMSE and DR of PolCapon and PolMUSIC cannot be calculated, as they are not able to detect all phase-centers. Figs. 6-8 show a single trial out of the 500, for PolCapon, PolMUSIC, and PolWISE, correspondingly.

### _Different SNR_

Aimed at analyzing the influence of the SNR on the RMSE and DR measurements, 500 Monte Carlo trials are performed for different SNR. For the single, dual, and full channel cases (as in Table I), SNRs of 0, 5, 10, 15, 20, and 25 dB are considered. PolWISE is computed using three stopping rules (i.e., AIC, BIC, and EDC) and a maximum number of iterations \(I=150\). Figs. 9-11 depict the attained results.

### _Discussion_

As seen in Figs. 9-11, PolCapon is not able to discriminate all targets for SNR below 20 dB in the single and dual channel cases. Moreover, PolCapon does not detect all targets for any of the considered SNRs in the full channel case. The latter highlights one of the main advantages of PolWISE as a postprocessing step, i.e., resolution enhancement. For instance, for SNR above 5 dB, PolWISE detects all targets in most of the trials of the single channel case, with a DR above 95%. In general, the average RMSE decreases with the SNR, whereas the DR increases.

All employed stopping rules (i.e., AIC, BIC, and EDC) detect convergence of the PolWISE iterative procedure successfully. The trend of the curves is independent of the stopping rule. It is important to remark that the number of iterations needed to achieve convergence increases with the number of channels. For instance, for an SNR of 15 dB, PolWISE requires about 30 iterations for single channel, whereas dual and full channel entail approximately 70 and 100 iterations, respectively. Dual and full channel require more iterations to prevent the false occurrence of backscattering sources; the more iterations, the more refined the result. In order to attain a DR close to 100%, an increasingly high SNR is needed for the single, dual, and full channel cases.
Accordingly, PolWISE requires an SNR above 5 dB for single channel, above 15 dB for dual channel, and above 20 dB for full channel. Note that PolWISE detects point-type like targets more easily than distributed targets. For the full channel case, which includes distributed targets, significantly more SNR is required in order to discriminate all phase-centers. For lower SNR, PolWISE treats some distributed targets as noise, suppressing them. PolWISE does not preserve radiometric accuracy, as it only focusses on locating the positions from where most backscattering comes. Moreover, PolWISE is not capable of recovering the contour of the true PSP, i.e., how all scatterers distribute along the PLOS height direction. Observe the output in Fig. 8(d); for an adequate SNR, PolWISE estimates all phase-centers (including those of the distributed targets) but not the outline of the actual PSP.

PolMUSIC performs better than PolWISE in the single channel case for SNR above 10 dB and also in the dual channel case for SNR above 5 dB. In both the single and dual channel cases, PolMUSIC attains a DR of 100% and an average RMSE below 0.1 m for SNR above 10 dB. Note that, for an SNR of 5 dB, PolMUSIC achieves a higher DR for dual channel than for single channel. PolMUSIC seems to benefit from a higher MO in the dual channel case. PolWISE performs better than PolMUSIC for full channel, as PolMUSIC is not able to detect all targets for all considered SNRs. As expected, PolMUSIC functions better with point-type targets than with distributed targets.

In general, the performance of PolWISE depends on the accuracy of its first input, retrieved in this article via PolCapon. The higher RMSE attained in the single and dual channel cases (compared to PolMUSIC) is due to those pairs of targets that PolCapon detects as a single target, specifically, those located at \(-3.5\) m and \(-2\) m in the first channel [see Fig. 2(a)] and those located at 16 m and 17.3 m in the third channel [see Fig. 3(b)]. In such instances, the information provided by PolCapon is not accurate enough to retrieve results like those of PolMUSIC. This aspect also impacts the DR obtained, as the information given by PolCapon may not always be sufficient to discriminate all targets among the different trials. As can be seen in Figs. 9 and 10, the results achieved by PolWISE approach those of PolMUSIC for higher SNR. Although the iterative procedure of PolWISE seeks to converge to a unique solution, the simulations indicate that an adequate SNR is also necessary to achieve that goal.

Fig. 2: Single channel. (a) PolCapon. (b) PolMUSIC. (c) PolWISE.
Fig. 3: PolCapon dual channel. (a) First channel. (b) Third channel. (c) Combined channels.
Fig. 4: PolMUSIC dual channel. (a) First channel. (b) Third channel. (c) Combined channels.
Fig. 5: PolWISE dual channel. (a) First channel. (b) Third channel. (c) Combined channels.
Fig. 6: PolCapon full channel. (a) First channel. (b) Second channel. (c) Third channel. (d) Combined channels.
Fig. 7: PolMUSIC full channel. (a) First channel. (b) Second channel. (c) Third channel. (d) Combined channels.
Fig. 8: PolWISE full channel. (a) First channel. (b) Second channel. (c) Third channel. (d) Combined channels.
Fig. 9: Single channel. (Left) Average RMSE against SNR. (Right) DR against SNR.
Fig. 10: Dual channel. (Left) Average RMSE against SNR. (Right) DR against SNR.
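As a reference for the Monte Carlo analysis above, the two quality metrics can be computed as in the following sketch. The simple local-maxima peak search and all function names are assumptions of this illustration; the RMSE follows (35), and the 1.5 m rejection threshold follows the DR definition given earlier.

```python
import numpy as np

def detect_phase_centers(psp, z, n_targets):
    # Local maxima of the normalized (0 to 1) pseudopower; keep the strongest ones
    p = psp / (psp.max() + 1e-30)
    peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] >= p[i + 1]]
    peaks = sorted(peaks, key=lambda i: p[i], reverse=True)[:n_targets]
    return np.sort(np.asarray(z)[peaks])

def rmse(z_true, z_est):
    # Eq. (35): RMSE between true and estimated phase-center locations
    return np.sqrt(np.mean((np.asarray(z_true) - np.asarray(z_est)) ** 2))

def detection_rate(per_trial_estimates, z_true, max_rmse=1.5):
    # Percentage of trials in which all phase-centers are found within 1.5 m RMSE
    hits = sum(1 for z_est in per_trial_estimates
               if len(z_est) == len(z_true) and rmse(z_true, z_est) <= max_rmse)
    return 100.0 * hits / len(per_trial_estimates)
```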
As described in [12], MUSIC improves its performance with the use of dedicated MO selection tools, such as those based on information theoretic criteria. In addition, WISE might be employed as a postprocessing step to enhance MUSIC's resolution. Applying this strategy to the polarimetric case is beyond the scope of this article, whose main objective is to introduce PolWISE. Nevertheless, future work plans to use PolWISE in combination with PolMUSIC.

PolWISE has higher computational complexity than conventional methods like PolCapon and PolMUSIC. For instance, it requires the inversion of matrix \(\hat{\mathbf{C}}\) in (21), whose dimensions increase with the number of channels. Furthermore, PolWISE entails the selection of factor \(N_{0}\) (chosen via the L-Curve method), which also involves the inversion of matrix \(\hat{\mathbf{C}}\). In addition, the need for more iterations to converge as the number of channels grows increases the processing time considerably.

Fig. 11: Full channel. (Left) Average RMSE against SNR. (Right) DR against SNR.

## VII Experimental Results

Experiments are conducted using a fully polarimetric TomoSAR dataset at L-band, acquired in 2015 in Munich, Germany, by the Jet Propulsion Laboratory (JPL)/National Aeronautics and Space Administration (NASA) [23]. The uninhabited aerial vehicle SAR (UAVSAR) system of JPL/NASA was mounted on a Gulfstream G-III airplane at a nominal altitude of 12.5 km, attaining a swath of 22 km and a length of 60 km. Incidence angles range from 25\({}^{\circ}\) to 65\({}^{\circ}\), whereas the noise equivalent sigma-zero ranges from -35 dB to -53 dB across the swath [24]. For 80 MHz chirp bandwidth and a wavelength of 0.24 m, the acquired single look complex (SLC) images reach a resolution of 0.8 m in azimuth and 1.66 m in range [23]. The TomoSAR acquisition geometry considers seven passes at different flight altitudes, as depicted in Table II. Fig. 12 shows the primary image. The imagery was acquired on a heading of 193\({}^{\circ}\), with a vertical Fourier resolution of approximately 6 m in far range and 2.8 m in near range.

Two regions of interest (ROIs) are considered, where the Maximilianeum and the Bavarian state chancellery are located, respectively. Both edifices are almost parallel to the sensor trajectory. Fig. 13 displays the intensity images of the first ROI for channels HH, HV, and VV. The red rectangles indicate where the building is placed, whereas the azimuth and range indices permit identifying its position. The red line crossing the building specifies the orientation, along the azimuth-range axes, of the tomograms presented afterward. Fig. 14 describes the test region via a Google Earth image and a polarimetric SLC SAR image. The heights of the structures comprising the edifice are also specified.

Figs. 15-17 show the corresponding tomograms, employing PolCapon, PolMUSIC, and PolWISE, respectively. The _X_-axis (azimuth) is given in samples to easily identify the area from where the tomograms originate. The MO for MUSIC is set manually to 3. BIC in (33) is utilized to terminate PolWISE. A 5 \(\times\) 10 (range/azimuth) boxcar filter is employed to spatially average the covariance matrices. Pseudopower is presented in a dB scale, where 0 dB refers to the maximum attained value. \(Z_{\text{PLOS}}\) is discretized with steps \(\Delta z=2.8\) m / 10 = 0.28 m. Thus, we make use of \(M=232\) samples within a \(Z_{\text{PLOS}}\) from \(-20\) m to 45 m.
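The 5 \(\times\) 10 boxcar multilooking of the covariance matrices mentioned above can be sketched as follows. The (range, azimuth, L, L) layout of the single-look covariance stack and the function name are assumptions of this illustration.

```python
import numpy as np

def boxcar_multilook(cov_stack, win_rg=5, win_az=10):
    # Average the per-pixel L x L covariance matrices over a spatial window;
    # cov_stack is assumed to have shape (n_rg, n_az, L, L)
    n_rg, n_az = cov_stack.shape[:2]
    out = np.empty_like(cov_stack)
    for i in range(n_rg):
        i0, i1 = max(0, i - win_rg // 2), min(n_rg, i + win_rg // 2 + 1)
        for k in range(n_az):
            k0, k1 = max(0, k - win_az // 2), min(n_az, k + win_az // 2 + 1)
            out[i, k] = cov_stack[i0:i1, k0:k1].mean(axis=(0, 1))
    return out
```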
Since the addressed polarimetric focusing techniques allow retrieving the contributions from each polarimetric channel, it is possible to perform the eigenvector analysis of the corresponding 3 \\(\\times\\) 3 coherency matrices. Such analysis \"provides a basis invariant description of the scatterer with a specific decomposition into types of scattering processes (the eigenvectors) and their relative magnitudes (the eigenvalues)\" [25]. Among the mean parameters of the dominant scattering mechanism, extracted from the coherency matrices, there is the alpha mean angle \\(\\bar{\\alpha}\\). Values of \\(\\bar{\\alpha}\\) close to 0deg suggest single-bounce reflection, \\(\\bar{\\alpha}\\) values close to 45deg suggest volume reflection, and \\(\\bar{\\alpha}\\) values close to 90deg suggest double-bounce reflection. Fig. 18 shows the scattering patterns retrieved from the \\(\\bar{\\alpha}\\) of the dominant reflector [25, Chapter 7] for PolCapon, PolMUSIC, and PolWISE. A mask based on a threshold of the pseudopower is applied to set to color black those samples with low backscattering values [7]. The aim of such analysis is to show that further studies can be performed with the retrievals of focusing techniques, which have been extended to the polarimetric configuration. Besides \\(\\bar{\\alpha}\\), parameters like the entropy (\\(H\\)) and anisotropy (\\(A\\)) might be also obtained [25, Chapter 7]. A similar strategy is followed for the second ROI. Fig. 19 displays the corresponding intensity images for channels HH, HV, and VV. Fig. 20 describes the test region and specifies the height of the structures comprising the Bavarian state channelley. Figs. 21-23 present the corresponding tomograms for Fig. 12: SLC SAR image of the test site in Munich, Germany, 2015 (near range on top). Colors correspond to channels HH (red), VV (blue), and HV (green). Fig. 13: Intensity images in dB from the ROI where the Maximilianeum is located. (a) HH. (b) HV. (c) VV. PolCapon, PolMUSIC, and PolWISE, respectively. \\(Z_{\\text{PLOS}}\\) is discretized with steps \\(\\Delta z=2.8\\) m/10 = 0.28 m. Thus, we make use of \\(M=214\\) samples within a \\(Z_{\\text{PLOS}}\\) from \\(-15\\) m to \\(45\\) m. Fig. 24 shows the corresponding attained \\(\\bar{\\alpha}\\) values. ### _Discussion_ Contrasting PolCapon, PolMUSIC and PolWISE, at first glance we can assess that PolWISE attains finer resolution, besides of performing ambiguity reduction and artefacts suppression. These enhancements facilitate estimating the height of reflectors, extracted from the local maxima along the recovered polarimetric pseudopower. The several structures comprising the edifices are easier to discern. To a limited extent, PolWISE retrieves sufficiently narrow peaks to identify numerous components at different height locations. In the case of the Maximilaineum in Fig. 17, observe the two towers at the extremes, the central building and one wing on either side. Whereas, in the case of the Bavarian state chancellery in Fig. 23, note the central building (where the dome is placed) and both wings, one on each side. The topography on the flanks of both edifices can be also observed. PolMUSIC achieves a finer resolution than PolCapon. Moreover, it is worth recalling that the use of specific MO selection tools [12] may increase the performance of PolMUSIC. In addition, PolMUSIC can be combined with PolWISE to improve the resolution. 
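The eigenvector analysis behind the alpha mean angle can be sketched as follows, under the standard Cloude-Pottier convention in which the magnitude of the first Pauli component of each eigenvector gives the cosine of its alpha angle. The entropy is included only as an example of the additional parameters mentioned above, and the function name is an assumption of this illustration.

```python
import numpy as np

def alpha_mean_entropy(T):
    # Eigendecomposition of a 3 x 3 Hermitian coherency matrix T
    w, V = np.linalg.eigh(T)
    w = np.clip(w.real, 0.0, None)
    p = w / (w.sum() + 1e-30)                      # pseudo-probabilities
    # Alpha angle of each eigenvector from its first Pauli component
    alphas = np.arccos(np.clip(np.abs(V[0, :]), 0.0, 1.0))
    alpha_mean = np.degrees(np.sum(p * alphas))    # mean alpha angle [deg]
    entropy = -np.sum(p * np.log(p + 1e-30)) / np.log(3.0)
    return alpha_mean, entropy
```

In line with the interpretation used above, returned mean angles near 0\({}^{\circ}\) point to single-bounce, values near 45\({}^{\circ}\) to volume, and values near 90\({}^{\circ}\) to double-bounce scattering.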
The polarimetric versions of Capon, MUSIC, and WISE are employed in order to separate the associated scattering mechanisms into three channels. The first channel contains mainly double bounce-reflection, the second channel volume reflection and the third channel single-bounce reflection. PolCapon and PolMUSIC provides information on certain structures in all channels, as shown in color white in Figs. 15(a), 16(a), 21(a), and 22(a). In contrast, PolWISE [see Figs. 17(a) and 23(a)] does not share information about the same structure among channels. The pseudopower profiles of each channel complement each other but do not overlap. In line with the simulations, PolMUSIC and PolWISE perform better for point-type like targets, commonly associated to polarizations HH and VV. Note that for both edifices, Maximilaineum in Fig. 13 and Bavarian state chancellery in Fig. 19, polarization HH attains highest intensity levels; followed by VV and HV, in that order. Contrariwise, PolWISE filters out most of the contributions in HV, usually associated to distributed targets. Due to lower backscattering, some distributed targets are suppressed, since they are considered noise. At each iteration, PolWISE pursue refining its current input and achieving super-resolution. The method concentrates first on those backscattering sources with higher energy and, sometimes, ignores (suppresses) those targets with lower backscattering. Fig. 14: Region where the Maximilaineum is placed. (a) Google Earth image. (b) Polarimetric SLC SAR image [colors correspond to channels HH (red), VV (blue), and HV (green)]. (c) Front view of the edifice (Google Earth). Figure 15: PolCapon tomograms from the ROI in Fig. 13. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection. Fig. 16: PoIMUSIC tomograms from the ROI in Fig. 13. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection. Figure 17: PolWISE tomograms from the ROI in Fig. 13. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection. Recall that a proper selection of PolWISE's regularization parameter is key to obtain good-fitted solutions. Nevertheless, due to practical reasons and to reduce computation time, such regularization parameter is chosen only once, before the iterative procedure starts. Theoretically, selecting the regularization parameter at each iteration increases the performance of PolWISE, also in the case of distributed targets; nonetheless, processing time increases meaningfully, making this approach unpractical. In contrast to PolCapon and PolMUSIC, as seen in Figs. 18(c) and 24(c), thanks to the characteristics of PolWISE, it is easier to categorize the places where a certain type of scattering mechanism is present. In the case of the Maximilianneum, double-bounce reflection prevails at the wall-ground interaction points below the central building. Scattering at the top of the edifice and numerous targets along the walls, also correspond to double-bounce reflection. Note that the topography on the Fig. 
Fig. 19: Intensity images in dB from the ROI where the Bavarian state chancellery is located. (a) HH. (b) HV. (c) VV.

Single-bounce reflection occurs in the roof of the central building and along the walls, especially in the central building. Unlike the edifice wings, which join the central building to the towers at the extremes, the central building possesses large glazed windows, where most single-bounce reflection concentrates. Finally, volume reflection mainly occurs on the flanks of the edifice, where the vegetation is located.

In the case of the Bavarian state chancellery, backscattering from wall-ground interactions at the topographic height of about 0 m indicates double-bounce reflection. Scattering at the top of the edifice, targets at the columns and at the stairs of the central building, and targets at the dome above the central building also correspond to double-bounce reflection. Single-bounce reflection occurs in the roof, in the dome, and mainly along the glazed walls of both wings, on either side of the edifice. The topography on the left-hand side is characterized by single-bounce reflection, whereas the topography on the right-hand side is characterized by both single- and double-bounce reflection. Note that most volume reflection pertains to the right flank of the edifice. This suggests double-bounce reflection due to tree-trunk-ground interactions, differing from the left flank. Finally, observe the presence of another edifice at the edge, on the right. Scattering at the top of this edifice corresponds to double-bounce reflection, whereas single-bounce reflection occurs in the roof and in some targets along its walls.

Methods like PolCapon, PolMUSIC, and PolWISE increase the dimension of the observation space, aimed at calculating optimal polarization combinations [7]. Particularly, PolWISE is considered optimal in the sense of the WCF. Nevertheless, in order to properly solve the WCF optimization problem, PolWISE relies on the correct selection of a regularization parameter. Setting the regularization parameter wrongly might cause misleading solutions. Furthermore, due to the nature of regularization approaches like PolWISE, some information may be filtered out after the iterative procedure is terminated. Therefore, it is recommended to treat PolWISE results as information complementary to its input, in this case PolCapon. The current research makes use of the L-Curve method [21] to select the involved regularization parameter, showing satisfactory results. Nevertheless, the L-Curve method depends on a correct range and density of sample values to perform adequately. Further research is underway to ensure proper selection of regularization parameters at all times, regardless of these factors. We are currently studying how this issue is addressed in other areas, for example in geophysical diffraction tomography [26].

Fig. 20: Region where the Bavarian state chancellery is located. (a) Google Earth image. (b) Polarimetric SLC SAR image [colors correspond to channels HH (red), VV (blue), and HV (green)]. (c) Front view of the edifice (Google Earth).

Fig. 21: PolCapon tomograms from the ROI in Fig. 19. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection.

Fig. 22: PolMUSIC tomograms from the ROI in Fig. 19. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection.
Fig. 23: PolWISE tomograms from the ROI in Fig. 19. (a) Lexicographic [Red (first Channel), Green (second Channel), Blue (third Channel)]. (b) Pauli. (c) First Channel: double-bounce reflection. (d) Second Channel: volume scattering. (e) Third Channel: single-bounce reflection.

## VIII Conclusion

This article introduces PolWISE, a super-resolution technique, which reduces the WCF optimization problem to the selection of a regularization parameter. When such regularization parameter is chosen appropriately, for instance via the L-Curve method, PolWISE provides good-fitted results. PolWISE attains finer resolution given lower resolution imagery, obtained, for example, using PolCapon. Furthermore, it also achieves ambiguity reduction and artefact suppression. These enhancements facilitate the estimation of the height of reflectors (one of the main goals of TomoSAR) and the analyses of the scattering processes occurring in the illuminated scene.

PolWISE is an iterative approach, which holds higher computational complexity than conventional techniques like PolCapon and PolMUSIC. Moreover, the number of iterations required to converge increases with the number of channels, augmenting processing time. Therefore, the usage of a stopping rule (e.g., AIC, BIC, and EDC) is recommended to terminate the iterative procedure immediately after convergence is reached. This also prevents under/over regularization due to a wrong number of iterations.

PolWISE performs better in urban scenarios characterized by point-like targets. A higher SNR is needed to discern the phase centers of distributed targets. For the full channel simulated scenario in Table I, an SNR above 20 dB is required to discriminate all phase centers. For lower SNR, PolWISE treats some distributed targets as noise, suppressing them.

PolWISE results split into three channels: the first channel refers to double-bounce reflection, the second channel to volume reflection, and the third channel to single-bounce reflection. The pseudopower profiles of each channel complement each other but do not overlap. Thanks to the characteristics of PolWISE, it is easier to label the locations of a certain type of scattering mechanism by computing, for example, the polarimetric indicator alpha mean angle \\(\\bar{\\alpha}\\). Nonetheless, we suggest treating PolWISE results as complementary information, since some information might be filtered out.

Fig. 24: Bavarian state chancellery: alpha mean angle \\(\\bar{\\alpha}\\) in degrees. (a) PolCapon. (b) PolMUSIC. (c) PolWISE.

## Acknowledgment

The authors would like to thank Dr. S. Hensley from JPL/NASA for providing the UAVSAR data utilized in the reported experiments.

## References

* [1] S. R. Cloude and E. Pottier, "A review of target decomposition theorems in radar polarimetry," _IEEE Trans. Geosci. Remote Sens._, vol. 34, no. 2, pp. 498-515, Mar. 1996.
* [2] S. R. Cloude and E. Pottier, "An entropy-based classification scheme for land applications of polarimetric SAR," _IEEE Trans. Geosci. Remote Sens._, vol. 35, no. 1, pp. 68-78, Jan. 1997.
* [3] P. A. Rosen et al., "Synthetic aperture radar interferometry," _Proc. IEEE_, vol. 88, no. 3, pp. 333-382, Mar. 2000.
* [4] G. Krieger et al., "TanDEM-X: A satellite formation for high-resolution SAR interferometry," _IEEE Trans. Geosci. Remote Sens._, vol. 45, no. 11, pp. 3317-3341, Nov. 2007.
* [5] A. Reigber and A. Moreira, "First demonstration of airborne SAR tomography using multibaseline L-band data," _IEEE Trans. Geosci. Remote Sens._, vol. 38, no. 5, pp. 2142-2152, Sep. 2000.
* [6] S. R. Cloude and K. P. Papathanassiou, "Polarimetric SAR interferometry," _IEEE Trans. Geosci. Remote Sens._, vol. 36, no. 5, pp. 1551-1565, Sep. 1998.
* [7] S. Sauer, L. Ferro-Famil, A. Reigber, and E. Pottier, "Three-dimensional imaging and scattering mechanism estimation over urban scenes using dual-baseline polarimetric InSAR observations at L-band," _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 11, pp. 4616-4629, Nov. 2011.
* [8] G. Martin del Campo, M. Nannini, and A. Reigber, "Statistical regularization for enhanced TomoSAR imaging," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 13, pp. 1567-1589, 2020.
* [9] G. D. Martin-del-Campo-Becerra, S. A. Serafin-Garcia, A. Reigber, and S. Ortega-Cisneros, "Parameter selection criteria for TomoSAR focusing," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 14, pp. 1580-1602, 2021.
* [10] A. Cultrera and L. Callegaro, "A simple algorithm to find the L-curve corner in the regularization of inverse problems," _IOP SciNotes_, vol. 1, no. 2, pp. 1-6, Aug. 2020.
* [11] P. Stoica and Y. Selen, "Model-order selection: A review of information criterion rules," _IEEE Signal Process. Mag._, vol. 21, no. 4, pp. 36-47, Jul. 2004.
* [12] G. D. Martin-del-Campo-Becerra, S. A. Serafin-Garcia, A. Reigber, S. Ortega-Cisneros, and M. Nannini, "Resolution enhancement of spatial parametric methods via regularization," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 14, pp. 11335-11351, 2021.
* [13] P. Stoica and R. L. Moses, _Spectral Analysis of Signals_, vol. 1. Upper Saddle River, NJ, USA: Prentice-Hall, 2005.
* [14] F. Gini, F. Lombardini, and M. Montanari, "Layover solution in multibaseline SAR interferometry," _IEEE Trans. Aerosp. Electron. Syst._, vol. 38, no. 4, pp. 1344-1356, Oct. 2002.
* [15] H. H. Barrett and K. J. Myers, _Foundations of Image Science_. New York, NY, USA: Wiley, 2004.
* [16] M. Nannini, R. Scheiber, and A. Moreira, "Estimation of the minimum number of tracks for SAR tomography," _IEEE Trans. Geosci. Remote Sens._, vol. 47, no. 2, pp. 531-543, Feb. 2009.
* [17] M. Schmitt and U. Stilla, "Maximum-likelihood-based approach for single-pass synthetic aperture radar tomography over urban areas," _IET Radar, Sonar Navigation_, vol. 8, no. 9, pp. 1145-1153, Apr. 2014.
* [18] F. Lombardini and F. Viviani, "Radiometrically robust superresolution tomography: First analyses," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2016, pp. 24-27.
* [19] P. Stoica, P. Babu, and J. Li, "New method of sparse parameter estimation in separable models and its use for spectral analysis of irregularly sampled data," _IEEE Trans. Signal Process._, vol. 59, no. 1, pp. 35-47, Jan. 2011.
* [20] P. Stoica, P. Babu, and J. Li, "SPICE: A sparse covariance-based estimation method for array processing," _IEEE Trans. Signal Process._, vol. 59, no. 2, pp. 629-638, Feb. 2011.
* [21] J. L. Mueller and S. Siltanen, _Linear and Nonlinear Inverse Problems With Practical Applications_, vol. 10. Philadelphia, PA, USA: SIAM, 2012.
* [22] F. Lombardini and F. Gini, "Model order selection in multi-baseline interferometric radar systems," _EURASIP J. Adv. Signal Process._, vol. 2005, no. 20, pp. 3206-3219, 2005.
* [23] S. Hensley et al., "UAVSAR tomography of Munich," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2019, pp. 1140-1143.
* [24] C. E. Jones, B. Minchew, B. Holt, and S. Hensley, "Studies of the Deepwater Horizon oil spill with the UAVSAR radar," in _Monitoring and Modeling the Deepwater Horizon Oil Spill: A Record-Breaking Enterprise_, Y. Liu, A. MacFadyen, Z.-G. Ji, and R. H. Weisberg, Eds. Washington, DC, USA, Aug. 2011, pp. 33-50.
* [25] J. S. Lee and E. Pottier, _Polarimetric Radar Imaging: From Basics to Applications_. Boca Raton, FL, USA: CRC, 2017.
* [26] E. T. F. Santos and A. Bassrei, "L- and \\(\\Theta\\)-curve approaches for the selection of regularization parameter in geophysical diffraction tomography," _Comput. Geosci._, vol. 33, no. 5, pp. 618-629, May 2007.
\\begin{tabular}{c c} & Gustavo Daniel Martin-del-Campo-Becerra received the Engineering degree in electronics and communications engineering from the University of Guadalajara, Guadalajara, Mexico, in 2008, and the M.Sci. and Dr.Sci. (Ph.D. equivalent) degrees in electrical engineering (_with specialization in telecommunications_) from the Center for Research and Advanced Studies (Cinvestav), National Polytechnic Institute, Guadalajara, Mexico, in 2013 and 2017, respectively. Since 2017, he has been with the Microwaves and Radar Institute (HR), German Aerospace Center (DLR), as a Research Scientist. His research interests include the applications of signal processing and machine learning to remote sensing, particularly SAR tomography (TomoSAR), inverse problems, random fields estimation, and adaptive spatial analysis. \\\\ \\end{tabular} \\begin{tabular}{c c} & Eduardo Torres-Garcia received the Engineering degree in telematics from the Polytechnic University of Juventino Rosas, Guanajuato, Mexico, in 2020, and the M.Sci. degree in electrical engineering (_with specialization in telecommunications_) in 2022 from the Center for Research and Advanced Studies (Cinvestav), National Polytechnic Institute, Guadalajara, Mexico, where he is currently working toward the Dr.Sci. (Ph.D. equivalent) degree in electrical engineering. His research interests include the applications of signal processing to remote sensing, specifically SAR tomography (TomoSAR) and polarimetric SAR (PolSAR). \\\\ \\end{tabular} \\begin{tabular}{c c} & Deni Librado Torres-Roman received the Ph.D. degree in telecommunication from the Technical University Dresden, Dresden, Germany, in 1986. He was a Professor with the University of Oriente, Cuba. He has been a Researcher 3-C with CINVESTAV-IPN, Guadalajara, Mexico, since 1996. He has coauthored a book about data transmission. His research interests include hardware and software design for applications in the telecommunication area. Prof. Torres-Roman was the recipient of the Telecommunication Research Prize in 1993 from the AHCIET Association and the 1995 Best Paper Award from AHCIET Review, Spain. In recent years, he has been working on tensor algebra for MS and HS imaging, and video processing. \\\\ \\end{tabular} \\begin{tabular}{c c} & Sergio Alejandro Serafin-Garcia received the Engineering degree in electronics and communications engineering from the University of Guadalajara, Guadalajara, Mexico, in 2017, and the M.Sci. degree in electrical engineering (_with specialization in telecommunications_) from the Center for Research and Advanced Studies (Cinvestav), National Polytechnic Institute, Guadalajara, Mexico, in 2021. He is currently working toward the Ph.D. degree with the Microwaves and Radar Institute (HR), German Aerospace Center (DLR).
His research interests include the applications of signal processing and machine learning to remote sensing, particularly SAR tomography (TomoSAR). \\\\ \\end{tabular} \\begin{tabular}{c c} & Andreas Reigber (Fellow, IEEE) received the Diploma degree in physics from the University of Konstanz, Konstanz, Germany, in 1997, the Ph.D. degree in engineering from the University of Stuttgart, Stuttgart, Germany, in 2001, and the Habilitation degree from the Berlin University of Technology, Berlin, Germany, in 2008. He is currently the Head of the SAR Technology Department, Microwaves and Radar Institute (HR), German Aerospace Center (DLR), Wessling, Germany, where he is leading the development and operation of state-of-the-art airborne SAR sensors. He is also a Professor of remote sensing and digital image processing with the Berlin University of Technology, Berlin, Germany. His research interests include various aspects of multimodal, multichannel, and high-resolution SAR processing and pseudprocessing. Dr. Reigber was the recipient of several prize paper awards, among them the IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS) Prize Paper Award in 2001 and 2016 for his works on polarimetric SAR tomography and nonlocal speckle filtering, respectively, and also the IEEE TGRS Letters Prize Paper Award in 2006 for his work on multipass SAR processing. \\\\ \\end{tabular}
The polarimetric versions of focusing techniques for synthetic aperture radar (SAR) tomography (TomoSAR), apart from estimating the pseudopower and retrieving the height of reflectors from the recovered local maxima, allow extracting the associated scattering mechanisms. Additionally, scattering patterns can be examined by means of polarimetric indicators like alpha mean angle, used to associate observables with physical properties of the medium. Aimed at easing the analysis of the scattering processes occurring in the illuminated scene, this article extends the weighted covariance fitting (WCF) based iterative spectral estimator (WISE) to the polarimetric configuration, called hereafter PolWISE. The addressed technique attains finer resolution than conventional methods like PolCapon, performing suppression of artefacts and ambiguity reduction. PolWISE is a statistical regularization approach, which reduces the TomoSAR inverse problem to the selection of a regularization parameter, chosen via the L-Curve method. Furthermore, being PolWISE an iterative technique, under/over regularization is prevented by terminating the procedure at an appropriate iteration. A stopping rule based on Kullback-Leibler information criterion is employed. The PolWISE algorithm is assessed thoroughly through simulations and experiments on fully polarimetric TomoSAR airborne data at L-band, acquired from an urban scenario.
# Detection and Monitoring of Floating Plastic Debris on Inland Waters From Sentinel-2 Time Series Daniele Cerra \\({}^{\\copyright}\\),, Stefan Auer \\({}^{\\copyright}\\),, Adrian Baissero \\({}^{\\copyright}\\), and Felix Bachofer \\({}^{\\copyright}\\) Received 23 August 2024; revised 26 October 2024; accepted 18 November 2024. Date of publication 20 November 2024; date of current version 6 December 2024. _(Corresponding author: Adrian Baissero.)_ Daniele Cerra, Stefan Auer, and Felix Bachofer are with the Earth Observation Center (EOC), German Aerospace Center (DLR), 82234 Wessling, Germany (e-mail: [email protected]; [email protected]; [email protected]). Adrian Baissero is with the Environmental Remote Sensing Research Group (GITA), University of Alcala, 28801 Madrid, Spain (e-mail: [email protected]).Digital Object Identifier 10.1109/ISTARS.2024.3502796 ## I Introduction Plastic debris pollution has become a pressing environmental issue worldwide, particularly in aquatic environments. The widespread use of plastic in the daily life, its economic significance, improper waste management, as well as its durability have led to the emission and persistence of enormous amounts of plastic into the environment [1, 2, 3], transported by aeolian processes, surface runoff, and aquatic environments [2]. Accumulations of plastic debris can be found in freshwater systems [4, 5], at coasts [6, 7], in floating garbage patches in ocean gyres [8], and in the deep sea [9, 10]. Significant plastic accumulations have been documented and investigated for the Atlantic Ocean [11], the Indian Ocean [8], and even the Antarctic region [12]. Environmental concerns associated with plastic pollution include the risk for organisms by ingestion or entanglement, bioaccumulation in the food chain [13], leakage of toxic additives [14], and the impact on ecosystem services [15]. In a widely respected study, Lebreton et al. [16] modeled a leakage of 1.15 to 2.41 million metric tons of plastic entering the oceans from rivers. They identified that the 20 most polluting rivers account for 67% of the global river emissions and that 15 of them are located in Asia. Meijer et al. [17] refined this study, concluding that the emissions from small rivers were heavily underestimated, and in total more than 1600 rivers account for 80% of the emissions, with a total contribution between 0.8 and 2.7 million metric tons. Van Calcar and van Emmerink [18] investigated the abundance of floating macroplastic at 24 stations in rivers of seven countries in Asia and Europe, and found that on average the Asian rivers transport up to 30 times more plastic. Much of the plastic material is retained for a long time in the river catchments, before it is emitted into the ocean, triggered by extreme hydrological events [19]. The detection and monitoring of plastic debris in the aforementioned and other ecosystems are essential for developing effective strategies to mitigate its adverse impacts, by estimating the volume of accumulated plastic and plastic washed away by floodwaters. Advancements in remote sensing technology, coupled with the availability of open satellite data and improved big-data processing capabilities, offer promising opportunities for comprehensive and continuous monitoring of plastic debris using satellite Earth observation (EO) [20]. Furthermore, highlighting the visibility of debris on water from space may help increase public awareness about widespread pollution, which could trigger mitigation activities in turn. 
This article introduces a framework for detecting and monitoring accumulations of plastic debris in inland waters based on open remote sensing data time series and information fusion. The presented results allow for discussing the opportunities and limitations in monitoring floating debris cover with the available spaceborne missions, which have variable properties in terms of spectral and spatial resolution, revisit times, and data accessibility. The described phenomena can be clearly visible in high-resolution EO data. Relevant examples across three continents from Google Earth images are reported in Fig. 1 and will be the case studies reported in this article: Guatema, Bosnia, and Egypt. In all cases, plastic accumulation caused by dams regulating the flow of inland waters is evident therein. This article aims to detect and monitor the floating plastic debris for such cases at lower spatial resolution, while exploiting the higher temporal resolution of observation systems with open-data policies, such as image acquisitions provided by the Copernicus program. ### _Related Work_ Multiple methods are available for monitoring plastic in river and freshwater systems [21]. Traditional approaches often focus on localized applications, such as tracking plastics with GPS devices [22], active sampling with tools, such as nets [23], passive sampling using methods, such as floating booms or garbage collectors [24], visual counting from bridges or vessels [25], the use of installed cameras [26] or through autonomous aerial vehicle (AAV), and airborne campaigns [27]. In contrast, satellite-based EO technologies provide a broader coverage on the ground, faster revisit times--in the order of one week with Sentinel-2 data [28]--, and can access any location on the Earth's surface, allowing for the monitoring of extensive aquatic systems on a larger scale. Detailed reviews of the listed methodologies are provided by Topouzelis et al. [29] and Waqas [30]. The effective monitoring of marine and coastal plastic debris with satellite EO requires a clear separation of observational requirements between shoreline and in-water detection, as the latter faces significant challenges due to signal attenuation in the infrared range and variability in temporal scales. It is recommended in [31] to consider quantifying the signal-to-noise ratio for accurate detection, for both submerged and emerged debris. Optical Sentinel-2 satellite data have been predominantly utilized for floating plastic debris detection in marine and coastal environments (e.g., [32, 7, 33, 34]). One of the most employed spectral indices defined based on Sentinel-2 spectral properties is the floating debris index (FDI) introduced in [7] to detect floating macroplastic on clear water surfaces. Hu [35] investigated the challenges and capabilities of Sentinel-2 data for the remote detection of marine debris. The study concludes that, while detecting macroplastics and large debris patches is feasible with Sentinel-2's optimal tradeoff between resolution and coverage, interpreting its data requires caution due to varying spatial resolutions across different bands. The creators of the MARIDA dataset conclude in a recent work that an analysis solely based on spectral features is limited for plastic detection from space, and they improve the performance of classification algorithms by embedding spatial information, such as textural features [36]. 
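For reference, the FDI mentioned above compares the NIR reflectance with a baseline interpolated between the red-edge and SWIR bands; the sketch below follows the formulation commonly reported in the literature for Sentinel-2, and the band choices, central wavelengths, and scaling factor are assumptions that should be verified against [7].

```python
def fdi(b8, b6, b11, lam_nir=832.8, lam_red=664.6, lam_swir1=1613.7):
    """Floating debris index from Sentinel-2 reflectances (arrays or scalars).

    FDI = R_NIR - R'_NIR, with R'_NIR a baseline linearly interpolated between
    the red-edge (B6) and SWIR (B11) bands; floating material tends to appear
    as a positive anomaly against open water.
    """
    baseline = b6 + (b11 - b6) * (lam_nir - lam_red) / (lam_swir1 - lam_red) * 10.0
    return b8 - baseline
```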
Nevertheless, the analysis of inland coastal waters at Sentinel-2 resolutions is carried out at 20 m ground sampling distance (GSD), or 10 m if only 3 bands in the visible range and band 8 in the near infrared are used. Thus, the extraction of spatial features from the affected areas is significantly constrained. Russwurm et al. [37] applied U-NET and U-NET++ architectures trained on MARIDA and the FloatingObjects [38] database to detect nearshore marine debris in Sentinel-2 data, which outperformed the random forest (RF) baseline model trained with the same datasets. Hu [39] highlights that while Sentinel-2 is widely used for detecting marine debris, the mixed band resolutions of the sensor and the subpixel coverage of debris necessitate careful interpretation of spectral data to avoid misclassifications, and emphasizes the importance of both pixel averaging and subtraction in designing effective detection algorithms. The Plastic Litter Project investigated the remote detection of floating plastic debris in aquatic environments using both unmanned aerial systems (UAS) and Sentinel-2 satellite imagery. The project demonstrated that high-resolution UAS images significantly enhance the geo-spatial accuracy of satellite data, and established that large artificial plastic targets can serve as valuable references for calibrating and validating remote sensing algorithms. The findings underscore the feasibility of detecting floating marine litter with Sentinel-2 under specific conditions, considering factors such as plastic type, coverage fraction, biofouling, and submersion effects [40, 41, 42]. Taggio et al. [43] exemplified the use of a linear combination of unsupervised (\\(K\\)-means) and supervised (light gradient boosting model) classification probability for detecting floating plastic debris in pan-sharpened hyperspectral PRISMA data. In contrast to studies on plastic detection on the open sea or at coastlines with satellite remote sensing, relatively few publications focus on rivers and water channels. Studies in this area mostly rely on airborne and AAV data, which were used in [27] and [44] to measure the amount of plastic debris in the Klang River in Malaysia and in Indonesia. Hyperspectral AAV sensors have been employed together with lab measurements to characterize plastic debris in rivers in [45]. However, the free-access Copernicus Sentinel missions form the basis for research efforts in this direction as well. The feasibility of separating plastic from water in a river dam was demonstrated by [46], where a stack of 142 Sentinel-1 images was used to generate a plastic heat map over the extent of the water surface, with the cross-polarized signal showing the best sensitivity for the identification of plastic. Lavender et al. [47] integrated the class of plastic into a classification method based on deep learning, relying on Sentinel-1 and Sentinel-2 images, and also identifying plastic found on land, such as landfills, tire graveyards, and greenhouses. The separation of the plastic types is defined in a decision tree, where empirical indices are used to add contextual information.

Fig. 1: High-resolution images from Google Earth for the test sites reported in this article (2024 Maxar Technologies, CNES, Airbus). From left to right: Hidrovacas (Guatemala, April 2020), Visegrad (Bosnia, March 2021) as representative site for the Lim/Drina rivers system, and a narrow Nile tributary (Egypt, June 2020). Accumulation of plastic debris is evident for all cases.
However, the authors noted that using optical alongside SAR data for plastic classification on water poses challenges due to the time shift between data acquisitions. Solegomez et al. [48] examined different neural network architectures for debris detection on rivers with Sentinel-2, identifying transferability and reference data generation as key challenges. Sakti et al. [49] applied machine learning techniques for detecting illegally dumped waste on river surfaces in Indonesia, combining results based on Sentinel-2 and high-resolution optical images from AAV and Pleiades at the decision level. ### _Research Framework and Objectives_ This work makes three contributions to the state of the art. First, floating plastic debris is detected and monitored by exploiting both spectral and temporal features, relying on multispectral image time-series spanning several months. Unlike existing methods, this approach eliminates the need to preselect images containing plastic debris or manually outline areas of interest--common practices required by current floating plastic detection algorithms [50, 51, 52]. Second, by defining robust detection rules and relying on spectral bands with higher spatial resolution, we enable the identification and monitoring of plastic debris in inland waters, even when only a few pixels show significant changes in reflectance. This is particularly effective for narrow water channels less than 20 m wide, where the ability to capture such small-scale changes is critical. Finally, spectral unmixing (SU) is employed to estimate the surface area occupied by debris at a subpixel level. This technique also aids in visually exploring surrounding areas, highlighting previously undetected accumulations of floating plastic. In comparison to previous studies, this work shifts focus from solely analyzing the spectral characteristics of plastic--an approach found to be difficult to generalize across different case studies and scenarios--toward detecting temporal changes in inland water bodies. Several challenges identified in earlier research, such as those involving the limitations of Sentinel-2 data in distinguishing floating plastics from other materials and artifacts described in [39] and [53], are mitigated here through an innovative strategy. First, the comparison between a plastic candidate and a water pixel does not happen between neighboring pixels, which introduces problems in [39], but is done on the same location at different times. Second, we reduce coregistration artifacts reported in [39] between spectral bands at 20 m and 10 min Sentinel-2 as a source of problems, by relying almost exclusively on the higher resolution bands. Finally, both foam and boats, representing additional sources of false positives, are more prominent on coastlines and open waters than in inland waters, especially for narrow channels, where our analysis is focused. The advantages of our time-series approach, as opposed to single-image spectral indices like the FDI, are demonstrated in the experiments section. In addition, a cloud-based application developed in Google Earth Engine (GEE) enables the analysis of large areas with minimal supervision, exploiting all Sentinel-2 acquisitions available across temporal intervals of several months. The result is a vectorized output identifying plastic accumulation areas for each image in the dataset. ## II Methodology This section introduces the workflow (see Fig. 
2) for the analysis of data covering a specific period, divided into time steps, to derive candidate areas for plastic cover across multitemporal optical satellite images. The detection of floating debris is based on the spectral changes introduced by the appearance of plastic on water, requiring: initial presence of a water surface; an increase of signal intensity if plastic occurs, especially in the infrared spectral range; removal of false alarms due to clouds and urban areas; as well as sufficient spatial extent to identify plastic accumulation over time at the available GSD. The initial map of detected plastic candidate regions allows identifying prominent areas of interest for subsequent detailed monitoring and analysis, where local dynamics of plastic accumulation can be refined using subpixel analysis. The detected plastic can be used as input for subpixel quantification of the surface covered by accumulation of plastic, resulting in an improved characterization of plastic dynamics, which is especially valuable to study the accumulation of debris in narrow water channels. This in turn can be used to refine the monitoring of plastic cover change and its large-scale screening for the identification of further, less prominent plastic cover on water surfaces.

The search for candidate areas for plastic debris relies on empirical knowledge. This involves chaining together a series of simple detectors aimed at separating plastic from other scene elements in a stack of multispectral bottom-of-atmosphere (atmospherically corrected Level 2A) images, without explicitly learning its spectral properties. To this end, an image time series is collected and analyzed. It is important to note that while this method is informed by practical experience and preliminary observations, there is currently no precise way to estimate its accuracy due to the absence of a comprehensive reference dataset.

Fig. 2: Workflow including the identification of plastic candidates and additional plastic characterization steps. Images are selected as detailed in Fig. 3, and candidates are identified as described in Section II-A, and can be used to characterize locally the spectral features of floating plastic. The full stack of images can then be retrieved and further analyzed on the detected hotspot. By applying SU, a map highlighting possible additional plastic accumulations can be produced and visually explored, and the total surface covered by plastic debris estimated at subpixel level.

### _Image Time-Series Selection_

Fig. 3 visualizes the algorithm creating the image time series, which starts by filtering the Sentinel-2 image archive in a specified time frame, selecting a point of interest and the number of images to be included [examples are given in the case studies (see Section III)]. Thereafter, available images are sorted by cloud cover percentage and the image having the lowest cloud cover is selected. A 15-day time window is centered around the corresponding acquisition date, and images within that period are removed from the candidates. This step avoids including acquisitions too close in time to one another. The value of 15 days has been selected based on testing at several sites around the world, and constitutes an adequate compromise between minimizing the cloud coverage and allowing sufficient temporal sampling. The algorithm iterates the selection process until the requested number of image tiles in the stack is reached.
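A minimal sketch of this greedy selection loop, assuming the acquisition dates and cloud-cover percentages have already been queried from the Sentinel-2 archive; all names are illustrative and not part of the published application.

```python
from datetime import timedelta

def select_time_series(candidates, n_images, window_days=15):
    """Greedy image selection as sketched in Fig. 3.

    candidates: list of (acquisition_datetime, cloud_cover_percent) tuples.
    Returns the selected acquisition dates in chronological order.
    """
    remaining = sorted(candidates, key=lambda c: c[1])   # lowest cloud cover first
    half_window = timedelta(days=window_days / 2)
    selected = []
    while remaining and len(selected) < n_images:
        date, _ = remaining.pop(0)                       # best remaining acquisition
        selected.append(date)
        # Discard acquisitions inside the 15-day window centered on the chosen date.
        remaining = [c for c in remaining if abs(c[0] - date) > half_window]
    return sorted(selected)
```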
### _Water Mask_

The search for debris is carried out on water surfaces with the aim of detecting the appearance of plastic, which is assumed to be not present in at least one image in the time series. Accordingly, a water mask \\({m_{w}}^{i}\\) is generated from an image \\(i\\) in the stack as \\[{m_{w}}^{i}=\\frac{{b_{\\text{NIR}}}^{i}-{b_{\\text{Green}}}^{i}}{{b_{\\text{NIR}}}^{i}+{b_{\\text{Green}}}^{i}}>T_{w} \\tag{1}\\] where \\({b_{\\text{NIR}}}^{i}\\) is the near-infrared channel for image \\(i\\), \\({b_{\\text{Green}}}^{i}\\) is the green channel, and \\(T_{w}\\) is a threshold for defining the binary mask. The left part of the right term of the equation represents the normalized difference water index (NDWI) as defined by [54], but modified following the NIR-Green band combination [55] approach, which allows for better discrimination of inland vegetation and has been shown to be more reliable in water body detection [56, 57, 58]. In our analysis, we use bands 8 and 3 from Sentinel-2 data for the near-infrared and green channels, respectively, both of which are among the bands with the highest spatial resolution in the Sentinel-2 sensor. Subsequently, water pixels are detected in an image composite \\(i_{c}\\) as \\({m_{w}}^{c}\\), derived by combining the minimum intensity values for each pixel across the time series. This allows obtaining reliable results for cases in which the water surface is covered most of the time by plastic throughout the analyzed period, except for a single image. A pixel of the final water mask \\(m_{w}\\) is set to 1 if at least two images (across the time series composed of \\(K\\) images or in the composite) fulfill the NDWI criterion and indicate the presence of water \\[m_{w}=\\left(\\sum_{i=1}^{K}{m_{w}}^{i}+{m_{w}}^{c}\\right)\\geq 2. \\tag{2}\\] In this work, we suggest using two fixed thresholds for \\(T_{w}\\): 0.06 for large water bodies such as dams, lakes, and wide rivers (used, among others, in [59] and [60]), and 0.3 for small water channels. A higher threshold is needed here, as most of the water pixels therein are mixed with other land cover classes, and we favor completeness over correctness, as we observed that false alarms do not propagate to the final output in the cases of study considered. A value of 0.3 is also suggested in [61] as a generic threshold to detect water with NDWI. In any case, in our application described in the following sections, the setting of this threshold can also be manual, if different contexts require tweaking this parameter.
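A minimal sketch of the water-mask voting rule in (1) and (2), assuming the green and NIR bands of the \\(K\\) images are stacked as NumPy arrays; names are illustrative, and the index follows the band order printed in (1) (the standard McFeeters NDWI swaps the numerator terms).

```python
import numpy as np

def water_mask(green_stack, nir_stack, t_w=0.06):
    """Water mask per (1)-(2); inputs have shape (K, rows, cols)."""
    eps = 1e-6
    # Per-image index and binary masks, following the band order written in (1).
    ndwi = (nir_stack - green_stack) / (nir_stack + green_stack + eps)
    masks = ndwi > t_w
    # Minimum-intensity composite across time, tested with the same criterion.
    g_min, n_min = green_stack.min(axis=0), nir_stack.min(axis=0)
    mask_c = (n_min - g_min) / (n_min + g_min + eps) > t_w
    # A pixel is kept as water if at least two votes agree, as in (2).
    return (masks.sum(axis=0) + mask_c) >= 2
```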
The remaining indicators for the detection of plastic appearance are related to the level and change of intensity in the image and their spectral characteristics.

### _Changes in Reflectance_

It is assumed that plastic appearance or disappearance leads to at least one remarkable change of signal intensity on the water surface in one of the images. Therefore, the following condition on \\(m_{\\Delta\\,t_{i}}^{t_{i+1}}\\) is verified for pixel values of two temporally adjacent images acquired at times \\(t_{i}\\) and \\(t_{i+1}\\), with \\(i\\) indexing the images in the time series \\[m_{\\Delta\\,t_{i}}^{t_{i+1}}=\\frac{\\sum_{n=1}^{N}\\left(b_{n,t_{i+1}}-b_{n,t_{i}}\\right)}{\\sum_{n=1}^{N}\\left|b_{n,t_{i}}\\right|}>T_{I}. \\tag{3}\\] In the equation, \\(b\\) is the respective pixel intensity value, \\(n\\) and \\(t\\) are, respectively, the indices of band and time, and \\(N\\) is the total number of bands used (in this case Sentinel-2 10-m bands B2, B3, B4, and B8). The threshold \\(T_{I}\\) defines the amount of required intensity change as an indicator for the appearance of plastic. For instance, the intensity needs to at least double if \\(T_{I}\\) is 1, which is set as the default value. As a result, the binary masks \\(m_{\\Delta\\,t_{i}}^{t_{i+1}}\\) are set to 1 for all pixels fulfilling the intensity change criterion. In parallel, pixels are negatively flagged whenever their intensity exceeds a threshold \\(T_{\\text{max}}\\). This derives from considering that the signal intensity of plastic is not as dominant as specular reflections from metallic/glassy surfaces or white cloud pixels, which could raise false alarms. Pixels with intensities larger than \\(T_{\\text{max}}\\), with default value 4000 for removing obvious pixels with high intensity, are, therefore, discarded. Finally, the individual masks of Boolean type are summed up to a value \\(m_{\\Delta}\\) iterating over all time steps, corresponding to a logical OR operator \\[m_{\\Delta}=\\sum_{i=1}^{K-1}m_{\\Delta\\,t_{i}}^{t_{i+1}} \\tag{4}\\] where \\(K\\) is the total number of images in the multitemporal stack, sorted according to acquisition date.

Fig. 3: Workflow for selection of Sentinel-2 images across a given temporal interval.

Considering the spectral characteristics of plastic, a spectral check of the Sentinel-2 bands is included. As reported in [32], plastic appears more dominant in the infrared in comparison to shorter wavelengths in Sentinel-2 data. Thus, the intensity of band 8 (NIR) is required to be higher than the intensities of bands 4 to 7 (red and red edge channels), using in this case also bands 5-7 with a GSD of 20 m. The resulting binary masks are noted as \\(m_{s\\,t_{i+1}}\\) and are summed up to a value \\(m_{s}\\) iterating over all time steps.

### _Clouds and Urban Interference_

Additional information is provided by quality layer masks associated with the multispectral data product. Since plastic is not distinguishable in case of heavy cloud cover, the image metadata are used to select images with a cloud coverage percentage below a threshold. Thereafter, cloud masks of the products, e.g., as provided for Sentinel-2 by Sen2Cor [62] or MAJA [63], allow us to exclude cloudy pixels from the start. The binary cloud masks of all images in the time series are combined to derive a joint cloud mask \\(m_{\\text{cloud}}\\), favoring recall over precision, as false negatives are frequently contained in the provided cloud mask for Sentinel-2 products [64]. In addition, false alarms in the water mask, i.e., pixels in settlements and city areas, are filtered out by incorporating the global World Settlement Footprint (WSF) dataset, delineating human settlements as a binary mask \\(m_{\\text{WSF}}\\) [65].

### _Combined Plastic Detection Index_

Finally, the described binary indicators are combined into a robust plastic candidate mask by multiplication, corresponding to a logical AND operator \\[m_{\\text{plastic}}=m_{w}*m_{\\Delta}*m_{s}*m_{\\text{cloud}}*m_{\\text{WSF}}. \\tag{5}\\] In order to refine the initial results, it is assumed that plastic debris accumulates over time in bottlenecks of the inland waters system. Accordingly, isolated candidate pixels are spatially filtered out from \\(m_{\\text{plastic}}\\) with a \\(z\\times z\\) median filter. The filtering also reduces the impact of a possible coregistration error, which is specified as less than 20 m at \\(2\\sigma\\) for all Sentinel-2 georeferenced products [66, 28]. The binary mask of plastic candidate areas provides the basis to select areas of interest for the following steps.

Altogether, after creating an image time series, the candidate search is steered by parameters for the water mask (threshold \\(T_{w}\\), default values described above), intensity analysis (threshold \\(T_{I}\\), default: 1; maximum value \\(T_{\\text{max}}\\)), and median filtering (filter size \\(z\\), default: 3). The thresholds for the candidate search are kept constant for all scenes but can be set to different ad hoc values by the user, e.g., in case of challenging situations with smog or haze on Sentinel-2 image tiles. Summarizing, the strategy behind the candidate search for plastic debris can be expressed as follows. The identification of water bodies is based on the NDWI with a threshold defined in the literature. The thresholds related to signal intensity in the time series are connected to salient signal changes and the removal of intensity outliers (reflections, cloud pixels). In combination with basic spectral properties assigned to plastic, these elements allow identifying candidate areas on water across an image time series.
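A minimal sketch of the candidate test in (3)-(5), assuming the 10 m bands of the \\(K\\) images are stacked as a NumPy array and that the water, cloud-free, and non-settlement masks are available; the spectral check is simplified to the bands in the stack (the article additionally uses the 20 m red-edge bands), and all names and defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def plastic_candidates(bands, m_w, m_cloud_free, m_not_wsf, t_i=1.0, t_max=4000, z=3):
    """bands: (K, N, rows, cols) stack of N bands (B2, B3, B4, B8), NIR last.
    m_w, m_cloud_free, m_not_wsf: boolean masks of shape (rows, cols)."""
    eps = 1e-6
    total = bands.sum(axis=1)                                    # sum over bands
    # (3): relative intensity increase between consecutive acquisitions.
    rel_change = (total[1:] - total[:-1]) / (np.abs(total[:-1]) + eps)
    not_saturated = (bands[1:] < t_max).all(axis=1)              # per-band T_max check
    m_delta = ((rel_change > t_i) & not_saturated).any(axis=0)   # (4), as logical OR
    # Spectral check: NIR higher than the shorter-wavelength bands [32].
    m_s = (bands[:, -1] > bands[:, :-1].max(axis=1)).any(axis=0)
    # (5): logical AND of all indicators, then removal of isolated pixels.
    m_plastic = m_w & m_delta & m_s & m_cloud_free & m_not_wsf
    return median_filter(m_plastic.astype(np.uint8), size=z).astype(bool)
```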
### _Quantification of Plastic Surface_

When inland waters act as reservoirs for plastic pollution, it is important to understand and quantify where and for how long plastics are retained. Whenever multitemporal acquisitions are available, and a region of interest containing floating plastic debris has been located, more complex waste accumulation dynamics in the area can be studied. First, plastic candidates are identified across the image time series, as described above, and the largest plastic accumulation is identified. Subsequently, the image with the lowest (ideally no) plastic presence in a sensitive area is selected as the initial state, and the change detection results at the following times are analyzed. In order to increase the accuracy of the results, the patches of plastic throughout the stack are vectorized, and detections of plastic at different times are only associated if located in nearby areas. The most straightforward analysis performs change detection on the full multitemporal image stack at each time step, estimating the total number of pixels covered by new accumulations of plastic, and considering the total surface as the total area of these pixels. As the surface can be greatly overestimated for inland waters analysis using Sentinel-2 data, in which the appearing plastic often occupies only a fraction of each image element, a second approach relying on SU can mitigate such errors by estimating the fractional coverage of plastic in each pixel. The process of SU aims at providing accurate information at subpixel level on a scene, by decomposing the spectral signature associated with an image element into signals typically belonging to macroscopically pure materials, or endmembers. The contribution of a given material to the spectrum of an image element is a fractional quantity, usually named abundance [67]. SU has been modeled as either a linear or nonlinear process.
In linear SU, the contributions forming the spectrum related to a given image element are directly proportional to the fractions of the target occupied by different materials: all solar light reaching the target is, therefore, assumed to be either absorbed or reflected to or away from the sensor. In the literature, the simpler linear model is often adopted in practical applications since it represents a reliable first approximation of the actual material interactions and generally provides valuable results [68]. Even though water exhibits nonlinearities in the scattering of light and, therefore, in spectral mixtures, the strong difference in albedo with respect to plastic debris makes the linear model suited for our task, as errors in water abundances do not have an impact on our results. Furthermore, nonlinear interactions cannot be modeled with the reduced spectral resolution characterizing the data used in this article. We, thus, model the spectrum of a pixel \\(\\mathbf{p}\\) with \\(m\\) bands as a linear combination of \\(n\\) reference spectra \\(\\mathbf{S}=[s_{1},s_{2},\\ldots,s_{n}]\\in\\mathbb{R}^{m\\times n}\\), weighted by \\(n\\) scalar fractional abundances \\(\\mathbf{x}=[x_{1},x_{2},\\ldots,x_{n}]^{T}\\in\\mathbb{R}^{n\\times 1}\\), plus a residual vector \\(\\mathbf{r}\\in\\mathbb{R}^{m\\times 1}\\) containing the portion of the signal, which cannot be represented in terms of the basis vectors of choice \\[\\mathbf{p}=\\sum_{i=1}^{n}x_{i}s_{i}+r=\\mathbf{S}\\mathbf{x}+\\mathbf{r}. \\tag{6}\\] Here, \\(\\mathbf{r}\\) collects several quantities, which are hard to separate, such as noise, over- or underestimation of atmospheric interaction, missing materials in \\(\\mathbf{S}\\), variations in the spectra of a single material within the scene, wrong estimation of the abundances \\(\\mathbf{x}\\), and nonlinear effects [69]. Mathematically, the mixing problem in (6) may be solved through a set of linear equations using least squares approaches [67]. In this work, we enforce the nonnegativity constraint to all abundances as negative values have no physical meaning, but not the often imposed sum-to-one constraint: as the spectral library used is assumed to be incomplete, contributions from not included materials may introduce errors in the process. Furthermore, the sum-to-one constraint may propagate of errors due to nonaccounted for nonlinear scattering effects in water to plastic abundance estimation. Therefore, we use the constrained version of (6) with \\(x_{i}\\geq 0,\\forall i\\). Usually, the full process of SU undergoes the following main steps, each of them feeding the output to the next one: estimation of the number of materials \\(n\\) present in the scene, creation of a spectral library \\(\\mathbf{S}\\) containing a representative end member \\(s_{i}\\) for each material \\(i\\), and pixelwise abundance estimation \\(\\mathbf{x}\\). In layman's terms, an example for the final output of an SU process may be the estimation of the total area within a pixel covered by water, soil, vegetation, or asphalt, assuming the related library \\(\\mathbf{S}\\) is constituted by the spectra of these four materials. In this work, the use of SU has two major advantages. First, it can be used to increase the accuracy of the estimation of the total water surface covered by floating plastic debris, which would otherwise be approximated to the total area of detected pixels in Sentinel-2 data (100 m\\({}^{2}\\) if only bands at 10 m resolution are considered). 
For example, a study in [70] reports overestimations of more than 100% when estimating the area covered by correctly detected solar panels in a region. Second, floating plastic debris is characterized by high spectral variability [71], and it is difficult to determine its spectral characteristics a priori. Therefore, locating the best candidate endmember for plastic in a large area may highlight other plastic accumulations therein by inspecting the results of the abundance estimation step, under the assumption that the type of pollutants is similar in the area, without an explicit selection of their relevant spectral features.

While multispectral sensing has largely succeeded at classifying entire pixels, SU as the analysis of the constituent substances within a pixel is limited by a relatively low number of spectral measurements [72]. The reason behind this is that the mathematical model applied to represent each spectrum as a linear combination of endmembers requires \\(n\\) not to be larger than the number of available spectral bands \\(m\\): the matrix \\(\\mathbf{S}\\) should be well conditioned (ideally close to orthogonal), as it needs to be inverted in order to mathematically solve (6), and the problem becomes underdetermined if \\(n>m\\). This problem becomes even more pronounced at a GSD of 10 m, for which only 4 bands are available for Sentinel-2. The mentioned GSD is the only one suitable for plastic detection in inland waters, where accumulations of debris rarely span a full pixel if bands at 20 m are employed. For the same reason, data-driven automatic endmember extraction algorithms trying to identify pure pixels, such as vertex component analysis [73], are hard to apply in our case. The mentioned restrictions do not apply to data having both high spectral and spatial resolution, such as airborne imaging spectrometer data, which in our case are not available for the area.

In order to tackle this problem, we force the spectral library to be composed of a limited number of relevant macroclasses, including the plastic endmember. The selected endmembers must be easily separable according to their spectral features: as mentioned, if these are highly correlated, the endmember matrix becomes ill conditioned, the required inversion unstable, and the estimated fractions \\(\\mathbf{x}\\) highly sensitive to random error [74]. Thus, the number of classes is not larger than the number of available Sentinel-2 spectral bands, and the spectra are not linearly dependent, making the unmixing process mathematically feasible. Our library is, therefore, composed of the average spectra of preselected regions containing water, vegetation, bare soil, and urban structures, in addition to plastic, pushing the process to its very limits and requiring an assessment of the stability of its results. The plastic endmember is instead selected as the average of the best candidates computed as in Section II-A. The training area collection step for the other classes can be either manual or automatic. In the case of manual selection, this is relatively inexpensive to carry out, as all the mentioned classes are easy to identify in a true color representation of the acquired scenes at 10 m GSD. The automatic endmember selection step instead uses the maximum value of the normalized difference vegetation index (NDVI) in a nearby area to select a representative pixel for vegetation, and does the same for water (highest NDWI) and soil (highest band ratio of red over green outside of the detected plastic).
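A minimal sketch of the constrained inversion of (6) and of the subpixel area estimate, assuming the small endmember library has been assembled as described above; SciPy's non-negative least squares is used here as one possible solver, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(image, endmembers):
    """image: (m_bands, rows, cols); endmembers: (m_bands, n_classes) library S.
    Returns non-negative abundances of shape (n_classes, rows, cols);
    no sum-to-one constraint is enforced, as discussed in the text."""
    m_bands, rows, cols = image.shape
    pixels = image.reshape(m_bands, -1)
    abundances = np.zeros((endmembers.shape[1], pixels.shape[1]))
    for j in range(pixels.shape[1]):
        abundances[:, j], _ = nnls(endmembers, pixels[:, j])
    return abundances.reshape(-1, rows, cols)

# Subpixel plastic surface over the candidate mask (10 m GSD -> 100 m^2 per pixel);
# abundances may be clipped to [0, 1] beforehand to avoid overshooting:
# area_m2 = np.clip(abundances[plastic_idx], 0, 1)[m_plastic].sum() * 100.0
```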
In final results, the urban area can be masked out using the WSF layer and the relevant area of the image is subset by dilating the water mask computed as described in Section II-A. The analysis of the resulting abundance map for plastic is finally used to estimate the total surface covered by debris and its changes over time, and as an aid to spot other accumulations of pollutants in nearby areas, allowing assessing up to 500 km\\({}^{2}\\) on ground in few minutes of visual inspection (see Section III). ## III Cases of Study Large-scale identification of plastic debris with subsequent local analysis, monitoring, and characterization of water dynamics, is carried out in a GEE environment by allowing an user to select a large area (in the order of several hundreds of km\\({}^{2}\\)) and a time frame, and visualizing the plastic candidates detection. The code has been published as a GEE app and is freely available upon request. In subsequent steps, single Sentinel-2 scenes can be further analysed to estimate the surface covered by floating debris, its behavior through time, and detect similar phenomena in the same area. The processing chain is exemplified for two types of test sites: river dams (Bosnia, Serbia, Guatema) and water channels near Cairo. The test sites, selected across three continents, represent different kinds of plastic appearance (frequency of appearance and disappearance), variable surface cover (rural, suburban), and variable challenges in terms of cloud cover impact. Moreover, the test sites are different in terms of the size of the observed water surface, favoring different approaches in the classification step. The Visegrad dam test site adds complementary aspects to the related work in [47], while the results for the other test sites exemplify the benefit of large scale screening and monitoring of debris coverage. High-resolution Google Earth images for these sites for the same analyzed time frame are reported in Fig. 1. The sites in Bosnia and Guatema are also studied in [52], where the presence of plastic is confirmed by the analysis of higher resolution optical data, such as PlanetScope at 3 meters spatial resolution, and photographs acquired on site are reported. As our methodology is designed to work with large datasets, we aimed at minimizing processing times. Table I reports the processing times for the current case studies. The area reported per image for the candidate search (and for the final optional step of SU) corresponds to a Sentinel-2 tile, with all of them covering the same area. If the region of interest spans more than one Sentinel-2 tile, it will then analyze all of the images contained in them within the stated date range. Following the candidate search process, the user then manually defines a new region of interest (ROI), within which the vectorization process is carried out. The reported times may scale with upgrades to GEE on the server side, or optimized processor components. For the search of plastic candidate areas, atmospherically corrected Sentinel-2 images are collected over a given time period. Table II provides an overview of the test sites, periods, and image time-series sizes. 
The parameters for the candidate search have been set in order to allow a user to inspect any area with minimal supervision, and are defined as follows: \\(T_{\\text{max}}\\) is set to 4000 to identify thick cloud pixels, while \\(T_{w}\\) is set to 0.3 for water channels, which we consider to be around 20 m wide as in the cases of study reported in the experimental section, and to 0.06 for wider rivers. Default values are used for \\(T_{I}\\), \\(m\\), and \\(n\\) as described in Section II-A. In order to have a base plastic-free image to use in the change detection process, a synthetic composite is generated by selecting all pixels in the area having the lowest plastic persistence across the stack.

### _River Dams_

We analyze three dams: Hidrovacas in Guatemala, Potpecko in Serbia, and Visegrad in Bosnia. The application developed in GEE is run with the default parameters described above, and applied to large areas including the spots polluted by plastic. The output of the plastic detector across a large scene of approximately 200 km\\({}^{2}\\) is reported in Fig. 4. A period of 6 months for the year 2020 is screened, with a selection of cloud-free images distributed across the time window as described in Section II-A. A rapid visual inspection of the change detection results therein shows a single critical area found by the algorithm, which corresponds to the Hidrovacas dam. A higher resolution PlanetScope image acquired on 25 March 2021 is additionally reported, matching well the presence of plastic in the detected spot in Fig. 5(f), acquired on the same day. In the same figure, the initial candidates detected as floating plastic are overlaid on a true color composite of a sample image selected from the multitemporal stack. For each image in the stack, the automatically derived vectorized polygons additionally report the change detection results with respect to a reference synthetic plastic-free image, composed as described above. The polygons are accurate, and an estimation of the surface covered by plastic can be given for any image. The area of the plastic surface in m\\({}^{2}\\) is estimated for Fig. 5(b)-(f), respectively, as: 9000, 15 500, 3700, 6100, and 4300. These quantities are likely overestimated, as a pixel belonging to the polygon is assumed to be completely covered by plastic: nevertheless, considering the size of the area affected, the impact of the overestimation is not as large as for the case of smaller water channels, which will be analyzed later with SU in order to mitigate this aspect. The detection is successful in spite of some related challenges. First, the river section upstream of the dam appears to be covered by plastic for the full time period, deactivating indicators related to signal intensity there. Second, the area is more impacted by cloud cover, restricting the number of feasible images in the time series for the analysis.

The same workflow and parameters are applied on the other side of the world for the same time period, namely to the Potpecko dam in Serbia. Also in this case, Fig. 6 shows for a large area of approximately 50 km\\({}^{2}\\) that the main affected area is located close to the dam in the scene, with no other visible false alarms. A higher resolution PlanetScope image acquired on 20 March 2020 is additionally reported, matching the presence of plastic in the detected spot.
Detected plastic areas are highlighted in red, located in the center of the image (circled in red), demonstrating how the analysis can be conducted at larger scales with no false alarms. The inset at the top left shows a PlanetScope image from 25 March 2021 (2021 Planet Labs), with the plastic extent outlined in red.

Figure 5: Detected plastic debris from the Guatemala scene presented in Fig. 4. From left to right: (a) initial plastic candidates detected from the full multitemporal stack, highlighted in red and overlaid on a true color composite subset of a single Sentinel-2 scene; (b)–(f) top row: vectorized plastic patches extracted from each of the images included in the candidate search; bottom row: corresponding images without the vectorized plastic overlay, showing the original Sentinel-2 true color composite subset. Subimage (f) was acquired on the same day as the higher resolution PlanetScope image (2021 Planet Labs) in the insert of Fig. 4, with the vectorized plastic mask aligning with the plastic extent outlined in that image. (a) Detection. (b) 2020/02/04. (c) 2020/04/09. (d) 2021/01/29. (e) 2021/02/03. (f) 2021/03/25.

A higher resolution PlanetScope image acquired on 20 March 2020 is additionally reported, matching the presence of plastic in the detected spot. The plastic detected at the Potpecko dam on the Lim river is afterwards transported into the Drina river and accumulates again at the Visegrad dam (Bosnia). At the moment of writing, this is a recurring and unsolved problem, with 5000 cubic metres of different kinds of waste estimated to be present in January 2024 [75]. In the analyzed timeframe, plastic debris appears at Potpecko in 2020. Thereafter, the plastic passes the barrier and discharges into the Drina river, where it accumulates again at the Visegrad dam in 2021. This detection and the connected site of Visegrad are detailed in Figs. 8 and 9. Also in this case, a higher resolution PlanetScope image (2021 Planet Labs) in the insert of Fig. 8 agrees with the vectorized plastic mask. The transferability of the method is highlighted by the use of the same parameters for all dam sites reported in this article.

Fig. 6: Plastic detection for 2020 on the Potpecko dam, Serbia, covering an approximate area of 30 × 15 km. The image shows a true color composite from the Sentinel-2 median image for spring 2020, with the plastic extent similarly outlined in red. No false alarms are present in the scene. The insert displays a PlanetScope image acquired on 20 March 2020 (2020 Planet Labs), with the plastic extent outlined in red. Further details on the multitemporal Sentinel-2 dataset and plastic detection results are provided in Fig. 7(a)–(g).

Fig. 7: Plastic candidates identified at dam sites along the Lim river (Serbia). True-color composites of Sentinel-2 subsets from 2020 are shown (refer to Fig. 6 for the full view). Subimages (a)–(f) were acquired on (a) 20 March, (b) 9 April, (c) 10 July, (d) 28 July, (e) 22 August, and (f) 6 September; (a) was captured on the same day as the higher resolution PlanetScope image in the insert of Fig. 6, and the plastic mask derived in (g) aligns with the plastic extent outlined in that image.

The occurrence of debris at the described dam sites is well known and allows us to showcase the results obtained by the proposed methods.
However, the methods are also applicable to identify and characterize unknown sites of plastic debris at large scale, exploiting the opportunities of continuous, large-scale mapping of the Earth surface provided by Sentinel-2. This is reported in the following section for a test region located north of Cairo, Egypt.

### _Water Channels_

Plastic debris in the region north of Cairo is smaller in scale and more spatially distributed. Plastic accumulations are related either to man-made barrages, where plastic is sometimes released, or to river bends. The parameters are set as in the river dam examples presented before, with the exception of the threshold for water channels described in the methodology section. An example of the growing accumulation of debris from March to July 2020 is reported in Fig. 10, with a partially transparent Sentinel-2 true color composite overlaid on the high-resolution base map available in GEE in order to better characterize the context. A plastic-free image was found for August 2020, after the plastic in the channel had been either removed or released downstream, and used as the starting point for the change detection process. The vectorized polygons report, for each month, the change detection results with respect to the previous month, instead of the changes detected with respect to an automatically generated synthetic plastic-free image as in the Hidrovacas case. The resulting polygons are accurate, with overlaps only in mixed pixels.

Fig. 8: Plastic detection for 2021 on the Visegrad dam, Bosnia, covering an approximate area of 40 × 20 km. The image shows a true color composite from the Sentinel-2 median image for winter and spring 2021, with the plastic extent similarly outlined in red. No false alarms are present in the scene. The insert displays a PlanetScope image acquired on 23 February 2021 (2021 Planet Labs), with the plastic extent outlined in red. Further details on the multitemporal Sentinel-2 dataset and plastic detection results are provided in Fig. 9(a)–(f).

Fig. 9: Plastic candidates identified at the Visegrad dam site (Drina river) for the year 2021. Subimage (a) was captured on the same day as the higher resolution PlanetScope image in the insert of Fig. 8, and the plastic mask derived in (f) is comparable with the plastic extent outlined in that image. (a) February 23. (b) May 9. (c) June 23. (d) July 8. (e) August 7. (f) Mask.

The area of the detected change between the first and last image lets us estimate a total surface covered by plastic debris of 8400 m². The single accumulations by month from March to July measure, respectively, 1802, 2503, 1201, 1201, and 800 m². Their sum is slightly larger than the previous overall value because of the mentioned slight overlap between the areas. As mentioned, estimating the area by simply summing the area defined by detected pixels can lead to a large overestimation, up to 100% [70], [76]. Therefore, an SU procedure is carried out by selecting an average of the five pixels with the highest reflectance in the plastic detection, plus manually selected averaged pixels for the classes vegetation, soil, water, and urban. Results are reported at the bottom left of Fig. 11 for the image acquired on 12 July 2020, where the maximum extent of plastic was detected. Here, a composite of only three abundance maps is reported, quantifying the percentage of each pixel covered, respectively, by plastic (red), urban (blue), and vegetation (green).
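To make the two area estimates concrete, the sketch below contrasts the naive pixel-counting estimate with a sub-pixel estimate from constrained linear unmixing, using a plastic endmember averaged from the five brightest detected pixels and manually picked endmembers for the remaining classes, as described above. It is a simplified offline stand-in for the SU step (non-negativity via NNLS, an approximate sum-to-one constraint via an augmented equation), and the array names are illustrative: `cube` is a rows × cols × bands reflectance array restricted to the water mask, `detected` is the boolean change detection mask, and `manual_endmembers` holds the vegetation, soil, water, and urban spectra.

```python
# Simplified sketch of sub-pixel plastic area estimation via linear spectral unmixing.
# Array names are illustrative; the actual processing of the case studies runs in GEE.
import numpy as np
from scipy.optimize import nnls

PIXEL_AREA_M2 = 10 * 10  # Sentinel-2 pixel footprint at 10 m GSD

def unmix_fcls(cube, endmembers, weight=100.0):
    """Per-pixel constrained unmixing: non-negative abundances that approximately sum to one."""
    rows, cols, bands = cube.shape
    n_em = endmembers.shape[0]
    # Augment the linear system with a heavily weighted row enforcing sum(abundances) = 1,
    # then solve each pixel with non-negative least squares.
    A = np.vstack([endmembers.T, weight * np.ones((1, n_em))])
    abundances = np.zeros((rows, cols, n_em))
    for r in range(rows):
        for c in range(cols):
            b = np.append(cube[r, c, :], weight)
            abundances[r, c, :], _ = nnls(A, b)
    return abundances

def plastic_area(cube, detected, manual_endmembers):
    """Compare naive pixel counting with the SU-based sub-pixel area estimate."""
    # Naive estimate: every detected pixel is assumed fully covered by plastic.
    naive_m2 = detected.sum() * PIXEL_AREA_M2
    # Plastic endmember: mean of the five brightest detected pixels (assumes >= 5 detections).
    brightness = np.where(detected, cube.sum(axis=2), -np.inf)
    top5 = np.argsort(brightness, axis=None)[-5:]
    plastic_em = cube.reshape(-1, cube.shape[2])[top5].mean(axis=0)
    # Endmember matrix: plastic (row 0) plus manually picked vegetation/soil/water/urban spectra.
    endmembers = np.vstack([plastic_em, manual_endmembers])
    abundances = unmix_fcls(cube, endmembers)
    su_m2 = abundances[..., 0].sum() * PIXEL_AREA_M2  # sub-pixel plastic surface
    return naive_m2, su_m2
```

The principle is the same as in the case studies: the plastic abundance is summed over the scene instead of counting whole pixels, which reduces the overestimation caused by mixed pixels in narrow channels.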
The estimated area appears more accurate with respect to the previously reported value, and is reduced to 5669 m² accumulated over a time span of 4 months. In this case, there is no validation data, but the reported unmixing results show that the concentration of plastic is higher in the center of the river and mixed with soil/road and vegetation at the borders of the river, which we consider a realistic arrangement. SU using automatically extracted endmembers yields results in which plastic is sometimes confused with soil/urban, with an estimated area of 4929 m², possibly underestimating the affected area. The results of SU can be further explored in order to detect additional plastic accumulations.

Fig. 11: Results of SU used to quantify the surface occupied by plastic and to explore the scene for additional debris accumulations. (Left) Detail of manually derived unmixing results for the Sentinel-2 scene acquired on 12 July 2020, showing the area with the maximum plastic accumulation from the time series (first row of Fig. 10, bottom left). Plastic abundance is highlighted in red, vegetation in green, and urban areas in blue. The main detection is circled in violet, with an additional detection to the upper right circled in yellow. The false alarm rate is acceptable, as plastic is not confused with urban areas (in blue), and focusing on water channels in the image aids the visual inspection. Right (from top to bottom): SU results and Sentinel-2 subset in true color composite for the area circled in yellow; unmixing results with manual and automatic endmember selection for the area circled in violet.

Fig. 10: Monthly detected accumulation of floating plastic debris on a small distributary of the Nile river, north of the Cairo city center, near Izbat Al Ini. Top: the river flows northward, with plastic beginning to accumulate in the area marked by a red polygon (March 2020). Different colors indicate the accumulation of subsequent months, extending southwards until July 2020. The background is a true color composite of the Sentinel-2 image for 12 July 2020, blended with high-resolution GEE basemap images from later dates in order to better characterize the context (2023 Airbus, CNES/Airbus, Maxar Technologies). Bottom: a sequence of true color composite subsets from Sentinel-2 images acquired in 2020, respectively, on 17 July (after the discharge, also used as the plastic-free reference), 24 March, 18 April, 18 May, 12 June, and 12 July. The vectorized plastic areas (top) were computed for each case, based on detected changes between consecutive images.

The left side of Fig. 11 reports the same RGB composite of the abundances of plastic, urban, and vegetation over a wider area spanning approximately 15 × 5 km. Only a few false alarms can be spotted in the scene, and they can be easily filtered out as they belong to fields or bright objects outside of the channel system. The upper right corner, circled in yellow, presents an additional plastic detection. The detail reported below shows the presence of plastic at a river bend in the Sentinel-2 image, with an additional exposed area, which is estimated as mostly composed of vegetation in the SU process. In this case, we could not use the water mask adopted for the Visegrad case reported next, as small channels such as the ones under analysis are not contained in the dataset.
It would be possible to create a water mask directly from the data, but there is a risk of excluding portions of the channels occluded by plastic; this is therefore recommended only if the water mask is considerably dilated, to make sure that any debris accumulation is included. An additional example of automatic SU is reported in Fig. 12, where abundance maps quantify the concentration of plastic at the Visegrad dam. The land is here masked out by applying the water mask described in Section II-A. The depicted abundances are plastic in red, vegetation in green, and soil/man-made structures in blue, with the water abundance not reported. The plastic accumulation is well defined, and the bridge to the north-east of the image is correctly assumed to be composed of other man-made materials. On the western side of the detected plastic area, high abundances of vegetation are detected. The spectral features of the examined image elements (using the bands at 10 m GSD), reported as an insert in Fig. 12(b), suggest an accumulation of floating organic material whose flow is hindered by the debris, with the spectra from the plastic accumulation area appearing significantly different. Additional vegetation is detected along the borders of the water body due to the imperfect fit of the water mask, caused by mixed pixels, small coregistration errors, and the seasonal variability of water. In the rest of the image, few commission errors can be observed, generally as isolated pixels; these could easily be filtered out using a median filter, which was not applied here in order to show the original output of the algorithm. The results reported here offer a new perspective on this case study with respect to the recent analysis reported in [52], which is based on a higher resolution PlanetScope image acquired one day after the Sentinel-2 image used here. Assuming that the plastic distribution did not change considerably in 24 h, the differences can be summarized as follows. First, the whole floating debris area is reported in [52] to be composed of plastic; nevertheless, the spectral analysis derived from the SU step shows the eastern portion of the debris to be mainly composed of vegetation and organic material adhering to the plastic. This happens because optically bright pixels are defined in the mentioned work only according to the difference between the NIR and visible bands, which also matches the spectral characteristics of vegetation. Second, the image used here is selected automatically within a time period of several months, while in [52] the relevant image is manually selected. Finally, the water mask is derived automatically or from auxiliary datasets in our case, while it is manually outlined in the other case.

### _Comparison With Detections From Single Images: FDI_

Comparing the obtained results with other methods within the GEE processing environment is not straightforward. One challenge lies in the fact that there is a lack of algorithms that use image time series as input, as most existing methods rely on one or two static images. In addition, a key advantage of the proposed approach is that it does not require prior knowledge of which images or dates are affected by floating plastic debris. To the best of the authors' knowledge, this information is needed by other methods.
In this section, we compare our multitemporal plastic detection methodology with results obtained from single selected images using the FDI [77], the most well-known spectral index designed specifically for detecting floating plastic, particularly in Sentinel-2 imagery. The FDI is defined as follows:

\[\text{FDI}=R_{\text{NIR}}-R^{\prime}_{\text{NIR}} \tag{7}\]

where

\[R^{\prime}_{\text{NIR}}=R_{\text{RE2}}+(R_{\text{SWIR1}}-R_{\text{RED}})\cdot\frac{\lambda_{\text{NIR}}-\lambda_{\text{RED}}}{\lambda_{\text{SWIR1}}-\lambda_{\text{RED}}}\cdot 10. \tag{8}\]

In this context, \(R\) represents the reflectance in the NIR, SWIR 1, Red-Edge 2, and Red bands, respectively, with \(\lambda\) denoting the central wavelength of the corresponding bands.

Fig. 12: Automatic SU results. (a) True color composite of the Sentinel-2 scene from the Visegrad dam, acquired on 20 March 2022. (b) Results of automatic SU using a fixed spectral library composed of endmembers automatically derived from spectral indices. Plastic abundance is shown in red, vegetation in green, and soil/urban areas in blue. The insert highlights how spectra in the affected area exhibiting high abundance values for vegetation differ from plastic spectra by showing typical vegetation characteristics, such as the sharp increase from the red to the near-infrared spectrum, reducing the likelihood of false alarms.

Fig. 13 shows the results for the cases of study reported in this article: a smaller water channel (Cairo) and two larger water bodies in different environments (Visegrad and Hidrovacas). The authors of the FDI rely on a Naive Bayes classification of FDI values using a pretrained model for macroplastic detection. To provide the best possible comparison, we manually adjusted the FDI thresholds to match our results as closely as possible, selecting values between 0 and 1 for each case. These thresholds vary widely depending on the location, and the specific values are reported in the caption of Fig. 13. In addition, we masked the results using the same water mask used in our methodology.

Fig. 13: Plastic detection results for (a) and (d) Visegrad, (b) and (e) Hidrovacas, and (c) and (f) Cairo in 2020. The right column shows the Sentinel-2 images, while the left column displays the thresholded FDI as an overlay. The thresholds applied to the FDI values are, respectively, 0.18, 0.1, and 0.43.

As illustrated in the examples, the FDI is more prone to generating false positives, especially along the edges of the channels. In the case of Cairo, the NDWI threshold used for water channels (0.06) favors completeness over correctness, introducing false positives in the water mask. These false positives do not translate into plastic detections with the method described in this article, but they do produce several false alarms with the FDI approach. The main limitation of the FDI approach is the lack of multitemporal analysis, which would enable better discrimination of static objects with reflective properties similar to plastic, such as rocks, sand accumulations, or floating vegetation; these are not detected in our change detection maps. In addition, the FDI relies on Sentinel-2 bands with 20-m resolution, whereas the methodology presented in this article uses only 10-m resolution bands, enabling plastic detection in smaller water channels and reducing false alarms along the edges of water bodies. Furthermore, a workflow relying on classification as described by the FDI's authors would be less suitable for large-scale scenarios.
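As a point of reference, a small sketch of how the FDI in (7)–(8) can be evaluated and thresholded on Sentinel-2 reflectance arrays is given below. The band-to-wavelength mapping (B4 ≈ 665 nm, B6 ≈ 740 nm, B8 ≈ 842 nm, B11 ≈ 1610 nm) follows the usual Sentinel-2 convention, the formula follows (8) exactly as printed above, and the array names, default threshold, and masking step are illustrative assumptions.

```python
# Sketch of the FDI computation of (7)-(8) on Sentinel-2 surface reflectance.
# r_red, r_re2, r_nir, and r_swir1 are assumed to be numpy arrays of reflectance
# for bands B4 (Red), B6 (Red-Edge 2), B8 (NIR), and B11 (SWIR 1), respectively.
import numpy as np

LAMBDA_RED, LAMBDA_NIR, LAMBDA_SWIR1 = 665.0, 842.0, 1610.0  # central wavelengths [nm]

def fdi(r_red, r_re2, r_nir, r_swir1):
    """Floating Debris Index following (7)-(8) as given in the text."""
    scale = (LAMBDA_NIR - LAMBDA_RED) / (LAMBDA_SWIR1 - LAMBDA_RED) * 10.0
    r_nir_prime = r_re2 + (r_swir1 - r_red) * scale  # baseline NIR reflectance, Eq. (8)
    return r_nir - r_nir_prime                       # Eq. (7)

def fdi_mask(r_red, r_re2, r_nir, r_swir1, threshold=0.18, water_mask=None):
    """Threshold the FDI (value tuned per site, as in Fig. 13) within an optional water mask."""
    mask = fdi(r_red, r_re2, r_nir, r_swir1) > threshold
    return mask if water_mask is None else mask & water_mask
```

Because the index is evaluated on a single acquisition, any static bright feature left inside the water mask can exceed the threshold, which is the behavior observed in the comparison above.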
## IV Application

An example of how a user can select a case of study in the GEE app is reported in Fig. 14, for an area approximately 300 m upstream of the case of study reported in Section III-B, two years after the events analyzed therein.

Fig. 14: Example of running the GEE app (available upon request). Detection of plastic debris for a water channel located upstream of the one shown in Fig. 10, observed two years later. From left to right: true color composites of Sentinel-2 images acquired on 2 July and 21 August 2022, with the plastic detection overlaid on the latter (partially transparent and overlaid on the high-resolution base map available in GEE; 2023 Airbus, CNES/Airbus, Maxar Technologies). The GEE app interface is shown with the user-defined parameters. Bottom: a wider subset of the image acquired on 21 August 2022, showing no false alarms outside the detection (circled in red). In the insert, a higher resolution PlanetScope image acquired on 21 August 2022 (2022 Planet Labs) with the plastic extent outlined in red, matching the detection above. These results can be obtained without any prior knowledge of the temporal plastic dynamics in the area nor relevant expertise in remote sensing.

A user selects an area, either a predefined one or a custom region of interest, and suitable parameters: one of three possible thresholds \(T_w\) for the water map (fixed for narrow channels, fixed for larger water surfaces, or user-defined), the desired number of images \(K\), and the temporal interval time\(_1\)–time\(_2\). The application then presents a true color combination for the images automatically selected from the stack and the detected plastic mask, where no false alarms are present in spite of several surrounding crops visibly changing appearance in this time frame. A higher resolution PlanetScope image acquired on the same day is additionally reported, matching the presence of plastic in the detected spot. The scene can be further explored: approximately 12 km downstream, close to the settlements of Izbat Yusif and Izbat Sisil, additional plastic accumulations are found, with no false alarms present (see Fig. 15). These are also confirmed by PlanetScope data acquired on the same day. Apart from the examples reported, additional routines implemented in the app allow generating the vectorized plastic cover areas for each image in the stack, outputting a GIF animation of the images, and performing the automatic unmixing procedure. Implementing the methods in a cloud environment proved effective, allowing easy switching between test sites, checking of data availability and properties, and screening of long time series. Further insights could be derived from high-resolution image analysis, if the areas affected by this phenomenon and the temporal interval of interest are either known a priori or discovered through our proposed application. For the three cases of study contained in this work, namely the Hidrovacas dam, the Lim/Drina system, and the Nile river, high-resolution images from Google Earth acquired during the analyzed temporal intervals (see Table II) are reported in Fig. 1. This set of images shows the plastic distribution in these areas more precisely and also serves as validation for our detections therein.
It must be remarked that the temporal resolution of commercial high-resolution satellites is considerably lower than that of medium-resolution missions such as Sentinel-2; nevertheless, an increased number of missions of this kind could be in orbit in the future, possibly with an associated increase in data availability and decrease in cost.

Fig. 15: Results of additional exploration of the area approximately 12 km further upstream from the location shown in Fig. 14. Top: true color composite of the Sentinel-2 scene acquired on 21 August 2022, showing an initial detection of additional plastic accumulations found at two river dams, reported in red and circled, respectively, in violet and yellow. No false alarms are observed. Bottom: detailed views of the plastic accumulations with Sentinel-2 subsets from 2 July and 21 August 2022, with automatically vectorized plastic areas. Also shown is a higher resolution PlanetScope image from 21 August 2022, with manually vectorized plastic areas outlined in red, demonstrating good alignment with the Sentinel-2-based detections.

## V Conclusion

The potential for detecting floating plastic in inland waters and monitoring changes in debris cover with open remote sensing time-series data is investigated in this work. Results for river dams and narrow water channels show that it is possible to search for and delineate floating plastic debris patches using robust spectral/temporal change detection. The cloud-based application developed in GEE provides a rapid and quasi-automatic screening of an area during a period of several months, automatically selecting the best-suited images and reporting aggregate results for the whole period, enabling the subsequent analysis of the single scenes in the multitemporal image time series. The main advantages of the method are the robustness derived from the multitemporal analysis of open satellite data with short revisit time (a comparison with spectral indices based on single images is reported), and the automatic selection of relevant images across the multitemporal stack. The process requires neither expert knowledge in remote sensing nor the time-consuming workflow of searching, downloading, and processing data from an image archive. Nevertheless, a priori knowledge of potential debris hotspots, in terms of location and temporal interval of interest, is advantageous, as is the availability of high-resolution images of the area. The search for plastic candidates is based on a compromise between the available number of images and the extent of the time period of interest. A suitable number of images is required to capture the appearance of plastic on water surfaces. The approach is mainly limited by the size of the debris patches, by missed clouds and cloud shadows, and by the spectral signal, which must have sufficiently high reflectance. The amount of cloud cover is connected to the number of pixels conveying meaningful information, but it also indicates the overall quality of the images for the task and depends on the location of the area of interest; furthermore, small clouds may not be detected by the automatic cloud mask provided with Sentinel-2 products. Reducing the number of selected images or extending the temporal interval of the time series are the preferable approaches in challenging regions affected by frequent cloud cover. Regarding cloud shadows, these are notoriously confused with water in remote sensing analysis [78], yielding false detections in the proposed multitemporal analysis.
The masking of topographic shadows would add robustness to the results [79]. Additional sources of false alarms are river sand banks, dried-up ponds, and areas of haze and smog, which can meet the criteria for plastic candidates in the image time series. To reduce the number of false alarms, the threshold for the level of intensity change can be increased. The method remains limited for water channels narrower than 20 m (pixels are too mixed even at 10 m GSD), or if other features emerge from the water, such as bare soil in shallow channels or water reservoirs. Different ways of further exploring the results are suggested. First, plastic detection results can be visually analyzed in the surrounding area in order to spot previously unknown debris patches (see Fig. 15). In addition, local spectral features of plastic can be included in the analysis, relying on the detected patch of plastic in order to perform SU, either in fully automatic mode or with user-defined endmembers, or to train a supervised classifier. SU can yield an estimation of the surface covered by plastic with subpixel precision (see Fig. 11), or synthetic images highlighting regions featuring concentrations of plastic in the area. The integration of Sentinel-1 data into the candidate search has been investigated. As reported in [46], monitoring debris with SAR data stacks is feasible for river dams. However, in our studies comprising narrow water channels, the spatial resolution of the Sentinel-1 data (5 m × 20 m in interferometric wide swath mode) proved too coarse, and adding this information to our analysis did not yield any improvement, even for larger water surfaces. As a possible contribution to the framework, Sentinel-1 data can help in further mitigating false alarms related to cloud shadows, if intensity changes in the optical images are not confirmed by complementary changes in temporally close SAR images. It would be valuable to derive a spectral library of the reflectance of typical floating plastic debris samples, in order to capture variations of plastic debris according to its type and location, under the assumption that this type of pollutant may exhibit different spectral features in different parts of the world. In this regard, an approach based on deep learning such as [48] could benefit from including the multitemporal aspect of plastic dynamics in the workflow, e.g., by using as a parallel input a multitemporal composite of Sentinel-2 band 8 (NIR) capturing the changes from water to plastic for a region of interest. Still, this would not solve the problem of applying the method to narrow water channels, as such an approach would be less robust in the presence of mixed pixels. Furthermore, the explicit learning of plastic features may not be easily transferable to sites characterized by different plastic types or land-cover classes. In our analysis, the restriction to the four Sentinel-2 bands available at 10 m GSD does not allow a rich spectral characterization of the targets, with three bands at 20 m used only to remove false alarms. Nevertheless, plastic exhibiting spectrally similar properties can be detected using SU (see Fig. 11). In some cases, we could verify that the size of the detected areas may allow the presence of pure pixels at 30 m GSD, which is the typical resolution of recently launched imaging spectrometers, such as EnMAP [80], PRISMA [81], and DESIS [82]. For example, the plastic accumulation at the Hidrovacas dam shown in Fig. 1
covers an area larger than 100 × 100 m, which is feasible for this kind of analysis. Future work should, therefore, include the analysis of hyperspectral data, which could be employed effectively with the described SU procedure, enabling additional detection of floating debris. The approach utilizing open Sentinel-2 data can contribute to overall mitigation strategies by identifying and monitoring large floating plastic patches accumulating at natural or human-made barriers. The information can be used to facilitate the extraction of debris before it enters the open sea, and to characterize the dynamics of plastic accumulations in critical areas. It might also contribute to increased awareness among the general public by highlighting the visibility of debris on water bodies from space, triggering mitigation activities in turn.

## References

* [1] A. L. Andrady, _An Environmental Primer_. Hoboken, NJ, USA: Wiley, 2003, pp. 1-75.
* [2] A. Shamskhany, Z. Li, P. Patel, and S. Karimpour, "Evidence of microplastic size impact on mobility and transport in the marine environment: A review and synthesis of recent research," _Front. Mar. Sci._, vol. 8, 2021, Art. no. 760649.
* [3] M. G. Kibria, N. I. Masuk, R. Safayet, H. Q. Nguyen, and M. Mourshed, "Plastic waste: Challenges and opportunities to mitigate pollution and effective management," _Int. J. Environ. Res._, vol. 17, no. 1, 2023, Art. no. 20. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/36711426
* [4] T. van Emmerik and A. Schwarz, "Plastic debris in rivers," _WIREs Water_, vol. 7, no. 1, 2019, Art. no. e1398.
* [5] V. Nava et al., "Plastic debris in lakes and reservoirs," _Nature_, vol. 619, no. 7969, pp. 317-322, 2023. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/37438590
* [6] M. A. Browne et al., "Accumulation of microplastic on shorelines worldwide: Sources and sinks," _Environ. Sci. Technol._, vol. 45, no. 21, pp. 9175-9179, 2011. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/21894925
* [7] L. Biermann, D. Clewley, V. Martinez-Vicente, and K. Topouzelis, "Finding plastic patches in coastal waters using optical satellite data," _Sci. Rep._, vol. 10, no. 1, 2020, Art. no. 5364. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/32327674
* [8] C. Pattiaratchi et al., "Plastics in the Indian Ocean--sources, transport, distribution, and impacts," _Ocean Sci._, vol. 18, no. 1, pp. 1-28, 2022.
* [9] S. Chiba et al., "Human footprint in the abyss: 30 year records of deep-sea plastic debris," _Mar. Policy_, vol. 96, pp. 204-212, 2018.
* [10] M. Egger, F. Sulu-Gambari, and L. Lebreton, "First evidence of plastic fallout from the North Pacific Garbage Patch," _Sci. Rep._, vol. 10, no. 1, 2020, Art. no. 7495. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/32376835
* [11] K. L. Law et al., "Plastic accumulation in the North Atlantic subtropical gyre," _Science_, vol. 329, no. 5996, pp. 1185-1188, 2010. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1192321
* [12] A. Lacerda et al., "Plastics in sea surface waters around the Antarctic Peninsula," _Sci. Rep._, vol. 9, no. 1, 2019, Art. no. 3977. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/30850657
* [13] W. C. Li, H. F. Tse, and L. Fok, "Plastic waste in the marine environment: A review of sources, occurrence and effects," _Sci. Total Environ._, vol. 566/567, pp. 333-349, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0048969716310154
* [14] G. G. N. Thushari and J. D. M. Senevirathna, "Plastic pollution in the marine environment," _Heliyon_, vol. 6, no. 8, 2020, Art. no. e04709. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/32923712
* [15] R. Kumar et al., "Impacts of plastic pollution on ecosystem services, sustainable development goals, and need to focus on circular economy and policy interventions," _Sustainability_, vol. 13, no. 17, 2021, Art. no. 9963.
* [16] L. C. M. Lebreton, J. van der Zwet, J.-W. Damsteeg, B. Slat, A. Andrady, and J. Reisser, "River plastic emissions to the world's oceans," _Nature Commun._, vol. 8, 2017, Art. no. 15611. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/28589961
* [17] L. J. J. Meijer, T. van Emmerik, R. van der Ent, C. Schmidt, and L. Lebreton, "More than 1000 rivers account for 80% of global riverine plastic emissions into the ocean," _Sci. Adv._, vol. 7, no. 18, 2021, Art. no. eaaz5803. [Online]. Available: https://www.science.org/doi/abs/10.1126/sciadv.va25803
* [18] C. J. van Calcar and T. H. M. van Emmerik, "Abundance of plastic debris across European and Asian rivers," _Environ. Res. Lett._, vol. 14, no. 12, 2019, Art. no. 124051.
* [19] T. van Emmerik, Y. Mellink, R. Hauk, K. Waldschlager, and L. Schreyers, "Rivers as plastic reservoirs," _Front. Water_, vol. 3, 2022, Art. no. 786936.
* [20] C. Kruse et al., "Satellite monitoring of terrestrial plastic waste," _PLoS One_, vol. 18, no. 1, 2023, Art. no. e0278997. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/36652417
* [21] T. van Emmerik, M. Loozen, K. van Oeveren, F. Buschman, and G. Prinsen, "Riverine plastic emission from Jakarta into the ocean," _Environ. Res. Lett._, vol. 14, no. 8, 2019, Art. no. 084033.
* [22] R. Newboud, "Understanding river plastic transport with tracers and GPS," _Nature Rev. Earth Environ._, vol. 2, no. 9, pp. 591-591, 2021.
* [23] T. van Emmerik et al., "A methodology to characterize riverine macroplastic emission into the ocean," _Front. Mar. Sci._, vol. 5, 2018, Art. no. 372.
* [24] J. Gasperi, R. Dris, T. Bonin, V. Rocher, and B. Tassin, "Assessment of floating plastic debris in surface water along the Seine river," _Environ. Pollut._, vol. 195, pp. 163-166, 2014. [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/275204189
* [25] D. Gonzalez-Fernandez and G. Hanke, "Toward a harmonized approach for monitoring of riverine floating macro litter inputs to the marine environment," _Front. Mar. Sci._, vol. 4, 2017, Art. no. 86.
* [26] N. Gnann, B. Baschek, and T. A. Ternes, "Close-range remote sensing-based detection and identification of macroplastics on water assisted by artificial intelligence: A review," _Water Res._, vol. 222, 2022, Art. no. 118902.
* [27] M. Geraeds, T. van Emmerik, R. de Vries, and M. S. Ab Razak, "Riverine plastic litter monitoring using unmanned aerial vehicles (UAVs)," _Remote Sens._, vol. 11, no. 17, 2019, Art. no. 2045. [Online]. Available: https://www.mdpi.com/2072-42921/11/7/2045
Bu Ab Razak, \"Riverine plastic litter monitoring using unmanned aerial vehicles (UAVs),\" _Remote Sens._, vol. 11, no. 17, 2019, Art. no. 2045. [Online]. Available: [https://www.mdpi.com/2072-42921/11/7/2045](https://www.mdpi.com/2072-42921/11/7/2045) * [28] M. Drusch et al., \"Sentinel-2: ESA's optical high-resolution mission for GMES operational services,\" _Remote Sens. Environ._, vol. 120, pp. 25-36, 2012. * [29] K. Topouzelis, D. Papageorgiou, G. Suaria, and S. Aliani, \"Floating marine litter detection algorithms and techniques using optical remote sensing data: A review,\" _Marl. Pollut. Bull._, vol. 170, 2021, Art. no. 112675. * [30] M. Waqas, M. S. Wong, A. Stocchio, S. Abbas, S. Hafeez, and R. Zhu, \"Marine plastic pollution detection and identification by using remote sensing-meta analysis,\" _Marl. Pollut. Bull._, vol. 197, 2023, Art. no. 115746. [Online]. Available: [https://www.ncbi.nlm.nih.gov/pubmed/37951122](https://www.ncbi.nlm.nih.gov/pubmed/37951122) * [31] V. Martinez-Vicente et al., \"Measuring marine plastic debris from space: Initial assessment of observation rsmende Sens.\" _Remote Sens._, vol. 11, no. 20, 2019, Art. no. 2443. * [32] K. Themistocleous, C. Papoutsa, S. Michaelides, and D. Hadjimitsis, \"Investigating detection of floating plastic litter from space using Sentinel-2 imagery,\" _Remote Sens._, vol. 12, no. 16, 2020, Art. no. 26ads. * [33] B. Basu, S. Samigrahi, A. Sarkar Basu, and F. Pilla, \"Development of novel classification algorithms for detection of floating plastic debris in coastal waterbodies using multispectral Sentinel-2 remote sensing imagery,\" _Remote Sens._, vol. 13, no. 8, 2021, Art. no. 117106. * [34] S. Sannigrahi, B. Basu, A. S. Basu, and F. Pilla, \"Development of automated marine floating plastic detection system using Sentinel-2 imagery and machine learning models,\" _Mar. Pollut. Bull._, vol. 178, 2022, Art. no. 113527. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S0025326X22002090](https://www.sciencedirect.com/science/article/pii/S0025326X22002090) * [35] C. Hu, \"Remote detection of marine debris using satellite observations in the visible and near infrared spectral range: Challenges and potentials,\" _Remote Sens. Environ._, vol. 259, 2021, Art. no. 112414. * [36] P. Mikeli, K. Kikaki, I. Kakogeorgiou, and K. Karantzalos, \"How challenging is the discrimination of floating materials on the sea surface using high resolution multispectral satellite data?,\" _Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci._, vol. XLIII-B3-2022, pp. 151-157, 2022. * [37] M. Russwurm, S. J. Venkatesa, and D. Tuia, \"Large-scale detection of marine debris in coastal areas with Sentinel-2,\" _Science_, vol. 26, no. 12, 2023, Art. no. 108402. * [38] J. Mifdal, N. Longeep, and M. Russwurm, \"Towards detecting floating objects on a global scale with learned spatial features using Sentinel 2,\" _ISPRAnn. Photogrammetry, Remote Sens. Spatial Inf. Sci._, vol. V-3-2021, pp. 285-293, 2021. * [39] C. Hu, \"Remote detection of marine debris using Sentinel-2 imagery: A cautious note on spectral interpretations,\" _Mar. Pollut. Bull._, vol. 183, 2022, Art. no. 114082. [Online]. Available: [https://www.ncbi.nlm.nih.gov/pubmed/36067679](https://www.ncbi.nlm.nih.gov/pubmed/36067679) * [40] K. Topouzelis, A. Papakonstantinou, and S. P. Garaba, \"Detection of floating plastics from satellite and unmanned aerial systems (plastic litter project 2018),\" _Int. J. Appl. Earth Observ. Geoinformation_, vol. 79, pp. 175-183, 2019. 
* [41] K. Topouzelis, D. Papageorgiou, A. Karagaintanks, A. Papakonstantinou, and M. Arias Ballesteros, "Remote sensing of sea surface artificial floating plastic targets with Sentinel-2 and unmanned aerial systems (plastic litter project 2019)," _Remote Sens._, vol. 12, no. 12, 2020, Art. no. 2013.
* [42] D. Papageorgiou, K. Topouzelis, G. Suaria, S. Aliani, and P. Corradi, "Sentinel-2 detection of floating marine litter targets with partial spectral unmixing and spectral comparison with other floating materials (plastic litter project 2021)," _Remote Sens._, vol. 14, no. 23, 2022, Art. no. 5997.
* [43] N. Taggio et al., "A combination of machine learning algorithms for marine plastic litter detection exploiting hyperspectral PRISMA data," _Remote Sens._, vol. 14, no. 15, 2022, Art. no. 3606. [Online]. Available: https://www.mdpi.com/2072-4292/14/15/3606
* [44] M. Elvedl et al., "Plastic monitor: Detecting riverine plastic conglomerations, fluxes and pathways in Indonesia," in _Proc. ESA Living Planet Symp._, Bonn, Germany, 2022. Accessed: Nov. 28, 2024. [Online]. Available: https://ipss22.d3ervices.com/index.php/p2page_id=18446&v=List&d=15&day=all&ses=21355.html#
* [45] P. Tasseron, L. Schreyers, J. Peller, L. Biermann, and T. van Emmerik, "Towards robust river plastic detection: Combining lab and field-based hyperspectral imagery," _Earth Space Sci._, vol. 9, 2022, Art. no. e2022EA002518.
* [46] M. D. Simpson et al., "Monitoring of plastic islands in river environment using Sentinel-1 SAR data," _Remote Sens._, vol. 14, no. 18, 2022, Art. no. 4473. [Online]. Available: https://www.mdpi.com/2072-42921/14/18/4473
* [47] S. Lavender, "Detection of waste plastics in the environment: Application of Copernicus Earth observation data," _Remote Sens._, vol. 14, no. 19, 2022, Art. no. 4772. [Online]. Available: https://www.mdpi.com/2072-4292/14/19/4772
* [48] A. Sole Gomez, L. Scandolo, and E. Eisemann, "A learning approach for river debris detection," _Int. J. Appl. Earth Observ. Geoinf._, vol. 107, 2022, Art. no. 102682. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0302
* [54] B.-C. Gao, "NDWI--a normalized difference water index for remote sensing of vegetation liquid water from space," _Remote Sens. Environ._, vol. 58, no. 3, pp. 257-266, 1996.
* [55] S. K. McFeeters, "The use of the normalized difference water index (NDWI) in the delineation of open water features," _Int. J. Remote Sens._, vol. 17, no. 7, pp. 1425-1432, 1996.
* [56] S. Singh and M. L. Kansal, "Chamoli flash-flood mapping and evaluation with a supervised classifier and NDWI thresholding using Sentinel-2 optical data in Google Earth Engine," _Earth Sci. Informat._, vol. 15, no. 2, pp. 1073-1086, 2022.
* […] "…technique for surface water mapping using Landsat imagery," _Remote Sens. Environ._, vol. 140, pp. 23-35, 2014.
* [57] G. Kaplan and U. Avdan, "Object-based water body extraction model using Sentinel-2 satellite imagery," _Eur. J. Remote Sens._, vol. 50, pp. 137-143, 2017.
* [58] J. Escobar et al., "Waterhole detection using a vegetation index in desert bighorn sheep (Ovis canadensis cremnobates) habitat," _PLoS One_, vol. 14, 2019, Art. no. e0211202.
* [59] M. Tesfaye and L. Breuer, "Performance of water indices for large-scale water resources monitoring using Sentinel-2 data in Ethiopia," _Environ. Monit. Assessment_, vol. 196, 2024, Art. no. 467.
* [60] S. K. McFeeters, "Using the normalized difference water index (NDWI) within a geographic information system to detect swimming pools for mosquito abatement: A practical approach," _Remote Sens._, vol. 5, no. 7, pp. 3544-3561, 2013. [Online]. Available: https://www.mdpi.com/2072-42925/7/3544
* [61] J. Louis et al., "Sentinel-2 Sen2Cor: L2A processor for users," in _Proc. Living Planet Symp._, 2016, pp. 1-8.
* [62] O. Hagolle, M. Huc, C. Desjardins, S. Auer, and R. Richter, "MAJA algorithm theoretical basis document," Dec. 2017. [Online]. Available: https://doi.org/10.5281/zenodo.1209633
* [63] L. Baetens, C. Desjardins, and O. Hagolle, "Validation of Copernicus Sentinel-2 cloud masks obtained from MAJA, Sen2Cor, and Fmask processors using reference cloud masks generated with a supervised active learning procedure," _Remote Sens._, vol. 11, no. 4, 2019, Art. no. 433. [Online]. Available: https://www.mdpi.com/2072-4292/11/4/433
* [64] M. Marconcini et al., "Outlining where humans live, the World Settlement Footprint 2015," _Sci. Data_, vol. 7, no. 1, 2020, Art. no. 242.
* [65] M. Claverie et al., "The Harmonized Landsat and Sentinel-2 surface reflectance data set," _Remote Sens. Environ._, vol. 219, pp. 145-161, 2018.
* [66] J. M. Bioucas-Dias et al., "Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 5, no. 2, pp. 354-379, Apr. 2012.
* [67] N. Dobigeon, Y. Altmann, N. Brun, and S. Moussaoui, _Linear and Nonlinear Unmixing in Hyperspectral Imaging_ (Data Handling in Science and Technology 30). Amsterdam, The Netherlands: Elsevier, 2016, pp. 185-224.
* [68] D. Cerra, R. Muller, and P. Reinartz, "Noise reduction in hyperspectral images through spectral unmixing," _IEEE Geosci. Remote Sens. Lett._, vol. 11, no. 1, pp. 109-113, Jan. 2014.
* [69] D. Cerra, C. Ji, and U. Heiden, "Solar panels area estimation using the spaceborne imaging spectrometer DESIS: Outperforming multispectral sensors," _ISPRS Ann. Photogrammetry, Remote Sens. Spatial Inf. Sci._, vol. V-1-2022, pp. 9-14, Jun. 2022. [Online]. Available: https://elith.dt.de/189792/
* [70] M. Moshtaghi, E. Knaeps, S. Sterckx, S. Garaba, and D. Meire, "Spectral reflectance of marine macroplastics in the VNIR and SWIR measured in a controlled environment," _Sci. Rep._, vol. 11, no. 1, pp. 1-12, 2021.
* [71] N. Keshava and J. F. Mustard, "Spectral unmixing," _IEEE Signal Process. Mag._, vol. 19, no. 1, pp. 44-57, Jan. 2002.
* [72] J. M. P. Nascimento and J. M. B. Dias, "Vertex component analysis: A fast algorithm to unmix hyperspectral data," _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 4, pp. 898-910, Apr. 2005.
* [73] F. D. Van der Meer and X. Jia, "Collinearity and orthogonality of endmembers in linear spectral unmixing," _Int. J. Appl. Earth Observ. Geoinf._, vol. 18, pp. 491-503, 2012. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S030324411001474
* [74] A. Emric, "Bosnian river's floating waste dump threatens health, tourism," Reuters. Accessed: Jan. 16, 2024. [Online]. Available: https://www.reuters.com/world/europe/bosnian-rivers-floating-waste-dump-threatens-health-tourism-2024-01-0/8/
* [75] D. Cerra, S. Auer, A. Baissero, and F. Bachofer, "Estimation of floating plastic debris surface in inland waters using spectral unmixing with multispectral data," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, 2024, pp. 4393-4396.
* [76] L. Biermann, D. Clewley, and V. Martinez-Vicente, "Finding plastic patches in coastal waters using optical satellite data," _Sci. Rep._, vol. 10, 2020, Art. no. 5364.
* [77] K. E. Sawaya, L. G. Olmanson, N. J. Heinert, P. L. Brezonik, and M. E. Bauer, "Extending satellite remote sensing to local scales: Land and water resource monitoring using high-resolution imagery," _Remote Sens. Environ._, vol. 88, no. 1, pp. 144-156, 2003. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0034425703002384
* [78] A. Hollstein, K. Segl, L. Guanter, M. Brell, and M. Enesco, "Ready-to-use methods for the detection of clouds, cirrus, snow, shadow, water and clear sky pixels in Sentinel-2 MSI images," _Remote Sens._, vol. 8, no. 8, 2016, Art. no. 666.
* [79] L. Guanter et al., "The EnMAP spaceborne imaging spectroscopy mission for Earth observation," _Remote Sens._, vol. 7, no. 7, pp. 8830-8857, 2015.
* [80] M. Niroumand-Jadidi, F. Bovolo, and L. Bruzzone, "Water quality retrieval from PRISMA hyperspectral images: First experience in a turbid lake and comparison with Sentinel-2," _Remote Sens._, vol. 12, no. 23, 2020, Art. no. 3984.
* [81] K. Alonso et al., "Data products, quality and validation of the DLR Earth Sensing Imaging Spectrometer (DESIS)," _Sensors_, vol. 19, no. 20, 2019, Art. no. 4471.
Floating plastic debris on water surfaces imposes both short- and long-term burdens on nature. Hence, identifying and monitoring plastic is important to document the location and scale of the phenomenon. Evaluating the opportunities provided by multitemporal Earth observation data, this article proposes a framework to detect and monitor plastic debris floating on inland waters using optical satellite image time series. First, the detection of plastic candidates is conducted with a rule-based approach relying on variations in signal intensity, temporal patterns, spectral features, and information fusion. Second, the identified sensitive areas can be monitored over time, and the extent of plastic cover at the subpixel level estimated using spectral unmixing. The method requires only a time frame and an area of interest as its main input parameters; therefore, no specific image must be selected nor any region of interest manually outlined. Several examples are reported in which the same workflow, developed as a Google Earth Engine application, is successfully applied to identify critically affected areas in full Sentinel-2 scenes, across different continents and contexts, exhibiting floating plastic debris varying both in type and dynamics.

Index Terms—Image time series, inland waters, plastic debris, Sentinel-2, Google Earth Engine (GEE), spectral unmixing (SU).